Dec  3 12:08:42 np0005544501 kernel: Linux version 5.14.0-645.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-68.el9) #1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025
Dec  3 12:08:42 np0005544501 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Dec  3 12:08:42 np0005544501 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64 root=UUID=fcf6b761-831a-48a7-9f5f-068b5063763f ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec  3 12:08:42 np0005544501 kernel: BIOS-provided physical RAM map:
Dec  3 12:08:42 np0005544501 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec  3 12:08:42 np0005544501 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec  3 12:08:42 np0005544501 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec  3 12:08:42 np0005544501 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Dec  3 12:08:42 np0005544501 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Dec  3 12:08:42 np0005544501 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec  3 12:08:42 np0005544501 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec  3 12:08:42 np0005544501 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Dec  3 12:08:42 np0005544501 kernel: NX (Execute Disable) protection: active
Dec  3 12:08:42 np0005544501 kernel: APIC: Static calls initialized
Dec  3 12:08:42 np0005544501 kernel: SMBIOS 2.8 present.
Dec  3 12:08:42 np0005544501 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Dec  3 12:08:42 np0005544501 kernel: Hypervisor detected: KVM
Dec  3 12:08:42 np0005544501 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec  3 12:08:42 np0005544501 kernel: kvm-clock: using sched offset of 3462788850 cycles
Dec  3 12:08:42 np0005544501 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec  3 12:08:42 np0005544501 kernel: tsc: Detected 2799.998 MHz processor
Dec  3 12:08:42 np0005544501 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Dec  3 12:08:42 np0005544501 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec  3 12:08:42 np0005544501 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Dec  3 12:08:42 np0005544501 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Dec  3 12:08:42 np0005544501 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Dec  3 12:08:42 np0005544501 kernel: Using GB pages for direct mapping
Dec  3 12:08:42 np0005544501 kernel: RAMDISK: [mem 0x2d472000-0x32a30fff]
Dec  3 12:08:42 np0005544501 kernel: ACPI: Early table checksum verification disabled
Dec  3 12:08:42 np0005544501 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Dec  3 12:08:42 np0005544501 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  3 12:08:42 np0005544501 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  3 12:08:42 np0005544501 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  3 12:08:42 np0005544501 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Dec  3 12:08:42 np0005544501 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  3 12:08:42 np0005544501 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  3 12:08:42 np0005544501 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Dec  3 12:08:42 np0005544501 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Dec  3 12:08:42 np0005544501 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Dec  3 12:08:42 np0005544501 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Dec  3 12:08:42 np0005544501 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Dec  3 12:08:42 np0005544501 kernel: No NUMA configuration found
Dec  3 12:08:42 np0005544501 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Dec  3 12:08:42 np0005544501 kernel: NODE_DATA(0) allocated [mem 0x23ffd3000-0x23fffdfff]
Dec  3 12:08:42 np0005544501 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Dec  3 12:08:42 np0005544501 kernel: Zone ranges:
Dec  3 12:08:42 np0005544501 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Dec  3 12:08:42 np0005544501 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Dec  3 12:08:42 np0005544501 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Dec  3 12:08:42 np0005544501 kernel:  Device   empty
Dec  3 12:08:42 np0005544501 kernel: Movable zone start for each node
Dec  3 12:08:42 np0005544501 kernel: Early memory node ranges
Dec  3 12:08:42 np0005544501 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Dec  3 12:08:42 np0005544501 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Dec  3 12:08:42 np0005544501 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Dec  3 12:08:42 np0005544501 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Dec  3 12:08:42 np0005544501 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec  3 12:08:42 np0005544501 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec  3 12:08:42 np0005544501 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Dec  3 12:08:42 np0005544501 kernel: ACPI: PM-Timer IO Port: 0x608
Dec  3 12:08:42 np0005544501 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec  3 12:08:42 np0005544501 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec  3 12:08:42 np0005544501 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec  3 12:08:42 np0005544501 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec  3 12:08:42 np0005544501 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec  3 12:08:42 np0005544501 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec  3 12:08:42 np0005544501 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec  3 12:08:42 np0005544501 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec  3 12:08:42 np0005544501 kernel: TSC deadline timer available
Dec  3 12:08:42 np0005544501 kernel: CPU topo: Max. logical packages:   8
Dec  3 12:08:42 np0005544501 kernel: CPU topo: Max. logical dies:       8
Dec  3 12:08:42 np0005544501 kernel: CPU topo: Max. dies per package:   1
Dec  3 12:08:42 np0005544501 kernel: CPU topo: Max. threads per core:   1
Dec  3 12:08:42 np0005544501 kernel: CPU topo: Num. cores per package:     1
Dec  3 12:08:42 np0005544501 kernel: CPU topo: Num. threads per package:   1
Dec  3 12:08:42 np0005544501 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Dec  3 12:08:42 np0005544501 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec  3 12:08:42 np0005544501 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Dec  3 12:08:42 np0005544501 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Dec  3 12:08:42 np0005544501 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Dec  3 12:08:42 np0005544501 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Dec  3 12:08:42 np0005544501 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Dec  3 12:08:42 np0005544501 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Dec  3 12:08:42 np0005544501 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Dec  3 12:08:42 np0005544501 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Dec  3 12:08:42 np0005544501 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Dec  3 12:08:42 np0005544501 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Dec  3 12:08:42 np0005544501 kernel: Booting paravirtualized kernel on KVM
Dec  3 12:08:42 np0005544501 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec  3 12:08:42 np0005544501 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Dec  3 12:08:42 np0005544501 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Dec  3 12:08:42 np0005544501 kernel: kvm-guest: PV spinlocks disabled, no host support
Dec  3 12:08:42 np0005544501 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64 root=UUID=fcf6b761-831a-48a7-9f5f-068b5063763f ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec  3 12:08:42 np0005544501 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64", will be passed to user space.
Dec  3 12:08:42 np0005544501 kernel: random: crng init done
Dec  3 12:08:42 np0005544501 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec  3 12:08:42 np0005544501 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec  3 12:08:42 np0005544501 kernel: Fallback order for Node 0: 0 
Dec  3 12:08:42 np0005544501 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Dec  3 12:08:42 np0005544501 kernel: Policy zone: Normal
Dec  3 12:08:42 np0005544501 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec  3 12:08:42 np0005544501 kernel: software IO TLB: area num 8.
Dec  3 12:08:42 np0005544501 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Dec  3 12:08:42 np0005544501 kernel: ftrace: allocating 49335 entries in 193 pages
Dec  3 12:08:42 np0005544501 kernel: ftrace: allocated 193 pages with 3 groups
Dec  3 12:08:42 np0005544501 kernel: Dynamic Preempt: voluntary
Dec  3 12:08:42 np0005544501 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec  3 12:08:42 np0005544501 kernel: rcu: 	RCU event tracing is enabled.
Dec  3 12:08:42 np0005544501 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Dec  3 12:08:42 np0005544501 kernel: 	Trampoline variant of Tasks RCU enabled.
Dec  3 12:08:42 np0005544501 kernel: 	Rude variant of Tasks RCU enabled.
Dec  3 12:08:42 np0005544501 kernel: 	Tracing variant of Tasks RCU enabled.
Dec  3 12:08:42 np0005544501 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec  3 12:08:42 np0005544501 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Dec  3 12:08:42 np0005544501 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec  3 12:08:42 np0005544501 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec  3 12:08:42 np0005544501 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec  3 12:08:42 np0005544501 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Dec  3 12:08:42 np0005544501 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec  3 12:08:42 np0005544501 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Dec  3 12:08:42 np0005544501 kernel: Console: colour VGA+ 80x25
Dec  3 12:08:42 np0005544501 kernel: printk: console [ttyS0] enabled
Dec  3 12:08:42 np0005544501 kernel: ACPI: Core revision 20230331
Dec  3 12:08:42 np0005544501 kernel: APIC: Switch to symmetric I/O mode setup
Dec  3 12:08:42 np0005544501 kernel: x2apic enabled
Dec  3 12:08:42 np0005544501 kernel: APIC: Switched APIC routing to: physical x2apic
Dec  3 12:08:42 np0005544501 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec  3 12:08:42 np0005544501 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Dec  3 12:08:42 np0005544501 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec  3 12:08:42 np0005544501 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec  3 12:08:42 np0005544501 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec  3 12:08:42 np0005544501 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec  3 12:08:42 np0005544501 kernel: Spectre V2 : Mitigation: Retpolines
Dec  3 12:08:42 np0005544501 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec  3 12:08:42 np0005544501 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec  3 12:08:42 np0005544501 kernel: RETBleed: Mitigation: untrained return thunk
Dec  3 12:08:42 np0005544501 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec  3 12:08:42 np0005544501 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec  3 12:08:42 np0005544501 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec  3 12:08:42 np0005544501 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec  3 12:08:42 np0005544501 kernel: x86/bugs: return thunk changed
Dec  3 12:08:42 np0005544501 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec  3 12:08:42 np0005544501 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec  3 12:08:42 np0005544501 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec  3 12:08:42 np0005544501 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec  3 12:08:42 np0005544501 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Dec  3 12:08:42 np0005544501 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec  3 12:08:42 np0005544501 kernel: Freeing SMP alternatives memory: 40K
Dec  3 12:08:42 np0005544501 kernel: pid_max: default: 32768 minimum: 301
Dec  3 12:08:42 np0005544501 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Dec  3 12:08:42 np0005544501 kernel: landlock: Up and running.
Dec  3 12:08:42 np0005544501 kernel: Yama: becoming mindful.
Dec  3 12:08:42 np0005544501 kernel: SELinux:  Initializing.
Dec  3 12:08:42 np0005544501 kernel: LSM support for eBPF active
Dec  3 12:08:42 np0005544501 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec  3 12:08:42 np0005544501 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec  3 12:08:42 np0005544501 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec  3 12:08:42 np0005544501 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec  3 12:08:42 np0005544501 kernel: ... version:                0
Dec  3 12:08:42 np0005544501 kernel: ... bit width:              48
Dec  3 12:08:42 np0005544501 kernel: ... generic registers:      6
Dec  3 12:08:42 np0005544501 kernel: ... value mask:             0000ffffffffffff
Dec  3 12:08:42 np0005544501 kernel: ... max period:             00007fffffffffff
Dec  3 12:08:42 np0005544501 kernel: ... fixed-purpose events:   0
Dec  3 12:08:42 np0005544501 kernel: ... event mask:             000000000000003f
Dec  3 12:08:42 np0005544501 kernel: signal: max sigframe size: 1776
Dec  3 12:08:42 np0005544501 kernel: rcu: Hierarchical SRCU implementation.
Dec  3 12:08:42 np0005544501 kernel: rcu: 	Max phase no-delay instances is 400.
Dec  3 12:08:42 np0005544501 kernel: smp: Bringing up secondary CPUs ...
Dec  3 12:08:42 np0005544501 kernel: smpboot: x86: Booting SMP configuration:
Dec  3 12:08:42 np0005544501 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Dec  3 12:08:42 np0005544501 kernel: smp: Brought up 1 node, 8 CPUs
Dec  3 12:08:42 np0005544501 kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Dec  3 12:08:42 np0005544501 kernel: node 0 deferred pages initialised in 8ms
Dec  3 12:08:42 np0005544501 kernel: Memory: 7763932K/8388068K available (16384K kernel code, 5795K rwdata, 13908K rodata, 4196K init, 7156K bss, 618212K reserved, 0K cma-reserved)
Dec  3 12:08:42 np0005544501 kernel: devtmpfs: initialized
Dec  3 12:08:42 np0005544501 kernel: x86/mm: Memory block size: 128MB
Dec  3 12:08:42 np0005544501 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec  3 12:08:42 np0005544501 kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Dec  3 12:08:42 np0005544501 kernel: pinctrl core: initialized pinctrl subsystem
Dec  3 12:08:42 np0005544501 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec  3 12:08:42 np0005544501 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Dec  3 12:08:42 np0005544501 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec  3 12:08:42 np0005544501 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec  3 12:08:42 np0005544501 kernel: audit: initializing netlink subsys (disabled)
Dec  3 12:08:42 np0005544501 kernel: audit: type=2000 audit(1764781720.269:1): state=initialized audit_enabled=0 res=1
Dec  3 12:08:42 np0005544501 kernel: thermal_sys: Registered thermal governor 'fair_share'
Dec  3 12:08:42 np0005544501 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec  3 12:08:42 np0005544501 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec  3 12:08:42 np0005544501 kernel: cpuidle: using governor menu
Dec  3 12:08:42 np0005544501 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec  3 12:08:42 np0005544501 kernel: PCI: Using configuration type 1 for base access
Dec  3 12:08:42 np0005544501 kernel: PCI: Using configuration type 1 for extended access
Dec  3 12:08:42 np0005544501 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec  3 12:08:42 np0005544501 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec  3 12:08:42 np0005544501 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec  3 12:08:42 np0005544501 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec  3 12:08:42 np0005544501 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec  3 12:08:42 np0005544501 kernel: Demotion targets for Node 0: null
Dec  3 12:08:42 np0005544501 kernel: cryptd: max_cpu_qlen set to 1000
Dec  3 12:08:42 np0005544501 kernel: ACPI: Added _OSI(Module Device)
Dec  3 12:08:42 np0005544501 kernel: ACPI: Added _OSI(Processor Device)
Dec  3 12:08:42 np0005544501 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec  3 12:08:42 np0005544501 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec  3 12:08:42 np0005544501 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec  3 12:08:42 np0005544501 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec  3 12:08:42 np0005544501 kernel: ACPI: Interpreter enabled
Dec  3 12:08:42 np0005544501 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Dec  3 12:08:42 np0005544501 kernel: ACPI: Using IOAPIC for interrupt routing
Dec  3 12:08:42 np0005544501 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec  3 12:08:42 np0005544501 kernel: PCI: Using E820 reservations for host bridge windows
Dec  3 12:08:42 np0005544501 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Dec  3 12:08:42 np0005544501 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec  3 12:08:42 np0005544501 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Dec  3 12:08:42 np0005544501 kernel: acpiphp: Slot [3] registered
Dec  3 12:08:42 np0005544501 kernel: acpiphp: Slot [4] registered
Dec  3 12:08:42 np0005544501 kernel: acpiphp: Slot [5] registered
Dec  3 12:08:42 np0005544501 kernel: acpiphp: Slot [6] registered
Dec  3 12:08:42 np0005544501 kernel: acpiphp: Slot [7] registered
Dec  3 12:08:42 np0005544501 kernel: acpiphp: Slot [8] registered
Dec  3 12:08:42 np0005544501 kernel: acpiphp: Slot [9] registered
Dec  3 12:08:42 np0005544501 kernel: acpiphp: Slot [10] registered
Dec  3 12:08:42 np0005544501 kernel: acpiphp: Slot [11] registered
Dec  3 12:08:42 np0005544501 kernel: acpiphp: Slot [12] registered
Dec  3 12:08:42 np0005544501 kernel: acpiphp: Slot [13] registered
Dec  3 12:08:42 np0005544501 kernel: acpiphp: Slot [14] registered
Dec  3 12:08:42 np0005544501 kernel: acpiphp: Slot [15] registered
Dec  3 12:08:42 np0005544501 kernel: acpiphp: Slot [16] registered
Dec  3 12:08:42 np0005544501 kernel: acpiphp: Slot [17] registered
Dec  3 12:08:42 np0005544501 kernel: acpiphp: Slot [18] registered
Dec  3 12:08:42 np0005544501 kernel: acpiphp: Slot [19] registered
Dec  3 12:08:42 np0005544501 kernel: acpiphp: Slot [20] registered
Dec  3 12:08:42 np0005544501 kernel: acpiphp: Slot [21] registered
Dec  3 12:08:42 np0005544501 kernel: acpiphp: Slot [22] registered
Dec  3 12:08:42 np0005544501 kernel: acpiphp: Slot [23] registered
Dec  3 12:08:42 np0005544501 kernel: acpiphp: Slot [24] registered
Dec  3 12:08:42 np0005544501 kernel: acpiphp: Slot [25] registered
Dec  3 12:08:42 np0005544501 kernel: acpiphp: Slot [26] registered
Dec  3 12:08:42 np0005544501 kernel: acpiphp: Slot [27] registered
Dec  3 12:08:42 np0005544501 kernel: acpiphp: Slot [28] registered
Dec  3 12:08:42 np0005544501 kernel: acpiphp: Slot [29] registered
Dec  3 12:08:42 np0005544501 kernel: acpiphp: Slot [30] registered
Dec  3 12:08:42 np0005544501 kernel: acpiphp: Slot [31] registered
Dec  3 12:08:42 np0005544501 kernel: PCI host bridge to bus 0000:00
Dec  3 12:08:42 np0005544501 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Dec  3 12:08:42 np0005544501 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Dec  3 12:08:42 np0005544501 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec  3 12:08:42 np0005544501 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec  3 12:08:42 np0005544501 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Dec  3 12:08:42 np0005544501 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec  3 12:08:42 np0005544501 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Dec  3 12:08:42 np0005544501 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Dec  3 12:08:42 np0005544501 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Dec  3 12:08:42 np0005544501 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Dec  3 12:08:42 np0005544501 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Dec  3 12:08:42 np0005544501 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Dec  3 12:08:42 np0005544501 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Dec  3 12:08:42 np0005544501 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Dec  3 12:08:42 np0005544501 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Dec  3 12:08:42 np0005544501 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Dec  3 12:08:42 np0005544501 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Dec  3 12:08:42 np0005544501 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Dec  3 12:08:42 np0005544501 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Dec  3 12:08:42 np0005544501 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Dec  3 12:08:42 np0005544501 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Dec  3 12:08:42 np0005544501 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Dec  3 12:08:42 np0005544501 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Dec  3 12:08:42 np0005544501 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Dec  3 12:08:42 np0005544501 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec  3 12:08:42 np0005544501 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec  3 12:08:42 np0005544501 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Dec  3 12:08:42 np0005544501 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Dec  3 12:08:42 np0005544501 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Dec  3 12:08:42 np0005544501 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Dec  3 12:08:42 np0005544501 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Dec  3 12:08:42 np0005544501 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Dec  3 12:08:42 np0005544501 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Dec  3 12:08:42 np0005544501 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Dec  3 12:08:42 np0005544501 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Dec  3 12:08:42 np0005544501 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Dec  3 12:08:42 np0005544501 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Dec  3 12:08:42 np0005544501 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Dec  3 12:08:42 np0005544501 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Dec  3 12:08:42 np0005544501 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Dec  3 12:08:42 np0005544501 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec  3 12:08:42 np0005544501 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec  3 12:08:42 np0005544501 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec  3 12:08:42 np0005544501 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec  3 12:08:42 np0005544501 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec  3 12:08:42 np0005544501 kernel: iommu: Default domain type: Translated
Dec  3 12:08:42 np0005544501 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec  3 12:08:42 np0005544501 kernel: SCSI subsystem initialized
Dec  3 12:08:42 np0005544501 kernel: ACPI: bus type USB registered
Dec  3 12:08:42 np0005544501 kernel: usbcore: registered new interface driver usbfs
Dec  3 12:08:42 np0005544501 kernel: usbcore: registered new interface driver hub
Dec  3 12:08:42 np0005544501 kernel: usbcore: registered new device driver usb
Dec  3 12:08:42 np0005544501 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec  3 12:08:42 np0005544501 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Dec  3 12:08:42 np0005544501 kernel: PTP clock support registered
Dec  3 12:08:42 np0005544501 kernel: EDAC MC: Ver: 3.0.0
Dec  3 12:08:42 np0005544501 kernel: NetLabel: Initializing
Dec  3 12:08:42 np0005544501 kernel: NetLabel:  domain hash size = 128
Dec  3 12:08:42 np0005544501 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Dec  3 12:08:42 np0005544501 kernel: NetLabel:  unlabeled traffic allowed by default
Dec  3 12:08:42 np0005544501 kernel: PCI: Using ACPI for IRQ routing
Dec  3 12:08:42 np0005544501 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Dec  3 12:08:42 np0005544501 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Dec  3 12:08:42 np0005544501 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec  3 12:08:42 np0005544501 kernel: vgaarb: loaded
Dec  3 12:08:42 np0005544501 kernel: clocksource: Switched to clocksource kvm-clock
Dec  3 12:08:42 np0005544501 kernel: VFS: Disk quotas dquot_6.6.0
Dec  3 12:08:42 np0005544501 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec  3 12:08:42 np0005544501 kernel: pnp: PnP ACPI init
Dec  3 12:08:42 np0005544501 kernel: pnp: PnP ACPI: found 5 devices
Dec  3 12:08:42 np0005544501 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec  3 12:08:42 np0005544501 kernel: NET: Registered PF_INET protocol family
Dec  3 12:08:42 np0005544501 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec  3 12:08:42 np0005544501 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Dec  3 12:08:42 np0005544501 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec  3 12:08:42 np0005544501 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec  3 12:08:42 np0005544501 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Dec  3 12:08:42 np0005544501 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Dec  3 12:08:42 np0005544501 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Dec  3 12:08:42 np0005544501 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec  3 12:08:42 np0005544501 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec  3 12:08:42 np0005544501 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec  3 12:08:42 np0005544501 kernel: NET: Registered PF_XDP protocol family
Dec  3 12:08:42 np0005544501 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Dec  3 12:08:42 np0005544501 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Dec  3 12:08:42 np0005544501 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec  3 12:08:42 np0005544501 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Dec  3 12:08:42 np0005544501 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Dec  3 12:08:42 np0005544501 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Dec  3 12:08:42 np0005544501 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec  3 12:08:42 np0005544501 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec  3 12:08:42 np0005544501 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 75306 usecs
Dec  3 12:08:42 np0005544501 kernel: PCI: CLS 0 bytes, default 64
Dec  3 12:08:42 np0005544501 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec  3 12:08:42 np0005544501 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Dec  3 12:08:42 np0005544501 kernel: Trying to unpack rootfs image as initramfs...
Dec  3 12:08:42 np0005544501 kernel: ACPI: bus type thunderbolt registered
Dec  3 12:08:42 np0005544501 kernel: Initialise system trusted keyrings
Dec  3 12:08:42 np0005544501 kernel: Key type blacklist registered
Dec  3 12:08:42 np0005544501 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Dec  3 12:08:42 np0005544501 kernel: zbud: loaded
Dec  3 12:08:42 np0005544501 kernel: integrity: Platform Keyring initialized
Dec  3 12:08:42 np0005544501 kernel: integrity: Machine keyring initialized
Dec  3 12:08:42 np0005544501 kernel: Freeing initrd memory: 87804K
Dec  3 12:08:42 np0005544501 kernel: NET: Registered PF_ALG protocol family
Dec  3 12:08:42 np0005544501 kernel: xor: automatically using best checksumming function   avx       
Dec  3 12:08:42 np0005544501 kernel: Key type asymmetric registered
Dec  3 12:08:42 np0005544501 kernel: Asymmetric key parser 'x509' registered
Dec  3 12:08:42 np0005544501 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Dec  3 12:08:42 np0005544501 kernel: io scheduler mq-deadline registered
Dec  3 12:08:42 np0005544501 kernel: io scheduler kyber registered
Dec  3 12:08:42 np0005544501 kernel: io scheduler bfq registered
Dec  3 12:08:42 np0005544501 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Dec  3 12:08:42 np0005544501 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Dec  3 12:08:42 np0005544501 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Dec  3 12:08:42 np0005544501 kernel: ACPI: button: Power Button [PWRF]
Dec  3 12:08:42 np0005544501 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Dec  3 12:08:42 np0005544501 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec  3 12:08:42 np0005544501 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec  3 12:08:42 np0005544501 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec  3 12:08:42 np0005544501 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec  3 12:08:42 np0005544501 kernel: Non-volatile memory driver v1.3
Dec  3 12:08:42 np0005544501 kernel: rdac: device handler registered
Dec  3 12:08:42 np0005544501 kernel: hp_sw: device handler registered
Dec  3 12:08:42 np0005544501 kernel: emc: device handler registered
Dec  3 12:08:42 np0005544501 kernel: alua: device handler registered
Dec  3 12:08:42 np0005544501 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Dec  3 12:08:42 np0005544501 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Dec  3 12:08:42 np0005544501 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Dec  3 12:08:42 np0005544501 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Dec  3 12:08:42 np0005544501 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Dec  3 12:08:42 np0005544501 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Dec  3 12:08:42 np0005544501 kernel: usb usb1: Product: UHCI Host Controller
Dec  3 12:08:42 np0005544501 kernel: usb usb1: Manufacturer: Linux 5.14.0-645.el9.x86_64 uhci_hcd
Dec  3 12:08:42 np0005544501 kernel: usb usb1: SerialNumber: 0000:00:01.2
Dec  3 12:08:42 np0005544501 kernel: hub 1-0:1.0: USB hub found
Dec  3 12:08:42 np0005544501 kernel: hub 1-0:1.0: 2 ports detected
Dec  3 12:08:42 np0005544501 kernel: usbcore: registered new interface driver usbserial_generic
Dec  3 12:08:42 np0005544501 kernel: usbserial: USB Serial support registered for generic
Dec  3 12:08:42 np0005544501 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec  3 12:08:42 np0005544501 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec  3 12:08:42 np0005544501 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec  3 12:08:42 np0005544501 kernel: mousedev: PS/2 mouse device common for all mice
Dec  3 12:08:42 np0005544501 kernel: rtc_cmos 00:04: RTC can wake from S4
Dec  3 12:08:42 np0005544501 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Dec  3 12:08:42 np0005544501 kernel: rtc_cmos 00:04: registered as rtc0
Dec  3 12:08:42 np0005544501 kernel: rtc_cmos 00:04: setting system clock to 2025-12-03T17:08:41 UTC (1764781721)
Dec  3 12:08:42 np0005544501 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Dec  3 12:08:42 np0005544501 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec  3 12:08:42 np0005544501 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec  3 12:08:42 np0005544501 kernel: usbcore: registered new interface driver usbhid
Dec  3 12:08:42 np0005544501 kernel: usbhid: USB HID core driver
Dec  3 12:08:42 np0005544501 kernel: drop_monitor: Initializing network drop monitor service
Dec  3 12:08:42 np0005544501 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Dec  3 12:08:42 np0005544501 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Dec  3 12:08:42 np0005544501 kernel: Initializing XFRM netlink socket
Dec  3 12:08:42 np0005544501 kernel: NET: Registered PF_INET6 protocol family
Dec  3 12:08:42 np0005544501 kernel: Segment Routing with IPv6
Dec  3 12:08:42 np0005544501 kernel: NET: Registered PF_PACKET protocol family
Dec  3 12:08:42 np0005544501 kernel: mpls_gso: MPLS GSO support
Dec  3 12:08:42 np0005544501 kernel: IPI shorthand broadcast: enabled
Dec  3 12:08:42 np0005544501 kernel: AVX2 version of gcm_enc/dec engaged.
Dec  3 12:08:42 np0005544501 kernel: AES CTR mode by8 optimization enabled
Dec  3 12:08:42 np0005544501 kernel: sched_clock: Marking stable (1570032103, 146029895)->(1797011283, -80949285)
Dec  3 12:08:42 np0005544501 kernel: registered taskstats version 1
Dec  3 12:08:42 np0005544501 kernel: Loading compiled-in X.509 certificates
Dec  3 12:08:42 np0005544501 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4c28336b4850d771d036b52fb2778fdb4f02f708'
Dec  3 12:08:42 np0005544501 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Dec  3 12:08:42 np0005544501 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Dec  3 12:08:42 np0005544501 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Dec  3 12:08:42 np0005544501 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Dec  3 12:08:42 np0005544501 kernel: Demotion targets for Node 0: null
Dec  3 12:08:42 np0005544501 kernel: page_owner is disabled
Dec  3 12:08:42 np0005544501 kernel: Key type .fscrypt registered
Dec  3 12:08:42 np0005544501 kernel: Key type fscrypt-provisioning registered
Dec  3 12:08:42 np0005544501 kernel: Key type big_key registered
Dec  3 12:08:42 np0005544501 kernel: Key type encrypted registered
Dec  3 12:08:42 np0005544501 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec  3 12:08:42 np0005544501 kernel: Loading compiled-in module X.509 certificates
Dec  3 12:08:42 np0005544501 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4c28336b4850d771d036b52fb2778fdb4f02f708'
Dec  3 12:08:42 np0005544501 kernel: ima: Allocated hash algorithm: sha256
Dec  3 12:08:42 np0005544501 kernel: ima: No architecture policies found
Dec  3 12:08:42 np0005544501 kernel: evm: Initialising EVM extended attributes:
Dec  3 12:08:42 np0005544501 kernel: evm: security.selinux
Dec  3 12:08:42 np0005544501 kernel: evm: security.SMACK64 (disabled)
Dec  3 12:08:42 np0005544501 kernel: evm: security.SMACK64EXEC (disabled)
Dec  3 12:08:42 np0005544501 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Dec  3 12:08:42 np0005544501 kernel: evm: security.SMACK64MMAP (disabled)
Dec  3 12:08:42 np0005544501 kernel: evm: security.apparmor (disabled)
Dec  3 12:08:42 np0005544501 kernel: evm: security.ima
Dec  3 12:08:42 np0005544501 kernel: evm: security.capability
Dec  3 12:08:42 np0005544501 kernel: evm: HMAC attrs: 0x1
Dec  3 12:08:42 np0005544501 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Dec  3 12:08:42 np0005544501 kernel: Running certificate verification RSA selftest
Dec  3 12:08:42 np0005544501 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Dec  3 12:08:42 np0005544501 kernel: Running certificate verification ECDSA selftest
Dec  3 12:08:42 np0005544501 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Dec  3 12:08:42 np0005544501 kernel: clk: Disabling unused clocks
Dec  3 12:08:42 np0005544501 kernel: Freeing unused decrypted memory: 2028K
Dec  3 12:08:42 np0005544501 kernel: Freeing unused kernel image (initmem) memory: 4196K
Dec  3 12:08:42 np0005544501 kernel: Write protecting the kernel read-only data: 30720k
Dec  3 12:08:42 np0005544501 kernel: Freeing unused kernel image (rodata/data gap) memory: 428K
Dec  3 12:08:42 np0005544501 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Dec  3 12:08:42 np0005544501 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Dec  3 12:08:42 np0005544501 kernel: usb 1-1: Product: QEMU USB Tablet
Dec  3 12:08:42 np0005544501 kernel: usb 1-1: Manufacturer: QEMU
Dec  3 12:08:42 np0005544501 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Dec  3 12:08:42 np0005544501 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Dec  3 12:08:42 np0005544501 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Dec  3 12:08:42 np0005544501 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Dec  3 12:08:42 np0005544501 kernel: Run /init as init process
Dec  3 12:08:42 np0005544501 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec  3 12:08:42 np0005544501 systemd: Detected virtualization kvm.
Dec  3 12:08:42 np0005544501 systemd: Detected architecture x86-64.
Dec  3 12:08:42 np0005544501 systemd: Running in initrd.
Dec  3 12:08:42 np0005544501 systemd: No hostname configured, using default hostname.
Dec  3 12:08:42 np0005544501 systemd: Hostname set to <localhost>.
Dec  3 12:08:42 np0005544501 systemd: Initializing machine ID from VM UUID.
Dec  3 12:08:42 np0005544501 systemd: Queued start job for default target Initrd Default Target.
Dec  3 12:08:42 np0005544501 systemd: Started Dispatch Password Requests to Console Directory Watch.
Dec  3 12:08:42 np0005544501 systemd: Reached target Local Encrypted Volumes.
Dec  3 12:08:42 np0005544501 systemd: Reached target Initrd /usr File System.
Dec  3 12:08:42 np0005544501 systemd: Reached target Local File Systems.
Dec  3 12:08:42 np0005544501 systemd: Reached target Path Units.
Dec  3 12:08:42 np0005544501 systemd: Reached target Slice Units.
Dec  3 12:08:42 np0005544501 systemd: Reached target Swaps.
Dec  3 12:08:42 np0005544501 systemd: Reached target Timer Units.
Dec  3 12:08:42 np0005544501 systemd: Listening on D-Bus System Message Bus Socket.
Dec  3 12:08:42 np0005544501 systemd: Listening on Journal Socket (/dev/log).
Dec  3 12:08:42 np0005544501 systemd: Listening on Journal Socket.
Dec  3 12:08:42 np0005544501 systemd: Listening on udev Control Socket.
Dec  3 12:08:42 np0005544501 systemd: Listening on udev Kernel Socket.
Dec  3 12:08:42 np0005544501 systemd: Reached target Socket Units.
Dec  3 12:08:42 np0005544501 systemd: Starting Create List of Static Device Nodes...
Dec  3 12:08:42 np0005544501 systemd: Starting Journal Service...
Dec  3 12:08:42 np0005544501 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec  3 12:08:42 np0005544501 systemd: Starting Apply Kernel Variables...
Dec  3 12:08:42 np0005544501 systemd: Starting Create System Users...
Dec  3 12:08:42 np0005544501 systemd: Starting Setup Virtual Console...
Dec  3 12:08:42 np0005544501 systemd: Finished Create List of Static Device Nodes.
Dec  3 12:08:42 np0005544501 systemd: Finished Create System Users.
Dec  3 12:08:42 np0005544501 systemd-journald[309]: Journal started
Dec  3 12:08:42 np0005544501 systemd-journald[309]: Runtime Journal (/run/log/journal/3f123a89727d4ccfa960b8fd98f4d5b8) is 8.0M, max 153.6M, 145.6M free.
Dec  3 12:08:42 np0005544501 systemd-sysusers[313]: Creating group 'users' with GID 100.
Dec  3 12:08:42 np0005544501 systemd-sysusers[313]: Creating group 'dbus' with GID 81.
Dec  3 12:08:42 np0005544501 systemd-sysusers[313]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Dec  3 12:08:42 np0005544501 systemd: Started Journal Service.
Dec  3 12:08:42 np0005544501 systemd[1]: Finished Apply Kernel Variables.
Dec  3 12:08:42 np0005544501 systemd[1]: Starting Create Static Device Nodes in /dev...
Dec  3 12:08:42 np0005544501 systemd[1]: Starting Create Volatile Files and Directories...
Dec  3 12:08:42 np0005544501 systemd[1]: Finished Create Static Device Nodes in /dev.
Dec  3 12:08:42 np0005544501 systemd[1]: Finished Setup Virtual Console.
Dec  3 12:08:42 np0005544501 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Dec  3 12:08:42 np0005544501 systemd[1]: Starting dracut cmdline hook...
Dec  3 12:08:42 np0005544501 systemd[1]: Finished Create Volatile Files and Directories.
Dec  3 12:08:42 np0005544501 dracut-cmdline[328]: dracut-9 dracut-057-102.git20250818.el9
Dec  3 12:08:42 np0005544501 dracut-cmdline[328]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64 root=UUID=fcf6b761-831a-48a7-9f5f-068b5063763f ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec  3 12:08:42 np0005544501 systemd[1]: Finished dracut cmdline hook.
Dec  3 12:08:42 np0005544501 systemd[1]: Starting dracut pre-udev hook...
Dec  3 12:08:42 np0005544501 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec  3 12:08:42 np0005544501 kernel: device-mapper: uevent: version 1.0.3
Dec  3 12:08:42 np0005544501 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Dec  3 12:08:42 np0005544501 kernel: RPC: Registered named UNIX socket transport module.
Dec  3 12:08:42 np0005544501 kernel: RPC: Registered udp transport module.
Dec  3 12:08:42 np0005544501 kernel: RPC: Registered tcp transport module.
Dec  3 12:08:42 np0005544501 kernel: RPC: Registered tcp-with-tls transport module.
Dec  3 12:08:42 np0005544501 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec  3 12:08:42 np0005544501 rpc.statd[445]: Version 2.5.4 starting
Dec  3 12:08:42 np0005544501 rpc.statd[445]: Initializing NSM state
Dec  3 12:08:42 np0005544501 rpc.idmapd[450]: Setting log level to 0
Dec  3 12:08:43 np0005544501 systemd[1]: Finished dracut pre-udev hook.
Dec  3 12:08:43 np0005544501 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec  3 12:08:43 np0005544501 systemd-udevd[463]: Using default interface naming scheme 'rhel-9.0'.
Dec  3 12:08:43 np0005544501 systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec  3 12:08:43 np0005544501 systemd[1]: Starting dracut pre-trigger hook...
Dec  3 12:08:43 np0005544501 systemd[1]: Finished dracut pre-trigger hook.
Dec  3 12:08:43 np0005544501 systemd[1]: Starting Coldplug All udev Devices...
Dec  3 12:08:43 np0005544501 systemd[1]: Created slice Slice /system/modprobe.
Dec  3 12:08:43 np0005544501 systemd[1]: Starting Load Kernel Module configfs...
Dec  3 12:08:43 np0005544501 systemd[1]: Finished Coldplug All udev Devices.
Dec  3 12:08:43 np0005544501 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec  3 12:08:43 np0005544501 systemd[1]: Reached target Network.
Dec  3 12:08:43 np0005544501 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec  3 12:08:43 np0005544501 systemd[1]: Starting dracut initqueue hook...
Dec  3 12:08:43 np0005544501 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec  3 12:08:43 np0005544501 systemd[1]: Finished Load Kernel Module configfs.
Dec  3 12:08:43 np0005544501 systemd[1]: Mounting Kernel Configuration File System...
Dec  3 12:08:43 np0005544501 systemd[1]: Mounted Kernel Configuration File System.
Dec  3 12:08:43 np0005544501 systemd[1]: Reached target System Initialization.
Dec  3 12:08:43 np0005544501 systemd[1]: Reached target Basic System.
Dec  3 12:08:43 np0005544501 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Dec  3 12:08:43 np0005544501 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Dec  3 12:08:43 np0005544501 kernel: vda: vda1
Dec  3 12:08:43 np0005544501 kernel: scsi host0: ata_piix
Dec  3 12:08:43 np0005544501 kernel: scsi host1: ata_piix
Dec  3 12:08:43 np0005544501 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Dec  3 12:08:43 np0005544501 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Dec  3 12:08:43 np0005544501 systemd[1]: Found device /dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f.
Dec  3 12:08:43 np0005544501 systemd[1]: Reached target Initrd Root Device.
Dec  3 12:08:43 np0005544501 kernel: ata1: found unknown device (class 0)
Dec  3 12:08:43 np0005544501 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec  3 12:08:43 np0005544501 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Dec  3 12:08:43 np0005544501 systemd-udevd[478]: Network interface NamePolicy= disabled on kernel command line.
Dec  3 12:08:43 np0005544501 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Dec  3 12:08:43 np0005544501 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec  3 12:08:43 np0005544501 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec  3 12:08:43 np0005544501 systemd[1]: Finished dracut initqueue hook.
Dec  3 12:08:43 np0005544501 systemd[1]: Reached target Preparation for Remote File Systems.
Dec  3 12:08:43 np0005544501 systemd[1]: Reached target Remote Encrypted Volumes.
Dec  3 12:08:43 np0005544501 systemd[1]: Reached target Remote File Systems.
Dec  3 12:08:43 np0005544501 systemd[1]: Starting dracut pre-mount hook...
Dec  3 12:08:43 np0005544501 systemd[1]: Finished dracut pre-mount hook.
Dec  3 12:08:43 np0005544501 systemd[1]: Starting File System Check on /dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f...
Dec  3 12:08:43 np0005544501 systemd-fsck[556]: /usr/sbin/fsck.xfs: XFS file system.
Dec  3 12:08:43 np0005544501 systemd[1]: Finished File System Check on /dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f.
Dec  3 12:08:43 np0005544501 systemd[1]: Mounting /sysroot...
Dec  3 12:08:44 np0005544501 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Dec  3 12:08:44 np0005544501 kernel: XFS (vda1): Mounting V5 Filesystem fcf6b761-831a-48a7-9f5f-068b5063763f
Dec  3 12:08:44 np0005544501 kernel: XFS (vda1): Ending clean mount
Dec  3 12:08:44 np0005544501 systemd[1]: Mounted /sysroot.
Dec  3 12:08:44 np0005544501 systemd[1]: Reached target Initrd Root File System.
Dec  3 12:08:44 np0005544501 systemd[1]: Starting Mountpoints Configured in the Real Root...
Dec  3 12:08:44 np0005544501 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec  3 12:08:44 np0005544501 systemd[1]: Finished Mountpoints Configured in the Real Root.
Dec  3 12:08:44 np0005544501 systemd[1]: Reached target Initrd File Systems.
Dec  3 12:08:44 np0005544501 systemd[1]: Reached target Initrd Default Target.
Dec  3 12:08:44 np0005544501 systemd[1]: Starting dracut mount hook...
Dec  3 12:08:44 np0005544501 systemd[1]: Finished dracut mount hook.
Dec  3 12:08:44 np0005544501 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Dec  3 12:08:44 np0005544501 rpc.idmapd[450]: exiting on signal 15
Dec  3 12:08:44 np0005544501 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Dec  3 12:08:44 np0005544501 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Dec  3 12:08:44 np0005544501 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Dec  3 12:08:44 np0005544501 systemd[1]: Stopped target Network.
Dec  3 12:08:44 np0005544501 systemd[1]: Stopped target Remote Encrypted Volumes.
Dec  3 12:08:44 np0005544501 systemd[1]: Stopped target Timer Units.
Dec  3 12:08:44 np0005544501 systemd[1]: dbus.socket: Deactivated successfully.
Dec  3 12:08:44 np0005544501 systemd[1]: Closed D-Bus System Message Bus Socket.
Dec  3 12:08:44 np0005544501 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec  3 12:08:44 np0005544501 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Dec  3 12:08:44 np0005544501 systemd[1]: Stopped target Initrd Default Target.
Dec  3 12:08:44 np0005544501 systemd[1]: Stopped target Basic System.
Dec  3 12:08:44 np0005544501 systemd[1]: Stopped target Initrd Root Device.
Dec  3 12:08:44 np0005544501 systemd[1]: Stopped target Initrd /usr File System.
Dec  3 12:08:44 np0005544501 systemd[1]: Stopped target Path Units.
Dec  3 12:08:44 np0005544501 systemd[1]: Stopped target Remote File Systems.
Dec  3 12:08:44 np0005544501 systemd[1]: Stopped target Preparation for Remote File Systems.
Dec  3 12:08:44 np0005544501 systemd[1]: Stopped target Slice Units.
Dec  3 12:08:44 np0005544501 systemd[1]: Stopped target Socket Units.
Dec  3 12:08:44 np0005544501 systemd[1]: Stopped target System Initialization.
Dec  3 12:08:44 np0005544501 systemd[1]: Stopped target Local File Systems.
Dec  3 12:08:44 np0005544501 systemd[1]: Stopped target Swaps.
Dec  3 12:08:44 np0005544501 systemd[1]: dracut-mount.service: Deactivated successfully.
Dec  3 12:08:44 np0005544501 systemd[1]: Stopped dracut mount hook.
Dec  3 12:08:44 np0005544501 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec  3 12:08:44 np0005544501 systemd[1]: Stopped dracut pre-mount hook.
Dec  3 12:08:44 np0005544501 systemd[1]: Stopped target Local Encrypted Volumes.
Dec  3 12:08:44 np0005544501 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec  3 12:08:44 np0005544501 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Dec  3 12:08:44 np0005544501 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec  3 12:08:44 np0005544501 systemd[1]: Stopped dracut initqueue hook.
Dec  3 12:08:44 np0005544501 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec  3 12:08:44 np0005544501 systemd[1]: Stopped Apply Kernel Variables.
Dec  3 12:08:44 np0005544501 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec  3 12:08:44 np0005544501 systemd[1]: Stopped Create Volatile Files and Directories.
Dec  3 12:08:44 np0005544501 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec  3 12:08:44 np0005544501 systemd[1]: Stopped Coldplug All udev Devices.
Dec  3 12:08:44 np0005544501 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec  3 12:08:44 np0005544501 systemd[1]: Stopped dracut pre-trigger hook.
Dec  3 12:08:44 np0005544501 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Dec  3 12:08:44 np0005544501 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec  3 12:08:44 np0005544501 systemd[1]: Stopped Setup Virtual Console.
Dec  3 12:08:44 np0005544501 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec  3 12:08:44 np0005544501 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec  3 12:08:44 np0005544501 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec  3 12:08:44 np0005544501 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Dec  3 12:08:44 np0005544501 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec  3 12:08:44 np0005544501 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Dec  3 12:08:44 np0005544501 systemd[1]: systemd-udevd.service: Consumed 1.023s CPU time.
Dec  3 12:08:44 np0005544501 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec  3 12:08:44 np0005544501 systemd[1]: Closed udev Control Socket.
Dec  3 12:08:44 np0005544501 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec  3 12:08:44 np0005544501 systemd[1]: Closed udev Kernel Socket.
Dec  3 12:08:44 np0005544501 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec  3 12:08:44 np0005544501 systemd[1]: Stopped dracut pre-udev hook.
Dec  3 12:08:44 np0005544501 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec  3 12:08:44 np0005544501 systemd[1]: Stopped dracut cmdline hook.
Dec  3 12:08:44 np0005544501 systemd[1]: Starting Cleanup udev Database...
Dec  3 12:08:44 np0005544501 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec  3 12:08:44 np0005544501 systemd[1]: Stopped Create Static Device Nodes in /dev.
Dec  3 12:08:44 np0005544501 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec  3 12:08:44 np0005544501 systemd[1]: Stopped Create List of Static Device Nodes.
Dec  3 12:08:44 np0005544501 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Dec  3 12:08:44 np0005544501 systemd[1]: Stopped Create System Users.
Dec  3 12:08:44 np0005544501 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec  3 12:08:44 np0005544501 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Dec  3 12:08:44 np0005544501 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec  3 12:08:44 np0005544501 systemd[1]: Finished Cleanup udev Database.
Dec  3 12:08:44 np0005544501 systemd[1]: Reached target Switch Root.
Dec  3 12:08:44 np0005544501 systemd[1]: Starting Switch Root...
Dec  3 12:08:44 np0005544501 systemd[1]: Switching root.
Dec  3 12:08:44 np0005544501 systemd-journald[309]: Journal stopped
Dec  3 12:08:45 np0005544501 systemd-journald: Received SIGTERM from PID 1 (systemd).
Dec  3 12:08:45 np0005544501 kernel: audit: type=1404 audit(1764781724.786:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Dec  3 12:08:45 np0005544501 kernel: SELinux:  policy capability network_peer_controls=1
Dec  3 12:08:45 np0005544501 kernel: SELinux:  policy capability open_perms=1
Dec  3 12:08:45 np0005544501 kernel: SELinux:  policy capability extended_socket_class=1
Dec  3 12:08:45 np0005544501 kernel: SELinux:  policy capability always_check_network=0
Dec  3 12:08:45 np0005544501 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  3 12:08:45 np0005544501 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  3 12:08:45 np0005544501 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  3 12:08:45 np0005544501 kernel: audit: type=1403 audit(1764781724.917:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec  3 12:08:45 np0005544501 systemd: Successfully loaded SELinux policy in 134.621ms.
Dec  3 12:08:45 np0005544501 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 30.126ms.
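[editor's note] The audit type=1404/1403 records and the policy-capability lines above show SELinux coming up with enforcing=1 and the policy loading in about 135ms during the initrd-to-root handover. A minimal post-boot check, assuming the standard SELinux userland (getenforce/sestatus) is installed:

    # Current mode, loaded policy type, and policy version
    sestatus
    # Just the mode (should print "Enforcing" given enforcing=1 above)
    getenforce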
Dec  3 12:08:45 np0005544501 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec  3 12:08:45 np0005544501 systemd: Detected virtualization kvm.
Dec  3 12:08:45 np0005544501 systemd: Detected architecture x86-64.
Dec  3 12:08:45 np0005544501 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 12:08:45 np0005544501 systemd: initrd-switch-root.service: Deactivated successfully.
Dec  3 12:08:45 np0005544501 systemd: Stopped Switch Root.
Dec  3 12:08:45 np0005544501 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec  3 12:08:45 np0005544501 systemd: Created slice Slice /system/getty.
Dec  3 12:08:45 np0005544501 systemd: Created slice Slice /system/serial-getty.
Dec  3 12:08:45 np0005544501 systemd: Created slice Slice /system/sshd-keygen.
Dec  3 12:08:45 np0005544501 systemd: Created slice User and Session Slice.
Dec  3 12:08:45 np0005544501 systemd: Started Dispatch Password Requests to Console Directory Watch.
Dec  3 12:08:45 np0005544501 systemd: Started Forward Password Requests to Wall Directory Watch.
Dec  3 12:08:45 np0005544501 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Dec  3 12:08:45 np0005544501 systemd: Reached target Local Encrypted Volumes.
Dec  3 12:08:45 np0005544501 systemd: Stopped target Switch Root.
Dec  3 12:08:45 np0005544501 systemd: Stopped target Initrd File Systems.
Dec  3 12:08:45 np0005544501 systemd: Stopped target Initrd Root File System.
Dec  3 12:08:45 np0005544501 systemd: Reached target Local Integrity Protected Volumes.
Dec  3 12:08:45 np0005544501 systemd: Reached target Path Units.
Dec  3 12:08:45 np0005544501 systemd: Reached target rpc_pipefs.target.
Dec  3 12:08:45 np0005544501 systemd: Reached target Slice Units.
Dec  3 12:08:45 np0005544501 systemd: Reached target Swaps.
Dec  3 12:08:45 np0005544501 systemd: Reached target Local Verity Protected Volumes.
Dec  3 12:08:45 np0005544501 systemd: Listening on RPCbind Server Activation Socket.
Dec  3 12:08:45 np0005544501 systemd: Reached target RPC Port Mapper.
Dec  3 12:08:45 np0005544501 systemd: Listening on Process Core Dump Socket.
Dec  3 12:08:45 np0005544501 systemd: Listening on initctl Compatibility Named Pipe.
Dec  3 12:08:45 np0005544501 systemd: Listening on udev Control Socket.
Dec  3 12:08:45 np0005544501 systemd: Listening on udev Kernel Socket.
Dec  3 12:08:45 np0005544501 systemd: Mounting Huge Pages File System...
Dec  3 12:08:45 np0005544501 systemd: Mounting POSIX Message Queue File System...
Dec  3 12:08:45 np0005544501 systemd: Mounting Kernel Debug File System...
Dec  3 12:08:45 np0005544501 systemd: Mounting Kernel Trace File System...
Dec  3 12:08:45 np0005544501 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec  3 12:08:45 np0005544501 systemd: Starting Create List of Static Device Nodes...
Dec  3 12:08:45 np0005544501 systemd: Starting Load Kernel Module configfs...
Dec  3 12:08:45 np0005544501 systemd: Starting Load Kernel Module drm...
Dec  3 12:08:45 np0005544501 systemd: Starting Load Kernel Module efi_pstore...
Dec  3 12:08:45 np0005544501 systemd: Starting Load Kernel Module fuse...
Dec  3 12:08:45 np0005544501 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Dec  3 12:08:45 np0005544501 systemd: systemd-fsck-root.service: Deactivated successfully.
Dec  3 12:08:45 np0005544501 systemd: Stopped File System Check on Root Device.
Dec  3 12:08:45 np0005544501 systemd: Stopped Journal Service.
Dec  3 12:08:45 np0005544501 systemd: Starting Journal Service...
Dec  3 12:08:45 np0005544501 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec  3 12:08:45 np0005544501 systemd: Starting Generate network units from Kernel command line...
Dec  3 12:08:45 np0005544501 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec  3 12:08:45 np0005544501 systemd: Starting Remount Root and Kernel File Systems...
Dec  3 12:08:45 np0005544501 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Dec  3 12:08:45 np0005544501 systemd: Starting Apply Kernel Variables...
Dec  3 12:08:45 np0005544501 systemd: Starting Coldplug All udev Devices...
Dec  3 12:08:45 np0005544501 kernel: ACPI: bus type drm_connector registered
Dec  3 12:08:45 np0005544501 kernel: fuse: init (API version 7.37)
Dec  3 12:08:45 np0005544501 systemd-journald[680]: Journal started
Dec  3 12:08:45 np0005544501 systemd-journald[680]: Runtime Journal (/run/log/journal/4d4ef2323cc3337bbfd9081b2a323b4e) is 8.0M, max 153.6M, 145.6M free.
Dec  3 12:08:45 np0005544501 systemd[1]: Queued start job for default target Multi-User System.
Dec  3 12:08:45 np0005544501 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec  3 12:08:45 np0005544501 systemd: Started Journal Service.
Dec  3 12:08:45 np0005544501 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Dec  3 12:08:45 np0005544501 systemd[1]: Mounted Huge Pages File System.
Dec  3 12:08:45 np0005544501 systemd[1]: Mounted POSIX Message Queue File System.
Dec  3 12:08:45 np0005544501 systemd[1]: Mounted Kernel Debug File System.
Dec  3 12:08:45 np0005544501 systemd[1]: Mounted Kernel Trace File System.
Dec  3 12:08:45 np0005544501 systemd[1]: Finished Create List of Static Device Nodes.
Dec  3 12:08:45 np0005544501 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec  3 12:08:45 np0005544501 systemd[1]: Finished Load Kernel Module configfs.
Dec  3 12:08:45 np0005544501 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec  3 12:08:45 np0005544501 systemd[1]: Finished Load Kernel Module drm.
Dec  3 12:08:45 np0005544501 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec  3 12:08:45 np0005544501 systemd[1]: Finished Load Kernel Module efi_pstore.
Dec  3 12:08:45 np0005544501 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec  3 12:08:45 np0005544501 systemd[1]: Finished Load Kernel Module fuse.
Dec  3 12:08:45 np0005544501 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Dec  3 12:08:45 np0005544501 systemd[1]: Finished Generate network units from Kernel command line.
Dec  3 12:08:45 np0005544501 systemd[1]: Finished Remount Root and Kernel File Systems.
Dec  3 12:08:45 np0005544501 systemd[1]: Finished Apply Kernel Variables.
Dec  3 12:08:45 np0005544501 systemd[1]: Mounting FUSE Control File System...
Dec  3 12:08:45 np0005544501 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec  3 12:08:45 np0005544501 systemd[1]: Starting Rebuild Hardware Database...
Dec  3 12:08:45 np0005544501 systemd[1]: Starting Flush Journal to Persistent Storage...
Dec  3 12:08:45 np0005544501 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec  3 12:08:45 np0005544501 systemd[1]: Starting Load/Save OS Random Seed...
Dec  3 12:08:45 np0005544501 systemd[1]: Starting Create System Users...
Dec  3 12:08:45 np0005544501 systemd[1]: Mounted FUSE Control File System.
Dec  3 12:08:45 np0005544501 systemd-journald[680]: Runtime Journal (/run/log/journal/4d4ef2323cc3337bbfd9081b2a323b4e) is 8.0M, max 153.6M, 145.6M free.
Dec  3 12:08:45 np0005544501 systemd-journald[680]: Received client request to flush runtime journal.
Dec  3 12:08:45 np0005544501 systemd[1]: Finished Flush Journal to Persistent Storage.
Dec  3 12:08:45 np0005544501 systemd[1]: Finished Load/Save OS Random Seed.
Dec  3 12:08:45 np0005544501 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec  3 12:08:45 np0005544501 systemd[1]: Finished Create System Users.
Dec  3 12:08:45 np0005544501 systemd[1]: Starting Create Static Device Nodes in /dev...
Dec  3 12:08:45 np0005544501 systemd[1]: Finished Coldplug All udev Devices.
Dec  3 12:08:45 np0005544501 systemd[1]: Finished Create Static Device Nodes in /dev.
Dec  3 12:08:45 np0005544501 systemd[1]: Reached target Preparation for Local File Systems.
Dec  3 12:08:45 np0005544501 systemd[1]: Reached target Local File Systems.
Dec  3 12:08:45 np0005544501 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Dec  3 12:08:45 np0005544501 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Dec  3 12:08:45 np0005544501 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec  3 12:08:45 np0005544501 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Dec  3 12:08:45 np0005544501 systemd[1]: Starting Automatic Boot Loader Update...
Dec  3 12:08:45 np0005544501 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Dec  3 12:08:45 np0005544501 systemd[1]: Starting Create Volatile Files and Directories...
Dec  3 12:08:45 np0005544501 bootctl[697]: Couldn't find EFI system partition, skipping.
Dec  3 12:08:45 np0005544501 systemd[1]: Finished Automatic Boot Loader Update.
Dec  3 12:08:45 np0005544501 systemd[1]: Finished Create Volatile Files and Directories.
Dec  3 12:08:45 np0005544501 systemd[1]: Starting Security Auditing Service...
Dec  3 12:08:45 np0005544501 systemd[1]: Starting RPC Bind...
Dec  3 12:08:45 np0005544501 systemd[1]: Starting Rebuild Journal Catalog...
Dec  3 12:08:45 np0005544501 auditd[702]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Dec  3 12:08:45 np0005544501 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Dec  3 12:08:45 np0005544501 auditd[702]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Dec  3 12:08:45 np0005544501 systemd[1]: Started RPC Bind.
Dec  3 12:08:45 np0005544501 systemd[1]: Finished Rebuild Journal Catalog.
Dec  3 12:08:45 np0005544501 augenrules[708]: /sbin/augenrules: No change
Dec  3 12:08:45 np0005544501 augenrules[723]: No rules
Dec  3 12:08:45 np0005544501 augenrules[723]: enabled 1
Dec  3 12:08:45 np0005544501 augenrules[723]: failure 1
Dec  3 12:08:45 np0005544501 augenrules[723]: pid 702
Dec  3 12:08:45 np0005544501 augenrules[723]: rate_limit 0
Dec  3 12:08:45 np0005544501 augenrules[723]: backlog_limit 8192
Dec  3 12:08:45 np0005544501 augenrules[723]: lost 0
Dec  3 12:08:45 np0005544501 augenrules[723]: backlog 3
Dec  3 12:08:45 np0005544501 augenrules[723]: backlog_wait_time 60000
Dec  3 12:08:45 np0005544501 augenrules[723]: backlog_wait_time_actual 0
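[editor's note] The key/value block augenrules prints above is the kernel audit status (enabled=1, audit daemon at pid 702, backlog limit 8192, nothing lost). A sketch of how one could pull the same snapshot directly, assuming auditctl from the audit package is on PATH:

    # Kernel audit status, matching the fields logged by augenrules
    auditctl -s
    # Currently loaded rules (the log above shows "No rules")
    auditctl -l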
Dec  3 12:08:45 np0005544501 systemd[1]: Started Security Auditing Service.
Dec  3 12:08:45 np0005544501 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Dec  3 12:08:45 np0005544501 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Dec  3 12:08:46 np0005544501 systemd[1]: Finished Rebuild Hardware Database.
Dec  3 12:08:46 np0005544501 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec  3 12:08:46 np0005544501 systemd[1]: Starting Update is Completed...
Dec  3 12:08:46 np0005544501 systemd[1]: Finished Update is Completed.
Dec  3 12:08:46 np0005544501 systemd-udevd[731]: Using default interface naming scheme 'rhel-9.0'.
Dec  3 12:08:46 np0005544501 systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec  3 12:08:46 np0005544501 systemd[1]: Reached target System Initialization.
Dec  3 12:08:46 np0005544501 systemd[1]: Started dnf makecache --timer.
Dec  3 12:08:46 np0005544501 systemd[1]: Started Daily rotation of log files.
Dec  3 12:08:46 np0005544501 systemd[1]: Started Daily Cleanup of Temporary Directories.
Dec  3 12:08:46 np0005544501 systemd[1]: Reached target Timer Units.
Dec  3 12:08:46 np0005544501 systemd[1]: Listening on D-Bus System Message Bus Socket.
Dec  3 12:08:46 np0005544501 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Dec  3 12:08:46 np0005544501 systemd[1]: Reached target Socket Units.
Dec  3 12:08:46 np0005544501 systemd[1]: Starting D-Bus System Message Bus...
Dec  3 12:08:46 np0005544501 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec  3 12:08:46 np0005544501 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Dec  3 12:08:46 np0005544501 systemd[1]: Starting Load Kernel Module configfs...
Dec  3 12:08:46 np0005544501 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec  3 12:08:46 np0005544501 systemd[1]: Finished Load Kernel Module configfs.
Dec  3 12:08:46 np0005544501 systemd[1]: Started D-Bus System Message Bus.
Dec  3 12:08:46 np0005544501 systemd[1]: Reached target Basic System.
Dec  3 12:08:46 np0005544501 dbus-broker-lau[754]: Ready
Dec  3 12:08:46 np0005544501 systemd-udevd[736]: Network interface NamePolicy= disabled on kernel command line.
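[editor's note] udevd reports that predictable interface naming is disabled on the kernel command line, which is why the NIC later appears as plain eth0. A quick way to confirm on the running system (the exact option name is an assumption, typically net.ifnames=0):

    # Look for the ifnames option on the booted kernel command line
    tr ' ' '\n' </proc/cmdline | grep -i ifnames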
Dec  3 12:08:46 np0005544501 systemd[1]: Starting NTP client/server...
Dec  3 12:08:46 np0005544501 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Dec  3 12:08:46 np0005544501 systemd[1]: Starting Restore /run/initramfs on shutdown...
Dec  3 12:08:46 np0005544501 systemd[1]: Starting IPv4 firewall with iptables...
Dec  3 12:08:46 np0005544501 systemd[1]: Started irqbalance daemon.
Dec  3 12:08:46 np0005544501 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Dec  3 12:08:46 np0005544501 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  3 12:08:46 np0005544501 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  3 12:08:46 np0005544501 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  3 12:08:46 np0005544501 systemd[1]: Reached target sshd-keygen.target.
Dec  3 12:08:46 np0005544501 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Dec  3 12:08:46 np0005544501 systemd[1]: Reached target User and Group Name Lookups.
Dec  3 12:08:46 np0005544501 systemd[1]: Starting User Login Management...
Dec  3 12:08:46 np0005544501 systemd[1]: Finished Restore /run/initramfs on shutdown.
Dec  3 12:08:46 np0005544501 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Dec  3 12:08:46 np0005544501 chronyd[792]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec  3 12:08:46 np0005544501 chronyd[792]: Loaded 0 symmetric keys
Dec  3 12:08:46 np0005544501 chronyd[792]: Using right/UTC timezone to obtain leap second data
Dec  3 12:08:46 np0005544501 chronyd[792]: Loaded seccomp filter (level 2)
Dec  3 12:08:46 np0005544501 systemd[1]: Started NTP client/server.
Dec  3 12:08:46 np0005544501 systemd-logind[784]: Watching system buttons on /dev/input/event0 (Power Button)
Dec  3 12:08:46 np0005544501 systemd-logind[784]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec  3 12:08:46 np0005544501 systemd-logind[784]: New seat seat0.
Dec  3 12:08:46 np0005544501 systemd[1]: Started User Login Management.
Dec  3 12:08:46 np0005544501 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Dec  3 12:08:46 np0005544501 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Dec  3 12:08:46 np0005544501 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Dec  3 12:08:46 np0005544501 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Dec  3 12:08:46 np0005544501 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec  3 12:08:46 np0005544501 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Dec  3 12:08:46 np0005544501 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Dec  3 12:08:46 np0005544501 kernel: Console: switching to colour dummy device 80x25
Dec  3 12:08:46 np0005544501 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Dec  3 12:08:46 np0005544501 kernel: [drm] features: -context_init
Dec  3 12:08:46 np0005544501 kernel: [drm] number of scanouts: 1
Dec  3 12:08:46 np0005544501 kernel: [drm] number of cap sets: 0
Dec  3 12:08:46 np0005544501 kernel: kvm_amd: TSC scaling supported
Dec  3 12:08:46 np0005544501 kernel: kvm_amd: Nested Virtualization enabled
Dec  3 12:08:46 np0005544501 kernel: kvm_amd: Nested Paging enabled
Dec  3 12:08:46 np0005544501 kernel: kvm_amd: LBR virtualization supported
Dec  3 12:08:46 np0005544501 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Dec  3 12:08:46 np0005544501 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Dec  3 12:08:46 np0005544501 kernel: Console: switching to colour frame buffer device 128x48
Dec  3 12:08:46 np0005544501 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Dec  3 12:08:46 np0005544501 iptables.init[777]: iptables: Applying firewall rules: [  OK  ]
Dec  3 12:08:46 np0005544501 systemd[1]: Finished IPv4 firewall with iptables.
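[editor's note] iptables.init logs only "[ OK ]", not the ruleset it applied; the nft_compat warnings above suggest the iptables-nft compatibility backend is in use. One could dump the live rules afterwards, assuming iptables is on PATH:

    # All chains and rules in iptables-save syntax
    iptables -S
    # Same view with packet/byte counters
    iptables -L -n -v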
Dec  3 12:08:46 np0005544501 cloud-init[839]: Cloud-init v. 24.4-7.el9 running 'init-local' at Wed, 03 Dec 2025 17:08:46 +0000. Up 6.89 seconds.
Dec  3 12:08:47 np0005544501 systemd[1]: run-cloud\x2dinit-tmp-tmpyp0f3eqf.mount: Deactivated successfully.
Dec  3 12:08:47 np0005544501 systemd[1]: Starting Hostname Service...
Dec  3 12:08:47 np0005544501 systemd[1]: Started Hostname Service.
Dec  3 12:08:47 np0005544501 systemd-hostnamed[853]: Hostname set to <np0005544501.novalocal> (static)
Dec  3 12:08:47 np0005544501 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Dec  3 12:08:47 np0005544501 systemd[1]: Reached target Preparation for Network.
Dec  3 12:08:47 np0005544501 systemd[1]: Starting Network Manager...
Dec  3 12:08:47 np0005544501 NetworkManager[857]: <info>  [1764781727.4400] NetworkManager (version 1.54.1-1.el9) is starting... (boot:82de8b62-473e-4ed3-b378-399a4a16feb4)
Dec  3 12:08:47 np0005544501 NetworkManager[857]: <info>  [1764781727.4404] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec  3 12:08:47 np0005544501 NetworkManager[857]: <info>  [1764781727.4466] manager[0x55c446d2c080]: monitoring kernel firmware directory '/lib/firmware'.
Dec  3 12:08:47 np0005544501 NetworkManager[857]: <info>  [1764781727.4503] hostname: hostname: using hostnamed
Dec  3 12:08:47 np0005544501 NetworkManager[857]: <info>  [1764781727.4504] hostname: static hostname changed from (none) to "np0005544501.novalocal"
Dec  3 12:08:47 np0005544501 NetworkManager[857]: <info>  [1764781727.4507] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec  3 12:08:47 np0005544501 NetworkManager[857]: <info>  [1764781727.4634] manager[0x55c446d2c080]: rfkill: Wi-Fi hardware radio set enabled
Dec  3 12:08:47 np0005544501 NetworkManager[857]: <info>  [1764781727.4635] manager[0x55c446d2c080]: rfkill: WWAN hardware radio set enabled
Dec  3 12:08:47 np0005544501 NetworkManager[857]: <info>  [1764781727.4669] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec  3 12:08:47 np0005544501 NetworkManager[857]: <info>  [1764781727.4669] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec  3 12:08:47 np0005544501 NetworkManager[857]: <info>  [1764781727.4670] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec  3 12:08:47 np0005544501 NetworkManager[857]: <info>  [1764781727.4670] manager: Networking is enabled by state file
Dec  3 12:08:47 np0005544501 NetworkManager[857]: <info>  [1764781727.4671] settings: Loaded settings plugin: keyfile (internal)
Dec  3 12:08:47 np0005544501 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Dec  3 12:08:47 np0005544501 NetworkManager[857]: <info>  [1764781727.4680] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec  3 12:08:47 np0005544501 NetworkManager[857]: <info>  [1764781727.4707] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec  3 12:08:47 np0005544501 NetworkManager[857]: <info>  [1764781727.4718] dhcp: init: Using DHCP client 'internal'
Dec  3 12:08:47 np0005544501 NetworkManager[857]: <info>  [1764781727.4720] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec  3 12:08:47 np0005544501 NetworkManager[857]: <info>  [1764781727.4730] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  3 12:08:47 np0005544501 NetworkManager[857]: <info>  [1764781727.4737] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec  3 12:08:47 np0005544501 NetworkManager[857]: <info>  [1764781727.4743] device (lo): Activation: starting connection 'lo' (223361d4-8bf7-4611-9366-5605a29f25d0)
Dec  3 12:08:47 np0005544501 NetworkManager[857]: <info>  [1764781727.4751] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec  3 12:08:47 np0005544501 NetworkManager[857]: <info>  [1764781727.4753] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  3 12:08:47 np0005544501 NetworkManager[857]: <info>  [1764781727.4779] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec  3 12:08:47 np0005544501 NetworkManager[857]: <info>  [1764781727.4783] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec  3 12:08:47 np0005544501 NetworkManager[857]: <info>  [1764781727.4785] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec  3 12:08:47 np0005544501 NetworkManager[857]: <info>  [1764781727.4786] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec  3 12:08:47 np0005544501 NetworkManager[857]: <info>  [1764781727.4788] device (eth0): carrier: link connected
Dec  3 12:08:47 np0005544501 NetworkManager[857]: <info>  [1764781727.4790] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec  3 12:08:47 np0005544501 NetworkManager[857]: <info>  [1764781727.4795] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec  3 12:08:47 np0005544501 NetworkManager[857]: <info>  [1764781727.4800] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec  3 12:08:47 np0005544501 NetworkManager[857]: <info>  [1764781727.4803] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec  3 12:08:47 np0005544501 NetworkManager[857]: <info>  [1764781727.4804] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  3 12:08:47 np0005544501 NetworkManager[857]: <info>  [1764781727.4805] manager: NetworkManager state is now CONNECTING
Dec  3 12:08:47 np0005544501 NetworkManager[857]: <info>  [1764781727.4806] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  3 12:08:47 np0005544501 NetworkManager[857]: <info>  [1764781727.4811] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  3 12:08:47 np0005544501 NetworkManager[857]: <info>  [1764781727.4813] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  3 12:08:47 np0005544501 NetworkManager[857]: <info>  [1764781727.4849] dhcp4 (eth0): state changed new lease, address=38.102.83.70
Dec  3 12:08:47 np0005544501 NetworkManager[857]: <info>  [1764781727.4855] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec  3 12:08:47 np0005544501 NetworkManager[857]: <info>  [1764781727.4871] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  3 12:08:47 np0005544501 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  3 12:08:47 np0005544501 systemd[1]: Started Network Manager.
Dec  3 12:08:47 np0005544501 systemd[1]: Reached target Network.
Dec  3 12:08:47 np0005544501 systemd[1]: Starting Network Manager Wait Online...
Dec  3 12:08:47 np0005544501 systemd[1]: Starting GSSAPI Proxy Daemon...
Dec  3 12:08:47 np0005544501 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  3 12:08:47 np0005544501 NetworkManager[857]: <info>  [1764781727.5027] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec  3 12:08:47 np0005544501 NetworkManager[857]: <info>  [1764781727.5030] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  3 12:08:47 np0005544501 NetworkManager[857]: <info>  [1764781727.5031] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec  3 12:08:47 np0005544501 NetworkManager[857]: <info>  [1764781727.5038] device (lo): Activation: successful, device activated.
Dec  3 12:08:47 np0005544501 NetworkManager[857]: <info>  [1764781727.5043] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  3 12:08:47 np0005544501 NetworkManager[857]: <info>  [1764781727.5047] manager: NetworkManager state is now CONNECTED_SITE
Dec  3 12:08:47 np0005544501 NetworkManager[857]: <info>  [1764781727.5051] device (eth0): Activation: successful, device activated.
Dec  3 12:08:47 np0005544501 NetworkManager[857]: <info>  [1764781727.5056] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec  3 12:08:47 np0005544501 NetworkManager[857]: <info>  [1764781727.5060] manager: startup complete
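[editor's note] NetworkManager assumed lo as an externally managed connection and fully activated 'System eth0' with a DHCP lease for 38.102.83.70. A minimal way to inspect the resulting state, assuming nmcli is installed alongside NetworkManager:

    # One line per device with state and bound connection
    nmcli device status
    nmcli connection show --active
    # Lease details for eth0 (address, gateway, DNS)
    nmcli device show eth0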
Dec  3 12:08:47 np0005544501 systemd[1]: Started GSSAPI Proxy Daemon.
Dec  3 12:08:47 np0005544501 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec  3 12:08:47 np0005544501 systemd[1]: Reached target NFS client services.
Dec  3 12:08:47 np0005544501 systemd[1]: Reached target Preparation for Remote File Systems.
Dec  3 12:08:47 np0005544501 systemd[1]: Reached target Remote File Systems.
Dec  3 12:08:47 np0005544501 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec  3 12:08:47 np0005544501 systemd[1]: Finished Network Manager Wait Online.
Dec  3 12:08:47 np0005544501 systemd[1]: Starting Cloud-init: Network Stage...
Dec  3 12:08:47 np0005544501 cloud-init[921]: Cloud-init v. 24.4-7.el9 running 'init' at Wed, 03 Dec 2025 17:08:47 +0000. Up 7.80 seconds.
Dec  3 12:08:47 np0005544501 cloud-init[921]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Dec  3 12:08:47 np0005544501 cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec  3 12:08:47 np0005544501 cloud-init[921]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Dec  3 12:08:47 np0005544501 cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec  3 12:08:47 np0005544501 cloud-init[921]: ci-info: |  eth0  | True |         38.102.83.70         | 255.255.255.0 | global | fa:16:3e:d7:78:d3 |
Dec  3 12:08:47 np0005544501 cloud-init[921]: ci-info: |  eth0  | True | fe80::f816:3eff:fed7:78d3/64 |       .       |  link  | fa:16:3e:d7:78:d3 |
Dec  3 12:08:47 np0005544501 cloud-init[921]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Dec  3 12:08:47 np0005544501 cloud-init[921]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Dec  3 12:08:47 np0005544501 cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec  3 12:08:47 np0005544501 cloud-init[921]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Dec  3 12:08:47 np0005544501 cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec  3 12:08:47 np0005544501 cloud-init[921]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Dec  3 12:08:47 np0005544501 cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec  3 12:08:47 np0005544501 cloud-init[921]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Dec  3 12:08:47 np0005544501 cloud-init[921]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Dec  3 12:08:47 np0005544501 cloud-init[921]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Dec  3 12:08:47 np0005544501 cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec  3 12:08:47 np0005544501 cloud-init[921]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Dec  3 12:08:47 np0005544501 cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Dec  3 12:08:47 np0005544501 cloud-init[921]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Dec  3 12:08:47 np0005544501 cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Dec  3 12:08:47 np0005544501 cloud-init[921]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Dec  3 12:08:47 np0005544501 cloud-init[921]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Dec  3 12:08:47 np0005544501 cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
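[editor's note] The ci-info tables above are cloud-init's rendering of the kernel's address and routing state; the same data can be read straight from iproute2 as a cross-check:

    # Brief per-interface addresses (cf. the Net device info table)
    ip -br addr
    # IPv4 and IPv6 routes (cf. the Route IPv4/IPv6 info tables)
    ip route show
    ip -6 route show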
Dec  3 12:08:49 np0005544501 cloud-init[921]: Generating public/private rsa key pair.
Dec  3 12:08:49 np0005544501 cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Dec  3 12:08:49 np0005544501 cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Dec  3 12:08:49 np0005544501 cloud-init[921]: The key fingerprint is:
Dec  3 12:08:49 np0005544501 cloud-init[921]: SHA256:bBd3VWhFqBMnqWcRYQKbISLxgMvkfRBYIHch+joPsLQ root@np0005544501.novalocal
Dec  3 12:08:49 np0005544501 cloud-init[921]: The key's randomart image is:
Dec  3 12:08:49 np0005544501 cloud-init[921]: +---[RSA 3072]----+
Dec  3 12:08:49 np0005544501 cloud-init[921]: |.oX+=.. o.. ++ =*|
Dec  3 12:08:49 np0005544501 cloud-init[921]: |o= B . . + o= =. |
Dec  3 12:08:49 np0005544501 cloud-init[921]: |=.. o   o ...B.  |
Dec  3 12:08:49 np0005544501 cloud-init[921]: |.+ . . .  .o=.   |
Dec  3 12:08:49 np0005544501 cloud-init[921]: |... .   S .o .   |
Dec  3 12:08:49 np0005544501 cloud-init[921]: |oo.    . .       |
Dec  3 12:08:49 np0005544501 cloud-init[921]: |=E               |
Dec  3 12:08:49 np0005544501 cloud-init[921]: | +               |
Dec  3 12:08:49 np0005544501 cloud-init[921]: |  .              |
Dec  3 12:08:49 np0005544501 cloud-init[921]: +----[SHA256]-----+
Dec  3 12:08:49 np0005544501 cloud-init[921]: Generating public/private ecdsa key pair.
Dec  3 12:08:49 np0005544501 cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Dec  3 12:08:49 np0005544501 cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Dec  3 12:08:49 np0005544501 cloud-init[921]: The key fingerprint is:
Dec  3 12:08:49 np0005544501 cloud-init[921]: SHA256:DFgq0l395bbrcLQKrEi8L78mfWX+58/guDzyes4I+hE root@np0005544501.novalocal
Dec  3 12:08:49 np0005544501 cloud-init[921]: The key's randomart image is:
Dec  3 12:08:49 np0005544501 cloud-init[921]: +---[ECDSA 256]---+
Dec  3 12:08:49 np0005544501 cloud-init[921]: |      o.         |
Dec  3 12:08:49 np0005544501 cloud-init[921]: | . . =  .   .    |
Dec  3 12:08:49 np0005544501 cloud-init[921]: |. o + .  . o     |
Dec  3 12:08:49 np0005544501 cloud-init[921]: | . .   o  . o    |
Dec  3 12:08:49 np0005544501 cloud-init[921]: |        E  ...   |
Dec  3 12:08:49 np0005544501 cloud-init[921]: |   .   . .o...   |
Dec  3 12:08:49 np0005544501 cloud-init[921]: |    o.  =+. o..  |
Dec  3 12:08:49 np0005544501 cloud-init[921]: |   .oooo.+oB+o.o |
Dec  3 12:08:49 np0005544501 cloud-init[921]: |    o**+. +OX=o.o|
Dec  3 12:08:49 np0005544501 cloud-init[921]: +----[SHA256]-----+
Dec  3 12:08:49 np0005544501 cloud-init[921]: Generating public/private ed25519 key pair.
Dec  3 12:08:49 np0005544501 cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Dec  3 12:08:49 np0005544501 cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Dec  3 12:08:49 np0005544501 cloud-init[921]: The key fingerprint is:
Dec  3 12:08:49 np0005544501 cloud-init[921]: SHA256:tRHSGwA8zegeLR3sh8e9QJGHMnqRuqVx4c9AvCrKdV4 root@np0005544501.novalocal
Dec  3 12:08:49 np0005544501 cloud-init[921]: The key's randomart image is:
Dec  3 12:08:49 np0005544501 cloud-init[921]: +--[ED25519 256]--+
Dec  3 12:08:49 np0005544501 cloud-init[921]: |     ..Bo+o+     |
Dec  3 12:08:49 np0005544501 cloud-init[921]: |      + &.*..    |
Dec  3 12:08:49 np0005544501 cloud-init[921]: |     . X @o=     |
Dec  3 12:08:49 np0005544501 cloud-init[921]: |      B %.*o.    |
Dec  3 12:08:49 np0005544501 cloud-init[921]: |     . @S*.. .   |
Dec  3 12:08:49 np0005544501 cloud-init[921]: |    o * E o .    |
Dec  3 12:08:49 np0005544501 cloud-init[921]: | . o + .         |
Dec  3 12:08:49 np0005544501 cloud-init[921]: |  o   .          |
Dec  3 12:08:49 np0005544501 cloud-init[921]: |                 |
Dec  3 12:08:49 np0005544501 cloud-init[921]: +----[SHA256]-----+
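[editor's note] cloud-init drove ssh-keygen to generate the RSA, ECDSA, and ED25519 host keys; the SHA256 fingerprints it printed can be re-derived from the public key files at the /etc/ssh paths shown above:

    # Recompute the fingerprints for every generated host key
    for k in /etc/ssh/ssh_host_*_key.pub; do ssh-keygen -lf "$k"; done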
Dec  3 12:08:49 np0005544501 systemd[1]: Finished Cloud-init: Network Stage.
Dec  3 12:08:49 np0005544501 systemd[1]: Reached target Cloud-config availability.
Dec  3 12:08:49 np0005544501 systemd[1]: Reached target Network is Online.
Dec  3 12:08:49 np0005544501 systemd[1]: Starting Cloud-init: Config Stage...
Dec  3 12:08:49 np0005544501 systemd[1]: Starting Crash recovery kernel arming...
Dec  3 12:08:49 np0005544501 systemd[1]: Starting Notify NFS peers of a restart...
Dec  3 12:08:49 np0005544501 systemd[1]: Starting System Logging Service...
Dec  3 12:08:49 np0005544501 sm-notify[1003]: Version 2.5.4 starting
Dec  3 12:08:49 np0005544501 systemd[1]: Starting OpenSSH server daemon...
Dec  3 12:08:49 np0005544501 systemd[1]: Starting Permit User Sessions...
Dec  3 12:08:49 np0005544501 systemd[1]: Started Notify NFS peers of a restart.
Dec  3 12:08:49 np0005544501 systemd[1]: Started OpenSSH server daemon.
Dec  3 12:08:49 np0005544501 systemd[1]: Finished Permit User Sessions.
Dec  3 12:08:49 np0005544501 systemd[1]: Started Command Scheduler.
Dec  3 12:08:49 np0005544501 systemd[1]: Started Getty on tty1.
Dec  3 12:08:49 np0005544501 systemd[1]: Started Serial Getty on ttyS0.
Dec  3 12:08:49 np0005544501 systemd[1]: Reached target Login Prompts.
Dec  3 12:08:49 np0005544501 rsyslogd[1004]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1004" x-info="https://www.rsyslog.com"] start
Dec  3 12:08:49 np0005544501 rsyslogd[1004]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Dec  3 12:08:49 np0005544501 systemd[1]: Started System Logging Service.
Dec  3 12:08:49 np0005544501 systemd[1]: Reached target Multi-User System.
Dec  3 12:08:49 np0005544501 systemd[1]: Starting Record Runlevel Change in UTMP...
Dec  3 12:08:49 np0005544501 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec  3 12:08:49 np0005544501 systemd[1]: Finished Record Runlevel Change in UTMP.
Dec  3 12:08:49 np0005544501 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  3 12:08:49 np0005544501 kdumpctl[1013]: kdump: No kdump initial ramdisk found.
Dec  3 12:08:49 np0005544501 kdumpctl[1013]: kdump: Rebuilding /boot/initramfs-5.14.0-645.el9.x86_64kdump.img
Dec  3 12:08:49 np0005544501 cloud-init[1136]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Wed, 03 Dec 2025 17:08:49 +0000. Up 9.57 seconds.
Dec  3 12:08:49 np0005544501 systemd[1]: Finished Cloud-init: Config Stage.
Dec  3 12:08:49 np0005544501 systemd[1]: Starting Cloud-init: Final Stage...
Dec  3 12:08:49 np0005544501 dracut[1264]: dracut-057-102.git20250818.el9
Dec  3 12:08:50 np0005544501 cloud-init[1282]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Wed, 03 Dec 2025 17:08:49 +0000. Up 9.97 seconds.
Dec  3 12:08:50 np0005544501 cloud-init[1286]: #############################################################
Dec  3 12:08:50 np0005544501 cloud-init[1289]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Dec  3 12:08:50 np0005544501 cloud-init[1297]: 256 SHA256:DFgq0l395bbrcLQKrEi8L78mfWX+58/guDzyes4I+hE root@np0005544501.novalocal (ECDSA)
Dec  3 12:08:50 np0005544501 dracut[1266]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-645.el9.x86_64kdump.img 5.14.0-645.el9.x86_64
Dec  3 12:08:50 np0005544501 cloud-init[1305]: 256 SHA256:tRHSGwA8zegeLR3sh8e9QJGHMnqRuqVx4c9AvCrKdV4 root@np0005544501.novalocal (ED25519)
Dec  3 12:08:50 np0005544501 cloud-init[1311]: 3072 SHA256:bBd3VWhFqBMnqWcRYQKbISLxgMvkfRBYIHch+joPsLQ root@np0005544501.novalocal (RSA)
Dec  3 12:08:50 np0005544501 cloud-init[1313]: -----END SSH HOST KEY FINGERPRINTS-----
Dec  3 12:08:50 np0005544501 cloud-init[1315]: #############################################################
Dec  3 12:08:50 np0005544501 cloud-init[1282]: Cloud-init v. 24.4-7.el9 finished at Wed, 03 Dec 2025 17:08:50 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 10.15 seconds
Dec  3 12:08:50 np0005544501 systemd[1]: Finished Cloud-init: Final Stage.
Dec  3 12:08:50 np0005544501 systemd[1]: Reached target Cloud-init target.
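[editor's note] All four cloud-init stages (init-local, init, modules:config, modules:final) have now run against the config-drive datasource. Their outcome and timing can be queried from cloud-init itself, a sketch assuming the stock 24.4 CLI:

    # Overall result and datasource (DataSourceConfigDrive above)
    cloud-init status --long
    # Per-stage timing, matching the "Up N seconds" markers in the log
    cloud-init analyze show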
Dec  3 12:08:50 np0005544501 dracut[1266]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Dec  3 12:08:50 np0005544501 dracut[1266]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Dec  3 12:08:50 np0005544501 dracut[1266]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Dec  3 12:08:50 np0005544501 dracut[1266]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec  3 12:08:50 np0005544501 dracut[1266]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec  3 12:08:50 np0005544501 dracut[1266]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec  3 12:08:50 np0005544501 dracut[1266]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec  3 12:08:50 np0005544501 dracut[1266]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec  3 12:08:50 np0005544501 dracut[1266]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec  3 12:08:50 np0005544501 dracut[1266]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec  3 12:08:50 np0005544501 dracut[1266]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec  3 12:08:50 np0005544501 dracut[1266]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec  3 12:08:50 np0005544501 dracut[1266]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec  3 12:08:50 np0005544501 dracut[1266]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec  3 12:08:50 np0005544501 dracut[1266]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec  3 12:08:50 np0005544501 dracut[1266]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec  3 12:08:50 np0005544501 dracut[1266]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec  3 12:08:50 np0005544501 dracut[1266]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec  3 12:08:50 np0005544501 dracut[1266]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec  3 12:08:50 np0005544501 dracut[1266]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec  3 12:08:50 np0005544501 dracut[1266]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec  3 12:08:50 np0005544501 dracut[1266]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec  3 12:08:50 np0005544501 dracut[1266]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec  3 12:08:50 np0005544501 dracut[1266]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec  3 12:08:50 np0005544501 dracut[1266]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec  3 12:08:50 np0005544501 dracut[1266]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec  3 12:08:50 np0005544501 dracut[1266]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec  3 12:08:50 np0005544501 dracut[1266]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec  3 12:08:50 np0005544501 dracut[1266]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Dec  3 12:08:51 np0005544501 dracut[1266]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec  3 12:08:51 np0005544501 dracut[1266]: memstrack is not available
Dec  3 12:08:51 np0005544501 dracut[1266]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec  3 12:08:51 np0005544501 dracut[1266]: *** Including module: systemd ***
Dec  3 12:08:51 np0005544501 dracut[1266]: *** Including module: fips ***
Dec  3 12:08:52 np0005544501 dracut[1266]: *** Including module: systemd-initrd ***
Dec  3 12:08:52 np0005544501 dracut[1266]: *** Including module: i18n ***
Dec  3 12:08:52 np0005544501 dracut[1266]: *** Including module: drm ***
Dec  3 12:08:52 np0005544501 chronyd[792]: Selected source 149.56.19.163 (2.centos.pool.ntp.org)
Dec  3 12:08:52 np0005544501 chronyd[792]: System clock TAI offset set to 37 seconds
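[editor's note] chronyd has selected 149.56.19.163 from the centos NTP pool and applied the 37-second TAI offset. The current synchronization state can be inspected through chronyc:

    # Reference ID, stratum, current offset, and skew
    chronyc tracking
    # All configured sources with reachability and selection markers
    chronyc sources -v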
Dec  3 12:08:52 np0005544501 dracut[1266]: *** Including module: prefixdevname ***
Dec  3 12:08:52 np0005544501 dracut[1266]: *** Including module: kernel-modules ***
Dec  3 12:08:52 np0005544501 kernel: block vda: the capability attribute has been deprecated.
Dec  3 12:08:53 np0005544501 dracut[1266]: *** Including module: kernel-modules-extra ***
Dec  3 12:08:53 np0005544501 dracut[1266]: *** Including module: qemu ***
Dec  3 12:08:53 np0005544501 dracut[1266]: *** Including module: fstab-sys ***
Dec  3 12:08:53 np0005544501 dracut[1266]: *** Including module: rootfs-block ***
Dec  3 12:08:53 np0005544501 dracut[1266]: *** Including module: terminfo ***
Dec  3 12:08:53 np0005544501 dracut[1266]: *** Including module: udev-rules ***
Dec  3 12:08:54 np0005544501 dracut[1266]: Skipping udev rule: 91-permissions.rules
Dec  3 12:08:54 np0005544501 dracut[1266]: Skipping udev rule: 80-drivers-modprobe.rules
Dec  3 12:08:54 np0005544501 dracut[1266]: *** Including module: virtiofs ***
Dec  3 12:08:54 np0005544501 dracut[1266]: *** Including module: dracut-systemd ***
Dec  3 12:08:54 np0005544501 dracut[1266]: *** Including module: usrmount ***
Dec  3 12:08:54 np0005544501 dracut[1266]: *** Including module: base ***
Dec  3 12:08:54 np0005544501 dracut[1266]: *** Including module: fs-lib ***
Dec  3 12:08:54 np0005544501 dracut[1266]: *** Including module: kdumpbase ***
Dec  3 12:08:55 np0005544501 dracut[1266]: *** Including module: microcode_ctl-fw_dir_override ***
Dec  3 12:08:55 np0005544501 dracut[1266]:  microcode_ctl module: mangling fw_dir
Dec  3 12:08:55 np0005544501 dracut[1266]:    microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Dec  3 12:08:55 np0005544501 dracut[1266]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Dec  3 12:08:55 np0005544501 dracut[1266]:    microcode_ctl: configuration "intel" is ignored
Dec  3 12:08:55 np0005544501 dracut[1266]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Dec  3 12:08:55 np0005544501 dracut[1266]:    microcode_ctl: configuration "intel-06-2d-07" is ignored
Dec  3 12:08:55 np0005544501 dracut[1266]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Dec  3 12:08:55 np0005544501 dracut[1266]:    microcode_ctl: configuration "intel-06-4e-03" is ignored
Dec  3 12:08:55 np0005544501 dracut[1266]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Dec  3 12:08:55 np0005544501 dracut[1266]:    microcode_ctl: configuration "intel-06-4f-01" is ignored
Dec  3 12:08:55 np0005544501 dracut[1266]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Dec  3 12:08:55 np0005544501 dracut[1266]:    microcode_ctl: configuration "intel-06-55-04" is ignored
Dec  3 12:08:55 np0005544501 dracut[1266]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Dec  3 12:08:55 np0005544501 dracut[1266]:    microcode_ctl: configuration "intel-06-5e-03" is ignored
Dec  3 12:08:55 np0005544501 dracut[1266]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Dec  3 12:08:55 np0005544501 dracut[1266]:    microcode_ctl: configuration "intel-06-8c-01" is ignored
Dec  3 12:08:55 np0005544501 dracut[1266]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Dec  3 12:08:55 np0005544501 dracut[1266]:    microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Dec  3 12:08:55 np0005544501 dracut[1266]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Dec  3 12:08:55 np0005544501 dracut[1266]:    microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Dec  3 12:08:55 np0005544501 dracut[1266]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Dec  3 12:08:55 np0005544501 dracut[1266]:    microcode_ctl: configuration "intel-06-8f-08" is ignored
Dec  3 12:08:55 np0005544501 dracut[1266]:    microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Dec  3 12:08:55 np0005544501 dracut[1266]: *** Including module: openssl ***
Dec  3 12:08:55 np0005544501 dracut[1266]: *** Including module: shutdown ***
Dec  3 12:08:55 np0005544501 dracut[1266]: *** Including module: squash ***
Dec  3 12:08:55 np0005544501 dracut[1266]: *** Including modules done ***
Dec  3 12:08:55 np0005544501 dracut[1266]: *** Installing kernel module dependencies ***
Dec  3 12:08:56 np0005544501 dracut[1266]: *** Installing kernel module dependencies done ***
Dec  3 12:08:56 np0005544501 dracut[1266]: *** Resolving executable dependencies ***
Dec  3 12:08:57 np0005544501 irqbalance[782]: Cannot change IRQ 25 affinity: Operation not permitted
Dec  3 12:08:57 np0005544501 irqbalance[782]: IRQ 25 affinity is now unmanaged
Dec  3 12:08:57 np0005544501 irqbalance[782]: Cannot change IRQ 31 affinity: Operation not permitted
Dec  3 12:08:57 np0005544501 irqbalance[782]: IRQ 31 affinity is now unmanaged
Dec  3 12:08:57 np0005544501 irqbalance[782]: Cannot change IRQ 28 affinity: Operation not permitted
Dec  3 12:08:57 np0005544501 irqbalance[782]: IRQ 28 affinity is now unmanaged
Dec  3 12:08:57 np0005544501 irqbalance[782]: Cannot change IRQ 32 affinity: Operation not permitted
Dec  3 12:08:57 np0005544501 irqbalance[782]: IRQ 32 affinity is now unmanaged
Dec  3 12:08:57 np0005544501 irqbalance[782]: Cannot change IRQ 30 affinity: Operation not permitted
Dec  3 12:08:57 np0005544501 irqbalance[782]: IRQ 30 affinity is now unmanaged
Dec  3 12:08:57 np0005544501 irqbalance[782]: Cannot change IRQ 29 affinity: Operation not permitted
Dec  3 12:08:57 np0005544501 irqbalance[782]: IRQ 29 affinity is now unmanaged
Dec  3 12:08:57 np0005544501 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  3 12:08:58 np0005544501 dracut[1266]: *** Resolving executable dependencies done ***
Dec  3 12:08:58 np0005544501 dracut[1266]: *** Generating early-microcode cpio image ***
Dec  3 12:08:58 np0005544501 dracut[1266]: *** Store current command line parameters ***
Dec  3 12:08:58 np0005544501 dracut[1266]: Stored kernel commandline:
Dec  3 12:08:58 np0005544501 dracut[1266]: No dracut internal kernel commandline stored in the initramfs
Dec  3 12:08:58 np0005544501 dracut[1266]: *** Install squash loader ***
Dec  3 12:08:59 np0005544501 dracut[1266]: *** Squashing the files inside the initramfs ***
Dec  3 12:09:00 np0005544501 dracut[1266]: *** Squashing the files inside the initramfs done ***
Dec  3 12:09:00 np0005544501 dracut[1266]: *** Creating image file '/boot/initramfs-5.14.0-645.el9.x86_64kdump.img' ***
Dec  3 12:09:00 np0005544501 dracut[1266]: *** Hardlinking files ***
Dec  3 12:09:00 np0005544501 dracut[1266]: *** Hardlinking files done ***
Dec  3 12:09:00 np0005544501 dracut[1266]: *** Creating initramfs image file '/boot/initramfs-5.14.0-645.el9.x86_64kdump.img' done ***
Dec  3 12:09:01 np0005544501 kdumpctl[1013]: kdump: kexec: loaded kdump kernel
Dec  3 12:09:01 np0005544501 kdumpctl[1013]: kdump: Starting kdump: [OK]
Dec  3 12:09:01 np0005544501 systemd[1]: Finished Crash recovery kernel arming.
Dec  3 12:09:01 np0005544501 systemd[1]: Startup finished in 1.936s (kernel) + 2.842s (initrd) + 16.503s (userspace) = 21.282s.
Dec  3 12:09:07 np0005544501 systemd[1]: Created slice User Slice of UID 1000.
Dec  3 12:09:07 np0005544501 systemd[1]: Starting User Runtime Directory /run/user/1000...
Dec  3 12:09:07 np0005544501 systemd-logind[784]: New session 1 of user zuul.
Dec  3 12:09:07 np0005544501 systemd[1]: Finished User Runtime Directory /run/user/1000.
Dec  3 12:09:07 np0005544501 systemd[1]: Starting User Manager for UID 1000...
Dec  3 12:09:07 np0005544501 systemd[4297]: Queued start job for default target Main User Target.
Dec  3 12:09:07 np0005544501 systemd[4297]: Created slice User Application Slice.
Dec  3 12:09:07 np0005544501 systemd[4297]: Started Mark boot as successful after the user session has run 2 minutes.
Dec  3 12:09:07 np0005544501 systemd[4297]: Started Daily Cleanup of User's Temporary Directories.
Dec  3 12:09:07 np0005544501 systemd[4297]: Reached target Paths.
Dec  3 12:09:07 np0005544501 systemd[4297]: Reached target Timers.
Dec  3 12:09:07 np0005544501 systemd[4297]: Starting D-Bus User Message Bus Socket...
Dec  3 12:09:07 np0005544501 systemd[4297]: Starting Create User's Volatile Files and Directories...
Dec  3 12:09:07 np0005544501 systemd[4297]: Listening on D-Bus User Message Bus Socket.
Dec  3 12:09:07 np0005544501 systemd[4297]: Reached target Sockets.
Dec  3 12:09:07 np0005544501 systemd[4297]: Finished Create User's Volatile Files and Directories.
Dec  3 12:09:07 np0005544501 systemd[4297]: Reached target Basic System.
Dec  3 12:09:07 np0005544501 systemd[4297]: Reached target Main User Target.
Dec  3 12:09:07 np0005544501 systemd[4297]: Startup finished in 113ms.
Dec  3 12:09:07 np0005544501 systemd[1]: Started User Manager for UID 1000.
Dec  3 12:09:07 np0005544501 systemd[1]: Started Session 1 of User zuul.
Dec  3 12:09:08 np0005544501 python3[4379]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 12:09:10 np0005544501 python3[4407]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 12:09:16 np0005544501 python3[4465]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 12:09:17 np0005544501 python3[4505]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Dec  3 12:09:17 np0005544501 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec  3 12:09:18 np0005544501 python3[4533]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDHn1P2vjw5Zq4tCOfIqX3j/YvZhxrmR91tANUGDriBjn043KHZfNRWnB3t8RrhUUtLwQNGbgpNLDV4DZoNjlI70Mz6pdcLVlxCvF5mZbWx956+nvU6tyWV0LtBAQQoZiJB21Rd/TEc7OPH+9noJAAiKu5FabJkfZKg3iC49H9jDd6Pat9lWA8WKpmNBl6RJQlc742pE0IPSqqn3ZNVI7S/AMCv5AxAjDNFPqO6wQvIHEwUZ5YMfraUQsUALnA71+uhu1WsPlfUZj0kmKIYSm86w0o6SqoC240r+Z6KZ4t9e7258Ft6WjBDlQUQhTsXtUGTWz/w4mMXChy2NnMi3O8BrI9RurogIborzwsZ53l3VXL6QxUKeRCvPzi2xDQi+E9e06IsN3ytGFzTFLn2onTPZbGm0CnZml01ZoCElNqxt++x8GwOwApiGGDH/2MQkgA3bgIVnTEP82qa54dh+OVxyfRyIIBKdPAnmZclqFpNVZv7TfRcni2IiX0NMFMKV0U= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  3 12:09:19 np0005544501 python3[4557]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:09:19 np0005544501 python3[4656]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  3 12:09:20 np0005544501 python3[4727]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764781759.4523222-207-18225957766434/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=10387547917e43368adc1e8d1a55426e_id_rsa follow=False checksum=d204ebbff93ecece55dad7b5885a1edda0e977bc backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:09:20 np0005544501 python3[4850]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  3 12:09:20 np0005544501 python3[4921]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764781760.36419-240-134195208480780/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=10387547917e43368adc1e8d1a55426e_id_rsa.pub follow=False checksum=d6193569a32c3da669fcf0901a4b9d1e8b880a91 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:09:22 np0005544501 python3[4969]: ansible-ping Invoked with data=pong
Dec  3 12:09:23 np0005544501 python3[4993]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 12:09:25 np0005544501 python3[5053]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Dec  3 12:09:26 np0005544501 python3[5085]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:09:26 np0005544501 python3[5109]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:09:26 np0005544501 python3[5133]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:09:27 np0005544501 python3[5157]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:09:27 np0005544501 python3[5181]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:09:27 np0005544501 python3[5205]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:09:29 np0005544501 python3[5231]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:09:30 np0005544501 python3[5309]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  3 12:09:30 np0005544501 python3[5382]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764781769.6836338-21-106843526566812/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:09:31 np0005544501 python3[5430]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  3 12:09:31 np0005544501 python3[5454]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  3 12:09:31 np0005544501 python3[5478]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  3 12:09:32 np0005544501 python3[5502]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  3 12:09:32 np0005544501 python3[5526]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  3 12:09:32 np0005544501 python3[5550]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  3 12:09:32 np0005544501 python3[5574]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  3 12:09:33 np0005544501 python3[5598]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  3 12:09:33 np0005544501 python3[5622]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  3 12:09:33 np0005544501 python3[5646]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  3 12:09:34 np0005544501 python3[5670]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  3 12:09:34 np0005544501 python3[5694]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  3 12:09:34 np0005544501 python3[5718]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  3 12:09:34 np0005544501 python3[5742]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  3 12:09:35 np0005544501 python3[5766]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  3 12:09:35 np0005544501 python3[5790]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  3 12:09:35 np0005544501 python3[5814]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  3 12:09:36 np0005544501 python3[5838]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  3 12:09:36 np0005544501 python3[5862]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  3 12:09:36 np0005544501 python3[5886]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  3 12:09:36 np0005544501 python3[5910]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  3 12:09:37 np0005544501 python3[5934]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  3 12:09:37 np0005544501 python3[5958]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  3 12:09:37 np0005544501 python3[5982]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  3 12:09:38 np0005544501 python3[6006]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  3 12:09:38 np0005544501 python3[6030]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  3 12:09:41 np0005544501 python3[6056]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec  3 12:09:41 np0005544501 systemd[1]: Starting Time & Date Service...
Dec  3 12:09:41 np0005544501 systemd[1]: Started Time & Date Service.
Dec  3 12:09:41 np0005544501 systemd-timedated[6060]: Changed time zone to 'UTC' (UTC).
Dec  3 12:09:41 np0005544501 python3[6090]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:09:42 np0005544501 python3[6166]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  3 12:09:42 np0005544501 python3[6237]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1764781781.81539-153-162282633405080/source _original_basename=tmpway5_nfa follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:09:42 np0005544501 python3[6337]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  3 12:09:43 np0005544501 python3[6408]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764781782.7122247-183-123319289121910/source _original_basename=tmppzrwtuvg follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:09:44 np0005544501 python3[6510]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  3 12:09:44 np0005544501 python3[6583]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764781783.7904012-231-130491859384076/source _original_basename=tmpb99c3juz follow=False checksum=18e69b4e7a766afddcd5db28cd6f47889284b7a9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:09:45 np0005544501 python3[6631]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 12:09:45 np0005544501 python3[6657]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 12:09:45 np0005544501 python3[6737]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  3 12:09:46 np0005544501 python3[6810]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1764781785.5358543-273-97996996946439/source _original_basename=tmpn4o_c_19 follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:09:46 np0005544501 python3[6861]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ec2-ffbe-ef8c-8f3f-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 12:09:47 np0005544501 python3[6889]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-ef8c-8f3f-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Dec  3 12:09:48 np0005544501 python3[6918]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:10:07 np0005544501 python3[6946]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:10:11 np0005544501 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec  3 12:10:43 np0005544501 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec  3 12:10:43 np0005544501 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Dec  3 12:10:43 np0005544501 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Dec  3 12:10:43 np0005544501 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Dec  3 12:10:43 np0005544501 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Dec  3 12:10:43 np0005544501 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Dec  3 12:10:43 np0005544501 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Dec  3 12:10:43 np0005544501 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Dec  3 12:10:43 np0005544501 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Dec  3 12:10:43 np0005544501 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Dec  3 12:10:43 np0005544501 NetworkManager[857]: <info>  [1764781843.3190] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec  3 12:10:43 np0005544501 systemd-udevd[6949]: Network interface NamePolicy= disabled on kernel command line.
Dec  3 12:10:43 np0005544501 NetworkManager[857]: <info>  [1764781843.3416] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  3 12:10:43 np0005544501 NetworkManager[857]: <info>  [1764781843.3441] settings: (eth1): created default wired connection 'Wired connection 1'
Dec  3 12:10:43 np0005544501 NetworkManager[857]: <info>  [1764781843.3444] device (eth1): carrier: link connected
Dec  3 12:10:43 np0005544501 NetworkManager[857]: <info>  [1764781843.3446] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec  3 12:10:43 np0005544501 NetworkManager[857]: <info>  [1764781843.3451] policy: auto-activating connection 'Wired connection 1' (ec24716c-8f74-3086-9a41-27ccc9b9847a)
Dec  3 12:10:43 np0005544501 NetworkManager[857]: <info>  [1764781843.3455] device (eth1): Activation: starting connection 'Wired connection 1' (ec24716c-8f74-3086-9a41-27ccc9b9847a)
Dec  3 12:10:43 np0005544501 NetworkManager[857]: <info>  [1764781843.3455] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  3 12:10:43 np0005544501 NetworkManager[857]: <info>  [1764781843.3457] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  3 12:10:43 np0005544501 NetworkManager[857]: <info>  [1764781843.3460] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  3 12:10:43 np0005544501 NetworkManager[857]: <info>  [1764781843.3463] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec  3 12:10:44 np0005544501 python3[6976]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ec2-ffbe-fffe-2a19-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 12:10:54 np0005544501 python3[7058]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  3 12:10:54 np0005544501 python3[7131]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764781853.7598941-102-117084661158650/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=4835a09d07ba6e32c9afe5d80a98bcf345587c5f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:10:55 np0005544501 python3[7181]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  3 12:10:55 np0005544501 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec  3 12:10:55 np0005544501 systemd[1]: Stopped Network Manager Wait Online.
Dec  3 12:10:55 np0005544501 systemd[1]: Stopping Network Manager Wait Online...
Dec  3 12:10:55 np0005544501 systemd[1]: Stopping Network Manager...
Dec  3 12:10:55 np0005544501 NetworkManager[857]: <info>  [1764781855.2133] caught SIGTERM, shutting down normally.
Dec  3 12:10:55 np0005544501 NetworkManager[857]: <info>  [1764781855.2140] dhcp4 (eth0): canceled DHCP transaction
Dec  3 12:10:55 np0005544501 NetworkManager[857]: <info>  [1764781855.2140] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  3 12:10:55 np0005544501 NetworkManager[857]: <info>  [1764781855.2140] dhcp4 (eth0): state changed no lease
Dec  3 12:10:55 np0005544501 NetworkManager[857]: <info>  [1764781855.2142] manager: NetworkManager state is now CONNECTING
Dec  3 12:10:55 np0005544501 NetworkManager[857]: <info>  [1764781855.2255] dhcp4 (eth1): canceled DHCP transaction
Dec  3 12:10:55 np0005544501 NetworkManager[857]: <info>  [1764781855.2255] dhcp4 (eth1): state changed no lease
Dec  3 12:10:55 np0005544501 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  3 12:10:55 np0005544501 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  3 12:10:55 np0005544501 NetworkManager[857]: <info>  [1764781855.2492] exiting (success)
Dec  3 12:10:55 np0005544501 systemd[1]: NetworkManager.service: Deactivated successfully.
Dec  3 12:10:55 np0005544501 systemd[1]: Stopped Network Manager.
Dec  3 12:10:55 np0005544501 systemd[1]: Starting Network Manager...
Dec  3 12:10:55 np0005544501 NetworkManager[7198]: <info>  [1764781855.3272] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:82de8b62-473e-4ed3-b378-399a4a16feb4)
Dec  3 12:10:55 np0005544501 NetworkManager[7198]: <info>  [1764781855.3273] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec  3 12:10:55 np0005544501 NetworkManager[7198]: <info>  [1764781855.3339] manager[0x561aff776070]: monitoring kernel firmware directory '/lib/firmware'.
Dec  3 12:10:55 np0005544501 systemd[1]: Starting Hostname Service...
Dec  3 12:10:55 np0005544501 systemd[1]: Started Hostname Service.
Dec  3 12:10:55 np0005544501 NetworkManager[7198]: <info>  [1764781855.4193] hostname: hostname: using hostnamed
Dec  3 12:10:55 np0005544501 NetworkManager[7198]: <info>  [1764781855.4193] hostname: static hostname changed from (none) to "np0005544501.novalocal"
Dec  3 12:10:55 np0005544501 NetworkManager[7198]: <info>  [1764781855.4200] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec  3 12:10:55 np0005544501 NetworkManager[7198]: <info>  [1764781855.4205] manager[0x561aff776070]: rfkill: Wi-Fi hardware radio set enabled
Dec  3 12:10:55 np0005544501 NetworkManager[7198]: <info>  [1764781855.4206] manager[0x561aff776070]: rfkill: WWAN hardware radio set enabled
Dec  3 12:10:55 np0005544501 NetworkManager[7198]: <info>  [1764781855.4229] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec  3 12:10:55 np0005544501 NetworkManager[7198]: <info>  [1764781855.4230] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec  3 12:10:55 np0005544501 NetworkManager[7198]: <info>  [1764781855.4230] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec  3 12:10:55 np0005544501 NetworkManager[7198]: <info>  [1764781855.4231] manager: Networking is enabled by state file
Dec  3 12:10:55 np0005544501 NetworkManager[7198]: <info>  [1764781855.4233] settings: Loaded settings plugin: keyfile (internal)
Dec  3 12:10:55 np0005544501 NetworkManager[7198]: <info>  [1764781855.4237] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec  3 12:10:55 np0005544501 NetworkManager[7198]: <info>  [1764781855.4291] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec  3 12:10:55 np0005544501 NetworkManager[7198]: <info>  [1764781855.4309] dhcp: init: Using DHCP client 'internal'
Dec  3 12:10:55 np0005544501 NetworkManager[7198]: <info>  [1764781855.4315] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec  3 12:10:55 np0005544501 NetworkManager[7198]: <info>  [1764781855.4323] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  3 12:10:55 np0005544501 NetworkManager[7198]: <info>  [1764781855.4333] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec  3 12:10:55 np0005544501 NetworkManager[7198]: <info>  [1764781855.4349] device (lo): Activation: starting connection 'lo' (223361d4-8bf7-4611-9366-5605a29f25d0)
Dec  3 12:10:55 np0005544501 NetworkManager[7198]: <info>  [1764781855.4362] device (eth0): carrier: link connected
Dec  3 12:10:55 np0005544501 NetworkManager[7198]: <info>  [1764781855.4371] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec  3 12:10:55 np0005544501 NetworkManager[7198]: <info>  [1764781855.4381] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec  3 12:10:55 np0005544501 NetworkManager[7198]: <info>  [1764781855.4382] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec  3 12:10:55 np0005544501 NetworkManager[7198]: <info>  [1764781855.4398] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec  3 12:10:55 np0005544501 NetworkManager[7198]: <info>  [1764781855.4410] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec  3 12:10:55 np0005544501 NetworkManager[7198]: <info>  [1764781855.4421] device (eth1): carrier: link connected
Dec  3 12:10:55 np0005544501 NetworkManager[7198]: <info>  [1764781855.4429] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec  3 12:10:55 np0005544501 NetworkManager[7198]: <info>  [1764781855.4439] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (ec24716c-8f74-3086-9a41-27ccc9b9847a) (indicated)
Dec  3 12:10:55 np0005544501 NetworkManager[7198]: <info>  [1764781855.4440] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec  3 12:10:55 np0005544501 NetworkManager[7198]: <info>  [1764781855.4450] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec  3 12:10:55 np0005544501 NetworkManager[7198]: <info>  [1764781855.4462] device (eth1): Activation: starting connection 'Wired connection 1' (ec24716c-8f74-3086-9a41-27ccc9b9847a)
Dec  3 12:10:55 np0005544501 NetworkManager[7198]: <info>  [1764781855.4472] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec  3 12:10:55 np0005544501 systemd[1]: Started Network Manager.
Dec  3 12:10:55 np0005544501 NetworkManager[7198]: <info>  [1764781855.4481] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec  3 12:10:55 np0005544501 NetworkManager[7198]: <info>  [1764781855.4485] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec  3 12:10:55 np0005544501 NetworkManager[7198]: <info>  [1764781855.4488] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec  3 12:10:55 np0005544501 NetworkManager[7198]: <info>  [1764781855.4491] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec  3 12:10:55 np0005544501 NetworkManager[7198]: <info>  [1764781855.4506] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec  3 12:10:55 np0005544501 NetworkManager[7198]: <info>  [1764781855.4509] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec  3 12:10:55 np0005544501 NetworkManager[7198]: <info>  [1764781855.4511] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec  3 12:10:55 np0005544501 NetworkManager[7198]: <info>  [1764781855.4515] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec  3 12:10:55 np0005544501 NetworkManager[7198]: <info>  [1764781855.4521] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec  3 12:10:55 np0005544501 NetworkManager[7198]: <info>  [1764781855.4523] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  3 12:10:55 np0005544501 NetworkManager[7198]: <info>  [1764781855.4530] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec  3 12:10:55 np0005544501 NetworkManager[7198]: <info>  [1764781855.4532] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec  3 12:10:55 np0005544501 NetworkManager[7198]: <info>  [1764781855.4551] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec  3 12:10:55 np0005544501 NetworkManager[7198]: <info>  [1764781855.4552] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec  3 12:10:55 np0005544501 NetworkManager[7198]: <info>  [1764781855.4556] device (lo): Activation: successful, device activated.
Dec  3 12:10:55 np0005544501 NetworkManager[7198]: <info>  [1764781855.4563] dhcp4 (eth0): state changed new lease, address=38.102.83.70
Dec  3 12:10:55 np0005544501 NetworkManager[7198]: <info>  [1764781855.4580] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec  3 12:10:55 np0005544501 systemd[1]: Starting Network Manager Wait Online...
Dec  3 12:10:55 np0005544501 NetworkManager[7198]: <info>  [1764781855.4694] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec  3 12:10:55 np0005544501 NetworkManager[7198]: <info>  [1764781855.4721] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec  3 12:10:55 np0005544501 NetworkManager[7198]: <info>  [1764781855.4723] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec  3 12:10:55 np0005544501 NetworkManager[7198]: <info>  [1764781855.4727] manager: NetworkManager state is now CONNECTED_SITE
Dec  3 12:10:55 np0005544501 NetworkManager[7198]: <info>  [1764781855.4733] device (eth0): Activation: successful, device activated.
Dec  3 12:10:55 np0005544501 NetworkManager[7198]: <info>  [1764781855.4742] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec  3 12:10:55 np0005544501 python3[7265]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ec2-ffbe-fffe-2a19-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 12:11:05 np0005544501 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  3 12:11:19 np0005544501 systemd[4297]: Starting Mark boot as successful...
Dec  3 12:11:20 np0005544501 systemd[4297]: Finished Mark boot as successful.
Dec  3 12:11:25 np0005544501 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec  3 12:11:41 np0005544501 NetworkManager[7198]: <info>  [1764781901.0062] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec  3 12:11:41 np0005544501 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  3 12:11:41 np0005544501 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  3 12:11:41 np0005544501 NetworkManager[7198]: <info>  [1764781901.0417] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec  3 12:11:41 np0005544501 NetworkManager[7198]: <info>  [1764781901.0419] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec  3 12:11:41 np0005544501 NetworkManager[7198]: <info>  [1764781901.0429] device (eth1): Activation: successful, device activated.
Dec  3 12:11:41 np0005544501 NetworkManager[7198]: <info>  [1764781901.0436] manager: startup complete
Dec  3 12:11:41 np0005544501 NetworkManager[7198]: <info>  [1764781901.0439] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Dec  3 12:11:41 np0005544501 NetworkManager[7198]: <warn>  [1764781901.0450] device (eth1): Activation: failed for connection 'Wired connection 1'
Dec  3 12:11:41 np0005544501 NetworkManager[7198]: <info>  [1764781901.0456] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Dec  3 12:11:41 np0005544501 systemd[1]: Finished Network Manager Wait Online.
Dec  3 12:11:41 np0005544501 NetworkManager[7198]: <info>  [1764781901.0605] dhcp4 (eth1): canceled DHCP transaction
Dec  3 12:11:41 np0005544501 NetworkManager[7198]: <info>  [1764781901.0606] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec  3 12:11:41 np0005544501 NetworkManager[7198]: <info>  [1764781901.0607] dhcp4 (eth1): state changed no lease
Dec  3 12:11:41 np0005544501 NetworkManager[7198]: <info>  [1764781901.0622] policy: auto-activating connection 'ci-private-network' (0b03f328-f929-5aa8-8edd-88e9d5453df2)
Dec  3 12:11:41 np0005544501 NetworkManager[7198]: <info>  [1764781901.0627] device (eth1): Activation: starting connection 'ci-private-network' (0b03f328-f929-5aa8-8edd-88e9d5453df2)
Dec  3 12:11:41 np0005544501 NetworkManager[7198]: <info>  [1764781901.0629] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  3 12:11:41 np0005544501 NetworkManager[7198]: <info>  [1764781901.0632] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  3 12:11:41 np0005544501 NetworkManager[7198]: <info>  [1764781901.0640] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  3 12:11:41 np0005544501 NetworkManager[7198]: <info>  [1764781901.0649] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  3 12:11:41 np0005544501 NetworkManager[7198]: <info>  [1764781901.4947] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  3 12:11:41 np0005544501 NetworkManager[7198]: <info>  [1764781901.4953] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  3 12:11:41 np0005544501 NetworkManager[7198]: <info>  [1764781901.4963] device (eth1): Activation: successful, device activated.
Dec  3 12:11:51 np0005544501 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  3 12:11:55 np0005544501 systemd-logind[784]: Session 1 logged out. Waiting for processes to exit.
Dec  3 12:12:06 np0005544501 systemd-logind[784]: New session 3 of user zuul.
Dec  3 12:12:06 np0005544501 systemd[1]: Started Session 3 of User zuul.
Dec  3 12:12:06 np0005544501 python3[7378]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  3 12:12:06 np0005544501 python3[7451]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764781926.3793824-267-273264832299672/source _original_basename=tmpv3jiy7zr follow=False checksum=08354c50b9581e1994e5282f251f6f7bccc7b12a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:12:09 np0005544501 systemd[1]: session-3.scope: Deactivated successfully.
Dec  3 12:12:09 np0005544501 systemd-logind[784]: Session 3 logged out. Waiting for processes to exit.
Dec  3 12:12:09 np0005544501 systemd-logind[784]: Removed session 3.
Dec  3 12:14:19 np0005544501 systemd[4297]: Created slice User Background Tasks Slice.
Dec  3 12:14:19 np0005544501 systemd[4297]: Starting Cleanup of User's Temporary Files and Directories...
Dec  3 12:14:19 np0005544501 systemd[4297]: Finished Cleanup of User's Temporary Files and Directories.
Dec  3 12:19:23 np0005544501 systemd-logind[784]: New session 4 of user zuul.
Dec  3 12:19:23 np0005544501 systemd[1]: Started Session 4 of User zuul.
Dec  3 12:19:23 np0005544501 python3[7533]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-3ec9-330e-000000001cd8-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 12:19:23 np0005544501 python3[7562]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:19:24 np0005544501 python3[7588]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:19:24 np0005544501 python3[7614]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:19:24 np0005544501 python3[7640]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:19:25 np0005544501 python3[7666]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:19:25 np0005544501 python3[7744]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  3 12:19:25 np0005544501 python3[7817]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764782365.2944734-479-199800616903277/source _original_basename=tmp4g50xmuy follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:19:26 np0005544501 python3[7867]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  3 12:19:26 np0005544501 systemd[1]: Reloading.
Dec  3 12:19:26 np0005544501 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 12:19:28 np0005544501 python3[7923]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Dec  3 12:19:28 np0005544501 python3[7949]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 12:19:29 np0005544501 python3[7978]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 12:19:29 np0005544501 python3[8006]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 12:19:29 np0005544501 python3[8034]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 12:19:30 np0005544501 python3[8061]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-3ec9-330e-000000001cdf-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 12:19:30 np0005544501 python3[8091]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  3 12:19:32 np0005544501 systemd[1]: session-4.scope: Deactivated successfully.
Dec  3 12:19:32 np0005544501 systemd[1]: session-4.scope: Consumed 4.192s CPU time.
Dec  3 12:19:32 np0005544501 systemd-logind[784]: Session 4 logged out. Waiting for processes to exit.
Dec  3 12:19:32 np0005544501 systemd-logind[784]: Removed session 4.
Dec  3 12:19:34 np0005544501 systemd-logind[784]: New session 5 of user zuul.
Dec  3 12:19:34 np0005544501 systemd[1]: Started Session 5 of User zuul.
Dec  3 12:19:34 np0005544501 python3[8124]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec  3 12:19:53 np0005544501 kernel: SELinux:  Converting 385 SID table entries...
Dec  3 12:19:53 np0005544501 kernel: SELinux:  policy capability network_peer_controls=1
Dec  3 12:19:53 np0005544501 kernel: SELinux:  policy capability open_perms=1
Dec  3 12:19:53 np0005544501 kernel: SELinux:  policy capability extended_socket_class=1
Dec  3 12:19:53 np0005544501 kernel: SELinux:  policy capability always_check_network=0
Dec  3 12:19:53 np0005544501 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  3 12:19:53 np0005544501 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  3 12:19:53 np0005544501 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  3 12:20:05 np0005544501 kernel: SELinux:  Converting 385 SID table entries...
Dec  3 12:20:05 np0005544501 kernel: SELinux:  policy capability network_peer_controls=1
Dec  3 12:20:05 np0005544501 kernel: SELinux:  policy capability open_perms=1
Dec  3 12:20:05 np0005544501 kernel: SELinux:  policy capability extended_socket_class=1
Dec  3 12:20:05 np0005544501 kernel: SELinux:  policy capability always_check_network=0
Dec  3 12:20:05 np0005544501 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  3 12:20:05 np0005544501 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  3 12:20:05 np0005544501 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  3 12:20:18 np0005544501 kernel: SELinux:  Converting 385 SID table entries...
Dec  3 12:20:18 np0005544501 kernel: SELinux:  policy capability network_peer_controls=1
Dec  3 12:20:18 np0005544501 kernel: SELinux:  policy capability open_perms=1
Dec  3 12:20:18 np0005544501 kernel: SELinux:  policy capability extended_socket_class=1
Dec  3 12:20:18 np0005544501 kernel: SELinux:  policy capability always_check_network=0
Dec  3 12:20:18 np0005544501 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  3 12:20:18 np0005544501 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  3 12:20:18 np0005544501 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  3 12:20:21 np0005544501 setsebool[8194]: The virt_use_nfs policy boolean was changed to 1 by root
Dec  3 12:20:21 np0005544501 setsebool[8194]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Dec  3 12:20:33 np0005544501 kernel: SELinux:  Converting 388 SID table entries...
Dec  3 12:20:33 np0005544501 kernel: SELinux:  policy capability network_peer_controls=1
Dec  3 12:20:33 np0005544501 kernel: SELinux:  policy capability open_perms=1
Dec  3 12:20:33 np0005544501 kernel: SELinux:  policy capability extended_socket_class=1
Dec  3 12:20:33 np0005544501 kernel: SELinux:  policy capability always_check_network=0
Dec  3 12:20:33 np0005544501 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  3 12:20:33 np0005544501 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  3 12:20:33 np0005544501 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  3 12:20:56 np0005544501 dbus-broker-launch[771]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec  3 12:20:56 np0005544501 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  3 12:20:56 np0005544501 systemd[1]: Starting man-db-cache-update.service...
Dec  3 12:20:56 np0005544501 systemd[1]: Reloading.
Dec  3 12:20:56 np0005544501 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 12:20:56 np0005544501 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  3 12:21:00 np0005544501 python3[10953]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-ce99-5901-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 12:21:01 np0005544501 kernel: evm: overlay not supported
Dec  3 12:21:01 np0005544501 systemd[4297]: Starting D-Bus User Message Bus...
Dec  3 12:21:01 np0005544501 dbus-broker-launch[12073]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Dec  3 12:21:01 np0005544501 dbus-broker-launch[12073]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Dec  3 12:21:01 np0005544501 systemd[4297]: Started D-Bus User Message Bus.
Dec  3 12:21:01 np0005544501 dbus-broker-launch[12073]: Ready
Dec  3 12:21:01 np0005544501 systemd[4297]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec  3 12:21:01 np0005544501 systemd[4297]: Created slice Slice /user.
Dec  3 12:21:01 np0005544501 systemd[4297]: podman-11944.scope: unit configures an IP firewall, but not running as root.
Dec  3 12:21:01 np0005544501 systemd[4297]: (This warning is only shown for the first unit using IP firewalling.)
Dec  3 12:21:01 np0005544501 systemd[4297]: Started podman-11944.scope.
Dec  3 12:21:01 np0005544501 systemd[4297]: Started podman-pause-1311d935.scope.
Dec  3 12:21:02 np0005544501 python3[12710]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.102.83.111:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.102.83.111:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:21:02 np0005544501 python3[12710]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Dec  3 12:21:02 np0005544501 systemd[1]: session-5.scope: Deactivated successfully.
Dec  3 12:21:02 np0005544501 systemd[1]: session-5.scope: Consumed 1min 6.358s CPU time.
Dec  3 12:21:02 np0005544501 systemd-logind[784]: Session 5 logged out. Waiting for processes to exit.
Dec  3 12:21:02 np0005544501 systemd-logind[784]: Removed session 5.
Dec  3 12:21:24 np0005544501 systemd-logind[784]: New session 6 of user zuul.
Dec  3 12:21:24 np0005544501 systemd[1]: Started Session 6 of User zuul.
Dec  3 12:21:24 np0005544501 python3[22563]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOhdUWWqzqkbky7WXR9z8lr7gjHYB10ec0h0+EyIjo7hkfsDWdGDmXFvDYpamRUOtuvi1FqgPcbqLw5M/v7S/94= zuul@np0005544500.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  3 12:21:25 np0005544501 python3[22722]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOhdUWWqzqkbky7WXR9z8lr7gjHYB10ec0h0+EyIjo7hkfsDWdGDmXFvDYpamRUOtuvi1FqgPcbqLw5M/v7S/94= zuul@np0005544500.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  3 12:21:26 np0005544501 python3[23076]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005544501.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Dec  3 12:21:26 np0005544501 python3[23270]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOhdUWWqzqkbky7WXR9z8lr7gjHYB10ec0h0+EyIjo7hkfsDWdGDmXFvDYpamRUOtuvi1FqgPcbqLw5M/v7S/94= zuul@np0005544500.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  3 12:21:26 np0005544501 python3[23550]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  3 12:21:27 np0005544501 irqbalance[782]: Cannot change IRQ 27 affinity: Operation not permitted
Dec  3 12:21:27 np0005544501 irqbalance[782]: IRQ 27 affinity is now unmanaged
Dec  3 12:21:27 np0005544501 python3[23813]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764782486.6279926-135-89037813466418/source _original_basename=tmpjnbp4qji follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:21:28 np0005544501 python3[24158]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Dec  3 12:21:28 np0005544501 systemd[1]: Starting Hostname Service...
Dec  3 12:21:28 np0005544501 systemd[1]: Started Hostname Service.
Dec  3 12:21:28 np0005544501 systemd-hostnamed[24279]: Changed pretty hostname to 'compute-0'
Dec  3 12:21:28 np0005544501 systemd-hostnamed[24279]: Hostname set to <compute-0> (static)
Dec  3 12:21:28 np0005544501 NetworkManager[7198]: <info>  [1764782488.3222] hostname: static hostname changed from "np0005544501.novalocal" to "compute-0"
Dec  3 12:21:28 np0005544501 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  3 12:21:28 np0005544501 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  3 12:21:28 np0005544501 systemd[1]: session-6.scope: Deactivated successfully.
Dec  3 12:21:28 np0005544501 systemd[1]: session-6.scope: Consumed 2.309s CPU time.
Dec  3 12:21:28 np0005544501 systemd-logind[784]: Session 6 logged out. Waiting for processes to exit.
Dec  3 12:21:28 np0005544501 systemd-logind[784]: Removed session 6.
Dec  3 12:21:38 np0005544501 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  3 12:21:51 np0005544501 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  3 12:21:51 np0005544501 systemd[1]: Finished man-db-cache-update.service.
Dec  3 12:21:51 np0005544501 systemd[1]: man-db-cache-update.service: Consumed 53.398s CPU time.
Dec  3 12:21:51 np0005544501 systemd[1]: run-r64ae540b691144efbdd20da26eaaf84d.service: Deactivated successfully.
Dec  3 12:21:58 np0005544501 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec  3 12:23:40 np0005544501 systemd[1]: Starting Cleanup of Temporary Directories...
Dec  3 12:23:40 np0005544501 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Dec  3 12:23:40 np0005544501 systemd[1]: Finished Cleanup of Temporary Directories.
Dec  3 12:23:40 np0005544501 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Dec  3 12:26:08 np0005544501 systemd-logind[784]: New session 7 of user zuul.
Dec  3 12:26:09 np0005544501 systemd[1]: Started Session 7 of User zuul.
Dec  3 12:26:09 np0005544501 python3[30051]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 12:26:11 np0005544501 python3[30167]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  3 12:26:11 np0005544501 python3[30240]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764782770.8480513-33802-89810936718964/source mode=0755 _original_basename=delorean.repo follow=False checksum=39c885eb875fd03e010d1b0454241c26b121dfb2 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:26:11 np0005544501 python3[30266]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  3 12:26:12 np0005544501 python3[30339]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764782770.8480513-33802-89810936718964/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:26:12 np0005544501 python3[30365]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  3 12:26:13 np0005544501 python3[30438]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764782770.8480513-33802-89810936718964/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:26:13 np0005544501 python3[30464]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  3 12:26:13 np0005544501 python3[30537]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764782770.8480513-33802-89810936718964/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:26:13 np0005544501 python3[30563]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  3 12:26:14 np0005544501 python3[30636]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764782770.8480513-33802-89810936718964/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:26:14 np0005544501 python3[30662]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  3 12:26:15 np0005544501 python3[30735]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764782770.8480513-33802-89810936718964/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:26:15 np0005544501 python3[30761]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  3 12:26:15 np0005544501 python3[30834]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764782770.8480513-33802-89810936718964/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=6e18e2038d54303b4926db53c0b6cced515a9151 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:28:59 np0005544501 python3[30893]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 12:33:59 np0005544501 systemd[1]: session-7.scope: Deactivated successfully.
Dec  3 12:33:59 np0005544501 systemd[1]: session-7.scope: Consumed 5.124s CPU time.
Dec  3 12:33:59 np0005544501 systemd-logind[784]: Session 7 logged out. Waiting for processes to exit.
Dec  3 12:33:59 np0005544501 systemd-logind[784]: Removed session 7.
Dec  3 12:41:39 np0005544501 systemd-logind[784]: New session 8 of user zuul.
Dec  3 12:41:39 np0005544501 systemd[1]: Started Session 8 of User zuul.
Dec  3 12:41:40 np0005544501 python3.9[31078]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 12:41:42 np0005544501 python3.9[31259]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 12:41:49 np0005544501 systemd[1]: session-8.scope: Deactivated successfully.
Dec  3 12:41:49 np0005544501 systemd[1]: session-8.scope: Consumed 8.007s CPU time.
Dec  3 12:41:49 np0005544501 systemd-logind[784]: Session 8 logged out. Waiting for processes to exit.
Dec  3 12:41:49 np0005544501 systemd-logind[784]: Removed session 8.
Dec  3 12:42:05 np0005544501 systemd-logind[784]: New session 9 of user zuul.
Dec  3 12:42:05 np0005544501 systemd[1]: Started Session 9 of User zuul.
Dec  3 12:42:06 np0005544501 python3.9[31469]: ansible-ansible.legacy.ping Invoked with data=pong
Dec  3 12:42:07 np0005544501 python3.9[31643]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 12:42:08 np0005544501 python3.9[31795]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 12:42:09 np0005544501 python3.9[31948]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 12:42:10 np0005544501 python3.9[32100]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:42:10 np0005544501 python3.9[32252]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:42:11 np0005544501 python3.9[32375]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764783730.2142985-73-234068556279594/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:42:12 np0005544501 python3.9[32527]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 12:42:13 np0005544501 python3.9[32683]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 12:42:13 np0005544501 python3.9[32835]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 12:42:14 np0005544501 python3.9[32985]: ansible-ansible.builtin.service_facts Invoked
Dec  3 12:42:17 np0005544501 python3.9[33238]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:42:18 np0005544501 python3.9[33388]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 12:42:19 np0005544501 python3.9[33542]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 12:42:20 np0005544501 python3.9[33700]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  3 12:42:21 np0005544501 python3.9[33784]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  3 12:43:09 np0005544501 systemd[1]: Reloading.
Dec  3 12:43:09 np0005544501 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 12:43:10 np0005544501 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Dec  3 12:43:10 np0005544501 systemd[1]: Reloading.
Dec  3 12:43:10 np0005544501 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 12:43:10 np0005544501 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Dec  3 12:43:10 np0005544501 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Dec  3 12:43:10 np0005544501 systemd[1]: Reloading.
Dec  3 12:43:10 np0005544501 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 12:43:10 np0005544501 systemd[1]: Starting dnf makecache...
Dec  3 12:43:10 np0005544501 systemd[1]: Listening on LVM2 poll daemon socket.
Dec  3 12:43:11 np0005544501 dnf[34087]: Failed determining last makecache time.
Dec  3 12:43:11 np0005544501 dnf[34087]: delorean-openstack-barbican-42b4c41831408a8e323 149 kB/s | 3.0 kB     00:00
Dec  3 12:43:11 np0005544501 dnf[34087]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 144 kB/s | 3.0 kB     00:00
Dec  3 12:43:11 np0005544501 dnf[34087]: delorean-openstack-cinder-1c00d6490d88e436f26ef 143 kB/s | 3.0 kB     00:00
Dec  3 12:43:11 np0005544501 dnf[34087]: delorean-python-stevedore-c4acc5639fd2329372142 169 kB/s | 3.0 kB     00:00
Dec  3 12:43:11 np0005544501 dnf[34087]: delorean-python-cloudkitty-tests-tempest-2c80f8 191 kB/s | 3.0 kB     00:00
Dec  3 12:43:11 np0005544501 dnf[34087]: delorean-os-net-config-d0cedbdb788d43e5c7551df5 155 kB/s | 3.0 kB     00:00
Dec  3 12:43:11 np0005544501 dnf[34087]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 178 kB/s | 3.0 kB     00:00
Dec  3 12:43:11 np0005544501 dbus-broker-launch[754]: Noticed file-system modification, trigger reload.
Dec  3 12:43:11 np0005544501 dbus-broker-launch[754]: Noticed file-system modification, trigger reload.
Dec  3 12:43:11 np0005544501 dnf[34087]: delorean-python-designate-tests-tempest-347fdbc 203 kB/s | 3.0 kB     00:00
Dec  3 12:43:11 np0005544501 dbus-broker-launch[754]: Noticed file-system modification, trigger reload.
Dec  3 12:43:11 np0005544501 dnf[34087]: delorean-openstack-glance-1fd12c29b339f30fe823e 168 kB/s | 3.0 kB     00:00
Dec  3 12:43:11 np0005544501 dnf[34087]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 130 kB/s | 3.0 kB     00:00
Dec  3 12:43:11 np0005544501 dnf[34087]: delorean-openstack-manila-3c01b7181572c95dac462 162 kB/s | 3.0 kB     00:00
Dec  3 12:43:11 np0005544501 dnf[34087]: delorean-python-whitebox-neutron-tests-tempest- 157 kB/s | 3.0 kB     00:00
Dec  3 12:43:11 np0005544501 dnf[34087]: delorean-openstack-octavia-ba397f07a7331190208c 148 kB/s | 3.0 kB     00:00
Dec  3 12:43:11 np0005544501 dnf[34087]: delorean-openstack-watcher-c014f81a8647287f6dcc 168 kB/s | 3.0 kB     00:00
Dec  3 12:43:11 np0005544501 dnf[34087]: delorean-ansible-config_template-5ccaa22121a7ff 166 kB/s | 3.0 kB     00:00
Dec  3 12:43:11 np0005544501 dnf[34087]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 138 kB/s | 3.0 kB     00:00
Dec  3 12:43:11 np0005544501 dnf[34087]: delorean-openstack-swift-dc98a8463506ac520c469a 166 kB/s | 3.0 kB     00:00
Dec  3 12:43:11 np0005544501 dnf[34087]: delorean-python-tempestconf-8515371b7cceebd4282 161 kB/s | 3.0 kB     00:00
Dec  3 12:43:11 np0005544501 dnf[34087]: delorean-openstack-heat-ui-013accbfd179753bc3f0 143 kB/s | 3.0 kB     00:00
Dec  3 12:43:11 np0005544501 dnf[34087]: CentOS Stream 9 - BaseOS                         50 kB/s | 6.4 kB     00:00
Dec  3 12:43:11 np0005544501 dnf[34087]: CentOS Stream 9 - AppStream                      62 kB/s | 6.5 kB     00:00
Dec  3 12:43:12 np0005544501 dnf[34087]: CentOS Stream 9 - CRB                            37 kB/s | 6.3 kB     00:00
Dec  3 12:43:12 np0005544501 dnf[34087]: CentOS Stream 9 - Extras packages                67 kB/s | 8.3 kB     00:00
Dec  3 12:43:12 np0005544501 dnf[34087]: dlrn-antelope-testing                           145 kB/s | 3.0 kB     00:00
Dec  3 12:43:12 np0005544501 dnf[34087]: dlrn-antelope-build-deps                        133 kB/s | 3.0 kB     00:00
Dec  3 12:43:12 np0005544501 dnf[34087]: centos9-rabbitmq                                114 kB/s | 3.0 kB     00:00
Dec  3 12:43:12 np0005544501 dnf[34087]: centos9-storage                                 135 kB/s | 3.0 kB     00:00
Dec  3 12:43:12 np0005544501 dnf[34087]: centos9-opstools                                130 kB/s | 3.0 kB     00:00
Dec  3 12:43:12 np0005544501 dnf[34087]: NFV SIG OpenvSwitch                              98 kB/s | 3.0 kB     00:00
Dec  3 12:43:12 np0005544501 dnf[34087]: repo-setup-centos-appstream                     181 kB/s | 4.4 kB     00:00
Dec  3 12:43:12 np0005544501 dnf[34087]: repo-setup-centos-baseos                         16 kB/s | 3.9 kB     00:00
Dec  3 12:43:12 np0005544501 dnf[34087]: repo-setup-centos-highavailability              154 kB/s | 3.9 kB     00:00
Dec  3 12:43:12 np0005544501 dnf[34087]: repo-setup-centos-powertools                    162 kB/s | 4.3 kB     00:00
Dec  3 12:43:13 np0005544501 dnf[34087]: Extra Packages for Enterprise Linux 9 - x86_64  250 kB/s |  34 kB     00:00
Dec  3 12:43:13 np0005544501 dnf[34087]: Metadata cache created.
Dec  3 12:43:14 np0005544501 systemd[1]: dnf-makecache.service: Deactivated successfully.
Dec  3 12:43:14 np0005544501 systemd[1]: Finished dnf makecache.
Dec  3 12:43:14 np0005544501 systemd[1]: dnf-makecache.service: Consumed 1.816s CPU time.
Dec  3 12:44:28 np0005544501 kernel: SELinux:  Converting 2718 SID table entries...
Dec  3 12:44:28 np0005544501 kernel: SELinux:  policy capability network_peer_controls=1
Dec  3 12:44:28 np0005544501 kernel: SELinux:  policy capability open_perms=1
Dec  3 12:44:28 np0005544501 kernel: SELinux:  policy capability extended_socket_class=1
Dec  3 12:44:28 np0005544501 kernel: SELinux:  policy capability always_check_network=0
Dec  3 12:44:28 np0005544501 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  3 12:44:28 np0005544501 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  3 12:44:28 np0005544501 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  3 12:44:29 np0005544501 dbus-broker-launch[771]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Dec  3 12:44:29 np0005544501 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  3 12:44:29 np0005544501 systemd[1]: Starting man-db-cache-update.service...
Dec  3 12:44:29 np0005544501 systemd[1]: Reloading.
Dec  3 12:44:29 np0005544501 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 12:44:29 np0005544501 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  3 12:44:30 np0005544501 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  3 12:44:30 np0005544501 systemd[1]: Finished man-db-cache-update.service.
Dec  3 12:44:30 np0005544501 systemd[1]: man-db-cache-update.service: Consumed 1.367s CPU time.
Dec  3 12:44:30 np0005544501 systemd[1]: run-r5c284634c6d045c69a3a772e5dfc0181.service: Deactivated successfully.
Dec  3 12:44:30 np0005544501 python3.9[35406]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 12:44:33 np0005544501 python3.9[35687]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Dec  3 12:44:33 np0005544501 python3.9[35841]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Dec  3 12:44:36 np0005544501 python3.9[35996]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:44:37 np0005544501 python3.9[36152]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Dec  3 12:44:38 np0005544501 python3.9[36306]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 12:44:39 np0005544501 python3.9[36458]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:44:39 np0005544501 python3.9[36581]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764783878.9246182-236-275423541367711/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=3ad59a59e9b9a4b3219c10debf9b016d113b8fe9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:44:40 np0005544501 python3.9[36735]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 12:44:41 np0005544501 python3.9[36887]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 12:44:42 np0005544501 python3.9[37042]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:44:42 np0005544501 python3.9[37195]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Dec  3 12:44:43 np0005544501 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  3 12:44:43 np0005544501 python3.9[37350]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  3 12:44:47 np0005544501 python3.9[37508]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec  3 12:44:51 np0005544501 python3.9[37674]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Dec  3 12:44:52 np0005544501 python3.9[37827]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  3 12:44:53 np0005544501 python3.9[37985]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Dec  3 12:44:54 np0005544501 python3.9[38139]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  3 12:44:56 np0005544501 python3.9[38293]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 12:44:57 np0005544501 python3.9[38445]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:44:57 np0005544501 python3.9[38568]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764783896.7751756-355-225942857836524/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  3 12:44:58 np0005544501 python3.9[38722]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  3 12:44:58 np0005544501 systemd[1]: Starting Load Kernel Modules...
Dec  3 12:44:58 np0005544501 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec  3 12:44:58 np0005544501 kernel: Bridge firewalling registered
Dec  3 12:44:58 np0005544501 systemd-modules-load[38726]: Inserted module 'br_netfilter'
Dec  3 12:44:58 np0005544501 systemd[1]: Finished Load Kernel Modules.
Dec  3 12:44:59 np0005544501 python3.9[38884]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:45:00 np0005544501 python3.9[39007]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764783899.1156366-378-170545739816975/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  3 12:45:01 np0005544501 python3.9[39159]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  3 12:45:05 np0005544501 dbus-broker-launch[754]: Noticed file-system modification, trigger reload.
Dec  3 12:45:05 np0005544501 dbus-broker-launch[754]: Noticed file-system modification, trigger reload.
Dec  3 12:45:06 np0005544501 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  3 12:45:06 np0005544501 systemd[1]: Starting man-db-cache-update.service...
Dec  3 12:45:06 np0005544501 systemd[1]: Reloading.
Dec  3 12:45:06 np0005544501 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 12:45:06 np0005544501 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  3 12:45:07 np0005544501 python3.9[40680]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 12:45:08 np0005544501 python3.9[41888]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Dec  3 12:45:09 np0005544501 python3.9[42721]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 12:45:09 np0005544501 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  3 12:45:09 np0005544501 systemd[1]: Finished man-db-cache-update.service.
Dec  3 12:45:09 np0005544501 systemd[1]: man-db-cache-update.service: Consumed 4.370s CPU time.
Dec  3 12:45:09 np0005544501 systemd[1]: run-r46fd1499e69242a7aa31fd730bec9055.service: Deactivated successfully.
Dec  3 12:45:09 np0005544501 python3.9[43358]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 12:45:09 np0005544501 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec  3 12:45:10 np0005544501 systemd[1]: Starting Authorization Manager...
Dec  3 12:45:10 np0005544501 systemd[1]: Started Dynamic System Tuning Daemon.
Dec  3 12:45:10 np0005544501 polkitd[43576]: Started polkitd version 0.117
Dec  3 12:45:10 np0005544501 systemd[1]: Started Authorization Manager.
Dec  3 12:45:11 np0005544501 python3.9[43746]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 12:45:11 np0005544501 systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec  3 12:45:11 np0005544501 systemd[1]: tuned.service: Deactivated successfully.
Dec  3 12:45:11 np0005544501 systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec  3 12:45:11 np0005544501 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec  3 12:45:11 np0005544501 systemd[1]: Started Dynamic System Tuning Daemon.
Dec  3 12:45:12 np0005544501 python3.9[43908]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Dec  3 12:45:14 np0005544501 python3.9[44062]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 12:45:14 np0005544501 systemd[1]: Reloading.
Dec  3 12:45:14 np0005544501 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 12:45:15 np0005544501 python3.9[44250]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 12:45:15 np0005544501 systemd[1]: Reloading.
Dec  3 12:45:15 np0005544501 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 12:45:16 np0005544501 python3.9[44440]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 12:45:17 np0005544501 python3.9[44593]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 12:45:17 np0005544501 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Dec  3 12:45:17 np0005544501 python3.9[44746]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 12:45:19 np0005544501 python3.9[44908]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 12:45:20 np0005544501 python3.9[45063]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  3 12:45:20 np0005544501 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec  3 12:45:20 np0005544501 systemd[1]: Stopped Apply Kernel Variables.
Dec  3 12:45:20 np0005544501 systemd[1]: Stopping Apply Kernel Variables...
Dec  3 12:45:20 np0005544501 systemd[1]: Starting Apply Kernel Variables...
Dec  3 12:45:20 np0005544501 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec  3 12:45:20 np0005544501 systemd[1]: Finished Apply Kernel Variables.
Dec  3 12:45:20 np0005544501 systemd[1]: session-9.scope: Deactivated successfully.
Dec  3 12:45:20 np0005544501 systemd[1]: session-9.scope: Consumed 2min 14.616s CPU time.
Dec  3 12:45:20 np0005544501 systemd-logind[784]: Session 9 logged out. Waiting for processes to exit.
Dec  3 12:45:20 np0005544501 systemd-logind[784]: Removed session 9.
Dec  3 12:45:26 np0005544501 systemd-logind[784]: New session 10 of user zuul.
Dec  3 12:45:26 np0005544501 systemd[1]: Started Session 10 of User zuul.
Dec  3 12:45:27 np0005544501 python3.9[45246]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 12:45:28 np0005544501 python3.9[45402]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Dec  3 12:45:29 np0005544501 python3.9[45555]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  3 12:45:30 np0005544501 python3.9[45713]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec  3 12:45:31 np0005544501 python3.9[45873]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  3 12:45:32 np0005544501 python3.9[45957]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  3 12:45:35 np0005544501 python3.9[46120]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  3 12:45:46 np0005544501 kernel: SELinux:  Converting 2730 SID table entries...
Dec  3 12:45:46 np0005544501 kernel: SELinux:  policy capability network_peer_controls=1
Dec  3 12:45:46 np0005544501 kernel: SELinux:  policy capability open_perms=1
Dec  3 12:45:46 np0005544501 kernel: SELinux:  policy capability extended_socket_class=1
Dec  3 12:45:46 np0005544501 kernel: SELinux:  policy capability always_check_network=0
Dec  3 12:45:46 np0005544501 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  3 12:45:46 np0005544501 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  3 12:45:46 np0005544501 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  3 12:45:47 np0005544501 dbus-broker-launch[771]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Dec  3 12:45:47 np0005544501 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Dec  3 12:45:48 np0005544501 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  3 12:45:48 np0005544501 systemd[1]: Starting man-db-cache-update.service...
Dec  3 12:45:48 np0005544501 systemd[1]: Reloading.
Dec  3 12:45:48 np0005544501 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 12:45:48 np0005544501 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 12:45:48 np0005544501 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  3 12:45:49 np0005544501 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  3 12:45:49 np0005544501 systemd[1]: Finished man-db-cache-update.service.
Dec  3 12:45:49 np0005544501 systemd[1]: run-r1650890b96bc40f8a414ca870a87ac9b.service: Deactivated successfully.
Dec  3 12:45:50 np0005544501 python3.9[47218]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  3 12:45:50 np0005544501 systemd[1]: Reloading.
Dec  3 12:45:50 np0005544501 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 12:45:50 np0005544501 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 12:45:50 np0005544501 systemd[1]: Starting Open vSwitch Database Unit...
Dec  3 12:45:50 np0005544501 chown[47261]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Dec  3 12:45:50 np0005544501 ovs-ctl[47266]: /etc/openvswitch/conf.db does not exist ... (warning).
Dec  3 12:45:50 np0005544501 ovs-ctl[47266]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Dec  3 12:45:51 np0005544501 ovs-ctl[47266]: Starting ovsdb-server [  OK  ]
Dec  3 12:45:51 np0005544501 ovs-vsctl[47315]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Dec  3 12:45:51 np0005544501 ovs-vsctl[47334]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"1ac9fd0d-196b-4ea8-9a9a-8aa831092805\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Dec  3 12:45:51 np0005544501 ovs-ctl[47266]: Configuring Open vSwitch system IDs [  OK  ]
Dec  3 12:45:51 np0005544501 ovs-vsctl[47340]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Dec  3 12:45:51 np0005544501 ovs-ctl[47266]: Enabling remote OVSDB managers [  OK  ]
Dec  3 12:45:51 np0005544501 systemd[1]: Started Open vSwitch Database Unit.
Dec  3 12:45:51 np0005544501 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Dec  3 12:45:51 np0005544501 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Dec  3 12:45:51 np0005544501 systemd[1]: Starting Open vSwitch Forwarding Unit...
Dec  3 12:45:51 np0005544501 kernel: openvswitch: Open vSwitch switching datapath
Dec  3 12:45:51 np0005544501 ovs-ctl[47384]: Inserting openvswitch module [  OK  ]
Dec  3 12:45:51 np0005544501 ovs-ctl[47353]: Starting ovs-vswitchd [  OK  ]
Dec  3 12:45:51 np0005544501 ovs-vsctl[47402]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Dec  3 12:45:51 np0005544501 ovs-ctl[47353]: Enabling remote OVSDB managers [  OK  ]
Dec  3 12:45:51 np0005544501 systemd[1]: Started Open vSwitch Forwarding Unit.
Dec  3 12:45:51 np0005544501 systemd[1]: Starting Open vSwitch...
Dec  3 12:45:51 np0005544501 systemd[1]: Finished Open vSwitch.
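ovsdb-server and ovs-vswitchd are now both running, the kernel datapath module is loaded, and the freshly created database carries the system-id, rundir, and hostname set via ovs-vsctl above. A quick verification pass at this point might be (a sketch; exact output varies per host):

    systemctl is-active ovsdb-server ovs-vswitchd   # both should print "active"
    ovs-vsctl show                                  # topology (no bridges yet at this stage)
    ovs-vsctl get Open_vSwitch . external-ids       # system-id/rundir/hostname set above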
Dec  3 12:45:52 np0005544501 python3.9[47553]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 12:45:53 np0005544501 python3.9[47705]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Dec  3 12:45:54 np0005544501 kernel: SELinux:  Converting 2744 SID table entries...
Dec  3 12:45:54 np0005544501 kernel: SELinux:  policy capability network_peer_controls=1
Dec  3 12:45:54 np0005544501 kernel: SELinux:  policy capability open_perms=1
Dec  3 12:45:54 np0005544501 kernel: SELinux:  policy capability extended_socket_class=1
Dec  3 12:45:54 np0005544501 kernel: SELinux:  policy capability always_check_network=0
Dec  3 12:45:54 np0005544501 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  3 12:45:54 np0005544501 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  3 12:45:54 np0005544501 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
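The sefcontext task adds a persistent SELinux file-context rule mapping /var/lib/edpm-config(/.*)? to container_file_t, and the kernel messages above (2744 SID table entries converted, capabilities re-announced) reflect the policy reload performed by its reload=True commit. Functionally this is close to the following two commands; a sketch, not the module's literal implementation:

    # Register the file-context rule in the local policy store.
    semanage fcontext -a -t container_file_t '/var/lib/edpm-config(/.*)?'
    # Relabel anything already present under the path.
    restorecon -Rv /var/lib/edpm-config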
Dec  3 12:45:55 np0005544501 python3.9[47860]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 12:45:56 np0005544501 dbus-broker-launch[771]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Dec  3 12:45:56 np0005544501 python3.9[48018]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  3 12:45:58 np0005544501 python3.9[48171]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
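The dnf task pulls in the baseline host tooling (driverctl, lvm2, crudini, nftables, NetworkManager, openstack-selinux, sos, and friends), and the rpm -V call that follows verifies the installed files against the package database. By hand this pair of steps is roughly (a sketch; package list copied from the invocation above):

    dnf install -y driverctl lvm2 crudini jq nftables NetworkManager \
        openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch \
        sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts \
        grubby sos
    rpm -V driverctl lvm2 crudini   # empty output means files match the rpm database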
Dec  3 12:45:59 np0005544501 python3.9[48458]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec  3 12:46:00 np0005544501 python3.9[48608]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 12:46:01 np0005544501 python3.9[48762]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  3 12:46:03 np0005544501 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  3 12:46:03 np0005544501 systemd[1]: Starting man-db-cache-update.service...
Dec  3 12:46:03 np0005544501 systemd[1]: Reloading.
Dec  3 12:46:03 np0005544501 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 12:46:03 np0005544501 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 12:46:03 np0005544501 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  3 12:46:03 np0005544501 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  3 12:46:03 np0005544501 systemd[1]: Finished man-db-cache-update.service.
Dec  3 12:46:03 np0005544501 systemd[1]: run-rc9efe88243da45a9a31cf80073a55075.service: Deactivated successfully.
Dec  3 12:46:04 np0005544501 python3.9[49079]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  3 12:46:04 np0005544501 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec  3 12:46:04 np0005544501 systemd[1]: Stopped Network Manager Wait Online.
Dec  3 12:46:04 np0005544501 systemd[1]: Stopping Network Manager Wait Online...
Dec  3 12:46:04 np0005544501 systemd[1]: Stopping Network Manager...
Dec  3 12:46:04 np0005544501 NetworkManager[7198]: <info>  [1764783964.7578] caught SIGTERM, shutting down normally.
Dec  3 12:46:04 np0005544501 NetworkManager[7198]: <info>  [1764783964.7592] dhcp4 (eth0): canceled DHCP transaction
Dec  3 12:46:04 np0005544501 NetworkManager[7198]: <info>  [1764783964.7592] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  3 12:46:04 np0005544501 NetworkManager[7198]: <info>  [1764783964.7592] dhcp4 (eth0): state changed no lease
Dec  3 12:46:04 np0005544501 NetworkManager[7198]: <info>  [1764783964.7595] manager: NetworkManager state is now CONNECTED_SITE
Dec  3 12:46:04 np0005544501 NetworkManager[7198]: <info>  [1764783964.7669] exiting (success)
Dec  3 12:46:04 np0005544501 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  3 12:46:04 np0005544501 systemd[1]: NetworkManager.service: Deactivated successfully.
Dec  3 12:46:04 np0005544501 systemd[1]: Stopped Network Manager.
Dec  3 12:46:04 np0005544501 systemd[1]: NetworkManager.service: Consumed 14.199s CPU time, 4.1M memory peak, read 0B from disk, written 13.0K to disk.
Dec  3 12:46:04 np0005544501 systemd[1]: Starting Network Manager...
Dec  3 12:46:04 np0005544501 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  3 12:46:04 np0005544501 NetworkManager[49087]: <info>  [1764783964.8153] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:82de8b62-473e-4ed3-b378-399a4a16feb4)
Dec  3 12:46:04 np0005544501 NetworkManager[49087]: <info>  [1764783964.8154] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec  3 12:46:04 np0005544501 NetworkManager[49087]: <info>  [1764783964.8214] manager[0x564aaaa9a090]: monitoring kernel firmware directory '/lib/firmware'.
Dec  3 12:46:04 np0005544501 systemd[1]: Starting Hostname Service...
Dec  3 12:46:04 np0005544501 systemd[1]: Started Hostname Service.
Dec  3 12:46:04 np0005544501 NetworkManager[49087]: <info>  [1764783964.9055] hostname: hostname: using hostnamed
Dec  3 12:46:04 np0005544501 NetworkManager[49087]: <info>  [1764783964.9056] hostname: static hostname changed from (none) to "compute-0"
Dec  3 12:46:04 np0005544501 NetworkManager[49087]: <info>  [1764783964.9062] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec  3 12:46:04 np0005544501 NetworkManager[49087]: <info>  [1764783964.9067] manager[0x564aaaa9a090]: rfkill: Wi-Fi hardware radio set enabled
Dec  3 12:46:04 np0005544501 NetworkManager[49087]: <info>  [1764783964.9067] manager[0x564aaaa9a090]: rfkill: WWAN hardware radio set enabled
Dec  3 12:46:04 np0005544501 NetworkManager[49087]: <info>  [1764783964.9092] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Dec  3 12:46:04 np0005544501 NetworkManager[49087]: <info>  [1764783964.9104] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec  3 12:46:04 np0005544501 NetworkManager[49087]: <info>  [1764783964.9105] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec  3 12:46:04 np0005544501 NetworkManager[49087]: <info>  [1764783964.9105] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec  3 12:46:04 np0005544501 NetworkManager[49087]: <info>  [1764783964.9106] manager: Networking is enabled by state file
Dec  3 12:46:04 np0005544501 NetworkManager[49087]: <info>  [1764783964.9108] settings: Loaded settings plugin: keyfile (internal)
Dec  3 12:46:04 np0005544501 NetworkManager[49087]: <info>  [1764783964.9113] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec  3 12:46:04 np0005544501 NetworkManager[49087]: <info>  [1764783964.9142] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec  3 12:46:04 np0005544501 NetworkManager[49087]: <info>  [1764783964.9152] dhcp: init: Using DHCP client 'internal'
Dec  3 12:46:04 np0005544501 NetworkManager[49087]: <info>  [1764783964.9155] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec  3 12:46:04 np0005544501 NetworkManager[49087]: <info>  [1764783964.9160] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  3 12:46:04 np0005544501 NetworkManager[49087]: <info>  [1764783964.9167] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec  3 12:46:04 np0005544501 NetworkManager[49087]: <info>  [1764783964.9176] device (lo): Activation: starting connection 'lo' (223361d4-8bf7-4611-9366-5605a29f25d0)
Dec  3 12:46:04 np0005544501 NetworkManager[49087]: <info>  [1764783964.9185] device (eth0): carrier: link connected
Dec  3 12:46:04 np0005544501 NetworkManager[49087]: <info>  [1764783964.9190] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec  3 12:46:04 np0005544501 NetworkManager[49087]: <info>  [1764783964.9195] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec  3 12:46:04 np0005544501 NetworkManager[49087]: <info>  [1764783964.9195] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec  3 12:46:04 np0005544501 NetworkManager[49087]: <info>  [1764783964.9203] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec  3 12:46:04 np0005544501 NetworkManager[49087]: <info>  [1764783964.9210] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec  3 12:46:04 np0005544501 NetworkManager[49087]: <info>  [1764783964.9217] device (eth1): carrier: link connected
Dec  3 12:46:04 np0005544501 NetworkManager[49087]: <info>  [1764783964.9222] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec  3 12:46:04 np0005544501 NetworkManager[49087]: <info>  [1764783964.9229] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (0b03f328-f929-5aa8-8edd-88e9d5453df2) (indicated)
Dec  3 12:46:04 np0005544501 NetworkManager[49087]: <info>  [1764783964.9229] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec  3 12:46:04 np0005544501 NetworkManager[49087]: <info>  [1764783964.9236] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec  3 12:46:04 np0005544501 NetworkManager[49087]: <info>  [1764783964.9243] device (eth1): Activation: starting connection 'ci-private-network' (0b03f328-f929-5aa8-8edd-88e9d5453df2)
Dec  3 12:46:04 np0005544501 systemd[1]: Started Network Manager.
Dec  3 12:46:04 np0005544501 NetworkManager[49087]: <info>  [1764783964.9249] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec  3 12:46:04 np0005544501 NetworkManager[49087]: <info>  [1764783964.9268] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec  3 12:46:04 np0005544501 NetworkManager[49087]: <info>  [1764783964.9272] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec  3 12:46:04 np0005544501 NetworkManager[49087]: <info>  [1764783964.9276] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec  3 12:46:04 np0005544501 NetworkManager[49087]: <info>  [1764783964.9279] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec  3 12:46:04 np0005544501 NetworkManager[49087]: <info>  [1764783964.9284] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec  3 12:46:04 np0005544501 NetworkManager[49087]: <info>  [1764783964.9287] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec  3 12:46:04 np0005544501 NetworkManager[49087]: <info>  [1764783964.9291] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec  3 12:46:04 np0005544501 NetworkManager[49087]: <info>  [1764783964.9296] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec  3 12:46:04 np0005544501 NetworkManager[49087]: <info>  [1764783964.9306] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec  3 12:46:04 np0005544501 NetworkManager[49087]: <info>  [1764783964.9309] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  3 12:46:04 np0005544501 NetworkManager[49087]: <info>  [1764783964.9320] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec  3 12:46:04 np0005544501 NetworkManager[49087]: <info>  [1764783964.9334] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec  3 12:46:04 np0005544501 NetworkManager[49087]: <info>  [1764783964.9345] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec  3 12:46:04 np0005544501 NetworkManager[49087]: <info>  [1764783964.9347] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec  3 12:46:04 np0005544501 NetworkManager[49087]: <info>  [1764783964.9353] device (lo): Activation: successful, device activated.
Dec  3 12:46:04 np0005544501 NetworkManager[49087]: <info>  [1764783964.9361] dhcp4 (eth0): state changed new lease, address=38.102.83.70
Dec  3 12:46:04 np0005544501 NetworkManager[49087]: <info>  [1764783964.9370] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec  3 12:46:04 np0005544501 NetworkManager[49087]: <info>  [1764783964.9437] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec  3 12:46:04 np0005544501 systemd[1]: Starting Network Manager Wait Online...
Dec  3 12:46:04 np0005544501 NetworkManager[49087]: <info>  [1764783964.9442] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec  3 12:46:04 np0005544501 NetworkManager[49087]: <info>  [1764783964.9449] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec  3 12:46:04 np0005544501 NetworkManager[49087]: <info>  [1764783964.9453] manager: NetworkManager state is now CONNECTED_LOCAL
Dec  3 12:46:04 np0005544501 NetworkManager[49087]: <info>  [1764783964.9457] device (eth1): Activation: successful, device activated.
Dec  3 12:46:04 np0005544501 NetworkManager[49087]: <info>  [1764783964.9469] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec  3 12:46:04 np0005544501 NetworkManager[49087]: <info>  [1764783964.9470] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec  3 12:46:04 np0005544501 NetworkManager[49087]: <info>  [1764783964.9474] manager: NetworkManager state is now CONNECTED_SITE
Dec  3 12:46:04 np0005544501 NetworkManager[49087]: <info>  [1764783964.9478] device (eth0): Activation: successful, device activated.
Dec  3 12:46:04 np0005544501 NetworkManager[49087]: <info>  [1764783964.9483] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec  3 12:46:04 np0005544501 NetworkManager[49087]: <info>  [1764783964.9487] manager: startup complete
Dec  3 12:46:04 np0005544501 systemd[1]: Finished Network Manager Wait Online.
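The restart cycle is complete: NetworkManager 1.54.1 came back with the OVS and team device plugins loaded, re-assumed lo, eth0 (DHCP lease 38.102.83.70 restored), and eth1, and reached CONNECTED_GLOBAL, at which point NetworkManager-wait-online returned. A quick health check here could be (a sketch):

    nmcli general status                                 # expect STATE "connected"
    nmcli -f DEVICE,TYPE,STATE,CONNECTION device status  # lo/eth0/eth1 activated
    nmcli networking connectivity check                  # expect "full"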
Dec  3 12:46:05 np0005544501 python3.9[49306]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  3 12:46:10 np0005544501 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  3 12:46:10 np0005544501 systemd[1]: Starting man-db-cache-update.service...
Dec  3 12:46:10 np0005544501 systemd[1]: Reloading.
Dec  3 12:46:10 np0005544501 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 12:46:10 np0005544501 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 12:46:10 np0005544501 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  3 12:46:12 np0005544501 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  3 12:46:12 np0005544501 systemd[1]: Finished man-db-cache-update.service.
Dec  3 12:46:12 np0005544501 systemd[1]: run-r806526c7dae148ed890a61e77d71a801.service: Deactivated successfully.
Dec  3 12:46:13 np0005544501 python3.9[49764]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 12:46:14 np0005544501 python3.9[49918]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:46:15 np0005544501 python3.9[50072]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:46:15 np0005544501 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  3 12:46:15 np0005544501 python3.9[50224]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:46:16 np0005544501 python3.9[50376]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:46:16 np0005544501 python3.9[50528]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
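These ini_file tasks pin no-auto-default=* in the [main] section of NetworkManager.conf (so NetworkManager stops auto-generating "Wired connection" profiles) and remove any dns= and rc-manager= overrides left behind by cloud-init, restoring default resolv.conf handling. Since crudini was installed earlier in this run, the same edits can be sketched as:

    crudini --set /etc/NetworkManager/NetworkManager.conf main no-auto-default '*'
    crudini --del /etc/NetworkManager/NetworkManager.conf main dns
    crudini --del /etc/NetworkManager/conf.d/99-cloud-init.conf main dns
    crudini --del /etc/NetworkManager/NetworkManager.conf main rc-manager
    crudini --del /etc/NetworkManager/conf.d/99-cloud-init.conf main rc-manager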
Dec  3 12:46:17 np0005544501 python3.9[50680]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:46:18 np0005544501 python3.9[50803]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764783977.0778604-229-130039161901438/.source _original_basename=.6hjwmnnn follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:46:18 np0005544501 python3.9[50955]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:46:19 np0005544501 python3.9[51107]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Dec  3 12:46:20 np0005544501 python3.9[51259]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:46:22 np0005544501 python3.9[51686]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Dec  3 12:46:23 np0005544501 ansible-async_wrapper.py[51861]: Invoked with j830543670471 300 /home/zuul/.ansible/tmp/ansible-tmp-1764783982.7796347-295-164282884810663/AnsiballZ_edpm_os_net_config.py _
Dec  3 12:46:23 np0005544501 ansible-async_wrapper.py[51864]: Starting module and watcher
Dec  3 12:46:23 np0005544501 ansible-async_wrapper.py[51864]: Start watching 51865 (300)
Dec  3 12:46:23 np0005544501 ansible-async_wrapper.py[51865]: Start module (51865)
Dec  3 12:46:23 np0005544501 ansible-async_wrapper.py[51861]: Return async_wrapper task started.
Dec  3 12:46:23 np0005544501 python3.9[51866]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
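ansible-async_wrapper launches the edpm_os_net_config module in the background (job j830543670471, 300-second watcher) so the play survives the network reconfiguration it is about to perform; the module then drives os-net-config with the options shown (cleanup, debug, detailed exit codes, nmstate provider). Run by hand, the equivalent is approximately (a sketch; the nmstate provider selection is internal to the module):

    os-net-config -c /etc/os-net-config/config.yaml --debug --detailed-exit-codes --cleanup
    # With --detailed-exit-codes, rc 0 = nothing to change, rc 2 = changes applied
    # successfully; other codes indicate failure.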
Dec  3 12:46:24 np0005544501 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Dec  3 12:46:24 np0005544501 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Dec  3 12:46:24 np0005544501 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Dec  3 12:46:24 np0005544501 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Dec  3 12:46:24 np0005544501 kernel: cfg80211: failed to load regulatory.db
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.5970] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51867 uid=0 result="success"
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.5990] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51867 uid=0 result="success"
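Before changing anything, the tool creates a NetworkManager checkpoint and keeps extending its rollback timeout (the two audit lines above); if the run died mid-reconfiguration, NetworkManager would roll every device back automatically. The checkpoint API is D-Bus only, so a by-hand sketch uses busctl (empty device array = checkpoint all devices, 60-second timeout, flags 0):

    # CheckpointCreate(devices: ao, rollback_timeout: u, flags: u) -> checkpoint: o
    busctl call org.freedesktop.NetworkManager /org/freedesktop/NetworkManager \
        org.freedesktop.NetworkManager CheckpointCreate aouu 0 60 0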
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.6520] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.6523] audit: op="connection-add" uuid="0ea79a60-3832-4329-8849-c4ff78dd6708" name="br-ex-br" pid=51867 uid=0 result="success"
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.6540] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.6542] audit: op="connection-add" uuid="403e91a4-75b8-4098-ad69-1e62ebb6efbb" name="br-ex-port" pid=51867 uid=0 result="success"
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.6556] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.6558] audit: op="connection-add" uuid="b62460a7-7abe-410b-bc13-70cad04883c6" name="eth1-port" pid=51867 uid=0 result="success"
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.6572] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.6574] audit: op="connection-add" uuid="fa39d436-400c-4db6-b653-04c2048dd257" name="vlan20-port" pid=51867 uid=0 result="success"
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.6587] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.6589] audit: op="connection-add" uuid="ae1ed56b-f85f-428f-9f10-8580bd9e3f5b" name="vlan21-port" pid=51867 uid=0 result="success"
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.6604] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.6607] audit: op="connection-add" uuid="ebea2bf9-28f7-484b-bb84-193086bd6a85" name="vlan22-port" pid=51867 uid=0 result="success"
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.6622] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.6625] audit: op="connection-add" uuid="bca2f01f-afc1-4113-89a1-232ca4560ba7" name="vlan23-port" pid=51867 uid=0 result="success"
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.6658] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="connection.autoconnect-priority,connection.timestamp,ipv4.dhcp-timeout,ipv4.dhcp-client-id,ipv6.dhcp-timeout,ipv6.addr-gen-mode,ipv6.method,802-3-ethernet.mtu" pid=51867 uid=0 result="success"
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.6677] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.6679] audit: op="connection-add" uuid="62620927-cfc3-4174-a4a6-c1a27292b454" name="br-ex-if" pid=51867 uid=0 result="success"
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.6726] audit: op="connection-update" uuid="0b03f328-f929-5aa8-8edd-88e9d5453df2" name="ci-private-network" args="connection.controller,connection.port-type,connection.master,connection.slave-type,connection.timestamp,ipv4.addresses,ipv4.routes,ipv4.never-default,ipv4.dns,ipv4.method,ipv4.routing-rules,ipv6.addresses,ipv6.routes,ipv6.routing-rules,ipv6.dns,ipv6.addr-gen-mode,ipv6.method,ovs-interface.type,ovs-external-ids.data" pid=51867 uid=0 result="success"
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.6747] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.6749] audit: op="connection-add" uuid="48803826-284d-453f-8af8-eb034e607884" name="vlan20-if" pid=51867 uid=0 result="success"
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.6768] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.6770] audit: op="connection-add" uuid="302a7770-d988-4f19-8d40-876486a1013f" name="vlan21-if" pid=51867 uid=0 result="success"
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.6788] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.6791] audit: op="connection-add" uuid="48c023b3-59de-4bc5-8024-7fe9062e94cb" name="vlan22-if" pid=51867 uid=0 result="success"
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.6809] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.6811] audit: op="connection-add" uuid="af9190b4-db0e-4a42-bc2c-8d5f32f8cefe" name="vlan23-if" pid=51867 uid=0 result="success"
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.6823] audit: op="connection-delete" uuid="ec24716c-8f74-3086-9a41-27ccc9b9847a" name="Wired connection 1" pid=51867 uid=0 result="success"
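In a single batch, os-net-config has now added profiles for the br-ex bridge (br-ex-br/br-ex-port/br-ex-if), an OVS port per attached interface (eth1-port, vlan20-port through vlan23-port) plus matching OVS interfaces, rewritten System eth0 and ci-private-network, and deleted the stale auto-generated "Wired connection 1". Listing the resulting profiles (a sketch):

    nmcli -f NAME,UUID,TYPE,DEVICE connection show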
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.6838] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.6852] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.6857] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (0ea79a60-3832-4329-8849-c4ff78dd6708)
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.6858] audit: op="connection-activate" uuid="0ea79a60-3832-4329-8849-c4ff78dd6708" name="br-ex-br" pid=51867 uid=0 result="success"
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.6861] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.6869] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.6873] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (403e91a4-75b8-4098-ad69-1e62ebb6efbb)
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.6876] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.6882] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.6887] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (b62460a7-7abe-410b-bc13-70cad04883c6)
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.6890] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.6897] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.6902] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (fa39d436-400c-4db6-b653-04c2048dd257)
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.6905] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.6913] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.6918] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (ae1ed56b-f85f-428f-9f10-8580bd9e3f5b)
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.6920] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.6926] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.6930] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (ebea2bf9-28f7-484b-bb84-193086bd6a85)
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.6931] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.6937] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.6941] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (bca2f01f-afc1-4113-89a1-232ca4560ba7)
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.6942] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.6944] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.6946] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.6951] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.6955] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.6959] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (62620927-cfc3-4174-a4a6-c1a27292b454)
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.6959] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.6963] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.6965] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.6966] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.6967] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.6977] device (eth1): disconnecting for new activation request.
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.6977] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.6980] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.6981] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.6982] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.6985] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.6989] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.6992] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (48803826-284d-453f-8af8-eb034e607884)
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.6993] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.6996] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.6997] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.6998] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7001] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7005] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7009] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (302a7770-d988-4f19-8d40-876486a1013f)
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7009] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7012] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7013] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7015] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7017] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7021] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7025] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (48c023b3-59de-4bc5-8024-7fe9062e94cb)
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7026] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7028] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7030] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7031] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7033] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7038] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7042] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (af9190b4-db0e-4a42-bc2c-8d5f32f8cefe)
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7043] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7046] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7048] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7049] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7050] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7061] audit: op="device-reapply" interface="eth0" ifindex=2 args="connection.autoconnect-priority,ipv4.dhcp-timeout,ipv4.dhcp-client-id,ipv6.addr-gen-mode,ipv6.method,802-3-ethernet.mtu" pid=51867 uid=0 result="success"
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7063] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7066] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7068] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7073] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7077] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7080] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7084] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7085] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 kernel: ovs-system: entered promiscuous mode
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7097] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7100] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7103] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7107] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7112] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 kernel: Timeout policy base is empty
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7118] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7122] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7124] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 systemd-udevd[51873]: Network interface NamePolicy= disabled on kernel command line.
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7128] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7132] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7135] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7137] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7141] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7145] dhcp4 (eth0): canceled DHCP transaction
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7145] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7145] dhcp4 (eth0): state changed no lease
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7147] dhcp4 (eth0): activation: beginning transaction (no timeout)
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7159] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Dec  3 12:46:25 np0005544501 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7162] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51867 uid=0 result="fail" reason="Device is not activated"
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7166] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7172] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7205] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7208] dhcp4 (eth0): state changed new lease, address=38.102.83.70
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7213] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7261] device (eth1): disconnecting for new activation request.
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7263] audit: op="connection-activate" uuid="0b03f328-f929-5aa8-8edd-88e9d5453df2" name="ci-private-network" pid=51867 uid=0 result="success"
Dec  3 12:46:25 np0005544501 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7281] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Dec  3 12:46:25 np0005544501 kernel: br-ex: entered promiscuous mode
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7452] device (eth1): Activation: starting connection 'ci-private-network' (0b03f328-f929-5aa8-8edd-88e9d5453df2)
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7456] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7457] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51867 uid=0 result="success"
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7465] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7468] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7474] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7477] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7485] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7486] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7487] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7488] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7489] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7490] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7493] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7498] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7501] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7505] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7509] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7512] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7516] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7519] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7523] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7526] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7529] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7532] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7536] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7540] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7543] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 systemd-udevd[51872]: Network interface NamePolicy= disabled on kernel command line.
Dec  3 12:46:25 np0005544501 kernel: vlan22: entered promiscuous mode
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7555] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7564] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7602] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7606] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 kernel: vlan23: entered promiscuous mode
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7626] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7631] device (eth1): Activation: successful, device activated.
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7635] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7639] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7679] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Dec  3 12:46:25 np0005544501 kernel: vlan21: entered promiscuous mode
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7698] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7725] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7726] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7731] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7739] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Dec  3 12:46:25 np0005544501 kernel: vlan20: entered promiscuous mode
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7756] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7799] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7800] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7805] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7814] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7826] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7866] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7867] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7870] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7875] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7889] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7932] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7933] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  3 12:46:25 np0005544501 NetworkManager[49087]: <info>  [1764783985.7939] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  3 12:46:26 np0005544501 NetworkManager[49087]: <info>  [1764783986.9273] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51867 uid=0 result="success"
Dec  3 12:46:27 np0005544501 NetworkManager[49087]: <info>  [1764783987.1070] checkpoint[0x564aaaa6f950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Dec  3 12:46:27 np0005544501 NetworkManager[49087]: <info>  [1764783987.1072] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51867 uid=0 result="success"
Dec  3 12:46:27 np0005544501 python3.9[52225]: ansible-ansible.legacy.async_status Invoked with jid=j830543670471.51861 mode=status _async_dir=/root/.ansible_async
Dec  3 12:46:27 np0005544501 NetworkManager[49087]: <info>  [1764783987.4080] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51867 uid=0 result="success"
Dec  3 12:46:27 np0005544501 NetworkManager[49087]: <info>  [1764783987.4093] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51867 uid=0 result="success"
Dec  3 12:46:27 np0005544501 NetworkManager[49087]: <info>  [1764783987.6479] audit: op="networking-control" arg="global-dns-configuration" pid=51867 uid=0 result="success"
Dec  3 12:46:27 np0005544501 NetworkManager[49087]: <info>  [1764783987.6508] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Dec  3 12:46:27 np0005544501 NetworkManager[49087]: <info>  [1764783987.6548] audit: op="networking-control" arg="global-dns-configuration" pid=51867 uid=0 result="success"
Dec  3 12:46:27 np0005544501 NetworkManager[49087]: <info>  [1764783987.6581] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51867 uid=0 result="success"
Dec  3 12:46:27 np0005544501 NetworkManager[49087]: <info>  [1764783987.8154] checkpoint[0x564aaaa6fa20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Dec  3 12:46:27 np0005544501 NetworkManager[49087]: <info>  [1764783987.8158] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51867 uid=0 result="success"
Dec  3 12:46:27 np0005544501 ansible-async_wrapper.py[51865]: Module complete (51865)
Dec  3 12:46:28 np0005544501 ansible-async_wrapper.py[51864]: Done in kid B.
Dec  3 12:46:30 np0005544501 python3.9[52330]: ansible-ansible.legacy.async_status Invoked with jid=j830543670471.51861 mode=status _async_dir=/root/.ansible_async
Dec  3 12:46:31 np0005544501 python3.9[52430]: ansible-ansible.legacy.async_status Invoked with jid=j830543670471.51861 mode=cleanup _async_dir=/root/.ansible_async
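[annotation] This is Ansible's async pattern: async_wrapper forks the long-running module ("Module complete (51865)", then "Done in kid B."), while the controller polls with async_status until the job finishes and finally issues mode=cleanup to delete the status file under /root/.ansible_async. The same poll can be issued ad hoc from the controller, given an inventory entry for the host; jid and host below are the ones logged:

    ansible np0005544501 -m ansible.builtin.async_status \
        -a "jid=j830543670471.51861 mode=status"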
Dec  3 12:46:32 np0005544501 python3.9[52582]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:46:32 np0005544501 python3.9[52705]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764783991.6431086-322-34809448738269/.source.returncode _original_basename=.t8dcxxu1 follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:46:33 np0005544501 python3.9[52857]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:46:33 np0005544501 python3.9[52980]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764783992.8444536-338-90521155021358/.source.cfg _original_basename=.blykcgbh follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:46:34 np0005544501 python3.9[53133]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  3 12:46:34 np0005544501 systemd[1]: Reloading Network Manager...
Dec  3 12:46:34 np0005544501 NetworkManager[49087]: <info>  [1764783994.6719] audit: op="reload" arg="0" pid=53137 uid=0 result="success"
Dec  3 12:46:34 np0005544501 NetworkManager[49087]: <info>  [1764783994.6727] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Dec  3 12:46:34 np0005544501 systemd[1]: Reloaded Network Manager.
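[annotation] The ansible.builtin.systemd task with state=reloaded maps to a plain service reload: NetworkManager re-reads NetworkManager.conf and the conf.d drop-ins listed in the SIGHUP line above without restarting or touching active connections. Equivalent manual commands (standard tooling, not taken from this log):

    systemctl reload NetworkManager
    nmcli general reload    # finer-grained reload of NM configuration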
Dec  3 12:46:34 np0005544501 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec  3 12:46:35 np0005544501 systemd[1]: session-10.scope: Deactivated successfully.
Dec  3 12:46:35 np0005544501 systemd[1]: session-10.scope: Consumed 49.189s CPU time.
Dec  3 12:46:35 np0005544501 systemd-logind[784]: Session 10 logged out. Waiting for processes to exit.
Dec  3 12:46:35 np0005544501 systemd-logind[784]: Removed session 10.
Dec  3 12:46:40 np0005544501 systemd-logind[784]: New session 11 of user zuul.
Dec  3 12:46:40 np0005544501 systemd[1]: Started Session 11 of User zuul.
Dec  3 12:46:41 np0005544501 python3.9[53323]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 12:46:42 np0005544501 python3.9[53477]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  3 12:46:43 np0005544501 python3.9[53670]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 12:46:44 np0005544501 systemd[1]: session-11.scope: Deactivated successfully.
Dec  3 12:46:44 np0005544501 systemd[1]: session-11.scope: Consumed 2.382s CPU time.
Dec  3 12:46:44 np0005544501 systemd-logind[784]: Session 11 logged out. Waiting for processes to exit.
Dec  3 12:46:44 np0005544501 systemd-logind[784]: Removed session 11.
Dec  3 12:46:44 np0005544501 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  3 12:46:49 np0005544501 systemd-logind[784]: New session 12 of user zuul.
Dec  3 12:46:49 np0005544501 systemd[1]: Started Session 12 of User zuul.
Dec  3 12:46:50 np0005544501 python3.9[53853]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 12:46:51 np0005544501 python3.9[54007]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 12:46:52 np0005544501 python3.9[54163]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  3 12:46:53 np0005544501 python3.9[54247]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  3 12:46:55 np0005544501 python3.9[54401]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  3 12:46:57 np0005544501 python3.9[54597]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:46:58 np0005544501 python3.9[54749]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 12:46:58 np0005544501 systemd[1]: var-lib-containers-storage-overlay-compat2702562301-merged.mount: Deactivated successfully.
Dec  3 12:46:58 np0005544501 podman[54750]: 2025-12-03 17:46:58.611306422 +0000 UTC m=+0.095300495 system refresh
Dec  3 12:46:59 np0005544501 python3.9[54911]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:46:59 np0005544501 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  3 12:47:00 np0005544501 python3.9[55034]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764784018.8225963-79-132486718691921/.source.json follow=False _original_basename=podman_network_config.j2 checksum=7e74a5c2b537fab1269bd168e93d4e97898e95e0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:47:01 np0005544501 python3.9[55186]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:47:01 np0005544501 python3.9[55309]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764784020.4598908-94-192186217589914/.source.conf follow=False _original_basename=registries.conf.j2 checksum=ac70d66c4b5ab2334ac0357b84986ea734e0f27b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  3 12:47:02 np0005544501 python3.9[55461]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  3 12:47:03 np0005544501 python3.9[55613]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  3 12:47:03 np0005544501 python3.9[55765]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  3 12:47:04 np0005544501 python3.9[55917]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
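[annotation] The four community.general.ini_file invocations above build /etc/containers/containers.conf incrementally. Decoding the logged section/option/value triples, the managed portion of the file should end up approximately as follows (exact spacing assumed; the quotes are part of the logged values):

    [containers]
    pids_limit = 4096

    [engine]
    events_logger = "journald"
    runtime = "crun"

    [network]
    network_backend = "netavark"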
Dec  3 12:47:05 np0005544501 python3.9[56070]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  3 12:47:07 np0005544501 irqbalance[782]: Cannot change IRQ 26 affinity: Operation not permitted
Dec  3 12:47:07 np0005544501 irqbalance[782]: IRQ 26 affinity is now unmanaged
Dec  3 12:47:07 np0005544501 python3.9[56226]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 12:47:08 np0005544501 python3.9[56380]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 12:47:09 np0005544501 python3.9[56534]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 12:47:09 np0005544501 python3.9[56686]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 12:47:10 np0005544501 python3.9[56840]: ansible-service_facts Invoked
Dec  3 12:47:10 np0005544501 network[56858]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  3 12:47:10 np0005544501 network[56859]: 'network-scripts' will be removed from distribution in near future.
Dec  3 12:47:10 np0005544501 network[56860]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  3 12:47:15 np0005544501 python3.9[57314]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  3 12:47:18 np0005544501 python3.9[57467]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Dec  3 12:47:20 np0005544501 python3.9[57619]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:47:20 np0005544501 python3.9[57744]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764784039.5357325-238-11770249095935/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:47:21 np0005544501 python3.9[57898]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:47:22 np0005544501 python3.9[58023]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764784040.9720738-253-211001428882100/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:47:23 np0005544501 python3.9[58177]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
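[annotation] The lineinfile task above enforces a single line in /etc/sysconfig/network, creating the file if needed and keeping a backup:

    PEERNTP=no

This stops NTP servers learned from DHCP from being handed to chronyd, so only the servers in the /etc/chrony.conf deployed just before are used.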
Dec  3 12:47:24 np0005544501 python3.9[58331]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  3 12:47:25 np0005544501 python3.9[58415]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 12:47:26 np0005544501 python3.9[58569]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  3 12:47:27 np0005544501 python3.9[58653]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  3 12:47:27 np0005544501 systemd[1]: Stopping NTP client/server...
Dec  3 12:47:27 np0005544501 chronyd[792]: chronyd exiting
Dec  3 12:47:27 np0005544501 systemd[1]: chronyd.service: Deactivated successfully.
Dec  3 12:47:27 np0005544501 systemd[1]: Stopped NTP client/server.
Dec  3 12:47:27 np0005544501 systemd[1]: Starting NTP client/server...
Dec  3 12:47:27 np0005544501 chronyd[58662]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec  3 12:47:27 np0005544501 chronyd[58662]: Frequency -24.142 +/- 0.315 ppm read from /var/lib/chrony/drift
Dec  3 12:47:27 np0005544501 chronyd[58662]: Loaded seccomp filter (level 2)
Dec  3 12:47:27 np0005544501 systemd[1]: Started NTP client/server.
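[annotation] chronyd is deliberately restarted (state=restarted) so it picks up the freshly written /etc/chrony.conf and /etc/sysconfig/chronyd; the drift file gives it an immediate frequency estimate (-24.142 ppm above). A quick health check after such a restart, using standard chrony tooling (not part of this log):

    chronyc tracking     # offset, stratum and current reference
    chronyc sources -v   # configured servers and their reachability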
Dec  3 12:47:28 np0005544501 systemd[1]: session-12.scope: Deactivated successfully.
Dec  3 12:47:28 np0005544501 systemd[1]: session-12.scope: Consumed 26.298s CPU time.
Dec  3 12:47:28 np0005544501 systemd-logind[784]: Session 12 logged out. Waiting for processes to exit.
Dec  3 12:47:28 np0005544501 systemd-logind[784]: Removed session 12.
Dec  3 12:47:33 np0005544501 systemd-logind[784]: New session 13 of user zuul.
Dec  3 12:47:33 np0005544501 systemd[1]: Started Session 13 of User zuul.
Dec  3 12:47:34 np0005544501 python3.9[58843]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:47:35 np0005544501 python3.9[58995]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:47:35 np0005544501 python3.9[59118]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764784054.417422-34-135953963414695/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:47:36 np0005544501 systemd[1]: session-13.scope: Deactivated successfully.
Dec  3 12:47:36 np0005544501 systemd[1]: session-13.scope: Consumed 1.792s CPU time.
Dec  3 12:47:36 np0005544501 systemd-logind[784]: Session 13 logged out. Waiting for processes to exit.
Dec  3 12:47:36 np0005544501 systemd-logind[784]: Removed session 13.
Dec  3 12:47:41 np0005544501 systemd-logind[784]: New session 14 of user zuul.
Dec  3 12:47:41 np0005544501 systemd[1]: Started Session 14 of User zuul.
Dec  3 12:47:42 np0005544501 python3.9[59296]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 12:47:43 np0005544501 python3.9[59452]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:47:44 np0005544501 python3.9[59627]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:47:45 np0005544501 python3.9[59750]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1764784063.8148515-41-93423216199726/.source.json _original_basename=.ij7teqpu follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:47:46 np0005544501 python3.9[59902]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:47:46 np0005544501 python3.9[60025]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764784065.604995-64-94096608880024/.source _original_basename=.fq5wq6gu follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:47:47 np0005544501 python3.9[60179]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  3 12:47:48 np0005544501 python3.9[60331]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:47:48 np0005544501 python3.9[60454]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764784067.4929187-88-229326134499627/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  3 12:47:49 np0005544501 python3.9[60606]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:47:49 np0005544501 python3.9[60729]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764784068.8726394-88-156031261963424/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  3 12:47:50 np0005544501 python3.9[60881]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:47:51 np0005544501 python3.9[61033]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:47:52 np0005544501 python3.9[61156]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764784070.8838532-125-70448778370980/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:47:52 np0005544501 python3.9[61308]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:47:53 np0005544501 python3.9[61431]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764784072.2430627-140-25794920248246/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:47:54 np0005544501 python3.9[61583]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 12:47:54 np0005544501 systemd[1]: Reloading.
Dec  3 12:47:54 np0005544501 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 12:47:54 np0005544501 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 12:47:54 np0005544501 systemd[1]: Reloading.
Dec  3 12:47:54 np0005544501 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 12:47:54 np0005544501 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 12:47:54 np0005544501 systemd[1]: Starting EDPM Container Shutdown...
Dec  3 12:47:54 np0005544501 systemd[1]: Finished EDPM Container Shutdown.
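[annotation] The unit file plus 91-edpm-container-shutdown.preset pair is the usual systemd preset pattern: the preset file declares the wanted enablement state so `systemctl preset` (or image builds) can apply it later. Only the preset's checksum is logged; its content is presumably a single directive along the lines of:

    enable edpm-container-shutdown.service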
Dec  3 12:47:55 np0005544501 python3.9[61811]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:47:56 np0005544501 python3.9[61934]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764784075.002456-163-104745634424451/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:47:56 np0005544501 python3.9[62086]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:47:57 np0005544501 python3.9[62209]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764784076.3554637-178-260670583192686/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:47:58 np0005544501 python3.9[62361]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 12:47:58 np0005544501 systemd[1]: Reloading.
Dec  3 12:47:58 np0005544501 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 12:47:58 np0005544501 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 12:47:58 np0005544501 systemd[1]: Reloading.
Dec  3 12:47:58 np0005544501 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 12:47:58 np0005544501 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 12:47:58 np0005544501 systemd[1]: Starting Create netns directory...
Dec  3 12:47:58 np0005544501 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec  3 12:47:58 np0005544501 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec  3 12:47:58 np0005544501 systemd[1]: Finished Create netns directory.
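[annotation] netns-placeholder.service is a oneshot whose only visible effect here is the transient run-netns-placeholder.mount above. That pattern is consistent with creating and immediately removing a throwaway namespace so /run/netns exists with the mount propagation containers expect, e.g. (assumed mechanism, not shown in the log):

    ip netns add placeholder     # creates /run/netns and the placeholder nsfs mount
    ip netns delete placeholder  # removes the namespace; /run/netns remains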
Dec  3 12:47:59 np0005544501 python3.9[62586]: ansible-ansible.builtin.service_facts Invoked
Dec  3 12:47:59 np0005544501 network[62603]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  3 12:47:59 np0005544501 network[62604]: 'network-scripts' will be removed from distribution in near future.
Dec  3 12:47:59 np0005544501 network[62605]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  3 12:48:04 np0005544501 python3.9[62867]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 12:48:04 np0005544501 systemd[1]: Reloading.
Dec  3 12:48:04 np0005544501 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 12:48:04 np0005544501 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 12:48:04 np0005544501 systemd[1]: Stopping IPv4 firewall with iptables...
Dec  3 12:48:04 np0005544501 iptables.init[62907]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Dec  3 12:48:04 np0005544501 iptables.init[62907]: iptables: Flushing firewall rules: [  OK  ]
Dec  3 12:48:04 np0005544501 systemd[1]: iptables.service: Deactivated successfully.
Dec  3 12:48:04 np0005544501 systemd[1]: Stopped IPv4 firewall with iptables.
Dec  3 12:48:05 np0005544501 python3.9[63103]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 12:48:06 np0005544501 python3.9[63257]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 12:48:06 np0005544501 systemd[1]: Reloading.
Dec  3 12:48:06 np0005544501 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 12:48:06 np0005544501 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 12:48:06 np0005544501 systemd[1]: Starting Netfilter Tables...
Dec  3 12:48:06 np0005544501 systemd[1]: Finished Netfilter Tables.
Dec  3 12:48:07 np0005544501 python3.9[63449]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 12:48:08 np0005544501 python3.9[63602]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:48:09 np0005544501 python3.9[63727]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764784088.0204515-247-129222664615636/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:48:10 np0005544501 python3.9[63880]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  3 12:48:10 np0005544501 systemd[1]: Reloading OpenSSH server daemon...
Dec  3 12:48:10 np0005544501 systemd[1]: Reloaded OpenSSH server daemon.
Dec  3 12:48:10 np0005544501 python3.9[64036]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:48:11 np0005544501 python3.9[64188]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:48:12 np0005544501 python3.9[64311]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764784091.0255053-278-78320898034001/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:48:13 np0005544501 python3.9[64463]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec  3 12:48:13 np0005544501 systemd[1]: Starting Time & Date Service...
Dec  3 12:48:13 np0005544501 systemd[1]: Started Time & Date Service.
Dec  3 12:48:14 np0005544501 python3.9[64619]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:48:14 np0005544501 python3.9[64771]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:48:15 np0005544501 python3.9[64894]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764784094.2411067-313-86524070323733/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:48:16 np0005544501 python3.9[65046]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:48:16 np0005544501 python3.9[65169]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764784095.5643573-328-13375078472439/.source.yaml _original_basename=.5c2jhz1s follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:48:17 np0005544501 python3.9[65321]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:48:17 np0005544501 python3.9[65444]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764784096.8666146-343-34313582731276/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:48:18 np0005544501 python3.9[65596]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 12:48:19 np0005544501 python3.9[65749]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 12:48:20 np0005544501 python3[65902]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  3 12:48:20 np0005544501 python3.9[66054]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:48:21 np0005544501 python3.9[66177]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764784100.2151217-382-237171399547786/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:48:21 np0005544501 python3.9[66329]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:48:22 np0005544501 python3.9[66452]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764784101.4757166-397-178463936064019/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:48:23 np0005544501 python3.9[66604]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:48:23 np0005544501 python3.9[66727]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764784102.7227614-412-5430005783917/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:48:24 np0005544501 python3.9[66879]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:48:24 np0005544501 python3.9[67002]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764784103.8801281-427-260699119106725/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:48:25 np0005544501 python3.9[67154]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:48:26 np0005544501 python3.9[67277]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764784104.9858277-442-92112475942734/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:48:26 np0005544501 python3.9[67429]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:48:27 np0005544501 python3.9[67581]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 12:48:28 np0005544501 python3.9[67740]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
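[annotation] In the blockinfile parameters above, #012 is rsyslog's escape for a newline. The block therefore appends the following include set to /etc/sysconfig/nftables.conf, validated with `nft -c -f %s` before the write and loaded on the next nftables.service start; the same file set was syntax-checked as one stream by the `cat ... | nft -c -f -` pipeline two entries earlier:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK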
Dec  3 12:48:29 np0005544501 python3.9[67893]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:48:29 np0005544501 python3.9[68045]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:48:30 np0005544501 python3.9[68197]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec  3 12:48:31 np0005544501 python3.9[68350]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
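[annotation] The two ansible.posix.mount tasks with state=mounted mount the hugetlbfs instances now and, because boot=True, persist them. The equivalent manual commands, and the fstab entries the tasks should leave behind:

    mount -t hugetlbfs -o pagesize=1G none /dev/hugepages1G
    mount -t hugetlbfs -o pagesize=2M none /dev/hugepages2M

    # /etc/fstab
    none /dev/hugepages1G hugetlbfs pagesize=1G 0 0
    none /dev/hugepages2M hugetlbfs pagesize=2M 0 0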
Dec  3 12:48:31 np0005544501 systemd[1]: session-14.scope: Deactivated successfully.
Dec  3 12:48:31 np0005544501 systemd[1]: session-14.scope: Consumed 35.725s CPU time.
Dec  3 12:48:31 np0005544501 systemd-logind[784]: Session 14 logged out. Waiting for processes to exit.
Dec  3 12:48:31 np0005544501 systemd-logind[784]: Removed session 14.
Dec  3 12:48:36 np0005544501 systemd-logind[784]: New session 15 of user zuul.
Dec  3 12:48:36 np0005544501 systemd[1]: Started Session 15 of User zuul.
Dec  3 12:48:37 np0005544501 python3.9[68531]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Dec  3 12:48:38 np0005544501 python3.9[68683]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 12:48:39 np0005544501 python3.9[68835]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 12:48:40 np0005544501 python3.9[68987]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDbjCJvulNQFoK9EDJkYMUpjmLdBbv8twOlkjrDwwZXYa6o4yvBrNIfIb2p0fEoMZ3HrJqI70KDITEDyHkMCLZqAu0qoue0ESJm+cxnuWsei974RYQSC2dZp3FAlkkh8Oe/2ShyNhNO4fZ436DKHDqAEgh6Bkfsk2rbZY/QAeeXXXePZzl9fpjUyRwOf5zf7+NTY1S6IQ8sPho08YY9ikbkKKxy8ioiyRSxsMIZFq7aM/jI++GFUMUVBkAWz0n9mywg2Z05glbO6YyrTLuEb9EBnFtwzYTbAIr9cZxyW7klLru4WvKK+gDPOOE/g0RW66n1JSCQ/HOG4uumVR7ivXMJu3+K/pqdXiq7MQuS8NCE/RRQagh9u793DJ12Q/tnyJENOsYmWzvEb0xUud56cPvxZG2uRremuHxuBADGFkiui9Hjb4Obw/9nMsNA/58q5wEX1YqYXilc35uhV2xfS9odHTReQGfFNOoAObFcXQAVrzbltLo7RBgnW7vwTOfXqdc=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDga0dlcKFfXn6U1UHkHIyOLqdO6IBiPXa8xAuL28XMM#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGby1UAcqXsYegna+DufYvoZLrvWEcwQSpfRsN2Eer8IseIipIrVobBbBXr8E3TSR8/RubLA6TojHG2/nfFshtg=#012 create=True mode=0644 path=/tmp/ansible.7gdk04oi state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:48:41 np0005544501 python3.9[69139]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.7gdk04oi' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 12:48:42 np0005544501 python3.9[69293]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.7gdk04oi state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
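Session 15 is an atomic known_hosts rollout: tempfile creates /tmp/ansible.7gdk04oi, blockinfile writes the three host keys into it (the #012 sequences in that entry are journald's escape for embedded newlines), the shell task copies it over /etc/ssh/ssh_known_hosts, and the final file task removes the temp file. Decoded, the temp file holds one line per key type inside the ansible markers; the last two steps are literally:

    # /tmp/ansible.7gdk04oi contents (keys abbreviated here):
    #   # BEGIN ANSIBLE MANAGED BLOCK
    #   compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3Nza...
    #   compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3Nza...
    #   compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2Vj...
    #   # END ANSIBLE MANAGED BLOCK
    cat '/tmp/ansible.7gdk04oi' > /etc/ssh/ssh_known_hosts
    rm -f /tmp/ansible.7gdk04oi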
Dec  3 12:48:42 np0005544501 systemd[1]: session-15.scope: Deactivated successfully.
Dec  3 12:48:42 np0005544501 systemd[1]: session-15.scope: Consumed 3.357s CPU time.
Dec  3 12:48:42 np0005544501 systemd-logind[784]: Session 15 logged out. Waiting for processes to exit.
Dec  3 12:48:42 np0005544501 systemd-logind[784]: Removed session 15.
Dec  3 12:48:43 np0005544501 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec  3 12:48:47 np0005544501 systemd-logind[784]: New session 16 of user zuul.
Dec  3 12:48:47 np0005544501 systemd[1]: Started Session 16 of User zuul.
Dec  3 12:48:48 np0005544501 python3.9[69473]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 12:48:50 np0005544501 python3.9[69629]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec  3 12:48:51 np0005544501 python3.9[69783]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  3 12:48:51 np0005544501 python3.9[69936]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 12:48:52 np0005544501 python3.9[70089]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 12:48:53 np0005544501 python3.9[70243]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 12:48:54 np0005544501 python3.9[70398]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
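The stat / pipeline / file-absent triple above is a change-gated reload: earlier steps touch /etc/nftables/edpm-rules.nft.changed whenever the rule files are rewritten, and this handler reapplies the ruleset only while that marker exists, then removes it. The same logic as a shell sketch, with the file layout from the log:

    # reload EDPM nftables rules only when the marker says they changed
    nft -f /etc/nftables/edpm-chains.nft    # chains are (re)loaded unconditionally
    if [ -e /etc/nftables/edpm-rules.nft.changed ]; then
        cat /etc/nftables/edpm-flushes.nft \
            /etc/nftables/edpm-rules.nft \
            /etc/nftables/edpm-update-jumps.nft | nft -f -
        rm -f /etc/nftables/edpm-rules.nft.changed
    fi

The same sequence runs again at 12:50:24-12:50:26 below, after the rule files have been regenerated.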
Dec  3 12:48:54 np0005544501 systemd-logind[784]: Session 16 logged out. Waiting for processes to exit.
Dec  3 12:48:54 np0005544501 systemd[1]: session-16.scope: Deactivated successfully.
Dec  3 12:48:54 np0005544501 systemd[1]: session-16.scope: Consumed 4.306s CPU time.
Dec  3 12:48:54 np0005544501 systemd-logind[784]: Removed session 16.
Dec  3 12:49:00 np0005544501 systemd-logind[784]: New session 17 of user zuul.
Dec  3 12:49:00 np0005544501 systemd[1]: Started Session 17 of User zuul.
Dec  3 12:49:01 np0005544501 python3.9[70576]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 12:49:02 np0005544501 python3.9[70732]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  3 12:49:03 np0005544501 python3.9[70816]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  3 12:49:05 np0005544501 python3.9[70968]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 12:49:06 np0005544501 python3.9[71119]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
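needs-restarting -r comes from yum-utils (installed two tasks earlier) and reports through its exit status whether the running kernel or core services predate updated packages; the find that follows checks for reboot flags dropped by other roles under /var/lib/openstack/reboot_required/. For example:

    needs-restarting -r                      # exit 0: no reboot needed, exit 1: reboot required
    echo "exit status: $?"
    ls /var/lib/openstack/reboot_required/   # any file here is a pending reboot request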
Dec  3 12:49:07 np0005544501 python3.9[71269]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 12:49:07 np0005544501 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  3 12:49:07 np0005544501 python3.9[71420]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 12:49:08 np0005544501 systemd[1]: session-17.scope: Deactivated successfully.
Dec  3 12:49:08 np0005544501 systemd[1]: session-17.scope: Consumed 5.857s CPU time.
Dec  3 12:49:08 np0005544501 systemd-logind[784]: Session 17 logged out. Waiting for processes to exit.
Dec  3 12:49:08 np0005544501 systemd-logind[784]: Removed session 17.
Dec  3 12:49:14 np0005544501 systemd-logind[784]: New session 18 of user zuul.
Dec  3 12:49:14 np0005544501 systemd[1]: Started Session 18 of User zuul.
Dec  3 12:49:15 np0005544501 python3.9[71598]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 12:49:17 np0005544501 python3.9[71754]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 12:49:18 np0005544501 python3.9[71906]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 12:49:18 np0005544501 python3.9[72058]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:49:19 np0005544501 python3.9[72181]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764784158.290212-65-139779405937334/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=40d43768bc9dda05b4bd2037ccaec1cdee3052b7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:49:20 np0005544501 python3.9[72333]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:49:20 np0005544501 python3.9[72456]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764784159.9140646-65-83897483161146/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=1b590807379be7eeb07009f0b799c5da4a28336a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:49:21 np0005544501 python3.9[72608]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:49:22 np0005544501 python3.9[72731]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764784161.145152-65-189233687164878/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=eb8c9c86192382832937a3e0a45cfed8cd538650 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
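The block above establishes the per-service TLS layout that repeats below for telemetry-power-monitoring, libvirt, and ovn: create a container_file_t directory, then for each of tls.crt, ca.crt, and tls.key run legacy.stat followed by legacy.copy, so a file is only rewritten when its SHA-1 differs. A rough shell equivalent for one service (the staging paths are illustrative; the real sources are ansible-tmp files pushed from the controller):

    svc=/var/lib/openstack/certs/telemetry/default
    install -d -m 0755 -o root -g root "$svc"
    chcon -t container_file_t "$svc"    # setype=container_file_t from the file task
    for f in tls.crt ca.crt tls.key; do
        # unconditional copy here; the stat+copy pair copies only on checksum change
        install -m 0600 -o root -g root "/tmp/staging/$f" "$svc/$f"
    done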
Dec  3 12:49:22 np0005544501 python3.9[72883]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 12:49:23 np0005544501 python3.9[73035]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 12:49:24 np0005544501 python3.9[73187]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:49:24 np0005544501 python3.9[73310]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764784163.8050668-124-156365519621155/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=107842184305d2ac1da1b6e690608001b05f0a70 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:49:25 np0005544501 python3.9[73462]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:49:26 np0005544501 python3.9[73585]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764784165.0118268-124-137626365315740/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=1b590807379be7eeb07009f0b799c5da4a28336a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:49:26 np0005544501 python3.9[73737]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:49:27 np0005544501 python3.9[73860]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764784166.255327-124-269571728276266/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=c90ad9656b9f3a2fcfd13292174b8fdc2f5b0f14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:49:28 np0005544501 python3.9[74012]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 12:49:28 np0005544501 python3.9[74164]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 12:49:29 np0005544501 python3.9[74316]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:49:30 np0005544501 python3.9[74439]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764784169.1239276-183-55417611609436/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=b65af13daac3a7d1338388ace10588642d60c6a1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:49:30 np0005544501 python3.9[74591]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:49:31 np0005544501 python3.9[74714]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764784170.385311-183-51089356151255/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=3a2b781ee48dc8fc2242f1fff410e8dd1d23c546 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:49:32 np0005544501 python3.9[74868]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:49:32 np0005544501 python3.9[74991]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764784171.6277056-183-262101366410771/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=0ea6e5e3b444a52afbef14948862ef1045544dab backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:49:33 np0005544501 python3.9[75143]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 12:49:33 np0005544501 python3.9[75295]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 12:49:34 np0005544501 python3.9[75447]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:49:35 np0005544501 python3.9[75570]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764784174.169395-242-98326406004746/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=48b577caf6639ecd5c0cff64b083a6a46fc065b8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:49:35 np0005544501 python3.9[75722]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:49:36 np0005544501 python3.9[75846]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764784175.3355248-242-149487109454355/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=51aae6c674a856b81951449221fef5c7d3abbd8c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:49:36 np0005544501 chronyd[58662]: Selected source 206.108.0.133 (pool.ntp.org)
Dec  3 12:49:37 np0005544501 python3.9[75998]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:49:37 np0005544501 python3.9[76121]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764784176.776389-242-119514210839019/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=eb67b627838da5b799d00a8209569ef7237e9873 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:49:38 np0005544501 python3.9[76273]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 12:49:39 np0005544501 python3.9[76425]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:49:40 np0005544501 python3.9[76548]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764784179.154105-310-45385456353927/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=3ad59a59e9b9a4b3219c10debf9b016d113b8fe9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:49:40 np0005544501 python3.9[76700]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 12:49:41 np0005544501 python3.9[76852]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:49:42 np0005544501 python3.9[76975]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764784181.1655278-334-167288689688104/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=3ad59a59e9b9a4b3219c10debf9b016d113b8fe9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:49:43 np0005544501 python3.9[77127]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 12:49:43 np0005544501 python3.9[77279]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:49:44 np0005544501 python3.9[77402]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764784183.311081-358-32739295953483/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=3ad59a59e9b9a4b3219c10debf9b016d113b8fe9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:49:45 np0005544501 python3.9[77554]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 12:49:45 np0005544501 python3.9[77706]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:49:46 np0005544501 python3.9[77829]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764784185.3724883-382-169860896048880/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=3ad59a59e9b9a4b3219c10debf9b016d113b8fe9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:49:47 np0005544501 python3.9[77981]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 12:49:47 np0005544501 python3.9[78133]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:49:48 np0005544501 python3.9[78256]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764784187.4450138-406-76738465751067/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=3ad59a59e9b9a4b3219c10debf9b016d113b8fe9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:49:49 np0005544501 python3.9[78408]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry-power-monitoring setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 12:49:49 np0005544501 python3.9[78560]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:49:50 np0005544501 python3.9[78683]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764784189.311619-430-214407979260587/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=3ad59a59e9b9a4b3219c10debf9b016d113b8fe9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
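All six tls-ca-bundle.pem copies above report the same source checksum (3ad59a59…), so every service directory under /var/lib/openstack/cacerts/ receives an identical CA bundle; only the per-container mount point differs. This is easy to verify on the host:

    # every per-service bundle should hash identically
    sha1sum /var/lib/openstack/cacerts/*/tls-ca-bundle.pem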
Dec  3 12:49:50 np0005544501 systemd[1]: session-18.scope: Deactivated successfully.
Dec  3 12:49:50 np0005544501 systemd[1]: session-18.scope: Consumed 27.479s CPU time.
Dec  3 12:49:50 np0005544501 systemd-logind[784]: Session 18 logged out. Waiting for processes to exit.
Dec  3 12:49:50 np0005544501 systemd-logind[784]: Removed session 18.
Dec  3 12:49:55 np0005544501 systemd-logind[784]: New session 19 of user zuul.
Dec  3 12:49:55 np0005544501 systemd[1]: Started Session 19 of User zuul.
Dec  3 12:49:56 np0005544501 python3.9[78861]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 12:49:58 np0005544501 python3.9[79017]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 12:49:58 np0005544501 python3.9[79169]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  3 12:49:59 np0005544501 python3.9[79319]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 12:50:00 np0005544501 python3.9[79471]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
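ansible.posix.seboolean with persistent=True is the module form of setsebool -P, and the dbus-broker avc entry that follows (op=load_policy ... res=1) is consistent with the policy rebuild a persistent boolean change triggers. Equivalent commands:

    setsebool -P virt_sandbox_use_netlink on   # persist across reboots
    getsebool virt_sandbox_use_netlink         # verify: "... --> on"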
Dec  3 12:50:03 np0005544501 dbus-broker-launch[771]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Dec  3 12:50:03 np0005544501 python3.9[79627]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  3 12:50:04 np0005544501 python3.9[79712]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  3 12:50:07 np0005544501 python3.9[79865]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  3 12:50:08 np0005544501 python3[80020]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
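The snippet content in the entry above is YAML with its newlines journald-escaped as #012. Expanded, /var/lib/edpm-config/firewall/ovn.yaml contains four rules: open UDP 4789 (VXLAN) and UDP 6081 (Geneve, untracked), plus two raw-table NOTRACK rules so Geneve traffic bypasses conntrack in both directions:

    - rule_name: 118 neutron vxlan networks
      rule:
        proto: udp
        dport: 4789
    - rule_name: 119 neutron geneve networks
      rule:
        proto: udp
        dport: 6081
        state: ["UNTRACKED"]
    - rule_name: 120 neutron geneve networks no conntrack
      rule:
        proto: udp
        dport: 6081
        table: raw
        chain: OUTPUT
        jump: NOTRACK
        action: append
        state: []
    - rule_name: 121 neutron geneve networks no conntrack
      rule:
        proto: udp
        dport: 6081
        table: raw
        chain: PREROUTING
        jump: NOTRACK
        action: append
        state: []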
Dec  3 12:50:09 np0005544501 python3.9[80172]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:50:10 np0005544501 python3.9[80324]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:50:10 np0005544501 python3.9[80402]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:50:11 np0005544501 python3.9[80554]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:50:11 np0005544501 python3.9[80633]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.loluohfr recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:50:12 np0005544501 python3.9[80785]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:50:12 np0005544501 python3.9[80863]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:50:13 np0005544501 python3.9[81015]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 12:50:14 np0005544501 python3[81168]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  3 12:50:15 np0005544501 python3.9[81320]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:50:15 np0005544501 python3.9[81445]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764784214.721524-157-59161019077187/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:50:16 np0005544501 python3.9[81597]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:50:17 np0005544501 python3.9[81722]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764784216.1272135-172-96738186024262/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:50:18 np0005544501 python3.9[81874]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:50:19 np0005544501 python3.9[81999]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764784218.0278761-187-242284969612203/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:50:19 np0005544501 python3.9[82151]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:50:20 np0005544501 python3.9[82276]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764784219.2637322-202-43253688886403/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:50:21 np0005544501 python3.9[82428]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:50:21 np0005544501 python3.9[82553]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764784220.5466504-217-202917650901413/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:50:22 np0005544501 python3.9[82705]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:50:23 np0005544501 python3.9[82857]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 12:50:24 np0005544501 python3.9[83012]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
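The blockinfile call above appends the EDPM includes to /etc/sysconfig/nftables.conf, validating the result with nft -c -f %s before writing it. With the #012 escapes expanded, the managed block reads:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK

Note that edpm-flushes.nft and edpm-update-jumps.nft are left out: they are runtime-only helpers, so a boot-time load applies chains, rules, and jumps directly.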
Dec  3 12:50:24 np0005544501 python3.9[83164]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 12:50:25 np0005544501 python3.9[83317]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 12:50:26 np0005544501 python3.9[83471]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 12:50:26 np0005544501 python3.9[83626]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
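Taken together, the 12:50:15-12:50:26 entries are the generated ruleset's write / check / activate cycle: render the five .nft fragments, dry-run the complete set, persist the boot-time includes, then load chains and, because the .changed marker exists, flush and reapply the rules. Condensed to the shell that actually ran:

    cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft | nft -c -f -     # -c: syntax check only
    nft -f /etc/nftables/edpm-chains.nft               # ensure chains exist
    cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft | nft -f - # flush, reapply, repoint jumps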
Dec  3 12:50:28 np0005544501 python3.9[83776]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 12:50:29 np0005544501 python3.9[83929]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:2e:0a:f2:93:49:d5" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 12:50:29 np0005544501 ovs-vsctl[83930]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:2e:0a:f2:93:49:d5 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
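These external_ids wire the chassis into OVN: Geneve encapsulation from local endpoint 172.19.0.100, bridge mapping datacentre:br-ex, and the southbound database at ssl:ovsdbserver-sb.openstack.svc:6642 (hence the CA and client certificates staged under /var/lib/openstack/certs/ovn/ earlier). The applied values can be checked with:

    ovs-vsctl get Open_vSwitch . external_ids              # full map
    ovs-vsctl get Open_vSwitch . external_ids:ovn-remote   # a single key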
Dec  3 12:50:29 np0005544501 python3.9[84082]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 12:50:30 np0005544501 python3.9[84237]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 12:50:30 np0005544501 ovs-vsctl[84238]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
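The 12:50:30 python entry is partially masked (ptcp:********@manager): ansible's log sanitizer appears to read the host:port@id sequence as credentials embedded in a URL. ovs-vsctl's own log line above preserves the full command, which (after the "Manager" probe two entries earlier evidently came up empty) registers a passive TCP manager on 127.0.0.1:6640 so local clients can reach the switch database:

    ovs-vsctl --timeout=5 --id=@manager -- \
        create Manager 'target="ptcp:6640:127.0.0.1"' -- \
        add Open_vSwitch . manager_options @manager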
Dec  3 12:50:31 np0005544501 python3.9[84388]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 12:50:31 np0005544501 python3.9[84542]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  3 12:50:32 np0005544501 python3.9[84694]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:50:32 np0005544501 python3.9[84772]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 12:50:33 np0005544501 python3.9[84924]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:50:33 np0005544501 python3.9[85002]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 12:50:34 np0005544501 python3.9[85154]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:50:35 np0005544501 python3.9[85306]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:50:35 np0005544501 python3.9[85384]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:50:36 np0005544501 python3.9[85536]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:50:36 np0005544501 python3.9[85614]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:50:37 np0005544501 python3.9[85766]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 12:50:37 np0005544501 systemd[1]: Reloading.
Dec  3 12:50:37 np0005544501 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 12:50:37 np0005544501 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 12:50:38 np0005544501 python3.9[85956]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:50:38 np0005544501 python3.9[86034]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:50:39 np0005544501 python3.9[86186]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:50:40 np0005544501 python3.9[86264]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:50:40 np0005544501 python3.9[86416]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 12:50:40 np0005544501 systemd[1]: Reloading.
Dec  3 12:50:40 np0005544501 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 12:50:40 np0005544501 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 12:50:41 np0005544501 systemd[1]: Starting Create netns directory...
Dec  3 12:50:41 np0005544501 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec  3 12:50:41 np0005544501 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec  3 12:50:41 np0005544501 systemd[1]: Finished Create netns directory.
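edpm-container-shutdown and netns-placeholder follow the same install pattern: a unit file under /etc/systemd/system, a matching 91-*.preset under /etc/systemd/system-preset, then a daemon-reload and start (the two "Reloading." entries above). Roughly, for one unit (the preset content is an assumption; only its path and mode appear in the log):

    install -m 0644 netns-placeholder.service /etc/systemd/system/
    printf 'enable netns-placeholder.service\n' \
        > /etc/systemd/system-preset/91-netns-placeholder.preset   # assumed content
    systemctl daemon-reload
    systemctl enable --now netns-placeholder.service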
Dec  3 12:50:41 np0005544501 python3.9[86610]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 12:50:42 np0005544501 python3.9[86762]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:50:42 np0005544501 python3.9[86885]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764784241.9772131-468-23897251255690/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  3 12:50:43 np0005544501 python3.9[87037]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  3 12:50:44 np0005544501 python3.9[87189]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:50:45 np0005544501 python3.9[87312]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764784244.0406315-493-41691579501616/.source.json _original_basename=.ecsrrv7x follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:50:45 np0005544501 python3.9[87464]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:50:47 np0005544501 python3.9[87891]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Dec  3 12:50:48 np0005544501 python3.9[88043]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
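ansible-container_config_hash fingerprints the rendered configuration (under config_vol_prefix=/var/lib/config-data) so that a changed config triggers container recreation on the next run. The module's exact algorithm and output format are not shown in the log; a minimal sketch of the idea, hashing every *.json file in a config directory in a stable order:

    import hashlib
    from pathlib import Path

    def config_hash(config_dir: str) -> str:
        """Order-independent digest over all *.json configs in a directory.
        A sketch of the idea only; not the module's actual algorithm."""
        h = hashlib.sha256()
        for path in sorted(Path(config_dir).glob("*.json")):
            h.update(path.name.encode())
            h.update(path.read_bytes())
        return h.hexdigest()

    print(config_hash(
        "/var/lib/edpm-config/container-startup-config/ovn_controller"))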
Dec  3 12:50:49 np0005544501 python3.9[88195]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec  3 12:50:49 np0005544501 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  3 12:50:50 np0005544501 python3[88358]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec  3 12:50:51 np0005544501 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  3 12:50:53 np0005544501 systemd[1]: var-lib-containers-storage-overlay-compat2996547537-lower\x2dmapped.mount: Deactivated successfully.
Dec  3 12:50:58 np0005544501 podman[88370]: 2025-12-03 17:50:58.806018707 +0000 UTC m=+7.778472166 image pull 3a37a52861b2e44ebd2a63ca2589a7c9d8e4119e5feace9d19c6312ed9b8421c quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec  3 12:50:58 np0005544501 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  3 12:50:58 np0005544501 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  3 12:50:59 np0005544501 podman[88490]: 2025-12-03 17:50:58.92867302 +0000 UTC m=+0.027818208 image pull 3a37a52861b2e44ebd2a63ca2589a7c9d8e4119e5feace9d19c6312ed9b8421c quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec  3 12:50:59 np0005544501 podman[88490]: 2025-12-03 17:50:59.030026051 +0000 UTC m=+0.129171149 container create 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  3 12:50:59 np0005544501 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  3 12:50:59 np0005544501 python3[88358]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
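The PODMAN-CONTAINER-DEBUG line above is the exact podman create invocation the module ran. Note that it attaches config_id, container_name, managed_by, and the full config_data blob as container labels; that is how later runs compare desired against actual state. A quick way to read those labels back from the created container, sketched with Python's subprocess (container name taken from the log):

    import json
    import subprocess

    # Read back the labels edpm_container_manage set at create time.
    out = subprocess.run(
        ["podman", "inspect", "ovn_controller",
         "--format", "{{json .Config.Labels}}"],
        check=True, capture_output=True, text=True,
    ).stdout
    labels = json.loads(out)
    print(labels["config_id"], labels["managed_by"])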
Dec  3 12:50:59 np0005544501 python3.9[88680]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 12:51:00 np0005544501 python3.9[88834]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:51:01 np0005544501 python3.9[88910]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 12:51:01 np0005544501 python3.9[89061]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764784261.1057882-581-26389284437947/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:51:02 np0005544501 python3.9[89137]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  3 12:51:02 np0005544501 systemd[1]: Reloading.
Dec  3 12:51:02 np0005544501 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 12:51:02 np0005544501 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 12:51:03 np0005544501 python3.9[89249]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 12:51:03 np0005544501 systemd[1]: Reloading.
Dec  3 12:51:03 np0005544501 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 12:51:03 np0005544501 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 12:51:03 np0005544501 systemd[1]: Starting ovn_controller container...
Dec  3 12:51:03 np0005544501 systemd[1]: Created slice Virtual Machine and Container Slice.
Dec  3 12:51:03 np0005544501 systemd[1]: Started libcrun container.
Dec  3 12:51:03 np0005544501 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4243ec92431b0890d472837c14b348f6584b0ca47b50de70cfab47621b799ea4/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec  3 12:51:03 np0005544501 systemd[1]: Started /usr/bin/podman healthcheck run 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753.
Dec  3 12:51:03 np0005544501 podman[89290]: 2025-12-03 17:51:03.568501108 +0000 UTC m=+0.143352292 container init 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  3 12:51:03 np0005544501 ovn_controller[89305]: + sudo -E kolla_set_configs
Dec  3 12:51:03 np0005544501 podman[89290]: 2025-12-03 17:51:03.588792786 +0000 UTC m=+0.163643940 container start 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  3 12:51:03 np0005544501 edpm-start-podman-container[89290]: ovn_controller
Dec  3 12:51:03 np0005544501 systemd[1]: Created slice User Slice of UID 0.
Dec  3 12:51:03 np0005544501 systemd[1]: Starting User Runtime Directory /run/user/0...
Dec  3 12:51:03 np0005544501 systemd[1]: Finished User Runtime Directory /run/user/0.
Dec  3 12:51:03 np0005544501 systemd[1]: Starting User Manager for UID 0...
Dec  3 12:51:03 np0005544501 edpm-start-podman-container[89289]: Creating additional drop-in dependency for "ovn_controller" (9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753)
Dec  3 12:51:03 np0005544501 podman[89312]: 2025-12-03 17:51:03.668745839 +0000 UTC m=+0.070025285 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible)
Dec  3 12:51:03 np0005544501 systemd[1]: 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753-1b1ff19557a8eb8f.service: Main process exited, code=exited, status=1/FAILURE
Dec  3 12:51:03 np0005544501 systemd[1]: 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753-1b1ff19557a8eb8f.service: Failed with result 'exit-code'.
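The FAILURE above is the first health probe racing container startup, not a service fault: systemd launched /usr/bin/podman healthcheck run as soon as the container started, the probe ran while the container was still initializing (health_status=starting, health_failing_streak=1 two lines earlier), and podman healthcheck run exits nonzero when the check command fails, so the per-run transient unit records exit-code 1. Probes from 12:51:33 onward report health_status=healthy. A small sketch that waits for the recorded health state to settle instead of trusting the first probe (polling interval and timeout are arbitrary choices):

    import json
    import subprocess
    import time

    def wait_healthy(name: str, timeout: float = 60.0) -> bool:
        """Poll podman's recorded health state until it reports healthy."""
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            state = json.loads(subprocess.run(
                ["podman", "inspect", name, "--format", "{{json .State}}"],
                check=True, capture_output=True, text=True,
            ).stdout)
            # Key name varies by podman version: newer "Health", older "Healthcheck".
            health = state.get("Health") or state.get("Healthcheck") or {}
            if health.get("Status") == "healthy":
                return True
            time.sleep(2)
        return False

    print(wait_healthy("ovn_controller"))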
Dec  3 12:51:03 np0005544501 systemd[1]: Reloading.
Dec  3 12:51:03 np0005544501 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 12:51:03 np0005544501 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 12:51:03 np0005544501 systemd[89344]: Queued start job for default target Main User Target.
Dec  3 12:51:03 np0005544501 systemd[89344]: Created slice User Application Slice.
Dec  3 12:51:03 np0005544501 systemd[89344]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Dec  3 12:51:03 np0005544501 systemd[89344]: Started Daily Cleanup of User's Temporary Directories.
Dec  3 12:51:03 np0005544501 systemd[89344]: Reached target Paths.
Dec  3 12:51:03 np0005544501 systemd[89344]: Reached target Timers.
Dec  3 12:51:03 np0005544501 systemd[89344]: Starting D-Bus User Message Bus Socket...
Dec  3 12:51:03 np0005544501 systemd[89344]: Starting Create User's Volatile Files and Directories...
Dec  3 12:51:03 np0005544501 systemd[89344]: Listening on D-Bus User Message Bus Socket.
Dec  3 12:51:03 np0005544501 systemd[89344]: Reached target Sockets.
Dec  3 12:51:03 np0005544501 systemd[89344]: Finished Create User's Volatile Files and Directories.
Dec  3 12:51:03 np0005544501 systemd[89344]: Reached target Basic System.
Dec  3 12:51:03 np0005544501 systemd[89344]: Reached target Main User Target.
Dec  3 12:51:03 np0005544501 systemd[89344]: Startup finished in 120ms.
Dec  3 12:51:03 np0005544501 systemd[1]: Started User Manager for UID 0.
Dec  3 12:51:03 np0005544501 systemd[1]: Started ovn_controller container.
Dec  3 12:51:03 np0005544501 systemd[1]: Started Session c1 of User root.
Dec  3 12:51:03 np0005544501 ovn_controller[89305]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  3 12:51:03 np0005544501 ovn_controller[89305]: INFO:__main__:Validating config file
Dec  3 12:51:03 np0005544501 ovn_controller[89305]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  3 12:51:03 np0005544501 ovn_controller[89305]: INFO:__main__:Writing out command to execute
Dec  3 12:51:03 np0005544501 systemd[1]: session-c1.scope: Deactivated successfully.
Dec  3 12:51:03 np0005544501 ovn_controller[89305]: ++ cat /run_command
Dec  3 12:51:03 np0005544501 ovn_controller[89305]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Dec  3 12:51:03 np0005544501 ovn_controller[89305]: + ARGS=
Dec  3 12:51:03 np0005544501 ovn_controller[89305]: + sudo kolla_copy_cacerts
Dec  3 12:51:03 np0005544501 systemd[1]: Started Session c2 of User root.
Dec  3 12:51:04 np0005544501 systemd[1]: session-c2.scope: Deactivated successfully.
Dec  3 12:51:04 np0005544501 ovn_controller[89305]: + [[ ! -n '' ]]
Dec  3 12:51:04 np0005544501 ovn_controller[89305]: + . kolla_extend_start
Dec  3 12:51:04 np0005544501 ovn_controller[89305]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Dec  3 12:51:04 np0005544501 ovn_controller[89305]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Dec  3 12:51:04 np0005544501 ovn_controller[89305]: + umask 0022
Dec  3 12:51:04 np0005544501 ovn_controller[89305]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
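The exec line above shows the standard OVS SSL options: -p is the private key, -c the client certificate, and -C the CA certificate used to verify the southbound ovsdb-server (ssl:ovsdbserver-sb.openstack.svc:6642, which connects a few lines below). A quick sanity check that the mounted key and certificate actually pair up, using Python's ssl module with the paths from the exec line (a mismatched pair raises ssl.SSLError):

    import ssl

    # Verify the key/cert pair the container mounts for the OVN SB connection.
    ctx = ssl.create_default_context(cafile="/etc/pki/tls/certs/ovndbca.crt")
    ctx.load_cert_chain(
        certfile="/etc/pki/tls/certs/ovndb.crt",
        keyfile="/etc/pki/tls/private/ovndb.key")
    print("key and certificate match")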
Dec  3 12:51:04 np0005544501 ovn_controller[89305]: 2025-12-03T17:51:04Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Dec  3 12:51:04 np0005544501 ovn_controller[89305]: 2025-12-03T17:51:04Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Dec  3 12:51:04 np0005544501 ovn_controller[89305]: 2025-12-03T17:51:04Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Dec  3 12:51:04 np0005544501 ovn_controller[89305]: 2025-12-03T17:51:04Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Dec  3 12:51:04 np0005544501 ovn_controller[89305]: 2025-12-03T17:51:04Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec  3 12:51:04 np0005544501 ovn_controller[89305]: 2025-12-03T17:51:04Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Dec  3 12:51:04 np0005544501 NetworkManager[49087]: <info>  [1764784264.0294] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Dec  3 12:51:04 np0005544501 NetworkManager[49087]: <info>  [1764784264.0300] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  3 12:51:04 np0005544501 NetworkManager[49087]: <info>  [1764784264.0309] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Dec  3 12:51:04 np0005544501 NetworkManager[49087]: <info>  [1764784264.0313] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Dec  3 12:51:04 np0005544501 NetworkManager[49087]: <info>  [1764784264.0316] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec  3 12:51:04 np0005544501 kernel: br-int: entered promiscuous mode
Dec  3 12:51:04 np0005544501 ovn_controller[89305]: 2025-12-03T17:51:04Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Dec  3 12:51:04 np0005544501 ovn_controller[89305]: 2025-12-03T17:51:04Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec  3 12:51:04 np0005544501 ovn_controller[89305]: 2025-12-03T17:51:04Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec  3 12:51:04 np0005544501 ovn_controller[89305]: 2025-12-03T17:51:04Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Dec  3 12:51:04 np0005544501 ovn_controller[89305]: 2025-12-03T17:51:04Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Dec  3 12:51:04 np0005544501 ovn_controller[89305]: 2025-12-03T17:51:04Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Dec  3 12:51:04 np0005544501 ovn_controller[89305]: 2025-12-03T17:51:04Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Dec  3 12:51:04 np0005544501 ovn_controller[89305]: 2025-12-03T17:51:04Z|00014|main|INFO|OVS feature set changed, force recompute.
Dec  3 12:51:04 np0005544501 ovn_controller[89305]: 2025-12-03T17:51:04Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec  3 12:51:04 np0005544501 ovn_controller[89305]: 2025-12-03T17:51:04Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec  3 12:51:04 np0005544501 ovn_controller[89305]: 2025-12-03T17:51:04Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec  3 12:51:04 np0005544501 ovn_controller[89305]: 2025-12-03T17:51:04Z|00018|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Dec  3 12:51:04 np0005544501 ovn_controller[89305]: 2025-12-03T17:51:04Z|00019|main|INFO|OVS OpenFlow connection reconnected, force recompute.
Dec  3 12:51:04 np0005544501 ovn_controller[89305]: 2025-12-03T17:51:04Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec  3 12:51:04 np0005544501 ovn_controller[89305]: 2025-12-03T17:51:04Z|00021|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Dec  3 12:51:04 np0005544501 ovn_controller[89305]: 2025-12-03T17:51:04Z|00022|main|INFO|OVS feature set changed, force recompute.
Dec  3 12:51:04 np0005544501 ovn_controller[89305]: 2025-12-03T17:51:04Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Dec  3 12:51:04 np0005544501 ovn_controller[89305]: 2025-12-03T17:51:04Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Dec  3 12:51:04 np0005544501 ovn_controller[89305]: 2025-12-03T17:51:04Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec  3 12:51:04 np0005544501 ovn_controller[89305]: 2025-12-03T17:51:04Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec  3 12:51:04 np0005544501 ovn_controller[89305]: 2025-12-03T17:51:04Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec  3 12:51:04 np0005544501 ovn_controller[89305]: 2025-12-03T17:51:04Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec  3 12:51:04 np0005544501 ovn_controller[89305]: 2025-12-03T17:51:04Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec  3 12:51:04 np0005544501 ovn_controller[89305]: 2025-12-03T17:51:04Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
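At this point the startup handshake is complete: the controller holds connections to the local ovsdb (unix:/run/openvswitch/db.sock), to the southbound database over SSL, and to br-int's OpenFlow management socket, with the pinctrl and statctrl threads keeping their own rconn connections. One way to confirm the OpenFlow side from the host, sketched with subprocess (ovs-ofctl ships with openvswitch; output layout varies by version):

    import subprocess

    # Show br-int's OpenFlow features; a connected switch answers immediately.
    print(subprocess.run(
        ["ovs-ofctl", "show", "br-int"],
        check=True, capture_output=True, text=True,
    ).stdout)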
Dec  3 12:51:04 np0005544501 NetworkManager[49087]: <info>  [1764784264.0580] manager: (ovn-2628f4-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Dec  3 12:51:04 np0005544501 systemd-udevd[89440]: Network interface NamePolicy= disabled on kernel command line.
Dec  3 12:51:04 np0005544501 kernel: genev_sys_6081: entered promiscuous mode
Dec  3 12:51:04 np0005544501 NetworkManager[49087]: <info>  [1764784264.0730] device (genev_sys_6081): carrier: link connected
Dec  3 12:51:04 np0005544501 NetworkManager[49087]: <info>  [1764784264.0735] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Dec  3 12:51:04 np0005544501 systemd-udevd[89446]: Network interface NamePolicy= disabled on kernel command line.
Dec  3 12:51:04 np0005544501 python3.9[89574]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 12:51:04 np0005544501 ovs-vsctl[89575]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Dec  3 12:51:05 np0005544501 python3.9[89727]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 12:51:05 np0005544501 ovs-vsctl[89729]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Dec  3 12:51:06 np0005544501 python3.9[89882]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 12:51:06 np0005544501 ovs-vsctl[89883]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
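The ERR two lines up is expected: ovs-vsctl get fails when the requested key is absent from external_ids, while ovs-vsctl remove succeeds whether or not the key exists, which is why the playbook can run the remove unconditionally. Passing --if-exists to get should avoid the error entirely and print an empty value instead, e.g.:

    import subprocess

    # Read ovn-cms-options without erroring when the key is absent.
    out = subprocess.run(
        ["ovs-vsctl", "--if-exists", "get",
         "Open_vSwitch", ".", "external_ids:ovn-cms-options"],
        check=True, capture_output=True, text=True,
    ).stdout.strip().strip('"')
    print(out or "<unset>")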
Dec  3 12:51:06 np0005544501 systemd[1]: session-19.scope: Deactivated successfully.
Dec  3 12:51:06 np0005544501 systemd[1]: session-19.scope: Consumed 59.045s CPU time.
Dec  3 12:51:06 np0005544501 systemd-logind[784]: Session 19 logged out. Waiting for processes to exit.
Dec  3 12:51:06 np0005544501 systemd-logind[784]: Removed session 19.
Dec  3 12:51:11 np0005544501 systemd-logind[784]: New session 21 of user zuul.
Dec  3 12:51:11 np0005544501 systemd[1]: Started Session 21 of User zuul.
Dec  3 12:51:12 np0005544501 python3.9[90061]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 12:51:13 np0005544501 python3.9[90217]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 12:51:14 np0005544501 systemd[1]: Stopping User Manager for UID 0...
Dec  3 12:51:14 np0005544501 systemd[89344]: Activating special unit Exit the Session...
Dec  3 12:51:14 np0005544501 systemd[89344]: Stopped target Main User Target.
Dec  3 12:51:14 np0005544501 systemd[89344]: Stopped target Basic System.
Dec  3 12:51:14 np0005544501 systemd[89344]: Stopped target Paths.
Dec  3 12:51:14 np0005544501 systemd[89344]: Stopped target Sockets.
Dec  3 12:51:14 np0005544501 systemd[89344]: Stopped target Timers.
Dec  3 12:51:14 np0005544501 systemd[89344]: Stopped Daily Cleanup of User's Temporary Directories.
Dec  3 12:51:14 np0005544501 systemd[89344]: Closed D-Bus User Message Bus Socket.
Dec  3 12:51:14 np0005544501 systemd[89344]: Stopped Create User's Volatile Files and Directories.
Dec  3 12:51:14 np0005544501 systemd[89344]: Removed slice User Application Slice.
Dec  3 12:51:14 np0005544501 systemd[89344]: Reached target Shutdown.
Dec  3 12:51:14 np0005544501 systemd[89344]: Finished Exit the Session.
Dec  3 12:51:14 np0005544501 systemd[89344]: Reached target Exit the Session.
Dec  3 12:51:14 np0005544501 systemd[1]: user@0.service: Deactivated successfully.
Dec  3 12:51:14 np0005544501 systemd[1]: Stopped User Manager for UID 0.
Dec  3 12:51:14 np0005544501 systemd[1]: Stopping User Runtime Directory /run/user/0...
Dec  3 12:51:14 np0005544501 systemd[1]: run-user-0.mount: Deactivated successfully.
Dec  3 12:51:14 np0005544501 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Dec  3 12:51:14 np0005544501 systemd[1]: Stopped User Runtime Directory /run/user/0.
Dec  3 12:51:14 np0005544501 systemd[1]: Removed slice User Slice of UID 0.
Dec  3 12:51:15 np0005544501 python3.9[90384]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  3 12:51:15 np0005544501 systemd[1]: Reloading.
Dec  3 12:51:15 np0005544501 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 12:51:15 np0005544501 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 12:51:16 np0005544501 python3.9[90568]: ansible-ansible.builtin.service_facts Invoked
Dec  3 12:51:16 np0005544501 network[90585]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  3 12:51:16 np0005544501 network[90586]: 'network-scripts' will be removed from distribution in near future.
Dec  3 12:51:16 np0005544501 network[90587]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  3 12:51:19 np0005544501 python3.9[90848]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 12:51:20 np0005544501 python3.9[91001]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 12:51:21 np0005544501 python3.9[91154]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 12:51:22 np0005544501 python3.9[91307]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 12:51:23 np0005544501 python3.9[91461]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 12:51:23 np0005544501 python3.9[91614]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 12:51:24 np0005544501 python3.9[91767]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
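The seven systemd_service tasks above disable and stop the legacy tripleo_nova_* units one at a time (enabled=False plus state=stopped corresponds to systemctl disable --now). The equivalent host-side loop, sketched with the unit list copied from the log:

    import subprocess

    TRIPLEO_UNITS = [
        "tripleo_nova_libvirt.target",
        "tripleo_nova_virtlogd_wrapper.service",
        "tripleo_nova_virtnodedevd.service",
        "tripleo_nova_virtproxyd.service",
        "tripleo_nova_virtqemud.service",
        "tripleo_nova_virtsecretd.service",
        "tripleo_nova_virtstoraged.service",
    ]

    for unit in TRIPLEO_UNITS:
        # check=False: a unit that is already gone should not abort the loop.
        subprocess.run(["systemctl", "disable", "--now", unit], check=False)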
Dec  3 12:51:25 np0005544501 python3.9[91921]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:51:26 np0005544501 python3.9[92073]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:51:27 np0005544501 python3.9[92225]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:51:27 np0005544501 python3.9[92377]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:51:28 np0005544501 python3.9[92529]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:51:29 np0005544501 python3.9[92681]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:51:29 np0005544501 python3.9[92833]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:51:30 np0005544501 python3.9[92985]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:51:31 np0005544501 python3.9[93139]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:51:32 np0005544501 python3.9[93291]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:51:33 np0005544501 python3.9[93443]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:51:33 np0005544501 python3.9[93595]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:51:33 np0005544501 ovn_controller[89305]: 2025-12-03T17:51:33Z|00025|memory|INFO|16256 kB peak resident set size after 29.9 seconds
Dec  3 12:51:33 np0005544501 ovn_controller[89305]: 2025-12-03T17:51:33Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Dec  3 12:51:33 np0005544501 podman[93601]: 2025-12-03 17:51:33.968777064 +0000 UTC m=+0.127031663 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  3 12:51:34 np0005544501 python3.9[93773]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:51:35 np0005544501 python3.9[93925]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:51:36 np0005544501 python3.9[94077]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
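In journald/syslog output, #012 is the escaped newline character (octal 012), so the _raw_params above is a multi-line shell snippet. Decoded for readability, with no logic changed:

    if systemctl is-active certmonger.service; then
      systemctl disable --now certmonger.service
      test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
    fi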
Dec  3 12:51:36 np0005544501 python3.9[94229]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  3 12:51:38 np0005544501 python3.9[94381]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  3 12:51:38 np0005544501 systemd[1]: Reloading.
Dec  3 12:51:38 np0005544501 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 12:51:38 np0005544501 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 12:51:39 np0005544501 python3.9[94568]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 12:51:39 np0005544501 python3.9[94721]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 12:51:40 np0005544501 python3.9[94874]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 12:51:41 np0005544501 python3.9[95027]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 12:51:41 np0005544501 python3.9[95180]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 12:51:42 np0005544501 python3.9[95333]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 12:51:43 np0005544501 python3.9[95486]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
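With the unit files removed and systemd reloaded, the reset-failed calls above clear any remembered failed state for each unit so stale entries do not linger in systemctl --failed. A sketch reusing the unit list from the disable loop shown earlier:

    import subprocess

    # Clear remembered failure state for the removed tripleo units
    # (TRIPLEO_UNITS as defined in the earlier sketch).
    for unit in TRIPLEO_UNITS:
        subprocess.run(["systemctl", "reset-failed", unit], check=False)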
Dec  3 12:51:44 np0005544501 python3.9[95639]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Dec  3 12:51:45 np0005544501 python3.9[95792]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  3 12:51:46 np0005544501 python3.9[95950]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
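The getent/group/user trio above pins a libvirt account to fixed ids (uid and gid 42473, shell /sbin/nologin) before the dnf transaction below installs libvirt, presumably so package scriptlets do not allocate different ids. The equivalent raw host commands, sketched (ansible handles idempotence; these plain commands fail if the group or user already exists, so check first in real use):

    import subprocess

    subprocess.run(["groupadd", "-g", "42473", "libvirt"], check=False)
    subprocess.run(
        ["useradd", "-u", "42473", "-g", "libvirt",
         "-s", "/sbin/nologin", "-c", "libvirt user", "libvirt"],
        check=False)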
Dec  3 12:51:47 np0005544501 python3.9[96110]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  3 12:51:48 np0005544501 python3.9[96194]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  3 12:52:05 np0005544501 podman[96378]: 2025-12-03 17:52:05.017594274 +0000 UTC m=+0.170933808 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 12:52:20 np0005544501 kernel: SELinux:  Converting 2757 SID table entries...
Dec  3 12:52:20 np0005544501 kernel: SELinux:  policy capability network_peer_controls=1
Dec  3 12:52:20 np0005544501 kernel: SELinux:  policy capability open_perms=1
Dec  3 12:52:20 np0005544501 kernel: SELinux:  policy capability extended_socket_class=1
Dec  3 12:52:20 np0005544501 kernel: SELinux:  policy capability always_check_network=0
Dec  3 12:52:20 np0005544501 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  3 12:52:20 np0005544501 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  3 12:52:20 np0005544501 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  3 12:52:30 np0005544501 kernel: SELinux:  Converting 2757 SID table entries...
Dec  3 12:52:30 np0005544501 kernel: SELinux:  policy capability network_peer_controls=1
Dec  3 12:52:30 np0005544501 kernel: SELinux:  policy capability open_perms=1
Dec  3 12:52:30 np0005544501 kernel: SELinux:  policy capability extended_socket_class=1
Dec  3 12:52:30 np0005544501 kernel: SELinux:  policy capability always_check_network=0
Dec  3 12:52:30 np0005544501 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  3 12:52:30 np0005544501 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  3 12:52:30 np0005544501 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
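The repeated "Converting ... SID table entries" blocks are the kernel reloading the SELinux policy; each load reprints the same capability lines, and the dbus-broker avc op=load_policy lines below confirm the reloads (seqno increments from 13 to 14). The timing suggests they coincide with the dnf transaction above installing packages that ship SELinux policy modules. A quick look at the loaded policy from the host, sketched (semodule -l typically needs root):

    import subprocess

    # Current enforcement mode and loaded module count (names only).
    print(subprocess.run(
        ["getenforce"], capture_output=True, text=True).stdout.strip())
    mods = subprocess.run(
        ["semodule", "-l"], capture_output=True, text=True).stdout
    print(len(mods.splitlines()), "modules loaded")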
Dec  3 12:52:35 np0005544501 dbus-broker-launch[771]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Dec  3 12:52:35 np0005544501 podman[96423]: 2025-12-03 17:52:35.987361266 +0000 UTC m=+0.145893869 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller)
Dec  3 12:53:06 np0005544501 podman[111259]: 2025-12-03 17:53:06.935813574 +0000 UTC m=+0.101831743 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller)
Dec  3 12:53:24 np0005544501 kernel: SELinux:  Converting 2758 SID table entries...
Dec  3 12:53:24 np0005544501 kernel: SELinux:  policy capability network_peer_controls=1
Dec  3 12:53:24 np0005544501 kernel: SELinux:  policy capability open_perms=1
Dec  3 12:53:24 np0005544501 kernel: SELinux:  policy capability extended_socket_class=1
Dec  3 12:53:24 np0005544501 kernel: SELinux:  policy capability always_check_network=0
Dec  3 12:53:24 np0005544501 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  3 12:53:24 np0005544501 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  3 12:53:24 np0005544501 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  3 12:53:26 np0005544501 dbus-broker-launch[754]: Noticed file-system modification, trigger reload.
Dec  3 12:53:26 np0005544501 dbus-broker-launch[771]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Dec  3 12:53:26 np0005544501 dbus-broker-launch[754]: Noticed file-system modification, trigger reload.
Dec  3 12:53:33 np0005544501 systemd[1]: Stopping OpenSSH server daemon...
Dec  3 12:53:33 np0005544501 systemd[1]: sshd.service: Deactivated successfully.
Dec  3 12:53:33 np0005544501 systemd[1]: Stopped OpenSSH server daemon.
Dec  3 12:53:33 np0005544501 systemd[1]: sshd.service: Consumed 3.725s CPU time, read 32.0K from disk, written 80.0K to disk.
Dec  3 12:53:33 np0005544501 systemd[1]: Stopped target sshd-keygen.target.
Dec  3 12:53:33 np0005544501 systemd[1]: Stopping sshd-keygen.target...
Dec  3 12:53:33 np0005544501 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  3 12:53:33 np0005544501 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  3 12:53:33 np0005544501 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  3 12:53:33 np0005544501 systemd[1]: Reached target sshd-keygen.target.
Dec  3 12:53:34 np0005544501 systemd[1]: Starting OpenSSH server daemon...
Dec  3 12:53:34 np0005544501 systemd[1]: Started OpenSSH server daemon.
Dec  3 12:53:36 np0005544501 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  3 12:53:36 np0005544501 systemd[1]: Starting man-db-cache-update.service...
Dec  3 12:53:36 np0005544501 systemd[1]: Reloading.
Dec  3 12:53:36 np0005544501 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 12:53:36 np0005544501 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 12:53:36 np0005544501 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  3 12:53:37 np0005544501 podman[115978]: 2025-12-03 17:53:37.961716546 +0000 UTC m=+0.126931708 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 12:53:40 np0005544501 python3.9[119487]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  3 12:53:41 np0005544501 systemd[1]: Reloading.
Dec  3 12:53:41 np0005544501 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 12:53:41 np0005544501 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 12:53:42 np0005544501 python3.9[120727]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  3 12:53:42 np0005544501 systemd[1]: Reloading.
Dec  3 12:53:42 np0005544501 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 12:53:42 np0005544501 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 12:53:43 np0005544501 python3.9[122145]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  3 12:53:43 np0005544501 systemd[1]: Reloading.
Dec  3 12:53:43 np0005544501 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 12:53:43 np0005544501 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 12:53:44 np0005544501 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  3 12:53:44 np0005544501 systemd[1]: Finished man-db-cache-update.service.
Dec  3 12:53:44 np0005544501 systemd[1]: man-db-cache-update.service: Consumed 10.128s CPU time.
Dec  3 12:53:44 np0005544501 systemd[1]: run-r36e6a1f08a5e44459239ae15201cd159.service: Deactivated successfully.
Dec  3 12:53:44 np0005544501 python3.9[123372]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  3 12:53:44 np0005544501 systemd[1]: Reloading.
Dec  3 12:53:44 np0005544501 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 12:53:44 np0005544501 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
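The ansible-ansible.builtin.systemd tasks above all apply the same stop/disable/mask pattern to the legacy monolithic libvirtd units. Consolidated as a sketch (assumes root and systemctl on PATH; unit list taken from the log):

    import subprocess

    UNITS = ["libvirtd.service", "libvirtd-tcp.socket",
             "libvirtd-tls.socket", "virtproxyd-tcp.socket"]

    for unit in UNITS:
        # state=stopped + enabled=False ~ 'disable --now'; masked=True ~ 'mask'
        subprocess.run(["systemctl", "disable", "--now", unit], check=False)
        subprocess.run(["systemctl", "mask", unit], check=False)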
Dec  3 12:53:45 np0005544501 python3.9[123616]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 12:53:46 np0005544501 systemd[1]: Reloading.
Dec  3 12:53:46 np0005544501 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 12:53:46 np0005544501 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 12:53:47 np0005544501 python3.9[123806]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 12:53:47 np0005544501 systemd[1]: Reloading.
Dec  3 12:53:47 np0005544501 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 12:53:47 np0005544501 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 12:53:48 np0005544501 python3.9[123996]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 12:53:48 np0005544501 systemd[1]: Reloading.
Dec  3 12:53:48 np0005544501 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 12:53:48 np0005544501 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 12:53:49 np0005544501 python3.9[124186]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 12:53:50 np0005544501 python3.9[124341]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 12:53:50 np0005544501 systemd[1]: Reloading.
Dec  3 12:53:50 np0005544501 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 12:53:50 np0005544501 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 12:53:51 np0005544501 python3.9[124532]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  3 12:53:52 np0005544501 systemd[1]: Reloading.
Dec  3 12:53:52 np0005544501 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 12:53:52 np0005544501 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 12:53:52 np0005544501 systemd[1]: Listening on libvirt proxy daemon socket.
Dec  3 12:53:52 np0005544501 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Dec  3 12:53:53 np0005544501 python3.9[124725]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 12:53:53 np0005544501 python3.9[124880]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 12:53:54 np0005544501 python3.9[125035]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 12:53:55 np0005544501 python3.9[125190]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 12:53:56 np0005544501 python3.9[125345]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 12:53:57 np0005544501 python3.9[125500]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 12:53:58 np0005544501 python3.9[125655]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 12:53:58 np0005544501 python3.9[125810]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 12:53:59 np0005544501 python3.9[125965]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 12:54:00 np0005544501 python3.9[126120]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 12:54:01 np0005544501 python3.9[126275]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 12:54:02 np0005544501 python3.9[126430]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 12:54:03 np0005544501 python3.9[126585]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 12:54:04 np0005544501 python3.9[126740]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
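The run of enabled=True tasks above covers the activation sockets of the modular libvirt daemons. The same set expressed as one loop, with names exactly as invoked in the log:

    import subprocess

    DAEMONS = ["virtlogd", "virtnodedevd", "virtproxyd", "virtqemud", "virtsecretd"]
    SUFFIXES = ["", "-ro", "-admin"]

    for daemon in DAEMONS:
        for suffix in SUFFIXES:
            # The log enables virtlogd.socket and virtlogd-admin.socket only.
            if daemon == "virtlogd" and suffix == "-ro":
                continue
            subprocess.run(["systemctl", "enable", f"{daemon}{suffix}.socket"],
                           check=False)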
Dec  3 12:54:04 np0005544501 python3.9[126895]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  3 12:54:05 np0005544501 python3.9[127047]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  3 12:54:06 np0005544501 python3.9[127199]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 12:54:07 np0005544501 python3.9[127351]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 12:54:07 np0005544501 python3.9[127503]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 12:54:08 np0005544501 podman[127627]: 2025-12-03 17:54:08.286773057 +0000 UTC m=+0.116856215 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 12:54:08 np0005544501 python3.9[127675]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  3 12:54:09 np0005544501 python3.9[127833]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:54:10 np0005544501 python3.9[127958]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764784448.6338556-554-168102636002146/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:54:11 np0005544501 python3.9[128110]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:54:11 np0005544501 python3.9[128235]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764784450.4418223-554-56372162011142/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:54:12 np0005544501 python3.9[128387]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:54:12 np0005544501 python3.9[128512]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764784451.826467-554-24490888235289/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:54:13 np0005544501 python3.9[128664]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:54:14 np0005544501 python3.9[128789]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764784453.185623-554-64400676972583/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:54:15 np0005544501 python3.9[128941]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:54:15 np0005544501 python3.9[129066]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764784454.5820875-554-80610111631360/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:54:17 np0005544501 python3.9[129218]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:54:17 np0005544501 python3.9[129343]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764784456.5054724-554-231664716222459/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:54:18 np0005544501 python3.9[129495]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:54:19 np0005544501 python3.9[129618]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764784457.9172099-554-127741035421693/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:54:19 np0005544501 python3.9[129770]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:54:20 np0005544501 python3.9[129895]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764784459.1751866-554-94358456486373/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
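Each copy above is preceded by an ansible-ansible.legacy.stat call with checksum_algorithm=sha1; the file is rewritten only when the remote checksum differs from the rendered template's. A sketch of that idempotence check, using the checksum logged for virtlogd.conf:

    import hashlib

    def sha1_of(path: str) -> str:
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    expected = "d7a72ae92c2c205983b029473e05a6aa4c58ec24"  # from the log above
    print(sha1_of("/etc/libvirt/virtlogd.conf") == expected)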
Dec  3 12:54:20 np0005544501 python3.9[130047]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
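The saslpasswd2 call above provisions the live-migration credential: -p reads the password from stdin, -a names the application, -u the realm. A sketch with a placeholder password (the real secret is only ever passed via stdin in the log):

    import subprocess

    subprocess.run(
        ["saslpasswd2", "-f", "/etc/libvirt/passwd.db",
         "-p", "-a", "libvirt", "-u", "openstack", "migration"],
        input=b"example-password\n",  # placeholder, not the real secret
        check=True)

    # sasldblistusers2 confirms the entry exists in the database.
    subprocess.run(["sasldblistusers2", "-f", "/etc/libvirt/passwd.db"])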
Dec  3 12:54:21 np0005544501 python3.9[130200]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:54:22 np0005544501 python3.9[130352]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:54:23 np0005544501 python3.9[130504]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:54:23 np0005544501 python3.9[130656]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:54:24 np0005544501 python3.9[130808]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:54:25 np0005544501 python3.9[130960]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:54:25 np0005544501 python3.9[131112]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:54:26 np0005544501 python3.9[131264]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:54:27 np0005544501 python3.9[131416]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:54:28 np0005544501 python3.9[131568]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:54:28 np0005544501 python3.9[131720]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:54:29 np0005544501 python3.9[131872]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:54:30 np0005544501 python3.9[132024]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:54:30 np0005544501 python3.9[132176]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:54:31 np0005544501 python3.9[132328]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:54:32 np0005544501 python3.9[132451]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764784471.0779707-775-208559919874778/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:54:32 np0005544501 python3.9[132603]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:54:33 np0005544501 python3.9[132726]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764784472.4305089-775-62923134983978/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:54:34 np0005544501 python3.9[132878]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:54:34 np0005544501 python3.9[133001]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764784473.6813703-775-133958812835350/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:54:35 np0005544501 python3.9[133153]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:54:36 np0005544501 python3.9[133276]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764784474.9593575-775-181243057531866/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:54:36 np0005544501 python3.9[133428]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:54:37 np0005544501 python3.9[133551]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764784476.2295432-775-217113415832628/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:54:38 np0005544501 python3.9[133703]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:54:38 np0005544501 podman[133798]: 2025-12-03 17:54:38.609021573 +0000 UTC m=+0.112498217 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_controller, io.buildah.version=1.41.3)
Dec  3 12:54:38 np0005544501 python3.9[133837]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764784477.6532328-775-59502123051289/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:54:39 np0005544501 python3.9[134002]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:54:40 np0005544501 python3.9[134125]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764784478.9168777-775-48505095537939/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:54:40 np0005544501 python3.9[134277]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:54:41 np0005544501 python3.9[134400]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764784480.1968906-775-155711875788229/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:54:41 np0005544501 python3.9[134552]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:54:42 np0005544501 python3.9[134675]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764784481.4267201-775-77444173025030/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:54:43 np0005544501 python3.9[134827]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:54:43 np0005544501 python3.9[134950]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764784482.6775012-775-207940384187982/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:54:44 np0005544501 python3.9[135102]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:54:45 np0005544501 python3.9[135225]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764784483.9322066-775-278719117526588/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:54:45 np0005544501 python3.9[135377]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:54:46 np0005544501 python3.9[135500]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764784485.203288-775-37749107715065/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:54:47 np0005544501 python3.9[135652]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:54:47 np0005544501 python3.9[135775]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764784486.5990796-775-119552435210970/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:54:48 np0005544501 python3.9[135927]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:54:49 np0005544501 python3.9[136050]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764784487.977226-775-272713972403102/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
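All of the override.conf files above are rendered from the same template (libvirt-socket.unit.j2, identical checksum), whose contents the log does not show. A sketch of the drop-in mechanism only, with purely hypothetical settings:

    from pathlib import Path

    dropin_dir = Path("/etc/systemd/system/virtlogd.socket.d")
    dropin_dir.mkdir(mode=0o755, parents=True, exist_ok=True)

    # Hypothetical content; the real values come from libvirt-socket.unit.j2
    # and are not visible in this log.
    (dropin_dir / "override.conf").write_text(
        "[Socket]\n"
        "SocketMode=0666\n"
    )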
Dec  3 12:54:49 np0005544501 python3.9[136200]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 12:54:50 np0005544501 python3.9[136355]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
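The seboolean task maps directly onto the SELinux CLI; persistent=True corresponds to -P. A sketch:

    import subprocess

    # -P writes the boolean into the policy store so it survives reloads.
    subprocess.run(["setsebool", "-P", "os_enable_vtpm", "on"], check=True)
    subprocess.run(["getsebool", "os_enable_vtpm"])  # expect "... --> on"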
Dec  3 12:54:52 np0005544501 dbus-broker-launch[771]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Dec  3 12:54:52 np0005544501 python3.9[136511]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:54:53 np0005544501 python3.9[136663]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:54:54 np0005544501 python3.9[136815]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:54:54 np0005544501 python3.9[136967]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:54:55 np0005544501 python3.9[137119]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:54:56 np0005544501 python3.9[137271]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:54:56 np0005544501 python3.9[137423]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:54:57 np0005544501 python3.9[137575]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:54:58 np0005544501 python3.9[137727]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:54:59 np0005544501 python3.9[137879]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
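The same tls.crt/tls.key pair is fanned out above to the libvirt and QEMU PKI locations. A hedged sketch for verifying that a copied pair actually matches, assuming RSA material and openssl on PATH:

    import subprocess

    def modulus(kind: str, path: str) -> str:
        # 'openssl x509/rsa -noout -modulus' prints the key modulus.
        out = subprocess.run(["openssl", kind, "-noout", "-modulus", "-in", path],
                             capture_output=True, text=True, check=True)
        return out.stdout.strip()

    cert = modulus("x509", "/etc/pki/libvirt/servercert.pem")
    key = modulus("rsa", "/etc/pki/libvirt/private/serverkey.pem")
    print("match" if cert == key else "MISMATCH")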
Dec  3 12:55:00 np0005544501 python3.9[138031]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  3 12:55:00 np0005544501 systemd[1]: Reloading.
Dec  3 12:55:00 np0005544501 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 12:55:00 np0005544501 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 12:55:00 np0005544501 systemd[1]: Starting libvirt logging daemon socket...
Dec  3 12:55:00 np0005544501 systemd[1]: Listening on libvirt logging daemon socket.
Dec  3 12:55:00 np0005544501 systemd[1]: Starting libvirt logging daemon admin socket...
Dec  3 12:55:00 np0005544501 systemd[1]: Listening on libvirt logging daemon admin socket.
Dec  3 12:55:00 np0005544501 systemd[1]: Starting libvirt logging daemon...
Dec  3 12:55:00 np0005544501 systemd[1]: Started libvirt logging daemon.
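The restart sequence above (and the four that follow) pairs daemon_reload=True with state=restarted, i.e. systemd re-reads unit files before the restart, which is what picks up the new socket.d overrides. Sketched:

    import subprocess

    for unit in ["virtlogd", "virtnodedevd", "virtproxyd", "virtqemud", "virtsecretd"]:
        # daemon_reload=True ~ 'daemon-reload'; state=restarted ~ 'restart'
        subprocess.run(["systemctl", "daemon-reload"], check=True)
        subprocess.run(["systemctl", "restart", f"{unit}.service"], check=True)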
Dec  3 12:55:01 np0005544501 python3.9[138225]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  3 12:55:01 np0005544501 systemd[1]: Reloading.
Dec  3 12:55:01 np0005544501 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 12:55:01 np0005544501 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 12:55:01 np0005544501 systemd[1]: Starting libvirt nodedev daemon socket...
Dec  3 12:55:01 np0005544501 systemd[1]: Listening on libvirt nodedev daemon socket.
Dec  3 12:55:01 np0005544501 systemd[1]: Starting libvirt nodedev daemon admin socket...
Dec  3 12:55:01 np0005544501 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Dec  3 12:55:01 np0005544501 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Dec  3 12:55:01 np0005544501 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Dec  3 12:55:01 np0005544501 systemd[1]: Starting libvirt nodedev daemon...
Dec  3 12:55:01 np0005544501 systemd[1]: Started libvirt nodedev daemon.
Dec  3 12:55:02 np0005544501 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Dec  3 12:55:02 np0005544501 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Dec  3 12:55:02 np0005544501 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Dec  3 12:55:02 np0005544501 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Dec  3 12:55:02 np0005544501 python3.9[138442]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  3 12:55:02 np0005544501 systemd[1]: Reloading.
Dec  3 12:55:02 np0005544501 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 12:55:02 np0005544501 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 12:55:02 np0005544501 systemd[1]: Starting libvirt proxy daemon admin socket...
Dec  3 12:55:02 np0005544501 systemd[1]: Starting libvirt proxy daemon read-only socket...
Dec  3 12:55:02 np0005544501 systemd[1]: Listening on libvirt proxy daemon admin socket.
Dec  3 12:55:02 np0005544501 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Dec  3 12:55:02 np0005544501 systemd[1]: Starting libvirt proxy daemon...
Dec  3 12:55:02 np0005544501 systemd[1]: Started libvirt proxy daemon.
Dec  3 12:55:03 np0005544501 setroubleshoot[138366]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 5d4d6a5a-ca46-42c3-a497-da3d96985cd0
Dec  3 12:55:03 np0005544501 setroubleshoot[138366]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.

*****  Plugin dac_override (91.4 confidence) suggests   **********************

If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
Then turn on full auditing to get path information about the offending file and generate the error again.
Do

Turn on full auditing
# auditctl -w /etc/shadow -p w
Try to recreate AVC. Then execute
# ausearch -m avc -ts recent
If you see PATH record check ownership/permissions on file, and fix it,
otherwise report as a bugzilla.

*****  Plugin catchall (9.59 confidence) suggests   **************************

If you believe that virtlogd should have the dac_read_search capability by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
# semodule -X 300 -i my-virtlogd.pp
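setroubleshoot's catchall suggestion above can be scripted once the access has been judged legitimate; the commands are exactly the ones it prints:

    import subprocess

    # Collect the raw virtlogd AVC records...
    raw = subprocess.run(["ausearch", "-c", "virtlogd", "--raw"],
                         capture_output=True, check=True).stdout
    # ...build a local policy module from them...
    subprocess.run(["audit2allow", "-M", "my-virtlogd"], input=raw, check=True)
    # ...and install it at priority 300.
    subprocess.run(["semodule", "-X", "300", "-i", "my-virtlogd.pp"], check=True)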
Dec  3 12:55:03 np0005544501 python3.9[138661]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  3 12:55:03 np0005544501 systemd[1]: Reloading.
Dec  3 12:55:03 np0005544501 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 12:55:03 np0005544501 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 12:55:03 np0005544501 systemd[1]: Listening on libvirt locking daemon socket.
Dec  3 12:55:04 np0005544501 systemd[1]: Starting libvirt QEMU daemon socket...
Dec  3 12:55:04 np0005544501 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec  3 12:55:04 np0005544501 systemd[1]: Starting Virtual Machine and Container Registration Service...
Dec  3 12:55:04 np0005544501 systemd[1]: Listening on libvirt QEMU daemon socket.
Dec  3 12:55:04 np0005544501 systemd[1]: Starting libvirt QEMU daemon admin socket...
Dec  3 12:55:04 np0005544501 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Dec  3 12:55:04 np0005544501 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Dec  3 12:55:04 np0005544501 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Dec  3 12:55:04 np0005544501 systemd[1]: Started Virtual Machine and Container Registration Service.
Dec  3 12:55:04 np0005544501 systemd[1]: Starting libvirt QEMU daemon...
Dec  3 12:55:04 np0005544501 systemd[1]: Started libvirt QEMU daemon.
Dec  3 12:55:04 np0005544501 python3.9[138877]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  3 12:55:04 np0005544501 systemd[1]: Reloading.
Dec  3 12:55:04 np0005544501 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 12:55:04 np0005544501 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 12:55:05 np0005544501 systemd[1]: Starting libvirt secret daemon socket...
Dec  3 12:55:05 np0005544501 systemd[1]: Listening on libvirt secret daemon socket.
Dec  3 12:55:05 np0005544501 systemd[1]: Starting libvirt secret daemon admin socket...
Dec  3 12:55:05 np0005544501 systemd[1]: Starting libvirt secret daemon read-only socket...
Dec  3 12:55:05 np0005544501 systemd[1]: Listening on libvirt secret daemon admin socket.
Dec  3 12:55:05 np0005544501 systemd[1]: Listening on libvirt secret daemon read-only socket.
Dec  3 12:55:05 np0005544501 systemd[1]: Starting libvirt secret daemon...
Dec  3 12:55:05 np0005544501 systemd[1]: Started libvirt secret daemon.
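The preceding tasks socket-activate and restart the modular libvirt daemons (virtproxyd, virtqemud, virtsecretd) one at a time. A quick smoke test, assuming the default system URI (virsh reaches virtsecretd for secret queries through the same connection):

  virsh -c qemu:///system version
  virsh -c qemu:///system secret-list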
Dec  3 12:55:06 np0005544501 python3.9[139089]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:55:06 np0005544501 python3.9[139241]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  3 12:55:07 np0005544501 python3.9[139393]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:55:08 np0005544501 python3.9[139516]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764784507.2655494-1120-155022879746649/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:55:09 np0005544501 podman[139542]: 2025-12-03 17:55:09.028551562 +0000 UTC m=+0.184114125 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_controller, container_name=ovn_controller)
Dec  3 12:55:09 np0005544501 python3.9[139692]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:55:10 np0005544501 python3.9[139844]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:55:10 np0005544501 python3.9[139922]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:55:11 np0005544501 python3.9[140074]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:55:11 np0005544501 python3.9[140152]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.ekbqq1mc recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:55:12 np0005544501 python3.9[140304]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:55:13 np0005544501 python3.9[140382]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:55:13 np0005544501 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Dec  3 12:55:13 np0005544501 systemd[1]: setroubleshootd.service: Deactivated successfully.
Dec  3 12:55:13 np0005544501 python3.9[140534]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 12:55:14 np0005544501 python3[140687]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  3 12:55:15 np0005544501 python3.9[140839]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:55:16 np0005544501 python3.9[140917]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:55:17 np0005544501 python3.9[141069]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:55:17 np0005544501 python3.9[141147]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:55:18 np0005544501 python3.9[141299]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:55:18 np0005544501 python3.9[141377]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:55:19 np0005544501 python3.9[141529]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:55:20 np0005544501 python3.9[141607]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:55:20 np0005544501 python3.9[141759]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:55:21 np0005544501 python3.9[141884]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764784520.3186655-1245-173345680538493/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:55:22 np0005544501 python3.9[142036]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:55:23 np0005544501 python3.9[142188]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 12:55:23 np0005544501 python3.9[142343]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=
  include "/etc/nftables/iptables.nft"
  include "/etc/nftables/edpm-chains.nft"
  include "/etc/nftables/edpm-rules.nft"
  include "/etc/nftables/edpm-jumps.nft"
 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:55:24 np0005544501 python3.9[142495]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 12:55:25 np0005544501 python3.9[142648]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 12:55:26 np0005544501 python3.9[142802]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 12:55:27 np0005544501 python3.9[142957]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
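The tasks logged between 12:55:23 and 12:55:27 implement a check-before-apply pattern: the concatenated ruleset is validated with nft -c, the include block is persisted to /etc/sysconfig/nftables.conf, the chains are (re)created, and the flush-and-reload runs only while the edpm-rules.nft.changed marker exists, after which the marker is removed. The same sequence by hand, using the exact files from the log:

  # Dry run: parse and validate the assembled ruleset without committing it
  cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft \
      /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft \
      /etc/nftables/edpm-jumps.nft | nft -c -f -
  # Ensure the chains exist, then flush and reload the rules in one transaction
  nft -f /etc/nftables/edpm-chains.nft
  cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft \
      /etc/nftables/edpm-update-jumps.nft | nft -f -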
Dec  3 12:55:27 np0005544501 python3.9[143109]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:55:28 np0005544501 python3.9[143232]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764784527.348562-1317-225444930133308/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:55:29 np0005544501 python3.9[143384]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:55:29 np0005544501 python3.9[143507]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764784528.6657531-1332-28433265779247/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:55:30 np0005544501 python3.9[143659]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:55:31 np0005544501 python3.9[143782]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764784530.059072-1347-239144870383439/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:55:32 np0005544501 python3.9[143934]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 12:55:32 np0005544501 systemd[1]: Reloading.
Dec  3 12:55:32 np0005544501 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 12:55:32 np0005544501 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 12:55:32 np0005544501 systemd[1]: Reached target edpm_libvirt.target.
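edpm_libvirt.target is the unit grouping the libvirt services deployed above; once systemd reports "Reached target", the grouping can be inspected with standard tooling:

  systemctl list-dependencies edpm_libvirt.target
  systemctl is-active edpm_libvirt.target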
Dec  3 12:55:33 np0005544501 python3.9[144125]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec  3 12:55:33 np0005544501 systemd[1]: Reloading.
Dec  3 12:55:33 np0005544501 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 12:55:33 np0005544501 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 12:55:33 np0005544501 systemd[1]: Reloading.
Dec  3 12:55:33 np0005544501 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 12:55:33 np0005544501 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 12:55:34 np0005544501 systemd[1]: session-21.scope: Deactivated successfully.
Dec  3 12:55:34 np0005544501 systemd[1]: session-21.scope: Consumed 3min 27.528s CPU time.
Dec  3 12:55:34 np0005544501 systemd-logind[784]: Session 21 logged out. Waiting for processes to exit.
Dec  3 12:55:34 np0005544501 systemd-logind[784]: Removed session 21.
Dec  3 12:55:39 np0005544501 systemd-logind[784]: New session 22 of user zuul.
Dec  3 12:55:39 np0005544501 systemd[1]: Started Session 22 of User zuul.
Dec  3 12:55:39 np0005544501 podman[144224]: 2025-12-03 17:55:39.296652298 +0000 UTC m=+0.133070001 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  3 12:55:40 np0005544501 python3.9[144401]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 12:55:41 np0005544501 python3.9[144557]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  3 12:55:41 np0005544501 systemd[1]: Reloading.
Dec  3 12:55:42 np0005544501 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 12:55:42 np0005544501 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 12:55:43 np0005544501 python3.9[144742]: ansible-ansible.builtin.service_facts Invoked
Dec  3 12:55:43 np0005544501 network[144759]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  3 12:55:43 np0005544501 network[144760]: 'network-scripts' will be removed from distribution in near future.
Dec  3 12:55:43 np0005544501 network[144761]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  3 12:55:48 np0005544501 python3.9[145032]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 12:55:49 np0005544501 python3.9[145185]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:55:50 np0005544501 python3.9[145337]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:55:51 np0005544501 python3.9[145489]: ansible-ansible.legacy.command Invoked with _raw_params=
  if systemctl is-active certmonger.service; then
    systemctl disable --now certmonger.service
    test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
  fi
 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 12:55:52 np0005544501 python3.9[145641]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  3 12:55:52 np0005544501 python3.9[145793]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  3 12:55:52 np0005544501 systemd[1]: Reloading.
Dec  3 12:55:53 np0005544501 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 12:55:53 np0005544501 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 12:55:53 np0005544501 python3.9[145981]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 12:55:54 np0005544501 python3.9[146134]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 12:55:55 np0005544501 python3.9[146284]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 12:55:56 np0005544501 python3.9[146436]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:55:57 np0005544501 python3.9[146557]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764784555.822359-133-20538095941079/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  3 12:55:58 np0005544501 python3.9[146709]: ansible-ansible.builtin.group Invoked with name=libvirt state=present force=False system=False local=False non_unique=False gid=None gid_min=None gid_max=None
Dec  3 12:55:59 np0005544501 python3.9[146863]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Dec  3 12:55:59 np0005544501 python3.9[147016]: ansible-ansible.builtin.group Invoked with gid=42405 name=ceilometer state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  3 12:56:00 np0005544501 python3.9[147174]: ansible-ansible.builtin.user Invoked with comment=ceilometer user group=ceilometer groups=['libvirt'] name=ceilometer shell=/sbin/nologin state=present uid=42405 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
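The ceilometer account is created with uid/gid 42405, a nologin shell, and libvirt as a supplementary group so the compute agent can poll the libvirt socket. A quick verification on the host:

  getent passwd ceilometer    # expect uid/gid 42405 and shell /sbin/nologin
  id -nG ceilometer           # expect the groups ceilometer and libvirt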
Dec  3 12:56:02 np0005544501 python3.9[147332]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:56:03 np0005544501 python3.9[147453]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764784562.05611-201-159752340878172/.source.conf _original_basename=ceilometer.conf follow=False checksum=f74f01c63e6cdeca5458ef9aff2a1db5d6a4e4b9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:56:03 np0005544501 python3.9[147603]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:56:04 np0005544501 python3.9[147724]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764784563.2862043-201-161584532382319/.source.yaml _original_basename=polling.yaml follow=False checksum=6c8680a286285f2e0ef9fa528ca754765e5ed0e5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:56:05 np0005544501 python3.9[147874]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:56:05 np0005544501 python3.9[147995]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764784564.5294137-201-214490760490626/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:56:06 np0005544501 python3.9[148145]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 12:56:07 np0005544501 python3.9[148297]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 12:56:07 np0005544501 python3.9[148449]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:56:08 np0005544501 python3.9[148570]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764784567.3070023-260-18085257016590/.source.json follow=False _original_basename=ceilometer-agent-compute.json.j2 checksum=264d11e8d3809e7ef745878dce7edd46098e25b2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:56:09 np0005544501 python3.9[148720]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:56:09 np0005544501 podman[148770]: 2025-12-03 17:56:09.571991778 +0000 UTC m=+0.118831827 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  3 12:56:09 np0005544501 python3.9[148806]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:56:10 np0005544501 python3.9[148970]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:56:11 np0005544501 python3.9[149091]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764784569.9234245-260-70433161758408/.source.json follow=False _original_basename=ceilometer_agent_compute.json.j2 checksum=4096a0f5410f47dcaf8ab19e56a9d8e211effecd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:56:11 np0005544501 python3.9[149241]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:56:12 np0005544501 python3.9[149362]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764784571.1900992-260-191189307605635/.source.yaml follow=False _original_basename=ceilometer_prom_exporter.yaml.j2 checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:56:12 np0005544501 python3.9[149512]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:56:13 np0005544501 python3.9[149633]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/firewall.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764784572.4145565-260-4033660881879/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:56:14 np0005544501 python3.9[149783]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:56:14 np0005544501 python3.9[149904]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764784573.6614683-260-119596072846129/.source.json follow=False _original_basename=node_exporter.json.j2 checksum=6e4982940d2bfae88404914dfaf72552f6356d81 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:56:15 np0005544501 python3.9[150054]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:56:16 np0005544501 python3.9[150175]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764784574.9441946-260-112053989662360/.source.yaml follow=False _original_basename=node_exporter.yaml.j2 checksum=81d906d3e1e8c4f8367276f5d3a67b80ca7e989e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:56:16 np0005544501 python3.9[150325]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:56:17 np0005544501 python3.9[150446]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764784576.1806557-260-121689308830635/.source.json follow=False _original_basename=openstack_network_exporter.json.j2 checksum=d474f1e4c3dbd24762592c51cbe5311f0a037273 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:56:18 np0005544501 python3.9[150596]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:56:18 np0005544501 python3.9[150717]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764784577.4927502-260-157352211381795/.source.yaml follow=False _original_basename=openstack_network_exporter.yaml.j2 checksum=2b6bd0891e609bf38a73282f42888052b750bed6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:56:19 np0005544501 python3.9[150867]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:56:19 np0005544501 python3.9[150988]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764784578.7111647-260-232751301677813/.source.json follow=False _original_basename=podman_exporter.json.j2 checksum=e342121a88f67e2bae7ebc05d1e6d350470198a5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:56:20 np0005544501 python3.9[151138]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:56:21 np0005544501 python3.9[151259]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764784579.9937663-260-32176537404311/.source.yaml follow=False _original_basename=podman_exporter.yaml.j2 checksum=7ccb5eca2ff1dc337c3f3ecbbff5245af7149c47 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:56:21 np0005544501 python3.9[151409]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:56:22 np0005544501 python3.9[151485]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/node_exporter.yaml _original_basename=node_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/node_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:56:23 np0005544501 python3.9[151635]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:56:23 np0005544501 python3.9[151711]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml _original_basename=podman_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/podman_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:56:24 np0005544501 python3.9[151861]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:56:24 np0005544501 python3.9[151937]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:56:25 np0005544501 python3.9[152089]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:56:26 np0005544501 python3.9[152241]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:56:27 np0005544501 python3.9[152393]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 12:56:27 np0005544501 python3.9[152545]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=podman.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 12:56:28 np0005544501 systemd[1]: Reloading.
Dec  3 12:56:28 np0005544501 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 12:56:28 np0005544501 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 12:56:28 np0005544501 systemd[1]: Listening on Podman API Socket.
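With podman.socket listening, the Podman REST API becomes available on the system socket (normally /run/podman/podman.sock); the exporters configured above rely on it. A hedged smoke test, with the API version segment adjusted to the installed Podman:

  curl --unix-socket /run/podman/podman.sock http://d/v4.0.0/libpod/info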
Dec  3 12:56:29 np0005544501 python3.9[152735]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:56:29 np0005544501 python3.9[152858]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764784588.688457-482-41338470744744/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  3 12:56:30 np0005544501 python3.9[152934]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:56:30 np0005544501 python3.9[153057]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764784588.688457-482-41338470744744/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  3 12:56:31 np0005544501 python3.9[153209]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=ceilometer_agent_compute.json debug=False
Dec  3 12:56:32 np0005544501 python3.9[153361]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  3 12:56:34 np0005544501 python3[153513]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=ceilometer_agent_compute.json log_base_path=/var/log/containers/stdouts debug=False
Dec  3 12:56:39 np0005544501 podman[153562]: 2025-12-03 17:56:39.918323046 +0000 UTC m=+0.084142812 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 12:56:51 np0005544501 podman[153525]: 2025-12-03 17:56:51.480166153 +0000 UTC m=+17.372307734 image pull b1b6d71b432c07886b3bae74df4dc9841d1f26407d5f96d6c1e400b0154d9a3d quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested
Dec  3 12:56:51 np0005544501 podman[153689]: 2025-12-03 17:56:51.645086831 +0000 UTC m=+0.052139493 image pull b1b6d71b432c07886b3bae74df4dc9841d1f26407d5f96d6c1e400b0154d9a3d quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested
Dec  3 12:56:51 np0005544501 podman[153689]: 2025-12-03 17:56:51.806329766 +0000 UTC m=+0.213382438 container create ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  3 12:56:51 np0005544501 python3[153513]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_compute --conmon-pidfile /run/ceilometer_agent_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck compute --label config_id=edpm --label container_name=ceilometer_agent_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']} --log-driver journald --log-level info --network host --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z --volume /var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z --volume /run/libvirt:/run/libvirt:shared,ro --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested kolla_start
Dec  3 12:56:52 np0005544501 python3.9[153877]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 12:56:53 np0005544501 python3.9[154031]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:56:54 np0005544501 python3.9[154182]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764784613.540716-546-212836112869443/source dest=/etc/systemd/system/edpm_ceilometer_agent_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:56:54 np0005544501 python3.9[154258]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  3 12:56:54 np0005544501 systemd[1]: Reloading.
Dec  3 12:56:55 np0005544501 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 12:56:55 np0005544501 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 12:56:55 np0005544501 python3.9[154369]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 12:56:55 np0005544501 systemd[1]: Reloading.
Dec  3 12:56:56 np0005544501 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 12:56:56 np0005544501 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
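
The two ansible-systemd tasks above (first daemon_reload=True, then state=restarted with enabled=True on edpm_ceilometer_agent_compute.service) are, as a sketch, equivalent to running the following on the host; the unit name is taken from the log:

    # Shell equivalent of the two ansible-systemd invocations logged above.
    systemctl daemon-reload
    systemctl enable edpm_ceilometer_agent_compute.service
    systemctl restart edpm_ceilometer_agent_compute.service

The second "Reloading." line comes from systemctl enable itself, which reloads the manager after creating the unit symlink, which is why the rc-local and sysv-generator messages repeat.
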
Dec  3 12:56:56 np0005544501 systemd[1]: Starting ceilometer_agent_compute container...
Dec  3 12:56:56 np0005544501 systemd[1]: Started libcrun container.
Dec  3 12:56:56 np0005544501 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38351f0e2a6fe78d67a73cd28e17977909b19ff2624c534b1150afc17b83258f/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  3 12:56:56 np0005544501 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38351f0e2a6fe78d67a73cd28e17977909b19ff2624c534b1150afc17b83258f/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Dec  3 12:56:56 np0005544501 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38351f0e2a6fe78d67a73cd28e17977909b19ff2624c534b1150afc17b83258f/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Dec  3 12:56:56 np0005544501 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38351f0e2a6fe78d67a73cd28e17977909b19ff2624c534b1150afc17b83258f/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Dec  3 12:56:56 np0005544501 systemd[1]: Started /usr/bin/podman healthcheck run ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad.
Dec  3 12:56:56 np0005544501 podman[154409]: 2025-12-03 17:56:56.339810031 +0000 UTC m=+0.129098534 container init ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec  3 12:56:56 np0005544501 ceilometer_agent_compute[154423]: + sudo -E kolla_set_configs
Dec  3 12:56:56 np0005544501 ceilometer_agent_compute[154423]: sudo: unable to send audit message: Operation not permitted
Dec  3 12:56:56 np0005544501 podman[154409]: 2025-12-03 17:56:56.375983775 +0000 UTC m=+0.165272268 container start ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  3 12:56:56 np0005544501 podman[154409]: ceilometer_agent_compute
Dec  3 12:56:56 np0005544501 systemd[1]: Started ceilometer_agent_compute container.
Dec  3 12:56:56 np0005544501 ceilometer_agent_compute[154423]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  3 12:56:56 np0005544501 ceilometer_agent_compute[154423]: INFO:__main__:Validating config file
Dec  3 12:56:56 np0005544501 ceilometer_agent_compute[154423]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  3 12:56:56 np0005544501 ceilometer_agent_compute[154423]: INFO:__main__:Copying service configuration files
Dec  3 12:56:56 np0005544501 ceilometer_agent_compute[154423]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Dec  3 12:56:56 np0005544501 ceilometer_agent_compute[154423]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Dec  3 12:56:56 np0005544501 ceilometer_agent_compute[154423]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Dec  3 12:56:56 np0005544501 ceilometer_agent_compute[154423]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Dec  3 12:56:56 np0005544501 ceilometer_agent_compute[154423]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Dec  3 12:56:56 np0005544501 ceilometer_agent_compute[154423]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Dec  3 12:56:56 np0005544501 ceilometer_agent_compute[154423]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  3 12:56:56 np0005544501 ceilometer_agent_compute[154423]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  3 12:56:56 np0005544501 ceilometer_agent_compute[154423]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  3 12:56:56 np0005544501 ceilometer_agent_compute[154423]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  3 12:56:56 np0005544501 ceilometer_agent_compute[154423]: INFO:__main__:Writing out command to execute
Dec  3 12:56:56 np0005544501 ceilometer_agent_compute[154423]: ++ cat /run_command
Dec  3 12:56:56 np0005544501 ceilometer_agent_compute[154423]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec  3 12:56:56 np0005544501 ceilometer_agent_compute[154423]: + ARGS=
Dec  3 12:56:56 np0005544501 ceilometer_agent_compute[154423]: + sudo kolla_copy_cacerts
Dec  3 12:56:56 np0005544501 podman[154430]: 2025-12-03 17:56:56.442144676 +0000 UTC m=+0.054249595 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  3 12:56:56 np0005544501 systemd[1]: ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad-327c493a6d1a2959.service: Main process exited, code=exited, status=1/FAILURE
Dec  3 12:56:56 np0005544501 systemd[1]: ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad-327c493a6d1a2959.service: Failed with result 'exit-code'.
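
The failed transient unit above, whose name embeds the container ID, corresponds to the `podman healthcheck run` started a few lines earlier: the first probe exited 1 because it ran while the agent was still initializing, and podman recorded it as health_status=starting with health_failing_streak=1. A sketch for probing the health state by hand (commands assumed, not from the log):

    # Re-run the probe and read the recorded health state.
    podman healthcheck run ceilometer_agent_compute; echo "probe exit: $?"
    podman inspect ceilometer_agent_compute \
        --format '{{.State.Health.Status}} failing_streak={{.State.Health.FailingStreak}}'
    # Note: older podman releases expose this under .State.Healthcheck instead.
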
Dec  3 12:56:56 np0005544501 ceilometer_agent_compute[154423]: sudo: unable to send audit message: Operation not permitted
Dec  3 12:56:56 np0005544501 ceilometer_agent_compute[154423]: + [[ ! -n '' ]]
Dec  3 12:56:56 np0005544501 ceilometer_agent_compute[154423]: + . kolla_extend_start
Dec  3 12:56:56 np0005544501 ceilometer_agent_compute[154423]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec  3 12:56:56 np0005544501 ceilometer_agent_compute[154423]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Dec  3 12:56:56 np0005544501 ceilometer_agent_compute[154423]: + umask 0022
Dec  3 12:56:56 np0005544501 ceilometer_agent_compute[154423]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
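
The '+' lines above are bash xtrace output from kolla_start inside the container. Reconstructed from the trace alone (a sketch, not the script's actual source), the sequence is:

    # Sketch of the traced kolla_start tail; comments describe what the log shows.
    sudo -E kolla_set_configs        # validates config.json, copies configs, sets permissions
    CMD="$(cat /run_command)"        # '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
    ARGS=
    sudo kolla_copy_cacerts          # installs the mounted CA bundle
    . kolla_extend_start             # sourced right after the trace's [[ ! -n '' ]] test
    echo "Running command: '${CMD}'"
    umask 0022
    exec ${CMD} ${ARGS}

The two "sudo: unable to send audit message: Operation not permitted" warnings are harmless here; they typically mean the container lacks CAP_AUDIT_WRITE, so sudo cannot emit kernel audit records.
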
Dec  3 12:56:57 np0005544501 python3.9[154605]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.219 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:45
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.220 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.220 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.220 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.220 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.220 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.220 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.220 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.220 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.220 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.220 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.221 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.221 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.221 2 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.221 2 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.221 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.221 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.221 2 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.221 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.221 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.221 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.221 2 WARNING oslo_config.cfg [-] Deprecated: Option "tenant_name_discovery" from group "DEFAULT" is deprecated. Use option "identity_name_discovery" from group "DEFAULT".
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.222 2 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.222 2 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.222 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.222 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.222 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.222 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.222 2 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.222 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.222 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.222 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.222 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.222 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.222 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.222 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.223 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.223 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.223 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.223 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.223 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.223 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.223 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.223 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.223 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.223 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.223 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.223 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.223 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.223 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.223 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.224 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.224 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.224 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.224 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.224 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.224 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.224 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.224 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.224 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.224 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.224 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.224 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.224 2 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.225 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.225 2 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.225 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.225 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.225 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.225 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.225 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.225 2 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.225 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.225 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.225 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.225 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.225 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.226 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.226 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.226 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.226 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.226 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.226 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.226 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.226 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.226 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.226 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.226 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.226 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.227 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.227 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.227 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.227 2 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.227 2 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.227 2 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.227 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.227 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.227 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.228 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.228 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.228 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.228 2 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.228 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.228 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.228 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.228 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.228 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.229 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.229 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.229 2 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.229 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.229 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.229 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.229 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.229 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.229 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.229 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.230 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.230 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.230 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.230 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.230 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.230 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.230 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.230 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.230 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.230 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.231 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.231 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.231 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.231 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.231 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.231 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.231 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.231 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.231 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.232 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.232 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.232 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.232 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.232 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.232 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.232 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.232 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.232 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.232 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.233 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.233 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.233 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.233 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.233 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.233 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.233 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
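
The banner-delimited block above is oslo.config's log_opt_values() dump of the effective configuration for the parent polling process (pid 2); secret-typed options such as coordination.backend_url and publisher.telemetry_secret are masked as ****, and the one WARNING notes that tenant_name_discovery has been superseded by identity_name_discovery. Assuming shell access to the host, the merged configuration sources named at the top of the dump can be read back directly (a sketch; paths are taken from the dump itself):

    # Read the effective configuration sources from inside the container.
    podman exec ceilometer_agent_compute cat /etc/ceilometer/ceilometer.conf
    podman exec ceilometer_agent_compute ls /etc/ceilometer/ceilometer.conf.d
    podman exec ceilometer_agent_compute \
        cat /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf

The near-identical dump that follows, logged by pid 12, is evidently the heartbeat child service announced just below printing its own configuration on startup.
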
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.253 12 INFO ceilometer.polling.manager [-] Starting heartbeat child service. Listening on /var/lib/ceilometer/ceilometer-compute.socket
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.254 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.254 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.254 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.254 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.254 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.254 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.254 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.254 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.254 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.255 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.255 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.255 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.255 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.255 12 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.255 12 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.255 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.255 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.255 12 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.255 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.255 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.255 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.255 12 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.255 12 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.255 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.255 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.256 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.256 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.256 12 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.256 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.256 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.256 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.256 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.256 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.256 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.256 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.256 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.256 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.256 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.256 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.256 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.256 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.256 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.257 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.257 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.257 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.257 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.257 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.257 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.257 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.257 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.257 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.257 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.257 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.257 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.257 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.257 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.257 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.257 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.257 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.258 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.258 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.258 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.258 12 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.258 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.258 12 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.258 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.258 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.258 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.258 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.258 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.258 12 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.258 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.258 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.258 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.258 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.258 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.259 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.259 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.259 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.259 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.259 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.259 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.259 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.259 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.259 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.259 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.259 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.259 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.259 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.259 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.260 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.260 12 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.260 12 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.260 12 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.260 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.260 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.260 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.260 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.260 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.260 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.260 12 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.260 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.260 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.260 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.261 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.261 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.261 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.261 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.261 12 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.261 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.261 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.261 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.261 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.261 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.261 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.261 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.261 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.261 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.262 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.262 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.262 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.262 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.262 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.262 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.262 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.262 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.262 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.262 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.262 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.262 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.262 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.263 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.263 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.263 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.263 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.263 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.263 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.263 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.263 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.263 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.263 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.263 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.263 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.263 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.263 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.263 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.263 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.264 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.264 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.264 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.264 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.264 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.264 12 DEBUG cotyledon._service [-] Run service AgentHeartBeatManager(0) [12] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.265 12 DEBUG ceilometer.polling.manager [-] Started heartbeat child process. run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:519
Dec  3 12:56:57 np0005544501 systemd[1]: Stopping ceilometer_agent_compute container...
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.267 12 DEBUG ceilometer.polling.manager [-] Started heartbeat update thread _read_queue /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:522
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.268 12 DEBUG ceilometer.polling.manager [-] Started heartbeat reporting thread _report_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:527
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.479 15 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.490 15 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.491 15 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.491 15 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.622 15 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.622 15 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.622 15 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.622 15 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.623 15 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.623 15 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.623 15 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.623 15 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.623 15 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.623 15 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.623 15 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.623 15 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.624 15 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.624 15 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.624 15 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.624 15 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.624 15 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.624 15 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.625 15 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.625 15 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.625 15 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.625 15 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.625 15 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.625 15 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.626 15 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.626 15 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.626 15 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.626 15 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.626 15 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.626 15 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.626 15 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.626 15 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.627 15 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.627 15 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.627 15 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.627 15 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.627 15 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.627 15 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.627 15 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.627 15 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.628 15 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.628 15 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.628 15 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.628 15 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.628 15 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.628 15 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.628 15 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.628 15 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.629 15 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.629 15 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.629 15 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.629 15 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.629 15 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.629 15 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.629 15 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.630 15 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.630 15 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.630 15 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.630 15 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.630 15 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.630 15 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.630 15 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.630 15 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.631 15 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.631 15 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.631 15 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.631 15 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.631 15 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.631 15 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.631 15 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.631 15 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.632 15 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.632 15 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.632 15 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.632 15 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.632 15 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.632 15 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.632 15 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.632 15 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.633 15 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.633 15 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.633 15 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.633 15 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.633 15 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.633 15 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.633 15 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.633 15 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.634 15 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.634 15 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.634 15 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.634 15 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.634 15 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.634 15 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.634 15 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.635 15 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.635 15 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.635 15 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.635 15 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.635 15 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.635 15 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.635 15 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.635 15 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.635 15 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.636 15 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.636 15 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.636 15 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.636 15 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.636 15 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.636 15 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.636 15 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.636 15 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.636 15 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.637 15 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.637 15 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.637 15 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.637 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.637 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.637 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.637 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.637 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.637 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.637 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.638 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.638 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.638 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.638 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.638 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.638 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.638 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.638 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.638 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.638 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.638 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.638 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.639 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.639 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.639 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.639 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.639 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.639 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.639 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.639 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.639 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.639 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.640 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.640 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.640 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.640 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.640 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.640 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.640 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.640 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.641 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.641 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.641 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.641 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.641 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.641 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.641 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.641 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.641 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.642 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.642 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.642 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.642 15 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.642 15 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.642 15 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.642 15 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
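The block ending above is the standard oslo.config startup dump that cotyledon's oslo_config_glue emits at DEBUG level via log_opt_values(). A minimal sketch of how such a dump is produced, using a small illustrative option set (ceilometer registers far more options than shown here):

import logging

from oslo_config import cfg

logging.basicConfig(level=logging.DEBUG)
LOG = logging.getLogger(__name__)

conf = cfg.ConfigOpts()
conf.register_opts([
    cfg.IntOpt('batch_size', default=50),
    cfg.BoolOpt('prometheus_tls_enable', default=True),
    cfg.ListOpt('prometheus_listen_addresses', default=['[::]:9101']),
], group='polling')
conf.register_opts([cfg.StrOpt('telemetry_secret', secret=True)],
                   group='publisher')

conf([])  # parse an empty argv; defaults apply
# Emits one "group.option = value" DEBUG line per option and masks
# options registered with secret=True as ****, as in the dump above.
conf.log_opt_values(LOG, logging.DEBUG)

Options registered with secret=True (for example publisher.telemetry_secret or coordination.backend_url above) are the ones rendered as **** in the dump.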
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.642 15 DEBUG cotyledon._service [-] Run service AgentManager(0) [15] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.645 15 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.12/site-packages/ceilometer/agent.py:64
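The dict in the line above is the agent's parsed polling.yaml. A round-trip sketch, reconstructing a plausible file body from the logged values (not read from the host) and checking it parses back to the same structure:

import yaml  # PyYAML, assumed available

POLLING_YAML = """
sources:
  - name: pollsters
    interval: 120
    meters:
      - power.state
      - cpu
      - memory.usage
      - 'disk.*'
      - 'network.*'
"""

parsed = yaml.safe_load(POLLING_YAML)
assert parsed == {'sources': [{'name': 'pollsters', 'interval': 120,
                               'meters': ['power.state', 'cpu', 'memory.usage',
                                          'disk.*', 'network.*']}]}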
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.658 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads to execute them. Therefore, one can expect the process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.658 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
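The two manager lines above say the source has more pollsters than worker threads (polling.threads_to_process_pollsters = 1 in the dump), so pollsters queue on a single-thread executor and a cycle takes roughly the sum of the individual pollster runtimes. A simplified model with concurrent.futures; the poll() callable is a stand-in, not ceilometer code:

import time
from concurrent.futures import ThreadPoolExecutor

def poll(meter):
    time.sleep(0.1)  # stand-in for discovery plus sample collection
    return f'{meter}: polled'

meters = ['power.state', 'cpu', 'memory.usage']

# polling.threads_to_process_pollsters = 1 in the dump above
with ThreadPoolExecutor(max_workers=1) as executor:
    futures = [executor.submit(poll, m) for m in meters]
    for future in futures:
        print(future.result())  # tasks complete strictly one after another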
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.658 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fda2e284050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fda2dde7920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.659 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fda2e2875c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fda2f3fb980>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.659 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fda2e286060>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fda2dde7920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.659 15 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.660 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fda2e285880>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fda2dde7920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.660 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fda2e2858e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fda2dde7920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.660 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fda2e2851c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fda2dde7920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.660 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fda2e2851f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fda2dde7920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.660 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fda2e285250>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fda2dde7920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.661 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fda2e2852b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fda2dde7920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.661 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fda2e285310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fda2dde7920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.661 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fda2e285b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fda2dde7920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.661 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fda2e286330>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fda2dde7920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.661 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fda2e285b50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fda2dde7920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.662 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fda2e285370>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fda2dde7920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.662 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fda2e2853d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fda2dde7920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.662 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fda2e285be0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fda2dde7920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.662 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fda2e285430>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fda2dde7920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.663 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fda2e287c80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fda2dde7920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.663 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fda2e2874a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fda2dde7920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.663 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fda2e287500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fda2dde7920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.663 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fda2e287590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fda2dde7920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.663 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fda2e2855b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fda2dde7920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.663 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fda2e285670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fda2dde7920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.663 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fda2e2856a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fda2dde7920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.663 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fda2e2846e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fda2dde7920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.663 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fda2e285730>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fda2dde7920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.663 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fda2e2857c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fda2dde7920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
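Between the registration lines above, the agent opened its hypervisor connection ("Connecting to libvirt: qemu:///system"). A minimal equivalent using the libvirt Python bindings; a read-only connection is enough for metering-style discovery, assuming libvirt-python is installed and the socket is reachable:

import libvirt  # libvirt-python bindings, assumed installed

# Read-only access is sufficient for metering-style discovery.
conn = libvirt.openReadOnly('qemu:///system')
try:
    # The basis of "local_instances"-style discovery: enumerate domains
    # known to the local hypervisor.
    for dom in conn.listAllDomains():
        print(dom.UUIDString(), dom.name(), dom.state())
finally:
    conn.close()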
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.665 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.665 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fda2e285e50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fda2f3fb980>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.665 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.665 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fda2e285460>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fda2f3fb980>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.666 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.666 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fda2e2858b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fda2f3fb980>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.666 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.666 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fda2e2841d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fda2f3fb980>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.666 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.667 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fda2e285100>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fda2f3fb980>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.667 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.667 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fda2e285220>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fda2f3fb980>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.667 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.667 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fda2e285280>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fda2f3fb980>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.667 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.668 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fda2e2852e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fda2f3fb980>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.668 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.668 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.668 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fda2e285af0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fda2f3fb980>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.668 15 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.669 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fda2e286300>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fda2f3fb980>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.669 15 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.669 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fda2e285b80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fda2f3fb980>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.669 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.669 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fda2e285340>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fda2f3fb980>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.669 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.670 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fda2e2853a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fda2f3fb980>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.670 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.670 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fda30ae9a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fda2f3fb980>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.670 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.670 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fda2e285400>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fda2f3fb980>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.670 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.671 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fda2e287c50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fda2f3fb980>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.671 15 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.671 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fda2e2854c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fda2f3fb980>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.671 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.672 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fda2e2874d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fda2f3fb980>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.672 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.672 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fda310f59d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fda2f3fb980>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.672 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.672 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fda2e285610>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fda2f3fb980>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.673 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.673 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fda2e287530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fda2f3fb980>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.673 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.673 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fda2e285700>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fda2f3fb980>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.673 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.673 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fda2e284ad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fda2f3fb980>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.674 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.674 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fda2e285bb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fda2f3fb980>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.674 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.674 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fda2e285490>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fda2f3fb980>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.674 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
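Every Executing/Skip pair above follows one pattern: run the pollster's discovery method (local_instances here), and when it yields nothing, skip that meter for the cycle. A condensed sketch of that control flow; the names are illustrative, while the real logic lives in ceilometer's _internal_pollster_run:

def run_pollster(name, discover, collect):
    resources = discover()  # e.g. the local_instances discovery method
    if not resources:
        print(f'Skip pollster {name}, no resources found this cycle')
        return []
    return [collect(resource) for resource in resources]

# With no instances running on this compute node, discovery returns an
# empty list and every meter is skipped, matching the log lines above.
samples = run_pollster('memory.usage',
                       discover=lambda: [],
                       collect=lambda resource: resource)
assert samples == []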
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.675 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.675 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.675 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.675 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.675 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.675 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.675 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.675 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.675 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.676 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.676 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.676 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.676 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.676 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.676 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.676 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.676 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.676 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.676 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.676 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.676 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.676 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.676 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.677 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.677 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.677 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
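
Every meter above finishes inside the same 17:56:57 cycle, which is consistent with a single polling source listing them all. A hypothetical /etc/ceilometer/polling.yaml in that shape (only the meter names come from the log; the source name and interval are assumptions):

    sources:
        - name: compute_source      # hypothetical source name
          interval: 120             # assumption; the interval is not shown in this log
          meters:
              - cpu
              - memory.usage
              - power.state
              - disk.*
              - network.*
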
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.769 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:319
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.769 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:323
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.769 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentHeartBeatManager(0) [12]
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.769 15 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [15]
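
ceilometer-polling runs under cotyledon: the master process (the pid column reading 2 above) traps the stop request, forwards SIGTERM to its children, and each child service (AgentHeartBeatManager, AgentManager) exits gracefully. A self-contained sketch of that lifecycle with the cotyledon API (an illustrative worker, not ceilometer's actual classes):

    import time

    import cotyledon

    class Worker(cotyledon.Service):
        name = 'worker'

        def __init__(self, worker_id):
            super().__init__(worker_id)
            self._running = True

        def run(self):
            while self._running:   # stands in for the agent's polling loop
                time.sleep(1)

        def terminate(self):
            # Invoked on SIGTERM -> "Caught SIGTERM signal, graceful exiting ..."
            self._running = False

    if __name__ == '__main__':
        manager = cotyledon.ServiceManager()
        manager.add(Worker, workers=1)
        manager.run()
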
Dec  3 12:56:57 np0005544501 virtqemud[138705]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Dec  3 12:56:57 np0005544501 virtqemud[138705]: hostname: compute-0
Dec  3 12:56:57 np0005544501 virtqemud[138705]: End of file while reading data: Input/output error
Dec  3 12:56:57 np0005544501 virtqemud[138705]: End of file while reading data: Input/output error
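
The two virtqemud EOF lines are the libvirt daemon's side of the same shutdown: the agent container bind-mounts /run/libvirt (see the volumes in the podman config_data below), and its libvirt connections drop when the process exits, so the daemon reads EOF instead of a clean close. For reference, the client side is just a read-only connection over that socket; a minimal libvirt-python sketch, assuming the default qemu:///system URI (libvirt_uri is empty in the config dump further down):

    import libvirt  # libvirt-python

    conn = libvirt.openReadOnly('qemu:///system')  # via the /run/libvirt socket
    print([dom.name() for dom in conn.listAllDomains()])  # what instance discovery walks
    conn.close()  # exiting without close() is what produces the EOF message above
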
Dec  3 12:56:57 np0005544501 ceilometer_agent_compute[154423]: 2025-12-03 17:56:57.784 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:335
Dec  3 12:56:57 np0005544501 systemd[1]: libpod-ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad.scope: Deactivated successfully.
Dec  3 12:56:57 np0005544501 systemd[1]: libpod-ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad.scope: Consumed 1.533s CPU time.
Dec  3 12:56:57 np0005544501 podman[154612]: 2025-12-03 17:56:57.985951 +0000 UTC m=+0.702599822 container died ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, tcib_managed=true)
Dec  3 12:56:57 np0005544501 systemd[1]: ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad-327c493a6d1a2959.timer: Deactivated successfully.
Dec  3 12:56:57 np0005544501 systemd[1]: Stopped /usr/bin/podman healthcheck run ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad.
Dec  3 12:56:58 np0005544501 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad-userdata-shm.mount: Deactivated successfully.
Dec  3 12:56:58 np0005544501 systemd[1]: var-lib-containers-storage-overlay-38351f0e2a6fe78d67a73cd28e17977909b19ff2624c534b1150afc17b83258f-merged.mount: Deactivated successfully.
Dec  3 12:57:01 np0005544501 systemd[1]: virtnodedevd.service: Deactivated successfully.
Dec  3 12:57:02 np0005544501 podman[154612]: 2025-12-03 17:57:02.269358522 +0000 UTC m=+4.986007324 container cleanup ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4)
Dec  3 12:57:02 np0005544501 podman[154612]: ceilometer_agent_compute
Dec  3 12:57:02 np0005544501 podman[154654]: ceilometer_agent_compute
Dec  3 12:57:02 np0005544501 systemd[1]: edpm_ceilometer_agent_compute.service: Deactivated successfully.
Dec  3 12:57:02 np0005544501 systemd[1]: Stopped ceilometer_agent_compute container.
Dec  3 12:57:02 np0005544501 systemd[1]: Starting ceilometer_agent_compute container...
Dec  3 12:57:02 np0005544501 systemd[1]: Started libcrun container.
Dec  3 12:57:02 np0005544501 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38351f0e2a6fe78d67a73cd28e17977909b19ff2624c534b1150afc17b83258f/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  3 12:57:02 np0005544501 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38351f0e2a6fe78d67a73cd28e17977909b19ff2624c534b1150afc17b83258f/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Dec  3 12:57:02 np0005544501 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38351f0e2a6fe78d67a73cd28e17977909b19ff2624c534b1150afc17b83258f/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Dec  3 12:57:02 np0005544501 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38351f0e2a6fe78d67a73cd28e17977909b19ff2624c534b1150afc17b83258f/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Dec  3 12:57:02 np0005544501 systemd[1]: Started /usr/bin/podman healthcheck run ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad.
Dec  3 12:57:02 np0005544501 podman[154667]: 2025-12-03 17:57:02.500078442 +0000 UTC m=+0.122204302 container init ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  3 12:57:02 np0005544501 ceilometer_agent_compute[154682]: + sudo -E kolla_set_configs
Dec  3 12:57:02 np0005544501 ceilometer_agent_compute[154682]: sudo: unable to send audit message: Operation not permitted
Dec  3 12:57:02 np0005544501 podman[154667]: 2025-12-03 17:57:02.534896711 +0000 UTC m=+0.157022561 container start ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true)
Dec  3 12:57:02 np0005544501 podman[154667]: ceilometer_agent_compute
Dec  3 12:57:02 np0005544501 systemd[1]: Started ceilometer_agent_compute container.
Dec  3 12:57:02 np0005544501 ceilometer_agent_compute[154682]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  3 12:57:02 np0005544501 ceilometer_agent_compute[154682]: INFO:__main__:Validating config file
Dec  3 12:57:02 np0005544501 ceilometer_agent_compute[154682]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  3 12:57:02 np0005544501 ceilometer_agent_compute[154682]: INFO:__main__:Copying service configuration files
Dec  3 12:57:02 np0005544501 ceilometer_agent_compute[154682]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Dec  3 12:57:02 np0005544501 ceilometer_agent_compute[154682]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Dec  3 12:57:02 np0005544501 ceilometer_agent_compute[154682]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Dec  3 12:57:02 np0005544501 ceilometer_agent_compute[154682]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Dec  3 12:57:02 np0005544501 ceilometer_agent_compute[154682]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Dec  3 12:57:02 np0005544501 ceilometer_agent_compute[154682]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Dec  3 12:57:02 np0005544501 ceilometer_agent_compute[154682]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  3 12:57:02 np0005544501 ceilometer_agent_compute[154682]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  3 12:57:02 np0005544501 ceilometer_agent_compute[154682]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  3 12:57:02 np0005544501 ceilometer_agent_compute[154682]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  3 12:57:02 np0005544501 ceilometer_agent_compute[154682]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  3 12:57:02 np0005544501 ceilometer_agent_compute[154682]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  3 12:57:02 np0005544501 ceilometer_agent_compute[154682]: INFO:__main__:Writing out command to execute
Dec  3 12:57:02 np0005544501 ceilometer_agent_compute[154682]: ++ cat /run_command
Dec  3 12:57:02 np0005544501 ceilometer_agent_compute[154682]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec  3 12:57:02 np0005544501 ceilometer_agent_compute[154682]: + ARGS=
Dec  3 12:57:02 np0005544501 ceilometer_agent_compute[154682]: + sudo kolla_copy_cacerts
Dec  3 12:57:02 np0005544501 ceilometer_agent_compute[154682]: sudo: unable to send audit message: Operation not permitted
Dec  3 12:57:02 np0005544501 ceilometer_agent_compute[154682]: + [[ ! -n '' ]]
Dec  3 12:57:02 np0005544501 ceilometer_agent_compute[154682]: + . kolla_extend_start
Dec  3 12:57:02 np0005544501 ceilometer_agent_compute[154682]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Dec  3 12:57:02 np0005544501 ceilometer_agent_compute[154682]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec  3 12:57:02 np0005544501 ceilometer_agent_compute[154682]: + umask 0022
Dec  3 12:57:02 np0005544501 ceilometer_agent_compute[154682]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
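
The trace above is kolla_start doing its usual two steps: kolla_set_configs copies files according to /var/lib/kolla/config_files/config.json, then the shell execs the command recorded in /run_command. A hedged reconstruction of that config.json from the copy operations and command logged above (sources, destinations, and the command are taken verbatim from the log; the owner/perm fields are assumptions):

    {
        "command": "/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout",
        "config_files": [
            {"source": "/var/lib/openstack/config/ceilometer.conf",
             "dest": "/etc/ceilometer/ceilometer.conf",
             "owner": "ceilometer", "perm": "0600"},
            {"source": "/var/lib/openstack/config/polling.yaml",
             "dest": "/etc/ceilometer/polling.yaml",
             "owner": "ceilometer", "perm": "0600"},
            {"source": "/var/lib/openstack/config/custom.conf",
             "dest": "/etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf",
             "owner": "ceilometer", "perm": "0600"},
            {"source": "/var/lib/openstack/config/ceilometer-host-specific.conf",
             "dest": "/etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf",
             "owner": "ceilometer", "perm": "0600"}
        ]
    }
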
Dec  3 12:57:02 np0005544501 podman[154689]: 2025-12-03 17:57:02.616270613 +0000 UTC m=+0.049784664 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=1, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.4, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Dec  3 12:57:02 np0005544501 systemd[1]: ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad-5faf5f5157d672c5.service: Main process exited, code=exited, status=1/FAILURE
Dec  3 12:57:02 np0005544501 systemd[1]: ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad-5faf5f5157d672c5.service: Failed with result 'exit-code'.
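
The failed transient unit here appears to be the first podman healthcheck firing while the freshly restarted agent is still initializing (health_status=starting, health_failing_streak=1 in the podman event above), so the probe exits non-zero and systemd records 1/FAILURE; later runs should succeed once the agent is up. A hypothetical wait-until-healthy helper around podman inspect (the container name is from the log; the timeout is an assumption):

    import json
    import subprocess
    import time

    def wait_healthy(name='ceilometer_agent_compute', timeout=60):
        # Poll podman's recorded health state until the container reports healthy.
        deadline = time.time() + timeout
        while time.time() < deadline:
            out = subprocess.run(
                ['podman', 'inspect', '--format', '{{json .State.Health}}', name],
                capture_output=True, text=True, check=True).stdout
            if json.loads(out).get('Status') == 'healthy':
                return True
            time.sleep(2)
        return False
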
Dec  3 12:57:03 np0005544501 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec  3 12:57:03 np0005544501 python3.9[154864]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/node_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.441 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:45
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.441 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.441 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.441 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.442 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.442 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.442 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.442 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.442 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.442 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.442 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.442 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.442 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.442 2 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.443 2 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.443 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.443 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.443 2 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.443 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.443 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.443 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.443 2 WARNING oslo_config.cfg [-] Deprecated: Option "tenant_name_discovery" from group "DEFAULT" is deprecated. Use option "identity_name_discovery" from group "DEFAULT".
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.443 2 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.443 2 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.444 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.444 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.444 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.444 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.444 2 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.444 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.444 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.444 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.444 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.444 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.444 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.444 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.445 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.445 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.445 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.445 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.445 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.445 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.445 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.445 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.445 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.445 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.445 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.445 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.445 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.445 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.446 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.446 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.446 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.446 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.446 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.446 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.446 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.446 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.446 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.446 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.446 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.446 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.447 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.447 2 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.447 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.447 2 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.447 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.447 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.447 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.447 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.447 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.447 2 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.447 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.447 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.447 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.448 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.448 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.448 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.448 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.448 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.448 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.448 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.448 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.448 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.448 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.448 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.448 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.449 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.449 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.449 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.449 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.449 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.449 2 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.449 2 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.449 2 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.449 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.449 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.449 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.449 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.450 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.450 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.450 2 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.450 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.450 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.450 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.450 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.450 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.450 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.450 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.450 2 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.450 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.451 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.451 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.451 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.451 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.451 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.451 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.451 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.451 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.451 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.451 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.451 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.451 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.451 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.452 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.452 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.452 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.452 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.452 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.452 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.452 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.452 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.452 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.452 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.452 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.452 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.453 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.453 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.453 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.453 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.453 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.453 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.453 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.453 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.453 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.453 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.453 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.454 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.454 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.454 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.454 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.454 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.454 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.473 12 INFO ceilometer.polling.manager [-] Starting heartbeat child service. Listening on /var/lib/ceilometer/ceilometer-compute.socket
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.474 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.474 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.474 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.474 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.474 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.474 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.474 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.474 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.474 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.474 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.475 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.475 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.475 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.475 12 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.475 12 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.475 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.475 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.475 12 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.475 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.475 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.475 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.476 12 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.476 12 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.476 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.476 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.476 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.476 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.476 12 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.476 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.476 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.476 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.476 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.476 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.476 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.477 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.477 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.477 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.477 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.477 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.477 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.477 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.477 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.477 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.477 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.477 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.477 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.478 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.478 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.478 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.478 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.478 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.478 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.478 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.478 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.478 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.478 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.478 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.478 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.478 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.479 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.479 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.479 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.479 12 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.479 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.479 12 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.479 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.479 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.479 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.479 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.479 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.479 12 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.480 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.480 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.480 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.480 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.480 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.480 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.480 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.480 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.480 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.480 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.480 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.481 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.481 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.481 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.481 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.481 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.481 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.481 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.481 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.481 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.481 12 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.481 12 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.481 12 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.481 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.482 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.482 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.482 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.482 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.482 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.482 12 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.482 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.482 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.482 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.482 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.482 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.482 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.483 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.483 12 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.483 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.483 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.483 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.483 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.483 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.483 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.483 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.483 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.483 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.483 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.483 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.484 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.484 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.484 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.484 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.484 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.484 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.484 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.484 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.484 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.484 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.484 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.484 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.484 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.485 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.485 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.485 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.485 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.485 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.485 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.485 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.485 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.485 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.485 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.485 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.485 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.485 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.486 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.486 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.486 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.486 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.486 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.486 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.486 12 DEBUG cotyledon._service [-] Run service AgentHeartBeatManager(0) [12] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.488 12 DEBUG ceilometer.polling.manager [-] Started heartbeat child process. run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:519
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.490 12 DEBUG ceilometer.polling.manager [-] Started heartbeat update thread _read_queue /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:522
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.491 12 DEBUG ceilometer.polling.manager [-] Started heartbeat reporting thread _report_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:527
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.499 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.508 14 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.508 14 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.509 14 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.658 14 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.659 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.659 14 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.659 14 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.659 14 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.659 14 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.660 14 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.660 14 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.660 14 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.660 14 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.660 14 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.660 14 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.661 14 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.661 14 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.661 14 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.661 14 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.662 14 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.662 14 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.662 14 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.662 14 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.662 14 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.662 14 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.663 14 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.663 14 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.663 14 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.663 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.663 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.663 14 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.663 14 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.664 14 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.664 14 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.664 14 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.664 14 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.664 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.664 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.664 14 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.664 14 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.664 14 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.664 14 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.665 14 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.665 14 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.665 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.665 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.665 14 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.665 14 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.666 14 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.666 14 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.666 14 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.666 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.666 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.666 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.667 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.667 14 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.667 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.667 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.667 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.667 14 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.667 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.667 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.668 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.668 14 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.668 14 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.668 14 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.668 14 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.668 14 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.668 14 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.668 14 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.668 14 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.669 14 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.669 14 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
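Everything logged up to this point is the [DEFAULT] option group; each name maps one-to-one onto a ceilometer.conf key. A minimal sketch of the corresponding file fragment, using only values taken from the dump above:

    [DEFAULT]
    libvirt_type = kvm
    log_dir = /var/log/ceilometer
    log_file = /dev/stdout
    pipeline_cfg_file = pipeline.yaml
    polling_namespaces = compute
    pollsters_definitions_dirs = /etc/ceilometer/pollsters.d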
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.669 14 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.669 14 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.669 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.669 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.669 14 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.669 14 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.669 14 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.670 14 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.670 14 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.670 14 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.670 14 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.670 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.670 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.670 14 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.670 14 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.671 14 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.671 14 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.671 14 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.671 14 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.671 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.671 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.671 14 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.671 14 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.671 14 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.672 14 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.672 14 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.672 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.672 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.672 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.672 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.672 14 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
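Note that the [polling] group re-declares the Prometheus exporter options, and these are the values in effect here: TLS enabled on [::]:9101 with the tls.crt/tls.key pair, while the top-level prometheus_* options logged at .666/.667 keep their plain 127.0.0.1:9101 defaults. A usage sketch for scraping the exporter; the /metrics path and the CA bundle location are assumptions, not taken from the log:

    # CA_BUNDLE = whatever CA signed /etc/ceilometer/tls/tls.crt
    curl --cacert CA_BUNDLE "https://[::1]:9101/metrics"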
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.672 14 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.672 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.672 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.673 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.673 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.673 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.673 14 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.673 14 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.673 14 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.673 14 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.673 14 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.674 14 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.674 14 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.674 14 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.674 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.674 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.674 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.674 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.674 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.674 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.675 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.675 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.675 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.675 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.675 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.675 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.675 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.675 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.675 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.675 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.675 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.675 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.675 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.675 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.676 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.676 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.676 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.676 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.676 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.676 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.676 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
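The service_credentials.* values above are how the agent authenticates to Keystone; in ceilometer.conf they form the [service_credentials] section. A sketch using only the values shown (the password is masked in the log and stays masked here):

    [service_credentials]
    auth_type = password
    auth_url = https://keystone-internal.openstack.svc:5000
    username = ceilometer
    password = ****
    project_name = service
    project_domain_name = Default
    user_domain_name = Default
    interface = internalURL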
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.676 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.676 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.676 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.676 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.676 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.677 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.677 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.677 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.677 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.677 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.677 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.677 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.677 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.678 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.678 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.678 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.678 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.678 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.678 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.678 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.678 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.678 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.679 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.679 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.679 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.679 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
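The row of asterisks is the terminator oslo.config prints after dumping every registered option; cotyledon's oslo_config_glue triggers the dump at worker start because log_options = True (logged at .664 above). A minimal standalone reproduction, assuming only the oslo.config library:

    import logging
    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    conf = cfg.ConfigOpts()
    conf.register_opts([cfg.StrOpt('pipeline_cfg_file', default='pipeline.yaml')])
    conf(args=[])  # parse an empty command line

    # Emits one DEBUG line per option, then the asterisk terminator,
    # just like the block above.
    conf.log_opt_values(logging.getLogger('demo'), logging.DEBUG)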
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.679 14 DEBUG cotyledon._service [-] Run service AgentManager(0) [14] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
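cotyledon is the process supervisor here: "AgentManager(0) [14]" means worker 0 of the AgentManager service, running as pid 14 inside the container, has entered its main loop (wait_forever). The pattern, as a minimal sketch with nothing ceilometer-specific in it:

    import time
    import cotyledon

    class Agent(cotyledon.Service):
        def run(self):
            # the real AgentManager parks here while its polling
            # timers fire in the background
            while True:
                time.sleep(60)

    sm = cotyledon.ServiceManager()
    sm.add(Agent, workers=1)   # -> "Run service Agent(0) [<pid>]"
    sm.run()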
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.681 14 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.12/site-packages/ceilometer/agent.py:64
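The dict at .681 is the parsed polling definition; given polling.cfg_file = polling.yaml in the dump above and the interval/meters keys, this is polling.yaml rendered back from its parsed form:

    sources:
      - name: pollsters
        interval: 120
        meters:
          - power.state
          - cpu
          - memory.usage
          - disk.*
          - network.*

So every 120 seconds the agent polls power state, CPU, memory usage, and all disk.* and network.* meters, which matches the pollster names that appear in the cycle below.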
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.693 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.694 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
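The two lines above say the same thing from both sides: the source defines more pollsters than threads_to_process_pollsters = 1 allows in parallel, so they run sequentially and a cycle takes at least the sum of the individual pollster times. The "Registering pollster ... via executor ThreadPoolExecutor" lines that follow are each pollster being queued onto that single-thread executor; illustratively:

    from concurrent.futures import ThreadPoolExecutor

    def poll(name):
        return f"{name}: polled"

    # max_workers=1 == "Processing pollsters ... with [1] threads":
    # tasks are queued up front but execute strictly one at a time
    with ThreadPoolExecutor(max_workers=1) as executor:
        futures = [executor.submit(poll, n)
                   for n in ('power.state', 'cpu', 'memory.usage')]
        for f in futures:
            print(f.result())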
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.694 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f510b4770>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.694 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f3f52673fe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.695 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
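With compute.instance_discovery_method = libvirt_metadata (see the dump above), instance discovery reads domain metadata straight from the local libvirt daemon instead of asking the Nova API. A sketch of that connection with the libvirt-python bindings; on this host the list comes back empty, matching the {'local_instances': []} discovery cache below:

    import libvirt  # libvirt-python bindings

    # Same URI the agent logs; libvirt_uri is unset in the dump, so
    # qemu:///system is the effective default for libvirt_type = kvm.
    conn = libvirt.openReadOnly('qemu:///system')
    try:
        for dom in conn.listAllDomains():
            state, _reason = dom.state()
            print(dom.UUIDString(), dom.name(), state)
    finally:
        conn.close()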
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.695 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f562c3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f510b4770>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.696 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f510b4770>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.696 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f510b4770>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.696 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f526739b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f510b4770>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.696 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f510b4770>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.696 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673a40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f510b4770>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.696 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52671a60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f510b4770>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.697 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673a70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f510b4770>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.697 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f510b4770>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.697 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f510b4770>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.697 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f562d33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f510b4770>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.697 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f526733b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f510b4770>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.697 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f510b4770>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.697 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f526734d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f510b4770>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.697 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f565c04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f510b4770>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.698 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f510b4770>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.698 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
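From here on the cycle alternates "Registering pollster" and "Skip pollster": discovery ran once, cached local_instances as an empty list, and every pollster depending on it bails out before doing any work. (The doubled space in "no  resources" is the upstream message template with an empty qualifier substituted in.) A hypothetical reconstruction of that decision; the names are illustrative, not ceilometer's actual internals:

    def run_pollster(name, discovery_cache):
        # mirrors the skip decision logged above
        resources = discovery_cache.get('local_instances', [])
        if not resources:
            print(f"Skip pollster {name}, no resources found this cycle")
            return []
        return [f"{name}@{r}" for r in resources]

    run_pollster('network.incoming.packets.error', {'local_instances': []})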
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.698 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f510b4770>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.698 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f3f5271c620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.699 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f510b4770>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.699 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.699 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f526735f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f510b4770>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.699 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f3f5271c0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.699 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f510b4770>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.699 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.701 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f526736b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f510b4770>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.701 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f3f5271c140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.701 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f510b4770>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.702 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.702 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f510b4770>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.702 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f3f52673980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.702 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f510b4770>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.702 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.702 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f510b4770>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.703 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f3f5271c1d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.703 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.703 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f3f52673a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.703 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.703 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f3f52672390>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.703 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.703 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f3f526739e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.703 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.704 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f3f5271c260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.704 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.704 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f3f5271c2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.704 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.704 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f3f52671ca0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.704 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.704 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f3f52673470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.704 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.704 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f3f5271c380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.704 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.704 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f3f526734a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.704 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.704 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f3f52671a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.704 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.704 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f3f52673ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.705 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.705 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f3f52673500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.705 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.705 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f3f52673560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.705 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.705 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f3f526735c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.705 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.705 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f3f52673620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.705 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.705 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f3f52673680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.705 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.705 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f3f526736e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.705 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.705 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f3f52673f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.706 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.706 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f3f52673740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.706 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.706 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f3f52673f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.706 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.706 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.706 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.706 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.706 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.706 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.706 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.707 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.707 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.707 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.707 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.707 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.707 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.707 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.707 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.707 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.707 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.707 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.707 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.707 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.707 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.707 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.708 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.708 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.708 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.708 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:57:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:57:03.708 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:57:03 np0005544501 python3.9[155000]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/node_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764784622.8007603-578-190356412862420/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  3 12:57:04 np0005544501 python3.9[155152]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=node_exporter.json debug=False
Dec  3 12:57:05 np0005544501 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec  3 12:57:05 np0005544501 python3.9[155305]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  3 12:57:06 np0005544501 python3[155457]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=node_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Dec  3 12:57:07 np0005544501 podman[155470]: 2025-12-03 17:57:07.579670141 +0000 UTC m=+1.109719837 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Dec  3 12:57:07 np0005544501 podman[155566]: 2025-12-03 17:57:07.717351489 +0000 UTC m=+0.027580940 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Dec  3 12:57:07 np0005544501 podman[155566]: 2025-12-03 17:57:07.848675197 +0000 UTC m=+0.158904518 container create f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  3 12:57:07 np0005544501 python3[155457]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name node_exporter --conmon-pidfile /run/node_exporter.pid --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck node_exporter --label config_id=edpm --label container_name=node_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9100:9100 --user root --volume /var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z --volume /var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw --volume /var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z quay.io/prometheus/node-exporter:v1.5.0 --web.config.file=/etc/node_exporter/node_exporter.yaml --web.disable-exporter-metrics --collector.systemd --collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service --no-collector.dmi --no-collector.entropy --no-collector.thermal_zone --no-collector.time --no-collector.timex --no-collector.uname --no-collector.stat --no-collector.hwmon --no-collector.os --no-collector.selinux --no-collector.textfile --no-collector.powersupplyclass --no-collector.pressure --no-collector.rapl
Dec  3 12:57:08 np0005544501 python3.9[155756]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 12:57:09 np0005544501 python3.9[155910]: ansible-file Invoked with path=/etc/systemd/system/edpm_node_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:57:10 np0005544501 podman[156033]: 2025-12-03 17:57:10.181346039 +0000 UTC m=+0.101402298 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true)
Dec  3 12:57:10 np0005544501 python3.9[156081]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764784629.6343193-631-193712324028439/source dest=/etc/systemd/system/edpm_node_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:57:10 np0005544501 python3.9[156163]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  3 12:57:10 np0005544501 systemd[1]: Reloading.
Dec  3 12:57:11 np0005544501 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 12:57:11 np0005544501 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 12:57:11 np0005544501 python3.9[156275]: ansible-systemd Invoked with state=restarted name=edpm_node_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 12:57:11 np0005544501 systemd[1]: Reloading.
Dec  3 12:57:12 np0005544501 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 12:57:12 np0005544501 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 12:57:12 np0005544501 systemd[1]: Starting node_exporter container...
Dec  3 12:57:12 np0005544501 systemd[1]: Started libcrun container.
Dec  3 12:57:12 np0005544501 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f05fe21bac130252b072c55040b283de57713836ce9b724e01eec43e52bbf128/merged/etc/node_exporter/node_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  3 12:57:12 np0005544501 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f05fe21bac130252b072c55040b283de57713836ce9b724e01eec43e52bbf128/merged/etc/node_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec  3 12:57:12 np0005544501 systemd[1]: Started /usr/bin/podman healthcheck run f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58.
Dec  3 12:57:12 np0005544501 podman[156314]: 2025-12-03 17:57:12.467497778 +0000 UTC m=+0.150420588 container init f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 12:57:12 np0005544501 node_exporter[156330]: ts=2025-12-03T17:57:12.490Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Dec  3 12:57:12 np0005544501 node_exporter[156330]: ts=2025-12-03T17:57:12.490Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Dec  3 12:57:12 np0005544501 node_exporter[156330]: ts=2025-12-03T17:57:12.490Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Dec  3 12:57:12 np0005544501 node_exporter[156330]: ts=2025-12-03T17:57:12.491Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Dec  3 12:57:12 np0005544501 node_exporter[156330]: ts=2025-12-03T17:57:12.491Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Dec  3 12:57:12 np0005544501 node_exporter[156330]: ts=2025-12-03T17:57:12.492Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Dec  3 12:57:12 np0005544501 node_exporter[156330]: ts=2025-12-03T17:57:12.492Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Dec  3 12:57:12 np0005544501 node_exporter[156330]: ts=2025-12-03T17:57:12.492Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Dec  3 12:57:12 np0005544501 node_exporter[156330]: ts=2025-12-03T17:57:12.492Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
Dec  3 12:57:12 np0005544501 node_exporter[156330]: ts=2025-12-03T17:57:12.493Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Dec  3 12:57:12 np0005544501 node_exporter[156330]: ts=2025-12-03T17:57:12.493Z caller=node_exporter.go:117 level=info collector=arp
Dec  3 12:57:12 np0005544501 node_exporter[156330]: ts=2025-12-03T17:57:12.493Z caller=node_exporter.go:117 level=info collector=bcache
Dec  3 12:57:12 np0005544501 node_exporter[156330]: ts=2025-12-03T17:57:12.493Z caller=node_exporter.go:117 level=info collector=bonding
Dec  3 12:57:12 np0005544501 node_exporter[156330]: ts=2025-12-03T17:57:12.493Z caller=node_exporter.go:117 level=info collector=btrfs
Dec  3 12:57:12 np0005544501 node_exporter[156330]: ts=2025-12-03T17:57:12.493Z caller=node_exporter.go:117 level=info collector=conntrack
Dec  3 12:57:12 np0005544501 node_exporter[156330]: ts=2025-12-03T17:57:12.493Z caller=node_exporter.go:117 level=info collector=cpu
Dec  3 12:57:12 np0005544501 node_exporter[156330]: ts=2025-12-03T17:57:12.493Z caller=node_exporter.go:117 level=info collector=cpufreq
Dec  3 12:57:12 np0005544501 node_exporter[156330]: ts=2025-12-03T17:57:12.493Z caller=node_exporter.go:117 level=info collector=diskstats
Dec  3 12:57:12 np0005544501 node_exporter[156330]: ts=2025-12-03T17:57:12.493Z caller=node_exporter.go:117 level=info collector=edac
Dec  3 12:57:12 np0005544501 node_exporter[156330]: ts=2025-12-03T17:57:12.493Z caller=node_exporter.go:117 level=info collector=fibrechannel
Dec  3 12:57:12 np0005544501 node_exporter[156330]: ts=2025-12-03T17:57:12.493Z caller=node_exporter.go:117 level=info collector=filefd
Dec  3 12:57:12 np0005544501 node_exporter[156330]: ts=2025-12-03T17:57:12.493Z caller=node_exporter.go:117 level=info collector=filesystem
Dec  3 12:57:12 np0005544501 node_exporter[156330]: ts=2025-12-03T17:57:12.493Z caller=node_exporter.go:117 level=info collector=infiniband
Dec  3 12:57:12 np0005544501 node_exporter[156330]: ts=2025-12-03T17:57:12.493Z caller=node_exporter.go:117 level=info collector=ipvs
Dec  3 12:57:12 np0005544501 node_exporter[156330]: ts=2025-12-03T17:57:12.493Z caller=node_exporter.go:117 level=info collector=loadavg
Dec  3 12:57:12 np0005544501 node_exporter[156330]: ts=2025-12-03T17:57:12.493Z caller=node_exporter.go:117 level=info collector=mdadm
Dec  3 12:57:12 np0005544501 node_exporter[156330]: ts=2025-12-03T17:57:12.493Z caller=node_exporter.go:117 level=info collector=meminfo
Dec  3 12:57:12 np0005544501 node_exporter[156330]: ts=2025-12-03T17:57:12.493Z caller=node_exporter.go:117 level=info collector=netclass
Dec  3 12:57:12 np0005544501 node_exporter[156330]: ts=2025-12-03T17:57:12.493Z caller=node_exporter.go:117 level=info collector=netdev
Dec  3 12:57:12 np0005544501 node_exporter[156330]: ts=2025-12-03T17:57:12.493Z caller=node_exporter.go:117 level=info collector=netstat
Dec  3 12:57:12 np0005544501 node_exporter[156330]: ts=2025-12-03T17:57:12.493Z caller=node_exporter.go:117 level=info collector=nfs
Dec  3 12:57:12 np0005544501 node_exporter[156330]: ts=2025-12-03T17:57:12.493Z caller=node_exporter.go:117 level=info collector=nfsd
Dec  3 12:57:12 np0005544501 node_exporter[156330]: ts=2025-12-03T17:57:12.493Z caller=node_exporter.go:117 level=info collector=nvme
Dec  3 12:57:12 np0005544501 node_exporter[156330]: ts=2025-12-03T17:57:12.493Z caller=node_exporter.go:117 level=info collector=schedstat
Dec  3 12:57:12 np0005544501 node_exporter[156330]: ts=2025-12-03T17:57:12.493Z caller=node_exporter.go:117 level=info collector=sockstat
Dec  3 12:57:12 np0005544501 node_exporter[156330]: ts=2025-12-03T17:57:12.493Z caller=node_exporter.go:117 level=info collector=softnet
Dec  3 12:57:12 np0005544501 node_exporter[156330]: ts=2025-12-03T17:57:12.493Z caller=node_exporter.go:117 level=info collector=systemd
Dec  3 12:57:12 np0005544501 node_exporter[156330]: ts=2025-12-03T17:57:12.493Z caller=node_exporter.go:117 level=info collector=tapestats
Dec  3 12:57:12 np0005544501 node_exporter[156330]: ts=2025-12-03T17:57:12.493Z caller=node_exporter.go:117 level=info collector=udp_queues
Dec  3 12:57:12 np0005544501 node_exporter[156330]: ts=2025-12-03T17:57:12.493Z caller=node_exporter.go:117 level=info collector=vmstat
Dec  3 12:57:12 np0005544501 node_exporter[156330]: ts=2025-12-03T17:57:12.493Z caller=node_exporter.go:117 level=info collector=xfs
Dec  3 12:57:12 np0005544501 node_exporter[156330]: ts=2025-12-03T17:57:12.493Z caller=node_exporter.go:117 level=info collector=zfs
Dec  3 12:57:12 np0005544501 node_exporter[156330]: ts=2025-12-03T17:57:12.494Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Dec  3 12:57:12 np0005544501 node_exporter[156330]: ts=2025-12-03T17:57:12.495Z caller=tls_config.go:268 level=info msg="TLS is enabled." http2=true address=[::]:9100
Dec  3 12:57:12 np0005544501 podman[156314]: 2025-12-03 17:57:12.498343472 +0000 UTC m=+0.181266272 container start f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 12:57:12 np0005544501 podman[156314]: node_exporter
Dec  3 12:57:12 np0005544501 systemd[1]: Started node_exporter container.
Dec  3 12:57:12 np0005544501 podman[156339]: 2025-12-03 17:57:12.630213195 +0000 UTC m=+0.111187534 container health_status f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 12:57:13 np0005544501 python3.9[156514]: ansible-ansible.builtin.systemd Invoked with name=edpm_node_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  3 12:57:13 np0005544501 systemd[1]: Stopping node_exporter container...
Dec  3 12:57:13 np0005544501 systemd[1]: libpod-f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58.scope: Deactivated successfully.
Dec  3 12:57:13 np0005544501 podman[156518]: 2025-12-03 17:57:13.536596816 +0000 UTC m=+0.057373752 container died f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  3 12:57:13 np0005544501 systemd[1]: f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58-4c680a8888252df8.timer: Deactivated successfully.
Dec  3 12:57:13 np0005544501 systemd[1]: Stopped /usr/bin/podman healthcheck run f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58.
Dec  3 12:57:13 np0005544501 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58-userdata-shm.mount: Deactivated successfully.
Dec  3 12:57:13 np0005544501 systemd[1]: var-lib-containers-storage-overlay-f05fe21bac130252b072c55040b283de57713836ce9b724e01eec43e52bbf128-merged.mount: Deactivated successfully.
Dec  3 12:57:13 np0005544501 podman[156518]: 2025-12-03 17:57:13.75497889 +0000 UTC m=+0.275755766 container cleanup f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 12:57:13 np0005544501 podman[156518]: node_exporter
Dec  3 12:57:13 np0005544501 systemd[1]: edpm_node_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Dec  3 12:57:13 np0005544501 podman[156547]: node_exporter
Dec  3 12:57:13 np0005544501 systemd[1]: edpm_node_exporter.service: Failed with result 'exit-code'.
Dec  3 12:57:13 np0005544501 systemd[1]: Stopped node_exporter container.
Dec  3 12:57:13 np0005544501 systemd[1]: Starting node_exporter container...
Dec  3 12:57:13 np0005544501 systemd[1]: Started libcrun container.
Dec  3 12:57:13 np0005544501 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f05fe21bac130252b072c55040b283de57713836ce9b724e01eec43e52bbf128/merged/etc/node_exporter/node_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  3 12:57:13 np0005544501 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f05fe21bac130252b072c55040b283de57713836ce9b724e01eec43e52bbf128/merged/etc/node_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec  3 12:57:13 np0005544501 systemd[1]: Started /usr/bin/podman healthcheck run f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58.
Dec  3 12:57:13 np0005544501 podman[156560]: 2025-12-03 17:57:13.97842172 +0000 UTC m=+0.127925193 container init f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  3 12:57:13 np0005544501 node_exporter[156575]: ts=2025-12-03T17:57:13.993Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Dec  3 12:57:13 np0005544501 node_exporter[156575]: ts=2025-12-03T17:57:13.993Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Dec  3 12:57:13 np0005544501 node_exporter[156575]: ts=2025-12-03T17:57:13.993Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Dec  3 12:57:13 np0005544501 node_exporter[156575]: ts=2025-12-03T17:57:13.994Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Dec  3 12:57:13 np0005544501 node_exporter[156575]: ts=2025-12-03T17:57:13.994Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Dec  3 12:57:13 np0005544501 node_exporter[156575]: ts=2025-12-03T17:57:13.995Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Dec  3 12:57:13 np0005544501 node_exporter[156575]: ts=2025-12-03T17:57:13.995Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Dec  3 12:57:13 np0005544501 node_exporter[156575]: ts=2025-12-03T17:57:13.995Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Dec  3 12:57:13 np0005544501 node_exporter[156575]: ts=2025-12-03T17:57:13.995Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
Dec  3 12:57:13 np0005544501 node_exporter[156575]: ts=2025-12-03T17:57:13.995Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Dec  3 12:57:13 np0005544501 node_exporter[156575]: ts=2025-12-03T17:57:13.995Z caller=node_exporter.go:117 level=info collector=arp
Dec  3 12:57:13 np0005544501 node_exporter[156575]: ts=2025-12-03T17:57:13.995Z caller=node_exporter.go:117 level=info collector=bcache
Dec  3 12:57:13 np0005544501 node_exporter[156575]: ts=2025-12-03T17:57:13.995Z caller=node_exporter.go:117 level=info collector=bonding
Dec  3 12:57:13 np0005544501 node_exporter[156575]: ts=2025-12-03T17:57:13.995Z caller=node_exporter.go:117 level=info collector=btrfs
Dec  3 12:57:13 np0005544501 node_exporter[156575]: ts=2025-12-03T17:57:13.995Z caller=node_exporter.go:117 level=info collector=conntrack
Dec  3 12:57:13 np0005544501 node_exporter[156575]: ts=2025-12-03T17:57:13.995Z caller=node_exporter.go:117 level=info collector=cpu
Dec  3 12:57:13 np0005544501 node_exporter[156575]: ts=2025-12-03T17:57:13.995Z caller=node_exporter.go:117 level=info collector=cpufreq
Dec  3 12:57:13 np0005544501 node_exporter[156575]: ts=2025-12-03T17:57:13.995Z caller=node_exporter.go:117 level=info collector=diskstats
Dec  3 12:57:13 np0005544501 node_exporter[156575]: ts=2025-12-03T17:57:13.995Z caller=node_exporter.go:117 level=info collector=edac
Dec  3 12:57:13 np0005544501 node_exporter[156575]: ts=2025-12-03T17:57:13.995Z caller=node_exporter.go:117 level=info collector=fibrechannel
Dec  3 12:57:13 np0005544501 node_exporter[156575]: ts=2025-12-03T17:57:13.995Z caller=node_exporter.go:117 level=info collector=filefd
Dec  3 12:57:13 np0005544501 node_exporter[156575]: ts=2025-12-03T17:57:13.995Z caller=node_exporter.go:117 level=info collector=filesystem
Dec  3 12:57:13 np0005544501 node_exporter[156575]: ts=2025-12-03T17:57:13.995Z caller=node_exporter.go:117 level=info collector=infiniband
Dec  3 12:57:13 np0005544501 node_exporter[156575]: ts=2025-12-03T17:57:13.995Z caller=node_exporter.go:117 level=info collector=ipvs
Dec  3 12:57:13 np0005544501 node_exporter[156575]: ts=2025-12-03T17:57:13.995Z caller=node_exporter.go:117 level=info collector=loadavg
Dec  3 12:57:13 np0005544501 node_exporter[156575]: ts=2025-12-03T17:57:13.995Z caller=node_exporter.go:117 level=info collector=mdadm
Dec  3 12:57:13 np0005544501 node_exporter[156575]: ts=2025-12-03T17:57:13.995Z caller=node_exporter.go:117 level=info collector=meminfo
Dec  3 12:57:13 np0005544501 node_exporter[156575]: ts=2025-12-03T17:57:13.995Z caller=node_exporter.go:117 level=info collector=netclass
Dec  3 12:57:13 np0005544501 node_exporter[156575]: ts=2025-12-03T17:57:13.995Z caller=node_exporter.go:117 level=info collector=netdev
Dec  3 12:57:13 np0005544501 node_exporter[156575]: ts=2025-12-03T17:57:13.995Z caller=node_exporter.go:117 level=info collector=netstat
Dec  3 12:57:13 np0005544501 node_exporter[156575]: ts=2025-12-03T17:57:13.995Z caller=node_exporter.go:117 level=info collector=nfs
Dec  3 12:57:13 np0005544501 node_exporter[156575]: ts=2025-12-03T17:57:13.995Z caller=node_exporter.go:117 level=info collector=nfsd
Dec  3 12:57:13 np0005544501 node_exporter[156575]: ts=2025-12-03T17:57:13.995Z caller=node_exporter.go:117 level=info collector=nvme
Dec  3 12:57:13 np0005544501 node_exporter[156575]: ts=2025-12-03T17:57:13.995Z caller=node_exporter.go:117 level=info collector=schedstat
Dec  3 12:57:13 np0005544501 node_exporter[156575]: ts=2025-12-03T17:57:13.995Z caller=node_exporter.go:117 level=info collector=sockstat
Dec  3 12:57:13 np0005544501 node_exporter[156575]: ts=2025-12-03T17:57:13.995Z caller=node_exporter.go:117 level=info collector=softnet
Dec  3 12:57:13 np0005544501 node_exporter[156575]: ts=2025-12-03T17:57:13.995Z caller=node_exporter.go:117 level=info collector=systemd
Dec  3 12:57:13 np0005544501 node_exporter[156575]: ts=2025-12-03T17:57:13.995Z caller=node_exporter.go:117 level=info collector=tapestats
Dec  3 12:57:13 np0005544501 node_exporter[156575]: ts=2025-12-03T17:57:13.995Z caller=node_exporter.go:117 level=info collector=udp_queues
Dec  3 12:57:13 np0005544501 node_exporter[156575]: ts=2025-12-03T17:57:13.995Z caller=node_exporter.go:117 level=info collector=vmstat
Dec  3 12:57:13 np0005544501 node_exporter[156575]: ts=2025-12-03T17:57:13.995Z caller=node_exporter.go:117 level=info collector=xfs
Dec  3 12:57:13 np0005544501 node_exporter[156575]: ts=2025-12-03T17:57:13.995Z caller=node_exporter.go:117 level=info collector=zfs
Dec  3 12:57:13 np0005544501 node_exporter[156575]: ts=2025-12-03T17:57:13.996Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Dec  3 12:57:13 np0005544501 node_exporter[156575]: ts=2025-12-03T17:57:13.997Z caller=tls_config.go:268 level=info msg="TLS is enabled." http2=true address=[::]:9100
Dec  3 12:57:14 np0005544501 podman[156560]: 2025-12-03 17:57:14.008403564 +0000 UTC m=+0.157907017 container start f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 12:57:14 np0005544501 podman[156560]: node_exporter
Dec  3 12:57:14 np0005544501 systemd[1]: Started node_exporter container.
Dec  3 12:57:14 np0005544501 podman[156584]: 2025-12-03 17:57:14.108595369 +0000 UTC m=+0.092246457 container health_status f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 12:57:14 np0005544501 python3.9[156759]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/podman_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:57:15 np0005544501 python3.9[156882]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/podman_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764784634.2820108-663-219615375340984/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  3 12:57:16 np0005544501 python3.9[157034]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=podman_exporter.json debug=False
Dec  3 12:57:17 np0005544501 python3.9[157186]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  3 12:57:18 np0005544501 python3[157338]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=podman_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Dec  3 12:57:19 np0005544501 podman[157351]: 2025-12-03 17:57:19.403374142 +0000 UTC m=+1.208937939 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Dec  3 12:57:19 np0005544501 podman[157450]: 2025-12-03 17:57:19.582261905 +0000 UTC m=+0.061498926 container create 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, config_id=edpm, container_name=podman_exporter, managed_by=edpm_ansible)
Dec  3 12:57:19 np0005544501 podman[157450]: 2025-12-03 17:57:19.552121757 +0000 UTC m=+0.031358798 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Dec  3 12:57:19 np0005544501 python3[157338]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name podman_exporter --conmon-pidfile /run/podman_exporter.pid --env OS_ENDPOINT_TYPE=internal --env CONTAINER_HOST=unix:///run/podman/podman.sock --healthcheck-command /openstack/healthcheck podman_exporter --label config_id=edpm --label container_name=podman_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9882:9882 --user root --volume /var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z --volume /run/podman/podman.sock:/run/podman/podman.sock:rw,z --volume /var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z quay.io/navidys/prometheus-podman-exporter:v1.10.1 --web.config.file=/etc/podman_exporter/podman_exporter.yaml
Dec  3 12:57:20 np0005544501 python3.9[157641]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 12:57:21 np0005544501 python3.9[157795]: ansible-file Invoked with path=/etc/systemd/system/edpm_podman_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:57:22 np0005544501 python3.9[157946]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764784641.3394494-716-217248757571533/source dest=/etc/systemd/system/edpm_podman_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:57:22 np0005544501 python3.9[158022]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  3 12:57:22 np0005544501 systemd[1]: Reloading.
Dec  3 12:57:22 np0005544501 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 12:57:22 np0005544501 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 12:57:23 np0005544501 python3.9[158133]: ansible-systemd Invoked with state=restarted name=edpm_podman_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 12:57:23 np0005544501 systemd[1]: Reloading.
Dec  3 12:57:23 np0005544501 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 12:57:23 np0005544501 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 12:57:23 np0005544501 systemd[1]: Starting podman_exporter container...
Dec  3 12:57:24 np0005544501 systemd[1]: Started libcrun container.
Dec  3 12:57:24 np0005544501 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/131c842645800969eb0853465c1a78b211e06c4c36238e3b540cce4379df0509/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec  3 12:57:24 np0005544501 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/131c842645800969eb0853465c1a78b211e06c4c36238e3b540cce4379df0509/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  3 12:57:24 np0005544501 systemd[1]: Started /usr/bin/podman healthcheck run 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658.
Dec  3 12:57:24 np0005544501 podman[158172]: 2025-12-03 17:57:24.051666481 +0000 UTC m=+0.128240202 container init 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 12:57:24 np0005544501 podman_exporter[158188]: ts=2025-12-03T17:57:24.069Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Dec  3 12:57:24 np0005544501 podman_exporter[158188]: ts=2025-12-03T17:57:24.069Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Dec  3 12:57:24 np0005544501 podman_exporter[158188]: ts=2025-12-03T17:57:24.069Z caller=handler.go:94 level=info msg="enabled collectors"
Dec  3 12:57:24 np0005544501 podman_exporter[158188]: ts=2025-12-03T17:57:24.069Z caller=handler.go:105 level=info collector=container
Dec  3 12:57:24 np0005544501 podman[158172]: 2025-12-03 17:57:24.082191628 +0000 UTC m=+0.158765319 container start 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  3 12:57:24 np0005544501 podman[158172]: podman_exporter
Dec  3 12:57:24 np0005544501 systemd[1]: Starting Podman API Service...
Dec  3 12:57:24 np0005544501 systemd[1]: Started Podman API Service.
Dec  3 12:57:24 np0005544501 systemd[1]: Started podman_exporter container.
Dec  3 12:57:24 np0005544501 podman[158200]: time="2025-12-03T17:57:24Z" level=info msg="/usr/bin/podman filtering at log level info"
Dec  3 12:57:24 np0005544501 podman[158200]: time="2025-12-03T17:57:24Z" level=info msg="Setting parallel job count to 25"
Dec  3 12:57:24 np0005544501 podman[158200]: time="2025-12-03T17:57:24Z" level=info msg="Using sqlite as database backend"
Dec  3 12:57:24 np0005544501 podman[158200]: time="2025-12-03T17:57:24Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Dec  3 12:57:24 np0005544501 podman[158200]: time="2025-12-03T17:57:24Z" level=info msg="Using systemd socket activation to determine API endpoint"
Dec  3 12:57:24 np0005544501 podman[158200]: time="2025-12-03T17:57:24Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"unix:///run/podman/podman.sock\""
Dec  3 12:57:24 np0005544501 podman[158200]: @ - - [03/Dec/2025:17:57:24 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Dec  3 12:57:24 np0005544501 podman[158200]: time="2025-12-03T17:57:24Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 12:57:24 np0005544501 podman[158197]: 2025-12-03 17:57:24.158949844 +0000 UTC m=+0.067522166 container health_status 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=starting, health_failing_streak=1, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 12:57:24 np0005544501 podman[158200]: @ - - [03/Dec/2025:17:57:24 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 9686 "" "Go-http-client/1.1"
Dec  3 12:57:24 np0005544501 podman_exporter[158188]: ts=2025-12-03T17:57:24.166Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Dec  3 12:57:24 np0005544501 podman_exporter[158188]: ts=2025-12-03T17:57:24.167Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Dec  3 12:57:24 np0005544501 podman_exporter[158188]: ts=2025-12-03T17:57:24.168Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
Dec  3 12:57:24 np0005544501 systemd[1]: 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658-dea3e2431c79021.service: Main process exited, code=exited, status=1/FAILURE
Dec  3 12:57:24 np0005544501 systemd[1]: 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658-dea3e2431c79021.service: Failed with result 'exit-code'.
Dec  3 12:57:25 np0005544501 python3.9[158386]: ansible-ansible.builtin.systemd Invoked with name=edpm_podman_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  3 12:57:25 np0005544501 systemd[1]: Stopping podman_exporter container...
Dec  3 12:57:25 np0005544501 podman[158200]: @ - - [03/Dec/2025:17:57:24 +0000] "GET /v4.9.3/libpod/events?filters=%7B%7D&since=&stream=true&until= HTTP/1.1" 200 1449 "" "Go-http-client/1.1"
Dec  3 12:57:25 np0005544501 systemd[1]: libpod-6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658.scope: Deactivated successfully.
Dec  3 12:57:25 np0005544501 conmon[158188]: conmon 6e1c01fe8e4aba399d56 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658.scope/container/memory.events
Dec  3 12:57:25 np0005544501 podman[158390]: 2025-12-03 17:57:25.113493025 +0000 UTC m=+0.047700369 container died 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  3 12:57:25 np0005544501 systemd[1]: 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658-dea3e2431c79021.timer: Deactivated successfully.
Dec  3 12:57:25 np0005544501 systemd[1]: Stopped /usr/bin/podman healthcheck run 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658.
Dec  3 12:57:25 np0005544501 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658-userdata-shm.mount: Deactivated successfully.
Dec  3 12:57:25 np0005544501 systemd[1]: var-lib-containers-storage-overlay-131c842645800969eb0853465c1a78b211e06c4c36238e3b540cce4379df0509-merged.mount: Deactivated successfully.
Dec  3 12:57:25 np0005544501 podman[158390]: 2025-12-03 17:57:25.46332688 +0000 UTC m=+0.397534204 container cleanup 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  3 12:57:25 np0005544501 podman[158390]: podman_exporter
Dec  3 12:57:25 np0005544501 systemd[1]: edpm_podman_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Dec  3 12:57:25 np0005544501 podman[158416]: podman_exporter
Dec  3 12:57:25 np0005544501 systemd[1]: edpm_podman_exporter.service: Failed with result 'exit-code'.
Dec  3 12:57:25 np0005544501 systemd[1]: Stopped podman_exporter container.
Dec  3 12:57:25 np0005544501 systemd[1]: Starting podman_exporter container...
Dec  3 12:57:25 np0005544501 systemd[1]: Started libcrun container.
Dec  3 12:57:25 np0005544501 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/131c842645800969eb0853465c1a78b211e06c4c36238e3b540cce4379df0509/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec  3 12:57:25 np0005544501 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/131c842645800969eb0853465c1a78b211e06c4c36238e3b540cce4379df0509/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  3 12:57:25 np0005544501 systemd[1]: Started /usr/bin/podman healthcheck run 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658.
Dec  3 12:57:25 np0005544501 podman[158428]: 2025-12-03 17:57:25.747567118 +0000 UTC m=+0.156595223 container init 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 12:57:25 np0005544501 podman_exporter[158443]: ts=2025-12-03T17:57:25.769Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Dec  3 12:57:25 np0005544501 podman_exporter[158443]: ts=2025-12-03T17:57:25.769Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Dec  3 12:57:25 np0005544501 podman_exporter[158443]: ts=2025-12-03T17:57:25.770Z caller=handler.go:94 level=info msg="enabled collectors"
Dec  3 12:57:25 np0005544501 podman_exporter[158443]: ts=2025-12-03T17:57:25.770Z caller=handler.go:105 level=info collector=container
Dec  3 12:57:25 np0005544501 podman[158200]: @ - - [03/Dec/2025:17:57:25 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Dec  3 12:57:25 np0005544501 podman[158200]: time="2025-12-03T17:57:25Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 12:57:25 np0005544501 podman[158428]: 2025-12-03 17:57:25.774133205 +0000 UTC m=+0.183161280 container start 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 12:57:25 np0005544501 podman[158428]: podman_exporter
Dec  3 12:57:25 np0005544501 podman[158200]: @ - - [03/Dec/2025:17:57:25 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 9688 "" "Go-http-client/1.1"
Dec  3 12:57:25 np0005544501 podman_exporter[158443]: ts=2025-12-03T17:57:25.788Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Dec  3 12:57:25 np0005544501 podman_exporter[158443]: ts=2025-12-03T17:57:25.789Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Dec  3 12:57:25 np0005544501 podman_exporter[158443]: ts=2025-12-03T17:57:25.789Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
Dec  3 12:57:25 np0005544501 systemd[1]: Started podman_exporter container.
Dec  3 12:57:25 np0005544501 podman[158452]: 2025-12-03 17:57:25.828618814 +0000 UTC m=+0.047161206 container health_status 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  3 12:57:26 np0005544501 python3.9[158628]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/openstack_network_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:57:27 np0005544501 python3.9[158751]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/openstack_network_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764784646.081929-748-270865642036154/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  3 12:57:28 np0005544501 python3.9[158903]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=openstack_network_exporter.json debug=False
Dec  3 12:57:28 np0005544501 python3.9[159055]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  3 12:57:29 np0005544501 python3[159207]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=openstack_network_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Dec  3 12:57:32 np0005544501 podman[159220]: 2025-12-03 17:57:32.319748049 +0000 UTC m=+2.351719488 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Dec  3 12:57:32 np0005544501 podman[159318]: 2025-12-03 17:57:32.458698448 +0000 UTC m=+0.055506444 container create 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, release=1755695350, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., name=ubi9-minimal, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.33.7, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec  3 12:57:32 np0005544501 podman[159318]: 2025-12-03 17:57:32.435821724 +0000 UTC m=+0.032629740 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Dec  3 12:57:32 np0005544501 python3[159207]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name openstack_network_exporter --conmon-pidfile /run/openstack_network_exporter.pid --env OS_ENDPOINT_TYPE=internal --env OPENSTACK_NETWORK_EXPORTER_YAML=/etc/openstack_network_exporter/openstack_network_exporter.yaml --healthcheck-command /openstack/healthcheck openstack-netwo --label config_id=edpm --label container_name=openstack_network_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9105:9105 --volume /var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z --volume /var/run/openvswitch:/run/openvswitch:rw,z --volume /var/lib/openvswitch/ovn:/run/ovn:rw,z --volume /proc:/host/proc:ro --volume /var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Dec  3 12:57:32 np0005544501 podman[159409]: 2025-12-03 17:57:32.886955293 +0000 UTC m=+0.058866110 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=2, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, managed_by=edpm_ansible)
Dec  3 12:57:32 np0005544501 systemd[1]: ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad-5faf5f5157d672c5.service: Main process exited, code=exited, status=1/FAILURE
Dec  3 12:57:32 np0005544501 systemd[1]: ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad-5faf5f5157d672c5.service: Failed with result 'exit-code'.
Dec  3 12:57:33 np0005544501 python3.9[159529]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 12:57:34 np0005544501 python3.9[159683]: ansible-file Invoked with path=/etc/systemd/system/edpm_openstack_network_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:57:34 np0005544501 python3.9[159834]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764784654.1757762-801-15288926394571/source dest=/etc/systemd/system/edpm_openstack_network_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:57:35 np0005544501 python3.9[159910]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  3 12:57:35 np0005544501 systemd[1]: Reloading.
Dec  3 12:57:35 np0005544501 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 12:57:35 np0005544501 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 12:57:37 np0005544501 python3.9[160020]: ansible-systemd Invoked with state=restarted name=edpm_openstack_network_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 12:57:37 np0005544501 systemd[1]: Reloading.
Dec  3 12:57:37 np0005544501 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 12:57:37 np0005544501 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 12:57:37 np0005544501 systemd[1]: Starting openstack_network_exporter container...
Dec  3 12:57:37 np0005544501 systemd[1]: Started libcrun container.
Dec  3 12:57:37 np0005544501 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/143da1461bff3dbbf0dde280109043d83cac235f94f368f80044c865e8914a7e/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec  3 12:57:37 np0005544501 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/143da1461bff3dbbf0dde280109043d83cac235f94f368f80044c865e8914a7e/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec  3 12:57:37 np0005544501 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/143da1461bff3dbbf0dde280109043d83cac235f94f368f80044c865e8914a7e/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  3 12:57:37 np0005544501 systemd[1]: Started /usr/bin/podman healthcheck run 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5.
Dec  3 12:57:37 np0005544501 podman[160060]: 2025-12-03 17:57:37.546112342 +0000 UTC m=+0.164445202 container init 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, name=ubi9-minimal, release=1755695350, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=)
Dec  3 12:57:37 np0005544501 openstack_network_exporter[160075]: INFO    17:57:37 main.go:48: registering *bridge.Collector
Dec  3 12:57:37 np0005544501 openstack_network_exporter[160075]: INFO    17:57:37 main.go:48: registering *coverage.Collector
Dec  3 12:57:37 np0005544501 openstack_network_exporter[160075]: INFO    17:57:37 main.go:48: registering *datapath.Collector
Dec  3 12:57:37 np0005544501 openstack_network_exporter[160075]: INFO    17:57:37 main.go:48: registering *iface.Collector
Dec  3 12:57:37 np0005544501 openstack_network_exporter[160075]: INFO    17:57:37 main.go:48: registering *memory.Collector
Dec  3 12:57:37 np0005544501 openstack_network_exporter[160075]: INFO    17:57:37 main.go:48: registering *ovnnorthd.Collector
Dec  3 12:57:37 np0005544501 openstack_network_exporter[160075]: INFO    17:57:37 main.go:48: registering *ovn.Collector
Dec  3 12:57:37 np0005544501 openstack_network_exporter[160075]: INFO    17:57:37 main.go:48: registering *ovsdbserver.Collector
Dec  3 12:57:37 np0005544501 openstack_network_exporter[160075]: INFO    17:57:37 main.go:48: registering *pmd_perf.Collector
Dec  3 12:57:37 np0005544501 openstack_network_exporter[160075]: INFO    17:57:37 main.go:48: registering *pmd_rxq.Collector
Dec  3 12:57:37 np0005544501 openstack_network_exporter[160075]: INFO    17:57:37 main.go:48: registering *vswitch.Collector
Dec  3 12:57:37 np0005544501 openstack_network_exporter[160075]: NOTICE  17:57:37 main.go:76: listening on https://:9105/metrics
Dec  3 12:57:37 np0005544501 podman[160060]: 2025-12-03 17:57:37.595730667 +0000 UTC m=+0.214063497 container start 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., version=9.6, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, release=1755695350, managed_by=edpm_ansible, vcs-type=git, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  3 12:57:37 np0005544501 podman[160060]: openstack_network_exporter
Dec  3 12:57:37 np0005544501 systemd[1]: Started openstack_network_exporter container.
Dec  3 12:57:37 np0005544501 podman[160085]: 2025-12-03 17:57:37.712623193 +0000 UTC m=+0.098776432 container health_status 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, version=9.6, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec  3 12:57:38 np0005544501 python3.9[160259]: ansible-ansible.builtin.systemd Invoked with name=edpm_openstack_network_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  3 12:57:38 np0005544501 systemd[1]: Stopping openstack_network_exporter container...
Dec  3 12:57:38 np0005544501 systemd[1]: libpod-9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5.scope: Deactivated successfully.
Dec  3 12:57:38 np0005544501 podman[160263]: 2025-12-03 17:57:38.615933067 +0000 UTC m=+0.082385300 container died 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, name=ubi9-minimal, release=1755695350, managed_by=edpm_ansible, distribution-scope=public, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  3 12:57:38 np0005544501 systemd[1]: 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5-31a4d4195c15abb8.timer: Deactivated successfully.
Dec  3 12:57:38 np0005544501 systemd[1]: Stopped /usr/bin/podman healthcheck run 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5.
Dec  3 12:57:38 np0005544501 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5-userdata-shm.mount: Deactivated successfully.
Dec  3 12:57:38 np0005544501 systemd[1]: var-lib-containers-storage-overlay-143da1461bff3dbbf0dde280109043d83cac235f94f368f80044c865e8914a7e-merged.mount: Deactivated successfully.
Dec  3 12:57:39 np0005544501 podman[160263]: 2025-12-03 17:57:39.74064784 +0000 UTC m=+1.207100073 container cleanup 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., name=ubi9-minimal, managed_by=edpm_ansible, io.buildah.version=1.33.7, release=1755695350, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=)
Dec  3 12:57:39 np0005544501 podman[160263]: openstack_network_exporter
Dec  3 12:57:39 np0005544501 systemd[1]: edpm_openstack_network_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Dec  3 12:57:39 np0005544501 podman[160289]: openstack_network_exporter
Dec  3 12:57:39 np0005544501 systemd[1]: edpm_openstack_network_exporter.service: Failed with result 'exit-code'.
Dec  3 12:57:39 np0005544501 systemd[1]: Stopped openstack_network_exporter container.
Dec  3 12:57:39 np0005544501 systemd[1]: Starting openstack_network_exporter container...
Dec  3 12:57:39 np0005544501 systemd[1]: Started libcrun container.
Dec  3 12:57:39 np0005544501 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/143da1461bff3dbbf0dde280109043d83cac235f94f368f80044c865e8914a7e/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec  3 12:57:39 np0005544501 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/143da1461bff3dbbf0dde280109043d83cac235f94f368f80044c865e8914a7e/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec  3 12:57:39 np0005544501 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/143da1461bff3dbbf0dde280109043d83cac235f94f368f80044c865e8914a7e/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  3 12:57:39 np0005544501 systemd[1]: Started /usr/bin/podman healthcheck run 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5.
Dec  3 12:57:39 np0005544501 podman[160302]: 2025-12-03 17:57:39.996740202 +0000 UTC m=+0.141440313 container init 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, distribution-scope=public, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., vcs-type=git, name=ubi9-minimal, release=1755695350, config_id=edpm, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container)
Dec  3 12:57:40 np0005544501 openstack_network_exporter[160319]: INFO    17:57:40 main.go:48: registering *bridge.Collector
Dec  3 12:57:40 np0005544501 openstack_network_exporter[160319]: INFO    17:57:40 main.go:48: registering *coverage.Collector
Dec  3 12:57:40 np0005544501 openstack_network_exporter[160319]: INFO    17:57:40 main.go:48: registering *datapath.Collector
Dec  3 12:57:40 np0005544501 openstack_network_exporter[160319]: INFO    17:57:40 main.go:48: registering *iface.Collector
Dec  3 12:57:40 np0005544501 openstack_network_exporter[160319]: INFO    17:57:40 main.go:48: registering *memory.Collector
Dec  3 12:57:40 np0005544501 openstack_network_exporter[160319]: INFO    17:57:40 main.go:48: registering *ovnnorthd.Collector
Dec  3 12:57:40 np0005544501 openstack_network_exporter[160319]: INFO    17:57:40 main.go:48: registering *ovn.Collector
Dec  3 12:57:40 np0005544501 openstack_network_exporter[160319]: INFO    17:57:40 main.go:48: registering *ovsdbserver.Collector
Dec  3 12:57:40 np0005544501 openstack_network_exporter[160319]: INFO    17:57:40 main.go:48: registering *pmd_perf.Collector
Dec  3 12:57:40 np0005544501 openstack_network_exporter[160319]: INFO    17:57:40 main.go:48: registering *pmd_rxq.Collector
Dec  3 12:57:40 np0005544501 openstack_network_exporter[160319]: INFO    17:57:40 main.go:48: registering *vswitch.Collector
Dec  3 12:57:40 np0005544501 openstack_network_exporter[160319]: NOTICE  17:57:40 main.go:76: listening on https://:9105/metrics
Dec  3 12:57:40 np0005544501 podman[160302]: 2025-12-03 17:57:40.021020671 +0000 UTC m=+0.165720692 container start 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., name=ubi9-minimal, distribution-scope=public, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=ubi9-minimal-container, version=9.6)
Dec  3 12:57:40 np0005544501 podman[160302]: openstack_network_exporter
Dec  3 12:57:40 np0005544501 systemd[1]: Started openstack_network_exporter container.
Dec  3 12:57:40 np0005544501 podman[160329]: 2025-12-03 17:57:40.120759046 +0000 UTC m=+0.088263427 container health_status 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., name=ubi9-minimal, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 12:57:40 np0005544501 podman[160474]: 2025-12-03 17:57:40.70467482 +0000 UTC m=+0.134185451 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  3 12:57:40 np0005544501 python3.9[160525]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  3 12:57:41 np0005544501 python3.9[160680]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Dec  3 12:57:42 np0005544501 python3.9[160845]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  3 12:57:43 np0005544501 systemd[1]: Started libpod-conmon-9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753.scope.
Dec  3 12:57:43 np0005544501 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  3 12:57:43 np0005544501 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  3 12:57:43 np0005544501 podman[160846]: 2025-12-03 17:57:43.032049785 +0000 UTC m=+0.095785906 container exec 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  3 12:57:43 np0005544501 podman[160846]: 2025-12-03 17:57:43.06612239 +0000 UTC m=+0.129858571 container exec_died 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec  3 12:57:43 np0005544501 systemd[1]: libpod-conmon-9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753.scope: Deactivated successfully.
Dec  3 12:57:43 np0005544501 python3.9[161030]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  3 12:57:43 np0005544501 systemd[1]: Started libpod-conmon-9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753.scope.
Dec  3 12:57:44 np0005544501 podman[161031]: 2025-12-03 17:57:44.00073532 +0000 UTC m=+0.073746853 container exec 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 12:57:44 np0005544501 podman[161031]: 2025-12-03 17:57:44.035850722 +0000 UTC m=+0.108862185 container exec_died 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Dec  3 12:57:44 np0005544501 systemd[1]: libpod-conmon-9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753.scope: Deactivated successfully.
Dec  3 12:57:44 np0005544501 podman[161186]: 2025-12-03 17:57:44.537859248 +0000 UTC m=+0.068004688 container health_status f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  3 12:57:44 np0005544501 python3.9[161238]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:57:45 np0005544501 python3.9[161390]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Dec  3 12:57:46 np0005544501 python3.9[161555]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  3 12:57:46 np0005544501 systemd[1]: Started libpod-conmon-ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad.scope.
Dec  3 12:57:46 np0005544501 podman[161556]: 2025-12-03 17:57:46.243424889 +0000 UTC m=+0.073133788 container exec ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=edpm)
Dec  3 12:57:46 np0005544501 podman[161556]: 2025-12-03 17:57:46.273011082 +0000 UTC m=+0.102719971 container exec_died ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Dec  3 12:57:46 np0005544501 systemd[1]: libpod-conmon-ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad.scope: Deactivated successfully.
Dec  3 12:57:47 np0005544501 python3.9[161738]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  3 12:57:47 np0005544501 systemd[1]: Started libpod-conmon-ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad.scope.
Dec  3 12:57:47 np0005544501 podman[161739]: 2025-12-03 17:57:47.136740302 +0000 UTC m=+0.078471932 container exec ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute)
Dec  3 12:57:47 np0005544501 podman[161739]: 2025-12-03 17:57:47.169807612 +0000 UTC m=+0.111539242 container exec_died ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  3 12:57:47 np0005544501 systemd[1]: libpod-conmon-ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad.scope: Deactivated successfully.
Dec  3 12:57:47 np0005544501 python3.9[161923]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:57:48 np0005544501 python3.9[162075]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Dec  3 12:57:49 np0005544501 python3.9[162240]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  3 12:57:49 np0005544501 systemd[1]: Started libpod-conmon-f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58.scope.
Dec  3 12:57:49 np0005544501 podman[162241]: 2025-12-03 17:57:49.60324594 +0000 UTC m=+0.096209797 container exec f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  3 12:57:49 np0005544501 podman[162241]: 2025-12-03 17:57:49.637891291 +0000 UTC m=+0.130855198 container exec_died f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  3 12:57:49 np0005544501 systemd[1]: libpod-conmon-f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58.scope: Deactivated successfully.
Dec  3 12:57:50 np0005544501 python3.9[162424]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  3 12:57:50 np0005544501 systemd[1]: Started libpod-conmon-f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58.scope.
Dec  3 12:57:50 np0005544501 podman[162425]: 2025-12-03 17:57:50.502281097 +0000 UTC m=+0.069657410 container exec f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  3 12:57:50 np0005544501 podman[162425]: 2025-12-03 17:57:50.53583151 +0000 UTC m=+0.103207773 container exec_died f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  3 12:57:50 np0005544501 systemd[1]: libpod-conmon-f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58.scope: Deactivated successfully.
Dec  3 12:57:51 np0005544501 python3.9[162608]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:57:51 np0005544501 python3.9[162760]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Dec  3 12:57:52 np0005544501 python3.9[162925]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  3 12:57:52 np0005544501 systemd[1]: Started libpod-conmon-6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658.scope.
Dec  3 12:57:52 np0005544501 podman[162926]: 2025-12-03 17:57:52.790431067 +0000 UTC m=+0.074681236 container exec 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 12:57:52 np0005544501 podman[162926]: 2025-12-03 17:57:52.825893428 +0000 UTC m=+0.110143507 container exec_died 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 12:57:52 np0005544501 systemd[1]: libpod-conmon-6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658.scope: Deactivated successfully.
Dec  3 12:57:53 np0005544501 python3.9[163110]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  3 12:57:53 np0005544501 systemd[1]: Started libpod-conmon-6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658.scope.
Dec  3 12:57:53 np0005544501 podman[163111]: 2025-12-03 17:57:53.601953576 +0000 UTC m=+0.076669216 container exec 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 12:57:53 np0005544501 podman[163111]: 2025-12-03 17:57:53.63234153 +0000 UTC m=+0.107057170 container exec_died 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 12:57:53 np0005544501 systemd[1]: libpod-conmon-6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658.scope: Deactivated successfully.
Dec  3 12:57:54 np0005544501 python3.9[163292]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:57:55 np0005544501 python3.9[163444]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Dec  3 12:57:56 np0005544501 podman[163579]: 2025-12-03 17:57:56.041316524 +0000 UTC m=+0.083789926 container health_status 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  3 12:57:56 np0005544501 python3.9[163624]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  3 12:57:56 np0005544501 systemd[1]: Started libpod-conmon-9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5.scope.
Dec  3 12:57:56 np0005544501 podman[163632]: 2025-12-03 17:57:56.327256295 +0000 UTC m=+0.068928013 container exec 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, vcs-type=git, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, name=ubi9-minimal, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  3 12:57:56 np0005544501 podman[163632]: 2025-12-03 17:57:56.363890465 +0000 UTC m=+0.105562103 container exec_died 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, release=1755695350, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, config_id=edpm, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, managed_by=edpm_ansible, vcs-type=git, com.redhat.component=ubi9-minimal-container)
Dec  3 12:57:56 np0005544501 systemd[1]: libpod-conmon-9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5.scope: Deactivated successfully.
Dec  3 12:57:57 np0005544501 python3.9[163815]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  3 12:57:57 np0005544501 systemd[1]: Started libpod-conmon-9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5.scope.
Dec  3 12:57:57 np0005544501 podman[163816]: 2025-12-03 17:57:57.230648511 +0000 UTC m=+0.072683847 container exec 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, architecture=x86_64, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., name=ubi9-minimal, release=1755695350, config_id=edpm)
Dec  3 12:57:57 np0005544501 podman[163816]: 2025-12-03 17:57:57.263743181 +0000 UTC m=+0.105778497 container exec_died 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.buildah.version=1.33.7, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_id=edpm, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, vcs-type=git)
Dec  3 12:57:57 np0005544501 systemd[1]: libpod-conmon-9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5.scope: Deactivated successfully.
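
The exec/exec_died pairs above come from an Ansible podman_container_exec task probing "id -g" inside the exporter container, with systemd opening and tearing down a transient conmon scope around each exec. A minimal sketch of the equivalent probe, assuming only that the podman CLI is on PATH (the helper name is hypothetical):

    import subprocess

    def container_gid(container: str) -> int:
        # Equivalent of the logged task: podman exec <name> id -g
        out = subprocess.run(
            ["podman", "exec", container, "id", "-g"],
            capture_output=True, text=True, check=True,
        )
        return int(out.stdout.strip())

    if __name__ == "__main__":
        print(container_gid("openstack_network_exporter"))
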
Dec  3 12:57:57 np0005544501 python3.9[164000]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:57:58 np0005544501 python3.9[164152]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:57:59 np0005544501 python3.9[164304]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/telemetry.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:58:00 np0005544501 python3.9[164427]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/telemetry.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764784679.1115882-1016-45354786679975/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
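
Each stat/copy pair in this run follows the same idempotency handshake: ansible.legacy.stat fetches the destination's SHA-1, and ansible.legacy.copy only rewrites the file when the rendered source's checksum differs. A sketch of that pattern, assuming local paths (the helper name is hypothetical):

    import hashlib, pathlib, shutil

    def copy_if_changed(src: str, dest: str) -> bool:
        src_sum = hashlib.sha1(pathlib.Path(src).read_bytes()).hexdigest()
        d = pathlib.Path(dest)
        if d.exists() and hashlib.sha1(d.read_bytes()).hexdigest() == src_sum:
            return False          # unchanged -> no copy, task reports "ok"
        shutil.copy2(src, dest)   # changed -> copy, task reports "changed"
        return True
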
Dec  3 12:58:01 np0005544501 python3.9[164579]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:58:01 np0005544501 python3.9[164731]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:58:02 np0005544501 python3.9[164809]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:58:03 np0005544501 python3.9[164961]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:58:03 np0005544501 podman[165011]: 2025-12-03 17:58:03.479838311 +0000 UTC m=+0.071072436 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute)
Dec  3 12:58:03 np0005544501 python3.9[165059]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.kdreamsd recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:58:04 np0005544501 python3.9[165212]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:58:04 np0005544501 python3.9[165290]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:58:05 np0005544501 python3.9[165442]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
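
"nft -j list ruleset" emits the whole ruleset as JSON, which is what lets the following edpm_nftables_from_files step compare existing chains against the desired ones. A sketch of consuming that output, assuming nft is installed and the caller has the privileges to read the ruleset:

    import json, subprocess

    def current_tables() -> set[tuple[str, str]]:
        # `nft -j list ruleset` wraps everything in {"nftables": [...]}
        raw = subprocess.run(
            ["nft", "-j", "list", "ruleset"],
            capture_output=True, text=True, check=True,
        ).stdout
        objs = json.loads(raw)["nftables"]
        return {(o["table"]["family"], o["table"]["name"])
                for o in objs if "table" in o}
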
Dec  3 12:58:06 np0005544501 python3[165595]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  3 12:58:07 np0005544501 python3.9[165747]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:58:07 np0005544501 python3.9[165825]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:58:08 np0005544501 python3.9[165977]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:58:09 np0005544501 python3.9[166055]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:58:10 np0005544501 python3.9[166207]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:58:10 np0005544501 podman[166257]: 2025-12-03 17:58:10.311609031 +0000 UTC m=+0.067605739 container health_status 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, architecture=x86_64, vendor=Red Hat, Inc., distribution-scope=public, name=ubi9-minimal, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  3 12:58:10 np0005544501 python3.9[166301]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:58:10 np0005544501 podman[166383]: 2025-12-03 17:58:10.983054103 +0000 UTC m=+0.148449689 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  3 12:58:11 np0005544501 python3.9[166485]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:58:11 np0005544501 python3.9[166563]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:58:12 np0005544501 python3.9[166715]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:58:13 np0005544501 python3.9[166840]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764784692.0052834-1141-42321752210678/.source.nft follow=False _original_basename=ruleset.j2 checksum=bc835bd485c96b4ac7465e87d3a790a8d097f2aa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:58:14 np0005544501 python3.9[166992]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:58:14 np0005544501 podman[167144]: 2025-12-03 17:58:14.680178692 +0000 UTC m=+0.068499254 container health_status f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 12:58:14 np0005544501 python3.9[167145]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
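
The pipeline logged just above (cat ... | nft -c -f -) is a dry run: -c parses and validates the concatenated ruleset without committing it, which is why the chains file is loaded for real only afterwards and the flush/rules/jump fragments later still. A sketch of the same check, using the five .nft fragments named in the log:

    import pathlib, subprocess

    FRAGMENTS = [
        "/etc/nftables/edpm-chains.nft",
        "/etc/nftables/edpm-flushes.nft",
        "/etc/nftables/edpm-rules.nft",
        "/etc/nftables/edpm-update-jumps.nft",
        "/etc/nftables/edpm-jumps.nft",
    ]

    def check_ruleset(paths=FRAGMENTS) -> None:
        blob = "\n".join(pathlib.Path(p).read_text() for p in paths)
        # nft -c validates from stdin without installing anything
        subprocess.run(["nft", "-c", "-f", "-"], input=blob,
                       text=True, check=True)
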
Dec  3 12:58:15 np0005544501 python3.9[167323]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:58:16 np0005544501 python3.9[167475]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 12:58:17 np0005544501 python3.9[167628]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 12:58:18 np0005544501 python3.9[167782]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 12:58:19 np0005544501 python3.9[167937]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
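
The 12:58:14-12:58:19 tasks implement a change-marker handshake: the copy task touches edpm-rules.nft.changed, a later stat gates the live reload on that marker, and the marker is removed only after the flush/rules/jump fragments have been applied. A sketch of the pattern (paths from the log, function name hypothetical):

    import pathlib, subprocess

    MARKER = pathlib.Path("/etc/nftables/edpm-rules.nft.changed")

    def apply_if_changed() -> None:
        if not MARKER.exists():          # stat step: nothing to do
            return
        blob = "".join(
            pathlib.Path(p).read_text() for p in (
                "/etc/nftables/edpm-flushes.nft",
                "/etc/nftables/edpm-rules.nft",
                "/etc/nftables/edpm-update-jumps.nft",
            ))
        subprocess.run(["nft", "-f", "-"], input=blob, text=True, check=True)
        MARKER.unlink()                  # clear marker only after success
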
Dec  3 12:58:19 np0005544501 systemd[1]: session-22.scope: Deactivated successfully.
Dec  3 12:58:19 np0005544501 systemd[1]: session-22.scope: Consumed 2min 11.725s CPU time.
Dec  3 12:58:19 np0005544501 systemd-logind[784]: Session 22 logged out. Waiting for processes to exit.
Dec  3 12:58:19 np0005544501 systemd-logind[784]: Removed session 22.
Dec  3 12:58:26 np0005544501 systemd-logind[784]: New session 23 of user zuul.
Dec  3 12:58:26 np0005544501 systemd[1]: Started Session 23 of User zuul.
Dec  3 12:58:26 np0005544501 podman[167967]: 2025-12-03 17:58:26.61066166 +0000 UTC m=+0.088727348 container health_status 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  3 12:58:27 np0005544501 python3.9[168145]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  3 12:58:27 np0005544501 systemd[1]: Reloading.
Dec  3 12:58:27 np0005544501 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 12:58:27 np0005544501 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file in order to make it safer and more robust.
Dec  3 12:58:28 np0005544501 python3.9[168332]: ansible-ansible.builtin.service_facts Invoked
Dec  3 12:58:28 np0005544501 network[168349]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  3 12:58:28 np0005544501 network[168350]: 'network-scripts' will be removed from the distribution in the near future.
Dec  3 12:58:28 np0005544501 network[168351]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  3 12:58:29 np0005544501 podman[158200]: time="2025-12-03T17:58:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 12:58:29 np0005544501 podman[158200]: @ - - [03/Dec/2025:17:58:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 12784 "" "Go-http-client/1.1"
Dec  3 12:58:29 np0005544501 podman[158200]: @ - - [03/Dec/2025:17:58:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2136 "" "Go-http-client/1.1"
Dec  3 12:58:31 np0005544501 openstack_network_exporter[160319]: ERROR   17:58:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 12:58:31 np0005544501 openstack_network_exporter[160319]: ERROR   17:58:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 12:58:31 np0005544501 openstack_network_exporter[160319]: ERROR   17:58:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 12:58:31 np0005544501 openstack_network_exporter[160319]: ERROR   17:58:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 12:58:31 np0005544501 openstack_network_exporter[160319]: ERROR   17:58:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 12:58:33 np0005544501 podman[168576]: 2025-12-03 17:58:33.92731878 +0000 UTC m=+0.081506702 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec  3 12:58:34 np0005544501 python3.9[168647]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_ipmi.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 12:58:36 np0005544501 python3.9[168801]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:58:36 np0005544501 python3.9[168953]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:58:37 np0005544501 python3.9[169105]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
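
The shell fragment logged at 12:58:37 disables and masks certmonger only when the unit is actually active, and skips masking when a real unit file already exists under /etc/systemd/system. The same logic as a sketch (systemctl invocations as in the log):

    import pathlib, subprocess

    def quiesce_certmonger() -> None:
        active = subprocess.run(
            ["systemctl", "is-active", "certmonger.service"]).returncode == 0
        if not active:
            return
        subprocess.run(["systemctl", "disable", "--now",
                        "certmonger.service"], check=True)
        # mask only if no native unit file shadows the masked symlink
        if not pathlib.Path("/etc/systemd/system/certmonger.service").exists():
            subprocess.run(["systemctl", "mask", "certmonger.service"],
                           check=True)
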
Dec  3 12:58:38 np0005544501 python3.9[169257]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  3 12:58:39 np0005544501 python3.9[169409]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  3 12:58:39 np0005544501 systemd[1]: Reloading.
Dec  3 12:58:39 np0005544501 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file in order to make it safer and more robust.
Dec  3 12:58:39 np0005544501 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 12:58:40 np0005544501 podman[169568]: 2025-12-03 17:58:40.660126023 +0000 UTC m=+0.078183471 container health_status 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, architecture=x86_64, release=1755695350, version=9.6, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.buildah.version=1.33.7, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, vendor=Red Hat, Inc.)
Dec  3 12:58:40 np0005544501 python3.9[169615]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_ipmi.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
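
The 12:58:34-12:58:40 tasks retire the legacy tripleo_ceilometer_agent_ipmi unit in the canonical order: stop and disable, delete both possible unit-file locations, daemon-reload, then reset-failed so no stale failure state lingers. A compact sketch of that sequence:

    import pathlib, subprocess

    UNIT = "tripleo_ceilometer_agent_ipmi.service"

    def retire_unit(unit: str = UNIT) -> None:
        subprocess.run(["systemctl", "disable", "--now", unit], check=False)
        for d in ("/usr/lib/systemd/system", "/etc/systemd/system"):
            pathlib.Path(d, unit).unlink(missing_ok=True)
        subprocess.run(["systemctl", "daemon-reload"], check=True)
        # forget any failed state left behind by the removed unit
        subprocess.run(["systemctl", "reset-failed", unit], check=False)
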
Dec  3 12:58:41 np0005544501 podman[169740]: 2025-12-03 17:58:41.537041357 +0000 UTC m=+0.105422786 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Dec  3 12:58:41 np0005544501 python3.9[169783]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry-power-monitoring recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 12:58:42 np0005544501 python3.9[169942]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 12:58:43 np0005544501 python3.9[170094]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:58:43 np0005544501 python3.9[170215]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764784722.7604682-125-16219807224910/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  3 12:58:44 np0005544501 podman[170368]: 2025-12-03 17:58:44.887178895 +0000 UTC m=+0.058210514 container health_status f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 12:58:44 np0005544501 python3.9[170367]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Dec  3 12:58:46 np0005544501 python3.9[170543]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:58:46 np0005544501 python3.9[170664]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764784725.7537687-171-26307915965627/.source.conf _original_basename=ceilometer.conf follow=False checksum=e93ef84feaa07737af66c0c1da2fd4bdcae81d37 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:58:47 np0005544501 python3.9[170814]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:58:48 np0005544501 python3.9[170935]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764784727.047948-171-278249616882688/.source.yaml _original_basename=polling.yaml follow=False checksum=5ef7021082c6431099dde63e021011029cd65119 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:58:48 np0005544501 python3.9[171087]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:58:49 np0005544501 python3.9[171208]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764784728.2751324-171-8255002579051/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:58:50 np0005544501 python3.9[171358]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 12:58:50 np0005544501 python3.9[171510]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 12:58:51 np0005544501 python3.9[171662]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:58:52 np0005544501 python3.9[171783]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764784731.0987523-230-24924026116555/.source.json follow=False _original_basename=ceilometer-agent-ipmi.json.j2 checksum=21255e7f7db3155b4a491729298d9407fe6f8335 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
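
The mode=420 in these copy invocations is not a typo: when a playbook passes the mode as a bare integer, Ansible logs it in decimal, and 420 decimal is 0644 octal, the same permissions the neighbouring tasks spell as mode=0644. A one-liner confirms the equivalence:

    assert 420 == 0o644    # Ansible's mode=420 is just 0644 in decimal
    print(oct(420))        # -> 0o644
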
Dec  3 12:58:52 np0005544501 python3.9[171933]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:58:53 np0005544501 python3.9[172009]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:58:54 np0005544501 python3.9[172159]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_agent_ipmi.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:58:54 np0005544501 python3.9[172280]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_agent_ipmi.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764784733.5023866-230-14957450641605/.source.json follow=False _original_basename=ceilometer_agent_ipmi.json.j2 checksum=cf81874b7544c057599ec397442879f74d42b3ec backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:58:55 np0005544501 python3.9[172430]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:58:56 np0005544501 python3.9[172551]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764784734.7568266-230-85546591287981/.source.yaml follow=False _original_basename=ceilometer_prom_exporter.yaml.j2 checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:58:56 np0005544501 python3.9[172701]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:58:56 np0005544501 podman[172702]: 2025-12-03 17:58:56.904141959 +0000 UTC m=+0.054794910 container health_status 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  3 12:58:57 np0005544501 python3.9[172845]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/firewall.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764784736.312069-230-66316906864034/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=40b8960d32c81de936cddbeb137a8240ecc54e7b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:58:58 np0005544501 python3.9[172995]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/kepler.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:58:58 np0005544501 python3.9[173116]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/kepler.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764784737.599603-230-216960233574013/.source.json follow=False _original_basename=kepler.json.j2 checksum=89451093c8765edd3915016a9e87770fe489178d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:58:59 np0005544501 python3.9[173266]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:58:59 np0005544501 podman[158200]: time="2025-12-03T17:58:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 12:58:59 np0005544501 podman[158200]: @ - - [03/Dec/2025:17:58:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 12784 "" "Go-http-client/1.1"
Dec  3 12:58:59 np0005544501 podman[158200]: @ - - [03/Dec/2025:17:58:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2149 "" "Go-http-client/1.1"
Dec  3 12:58:59 np0005544501 python3.9[173345]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:59:00 np0005544501 python3.9[173497]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:59:01 np0005544501 openstack_network_exporter[160319]: ERROR   17:59:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 12:59:01 np0005544501 openstack_network_exporter[160319]: ERROR   17:59:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 12:59:01 np0005544501 openstack_network_exporter[160319]: ERROR   17:59:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 12:59:01 np0005544501 openstack_network_exporter[160319]: ERROR   17:59:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 12:59:01 np0005544501 openstack_network_exporter[160319]: ERROR   17:59:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 12:59:01 np0005544501 python3.9[173649]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:59:02 np0005544501 python3.9[173801]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 12:59:02 np0005544501 python3.9[173953]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:59:03 np0005544501 python3.9[174076]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764784742.419127-349-17470506150034/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
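
The healthcheck script copied here lands in /var/lib/openstack/healthchecks/ceilometer_agent_ipmi/, which the container configs in this log bind-mount read-only at /openstack, so the configured test command /openstack/healthcheck runs the host-managed script. One such check can be forced by hand (sketch, assuming the podman CLI and a running container of that name):

    import subprocess

    # `podman healthcheck run` executes the container's configured test
    # command once and exits 0 for healthy, non-zero for unhealthy.
    rc = subprocess.run(
        ["podman", "healthcheck", "run", "ceilometer_agent_ipmi"]).returncode
    print("healthy" if rc == 0 else "unhealthy")
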
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.694 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them. Therefore, one can expect the polling run to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.695 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.695 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f510e5ac0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.696 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f3f52673fe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.696 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f562c3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f510e5ac0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.697 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f510e5ac0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.697 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f510e5ac0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.698 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f526739b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f510e5ac0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.698 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f510e5ac0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.698 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673a40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f510e5ac0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.698 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52671a60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f510e5ac0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.698 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673a70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f510e5ac0>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.699 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.699 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f510e5ac0>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.699 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f3f5271c620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.700 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f510e5ac0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.700 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.700 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f562d33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f510e5ac0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.700 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f3f5271c0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.701 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f526733b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f510e5ac0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.701 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.701 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f510e5ac0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.701 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f3f5271c140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.702 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f526734d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f510e5ac0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.702 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.702 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f565c04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f510e5ac0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.702 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f3f52673980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.703 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f510e5ac0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.703 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.703 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f510e5ac0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.704 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f3f5271c1d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f510e5ac0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.704 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f526735f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f510e5ac0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.705 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f3f52673a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.705 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f510e5ac0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.705 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.705 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f526736b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f510e5ac0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.706 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f3f52672390>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.706 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f510e5ac0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.706 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.706 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f510e5ac0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.707 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f3f526739e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.707 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f510e5ac0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'disk.device.allocation': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.707 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.708 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f510e5ac0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'disk.device.allocation': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.708 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f3f5271c260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.708 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.708 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f3f5271c2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.708 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.709 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f3f52671ca0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.709 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.709 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f3f52673470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.709 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.709 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f3f5271c380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.709 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.709 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f3f526734a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.709 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.709 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f3f52671a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.709 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.709 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f3f52673ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.709 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.710 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f3f52673500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.710 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.710 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f3f52673560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.710 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.710 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f3f526735c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.710 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.710 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f3f52673620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.710 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.710 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f3f52673680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.710 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.711 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f3f526736e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.711 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.711 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f3f52673f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.711 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.711 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f3f52673740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.711 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.711 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f3f52673f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.711 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.711 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.712 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.712 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.712 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.712 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.712 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.713 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.713 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.713 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.713 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.713 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.714 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.714 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.714 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.714 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.714 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.714 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.715 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.715 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.715 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.715 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.715 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.716 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.716 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.716 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:59:03 np0005544501 ceilometer_agent_compute[154682]: 2025-12-03 17:59:03.716 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 12:59:04 np0005544501 python3.9[174153]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:59:04 np0005544501 podman[174248]: 2025-12-03 17:59:04.5119524 +0000 UTC m=+0.070989716 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  3 12:59:04 np0005544501 python3.9[174285]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764784742.419127-349-17470506150034/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  3 12:59:05 np0005544501 python3.9[174449]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/kepler/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 12:59:05 np0005544501 python3.9[174572]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/kepler/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764784744.8394547-349-33660166384450/.source _original_basename=healthcheck follow=False checksum=57ed53cc150174efd98819129660d5b9ea9ea61a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  3 12:59:07 np0005544501 python3.9[174724]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry-power-monitoring config_pattern=ceilometer_agent_ipmi.json debug=False
Dec  3 12:59:08 np0005544501 python3.9[174876]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  3 12:59:09 np0005544501 python3[175028]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry-power-monitoring config_id=edpm config_overrides={} config_patterns=ceilometer_agent_ipmi.json log_base_path=/var/log/containers/stdouts debug=False
Dec  3 12:59:10 np0005544501 podman[175067]: 2025-12-03 17:59:10.910230819 +0000 UTC m=+0.081817340 container health_status 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, version=9.6, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41)
Dec  3 12:59:12 np0005544501 podman[175107]: 2025-12-03 17:59:12.918353871 +0000 UTC m=+1.083731558 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  3 12:59:14 np0005544501 podman[175040]: 2025-12-03 17:59:14.474952801 +0000 UTC m=+4.990262850 image pull 24d4416455a3caf43088be1a1fdcd72d9680ad5e64ac2b338cb2cc50d15f5acc quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified
Dec  3 12:59:14 np0005544501 podman[175188]: 2025-12-03 17:59:14.66362907 +0000 UTC m=+0.064142137 container create 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  3 12:59:14 np0005544501 podman[175188]: 2025-12-03 17:59:14.635585955 +0000 UTC m=+0.036099002 image pull 24d4416455a3caf43088be1a1fdcd72d9680ad5e64ac2b338cb2cc50d15f5acc quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified
Dec  3 12:59:14 np0005544501 python3[175028]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_ipmi --conmon-pidfile /run/ceilometer_agent_ipmi.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck ipmi --label config_id=edpm --label container_name=ceilometer_agent_ipmi --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z --volume /var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified kolla_start
Dec  3 12:59:15 np0005544501 podman[175349]: 2025-12-03 17:59:15.455939498 +0000 UTC m=+0.081886581 container health_status f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 12:59:15 np0005544501 python3.9[175402]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 12:59:16 np0005544501 python3.9[175556]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_ipmi.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:59:17 np0005544501 python3.9[175707]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764784756.762395-427-136272684828593/source dest=/etc/systemd/system/edpm_ceilometer_agent_ipmi.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:59:18 np0005544501 python3.9[175783]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  3 12:59:18 np0005544501 systemd[1]: Reloading.
Dec  3 12:59:18 np0005544501 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 12:59:18 np0005544501 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 12:59:19 np0005544501 python3.9[175894]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_ipmi.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 12:59:19 np0005544501 systemd[1]: Reloading.
Dec  3 12:59:19 np0005544501 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 12:59:19 np0005544501 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
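The two Reloading passes bracket the two ansible-systemd invocations above: a bare daemon_reload after the unit file is written, then a second reload accompanying the enable-and-restart of the container unit. A minimal sketch replaying the same sequence with systemctl directly, assuming shell access rather than Ansible:

import subprocess

# Hypothetical replay of the two ansible-systemd invocations logged above:
# first a bare daemon-reload, then enable + restart of the container unit.
UNIT = "edpm_ceilometer_agent_ipmi.service"

subprocess.run(["systemctl", "daemon-reload"], check=True)
subprocess.run(["systemctl", "enable", UNIT], check=True)   # enabled=True
subprocess.run(["systemctl", "restart", UNIT], check=True)  # state=restarted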
Dec  3 12:59:19 np0005544501 systemd[1]: Starting ceilometer_agent_ipmi container...
Dec  3 12:59:19 np0005544501 systemd[1]: Started libcrun container.
Dec  3 12:59:19 np0005544501 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62d4c4f53be761e65a94fd9237c32e64746c6b41a7e206dc6cdd23d6255bf02f/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  3 12:59:19 np0005544501 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62d4c4f53be761e65a94fd9237c32e64746c6b41a7e206dc6cdd23d6255bf02f/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Dec  3 12:59:19 np0005544501 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62d4c4f53be761e65a94fd9237c32e64746c6b41a7e206dc6cdd23d6255bf02f/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Dec  3 12:59:19 np0005544501 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62d4c4f53be761e65a94fd9237c32e64746c6b41a7e206dc6cdd23d6255bf02f/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Dec  3 12:59:20 np0005544501 systemd[1]: Started /usr/bin/podman healthcheck run 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf.
Dec  3 12:59:20 np0005544501 podman[175933]: 2025-12-03 17:59:20.029251028 +0000 UTC m=+0.174247288 container init 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  3 12:59:20 np0005544501 ceilometer_agent_ipmi[175948]: + sudo -E kolla_set_configs
Dec  3 12:59:20 np0005544501 podman[175933]: 2025-12-03 17:59:20.07506943 +0000 UTC m=+0.220065640 container start 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2)
Dec  3 12:59:20 np0005544501 podman[175933]: ceilometer_agent_ipmi
Dec  3 12:59:20 np0005544501 systemd[1]: Started ceilometer_agent_ipmi container.
Dec  3 12:59:20 np0005544501 ceilometer_agent_ipmi[175948]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  3 12:59:20 np0005544501 ceilometer_agent_ipmi[175948]: INFO:__main__:Validating config file
Dec  3 12:59:20 np0005544501 ceilometer_agent_ipmi[175948]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  3 12:59:20 np0005544501 ceilometer_agent_ipmi[175948]: INFO:__main__:Copying service configuration files
Dec  3 12:59:20 np0005544501 ceilometer_agent_ipmi[175948]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Dec  3 12:59:20 np0005544501 ceilometer_agent_ipmi[175948]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Dec  3 12:59:20 np0005544501 ceilometer_agent_ipmi[175948]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Dec  3 12:59:20 np0005544501 ceilometer_agent_ipmi[175948]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Dec  3 12:59:20 np0005544501 ceilometer_agent_ipmi[175948]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Dec  3 12:59:20 np0005544501 ceilometer_agent_ipmi[175948]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Dec  3 12:59:20 np0005544501 ceilometer_agent_ipmi[175948]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  3 12:59:20 np0005544501 ceilometer_agent_ipmi[175948]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  3 12:59:20 np0005544501 ceilometer_agent_ipmi[175948]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  3 12:59:20 np0005544501 ceilometer_agent_ipmi[175948]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  3 12:59:20 np0005544501 ceilometer_agent_ipmi[175948]: INFO:__main__:Writing out command to execute
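The Copying/Setting-permission pairs above are driven by the kolla config file mounted at /var/lib/kolla/config_files/config.json. A plausible reconstruction of ceilometer-agent-ipmi.json, written as a Python dict in the same repr style as the config_data labels; only the sources, dests, and command are taken from the log, while the owner/perm values are assumptions:

# Plausible reconstruction of ceilometer-agent-ipmi.json, inferred from the
# kolla_set_configs lines above; the real file may differ in detail
# (owner/perm here are assumed, not logged).
config_json = {
    'command': '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout',
    'config_files': [
        {'source': '/var/lib/openstack/config/ceilometer.conf',
         'dest': '/etc/ceilometer/ceilometer.conf',
         'owner': 'ceilometer', 'perm': '0600'},
        {'source': '/var/lib/openstack/config/polling.yaml',
         'dest': '/etc/ceilometer/polling.yaml',
         'owner': 'ceilometer', 'perm': '0600'},
        {'source': '/var/lib/openstack/config/custom.conf',
         'dest': '/etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf',
         'owner': 'ceilometer', 'perm': '0600'},
        {'source': '/var/lib/openstack/config/ceilometer-host-specific.conf',
         'dest': '/etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf',
         'owner': 'ceilometer', 'perm': '0600'},
    ],
}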
Dec  3 12:59:20 np0005544501 ceilometer_agent_ipmi[175948]: ++ cat /run_command
Dec  3 12:59:20 np0005544501 ceilometer_agent_ipmi[175948]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Dec  3 12:59:20 np0005544501 ceilometer_agent_ipmi[175948]: + ARGS=
Dec  3 12:59:20 np0005544501 ceilometer_agent_ipmi[175948]: + sudo kolla_copy_cacerts
Dec  3 12:59:20 np0005544501 podman[175955]: 2025-12-03 17:59:20.179771698 +0000 UTC m=+0.077006624 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ceilometer_agent_ipmi, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm)
Dec  3 12:59:20 np0005544501 systemd[1]: 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf-c39be327e6c03e9.service: Main process exited, code=exited, status=1/FAILURE
Dec  3 12:59:20 np0005544501 systemd[1]: 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf-c39be327e6c03e9.service: Failed with result 'exit-code'.
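The transient unit failing here is the healthcheck, not the agent: systemd wraps '/usr/bin/podman healthcheck run <id>' in a throwaway unit (started two lines earlier), and the first run exits non-zero while the container is still health_status=starting with failing_streak=1. A one-line reproduction, assuming podman is on PATH:

import subprocess

# The failed transient unit above is just this command exiting non-zero
# while the container is still starting; a later run returns 0 once healthy.
cid = "6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf"
rc = subprocess.run(["podman", "healthcheck", "run", cid]).returncode
print(rc)  # 1 -> starting/unhealthy, 0 -> healthy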
Dec  3 12:59:20 np0005544501 ceilometer_agent_ipmi[175948]: + [[ ! -n '' ]]
Dec  3 12:59:20 np0005544501 ceilometer_agent_ipmi[175948]: + . kolla_extend_start
Dec  3 12:59:20 np0005544501 ceilometer_agent_ipmi[175948]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Dec  3 12:59:20 np0005544501 ceilometer_agent_ipmi[175948]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Dec  3 12:59:20 np0005544501 ceilometer_agent_ipmi[175948]: + umask 0022
Dec  3 12:59:20 np0005544501 ceilometer_agent_ipmi[175948]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
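The '+' lines are kolla_start's shell trace. A condensed Python rendition of the same control flow, under the assumption stated in the comments (the real kolla_start is a shell script, and the kolla_extend_start sourcing is elided):

import os
import subprocess

# Condensed rendition of the kolla_start flow traced above: copy configs,
# read the service command from /run_command, copy CA certs, then exec so
# the polling agent becomes PID 1 of the container.
subprocess.run(["sudo", "-E", "kolla_set_configs"], check=True)
with open("/run_command") as f:
    cmd = f.read().strip()      # '/usr/bin/ceilometer-polling ...'
subprocess.run(["sudo", "kolla_copy_cacerts"], check=True)
os.umask(0o022)
argv = cmd.split()              # naive split; fine for this command line
os.execv(argv[0], argv)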
Dec  3 12:59:21 np0005544501 python3.9[176130]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry-power-monitoring config_pattern=kepler.json debug=False
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.023 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.023 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.023 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.023 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.024 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.024 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.024 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.024 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.024 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.024 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.024 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.024 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.024 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.024 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.024 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.025 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.025 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.025 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.025 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.025 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.025 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.025 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.025 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.025 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.025 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.025 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.025 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.026 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.026 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.026 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.026 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.026 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.026 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.026 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.026 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.026 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.026 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.026 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.026 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.026 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.027 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.027 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.027 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.027 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.027 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.027 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.027 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.027 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.027 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.027 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.027 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.027 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.028 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.028 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.028 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.028 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.028 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.028 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.028 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.028 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.028 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.028 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.028 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.028 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.029 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.029 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.029 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.029 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.029 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.029 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.029 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.029 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.029 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.029 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.029 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.029 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.030 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.030 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.030 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.030 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.030 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.030 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.030 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.030 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.030 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.030 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.030 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.030 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.031 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.031 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.031 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.031 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.031 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.031 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.031 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.031 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.031 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.031 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.031 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.031 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.032 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.032 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.032 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.032 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.032 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.032 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.032 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.032 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.032 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.033 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.033 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.033 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.033 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.033 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.033 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.033 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.033 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.033 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.033 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.033 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.034 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.034 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.034 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.034 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.034 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.034 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.034 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.034 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.034 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.034 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.035 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.035 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.035 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.035 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.035 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.035 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.035 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.035 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.035 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.035 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.035 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.036 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.036 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.036 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.036 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.036 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.036 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.036 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.036 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.036 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.036 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.036 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.037 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.037 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.037 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.037 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.037 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.037 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.037 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.037 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.037 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.037 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.037 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.038 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.038 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
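Everything between the two asterisk rows is oslo.config's option dump: cotyledon's oslo_config_glue calls CONF.log_opt_values() when the service manager starts, and options registered with secret=True (the telemetry secret, messaging URLs, passwords) print as '****'. A minimal standalone reproduction, with illustrative option names rather than the full ceilometer set:

import logging
from oslo_config import cfg

# Minimal reproduction of the dump above: register a couple of options,
# then ask oslo.config to log every current value at DEBUG level.
CONF = cfg.ConfigOpts()
CONF.register_opts([
    cfg.IntOpt("batch_size", default=50),
    cfg.StrOpt("backend_url", secret=True),   # logged as '****'
])
logging.basicConfig(level=logging.DEBUG)
CONF([], project="demo")
CONF.log_opt_values(logging.getLogger(__name__), logging.DEBUG)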
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.057 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.058 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.059 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.141 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmp70pwfvym/privsep.sock']
Dec  3 12:59:21 np0005544501 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.847 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.848 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp70pwfvym/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.709 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.715 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.719 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.720 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.956 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.957 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.960 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.961 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.961 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.961 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.961 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.962 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.962 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.962 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.962 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.963 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.963 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
Dec  3 12:59:21 np0005544501 python3.9[176291]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.969 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.970 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.970 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.970 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.970 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.970 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.971 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.971 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.971 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.971 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.971 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.972 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.972 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.972 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.972 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.973 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.973 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.973 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.973 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.974 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.974 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.974 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.974 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.974 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.975 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.975 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.975 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.975 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.976 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.976 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.976 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.976 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.976 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.976 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.977 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.977 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.977 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.977 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.977 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.978 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.978 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.978 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.979 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.979 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.979 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.979 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.980 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.980 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.980 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.980 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.981 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.981 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.981 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.982 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.982 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.982 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.982 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.983 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.983 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.983 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.983 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.984 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.984 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.984 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.984 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.985 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.985 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.985 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.986 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.986 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.986 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.986 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.987 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.987 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.987 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.988 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.988 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.988 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.988 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.989 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.989 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.989 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.989 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.990 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.990 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.990 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.990 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.990 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.991 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.991 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.991 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.991 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.992 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.992 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.992 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.992 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.992 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.993 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.993 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.993 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.993 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.994 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.994 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.994 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.995 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.995 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.995 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.995 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.995 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.996 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.996 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.996 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.997 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.997 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.998 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.998 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.998 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.998 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.998 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.998 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.998 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.999 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.999 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.999 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.999 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.999 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.999 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:21 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.999 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:21.999 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.000 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.000 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.000 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.000 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.000 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.000 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.000 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.000 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.001 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.001 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.001 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.001 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.001 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.001 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.001 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.001 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.002 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.002 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.002 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.002 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.002 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.002 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.002 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.002 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.003 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.003 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.003 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.003 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.003 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.003 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.003 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.003 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.004 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.004 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.004 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.004 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.004 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.004 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.004 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.004 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.004 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.005 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.005 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.005 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.005 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.005 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.005 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.005 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.006 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.006 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.006 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.006 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.006 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.006 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.006 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.006 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.007 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.007 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.007 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.007 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.007 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.007 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.008 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.008 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.008 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.008 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.009 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.009 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.009 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.009 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.009 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.010 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.010 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.010 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Dec  3 12:59:22 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:22.013 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
Dec  3 12:59:22 np0005544501 python3[176447]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry-power-monitoring config_id=edpm config_overrides={} config_patterns=kepler.json log_base_path=/var/log/containers/stdouts debug=False
Dec  3 12:59:28 np0005544501 podman[176541]: 2025-12-03 17:59:28.176171755 +0000 UTC m=+0.769466367 container health_status 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  3 12:59:28 np0005544501 podman[176460]: 2025-12-03 17:59:28.921366783 +0000 UTC m=+5.873928943 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Dec  3 12:59:29 np0005544501 podman[176682]: 2025-12-03 17:59:29.130139473 +0000 UTC m=+0.087822122 container create ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, architecture=x86_64, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, maintainer=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, io.buildah.version=1.29.0, managed_by=edpm_ansible)
Dec  3 12:59:29 np0005544501 podman[176682]: 2025-12-03 17:59:29.077343608 +0000 UTC m=+0.035026297 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Dec  3 12:59:29 np0005544501 python3[176447]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name kepler --conmon-pidfile /run/kepler.pid --env ENABLE_GPU=true --env EXPOSE_CONTAINER_METRICS=true --env ENABLE_PROCESS_METRICS=true --env EXPOSE_VM_METRICS=true --env EXPOSE_ESTIMATED_IDLE_POWER_METRICS=false --env LIBVIRT_METADATA_URI=http://openstack.org/xmlns/libvirt/nova/1.1 --healthcheck-command /openstack/healthcheck kepler --label config_id=edpm --label container_name=kepler --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 8888:8888 --volume /lib/modules:/lib/modules:ro --volume /run/libvirt:/run/libvirt:shared,ro --volume /sys:/sys --volume /proc:/proc --volume /var/lib/openstack/healthchecks/kepler:/openstack:ro,z quay.io/sustainable_computing_io/kepler:release-0.7.12 -v=2
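Annotation: the PODMAN-CONTAINER-DEBUG line above is the exact command edpm_container_manage generated. Rewrapped here purely for readability (same flags, same order; only the very long --label config_data value is elided as '...'):

    podman create --name kepler --conmon-pidfile /run/kepler.pid \
      --env ENABLE_GPU=true --env EXPOSE_CONTAINER_METRICS=true \
      --env ENABLE_PROCESS_METRICS=true --env EXPOSE_VM_METRICS=true \
      --env EXPOSE_ESTIMATED_IDLE_POWER_METRICS=false \
      --env LIBVIRT_METADATA_URI=http://openstack.org/xmlns/libvirt/nova/1.1 \
      --healthcheck-command '/openstack/healthcheck kepler' \
      --label config_id=edpm --label container_name=kepler \
      --label managed_by=edpm_ansible --label config_data='...' \
      --log-driver journald --log-level info --network host \
      --privileged=True --publish 8888:8888 \
      --volume /lib/modules:/lib/modules:ro \
      --volume /run/libvirt:/run/libvirt:shared,ro \
      --volume /sys:/sys --volume /proc:/proc \
      --volume /var/lib/openstack/healthchecks/kepler:/openstack:ro,z \
      quay.io/sustainable_computing_io/kepler:release-0.7.12 -v=2

Note the pairing of --network host with --publish 8888:8888: with host networking the publish flag is effectively a no-op, and the exporter simply binds 0.0.0.0:8888 on the host.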
Dec  3 12:59:29 np0005544501 podman[158200]: time="2025-12-03T17:59:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 12:59:29 np0005544501 podman[158200]: @ - - [03/Dec/2025:17:59:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 18528 "" "Go-http-client/1.1"
Dec  3 12:59:29 np0005544501 podman[158200]: @ - - [03/Dec/2025:17:59:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2576 "" "Go-http-client/1.1"
Dec  3 12:59:30 np0005544501 python3.9[176872]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 12:59:31 np0005544501 python3.9[177026]: ansible-file Invoked with path=/etc/systemd/system/edpm_kepler.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:59:31 np0005544501 openstack_network_exporter[160319]: ERROR   17:59:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 12:59:31 np0005544501 openstack_network_exporter[160319]: ERROR   17:59:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 12:59:31 np0005544501 openstack_network_exporter[160319]: ERROR   17:59:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 12:59:31 np0005544501 openstack_network_exporter[160319]: ERROR   17:59:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 12:59:31 np0005544501 openstack_network_exporter[160319]: ERROR   17:59:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
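Annotation: the ERROR lines above come from openstack_network_exporter probing OVS/OVN control sockets that do not exist on this node, and the dpif-netdev PMD queries fail for the related reason that no userspace datapath is configured. A quick way to check what is actually present, assuming the conventional runtime socket paths:

    # OVS db / vswitchd control sockets
    ls /var/run/openvswitch/*.ctl 2>/dev/null
    # ovn-northd control socket (normally only present on controller nodes)
    ls /var/run/ovn/*.ctl 2>/dev/null

On an EDPM compute node ovn-northd is not expected to run, so these errors are recurring but benign noise from the exporter's collectors.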
Dec  3 12:59:31 np0005544501 python3.9[177177]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764784771.2311468-489-95149524570322/source dest=/etc/systemd/system/edpm_kepler.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:59:32 np0005544501 python3.9[177253]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  3 12:59:32 np0005544501 systemd[1]: Reloading.
Dec  3 12:59:32 np0005544501 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 12:59:32 np0005544501 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 12:59:33 np0005544501 python3.9[177363]: ansible-systemd Invoked with state=restarted name=edpm_kepler.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 12:59:33 np0005544501 systemd[1]: Reloading.
Dec  3 12:59:33 np0005544501 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 12:59:33 np0005544501 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
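Annotation: the two ansible-systemd invocations above are roughly equivalent to running the following on the host (a sketch of the underlying calls, not the module's literal implementation):

    systemctl daemon-reload                   # pick up the new edpm_kepler.service unit
    systemctl enable edpm_kepler.service      # enabled=True
    systemctl restart edpm_kepler.service     # state=restarted

The rc.local and SysV 'network' generator messages are routine noise emitted on every systemd reload, not a result of this change.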
Dec  3 12:59:34 np0005544501 systemd[1]: Starting kepler container...
Dec  3 12:59:35 np0005544501 systemd[1]: Started libcrun container.
Dec  3 12:59:35 np0005544501 systemd[1]: Started /usr/bin/podman healthcheck run ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c.
Dec  3 12:59:35 np0005544501 podman[177402]: 2025-12-03 17:59:35.437382573 +0000 UTC m=+1.365803156 container init ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, managed_by=edpm_ansible, config_id=edpm, version=9.4, architecture=x86_64, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, io.openshift.expose-services=, release-0.7.12=, io.buildah.version=1.29.0, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public)
Dec  3 12:59:35 np0005544501 kepler[177429]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Dec  3 12:59:35 np0005544501 kepler[177429]: I1203 17:59:35.473847       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Dec  3 12:59:35 np0005544501 kepler[177429]: I1203 17:59:35.473994       1 config.go:293] using gCgroup ID in the BPF program: true
Dec  3 12:59:35 np0005544501 kepler[177429]: I1203 17:59:35.474034       1 config.go:295] kernel version: 5.14
Dec  3 12:59:35 np0005544501 kepler[177429]: I1203 17:59:35.474674       1 power.go:78] Unable to obtain power, use estimate method
Dec  3 12:59:35 np0005544501 kepler[177429]: I1203 17:59:35.474699       1 redfish.go:169] failed to get redfish credential file path
Dec  3 12:59:35 np0005544501 kepler[177429]: I1203 17:59:35.475090       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Dec  3 12:59:35 np0005544501 kepler[177429]: I1203 17:59:35.475104       1 power.go:79] using none to obtain power
Dec  3 12:59:35 np0005544501 kepler[177429]: E1203 17:59:35.475124       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Dec  3 12:59:35 np0005544501 kepler[177429]: E1203 17:59:35.475144       1 exporter.go:154] failed to init GPU accelerators: no devices found
Dec  3 12:59:35 np0005544501 kepler[177429]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Dec  3 12:59:35 np0005544501 kepler[177429]: I1203 17:59:35.477071       1 exporter.go:84] Number of CPUs: 8
Dec  3 12:59:35 np0005544501 podman[177402]: 2025-12-03 17:59:35.480848537 +0000 UTC m=+1.409269040 container start ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.openshift.tags=base rhel9, container_name=kepler, release-0.7.12=, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, distribution-scope=public, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, vendor=Red Hat, Inc.)
Dec  3 12:59:35 np0005544501 podman[177402]: kepler
Dec  3 12:59:35 np0005544501 systemd[1]: Started kepler container.
Dec  3 12:59:35 np0005544501 podman[177439]: 2025-12-03 17:59:35.60069977 +0000 UTC m=+0.102218288 container health_status ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, maintainer=Red Hat, Inc., release=1214.1726694543, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, managed_by=edpm_ansible, vcs-type=git, vendor=Red Hat, Inc., container_name=kepler, io.openshift.expose-services=, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public)
Dec  3 12:59:35 np0005544501 systemd[1]: ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c-e561c4b239a4eab.service: Main process exited, code=exited, status=1/FAILURE
Dec  3 12:59:35 np0005544501 systemd[1]: ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c-e561c4b239a4eab.service: Failed with result 'exit-code'.
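Annotation: the transient ...-e561c4b239a4eab.service failure above is the first healthcheck run exiting nonzero while the container still reports health_status=starting; podman wraps each scheduled check in a throwaway systemd unit, so a failed check surfaces as a failed unit. The check can be reproduced by hand (exit status 0 = healthy, 1 = unhealthy):

    podman healthcheck run kepler; echo $?

Given health_failing_streak=1 on a freshly started exporter, this is expected until Kepler finishes binding its listener.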
Dec  3 12:59:35 np0005544501 podman[177416]: 2025-12-03 17:59:35.637294744 +0000 UTC m=+0.804692639 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  3 12:59:36 np0005544501 kepler[177429]: I1203 17:59:36.069694       1 watcher.go:83] Using in cluster k8s config
Dec  3 12:59:36 np0005544501 kepler[177429]: I1203 17:59:36.069940       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Dec  3 12:59:36 np0005544501 kepler[177429]: E1203 17:59:36.070164       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
Dec  3 12:59:36 np0005544501 kepler[177429]: I1203 17:59:36.077657       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Dec  3 12:59:36 np0005544501 kepler[177429]: I1203 17:59:36.077822       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Dec  3 12:59:36 np0005544501 kepler[177429]: I1203 17:59:36.084612       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Dec  3 12:59:36 np0005544501 kepler[177429]: I1203 17:59:36.084889       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Dec  3 12:59:36 np0005544501 kepler[177429]: I1203 17:59:36.096862       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  3 12:59:36 np0005544501 kepler[177429]: I1203 17:59:36.097246       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Dec  3 12:59:36 np0005544501 kepler[177429]: I1203 17:59:36.097640       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Dec  3 12:59:36 np0005544501 kepler[177429]: I1203 17:59:36.112394       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  3 12:59:36 np0005544501 kepler[177429]: I1203 17:59:36.112847       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  3 12:59:36 np0005544501 kepler[177429]: I1203 17:59:36.113143       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  3 12:59:36 np0005544501 kepler[177429]: I1203 17:59:36.113429       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  3 12:59:36 np0005544501 kepler[177429]: I1203 17:59:36.113775       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Dec  3 12:59:36 np0005544501 kepler[177429]: I1203 17:59:36.114078       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Dec  3 12:59:36 np0005544501 kepler[177429]: I1203 17:59:36.114545       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Dec  3 12:59:36 np0005544501 kepler[177429]: I1203 17:59:36.114907       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Dec  3 12:59:36 np0005544501 kepler[177429]: I1203 17:59:36.115327       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Dec  3 12:59:36 np0005544501 kepler[177429]: I1203 17:59:36.115893       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Dec  3 12:59:36 np0005544501 kepler[177429]: I1203 17:59:36.116373       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Dec  3 12:59:36 np0005544501 kepler[177429]: I1203 17:59:36.117264       1 exporter.go:208] Started Kepler in 643.658148ms
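Annotation: at this point the exporter is serving Prometheus metrics on 0.0.0.0:8888 (see the 'starting to listen' line above). A quick smoke test from the host, assuming metric names of the kepler_node_* family used by this release (names may vary between versions):

    curl -s http://localhost:8888/metrics | grep '^kepler_node_' | head

Note from the earlier startup lines that RAPL and ACPI power readings are unavailable in this VM, so the reported values come from the estimate (Ratio / Regressor) power models rather than hardware counters.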
Dec  3 12:59:36 np0005544501 python3.9[177629]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_ipmi.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  3 12:59:36 np0005544501 systemd[1]: Stopping ceilometer_agent_ipmi container...
Dec  3 12:59:36 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:36.593 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Dec  3 12:59:36 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:36.695 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:304
Dec  3 12:59:36 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:36.695 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:308
Dec  3 12:59:36 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:36.696 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [12]
Dec  3 12:59:36 np0005544501 ceilometer_agent_ipmi[175948]: 2025-12-03 17:59:36.704 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:320
Dec  3 12:59:36 np0005544501 systemd[1]: libpod-6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf.scope: Deactivated successfully.
Dec  3 12:59:36 np0005544501 systemd[1]: libpod-6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf.scope: Consumed 2.260s CPU time.
Dec  3 12:59:36 np0005544501 podman[177633]: 2025-12-03 17:59:36.880688804 +0000 UTC m=+0.371414330 container died 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  3 12:59:36 np0005544501 systemd[1]: 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf-c39be327e6c03e9.timer: Deactivated successfully.
Dec  3 12:59:36 np0005544501 systemd[1]: Stopped /usr/bin/podman healthcheck run 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf.
Dec  3 12:59:36 np0005544501 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf-userdata-shm.mount: Deactivated successfully.
Dec  3 12:59:36 np0005544501 systemd[1]: var-lib-containers-storage-overlay-62d4c4f53be761e65a94fd9237c32e64746c6b41a7e206dc6cdd23d6255bf02f-merged.mount: Deactivated successfully.
Dec  3 12:59:38 np0005544501 podman[177633]: 2025-12-03 17:59:38.031150666 +0000 UTC m=+1.521876152 container cleanup 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team)
Dec  3 12:59:38 np0005544501 podman[177633]: ceilometer_agent_ipmi
Dec  3 12:59:38 np0005544501 podman[177661]: ceilometer_agent_ipmi
Dec  3 12:59:38 np0005544501 systemd[1]: edpm_ceilometer_agent_ipmi.service: Deactivated successfully.
Dec  3 12:59:38 np0005544501 systemd[1]: Stopped ceilometer_agent_ipmi container.
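Annotation: the stop half of this restart shows cotyledon's graceful shutdown working as intended: systemd asks podman to stop the container, podman delivers SIGTERM, the master process fans it out to the AgentManager child, and the unit deactivates cleanly. The same sequence can be driven by hand; -t sets how long podman waits before escalating to SIGKILL (the service's graceful_shutdown_timeout is 60s, so a matching stop timeout is a reasonable choice, though not something this log confirms):

    podman stop -t 60 ceilometer_agent_ipmi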
Dec  3 12:59:38 np0005544501 systemd[1]: Starting ceilometer_agent_ipmi container...
Dec  3 12:59:38 np0005544501 systemd[1]: Started libcrun container.
Dec  3 12:59:38 np0005544501 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62d4c4f53be761e65a94fd9237c32e64746c6b41a7e206dc6cdd23d6255bf02f/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  3 12:59:38 np0005544501 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62d4c4f53be761e65a94fd9237c32e64746c6b41a7e206dc6cdd23d6255bf02f/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Dec  3 12:59:38 np0005544501 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62d4c4f53be761e65a94fd9237c32e64746c6b41a7e206dc6cdd23d6255bf02f/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Dec  3 12:59:38 np0005544501 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62d4c4f53be761e65a94fd9237c32e64746c6b41a7e206dc6cdd23d6255bf02f/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Dec  3 12:59:38 np0005544501 systemd[1]: Started /usr/bin/podman healthcheck run 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf.
Dec  3 12:59:38 np0005544501 podman[177672]: 2025-12-03 17:59:38.407098457 +0000 UTC m=+0.213929647 container init 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi)
Dec  3 12:59:38 np0005544501 ceilometer_agent_ipmi[177687]: + sudo -E kolla_set_configs
Dec  3 12:59:38 np0005544501 podman[177672]: 2025-12-03 17:59:38.466763822 +0000 UTC m=+0.273594962 container start 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0)
Dec  3 12:59:38 np0005544501 podman[177672]: ceilometer_agent_ipmi
Dec  3 12:59:38 np0005544501 systemd[1]: Started ceilometer_agent_ipmi container.
Dec  3 12:59:38 np0005544501 podman[177694]: 2025-12-03 17:59:38.546648897 +0000 UTC m=+0.070437422 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec  3 12:59:38 np0005544501 systemd[1]: 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf-8e5933e62cdbcfd.service: Main process exited, code=exited, status=1/FAILURE
Dec  3 12:59:38 np0005544501 systemd[1]: 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf-8e5933e62cdbcfd.service: Failed with result 'exit-code'.
Dec  3 12:59:38 np0005544501 ceilometer_agent_ipmi[177687]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  3 12:59:38 np0005544501 ceilometer_agent_ipmi[177687]: INFO:__main__:Validating config file
Dec  3 12:59:38 np0005544501 ceilometer_agent_ipmi[177687]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  3 12:59:38 np0005544501 ceilometer_agent_ipmi[177687]: INFO:__main__:Copying service configuration files
Dec  3 12:59:38 np0005544501 ceilometer_agent_ipmi[177687]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Dec  3 12:59:38 np0005544501 ceilometer_agent_ipmi[177687]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Dec  3 12:59:38 np0005544501 ceilometer_agent_ipmi[177687]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Dec  3 12:59:38 np0005544501 ceilometer_agent_ipmi[177687]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Dec  3 12:59:38 np0005544501 ceilometer_agent_ipmi[177687]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Dec  3 12:59:38 np0005544501 ceilometer_agent_ipmi[177687]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Dec  3 12:59:38 np0005544501 ceilometer_agent_ipmi[177687]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  3 12:59:38 np0005544501 ceilometer_agent_ipmi[177687]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  3 12:59:38 np0005544501 ceilometer_agent_ipmi[177687]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  3 12:59:38 np0005544501 ceilometer_agent_ipmi[177687]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  3 12:59:38 np0005544501 ceilometer_agent_ipmi[177687]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  3 12:59:38 np0005544501 ceilometer_agent_ipmi[177687]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  3 12:59:38 np0005544501 ceilometer_agent_ipmi[177687]: INFO:__main__:Writing out command to execute
Dec  3 12:59:38 np0005544501 ceilometer_agent_ipmi[177687]: ++ cat /run_command
Dec  3 12:59:38 np0005544501 ceilometer_agent_ipmi[177687]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Dec  3 12:59:38 np0005544501 ceilometer_agent_ipmi[177687]: + ARGS=
Dec  3 12:59:38 np0005544501 ceilometer_agent_ipmi[177687]: + sudo kolla_copy_cacerts
Dec  3 12:59:38 np0005544501 ceilometer_agent_ipmi[177687]: + [[ ! -n '' ]]
Dec  3 12:59:38 np0005544501 ceilometer_agent_ipmi[177687]: + . kolla_extend_start
Dec  3 12:59:38 np0005544501 ceilometer_agent_ipmi[177687]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Dec  3 12:59:38 np0005544501 ceilometer_agent_ipmi[177687]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Dec  3 12:59:38 np0005544501 ceilometer_agent_ipmi[177687]: + umask 0022
Dec  3 12:59:38 np0005544501 ceilometer_agent_ipmi[177687]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
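Annotation: the kolla_set_configs output above is driven by the config.json mounted at /var/lib/kolla/config_files/config.json. Reconstructing its shape from the copy actions and the command that was executed (a sketch; the owner/perm values are assumptions, and only two of the four config_files entries are shown):

    {
        "command": "/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout",
        "config_files": [
            {
                "source": "/var/lib/openstack/config/ceilometer.conf",
                "dest": "/etc/ceilometer/ceilometer.conf",
                "owner": "ceilometer",
                "perm": "0600"
            },
            {
                "source": "/var/lib/openstack/config/polling.yaml",
                "dest": "/etc/ceilometer/polling.yaml",
                "owner": "ceilometer",
                "perm": "0600"
            }
        ]
    }

With KOLLA_CONFIG_STRATEGY=COPY_ALWAYS these files are deleted and re-copied on every container start, which is exactly the Deleting / Copying / Setting permission pattern in the log.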
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.497 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.498 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.498 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.498 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.498 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.498 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.498 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.498 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.500 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.501 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.501 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.501 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.501 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.501 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.501 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.502 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.502 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.502 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.502 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.502 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.502 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.502 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.503 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.503 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.503 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.503 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.503 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.503 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.503 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.503 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.503 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.503 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.503 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.504 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.504 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.504 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.504 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.504 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.504 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.504 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.504 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.504 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.504 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.504 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.505 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.505 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.505 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.505 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.505 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.505 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.505 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.506 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.506 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.506 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.506 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.506 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.506 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.506 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.506 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.506 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.506 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.507 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.507 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.507 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.507 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.507 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.507 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.507 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.507 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.507 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.507 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.507 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.508 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.508 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.508 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.508 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.508 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.508 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.508 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.508 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.508 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.508 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.508 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.509 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.509 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.509 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.509 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.509 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.509 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.509 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.509 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.509 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.509 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.510 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.510 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.510 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.510 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.510 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.510 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.510 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.510 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.510 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.511 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.511 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.511 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.511 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.511 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.511 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.511 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.511 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.512 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.512 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.512 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.512 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.512 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.512 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.512 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.512 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.513 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.513 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.513 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.513 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.513 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.513 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.513 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.513 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.514 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.514 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.514 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.514 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.514 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.514 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.514 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.514 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.514 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.515 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.515 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.515 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.515 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.515 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.515 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.515 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.515 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.515 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.515 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.516 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.516 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.516 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.516 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.516 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.516 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.516 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.516 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.516 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.517 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.517 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.517 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.517 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.517 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.517 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.517 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.517 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.517 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.518 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.518 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
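[Editor's note] The row of asterisks above closes cotyledon's "Full set of CONF" dump: oslo.config walks every registered option and logs "name = value", masking options registered as secret (e.g. coordination.backend_url, publisher.telemetry_secret) as ****. A minimal sketch of that mechanism, assuming illustrative option names rather than ceilometer's real option set:

```python
# Hedged sketch (not the ceilometer source): how oslo.config produces the
# "name = value" dump seen above. Option names here are illustrative.
import logging

from oslo_config import cfg

opts = [
    cfg.IntOpt('batch_size', default=50),
    # secret=True is why values such as transport_url are logged as "****"
    cfg.StrOpt('transport_url', secret=True, default='rabbit://guest@host'),
]

CONF = cfg.ConfigOpts()
CONF.register_opts(opts)
# The real agent passes its CLI args and /etc/ceilometer/ceilometer.conf here.
CONF([], project='ceilometer')

logging.basicConfig(level=logging.DEBUG)
# Walks every registered option and logs it at the given level, masking
# secret options -- the source of the lines tagged log_opt_values above.
CONF.log_opt_values(logging.getLogger(__name__), logging.DEBUG)
```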
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.539 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.541 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.543 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
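[Editor's note] The three INFO lines above show the polling manager scanning the configured pollsters_definitions_dirs for YAML-defined dynamic pollsters and finding none. A hedged sketch of that directory scan; the real logic lives in ceilometer.polling.manager and may differ in detail:

```python
# Hedged sketch of the scan behind "Looking for dynamic pollsters
# configurations at [['/etc/ceilometer/pollsters.d']]".
import glob
import os

def find_dynamic_pollster_files(dirs=('/etc/ceilometer/pollsters.d',)):
    files = []
    for d in dirs:
        if not os.path.isdir(d):
            continue
        # Dynamic pollsters are declared in YAML files dropped into the dir.
        files.extend(sorted(glob.glob(os.path.join(d, '*.yaml'))))
    return files

if not find_dynamic_pollster_files():
    # Mirrors the "No dynamic pollsters file found in dirs" INFO line.
    print("No dynamic pollsters file found in dirs "
          "[['/etc/ceilometer/pollsters.d']].")
```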
Dec  3 12:59:39 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:39.569 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpg797khzr/privsep.sock']
Dec  3 12:59:39 np0005544501 python3.9[177869]: ansible-ansible.builtin.systemd Invoked with name=edpm_kepler.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  3 12:59:39 np0005544501 systemd[1]: Stopping kepler container...
Dec  3 12:59:40 np0005544501 kepler[177429]: I1203 17:59:40.015067       1 exporter.go:218] Received shutdown signal
Dec  3 12:59:40 np0005544501 kepler[177429]: I1203 17:59:40.016388       1 exporter.go:226] Exiting...
Dec  3 12:59:40 np0005544501 systemd[1]: libpod-ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c.scope: Deactivated successfully.
Dec  3 12:59:40 np0005544501 podman[177881]: 2025-12-03 17:59:40.228127594 +0000 UTC m=+0.343916441 container died ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.expose-services=, container_name=kepler, distribution-scope=public, io.buildah.version=1.29.0, release=1214.1726694543, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, version=9.4, config_id=edpm, vcs-type=git, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9)
Dec  3 12:59:40 np0005544501 systemd[1]: ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c-e561c4b239a4eab.timer: Deactivated successfully.
Dec  3 12:59:40 np0005544501 systemd[1]: Stopped /usr/bin/podman healthcheck run ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c.
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.256 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.257 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpg797khzr/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.149 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.158 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.163 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.164 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
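[Editor's note] The lines above trace an oslo.privsep startup: the unprivileged agent runs privsep-helper through sudo/rootwrap, the helper connects back over the temporary unix socket, and the daemon reports uid/gid 0/0 with a reduced capability set. A hedged sketch of declaring a context like the ceilometer.privsep.sys_admin_pctxt named in the helper command line (the cfg_section and the entrypoint function are illustrative; the pset mirrors the capabilities logged above):

```python
# Hedged sketch of an oslo.privsep privileged context and entrypoint.
from oslo_privsep import capabilities
from oslo_privsep import priv_context

sys_admin_pctxt = priv_context.PrivContext(
    'ceilometer',
    cfg_section='sys_admin',          # section name is an assumption
    pset=[capabilities.CAP_CHOWN,
          capabilities.CAP_DAC_OVERRIDE,
          capabilities.CAP_DAC_READ_SEARCH,
          capabilities.CAP_FOWNER,
          capabilities.CAP_NET_ADMIN,
          capabilities.CAP_SYS_ADMIN],
)

@sys_admin_pctxt.entrypoint
def read_protected_file(path):
    # Body runs inside the root privsep daemon; the calling agent process
    # stays unprivileged and only sees the return value.
    with open(path) as f:
        return f.read()
```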
Dec  3 12:59:40 np0005544501 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c-userdata-shm.mount: Deactivated successfully.
Dec  3 12:59:40 np0005544501 systemd[1]: var-lib-containers-storage-overlay-fcde7e4846df88624412a1f422e2500aec3dcdb7dd5efeca416c00a80d1027a3-merged.mount: Deactivated successfully.
Dec  3 12:59:40 np0005544501 podman[177881]: 2025-12-03 17:59:40.27567698 +0000 UTC m=+0.391465787 container cleanup ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, vcs-type=git, container_name=kepler, distribution-scope=public, release=1214.1726694543, maintainer=Red Hat, Inc.)
Dec  3 12:59:40 np0005544501 podman[177881]: kepler
Dec  3 12:59:40 np0005544501 podman[177911]: kepler
Dec  3 12:59:40 np0005544501 systemd[1]: edpm_kepler.service: Deactivated successfully.
Dec  3 12:59:40 np0005544501 systemd[1]: Stopped kepler container.
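[Editor's note] The sequence from "Stopping kepler container..." to here is the effect of the ansible-ansible.builtin.systemd task above (state=restarted on edpm_kepler.service): systemd stops the podman-managed container, the healthcheck timer and overlay mounts are torn down, and the unit is started again below. A hedged one-liner equivalent of what that task asks systemd to do, outside of Ansible:

```python
# Hedged sketch of the restart the ansible systemd task performs; the real
# deployment drives this through Ansible, not a standalone script.
import subprocess

def restart_unit(name='edpm_kepler.service'):
    # systemd stops the podman 'kepler' container (healthcheck timer
    # included) and then starts a fresh one, as the journal shows.
    subprocess.run(['systemctl', 'restart', name], check=True)

restart_unit()
```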
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.381 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.382 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.383 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.383 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.383 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.383 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.383 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.383 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.383 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.383 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.384 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.384 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.384 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
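[Editor's note] Each "Skip loading extension" DEBUG line above is a load-failure callback swallowing one broken pollster (here, because IPMITool is unavailable on this virtual host) instead of crashing the agent; once every pollster in the namespace has failed, the WARNING is emitted. A hedged sketch of that fault-tolerant loading with stevedore, which ceilometer builds on (the namespace string and the callback name mirror the log but are assumptions about the exact code):

```python
# Hedged sketch of what _catch_extension_load_error in
# ceilometer/polling/manager.py does around extension loading.
from stevedore import extension

def _catch_extension_load_error(manager, entrypoint, exc):
    # Mirrors the "Skip loading extension for <name>: <error>" DEBUG lines.
    print('Skip loading extension for %s: %s' % (entrypoint.name, exc))

mgr = extension.ExtensionManager(
    namespace='ceilometer.poll.ipmi',   # namespace name is an assumption
    invoke_on_load=True,                # instantiation is what fails above
    on_load_failure_callback=_catch_extension_load_error,
)

if not mgr.extensions:
    # Mirrors the WARNING once nothing in the namespace loaded.
    print("No valid pollsters can be loaded from ['ipmi'] namespaces")
```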
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.386 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.386 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.387 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.387 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.387 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.387 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.387 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.387 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.387 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.387 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.387 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.387 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.387 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.388 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.388 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:40 np0005544501 systemd[1]: Starting kepler container...
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.388 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.388 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.388 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.388 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.388 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.388 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.389 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.389 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.389 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.389 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.389 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.389 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.389 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.389 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.389 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.389 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.389 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.389 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.390 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.390 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.390 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.390 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.390 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.390 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.390 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.390 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.390 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.390 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.390 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.390 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.391 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.391 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.391 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.391 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.391 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.391 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.391 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.391 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.391 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.391 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.391 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.392 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.392 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.392 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.392 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.392 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.392 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.392 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.392 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.392 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.392 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.392 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.393 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.393 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.393 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.393 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.393 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.393 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.393 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.393 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.393 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.393 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.393 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.394 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.394 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.394 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.394 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.394 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.394 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.394 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.394 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.394 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.394 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.394 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.394 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.394 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.395 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.395 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.395 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.395 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.395 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.395 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.395 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.395 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.395 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.395 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.396 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.396 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.396 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.396 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.396 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.396 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.396 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.396 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.396 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.396 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.397 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.397 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.397 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.397 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.397 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.397 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.397 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.397 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.397 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.397 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.397 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.398 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.398 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.398 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.398 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.398 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.398 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.398 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.398 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.398 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.398 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.399 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.399 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.399 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.399 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.399 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.399 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.399 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.399 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.399 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.399 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.399 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.399 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.400 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.400 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.400 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.400 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.400 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.400 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.400 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.400 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.400 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.400 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.400 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.401 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.401 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.401 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.401 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.401 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.401 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.401 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.401 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.401 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.401 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.401 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.402 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.402 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.402 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.402 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.402 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.402 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.402 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.402 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.402 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.402 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.402 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.402 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.402 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.403 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.403 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.403 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.403 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.403 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.403 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.403 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.403 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.403 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.403 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.403 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.404 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.404 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.404 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.404 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.404 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.404 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.404 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.404 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.404 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.404 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.405 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.405 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
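[annotation] The block above, terminated by the row of asterisks, is oslo.config's standard startup dump: log_opt_values() walks every registered option and logs one DEBUG line per value, masking any option registered with secret=True as **** (as seen for coordination.backend_url and publisher.telemetry_secret). A minimal sketch of that mechanism, using only the public oslo.config API; the option names below are illustrative, not taken from ceilometer:

    # Sketch of the oslo.config option dump seen above. Only
    # log_opt_values() and secret=True masking come from the real API;
    # the two options are made up for illustration.
    import logging
    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    opts = [
        cfg.BoolOpt('use_syslog', default=False),
        cfg.StrOpt('telemetry_secret', secret=True),  # rendered as '****'
    ]
    CONF = cfg.ConfigOpts()
    CONF.register_opts(opts)
    CONF(args=[])

    # Emits one DEBUG line per option, like the cfg.py:2602/2609 lines above,
    # followed by a closing row of asterisks (cfg.py:2613).
    CONF.log_opt_values(LOG, logging.DEBUG)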
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.405 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Dec  3 12:59:40 np0005544501 ceilometer_agent_ipmi[177687]: 2025-12-03 17:59:40.407 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
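[annotation] The dict on the load_config line is the parsed polling configuration: one source named "pollsters" polling all hardware.* meters every 120 seconds. A hypothetical reconstruction of the polling.yaml that would parse to exactly that dict, assuming only that the file is plain YAML as ceilometer documents:

    # Hypothetical polling.yaml matching the dict ceilometer.agent logged;
    # the round-trip assertion mirrors the logged structure. Requires PyYAML.
    import yaml

    POLLING_YAML = """
    sources:
      - name: pollsters
        interval: 120
        meters:
          - "hardware.*"
    """

    config = yaml.safe_load(POLLING_YAML)
    assert config == {
        'sources': [
            {'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}
        ]
    }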
Dec  3 12:59:40 np0005544501 systemd[1]: Started libcrun container.
Dec  3 12:59:40 np0005544501 systemd[1]: Started /usr/bin/podman healthcheck run ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c.
Dec  3 12:59:40 np0005544501 podman[177925]: 2025-12-03 17:59:40.544527814 +0000 UTC m=+0.146128652 container init ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, release=1214.1726694543, io.openshift.tags=base rhel9, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., name=ubi9, config_id=edpm, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0)
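[annotation] The config_data embedded in the container-init event above describes how edpm_ansible launched the kepler container. A hand-built approximation of the implied podman invocation, trimmed to a few representative mounts and environment variables; the real command is assembled by the role, so treat this as a sketch using standard podman flags only:

    # Approximate 'podman run' implied by the config_data above
    # (privileged, host networking, port 8888, healthcheck command).
    # Volume and env lists are abbreviated from the logged config_data.
    import subprocess

    cmd = [
        'podman', 'run', '--name', 'kepler',
        '--privileged', '--restart', 'always', '--net', 'host',
        '-p', '8888:8888',
        '-e', 'ENABLE_GPU=true',
        '-e', 'EXPOSE_CONTAINER_METRICS=true',
        '-v', '/lib/modules:/lib/modules:ro',
        '-v', '/sys:/sys',
        '--health-cmd', '/openstack/healthcheck kepler',
        'quay.io/sustainable_computing_io/kepler:release-0.7.12',
        '-v=2',  # kepler's own verbosity flag, per 'command' above
    ]
    subprocess.run(cmd, check=True)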
Dec  3 12:59:40 np0005544501 kepler[177941]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Dec  3 12:59:40 np0005544501 kepler[177941]: I1203 17:59:40.571711       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Dec  3 12:59:40 np0005544501 kepler[177941]: I1203 17:59:40.571840       1 config.go:293] using gCgroup ID in the BPF program: true
Dec  3 12:59:40 np0005544501 kepler[177941]: I1203 17:59:40.571862       1 config.go:295] kernel version: 5.14
Dec  3 12:59:40 np0005544501 kepler[177941]: I1203 17:59:40.572359       1 power.go:78] Unable to obtain power, use estimate method
Dec  3 12:59:40 np0005544501 kepler[177941]: I1203 17:59:40.572378       1 redfish.go:169] failed to get redfish credential file path
Dec  3 12:59:40 np0005544501 kepler[177941]: I1203 17:59:40.572841       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Dec  3 12:59:40 np0005544501 kepler[177941]: I1203 17:59:40.572857       1 power.go:79] using none to obtain power
Dec  3 12:59:40 np0005544501 kepler[177941]: E1203 17:59:40.572872       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Dec  3 12:59:40 np0005544501 kepler[177941]: E1203 17:59:40.572893       1 exporter.go:154] failed to init GPU accelerators: no devices found
Dec  3 12:59:40 np0005544501 podman[177925]: 2025-12-03 17:59:40.572956067 +0000 UTC m=+0.174556885 container start ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, io.openshift.expose-services=, architecture=x86_64, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, com.redhat.component=ubi9-container, name=ubi9, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec  3 12:59:40 np0005544501 kepler[177941]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Dec  3 12:59:40 np0005544501 kepler[177941]: I1203 17:59:40.574868       1 exporter.go:84] Number of CPUs: 8
Dec  3 12:59:40 np0005544501 podman[177925]: kepler
Dec  3 12:59:40 np0005544501 systemd[1]: Started kepler container.
Dec  3 12:59:40 np0005544501 podman[177951]: 2025-12-03 17:59:40.691706792 +0000 UTC m=+0.106557295 container health_status ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, io.openshift.tags=base rhel9, name=ubi9, com.redhat.component=ubi9-container, version=9.4, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, managed_by=edpm_ansible, release-0.7.12=, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., release=1214.1726694543, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, architecture=x86_64, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm)
Dec  3 12:59:40 np0005544501 systemd[1]: ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c-1deb026cc4bef939.service: Main process exited, code=exited, status=1/FAILURE
Dec  3 12:59:40 np0005544501 systemd[1]: ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c-1deb026cc4bef939.service: Failed with result 'exit-code'.
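[annotation] The two systemd lines above show the transient healthcheck unit for the kepler container exiting non-zero on its first run, while the health_status event just above still reports health_status=starting. A small sketch for re-running the check by hand, assuming only the 'podman healthcheck run' subcommand already visible earlier in this unit's logs:

    # Re-run the container healthcheck by hand; exit code 0 means healthy.
    # 'kepler' is the container name taken from the log above.
    import subprocess

    result = subprocess.run(
        ['podman', 'healthcheck', 'run', 'kepler'],
        capture_output=True, text=True,
    )
    status = 'healthy' if result.returncode == 0 else 'unhealthy'
    print(status, result.stdout.strip() or result.stderr.strip())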
Dec  3 12:59:41 np0005544501 kepler[177941]: I1203 17:59:41.129061       1 watcher.go:83] Using in cluster k8s config
Dec  3 12:59:41 np0005544501 kepler[177941]: I1203 17:59:41.129098       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Dec  3 12:59:41 np0005544501 kepler[177941]: E1203 17:59:41.129167       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
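[annotation] Kepler probes for in-cluster Kubernetes configuration and falls back when the standard service environment variables are absent, as on this standalone EDPM node. Kepler itself is Go; a Python analogue of the same probe with the official kubernetes client, which raises ConfigException under the same unset-env-var condition, might look like:

    # Python analogue of kepler's in-cluster config probe (kepler is Go;
    # this uses the kubernetes Python client to show the equivalent check).
    from kubernetes import config
    from kubernetes.config import ConfigException

    try:
        config.load_incluster_config()
        print('running inside a cluster; watcher can be enabled')
    except ConfigException as exc:
        # Outside k8s the in-cluster service env vars are unset, the same
        # condition kepler reports above before disabling its watcher.
        print(f'not in cluster, watcher disabled: {exc}')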
Dec  3 12:59:41 np0005544501 kepler[177941]: I1203 17:59:41.133864       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Dec  3 12:59:41 np0005544501 kepler[177941]: I1203 17:59:41.133896       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Dec  3 12:59:41 np0005544501 kepler[177941]: I1203 17:59:41.139563       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Dec  3 12:59:41 np0005544501 kepler[177941]: I1203 17:59:41.139593       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Dec  3 12:59:41 np0005544501 kepler[177941]: I1203 17:59:41.148722       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  3 12:59:41 np0005544501 kepler[177941]: I1203 17:59:41.148761       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Dec  3 12:59:41 np0005544501 kepler[177941]: I1203 17:59:41.148775       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Dec  3 12:59:41 np0005544501 kepler[177941]: I1203 17:59:41.160744       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  3 12:59:41 np0005544501 kepler[177941]: I1203 17:59:41.160789       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  3 12:59:41 np0005544501 kepler[177941]: I1203 17:59:41.160795       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  3 12:59:41 np0005544501 kepler[177941]: I1203 17:59:41.160800       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  3 12:59:41 np0005544501 kepler[177941]: I1203 17:59:41.160808       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Dec  3 12:59:41 np0005544501 kepler[177941]: I1203 17:59:41.160823       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Dec  3 12:59:41 np0005544501 kepler[177941]: I1203 17:59:41.160910       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Dec  3 12:59:41 np0005544501 kepler[177941]: I1203 17:59:41.160937       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Dec  3 12:59:41 np0005544501 kepler[177941]: I1203 17:59:41.161356       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Dec  3 12:59:41 np0005544501 kepler[177941]: I1203 17:59:41.162600       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Dec  3 12:59:41 np0005544501 kepler[177941]: I1203 17:59:41.163017       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Dec  3 12:59:41 np0005544501 kepler[177941]: I1203 17:59:41.163286       1 exporter.go:208] Started Kepler in 591.800576ms
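[annotation] With the process, container, VM, and node collectors registered above and the exporter listening on 0.0.0.0:8888, the registry can be scraped over HTTP. A quick smoke test, assuming the conventional /metrics path for Prometheus exporters (the path itself is not stated in the log):

    # Scrape the kepler exporter started above; /metrics is the
    # conventional Prometheus exporter path, assumed here.
    import requests

    resp = requests.get('http://localhost:8888/metrics', timeout=5)
    resp.raise_for_status()
    kepler_lines = [l for l in resp.text.splitlines() if l.startswith('kepler_')]
    print(f'{len(kepler_lines)} kepler metric samples')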
Dec  3 12:59:41 np0005544501 podman[178099]: 2025-12-03 17:59:41.23828626 +0000 UTC m=+0.122082388 container health_status 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, maintainer=Red Hat, Inc., release=1755695350, container_name=openstack_network_exporter, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, config_id=edpm, architecture=x86_64, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container)
Dec  3 12:59:41 np0005544501 python3.9[178155]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  3 12:59:42 np0005544501 python3.9[178309]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Dec  3 12:59:44 np0005544501 python3.9[178474]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  3 12:59:44 np0005544501 systemd[1]: Started libpod-conmon-9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753.scope.
Dec  3 12:59:44 np0005544501 podman[178475]: 2025-12-03 17:59:44.265376053 +0000 UTC m=+0.143360633 container exec 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec  3 12:59:44 np0005544501 podman[178475]: 2025-12-03 17:59:44.273042883 +0000 UTC m=+0.151027493 container exec_died 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 12:59:44 np0005544501 systemd[1]: libpod-conmon-9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753.scope: Deactivated successfully.
Dec  3 12:59:44 np0005544501 podman[178491]: 2025-12-03 17:59:44.413637398 +0000 UTC m=+0.158740664 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  3 12:59:45 np0005544501 python3.9[178681]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  3 12:59:45 np0005544501 systemd[1]: Started libpod-conmon-9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753.scope.
Dec  3 12:59:45 np0005544501 podman[178682]: 2025-12-03 17:59:45.329423171 +0000 UTC m=+0.109886136 container exec 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  3 12:59:45 np0005544501 podman[178682]: 2025-12-03 17:59:45.362272713 +0000 UTC m=+0.142735628 container exec_died 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 12:59:45 np0005544501 systemd[1]: libpod-conmon-9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753.scope: Deactivated successfully.
Dec  3 12:59:45 np0005544501 podman[178789]: 2025-12-03 17:59:45.945968659 +0000 UTC m=+0.105191410 container health_status f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 12:59:46 np0005544501 python3.9[178885]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
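The block above is edpm_ansible reconciling the ovn_controller healthcheck directory: podman_container_info fetches container facts, two podman_container_exec calls run `id -u` and `id -g` inside the container, and ansible.builtin.file then applies the discovered IDs (0/0 here, since ovn_controller runs as root) to /var/lib/openstack/healthchecks/ovn_controller with mode 0700. A minimal Python sketch of the discovery step, assuming a host with podman and the named container running (the helper name is illustrative, not the module's code):

    import subprocess

    def container_id(name: str, flag: str) -> int:
        # Same effect as the podman_container_exec invocations above:
        # run `id -u` / `id -g` inside the container (detach=False, no tty).
        out = subprocess.run(
            ["podman", "exec", name, "id", flag],
            check=True, capture_output=True, text=True,
        )
        return int(out.stdout.strip())

    uid = container_id("ovn_controller", "-u")  # 0: this container runs as root
    gid = container_id("ovn_controller", "-g")  # 0
    print(uid, gid)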
Dec  3 12:59:47 np0005544501 python3.9[179037]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Dec  3 12:59:48 np0005544501 python3.9[179201]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  3 12:59:48 np0005544501 systemd[1]: Started libpod-conmon-ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad.scope.
Dec  3 12:59:48 np0005544501 podman[179202]: 2025-12-03 17:59:48.656344365 +0000 UTC m=+0.106949915 container exec ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  3 12:59:48 np0005544501 podman[179202]: 2025-12-03 17:59:48.691107413 +0000 UTC m=+0.141712883 container exec_died ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Dec  3 12:59:48 np0005544501 systemd[1]: libpod-conmon-ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad.scope: Deactivated successfully.
Dec  3 12:59:49 np0005544501 python3.9[179381]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  3 12:59:49 np0005544501 systemd[1]: Started libpod-conmon-ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad.scope.
Dec  3 12:59:49 np0005544501 podman[179383]: 2025-12-03 17:59:49.82187423 +0000 UTC m=+0.133828429 container exec ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Dec  3 12:59:49 np0005544501 podman[179383]: 2025-12-03 17:59:49.855003749 +0000 UTC m=+0.166957938 container exec_died ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec  3 12:59:49 np0005544501 systemd[1]: libpod-conmon-ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad.scope: Deactivated successfully.
Dec  3 12:59:50 np0005544501 python3.9[179566]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
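For ceilometer_agent_compute the same exec steps return 42405 (the container's ceilometer user), which the file task above then applies recursively with mode 0700. A rough standalone equivalent of ansible.builtin.file with state=directory and recurse=True, assuming it runs as root like the playbook (an illustration, not the module's implementation):

    import os

    def own_tree(path: str, uid: int, gid: int, mode: int = 0o700) -> None:
        # Rough equivalent of the task above: state=directory, recurse=True,
        # owner=42405, group=42405, mode=0700. Must run as root.
        os.makedirs(path, exist_ok=True)
        for root, dirs, files in os.walk(path):
            os.chown(root, uid, gid)
            os.chmod(root, mode)
            for f in files:
                fp = os.path.join(root, f)
                os.chown(fp, uid, gid)
                os.chmod(fp, mode)

    own_tree("/var/lib/openstack/healthchecks/ceilometer_agent_compute", 42405, 42405)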
Dec  3 12:59:51 np0005544501 python3.9[179718]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Dec  3 12:59:52 np0005544501 python3.9[179881]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  3 12:59:52 np0005544501 systemd[1]: Started libpod-conmon-f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58.scope.
Dec  3 12:59:52 np0005544501 podman[179882]: 2025-12-03 17:59:52.988864241 +0000 UTC m=+0.113479685 container exec f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  3 12:59:53 np0005544501 podman[179882]: 2025-12-03 17:59:53.021793554 +0000 UTC m=+0.146408908 container exec_died f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  3 12:59:53 np0005544501 systemd[1]: libpod-conmon-f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58.scope: Deactivated successfully.
Dec  3 12:59:53 np0005544501 python3.9[180063]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  3 12:59:54 np0005544501 systemd[1]: Started libpod-conmon-f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58.scope.
Dec  3 12:59:54 np0005544501 podman[180064]: 2025-12-03 17:59:54.045953157 +0000 UTC m=+0.104792561 container exec f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  3 12:59:54 np0005544501 podman[180064]: 2025-12-03 17:59:54.077785544 +0000 UTC m=+0.136624898 container exec_died f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  3 12:59:54 np0005544501 systemd[1]: libpod-conmon-f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58.scope: Deactivated successfully.
Dec  3 12:59:55 np0005544501 python3.9[180248]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
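The node_exporter config_data repeated above enables the systemd collector but restricts it with --collector.systemd.unit-include, so only EDPM-relevant units are exported. node_exporter compiles its include/exclude flags anchored as ^(?:pattern)$, meaning only full unit names match. A quick check of the pattern from the log (the unit names are sample inputs):

    import re

    # unit-include pattern copied from the node_exporter config_data above;
    # node_exporter anchors these flags as ^(?:pattern)$.
    UNIT_INCLUDE = re.compile(r"^(?:(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service)$")

    for unit in ("ovs-vswitchd.service", "virtqemud.service",
                 "rsyslog.service", "sshd.service"):
        print(unit, "included" if UNIT_INCLUDE.match(unit) else "excluded")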
Dec  3 12:59:55 np0005544501 python3.9[180400]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Dec  3 12:59:56 np0005544501 python3.9[180565]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  3 12:59:57 np0005544501 systemd[1]: Started libpod-conmon-6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658.scope.
Dec  3 12:59:57 np0005544501 podman[180566]: 2025-12-03 17:59:57.040099015 +0000 UTC m=+0.104286468 container exec 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  3 12:59:57 np0005544501 podman[180566]: 2025-12-03 17:59:57.073321807 +0000 UTC m=+0.137509220 container exec_died 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  3 12:59:57 np0005544501 systemd[1]: libpod-conmon-6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658.scope: Deactivated successfully.
Dec  3 12:59:57 np0005544501 python3.9[180745]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  3 12:59:58 np0005544501 systemd[1]: Started libpod-conmon-6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658.scope.
Dec  3 12:59:58 np0005544501 podman[180746]: 2025-12-03 17:59:58.132245367 +0000 UTC m=+0.122512228 container exec 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  3 12:59:58 np0005544501 podman[180746]: 2025-12-03 17:59:58.169525459 +0000 UTC m=+0.159792230 container exec_died 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 12:59:58 np0005544501 systemd[1]: libpod-conmon-6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658.scope: Deactivated successfully.
Dec  3 12:59:58 np0005544501 podman[180875]: 2025-12-03 17:59:58.965239745 +0000 UTC m=+0.121513095 container health_status 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  3 12:59:59 np0005544501 python3.9[180949]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 12:59:59 np0005544501 podman[158200]: time="2025-12-03T17:59:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 12:59:59 np0005544501 podman[158200]: @ - - [03/Dec/2025:17:59:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 18535 "" "Go-http-client/1.1"
Dec  3 12:59:59 np0005544501 podman[158200]: @ - - [03/Dec/2025:17:59:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2987 "" "Go-http-client/1.1"
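The two GET requests above are the libpod REST API served by podman system service on the podman socket; per the podman_exporter config_data, CONTAINER_HOST is unix:///run/podman/podman.sock, and the Go-http-client user agent is the exporter polling it. The same endpoint can be queried from Python's stdlib over the Unix socket, as a sketch (assumes the socket path from the config and read access to it):

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        # Speak HTTP over the podman API socket instead of TCP.
        def __init__(self, socket_path: str):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c["Names"], c["State"])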
Dec  3 13:00:00 np0005544501 python3.9[181101]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Dec  3 13:00:01 np0005544501 python3.9[181264]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  3 13:00:01 np0005544501 systemd[1]: Started libpod-conmon-9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5.scope.
Dec  3 13:00:01 np0005544501 openstack_network_exporter[160319]: ERROR   18:00:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 13:00:01 np0005544501 openstack_network_exporter[160319]: ERROR   18:00:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 13:00:01 np0005544501 openstack_network_exporter[160319]: ERROR   18:00:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 13:00:01 np0005544501 podman[181265]: 2025-12-03 18:00:01.424057053 +0000 UTC m=+0.106223435 container exec 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, version=9.6, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, name=ubi9-minimal)
Dec  3 13:00:01 np0005544501 openstack_network_exporter[160319]: ERROR   18:00:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 13:00:01 np0005544501 openstack_network_exporter[160319]: ERROR   18:00:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 13:00:01 np0005544501 podman[181265]: 2025-12-03 18:00:01.458888074 +0000 UTC m=+0.141054456 container exec_died 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., release=1755695350, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, config_id=edpm, architecture=x86_64, build-date=2025-08-20T13:12:41, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec  3 13:00:01 np0005544501 systemd[1]: libpod-conmon-9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5.scope: Deactivated successfully.
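The openstack_network_exporter errors above are largely expected on a compute node: ovn-northd runs with the control plane, so no ovn-northd control socket exists locally, and the pmd-* calls fail because this host has no userspace (netdev/DPDK) datapath. The ovsdb-server failure suggests its control socket was not present at the path the exporter checks. OVS and OVN daemons conventionally publish <daemon>.<pid>.ctl sockets in their run directories, which map to the host paths mounted in the config_data above ('/var/run/openvswitch:/run/openvswitch', '/var/lib/openvswitch/ovn:/run/ovn'). A small check along those lines (the .ctl naming convention is an assumption about the exporter's lookup, not its exact code):

    import glob

    # Host-side equivalents of the container mounts in the config_data above.
    patterns = {
        "ovn-northd": "/var/lib/openvswitch/ovn/ovn-northd.*.ctl",
        "ovn-controller": "/var/lib/openvswitch/ovn/ovn-controller.*.ctl",
        "ovsdb-server": "/var/run/openvswitch/ovsdb-server.*.ctl",
    }
    for daemon, pattern in patterns.items():
        found = glob.glob(pattern)
        print(daemon, "->", found if found else "no control socket files found")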
Dec  3 13:00:02 np0005544501 python3.9[181447]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  3 13:00:02 np0005544501 systemd[1]: Started libpod-conmon-9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5.scope.
Dec  3 13:00:02 np0005544501 podman[181448]: 2025-12-03 18:00:02.571219676 +0000 UTC m=+0.114135043 container exec 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, version=9.6, vcs-type=git, maintainer=Red Hat, Inc., architecture=x86_64, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, release=1755695350, distribution-scope=public)
Dec  3 13:00:02 np0005544501 podman[181448]: 2025-12-03 18:00:02.604773335 +0000 UTC m=+0.147688692 container exec_died 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, managed_by=edpm_ansible, release=1755695350, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, config_id=edpm, version=9.6, distribution-scope=public, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  3 13:00:02 np0005544501 systemd[1]: libpod-conmon-9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5.scope: Deactivated successfully.
Dec  3 13:00:03 np0005544501 python3.9[181630]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 13:00:04 np0005544501 python3.9[181782]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_ipmi'] executable=podman
Dec  3 13:00:05 np0005544501 python3.9[181947]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  3 13:00:05 np0005544501 systemd[1]: Started libpod-conmon-6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf.scope.
Dec  3 13:00:05 np0005544501 podman[181948]: 2025-12-03 18:00:05.501619039 +0000 UTC m=+0.102943365 container exec 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  3 13:00:05 np0005544501 podman[181948]: 2025-12-03 18:00:05.537585337 +0000 UTC m=+0.138909703 container exec_died 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi)
Dec  3 13:00:05 np0005544501 systemd[1]: libpod-conmon-6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf.scope: Deactivated successfully.
Dec  3 13:00:05 np0005544501 podman[182005]: 2025-12-03 18:00:05.967367769 +0000 UTC m=+0.122466987 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  3 13:00:06 np0005544501 python3.9[182151]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  3 13:00:06 np0005544501 systemd[1]: Started libpod-conmon-6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf.scope.
Dec  3 13:00:06 np0005544501 podman[182152]: 2025-12-03 18:00:06.946926959 +0000 UTC m=+0.135097330 container exec 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 13:00:06 np0005544501 podman[182152]: 2025-12-03 18:00:06.980308224 +0000 UTC m=+0.168478595 container exec_died 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible)
Dec  3 13:00:07 np0005544501 systemd[1]: libpod-conmon-6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf.scope: Deactivated successfully.
Dec  3 13:00:07 np0005544501 python3.9[182333]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 13:00:08 np0005544501 podman[182457]: 2025-12-03 18:00:08.755895887 +0000 UTC m=+0.097148973 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
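The health_status entries (here for ceilometer_agent_ipmi, earlier for node_exporter, podman_exporter and ceilometer_agent_compute) come from podman's healthcheck timers running each container's configured test, e.g. '/openstack/healthcheck ipmi' mounted from /var/lib/openstack/healthchecks, and report health_status=healthy with health_failing_streak=0. The same check can be triggered on demand, as a sketch:

    import subprocess

    # `podman healthcheck run` executes the container's configured test
    # (here: /openstack/healthcheck ipmi, per the config_data above);
    # exit code 0 matches the health_status=healthy journal entries.
    rc = subprocess.run(
        ["podman", "healthcheck", "run", "ceilometer_agent_ipmi"]
    ).returncode
    print("healthy" if rc == 0 else f"unhealthy (rc={rc})")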
Dec  3 13:00:08 np0005544501 python3.9[182503]: ansible-containers.podman.podman_container_info Invoked with name=['kepler'] executable=podman
Dec  3 13:00:09 np0005544501 python3.9[182668]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  3 13:00:10 np0005544501 systemd[1]: Started libpod-conmon-ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c.scope.
Dec  3 13:00:10 np0005544501 podman[182669]: 2025-12-03 18:00:10.127742062 +0000 UTC m=+0.111714042 container exec ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, io.openshift.tags=base rhel9, managed_by=edpm_ansible, name=ubi9, release-0.7.12=, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, architecture=x86_64, maintainer=Red Hat, Inc., version=9.4, container_name=kepler, vendor=Red Hat, Inc., release=1214.1726694543)
Dec  3 13:00:10 np0005544501 podman[182669]: 2025-12-03 18:00:10.160969022 +0000 UTC m=+0.144940982 container exec_died ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-type=git, com.redhat.component=ubi9-container, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, architecture=x86_64, name=ubi9, release=1214.1726694543)
Dec  3 13:00:10 np0005544501 systemd[1]: libpod-conmon-ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c.scope: Deactivated successfully.
Dec  3 13:00:10 np0005544501 podman[182824]: 2025-12-03 18:00:10.94462858 +0000 UTC m=+0.120582651 container health_status ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, architecture=x86_64, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, release-0.7.12=, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, managed_by=edpm_ansible)
Dec  3 13:00:11 np0005544501 python3.9[182870]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  3 13:00:11 np0005544501 systemd[1]: Started libpod-conmon-ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c.scope.
Dec  3 13:00:11 np0005544501 podman[182874]: 2025-12-03 18:00:11.231109871 +0000 UTC m=+0.103930740 container exec ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, container_name=kepler, config_id=edpm, distribution-scope=public, release-0.7.12=, release=1214.1726694543, build-date=2024-09-18T21:23:30, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, name=ubi9)
Dec  3 13:00:11 np0005544501 podman[182874]: 2025-12-03 18:00:11.264489806 +0000 UTC m=+0.137310655 container exec_died ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, name=ubi9, build-date=2024-09-18T21:23:30, vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, distribution-scope=public, release-0.7.12=, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec  3 13:00:11 np0005544501 systemd[1]: libpod-conmon-ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c.scope: Deactivated successfully.
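The two podman_container_exec calls above resolve the uid and gid of the kepler container's default user by running `id -u` and `id -g` inside it. A minimal Python sketch of the same probe, assuming only that podman is on PATH and the container name matches the log:

# Sketch of the ansible podman_container_exec probes recorded above.
import subprocess

def container_id(name: str, flag: str) -> int:
    """Return the uid (-u) or gid (-g) of the container's default user."""
    out = subprocess.run(
        ["podman", "exec", name, "id", flag],
        check=True, capture_output=True, text=True,
    )
    return int(out.stdout.strip())

print(container_id("kepler", "-u"), container_id("kepler", "-g"))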
Dec  3 13:00:11 np0005544501 podman[182905]: 2025-12-03 18:00:11.413837827 +0000 UTC m=+0.076051471 container health_status 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, config_id=edpm, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, distribution-scope=public)
Dec  3 13:00:12 np0005544501 python3.9[183077]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/kepler recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 13:00:13 np0005544501 python3.9[183229]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 13:00:14 np0005544501 python3.9[183381]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/kepler.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 13:00:14 np0005544501 podman[183452]: 2025-12-03 18:00:14.929682009 +0000 UTC m=+0.204846833 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller)
Dec  3 13:00:15 np0005544501 python3.9[183529]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/kepler.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764784813.6393938-778-217516564231370/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=40b8960d32c81de936cddbeb137a8240ecc54e7b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 13:00:16 np0005544501 podman[183653]: 2025-12-03 18:00:16.07435108 +0000 UTC m=+0.076634726 container health_status f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 13:00:16 np0005544501 python3.9[183705]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 13:00:17 np0005544501 python3.9[183857]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 13:00:17 np0005544501 python3.9[183935]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 13:00:18 np0005544501 python3.9[184087]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 13:00:19 np0005544501 python3.9[184165]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.wbg0ur4_ recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 13:00:20 np0005544501 python3.9[184318]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 13:00:20 np0005544501 python3.9[184396]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 13:00:21 np0005544501 python3.9[184548]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
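The `nft -j list ruleset` call above dumps the current ruleset as JSON, a single object with a top-level "nftables" array. A short sketch of consuming that output, assuming an nft build with JSON support and root privileges:

# Sketch: list the chain names present in the live ruleset via nft's JSON output.
import json, subprocess

ruleset = json.loads(
    subprocess.run(["nft", "-j", "list", "ruleset"],
                   check=True, capture_output=True, text=True).stdout
)
chains = [obj["chain"]["name"] for obj in ruleset.get("nftables", []) if "chain" in obj]
print(chains)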
Dec  3 13:00:23 np0005544501 python3[184701]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  3 13:00:23 np0005544501 python3.9[184853]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 13:00:24 np0005544501 python3.9[184931]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 13:00:25 np0005544501 python3.9[185083]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 13:00:25 np0005544501 python3.9[185161]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 13:00:26 np0005544501 python3.9[185313]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 13:00:27 np0005544501 python3.9[185391]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 13:00:28 np0005544501 python3.9[185543]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 13:00:28 np0005544501 python3.9[185621]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 13:00:29 np0005544501 podman[185745]: 2025-12-03 18:00:29.455131124 +0000 UTC m=+0.060404065 container health_status 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 13:00:29 np0005544501 python3.9[185796]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 13:00:29 np0005544501 podman[158200]: time="2025-12-03T18:00:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 13:00:29 np0005544501 podman[158200]: @ - - [03/Dec/2025:18:00:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 18533 "" "Go-http-client/1.1"
Dec  3 13:00:29 np0005544501 podman[158200]: @ - - [03/Dec/2025:18:00:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2997 "" "Go-http-client/1.1"
Dec  3 13:00:30 np0005544501 python3.9[185921]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764784829.0076587-903-201788209653883/.source.nft follow=False _original_basename=ruleset.j2 checksum=195cfcdc3ed4fc7d98b13eed88ef5cb7956fa1b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 13:00:31 np0005544501 python3.9[186073]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 13:00:31 np0005544501 openstack_network_exporter[160319]: ERROR   18:00:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 13:00:31 np0005544501 openstack_network_exporter[160319]: ERROR   18:00:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 13:00:31 np0005544501 openstack_network_exporter[160319]: ERROR   18:00:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 13:00:31 np0005544501 openstack_network_exporter[160319]: ERROR   18:00:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 13:00:31 np0005544501 openstack_network_exporter[160319]: ERROR   18:00:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
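The exporter errors above mean no ovn-northd or ovsdb-server control sockets were found, which is plausible on a compute node where ovn-northd does not run; the dpif-netdev errors likewise indicate no userspace datapath exists. A sketch of the kind of socket lookup involved; the glob patterns are assumptions based on the usual OVS/OVN run directories:

# Sketch: check for the appctl control sockets the exporter failed to find.
# Paths are assumptions (standard /var/run locations), not taken from the log.
import glob

for pattern in ("/var/run/ovn/ovn-northd.*.ctl",
                "/var/run/openvswitch/ovsdb-server.*.ctl"):
    print(pattern, "->", glob.glob(pattern) or "no control socket")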
Dec  3 13:00:32 np0005544501 python3.9[186225]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 13:00:33 np0005544501 python3.9[186380]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
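Decoded (the #012 escapes are syslog-encoded newlines), the blockinfile call above maintains this block in /etc/sysconfig/nftables.conf, with marker lines built from the marker/marker_begin/marker_end parameters shown in the log:

# BEGIN ANSIBLE MANAGED BLOCK
include "/etc/nftables/iptables.nft"
include "/etc/nftables/edpm-chains.nft"
include "/etc/nftables/edpm-rules.nft"
include "/etc/nftables/edpm-jumps.nft"
# END ANSIBLE MANAGED BLOCK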
Dec  3 13:00:33 np0005544501 python3.9[186532]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 13:00:34 np0005544501 python3.9[186686]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 13:00:35 np0005544501 python3.9[186842]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
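The three nft invocations above form a verify-then-apply sequence: a `-c` dry-run of all five generated fragments, then chain creation, then an atomic load of the flush/rules/update-jump fragments. A Python sketch of that flow under the assumption that the file list in the log is authoritative; everything else is illustrative:

# Sketch of the verify-then-apply sequence recorded above.
import subprocess

BASE = "/etc/nftables"
ALL = ["edpm-chains.nft", "edpm-flushes.nft", "edpm-rules.nft",
       "edpm-update-jumps.nft", "edpm-jumps.nft"]

def concat(names):
    # join the rendered fragments in the same order the log shows
    return "".join(open(f"{BASE}/{n}").read() for n in names)

def nft(args, data=None):
    subprocess.run(["nft", *args], input=data, text=True, check=True)

nft(["-c", "-f", "-"], concat(ALL))            # dry-run the whole ruleset
nft(["-f", f"{BASE}/edpm-chains.nft"])         # ensure the chains exist
nft(["-f", "-"], concat(ALL[1:4]))             # flush + rules + update jumps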
Dec  3 13:00:36 np0005544501 podman[186969]: 2025-12-03 18:00:36.099819401 +0000 UTC m=+0.076809817 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Dec  3 13:00:36 np0005544501 python3.9[187017]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 13:00:36 np0005544501 systemd[1]: session-23.scope: Deactivated successfully.
Dec  3 13:00:36 np0005544501 systemd[1]: session-23.scope: Consumed 1min 53.062s CPU time.
Dec  3 13:00:36 np0005544501 systemd-logind[784]: Session 23 logged out. Waiting for processes to exit.
Dec  3 13:00:36 np0005544501 systemd-logind[784]: Removed session 23.
Dec  3 13:00:38 np0005544501 podman[187044]: 2025-12-03 18:00:38.957240218 +0000 UTC m=+0.118109070 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  3 13:00:41 np0005544501 podman[187068]: 2025-12-03 18:00:41.913725378 +0000 UTC m=+0.076934060 container health_status ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, vcs-type=git, container_name=kepler, io.buildah.version=1.29.0, io.openshift.expose-services=, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, architecture=x86_64, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, config_id=edpm, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec  3 13:00:41 np0005544501 podman[187067]: 2025-12-03 18:00:41.921627581 +0000 UTC m=+0.078216671 container health_status 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, io.buildah.version=1.33.7, config_id=edpm, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, architecture=x86_64, com.redhat.component=ubi9-minimal-container, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec  3 13:00:41 np0005544501 systemd-logind[784]: New session 24 of user zuul.
Dec  3 13:00:41 np0005544501 systemd[1]: Started Session 24 of User zuul.
Dec  3 13:00:43 np0005544501 python3.9[187259]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 13:00:44 np0005544501 python3.9[187417]: ansible-ansible.builtin.systemd Invoked with name=rsyslog daemon_reload=False daemon_reexec=False scope=system no_block=False state=None enabled=None force=None masked=None
Dec  3 13:00:45 np0005544501 podman[187518]: 2025-12-03 18:00:45.457437718 +0000 UTC m=+0.136041243 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  3 13:00:45 np0005544501 python3.9[187596]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  3 13:00:46 np0005544501 podman[187652]: 2025-12-03 18:00:46.689763578 +0000 UTC m=+0.074449899 container health_status f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 13:00:46 np0005544501 python3.9[187703]: ansible-ansible.legacy.dnf Invoked with name=['rsyslog-openssl'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  3 13:00:54 np0005544501 python3.9[187862]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/rsyslog/ca-openshift.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 13:00:55 np0005544501 python3.9[187985]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/rsyslog/ca-openshift.crt mode=0644 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764784853.9073532-54-14677593810753/.source.crt _original_basename=ca-openshift.crt follow=False checksum=1d88bab26da5c85710a770c705f3555781bf2a38 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 13:00:56 np0005544501 python3.9[188137]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/rsyslog.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 13:00:57 np0005544501 python3.9[188289]: ansible-ansible.legacy.stat Invoked with path=/etc/rsyslog.d/10-telemetry.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 13:00:58 np0005544501 python3.9[188412]: ansible-ansible.legacy.copy Invoked with dest=/etc/rsyslog.d/10-telemetry.conf mode=0644 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764784856.8428218-77-136310220326832/.source.conf _original_basename=10-telemetry.conf follow=False checksum=76865d9dd4bf9cd322a47065c046bcac194645ab backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:00:59 compute-0 python3.9[188564]: ansible-ansible.builtin.systemd Invoked with name=rsyslog.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  3 18:00:59 compute-0 systemd[1]: Stopping System Logging Service...
Dec  3 18:00:59 compute-0 podman[188566]: 2025-12-03 18:00:59.640794895 +0000 UTC m=+0.107943263 container health_status 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  3 18:00:59 compute-0 podman[158200]: time="2025-12-03T18:00:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:00:59 compute-0 rsyslogd[1004]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1004" x-info="https://www.rsyslog.com"] exiting on signal 15.
Dec  3 18:00:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:00:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 18533 "" "Go-http-client/1.1"
Dec  3 18:00:59 compute-0 systemd[1]: rsyslog.service: Deactivated successfully.
Dec  3 18:00:59 compute-0 systemd[1]: Stopped System Logging Service.
Dec  3 18:00:59 compute-0 systemd[1]: rsyslog.service: Consumed 2.015s CPU time, 5.5M memory peak, read 0B from disk, written 3.6M to disk.
Dec  3 18:00:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:00:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2991 "" "Go-http-client/1.1"
Dec  3 18:00:59 compute-0 systemd[1]: Starting System Logging Service...
Dec  3 18:00:59 compute-0 rsyslogd[188590]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="188590" x-info="https://www.rsyslog.com"] start
Dec  3 18:00:59 compute-0 systemd[1]: Started System Logging Service.
Dec  3 18:00:59 compute-0 rsyslogd[188590]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  3 18:00:59 compute-0 rsyslogd[188590]: Warning: Certificate file is not set [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2330 ]
Dec  3 18:00:59 compute-0 rsyslogd[188590]: Warning: Key file is not set [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2331 ]
Dec  3 18:00:59 compute-0 rsyslogd[188590]: nsd_ossl: TLS Connection initiated with remote syslog server '172.17.0.80'. [v8.2510.0-2.el9]
Dec  3 18:01:00 compute-0 rsyslogd[188590]: nsd_ossl: Information, no shared curve between syslog client '172.17.0.80' and server [v8.2510.0-2.el9]
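Judging by the warnings above (a CA is configured, but no client certificate or key), the freshly deployed /etc/rsyslog.d/10-telemetry.conf plausibly resembles the sketch below. Only the CA path and the 172.17.0.80 destination are confirmed by the log; the port, protocol, and auth mode are assumptions:

# Hypothetical reconstruction of /etc/rsyslog.d/10-telemetry.conf; only the
# CA file and target host are confirmed by the log, the rest is illustrative.
global(DefaultNetstreamDriver="ossl"
       DefaultNetstreamDriverCAFile="/etc/pki/rsyslog/ca-openshift.crt")

action(type="omfwd"
       target="172.17.0.80" port="10514" protocol="tcp"
       StreamDriver="ossl" StreamDriverMode="1"
       StreamDriverAuthMode="anon")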
Dec  3 18:01:00 compute-0 systemd[1]: session-24.scope: Deactivated successfully.
Dec  3 18:01:00 compute-0 systemd[1]: session-24.scope: Consumed 15.051s CPU time.
Dec  3 18:01:00 compute-0 systemd-logind[784]: Session 24 logged out. Waiting for processes to exit.
Dec  3 18:01:00 compute-0 systemd-logind[784]: Removed session 24.
Dec  3 18:01:01 compute-0 openstack_network_exporter[160319]: ERROR   18:01:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:01:01 compute-0 openstack_network_exporter[160319]: ERROR   18:01:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:01:01 compute-0 openstack_network_exporter[160319]: ERROR   18:01:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:01:01 compute-0 openstack_network_exporter[160319]: ERROR   18:01:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:01:01 compute-0 openstack_network_exporter[160319]: ERROR   18:01:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.695 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them; therefore, the polling process can be expected to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.696 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.696 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f538c8fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.697 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f3f52673fe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.697 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f562c3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f538c8fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.698 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f538c8fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.698 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f538c8fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.699 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f526739b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f538c8fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.699 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f538c8fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.699 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673a40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f538c8fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.699 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52671a60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f538c8fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.699 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673a70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f538c8fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.699 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f538c8fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.699 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f538c8fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.699 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f562d33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f538c8fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.699 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f526733b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f538c8fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.700 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f538c8fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.700 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f526734d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f538c8fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.700 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f565c04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f538c8fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.700 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f538c8fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.700 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f538c8fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.700 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f538c8fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.700 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f526735f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f538c8fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.700 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f538c8fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.700 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f526736b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f538c8fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.701 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f538c8fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.701 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f538c8fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.701 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f538c8fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.701 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f538c8fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
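[annotation] The burst of "Registering pollster" entries above is one polling cycle being fanned out to a single-threaded executor ("with [1] threads", per the first line of the cycle). Which meters get registered is driven by the agent's polling definition file; a quick way to see it for this container is below. This is a sketch: /etc/ceilometer/polling.yaml is ceilometer's default path and may differ in this deployment.

    podman exec ceilometer_agent_compute cat /etc/ceilometer/polling.yaml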
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.702 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.702 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f3f5271c620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.702 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.703 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f3f5271c0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.703 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.703 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f3f5271c140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.704 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.704 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f3f52673980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.704 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.705 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f3f5271c1d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.705 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.705 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f3f52673a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.706 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.706 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f3f52672390>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.706 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.707 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f3f526739e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.707 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.707 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f3f5271c260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.708 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.708 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f3f5271c2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.708 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.709 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f3f52671ca0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.709 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.709 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f3f52673470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.710 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.710 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f3f5271c380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.710 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.710 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f3f526734a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.711 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.711 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f3f52671a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.711 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.712 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f3f52673ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.712 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.712 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f3f52673500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.713 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.713 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f3f52673560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.713 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.713 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f3f526735c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.714 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.714 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f3f52673620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.714 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.715 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f3f52673680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.715 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.715 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f3f526736e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.715 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.716 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f3f52673f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.716 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.716 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f3f52673740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.717 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.717 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f3f52673f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.717 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
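[annotation] Every pollster in this cycle (cpu, power.state, memory.usage, all disk.device.* and network.* meters) is skipped with "no  resources found" — the doubled space is likely an empty placeholder in ceilometer's log format string — because the local_instances discovery returned nothing: no Nova instances are running on compute-0 yet. A hedged way to confirm that from the control plane, assuming admin credentials are loaded:

    openstack server list --all-projects --host compute-0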
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.718 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.718 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.718 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.718 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.718 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.719 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.719 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.719 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.719 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.719 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.719 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.719 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.720 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.720 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.720 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.720 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.720 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.720 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.720 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.721 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.721 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.721 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.721 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.721 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.721 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:01:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:01:03.721 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
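[annotation] The cycle closes with one "Finished processing pollster" entry per meter, matching the skips above. Comparing the two counts over a capture like this is a cheap health check for the agent; a sketch, assuming the capture lives at /var/log/messages:

    grep 'ceilometer_agent_compute' /var/log/messages | grep -c 'Skip pollster'
    grep 'ceilometer_agent_compute' /var/log/messages | grep -c 'Finished processing pollster'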
Dec  3 18:01:06 compute-0 podman[188635]: 2025-12-03 18:01:06.913803432 +0000 UTC m=+0.085964971 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
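[annotation] The podman "container health_status ... health_status=healthy" records here and below are periodic health checks driven by the 'healthcheck' stanza in each container's config_data ('/openstack/healthcheck compute' for this one). The same check can be run on demand and the recorded state read back with jq (installed later in this log); the jq fallback covers the field-name difference between podman versions:

    podman healthcheck run ceilometer_agent_compute && echo healthy
    podman inspect ceilometer_agent_compute | jq '.[0].State.Health // .[0].State.Healthcheck'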
Dec  3 18:01:08 compute-0 systemd-logind[784]: New session 25 of user zuul.
Dec  3 18:01:08 compute-0 systemd[1]: Started Session 25 of User zuul.
Dec  3 18:01:09 compute-0 podman[188732]: 2025-12-03 18:01:09.254720865 +0000 UTC m=+0.077295169 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  3 18:01:12 compute-0 podman[189011]: 2025-12-03 18:01:12.111911557 +0000 UTC m=+0.077834763 container health_status 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, version=9.6, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, name=ubi9-minimal, io.openshift.tags=minimal rhel9, io.openshift.expose-services=)
Dec  3 18:01:12 compute-0 podman[189012]: 2025-12-03 18:01:12.142046796 +0000 UTC m=+0.116566133 container health_status ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, vcs-type=git, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, io.openshift.tags=base rhel9, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, io.openshift.expose-services=, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  3 18:01:15 compute-0 python3[189449]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 18:01:15 compute-0 podman[189451]: 2025-12-03 18:01:15.735349843 +0000 UTC m=+0.150648380 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller)
Dec  3 18:01:16 compute-0 podman[189555]: 2025-12-03 18:01:16.933855484 +0000 UTC m=+0.097825123 container health_status f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
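[annotation] Besides the agents, the node runs a set of Prometheus-style exporters: node_exporter on 9100, openstack_network_exporter on 9105, kepler on 8888 (per the config_data above), and podman_exporter on 9882 (below). Each can be scraped directly. node_exporter mounts telemetry certs and a web.config.file, so TLS is assumed here; kepler's plain-HTTP scheme is likewise an assumption:

    curl -sk https://localhost:9100/metrics | head
    curl -s http://localhost:8888/metrics | head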
Dec  3 18:01:17 compute-0 python3[189604]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec  3 18:01:19 compute-0 python3[189631]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  3 18:01:19 compute-0 python3[189658]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G; losetup /dev/loop3 /var/lib/ceph-osd-0.img; lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:01:19 compute-0 kernel: loop: module loaded
Dec  3 18:01:19 compute-0 kernel: loop3: detected capacity change from 0 to 41943040
Dec  3 18:01:20 compute-0 python3[189693]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3; vgcreate ceph_vg0 /dev/loop3; lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0; lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:01:20 compute-0 lvm[189696]: PV /dev/loop3 not used.
Dec  3 18:01:20 compute-0 lvm[189698]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  3 18:01:20 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Dec  3 18:01:20 compute-0 lvm[189704]:  1 logical volume(s) in volume group "ceph_vg0" now active
Dec  3 18:01:20 compute-0 lvm[189708]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  3 18:01:20 compute-0 lvm[189708]: VG ceph_vg0 finished
Dec  3 18:01:20 compute-0 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
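[annotation] Decoded into a runnable form, the two ansible-ansible.legacy.command records above amount to the sequence below (in the raw capture syslog escapes the embedded newlines as '#012'); comments added for the non-obvious flags. The same sequence repeats later for /dev/loop4 and ceph_vg1/ceph_lv1.

    # create a 20G sparse file: bs=1 count=0 seek=20G writes no data, only sets the size
    dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
    # attach it to a loop device so LVM can treat it as a disk
    losetup /dev/loop3 /var/lib/ceph-osd-0.img
    lsblk
    # build the LVM stack the Ceph OSD will consume
    pvcreate /dev/loop3
    vgcreate ceph_vg0 /dev/loop3
    lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
    lvs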
Dec  3 18:01:21 compute-0 python3[189786]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  3 18:01:21 compute-0 python3[189859]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764784881.0124297-37301-169575550720005/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:01:22 compute-0 python3[189909]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 18:01:22 compute-0 systemd[1]: Reloading.
Dec  3 18:01:22 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 18:01:22 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 18:01:23 compute-0 systemd[1]: Starting Ceph OSD losetup...
Dec  3 18:01:23 compute-0 bash[189948]: /dev/loop3: [64513]:4327941 (/var/lib/ceph-osd-0.img)
Dec  3 18:01:23 compute-0 systemd[1]: Finished Ceph OSD losetup.
Dec  3 18:01:23 compute-0 lvm[189950]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  3 18:01:23 compute-0 lvm[189950]: VG ceph_vg0 finished
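[annotation] The unit started here is rendered from ceph-osd-losetup.service.j2 (copied at 18:01:21); its exact contents are not in this capture. A minimal sketch consistent with the observed behavior — a oneshot start whose bash line re-attaches the loop device and prints the attachment seen in the bash[189948] record — might look like the following; treat every line as an assumption:

    [Unit]
    Description=Ceph OSD losetup
    After=local-fs.target

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    # re-attach the backing file if needed, then print the attachment
    ExecStart=/usr/bin/bash -c 'losetup /dev/loop3 || losetup /dev/loop3 /var/lib/ceph-osd-0.img; losetup /dev/loop3'

    [Install]
    WantedBy=multi-user.target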
Dec  3 18:01:23 compute-0 python3[189976]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec  3 18:01:25 compute-0 python3[190003]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  3 18:01:25 compute-0 python3[190029]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=20G; losetup /dev/loop4 /var/lib/ceph-osd-1.img; lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:01:26 compute-0 kernel: loop4: detected capacity change from 0 to 41943040
Dec  3 18:01:26 compute-0 python3[190060]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4; vgcreate ceph_vg1 /dev/loop4; lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1; lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:01:26 compute-0 lvm[190065]: PV /dev/loop4 has no VG metadata.
Dec  3 18:01:26 compute-0 lvm[190065]: PV /dev/loop4 online, VG unknown.
Dec  3 18:01:26 compute-0 lvm[190065]: VG unknown
Dec  3 18:01:26 compute-0 lvm[190074]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  3 18:01:27 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Dec  3 18:01:27 compute-0 lvm[190076]:  PVs online not found for VG ceph_vg1, using all devices.
Dec  3 18:01:27 compute-0 lvm[190076]:  1 logical volume(s) in volume group "ceph_vg1" now active
Dec  3 18:01:27 compute-0 systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Dec  3 18:01:27 compute-0 python3[190154]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  3 18:01:28 compute-0 python3[190227]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764784887.234307-37328-138138094642990/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:01:28 compute-0 python3[190277]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 18:01:28 compute-0 systemd[1]: Reloading.
Dec  3 18:01:28 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 18:01:28 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 18:01:29 compute-0 systemd[1]: Starting Ceph OSD losetup...
Dec  3 18:01:29 compute-0 bash[190317]: /dev/loop4: [64513]:4329160 (/var/lib/ceph-osd-1.img)
Dec  3 18:01:29 compute-0 systemd[1]: Finished Ceph OSD losetup.
Dec  3 18:01:29 compute-0 lvm[190318]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  3 18:01:29 compute-0 lvm[190318]: VG ceph_vg1 finished
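The unit body itself is not logged (content=NOT_LOGGING_PARAMETER), so the following is only a plausible reconstruction of what the ceph-osd-losetup template could expand to, inferred from the unit description and from the bash output above, which has the shape of a losetup status query (NAME: [maj:min]:inode (backing-file)); treat every detail as an assumption:

    # Hypothetical sketch of /etc/systemd/system/ceph-osd-losetup-1.service;
    # the real template rendered by the playbook is not visible in this log.
    cat > /etc/systemd/system/ceph-osd-losetup-1.service <<'EOF'
    [Unit]
    Description=Ceph OSD losetup

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    # Print the attachment if the loop device is already set up,
    # otherwise (e.g. after a reboot) re-attach the backing file.
    ExecStart=/bin/bash -c 'losetup /dev/loop4 || losetup /dev/loop4 /var/lib/ceph-osd-1.img'

    [Install]
    WantedBy=multi-user.target
    EOF

A oneshot unit is consistent with the Starting/Finished pair logged for each ceph-osd-losetup-N.service.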
Dec  3 18:01:29 compute-0 python3[190344]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec  3 18:01:29 compute-0 podman[158200]: time="2025-12-03T18:01:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:01:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:01:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 18533 "" "Go-http-client/1.1"
Dec  3 18:01:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:01:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2992 "" "Go-http-client/1.1"
Dec  3 18:01:29 compute-0 podman[190346]: 2025-12-03 18:01:29.954683198 +0000 UTC m=+0.111414079 container health_status 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 18:01:31 compute-0 python3[190394]: ansible-ansible.builtin.stat Invoked with path=/dev/loop5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  3 18:01:31 compute-0 openstack_network_exporter[160319]: ERROR   18:01:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:01:31 compute-0 openstack_network_exporter[160319]: ERROR   18:01:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:01:31 compute-0 openstack_network_exporter[160319]: ERROR   18:01:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:01:31 compute-0 openstack_network_exporter[160319]: ERROR   18:01:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath

Dec  3 18:01:31 compute-0 openstack_network_exporter[160319]: ERROR   18:01:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:01:31 compute-0 python3[190420]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-2.img bs=1 count=0 seek=20G#012losetup /dev/loop5 /var/lib/ceph-osd-2.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:01:31 compute-0 kernel: loop5: detected capacity change from 0 to 41943040
Dec  3 18:01:32 compute-0 python3[190452]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop5#012vgcreate ceph_vg2 /dev/loop5#012lvcreate -n ceph_lv2 -l +100%FREE ceph_vg2#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:01:32 compute-0 lvm[190455]: PV /dev/loop5 not used.
Dec  3 18:01:32 compute-0 lvm[190457]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  3 18:01:32 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg2.
Dec  3 18:01:32 compute-0 lvm[190466]:  1 logical volume(s) in volume group "ceph_vg2" now active
Dec  3 18:01:32 compute-0 lvm[190468]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  3 18:01:32 compute-0 lvm[190468]: VG ceph_vg2 finished
Dec  3 18:01:32 compute-0 systemd[1]: lvm-activate-ceph_vg2.service: Deactivated successfully.
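At this point all three loop-backed volume groups exist (the same dd/losetup/pvcreate pattern ran for loop3, loop4, and loop5). A quick way to confirm the resulting stack (a usage sketch, not commands taken from the log):

    losetup -a            # expect /dev/loop3..5 -> /var/lib/ceph-osd-{0,1,2}.img
    pvs && vgs && lvs     # expect ceph_vg0..ceph_vg2, each with one ceph_lvN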
Dec  3 18:01:33 compute-0 python3[190546]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-2.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  3 18:01:33 compute-0 python3[190619]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764784892.7322052-37355-23525239372238/source dest=/etc/systemd/system/ceph-osd-losetup-2.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=4c5b1bc5693c499ffe2edaa97d63f5df7075d845 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:01:34 compute-0 python3[190669]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-2.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 18:01:34 compute-0 systemd[1]: Reloading.
Dec  3 18:01:34 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 18:01:34 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 18:01:34 compute-0 systemd[1]: Starting Ceph OSD losetup...
Dec  3 18:01:34 compute-0 bash[190710]: /dev/loop5: [64513]:4391995 (/var/lib/ceph-osd-2.img)
Dec  3 18:01:34 compute-0 systemd[1]: Finished Ceph OSD losetup.
Dec  3 18:01:34 compute-0 lvm[190712]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  3 18:01:34 compute-0 lvm[190712]: VG ceph_vg2 finished
Dec  3 18:01:36 compute-0 python3[190736]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 18:01:37 compute-0 podman[190788]: 2025-12-03 18:01:37.913383811 +0000 UTC m=+0.080984824 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  3 18:01:39 compute-0 python3[190858]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec  3 18:01:39 compute-0 podman[190860]: 2025-12-03 18:01:39.944326438 +0000 UTC m=+0.102406722 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_ipmi, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  3 18:01:41 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  3 18:01:41 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec  3 18:01:42 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  3 18:01:42 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec  3 18:01:42 compute-0 systemd[1]: run-rf2f46825b4c14848906a5787f81b9242.service: Deactivated successfully.
Dec  3 18:01:42 compute-0 podman[190985]: 2025-12-03 18:01:42.42527619 +0000 UTC m=+0.099740607 container health_status ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release-0.7.12=, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, managed_by=edpm_ansible, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, maintainer=Red Hat, Inc., container_name=kepler, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, name=ubi9, version=9.4)
Dec  3 18:01:42 compute-0 podman[190981]: 2025-12-03 18:01:42.432064914 +0000 UTC m=+0.095177257 container health_status 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7)
Dec  3 18:01:42 compute-0 python3[191038]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  3 18:01:43 compute-0 python3[191072]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
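cephadm ls reports the daemons it manages as a JSON list; --no-detail trims the per-daemon metadata. On a not-yet-bootstrapped host it returns an empty list, which is presumably what gates the bootstrap further below. With jq (installed earlier) the daemon names can be extracted, e.g.:

    # Assumes the usual cephadm JSON list output with a "name" key per daemon.
    cephadm ls --no-detail | jq -r '.[].name'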
Dec  3 18:01:44 compute-0 python3[191133]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:01:44 compute-0 python3[191159]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:01:45 compute-0 python3[191237]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  3 18:01:45 compute-0 podman[191310]: 2025-12-03 18:01:45.927193504 +0000 UTC m=+0.104603945 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 18:01:45 compute-0 python3[191311]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764784905.0378342-37506-204265002100249/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=bb83c53af4ffd926a3f1eafe26a8be437df6401f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:01:46 compute-0 python3[191438]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  3 18:01:47 compute-0 podman[191511]: 2025-12-03 18:01:47.379740987 +0000 UTC m=+0.068924694 container health_status f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 18:01:47 compute-0 python3[191512]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764784906.6131778-37525-164298552717020/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:01:47 compute-0 python3[191586]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  3 18:01:48 compute-0 python3[191614]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  3 18:01:48 compute-0 python3[191642]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  3 18:01:49 compute-0 python3[191670]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --skip-prepare-host --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid c1caf3ba-b2a5-5005-a11e-e955c344dccc --config /home/ceph-admin/assimilate_ceph.conf \--single-host-defaults \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
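Unescaped (again, #012 is a newline; the stray backslashes before two of the flags are line-continuation artifacts from the command template), the bootstrap invocation reads:

    /usr/sbin/cephadm bootstrap \
        --skip-firewalld --skip-prepare-host \
        --ssh-private-key /home/ceph-admin/.ssh/id_rsa \
        --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub \
        --ssh-user ceph-admin --allow-fqdn-hostname \
        --output-keyring /etc/ceph/ceph.client.admin.keyring \
        --output-config /etc/ceph/ceph.conf \
        --fsid c1caf3ba-b2a5-5005-a11e-e955c344dccc \
        --config /home/ceph-admin/assimilate_ceph.conf \
        --single-host-defaults --skip-monitoring-stack --skip-dashboard \
        --mon-ip 192.168.122.100

--single-host-defaults relaxes replication defaults for a one-node cluster, and the two --skip flags omit the dashboard and the built-in monitoring stack, which this deployment appears to run separately (see the exporter containers above).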
Dec  3 18:01:49 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Dec  3 18:01:49 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Dec  3 18:01:49 compute-0 systemd-logind[784]: New session 26 of user ceph-admin.
Dec  3 18:01:49 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Dec  3 18:01:49 compute-0 systemd[1]: Starting User Manager for UID 42477...
Dec  3 18:01:49 compute-0 systemd[191691]: Queued start job for default target Main User Target.
Dec  3 18:01:49 compute-0 systemd[191691]: Created slice User Application Slice.
Dec  3 18:01:49 compute-0 systemd[191691]: Started Mark boot as successful after the user session has run 2 minutes.
Dec  3 18:01:49 compute-0 systemd[191691]: Started Daily Cleanup of User's Temporary Directories.
Dec  3 18:01:49 compute-0 systemd[191691]: Reached target Paths.
Dec  3 18:01:49 compute-0 systemd[191691]: Reached target Timers.
Dec  3 18:01:49 compute-0 systemd[191691]: Starting D-Bus User Message Bus Socket...
Dec  3 18:01:49 compute-0 systemd[191691]: Starting Create User's Volatile Files and Directories...
Dec  3 18:01:49 compute-0 systemd[191691]: Listening on D-Bus User Message Bus Socket.
Dec  3 18:01:49 compute-0 systemd[191691]: Reached target Sockets.
Dec  3 18:01:49 compute-0 systemd[191691]: Finished Create User's Volatile Files and Directories.
Dec  3 18:01:49 compute-0 systemd[191691]: Reached target Basic System.
Dec  3 18:01:49 compute-0 systemd[191691]: Reached target Main User Target.
Dec  3 18:01:49 compute-0 systemd[191691]: Startup finished in 159ms.
Dec  3 18:01:49 compute-0 systemd[1]: Started User Manager for UID 42477.
Dec  3 18:01:49 compute-0 systemd[1]: Started Session 26 of User ceph-admin.
Dec  3 18:01:50 compute-0 systemd[1]: session-26.scope: Deactivated successfully.
Dec  3 18:01:50 compute-0 systemd-logind[784]: Session 26 logged out. Waiting for processes to exit.
Dec  3 18:01:50 compute-0 systemd-logind[784]: Removed session 26.
Dec  3 18:01:59 compute-0 podman[158200]: time="2025-12-03T18:01:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:01:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:01:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 18533 "" "Go-http-client/1.1"
Dec  3 18:01:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:01:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2997 "" "Go-http-client/1.1"
Dec  3 18:02:00 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Dec  3 18:02:00 compute-0 systemd[191691]: Activating special unit Exit the Session...
Dec  3 18:02:00 compute-0 systemd[191691]: Stopped target Main User Target.
Dec  3 18:02:00 compute-0 systemd[191691]: Stopped target Basic System.
Dec  3 18:02:00 compute-0 systemd[191691]: Stopped target Paths.
Dec  3 18:02:00 compute-0 systemd[191691]: Stopped target Sockets.
Dec  3 18:02:00 compute-0 systemd[191691]: Stopped target Timers.
Dec  3 18:02:00 compute-0 systemd[191691]: Stopped Mark boot as successful after the user session has run 2 minutes.
Dec  3 18:02:00 compute-0 systemd[191691]: Stopped Daily Cleanup of User's Temporary Directories.
Dec  3 18:02:00 compute-0 systemd[191691]: Closed D-Bus User Message Bus Socket.
Dec  3 18:02:00 compute-0 systemd[191691]: Stopped Create User's Volatile Files and Directories.
Dec  3 18:02:00 compute-0 systemd[191691]: Removed slice User Application Slice.
Dec  3 18:02:00 compute-0 systemd[191691]: Reached target Shutdown.
Dec  3 18:02:00 compute-0 systemd[191691]: Finished Exit the Session.
Dec  3 18:02:00 compute-0 systemd[191691]: Reached target Exit the Session.
Dec  3 18:02:00 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Dec  3 18:02:00 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Dec  3 18:02:00 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Dec  3 18:02:00 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Dec  3 18:02:00 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Dec  3 18:02:00 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Dec  3 18:02:00 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Dec  3 18:02:00 compute-0 podman[191787]: 2025-12-03 18:02:00.448111212 +0000 UTC m=+0.103895617 container health_status 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 18:02:01 compute-0 openstack_network_exporter[160319]: ERROR   18:02:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:02:01 compute-0 openstack_network_exporter[160319]: ERROR   18:02:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:02:01 compute-0 openstack_network_exporter[160319]: ERROR   18:02:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:02:01 compute-0 openstack_network_exporter[160319]: ERROR   18:02:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:02:01 compute-0 openstack_network_exporter[160319]: ERROR   18:02:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:02:16 compute-0 podman[191826]: 2025-12-03 18:02:16.433835677 +0000 UTC m=+7.602601163 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  3 18:02:16 compute-0 podman[191835]: 2025-12-03 18:02:16.455597372 +0000 UTC m=+5.625610368 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  3 18:02:16 compute-0 podman[191849]: 2025-12-03 18:02:16.459103806 +0000 UTC m=+3.631768407 container health_status 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, release=1755695350, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec  3 18:02:16 compute-0 podman[191850]: 2025-12-03 18:02:16.459115397 +0000 UTC m=+3.630287552 container health_status ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-container, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.4, container_name=kepler, io.openshift.tags=base rhel9, name=ubi9, io.openshift.expose-services=, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm)
Dec  3 18:02:16 compute-0 podman[191747]: 2025-12-03 18:02:16.515684262 +0000 UTC m=+26.321965048 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:02:16 compute-0 podman[191925]: 2025-12-03 18:02:16.591622424 +0000 UTC m=+0.050912039 container create 39a73dae4276872ea451c2e39cd5b0c8df01a8c09ecdc6bf67f4ac96a85ed4a4 (image=quay.io/ceph/ceph:v18, name=exciting_rhodes, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec  3 18:02:16 compute-0 podman[191905]: 2025-12-03 18:02:16.593308104 +0000 UTC m=+0.131302399 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 18:02:16 compute-0 systemd[1]: Started libpod-conmon-39a73dae4276872ea451c2e39cd5b0c8df01a8c09ecdc6bf67f4ac96a85ed4a4.scope.
Dec  3 18:02:16 compute-0 podman[191925]: 2025-12-03 18:02:16.571527779 +0000 UTC m=+0.030817424 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:02:16 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:02:16 compute-0 podman[191925]: 2025-12-03 18:02:16.709991909 +0000 UTC m=+0.169281554 container init 39a73dae4276872ea451c2e39cd5b0c8df01a8c09ecdc6bf67f4ac96a85ed4a4 (image=quay.io/ceph/ceph:v18, name=exciting_rhodes, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec  3 18:02:16 compute-0 podman[191925]: 2025-12-03 18:02:16.722292846 +0000 UTC m=+0.181582461 container start 39a73dae4276872ea451c2e39cd5b0c8df01a8c09ecdc6bf67f4ac96a85ed4a4 (image=quay.io/ceph/ceph:v18, name=exciting_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:02:16 compute-0 podman[191925]: 2025-12-03 18:02:16.727397009 +0000 UTC m=+0.186686654 container attach 39a73dae4276872ea451c2e39cd5b0c8df01a8c09ecdc6bf67f4ac96a85ed4a4 (image=quay.io/ceph/ceph:v18, name=exciting_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:02:17 compute-0 exciting_rhodes[191947]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Dec  3 18:02:17 compute-0 systemd[1]: libpod-39a73dae4276872ea451c2e39cd5b0c8df01a8c09ecdc6bf67f4ac96a85ed4a4.scope: Deactivated successfully.
Dec  3 18:02:17 compute-0 podman[191925]: 2025-12-03 18:02:17.038011593 +0000 UTC m=+0.497301228 container died 39a73dae4276872ea451c2e39cd5b0c8df01a8c09ecdc6bf67f4ac96a85ed4a4 (image=quay.io/ceph/ceph:v18, name=exciting_rhodes, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:02:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-842d1eed8773d830e7d0a2eb69a481aaa32e5c60020d8bca8bc88ce2f4a27488-merged.mount: Deactivated successfully.
Dec  3 18:02:17 compute-0 podman[191925]: 2025-12-03 18:02:17.107699175 +0000 UTC m=+0.566988790 container remove 39a73dae4276872ea451c2e39cd5b0c8df01a8c09ecdc6bf67f4ac96a85ed4a4 (image=quay.io/ceph/ceph:v18, name=exciting_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:02:17 compute-0 systemd[1]: libpod-conmon-39a73dae4276872ea451c2e39cd5b0c8df01a8c09ecdc6bf67f4ac96a85ed4a4.scope: Deactivated successfully.
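The short-lived exciting_rhodes container is cephadm probing the Ceph image: it runs the image once to capture the version string, roughly equivalent to (a sketch of what the create/start/attach/died sequence above amounts to):

    podman run --rm --entrypoint ceph quay.io/ceph/ceph:v18 --version
    # -> ceph version 18.2.7 (6b0e9880...) reef (stable)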
Dec  3 18:02:17 compute-0 podman[191962]: 2025-12-03 18:02:17.202847949 +0000 UTC m=+0.064331722 container create 8738a369f4619299a87839f0bb7fd06fbef8ddd13d23e9d6a3392e8960a12a80 (image=quay.io/ceph/ceph:v18, name=recursing_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  3 18:02:17 compute-0 systemd[1]: Started libpod-conmon-8738a369f4619299a87839f0bb7fd06fbef8ddd13d23e9d6a3392e8960a12a80.scope.
Dec  3 18:02:17 compute-0 podman[191962]: 2025-12-03 18:02:17.179487876 +0000 UTC m=+0.040971689 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:02:17 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:02:17 compute-0 podman[191962]: 2025-12-03 18:02:17.306385287 +0000 UTC m=+0.167869070 container init 8738a369f4619299a87839f0bb7fd06fbef8ddd13d23e9d6a3392e8960a12a80 (image=quay.io/ceph/ceph:v18, name=recursing_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec  3 18:02:17 compute-0 podman[191962]: 2025-12-03 18:02:17.317391803 +0000 UTC m=+0.178875576 container start 8738a369f4619299a87839f0bb7fd06fbef8ddd13d23e9d6a3392e8960a12a80 (image=quay.io/ceph/ceph:v18, name=recursing_heisenberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  3 18:02:17 compute-0 recursing_heisenberg[191979]: 167 167
Dec  3 18:02:17 compute-0 podman[191962]: 2025-12-03 18:02:17.321767618 +0000 UTC m=+0.183251381 container attach 8738a369f4619299a87839f0bb7fd06fbef8ddd13d23e9d6a3392e8960a12a80 (image=quay.io/ceph/ceph:v18, name=recursing_heisenberg, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:02:17 compute-0 systemd[1]: libpod-8738a369f4619299a87839f0bb7fd06fbef8ddd13d23e9d6a3392e8960a12a80.scope: Deactivated successfully.
Dec  3 18:02:17 compute-0 podman[191962]: 2025-12-03 18:02:17.322486526 +0000 UTC m=+0.183970289 container died 8738a369f4619299a87839f0bb7fd06fbef8ddd13d23e9d6a3392e8960a12a80 (image=quay.io/ceph/ceph:v18, name=recursing_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec  3 18:02:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-c4ebdefe2415a64874a9267fdf4d8605d71ec30a00b93cfdd6bdf0e47a9ce260-merged.mount: Deactivated successfully.
Dec  3 18:02:17 compute-0 podman[191962]: 2025-12-03 18:02:17.372641156 +0000 UTC m=+0.234124919 container remove 8738a369f4619299a87839f0bb7fd06fbef8ddd13d23e9d6a3392e8960a12a80 (image=quay.io/ceph/ceph:v18, name=recursing_heisenberg, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  3 18:02:17 compute-0 systemd[1]: libpod-conmon-8738a369f4619299a87839f0bb7fd06fbef8ddd13d23e9d6a3392e8960a12a80.scope: Deactivated successfully.
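
The recursing_heisenberg container above lives for only a few milliseconds and prints "167 167": the uid and gid of the ceph user inside the quay.io/ceph/ceph:v18 image. This is the shape of a cephadm-style probe that discovers the ceph uid:gid before writing files on the host. A minimal reproduction sketch, assuming the standard image layout (the actual entrypoint used here is not recorded in this log):

    podman run --rm quay.io/ceph/ceph:v18 stat -c '%u %g' /var/lib/ceph
    # expected output: 167 167
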
Dec  3 18:02:17 compute-0 podman[191996]: 2025-12-03 18:02:17.459085811 +0000 UTC m=+0.061406122 container create 48492a2086d1bd831dc749cfac3fc7869301895bd635a505a74a249b8fe425c4 (image=quay.io/ceph/ceph:v18, name=busy_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  3 18:02:17 compute-0 podman[192002]: 2025-12-03 18:02:17.496941625 +0000 UTC m=+0.073442003 container health_status f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  3 18:02:17 compute-0 systemd[1]: Started libpod-conmon-48492a2086d1bd831dc749cfac3fc7869301895bd635a505a74a249b8fe425c4.scope.
Dec  3 18:02:17 compute-0 podman[191996]: 2025-12-03 18:02:17.435638636 +0000 UTC m=+0.037958997 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:02:17 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:02:17 compute-0 podman[191996]: 2025-12-03 18:02:17.559287369 +0000 UTC m=+0.161607700 container init 48492a2086d1bd831dc749cfac3fc7869301895bd635a505a74a249b8fe425c4 (image=quay.io/ceph/ceph:v18, name=busy_wescoff, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:02:17 compute-0 podman[191996]: 2025-12-03 18:02:17.570392397 +0000 UTC m=+0.172712698 container start 48492a2086d1bd831dc749cfac3fc7869301895bd635a505a74a249b8fe425c4 (image=quay.io/ceph/ceph:v18, name=busy_wescoff, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:02:17 compute-0 podman[191996]: 2025-12-03 18:02:17.574939726 +0000 UTC m=+0.177260127 container attach 48492a2086d1bd831dc749cfac3fc7869301895bd635a505a74a249b8fe425c4 (image=quay.io/ceph/ceph:v18, name=busy_wescoff, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  3 18:02:17 compute-0 busy_wescoff[192035]: AQApezBp2sxCIxAAWd079whe9w2LMmtGbKogVw==
Dec  3 18:02:17 compute-0 systemd[1]: libpod-48492a2086d1bd831dc749cfac3fc7869301895bd635a505a74a249b8fe425c4.scope: Deactivated successfully.
Dec  3 18:02:17 compute-0 podman[191996]: 2025-12-03 18:02:17.597238684 +0000 UTC m=+0.199558995 container died 48492a2086d1bd831dc749cfac3fc7869301895bd635a505a74a249b8fe425c4 (image=quay.io/ceph/ceph:v18, name=busy_wescoff, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:02:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-a411dad61604dc1348ec9a31bc4dc6eb04e88b3b18516a45cf1946e438c6b3cc-merged.mount: Deactivated successfully.
Dec  3 18:02:17 compute-0 podman[191996]: 2025-12-03 18:02:17.650243023 +0000 UTC m=+0.252563334 container remove 48492a2086d1bd831dc749cfac3fc7869301895bd635a505a74a249b8fe425c4 (image=quay.io/ceph/ceph:v18, name=busy_wescoff, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:02:17 compute-0 systemd[1]: libpod-conmon-48492a2086d1bd831dc749cfac3fc7869301895bd635a505a74a249b8fe425c4.scope: Deactivated successfully.
Dec  3 18:02:17 compute-0 podman[192056]: 2025-12-03 18:02:17.751090326 +0000 UTC m=+0.072077670 container create 911c1d28f2e9256a76b63fba0ba26df9d5bb69448fe547888825b0a04b199dad (image=quay.io/ceph/ceph:v18, name=dazzling_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  3 18:02:17 compute-0 podman[192056]: 2025-12-03 18:02:17.71813422 +0000 UTC m=+0.039121644 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:02:17 compute-0 systemd[1]: Started libpod-conmon-911c1d28f2e9256a76b63fba0ba26df9d5bb69448fe547888825b0a04b199dad.scope.
Dec  3 18:02:17 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:02:17 compute-0 podman[192056]: 2025-12-03 18:02:17.860625638 +0000 UTC m=+0.181613072 container init 911c1d28f2e9256a76b63fba0ba26df9d5bb69448fe547888825b0a04b199dad (image=quay.io/ceph/ceph:v18, name=dazzling_banach, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Dec  3 18:02:17 compute-0 podman[192056]: 2025-12-03 18:02:17.871131972 +0000 UTC m=+0.192119346 container start 911c1d28f2e9256a76b63fba0ba26df9d5bb69448fe547888825b0a04b199dad (image=quay.io/ceph/ceph:v18, name=dazzling_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:02:17 compute-0 podman[192056]: 2025-12-03 18:02:17.87809716 +0000 UTC m=+0.199084594 container attach 911c1d28f2e9256a76b63fba0ba26df9d5bb69448fe547888825b0a04b199dad (image=quay.io/ceph/ceph:v18, name=dazzling_banach, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:02:17 compute-0 dazzling_banach[192072]: AQApezBpB3QvNRAAJR2AQYUp40EPTY/H7qz23w==
Dec  3 18:02:17 compute-0 systemd[1]: libpod-911c1d28f2e9256a76b63fba0ba26df9d5bb69448fe547888825b0a04b199dad.scope: Deactivated successfully.
Dec  3 18:02:17 compute-0 podman[192056]: 2025-12-03 18:02:17.898163824 +0000 UTC m=+0.219151178 container died 911c1d28f2e9256a76b63fba0ba26df9d5bb69448fe547888825b0a04b199dad (image=quay.io/ceph/ceph:v18, name=dazzling_banach, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec  3 18:02:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-198fef52c7cb42f596ef2c9de5831ae54cd8d85bad7c9cd7542ce0ad768463b8-merged.mount: Deactivated successfully.
Dec  3 18:02:17 compute-0 podman[192056]: 2025-12-03 18:02:17.96266113 +0000 UTC m=+0.283648464 container remove 911c1d28f2e9256a76b63fba0ba26df9d5bb69448fe547888825b0a04b199dad (image=quay.io/ceph/ceph:v18, name=dazzling_banach, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:02:17 compute-0 systemd[1]: libpod-conmon-911c1d28f2e9256a76b63fba0ba26df9d5bb69448fe547888825b0a04b199dad.scope: Deactivated successfully.
Dec  3 18:02:18 compute-0 podman[192090]: 2025-12-03 18:02:18.053227395 +0000 UTC m=+0.065475911 container create d8aa13a5ce80413fb45f5019602df4b9d2e472477a96bdedb6b92e18344f566d (image=quay.io/ceph/ceph:v18, name=unruffled_hermann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:02:18 compute-0 podman[192090]: 2025-12-03 18:02:18.01615142 +0000 UTC m=+0.028399946 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:02:18 compute-0 systemd[1]: Started libpod-conmon-d8aa13a5ce80413fb45f5019602df4b9d2e472477a96bdedb6b92e18344f566d.scope.
Dec  3 18:02:18 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:02:19 compute-0 podman[192090]: 2025-12-03 18:02:19.078193032 +0000 UTC m=+1.090441618 container init d8aa13a5ce80413fb45f5019602df4b9d2e472477a96bdedb6b92e18344f566d (image=quay.io/ceph/ceph:v18, name=unruffled_hermann, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:02:19 compute-0 podman[192090]: 2025-12-03 18:02:19.085734864 +0000 UTC m=+1.097983370 container start d8aa13a5ce80413fb45f5019602df4b9d2e472477a96bdedb6b92e18344f566d (image=quay.io/ceph/ceph:v18, name=unruffled_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:02:19 compute-0 podman[192090]: 2025-12-03 18:02:19.0934296 +0000 UTC m=+1.105678166 container attach d8aa13a5ce80413fb45f5019602df4b9d2e472477a96bdedb6b92e18344f566d (image=quay.io/ceph/ceph:v18, name=unruffled_hermann, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Dec  3 18:02:19 compute-0 unruffled_hermann[192105]: AQArezBpeFbDBxAAJLmHf46D1+PIomP6ULANeg==
Dec  3 18:02:19 compute-0 systemd[1]: libpod-d8aa13a5ce80413fb45f5019602df4b9d2e472477a96bdedb6b92e18344f566d.scope: Deactivated successfully.
Dec  3 18:02:19 compute-0 podman[192090]: 2025-12-03 18:02:19.139197744 +0000 UTC m=+1.151446250 container died d8aa13a5ce80413fb45f5019602df4b9d2e472477a96bdedb6b92e18344f566d (image=quay.io/ceph/ceph:v18, name=unruffled_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec  3 18:02:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-b785658424d4f1b64e9dd773cd98829bcdc0b7b16bd21ebb0a11441902d66eeb-merged.mount: Deactivated successfully.
Dec  3 18:02:19 compute-0 podman[192090]: 2025-12-03 18:02:19.237061745 +0000 UTC m=+1.249310251 container remove d8aa13a5ce80413fb45f5019602df4b9d2e472477a96bdedb6b92e18344f566d (image=quay.io/ceph/ceph:v18, name=unruffled_hermann, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:02:19 compute-0 systemd[1]: libpod-conmon-d8aa13a5ce80413fb45f5019602df4b9d2e472477a96bdedb6b92e18344f566d.scope: Deactivated successfully.
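
busy_wescoff, dazzling_banach and unruffled_hermann each print a single base64 string of the form AQ...== and exit: freshly generated cephx secrets (these are live keys and worth redacting before sharing logs). Keys of this shape come from ceph-authtool; a minimal sketch using the same image:

    podman run --rm quay.io/ceph/ceph:v18 ceph-authtool --gen-print-key
    # prints one new base64 cephx secret on stdout
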
Dec  3 18:02:19 compute-0 podman[192125]: 2025-12-03 18:02:19.312897204 +0000 UTC m=+0.050840277 container create aa263106b83df7072bb8a7e2ae377e7938f5f4d492ade1c25ffcf21ac476c431 (image=quay.io/ceph/ceph:v18, name=hardcore_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:02:19 compute-0 systemd[1]: Started libpod-conmon-aa263106b83df7072bb8a7e2ae377e7938f5f4d492ade1c25ffcf21ac476c431.scope.
Dec  3 18:02:19 compute-0 podman[192125]: 2025-12-03 18:02:19.292006241 +0000 UTC m=+0.029949334 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:02:19 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:02:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cf3e53326a4d2e0f17463f2a86810f4d8a20804af011cfc9e8e76a9e54d5452/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Dec  3 18:02:19 compute-0 podman[192125]: 2025-12-03 18:02:19.414730341 +0000 UTC m=+0.152673474 container init aa263106b83df7072bb8a7e2ae377e7938f5f4d492ade1c25ffcf21ac476c431 (image=quay.io/ceph/ceph:v18, name=hardcore_lovelace, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:02:19 compute-0 podman[192125]: 2025-12-03 18:02:19.429712353 +0000 UTC m=+0.167655446 container start aa263106b83df7072bb8a7e2ae377e7938f5f4d492ade1c25ffcf21ac476c431 (image=quay.io/ceph/ceph:v18, name=hardcore_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec  3 18:02:19 compute-0 podman[192125]: 2025-12-03 18:02:19.437259484 +0000 UTC m=+0.175202627 container attach aa263106b83df7072bb8a7e2ae377e7938f5f4d492ade1c25ffcf21ac476c431 (image=quay.io/ceph/ceph:v18, name=hardcore_lovelace, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Dec  3 18:02:19 compute-0 hardcore_lovelace[192141]: /usr/bin/monmaptool: monmap file /tmp/monmap
Dec  3 18:02:19 compute-0 hardcore_lovelace[192141]: setting min_mon_release = pacific
Dec  3 18:02:19 compute-0 hardcore_lovelace[192141]: /usr/bin/monmaptool: set fsid to c1caf3ba-b2a5-5005-a11e-e955c344dccc
Dec  3 18:02:19 compute-0 hardcore_lovelace[192141]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Dec  3 18:02:19 compute-0 systemd[1]: libpod-aa263106b83df7072bb8a7e2ae377e7938f5f4d492ade1c25ffcf21ac476c431.scope: Deactivated successfully.
Dec  3 18:02:19 compute-0 podman[192125]: 2025-12-03 18:02:19.46605591 +0000 UTC m=+0.203999013 container died aa263106b83df7072bb8a7e2ae377e7938f5f4d492ade1c25ffcf21ac476c431 (image=quay.io/ceph/ceph:v18, name=hardcore_lovelace, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:02:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-8cf3e53326a4d2e0f17463f2a86810f4d8a20804af011cfc9e8e76a9e54d5452-merged.mount: Deactivated successfully.
Dec  3 18:02:19 compute-0 podman[192125]: 2025-12-03 18:02:19.532650896 +0000 UTC m=+0.270593969 container remove aa263106b83df7072bb8a7e2ae377e7938f5f4d492ade1c25ffcf21ac476c431 (image=quay.io/ceph/ceph:v18, name=hardcore_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  3 18:02:19 compute-0 systemd[1]: libpod-conmon-aa263106b83df7072bb8a7e2ae377e7938f5f4d492ade1c25ffcf21ac476c431.scope: Deactivated successfully.
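
hardcore_lovelace runs /usr/bin/monmaptool to seed the initial monitor map: it sets the cluster fsid c1caf3ba-b2a5-5005-a11e-e955c344dccc and writes epoch 0 with one monitor to /tmp/monmap, which is bind-mounted from the host (hence the xfs remount message above). An equivalent invocation, sketched under the assumption of a single mon on this host; <mon-ip> is a placeholder, since the monitor address is not shown in this excerpt:

    monmaptool --create --fsid c1caf3ba-b2a5-5005-a11e-e955c344dccc \
        --addv compute-0 '[v2:<mon-ip>:3300,v1:<mon-ip>:6789]' /tmp/monmap
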
Dec  3 18:02:19 compute-0 podman[192159]: 2025-12-03 18:02:19.62025979 +0000 UTC m=+0.064088928 container create e03577d0c22697cc8c571517701be56d40a2e52a657dffcab214d46b383ba86b (image=quay.io/ceph/ceph:v18, name=pensive_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec  3 18:02:19 compute-0 systemd[1]: Started libpod-conmon-e03577d0c22697cc8c571517701be56d40a2e52a657dffcab214d46b383ba86b.scope.
Dec  3 18:02:19 compute-0 podman[192159]: 2025-12-03 18:02:19.588432021 +0000 UTC m=+0.032261209 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:02:19 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:02:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54d9911d9bece353a71b72ae38df86371d201181236a1429bb370c82d3147ddc/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 18:02:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54d9911d9bece353a71b72ae38df86371d201181236a1429bb370c82d3147ddc/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Dec  3 18:02:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54d9911d9bece353a71b72ae38df86371d201181236a1429bb370c82d3147ddc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:02:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54d9911d9bece353a71b72ae38df86371d201181236a1429bb370c82d3147ddc/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec  3 18:02:19 compute-0 podman[192159]: 2025-12-03 18:02:19.7777731 +0000 UTC m=+0.221602318 container init e03577d0c22697cc8c571517701be56d40a2e52a657dffcab214d46b383ba86b (image=quay.io/ceph/ceph:v18, name=pensive_maxwell, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:02:19 compute-0 podman[192159]: 2025-12-03 18:02:19.791965152 +0000 UTC m=+0.235794300 container start e03577d0c22697cc8c571517701be56d40a2e52a657dffcab214d46b383ba86b (image=quay.io/ceph/ceph:v18, name=pensive_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec  3 18:02:19 compute-0 podman[192159]: 2025-12-03 18:02:19.797988717 +0000 UTC m=+0.241817895 container attach e03577d0c22697cc8c571517701be56d40a2e52a657dffcab214d46b383ba86b (image=quay.io/ceph/ceph:v18, name=pensive_maxwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  3 18:02:19 compute-0 systemd[1]: libpod-e03577d0c22697cc8c571517701be56d40a2e52a657dffcab214d46b383ba86b.scope: Deactivated successfully.
Dec  3 18:02:19 compute-0 podman[192159]: 2025-12-03 18:02:19.893128503 +0000 UTC m=+0.336957671 container died e03577d0c22697cc8c571517701be56d40a2e52a657dffcab214d46b383ba86b (image=quay.io/ceph/ceph:v18, name=pensive_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec  3 18:02:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-54d9911d9bece353a71b72ae38df86371d201181236a1429bb370c82d3147ddc-merged.mount: Deactivated successfully.
Dec  3 18:02:19 compute-0 podman[192159]: 2025-12-03 18:02:19.950881996 +0000 UTC m=+0.394711134 container remove e03577d0c22697cc8c571517701be56d40a2e52a657dffcab214d46b383ba86b (image=quay.io/ceph/ceph:v18, name=pensive_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:02:19 compute-0 systemd[1]: libpod-conmon-e03577d0c22697cc8c571517701be56d40a2e52a657dffcab214d46b383ba86b.scope: Deactivated successfully.
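
pensive_maxwell bind-mounts /tmp/keyring, /tmp/monmap, /var/log/ceph and the new mon data directory (see the xfs remount lines above), which is the classic shape of initializing a monitor store. A sketch of that step, assuming it mirrors the documented manual bootstrap (the exact flags are not recorded in this log):

    ceph-mon --mkfs -i compute-0 --monmap /tmp/monmap --keyring /tmp/keyring

This populates /var/lib/ceph/mon/ceph-compute-0, including the store.db that the daemon recovers from once it starts below.
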
Dec  3 18:02:20 compute-0 systemd[1]: Reloading.
Dec  3 18:02:20 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 18:02:20 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 18:02:20 compute-0 systemd[1]: Reloading.
Dec  3 18:02:20 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 18:02:20 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 18:02:20 compute-0 systemd[1]: Reached target All Ceph clusters and services.
Dec  3 18:02:20 compute-0 systemd[1]: Reloading.
Dec  3 18:02:20 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 18:02:20 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 18:02:21 compute-0 systemd[1]: Reached target Ceph cluster c1caf3ba-b2a5-5005-a11e-e955c344dccc.
Dec  3 18:02:21 compute-0 systemd[1]: Reloading.
Dec  3 18:02:21 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 18:02:21 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 18:02:21 compute-0 systemd[1]: Reloading.
Dec  3 18:02:21 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 18:02:21 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
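
Each "systemd[1]: Reloading." line is a daemon-reload picking up the unit files being installed for the new cluster; the sysv-generator and rc-local-generator messages repeat because every reload re-runs all unit generators. The same reload can be triggered by hand:

    systemctl daemon-reload
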
Dec  3 18:02:21 compute-0 systemd[1]: Created slice Slice /system/ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc.
Dec  3 18:02:21 compute-0 systemd[1]: Reached target System Time Set.
Dec  3 18:02:21 compute-0 systemd[1]: Reached target System Time Synchronized.
Dec  3 18:02:21 compute-0 systemd[1]: Starting Ceph mon.compute-0 for c1caf3ba-b2a5-5005-a11e-e955c344dccc...
Dec  3 18:02:22 compute-0 podman[192445]: 2025-12-03 18:02:22.349794239 +0000 UTC m=+0.103615921 container create 3f962d09ac6994e4b5e22edd8278bda42eecdb955056978e9374cfa17fac20f5 (image=quay.io/ceph/ceph:v18, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mon-compute-0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef)
Dec  3 18:02:22 compute-0 podman[192445]: 2025-12-03 18:02:22.289747182 +0000 UTC m=+0.043568994 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:02:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b4ce72116c1cdad135b848e4c9f75c2a06b0941f5f7d45147bfe867c01e2444/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:02:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b4ce72116c1cdad135b848e4c9f75c2a06b0941f5f7d45147bfe867c01e2444/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:02:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b4ce72116c1cdad135b848e4c9f75c2a06b0941f5f7d45147bfe867c01e2444/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:02:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b4ce72116c1cdad135b848e4c9f75c2a06b0941f5f7d45147bfe867c01e2444/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec  3 18:02:22 compute-0 podman[192445]: 2025-12-03 18:02:22.463731478 +0000 UTC m=+0.217553200 container init 3f962d09ac6994e4b5e22edd8278bda42eecdb955056978e9374cfa17fac20f5 (image=quay.io/ceph/ceph:v18, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mon-compute-0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:02:22 compute-0 podman[192445]: 2025-12-03 18:02:22.479640152 +0000 UTC m=+0.233461834 container start 3f962d09ac6994e4b5e22edd8278bda42eecdb955056978e9374cfa17fac20f5 (image=quay.io/ceph/ceph:v18, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mon-compute-0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec  3 18:02:22 compute-0 bash[192445]: 3f962d09ac6994e4b5e22edd8278bda42eecdb955056978e9374cfa17fac20f5
Dec  3 18:02:22 compute-0 systemd[1]: Started Ceph mon.compute-0 for c1caf3ba-b2a5-5005-a11e-e955c344dccc.
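
The monitor now runs as a systemd-managed podman container: bash[192445] is the unit's wrapper script, and the ID it echoes is the container started detached under the name shown at container create. Assuming the usual cephadm unit naming (ceph-<fsid>@<daemon>.<id>), it can be inspected with:

    systemctl status 'ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc@mon.compute-0.service'
    podman logs ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mon-compute-0
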
Dec  3 18:02:22 compute-0 ceph-mon[192463]: set uid:gid to 167:167 (ceph:ceph)
Dec  3 18:02:22 compute-0 ceph-mon[192463]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Dec  3 18:02:22 compute-0 ceph-mon[192463]: pidfile_write: ignore empty --pid-file
Dec  3 18:02:22 compute-0 ceph-mon[192463]: load: jerasure load: lrc 
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb: RocksDB version: 7.9.2
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb: Git sha 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb: Compile date 2025-05-06 23:30:25
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb: DB SUMMARY
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb: DB Session ID:  CG3BS9632RY6OG5BUDED
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb: CURRENT file:  CURRENT
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb: IDENTITY file:  IDENTITY
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                         Options.error_if_exists: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                       Options.create_if_missing: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                         Options.paranoid_checks: 1
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                                     Options.env: 0x561972accc40
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                                      Options.fs: PosixFileSystem
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                                Options.info_log: 0x5619742a0e80
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                Options.max_file_opening_threads: 16
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                              Options.statistics: (nil)
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                               Options.use_fsync: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                       Options.max_log_file_size: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                       Options.keep_log_file_num: 1000
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                    Options.recycle_log_file_num: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                         Options.allow_fallocate: 1
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                        Options.allow_mmap_reads: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                       Options.allow_mmap_writes: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                        Options.use_direct_reads: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:          Options.create_missing_column_families: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                              Options.db_log_dir: 
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                                 Options.wal_dir: 
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                Options.table_cache_numshardbits: 6
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                   Options.advise_random_on_open: 1
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                    Options.db_write_buffer_size: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                    Options.write_buffer_manager: 0x5619742b0b40
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                            Options.rate_limiter: (nil)
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                       Options.wal_recovery_mode: 2
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                  Options.enable_thread_tracking: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                  Options.enable_pipelined_write: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                  Options.unordered_write: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                               Options.row_cache: None
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                              Options.wal_filter: None
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:             Options.allow_ingest_behind: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:             Options.two_write_queues: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:             Options.manual_wal_flush: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:             Options.wal_compression: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:             Options.atomic_flush: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                 Options.log_readahead_size: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                 Options.best_efforts_recovery: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:             Options.allow_data_in_errors: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:             Options.db_host_id: __hostname__
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:             Options.enforce_single_del_contracts: true
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:             Options.max_background_jobs: 2
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:             Options.max_background_compactions: -1
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:             Options.max_subcompactions: 1
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:             Options.delayed_write_rate : 16777216
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:             Options.max_total_wal_size: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                          Options.max_open_files: -1
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                          Options.bytes_per_sync: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:       Options.compaction_readahead_size: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                  Options.max_background_flushes: -1
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb: Compression algorithms supported:
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:     kZSTD supported: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:     kXpressCompression supported: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:     kBZip2Compression supported: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:     kZSTDNotFinalCompression supported: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:     kLZ4Compression supported: 1
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:     kZlibCompression supported: 1
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:     kLZ4HCCompression supported: 1
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:     kSnappyCompression supported: 1
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb: Fast CRC32 supported: Supported on x86
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb: DMutex implementation: pthread_mutex_t
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:           Options.merge_operator: 
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:        Options.compaction_filter: None
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5619742a0a80)
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:   cache_index_and_filter_blocks: 1
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:   cache_index_and_filter_blocks_with_high_priority: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:   pin_l0_filter_and_index_blocks_in_cache: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:   pin_top_level_index_and_filter: 1
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:   index_type: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:   data_block_index_type: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:   index_shortening: 1
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:   data_block_hash_table_util_ratio: 0.750000
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:   checksum: 4
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:   no_block_cache: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:   block_cache: 0x5619742991f0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:   block_cache_name: BinnedLRUCache
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:   block_cache_options:
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:     capacity : 536870912
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:     num_shard_bits : 4
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:     strict_capacity_limit : 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:     high_pri_pool_ratio: 0.000
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:   block_cache_compressed: (nil)
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:   persistent_cache: (nil)
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:   block_size: 4096
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:   block_size_deviation: 10
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:   block_restart_interval: 16
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:   index_block_restart_interval: 1
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:   metadata_block_size: 4096
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:   partition_filters: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:   use_delta_encoding: 1
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:   filter_policy: bloomfilter
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:   whole_key_filtering: 1
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:   verify_compression: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:   read_amp_bytes_per_bit: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:   format_version: 5
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:   enable_index_compression: 1
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:   block_align: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:   max_auto_readahead_size: 262144
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:   prepopulate_block_cache: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:   initial_auto_readahead_size: 8192
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:   num_file_reads_for_auto_readahead: 2
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:        Options.write_buffer_size: 33554432
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:  Options.max_write_buffer_number: 2
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:          Options.compression: NoCompression
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:             Options.num_levels: 7
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                           Options.bloom_locality: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                               Options.ttl: 2592000
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                       Options.enable_blob_files: false
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                           Options.min_blob_size: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
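Aside: several of the column-family values dumped above (write_buffer_size 33554432, NoCompression, level_compaction_dynamic_level_bytes) appear to correspond to the monitor's RocksDB option string, which Ceph exposes as the mon_rocksdb_options setting. A minimal sketch of reading that string back, assuming the ceph CLI is on PATH and an admin keyring is available:

    #!/usr/bin/env python3
    # Sketch: read the mon RocksDB option string behind the dump above.
    # Assumes a reachable cluster and the `ceph` CLI in PATH; the
    # mon_rocksdb_options setting exists in Reef releases like this one.
    import subprocess

    def get_mon_rocksdb_options() -> str:
        out = subprocess.run(
            ["ceph", "config", "get", "mon", "mon_rocksdb_options"],
            check=True, capture_output=True, text=True,
        )
        return out.stdout.strip()

    if __name__ == "__main__":
        print(get_mon_rocksdb_options())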
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: a1ac3b74-8599-4a51-8b4c-6fd35a134427
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764784942543005, "job": 1, "event": "recovery_started", "wal_files": [4]}
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764784942545804, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764784942, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a1ac3b74-8599-4a51-8b4c-6fd35a134427", "db_session_id": "CG3BS9632RY6OG5BUDED", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764784942545912, "job": 1, "event": "recovery_finished"}
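The EVENT_LOG_v1 records above (recovery_started, table_file_creation, recovery_finished) are single-line JSON documents embedded in the journal, which makes the mon's RocksDB recovery and compaction history easy to mine. A minimal sketch, assuming journalctl is available on the host and keying on the literal EVENT_LOG_v1 marker shown above:

    #!/usr/bin/env python3
    # Sketch: extract rocksdb EVENT_LOG_v1 JSON payloads from the journal.
    # The journalctl invocation is an assumption; the payload format
    # ("EVENT_LOG_v1 {...}") is taken from the log lines above.
    import json
    import subprocess

    MARKER = "EVENT_LOG_v1 "

    def rocksdb_events(since: str = "-1h"):
        out = subprocess.run(
            ["journalctl", "--since", since, "--no-pager", "-o", "cat"],
            check=True, capture_output=True, text=True,
        ).stdout
        for line in out.splitlines():
            idx = line.find(MARKER)
            if idx != -1:
                yield json.loads(line[idx + len(MARKER):])

    if __name__ == "__main__":
        for ev in rocksdb_events():
            print(ev.get("event"), ev.get("time_micros"))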
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5619742c2e00
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb: DB pointer 0x56197434c000
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 18:02:22 compute-0 ceph-mon[192463]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 0.0 total, 0.0 interval
Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.003       0      0       0.0       0.0
 Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.003       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.003       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.003       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.0 total, 0.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.11 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.11 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5619742991f0#2 capacity: 512.00 MB usage: 1.17 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.5e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(1,0.95 KB,0.000181794%) FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Dec  3 18:02:22 compute-0 ceph-mon[192463]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid c1caf3ba-b2a5-5005-a11e-e955c344dccc
Dec  3 18:02:22 compute-0 ceph-mon[192463]: mon.compute-0@-1(???) e0 preinit fsid c1caf3ba-b2a5-5005-a11e-e955c344dccc
Dec  3 18:02:22 compute-0 ceph-mon[192463]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Dec  3 18:02:22 compute-0 ceph-mon[192463]: mon.compute-0@0(probing) e0 win_standalone_election
Dec  3 18:02:22 compute-0 ceph-mon[192463]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Dec  3 18:02:22 compute-0 ceph-mon[192463]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  3 18:02:22 compute-0 ceph-mon[192463]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec  3 18:02:22 compute-0 ceph-mon[192463]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Dec  3 18:02:22 compute-0 ceph-mon[192463]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Dec  3 18:02:22 compute-0 ceph-mon[192463]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
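The three ratios recorded above are the stock defaults (nearfull 0.85, backfillfull 0.9, full 0.95). If they ever need adjusting at runtime, the documented `ceph osd set-*-ratio` commands apply; a sketch that simply re-applies those same defaults, assuming the ceph CLI and an admin keyring:

    #!/usr/bin/env python3
    # Sketch: re-apply the default OSD fullness ratios logged above.
    # Values are illustrative (they match the logged defaults); the
    # commands themselves are documented Ceph CLI subcommands.
    import subprocess

    RATIOS = {
        "set-nearfull-ratio": "0.85",
        "set-backfillfull-ratio": "0.90",
        "set-full-ratio": "0.95",
    }

    for cmd, value in RATIOS.items():
        subprocess.run(["ceph", "osd", cmd, value], check=True)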
Dec  3 18:02:22 compute-0 ceph-mon[192463]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Dec  3 18:02:22 compute-0 ceph-mon[192463]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  3 18:02:22 compute-0 ceph-mon[192463]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Dec  3 18:02:22 compute-0 ceph-mon[192463]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: mon.compute-0@0(probing) e1 win_standalone_election
Dec  3 18:02:22 compute-0 ceph-mon[192463]: paxos.0).electionLogic(2) init, last seen epoch 2
Dec  3 18:02:22 compute-0 ceph-mon[192463]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  3 18:02:22 compute-0 ceph-mon[192463]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec  3 18:02:22 compute-0 ceph-mon[192463]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec  3 18:02:22 compute-0 ceph-mon[192463]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  3 18:02:22 compute-0 ceph-mon[192463]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,ceph_version_when_created=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v18,cpu=AMD EPYC-Rome Processor,created_at=2025-12-03T18:02:19.829148Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025,kernel_version=5.14.0-645.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864312,os=Linux}
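The daemon metadata that mgrc reports above can also be pulled on demand with the documented `ceph mon metadata` command, which returns the same key/value set as JSON. A small sketch, assuming the ceph CLI is available and using the mon id compute-0 from the log:

    #!/usr/bin/env python3
    # Sketch: fetch the mon metadata logged above as JSON.
    # Assumes the `ceph` CLI in PATH and a reachable cluster.
    import json
    import subprocess

    def mon_metadata(mon_id: str) -> dict:
        out = subprocess.run(
            ["ceph", "mon", "metadata", mon_id, "--format", "json"],
            check=True, capture_output=True, text=True,
        )
        return json.loads(out.stdout)

    if __name__ == "__main__":
        meta = mon_metadata("compute-0")
        print(meta.get("ceph_version_short"), meta.get("container_image"))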
Dec  3 18:02:22 compute-0 podman[192464]: 2025-12-03 18:02:22.592379092 +0000 UTC m=+0.051740279 container create 353a24dce2208d1378162ba737eeea41fcea083eda46eb37c3bd77fc76c5378c (image=quay.io/ceph/ceph:v18, name=vibrant_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:02:22 compute-0 ceph-mon[192463]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Dec  3 18:02:22 compute-0 ceph-mon[192463]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Dec  3 18:02:22 compute-0 ceph-mon[192463]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Dec  3 18:02:22 compute-0 ceph-mon[192463]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Dec  3 18:02:22 compute-0 ceph-mon[192463]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  3 18:02:22 compute-0 ceph-mon[192463]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Dec  3 18:02:22 compute-0 ceph-mon[192463]: mon.compute-0@0(leader).mds e1 new map
Dec  3 18:02:22 compute-0 ceph-mon[192463]: mon.compute-0@0(leader).mds e1 print_map
e1
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
legacy client fscid: -1

No filesystems configured
Dec  3 18:02:22 compute-0 ceph-mon[192463]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Dec  3 18:02:22 compute-0 ceph-mon[192463]: log_channel(cluster) log [DBG] : fsmap 
Dec  3 18:02:22 compute-0 ceph-mon[192463]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Dec  3 18:02:22 compute-0 ceph-mon[192463]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Dec  3 18:02:22 compute-0 ceph-mon[192463]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Dec  3 18:02:22 compute-0 ceph-mon[192463]: mkfs c1caf3ba-b2a5-5005-a11e-e955c344dccc
Dec  3 18:02:22 compute-0 ceph-mon[192463]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Dec  3 18:02:22 compute-0 ceph-mon[192463]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec  3 18:02:22 compute-0 ceph-mon[192463]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec  3 18:02:22 compute-0 ceph-mon[192463]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec  3 18:02:22 compute-0 ceph-mon[192463]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Dec  3 18:02:22 compute-0 ceph-mon[192463]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Dec  3 18:02:22 compute-0 ceph-mon[192463]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec  3 18:02:22 compute-0 ceph-mon[192463]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Dec  3 18:02:22 compute-0 systemd[1]: Started libpod-conmon-353a24dce2208d1378162ba737eeea41fcea083eda46eb37c3bd77fc76c5378c.scope.
Dec  3 18:02:22 compute-0 podman[192464]: 2025-12-03 18:02:22.572541204 +0000 UTC m=+0.031902401 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:02:22 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:02:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0092a9f886aaf26d9b9ae0c6e74a0a034185b81f06a82be4e8710e948eb6d30a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 18:02:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0092a9f886aaf26d9b9ae0c6e74a0a034185b81f06a82be4e8710e948eb6d30a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:02:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0092a9f886aaf26d9b9ae0c6e74a0a034185b81f06a82be4e8710e948eb6d30a/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec  3 18:02:22 compute-0 podman[192464]: 2025-12-03 18:02:22.740326721 +0000 UTC m=+0.199687918 container init 353a24dce2208d1378162ba737eeea41fcea083eda46eb37c3bd77fc76c5378c (image=quay.io/ceph/ceph:v18, name=vibrant_tu, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:02:22 compute-0 podman[192464]: 2025-12-03 18:02:22.751306396 +0000 UTC m=+0.210667573 container start 353a24dce2208d1378162ba737eeea41fcea083eda46eb37c3bd77fc76c5378c (image=quay.io/ceph/ceph:v18, name=vibrant_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  3 18:02:22 compute-0 podman[192464]: 2025-12-03 18:02:22.755948878 +0000 UTC m=+0.215310055 container attach 353a24dce2208d1378162ba737eeea41fcea083eda46eb37c3bd77fc76c5378c (image=quay.io/ceph/ceph:v18, name=vibrant_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:02:23 compute-0 ceph-mon[192463]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Dec  3 18:02:23 compute-0 ceph-mon[192463]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2924773221' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec  3 18:02:23 compute-0 vibrant_tu[192518]:  cluster:
Dec  3 18:02:23 compute-0 vibrant_tu[192518]:    id:     c1caf3ba-b2a5-5005-a11e-e955c344dccc
Dec  3 18:02:23 compute-0 vibrant_tu[192518]:    health: HEALTH_OK
Dec  3 18:02:23 compute-0 vibrant_tu[192518]: 
Dec  3 18:02:23 compute-0 vibrant_tu[192518]:  services:
Dec  3 18:02:23 compute-0 vibrant_tu[192518]:    mon: 1 daemons, quorum compute-0 (age 0.62381s)
Dec  3 18:02:23 compute-0 vibrant_tu[192518]:    mgr: no daemons active
Dec  3 18:02:23 compute-0 vibrant_tu[192518]:    osd: 0 osds: 0 up, 0 in
Dec  3 18:02:23 compute-0 vibrant_tu[192518]: 
Dec  3 18:02:23 compute-0 vibrant_tu[192518]:  data:
Dec  3 18:02:23 compute-0 vibrant_tu[192518]:    pools:   0 pools, 0 pgs
Dec  3 18:02:23 compute-0 vibrant_tu[192518]:    objects: 0 objects, 0 B
Dec  3 18:02:23 compute-0 vibrant_tu[192518]:    usage:   0 B used, 0 B / 0 B avail
Dec  3 18:02:23 compute-0 vibrant_tu[192518]:    pgs:     
Dec  3 18:02:23 compute-0 vibrant_tu[192518]: 
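The status report printed by the vibrant_tu container is the reply to the mon_command {"prefix": "status"} dispatched at 18:02:23 above. The same call can be issued directly with the python-rados binding; a sketch, where the conffile path and the availability of a client.admin keyring are assumptions:

    #!/usr/bin/env python3
    # Sketch: issue the same {"prefix": "status"} mon_command seen above
    # via python-rados. Conffile path and admin keyring are assumptions.
    import json
    import rados

    with rados.Rados(conffile="/etc/ceph/ceph.conf") as cluster:
        ret, outbuf, outs = cluster.mon_command(
            json.dumps({"prefix": "status", "format": "json"}), b"")
        if ret == 0:
            status = json.loads(outbuf)
            print(status["health"]["status"])  # e.g. HEALTH_OK, as above

On failure, ret is nonzero and outs carries the error text rather than the JSON body.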
Dec  3 18:02:23 compute-0 systemd[1]: libpod-353a24dce2208d1378162ba737eeea41fcea083eda46eb37c3bd77fc76c5378c.scope: Deactivated successfully.
Dec  3 18:02:23 compute-0 podman[192464]: 2025-12-03 18:02:23.237143397 +0000 UTC m=+0.696504564 container died 353a24dce2208d1378162ba737eeea41fcea083eda46eb37c3bd77fc76c5378c (image=quay.io/ceph/ceph:v18, name=vibrant_tu, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:02:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-0092a9f886aaf26d9b9ae0c6e74a0a034185b81f06a82be4e8710e948eb6d30a-merged.mount: Deactivated successfully.
Dec  3 18:02:23 compute-0 podman[192464]: 2025-12-03 18:02:23.35291369 +0000 UTC m=+0.812274867 container remove 353a24dce2208d1378162ba737eeea41fcea083eda46eb37c3bd77fc76c5378c (image=quay.io/ceph/ceph:v18, name=vibrant_tu, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:02:23 compute-0 systemd[1]: libpod-conmon-353a24dce2208d1378162ba737eeea41fcea083eda46eb37c3bd77fc76c5378c.scope: Deactivated successfully.
Dec  3 18:02:23 compute-0 podman[192554]: 2025-12-03 18:02:23.430868711 +0000 UTC m=+0.053361509 container create 5ae9927a5c555f04a70f20d09f08c432a343dd4e44d311d30aebc018cea0d07b (image=quay.io/ceph/ceph:v18, name=trusting_jennings, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:02:23 compute-0 podman[192554]: 2025-12-03 18:02:23.409195068 +0000 UTC m=+0.031687916 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:02:23 compute-0 systemd[1]: Started libpod-conmon-5ae9927a5c555f04a70f20d09f08c432a343dd4e44d311d30aebc018cea0d07b.scope.
Dec  3 18:02:23 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:02:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ec089ba231789d8191d804d80893ed2eed2508541f6c12acaca1e344c38e0e9/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 18:02:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ec089ba231789d8191d804d80893ed2eed2508541f6c12acaca1e344c38e0e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:02:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ec089ba231789d8191d804d80893ed2eed2508541f6c12acaca1e344c38e0e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:02:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ec089ba231789d8191d804d80893ed2eed2508541f6c12acaca1e344c38e0e9/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec  3 18:02:23 compute-0 podman[192554]: 2025-12-03 18:02:23.616661612 +0000 UTC m=+0.239154430 container init 5ae9927a5c555f04a70f20d09f08c432a343dd4e44d311d30aebc018cea0d07b (image=quay.io/ceph/ceph:v18, name=trusting_jennings, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:02:23 compute-0 ceph-mon[192463]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec  3 18:02:23 compute-0 podman[192554]: 2025-12-03 18:02:23.637146027 +0000 UTC m=+0.259638825 container start 5ae9927a5c555f04a70f20d09f08c432a343dd4e44d311d30aebc018cea0d07b (image=quay.io/ceph/ceph:v18, name=trusting_jennings, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:02:23 compute-0 podman[192554]: 2025-12-03 18:02:23.650020907 +0000 UTC m=+0.272513855 container attach 5ae9927a5c555f04a70f20d09f08c432a343dd4e44d311d30aebc018cea0d07b (image=quay.io/ceph/ceph:v18, name=trusting_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:02:24 compute-0 ceph-mon[192463]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Dec  3 18:02:24 compute-0 ceph-mon[192463]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/289389644' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec  3 18:02:24 compute-0 ceph-mon[192463]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/289389644' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec  3 18:02:24 compute-0 trusting_jennings[192569]: 
Dec  3 18:02:24 compute-0 trusting_jennings[192569]: [global]
Dec  3 18:02:24 compute-0 trusting_jennings[192569]: 	fsid = c1caf3ba-b2a5-5005-a11e-e955c344dccc
Dec  3 18:02:24 compute-0 trusting_jennings[192569]: 	mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Dec  3 18:02:24 compute-0 trusting_jennings[192569]: 	osd_crush_chooseleaf_type = 0
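The two handle_command entries above are the usual bootstrap pair: config assimilate-conf folds an existing ceph.conf into the cluster configuration database, and config generate-minimal-conf emits the minimal [global] stanza printed by trusting_jennings. A sketch of the same round trip from the CLI, where the file paths are assumptions:

    #!/usr/bin/env python3
    # Sketch: replay the assimilate-conf / generate-minimal-conf pair
    # dispatched above. Both are documented `ceph config` subcommands;
    # the input and output paths here are assumptions.
    import subprocess

    leftover = subprocess.run(
        ["ceph", "config", "assimilate-conf", "-i", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout  # options that could not be assimilated, if any

    minimal = subprocess.run(
        ["ceph", "config", "generate-minimal-conf"],
        check=True, capture_output=True, text=True,
    ).stdout  # the minimal [global] stanza, as printed above

    with open("/etc/ceph/ceph.conf.minimal", "w") as f:
        f.write(minimal)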
Dec  3 18:02:24 compute-0 systemd[1]: libpod-5ae9927a5c555f04a70f20d09f08c432a343dd4e44d311d30aebc018cea0d07b.scope: Deactivated successfully.
Dec  3 18:02:24 compute-0 podman[192554]: 2025-12-03 18:02:24.141408552 +0000 UTC m=+0.763901370 container died 5ae9927a5c555f04a70f20d09f08c432a343dd4e44d311d30aebc018cea0d07b (image=quay.io/ceph/ceph:v18, name=trusting_jennings, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507)
Dec  3 18:02:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ec089ba231789d8191d804d80893ed2eed2508541f6c12acaca1e344c38e0e9-merged.mount: Deactivated successfully.
Dec  3 18:02:24 compute-0 podman[192554]: 2025-12-03 18:02:24.192654579 +0000 UTC m=+0.815147377 container remove 5ae9927a5c555f04a70f20d09f08c432a343dd4e44d311d30aebc018cea0d07b (image=quay.io/ceph/ceph:v18, name=trusting_jennings, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec  3 18:02:24 compute-0 systemd[1]: libpod-conmon-5ae9927a5c555f04a70f20d09f08c432a343dd4e44d311d30aebc018cea0d07b.scope: Deactivated successfully.
Dec  3 18:02:24 compute-0 podman[192609]: 2025-12-03 18:02:24.271767667 +0000 UTC m=+0.057358755 container create 310d76e22cfe9e8a8ae5fa7c09f3bc81f8c8dfcfe63840072ec66560bb256709 (image=quay.io/ceph/ceph:v18, name=zealous_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  3 18:02:24 compute-0 systemd[1]: Started libpod-conmon-310d76e22cfe9e8a8ae5fa7c09f3bc81f8c8dfcfe63840072ec66560bb256709.scope.
Dec  3 18:02:24 compute-0 podman[192609]: 2025-12-03 18:02:24.241004495 +0000 UTC m=+0.026595643 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:02:24 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:02:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f4ad5b26d7cff1b0ae703016af3cbbcffdf0b7ad5bf84ff8234965c460950f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:02:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f4ad5b26d7cff1b0ae703016af3cbbcffdf0b7ad5bf84ff8234965c460950f0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 18:02:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f4ad5b26d7cff1b0ae703016af3cbbcffdf0b7ad5bf84ff8234965c460950f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:02:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f4ad5b26d7cff1b0ae703016af3cbbcffdf0b7ad5bf84ff8234965c460950f0/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec  3 18:02:24 compute-0 podman[192609]: 2025-12-03 18:02:24.385618014 +0000 UTC m=+0.171209102 container init 310d76e22cfe9e8a8ae5fa7c09f3bc81f8c8dfcfe63840072ec66560bb256709 (image=quay.io/ceph/ceph:v18, name=zealous_gagarin, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:02:24 compute-0 podman[192609]: 2025-12-03 18:02:24.40532086 +0000 UTC m=+0.190911918 container start 310d76e22cfe9e8a8ae5fa7c09f3bc81f8c8dfcfe63840072ec66560bb256709 (image=quay.io/ceph/ceph:v18, name=zealous_gagarin, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Dec  3 18:02:24 compute-0 podman[192609]: 2025-12-03 18:02:24.410521364 +0000 UTC m=+0.196112732 container attach 310d76e22cfe9e8a8ae5fa7c09f3bc81f8c8dfcfe63840072ec66560bb256709 (image=quay.io/ceph/ceph:v18, name=zealous_gagarin, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:02:24 compute-0 ceph-mon[192463]: from='client.? 192.168.122.100:0/289389644' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec  3 18:02:24 compute-0 ceph-mon[192463]: from='client.? 192.168.122.100:0/289389644' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec  3 18:02:24 compute-0 ceph-mon[192463]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:02:24 compute-0 ceph-mon[192463]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/334374846' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:02:24 compute-0 systemd[1]: libpod-310d76e22cfe9e8a8ae5fa7c09f3bc81f8c8dfcfe63840072ec66560bb256709.scope: Deactivated successfully.
Dec  3 18:02:24 compute-0 podman[192650]: 2025-12-03 18:02:24.942648282 +0000 UTC m=+0.041427620 container died 310d76e22cfe9e8a8ae5fa7c09f3bc81f8c8dfcfe63840072ec66560bb256709 (image=quay.io/ceph/ceph:v18, name=zealous_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec  3 18:02:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f4ad5b26d7cff1b0ae703016af3cbbcffdf0b7ad5bf84ff8234965c460950f0-merged.mount: Deactivated successfully.
Dec  3 18:02:24 compute-0 podman[192650]: 2025-12-03 18:02:24.991566452 +0000 UTC m=+0.090345710 container remove 310d76e22cfe9e8a8ae5fa7c09f3bc81f8c8dfcfe63840072ec66560bb256709 (image=quay.io/ceph/ceph:v18, name=zealous_gagarin, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:02:24 compute-0 systemd[1]: libpod-conmon-310d76e22cfe9e8a8ae5fa7c09f3bc81f8c8dfcfe63840072ec66560bb256709.scope: Deactivated successfully.
Dec  3 18:02:25 compute-0 systemd[1]: Stopping Ceph mon.compute-0 for c1caf3ba-b2a5-5005-a11e-e955c344dccc...
Dec  3 18:02:25 compute-0 ceph-mon[192463]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Dec  3 18:02:25 compute-0 ceph-mon[192463]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Dec  3 18:02:25 compute-0 ceph-mon[192463]: mon.compute-0@0(leader) e1 shutdown
Dec  3 18:02:25 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mon-compute-0[192459]: 2025-12-03T18:02:25.277+0000 7f35c025e640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Dec  3 18:02:25 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mon-compute-0[192459]: 2025-12-03T18:02:25.277+0000 7f35c025e640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Dec  3 18:02:25 compute-0 ceph-mon[192463]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec  3 18:02:25 compute-0 ceph-mon[192463]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec  3 18:02:25 compute-0 podman[192693]: 2025-12-03 18:02:25.498661705 +0000 UTC m=+0.288963011 container died 3f962d09ac6994e4b5e22edd8278bda42eecdb955056978e9374cfa17fac20f5 (image=quay.io/ceph/ceph:v18, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mon-compute-0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec  3 18:02:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b4ce72116c1cdad135b848e4c9f75c2a06b0941f5f7d45147bfe867c01e2444-merged.mount: Deactivated successfully.
Dec  3 18:02:25 compute-0 podman[192693]: 2025-12-03 18:02:25.54070074 +0000 UTC m=+0.331002026 container remove 3f962d09ac6994e4b5e22edd8278bda42eecdb955056978e9374cfa17fac20f5 (image=quay.io/ceph/ceph:v18, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mon-compute-0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:02:25 compute-0 bash[192693]: ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mon-compute-0
Dec  3 18:02:25 compute-0 systemd[1]: ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc@mon.compute-0.service: Deactivated successfully.
Dec  3 18:02:25 compute-0 systemd[1]: Stopped Ceph mon.compute-0 for c1caf3ba-b2a5-5005-a11e-e955c344dccc.
Dec  3 18:02:25 compute-0 systemd[1]: ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc@mon.compute-0.service: Consumed 1.505s CPU time.
Dec  3 18:02:25 compute-0 systemd[1]: Starting Ceph mon.compute-0 for c1caf3ba-b2a5-5005-a11e-e955c344dccc...
Dec  3 18:02:26 compute-0 podman[192783]: 2025-12-03 18:02:26.067847087 +0000 UTC m=+0.055939660 container create c4418ca0ee5df95c133db330bc8714b98e7c86be83b29540d0d4d94c3c723743 (image=quay.io/ceph/ceph:v18, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mon-compute-0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:02:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40a89df40699a464ce2647539b40585647b6081eb5da89ebcdb8623212bcf1bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:02:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40a89df40699a464ce2647539b40585647b6081eb5da89ebcdb8623212bcf1bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:02:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40a89df40699a464ce2647539b40585647b6081eb5da89ebcdb8623212bcf1bc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:02:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40a89df40699a464ce2647539b40585647b6081eb5da89ebcdb8623212bcf1bc/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec  3 18:02:26 compute-0 podman[192783]: 2025-12-03 18:02:26.047842355 +0000 UTC m=+0.035934948 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:02:26 compute-0 podman[192783]: 2025-12-03 18:02:26.163014153 +0000 UTC m=+0.151106746 container init c4418ca0ee5df95c133db330bc8714b98e7c86be83b29540d0d4d94c3c723743 (image=quay.io/ceph/ceph:v18, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mon-compute-0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:02:26 compute-0 podman[192783]: 2025-12-03 18:02:26.184731467 +0000 UTC m=+0.172824040 container start c4418ca0ee5df95c133db330bc8714b98e7c86be83b29540d0d4d94c3c723743 (image=quay.io/ceph/ceph:v18, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mon-compute-0, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec  3 18:02:26 compute-0 bash[192783]: c4418ca0ee5df95c133db330bc8714b98e7c86be83b29540d0d4d94c3c723743
Dec  3 18:02:26 compute-0 systemd[1]: Started Ceph mon.compute-0 for c1caf3ba-b2a5-5005-a11e-e955c344dccc.
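The stop/start cycle above is the cephadm-managed unit ceph-<fsid>@mon.<host>.service being restarted by systemd. A quick programmatic state check, with the fsid and unit name taken verbatim from the log lines above:

    #!/usr/bin/env python3
    # Sketch: query the state of the cephadm-managed mon unit restarted
    # above. Fsid and unit name are copied from the log.
    import subprocess

    FSID = "c1caf3ba-b2a5-5005-a11e-e955c344dccc"
    UNIT = f"ceph-{FSID}@mon.compute-0.service"

    state = subprocess.run(
        ["systemctl", "is-active", UNIT],
        capture_output=True, text=True,
    ).stdout.strip()
    print(UNIT, "->", state)  # expected "active" after the restart above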
Dec  3 18:02:26 compute-0 ceph-mon[192802]: set uid:gid to 167:167 (ceph:ceph)
Dec  3 18:02:26 compute-0 ceph-mon[192802]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Dec  3 18:02:26 compute-0 ceph-mon[192802]: pidfile_write: ignore empty --pid-file
Dec  3 18:02:26 compute-0 ceph-mon[192802]: load: jerasure load: lrc 
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb: RocksDB version: 7.9.2
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb: Git sha 0
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb: Compile date 2025-05-06 23:30:25
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb: DB SUMMARY
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb: DB Session ID:  TYOLZSJOOVNJYKF8Y1CE
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb: CURRENT file:  CURRENT
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb: IDENTITY file:  IDENTITY
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 54556 ; 
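The DB SUMMARY above names the files making up the mon store (CURRENT, IDENTITY, MANIFEST-000010, 000008.sst, and the 000009.log WAL). A sketch that enumerates them on the host, using the mon_data path from the log; inside the container the same directory is bind-mounted:

    #!/usr/bin/env python3
    # Sketch: list the mon store files referenced by the DB SUMMARY above.
    # Path is the mon_data dir from the log; run on the host as root.
    from pathlib import Path

    STORE = Path("/var/lib/ceph/mon/ceph-compute-0/store.db")

    for p in sorted(STORE.iterdir()):
        print(f"{p.name:20} {p.stat().st_size:>10} bytes")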
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                         Options.error_if_exists: 0
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                       Options.create_if_missing: 0
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                         Options.paranoid_checks: 1
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                                     Options.env: 0x55910f3d5c40
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                                      Options.fs: PosixFileSystem
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                                Options.info_log: 0x559110637040
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                Options.max_file_opening_threads: 16
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                              Options.statistics: (nil)
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                               Options.use_fsync: 0
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                       Options.max_log_file_size: 0
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                       Options.keep_log_file_num: 1000
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                    Options.recycle_log_file_num: 0
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                         Options.allow_fallocate: 1
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                        Options.allow_mmap_reads: 0
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                       Options.allow_mmap_writes: 0
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                        Options.use_direct_reads: 0
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:          Options.create_missing_column_families: 0
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                              Options.db_log_dir: 
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                                 Options.wal_dir: 
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                Options.table_cache_numshardbits: 6
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                   Options.advise_random_on_open: 1
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                    Options.db_write_buffer_size: 0
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                    Options.write_buffer_manager: 0x559110646b40
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                            Options.rate_limiter: (nil)
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                       Options.wal_recovery_mode: 2
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                  Options.enable_thread_tracking: 0
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                  Options.enable_pipelined_write: 0
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                  Options.unordered_write: 0
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                               Options.row_cache: None
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                              Options.wal_filter: None
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:             Options.allow_ingest_behind: 0
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:             Options.two_write_queues: 0
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:             Options.manual_wal_flush: 0
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:             Options.wal_compression: 0
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:             Options.atomic_flush: 0
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                 Options.log_readahead_size: 0
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                 Options.best_efforts_recovery: 0
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:             Options.allow_data_in_errors: 0
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:             Options.db_host_id: __hostname__
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:             Options.enforce_single_del_contracts: true
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:             Options.max_background_jobs: 2
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:             Options.max_background_compactions: -1
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:             Options.max_subcompactions: 1
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:             Options.delayed_write_rate : 16777216
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:             Options.max_total_wal_size: 0
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                          Options.max_open_files: -1
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                          Options.bytes_per_sync: 0
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:       Options.compaction_readahead_size: 0
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                  Options.max_background_flushes: -1
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb: Compression algorithms supported:
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:     kZSTD supported: 0
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:     kXpressCompression supported: 0
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:     kBZip2Compression supported: 0
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:     kZSTDNotFinalCompression supported: 0
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:     kLZ4Compression supported: 1
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:     kZlibCompression supported: 1
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:     kLZ4HCCompression supported: 1
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:     kSnappyCompression supported: 1
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb: Fast CRC32 supported: Supported on x86
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb: DMutex implementation: pthread_mutex_t
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:           Options.merge_operator: 
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:        Options.compaction_filter: None
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559110636c40)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55911062f1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:        Options.write_buffer_size: 33554432
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:  Options.max_write_buffer_number: 2
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:          Options.compression: NoCompression
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:             Options.num_levels: 7
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                           Options.bloom_locality: 0
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                               Options.ttl: 2592000
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                       Options.enable_blob_files: false
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                           Options.min_blob_size: 0
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
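
Note: buried in the table_factory options above is the monitor's block cache geometry: a BinnedLRUCache with capacity 536870912 bytes and num_shard_bits 4. A worked check, pure arithmetic on those two logged values:

    capacity = 536_870_912        # block_cache_options capacity from the log
    num_shard_bits = 4            # from the log
    shards = 2 ** num_shard_bits
    print(shards, "shards of", capacity // shards / 2**20, "MiB")   # 16 shards of 32.0 MiB
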
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: a1ac3b74-8599-4a51-8b4c-6fd35a134427
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764784946233546, "job": 1, "event": "recovery_started", "wal_files": [9]}
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764784946241407, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 54145, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 137, "table_properties": {"data_size": 52687, "index_size": 164, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 261, "raw_key_size": 3023, "raw_average_key_size": 30, "raw_value_size": 50289, "raw_average_value_size": 502, "num_data_blocks": 8, "num_entries": 100, "num_filter_entries": 100, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764784946, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a1ac3b74-8599-4a51-8b4c-6fd35a134427", "db_session_id": "TYOLZSJOOVNJYKF8Y1CE", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764784946241593, "job": 1, "event": "recovery_finished"}
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x559110658e00
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb: DB pointer 0x5591106e2000
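
Note: the recovery sequence above (recovery_started, table_file_creation, recovery_finished) is emitted as EVENT_LOG_v1 records, which are plain JSON after a fixed marker. A minimal sketch for extracting them from a saved copy of this log; the file name is a placeholder:

    import json

    with open("messages.log") as fh:          # placeholder path to a saved log
        for line in fh:
            _, marker, payload = line.partition("EVENT_LOG_v1 ")
            if marker:
                event = json.loads(payload)
                print(event["time_micros"], event["event"])
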
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 18:02:26 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 0.0 total, 0.0 interval
Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0   54.77 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      6.8      0.01              0.00         1    0.008       0      0       0.0       0.0
 Sum      2/0   54.77 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      6.8      0.01              0.00         1    0.008       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      6.8      0.01              0.00         1    0.008       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      6.8      0.01              0.00         1    0.008       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.0 total, 0.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 1.97 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 1.97 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55911062f1f0#2 capacity: 512.00 MB usage: 25.89 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 6.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,25.11 KB,0.00478923%) FilterBlock(2,0.42 KB,8.04663e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Dec  3 18:02:26 compute-0 ceph-mon[192802]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid c1caf3ba-b2a5-5005-a11e-e955c344dccc
Dec  3 18:02:26 compute-0 ceph-mon[192802]: mon.compute-0@-1(???) e1 preinit fsid c1caf3ba-b2a5-5005-a11e-e955c344dccc
Dec  3 18:02:26 compute-0 ceph-mon[192802]: mon.compute-0@-1(???).mds e1 new map
Dec  3 18:02:26 compute-0 ceph-mon[192802]: mon.compute-0@-1(???).mds e1 print_map
e1
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
legacy client fscid: -1

No filesystems configured
Dec  3 18:02:26 compute-0 ceph-mon[192802]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Dec  3 18:02:26 compute-0 ceph-mon[192802]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec  3 18:02:26 compute-0 ceph-mon[192802]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec  3 18:02:26 compute-0 ceph-mon[192802]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec  3 18:02:26 compute-0 ceph-mon[192802]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Dec  3 18:02:26 compute-0 ceph-mon[192802]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Dec  3 18:02:26 compute-0 ceph-mon[192802]: mon.compute-0@0(probing) e1 win_standalone_election
Dec  3 18:02:26 compute-0 ceph-mon[192802]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Dec  3 18:02:26 compute-0 ceph-mon[192802]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  3 18:02:26 compute-0 ceph-mon[192802]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec  3 18:02:26 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec  3 18:02:26 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  3 18:02:26 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : fsmap 
Dec  3 18:02:26 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Dec  3 18:02:26 compute-0 podman[192803]: 2025-12-03 18:02:26.279581055 +0000 UTC m=+0.055087359 container create 5290b9fb4762aa7aa2d2097727fa5fd8cfde6a13149aa3575b311c9078096796 (image=quay.io/ceph/ceph:v18, name=nostalgic_bouman, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:02:26 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
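
Note: with a single monitor, win_standalone_election immediately produces a quorum of rank 0, which the [INF] leader message and the monmap line above confirm. A minimal sketch to verify the same quorum via the standard CLI, assuming the ceph client is usable on this host:

    import json
    import subprocess

    out = subprocess.check_output(["ceph", "quorum_status", "--format", "json"])
    qs = json.loads(out)
    print(qs["quorum_names"])           # expected: ["compute-0"]
    print(qs["quorum_leader_name"])     # expected: "compute-0"
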
Dec  3 18:02:26 compute-0 systemd[1]: Started libpod-conmon-5290b9fb4762aa7aa2d2097727fa5fd8cfde6a13149aa3575b311c9078096796.scope.
Dec  3 18:02:26 compute-0 ceph-mon[192802]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec  3 18:02:26 compute-0 podman[192803]: 2025-12-03 18:02:26.259385698 +0000 UTC m=+0.034892013 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:02:26 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:02:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c5c36913f46a929bab04989d09cf1e725b727800568718eba274bde5ee76677/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:02:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c5c36913f46a929bab04989d09cf1e725b727800568718eba274bde5ee76677/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 18:02:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c5c36913f46a929bab04989d09cf1e725b727800568718eba274bde5ee76677/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
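
Note: the kernel lines above are informational y2038 notices for the xfs-backed bind mounts: 0x7fffffff is the largest 32-bit signed Unix timestamp those inodes can represent. A worked check of that limit:

    from datetime import datetime, timezone

    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00
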
Dec  3 18:02:26 compute-0 podman[192803]: 2025-12-03 18:02:26.422980155 +0000 UTC m=+0.198486559 container init 5290b9fb4762aa7aa2d2097727fa5fd8cfde6a13149aa3575b311c9078096796 (image=quay.io/ceph/ceph:v18, name=nostalgic_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec  3 18:02:26 compute-0 podman[192803]: 2025-12-03 18:02:26.432814963 +0000 UTC m=+0.208321267 container start 5290b9fb4762aa7aa2d2097727fa5fd8cfde6a13149aa3575b311c9078096796 (image=quay.io/ceph/ceph:v18, name=nostalgic_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  3 18:02:26 compute-0 podman[192803]: 2025-12-03 18:02:26.438840088 +0000 UTC m=+0.214346402 container attach 5290b9fb4762aa7aa2d2097727fa5fd8cfde6a13149aa3575b311c9078096796 (image=quay.io/ceph/ceph:v18, name=nostalgic_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  3 18:02:26 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0) v1
Dec  3 18:02:26 compute-0 systemd[1]: libpod-5290b9fb4762aa7aa2d2097727fa5fd8cfde6a13149aa3575b311c9078096796.scope: Deactivated successfully.
Dec  3 18:02:26 compute-0 conmon[192854]: conmon 5290b9fb4762aa7aa2d2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5290b9fb4762aa7aa2d2097727fa5fd8cfde6a13149aa3575b311c9078096796.scope/container/memory.events
Dec  3 18:02:26 compute-0 podman[192803]: 2025-12-03 18:02:26.885435262 +0000 UTC m=+0.660941616 container died 5290b9fb4762aa7aa2d2097727fa5fd8cfde6a13149aa3575b311c9078096796 (image=quay.io/ceph/ceph:v18, name=nostalgic_bouman, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec  3 18:02:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-3c5c36913f46a929bab04989d09cf1e725b727800568718eba274bde5ee76677-merged.mount: Deactivated successfully.
Dec  3 18:02:26 compute-0 podman[192803]: 2025-12-03 18:02:26.988935529 +0000 UTC m=+0.764441833 container remove 5290b9fb4762aa7aa2d2097727fa5fd8cfde6a13149aa3575b311c9078096796 (image=quay.io/ceph/ceph:v18, name=nostalgic_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:02:27 compute-0 systemd[1]: libpod-conmon-5290b9fb4762aa7aa2d2097727fa5fd8cfde6a13149aa3575b311c9078096796.scope: Deactivated successfully.
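
Note: the nostalgic_bouman lifecycle above (create, init, start, attach, died, remove, all within roughly a second) is cephadm running a one-shot ceph command in a throwaway container. The by-hand equivalent would look like the sketch below; the actual command cephadm executed is not recorded in the log, so ceph --version stands in as a placeholder:

    import subprocess

    subprocess.run(
        ["podman", "run", "--rm", "quay.io/ceph/ceph:v18", "ceph", "--version"],
        check=True,   # raise if the one-shot container exits non-zero
    )
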
Dec  3 18:02:27 compute-0 podman[192894]: 2025-12-03 18:02:27.075380325 +0000 UTC m=+0.058098404 container create fd2fbb4d2f5802b1f6839187cd495320fca1964bd17c8051819d191d4cd8e9bc (image=quay.io/ceph/ceph:v18, name=charming_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  3 18:02:27 compute-0 systemd[1]: Started libpod-conmon-fd2fbb4d2f5802b1f6839187cd495320fca1964bd17c8051819d191d4cd8e9bc.scope.
Dec  3 18:02:27 compute-0 podman[192894]: 2025-12-03 18:02:27.052174014 +0000 UTC m=+0.034892203 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:02:27 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:02:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11b97454eb5640518f888144ae44e3aa30bf761a7164c732629a0de907e3bc4b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:02:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11b97454eb5640518f888144ae44e3aa30bf761a7164c732629a0de907e3bc4b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 18:02:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11b97454eb5640518f888144ae44e3aa30bf761a7164c732629a0de907e3bc4b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:02:27 compute-0 podman[192894]: 2025-12-03 18:02:27.175677694 +0000 UTC m=+0.158395813 container init fd2fbb4d2f5802b1f6839187cd495320fca1964bd17c8051819d191d4cd8e9bc (image=quay.io/ceph/ceph:v18, name=charming_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec  3 18:02:27 compute-0 podman[192894]: 2025-12-03 18:02:27.186997917 +0000 UTC m=+0.169716026 container start fd2fbb4d2f5802b1f6839187cd495320fca1964bd17c8051819d191d4cd8e9bc (image=quay.io/ceph/ceph:v18, name=charming_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:02:27 compute-0 podman[192894]: 2025-12-03 18:02:27.192660203 +0000 UTC m=+0.175378312 container attach fd2fbb4d2f5802b1f6839187cd495320fca1964bd17c8051819d191d4cd8e9bc (image=quay.io/ceph/ceph:v18, name=charming_lamport, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec  3 18:02:27 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0) v1
Dec  3 18:02:27 compute-0 systemd[1]: libpod-fd2fbb4d2f5802b1f6839187cd495320fca1964bd17c8051819d191d4cd8e9bc.scope: Deactivated successfully.
Dec  3 18:02:27 compute-0 podman[192894]: 2025-12-03 18:02:27.628744044 +0000 UTC m=+0.611462133 container died fd2fbb4d2f5802b1f6839187cd495320fca1964bd17c8051819d191d4cd8e9bc (image=quay.io/ceph/ceph:v18, name=charming_lamport, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec  3 18:02:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-11b97454eb5640518f888144ae44e3aa30bf761a7164c732629a0de907e3bc4b-merged.mount: Deactivated successfully.
Dec  3 18:02:27 compute-0 podman[192894]: 2025-12-03 18:02:27.698070686 +0000 UTC m=+0.680788775 container remove fd2fbb4d2f5802b1f6839187cd495320fca1964bd17c8051819d191d4cd8e9bc (image=quay.io/ceph/ceph:v18, name=charming_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  3 18:02:27 compute-0 systemd[1]: libpod-conmon-fd2fbb4d2f5802b1f6839187cd495320fca1964bd17c8051819d191d4cd8e9bc.scope: Deactivated successfully.
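
Note: the two handle_command entries at 18:02:26 and 18:02:27 show bootstrap applying public_network and cluster_network through "ceph config set". The log records only the option names, not the values, so the CIDR in this sketch is a placeholder and not the value actually applied:

    import subprocess

    # placeholder CIDR; the real value is not present in the log
    subprocess.run(["ceph", "config", "set", "global",
                    "public_network", "192.168.122.0/24"], check=True)
    # cluster_network is set the same way with its own CIDR
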
Dec  3 18:02:27 compute-0 systemd[1]: Reloading.
Dec  3 18:02:27 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 18:02:27 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 18:02:28 compute-0 systemd[1]: Reloading.
Dec  3 18:02:28 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 18:02:28 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 18:02:28 compute-0 systemd[1]: Starting Ceph mgr.compute-0.etccde for c1caf3ba-b2a5-5005-a11e-e955c344dccc...
Dec  3 18:02:29 compute-0 podman[193072]: 2025-12-03 18:02:29.003540752 +0000 UTC m=+0.091153780 container create 6854398d0c0725a945f311550ba9133cf35659a0e4a7ab4829b80cb6439c89f2 (image=quay.io/ceph/ceph:v18, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-etccde, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec  3 18:02:29 compute-0 podman[193072]: 2025-12-03 18:02:28.970630227 +0000 UTC m=+0.058243325 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:02:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8465abf827f121a84d0c97ca70b8e28fd0e06bc43094169722126f4533779d1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:02:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8465abf827f121a84d0c97ca70b8e28fd0e06bc43094169722126f4533779d1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:02:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8465abf827f121a84d0c97ca70b8e28fd0e06bc43094169722126f4533779d1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:02:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8465abf827f121a84d0c97ca70b8e28fd0e06bc43094169722126f4533779d1/merged/var/lib/ceph/mgr/ceph-compute-0.etccde supports timestamps until 2038 (0x7fffffff)
Dec  3 18:02:29 compute-0 podman[193072]: 2025-12-03 18:02:29.102782686 +0000 UTC m=+0.190395734 container init 6854398d0c0725a945f311550ba9133cf35659a0e4a7ab4829b80cb6439c89f2 (image=quay.io/ceph/ceph:v18, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-etccde, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Dec  3 18:02:29 compute-0 podman[193072]: 2025-12-03 18:02:29.138793944 +0000 UTC m=+0.226406962 container start 6854398d0c0725a945f311550ba9133cf35659a0e4a7ab4829b80cb6439c89f2 (image=quay.io/ceph/ceph:v18, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-etccde, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:02:29 compute-0 bash[193072]: 6854398d0c0725a945f311550ba9133cf35659a0e4a7ab4829b80cb6439c89f2
Dec  3 18:02:29 compute-0 systemd[1]: Started Ceph mgr.compute-0.etccde for c1caf3ba-b2a5-5005-a11e-e955c344dccc.
Dec  3 18:02:29 compute-0 ceph-mgr[193091]: set uid:gid to 167:167 (ceph:ceph)
Dec  3 18:02:29 compute-0 ceph-mgr[193091]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Dec  3 18:02:29 compute-0 ceph-mgr[193091]: pidfile_write: ignore empty --pid-file
Dec  3 18:02:29 compute-0 podman[193092]: 2025-12-03 18:02:29.247275492 +0000 UTC m=+0.063265317 container create f4bcbb14c43afcdf2d3423f2bf0dd486d66cd2ade8aec877efa7a66c9eb0f772 (image=quay.io/ceph/ceph:v18, name=beautiful_beaver, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec  3 18:02:29 compute-0 podman[193092]: 2025-12-03 18:02:29.221403347 +0000 UTC m=+0.037393202 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:02:29 compute-0 systemd[1]: Started libpod-conmon-f4bcbb14c43afcdf2d3423f2bf0dd486d66cd2ade8aec877efa7a66c9eb0f772.scope.
Dec  3 18:02:29 compute-0 ceph-mgr[193091]: mgr[py] Loading python module 'alerts'
Dec  3 18:02:29 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:02:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d3060b9ebd9dbb12275552c9dadddd208408b115f43e5086e6c52e728e70a17/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 18:02:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d3060b9ebd9dbb12275552c9dadddd208408b115f43e5086e6c52e728e70a17/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:02:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d3060b9ebd9dbb12275552c9dadddd208408b115f43e5086e6c52e728e70a17/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:02:29 compute-0 podman[193092]: 2025-12-03 18:02:29.389348419 +0000 UTC m=+0.205338264 container init f4bcbb14c43afcdf2d3423f2bf0dd486d66cd2ade8aec877efa7a66c9eb0f772 (image=quay.io/ceph/ceph:v18, name=beautiful_beaver, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:02:29 compute-0 podman[193092]: 2025-12-03 18:02:29.400805275 +0000 UTC m=+0.216795100 container start f4bcbb14c43afcdf2d3423f2bf0dd486d66cd2ade8aec877efa7a66c9eb0f772 (image=quay.io/ceph/ceph:v18, name=beautiful_beaver, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:02:29 compute-0 podman[193092]: 2025-12-03 18:02:29.435227466 +0000 UTC m=+0.251217301 container attach f4bcbb14c43afcdf2d3423f2bf0dd486d66cd2ade8aec877efa7a66c9eb0f772 (image=quay.io/ceph/ceph:v18, name=beautiful_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:02:29 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-etccde[193087]: 2025-12-03T18:02:29.616+0000 7fb691c2a140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  3 18:02:29 compute-0 ceph-mgr[193091]: mgr[py] Module alerts has missing NOTIFY_TYPES member
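
Note: the recurring "missing NOTIFY_TYPES member" warnings mean this mgr release expects each Python module to declare which cluster-map updates it consumes. A sketch of what a conforming module declares; this only imports inside the ceph-mgr runtime and is illustrative, not the alerts module's actual code:

    from mgr_module import MgrModule, NotifyType

    class Example(MgrModule):
        # advertise the notifications this module wants to receive
        NOTIFY_TYPES = [NotifyType.mon_map, NotifyType.health]

        def notify(self, notify_type: NotifyType, notify_id: str) -> None:
            self.log.debug("got notification: %s", notify_type)
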
Dec  3 18:02:29 compute-0 ceph-mgr[193091]: mgr[py] Loading python module 'balancer'
Dec  3 18:02:29 compute-0 podman[158200]: time="2025-12-03T18:02:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:02:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:02:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 23488 "" "Go-http-client/1.1"
Dec  3 18:02:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:02:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4334 "" "Go-http-client/1.1"
Dec  3 18:02:29 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Dec  3 18:02:29 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2178916103' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec  3 18:02:29 compute-0 beautiful_beaver[193132]: 
Dec  3 18:02:29 compute-0 beautiful_beaver[193132]: {
Dec  3 18:02:29 compute-0 beautiful_beaver[193132]:    "fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:02:29 compute-0 beautiful_beaver[193132]:    "health": {
Dec  3 18:02:29 compute-0 beautiful_beaver[193132]:        "status": "HEALTH_OK",
Dec  3 18:02:29 compute-0 beautiful_beaver[193132]:        "checks": {},
Dec  3 18:02:29 compute-0 beautiful_beaver[193132]:        "mutes": []
Dec  3 18:02:29 compute-0 beautiful_beaver[193132]:    },
Dec  3 18:02:29 compute-0 beautiful_beaver[193132]:    "election_epoch": 5,
Dec  3 18:02:29 compute-0 beautiful_beaver[193132]:    "quorum": [
Dec  3 18:02:29 compute-0 beautiful_beaver[193132]:        0
Dec  3 18:02:29 compute-0 beautiful_beaver[193132]:    ],
Dec  3 18:02:29 compute-0 beautiful_beaver[193132]:    "quorum_names": [
Dec  3 18:02:29 compute-0 beautiful_beaver[193132]:        "compute-0"
Dec  3 18:02:29 compute-0 beautiful_beaver[193132]:    ],
Dec  3 18:02:29 compute-0 beautiful_beaver[193132]:    "quorum_age": 3,
Dec  3 18:02:29 compute-0 beautiful_beaver[193132]:    "monmap": {
Dec  3 18:02:29 compute-0 beautiful_beaver[193132]:        "epoch": 1,
Dec  3 18:02:29 compute-0 beautiful_beaver[193132]:        "min_mon_release_name": "reef",
Dec  3 18:02:29 compute-0 beautiful_beaver[193132]:        "num_mons": 1
Dec  3 18:02:29 compute-0 beautiful_beaver[193132]:    },
Dec  3 18:02:29 compute-0 beautiful_beaver[193132]:    "osdmap": {
Dec  3 18:02:29 compute-0 beautiful_beaver[193132]:        "epoch": 1,
Dec  3 18:02:29 compute-0 beautiful_beaver[193132]:        "num_osds": 0,
Dec  3 18:02:29 compute-0 beautiful_beaver[193132]:        "num_up_osds": 0,
Dec  3 18:02:29 compute-0 beautiful_beaver[193132]:        "osd_up_since": 0,
Dec  3 18:02:29 compute-0 beautiful_beaver[193132]:        "num_in_osds": 0,
Dec  3 18:02:29 compute-0 beautiful_beaver[193132]:        "osd_in_since": 0,
Dec  3 18:02:29 compute-0 beautiful_beaver[193132]:        "num_remapped_pgs": 0
Dec  3 18:02:29 compute-0 beautiful_beaver[193132]:    },
Dec  3 18:02:29 compute-0 beautiful_beaver[193132]:    "pgmap": {
Dec  3 18:02:29 compute-0 beautiful_beaver[193132]:        "pgs_by_state": [],
Dec  3 18:02:29 compute-0 beautiful_beaver[193132]:        "num_pgs": 0,
Dec  3 18:02:29 compute-0 beautiful_beaver[193132]:        "num_pools": 0,
Dec  3 18:02:29 compute-0 beautiful_beaver[193132]:        "num_objects": 0,
Dec  3 18:02:29 compute-0 beautiful_beaver[193132]:        "data_bytes": 0,
Dec  3 18:02:29 compute-0 beautiful_beaver[193132]:        "bytes_used": 0,
Dec  3 18:02:29 compute-0 beautiful_beaver[193132]:        "bytes_avail": 0,
Dec  3 18:02:29 compute-0 beautiful_beaver[193132]:        "bytes_total": 0
Dec  3 18:02:29 compute-0 beautiful_beaver[193132]:    },
Dec  3 18:02:29 compute-0 beautiful_beaver[193132]:    "fsmap": {
Dec  3 18:02:29 compute-0 beautiful_beaver[193132]:        "epoch": 1,
Dec  3 18:02:29 compute-0 beautiful_beaver[193132]:        "by_rank": [],
Dec  3 18:02:29 compute-0 beautiful_beaver[193132]:        "up:standby": 0
Dec  3 18:02:29 compute-0 beautiful_beaver[193132]:    },
Dec  3 18:02:29 compute-0 beautiful_beaver[193132]:    "mgrmap": {
Dec  3 18:02:29 compute-0 beautiful_beaver[193132]:        "available": false,
Dec  3 18:02:29 compute-0 beautiful_beaver[193132]:        "num_standbys": 0,
Dec  3 18:02:29 compute-0 beautiful_beaver[193132]:        "modules": [
Dec  3 18:02:29 compute-0 beautiful_beaver[193132]:            "iostat",
Dec  3 18:02:29 compute-0 beautiful_beaver[193132]:            "nfs",
Dec  3 18:02:29 compute-0 beautiful_beaver[193132]:            "restful"
Dec  3 18:02:29 compute-0 beautiful_beaver[193132]:        ],
Dec  3 18:02:29 compute-0 beautiful_beaver[193132]:        "services": {}
Dec  3 18:02:29 compute-0 beautiful_beaver[193132]:    },
Dec  3 18:02:29 compute-0 beautiful_beaver[193132]:    "servicemap": {
Dec  3 18:02:29 compute-0 beautiful_beaver[193132]:        "epoch": 1,
Dec  3 18:02:29 compute-0 beautiful_beaver[193132]:        "modified": "2025-12-03T18:02:22.594284+0000",
Dec  3 18:02:29 compute-0 beautiful_beaver[193132]:        "services": {}
Dec  3 18:02:29 compute-0 beautiful_beaver[193132]:    },
Dec  3 18:02:29 compute-0 beautiful_beaver[193132]:    "progress_events": {}
Dec  3 18:02:29 compute-0 beautiful_beaver[193132]: }
Dec  3 18:02:29 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-etccde[193087]: 2025-12-03T18:02:29.869+0000 7fb691c2a140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  3 18:02:29 compute-0 ceph-mgr[193091]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  3 18:02:29 compute-0 ceph-mgr[193091]: mgr[py] Loading python module 'cephadm'
Dec  3 18:02:29 compute-0 systemd[1]: libpod-f4bcbb14c43afcdf2d3423f2bf0dd486d66cd2ade8aec877efa7a66c9eb0f772.scope: Deactivated successfully.
Dec  3 18:02:29 compute-0 podman[193092]: 2025-12-03 18:02:29.893127733 +0000 UTC m=+0.709117568 container died f4bcbb14c43afcdf2d3423f2bf0dd486d66cd2ade8aec877efa7a66c9eb0f772 (image=quay.io/ceph/ceph:v18, name=beautiful_beaver, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec  3 18:02:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d3060b9ebd9dbb12275552c9dadddd208408b115f43e5086e6c52e728e70a17-merged.mount: Deactivated successfully.
Dec  3 18:02:29 compute-0 podman[193092]: 2025-12-03 18:02:29.952975407 +0000 UTC m=+0.768965232 container remove f4bcbb14c43afcdf2d3423f2bf0dd486d66cd2ade8aec877efa7a66c9eb0f772 (image=quay.io/ceph/ceph:v18, name=beautiful_beaver, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:02:29 compute-0 systemd[1]: libpod-conmon-f4bcbb14c43afcdf2d3423f2bf0dd486d66cd2ade8aec877efa7a66c9eb0f772.scope: Deactivated successfully.
Dec  3 18:02:30 compute-0 podman[193169]: 2025-12-03 18:02:30.962287207 +0000 UTC m=+0.121175575 container health_status 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 18:02:31 compute-0 openstack_network_exporter[160319]: ERROR   18:02:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:02:31 compute-0 openstack_network_exporter[160319]: ERROR   18:02:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:02:31 compute-0 openstack_network_exporter[160319]: ERROR   18:02:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:02:31 compute-0 openstack_network_exporter[160319]: ERROR   18:02:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:02:31 compute-0 openstack_network_exporter[160319]: 
Dec  3 18:02:31 compute-0 openstack_network_exporter[160319]: ERROR   18:02:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:02:31 compute-0 openstack_network_exporter[160319]: 
Dec  3 18:02:31 compute-0 ceph-mgr[193091]: mgr[py] Loading python module 'crash'
Dec  3 18:02:32 compute-0 podman[193204]: 2025-12-03 18:02:32.106132062 +0000 UTC m=+0.120993961 container create db40670e7ff2ac31f51f668a4e165daf21c48f0796e497cf0b72de09d54db2ac (image=quay.io/ceph/ceph:v18, name=vigorous_wiles, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  3 18:02:32 compute-0 podman[193204]: 2025-12-03 18:02:32.021536611 +0000 UTC m=+0.036398320 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:02:32 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-etccde[193087]: 2025-12-03T18:02:32.157+0000 7fb691c2a140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  3 18:02:32 compute-0 ceph-mgr[193091]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  3 18:02:32 compute-0 ceph-mgr[193091]: mgr[py] Loading python module 'dashboard'
Dec  3 18:02:32 compute-0 systemd[1]: Started libpod-conmon-db40670e7ff2ac31f51f668a4e165daf21c48f0796e497cf0b72de09d54db2ac.scope.
Dec  3 18:02:32 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:02:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64fe4daa9233b4ddfd2625c3480fc999604683512d0dc1ba37511757a8288e37/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 18:02:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64fe4daa9233b4ddfd2625c3480fc999604683512d0dc1ba37511757a8288e37/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:02:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64fe4daa9233b4ddfd2625c3480fc999604683512d0dc1ba37511757a8288e37/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:02:32 compute-0 podman[193204]: 2025-12-03 18:02:32.259565083 +0000 UTC m=+0.274426862 container init db40670e7ff2ac31f51f668a4e165daf21c48f0796e497cf0b72de09d54db2ac (image=quay.io/ceph/ceph:v18, name=vigorous_wiles, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Dec  3 18:02:32 compute-0 podman[193204]: 2025-12-03 18:02:32.285032957 +0000 UTC m=+0.299894686 container start db40670e7ff2ac31f51f668a4e165daf21c48f0796e497cf0b72de09d54db2ac (image=quay.io/ceph/ceph:v18, name=vigorous_wiles, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec  3 18:02:32 compute-0 podman[193204]: 2025-12-03 18:02:32.291739199 +0000 UTC m=+0.306600948 container attach db40670e7ff2ac31f51f668a4e165daf21c48f0796e497cf0b72de09d54db2ac (image=quay.io/ceph/ceph:v18, name=vigorous_wiles, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec  3 18:02:32 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Dec  3 18:02:32 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1443773383' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec  3 18:02:32 compute-0 vigorous_wiles[193220]: 
Dec  3 18:02:32 compute-0 vigorous_wiles[193220]: {
Dec  3 18:02:32 compute-0 vigorous_wiles[193220]:    "fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:02:32 compute-0 vigorous_wiles[193220]:    "health": {
Dec  3 18:02:32 compute-0 vigorous_wiles[193220]:        "status": "HEALTH_OK",
Dec  3 18:02:32 compute-0 vigorous_wiles[193220]:        "checks": {},
Dec  3 18:02:32 compute-0 vigorous_wiles[193220]:        "mutes": []
Dec  3 18:02:32 compute-0 vigorous_wiles[193220]:    },
Dec  3 18:02:32 compute-0 vigorous_wiles[193220]:    "election_epoch": 5,
Dec  3 18:02:32 compute-0 vigorous_wiles[193220]:    "quorum": [
Dec  3 18:02:32 compute-0 vigorous_wiles[193220]:        0
Dec  3 18:02:32 compute-0 vigorous_wiles[193220]:    ],
Dec  3 18:02:32 compute-0 vigorous_wiles[193220]:    "quorum_names": [
Dec  3 18:02:32 compute-0 vigorous_wiles[193220]:        "compute-0"
Dec  3 18:02:32 compute-0 vigorous_wiles[193220]:    ],
Dec  3 18:02:32 compute-0 vigorous_wiles[193220]:    "quorum_age": 6,
Dec  3 18:02:32 compute-0 vigorous_wiles[193220]:    "monmap": {
Dec  3 18:02:32 compute-0 vigorous_wiles[193220]:        "epoch": 1,
Dec  3 18:02:32 compute-0 vigorous_wiles[193220]:        "min_mon_release_name": "reef",
Dec  3 18:02:32 compute-0 vigorous_wiles[193220]:        "num_mons": 1
Dec  3 18:02:32 compute-0 vigorous_wiles[193220]:    },
Dec  3 18:02:32 compute-0 vigorous_wiles[193220]:    "osdmap": {
Dec  3 18:02:32 compute-0 vigorous_wiles[193220]:        "epoch": 1,
Dec  3 18:02:32 compute-0 vigorous_wiles[193220]:        "num_osds": 0,
Dec  3 18:02:32 compute-0 vigorous_wiles[193220]:        "num_up_osds": 0,
Dec  3 18:02:32 compute-0 vigorous_wiles[193220]:        "osd_up_since": 0,
Dec  3 18:02:32 compute-0 vigorous_wiles[193220]:        "num_in_osds": 0,
Dec  3 18:02:32 compute-0 vigorous_wiles[193220]:        "osd_in_since": 0,
Dec  3 18:02:32 compute-0 vigorous_wiles[193220]:        "num_remapped_pgs": 0
Dec  3 18:02:32 compute-0 vigorous_wiles[193220]:    },
Dec  3 18:02:32 compute-0 vigorous_wiles[193220]:    "pgmap": {
Dec  3 18:02:32 compute-0 vigorous_wiles[193220]:        "pgs_by_state": [],
Dec  3 18:02:32 compute-0 vigorous_wiles[193220]:        "num_pgs": 0,
Dec  3 18:02:32 compute-0 vigorous_wiles[193220]:        "num_pools": 0,
Dec  3 18:02:32 compute-0 vigorous_wiles[193220]:        "num_objects": 0,
Dec  3 18:02:32 compute-0 vigorous_wiles[193220]:        "data_bytes": 0,
Dec  3 18:02:32 compute-0 vigorous_wiles[193220]:        "bytes_used": 0,
Dec  3 18:02:32 compute-0 vigorous_wiles[193220]:        "bytes_avail": 0,
Dec  3 18:02:32 compute-0 vigorous_wiles[193220]:        "bytes_total": 0
Dec  3 18:02:32 compute-0 vigorous_wiles[193220]:    },
Dec  3 18:02:32 compute-0 vigorous_wiles[193220]:    "fsmap": {
Dec  3 18:02:32 compute-0 vigorous_wiles[193220]:        "epoch": 1,
Dec  3 18:02:32 compute-0 vigorous_wiles[193220]:        "by_rank": [],
Dec  3 18:02:32 compute-0 vigorous_wiles[193220]:        "up:standby": 0
Dec  3 18:02:32 compute-0 vigorous_wiles[193220]:    },
Dec  3 18:02:32 compute-0 vigorous_wiles[193220]:    "mgrmap": {
Dec  3 18:02:32 compute-0 vigorous_wiles[193220]:        "available": false,
Dec  3 18:02:32 compute-0 vigorous_wiles[193220]:        "num_standbys": 0,
Dec  3 18:02:32 compute-0 vigorous_wiles[193220]:        "modules": [
Dec  3 18:02:32 compute-0 vigorous_wiles[193220]:            "iostat",
Dec  3 18:02:32 compute-0 vigorous_wiles[193220]:            "nfs",
Dec  3 18:02:32 compute-0 vigorous_wiles[193220]:            "restful"
Dec  3 18:02:32 compute-0 vigorous_wiles[193220]:        ],
Dec  3 18:02:32 compute-0 vigorous_wiles[193220]:        "services": {}
Dec  3 18:02:32 compute-0 vigorous_wiles[193220]:    },
Dec  3 18:02:32 compute-0 vigorous_wiles[193220]:    "servicemap": {
Dec  3 18:02:32 compute-0 vigorous_wiles[193220]:        "epoch": 1,
Dec  3 18:02:32 compute-0 vigorous_wiles[193220]:        "modified": "2025-12-03T18:02:22.594284+0000",
Dec  3 18:02:32 compute-0 vigorous_wiles[193220]:        "services": {}
Dec  3 18:02:32 compute-0 vigorous_wiles[193220]:    },
Dec  3 18:02:32 compute-0 vigorous_wiles[193220]:    "progress_events": {}
Dec  3 18:02:32 compute-0 vigorous_wiles[193220]: }
Dec  3 18:02:32 compute-0 systemd[1]: libpod-db40670e7ff2ac31f51f668a4e165daf21c48f0796e497cf0b72de09d54db2ac.scope: Deactivated successfully.
Dec  3 18:02:32 compute-0 podman[193204]: 2025-12-03 18:02:32.757566418 +0000 UTC m=+0.772428147 container died db40670e7ff2ac31f51f668a4e165daf21c48f0796e497cf0b72de09d54db2ac (image=quay.io/ceph/ceph:v18, name=vigorous_wiles, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:02:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-64fe4daa9233b4ddfd2625c3480fc999604683512d0dc1ba37511757a8288e37-merged.mount: Deactivated successfully.
Dec  3 18:02:33 compute-0 podman[193204]: 2025-12-03 18:02:33.001029151 +0000 UTC m=+1.015890870 container remove db40670e7ff2ac31f51f668a4e165daf21c48f0796e497cf0b72de09d54db2ac (image=quay.io/ceph/ceph:v18, name=vigorous_wiles, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec  3 18:02:33 compute-0 systemd[1]: libpod-conmon-db40670e7ff2ac31f51f668a4e165daf21c48f0796e497cf0b72de09d54db2ac.scope: Deactivated successfully.
Dec  3 18:02:33 compute-0 ceph-mgr[193091]: mgr[py] Loading python module 'devicehealth'
Dec  3 18:02:33 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-etccde[193087]: 2025-12-03T18:02:33.956+0000 7fb691c2a140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  3 18:02:33 compute-0 ceph-mgr[193091]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  3 18:02:33 compute-0 ceph-mgr[193091]: mgr[py] Loading python module 'diskprediction_local'
Dec  3 18:02:34 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-etccde[193087]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec  3 18:02:34 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-etccde[193087]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec  3 18:02:34 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-etccde[193087]:  from numpy import show_config as show_numpy_config
Dec  3 18:02:34 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-etccde[193087]: 2025-12-03T18:02:34.516+0000 7fb691c2a140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  3 18:02:34 compute-0 ceph-mgr[193091]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  3 18:02:34 compute-0 ceph-mgr[193091]: mgr[py] Loading python module 'influx'
Dec  3 18:02:34 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-etccde[193087]: 2025-12-03T18:02:34.741+0000 7fb691c2a140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  3 18:02:34 compute-0 ceph-mgr[193091]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  3 18:02:34 compute-0 ceph-mgr[193091]: mgr[py] Loading python module 'insights'
Dec  3 18:02:34 compute-0 ceph-mgr[193091]: mgr[py] Loading python module 'iostat'
Dec  3 18:02:35 compute-0 podman[193258]: 2025-12-03 18:02:35.126669215 +0000 UTC m=+0.081668144 container create 8cbd10df8314094e70eb89c3e7ef2b31b65b70c46781974f4aeace6806a5f77f (image=quay.io/ceph/ceph:v18, name=trusting_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:02:35 compute-0 podman[193258]: 2025-12-03 18:02:35.094620114 +0000 UTC m=+0.049619033 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:02:35 compute-0 systemd[1]: Started libpod-conmon-8cbd10df8314094e70eb89c3e7ef2b31b65b70c46781974f4aeace6806a5f77f.scope.
Dec  3 18:02:35 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-etccde[193087]: 2025-12-03T18:02:35.219+0000 7fb691c2a140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  3 18:02:35 compute-0 ceph-mgr[193091]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  3 18:02:35 compute-0 ceph-mgr[193091]: mgr[py] Loading python module 'k8sevents'
Dec  3 18:02:35 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:02:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0afe068faf963c1c5b7a1247c4fcab70a35755ae2473d7c78aa0f704539853b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:02:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0afe068faf963c1c5b7a1247c4fcab70a35755ae2473d7c78aa0f704539853b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 18:02:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0afe068faf963c1c5b7a1247c4fcab70a35755ae2473d7c78aa0f704539853b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:02:35 compute-0 podman[193258]: 2025-12-03 18:02:35.320192597 +0000 UTC m=+0.275191576 container init 8cbd10df8314094e70eb89c3e7ef2b31b65b70c46781974f4aeace6806a5f77f (image=quay.io/ceph/ceph:v18, name=trusting_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec  3 18:02:35 compute-0 podman[193258]: 2025-12-03 18:02:35.340495565 +0000 UTC m=+0.295494464 container start 8cbd10df8314094e70eb89c3e7ef2b31b65b70c46781974f4aeace6806a5f77f (image=quay.io/ceph/ceph:v18, name=trusting_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec  3 18:02:35 compute-0 podman[193258]: 2025-12-03 18:02:35.347156205 +0000 UTC m=+0.302155194 container attach 8cbd10df8314094e70eb89c3e7ef2b31b65b70c46781974f4aeace6806a5f77f (image=quay.io/ceph/ceph:v18, name=trusting_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:02:35 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Dec  3 18:02:35 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3587602912' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec  3 18:02:35 compute-0 trusting_lamarr[193274]: 
Dec  3 18:02:35 compute-0 trusting_lamarr[193274]: {
Dec  3 18:02:35 compute-0 trusting_lamarr[193274]:    "fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:02:35 compute-0 trusting_lamarr[193274]:    "health": {
Dec  3 18:02:35 compute-0 trusting_lamarr[193274]:        "status": "HEALTH_OK",
Dec  3 18:02:35 compute-0 trusting_lamarr[193274]:        "checks": {},
Dec  3 18:02:35 compute-0 trusting_lamarr[193274]:        "mutes": []
Dec  3 18:02:35 compute-0 trusting_lamarr[193274]:    },
Dec  3 18:02:35 compute-0 trusting_lamarr[193274]:    "election_epoch": 5,
Dec  3 18:02:35 compute-0 trusting_lamarr[193274]:    "quorum": [
Dec  3 18:02:35 compute-0 trusting_lamarr[193274]:        0
Dec  3 18:02:35 compute-0 trusting_lamarr[193274]:    ],
Dec  3 18:02:35 compute-0 trusting_lamarr[193274]:    "quorum_names": [
Dec  3 18:02:35 compute-0 trusting_lamarr[193274]:        "compute-0"
Dec  3 18:02:35 compute-0 trusting_lamarr[193274]:    ],
Dec  3 18:02:35 compute-0 trusting_lamarr[193274]:    "quorum_age": 9,
Dec  3 18:02:35 compute-0 trusting_lamarr[193274]:    "monmap": {
Dec  3 18:02:35 compute-0 trusting_lamarr[193274]:        "epoch": 1,
Dec  3 18:02:35 compute-0 trusting_lamarr[193274]:        "min_mon_release_name": "reef",
Dec  3 18:02:35 compute-0 trusting_lamarr[193274]:        "num_mons": 1
Dec  3 18:02:35 compute-0 trusting_lamarr[193274]:    },
Dec  3 18:02:35 compute-0 trusting_lamarr[193274]:    "osdmap": {
Dec  3 18:02:35 compute-0 trusting_lamarr[193274]:        "epoch": 1,
Dec  3 18:02:35 compute-0 trusting_lamarr[193274]:        "num_osds": 0,
Dec  3 18:02:35 compute-0 trusting_lamarr[193274]:        "num_up_osds": 0,
Dec  3 18:02:35 compute-0 trusting_lamarr[193274]:        "osd_up_since": 0,
Dec  3 18:02:35 compute-0 trusting_lamarr[193274]:        "num_in_osds": 0,
Dec  3 18:02:35 compute-0 trusting_lamarr[193274]:        "osd_in_since": 0,
Dec  3 18:02:35 compute-0 trusting_lamarr[193274]:        "num_remapped_pgs": 0
Dec  3 18:02:35 compute-0 trusting_lamarr[193274]:    },
Dec  3 18:02:35 compute-0 trusting_lamarr[193274]:    "pgmap": {
Dec  3 18:02:35 compute-0 trusting_lamarr[193274]:        "pgs_by_state": [],
Dec  3 18:02:35 compute-0 trusting_lamarr[193274]:        "num_pgs": 0,
Dec  3 18:02:35 compute-0 trusting_lamarr[193274]:        "num_pools": 0,
Dec  3 18:02:35 compute-0 trusting_lamarr[193274]:        "num_objects": 0,
Dec  3 18:02:35 compute-0 trusting_lamarr[193274]:        "data_bytes": 0,
Dec  3 18:02:35 compute-0 trusting_lamarr[193274]:        "bytes_used": 0,
Dec  3 18:02:35 compute-0 trusting_lamarr[193274]:        "bytes_avail": 0,
Dec  3 18:02:35 compute-0 trusting_lamarr[193274]:        "bytes_total": 0
Dec  3 18:02:35 compute-0 trusting_lamarr[193274]:    },
Dec  3 18:02:35 compute-0 trusting_lamarr[193274]:    "fsmap": {
Dec  3 18:02:35 compute-0 trusting_lamarr[193274]:        "epoch": 1,
Dec  3 18:02:35 compute-0 trusting_lamarr[193274]:        "by_rank": [],
Dec  3 18:02:35 compute-0 trusting_lamarr[193274]:        "up:standby": 0
Dec  3 18:02:35 compute-0 trusting_lamarr[193274]:    },
Dec  3 18:02:35 compute-0 trusting_lamarr[193274]:    "mgrmap": {
Dec  3 18:02:35 compute-0 trusting_lamarr[193274]:        "available": false,
Dec  3 18:02:35 compute-0 trusting_lamarr[193274]:        "num_standbys": 0,
Dec  3 18:02:35 compute-0 trusting_lamarr[193274]:        "modules": [
Dec  3 18:02:35 compute-0 trusting_lamarr[193274]:            "iostat",
Dec  3 18:02:35 compute-0 trusting_lamarr[193274]:            "nfs",
Dec  3 18:02:35 compute-0 trusting_lamarr[193274]:            "restful"
Dec  3 18:02:35 compute-0 trusting_lamarr[193274]:        ],
Dec  3 18:02:35 compute-0 trusting_lamarr[193274]:        "services": {}
Dec  3 18:02:35 compute-0 trusting_lamarr[193274]:    },
Dec  3 18:02:35 compute-0 trusting_lamarr[193274]:    "servicemap": {
Dec  3 18:02:35 compute-0 trusting_lamarr[193274]:        "epoch": 1,
Dec  3 18:02:35 compute-0 trusting_lamarr[193274]:        "modified": "2025-12-03T18:02:22.594284+0000",
Dec  3 18:02:35 compute-0 trusting_lamarr[193274]:        "services": {}
Dec  3 18:02:35 compute-0 trusting_lamarr[193274]:    },
Dec  3 18:02:35 compute-0 trusting_lamarr[193274]:    "progress_events": {}
Dec  3 18:02:35 compute-0 trusting_lamarr[193274]: }
Dec  3 18:02:35 compute-0 systemd[1]: libpod-8cbd10df8314094e70eb89c3e7ef2b31b65b70c46781974f4aeace6806a5f77f.scope: Deactivated successfully.
Dec  3 18:02:35 compute-0 podman[193258]: 2025-12-03 18:02:35.756478184 +0000 UTC m=+0.711477063 container died 8cbd10df8314094e70eb89c3e7ef2b31b65b70c46781974f4aeace6806a5f77f (image=quay.io/ceph/ceph:v18, name=trusting_lamarr, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  3 18:02:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-e0afe068faf963c1c5b7a1247c4fcab70a35755ae2473d7c78aa0f704539853b-merged.mount: Deactivated successfully.
Dec  3 18:02:35 compute-0 podman[193258]: 2025-12-03 18:02:35.83200951 +0000 UTC m=+0.787008389 container remove 8cbd10df8314094e70eb89c3e7ef2b31b65b70c46781974f4aeace6806a5f77f (image=quay.io/ceph/ceph:v18, name=trusting_lamarr, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  3 18:02:35 compute-0 systemd[1]: libpod-conmon-8cbd10df8314094e70eb89c3e7ef2b31b65b70c46781974f4aeace6806a5f77f.scope: Deactivated successfully.
Dec  3 18:02:36 compute-0 ceph-mgr[193091]: mgr[py] Loading python module 'localpool'
Dec  3 18:02:37 compute-0 ceph-mgr[193091]: mgr[py] Loading python module 'mds_autoscaler'
Dec  3 18:02:37 compute-0 ceph-mgr[193091]: mgr[py] Loading python module 'mirroring'
Dec  3 18:02:37 compute-0 podman[193312]: 2025-12-03 18:02:37.9220376 +0000 UTC m=+0.053913507 container create be034cd12f8005f50729b61fad2d5f326407e7155b3624a384865aab533f9ad9 (image=quay.io/ceph/ceph:v18, name=nervous_beaver, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  3 18:02:37 compute-0 systemd[1]: Started libpod-conmon-be034cd12f8005f50729b61fad2d5f326407e7155b3624a384865aab533f9ad9.scope.
Dec  3 18:02:37 compute-0 podman[193312]: 2025-12-03 18:02:37.903354081 +0000 UTC m=+0.035229978 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:02:38 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:02:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9275e28b026cfb4c73d0229efc54dc32e7c5417610e213c9be923541ea6fe9e5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:02:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9275e28b026cfb4c73d0229efc54dc32e7c5417610e213c9be923541ea6fe9e5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 18:02:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9275e28b026cfb4c73d0229efc54dc32e7c5417610e213c9be923541ea6fe9e5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:02:38 compute-0 podman[193312]: 2025-12-03 18:02:38.024332138 +0000 UTC m=+0.156208055 container init be034cd12f8005f50729b61fad2d5f326407e7155b3624a384865aab533f9ad9 (image=quay.io/ceph/ceph:v18, name=nervous_beaver, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:02:38 compute-0 podman[193312]: 2025-12-03 18:02:38.037607188 +0000 UTC m=+0.169483065 container start be034cd12f8005f50729b61fad2d5f326407e7155b3624a384865aab533f9ad9 (image=quay.io/ceph/ceph:v18, name=nervous_beaver, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec  3 18:02:38 compute-0 podman[193312]: 2025-12-03 18:02:38.042095386 +0000 UTC m=+0.173971283 container attach be034cd12f8005f50729b61fad2d5f326407e7155b3624a384865aab533f9ad9 (image=quay.io/ceph/ceph:v18, name=nervous_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec  3 18:02:38 compute-0 ceph-mgr[193091]: mgr[py] Loading python module 'nfs'
Dec  3 18:02:38 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Dec  3 18:02:38 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1475519873' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec  3 18:02:38 compute-0 nervous_beaver[193327]: 
Dec  3 18:02:38 compute-0 nervous_beaver[193327]: {
Dec  3 18:02:38 compute-0 nervous_beaver[193327]:    "fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:02:38 compute-0 nervous_beaver[193327]:    "health": {
Dec  3 18:02:38 compute-0 nervous_beaver[193327]:        "status": "HEALTH_OK",
Dec  3 18:02:38 compute-0 nervous_beaver[193327]:        "checks": {},
Dec  3 18:02:38 compute-0 nervous_beaver[193327]:        "mutes": []
Dec  3 18:02:38 compute-0 nervous_beaver[193327]:    },
Dec  3 18:02:38 compute-0 nervous_beaver[193327]:    "election_epoch": 5,
Dec  3 18:02:38 compute-0 nervous_beaver[193327]:    "quorum": [
Dec  3 18:02:38 compute-0 nervous_beaver[193327]:        0
Dec  3 18:02:38 compute-0 nervous_beaver[193327]:    ],
Dec  3 18:02:38 compute-0 nervous_beaver[193327]:    "quorum_names": [
Dec  3 18:02:38 compute-0 nervous_beaver[193327]:        "compute-0"
Dec  3 18:02:38 compute-0 nervous_beaver[193327]:    ],
Dec  3 18:02:38 compute-0 nervous_beaver[193327]:    "quorum_age": 12,
Dec  3 18:02:38 compute-0 nervous_beaver[193327]:    "monmap": {
Dec  3 18:02:38 compute-0 nervous_beaver[193327]:        "epoch": 1,
Dec  3 18:02:38 compute-0 nervous_beaver[193327]:        "min_mon_release_name": "reef",
Dec  3 18:02:38 compute-0 nervous_beaver[193327]:        "num_mons": 1
Dec  3 18:02:38 compute-0 nervous_beaver[193327]:    },
Dec  3 18:02:38 compute-0 nervous_beaver[193327]:    "osdmap": {
Dec  3 18:02:38 compute-0 nervous_beaver[193327]:        "epoch": 1,
Dec  3 18:02:38 compute-0 nervous_beaver[193327]:        "num_osds": 0,
Dec  3 18:02:38 compute-0 nervous_beaver[193327]:        "num_up_osds": 0,
Dec  3 18:02:38 compute-0 nervous_beaver[193327]:        "osd_up_since": 0,
Dec  3 18:02:38 compute-0 nervous_beaver[193327]:        "num_in_osds": 0,
Dec  3 18:02:38 compute-0 nervous_beaver[193327]:        "osd_in_since": 0,
Dec  3 18:02:38 compute-0 nervous_beaver[193327]:        "num_remapped_pgs": 0
Dec  3 18:02:38 compute-0 nervous_beaver[193327]:    },
Dec  3 18:02:38 compute-0 nervous_beaver[193327]:    "pgmap": {
Dec  3 18:02:38 compute-0 nervous_beaver[193327]:        "pgs_by_state": [],
Dec  3 18:02:38 compute-0 nervous_beaver[193327]:        "num_pgs": 0,
Dec  3 18:02:38 compute-0 nervous_beaver[193327]:        "num_pools": 0,
Dec  3 18:02:38 compute-0 nervous_beaver[193327]:        "num_objects": 0,
Dec  3 18:02:38 compute-0 nervous_beaver[193327]:        "data_bytes": 0,
Dec  3 18:02:38 compute-0 nervous_beaver[193327]:        "bytes_used": 0,
Dec  3 18:02:38 compute-0 nervous_beaver[193327]:        "bytes_avail": 0,
Dec  3 18:02:38 compute-0 nervous_beaver[193327]:        "bytes_total": 0
Dec  3 18:02:38 compute-0 nervous_beaver[193327]:    },
Dec  3 18:02:38 compute-0 nervous_beaver[193327]:    "fsmap": {
Dec  3 18:02:38 compute-0 nervous_beaver[193327]:        "epoch": 1,
Dec  3 18:02:38 compute-0 nervous_beaver[193327]:        "by_rank": [],
Dec  3 18:02:38 compute-0 nervous_beaver[193327]:        "up:standby": 0
Dec  3 18:02:38 compute-0 nervous_beaver[193327]:    },
Dec  3 18:02:38 compute-0 nervous_beaver[193327]:    "mgrmap": {
Dec  3 18:02:38 compute-0 nervous_beaver[193327]:        "available": false,
Dec  3 18:02:38 compute-0 nervous_beaver[193327]:        "num_standbys": 0,
Dec  3 18:02:38 compute-0 nervous_beaver[193327]:        "modules": [
Dec  3 18:02:38 compute-0 nervous_beaver[193327]:            "iostat",
Dec  3 18:02:38 compute-0 nervous_beaver[193327]:            "nfs",
Dec  3 18:02:38 compute-0 nervous_beaver[193327]:            "restful"
Dec  3 18:02:38 compute-0 nervous_beaver[193327]:        ],
Dec  3 18:02:38 compute-0 nervous_beaver[193327]:        "services": {}
Dec  3 18:02:38 compute-0 nervous_beaver[193327]:    },
Dec  3 18:02:38 compute-0 nervous_beaver[193327]:    "servicemap": {
Dec  3 18:02:38 compute-0 nervous_beaver[193327]:        "epoch": 1,
Dec  3 18:02:38 compute-0 nervous_beaver[193327]:        "modified": "2025-12-03T18:02:22.594284+0000",
Dec  3 18:02:38 compute-0 nervous_beaver[193327]:        "services": {}
Dec  3 18:02:38 compute-0 nervous_beaver[193327]:    },
Dec  3 18:02:38 compute-0 nervous_beaver[193327]:    "progress_events": {}
Dec  3 18:02:38 compute-0 nervous_beaver[193327]: }
Dec  3 18:02:38 compute-0 systemd[1]: libpod-be034cd12f8005f50729b61fad2d5f326407e7155b3624a384865aab533f9ad9.scope: Deactivated successfully.
Dec  3 18:02:38 compute-0 podman[193312]: 2025-12-03 18:02:38.454715705 +0000 UTC m=+0.586591592 container died be034cd12f8005f50729b61fad2d5f326407e7155b3624a384865aab533f9ad9 (image=quay.io/ceph/ceph:v18, name=nervous_beaver, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:02:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-9275e28b026cfb4c73d0229efc54dc32e7c5417610e213c9be923541ea6fe9e5-merged.mount: Deactivated successfully.
Dec  3 18:02:38 compute-0 podman[193312]: 2025-12-03 18:02:38.510516776 +0000 UTC m=+0.642392653 container remove be034cd12f8005f50729b61fad2d5f326407e7155b3624a384865aab533f9ad9 (image=quay.io/ceph/ceph:v18, name=nervous_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec  3 18:02:38 compute-0 systemd[1]: libpod-conmon-be034cd12f8005f50729b61fad2d5f326407e7155b3624a384865aab533f9ad9.scope: Deactivated successfully.
Dec  3 18:02:38 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-etccde[193087]: 2025-12-03T18:02:38.851+0000 7fb691c2a140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  3 18:02:38 compute-0 ceph-mgr[193091]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  3 18:02:38 compute-0 ceph-mgr[193091]: mgr[py] Loading python module 'orchestrator'
Dec  3 18:02:39 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-etccde[193087]: 2025-12-03T18:02:39.533+0000 7fb691c2a140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  3 18:02:39 compute-0 ceph-mgr[193091]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  3 18:02:39 compute-0 ceph-mgr[193091]: mgr[py] Loading python module 'osd_perf_query'
Dec  3 18:02:39 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-etccde[193087]: 2025-12-03T18:02:39.824+0000 7fb691c2a140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  3 18:02:39 compute-0 ceph-mgr[193091]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  3 18:02:39 compute-0 ceph-mgr[193091]: mgr[py] Loading python module 'osd_support'
Dec  3 18:02:40 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-etccde[193087]: 2025-12-03T18:02:40.053+0000 7fb691c2a140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  3 18:02:40 compute-0 ceph-mgr[193091]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  3 18:02:40 compute-0 ceph-mgr[193091]: mgr[py] Loading python module 'pg_autoscaler'
Dec  3 18:02:40 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-etccde[193087]: 2025-12-03T18:02:40.334+0000 7fb691c2a140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  3 18:02:40 compute-0 ceph-mgr[193091]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  3 18:02:40 compute-0 ceph-mgr[193091]: mgr[py] Loading python module 'progress'
Dec  3 18:02:40 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-etccde[193087]: 2025-12-03T18:02:40.584+0000 7fb691c2a140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  3 18:02:40 compute-0 ceph-mgr[193091]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  3 18:02:40 compute-0 ceph-mgr[193091]: mgr[py] Loading python module 'prometheus'
Dec  3 18:02:40 compute-0 podman[193362]: 2025-12-03 18:02:40.595809262 +0000 UTC m=+0.052815491 container create d74389c88d83414ac05bde23221690ec723aca0bc5f4b10caa7febdc1ce25b1f (image=quay.io/ceph/ceph:v18, name=hardcore_mirzakhani, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec  3 18:02:40 compute-0 systemd[1]: Started libpod-conmon-d74389c88d83414ac05bde23221690ec723aca0bc5f4b10caa7febdc1ce25b1f.scope.
Dec  3 18:02:40 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:02:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/007002f8aecfa0c266df51ad1f22628edc4a874f40d8d3fd2c90b74623ad86f3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:02:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/007002f8aecfa0c266df51ad1f22628edc4a874f40d8d3fd2c90b74623ad86f3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:02:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/007002f8aecfa0c266df51ad1f22628edc4a874f40d8d3fd2c90b74623ad86f3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 18:02:40 compute-0 podman[193362]: 2025-12-03 18:02:40.575249188 +0000 UTC m=+0.032255447 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:02:40 compute-0 podman[193362]: 2025-12-03 18:02:40.693096461 +0000 UTC m=+0.150102710 container init d74389c88d83414ac05bde23221690ec723aca0bc5f4b10caa7febdc1ce25b1f (image=quay.io/ceph/ceph:v18, name=hardcore_mirzakhani, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  3 18:02:40 compute-0 podman[193362]: 2025-12-03 18:02:40.701212516 +0000 UTC m=+0.158218745 container start d74389c88d83414ac05bde23221690ec723aca0bc5f4b10caa7febdc1ce25b1f (image=quay.io/ceph/ceph:v18, name=hardcore_mirzakhani, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:02:40 compute-0 podman[193362]: 2025-12-03 18:02:40.705638562 +0000 UTC m=+0.162644841 container attach d74389c88d83414ac05bde23221690ec723aca0bc5f4b10caa7febdc1ce25b1f (image=quay.io/ceph/ceph:v18, name=hardcore_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  3 18:02:41 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Dec  3 18:02:41 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1769189000' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec  3 18:02:41 compute-0 hardcore_mirzakhani[193376]: 
Dec  3 18:02:41 compute-0 hardcore_mirzakhani[193376]: {
Dec  3 18:02:41 compute-0 hardcore_mirzakhani[193376]:    "fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:02:41 compute-0 hardcore_mirzakhani[193376]:    "health": {
Dec  3 18:02:41 compute-0 hardcore_mirzakhani[193376]:        "status": "HEALTH_OK",
Dec  3 18:02:41 compute-0 hardcore_mirzakhani[193376]:        "checks": {},
Dec  3 18:02:41 compute-0 hardcore_mirzakhani[193376]:        "mutes": []
Dec  3 18:02:41 compute-0 hardcore_mirzakhani[193376]:    },
Dec  3 18:02:41 compute-0 hardcore_mirzakhani[193376]:    "election_epoch": 5,
Dec  3 18:02:41 compute-0 hardcore_mirzakhani[193376]:    "quorum": [
Dec  3 18:02:41 compute-0 hardcore_mirzakhani[193376]:        0
Dec  3 18:02:41 compute-0 hardcore_mirzakhani[193376]:    ],
Dec  3 18:02:41 compute-0 hardcore_mirzakhani[193376]:    "quorum_names": [
Dec  3 18:02:41 compute-0 hardcore_mirzakhani[193376]:        "compute-0"
Dec  3 18:02:41 compute-0 hardcore_mirzakhani[193376]:    ],
Dec  3 18:02:41 compute-0 hardcore_mirzakhani[193376]:    "quorum_age": 14,
Dec  3 18:02:41 compute-0 hardcore_mirzakhani[193376]:    "monmap": {
Dec  3 18:02:41 compute-0 hardcore_mirzakhani[193376]:        "epoch": 1,
Dec  3 18:02:41 compute-0 hardcore_mirzakhani[193376]:        "min_mon_release_name": "reef",
Dec  3 18:02:41 compute-0 hardcore_mirzakhani[193376]:        "num_mons": 1
Dec  3 18:02:41 compute-0 hardcore_mirzakhani[193376]:    },
Dec  3 18:02:41 compute-0 hardcore_mirzakhani[193376]:    "osdmap": {
Dec  3 18:02:41 compute-0 hardcore_mirzakhani[193376]:        "epoch": 1,
Dec  3 18:02:41 compute-0 hardcore_mirzakhani[193376]:        "num_osds": 0,
Dec  3 18:02:41 compute-0 hardcore_mirzakhani[193376]:        "num_up_osds": 0,
Dec  3 18:02:41 compute-0 hardcore_mirzakhani[193376]:        "osd_up_since": 0,
Dec  3 18:02:41 compute-0 hardcore_mirzakhani[193376]:        "num_in_osds": 0,
Dec  3 18:02:41 compute-0 hardcore_mirzakhani[193376]:        "osd_in_since": 0,
Dec  3 18:02:41 compute-0 hardcore_mirzakhani[193376]:        "num_remapped_pgs": 0
Dec  3 18:02:41 compute-0 hardcore_mirzakhani[193376]:    },
Dec  3 18:02:41 compute-0 hardcore_mirzakhani[193376]:    "pgmap": {
Dec  3 18:02:41 compute-0 hardcore_mirzakhani[193376]:        "pgs_by_state": [],
Dec  3 18:02:41 compute-0 hardcore_mirzakhani[193376]:        "num_pgs": 0,
Dec  3 18:02:41 compute-0 hardcore_mirzakhani[193376]:        "num_pools": 0,
Dec  3 18:02:41 compute-0 hardcore_mirzakhani[193376]:        "num_objects": 0,
Dec  3 18:02:41 compute-0 hardcore_mirzakhani[193376]:        "data_bytes": 0,
Dec  3 18:02:41 compute-0 hardcore_mirzakhani[193376]:        "bytes_used": 0,
Dec  3 18:02:41 compute-0 hardcore_mirzakhani[193376]:        "bytes_avail": 0,
Dec  3 18:02:41 compute-0 hardcore_mirzakhani[193376]:        "bytes_total": 0
Dec  3 18:02:41 compute-0 hardcore_mirzakhani[193376]:    },
Dec  3 18:02:41 compute-0 hardcore_mirzakhani[193376]:    "fsmap": {
Dec  3 18:02:41 compute-0 hardcore_mirzakhani[193376]:        "epoch": 1,
Dec  3 18:02:41 compute-0 hardcore_mirzakhani[193376]:        "by_rank": [],
Dec  3 18:02:41 compute-0 hardcore_mirzakhani[193376]:        "up:standby": 0
Dec  3 18:02:41 compute-0 hardcore_mirzakhani[193376]:    },
Dec  3 18:02:41 compute-0 hardcore_mirzakhani[193376]:    "mgrmap": {
Dec  3 18:02:41 compute-0 hardcore_mirzakhani[193376]:        "available": false,
Dec  3 18:02:41 compute-0 hardcore_mirzakhani[193376]:        "num_standbys": 0,
Dec  3 18:02:41 compute-0 hardcore_mirzakhani[193376]:        "modules": [
Dec  3 18:02:41 compute-0 hardcore_mirzakhani[193376]:            "iostat",
Dec  3 18:02:41 compute-0 hardcore_mirzakhani[193376]:            "nfs",
Dec  3 18:02:41 compute-0 hardcore_mirzakhani[193376]:            "restful"
Dec  3 18:02:41 compute-0 hardcore_mirzakhani[193376]:        ],
Dec  3 18:02:41 compute-0 hardcore_mirzakhani[193376]:        "services": {}
Dec  3 18:02:41 compute-0 hardcore_mirzakhani[193376]:    },
Dec  3 18:02:41 compute-0 hardcore_mirzakhani[193376]:    "servicemap": {
Dec  3 18:02:41 compute-0 hardcore_mirzakhani[193376]:        "epoch": 1,
Dec  3 18:02:41 compute-0 hardcore_mirzakhani[193376]:        "modified": "2025-12-03T18:02:22.594284+0000",
Dec  3 18:02:41 compute-0 hardcore_mirzakhani[193376]:        "services": {}
Dec  3 18:02:41 compute-0 hardcore_mirzakhani[193376]:    },
Dec  3 18:02:41 compute-0 hardcore_mirzakhani[193376]:    "progress_events": {}
Dec  3 18:02:41 compute-0 hardcore_mirzakhani[193376]: }
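[editor's note] The block above is the stdout of a "ceph status --format json-pretty" run captured line-by-line from a short-lived container. As a reading aid, a minimal Python sketch for pulling the interesting fields out of such a dump; every key name is taken from the dump itself, while reading it from a file called status.json is a hypothetical stand-in for however the stdout was captured.

    # Minimal sketch: parse a `ceph status --format json-pretty` dump like the
    # one logged above. Key names come from the dump; "status.json" is a
    # hypothetical capture of the container's stdout.
    import json

    with open("status.json") as f:
        status = json.load(f)

    print("fsid:   ", status["fsid"])
    print("health: ", status["health"]["status"])           # e.g. HEALTH_OK
    print("quorum: ", ", ".join(status["quorum_names"]))    # e.g. compute-0
    osd = status["osdmap"]
    print(f"osds:    {osd['num_up_osds']}/{osd['num_osds']} up, "
          f"{osd['num_in_osds']} in")                       # all zero before OSDs exist
    print("mgr up: ", status["mgrmap"]["available"])        # false until mgr activation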
Dec  3 18:02:41 compute-0 systemd[1]: libpod-d74389c88d83414ac05bde23221690ec723aca0bc5f4b10caa7febdc1ce25b1f.scope: Deactivated successfully.
Dec  3 18:02:41 compute-0 podman[193362]: 2025-12-03 18:02:41.155958737 +0000 UTC m=+0.612964966 container died d74389c88d83414ac05bde23221690ec723aca0bc5f4b10caa7febdc1ce25b1f (image=quay.io/ceph/ceph:v18, name=hardcore_mirzakhani, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:02:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-007002f8aecfa0c266df51ad1f22628edc4a874f40d8d3fd2c90b74623ad86f3-merged.mount: Deactivated successfully.
Dec  3 18:02:41 compute-0 podman[193362]: 2025-12-03 18:02:41.310353148 +0000 UTC m=+0.767359387 container remove d74389c88d83414ac05bde23221690ec723aca0bc5f4b10caa7febdc1ce25b1f (image=quay.io/ceph/ceph:v18, name=hardcore_mirzakhani, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  3 18:02:41 compute-0 systemd[1]: libpod-conmon-d74389c88d83414ac05bde23221690ec723aca0bc5f4b10caa7febdc1ce25b1f.scope: Deactivated successfully.
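[editor's note] The sequence just above (container died, overlay unmount, container remove, conmon scope deactivated, with no restart attempt) is the tail end of a one-shot container run. The log never shows the exact command line, so the following is only a hedged reconstruction of the equivalent "podman run --rm" invocation; the host-side bind-mount paths are assumptions, since the xfs remount messages only reveal the container-side paths for ceph.conf, the admin keyring, and /var/log/ceph.

    # Hedged reconstruction of the one-shot status query; not the literal
    # command from this deployment. Host-side mount paths are assumptions.
    import subprocess

    cmd = [
        "podman", "run", "--rm",
        "-v", "/etc/ceph/ceph.conf:/etc/ceph/ceph.conf:ro",
        "-v", "/etc/ceph/ceph.client.admin.keyring:"
              "/etc/ceph/ceph.client.admin.keyring:ro",
        "quay.io/ceph/ceph:v18",
        "ceph", "status", "--format", "json-pretty",
    ]
    # Prints a JSON document like the dumps captured in this log.
    print(subprocess.run(cmd, check=True, capture_output=True, text=True).stdout)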
Dec  3 18:02:41 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-etccde[193087]: 2025-12-03T18:02:41.637+0000 7fb691c2a140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  3 18:02:41 compute-0 ceph-mgr[193091]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  3 18:02:41 compute-0 ceph-mgr[193091]: mgr[py] Loading python module 'rbd_support'
Dec  3 18:02:41 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-etccde[193087]: 2025-12-03T18:02:41.947+0000 7fb691c2a140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  3 18:02:41 compute-0 ceph-mgr[193091]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  3 18:02:41 compute-0 ceph-mgr[193091]: mgr[py] Loading python module 'restful'
Dec  3 18:02:42 compute-0 ceph-mgr[193091]: mgr[py] Loading python module 'rgw'
Dec  3 18:02:43 compute-0 podman[193419]: 2025-12-03 18:02:43.422552251 +0000 UTC m=+0.075045715 container create ef7755049d64093d5b183baf1074bfd89684eb8ea7e326f7267ee05b8ead0103 (image=quay.io/ceph/ceph:v18, name=goofy_carson, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:02:43 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-etccde[193087]: 2025-12-03T18:02:43.473+0000 7fb691c2a140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  3 18:02:43 compute-0 ceph-mgr[193091]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  3 18:02:43 compute-0 ceph-mgr[193091]: mgr[py] Loading python module 'rook'
Dec  3 18:02:43 compute-0 systemd[1]: Started libpod-conmon-ef7755049d64093d5b183baf1074bfd89684eb8ea7e326f7267ee05b8ead0103.scope.
Dec  3 18:02:43 compute-0 podman[193419]: 2025-12-03 18:02:43.394394725 +0000 UTC m=+0.046888279 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:02:43 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:02:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a1284baa03959ca63cb8302888a7261a4569775ddb6332de6bd6119496ee9a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:02:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a1284baa03959ca63cb8302888a7261a4569775ddb6332de6bd6119496ee9a2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 18:02:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a1284baa03959ca63cb8302888a7261a4569775ddb6332de6bd6119496ee9a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:02:43 compute-0 podman[193419]: 2025-12-03 18:02:43.568110131 +0000 UTC m=+0.220603605 container init ef7755049d64093d5b183baf1074bfd89684eb8ea7e326f7267ee05b8ead0103 (image=quay.io/ceph/ceph:v18, name=goofy_carson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:02:43 compute-0 podman[193419]: 2025-12-03 18:02:43.575234492 +0000 UTC m=+0.227727996 container start ef7755049d64093d5b183baf1074bfd89684eb8ea7e326f7267ee05b8ead0103 (image=quay.io/ceph/ceph:v18, name=goofy_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Dec  3 18:02:43 compute-0 podman[193419]: 2025-12-03 18:02:43.582934736 +0000 UTC m=+0.235428200 container attach ef7755049d64093d5b183baf1074bfd89684eb8ea7e326f7267ee05b8ead0103 (image=quay.io/ceph/ceph:v18, name=goofy_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  3 18:02:43 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Dec  3 18:02:43 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/597625058' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec  3 18:02:43 compute-0 goofy_carson[193435]: 
Dec  3 18:02:43 compute-0 goofy_carson[193435]: {
Dec  3 18:02:43 compute-0 goofy_carson[193435]:    "fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:02:43 compute-0 goofy_carson[193435]:    "health": {
Dec  3 18:02:43 compute-0 goofy_carson[193435]:        "status": "HEALTH_OK",
Dec  3 18:02:43 compute-0 goofy_carson[193435]:        "checks": {},
Dec  3 18:02:43 compute-0 goofy_carson[193435]:        "mutes": []
Dec  3 18:02:43 compute-0 goofy_carson[193435]:    },
Dec  3 18:02:43 compute-0 goofy_carson[193435]:    "election_epoch": 5,
Dec  3 18:02:43 compute-0 goofy_carson[193435]:    "quorum": [
Dec  3 18:02:43 compute-0 goofy_carson[193435]:        0
Dec  3 18:02:43 compute-0 goofy_carson[193435]:    ],
Dec  3 18:02:43 compute-0 goofy_carson[193435]:    "quorum_names": [
Dec  3 18:02:43 compute-0 goofy_carson[193435]:        "compute-0"
Dec  3 18:02:43 compute-0 goofy_carson[193435]:    ],
Dec  3 18:02:43 compute-0 goofy_carson[193435]:    "quorum_age": 17,
Dec  3 18:02:43 compute-0 goofy_carson[193435]:    "monmap": {
Dec  3 18:02:43 compute-0 goofy_carson[193435]:        "epoch": 1,
Dec  3 18:02:43 compute-0 goofy_carson[193435]:        "min_mon_release_name": "reef",
Dec  3 18:02:43 compute-0 goofy_carson[193435]:        "num_mons": 1
Dec  3 18:02:43 compute-0 goofy_carson[193435]:    },
Dec  3 18:02:43 compute-0 goofy_carson[193435]:    "osdmap": {
Dec  3 18:02:43 compute-0 goofy_carson[193435]:        "epoch": 1,
Dec  3 18:02:43 compute-0 goofy_carson[193435]:        "num_osds": 0,
Dec  3 18:02:43 compute-0 goofy_carson[193435]:        "num_up_osds": 0,
Dec  3 18:02:43 compute-0 goofy_carson[193435]:        "osd_up_since": 0,
Dec  3 18:02:43 compute-0 goofy_carson[193435]:        "num_in_osds": 0,
Dec  3 18:02:43 compute-0 goofy_carson[193435]:        "osd_in_since": 0,
Dec  3 18:02:43 compute-0 goofy_carson[193435]:        "num_remapped_pgs": 0
Dec  3 18:02:43 compute-0 goofy_carson[193435]:    },
Dec  3 18:02:43 compute-0 goofy_carson[193435]:    "pgmap": {
Dec  3 18:02:43 compute-0 goofy_carson[193435]:        "pgs_by_state": [],
Dec  3 18:02:43 compute-0 goofy_carson[193435]:        "num_pgs": 0,
Dec  3 18:02:43 compute-0 goofy_carson[193435]:        "num_pools": 0,
Dec  3 18:02:43 compute-0 goofy_carson[193435]:        "num_objects": 0,
Dec  3 18:02:43 compute-0 goofy_carson[193435]:        "data_bytes": 0,
Dec  3 18:02:43 compute-0 goofy_carson[193435]:        "bytes_used": 0,
Dec  3 18:02:43 compute-0 goofy_carson[193435]:        "bytes_avail": 0,
Dec  3 18:02:43 compute-0 goofy_carson[193435]:        "bytes_total": 0
Dec  3 18:02:43 compute-0 goofy_carson[193435]:    },
Dec  3 18:02:43 compute-0 goofy_carson[193435]:    "fsmap": {
Dec  3 18:02:43 compute-0 goofy_carson[193435]:        "epoch": 1,
Dec  3 18:02:43 compute-0 goofy_carson[193435]:        "by_rank": [],
Dec  3 18:02:43 compute-0 goofy_carson[193435]:        "up:standby": 0
Dec  3 18:02:43 compute-0 goofy_carson[193435]:    },
Dec  3 18:02:43 compute-0 goofy_carson[193435]:    "mgrmap": {
Dec  3 18:02:43 compute-0 goofy_carson[193435]:        "available": false,
Dec  3 18:02:43 compute-0 goofy_carson[193435]:        "num_standbys": 0,
Dec  3 18:02:43 compute-0 goofy_carson[193435]:        "modules": [
Dec  3 18:02:43 compute-0 goofy_carson[193435]:            "iostat",
Dec  3 18:02:43 compute-0 goofy_carson[193435]:            "nfs",
Dec  3 18:02:43 compute-0 goofy_carson[193435]:            "restful"
Dec  3 18:02:43 compute-0 goofy_carson[193435]:        ],
Dec  3 18:02:43 compute-0 goofy_carson[193435]:        "services": {}
Dec  3 18:02:43 compute-0 goofy_carson[193435]:    },
Dec  3 18:02:43 compute-0 goofy_carson[193435]:    "servicemap": {
Dec  3 18:02:43 compute-0 goofy_carson[193435]:        "epoch": 1,
Dec  3 18:02:43 compute-0 goofy_carson[193435]:        "modified": "2025-12-03T18:02:22.594284+0000",
Dec  3 18:02:43 compute-0 goofy_carson[193435]:        "services": {}
Dec  3 18:02:43 compute-0 goofy_carson[193435]:    },
Dec  3 18:02:43 compute-0 goofy_carson[193435]:    "progress_events": {}
Dec  3 18:02:43 compute-0 goofy_carson[193435]: }
Dec  3 18:02:44 compute-0 systemd[1]: libpod-ef7755049d64093d5b183baf1074bfd89684eb8ea7e326f7267ee05b8ead0103.scope: Deactivated successfully.
Dec  3 18:02:44 compute-0 podman[193419]: 2025-12-03 18:02:44.01113841 +0000 UTC m=+0.663631874 container died ef7755049d64093d5b183baf1074bfd89684eb8ea7e326f7267ee05b8ead0103 (image=quay.io/ceph/ceph:v18, name=goofy_carson, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  3 18:02:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a1284baa03959ca63cb8302888a7261a4569775ddb6332de6bd6119496ee9a2-merged.mount: Deactivated successfully.
Dec  3 18:02:45 compute-0 podman[193419]: 2025-12-03 18:02:45.599186104 +0000 UTC m=+2.251679578 container remove ef7755049d64093d5b183baf1074bfd89684eb8ea7e326f7267ee05b8ead0103 (image=quay.io/ceph/ceph:v18, name=goofy_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec  3 18:02:45 compute-0 systemd[1]: libpod-conmon-ef7755049d64093d5b183baf1074bfd89684eb8ea7e326f7267ee05b8ead0103.scope: Deactivated successfully.
Dec  3 18:02:45 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-etccde[193087]: 2025-12-03T18:02:45.670+0000 7fb691c2a140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  3 18:02:45 compute-0 ceph-mgr[193091]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  3 18:02:45 compute-0 ceph-mgr[193091]: mgr[py] Loading python module 'selftest'
Dec  3 18:02:45 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-etccde[193087]: 2025-12-03T18:02:45.932+0000 7fb691c2a140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  3 18:02:45 compute-0 ceph-mgr[193091]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  3 18:02:45 compute-0 ceph-mgr[193091]: mgr[py] Loading python module 'snap_schedule'
Dec  3 18:02:46 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-etccde[193087]: 2025-12-03T18:02:46.167+0000 7fb691c2a140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  3 18:02:46 compute-0 ceph-mgr[193091]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  3 18:02:46 compute-0 ceph-mgr[193091]: mgr[py] Loading python module 'stats'
Dec  3 18:02:46 compute-0 ceph-mgr[193091]: mgr[py] Loading python module 'status'
Dec  3 18:02:46 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-etccde[193087]: 2025-12-03T18:02:46.656+0000 7fb691c2a140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec  3 18:02:46 compute-0 ceph-mgr[193091]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec  3 18:02:46 compute-0 ceph-mgr[193091]: mgr[py] Loading python module 'telegraf'
Dec  3 18:02:46 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-etccde[193087]: 2025-12-03T18:02:46.915+0000 7fb691c2a140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  3 18:02:46 compute-0 ceph-mgr[193091]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  3 18:02:46 compute-0 ceph-mgr[193091]: mgr[py] Loading python module 'telemetry'
Dec  3 18:02:46 compute-0 podman[193480]: 2025-12-03 18:02:46.936659124 +0000 UTC m=+0.086552832 container health_status ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., release-0.7.12=, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=9.4, container_name=kepler, vcs-type=git, architecture=x86_64, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec  3 18:02:46 compute-0 podman[193473]: 2025-12-03 18:02:46.95148925 +0000 UTC m=+0.115770793 container health_status 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, build-date=2025-08-20T13:12:41, architecture=x86_64, io.buildah.version=1.33.7, distribution-scope=public, version=9.6, vendor=Red Hat, Inc., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter)
Dec  3 18:02:46 compute-0 podman[193475]: 2025-12-03 18:02:46.962507075 +0000 UTC m=+0.110659491 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  3 18:02:46 compute-0 podman[193474]: 2025-12-03 18:02:46.962630648 +0000 UTC m=+0.120524178 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec  3 18:02:46 compute-0 podman[193472]: 2025-12-03 18:02:46.973314725 +0000 UTC m=+0.138590052 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi)
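[editor's note] Each health_status entry above embeds the container's config_data as a Python-literal dict (single quotes, bare True), not JSON, so json.loads will reject it while ast.literal_eval parses it safely. A small sketch, assuming the "config_data={...}" substring has already been isolated from the surrounding label list; the literal below is trimmed from the kepler entry above.

    # Parse the Python-literal config_data from a health_status log entry.
    # Trimmed from the kepler line above; note 'recreate': True, which is a
    # Python literal and would not survive json.loads.
    import ast

    config_data = ("{'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', "
                   "'restart': 'always', 'recreate': True, "
                   "'healthcheck': {'test': '/openstack/healthcheck kepler', "
                   "'mount': '/var/lib/openstack/healthchecks/kepler'}}")
    cfg = ast.literal_eval(config_data)
    print(cfg["healthcheck"]["test"])   # -> /openstack/healthcheck kepler
    print(cfg["recreate"])              # -> True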
Dec  3 18:02:47 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-etccde[193087]: 2025-12-03T18:02:47.534+0000 7fb691c2a140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  3 18:02:47 compute-0 ceph-mgr[193091]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  3 18:02:47 compute-0 ceph-mgr[193091]: mgr[py] Loading python module 'test_orchestrator'
Dec  3 18:02:47 compute-0 podman[193570]: 2025-12-03 18:02:47.726820198 +0000 UTC m=+0.086446130 container create e3f966b7af530284e8f1150428f42386492922a81cb60504ac5ff9afdd4677cb (image=quay.io/ceph/ceph:v18, name=crazy_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:02:47 compute-0 podman[193570]: 2025-12-03 18:02:47.695111675 +0000 UTC m=+0.054737647 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:02:47 compute-0 systemd[1]: Started libpod-conmon-e3f966b7af530284e8f1150428f42386492922a81cb60504ac5ff9afdd4677cb.scope.
Dec  3 18:02:47 compute-0 podman[193584]: 2025-12-03 18:02:47.876370183 +0000 UTC m=+0.085016735 container health_status f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  3 18:02:47 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:02:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d547790021e30d21d377d5e577503c1f810d3da0dd41a61507450cf0978fe705/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:02:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d547790021e30d21d377d5e577503c1f810d3da0dd41a61507450cf0978fe705/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 18:02:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d547790021e30d21d377d5e577503c1f810d3da0dd41a61507450cf0978fe705/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:02:47 compute-0 podman[193570]: 2025-12-03 18:02:47.927158743 +0000 UTC m=+0.286784725 container init e3f966b7af530284e8f1150428f42386492922a81cb60504ac5ff9afdd4677cb (image=quay.io/ceph/ceph:v18, name=crazy_swirles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  3 18:02:47 compute-0 podman[193570]: 2025-12-03 18:02:47.940495914 +0000 UTC m=+0.300121906 container start e3f966b7af530284e8f1150428f42386492922a81cb60504ac5ff9afdd4677cb (image=quay.io/ceph/ceph:v18, name=crazy_swirles, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:02:47 compute-0 podman[193570]: 2025-12-03 18:02:47.946511699 +0000 UTC m=+0.306137691 container attach e3f966b7af530284e8f1150428f42386492922a81cb60504ac5ff9afdd4677cb (image=quay.io/ceph/ceph:v18, name=crazy_swirles, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:02:48 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-etccde[193087]: 2025-12-03T18:02:48.226+0000 7fb691c2a140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  3 18:02:48 compute-0 ceph-mgr[193091]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  3 18:02:48 compute-0 ceph-mgr[193091]: mgr[py] Loading python module 'volumes'
Dec  3 18:02:48 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Dec  3 18:02:48 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3119661987' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec  3 18:02:48 compute-0 crazy_swirles[193598]: 
Dec  3 18:02:48 compute-0 crazy_swirles[193598]: {
Dec  3 18:02:48 compute-0 crazy_swirles[193598]:    "fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:02:48 compute-0 crazy_swirles[193598]:    "health": {
Dec  3 18:02:48 compute-0 crazy_swirles[193598]:        "status": "HEALTH_OK",
Dec  3 18:02:48 compute-0 crazy_swirles[193598]:        "checks": {},
Dec  3 18:02:48 compute-0 crazy_swirles[193598]:        "mutes": []
Dec  3 18:02:48 compute-0 crazy_swirles[193598]:    },
Dec  3 18:02:48 compute-0 crazy_swirles[193598]:    "election_epoch": 5,
Dec  3 18:02:48 compute-0 crazy_swirles[193598]:    "quorum": [
Dec  3 18:02:48 compute-0 crazy_swirles[193598]:        0
Dec  3 18:02:48 compute-0 crazy_swirles[193598]:    ],
Dec  3 18:02:48 compute-0 crazy_swirles[193598]:    "quorum_names": [
Dec  3 18:02:48 compute-0 crazy_swirles[193598]:        "compute-0"
Dec  3 18:02:48 compute-0 crazy_swirles[193598]:    ],
Dec  3 18:02:48 compute-0 crazy_swirles[193598]:    "quorum_age": 22,
Dec  3 18:02:48 compute-0 crazy_swirles[193598]:    "monmap": {
Dec  3 18:02:48 compute-0 crazy_swirles[193598]:        "epoch": 1,
Dec  3 18:02:48 compute-0 crazy_swirles[193598]:        "min_mon_release_name": "reef",
Dec  3 18:02:48 compute-0 crazy_swirles[193598]:        "num_mons": 1
Dec  3 18:02:48 compute-0 crazy_swirles[193598]:    },
Dec  3 18:02:48 compute-0 crazy_swirles[193598]:    "osdmap": {
Dec  3 18:02:48 compute-0 crazy_swirles[193598]:        "epoch": 1,
Dec  3 18:02:48 compute-0 crazy_swirles[193598]:        "num_osds": 0,
Dec  3 18:02:48 compute-0 crazy_swirles[193598]:        "num_up_osds": 0,
Dec  3 18:02:48 compute-0 crazy_swirles[193598]:        "osd_up_since": 0,
Dec  3 18:02:48 compute-0 crazy_swirles[193598]:        "num_in_osds": 0,
Dec  3 18:02:48 compute-0 crazy_swirles[193598]:        "osd_in_since": 0,
Dec  3 18:02:48 compute-0 crazy_swirles[193598]:        "num_remapped_pgs": 0
Dec  3 18:02:48 compute-0 crazy_swirles[193598]:    },
Dec  3 18:02:48 compute-0 crazy_swirles[193598]:    "pgmap": {
Dec  3 18:02:48 compute-0 crazy_swirles[193598]:        "pgs_by_state": [],
Dec  3 18:02:48 compute-0 crazy_swirles[193598]:        "num_pgs": 0,
Dec  3 18:02:48 compute-0 crazy_swirles[193598]:        "num_pools": 0,
Dec  3 18:02:48 compute-0 crazy_swirles[193598]:        "num_objects": 0,
Dec  3 18:02:48 compute-0 crazy_swirles[193598]:        "data_bytes": 0,
Dec  3 18:02:48 compute-0 crazy_swirles[193598]:        "bytes_used": 0,
Dec  3 18:02:48 compute-0 crazy_swirles[193598]:        "bytes_avail": 0,
Dec  3 18:02:48 compute-0 crazy_swirles[193598]:        "bytes_total": 0
Dec  3 18:02:48 compute-0 crazy_swirles[193598]:    },
Dec  3 18:02:48 compute-0 crazy_swirles[193598]:    "fsmap": {
Dec  3 18:02:48 compute-0 crazy_swirles[193598]:        "epoch": 1,
Dec  3 18:02:48 compute-0 crazy_swirles[193598]:        "by_rank": [],
Dec  3 18:02:48 compute-0 crazy_swirles[193598]:        "up:standby": 0
Dec  3 18:02:48 compute-0 crazy_swirles[193598]:    },
Dec  3 18:02:48 compute-0 crazy_swirles[193598]:    "mgrmap": {
Dec  3 18:02:48 compute-0 crazy_swirles[193598]:        "available": false,
Dec  3 18:02:48 compute-0 crazy_swirles[193598]:        "num_standbys": 0,
Dec  3 18:02:48 compute-0 crazy_swirles[193598]:        "modules": [
Dec  3 18:02:48 compute-0 crazy_swirles[193598]:            "iostat",
Dec  3 18:02:48 compute-0 crazy_swirles[193598]:            "nfs",
Dec  3 18:02:48 compute-0 crazy_swirles[193598]:            "restful"
Dec  3 18:02:48 compute-0 crazy_swirles[193598]:        ],
Dec  3 18:02:48 compute-0 crazy_swirles[193598]:        "services": {}
Dec  3 18:02:48 compute-0 crazy_swirles[193598]:    },
Dec  3 18:02:48 compute-0 crazy_swirles[193598]:    "servicemap": {
Dec  3 18:02:48 compute-0 crazy_swirles[193598]:        "epoch": 1,
Dec  3 18:02:48 compute-0 crazy_swirles[193598]:        "modified": "2025-12-03T18:02:22.594284+0000",
Dec  3 18:02:48 compute-0 crazy_swirles[193598]:        "services": {}
Dec  3 18:02:48 compute-0 crazy_swirles[193598]:    },
Dec  3 18:02:48 compute-0 crazy_swirles[193598]:    "progress_events": {}
Dec  3 18:02:48 compute-0 crazy_swirles[193598]: }
Dec  3 18:02:48 compute-0 systemd[1]: libpod-e3f966b7af530284e8f1150428f42386492922a81cb60504ac5ff9afdd4677cb.scope: Deactivated successfully.
Dec  3 18:02:48 compute-0 podman[193570]: 2025-12-03 18:02:48.361825372 +0000 UTC m=+0.721451374 container died e3f966b7af530284e8f1150428f42386492922a81cb60504ac5ff9afdd4677cb (image=quay.io/ceph/ceph:v18, name=crazy_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  3 18:02:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-d547790021e30d21d377d5e577503c1f810d3da0dd41a61507450cf0978fe705-merged.mount: Deactivated successfully.
Dec  3 18:02:48 compute-0 podman[193570]: 2025-12-03 18:02:48.464120541 +0000 UTC m=+0.823746483 container remove e3f966b7af530284e8f1150428f42386492922a81cb60504ac5ff9afdd4677cb (image=quay.io/ceph/ceph:v18, name=crazy_swirles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  3 18:02:48 compute-0 systemd[1]: libpod-conmon-e3f966b7af530284e8f1150428f42386492922a81cb60504ac5ff9afdd4677cb.scope: Deactivated successfully.
Dec  3 18:02:48 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-etccde[193087]: 2025-12-03T18:02:48.905+0000 7fb691c2a140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  3 18:02:48 compute-0 ceph-mgr[193091]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  3 18:02:48 compute-0 ceph-mgr[193091]: mgr[py] Loading python module 'zabbix'
Dec  3 18:02:49 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-etccde[193087]: 2025-12-03T18:02:49.124+0000 7fb691c2a140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec  3 18:02:49 compute-0 ceph-mgr[193091]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
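[editor's note] The long run of "Module <name> has missing NOTIFY_TYPES member" entries interleaved through this boot appears to be loader noise rather than a failure: each module still loads, and the mgr goes on to activate below. The sketch that follows is illustrative only, not the actual ceph-mgr source; it shows the kind of optional-attribute check that would produce exactly this pattern of one "Loading python module" line followed by a warning when the module declares no NOTIFY_TYPES list.

    # Illustrative only; NOT the real ceph-mgr loader. Sketches an
    # optional-attribute check matching the log pattern above.
    import importlib
    import logging

    log = logging.getLogger("mgr[py]")

    def load_mgr_module(name: str):
        log.info("Loading python module '%s'", name)
        mod = importlib.import_module(name)  # stand-in for the real module loader
        if not hasattr(mod, "NOTIFY_TYPES"):
            # Matches the "-1" severity entries in this log.
            log.warning("Module %s has missing NOTIFY_TYPES member", name)
        return mod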
Dec  3 18:02:49 compute-0 ceph-mgr[193091]: ms_deliver_dispatch: unhandled message 0x55d1e0d251e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Dec  3 18:02:49 compute-0 ceph-mon[192802]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.etccde
Dec  3 18:02:49 compute-0 ceph-mgr[193091]: mgr handle_mgr_map Activating!
Dec  3 18:02:49 compute-0 ceph-mgr[193091]: mgr handle_mgr_map I am now activating
Dec  3 18:02:49 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.etccde(active, starting, since 0.0180794s)
Dec  3 18:02:49 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Dec  3 18:02:49 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3763998011' entity='mgr.compute-0.etccde' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec  3 18:02:49 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).mds e1 all = 1
Dec  3 18:02:49 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Dec  3 18:02:49 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3763998011' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec  3 18:02:49 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Dec  3 18:02:49 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3763998011' entity='mgr.compute-0.etccde' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec  3 18:02:49 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Dec  3 18:02:49 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3763998011' entity='mgr.compute-0.etccde' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec  3 18:02:49 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.etccde", "id": "compute-0.etccde"} v 0) v1
Dec  3 18:02:49 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3763998011' entity='mgr.compute-0.etccde' cmd=[{"prefix": "mgr metadata", "who": "compute-0.etccde", "id": "compute-0.etccde"}]: dispatch
Dec  3 18:02:49 compute-0 ceph-mgr[193091]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 18:02:49 compute-0 ceph-mgr[193091]: mgr load Constructed class from module: balancer
Dec  3 18:02:49 compute-0 ceph-mgr[193091]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 18:02:49 compute-0 ceph-mgr[193091]: mgr load Constructed class from module: crash
Dec  3 18:02:49 compute-0 ceph-mon[192802]: log_channel(cluster) log [INF] : Manager daemon compute-0.etccde is now available
Dec  3 18:02:49 compute-0 ceph-mgr[193091]: [balancer INFO root] Starting
Dec  3 18:02:49 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_18:02:49
Dec  3 18:02:49 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 18:02:49 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 18:02:49 compute-0 ceph-mgr[193091]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 18:02:49 compute-0 ceph-mgr[193091]: mgr load Constructed class from module: devicehealth
Dec  3 18:02:49 compute-0 ceph-mgr[193091]: [balancer INFO root] No pools available
Dec  3 18:02:49 compute-0 ceph-mgr[193091]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 18:02:49 compute-0 ceph-mgr[193091]: mgr load Constructed class from module: iostat
Dec  3 18:02:49 compute-0 ceph-mgr[193091]: [devicehealth INFO root] Starting
Dec  3 18:02:49 compute-0 ceph-mgr[193091]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 18:02:49 compute-0 ceph-mgr[193091]: mgr load Constructed class from module: nfs
Dec  3 18:02:49 compute-0 ceph-mgr[193091]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 18:02:49 compute-0 ceph-mgr[193091]: mgr load Constructed class from module: orchestrator
Dec  3 18:02:49 compute-0 ceph-mgr[193091]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 18:02:49 compute-0 ceph-mgr[193091]: mgr load Constructed class from module: pg_autoscaler
Dec  3 18:02:49 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 18:02:49 compute-0 ceph-mgr[193091]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 18:02:49 compute-0 ceph-mgr[193091]: mgr load Constructed class from module: progress
Dec  3 18:02:49 compute-0 ceph-mgr[193091]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 18:02:49 compute-0 ceph-mgr[193091]: [progress INFO root] Loading...
Dec  3 18:02:49 compute-0 ceph-mgr[193091]: [progress INFO root] No stored events to load
Dec  3 18:02:49 compute-0 ceph-mgr[193091]: [progress INFO root] Loaded [] historic events
Dec  3 18:02:49 compute-0 ceph-mgr[193091]: [progress INFO root] Loaded OSDMap, ready.
Dec  3 18:02:49 compute-0 ceph-mgr[193091]: [rbd_support INFO root] recovery thread starting
Dec  3 18:02:49 compute-0 ceph-mgr[193091]: [rbd_support INFO root] starting setup
Dec  3 18:02:49 compute-0 ceph-mgr[193091]: mgr load Constructed class from module: rbd_support
Dec  3 18:02:49 compute-0 ceph-mgr[193091]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 18:02:49 compute-0 ceph-mgr[193091]: mgr load Constructed class from module: restful
Dec  3 18:02:49 compute-0 ceph-mgr[193091]: [restful INFO root] server_addr: :: server_port: 8003
Dec  3 18:02:49 compute-0 ceph-mgr[193091]: [restful WARNING root] server not running: no certificate configured
Dec  3 18:02:49 compute-0 ceph-mgr[193091]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 18:02:49 compute-0 ceph-mgr[193091]: mgr load Constructed class from module: status
Dec  3 18:02:49 compute-0 ceph-mgr[193091]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 18:02:49 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.etccde/mirror_snapshot_schedule"} v 0) v1
Dec  3 18:02:49 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3763998011' entity='mgr.compute-0.etccde' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.etccde/mirror_snapshot_schedule"}]: dispatch
Dec  3 18:02:49 compute-0 ceph-mgr[193091]: mgr load Constructed class from module: telemetry
Dec  3 18:02:49 compute-0 ceph-mgr[193091]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 18:02:49 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0) v1
Dec  3 18:02:49 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 18:02:49 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec  3 18:02:49 compute-0 ceph-mgr[193091]: [rbd_support INFO root] PerfHandler: starting
Dec  3 18:02:49 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TaskHandler: starting
Dec  3 18:02:49 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3763998011' entity='mgr.compute-0.etccde' 
Dec  3 18:02:49 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.etccde/trash_purge_schedule"} v 0) v1
Dec  3 18:02:49 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3763998011' entity='mgr.compute-0.etccde' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.etccde/trash_purge_schedule"}]: dispatch
Dec  3 18:02:49 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0) v1
Dec  3 18:02:49 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 18:02:49 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec  3 18:02:49 compute-0 ceph-mgr[193091]: [rbd_support INFO root] setup complete
Dec  3 18:02:49 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3763998011' entity='mgr.compute-0.etccde' 
Dec  3 18:02:49 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0) v1
Dec  3 18:02:49 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3763998011' entity='mgr.compute-0.etccde' 
Dec  3 18:02:49 compute-0 ceph-mgr[193091]: mgr load Constructed class from module: volumes
Dec  3 18:02:49 compute-0 ceph-mon[192802]: Activating manager daemon compute-0.etccde
Dec  3 18:02:49 compute-0 ceph-mon[192802]: Manager daemon compute-0.etccde is now available
Dec  3 18:02:49 compute-0 ceph-mon[192802]: from='mgr.14102 192.168.122.100:0/3763998011' entity='mgr.compute-0.etccde' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.etccde/mirror_snapshot_schedule"}]: dispatch
Dec  3 18:02:49 compute-0 ceph-mon[192802]: from='mgr.14102 192.168.122.100:0/3763998011' entity='mgr.compute-0.etccde' 
Dec  3 18:02:49 compute-0 ceph-mon[192802]: from='mgr.14102 192.168.122.100:0/3763998011' entity='mgr.compute-0.etccde' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.etccde/trash_purge_schedule"}]: dispatch
Dec  3 18:02:49 compute-0 ceph-mon[192802]: from='mgr.14102 192.168.122.100:0/3763998011' entity='mgr.compute-0.etccde' 
Dec  3 18:02:49 compute-0 ceph-mon[192802]: from='mgr.14102 192.168.122.100:0/3763998011' entity='mgr.compute-0.etccde' 
Dec  3 18:02:50 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.etccde(active, since 1.2641s)
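[editor's note] The preceding stretch shows the mgr activation handshake: the mon logs "Activating manager daemon compute-0.etccde", the mgr answers "handle_mgr_map Activating!", each enabled module is constructed, and mgrmap e3 finally reports the daemon active. A minimal sketch of how a caller could wait on that same transition, polling the mgrmap.available flag visible in the status dumps above; the command and field come from this log, while the timeout and poll interval are arbitrary choices for the example.

    # Sketch: block until the mgr activation seen above completes, keyed off
    # the same mgrmap.available flag shown in the status dumps in this log.
    import json
    import subprocess
    import time

    def wait_for_active_mgr(timeout: float = 60.0) -> bool:
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            out = subprocess.run(
                ["ceph", "status", "--format", "json"],
                capture_output=True, text=True, check=True,
            ).stdout
            if json.loads(out)["mgrmap"]["available"]:
                return True   # mon has logged "... is now available"
            time.sleep(2)
        return False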
Dec  3 18:02:50 compute-0 podman[193725]: 2025-12-03 18:02:50.598046427 +0000 UTC m=+0.076256455 container create bb154130c8e6e359c087a88e55f58eec35f3a944e7c5f37c763b03a1e233657a (image=quay.io/ceph/ceph:v18, name=crazy_wright, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  3 18:02:50 compute-0 systemd[1]: Started libpod-conmon-bb154130c8e6e359c087a88e55f58eec35f3a944e7c5f37c763b03a1e233657a.scope.
Dec  3 18:02:50 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:02:50 compute-0 podman[193725]: 2025-12-03 18:02:50.576367455 +0000 UTC m=+0.054577533 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:02:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a91dd1126e2dd91b41deb538c42398ed80d23856463627b7f17d1d0e64ba8fc2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:02:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a91dd1126e2dd91b41deb538c42398ed80d23856463627b7f17d1d0e64ba8fc2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:02:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a91dd1126e2dd91b41deb538c42398ed80d23856463627b7f17d1d0e64ba8fc2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 18:02:50 compute-0 podman[193725]: 2025-12-03 18:02:50.696199666 +0000 UTC m=+0.174409714 container init bb154130c8e6e359c087a88e55f58eec35f3a944e7c5f37c763b03a1e233657a (image=quay.io/ceph/ceph:v18, name=crazy_wright, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec  3 18:02:50 compute-0 podman[193725]: 2025-12-03 18:02:50.706629206 +0000 UTC m=+0.184839224 container start bb154130c8e6e359c087a88e55f58eec35f3a944e7c5f37c763b03a1e233657a (image=quay.io/ceph/ceph:v18, name=crazy_wright, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec  3 18:02:50 compute-0 podman[193725]: 2025-12-03 18:02:50.711595916 +0000 UTC m=+0.189805954 container attach bb154130c8e6e359c087a88e55f58eec35f3a944e7c5f37c763b03a1e233657a (image=quay.io/ceph/ceph:v18, name=crazy_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Dec  3 18:02:51 compute-0 ceph-mgr[193091]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  3 18:02:51 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Dec  3 18:02:51 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3081669945' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec  3 18:02:51 compute-0 crazy_wright[193741]: 
Dec  3 18:02:51 compute-0 crazy_wright[193741]: {
Dec  3 18:02:51 compute-0 crazy_wright[193741]:    "fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:02:51 compute-0 crazy_wright[193741]:    "health": {
Dec  3 18:02:51 compute-0 crazy_wright[193741]:        "status": "HEALTH_OK",
Dec  3 18:02:51 compute-0 crazy_wright[193741]:        "checks": {},
Dec  3 18:02:51 compute-0 crazy_wright[193741]:        "mutes": []
Dec  3 18:02:51 compute-0 crazy_wright[193741]:    },
Dec  3 18:02:51 compute-0 crazy_wright[193741]:    "election_epoch": 5,
Dec  3 18:02:51 compute-0 crazy_wright[193741]:    "quorum": [
Dec  3 18:02:51 compute-0 crazy_wright[193741]:        0
Dec  3 18:02:51 compute-0 crazy_wright[193741]:    ],
Dec  3 18:02:51 compute-0 crazy_wright[193741]:    "quorum_names": [
Dec  3 18:02:51 compute-0 crazy_wright[193741]:        "compute-0"
Dec  3 18:02:51 compute-0 crazy_wright[193741]:    ],
Dec  3 18:02:51 compute-0 crazy_wright[193741]:    "quorum_age": 25,
Dec  3 18:02:51 compute-0 crazy_wright[193741]:    "monmap": {
Dec  3 18:02:51 compute-0 crazy_wright[193741]:        "epoch": 1,
Dec  3 18:02:51 compute-0 crazy_wright[193741]:        "min_mon_release_name": "reef",
Dec  3 18:02:51 compute-0 crazy_wright[193741]:        "num_mons": 1
Dec  3 18:02:51 compute-0 crazy_wright[193741]:    },
Dec  3 18:02:51 compute-0 crazy_wright[193741]:    "osdmap": {
Dec  3 18:02:51 compute-0 crazy_wright[193741]:        "epoch": 1,
Dec  3 18:02:51 compute-0 crazy_wright[193741]:        "num_osds": 0,
Dec  3 18:02:51 compute-0 crazy_wright[193741]:        "num_up_osds": 0,
Dec  3 18:02:51 compute-0 crazy_wright[193741]:        "osd_up_since": 0,
Dec  3 18:02:51 compute-0 crazy_wright[193741]:        "num_in_osds": 0,
Dec  3 18:02:51 compute-0 crazy_wright[193741]:        "osd_in_since": 0,
Dec  3 18:02:51 compute-0 crazy_wright[193741]:        "num_remapped_pgs": 0
Dec  3 18:02:51 compute-0 crazy_wright[193741]:    },
Dec  3 18:02:51 compute-0 crazy_wright[193741]:    "pgmap": {
Dec  3 18:02:51 compute-0 crazy_wright[193741]:        "pgs_by_state": [],
Dec  3 18:02:51 compute-0 crazy_wright[193741]:        "num_pgs": 0,
Dec  3 18:02:51 compute-0 crazy_wright[193741]:        "num_pools": 0,
Dec  3 18:02:51 compute-0 crazy_wright[193741]:        "num_objects": 0,
Dec  3 18:02:51 compute-0 crazy_wright[193741]:        "data_bytes": 0,
Dec  3 18:02:51 compute-0 crazy_wright[193741]:        "bytes_used": 0,
Dec  3 18:02:51 compute-0 crazy_wright[193741]:        "bytes_avail": 0,
Dec  3 18:02:51 compute-0 crazy_wright[193741]:        "bytes_total": 0
Dec  3 18:02:51 compute-0 crazy_wright[193741]:    },
Dec  3 18:02:51 compute-0 crazy_wright[193741]:    "fsmap": {
Dec  3 18:02:51 compute-0 crazy_wright[193741]:        "epoch": 1,
Dec  3 18:02:51 compute-0 crazy_wright[193741]:        "by_rank": [],
Dec  3 18:02:51 compute-0 crazy_wright[193741]:        "up:standby": 0
Dec  3 18:02:51 compute-0 crazy_wright[193741]:    },
Dec  3 18:02:51 compute-0 crazy_wright[193741]:    "mgrmap": {
Dec  3 18:02:51 compute-0 crazy_wright[193741]:        "available": true,
Dec  3 18:02:51 compute-0 crazy_wright[193741]:        "num_standbys": 0,
Dec  3 18:02:51 compute-0 crazy_wright[193741]:        "modules": [
Dec  3 18:02:51 compute-0 crazy_wright[193741]:            "iostat",
Dec  3 18:02:51 compute-0 crazy_wright[193741]:            "nfs",
Dec  3 18:02:51 compute-0 crazy_wright[193741]:            "restful"
Dec  3 18:02:51 compute-0 crazy_wright[193741]:        ],
Dec  3 18:02:51 compute-0 crazy_wright[193741]:        "services": {}
Dec  3 18:02:51 compute-0 crazy_wright[193741]:    },
Dec  3 18:02:51 compute-0 crazy_wright[193741]:    "servicemap": {
Dec  3 18:02:51 compute-0 crazy_wright[193741]:        "epoch": 1,
Dec  3 18:02:51 compute-0 crazy_wright[193741]:        "modified": "2025-12-03T18:02:22.594284+0000",
Dec  3 18:02:51 compute-0 crazy_wright[193741]:        "services": {}
Dec  3 18:02:51 compute-0 crazy_wright[193741]:    },
Dec  3 18:02:51 compute-0 crazy_wright[193741]:    "progress_events": {}
Dec  3 18:02:51 compute-0 crazy_wright[193741]: }
Dec  3 18:02:51 compute-0 systemd[1]: libpod-bb154130c8e6e359c087a88e55f58eec35f3a944e7c5f37c763b03a1e233657a.scope: Deactivated successfully.
Dec  3 18:02:51 compute-0 podman[193725]: 2025-12-03 18:02:51.37903206 +0000 UTC m=+0.857242088 container died bb154130c8e6e359c087a88e55f58eec35f3a944e7c5f37c763b03a1e233657a (image=quay.io/ceph/ceph:v18, name=crazy_wright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:02:51 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.etccde(active, since 2s)
Dec  3 18:02:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-a91dd1126e2dd91b41deb538c42398ed80d23856463627b7f17d1d0e64ba8fc2-merged.mount: Deactivated successfully.
Dec  3 18:02:51 compute-0 podman[193725]: 2025-12-03 18:02:51.466250396 +0000 UTC m=+0.944460444 container remove bb154130c8e6e359c087a88e55f58eec35f3a944e7c5f37c763b03a1e233657a (image=quay.io/ceph/ceph:v18, name=crazy_wright, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:02:51 compute-0 systemd[1]: libpod-conmon-bb154130c8e6e359c087a88e55f58eec35f3a944e7c5f37c763b03a1e233657a.scope: Deactivated successfully.
Dec  3 18:02:51 compute-0 podman[193778]: 2025-12-03 18:02:51.569020086 +0000 UTC m=+0.073722672 container create 5d4b0439305a666334496ec4f6a62c7fa9c4f8e55f3614c0e3d5e81b8bbbc5d7 (image=quay.io/ceph/ceph:v18, name=silly_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:02:51 compute-0 systemd[1]: Started libpod-conmon-5d4b0439305a666334496ec4f6a62c7fa9c4f8e55f3614c0e3d5e81b8bbbc5d7.scope.
Dec  3 18:02:51 compute-0 podman[193778]: 2025-12-03 18:02:51.534318212 +0000 UTC m=+0.039020898 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:02:51 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:02:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a0bcddbfbedf5c5c75c0ba4d7f0a8885ef6d07bceac0093ca05e0fdd8ad3d00/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:02:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a0bcddbfbedf5c5c75c0ba4d7f0a8885ef6d07bceac0093ca05e0fdd8ad3d00/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:02:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a0bcddbfbedf5c5c75c0ba4d7f0a8885ef6d07bceac0093ca05e0fdd8ad3d00/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 18:02:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a0bcddbfbedf5c5c75c0ba4d7f0a8885ef6d07bceac0093ca05e0fdd8ad3d00/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:02:51 compute-0 podman[193778]: 2025-12-03 18:02:51.691019879 +0000 UTC m=+0.195722505 container init 5d4b0439305a666334496ec4f6a62c7fa9c4f8e55f3614c0e3d5e81b8bbbc5d7 (image=quay.io/ceph/ceph:v18, name=silly_mclaren, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:02:51 compute-0 podman[193778]: 2025-12-03 18:02:51.72310915 +0000 UTC m=+0.227811776 container start 5d4b0439305a666334496ec4f6a62c7fa9c4f8e55f3614c0e3d5e81b8bbbc5d7 (image=quay.io/ceph/ceph:v18, name=silly_mclaren, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:02:51 compute-0 podman[193778]: 2025-12-03 18:02:51.730347874 +0000 UTC m=+0.235050550 container attach 5d4b0439305a666334496ec4f6a62c7fa9c4f8e55f3614c0e3d5e81b8bbbc5d7 (image=quay.io/ceph/ceph:v18, name=silly_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  3 18:02:52 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Dec  3 18:02:52 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3209676313' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec  3 18:02:52 compute-0 systemd[1]: libpod-5d4b0439305a666334496ec4f6a62c7fa9c4f8e55f3614c0e3d5e81b8bbbc5d7.scope: Deactivated successfully.
Dec  3 18:02:52 compute-0 podman[193778]: 2025-12-03 18:02:52.331104846 +0000 UTC m=+0.835807442 container died 5d4b0439305a666334496ec4f6a62c7fa9c4f8e55f3614c0e3d5e81b8bbbc5d7 (image=quay.io/ceph/ceph:v18, name=silly_mclaren, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec  3 18:02:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-2a0bcddbfbedf5c5c75c0ba4d7f0a8885ef6d07bceac0093ca05e0fdd8ad3d00-merged.mount: Deactivated successfully.
Dec  3 18:02:52 compute-0 podman[193778]: 2025-12-03 18:02:52.390730529 +0000 UTC m=+0.895433135 container remove 5d4b0439305a666334496ec4f6a62c7fa9c4f8e55f3614c0e3d5e81b8bbbc5d7 (image=quay.io/ceph/ceph:v18, name=silly_mclaren, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:02:52 compute-0 systemd[1]: libpod-conmon-5d4b0439305a666334496ec4f6a62c7fa9c4f8e55f3614c0e3d5e81b8bbbc5d7.scope: Deactivated successfully.
Dec  3 18:02:52 compute-0 ceph-mon[192802]: from='client.? 192.168.122.100:0/3209676313' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec  3 18:02:52 compute-0 podman[193831]: 2025-12-03 18:02:52.496242736 +0000 UTC m=+0.073917528 container create 44e678db8837016deb01a53244764a459abf2414a51af996ec6452077e804e04 (image=quay.io/ceph/ceph:v18, name=vigorous_napier, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:02:52 compute-0 podman[193831]: 2025-12-03 18:02:52.465711842 +0000 UTC m=+0.043386714 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:02:52 compute-0 systemd[1]: Started libpod-conmon-44e678db8837016deb01a53244764a459abf2414a51af996ec6452077e804e04.scope.
Dec  3 18:02:52 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:02:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cd5418e748f8236d4a2dd6524fb351a05862396049496a76bd4c6c3b7f88709/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:02:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cd5418e748f8236d4a2dd6524fb351a05862396049496a76bd4c6c3b7f88709/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 18:02:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cd5418e748f8236d4a2dd6524fb351a05862396049496a76bd4c6c3b7f88709/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:02:52 compute-0 podman[193831]: 2025-12-03 18:02:52.677343929 +0000 UTC m=+0.255018821 container init 44e678db8837016deb01a53244764a459abf2414a51af996ec6452077e804e04 (image=quay.io/ceph/ceph:v18, name=vigorous_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  3 18:02:52 compute-0 podman[193831]: 2025-12-03 18:02:52.686357525 +0000 UTC m=+0.264032347 container start 44e678db8837016deb01a53244764a459abf2414a51af996ec6452077e804e04 (image=quay.io/ceph/ceph:v18, name=vigorous_napier, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:02:52 compute-0 podman[193831]: 2025-12-03 18:02:52.694147853 +0000 UTC m=+0.271822645 container attach 44e678db8837016deb01a53244764a459abf2414a51af996ec6452077e804e04 (image=quay.io/ceph/ceph:v18, name=vigorous_napier, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:02:53 compute-0 ceph-mgr[193091]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  3 18:02:53 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0) v1
Dec  3 18:02:53 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3162284787' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Dec  3 18:02:53 compute-0 ceph-mon[192802]: from='client.? 192.168.122.100:0/3162284787' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Dec  3 18:02:53 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3162284787' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Dec  3 18:02:53 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.etccde(active, since 4s)
Dec  3 18:02:53 compute-0 systemd[1]: libpod-44e678db8837016deb01a53244764a459abf2414a51af996ec6452077e804e04.scope: Deactivated successfully.
Dec  3 18:02:53 compute-0 podman[193831]: 2025-12-03 18:02:53.473078286 +0000 UTC m=+1.050753088 container died 44e678db8837016deb01a53244764a459abf2414a51af996ec6452077e804e04 (image=quay.io/ceph/ceph:v18, name=vigorous_napier, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec  3 18:02:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-6cd5418e748f8236d4a2dd6524fb351a05862396049496a76bd4c6c3b7f88709-merged.mount: Deactivated successfully.
Dec  3 18:02:53 compute-0 podman[193831]: 2025-12-03 18:02:53.536783718 +0000 UTC m=+1.114458490 container remove 44e678db8837016deb01a53244764a459abf2414a51af996ec6452077e804e04 (image=quay.io/ceph/ceph:v18, name=vigorous_napier, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:02:53 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-etccde[193087]: ignoring --setuser ceph since I am not root
Dec  3 18:02:53 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-etccde[193087]: ignoring --setgroup ceph since I am not root
Dec  3 18:02:53 compute-0 systemd[1]: libpod-conmon-44e678db8837016deb01a53244764a459abf2414a51af996ec6452077e804e04.scope: Deactivated successfully.
Dec  3 18:02:53 compute-0 ceph-mgr[193091]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Dec  3 18:02:53 compute-0 ceph-mgr[193091]: pidfile_write: ignore empty --pid-file
Dec  3 18:02:53 compute-0 podman[193891]: 2025-12-03 18:02:53.612692533 +0000 UTC m=+0.054420640 container create 99e070bb469c91f63349a4cf4e27165ec8bf75eab999c4c4f5b9cd6995ff253a (image=quay.io/ceph/ceph:v18, name=optimistic_colden, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:02:53 compute-0 systemd[1]: Started libpod-conmon-99e070bb469c91f63349a4cf4e27165ec8bf75eab999c4c4f5b9cd6995ff253a.scope.
Dec  3 18:02:53 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:02:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/914d991ce59f6bc327f10655dab32e837285e0b5f44fe7550adcee6818c04191/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:02:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/914d991ce59f6bc327f10655dab32e837285e0b5f44fe7550adcee6818c04191/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 18:02:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/914d991ce59f6bc327f10655dab32e837285e0b5f44fe7550adcee6818c04191/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:02:53 compute-0 podman[193891]: 2025-12-03 18:02:53.593938291 +0000 UTC m=+0.035666428 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:02:53 compute-0 ceph-mgr[193091]: mgr[py] Loading python module 'alerts'
Dec  3 18:02:53 compute-0 podman[193891]: 2025-12-03 18:02:53.71369614 +0000 UTC m=+0.155424297 container init 99e070bb469c91f63349a4cf4e27165ec8bf75eab999c4c4f5b9cd6995ff253a (image=quay.io/ceph/ceph:v18, name=optimistic_colden, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:02:53 compute-0 podman[193891]: 2025-12-03 18:02:53.72076079 +0000 UTC m=+0.162488897 container start 99e070bb469c91f63349a4cf4e27165ec8bf75eab999c4c4f5b9cd6995ff253a (image=quay.io/ceph/ceph:v18, name=optimistic_colden, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  3 18:02:53 compute-0 podman[193891]: 2025-12-03 18:02:53.725280709 +0000 UTC m=+0.167008856 container attach 99e070bb469c91f63349a4cf4e27165ec8bf75eab999c4c4f5b9cd6995ff253a (image=quay.io/ceph/ceph:v18, name=optimistic_colden, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  3 18:02:54 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-etccde[193087]: 2025-12-03T18:02:54.038+0000 7ff72018d140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  3 18:02:54 compute-0 ceph-mgr[193091]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  3 18:02:54 compute-0 ceph-mgr[193091]: mgr[py] Loading python module 'balancer'
Dec  3 18:02:54 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-etccde[193087]: 2025-12-03T18:02:54.310+0000 7ff72018d140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  3 18:02:54 compute-0 ceph-mgr[193091]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  3 18:02:54 compute-0 ceph-mgr[193091]: mgr[py] Loading python module 'cephadm'
Dec  3 18:02:54 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Dec  3 18:02:54 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4228946892' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec  3 18:02:54 compute-0 optimistic_colden[193922]: {
Dec  3 18:02:54 compute-0 optimistic_colden[193922]:    "epoch": 5,
Dec  3 18:02:54 compute-0 optimistic_colden[193922]:    "available": true,
Dec  3 18:02:54 compute-0 optimistic_colden[193922]:    "active_name": "compute-0.etccde",
Dec  3 18:02:54 compute-0 optimistic_colden[193922]:    "num_standby": 0
Dec  3 18:02:54 compute-0 optimistic_colden[193922]: }
Dec  3 18:02:54 compute-0 systemd[1]: libpod-99e070bb469c91f63349a4cf4e27165ec8bf75eab999c4c4f5b9cd6995ff253a.scope: Deactivated successfully.
Dec  3 18:02:54 compute-0 conmon[193922]: conmon 99e070bb469c91f63349 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-99e070bb469c91f63349a4cf4e27165ec8bf75eab999c4c4f5b9cd6995ff253a.scope/container/memory.events
Dec  3 18:02:54 compute-0 podman[193891]: 2025-12-03 18:02:54.370584711 +0000 UTC m=+0.812312888 container died 99e070bb469c91f63349a4cf4e27165ec8bf75eab999c4c4f5b9cd6995ff253a (image=quay.io/ceph/ceph:v18, name=optimistic_colden, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True)
Dec  3 18:02:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-914d991ce59f6bc327f10655dab32e837285e0b5f44fe7550adcee6818c04191-merged.mount: Deactivated successfully.
Dec  3 18:02:54 compute-0 podman[193891]: 2025-12-03 18:02:54.440473401 +0000 UTC m=+0.882201508 container remove 99e070bb469c91f63349a4cf4e27165ec8bf75eab999c4c4f5b9cd6995ff253a (image=quay.io/ceph/ceph:v18, name=optimistic_colden, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:02:54 compute-0 ceph-mon[192802]: from='client.? 192.168.122.100:0/3162284787' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Dec  3 18:02:54 compute-0 systemd[1]: libpod-conmon-99e070bb469c91f63349a4cf4e27165ec8bf75eab999c4c4f5b9cd6995ff253a.scope: Deactivated successfully.
Dec  3 18:02:54 compute-0 podman[193958]: 2025-12-03 18:02:54.542216217 +0000 UTC m=+0.070855105 container create 281ebe6ad5ab5a6ce4730eb1398548e4df8b4399eb3225942244d8c0ae560136 (image=quay.io/ceph/ceph:v18, name=nice_moser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:02:54 compute-0 systemd[1]: Started libpod-conmon-281ebe6ad5ab5a6ce4730eb1398548e4df8b4399eb3225942244d8c0ae560136.scope.
Dec  3 18:02:54 compute-0 podman[193958]: 2025-12-03 18:02:54.510421112 +0000 UTC m=+0.039059980 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:02:54 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:02:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64bbbac3b7db2bc19b47d33993e417542ff11819410c074cfc45c206e76e6731/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:02:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64bbbac3b7db2bc19b47d33993e417542ff11819410c074cfc45c206e76e6731/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:02:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64bbbac3b7db2bc19b47d33993e417542ff11819410c074cfc45c206e76e6731/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 18:02:54 compute-0 podman[193958]: 2025-12-03 18:02:54.668701707 +0000 UTC m=+0.197340655 container init 281ebe6ad5ab5a6ce4730eb1398548e4df8b4399eb3225942244d8c0ae560136 (image=quay.io/ceph/ceph:v18, name=nice_moser, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec  3 18:02:54 compute-0 podman[193958]: 2025-12-03 18:02:54.688230576 +0000 UTC m=+0.216869444 container start 281ebe6ad5ab5a6ce4730eb1398548e4df8b4399eb3225942244d8c0ae560136 (image=quay.io/ceph/ceph:v18, name=nice_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec  3 18:02:54 compute-0 podman[193958]: 2025-12-03 18:02:54.694095477 +0000 UTC m=+0.222734375 container attach 281ebe6ad5ab5a6ce4730eb1398548e4df8b4399eb3225942244d8c0ae560136 (image=quay.io/ceph/ceph:v18, name=nice_moser, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:02:56 compute-0 ceph-mgr[193091]: mgr[py] Loading python module 'crash'
Dec  3 18:02:56 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-etccde[193087]: 2025-12-03T18:02:56.689+0000 7ff72018d140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  3 18:02:56 compute-0 ceph-mgr[193091]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  3 18:02:56 compute-0 ceph-mgr[193091]: mgr[py] Loading python module 'dashboard'
Dec  3 18:02:58 compute-0 ceph-mgr[193091]: mgr[py] Loading python module 'devicehealth'
Dec  3 18:02:58 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-etccde[193087]: 2025-12-03T18:02:58.339+0000 7ff72018d140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  3 18:02:58 compute-0 ceph-mgr[193091]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  3 18:02:58 compute-0 ceph-mgr[193091]: mgr[py] Loading python module 'diskprediction_local'
Dec  3 18:02:58 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-etccde[193087]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec  3 18:02:58 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-etccde[193087]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec  3 18:02:58 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-etccde[193087]:  from numpy import show_config as show_numpy_config
Dec  3 18:02:58 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-etccde[193087]: 2025-12-03T18:02:58.881+0000 7ff72018d140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  3 18:02:58 compute-0 ceph-mgr[193091]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  3 18:02:58 compute-0 ceph-mgr[193091]: mgr[py] Loading python module 'influx'
Dec  3 18:02:59 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-etccde[193087]: 2025-12-03T18:02:59.137+0000 7ff72018d140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  3 18:02:59 compute-0 ceph-mgr[193091]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  3 18:02:59 compute-0 ceph-mgr[193091]: mgr[py] Loading python module 'insights'
Dec  3 18:02:59 compute-0 ceph-mgr[193091]: mgr[py] Loading python module 'iostat'
Dec  3 18:02:59 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-etccde[193087]: 2025-12-03T18:02:59.680+0000 7ff72018d140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  3 18:02:59 compute-0 ceph-mgr[193091]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  3 18:02:59 compute-0 ceph-mgr[193091]: mgr[py] Loading python module 'k8sevents'
Dec  3 18:02:59 compute-0 podman[158200]: time="2025-12-03T18:02:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:02:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:02:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 23480 "" "Go-http-client/1.1"
Dec  3 18:02:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:02:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4338 "" "Go-http-client/1.1"
Dec  3 18:03:01 compute-0 openstack_network_exporter[160319]: ERROR   18:03:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:03:01 compute-0 openstack_network_exporter[160319]: ERROR   18:03:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:03:01 compute-0 openstack_network_exporter[160319]: ERROR   18:03:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:03:01 compute-0 openstack_network_exporter[160319]: ERROR   18:03:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:03:01 compute-0 openstack_network_exporter[160319]: 
Dec  3 18:03:01 compute-0 openstack_network_exporter[160319]: ERROR   18:03:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:03:01 compute-0 openstack_network_exporter[160319]: 
Dec  3 18:03:01 compute-0 ceph-mgr[193091]: mgr[py] Loading python module 'localpool'
Dec  3 18:03:01 compute-0 ceph-mgr[193091]: mgr[py] Loading python module 'mds_autoscaler'
Dec  3 18:03:01 compute-0 podman[194010]: 2025-12-03 18:03:01.961637035 +0000 UTC m=+0.114737439 container health_status 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  3 18:03:02 compute-0 ceph-mgr[193091]: mgr[py] Loading python module 'mirroring'
Dec  3 18:03:02 compute-0 ceph-mgr[193091]: mgr[py] Loading python module 'nfs'
Dec  3 18:03:03 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-etccde[193087]: 2025-12-03T18:03:03.447+0000 7ff72018d140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  3 18:03:03 compute-0 ceph-mgr[193091]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  3 18:03:03 compute-0 ceph-mgr[193091]: mgr[py] Loading python module 'orchestrator'
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.697 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.698 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.698 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f567b8800>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.699 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f3f52673fe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.700 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f562c3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f567b8800>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.701 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f567b8800>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.701 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f567b8800>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.701 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f526739b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f567b8800>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.701 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f567b8800>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.702 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673a40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f567b8800>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.702 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52671a60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f567b8800>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.702 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673a70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f567b8800>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.702 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f567b8800>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.703 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f567b8800>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.703 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f562d33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f567b8800>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.703 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f526733b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f567b8800>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.703 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f567b8800>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f526734d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f567b8800>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f565c04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f567b8800>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f567b8800>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f567b8800>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f567b8800>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.705 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f526735f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f567b8800>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.705 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f567b8800>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.705 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f526736b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f567b8800>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.707 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f567b8800>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.708 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f567b8800>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.708 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f567b8800>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.708 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f567b8800>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.707 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.709 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f3f5271c620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.709 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.710 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f3f5271c0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.710 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.710 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f3f5271c140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.710 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.711 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f3f52673980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.711 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.711 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f3f5271c1d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.711 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.711 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f3f52673a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.711 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.712 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f3f52672390>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.712 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.712 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f3f526739e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.713 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.713 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f3f5271c260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.713 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.713 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f3f5271c2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.713 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.713 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f3f52671ca0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.713 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.714 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f3f52673470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.714 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.714 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f3f5271c380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.714 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.714 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f3f526734a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.715 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.715 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f3f52671a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.715 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.715 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f3f52673ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.715 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.715 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f3f52673500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.716 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.716 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f3f52673560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.716 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.716 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f3f526735c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.716 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.717 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f3f52673620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.717 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.717 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f3f52673680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.717 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.717 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f3f526736e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.718 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.718 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f3f52673f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.718 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.718 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f3f52673740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.718 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.719 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f3f52673f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.719 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.719 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.720 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.720 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.720 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.720 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.720 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.720 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.720 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.721 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.721 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.721 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.721 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.721 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.721 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.721 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.721 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.722 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.722 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.722 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.722 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.722 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.722 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.722 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.723 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.723 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:03:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:03:03.723 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
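The ceilometer block above is one idle polling cycle: every compute pollster (cpu, memory.usage, disk.device.*, network.*) shares the local_instances discovery method, the agent carries a per-cycle discovery cache (visible as {'local_instances': []} in the registration lines), and each pollster is skipped because the hypervisor reports no running instances. A minimal sketch of that discover-then-skip loop, with illustrative names only (run_polling_cycle and the lambda below are a simplified model, not ceilometer's actual code):

def run_polling_cycle(pollsters, discovery_methods):
    discovery_cache = {}  # one cache per cycle, as in the registration lines
    for meter, method in pollsters:
        # "Executing discovery process for pollsters [...]"
        if method not in discovery_cache:
            discovery_cache[method] = discovery_methods[method]()
        resources = discovery_cache[method]
        if not resources:
            # "Skip pollster <meter>, no resources found this cycle"
            print(f"Skip pollster {meter}, no resources found this cycle")
            continue
        print(f"Polling {meter} on {len(resources)} resources")

# An empty compute node, matching the log: discovery returns no instances.
run_polling_cycle(
    [("cpu", "local_instances"), ("memory.usage", "local_instances")],
    {"local_instances": lambda: []},
)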
Dec  3 18:03:04 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-etccde[193087]: 2025-12-03T18:03:04.144+0000 7ff72018d140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  3 18:03:04 compute-0 ceph-mgr[193091]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  3 18:03:04 compute-0 ceph-mgr[193091]: mgr[py] Loading python module 'osd_perf_query'
Dec  3 18:03:04 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-etccde[193087]: 2025-12-03T18:03:04.413+0000 7ff72018d140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  3 18:03:04 compute-0 ceph-mgr[193091]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  3 18:03:04 compute-0 ceph-mgr[193091]: mgr[py] Loading python module 'osd_support'
Dec  3 18:03:04 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-etccde[193087]: 2025-12-03T18:03:04.663+0000 7ff72018d140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  3 18:03:04 compute-0 ceph-mgr[193091]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  3 18:03:04 compute-0 ceph-mgr[193091]: mgr[py] Loading python module 'pg_autoscaler'
Dec  3 18:03:04 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-etccde[193087]: 2025-12-03T18:03:04.933+0000 7ff72018d140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  3 18:03:04 compute-0 ceph-mgr[193091]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  3 18:03:04 compute-0 ceph-mgr[193091]: mgr[py] Loading python module 'progress'
Dec  3 18:03:05 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-etccde[193087]: 2025-12-03T18:03:05.189+0000 7ff72018d140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  3 18:03:05 compute-0 ceph-mgr[193091]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  3 18:03:05 compute-0 ceph-mgr[193091]: mgr[py] Loading python module 'prometheus'
Dec  3 18:03:06 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-etccde[193087]: 2025-12-03T18:03:06.244+0000 7ff72018d140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  3 18:03:06 compute-0 ceph-mgr[193091]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  3 18:03:06 compute-0 ceph-mgr[193091]: mgr[py] Loading python module 'rbd_support'
Dec  3 18:03:06 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-etccde[193087]: 2025-12-03T18:03:06.552+0000 7ff72018d140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  3 18:03:06 compute-0 ceph-mgr[193091]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  3 18:03:06 compute-0 ceph-mgr[193091]: mgr[py] Loading python module 'restful'
Dec  3 18:03:07 compute-0 ceph-mgr[193091]: mgr[py] Loading python module 'rgw'
Dec  3 18:03:07 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-etccde[193087]: 2025-12-03T18:03:07.912+0000 7ff72018d140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  3 18:03:07 compute-0 ceph-mgr[193091]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  3 18:03:07 compute-0 ceph-mgr[193091]: mgr[py] Loading python module 'rook'
Dec  3 18:03:10 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-etccde[193087]: 2025-12-03T18:03:10.088+0000 7ff72018d140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  3 18:03:10 compute-0 ceph-mgr[193091]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  3 18:03:10 compute-0 ceph-mgr[193091]: mgr[py] Loading python module 'selftest'
Dec  3 18:03:10 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-etccde[193087]: 2025-12-03T18:03:10.328+0000 7ff72018d140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  3 18:03:10 compute-0 ceph-mgr[193091]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  3 18:03:10 compute-0 ceph-mgr[193091]: mgr[py] Loading python module 'snap_schedule'
Dec  3 18:03:10 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-etccde[193087]: 2025-12-03T18:03:10.598+0000 7ff72018d140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  3 18:03:10 compute-0 ceph-mgr[193091]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  3 18:03:10 compute-0 ceph-mgr[193091]: mgr[py] Loading python module 'stats'
Dec  3 18:03:10 compute-0 ceph-mgr[193091]: mgr[py] Loading python module 'status'
Dec  3 18:03:11 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-etccde[193087]: 2025-12-03T18:03:11.116+0000 7ff72018d140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec  3 18:03:11 compute-0 ceph-mgr[193091]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec  3 18:03:11 compute-0 ceph-mgr[193091]: mgr[py] Loading python module 'telegraf'
Dec  3 18:03:11 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-etccde[193087]: 2025-12-03T18:03:11.382+0000 7ff72018d140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  3 18:03:11 compute-0 ceph-mgr[193091]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  3 18:03:11 compute-0 ceph-mgr[193091]: mgr[py] Loading python module 'telemetry'
Dec  3 18:03:12 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-etccde[193087]: 2025-12-03T18:03:12.058+0000 7ff72018d140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  3 18:03:12 compute-0 ceph-mgr[193091]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  3 18:03:12 compute-0 ceph-mgr[193091]: mgr[py] Loading python module 'test_orchestrator'
Dec  3 18:03:12 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-etccde[193087]: 2025-12-03T18:03:12.761+0000 7ff72018d140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  3 18:03:12 compute-0 ceph-mgr[193091]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  3 18:03:12 compute-0 ceph-mgr[193091]: mgr[py] Loading python module 'volumes'
Dec  3 18:03:13 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-etccde[193087]: 2025-12-03T18:03:13.443+0000 7ff72018d140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  3 18:03:13 compute-0 ceph-mgr[193091]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  3 18:03:13 compute-0 ceph-mgr[193091]: mgr[py] Loading python module 'zabbix'
Dec  3 18:03:13 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-etccde[193087]: 2025-12-03T18:03:13.703+0000 7ff72018d140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec  3 18:03:13 compute-0 ceph-mgr[193091]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
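The repeated "-1 mgr[py] Module <name> has missing NOTIFY_TYPES member" lines from the containerized mgr are load-time warnings, not failures: ceph-mgr looks for a NOTIFY_TYPES attribute on each Python module to learn which cluster notifications to route to it, and a module that omits the attribute still loads (each warning is immediately followed by the next "Loading python module" line). A simplified stand-in for that check, assuming nothing beyond the behavior visible here (MgrModuleStub and the sample classes are hypothetical):

class MgrModuleStub:
    """Stand-in for ceph's mgr_module.MgrModule base class."""

class Zabbix(MgrModuleStub):
    pass  # no NOTIFY_TYPES -> warning at load time, module loads anyway

class Balancer(MgrModuleStub):
    NOTIFY_TYPES = ["osd_map", "pg_summary"]  # illustrative values

def load_module(cls):
    if not hasattr(cls, "NOTIFY_TYPES"):
        print(f"mgr[py] Module {cls.__name__.lower()} has missing "
              "NOTIFY_TYPES member")
    # Non-fatal: the class is still constructed and registered.
    return cls()

for mod in (Zabbix, Balancer):
    load_module(mod)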
Dec  3 18:03:13 compute-0 ceph-mgr[193091]: ms_deliver_dispatch: unhandled message 0x55589f4d31e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Dec  3 18:03:13 compute-0 ceph-mon[192802]: log_channel(cluster) log [INF] : Active manager daemon compute-0.etccde restarted
Dec  3 18:03:13 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Dec  3 18:03:13 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  3 18:03:13 compute-0 ceph-mon[192802]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.etccde
Dec  3 18:03:13 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Dec  3 18:03:13 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Dec  3 18:03:13 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Dec  3 18:03:13 compute-0 ceph-mgr[193091]: mgr handle_mgr_map Activating!
Dec  3 18:03:13 compute-0 ceph-mgr[193091]: mgr handle_mgr_map I am now activating
Dec  3 18:03:13 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Dec  3 18:03:13 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.etccde(active, starting, since 0.0376875s)
Dec  3 18:03:13 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Dec  3 18:03:13 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec  3 18:03:13 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.etccde", "id": "compute-0.etccde"} v 0) v1
Dec  3 18:03:13 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "mgr metadata", "who": "compute-0.etccde", "id": "compute-0.etccde"}]: dispatch
Dec  3 18:03:13 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Dec  3 18:03:13 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec  3 18:03:13 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).mds e1 all = 1
Dec  3 18:03:13 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Dec  3 18:03:13 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec  3 18:03:13 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Dec  3 18:03:13 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec  3 18:03:13 compute-0 ceph-mgr[193091]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 18:03:13 compute-0 ceph-mgr[193091]: mgr load Constructed class from module: balancer
Dec  3 18:03:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Starting
Dec  3 18:03:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_18:03:13
Dec  3 18:03:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 18:03:13 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 18:03:13 compute-0 ceph-mgr[193091]: [balancer INFO root] No pools available
Dec  3 18:03:13 compute-0 ceph-mon[192802]: log_channel(cluster) log [INF] : Manager daemon compute-0.etccde is now available
Dec  3 18:03:13 compute-0 ceph-mgr[193091]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 18:03:13 compute-0 ceph-mon[192802]: Active manager daemon compute-0.etccde restarted
Dec  3 18:03:13 compute-0 ceph-mon[192802]: Activating manager daemon compute-0.etccde
Dec  3 18:03:13 compute-0 ceph-mon[192802]: Manager daemon compute-0.etccde is now available
Dec  3 18:03:13 compute-0 ceph-mgr[193091]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Dec  3 18:03:13 compute-0 ceph-mgr[193091]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Dec  3 18:03:13 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0) v1
Dec  3 18:03:13 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:13 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0) v1
Dec  3 18:03:13 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:13 compute-0 ceph-mgr[193091]: mgr load Constructed class from module: cephadm
Dec  3 18:03:13 compute-0 ceph-mgr[193091]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 18:03:13 compute-0 ceph-mgr[193091]: mgr load Constructed class from module: crash
Dec  3 18:03:13 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Dec  3 18:03:13 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec  3 18:03:13 compute-0 ceph-mgr[193091]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 18:03:13 compute-0 ceph-mgr[193091]: mgr load Constructed class from module: devicehealth
Dec  3 18:03:13 compute-0 ceph-mgr[193091]: [devicehealth INFO root] Starting
Dec  3 18:03:13 compute-0 ceph-mgr[193091]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 18:03:13 compute-0 ceph-mgr[193091]: mgr load Constructed class from module: iostat
Dec  3 18:03:13 compute-0 ceph-mgr[193091]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 18:03:13 compute-0 ceph-mgr[193091]: mgr load Constructed class from module: nfs
Dec  3 18:03:13 compute-0 ceph-mgr[193091]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 18:03:13 compute-0 ceph-mgr[193091]: mgr load Constructed class from module: orchestrator
Dec  3 18:03:13 compute-0 ceph-mgr[193091]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 18:03:13 compute-0 ceph-mgr[193091]: mgr load Constructed class from module: pg_autoscaler
Dec  3 18:03:13 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 18:03:13 compute-0 ceph-mgr[193091]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 18:03:13 compute-0 ceph-mgr[193091]: mgr load Constructed class from module: progress
Dec  3 18:03:13 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Dec  3 18:03:13 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec  3 18:03:13 compute-0 ceph-mgr[193091]: [progress INFO root] Loading...
Dec  3 18:03:13 compute-0 ceph-mgr[193091]: [progress INFO root] No stored events to load
Dec  3 18:03:13 compute-0 ceph-mgr[193091]: [progress INFO root] Loaded [] historic events
Dec  3 18:03:13 compute-0 ceph-mgr[193091]: [progress INFO root] Loaded OSDMap, ready.
Dec  3 18:03:13 compute-0 ceph-mgr[193091]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 18:03:13 compute-0 ceph-mgr[193091]: [rbd_support INFO root] recovery thread starting
Dec  3 18:03:13 compute-0 ceph-mgr[193091]: [rbd_support INFO root] starting setup
Dec  3 18:03:13 compute-0 ceph-mgr[193091]: mgr load Constructed class from module: rbd_support
Dec  3 18:03:13 compute-0 ceph-mgr[193091]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 18:03:13 compute-0 ceph-mgr[193091]: mgr load Constructed class from module: restful
Dec  3 18:03:13 compute-0 ceph-mgr[193091]: [restful INFO root] server_addr: :: server_port: 8003
Dec  3 18:03:13 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.etccde/mirror_snapshot_schedule"} v 0) v1
Dec  3 18:03:13 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.etccde/mirror_snapshot_schedule"}]: dispatch
Dec  3 18:03:13 compute-0 ceph-mgr[193091]: [restful WARNING root] server not running: no certificate configured
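The restful warning is the one actionable item in this mgr startup: the module constructed and bound its defaults (server_addr :: and server_port 8003, two lines up) but will not serve requests until a TLS certificate is configured — typically done with the restful module's documented command, which does not appear in this log:

    ceph restful create-self-signed-cert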
Dec  3 18:03:13 compute-0 ceph-mgr[193091]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 18:03:13 compute-0 ceph-mgr[193091]: mgr load Constructed class from module: status
Dec  3 18:03:13 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 18:03:13 compute-0 ceph-mgr[193091]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 18:03:13 compute-0 ceph-mgr[193091]: mgr load Constructed class from module: telemetry
Dec  3 18:03:13 compute-0 ceph-mgr[193091]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 18:03:13 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec  3 18:03:13 compute-0 ceph-mgr[193091]: [rbd_support INFO root] PerfHandler: starting
Dec  3 18:03:13 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TaskHandler: starting
Dec  3 18:03:13 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.etccde/trash_purge_schedule"} v 0) v1
Dec  3 18:03:13 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.etccde/trash_purge_schedule"}]: dispatch
Dec  3 18:03:13 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 18:03:13 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec  3 18:03:13 compute-0 ceph-mgr[193091]: [rbd_support INFO root] setup complete
Dec  3 18:03:13 compute-0 ceph-mgr[193091]: mgr load Constructed class from module: volumes
Dec  3 18:03:14 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/cert}] v 0) v1
Dec  3 18:03:14 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:14 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/key}] v 0) v1
Dec  3 18:03:14 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:14 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.etccde(active, since 1.04817s)
Dec  3 18:03:14 compute-0 ceph-mgr[193091]: log_channel(audit) log [DBG] : from='client.14134 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Dec  3 18:03:14 compute-0 ceph-mgr[193091]: log_channel(audit) log [DBG] : from='client.14134 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Dec  3 18:03:14 compute-0 nice_moser[193974]: {
Dec  3 18:03:14 compute-0 nice_moser[193974]:    "mgrmap_epoch": 7,
Dec  3 18:03:14 compute-0 nice_moser[193974]:    "initialized": true
Dec  3 18:03:14 compute-0 nice_moser[193974]: }
Dec  3 18:03:14 compute-0 systemd[1]: libpod-281ebe6ad5ab5a6ce4730eb1398548e4df8b4399eb3225942244d8c0ae560136.scope: Deactivated successfully.
Dec  3 18:03:14 compute-0 podman[193958]: 2025-12-03 18:03:14.821244492 +0000 UTC m=+20.349883350 container died 281ebe6ad5ab5a6ce4730eb1398548e4df8b4399eb3225942244d8c0ae560136 (image=quay.io/ceph/ceph:v18, name=nice_moser, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:03:14 compute-0 ceph-mon[192802]: Found migration_current of "None". Setting to last migration.
Dec  3 18:03:14 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:14 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:14 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.etccde/mirror_snapshot_schedule"}]: dispatch
Dec  3 18:03:14 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.etccde/trash_purge_schedule"}]: dispatch
Dec  3 18:03:14 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:14 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-64bbbac3b7db2bc19b47d33993e417542ff11819410c074cfc45c206e76e6731-merged.mount: Deactivated successfully.
Dec  3 18:03:14 compute-0 podman[193958]: 2025-12-03 18:03:14.888797247 +0000 UTC m=+20.417436125 container remove 281ebe6ad5ab5a6ce4730eb1398548e4df8b4399eb3225942244d8c0ae560136 (image=quay.io/ceph/ceph:v18, name=nice_moser, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  3 18:03:14 compute-0 systemd[1]: libpod-conmon-281ebe6ad5ab5a6ce4730eb1398548e4df8b4399eb3225942244d8c0ae560136.scope: Deactivated successfully.
Dec  3 18:03:14 compute-0 podman[194158]: 2025-12-03 18:03:14.999402396 +0000 UTC m=+0.078912279 container create 507ff354a4ff084f9878c9b17a7f2b69720ceee71d26ec33872c47cd1733bb35 (image=quay.io/ceph/ceph:v18, name=beautiful_haibt, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:03:15 compute-0 podman[194158]: 2025-12-03 18:03:14.962737113 +0000 UTC m=+0.042246986 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:03:15 compute-0 systemd[1]: Started libpod-conmon-507ff354a4ff084f9878c9b17a7f2b69720ceee71d26ec33872c47cd1733bb35.scope.
Dec  3 18:03:15 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:03:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0071c0532e2fb71289ef15e848753a21b2586a0a9196eb749cf9dd81243b193/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0071c0532e2fb71289ef15e848753a21b2586a0a9196eb749cf9dd81243b193/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0071c0532e2fb71289ef15e848753a21b2586a0a9196eb749cf9dd81243b193/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:15 compute-0 podman[194158]: 2025-12-03 18:03:15.178226513 +0000 UTC m=+0.257736406 container init 507ff354a4ff084f9878c9b17a7f2b69720ceee71d26ec33872c47cd1733bb35 (image=quay.io/ceph/ceph:v18, name=beautiful_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  3 18:03:15 compute-0 podman[194158]: 2025-12-03 18:03:15.197736772 +0000 UTC m=+0.277246645 container start 507ff354a4ff084f9878c9b17a7f2b69720ceee71d26ec33872c47cd1733bb35 (image=quay.io/ceph/ceph:v18, name=beautiful_haibt, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:03:15 compute-0 podman[194158]: 2025-12-03 18:03:15.207348744 +0000 UTC m=+0.286858617 container attach 507ff354a4ff084f9878c9b17a7f2b69720ceee71d26ec33872c47cd1733bb35 (image=quay.io/ceph/ceph:v18, name=beautiful_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef)
Dec  3 18:03:15 compute-0 ceph-mgr[193091]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  3 18:03:15 compute-0 ceph-mgr[193091]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 18:03:15 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0) v1
Dec  3 18:03:15 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:15 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Dec  3 18:03:15 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec  3 18:03:15 compute-0 podman[194158]: 2025-12-03 18:03:15.826162649 +0000 UTC m=+0.905672562 container died 507ff354a4ff084f9878c9b17a7f2b69720ceee71d26ec33872c47cd1733bb35 (image=quay.io/ceph/ceph:v18, name=beautiful_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  3 18:03:15 compute-0 systemd[1]: libpod-507ff354a4ff084f9878c9b17a7f2b69720ceee71d26ec33872c47cd1733bb35.scope: Deactivated successfully.
Dec  3 18:03:15 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-a0071c0532e2fb71289ef15e848753a21b2586a0a9196eb749cf9dd81243b193-merged.mount: Deactivated successfully.
Dec  3 18:03:15 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.etccde(active, since 2s)
Dec  3 18:03:15 compute-0 podman[194158]: 2025-12-03 18:03:15.891813107 +0000 UTC m=+0.971322980 container remove 507ff354a4ff084f9878c9b17a7f2b69720ceee71d26ec33872c47cd1733bb35 (image=quay.io/ceph/ceph:v18, name=beautiful_haibt, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:03:15 compute-0 systemd[1]: libpod-conmon-507ff354a4ff084f9878c9b17a7f2b69720ceee71d26ec33872c47cd1733bb35.scope: Deactivated successfully.
Dec  3 18:03:15 compute-0 podman[194212]: 2025-12-03 18:03:15.963677114 +0000 UTC m=+0.047576395 container create 469656e5bb9345658ee116d3970e6a4db7a68012bb5e84cad3a143714c11d908 (image=quay.io/ceph/ceph:v18, name=gallant_ritchie, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  3 18:03:16 compute-0 systemd[1]: Started libpod-conmon-469656e5bb9345658ee116d3970e6a4db7a68012bb5e84cad3a143714c11d908.scope.
Dec  3 18:03:16 compute-0 podman[194212]: 2025-12-03 18:03:15.943979971 +0000 UTC m=+0.027879252 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:03:16 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:03:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e5dc53a37b9babe46bce95cba826c359fa13b83386a26ac057c5c8973e835d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e5dc53a37b9babe46bce95cba826c359fa13b83386a26ac057c5c8973e835d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e5dc53a37b9babe46bce95cba826c359fa13b83386a26ac057c5c8973e835d7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:16 compute-0 podman[194212]: 2025-12-03 18:03:16.079777465 +0000 UTC m=+0.163676776 container init 469656e5bb9345658ee116d3970e6a4db7a68012bb5e84cad3a143714c11d908 (image=quay.io/ceph/ceph:v18, name=gallant_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:03:16 compute-0 podman[194212]: 2025-12-03 18:03:16.088348261 +0000 UTC m=+0.172247592 container start 469656e5bb9345658ee116d3970e6a4db7a68012bb5e84cad3a143714c11d908 (image=quay.io/ceph/ceph:v18, name=gallant_ritchie, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:03:16 compute-0 ceph-mgr[193091]: [cephadm INFO cherrypy.error] [03/Dec/2025:18:03:16] ENGINE Bus STARTING
Dec  3 18:03:16 compute-0 ceph-mgr[193091]: log_channel(cephadm) log [INF] : [03/Dec/2025:18:03:16] ENGINE Bus STARTING
Dec  3 18:03:16 compute-0 podman[194212]: 2025-12-03 18:03:16.095030321 +0000 UTC m=+0.178929622 container attach 469656e5bb9345658ee116d3970e6a4db7a68012bb5e84cad3a143714c11d908 (image=quay.io/ceph/ceph:v18, name=gallant_ritchie, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:03:16 compute-0 ceph-mgr[193091]: [cephadm INFO cherrypy.error] [03/Dec/2025:18:03:16] ENGINE Serving on https://192.168.122.100:7150
Dec  3 18:03:16 compute-0 ceph-mgr[193091]: log_channel(cephadm) log [INF] : [03/Dec/2025:18:03:16] ENGINE Serving on https://192.168.122.100:7150
Dec  3 18:03:16 compute-0 ceph-mgr[193091]: [cephadm INFO cherrypy.error] [03/Dec/2025:18:03:16] ENGINE Client ('192.168.122.100', 47006) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  3 18:03:16 compute-0 ceph-mgr[193091]: log_channel(cephadm) log [INF] : [03/Dec/2025:18:03:16] ENGINE Client ('192.168.122.100', 47006) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  3 18:03:16 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019920654 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:03:16 compute-0 ceph-mgr[193091]: [cephadm INFO cherrypy.error] [03/Dec/2025:18:03:16] ENGINE Serving on http://192.168.122.100:8765
Dec  3 18:03:16 compute-0 ceph-mgr[193091]: log_channel(cephadm) log [INF] : [03/Dec/2025:18:03:16] ENGINE Serving on http://192.168.122.100:8765
Dec  3 18:03:16 compute-0 ceph-mgr[193091]: [cephadm INFO cherrypy.error] [03/Dec/2025:18:03:16] ENGINE Bus STARTED
Dec  3 18:03:16 compute-0 ceph-mgr[193091]: log_channel(cephadm) log [INF] : [03/Dec/2025:18:03:16] ENGINE Bus STARTED
Dec  3 18:03:16 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Dec  3 18:03:16 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec  3 18:03:16 compute-0 ceph-mgr[193091]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 18:03:16 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0) v1
Dec  3 18:03:17 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:17 compute-0 ceph-mgr[193091]: [cephadm INFO root] Set ssh ssh_user
Dec  3 18:03:17 compute-0 ceph-mgr[193091]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Dec  3 18:03:17 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0) v1
Dec  3 18:03:17 compute-0 ceph-mon[192802]: [03/Dec/2025:18:03:16] ENGINE Bus STARTING
Dec  3 18:03:17 compute-0 ceph-mon[192802]: [03/Dec/2025:18:03:16] ENGINE Serving on https://192.168.122.100:7150
Dec  3 18:03:17 compute-0 ceph-mon[192802]: [03/Dec/2025:18:03:16] ENGINE Client ('192.168.122.100', 47006) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  3 18:03:17 compute-0 ceph-mon[192802]: [03/Dec/2025:18:03:16] ENGINE Serving on http://192.168.122.100:8765
Dec  3 18:03:17 compute-0 ceph-mon[192802]: [03/Dec/2025:18:03:16] ENGINE Bus STARTED
Dec  3 18:03:17 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:17 compute-0 ceph-mgr[193091]: [cephadm INFO root] Set ssh ssh_config
Dec  3 18:03:17 compute-0 ceph-mgr[193091]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Dec  3 18:03:17 compute-0 ceph-mgr[193091]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Dec  3 18:03:17 compute-0 ceph-mgr[193091]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Dec  3 18:03:17 compute-0 gallant_ritchie[194228]: ssh user set to ceph-admin. sudo will be used
Dec  3 18:03:17 compute-0 systemd[1]: libpod-469656e5bb9345658ee116d3970e6a4db7a68012bb5e84cad3a143714c11d908.scope: Deactivated successfully.
Dec  3 18:03:17 compute-0 podman[194212]: 2025-12-03 18:03:17.398237989 +0000 UTC m=+1.482137300 container died 469656e5bb9345658ee116d3970e6a4db7a68012bb5e84cad3a143714c11d908 (image=quay.io/ceph/ceph:v18, name=gallant_ritchie, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  3 18:03:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e5dc53a37b9babe46bce95cba826c359fa13b83386a26ac057c5c8973e835d7-merged.mount: Deactivated successfully.
Dec  3 18:03:17 compute-0 podman[194212]: 2025-12-03 18:03:17.563306577 +0000 UTC m=+1.647205868 container remove 469656e5bb9345658ee116d3970e6a4db7a68012bb5e84cad3a143714c11d908 (image=quay.io/ceph/ceph:v18, name=gallant_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:03:17 compute-0 systemd[1]: libpod-conmon-469656e5bb9345658ee116d3970e6a4db7a68012bb5e84cad3a143714c11d908.scope: Deactivated successfully.
Dec  3 18:03:17 compute-0 podman[194279]: 2025-12-03 18:03:17.635251186 +0000 UTC m=+0.195255434 container health_status 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, version=9.6, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, distribution-scope=public, architecture=x86_64, config_id=edpm, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7)
Dec  3 18:03:17 compute-0 podman[194278]: 2025-12-03 18:03:17.640101703 +0000 UTC m=+0.211242769 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 18:03:17 compute-0 podman[194286]: 2025-12-03 18:03:17.652006259 +0000 UTC m=+0.190712235 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Dec  3 18:03:17 compute-0 podman[194336]: 2025-12-03 18:03:17.658674379 +0000 UTC m=+0.070145497 container create 51e2ac2cbd13c50006705665290d7b4adb9fb944add48e82af34b0122cfb654d (image=quay.io/ceph/ceph:v18, name=adoring_golick, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:03:17 compute-0 podman[194291]: 2025-12-03 18:03:17.665695648 +0000 UTC m=+0.210613974 container health_status ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, com.redhat.component=ubi9-container, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, version=9.4, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, container_name=kepler, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, io.buildah.version=1.29.0, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, vcs-type=git)
Dec  3 18:03:17 compute-0 systemd[1]: Started libpod-conmon-51e2ac2cbd13c50006705665290d7b4adb9fb944add48e82af34b0122cfb654d.scope.
Dec  3 18:03:17 compute-0 podman[194285]: 2025-12-03 18:03:17.721616082 +0000 UTC m=+0.276175179 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec  3 18:03:17 compute-0 podman[194336]: 2025-12-03 18:03:17.631534736 +0000 UTC m=+0.043005874 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:03:17 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:03:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/202a60311e21891b729b55dcc81f51cd02526d92e4360ebd04e12f5d2eb3ce83/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/202a60311e21891b729b55dcc81f51cd02526d92e4360ebd04e12f5d2eb3ce83/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/202a60311e21891b729b55dcc81f51cd02526d92e4360ebd04e12f5d2eb3ce83/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/202a60311e21891b729b55dcc81f51cd02526d92e4360ebd04e12f5d2eb3ce83/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/202a60311e21891b729b55dcc81f51cd02526d92e4360ebd04e12f5d2eb3ce83/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:17 compute-0 ceph-mgr[193091]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  3 18:03:17 compute-0 podman[194336]: 2025-12-03 18:03:17.803152172 +0000 UTC m=+0.214623310 container init 51e2ac2cbd13c50006705665290d7b4adb9fb944add48e82af34b0122cfb654d (image=quay.io/ceph/ceph:v18, name=adoring_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec  3 18:03:17 compute-0 podman[194336]: 2025-12-03 18:03:17.819648859 +0000 UTC m=+0.231119977 container start 51e2ac2cbd13c50006705665290d7b4adb9fb944add48e82af34b0122cfb654d (image=quay.io/ceph/ceph:v18, name=adoring_golick, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:03:17 compute-0 podman[194336]: 2025-12-03 18:03:17.825073649 +0000 UTC m=+0.236544767 container attach 51e2ac2cbd13c50006705665290d7b4adb9fb944add48e82af34b0122cfb654d (image=quay.io/ceph/ceph:v18, name=adoring_golick, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:03:18 compute-0 ceph-mgr[193091]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 18:03:18 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:18 compute-0 ceph-mon[192802]: Set ssh ssh_user
Dec  3 18:03:18 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:18 compute-0 ceph-mon[192802]: Set ssh ssh_config
Dec  3 18:03:18 compute-0 ceph-mon[192802]: ssh user set to ceph-admin. sudo will be used
Dec  3 18:03:18 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0) v1
Dec  3 18:03:18 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:18 compute-0 ceph-mgr[193091]: [cephadm INFO root] Set ssh ssh_identity_key
Dec  3 18:03:18 compute-0 ceph-mgr[193091]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Dec  3 18:03:18 compute-0 ceph-mgr[193091]: [cephadm INFO root] Set ssh private key
Dec  3 18:03:18 compute-0 ceph-mgr[193091]: log_channel(cephadm) log [INF] : Set ssh private key
Dec  3 18:03:18 compute-0 systemd[1]: libpod-51e2ac2cbd13c50006705665290d7b4adb9fb944add48e82af34b0122cfb654d.scope: Deactivated successfully.
Dec  3 18:03:18 compute-0 podman[194336]: 2025-12-03 18:03:18.447404639 +0000 UTC m=+0.858875787 container died 51e2ac2cbd13c50006705665290d7b4adb9fb944add48e82af34b0122cfb654d (image=quay.io/ceph/ceph:v18, name=adoring_golick, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 18:03:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-202a60311e21891b729b55dcc81f51cd02526d92e4360ebd04e12f5d2eb3ce83-merged.mount: Deactivated successfully.
Dec  3 18:03:18 compute-0 podman[194336]: 2025-12-03 18:03:18.524495502 +0000 UTC m=+0.935966630 container remove 51e2ac2cbd13c50006705665290d7b4adb9fb944add48e82af34b0122cfb654d (image=quay.io/ceph/ceph:v18, name=adoring_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec  3 18:03:18 compute-0 systemd[1]: libpod-conmon-51e2ac2cbd13c50006705665290d7b4adb9fb944add48e82af34b0122cfb654d.scope: Deactivated successfully.
Dec  3 18:03:18 compute-0 podman[194427]: 2025-12-03 18:03:18.577766582 +0000 UTC m=+0.103561330 container health_status f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  3 18:03:18 compute-0 podman[194449]: 2025-12-03 18:03:18.615989962 +0000 UTC m=+0.057762941 container create 27f8a7257a4e5489c76ed6bb2367f0f7475282160ac1810112fb6cd0ccb518ba (image=quay.io/ceph/ceph:v18, name=elegant_mahavira, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Dec  3 18:03:18 compute-0 systemd[1]: Started libpod-conmon-27f8a7257a4e5489c76ed6bb2367f0f7475282160ac1810112fb6cd0ccb518ba.scope.
Dec  3 18:03:18 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:03:18 compute-0 podman[194449]: 2025-12-03 18:03:18.595339165 +0000 UTC m=+0.037112124 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:03:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6617a81a151113a09aebf9ee54a29bf451511675f89ea1dac671b65ec3e1bf1d/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6617a81a151113a09aebf9ee54a29bf451511675f89ea1dac671b65ec3e1bf1d/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6617a81a151113a09aebf9ee54a29bf451511675f89ea1dac671b65ec3e1bf1d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6617a81a151113a09aebf9ee54a29bf451511675f89ea1dac671b65ec3e1bf1d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6617a81a151113a09aebf9ee54a29bf451511675f89ea1dac671b65ec3e1bf1d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:18 compute-0 podman[194449]: 2025-12-03 18:03:18.741221501 +0000 UTC m=+0.182994490 container init 27f8a7257a4e5489c76ed6bb2367f0f7475282160ac1810112fb6cd0ccb518ba (image=quay.io/ceph/ceph:v18, name=elegant_mahavira, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec  3 18:03:18 compute-0 podman[194449]: 2025-12-03 18:03:18.762239167 +0000 UTC m=+0.204012136 container start 27f8a7257a4e5489c76ed6bb2367f0f7475282160ac1810112fb6cd0ccb518ba (image=quay.io/ceph/ceph:v18, name=elegant_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec  3 18:03:18 compute-0 podman[194449]: 2025-12-03 18:03:18.76777523 +0000 UTC m=+0.209548249 container attach 27f8a7257a4e5489c76ed6bb2367f0f7475282160ac1810112fb6cd0ccb518ba (image=quay.io/ceph/ceph:v18, name=elegant_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:03:19 compute-0 ceph-mgr[193091]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 18:03:19 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0) v1
Dec  3 18:03:19 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:19 compute-0 ceph-mgr[193091]: [cephadm INFO root] Set ssh ssh_identity_pub
Dec  3 18:03:19 compute-0 ceph-mgr[193091]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Dec  3 18:03:19 compute-0 systemd[1]: libpod-27f8a7257a4e5489c76ed6bb2367f0f7475282160ac1810112fb6cd0ccb518ba.scope: Deactivated successfully.
Dec  3 18:03:19 compute-0 podman[194504]: 2025-12-03 18:03:19.365694793 +0000 UTC m=+0.043425565 container died 27f8a7257a4e5489c76ed6bb2367f0f7475282160ac1810112fb6cd0ccb518ba (image=quay.io/ceph/ceph:v18, name=elegant_mahavira, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Dec  3 18:03:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-6617a81a151113a09aebf9ee54a29bf451511675f89ea1dac671b65ec3e1bf1d-merged.mount: Deactivated successfully.
Dec  3 18:03:19 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:19 compute-0 ceph-mon[192802]: Set ssh ssh_identity_key
Dec  3 18:03:19 compute-0 ceph-mon[192802]: Set ssh private key
Dec  3 18:03:19 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:19 compute-0 podman[194504]: 2025-12-03 18:03:19.433104213 +0000 UTC m=+0.110834915 container remove 27f8a7257a4e5489c76ed6bb2367f0f7475282160ac1810112fb6cd0ccb518ba (image=quay.io/ceph/ceph:v18, name=elegant_mahavira, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec  3 18:03:19 compute-0 systemd[1]: libpod-conmon-27f8a7257a4e5489c76ed6bb2367f0f7475282160ac1810112fb6cd0ccb518ba.scope: Deactivated successfully.
Dec  3 18:03:19 compute-0 podman[194518]: 2025-12-03 18:03:19.559149543 +0000 UTC m=+0.080172208 container create ccd39ff9d322415f8b04045ad5b626755b1238f997455b083d782817704da196 (image=quay.io/ceph/ceph:v18, name=silly_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:03:19 compute-0 podman[194518]: 2025-12-03 18:03:19.525958845 +0000 UTC m=+0.046981600 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:03:19 compute-0 systemd[1]: Started libpod-conmon-ccd39ff9d322415f8b04045ad5b626755b1238f997455b083d782817704da196.scope.
Dec  3 18:03:19 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:03:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f81c62d66a41d4fe943dc37b2c57b67a72c5f262a382f469cc93115e21c6959/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f81c62d66a41d4fe943dc37b2c57b67a72c5f262a382f469cc93115e21c6959/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f81c62d66a41d4fe943dc37b2c57b67a72c5f262a382f469cc93115e21c6959/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:19 compute-0 podman[194518]: 2025-12-03 18:03:19.701051474 +0000 UTC m=+0.222074159 container init ccd39ff9d322415f8b04045ad5b626755b1238f997455b083d782817704da196 (image=quay.io/ceph/ceph:v18, name=silly_poitras, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec  3 18:03:19 compute-0 podman[194518]: 2025-12-03 18:03:19.715714196 +0000 UTC m=+0.236736861 container start ccd39ff9d322415f8b04045ad5b626755b1238f997455b083d782817704da196 (image=quay.io/ceph/ceph:v18, name=silly_poitras, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec  3 18:03:19 compute-0 podman[194518]: 2025-12-03 18:03:19.722760865 +0000 UTC m=+0.243783550 container attach ccd39ff9d322415f8b04045ad5b626755b1238f997455b083d782817704da196 (image=quay.io/ceph/ceph:v18, name=silly_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:03:19 compute-0 ceph-mgr[193091]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  3 18:03:20 compute-0 ceph-mgr[193091]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 18:03:20 compute-0 silly_poitras[194536]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDQ+KYs7bXDQUkWiYrVdsGqMUtLHu1kWIRFAq30YgVxBIRqvcQkQt622QgLx7uDO6c+xuE10e1zSqq8BY7w92UoK/84nLjBGO2bSZu57N22tYikQKbX4NayPvjkYuiLlPpZRmIdY+VJfGbC+bahC7j4eaHMBNHMcCt1uNI7xC6LBA5aOSa/Wh52rZnEHOrr2tOI6Sbvwq8z/kifl9uwGUdq4lixZLpK9Xt1gXt724zxLDg90lqGP/R1lQoAxJ3C3r1BsBXcdePL+nuHrspmceBbetBmA5NR2BAijCSM17iAfd4+NffJz0lwQn/rfdZP3AHE/myahpY7OzkXvP1HJw++GuLTvo/p6NyjqiYipauGIu/bt8qg553BCQgA5SMppydsBjREyiFKreTxmdR62WZUp68i3djKSFJM8i5k3aodiHeKkzUJfM1+nbNgKYLzABdJS1MgbIZrDPFdmLp1bWKt6E/PpCV2KmeDPCyo1Zc/q/HGMiUzxigaB0EjTU9pcl8= zuul@controller
Dec  3 18:03:20 compute-0 systemd[1]: libpod-ccd39ff9d322415f8b04045ad5b626755b1238f997455b083d782817704da196.scope: Deactivated successfully.
Dec  3 18:03:20 compute-0 podman[194518]: 2025-12-03 18:03:20.285764769 +0000 UTC m=+0.806787504 container died ccd39ff9d322415f8b04045ad5b626755b1238f997455b083d782817704da196 (image=quay.io/ceph/ceph:v18, name=silly_poitras, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:03:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f81c62d66a41d4fe943dc37b2c57b67a72c5f262a382f469cc93115e21c6959-merged.mount: Deactivated successfully.
Dec  3 18:03:20 compute-0 podman[194518]: 2025-12-03 18:03:20.352003641 +0000 UTC m=+0.873026336 container remove ccd39ff9d322415f8b04045ad5b626755b1238f997455b083d782817704da196 (image=quay.io/ceph/ceph:v18, name=silly_poitras, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:03:20 compute-0 systemd[1]: libpod-conmon-ccd39ff9d322415f8b04045ad5b626755b1238f997455b083d782817704da196.scope: Deactivated successfully.
Dec  3 18:03:20 compute-0 ceph-mon[192802]: Set ssh ssh_identity_pub
Dec  3 18:03:20 compute-0 podman[194573]: 2025-12-03 18:03:20.43602044 +0000 UTC m=+0.063761673 container create 36909a2cd843afed52e8a6d3571ca4adbf629e2bb26b031ff83a3d95d4deda3c (image=quay.io/ceph/ceph:v18, name=brave_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True)
Dec  3 18:03:20 compute-0 podman[194573]: 2025-12-03 18:03:20.40270337 +0000 UTC m=+0.030444693 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:03:20 compute-0 systemd[1]: Started libpod-conmon-36909a2cd843afed52e8a6d3571ca4adbf629e2bb26b031ff83a3d95d4deda3c.scope.
Dec  3 18:03:20 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:03:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9b178969104db8be40ed7bcdf9744c70e2dfe54d175d6c1be058416b6e93ea2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9b178969104db8be40ed7bcdf9744c70e2dfe54d175d6c1be058416b6e93ea2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9b178969104db8be40ed7bcdf9744c70e2dfe54d175d6c1be058416b6e93ea2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:20 compute-0 podman[194573]: 2025-12-03 18:03:20.591644022 +0000 UTC m=+0.219385345 container init 36909a2cd843afed52e8a6d3571ca4adbf629e2bb26b031ff83a3d95d4deda3c (image=quay.io/ceph/ceph:v18, name=brave_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True)
Dec  3 18:03:20 compute-0 podman[194573]: 2025-12-03 18:03:20.614227185 +0000 UTC m=+0.241968428 container start 36909a2cd843afed52e8a6d3571ca4adbf629e2bb26b031ff83a3d95d4deda3c (image=quay.io/ceph/ceph:v18, name=brave_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  3 18:03:20 compute-0 podman[194573]: 2025-12-03 18:03:20.619764748 +0000 UTC m=+0.247506001 container attach 36909a2cd843afed52e8a6d3571ca4adbf629e2bb26b031ff83a3d95d4deda3c (image=quay.io/ceph/ceph:v18, name=brave_ganguly, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  3 18:03:21 compute-0 ceph-mgr[193091]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 18:03:21 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020052994 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:03:21 compute-0 systemd-logind[784]: New session 28 of user ceph-admin.
Dec  3 18:03:21 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Dec  3 18:03:21 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Dec  3 18:03:21 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Dec  3 18:03:21 compute-0 systemd[1]: Starting User Manager for UID 42477...
Dec  3 18:03:21 compute-0 systemd-logind[784]: New session 30 of user ceph-admin.
Dec  3 18:03:21 compute-0 systemd[194616]: Queued start job for default target Main User Target.
Dec  3 18:03:21 compute-0 systemd[194616]: Created slice User Application Slice.
Dec  3 18:03:21 compute-0 systemd[194616]: Started Mark boot as successful after the user session has run 2 minutes.
Dec  3 18:03:21 compute-0 systemd[194616]: Started Daily Cleanup of User's Temporary Directories.
Dec  3 18:03:21 compute-0 systemd[194616]: Reached target Paths.
Dec  3 18:03:21 compute-0 systemd[194616]: Reached target Timers.
Dec  3 18:03:21 compute-0 systemd[194616]: Starting D-Bus User Message Bus Socket...
Dec  3 18:03:21 compute-0 systemd[194616]: Starting Create User's Volatile Files and Directories...
Dec  3 18:03:21 compute-0 systemd[194616]: Finished Create User's Volatile Files and Directories.
Dec  3 18:03:21 compute-0 ceph-mgr[193091]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  3 18:03:21 compute-0 systemd[194616]: Listening on D-Bus User Message Bus Socket.
Dec  3 18:03:21 compute-0 systemd[194616]: Reached target Sockets.
Dec  3 18:03:21 compute-0 systemd[194616]: Reached target Basic System.
Dec  3 18:03:21 compute-0 systemd[194616]: Reached target Main User Target.
Dec  3 18:03:21 compute-0 systemd[194616]: Startup finished in 206ms.
Dec  3 18:03:21 compute-0 systemd[1]: Started User Manager for UID 42477.
Dec  3 18:03:21 compute-0 systemd[1]: Started Session 28 of User ceph-admin.
Dec  3 18:03:21 compute-0 systemd[1]: Started Session 30 of User ceph-admin.
Dec  3 18:03:22 compute-0 systemd-logind[784]: New session 31 of user ceph-admin.
Dec  3 18:03:22 compute-0 systemd[1]: Started Session 31 of User ceph-admin.
Dec  3 18:03:23 compute-0 systemd-logind[784]: New session 32 of user ceph-admin.
Dec  3 18:03:23 compute-0 systemd[1]: Started Session 32 of User ceph-admin.
Dec  3 18:03:23 compute-0 ceph-mgr[193091]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Dec  3 18:03:23 compute-0 ceph-mgr[193091]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Dec  3 18:03:23 compute-0 systemd-logind[784]: New session 33 of user ceph-admin.
Dec  3 18:03:23 compute-0 systemd[1]: Started Session 33 of User ceph-admin.
Dec  3 18:03:23 compute-0 ceph-mgr[193091]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  3 18:03:23 compute-0 ceph-mon[192802]: Deploying cephadm binary to compute-0
Dec  3 18:03:24 compute-0 systemd-logind[784]: New session 34 of user ceph-admin.
Dec  3 18:03:24 compute-0 systemd[1]: Started Session 34 of User ceph-admin.
Dec  3 18:03:24 compute-0 systemd-logind[784]: New session 35 of user ceph-admin.
Dec  3 18:03:24 compute-0 systemd[1]: Started Session 35 of User ceph-admin.
Dec  3 18:03:25 compute-0 systemd-logind[784]: New session 36 of user ceph-admin.
Dec  3 18:03:25 compute-0 systemd[1]: Started Session 36 of User ceph-admin.
Dec  3 18:03:25 compute-0 systemd-logind[784]: New session 37 of user ceph-admin.
Dec  3 18:03:25 compute-0 systemd[1]: Started Session 37 of User ceph-admin.
Dec  3 18:03:25 compute-0 ceph-mgr[193091]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  3 18:03:26 compute-0 systemd-logind[784]: New session 38 of user ceph-admin.
Dec  3 18:03:26 compute-0 systemd[1]: Started Session 38 of User ceph-admin.
Dec  3 18:03:26 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054709 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:03:26 compute-0 auditd[702]: Audit daemon rotating log files
Dec  3 18:03:26 compute-0 systemd-logind[784]: New session 39 of user ceph-admin.
Dec  3 18:03:26 compute-0 systemd[1]: Started Session 39 of User ceph-admin.
Dec  3 18:03:27 compute-0 systemd-logind[784]: New session 40 of user ceph-admin.
Dec  3 18:03:27 compute-0 systemd[1]: Started Session 40 of User ceph-admin.
Dec  3 18:03:27 compute-0 ceph-mgr[193091]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  3 18:03:27 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Dec  3 18:03:27 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:27 compute-0 ceph-mgr[193091]: [cephadm INFO root] Added host compute-0
Dec  3 18:03:27 compute-0 ceph-mgr[193091]: log_channel(cephadm) log [INF] : Added host compute-0
Dec  3 18:03:27 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Dec  3 18:03:27 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec  3 18:03:27 compute-0 brave_ganguly[194586]: Added host 'compute-0' with addr '192.168.122.100'
Dec  3 18:03:28 compute-0 systemd[1]: libpod-36909a2cd843afed52e8a6d3571ca4adbf629e2bb26b031ff83a3d95d4deda3c.scope: Deactivated successfully.
Dec  3 18:03:28 compute-0 podman[194573]: 2025-12-03 18:03:28.001788937 +0000 UTC m=+7.629530210 container died 36909a2cd843afed52e8a6d3571ca4adbf629e2bb26b031ff83a3d95d4deda3c (image=quay.io/ceph/ceph:v18, name=brave_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Dec  3 18:03:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-a9b178969104db8be40ed7bcdf9744c70e2dfe54d175d6c1be058416b6e93ea2-merged.mount: Deactivated successfully.
Dec  3 18:03:28 compute-0 podman[194573]: 2025-12-03 18:03:28.096736409 +0000 UTC m=+7.724477662 container remove 36909a2cd843afed52e8a6d3571ca4adbf629e2bb26b031ff83a3d95d4deda3c (image=quay.io/ceph/ceph:v18, name=brave_ganguly, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  3 18:03:28 compute-0 systemd[1]: libpod-conmon-36909a2cd843afed52e8a6d3571ca4adbf629e2bb26b031ff83a3d95d4deda3c.scope: Deactivated successfully.
Dec  3 18:03:28 compute-0 podman[195264]: 2025-12-03 18:03:28.185730988 +0000 UTC m=+0.059440139 container create a2e6dedb4b8225256e5cfc85fb93324b3a4dfed4f51af636a9c4b6a26f4a421e (image=quay.io/ceph/ceph:v18, name=fervent_galileo, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec  3 18:03:28 compute-0 systemd[1]: Started libpod-conmon-a2e6dedb4b8225256e5cfc85fb93324b3a4dfed4f51af636a9c4b6a26f4a421e.scope.
Dec  3 18:03:28 compute-0 podman[195264]: 2025-12-03 18:03:28.161915085 +0000 UTC m=+0.035624246 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:03:28 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:03:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a8f7f290f85b30ea86f123691eecd8c53c742d906e33c1a6f42185be2987a37/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a8f7f290f85b30ea86f123691eecd8c53c742d906e33c1a6f42185be2987a37/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a8f7f290f85b30ea86f123691eecd8c53c742d906e33c1a6f42185be2987a37/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:28 compute-0 podman[195264]: 2025-12-03 18:03:28.315259642 +0000 UTC m=+0.188968803 container init a2e6dedb4b8225256e5cfc85fb93324b3a4dfed4f51af636a9c4b6a26f4a421e (image=quay.io/ceph/ceph:v18, name=fervent_galileo, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec  3 18:03:28 compute-0 podman[195264]: 2025-12-03 18:03:28.324000012 +0000 UTC m=+0.197709163 container start a2e6dedb4b8225256e5cfc85fb93324b3a4dfed4f51af636a9c4b6a26f4a421e (image=quay.io/ceph/ceph:v18, name=fervent_galileo, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  3 18:03:28 compute-0 podman[195264]: 2025-12-03 18:03:28.329546245 +0000 UTC m=+0.203255446 container attach a2e6dedb4b8225256e5cfc85fb93324b3a4dfed4f51af636a9c4b6a26f4a421e (image=quay.io/ceph/ceph:v18, name=fervent_galileo, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:03:28 compute-0 podman[195392]: 2025-12-03 18:03:28.757597965 +0000 UTC m=+0.071665093 container create 28e038622cdbea965cf3d9eb4fe4f4c27c4705d45fa8240af01cfe62a26dfa7c (image=quay.io/ceph/ceph:v18, name=awesome_rhodes, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec  3 18:03:28 compute-0 podman[195392]: 2025-12-03 18:03:28.7257946 +0000 UTC m=+0.039861788 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:03:28 compute-0 systemd[1]: Started libpod-conmon-28e038622cdbea965cf3d9eb4fe4f4c27c4705d45fa8240af01cfe62a26dfa7c.scope.
Dec  3 18:03:28 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:03:28 compute-0 ceph-mgr[193091]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 18:03:28 compute-0 ceph-mgr[193091]: [cephadm INFO root] Saving service mon spec with placement count:5
Dec  3 18:03:28 compute-0 ceph-mgr[193091]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Dec  3 18:03:28 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Dec  3 18:03:28 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:28 compute-0 fervent_galileo[195318]: Scheduled mon update...
Dec  3 18:03:28 compute-0 podman[195392]: 2025-12-03 18:03:28.906530975 +0000 UTC m=+0.220598183 container init 28e038622cdbea965cf3d9eb4fe4f4c27c4705d45fa8240af01cfe62a26dfa7c (image=quay.io/ceph/ceph:v18, name=awesome_rhodes, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec  3 18:03:28 compute-0 podman[195392]: 2025-12-03 18:03:28.922384677 +0000 UTC m=+0.236451835 container start 28e038622cdbea965cf3d9eb4fe4f4c27c4705d45fa8240af01cfe62a26dfa7c (image=quay.io/ceph/ceph:v18, name=awesome_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec  3 18:03:28 compute-0 systemd[1]: libpod-a2e6dedb4b8225256e5cfc85fb93324b3a4dfed4f51af636a9c4b6a26f4a421e.scope: Deactivated successfully.
Dec  3 18:03:28 compute-0 podman[195392]: 2025-12-03 18:03:28.928063613 +0000 UTC m=+0.242130741 container attach 28e038622cdbea965cf3d9eb4fe4f4c27c4705d45fa8240af01cfe62a26dfa7c (image=quay.io/ceph/ceph:v18, name=awesome_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:03:28 compute-0 podman[195264]: 2025-12-03 18:03:28.932815647 +0000 UTC m=+0.806524798 container died a2e6dedb4b8225256e5cfc85fb93324b3a4dfed4f51af636a9c4b6a26f4a421e (image=quay.io/ceph/ceph:v18, name=fervent_galileo, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True)
Dec  3 18:03:28 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:28 compute-0 ceph-mon[192802]: Added host compute-0
Dec  3 18:03:28 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a8f7f290f85b30ea86f123691eecd8c53c742d906e33c1a6f42185be2987a37-merged.mount: Deactivated successfully.
Dec  3 18:03:29 compute-0 podman[195264]: 2025-12-03 18:03:29.016430447 +0000 UTC m=+0.890139578 container remove a2e6dedb4b8225256e5cfc85fb93324b3a4dfed4f51af636a9c4b6a26f4a421e (image=quay.io/ceph/ceph:v18, name=fervent_galileo, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:03:29 compute-0 systemd[1]: libpod-conmon-a2e6dedb4b8225256e5cfc85fb93324b3a4dfed4f51af636a9c4b6a26f4a421e.scope: Deactivated successfully.
Dec  3 18:03:29 compute-0 podman[195429]: 2025-12-03 18:03:29.116522523 +0000 UTC m=+0.071312056 container create ad59d5076f22b977a7c0c9ded39e3180e2633b397c60d80fb61af1e70f73c87a (image=quay.io/ceph/ceph:v18, name=friendly_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507)
Dec  3 18:03:29 compute-0 systemd[1]: Started libpod-conmon-ad59d5076f22b977a7c0c9ded39e3180e2633b397c60d80fb61af1e70f73c87a.scope.
Dec  3 18:03:29 compute-0 podman[195429]: 2025-12-03 18:03:29.087438023 +0000 UTC m=+0.042227656 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:03:29 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:03:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3539e9f2435450eca8b1738702e03fb3590dba43c1fc27e1b20c25b88359bcbc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3539e9f2435450eca8b1738702e03fb3590dba43c1fc27e1b20c25b88359bcbc/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3539e9f2435450eca8b1738702e03fb3590dba43c1fc27e1b20c25b88359bcbc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:29 compute-0 podman[195429]: 2025-12-03 18:03:29.240128074 +0000 UTC m=+0.194917617 container init ad59d5076f22b977a7c0c9ded39e3180e2633b397c60d80fb61af1e70f73c87a (image=quay.io/ceph/ceph:v18, name=friendly_swirles, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:03:29 compute-0 podman[195429]: 2025-12-03 18:03:29.252183994 +0000 UTC m=+0.206973537 container start ad59d5076f22b977a7c0c9ded39e3180e2633b397c60d80fb61af1e70f73c87a (image=quay.io/ceph/ceph:v18, name=friendly_swirles, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:03:29 compute-0 podman[195429]: 2025-12-03 18:03:29.257514362 +0000 UTC m=+0.212303945 container attach ad59d5076f22b977a7c0c9ded39e3180e2633b397c60d80fb61af1e70f73c87a (image=quay.io/ceph/ceph:v18, name=friendly_swirles, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec  3 18:03:29 compute-0 awesome_rhodes[195408]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Dec  3 18:03:29 compute-0 systemd[1]: libpod-28e038622cdbea965cf3d9eb4fe4f4c27c4705d45fa8240af01cfe62a26dfa7c.scope: Deactivated successfully.
Dec  3 18:03:29 compute-0 podman[195392]: 2025-12-03 18:03:29.269312535 +0000 UTC m=+0.583379663 container died 28e038622cdbea965cf3d9eb4fe4f4c27c4705d45fa8240af01cfe62a26dfa7c (image=quay.io/ceph/ceph:v18, name=awesome_rhodes, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:03:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-68e4e657451b1ceb68bd6957f1c15476f73fa011b26743683dca874a2ff13edc-merged.mount: Deactivated successfully.
Dec  3 18:03:29 compute-0 podman[195392]: 2025-12-03 18:03:29.355749813 +0000 UTC m=+0.669816931 container remove 28e038622cdbea965cf3d9eb4fe4f4c27c4705d45fa8240af01cfe62a26dfa7c (image=quay.io/ceph/ceph:v18, name=awesome_rhodes, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:03:29 compute-0 systemd[1]: libpod-conmon-28e038622cdbea965cf3d9eb4fe4f4c27c4705d45fa8240af01cfe62a26dfa7c.scope: Deactivated successfully.
Dec  3 18:03:29 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0) v1
Dec  3 18:03:29 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:29 compute-0 podman[158200]: time="2025-12-03T18:03:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:03:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:03:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 23481 "" "Go-http-client/1.1"
Dec  3 18:03:29 compute-0 ceph-mgr[193091]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  3 18:03:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:03:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4357 "" "Go-http-client/1.1"
Dec  3 18:03:29 compute-0 ceph-mgr[193091]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 18:03:29 compute-0 ceph-mgr[193091]: [cephadm INFO root] Saving service mgr spec with placement count:2
Dec  3 18:03:29 compute-0 ceph-mgr[193091]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Dec  3 18:03:29 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Dec  3 18:03:29 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:29 compute-0 friendly_swirles[195446]: Scheduled mgr update...
Dec  3 18:03:29 compute-0 systemd[1]: libpod-ad59d5076f22b977a7c0c9ded39e3180e2633b397c60d80fb61af1e70f73c87a.scope: Deactivated successfully.
Dec  3 18:03:29 compute-0 podman[195429]: 2025-12-03 18:03:29.90680713 +0000 UTC m=+0.861596703 container died ad59d5076f22b977a7c0c9ded39e3180e2633b397c60d80fb61af1e70f73c87a (image=quay.io/ceph/ceph:v18, name=friendly_swirles, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:03:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-3539e9f2435450eca8b1738702e03fb3590dba43c1fc27e1b20c25b88359bcbc-merged.mount: Deactivated successfully.
Dec  3 18:03:29 compute-0 podman[195429]: 2025-12-03 18:03:29.984319673 +0000 UTC m=+0.939109246 container remove ad59d5076f22b977a7c0c9ded39e3180e2633b397c60d80fb61af1e70f73c87a (image=quay.io/ceph/ceph:v18, name=friendly_swirles, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec  3 18:03:30 compute-0 systemd[1]: libpod-conmon-ad59d5076f22b977a7c0c9ded39e3180e2633b397c60d80fb61af1e70f73c87a.scope: Deactivated successfully.
Dec  3 18:03:30 compute-0 podman[195599]: 2025-12-03 18:03:30.069778247 +0000 UTC m=+0.056601741 container create 01135cc9423780c27cce8e397076f811a005f951a0203b7b0e6bd2079009efa9 (image=quay.io/ceph/ceph:v18, name=nifty_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Dec  3 18:03:30 compute-0 systemd[1]: Started libpod-conmon-01135cc9423780c27cce8e397076f811a005f951a0203b7b0e6bd2079009efa9.scope.
Dec  3 18:03:30 compute-0 podman[195599]: 2025-12-03 18:03:30.042874631 +0000 UTC m=+0.029698155 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:03:30 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:03:30 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:03:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3396caf84205395aed0e6d734ec61c9621515e100e6c4ee1900d218408dd37c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3396caf84205395aed0e6d734ec61c9621515e100e6c4ee1900d218408dd37c5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3396caf84205395aed0e6d734ec61c9621515e100e6c4ee1900d218408dd37c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:30 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:30 compute-0 podman[195599]: 2025-12-03 18:03:30.191753019 +0000 UTC m=+0.178576593 container init 01135cc9423780c27cce8e397076f811a005f951a0203b7b0e6bd2079009efa9 (image=quay.io/ceph/ceph:v18, name=nifty_gauss, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:03:30 compute-0 podman[195599]: 2025-12-03 18:03:30.206394582 +0000 UTC m=+0.193218066 container start 01135cc9423780c27cce8e397076f811a005f951a0203b7b0e6bd2079009efa9 (image=quay.io/ceph/ceph:v18, name=nifty_gauss, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec  3 18:03:30 compute-0 podman[195599]: 2025-12-03 18:03:30.211290299 +0000 UTC m=+0.198113863 container attach 01135cc9423780c27cce8e397076f811a005f951a0203b7b0e6bd2079009efa9 (image=quay.io/ceph/ceph:v18, name=nifty_gauss, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec  3 18:03:30 compute-0 ceph-mon[192802]: Saving service mon spec with placement count:5
Dec  3 18:03:30 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:30 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:30 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:30 compute-0 ceph-mgr[193091]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 18:03:30 compute-0 ceph-mgr[193091]: [cephadm INFO root] Saving service crash spec with placement *
Dec  3 18:03:30 compute-0 ceph-mgr[193091]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Dec  3 18:03:30 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Dec  3 18:03:30 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:30 compute-0 nifty_gauss[195628]: Scheduled crash update...
Dec  3 18:03:30 compute-0 systemd[1]: libpod-01135cc9423780c27cce8e397076f811a005f951a0203b7b0e6bd2079009efa9.scope: Deactivated successfully.
Dec  3 18:03:30 compute-0 podman[195599]: 2025-12-03 18:03:30.823423013 +0000 UTC m=+0.810246507 container died 01135cc9423780c27cce8e397076f811a005f951a0203b7b0e6bd2079009efa9 (image=quay.io/ceph/ceph:v18, name=nifty_gauss, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  3 18:03:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-3396caf84205395aed0e6d734ec61c9621515e100e6c4ee1900d218408dd37c5-merged.mount: Deactivated successfully.
Dec  3 18:03:30 compute-0 podman[195599]: 2025-12-03 18:03:30.882640997 +0000 UTC m=+0.869464491 container remove 01135cc9423780c27cce8e397076f811a005f951a0203b7b0e6bd2079009efa9 (image=quay.io/ceph/ceph:v18, name=nifty_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  3 18:03:30 compute-0 systemd[1]: libpod-conmon-01135cc9423780c27cce8e397076f811a005f951a0203b7b0e6bd2079009efa9.scope: Deactivated successfully.
Dec  3 18:03:30 compute-0 podman[195786]: 2025-12-03 18:03:30.971100883 +0000 UTC m=+0.065716870 container create 2fd3fdca8437d8d42f66cb0c1354e74c250135d5bfa6e2ab9999c979ad65e9e9 (image=quay.io/ceph/ceph:v18, name=clever_lalande, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec  3 18:03:31 compute-0 systemd[1]: Started libpod-conmon-2fd3fdca8437d8d42f66cb0c1354e74c250135d5bfa6e2ab9999c979ad65e9e9.scope.
Dec  3 18:03:31 compute-0 podman[195786]: 2025-12-03 18:03:30.94019454 +0000 UTC m=+0.034810597 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:03:31 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:03:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/873c8b83a8401013e2bbec3f0a558c7464680d2c67231a6c53c4f91515466350/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/873c8b83a8401013e2bbec3f0a558c7464680d2c67231a6c53c4f91515466350/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/873c8b83a8401013e2bbec3f0a558c7464680d2c67231a6c53c4f91515466350/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:31 compute-0 podman[195786]: 2025-12-03 18:03:31.083127796 +0000 UTC m=+0.177743803 container init 2fd3fdca8437d8d42f66cb0c1354e74c250135d5bfa6e2ab9999c979ad65e9e9 (image=quay.io/ceph/ceph:v18, name=clever_lalande, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  3 18:03:31 compute-0 podman[195786]: 2025-12-03 18:03:31.110183486 +0000 UTC m=+0.204799473 container start 2fd3fdca8437d8d42f66cb0c1354e74c250135d5bfa6e2ab9999c979ad65e9e9 (image=quay.io/ceph/ceph:v18, name=clever_lalande, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec  3 18:03:31 compute-0 podman[195786]: 2025-12-03 18:03:31.114622873 +0000 UTC m=+0.209238860 container attach 2fd3fdca8437d8d42f66cb0c1354e74c250135d5bfa6e2ab9999c979ad65e9e9 (image=quay.io/ceph/ceph:v18, name=clever_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec  3 18:03:31 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:03:31 compute-0 openstack_network_exporter[160319]: ERROR   18:03:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:03:31 compute-0 openstack_network_exporter[160319]: ERROR   18:03:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:03:31 compute-0 openstack_network_exporter[160319]: ERROR   18:03:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:03:31 compute-0 openstack_network_exporter[160319]: ERROR   18:03:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:03:31 compute-0 openstack_network_exporter[160319]: 
Dec  3 18:03:31 compute-0 openstack_network_exporter[160319]: ERROR   18:03:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:03:31 compute-0 openstack_network_exporter[160319]: 
Dec  3 18:03:31 compute-0 ceph-mon[192802]: Saving service mgr spec with placement count:2
Dec  3 18:03:31 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:31 compute-0 podman[195850]: 2025-12-03 18:03:31.491332548 +0000 UTC m=+0.109745268 container exec c4418ca0ee5df95c133db330bc8714b98e7c86be83b29540d0d4d94c3c723743 (image=quay.io/ceph/ceph:v18, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mon-compute-0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec  3 18:03:31 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0) v1
Dec  3 18:03:31 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/677890615' entity='client.admin' 
Dec  3 18:03:31 compute-0 ceph-mgr[193091]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  3 18:03:31 compute-0 podman[195786]: 2025-12-03 18:03:31.764849684 +0000 UTC m=+0.859465671 container died 2fd3fdca8437d8d42f66cb0c1354e74c250135d5bfa6e2ab9999c979ad65e9e9 (image=quay.io/ceph/ceph:v18, name=clever_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  3 18:03:31 compute-0 systemd[1]: libpod-2fd3fdca8437d8d42f66cb0c1354e74c250135d5bfa6e2ab9999c979ad65e9e9.scope: Deactivated successfully.
Dec  3 18:03:31 compute-0 podman[195850]: 2025-12-03 18:03:31.796268908 +0000 UTC m=+0.414681538 container exec_died c4418ca0ee5df95c133db330bc8714b98e7c86be83b29540d0d4d94c3c723743 (image=quay.io/ceph/ceph:v18, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mon-compute-0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec  3 18:03:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-873c8b83a8401013e2bbec3f0a558c7464680d2c67231a6c53c4f91515466350-merged.mount: Deactivated successfully.
Dec  3 18:03:31 compute-0 podman[195786]: 2025-12-03 18:03:31.855903982 +0000 UTC m=+0.950519959 container remove 2fd3fdca8437d8d42f66cb0c1354e74c250135d5bfa6e2ab9999c979ad65e9e9 (image=quay.io/ceph/ceph:v18, name=clever_lalande, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  3 18:03:31 compute-0 systemd[1]: libpod-conmon-2fd3fdca8437d8d42f66cb0c1354e74c250135d5bfa6e2ab9999c979ad65e9e9.scope: Deactivated successfully.
Dec  3 18:03:31 compute-0 podman[195916]: 2025-12-03 18:03:31.970132238 +0000 UTC m=+0.080192019 container create bf860b5a22a82699feac26a79e44fbe1c7910a33c6bb7b60e0e424740bb6553a (image=quay.io/ceph/ceph:v18, name=flamboyant_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:03:31 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:03:32 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:32 compute-0 systemd[1]: Started libpod-conmon-bf860b5a22a82699feac26a79e44fbe1c7910a33c6bb7b60e0e424740bb6553a.scope.
Dec  3 18:03:32 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:03:32 compute-0 podman[195916]: 2025-12-03 18:03:31.948197081 +0000 UTC m=+0.058256862 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:03:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cec9e164158b5e62cf786e092e8221dcc37cbfc56f5edce06f27545641f071c8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cec9e164158b5e62cf786e092e8221dcc37cbfc56f5edce06f27545641f071c8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cec9e164158b5e62cf786e092e8221dcc37cbfc56f5edce06f27545641f071c8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:32 compute-0 podman[195916]: 2025-12-03 18:03:32.076194817 +0000 UTC m=+0.186254588 container init bf860b5a22a82699feac26a79e44fbe1c7910a33c6bb7b60e0e424740bb6553a (image=quay.io/ceph/ceph:v18, name=flamboyant_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  3 18:03:32 compute-0 podman[195916]: 2025-12-03 18:03:32.085691826 +0000 UTC m=+0.195751587 container start bf860b5a22a82699feac26a79e44fbe1c7910a33c6bb7b60e0e424740bb6553a (image=quay.io/ceph/ceph:v18, name=flamboyant_elion, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True)
Dec  3 18:03:32 compute-0 podman[195916]: 2025-12-03 18:03:32.091375633 +0000 UTC m=+0.201435394 container attach bf860b5a22a82699feac26a79e44fbe1c7910a33c6bb7b60e0e424740bb6553a (image=quay.io/ceph/ceph:v18, name=flamboyant_elion, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True)
Dec  3 18:03:32 compute-0 podman[195944]: 2025-12-03 18:03:32.09626909 +0000 UTC m=+0.071968891 container health_status 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  3 18:03:32 compute-0 ceph-mon[192802]: Saving service crash spec with placement *
Dec  3 18:03:32 compute-0 ceph-mon[192802]: from='client.? 192.168.122.100:0/677890615' entity='client.admin' 
Dec  3 18:03:32 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:32 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 196106 (sysctl)
Dec  3 18:03:32 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Dec  3 18:03:32 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Dec  3 18:03:32 compute-0 ceph-mgr[193091]: log_channel(audit) log [DBG] : from='client.14162 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 18:03:32 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0) v1
Dec  3 18:03:32 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:32 compute-0 systemd[1]: libpod-bf860b5a22a82699feac26a79e44fbe1c7910a33c6bb7b60e0e424740bb6553a.scope: Deactivated successfully.
Dec  3 18:03:32 compute-0 conmon[195948]: conmon bf860b5a22a82699feac <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bf860b5a22a82699feac26a79e44fbe1c7910a33c6bb7b60e0e424740bb6553a.scope/container/memory.events
Dec  3 18:03:32 compute-0 podman[196113]: 2025-12-03 18:03:32.755793473 +0000 UTC m=+0.037278576 container died bf860b5a22a82699feac26a79e44fbe1c7910a33c6bb7b60e0e424740bb6553a (image=quay.io/ceph/ceph:v18, name=flamboyant_elion, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:03:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-cec9e164158b5e62cf786e092e8221dcc37cbfc56f5edce06f27545641f071c8-merged.mount: Deactivated successfully.
Dec  3 18:03:32 compute-0 podman[196113]: 2025-12-03 18:03:32.817736143 +0000 UTC m=+0.099221176 container remove bf860b5a22a82699feac26a79e44fbe1c7910a33c6bb7b60e0e424740bb6553a (image=quay.io/ceph/ceph:v18, name=flamboyant_elion, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Dec  3 18:03:32 compute-0 systemd[1]: libpod-conmon-bf860b5a22a82699feac26a79e44fbe1c7910a33c6bb7b60e0e424740bb6553a.scope: Deactivated successfully.
Dec  3 18:03:32 compute-0 podman[196132]: 2025-12-03 18:03:32.93867092 +0000 UTC m=+0.067190916 container create f87cc315296b8122f617da250617774bb085994576be5810b53a8ea92efc76ec (image=quay.io/ceph/ceph:v18, name=heuristic_tu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec  3 18:03:33 compute-0 systemd[1]: Started libpod-conmon-f87cc315296b8122f617da250617774bb085994576be5810b53a8ea92efc76ec.scope.
Dec  3 18:03:33 compute-0 podman[196132]: 2025-12-03 18:03:32.911386164 +0000 UTC m=+0.039906190 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:03:33 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:03:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76dadc6cebb74ac7dc7b05f98191e22b6cfe773532bf0592e078e9967ee860d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76dadc6cebb74ac7dc7b05f98191e22b6cfe773532bf0592e078e9967ee860d7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76dadc6cebb74ac7dc7b05f98191e22b6cfe773532bf0592e078e9967ee860d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:33 compute-0 podman[196132]: 2025-12-03 18:03:33.075415977 +0000 UTC m=+0.203936003 container init f87cc315296b8122f617da250617774bb085994576be5810b53a8ea92efc76ec (image=quay.io/ceph/ceph:v18, name=heuristic_tu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:03:33 compute-0 podman[196132]: 2025-12-03 18:03:33.089398493 +0000 UTC m=+0.217918489 container start f87cc315296b8122f617da250617774bb085994576be5810b53a8ea92efc76ec (image=quay.io/ceph/ceph:v18, name=heuristic_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:03:33 compute-0 podman[196132]: 2025-12-03 18:03:33.09508293 +0000 UTC m=+0.223602916 container attach f87cc315296b8122f617da250617774bb085994576be5810b53a8ea92efc76ec (image=quay.io/ceph/ceph:v18, name=heuristic_tu, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:03:33 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:03:33 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:33 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:33 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:33 compute-0 ceph-mgr[193091]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 18:03:33 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Dec  3 18:03:33 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:33 compute-0 ceph-mgr[193091]: [cephadm INFO root] Added label _admin to host compute-0
Dec  3 18:03:33 compute-0 ceph-mgr[193091]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Dec  3 18:03:33 compute-0 heuristic_tu[196162]: Added label _admin to host compute-0
Dec  3 18:03:33 compute-0 systemd[1]: libpod-f87cc315296b8122f617da250617774bb085994576be5810b53a8ea92efc76ec.scope: Deactivated successfully.
Dec  3 18:03:33 compute-0 podman[196132]: 2025-12-03 18:03:33.729713275 +0000 UTC m=+0.858233271 container died f87cc315296b8122f617da250617774bb085994576be5810b53a8ea92efc76ec (image=quay.io/ceph/ceph:v18, name=heuristic_tu, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:03:33 compute-0 ceph-mgr[193091]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Dec  3 18:03:33 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 18:03:33 compute-0 ceph-mon[192802]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Dec  3 18:03:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-76dadc6cebb74ac7dc7b05f98191e22b6cfe773532bf0592e078e9967ee860d7-merged.mount: Deactivated successfully.
Dec  3 18:03:33 compute-0 podman[196132]: 2025-12-03 18:03:33.854927194 +0000 UTC m=+0.983447200 container remove f87cc315296b8122f617da250617774bb085994576be5810b53a8ea92efc76ec (image=quay.io/ceph/ceph:v18, name=heuristic_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec  3 18:03:33 compute-0 systemd[1]: libpod-conmon-f87cc315296b8122f617da250617774bb085994576be5810b53a8ea92efc76ec.scope: Deactivated successfully.
Dec  3 18:03:33 compute-0 podman[196356]: 2025-12-03 18:03:33.947853079 +0000 UTC m=+0.063426846 container create 1a90989a49aaf54b7e6394b02ba41263e07a43d09600914b4fc443805b4bee8e (image=quay.io/ceph/ceph:v18, name=competent_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:03:33 compute-0 systemd[1]: Started libpod-conmon-1a90989a49aaf54b7e6394b02ba41263e07a43d09600914b4fc443805b4bee8e.scope.
Dec  3 18:03:34 compute-0 podman[196356]: 2025-12-03 18:03:33.922691504 +0000 UTC m=+0.038265311 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:03:34 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:03:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef30b847ef57b010933e4b45d960d405a5e47bfde237cf5b152eeac8e07d881c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef30b847ef57b010933e4b45d960d405a5e47bfde237cf5b152eeac8e07d881c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef30b847ef57b010933e4b45d960d405a5e47bfde237cf5b152eeac8e07d881c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:34 compute-0 podman[196356]: 2025-12-03 18:03:34.070980108 +0000 UTC m=+0.186553865 container init 1a90989a49aaf54b7e6394b02ba41263e07a43d09600914b4fc443805b4bee8e (image=quay.io/ceph/ceph:v18, name=competent_bouman, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  3 18:03:34 compute-0 podman[196356]: 2025-12-03 18:03:34.079434412 +0000 UTC m=+0.195008159 container start 1a90989a49aaf54b7e6394b02ba41263e07a43d09600914b4fc443805b4bee8e (image=quay.io/ceph/ceph:v18, name=competent_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec  3 18:03:34 compute-0 podman[196356]: 2025-12-03 18:03:34.084884523 +0000 UTC m=+0.200458270 container attach 1a90989a49aaf54b7e6394b02ba41263e07a43d09600914b4fc443805b4bee8e (image=quay.io/ceph/ceph:v18, name=competent_bouman, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  3 18:03:34 compute-0 podman[196490]: 2025-12-03 18:03:34.546412646 +0000 UTC m=+0.069006809 container create 371b87de5811e9bd435b16f1956bfe29a1b6bda8ce442bc059ba461125b2301c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_jackson, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec  3 18:03:34 compute-0 systemd[1]: Started libpod-conmon-371b87de5811e9bd435b16f1956bfe29a1b6bda8ce442bc059ba461125b2301c.scope.
Dec  3 18:03:34 compute-0 podman[196490]: 2025-12-03 18:03:34.515556195 +0000 UTC m=+0.038150358 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:03:34 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target_autotune}] v 0) v1
Dec  3 18:03:34 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:03:34 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1307238047' entity='client.admin' 
Dec  3 18:03:34 compute-0 podman[196490]: 2025-12-03 18:03:34.65387979 +0000 UTC m=+0.176473953 container init 371b87de5811e9bd435b16f1956bfe29a1b6bda8ce442bc059ba461125b2301c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:03:34 compute-0 podman[196490]: 2025-12-03 18:03:34.662910408 +0000 UTC m=+0.185504541 container start 371b87de5811e9bd435b16f1956bfe29a1b6bda8ce442bc059ba461125b2301c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_jackson, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec  3 18:03:34 compute-0 systemd[1]: libpod-1a90989a49aaf54b7e6394b02ba41263e07a43d09600914b4fc443805b4bee8e.scope: Deactivated successfully.
Dec  3 18:03:34 compute-0 conmon[196403]: conmon 1a90989a49aaf54b7e63 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1a90989a49aaf54b7e6394b02ba41263e07a43d09600914b4fc443805b4bee8e.scope/container/memory.events
Dec  3 18:03:34 compute-0 podman[196356]: 2025-12-03 18:03:34.668221154 +0000 UTC m=+0.783794901 container died 1a90989a49aaf54b7e6394b02ba41263e07a43d09600914b4fc443805b4bee8e (image=quay.io/ceph/ceph:v18, name=competent_bouman, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:03:34 compute-0 interesting_jackson[196506]: 167 167
Dec  3 18:03:34 compute-0 podman[196490]: 2025-12-03 18:03:34.668677346 +0000 UTC m=+0.191271499 container attach 371b87de5811e9bd435b16f1956bfe29a1b6bda8ce442bc059ba461125b2301c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec  3 18:03:34 compute-0 systemd[1]: libpod-371b87de5811e9bd435b16f1956bfe29a1b6bda8ce442bc059ba461125b2301c.scope: Deactivated successfully.
Dec  3 18:03:34 compute-0 conmon[196506]: conmon 371b87de5811e9bd435b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-371b87de5811e9bd435b16f1956bfe29a1b6bda8ce442bc059ba461125b2301c.scope/container/memory.events
Dec  3 18:03:34 compute-0 podman[196490]: 2025-12-03 18:03:34.680618043 +0000 UTC m=+0.203212276 container died 371b87de5811e9bd435b16f1956bfe29a1b6bda8ce442bc059ba461125b2301c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec  3 18:03:34 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:34 compute-0 ceph-mon[192802]: Added label _admin to host compute-0
Dec  3 18:03:34 compute-0 ceph-mon[192802]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Dec  3 18:03:34 compute-0 ceph-mon[192802]: from='client.? 192.168.122.100:0/1307238047' entity='client.admin' 
Dec  3 18:03:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef30b847ef57b010933e4b45d960d405a5e47bfde237cf5b152eeac8e07d881c-merged.mount: Deactivated successfully.
Dec  3 18:03:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b8a86b74ae84284dbe4862e61c279c0abf92201588588d09e9d82b3f3f47c4b-merged.mount: Deactivated successfully.
Dec  3 18:03:34 compute-0 podman[196490]: 2025-12-03 18:03:34.764164741 +0000 UTC m=+0.286758894 container remove 371b87de5811e9bd435b16f1956bfe29a1b6bda8ce442bc059ba461125b2301c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 18:03:34 compute-0 podman[196356]: 2025-12-03 18:03:34.77576359 +0000 UTC m=+0.891337347 container remove 1a90989a49aaf54b7e6394b02ba41263e07a43d09600914b4fc443805b4bee8e (image=quay.io/ceph/ceph:v18, name=competent_bouman, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec  3 18:03:34 compute-0 systemd[1]: libpod-conmon-371b87de5811e9bd435b16f1956bfe29a1b6bda8ce442bc059ba461125b2301c.scope: Deactivated successfully.
Dec  3 18:03:34 compute-0 systemd[1]: libpod-conmon-1a90989a49aaf54b7e6394b02ba41263e07a43d09600914b4fc443805b4bee8e.scope: Deactivated successfully.
Dec  3 18:03:34 compute-0 podman[196537]: 2025-12-03 18:03:34.898189973 +0000 UTC m=+0.093801536 container create e56c8bf6807ae1061beeb1b9414050643934e58badb552d67782c9fa27855a61 (image=quay.io/ceph/ceph:v18, name=kind_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:03:34 compute-0 podman[196537]: 2025-12-03 18:03:34.846116391 +0000 UTC m=+0.041727964 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:03:34 compute-0 systemd[1]: Started libpod-conmon-e56c8bf6807ae1061beeb1b9414050643934e58badb552d67782c9fa27855a61.scope.
Dec  3 18:03:34 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:03:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a156e8b4ffa594f34b68c27309ddd889c71fe2d451f4816efabf85b4ae143177/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a156e8b4ffa594f34b68c27309ddd889c71fe2d451f4816efabf85b4ae143177/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a156e8b4ffa594f34b68c27309ddd889c71fe2d451f4816efabf85b4ae143177/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:35 compute-0 podman[196537]: 2025-12-03 18:03:35.310539755 +0000 UTC m=+0.506151338 container init e56c8bf6807ae1061beeb1b9414050643934e58badb552d67782c9fa27855a61 (image=quay.io/ceph/ceph:v18, name=kind_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  3 18:03:35 compute-0 podman[196537]: 2025-12-03 18:03:35.326588021 +0000 UTC m=+0.522199584 container start e56c8bf6807ae1061beeb1b9414050643934e58badb552d67782c9fa27855a61 (image=quay.io/ceph/ceph:v18, name=kind_rubin, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:03:35 compute-0 podman[196537]: 2025-12-03 18:03:35.331527629 +0000 UTC m=+0.527139252 container attach e56c8bf6807ae1061beeb1b9414050643934e58badb552d67782c9fa27855a61 (image=quay.io/ceph/ceph:v18, name=kind_rubin, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec  3 18:03:35 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 18:03:36 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0) v1
Dec  3 18:03:36 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4159417208' entity='client.admin' 
Dec  3 18:03:36 compute-0 kind_rubin[196553]: set mgr/dashboard/cluster/status
Dec  3 18:03:36 compute-0 systemd[1]: libpod-e56c8bf6807ae1061beeb1b9414050643934e58badb552d67782c9fa27855a61.scope: Deactivated successfully.
Dec  3 18:03:36 compute-0 podman[196579]: 2025-12-03 18:03:36.170971198 +0000 UTC m=+0.055624918 container died e56c8bf6807ae1061beeb1b9414050643934e58badb552d67782c9fa27855a61 (image=quay.io/ceph/ceph:v18, name=kind_rubin, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:03:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-a156e8b4ffa594f34b68c27309ddd889c71fe2d451f4816efabf85b4ae143177-merged.mount: Deactivated successfully.
Dec  3 18:03:36 compute-0 podman[196579]: 2025-12-03 18:03:36.237729782 +0000 UTC m=+0.122383462 container remove e56c8bf6807ae1061beeb1b9414050643934e58badb552d67782c9fa27855a61 (image=quay.io/ceph/ceph:v18, name=kind_rubin, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:03:36 compute-0 systemd[1]: libpod-conmon-e56c8bf6807ae1061beeb1b9414050643934e58badb552d67782c9fa27855a61.scope: Deactivated successfully.
Dec  3 18:03:36 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:03:36 compute-0 podman[196600]: 2025-12-03 18:03:36.488311816 +0000 UTC m=+0.036018516 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:03:36 compute-0 podman[196600]: 2025-12-03 18:03:36.703033648 +0000 UTC m=+0.250740308 container create 64674ba9e4766d2263cfcde5e2ebefc31164cb2c08bbef63d25cce8fa474f2c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec  3 18:03:36 compute-0 systemd[1]: Started libpod-conmon-64674ba9e4766d2263cfcde5e2ebefc31164cb2c08bbef63d25cce8fa474f2c5.scope.
Dec  3 18:03:36 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:03:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9fdc1a7bb54969a1594a82084208aaff4a2fbce30418841138f4a202364ed07/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9fdc1a7bb54969a1594a82084208aaff4a2fbce30418841138f4a202364ed07/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9fdc1a7bb54969a1594a82084208aaff4a2fbce30418841138f4a202364ed07/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9fdc1a7bb54969a1594a82084208aaff4a2fbce30418841138f4a202364ed07/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:36 compute-0 podman[196600]: 2025-12-03 18:03:36.845847431 +0000 UTC m=+0.393554101 container init 64674ba9e4766d2263cfcde5e2ebefc31164cb2c08bbef63d25cce8fa474f2c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  3 18:03:36 compute-0 podman[196600]: 2025-12-03 18:03:36.859619321 +0000 UTC m=+0.407325981 container start 64674ba9e4766d2263cfcde5e2ebefc31164cb2c08bbef63d25cce8fa474f2c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_lewin, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  3 18:03:36 compute-0 podman[196600]: 2025-12-03 18:03:36.86409283 +0000 UTC m=+0.411799540 container attach 64674ba9e4766d2263cfcde5e2ebefc31164cb2c08bbef63d25cce8fa474f2c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:03:36 compute-0 python3[196640]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c1caf3ba-b2a5-5005-a11e-e955c344dccc -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:03:37 compute-0 podman[196646]: 2025-12-03 18:03:37.024746891 +0000 UTC m=+0.097994817 container create a5f7b3530f1516bb5bc1710ff07c54b65993d5dde0982b562f1511349cdcef93 (image=quay.io/ceph/ceph:v18, name=infallible_snyder, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec  3 18:03:37 compute-0 ceph-mon[192802]: from='client.? 192.168.122.100:0/4159417208' entity='client.admin' 
Dec  3 18:03:37 compute-0 podman[196646]: 2025-12-03 18:03:36.984029932 +0000 UTC m=+0.057277908 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:03:37 compute-0 systemd[1]: Started libpod-conmon-a5f7b3530f1516bb5bc1710ff07c54b65993d5dde0982b562f1511349cdcef93.scope.
Dec  3 18:03:37 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:03:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc363f2abae1a0333ac7c418f06afb31ab2c6a8f534872b45fbdb4b29cce2b88/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc363f2abae1a0333ac7c418f06afb31ab2c6a8f534872b45fbdb4b29cce2b88/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:37 compute-0 podman[196646]: 2025-12-03 18:03:37.159761037 +0000 UTC m=+0.233008953 container init a5f7b3530f1516bb5bc1710ff07c54b65993d5dde0982b562f1511349cdcef93 (image=quay.io/ceph/ceph:v18, name=infallible_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec  3 18:03:37 compute-0 podman[196646]: 2025-12-03 18:03:37.168790073 +0000 UTC m=+0.242037969 container start a5f7b3530f1516bb5bc1710ff07c54b65993d5dde0982b562f1511349cdcef93 (image=quay.io/ceph/ceph:v18, name=infallible_snyder, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec  3 18:03:37 compute-0 podman[196646]: 2025-12-03 18:03:37.176201492 +0000 UTC m=+0.249449378 container attach a5f7b3530f1516bb5bc1710ff07c54b65993d5dde0982b562f1511349cdcef93 (image=quay.io/ceph/ceph:v18, name=infallible_snyder, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:03:37 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 18:03:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0) v1
Dec  3 18:03:37 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2124627947' entity='client.admin' 
Dec  3 18:03:37 compute-0 systemd[1]: libpod-a5f7b3530f1516bb5bc1710ff07c54b65993d5dde0982b562f1511349cdcef93.scope: Deactivated successfully.
Dec  3 18:03:37 compute-0 podman[196696]: 2025-12-03 18:03:37.918073924 +0000 UTC m=+0.045769086 container died a5f7b3530f1516bb5bc1710ff07c54b65993d5dde0982b562f1511349cdcef93 (image=quay.io/ceph/ceph:v18, name=infallible_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  3 18:03:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc363f2abae1a0333ac7c418f06afb31ab2c6a8f534872b45fbdb4b29cce2b88-merged.mount: Deactivated successfully.
Dec  3 18:03:37 compute-0 podman[196696]: 2025-12-03 18:03:37.973698616 +0000 UTC m=+0.101393758 container remove a5f7b3530f1516bb5bc1710ff07c54b65993d5dde0982b562f1511349cdcef93 (image=quay.io/ceph/ceph:v18, name=infallible_snyder, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:03:37 compute-0 systemd[1]: libpod-conmon-a5f7b3530f1516bb5bc1710ff07c54b65993d5dde0982b562f1511349cdcef93.scope: Deactivated successfully.
Dec  3 18:03:38 compute-0 ceph-mon[192802]: from='client.? 192.168.122.100:0/2124627947' entity='client.admin' 
Dec  3 18:03:38 compute-0 exciting_lewin[196641]: [
Dec  3 18:03:38 compute-0 exciting_lewin[196641]:    {
Dec  3 18:03:38 compute-0 exciting_lewin[196641]:        "available": false,
Dec  3 18:03:38 compute-0 exciting_lewin[196641]:        "ceph_device": false,
Dec  3 18:03:38 compute-0 exciting_lewin[196641]:        "device_id": "QEMU_DVD-ROM_QM00001",
Dec  3 18:03:38 compute-0 exciting_lewin[196641]:        "lsm_data": {},
Dec  3 18:03:38 compute-0 exciting_lewin[196641]:        "lvs": [],
Dec  3 18:03:38 compute-0 exciting_lewin[196641]:        "path": "/dev/sr0",
Dec  3 18:03:38 compute-0 exciting_lewin[196641]:        "rejected_reasons": [
Dec  3 18:03:38 compute-0 exciting_lewin[196641]:            "Has a FileSystem",
Dec  3 18:03:38 compute-0 exciting_lewin[196641]:            "Insufficient space (<5GB)"
Dec  3 18:03:38 compute-0 exciting_lewin[196641]:        ],
Dec  3 18:03:38 compute-0 exciting_lewin[196641]:        "sys_api": {
Dec  3 18:03:38 compute-0 exciting_lewin[196641]:            "actuators": null,
Dec  3 18:03:38 compute-0 exciting_lewin[196641]:            "device_nodes": "sr0",
Dec  3 18:03:38 compute-0 exciting_lewin[196641]:            "devname": "sr0",
Dec  3 18:03:38 compute-0 exciting_lewin[196641]:            "human_readable_size": "482.00 KB",
Dec  3 18:03:38 compute-0 exciting_lewin[196641]:            "id_bus": "ata",
Dec  3 18:03:38 compute-0 exciting_lewin[196641]:            "model": "QEMU DVD-ROM",
Dec  3 18:03:38 compute-0 exciting_lewin[196641]:            "nr_requests": "2",
Dec  3 18:03:38 compute-0 exciting_lewin[196641]:            "parent": "/dev/sr0",
Dec  3 18:03:38 compute-0 exciting_lewin[196641]:            "partitions": {},
Dec  3 18:03:38 compute-0 exciting_lewin[196641]:            "path": "/dev/sr0",
Dec  3 18:03:38 compute-0 exciting_lewin[196641]:            "removable": "1",
Dec  3 18:03:38 compute-0 exciting_lewin[196641]:            "rev": "2.5+",
Dec  3 18:03:38 compute-0 exciting_lewin[196641]:            "ro": "0",
Dec  3 18:03:38 compute-0 exciting_lewin[196641]:            "rotational": "1",
Dec  3 18:03:38 compute-0 exciting_lewin[196641]:            "sas_address": "",
Dec  3 18:03:38 compute-0 exciting_lewin[196641]:            "sas_device_handle": "",
Dec  3 18:03:38 compute-0 exciting_lewin[196641]:            "scheduler_mode": "mq-deadline",
Dec  3 18:03:38 compute-0 exciting_lewin[196641]:            "sectors": 0,
Dec  3 18:03:38 compute-0 exciting_lewin[196641]:            "sectorsize": "2048",
Dec  3 18:03:38 compute-0 exciting_lewin[196641]:            "size": 493568.0,
Dec  3 18:03:38 compute-0 exciting_lewin[196641]:            "support_discard": "2048",
Dec  3 18:03:38 compute-0 exciting_lewin[196641]:            "type": "disk",
Dec  3 18:03:38 compute-0 exciting_lewin[196641]:            "vendor": "QEMU"
Dec  3 18:03:38 compute-0 exciting_lewin[196641]:        }
Dec  3 18:03:38 compute-0 exciting_lewin[196641]:    }
Dec  3 18:03:38 compute-0 exciting_lewin[196641]: ]
Dec  3 18:03:38 compute-0 systemd[1]: libpod-64674ba9e4766d2263cfcde5e2ebefc31164cb2c08bbef63d25cce8fa474f2c5.scope: Deactivated successfully.
Dec  3 18:03:38 compute-0 systemd[1]: libpod-64674ba9e4766d2263cfcde5e2ebefc31164cb2c08bbef63d25cce8fa474f2c5.scope: Consumed 2.199s CPU time.
Dec  3 18:03:38 compute-0 podman[196600]: 2025-12-03 18:03:38.98298998 +0000 UTC m=+2.530696650 container died 64674ba9e4766d2263cfcde5e2ebefc31164cb2c08bbef63d25cce8fa474f2c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  3 18:03:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-e9fdc1a7bb54969a1594a82084208aaff4a2fbce30418841138f4a202364ed07-merged.mount: Deactivated successfully.
Dec  3 18:03:39 compute-0 podman[196600]: 2025-12-03 18:03:39.058981099 +0000 UTC m=+2.606687799 container remove 64674ba9e4766d2263cfcde5e2ebefc31164cb2c08bbef63d25cce8fa474f2c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_lewin, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:03:39 compute-0 systemd[1]: libpod-conmon-64674ba9e4766d2263cfcde5e2ebefc31164cb2c08bbef63d25cce8fa474f2c5.scope: Deactivated successfully.
Dec  3 18:03:39 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:03:39 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:39 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:03:39 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:39 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:03:39 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:39 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:03:39 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:39 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Dec  3 18:03:39 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  3 18:03:39 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:03:39 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:03:39 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 18:03:39 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:03:39 compute-0 ceph-mgr[193091]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Dec  3 18:03:39 compute-0 ceph-mgr[193091]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Dec  3 18:03:39 compute-0 ansible-async_wrapper.py[198882]: Invoked with j676467359670 30 /home/zuul/.ansible/tmp/ansible-tmp-1764785018.4115155-37601-224495133147738/AnsiballZ_command.py _
Dec  3 18:03:39 compute-0 ansible-async_wrapper.py[198911]: Starting module and watcher
Dec  3 18:03:39 compute-0 ansible-async_wrapper.py[198911]: Start watching 198914 (30)
Dec  3 18:03:39 compute-0 ansible-async_wrapper.py[198914]: Start module (198914)
Dec  3 18:03:39 compute-0 ansible-async_wrapper.py[198882]: Return async_wrapper task started.
Dec  3 18:03:39 compute-0 python3[198916]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c1caf3ba-b2a5-5005-a11e-e955c344dccc -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:03:39 compute-0 podman[198962]: 2025-12-03 18:03:39.49632338 +0000 UTC m=+0.072424915 container create 5e9f6ae8464ede95ec25e2b4e222e9b1638f573ec2991d9fd05b367a4f796041 (image=quay.io/ceph/ceph:v18, name=affectionate_sanderson, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:03:39 compute-0 systemd[1]: Started libpod-conmon-5e9f6ae8464ede95ec25e2b4e222e9b1638f573ec2991d9fd05b367a4f796041.scope.
Dec  3 18:03:39 compute-0 podman[198962]: 2025-12-03 18:03:39.465487182 +0000 UTC m=+0.041588767 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:03:39 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:03:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa84d4847f9484646069e991b36b7104033aa595d18799936999240761b4745f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa84d4847f9484646069e991b36b7104033aa595d18799936999240761b4745f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:39 compute-0 podman[198962]: 2025-12-03 18:03:39.623675329 +0000 UTC m=+0.199776864 container init 5e9f6ae8464ede95ec25e2b4e222e9b1638f573ec2991d9fd05b367a4f796041 (image=quay.io/ceph/ceph:v18, name=affectionate_sanderson, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec  3 18:03:39 compute-0 podman[198962]: 2025-12-03 18:03:39.635955432 +0000 UTC m=+0.212056957 container start 5e9f6ae8464ede95ec25e2b4e222e9b1638f573ec2991d9fd05b367a4f796041 (image=quay.io/ceph/ceph:v18, name=affectionate_sanderson, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  3 18:03:39 compute-0 podman[198962]: 2025-12-03 18:03:39.640312056 +0000 UTC m=+0.216413591 container attach 5e9f6ae8464ede95ec25e2b4e222e9b1638f573ec2991d9fd05b367a4f796041 (image=quay.io/ceph/ceph:v18, name=affectionate_sanderson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:03:39 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 18:03:40 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:40 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:40 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:40 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:40 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  3 18:03:40 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:03:40 compute-0 ceph-mon[192802]: Updating compute-0:/etc/ceph/ceph.conf
Dec  3 18:03:40 compute-0 ceph-mgr[193091]: log_channel(audit) log [DBG] : from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec  3 18:03:40 compute-0 affectionate_sanderson[199001]: 
Dec  3 18:03:40 compute-0 affectionate_sanderson[199001]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec  3 18:03:40 compute-0 systemd[1]: libpod-5e9f6ae8464ede95ec25e2b4e222e9b1638f573ec2991d9fd05b367a4f796041.scope: Deactivated successfully.
Dec  3 18:03:40 compute-0 podman[198962]: 2025-12-03 18:03:40.27365199 +0000 UTC m=+0.849753555 container died 5e9f6ae8464ede95ec25e2b4e222e9b1638f573ec2991d9fd05b367a4f796041 (image=quay.io/ceph/ceph:v18, name=affectionate_sanderson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:03:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa84d4847f9484646069e991b36b7104033aa595d18799936999240761b4745f-merged.mount: Deactivated successfully.
Dec  3 18:03:40 compute-0 podman[198962]: 2025-12-03 18:03:40.334008055 +0000 UTC m=+0.910109630 container remove 5e9f6ae8464ede95ec25e2b4e222e9b1638f573ec2991d9fd05b367a4f796041 (image=quay.io/ceph/ceph:v18, name=affectionate_sanderson, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec  3 18:03:40 compute-0 systemd[1]: libpod-conmon-5e9f6ae8464ede95ec25e2b4e222e9b1638f573ec2991d9fd05b367a4f796041.scope: Deactivated successfully.
Dec  3 18:03:40 compute-0 ansible-async_wrapper.py[198914]: Module complete (198914)
Dec  3 18:03:40 compute-0 python3[199323]: ansible-ansible.legacy.async_status Invoked with jid=j676467359670.198882 mode=status _async_dir=/root/.ansible_async
Dec  3 18:03:41 compute-0 ceph-mgr[193091]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/c1caf3ba-b2a5-5005-a11e-e955c344dccc/config/ceph.conf
Dec  3 18:03:41 compute-0 ceph-mgr[193091]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/c1caf3ba-b2a5-5005-a11e-e955c344dccc/config/ceph.conf
Dec  3 18:03:41 compute-0 python3[199458]: ansible-ansible.legacy.async_status Invoked with jid=j676467359670.198882 mode=cleanup _async_dir=/root/.ansible_async
Dec  3 18:03:41 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:03:41 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 18:03:41 compute-0 python3[199629]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  3 18:03:42 compute-0 ceph-mon[192802]: Updating compute-0:/var/lib/ceph/c1caf3ba-b2a5-5005-a11e-e955c344dccc/config/ceph.conf
Dec  3 18:03:42 compute-0 python3[199812]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c1caf3ba-b2a5-5005-a11e-e955c344dccc -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:03:42 compute-0 podman[199855]: 2025-12-03 18:03:42.490048473 +0000 UTC m=+0.064744931 container create 34029e9eebeef5dcca418750864c0d89e060ef0bf50cbd5808d0839b71147edc (image=quay.io/ceph/ceph:v18, name=festive_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:03:42 compute-0 systemd[1]: Started libpod-conmon-34029e9eebeef5dcca418750864c0d89e060ef0bf50cbd5808d0839b71147edc.scope.
Dec  3 18:03:42 compute-0 podman[199855]: 2025-12-03 18:03:42.463960308 +0000 UTC m=+0.038656816 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:03:42 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:03:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c4f65995945c3cae13f48793002e3ce0e23d9fc312cc47358c2436996e745cf/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c4f65995945c3cae13f48793002e3ce0e23d9fc312cc47358c2436996e745cf/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c4f65995945c3cae13f48793002e3ce0e23d9fc312cc47358c2436996e745cf/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:42 compute-0 podman[199855]: 2025-12-03 18:03:42.610941827 +0000 UTC m=+0.185638315 container init 34029e9eebeef5dcca418750864c0d89e060ef0bf50cbd5808d0839b71147edc (image=quay.io/ceph/ceph:v18, name=festive_banzai, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:03:42 compute-0 podman[199855]: 2025-12-03 18:03:42.626223433 +0000 UTC m=+0.200919891 container start 34029e9eebeef5dcca418750864c0d89e060ef0bf50cbd5808d0839b71147edc (image=quay.io/ceph/ceph:v18, name=festive_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  3 18:03:42 compute-0 podman[199855]: 2025-12-03 18:03:42.630966416 +0000 UTC m=+0.205662874 container attach 34029e9eebeef5dcca418750864c0d89e060ef0bf50cbd5808d0839b71147edc (image=quay.io/ceph/ceph:v18, name=festive_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec  3 18:03:42 compute-0 ceph-mgr[193091]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  3 18:03:42 compute-0 ceph-mgr[193091]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  3 18:03:43 compute-0 ceph-mgr[193091]: log_channel(audit) log [DBG] : from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec  3 18:03:43 compute-0 festive_banzai[199901]: 
Dec  3 18:03:43 compute-0 festive_banzai[199901]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec  3 18:03:43 compute-0 systemd[1]: libpod-34029e9eebeef5dcca418750864c0d89e060ef0bf50cbd5808d0839b71147edc.scope: Deactivated successfully.
Dec  3 18:03:43 compute-0 podman[199855]: 2025-12-03 18:03:43.267052625 +0000 UTC m=+0.841749083 container died 34029e9eebeef5dcca418750864c0d89e060ef0bf50cbd5808d0839b71147edc (image=quay.io/ceph/ceph:v18, name=festive_banzai, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:03:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c4f65995945c3cae13f48793002e3ce0e23d9fc312cc47358c2436996e745cf-merged.mount: Deactivated successfully.
Dec  3 18:03:43 compute-0 podman[199855]: 2025-12-03 18:03:43.318075696 +0000 UTC m=+0.892772154 container remove 34029e9eebeef5dcca418750864c0d89e060ef0bf50cbd5808d0839b71147edc (image=quay.io/ceph/ceph:v18, name=festive_banzai, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:03:43 compute-0 systemd[1]: libpod-conmon-34029e9eebeef5dcca418750864c0d89e060ef0bf50cbd5808d0839b71147edc.scope: Deactivated successfully.
Dec  3 18:03:43 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 18:03:43 compute-0 python3[200293]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c1caf3ba-b2a5-5005-a11e-e955c344dccc -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:03:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:03:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:03:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:03:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:03:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:03:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:03:43 compute-0 podman[200341]: 2025-12-03 18:03:43.944628717 +0000 UTC m=+0.069829273 container create 09c7c64bf41f029f070568699275fbe2422148bb364279b363769e64e7d8ee1a (image=quay.io/ceph/ceph:v18, name=eloquent_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec  3 18:03:43 compute-0 systemd[1]: Started libpod-conmon-09c7c64bf41f029f070568699275fbe2422148bb364279b363769e64e7d8ee1a.scope.
Dec  3 18:03:44 compute-0 podman[200341]: 2025-12-03 18:03:43.925245223 +0000 UTC m=+0.050445799 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:03:44 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:03:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/532893ea9a8add66b630da0c006a7fd9a5f62a729f02a812c91bc96809e804dd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/532893ea9a8add66b630da0c006a7fd9a5f62a729f02a812c91bc96809e804dd/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/532893ea9a8add66b630da0c006a7fd9a5f62a729f02a812c91bc96809e804dd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:44 compute-0 podman[200341]: 2025-12-03 18:03:44.065279196 +0000 UTC m=+0.190479772 container init 09c7c64bf41f029f070568699275fbe2422148bb364279b363769e64e7d8ee1a (image=quay.io/ceph/ceph:v18, name=eloquent_mendel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Dec  3 18:03:44 compute-0 podman[200341]: 2025-12-03 18:03:44.073209056 +0000 UTC m=+0.198409612 container start 09c7c64bf41f029f070568699275fbe2422148bb364279b363769e64e7d8ee1a (image=quay.io/ceph/ceph:v18, name=eloquent_mendel, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:03:44 compute-0 podman[200341]: 2025-12-03 18:03:44.077808396 +0000 UTC m=+0.203008952 container attach 09c7c64bf41f029f070568699275fbe2422148bb364279b363769e64e7d8ee1a (image=quay.io/ceph/ceph:v18, name=eloquent_mendel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec  3 18:03:44 compute-0 ceph-mon[192802]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  3 18:03:44 compute-0 ceph-mgr[193091]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/c1caf3ba-b2a5-5005-a11e-e955c344dccc/config/ceph.client.admin.keyring
Dec  3 18:03:44 compute-0 ceph-mgr[193091]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/c1caf3ba-b2a5-5005-a11e-e955c344dccc/config/ceph.client.admin.keyring
Dec  3 18:03:44 compute-0 ansible-async_wrapper.py[198911]: Done in kid B.
Dec  3 18:03:44 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0) v1
Dec  3 18:03:44 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2529330567' entity='client.admin' 
Dec  3 18:03:44 compute-0 systemd[1]: libpod-09c7c64bf41f029f070568699275fbe2422148bb364279b363769e64e7d8ee1a.scope: Deactivated successfully.
Dec  3 18:03:44 compute-0 podman[200341]: 2025-12-03 18:03:44.884287454 +0000 UTC m=+1.009488010 container died 09c7c64bf41f029f070568699275fbe2422148bb364279b363769e64e7d8ee1a (image=quay.io/ceph/ceph:v18, name=eloquent_mendel, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:03:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-532893ea9a8add66b630da0c006a7fd9a5f62a729f02a812c91bc96809e804dd-merged.mount: Deactivated successfully.
Dec  3 18:03:44 compute-0 podman[200341]: 2025-12-03 18:03:44.933298447 +0000 UTC m=+1.058498993 container remove 09c7c64bf41f029f070568699275fbe2422148bb364279b363769e64e7d8ee1a (image=quay.io/ceph/ceph:v18, name=eloquent_mendel, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:03:44 compute-0 systemd[1]: libpod-conmon-09c7c64bf41f029f070568699275fbe2422148bb364279b363769e64e7d8ee1a.scope: Deactivated successfully.
Dec  3 18:03:45 compute-0 ceph-mon[192802]: Updating compute-0:/var/lib/ceph/c1caf3ba-b2a5-5005-a11e-e955c344dccc/config/ceph.client.admin.keyring
Dec  3 18:03:45 compute-0 ceph-mon[192802]: from='client.? 192.168.122.100:0/2529330567' entity='client.admin' 
Dec  3 18:03:45 compute-0 python3[200762]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c1caf3ba-b2a5-5005-a11e-e955c344dccc -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:03:45 compute-0 podman[200809]: 2025-12-03 18:03:45.404932069 +0000 UTC m=+0.071879113 container create d373de41ae5b6127c15a92875f10bd22436b61d4ce9133313bb03a97d8044032 (image=quay.io/ceph/ceph:v18, name=agitated_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  3 18:03:45 compute-0 systemd[1]: Started libpod-conmon-d373de41ae5b6127c15a92875f10bd22436b61d4ce9133313bb03a97d8044032.scope.
Dec  3 18:03:45 compute-0 podman[200809]: 2025-12-03 18:03:45.381904817 +0000 UTC m=+0.048851881 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:03:45 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:03:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb84da288db27c8deb2a3566ae24d7731f281146116d03341ae3a0265b33640a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb84da288db27c8deb2a3566ae24d7731f281146116d03341ae3a0265b33640a/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb84da288db27c8deb2a3566ae24d7731f281146116d03341ae3a0265b33640a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:45 compute-0 podman[200809]: 2025-12-03 18:03:45.522961894 +0000 UTC m=+0.189908928 container init d373de41ae5b6127c15a92875f10bd22436b61d4ce9133313bb03a97d8044032 (image=quay.io/ceph/ceph:v18, name=agitated_lalande, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec  3 18:03:45 compute-0 podman[200809]: 2025-12-03 18:03:45.533482875 +0000 UTC m=+0.200429899 container start d373de41ae5b6127c15a92875f10bd22436b61d4ce9133313bb03a97d8044032 (image=quay.io/ceph/ceph:v18, name=agitated_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec  3 18:03:45 compute-0 podman[200809]: 2025-12-03 18:03:45.538975217 +0000 UTC m=+0.205922231 container attach d373de41ae5b6127c15a92875f10bd22436b61d4ce9133313bb03a97d8044032 (image=quay.io/ceph/ceph:v18, name=agitated_lalande, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:03:45 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 18:03:45 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:03:45 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:45 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:03:45 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:45 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 18:03:45 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:45 compute-0 ceph-mgr[193091]: [progress INFO root] update: starting ev d5df6586-fd5d-4e96-b9f1-fec5cf744c0b (Updating crash deployment (+1 -> 1))
Dec  3 18:03:45 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Dec  3 18:03:45 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec  3 18:03:45 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec  3 18:03:45 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:03:45 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:03:45 compute-0 ceph-mgr[193091]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Dec  3 18:03:45 compute-0 ceph-mgr[193091]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Dec  3 18:03:46 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0) v1
Dec  3 18:03:46 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2412565735' entity='client.admin' 
Dec  3 18:03:46 compute-0 systemd[1]: libpod-d373de41ae5b6127c15a92875f10bd22436b61d4ce9133313bb03a97d8044032.scope: Deactivated successfully.
Dec  3 18:03:46 compute-0 conmon[200858]: conmon d373de41ae5b6127c15a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d373de41ae5b6127c15a92875f10bd22436b61d4ce9133313bb03a97d8044032.scope/container/memory.events
Dec  3 18:03:46 compute-0 podman[200809]: 2025-12-03 18:03:46.160699002 +0000 UTC m=+0.827646016 container died d373de41ae5b6127c15a92875f10bd22436b61d4ce9133313bb03a97d8044032 (image=quay.io/ceph/ceph:v18, name=agitated_lalande, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec  3 18:03:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-bb84da288db27c8deb2a3566ae24d7731f281146116d03341ae3a0265b33640a-merged.mount: Deactivated successfully.
Dec  3 18:03:46 compute-0 podman[200809]: 2025-12-03 18:03:46.219836818 +0000 UTC m=+0.886783832 container remove d373de41ae5b6127c15a92875f10bd22436b61d4ce9133313bb03a97d8044032 (image=quay.io/ceph/ceph:v18, name=agitated_lalande, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:03:46 compute-0 systemd[1]: libpod-conmon-d373de41ae5b6127c15a92875f10bd22436b61d4ce9133313bb03a97d8044032.scope: Deactivated successfully.
Dec  3 18:03:46 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:03:46 compute-0 python3[201136]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c1caf3ba-b2a5-5005-a11e-e955c344dccc -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:03:46 compute-0 podman[201150]: 2025-12-03 18:03:46.626052313 +0000 UTC m=+0.060761996 container create 38d2de4461f466a7047235f998b2f752516a5378018c6bfaf5f43529e9f42ebe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bhaskara, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  3 18:03:46 compute-0 systemd[1]: Started libpod-conmon-38d2de4461f466a7047235f998b2f752516a5378018c6bfaf5f43529e9f42ebe.scope.
Dec  3 18:03:46 compute-0 podman[201163]: 2025-12-03 18:03:46.69526073 +0000 UTC m=+0.071572464 container create 205c146bac61454b90850fa238ac96c2f7b18e8d90c65412f2423e2bdec7ff8a (image=quay.io/ceph/ceph:v18, name=youthful_goldstine, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:03:46 compute-0 podman[201150]: 2025-12-03 18:03:46.60463887 +0000 UTC m=+0.039348593 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:03:46 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:03:46 compute-0 podman[201150]: 2025-12-03 18:03:46.732177064 +0000 UTC m=+0.166886777 container init 38d2de4461f466a7047235f998b2f752516a5378018c6bfaf5f43529e9f42ebe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bhaskara, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:03:46 compute-0 podman[201150]: 2025-12-03 18:03:46.746390065 +0000 UTC m=+0.181099758 container start 38d2de4461f466a7047235f998b2f752516a5378018c6bfaf5f43529e9f42ebe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bhaskara, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:03:46 compute-0 podman[201150]: 2025-12-03 18:03:46.75121418 +0000 UTC m=+0.185923893 container attach 38d2de4461f466a7047235f998b2f752516a5378018c6bfaf5f43529e9f42ebe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bhaskara, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  3 18:03:46 compute-0 heuristic_bhaskara[201175]: 167 167
Dec  3 18:03:46 compute-0 systemd[1]: libpod-38d2de4461f466a7047235f998b2f752516a5378018c6bfaf5f43529e9f42ebe.scope: Deactivated successfully.
Dec  3 18:03:46 compute-0 podman[201150]: 2025-12-03 18:03:46.753037533 +0000 UTC m=+0.187747236 container died 38d2de4461f466a7047235f998b2f752516a5378018c6bfaf5f43529e9f42ebe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bhaskara, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  3 18:03:46 compute-0 podman[201163]: 2025-12-03 18:03:46.659612777 +0000 UTC m=+0.035924521 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:03:46 compute-0 systemd[1]: Started libpod-conmon-205c146bac61454b90850fa238ac96c2f7b18e8d90c65412f2423e2bdec7ff8a.scope.
Dec  3 18:03:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-7054a2ddde2e884e9b6f52e1353b6b94f4dda46a687bf42df9e5f59e60953ab7-merged.mount: Deactivated successfully.
Dec  3 18:03:46 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:03:46 compute-0 podman[201150]: 2025-12-03 18:03:46.808098992 +0000 UTC m=+0.242808705 container remove 38d2de4461f466a7047235f998b2f752516a5378018c6bfaf5f43529e9f42ebe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bhaskara, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  3 18:03:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dbc651f06efe9986b0ad80fcf8008ea65242fc3a78642fcfd464048cfdb6027/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dbc651f06efe9986b0ad80fcf8008ea65242fc3a78642fcfd464048cfdb6027/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dbc651f06efe9986b0ad80fcf8008ea65242fc3a78642fcfd464048cfdb6027/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:46 compute-0 podman[201163]: 2025-12-03 18:03:46.832113737 +0000 UTC m=+0.208425501 container init 205c146bac61454b90850fa238ac96c2f7b18e8d90c65412f2423e2bdec7ff8a (image=quay.io/ceph/ceph:v18, name=youthful_goldstine, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec  3 18:03:46 compute-0 systemd[1]: libpod-conmon-38d2de4461f466a7047235f998b2f752516a5378018c6bfaf5f43529e9f42ebe.scope: Deactivated successfully.
Dec  3 18:03:46 compute-0 podman[201163]: 2025-12-03 18:03:46.847015884 +0000 UTC m=+0.223327598 container start 205c146bac61454b90850fa238ac96c2f7b18e8d90c65412f2423e2bdec7ff8a (image=quay.io/ceph/ceph:v18, name=youthful_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:03:46 compute-0 podman[201163]: 2025-12-03 18:03:46.852601668 +0000 UTC m=+0.228913412 container attach 205c146bac61454b90850fa238ac96c2f7b18e8d90c65412f2423e2bdec7ff8a (image=quay.io/ceph/ceph:v18, name=youthful_goldstine, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:03:46 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:46 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:46 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:46 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec  3 18:03:46 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec  3 18:03:46 compute-0 ceph-mon[192802]: Deploying daemon crash.compute-0 on compute-0
Dec  3 18:03:46 compute-0 ceph-mon[192802]: from='client.? 192.168.122.100:0/2412565735' entity='client.admin' 
Dec  3 18:03:46 compute-0 systemd[1]: Reloading.
Dec  3 18:03:47 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 18:03:47 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 18:03:47 compute-0 systemd[1]: Reloading.
Dec  3 18:03:47 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0) v1
Dec  3 18:03:47 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1490154087' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Dec  3 18:03:47 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 18:03:47 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 18:03:47 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 18:03:47 compute-0 systemd[1]: Starting Ceph crash.compute-0 for c1caf3ba-b2a5-5005-a11e-e955c344dccc...
Dec  3 18:03:47 compute-0 podman[201298]: 2025-12-03 18:03:47.890187238 +0000 UTC m=+0.092468424 container health_status 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, com.redhat.component=ubi9-minimal-container, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., io.buildah.version=1.33.7, architecture=x86_64, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git)
Dec  3 18:03:47 compute-0 podman[201300]: 2025-12-03 18:03:47.922204615 +0000 UTC m=+0.118016027 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4)
Dec  3 18:03:47 compute-0 podman[201297]: 2025-12-03 18:03:47.946052886 +0000 UTC m=+0.145163007 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Dec  3 18:03:47 compute-0 podman[201299]: 2025-12-03 18:03:47.954084568 +0000 UTC m=+0.154944170 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  3 18:03:47 compute-0 podman[201302]: 2025-12-03 18:03:47.956195679 +0000 UTC m=+0.142482413 container health_status ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, maintainer=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, io.openshift.expose-services=, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, io.buildah.version=1.29.0, release-0.7.12=, vendor=Red Hat, Inc., container_name=kepler)
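
These four health_status events come from podman's built-in healthcheck machinery: each container's config_data carries a 'healthcheck' entry whose 'test' command (/openstack/healthcheck compute, ipmi, etc.) podman runs on a timer, recording the result as healthy/unhealthy plus a failing streak. A minimal sketch of reading that recorded state back, assuming only that podman is on PATH and the container names shown above:

    # Minimal sketch, not the edpm tooling itself: read back the health
    # state podman records when it runs the configured healthcheck test.
    import json
    import subprocess

    def health(name: str) -> str:
        raw = subprocess.run(["podman", "inspect", name],
                             capture_output=True, text=True,
                             check=True).stdout
        state = json.loads(raw)[0]["State"]
        # newer podman exposes the Docker-compatible "Health" key;
        # some older builds used "Healthcheck", so probe both
        hc = state.get("Health") or state.get("Healthcheck") or {}
        return hc.get("Status", "unknown")

    for name in ("ceilometer_agent_compute", "ceilometer_agent_ipmi",
                 "ovn_controller", "kepler"):
        print(name, health(name))
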
Dec  3 18:03:48 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Dec  3 18:03:48 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  3 18:03:48 compute-0 ceph-mon[192802]: from='client.? 192.168.122.100:0/1490154087' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Dec  3 18:03:48 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1490154087' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Dec  3 18:03:48 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Dec  3 18:03:48 compute-0 youthful_goldstine[201190]: set require_min_compat_client to mimic
Dec  3 18:03:48 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
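
The audit pair above records the bootstrap container youthful_goldstine pinning the oldest client release the cluster will accept: the mon_command maps one-to-one onto the CLI call `ceph osd set-require-min-compat-client mimic`, and the mon acknowledges it by cutting osdmap epoch 3 (still 0 total, 0 up, 0 in, since no OSDs exist yet).
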
Dec  3 18:03:48 compute-0 systemd[1]: libpod-205c146bac61454b90850fa238ac96c2f7b18e8d90c65412f2423e2bdec7ff8a.scope: Deactivated successfully.
Dec  3 18:03:48 compute-0 podman[201163]: 2025-12-03 18:03:48.059788159 +0000 UTC m=+1.436099903 container died 205c146bac61454b90850fa238ac96c2f7b18e8d90c65412f2423e2bdec7ff8a (image=quay.io/ceph/ceph:v18, name=youthful_goldstine, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:03:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-7dbc651f06efe9986b0ad80fcf8008ea65242fc3a78642fcfd464048cfdb6027-merged.mount: Deactivated successfully.
Dec  3 18:03:48 compute-0 podman[201163]: 2025-12-03 18:03:48.125577204 +0000 UTC m=+1.501888918 container remove 205c146bac61454b90850fa238ac96c2f7b18e8d90c65412f2423e2bdec7ff8a (image=quay.io/ceph/ceph:v18, name=youthful_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec  3 18:03:48 compute-0 systemd[1]: libpod-conmon-205c146bac61454b90850fa238ac96c2f7b18e8d90c65412f2423e2bdec7ff8a.scope: Deactivated successfully.
Dec  3 18:03:48 compute-0 podman[201438]: 2025-12-03 18:03:48.144202059 +0000 UTC m=+0.077357002 container create 9a4822c45260ccca819b757d64e3eb6d4e4470cb568b30c579b2888988f6564b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-crash-compute-0, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:03:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af6936e6f389584e745751a221f502ec693885281fda3b6e01fb7544504e387c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af6936e6f389584e745751a221f502ec693885281fda3b6e01fb7544504e387c/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af6936e6f389584e745751a221f502ec693885281fda3b6e01fb7544504e387c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af6936e6f389584e745751a221f502ec693885281fda3b6e01fb7544504e387c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
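
The recurring xfs notices are informational: these overlay mounts lack the xfs bigtime feature, so their inode timestamps are 32-bit and run out at 0x7fffffff seconds past the Unix epoch. Worked out:

    # 0x7fffffff is the classic signed 32-bit time_t limit
    from datetime import datetime, timezone
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00
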
Dec  3 18:03:48 compute-0 podman[201438]: 2025-12-03 18:03:48.115022531 +0000 UTC m=+0.048177554 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:03:48 compute-0 podman[201438]: 2025-12-03 18:03:48.231301774 +0000 UTC m=+0.164456747 container init 9a4822c45260ccca819b757d64e3eb6d4e4470cb568b30c579b2888988f6564b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-crash-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:03:48 compute-0 podman[201438]: 2025-12-03 18:03:48.254677184 +0000 UTC m=+0.187832127 container start 9a4822c45260ccca819b757d64e3eb6d4e4470cb568b30c579b2888988f6564b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-crash-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:03:48 compute-0 bash[201438]: 9a4822c45260ccca819b757d64e3eb6d4e4470cb568b30c579b2888988f6564b
Dec  3 18:03:48 compute-0 systemd[1]: Started Ceph crash.compute-0 for c1caf3ba-b2a5-5005-a11e-e955c344dccc.
Dec  3 18:03:48 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:03:48 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:48 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:03:48 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:48 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Dec  3 18:03:48 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:48 compute-0 ceph-mgr[193091]: [progress INFO root] complete: finished ev d5df6586-fd5d-4e96-b9f1-fec5cf744c0b (Updating crash deployment (+1 -> 1))
Dec  3 18:03:48 compute-0 ceph-mgr[193091]: [progress INFO root] Completed event d5df6586-fd5d-4e96-b9f1-fec5cf744c0b (Updating crash deployment (+1 -> 1)) in 2 seconds
Dec  3 18:03:48 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Dec  3 18:03:48 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:48 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 946a35ff-d036-4f5a-b167-3cb0fcf70be3 does not exist
Dec  3 18:03:48 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Dec  3 18:03:48 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:48 compute-0 ceph-mgr[193091]: [progress INFO root] update: starting ev c1548785-793b-403d-9242-729b73f22af9 (Updating mgr deployment (+1 -> 2))
Dec  3 18:03:48 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.kvdphx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Dec  3 18:03:48 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.kvdphx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec  3 18:03:48 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.kvdphx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec  3 18:03:48 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Dec  3 18:03:48 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "mgr services"}]: dispatch
Dec  3 18:03:48 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:03:48 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:03:48 compute-0 ceph-mgr[193091]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-0.kvdphx on compute-0
Dec  3 18:03:48 compute-0 ceph-mgr[193091]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-0.kvdphx on compute-0
Dec  3 18:03:48 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-crash-compute-0[201465]: INFO:ceph-crash:pinging cluster to exercise our key
Dec  3 18:03:48 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-crash-compute-0[201465]: 2025-12-03T18:03:48.665+0000 7fbca16a7640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec  3 18:03:48 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-crash-compute-0[201465]: 2025-12-03T18:03:48.665+0000 7fbca16a7640 -1 AuthRegistry(0x7fbc9c067440) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec  3 18:03:48 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-crash-compute-0[201465]: 2025-12-03T18:03:48.669+0000 7fbca16a7640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec  3 18:03:48 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-crash-compute-0[201465]: 2025-12-03T18:03:48.670+0000 7fbca16a7640 -1 AuthRegistry(0x7fbca16a6000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec  3 18:03:48 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-crash-compute-0[201465]: 2025-12-03T18:03:48.673+0000 7fbc9affd640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Dec  3 18:03:48 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-crash-compute-0[201465]: 2025-12-03T18:03:48.673+0000 7fbca16a7640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Dec  3 18:03:48 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-crash-compute-0[201465]: [errno 13] RADOS permission denied (error connecting to the cluster)
Dec  3 18:03:48 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-crash-compute-0[201465]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
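
The crash agent's startup noise decomposes cleanly: it probes the four default keyring paths and finds none (its own key is mounted as ceph.client.crash.compute-0.keyring, not at any probed path, per the xfs remount lines above), so it disables cephx; the mon then rejects it because the server only allows auth method 2 (cephx) while the client can now offer only method 1 (none), yielding the errno 13 connection failure. The agent tolerates the failed ping and drops into its watch loop anyway. A minimal sketch of that loop, with the path and delay taken from the log and everything else an assumption rather than the actual ceph-crash source:

    # Sketch of the ceph-crash watch loop implied by the log line above.
    # The client name and the "posted" convention are assumptions.
    import os
    import subprocess
    import time

    CRASH_DIR = "/var/lib/ceph/crash"   # "monitoring path" from the log
    DELAY = 600                          # "delay 600s" from the log

    while True:
        for entry in os.listdir(CRASH_DIR):
            if entry == "posted":
                continue                 # already reported in a prior pass
            meta = os.path.join(CRASH_DIR, entry, "meta")
            if os.path.isfile(meta):
                # post the crash metadata to the cluster; a real run needs
                # a readable keyring, which the ping above lacked
                subprocess.run(["ceph", "-n", "client.crash.compute-0",
                                "crash", "post", "-i", meta], check=False)
        time.sleep(DELAY)
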
Dec  3 18:03:48 compute-0 podman[201570]: 2025-12-03 18:03:48.810947672 +0000 UTC m=+0.121422067 container health_status f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
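
This health_status event shows node_exporter running with a deliberately trimmed collector set: most default collectors are switched off via --no-collector.* flags, the systemd collector is restricted to units matching (edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service, and the scrape endpoint is published on host port 9100. A quick way to eyeball what survives the filtering, assuming plain HTTP (the mounted web.config.file may enable TLS, in which case https plus the CA bundle applies instead):

    # Probe the node_exporter scrape endpoint from the host; port 9100
    # comes from the 'ports' entry above, HTTP-vs-HTTPS is an assumption.
    from urllib.request import urlopen

    text = urlopen("http://localhost:9100/metrics", timeout=5).read().decode()
    systemd_lines = [l for l in text.splitlines()
                     if l.startswith("node_systemd_unit_state")]
    print(len(systemd_lines), "systemd unit-state samples")
    print("\n".join(systemd_lines[:5]))
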
Dec  3 18:03:48 compute-0 python3[201582]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c1caf3ba-b2a5-5005-a11e-e955c344dccc -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:03:48 compute-0 ceph-mgr[193091]: [progress INFO root] Writing back 1 completed events
Dec  3 18:03:48 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Dec  3 18:03:48 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:48 compute-0 podman[201631]: 2025-12-03 18:03:48.895152078 +0000 UTC m=+0.061375140 container create 844c7a2c71ff13cf3a875dccc18dcb5836873e0e2304cba53d52b965bfb1d5d9 (image=quay.io/ceph/ceph:v18, name=modest_stonebraker, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:03:48 compute-0 systemd[1]: Started libpod-conmon-844c7a2c71ff13cf3a875dccc18dcb5836873e0e2304cba53d52b965bfb1d5d9.scope.
Dec  3 18:03:48 compute-0 podman[201631]: 2025-12-03 18:03:48.870388746 +0000 UTC m=+0.036611818 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:03:48 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:03:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca3b4911a36d4f11256c243ba489fd12639248cb33c348ec27d24e67107a3c1e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca3b4911a36d4f11256c243ba489fd12639248cb33c348ec27d24e67107a3c1e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca3b4911a36d4f11256c243ba489fd12639248cb33c348ec27d24e67107a3c1e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:48 compute-0 podman[201631]: 2025-12-03 18:03:48.999153978 +0000 UTC m=+0.165377060 container init 844c7a2c71ff13cf3a875dccc18dcb5836873e0e2304cba53d52b965bfb1d5d9 (image=quay.io/ceph/ceph:v18, name=modest_stonebraker, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:03:49 compute-0 podman[201631]: 2025-12-03 18:03:49.012484937 +0000 UTC m=+0.178707999 container start 844c7a2c71ff13cf3a875dccc18dcb5836873e0e2304cba53d52b965bfb1d5d9 (image=quay.io/ceph/ceph:v18, name=modest_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:03:49 compute-0 podman[201631]: 2025-12-03 18:03:49.017375305 +0000 UTC m=+0.183598357 container attach 844c7a2c71ff13cf3a875dccc18dcb5836873e0e2304cba53d52b965bfb1d5d9 (image=quay.io/ceph/ceph:v18, name=modest_stonebraker, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:03:49 compute-0 ceph-mon[192802]: from='client.? 192.168.122.100:0/1490154087' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Dec  3 18:03:49 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:49 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:49 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:49 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:49 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:49 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.kvdphx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec  3 18:03:49 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.kvdphx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec  3 18:03:49 compute-0 ceph-mon[192802]: Deploying daemon mgr.compute-0.kvdphx on compute-0
Dec  3 18:03:49 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:49 compute-0 podman[201685]: 2025-12-03 18:03:49.176219547 +0000 UTC m=+0.053669666 container create f1cab38d28f966da90c8454d941495019cbf2cfe23e1e29b97c010a9782ef0f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_williams, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:03:49 compute-0 systemd[1]: Started libpod-conmon-f1cab38d28f966da90c8454d941495019cbf2cfe23e1e29b97c010a9782ef0f3.scope.
Dec  3 18:03:49 compute-0 podman[201685]: 2025-12-03 18:03:49.154082827 +0000 UTC m=+0.031532976 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:03:49 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:03:49 compute-0 podman[201685]: 2025-12-03 18:03:49.2878392 +0000 UTC m=+0.165289349 container init f1cab38d28f966da90c8454d941495019cbf2cfe23e1e29b97c010a9782ef0f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_williams, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:03:49 compute-0 podman[201685]: 2025-12-03 18:03:49.294862708 +0000 UTC m=+0.172312827 container start f1cab38d28f966da90c8454d941495019cbf2cfe23e1e29b97c010a9782ef0f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_williams, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec  3 18:03:49 compute-0 podman[201685]: 2025-12-03 18:03:49.299846206 +0000 UTC m=+0.177296325 container attach f1cab38d28f966da90c8454d941495019cbf2cfe23e1e29b97c010a9782ef0f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:03:49 compute-0 zealous_williams[201701]: 167 167
Dec  3 18:03:49 compute-0 systemd[1]: libpod-f1cab38d28f966da90c8454d941495019cbf2cfe23e1e29b97c010a9782ef0f3.scope: Deactivated successfully.
Dec  3 18:03:49 compute-0 conmon[201701]: conmon f1cab38d28f966da90c8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f1cab38d28f966da90c8454d941495019cbf2cfe23e1e29b97c010a9782ef0f3.scope/container/memory.events
Dec  3 18:03:49 compute-0 podman[201685]: 2025-12-03 18:03:49.303703289 +0000 UTC m=+0.181153408 container died f1cab38d28f966da90c8454d941495019cbf2cfe23e1e29b97c010a9782ef0f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_williams, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  3 18:03:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-6bf93908f9492cbf469fb1d5bee93e6a6b88836776b11fd3e1222cc21e619ce6-merged.mount: Deactivated successfully.
Dec  3 18:03:49 compute-0 podman[201685]: 2025-12-03 18:03:49.360006097 +0000 UTC m=+0.237456236 container remove f1cab38d28f966da90c8454d941495019cbf2cfe23e1e29b97c010a9782ef0f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_williams, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 18:03:49 compute-0 systemd[1]: libpod-conmon-f1cab38d28f966da90c8454d941495019cbf2cfe23e1e29b97c010a9782ef0f3.scope: Deactivated successfully.
Dec  3 18:03:49 compute-0 systemd[1]: Reloading.
Dec  3 18:03:49 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 18:03:49 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 18:03:49 compute-0 ceph-mgr[193091]: log_channel(audit) log [DBG] : from='client.14182 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 18:03:49 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 18:03:49 compute-0 systemd[1]: Reloading.
Dec  3 18:03:50 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 18:03:50 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 18:03:50 compute-0 systemd[1]: Starting Ceph mgr.compute-0.kvdphx for c1caf3ba-b2a5-5005-a11e-e955c344dccc...
Dec  3 18:03:50 compute-0 podman[201964]: 2025-12-03 18:03:50.627022701 +0000 UTC m=+0.080766484 container create a1776c69354ab8151fa481c971f46175675865434073eac1c2e6ce8243e71b0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-kvdphx, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:03:50 compute-0 podman[201964]: 2025-12-03 18:03:50.59190052 +0000 UTC m=+0.045644383 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:03:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6153b4941b61625517cc5aedf51c5948ec858dfd96a2cd0b5c106d02a4ce0ac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6153b4941b61625517cc5aedf51c5948ec858dfd96a2cd0b5c106d02a4ce0ac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6153b4941b61625517cc5aedf51c5948ec858dfd96a2cd0b5c106d02a4ce0ac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6153b4941b61625517cc5aedf51c5948ec858dfd96a2cd0b5c106d02a4ce0ac/merged/var/lib/ceph/mgr/ceph-compute-0.kvdphx supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:50 compute-0 podman[201964]: 2025-12-03 18:03:50.731869051 +0000 UTC m=+0.185612854 container init a1776c69354ab8151fa481c971f46175675865434073eac1c2e6ce8243e71b0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-kvdphx, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Dec  3 18:03:50 compute-0 podman[201964]: 2025-12-03 18:03:50.745988299 +0000 UTC m=+0.199732072 container start a1776c69354ab8151fa481c971f46175675865434073eac1c2e6ce8243e71b0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-kvdphx, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec  3 18:03:50 compute-0 bash[201964]: a1776c69354ab8151fa481c971f46175675865434073eac1c2e6ce8243e71b0c
Dec  3 18:03:50 compute-0 systemd[1]: Started Ceph mgr.compute-0.kvdphx for c1caf3ba-b2a5-5005-a11e-e955c344dccc.
Dec  3 18:03:50 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Dec  3 18:03:50 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:50 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Dec  3 18:03:50 compute-0 ceph-mgr[202000]: set uid:gid to 167:167 (ceph:ceph)
Dec  3 18:03:50 compute-0 ceph-mgr[202000]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Dec  3 18:03:50 compute-0 ceph-mgr[202000]: pidfile_write: ignore empty --pid-file
Dec  3 18:03:50 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:50 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:03:50 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Dec  3 18:03:50 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:50 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:03:50 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:50 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Dec  3 18:03:50 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:50 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Dec  3 18:03:50 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:50 compute-0 ceph-mgr[193091]: [cephadm INFO root] Added host compute-0
Dec  3 18:03:50 compute-0 ceph-mgr[193091]: log_channel(cephadm) log [INF] : Added host compute-0
Dec  3 18:03:50 compute-0 ceph-mgr[193091]: [cephadm INFO root] Saving service mon spec with placement compute-0
Dec  3 18:03:50 compute-0 ceph-mgr[193091]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0
Dec  3 18:03:50 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Dec  3 18:03:50 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:50 compute-0 ceph-mgr[193091]: [progress INFO root] complete: finished ev c1548785-793b-403d-9242-729b73f22af9 (Updating mgr deployment (+1 -> 2))
Dec  3 18:03:50 compute-0 ceph-mgr[193091]: [progress INFO root] Completed event c1548785-793b-403d-9242-729b73f22af9 (Updating mgr deployment (+1 -> 2)) in 2 seconds
Dec  3 18:03:50 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Dec  3 18:03:50 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:50 compute-0 ceph-mgr[193091]: [cephadm INFO root] Saving service mgr spec with placement compute-0
Dec  3 18:03:50 compute-0 ceph-mgr[193091]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0
Dec  3 18:03:50 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Dec  3 18:03:50 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:50 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:50 compute-0 ceph-mgr[193091]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Dec  3 18:03:50 compute-0 ceph-mgr[193091]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Dec  3 18:03:50 compute-0 ceph-mgr[193091]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0
Dec  3 18:03:50 compute-0 ceph-mgr[193091]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0
Dec  3 18:03:50 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0) v1
Dec  3 18:03:50 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:50 compute-0 modest_stonebraker[201658]: Added host 'compute-0' with addr '192.168.122.100'
Dec  3 18:03:50 compute-0 modest_stonebraker[201658]: Scheduled mon update...
Dec  3 18:03:50 compute-0 modest_stonebraker[201658]: Scheduled mgr update...
Dec  3 18:03:50 compute-0 modest_stonebraker[201658]: Scheduled osd.default_drive_group update...
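
The four modest_stonebraker lines are the stdout of the `ceph orch apply --in-file /home/ceph_spec.yaml` call launched by the ansible task above: cephadm registers the host, then schedules mon, mgr and osd.default_drive_group reconciliation. The spec file itself is not reproduced in this log; a hypothetical minimal spec consistent with that output could look like:

    service_type: host
    hostname: compute-0
    addr: 192.168.122.100
    ---
    service_type: mon
    placement:
      hosts:
        - compute-0
    ---
    service_type: mgr
    placement:
      hosts:
        - compute-0
    ---
    service_type: osd
    service_id: default_drive_group
    placement:
      hosts:
        - compute-0
    spec:
      data_devices:
        all: true
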
Dec  3 18:03:50 compute-0 systemd[1]: libpod-844c7a2c71ff13cf3a875dccc18dcb5836873e0e2304cba53d52b965bfb1d5d9.scope: Deactivated successfully.
Dec  3 18:03:50 compute-0 podman[201631]: 2025-12-03 18:03:50.930683871 +0000 UTC m=+2.096906923 container died 844c7a2c71ff13cf3a875dccc18dcb5836873e0e2304cba53d52b965bfb1d5d9 (image=quay.io/ceph/ceph:v18, name=modest_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:03:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca3b4911a36d4f11256c243ba489fd12639248cb33c348ec27d24e67107a3c1e-merged.mount: Deactivated successfully.
Dec  3 18:03:50 compute-0 ceph-mgr[202000]: mgr[py] Loading python module 'alerts'
Dec  3 18:03:51 compute-0 podman[201631]: 2025-12-03 18:03:51.00287332 +0000 UTC m=+2.169096372 container remove 844c7a2c71ff13cf3a875dccc18dcb5836873e0e2304cba53d52b965bfb1d5d9 (image=quay.io/ceph/ceph:v18, name=modest_stonebraker, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:03:51 compute-0 systemd[1]: libpod-conmon-844c7a2c71ff13cf3a875dccc18dcb5836873e0e2304cba53d52b965bfb1d5d9.scope: Deactivated successfully.
Dec  3 18:03:51 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:51 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:51 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:51 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:51 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:51 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:51 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:51 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:51 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:51 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:51 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:51 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:03:51 compute-0 ceph-mgr[202000]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  3 18:03:51 compute-0 ceph-mgr[202000]: mgr[py] Loading python module 'balancer'
Dec  3 18:03:51 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-kvdphx[201988]: 2025-12-03T18:03:51.302+0000 7f8b69d51140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  3 18:03:51 compute-0 ceph-mgr[202000]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  3 18:03:51 compute-0 ceph-mgr[202000]: mgr[py] Loading python module 'cephadm'
Dec  3 18:03:51 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-kvdphx[201988]: 2025-12-03T18:03:51.562+0000 7f8b69d51140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  3 18:03:51 compute-0 python3[202190]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c1caf3ba-b2a5-5005-a11e-e955c344dccc -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:03:51 compute-0 podman[202217]: 2025-12-03 18:03:51.684357044 +0000 UTC m=+0.092732350 container create b494d4128d1d2f708d403021b3f2f036051c6b492fa736cf7b2c2a2930773d7e (image=quay.io/ceph/ceph:v18, name=sad_hermann, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec  3 18:03:51 compute-0 systemd[1]: Started libpod-conmon-b494d4128d1d2f708d403021b3f2f036051c6b492fa736cf7b2c2a2930773d7e.scope.
Dec  3 18:03:51 compute-0 podman[202217]: 2025-12-03 18:03:51.650704899 +0000 UTC m=+0.059080195 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:03:51 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 18:03:51 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:03:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/506fe47ea107babe288bdd8c5f198360d9456c43a0f9445e9c2c54825977e371/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/506fe47ea107babe288bdd8c5f198360d9456c43a0f9445e9c2c54825977e371/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/506fe47ea107babe288bdd8c5f198360d9456c43a0f9445e9c2c54825977e371/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:51 compute-0 podman[202217]: 2025-12-03 18:03:51.816583261 +0000 UTC m=+0.224958547 container init b494d4128d1d2f708d403021b3f2f036051c6b492fa736cf7b2c2a2930773d7e (image=quay.io/ceph/ceph:v18, name=sad_hermann, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Dec  3 18:03:51 compute-0 podman[202217]: 2025-12-03 18:03:51.839627722 +0000 UTC m=+0.248002988 container start b494d4128d1d2f708d403021b3f2f036051c6b492fa736cf7b2c2a2930773d7e (image=quay.io/ceph/ceph:v18, name=sad_hermann, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  3 18:03:51 compute-0 podman[202217]: 2025-12-03 18:03:51.844600401 +0000 UTC m=+0.252975667 container attach b494d4128d1d2f708d403021b3f2f036051c6b492fa736cf7b2c2a2930773d7e (image=quay.io/ceph/ceph:v18, name=sad_hermann, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Dec  3 18:03:52 compute-0 ceph-mon[192802]: Added host compute-0
Dec  3 18:03:52 compute-0 ceph-mon[192802]: Saving service mon spec with placement compute-0
Dec  3 18:03:52 compute-0 ceph-mon[192802]: Saving service mgr spec with placement compute-0
Dec  3 18:03:52 compute-0 ceph-mon[192802]: Marking host: compute-0 for OSDSpec preview refresh.
Dec  3 18:03:52 compute-0 ceph-mon[192802]: Saving service osd.default_drive_group spec with placement compute-0
Dec  3 18:03:52 compute-0 podman[202301]: 2025-12-03 18:03:52.281184244 +0000 UTC m=+0.092828374 container exec c4418ca0ee5df95c133db330bc8714b98e7c86be83b29540d0d4d94c3c723743 (image=quay.io/ceph/ceph:v18, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mon-compute-0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:03:52 compute-0 podman[202301]: 2025-12-03 18:03:52.414149797 +0000 UTC m=+0.225793947 container exec_died c4418ca0ee5df95c133db330bc8714b98e7c86be83b29540d0d4d94c3c723743 (image=quay.io/ceph/ceph:v18, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mon-compute-0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  3 18:03:52 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Dec  3 18:03:52 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3571865891' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec  3 18:03:52 compute-0 sad_hermann[202250]: 
Dec  3 18:03:52 compute-0 sad_hermann[202250]: {"fsid":"c1caf3ba-b2a5-5005-a11e-e955c344dccc","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":86,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-12-03T18:02:22.594284+0000","services":{}},"progress_events":{}}
Dec  3 18:03:52 compute-0 systemd[1]: libpod-b494d4128d1d2f708d403021b3f2f036051c6b492fa736cf7b2c2a2930773d7e.scope: Deactivated successfully.
Dec  3 18:03:52 compute-0 podman[202217]: 2025-12-03 18:03:52.524687204 +0000 UTC m=+0.933062500 container died b494d4128d1d2f708d403021b3f2f036051c6b492fa736cf7b2c2a2930773d7e (image=quay.io/ceph/ceph:v18, name=sad_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:03:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-506fe47ea107babe288bdd8c5f198360d9456c43a0f9445e9c2c54825977e371-merged.mount: Deactivated successfully.
Dec  3 18:03:52 compute-0 podman[202217]: 2025-12-03 18:03:52.634623395 +0000 UTC m=+1.042998651 container remove b494d4128d1d2f708d403021b3f2f036051c6b492fa736cf7b2c2a2930773d7e (image=quay.io/ceph/ceph:v18, name=sad_hermann, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:03:52 compute-0 systemd[1]: libpod-conmon-b494d4128d1d2f708d403021b3f2f036051c6b492fa736cf7b2c2a2930773d7e.scope: Deactivated successfully.
Dec  3 18:03:52 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:03:52 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:52 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:03:52 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:52 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:03:52 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:52 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:03:52 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:52 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:03:52 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:03:52 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 18:03:52 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:03:52 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 18:03:52 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:52 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 56ac4a41-92a5-40a4-b080-0b73da802985 does not exist
Dec  3 18:03:52 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Dec  3 18:03:52 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:52 compute-0 ceph-mgr[193091]: [progress INFO root] update: starting ev 32791f1d-cbc4-4181-97e4-4e3e0b04ec5a (Updating mgr deployment (-1 -> 1))
Dec  3 18:03:52 compute-0 ceph-mgr[193091]: [cephadm INFO cephadm.serve] Removing daemon mgr.compute-0.kvdphx from compute-0 -- ports [8765]
Dec  3 18:03:52 compute-0 ceph-mgr[193091]: log_channel(cephadm) log [INF] : Removing daemon mgr.compute-0.kvdphx from compute-0 -- ports [8765]
Dec  3 18:03:53 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:53 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:53 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:53 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:53 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:03:53 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:53 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:53 compute-0 systemd[1]: Stopping Ceph mgr.compute-0.kvdphx for c1caf3ba-b2a5-5005-a11e-e955c344dccc...
Dec  3 18:03:53 compute-0 ceph-mgr[202000]: mgr[py] Loading python module 'crash'
Dec  3 18:03:53 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 18:03:53 compute-0 podman[202591]: 2025-12-03 18:03:53.809951204 +0000 UTC m=+0.083765696 container died a1776c69354ab8151fa481c971f46175675865434073eac1c2e6ce8243e71b0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-kvdphx, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec  3 18:03:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-d6153b4941b61625517cc5aedf51c5948ec858dfd96a2cd0b5c106d02a4ce0ac-merged.mount: Deactivated successfully.
Dec  3 18:03:53 compute-0 podman[202591]: 2025-12-03 18:03:53.864815547 +0000 UTC m=+0.138630019 container remove a1776c69354ab8151fa481c971f46175675865434073eac1c2e6ce8243e71b0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-kvdphx, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:03:53 compute-0 bash[202591]: ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-kvdphx
Dec  3 18:03:53 compute-0 systemd[1]: ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc@mgr.compute-0.kvdphx.service: Main process exited, code=exited, status=143/n/a
Dec  3 18:03:53 compute-0 ceph-mgr[193091]: [progress INFO root] Writing back 2 completed events
Dec  3 18:03:53 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Dec  3 18:03:53 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:54 compute-0 systemd[1]: ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc@mgr.compute-0.kvdphx.service: Failed with result 'exit-code'.
Dec  3 18:03:54 compute-0 systemd[1]: Stopped Ceph mgr.compute-0.kvdphx for c1caf3ba-b2a5-5005-a11e-e955c344dccc.
Dec  3 18:03:54 compute-0 systemd[1]: ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc@mgr.compute-0.kvdphx.service: Consumed 4.313s CPU time.
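The `status=143/n/a` a few lines above is the conventional 128 + signal-number encoding: 143 - 128 = 15 = SIGTERM. systemd stopped the superseded mgr container with TERM, so the "Failed with result 'exit-code'" line is an artifact of the orderly cephadm-driven removal, not a daemon crash. The arithmetic, as a self-contained check:

import signal

status = 143
sig = status - 128               # 128 + N convention for death by signal
assert sig == signal.SIGTERM     # 15
print(signal.Signals(sig).name)  # SIGTERM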
Dec  3 18:03:54 compute-0 ceph-mon[192802]: Removing daemon mgr.compute-0.kvdphx from compute-0 -- ports [8765]
Dec  3 18:03:54 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:54 compute-0 systemd[1]: Reloading.
Dec  3 18:03:54 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 18:03:54 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 18:03:54 compute-0 ceph-mgr[193091]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.compute-0.kvdphx
Dec  3 18:03:54 compute-0 ceph-mgr[193091]: log_channel(cephadm) log [INF] : Removing key for mgr.compute-0.kvdphx
Dec  3 18:03:54 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.compute-0.kvdphx"} v 0) v1
Dec  3 18:03:54 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.kvdphx"}]: dispatch
Dec  3 18:03:54 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.kvdphx"}]': finished
Dec  3 18:03:54 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Dec  3 18:03:54 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:54 compute-0 ceph-mgr[193091]: [progress INFO root] complete: finished ev 32791f1d-cbc4-4181-97e4-4e3e0b04ec5a (Updating mgr deployment (-1 -> 1))
Dec  3 18:03:54 compute-0 ceph-mgr[193091]: [progress INFO root] Completed event 32791f1d-cbc4-4181-97e4-4e3e0b04ec5a (Updating mgr deployment (-1 -> 1)) in 2 seconds
Dec  3 18:03:54 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Dec  3 18:03:54 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:54 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 3c2188e8-54f8-42b6-b99f-66ad355660b9 does not exist
Dec  3 18:03:54 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 18:03:54 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 18:03:54 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 18:03:54 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:03:54 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:03:54 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:03:55 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.kvdphx"}]: dispatch
Dec  3 18:03:55 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.kvdphx"}]': finished
Dec  3 18:03:55 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:55 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:55 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:03:55 compute-0 podman[202824]: 2025-12-03 18:03:55.459800583 +0000 UTC m=+0.057995839 container create f9961b91db24d1b9cab546a741feb3df2ef37dd9dd060292e724de9db6067e1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_burnell, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:03:55 compute-0 systemd[1]: Started libpod-conmon-f9961b91db24d1b9cab546a741feb3df2ef37dd9dd060292e724de9db6067e1f.scope.
Dec  3 18:03:55 compute-0 podman[202824]: 2025-12-03 18:03:55.435406439 +0000 UTC m=+0.033601745 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:03:55 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:03:55 compute-0 podman[202824]: 2025-12-03 18:03:55.565497943 +0000 UTC m=+0.163693219 container init f9961b91db24d1b9cab546a741feb3df2ef37dd9dd060292e724de9db6067e1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_burnell, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:03:55 compute-0 podman[202824]: 2025-12-03 18:03:55.58166003 +0000 UTC m=+0.179855296 container start f9961b91db24d1b9cab546a741feb3df2ef37dd9dd060292e724de9db6067e1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:03:55 compute-0 podman[202824]: 2025-12-03 18:03:55.588104545 +0000 UTC m=+0.186299801 container attach f9961b91db24d1b9cab546a741feb3df2ef37dd9dd060292e724de9db6067e1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_burnell, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:03:55 compute-0 sweet_burnell[202839]: 167 167
Dec  3 18:03:55 compute-0 systemd[1]: libpod-f9961b91db24d1b9cab546a741feb3df2ef37dd9dd060292e724de9db6067e1f.scope: Deactivated successfully.
Dec  3 18:03:55 compute-0 podman[202824]: 2025-12-03 18:03:55.592899699 +0000 UTC m=+0.191094995 container died f9961b91db24d1b9cab546a741feb3df2ef37dd9dd060292e724de9db6067e1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_burnell, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:03:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-c14993c5748e270039942266d5d0ad8224cee98d2b8f88716ae8c8573d000326-merged.mount: Deactivated successfully.
Dec  3 18:03:55 compute-0 podman[202824]: 2025-12-03 18:03:55.667847704 +0000 UTC m=+0.266042990 container remove f9961b91db24d1b9cab546a741feb3df2ef37dd9dd060292e724de9db6067e1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_burnell, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  3 18:03:55 compute-0 systemd[1]: libpod-conmon-f9961b91db24d1b9cab546a741feb3df2ef37dd9dd060292e724de9db6067e1f.scope: Deactivated successfully.
Dec  3 18:03:55 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 18:03:55 compute-0 podman[202863]: 2025-12-03 18:03:55.923629687 +0000 UTC m=+0.069784902 container create 3797724428c0004c2212032ff22fe55c5ab425085a1966557a87bf24bd86924e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_lamport, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:03:55 compute-0 systemd[1]: Started libpod-conmon-3797724428c0004c2212032ff22fe55c5ab425085a1966557a87bf24bd86924e.scope.
Dec  3 18:03:55 compute-0 podman[202863]: 2025-12-03 18:03:55.891232062 +0000 UTC m=+0.037387347 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:03:56 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:03:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b6dda95217e9234def5a21e172e484ea902eef73de632d8cef72e2fee33dec4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b6dda95217e9234def5a21e172e484ea902eef73de632d8cef72e2fee33dec4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b6dda95217e9234def5a21e172e484ea902eef73de632d8cef72e2fee33dec4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b6dda95217e9234def5a21e172e484ea902eef73de632d8cef72e2fee33dec4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:03:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b6dda95217e9234def5a21e172e484ea902eef73de632d8cef72e2fee33dec4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
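The kernel notices above are informational: the XFS filesystem backing these overlay mounts still uses 32-bit inode timestamps, whose maximum value 0x7fffffff is the classic year-2038 limit. The conversion, for reference:

from datetime import datetime, timezone

limit = 0x7fffffff  # 2147483647, largest signed 32-bit epoch second
print(datetime.fromtimestamp(limit, tz=timezone.utc))
# 2038-01-19 03:14:07+00:00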
Dec  3 18:03:56 compute-0 podman[202863]: 2025-12-03 18:03:56.03029161 +0000 UTC m=+0.176446855 container init 3797724428c0004c2212032ff22fe55c5ab425085a1966557a87bf24bd86924e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:03:56 compute-0 podman[202863]: 2025-12-03 18:03:56.044288966 +0000 UTC m=+0.190444171 container start 3797724428c0004c2212032ff22fe55c5ab425085a1966557a87bf24bd86924e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_lamport, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:03:56 compute-0 podman[202863]: 2025-12-03 18:03:56.048958588 +0000 UTC m=+0.195113873 container attach 3797724428c0004c2212032ff22fe55c5ab425085a1966557a87bf24bd86924e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec  3 18:03:56 compute-0 ceph-mon[192802]: Removing key for mgr.compute-0.kvdphx
Dec  3 18:03:56 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:03:57 compute-0 confident_lamport[202879]: --> passed data devices: 0 physical, 3 LVM
Dec  3 18:03:57 compute-0 confident_lamport[202879]: --> relative data size: 1.0
Dec  3 18:03:57 compute-0 confident_lamport[202879]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  3 18:03:57 compute-0 confident_lamport[202879]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 973fbbc8-5aff-4a53-bee8-42e5a6788dd6
Dec  3 18:03:57 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6"} v 0) v1
Dec  3 18:03:57 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2554778254' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6"}]: dispatch
Dec  3 18:03:57 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Dec  3 18:03:57 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  3 18:03:57 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2554778254' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6"}]': finished
Dec  3 18:03:57 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Dec  3 18:03:57 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Dec  3 18:03:57 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec  3 18:03:57 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  3 18:03:57 compute-0 ceph-mgr[193091]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  3 18:03:57 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 18:03:57 compute-0 confident_lamport[202879]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  3 18:03:57 compute-0 confident_lamport[202879]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Dec  3 18:03:57 compute-0 lvm[202941]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  3 18:03:57 compute-0 lvm[202941]: VG ceph_vg0 finished
Dec  3 18:03:57 compute-0 confident_lamport[202879]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Dec  3 18:03:57 compute-0 confident_lamport[202879]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec  3 18:03:57 compute-0 confident_lamport[202879]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec  3 18:03:57 compute-0 confident_lamport[202879]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Dec  3 18:03:58 compute-0 ceph-mon[192802]: from='client.? 192.168.122.100:0/2554778254' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6"}]: dispatch
Dec  3 18:03:58 compute-0 ceph-mon[192802]: from='client.? 192.168.122.100:0/2554778254' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6"}]': finished
Dec  3 18:03:58 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Dec  3 18:03:58 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1540132778' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Dec  3 18:03:58 compute-0 confident_lamport[202879]: stderr: got monmap epoch 1
Dec  3 18:03:58 compute-0 confident_lamport[202879]: --> Creating keyring file for osd.0
Dec  3 18:03:58 compute-0 confident_lamport[202879]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Dec  3 18:03:58 compute-0 confident_lamport[202879]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Dec  3 18:03:58 compute-0 confident_lamport[202879]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 973fbbc8-5aff-4a53-bee8-42e5a6788dd6 --setuser ceph --setgroup ceph
Dec  3 18:03:58 compute-0 ceph-mon[192802]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Dec  3 18:03:58 compute-0 ceph-mon[192802]: log_channel(cluster) log [INF] : Cluster is now healthy
Dec  3 18:03:58 compute-0 ceph-mgr[193091]: [progress INFO root] Writing back 3 completed events
Dec  3 18:03:58 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Dec  3 18:03:58 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:59 compute-0 ceph-mon[192802]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Dec  3 18:03:59 compute-0 ceph-mon[192802]: Cluster is now healthy
Dec  3 18:03:59 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:03:59 compute-0 podman[158200]: time="2025-12-03T18:03:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:03:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:03:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 25442 "" "Go-http-client/1.1"
Dec  3 18:03:59 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 18:03:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:03:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4839 "" "Go-http-client/1.1"
Dec  3 18:04:00 compute-0 confident_lamport[202879]: stderr: 2025-12-03T18:03:58.480+0000 7f1bfde1f740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Dec  3 18:04:00 compute-0 confident_lamport[202879]: stderr: 2025-12-03T18:03:58.481+0000 7f1bfde1f740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Dec  3 18:04:00 compute-0 confident_lamport[202879]: stderr: 2025-12-03T18:03:58.481+0000 7f1bfde1f740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Dec  3 18:04:00 compute-0 confident_lamport[202879]: stderr: 2025-12-03T18:03:58.482+0000 7f1bfde1f740 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Dec  3 18:04:00 compute-0 confident_lamport[202879]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Dec  3 18:04:01 compute-0 confident_lamport[202879]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec  3 18:04:01 compute-0 confident_lamport[202879]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Dec  3 18:04:01 compute-0 confident_lamport[202879]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec  3 18:04:01 compute-0 confident_lamport[202879]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Dec  3 18:04:01 compute-0 confident_lamport[202879]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec  3 18:04:01 compute-0 confident_lamport[202879]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec  3 18:04:01 compute-0 confident_lamport[202879]: --> ceph-volume lvm activate successful for osd ID: 0
Dec  3 18:04:01 compute-0 confident_lamport[202879]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
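That completes the first `ceph-volume lvm create` run: key generation, `osd new` (which allocated id 0), tmpfs mount of the OSD directory, block symlink, monmap fetch, `ceph-osd --mkfs`, then `prime-osd-dir` and re-linking for activation. The `_read_bdev_label ... Malformed input` and `_read_fsid unparsable uuid` stderr lines in between are emitted while `--mkfs` probes a device that has no BlueStore label yet; since the run still ends in "prepare successful", they read as expected first-creation noise rather than errors (our inference, not stated in the log). When auditing such a run offline, the command sequence can be recovered from a saved journal excerpt with a small illustrative helper (the file name is hypothetical):

import re
import sys

CMD = re.compile(r"Running command: (.+)$")

# Print every command ceph-volume reports running, in order, from a
# journal excerpt saved to a file (e.g. `journalctl ... > compute-0.log`).
with open(sys.argv[1] if len(sys.argv) > 1 else "compute-0.log") as fh:
    for lineno, line in enumerate(fh, 1):
        m = CMD.search(line)
        if m:
            print(f"{lineno:6d}  {m.group(1)}")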
Dec  3 18:04:01 compute-0 confident_lamport[202879]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  3 18:04:01 compute-0 confident_lamport[202879]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 1e2b0083-5293-47cb-a3d1-bc27cedc4ede
Dec  3 18:04:01 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e4 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:04:01 compute-0 openstack_network_exporter[160319]: ERROR   18:04:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:04:01 compute-0 openstack_network_exporter[160319]: ERROR   18:04:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:04:01 compute-0 openstack_network_exporter[160319]: ERROR   18:04:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:04:01 compute-0 openstack_network_exporter[160319]: ERROR   18:04:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:04:01 compute-0 openstack_network_exporter[160319]: ERROR   18:04:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:04:01 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede"} v 0) v1
Dec  3 18:04:01 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3541784381' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede"}]: dispatch
Dec  3 18:04:01 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Dec  3 18:04:01 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  3 18:04:01 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3541784381' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede"}]': finished
Dec  3 18:04:01 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Dec  3 18:04:01 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Dec  3 18:04:01 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec  3 18:04:01 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  3 18:04:01 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec  3 18:04:01 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  3 18:04:01 compute-0 ceph-mgr[193091]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  3 18:04:01 compute-0 ceph-mgr[193091]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  3 18:04:01 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 18:04:01 compute-0 confident_lamport[202879]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  3 18:04:01 compute-0 lvm[203904]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  3 18:04:01 compute-0 lvm[203904]: VG ceph_vg1 finished
Dec  3 18:04:01 compute-0 confident_lamport[202879]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Dec  3 18:04:01 compute-0 confident_lamport[202879]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Dec  3 18:04:01 compute-0 confident_lamport[202879]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec  3 18:04:01 compute-0 confident_lamport[202879]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Dec  3 18:04:01 compute-0 confident_lamport[202879]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Dec  3 18:04:02 compute-0 ceph-mon[192802]: from='client.? 192.168.122.100:0/3541784381' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede"}]: dispatch
Dec  3 18:04:02 compute-0 ceph-mon[192802]: from='client.? 192.168.122.100:0/3541784381' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede"}]': finished
Dec  3 18:04:02 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Dec  3 18:04:02 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2254269707' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Dec  3 18:04:02 compute-0 confident_lamport[202879]: stderr: got monmap epoch 1
Dec  3 18:04:02 compute-0 confident_lamport[202879]: --> Creating keyring file for osd.1
Dec  3 18:04:02 compute-0 confident_lamport[202879]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Dec  3 18:04:02 compute-0 confident_lamport[202879]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Dec  3 18:04:02 compute-0 confident_lamport[202879]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 1e2b0083-5293-47cb-a3d1-bc27cedc4ede --setuser ceph --setgroup ceph
Dec  3 18:04:02 compute-0 podman[203960]: 2025-12-03 18:04:02.751385062 +0000 UTC m=+0.123691203 container health_status 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  3 18:04:03 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 18:04:05 compute-0 confident_lamport[202879]: stderr: 2025-12-03T18:04:02.568+0000 7efe607d4740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Dec  3 18:04:05 compute-0 confident_lamport[202879]: stderr: 2025-12-03T18:04:02.568+0000 7efe607d4740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Dec  3 18:04:05 compute-0 confident_lamport[202879]: stderr: 2025-12-03T18:04:02.568+0000 7efe607d4740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Dec  3 18:04:05 compute-0 confident_lamport[202879]: stderr: 2025-12-03T18:04:02.568+0000 7efe607d4740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Dec  3 18:04:05 compute-0 confident_lamport[202879]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Dec  3 18:04:05 compute-0 confident_lamport[202879]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec  3 18:04:05 compute-0 confident_lamport[202879]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Dec  3 18:04:05 compute-0 confident_lamport[202879]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Dec  3 18:04:05 compute-0 confident_lamport[202879]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Dec  3 18:04:05 compute-0 confident_lamport[202879]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec  3 18:04:05 compute-0 confident_lamport[202879]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec  3 18:04:05 compute-0 confident_lamport[202879]: --> ceph-volume lvm activate successful for osd ID: 1
Dec  3 18:04:05 compute-0 confident_lamport[202879]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Dec  3 18:04:05 compute-0 confident_lamport[202879]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  3 18:04:05 compute-0 confident_lamport[202879]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 2abec9de-afba-437e-9a17-384a1dd8cd50
Dec  3 18:04:05 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 18:04:05 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "2abec9de-afba-437e-9a17-384a1dd8cd50"} v 0) v1
Dec  3 18:04:05 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3962564461' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "2abec9de-afba-437e-9a17-384a1dd8cd50"}]: dispatch
Dec  3 18:04:05 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Dec  3 18:04:05 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  3 18:04:05 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3962564461' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "2abec9de-afba-437e-9a17-384a1dd8cd50"}]': finished
Dec  3 18:04:05 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e6 e6: 3 total, 0 up, 3 in
Dec  3 18:04:05 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e6: 3 total, 0 up, 3 in
Dec  3 18:04:05 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec  3 18:04:05 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  3 18:04:05 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec  3 18:04:05 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  3 18:04:05 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec  3 18:04:05 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  3 18:04:05 compute-0 ceph-mgr[193091]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  3 18:04:05 compute-0 ceph-mgr[193091]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  3 18:04:05 compute-0 ceph-mgr[193091]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  3 18:04:05 compute-0 lvm[204897]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  3 18:04:05 compute-0 lvm[204897]: VG ceph_vg2 finished
Dec  3 18:04:05 compute-0 confident_lamport[202879]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  3 18:04:06 compute-0 confident_lamport[202879]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Dec  3 18:04:06 compute-0 confident_lamport[202879]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg2/ceph_lv2
Dec  3 18:04:06 compute-0 confident_lamport[202879]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Dec  3 18:04:06 compute-0 confident_lamport[202879]: Running command: /usr/bin/ln -s /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Dec  3 18:04:06 compute-0 confident_lamport[202879]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Dec  3 18:04:06 compute-0 ceph-mon[192802]: from='client.? 192.168.122.100:0/3962564461' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "2abec9de-afba-437e-9a17-384a1dd8cd50"}]: dispatch
Dec  3 18:04:06 compute-0 ceph-mon[192802]: from='client.? 192.168.122.100:0/3962564461' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "2abec9de-afba-437e-9a17-384a1dd8cd50"}]': finished
Dec  3 18:04:06 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:04:06 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Dec  3 18:04:06 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3246948958' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Dec  3 18:04:06 compute-0 confident_lamport[202879]: stderr: got monmap epoch 1
Dec  3 18:04:06 compute-0 confident_lamport[202879]: --> Creating keyring file for osd.2
Dec  3 18:04:06 compute-0 confident_lamport[202879]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Dec  3 18:04:06 compute-0 confident_lamport[202879]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Dec  3 18:04:06 compute-0 confident_lamport[202879]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid 2abec9de-afba-437e-9a17-384a1dd8cd50 --setuser ceph --setgroup ceph
Dec  3 18:04:07 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 18:04:09 compute-0 confident_lamport[202879]: stderr: 2025-12-03T18:04:06.646+0000 7fd35ca46740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Dec  3 18:04:09 compute-0 confident_lamport[202879]: stderr: 2025-12-03T18:04:06.647+0000 7fd35ca46740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Dec  3 18:04:09 compute-0 confident_lamport[202879]: stderr: 2025-12-03T18:04:06.647+0000 7fd35ca46740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Dec  3 18:04:09 compute-0 confident_lamport[202879]: stderr: 2025-12-03T18:04:06.648+0000 7fd35ca46740 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Dec  3 18:04:09 compute-0 confident_lamport[202879]: --> ceph-volume lvm prepare successful for: ceph_vg2/ceph_lv2
Dec  3 18:04:09 compute-0 confident_lamport[202879]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec  3 18:04:09 compute-0 confident_lamport[202879]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Dec  3 18:04:09 compute-0 confident_lamport[202879]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Dec  3 18:04:09 compute-0 confident_lamport[202879]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Dec  3 18:04:09 compute-0 confident_lamport[202879]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Dec  3 18:04:09 compute-0 confident_lamport[202879]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec  3 18:04:09 compute-0 confident_lamport[202879]: --> ceph-volume lvm activate successful for osd ID: 2
Dec  3 18:04:09 compute-0 confident_lamport[202879]: --> ceph-volume lvm create successful for: ceph_vg2/ceph_lv2
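All three LVM-backed OSDs (ceph_vg0/ceph_lv0 through ceph_vg2/ceph_lv2) are now prepared and activated, and the monitor's osdmap has advanced e4 -> e5 -> e6, i.e. "1 total" -> "2 total" -> "3 total", still "0 up" because the ceph-osd daemons have not started yet, which is consistent with the mgr's repeated "failed to return metadata" lines. A companion sketch to the helper above, tracking that progression from the same kind of saved excerpt:

import re
import sys

OSDMAP = re.compile(r"osdmap e(\d+): (\d+) total, (\d+) up, (\d+) in")

# Report each osdmap transition logged by the monitor.
with open(sys.argv[1] if len(sys.argv) > 1 else "compute-0.log") as fh:
    for line in fh:
        m = OSDMAP.search(line)
        if m:
            epoch, total, up, inn = map(int, m.groups())
            print(f"e{epoch}: total={total} up={up} in={inn}")
# Expected here: e4 -> 1/0/1, e5 -> 2/0/2, e6 -> 3/0/3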
Dec  3 18:04:09 compute-0 systemd[1]: libpod-3797724428c0004c2212032ff22fe55c5ab425085a1966557a87bf24bd86924e.scope: Deactivated successfully.
Dec  3 18:04:09 compute-0 systemd[1]: libpod-3797724428c0004c2212032ff22fe55c5ab425085a1966557a87bf24bd86924e.scope: Consumed 7.716s CPU time.
Dec  3 18:04:09 compute-0 podman[205832]: 2025-12-03 18:04:09.499694404 +0000 UTC m=+0.052491568 container died 3797724428c0004c2212032ff22fe55c5ab425085a1966557a87bf24bd86924e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_lamport, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:04:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b6dda95217e9234def5a21e172e484ea902eef73de632d8cef72e2fee33dec4-merged.mount: Deactivated successfully.
Dec  3 18:04:09 compute-0 podman[205832]: 2025-12-03 18:04:09.608853347 +0000 UTC m=+0.161650481 container remove 3797724428c0004c2212032ff22fe55c5ab425085a1966557a87bf24bd86924e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:04:09 compute-0 systemd[1]: libpod-conmon-3797724428c0004c2212032ff22fe55c5ab425085a1966557a87bf24bd86924e.scope: Deactivated successfully.
Dec  3 18:04:09 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 18:04:10 compute-0 podman[205983]: 2025-12-03 18:04:10.559501837 +0000 UTC m=+0.050912220 container create 350b61ad09803cf9ebc6718e22b00579c642650d8c8583e7f04a97723d631ff3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_roentgen, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:04:10 compute-0 systemd[1]: Started libpod-conmon-350b61ad09803cf9ebc6718e22b00579c642650d8c8583e7f04a97723d631ff3.scope.
Dec  3 18:04:10 compute-0 podman[205983]: 2025-12-03 18:04:10.539752134 +0000 UTC m=+0.031162587 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:04:10 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:04:10 compute-0 podman[205983]: 2025-12-03 18:04:10.676889937 +0000 UTC m=+0.168300340 container init 350b61ad09803cf9ebc6718e22b00579c642650d8c8583e7f04a97723d631ff3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_roentgen, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 18:04:10 compute-0 podman[205983]: 2025-12-03 18:04:10.689494629 +0000 UTC m=+0.180905022 container start 350b61ad09803cf9ebc6718e22b00579c642650d8c8583e7f04a97723d631ff3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_roentgen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:04:10 compute-0 cranky_roentgen[205999]: 167 167
Dec  3 18:04:10 compute-0 systemd[1]: libpod-350b61ad09803cf9ebc6718e22b00579c642650d8c8583e7f04a97723d631ff3.scope: Deactivated successfully.
Dec  3 18:04:10 compute-0 podman[205983]: 2025-12-03 18:04:10.72421174 +0000 UTC m=+0.215622173 container attach 350b61ad09803cf9ebc6718e22b00579c642650d8c8583e7f04a97723d631ff3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_roentgen, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:04:10 compute-0 podman[205983]: 2025-12-03 18:04:10.725225145 +0000 UTC m=+0.216635538 container died 350b61ad09803cf9ebc6718e22b00579c642650d8c8583e7f04a97723d631ff3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_roentgen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:04:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b71b92ea972758ebb9b4cc5268276481b622367c0ce524ac73e0e02deec922f-merged.mount: Deactivated successfully.
Dec  3 18:04:10 compute-0 podman[205983]: 2025-12-03 18:04:10.78517089 +0000 UTC m=+0.276581283 container remove 350b61ad09803cf9ebc6718e22b00579c642650d8c8583e7f04a97723d631ff3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_roentgen, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:04:10 compute-0 systemd[1]: libpod-conmon-350b61ad09803cf9ebc6718e22b00579c642650d8c8583e7f04a97723d631ff3.scope: Deactivated successfully.
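cranky_roentgen exits after printing only "167 167": the uid and gid of the ceph account inside the image, which cephadm needs to know before it chowns host paths. A sketch of that probe; the exact path cephadm stats is an assumption here:

    # Ask the image which uid:gid owns /var/lib/ceph inside it (assumed probe target)
    podman run --rm --entrypoint stat \
      quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 \
      -c '%u %g' /var/lib/ceph
    # -> 167 167, matching the ceph:ceph chown calls elsewhere in this log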
Dec  3 18:04:11 compute-0 podman[206024]: 2025-12-03 18:04:11.023782172 +0000 UTC m=+0.076067231 container create 2c39400aa4c72179c88a29ca6402685468fd007d4c540bc7af696d0140cfe1cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_gould, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:04:11 compute-0 systemd[1]: Started libpod-conmon-2c39400aa4c72179c88a29ca6402685468fd007d4c540bc7af696d0140cfe1cf.scope.
Dec  3 18:04:11 compute-0 podman[206024]: 2025-12-03 18:04:10.990157507 +0000 UTC m=+0.042442626 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:04:11 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:04:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c22861dcb8ed1dbfcbb3d11e7138cadd4975535b75d202679bfba3548f8694f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c22861dcb8ed1dbfcbb3d11e7138cadd4975535b75d202679bfba3548f8694f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c22861dcb8ed1dbfcbb3d11e7138cadd4975535b75d202679bfba3548f8694f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c22861dcb8ed1dbfcbb3d11e7138cadd4975535b75d202679bfba3548f8694f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
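The xfs notices above are informational: the overlay's backing filesystem stores 32-bit inode timestamps (good until 2038, 0x7fffffff), and the kernel repeats the message for each bind remount into a container. Whether an xfs filesystem carries the 64-bit bigtime feature can be checked with xfs_info (findmnt is used here only to resolve the mount point containing the storage path):

    # bigtime=1 means 64-bit timestamps (no 2038 limit); bigtime=0 matches these notices
    xfs_info "$(findmnt -n -o TARGET -T /var/lib/containers)" | grep -o 'bigtime=[01]'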
Dec  3 18:04:11 compute-0 podman[206024]: 2025-12-03 18:04:11.169516221 +0000 UTC m=+0.221801260 container init 2c39400aa4c72179c88a29ca6402685468fd007d4c540bc7af696d0140cfe1cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_gould, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec  3 18:04:11 compute-0 podman[206024]: 2025-12-03 18:04:11.180624917 +0000 UTC m=+0.232909936 container start 2c39400aa4c72179c88a29ca6402685468fd007d4c540bc7af696d0140cfe1cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_gould, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec  3 18:04:11 compute-0 podman[206024]: 2025-12-03 18:04:11.184717775 +0000 UTC m=+0.237002804 container attach 2c39400aa4c72179c88a29ca6402685468fd007d4c540bc7af696d0140cfe1cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_gould, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:04:11 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:04:11 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 18:04:11 compute-0 kind_gould[206040]: {
Dec  3 18:04:11 compute-0 kind_gould[206040]:    "0": [
Dec  3 18:04:11 compute-0 kind_gould[206040]:        {
Dec  3 18:04:11 compute-0 kind_gould[206040]:            "devices": [
Dec  3 18:04:11 compute-0 kind_gould[206040]:                "/dev/loop3"
Dec  3 18:04:11 compute-0 kind_gould[206040]:            ],
Dec  3 18:04:11 compute-0 kind_gould[206040]:            "lv_name": "ceph_lv0",
Dec  3 18:04:11 compute-0 kind_gould[206040]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:04:11 compute-0 kind_gould[206040]:            "lv_size": "21470642176",
Dec  3 18:04:11 compute-0 kind_gould[206040]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=973fbbc8-5aff-4a53-bee8-42e5a6788dd6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:04:11 compute-0 kind_gould[206040]:            "lv_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:04:11 compute-0 kind_gould[206040]:            "name": "ceph_lv0",
Dec  3 18:04:11 compute-0 kind_gould[206040]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:04:11 compute-0 kind_gould[206040]:            "tags": {
Dec  3 18:04:11 compute-0 kind_gould[206040]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:04:11 compute-0 kind_gould[206040]:                "ceph.block_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:04:11 compute-0 kind_gould[206040]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:04:11 compute-0 kind_gould[206040]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:04:11 compute-0 kind_gould[206040]:                "ceph.cluster_name": "ceph",
Dec  3 18:04:11 compute-0 kind_gould[206040]:                "ceph.crush_device_class": "",
Dec  3 18:04:11 compute-0 kind_gould[206040]:                "ceph.encrypted": "0",
Dec  3 18:04:11 compute-0 kind_gould[206040]:                "ceph.osd_fsid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:04:11 compute-0 kind_gould[206040]:                "ceph.osd_id": "0",
Dec  3 18:04:11 compute-0 kind_gould[206040]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:04:11 compute-0 kind_gould[206040]:                "ceph.type": "block",
Dec  3 18:04:11 compute-0 kind_gould[206040]:                "ceph.vdo": "0"
Dec  3 18:04:11 compute-0 kind_gould[206040]:            },
Dec  3 18:04:11 compute-0 kind_gould[206040]:            "type": "block",
Dec  3 18:04:11 compute-0 kind_gould[206040]:            "vg_name": "ceph_vg0"
Dec  3 18:04:11 compute-0 kind_gould[206040]:        }
Dec  3 18:04:11 compute-0 kind_gould[206040]:    ],
Dec  3 18:04:11 compute-0 kind_gould[206040]:    "1": [
Dec  3 18:04:11 compute-0 kind_gould[206040]:        {
Dec  3 18:04:11 compute-0 kind_gould[206040]:            "devices": [
Dec  3 18:04:11 compute-0 kind_gould[206040]:                "/dev/loop4"
Dec  3 18:04:11 compute-0 kind_gould[206040]:            ],
Dec  3 18:04:11 compute-0 kind_gould[206040]:            "lv_name": "ceph_lv1",
Dec  3 18:04:11 compute-0 kind_gould[206040]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:04:11 compute-0 kind_gould[206040]:            "lv_size": "21470642176",
Dec  3 18:04:11 compute-0 kind_gould[206040]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1e2b0083-5293-47cb-a3d1-bc27cedc4ede,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:04:11 compute-0 kind_gould[206040]:            "lv_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:04:11 compute-0 kind_gould[206040]:            "name": "ceph_lv1",
Dec  3 18:04:11 compute-0 kind_gould[206040]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:04:11 compute-0 kind_gould[206040]:            "tags": {
Dec  3 18:04:11 compute-0 kind_gould[206040]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:04:11 compute-0 kind_gould[206040]:                "ceph.block_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:04:11 compute-0 kind_gould[206040]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:04:11 compute-0 kind_gould[206040]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:04:11 compute-0 kind_gould[206040]:                "ceph.cluster_name": "ceph",
Dec  3 18:04:11 compute-0 kind_gould[206040]:                "ceph.crush_device_class": "",
Dec  3 18:04:11 compute-0 kind_gould[206040]:                "ceph.encrypted": "0",
Dec  3 18:04:11 compute-0 kind_gould[206040]:                "ceph.osd_fsid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:04:11 compute-0 kind_gould[206040]:                "ceph.osd_id": "1",
Dec  3 18:04:11 compute-0 kind_gould[206040]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:04:11 compute-0 kind_gould[206040]:                "ceph.type": "block",
Dec  3 18:04:11 compute-0 kind_gould[206040]:                "ceph.vdo": "0"
Dec  3 18:04:11 compute-0 kind_gould[206040]:            },
Dec  3 18:04:11 compute-0 kind_gould[206040]:            "type": "block",
Dec  3 18:04:11 compute-0 kind_gould[206040]:            "vg_name": "ceph_vg1"
Dec  3 18:04:11 compute-0 kind_gould[206040]:        }
Dec  3 18:04:11 compute-0 kind_gould[206040]:    ],
Dec  3 18:04:11 compute-0 kind_gould[206040]:    "2": [
Dec  3 18:04:11 compute-0 kind_gould[206040]:        {
Dec  3 18:04:11 compute-0 kind_gould[206040]:            "devices": [
Dec  3 18:04:11 compute-0 kind_gould[206040]:                "/dev/loop5"
Dec  3 18:04:11 compute-0 kind_gould[206040]:            ],
Dec  3 18:04:11 compute-0 kind_gould[206040]:            "lv_name": "ceph_lv2",
Dec  3 18:04:11 compute-0 kind_gould[206040]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:04:11 compute-0 kind_gould[206040]:            "lv_size": "21470642176",
Dec  3 18:04:11 compute-0 kind_gould[206040]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2abec9de-afba-437e-9a17-384a1dd8cd50,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:04:11 compute-0 kind_gould[206040]:            "lv_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:04:11 compute-0 kind_gould[206040]:            "name": "ceph_lv2",
Dec  3 18:04:11 compute-0 kind_gould[206040]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:04:11 compute-0 kind_gould[206040]:            "tags": {
Dec  3 18:04:11 compute-0 kind_gould[206040]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:04:11 compute-0 kind_gould[206040]:                "ceph.block_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:04:11 compute-0 kind_gould[206040]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:04:11 compute-0 kind_gould[206040]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:04:11 compute-0 kind_gould[206040]:                "ceph.cluster_name": "ceph",
Dec  3 18:04:11 compute-0 kind_gould[206040]:                "ceph.crush_device_class": "",
Dec  3 18:04:11 compute-0 kind_gould[206040]:                "ceph.encrypted": "0",
Dec  3 18:04:11 compute-0 kind_gould[206040]:                "ceph.osd_fsid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:04:11 compute-0 kind_gould[206040]:                "ceph.osd_id": "2",
Dec  3 18:04:11 compute-0 kind_gould[206040]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:04:11 compute-0 kind_gould[206040]:                "ceph.type": "block",
Dec  3 18:04:11 compute-0 kind_gould[206040]:                "ceph.vdo": "0"
Dec  3 18:04:11 compute-0 kind_gould[206040]:            },
Dec  3 18:04:11 compute-0 kind_gould[206040]:            "type": "block",
Dec  3 18:04:11 compute-0 kind_gould[206040]:            "vg_name": "ceph_vg2"
Dec  3 18:04:11 compute-0 kind_gould[206040]:        }
Dec  3 18:04:11 compute-0 kind_gould[206040]:    ]
Dec  3 18:04:11 compute-0 kind_gould[206040]: }
Dec  3 18:04:11 compute-0 systemd[1]: libpod-2c39400aa4c72179c88a29ca6402685468fd007d4c540bc7af696d0140cfe1cf.scope: Deactivated successfully.
Dec  3 18:04:11 compute-0 podman[206024]: 2025-12-03 18:04:11.957150599 +0000 UTC m=+1.009435628 container died 2c39400aa4c72179c88a29ca6402685468fd007d4c540bc7af696d0140cfe1cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:04:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c22861dcb8ed1dbfcbb3d11e7138cadd4975535b75d202679bfba3548f8694f-merged.mount: Deactivated successfully.
Dec  3 18:04:12 compute-0 podman[206024]: 2025-12-03 18:04:12.084125568 +0000 UTC m=+1.136410597 container remove 2c39400aa4c72179c88a29ca6402685468fd007d4c540bc7af696d0140cfe1cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_gould, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  3 18:04:12 compute-0 systemd[1]: libpod-conmon-2c39400aa4c72179c88a29ca6402685468fd007d4c540bc7af696d0140cfe1cf.scope: Deactivated successfully.
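The JSON printed by kind_gould is the output of ceph-volume lvm list --format json, keyed by OSD id; it is how cephadm discovers the three freshly prepared OSDs. A quick one-liner to summarize it (jq is an assumption, any JSON tool works):

    # osd id, osd fsid, backing device -- from the same listing shown above
    ceph-volume lvm list --format json |
      jq -r 'to_entries[] | "\(.key) \(.value[0].tags["ceph.osd_fsid"]) \(.value[0].devices[0])"'
    # 0 973fbbc8-5aff-4a53-bee8-42e5a6788dd6 /dev/loop3
    # 1 1e2b0083-5293-47cb-a3d1-bc27cedc4ede /dev/loop4
    # 2 2abec9de-afba-437e-9a17-384a1dd8cd50 /dev/loop5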
Dec  3 18:04:12 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) v1
Dec  3 18:04:12 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Dec  3 18:04:12 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:04:12 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:04:12 compute-0 ceph-mgr[193091]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Dec  3 18:04:12 compute-0 ceph-mgr[193091]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Dec  3 18:04:12 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Dec  3 18:04:12 compute-0 ceph-mon[192802]: Deploying daemon osd.0 on compute-0
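Before deploying osd.0, the mgr asks the mon for exactly two things, visible in the audit lines above: the daemon's keyring and a minimal config to ship into the container. The same data is available from the CLI:

    # What cephadm fetches for each new daemon
    ceph auth get osd.0                 # keyring (and caps) for the OSD
    ceph config generate-minimal-conf   # minimal ceph.conf: fsid + mon host list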
Dec  3 18:04:13 compute-0 podman[206199]: 2025-12-03 18:04:13.006866239 +0000 UTC m=+0.075360984 container create 99ccc589542ad96bc9c2a26fd5c72df0e464cc87fda55c83675e95af6ea9a5e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_bohr, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec  3 18:04:13 compute-0 systemd[1]: Started libpod-conmon-99ccc589542ad96bc9c2a26fd5c72df0e464cc87fda55c83675e95af6ea9a5e2.scope.
Dec  3 18:04:13 compute-0 podman[206199]: 2025-12-03 18:04:12.962718823 +0000 UTC m=+0.031213648 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:04:13 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:04:13 compute-0 podman[206199]: 2025-12-03 18:04:13.112364196 +0000 UTC m=+0.180858961 container init 99ccc589542ad96bc9c2a26fd5c72df0e464cc87fda55c83675e95af6ea9a5e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_bohr, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:04:13 compute-0 podman[206199]: 2025-12-03 18:04:13.123909852 +0000 UTC m=+0.192404577 container start 99ccc589542ad96bc9c2a26fd5c72df0e464cc87fda55c83675e95af6ea9a5e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_bohr, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  3 18:04:13 compute-0 podman[206199]: 2025-12-03 18:04:13.128902541 +0000 UTC m=+0.197397316 container attach 99ccc589542ad96bc9c2a26fd5c72df0e464cc87fda55c83675e95af6ea9a5e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_bohr, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:04:13 compute-0 great_bohr[206215]: 167 167
Dec  3 18:04:13 compute-0 systemd[1]: libpod-99ccc589542ad96bc9c2a26fd5c72df0e464cc87fda55c83675e95af6ea9a5e2.scope: Deactivated successfully.
Dec  3 18:04:13 compute-0 podman[206199]: 2025-12-03 18:04:13.132387185 +0000 UTC m=+0.200882000 container died 99ccc589542ad96bc9c2a26fd5c72df0e464cc87fda55c83675e95af6ea9a5e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:04:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-f9f7ab3cd48f386925580fd4c08dbda97a8e358b8eca7b9577d78829749ccb74-merged.mount: Deactivated successfully.
Dec  3 18:04:13 compute-0 podman[206199]: 2025-12-03 18:04:13.212411081 +0000 UTC m=+0.280905836 container remove 99ccc589542ad96bc9c2a26fd5c72df0e464cc87fda55c83675e95af6ea9a5e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS)
Dec  3 18:04:13 compute-0 systemd[1]: libpod-conmon-99ccc589542ad96bc9c2a26fd5c72df0e464cc87fda55c83675e95af6ea9a5e2.scope: Deactivated successfully.
Dec  3 18:04:13 compute-0 podman[206246]: 2025-12-03 18:04:13.580199616 +0000 UTC m=+0.077172878 container create c405807933efa98ac383f640160acd7e52dd194c23aa242a281aef781de4dcd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-0-activate-test, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  3 18:04:13 compute-0 podman[206246]: 2025-12-03 18:04:13.553898936 +0000 UTC m=+0.050872228 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:04:13 compute-0 systemd[1]: Started libpod-conmon-c405807933efa98ac383f640160acd7e52dd194c23aa242a281aef781de4dcd6.scope.
Dec  3 18:04:13 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:04:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2e69787847818fd90a7bb59a018780ed8d89ef927eec6e7aa4d62cacf0e68b4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2e69787847818fd90a7bb59a018780ed8d89ef927eec6e7aa4d62cacf0e68b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2e69787847818fd90a7bb59a018780ed8d89ef927eec6e7aa4d62cacf0e68b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2e69787847818fd90a7bb59a018780ed8d89ef927eec6e7aa4d62cacf0e68b4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2e69787847818fd90a7bb59a018780ed8d89ef927eec6e7aa4d62cacf0e68b4/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:13 compute-0 podman[206246]: 2025-12-03 18:04:13.703693203 +0000 UTC m=+0.200666485 container init c405807933efa98ac383f640160acd7e52dd194c23aa242a281aef781de4dcd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-0-activate-test, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True)
Dec  3 18:04:13 compute-0 podman[206246]: 2025-12-03 18:04:13.720810672 +0000 UTC m=+0.217783974 container start c405807933efa98ac383f640160acd7e52dd194c23aa242a281aef781de4dcd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-0-activate-test, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec  3 18:04:13 compute-0 podman[206246]: 2025-12-03 18:04:13.728067176 +0000 UTC m=+0.225040788 container attach c405807933efa98ac383f640160acd7e52dd194c23aa242a281aef781de4dcd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-0-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:04:13 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 18:04:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_18:04:13
Dec  3 18:04:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 18:04:13 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 18:04:13 compute-0 ceph-mgr[193091]: [balancer INFO root] No pools available
Dec  3 18:04:13 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 18:04:13 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 18:04:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:04:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:04:13 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 18:04:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:04:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:04:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:04:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:04:14 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-0-activate-test[206261]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Dec  3 18:04:14 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-0-activate-test[206261]:                            [--no-systemd] [--no-tmpfs]
Dec  3 18:04:14 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-0-activate-test[206261]: ceph-volume activate: error: unrecognized arguments: --bad-option
Dec  3 18:04:14 compute-0 systemd[1]: libpod-c405807933efa98ac383f640160acd7e52dd194c23aa242a281aef781de4dcd6.scope: Deactivated successfully.
Dec  3 18:04:14 compute-0 podman[206246]: 2025-12-03 18:04:14.383136279 +0000 UTC m=+0.880109551 container died c405807933efa98ac383f640160acd7e52dd194c23aa242a281aef781de4dcd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-0-activate-test, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  3 18:04:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-e2e69787847818fd90a7bb59a018780ed8d89ef927eec6e7aa4d62cacf0e68b4-merged.mount: Deactivated successfully.
Dec  3 18:04:14 compute-0 podman[206246]: 2025-12-03 18:04:14.452883189 +0000 UTC m=+0.949856451 container remove c405807933efa98ac383f640160acd7e52dd194c23aa242a281aef781de4dcd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-0-activate-test, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Dec  3 18:04:14 compute-0 systemd[1]: libpod-conmon-c405807933efa98ac383f640160acd7e52dd194c23aa242a281aef781de4dcd6.scope: Deactivated successfully.
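The osd-0-activate-test container fails by design, as the "-test" suffix suggests: passing --bad-option makes ceph-volume print its activate usage text, and the presence of flags such as --no-tmpfs in that text evidently tells cephadm which calling convention this ceph-volume build supports. The probe can be reproduced by hand:

    # Force a usage error and inspect the advertised flags (non-zero exit is expected)
    ceph-volume activate --bad-option 2>&1 | grep -q -- '--no-tmpfs' \
      && echo 'activate supports --no-tmpfs'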
Dec  3 18:04:14 compute-0 systemd[1]: Reloading.
Dec  3 18:04:15 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 18:04:15 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 18:04:15 compute-0 systemd[1]: Reloading.
Dec  3 18:04:15 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 18:04:15 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
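Both daemon reloads log the same two generator notes, and both are benign: rc.local is skipped because the file is not executable, and the legacy network initscript gets an auto-generated compat unit. If rc.local is actually wanted, marking it executable is the fix the generator hints at:

    # Let systemd-rc-local-generator pick up rc.local again
    chmod +x /etc/rc.d/rc.local
    systemctl daemon-reload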
Dec  3 18:04:15 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 18:04:15 compute-0 systemd[1]: Starting Ceph osd.0 for c1caf3ba-b2a5-5005-a11e-e955c344dccc...
Dec  3 18:04:16 compute-0 podman[206414]: 2025-12-03 18:04:16.253509588 +0000 UTC m=+0.063963563 container create 3219936f109ad5feca36088abcd12ac77d6a346ebf1e96de0a2f58dfa99beadc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-0-activate, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:04:16 compute-0 podman[206414]: 2025-12-03 18:04:16.22729833 +0000 UTC m=+0.037752345 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:04:16 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:04:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a10eb3ec1c2a8d45af61044b79748c07e1e1c3c8b4241f8c35f37ef16d2d529b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a10eb3ec1c2a8d45af61044b79748c07e1e1c3c8b4241f8c35f37ef16d2d529b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a10eb3ec1c2a8d45af61044b79748c07e1e1c3c8b4241f8c35f37ef16d2d529b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a10eb3ec1c2a8d45af61044b79748c07e1e1c3c8b4241f8c35f37ef16d2d529b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a10eb3ec1c2a8d45af61044b79748c07e1e1c3c8b4241f8c35f37ef16d2d529b/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:16 compute-0 podman[206414]: 2025-12-03 18:04:16.360786107 +0000 UTC m=+0.171240082 container init 3219936f109ad5feca36088abcd12ac77d6a346ebf1e96de0a2f58dfa99beadc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-0-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec  3 18:04:16 compute-0 podman[206414]: 2025-12-03 18:04:16.396958143 +0000 UTC m=+0.207412098 container start 3219936f109ad5feca36088abcd12ac77d6a346ebf1e96de0a2f58dfa99beadc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-0-activate, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:04:16 compute-0 podman[206414]: 2025-12-03 18:04:16.40228919 +0000 UTC m=+0.212743145 container attach 3219936f109ad5feca36088abcd12ac77d6a346ebf1e96de0a2f58dfa99beadc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-0-activate, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:04:16 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:04:17 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-0-activate[206428]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec  3 18:04:17 compute-0 bash[206414]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec  3 18:04:17 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-0-activate[206428]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Dec  3 18:04:17 compute-0 bash[206414]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Dec  3 18:04:17 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-0-activate[206428]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Dec  3 18:04:17 compute-0 bash[206414]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Dec  3 18:04:17 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-0-activate[206428]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec  3 18:04:17 compute-0 bash[206414]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec  3 18:04:17 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-0-activate[206428]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec  3 18:04:17 compute-0 bash[206414]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec  3 18:04:17 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-0-activate[206428]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec  3 18:04:17 compute-0 bash[206414]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec  3 18:04:17 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-0-activate[206428]: --> ceph-volume raw activate successful for osd ID: 0
Dec  3 18:04:17 compute-0 bash[206414]: --> ceph-volume raw activate successful for osd ID: 0
Dec  3 18:04:17 compute-0 systemd[1]: libpod-3219936f109ad5feca36088abcd12ac77d6a346ebf1e96de0a2f58dfa99beadc.scope: Deactivated successfully.
Dec  3 18:04:17 compute-0 systemd[1]: libpod-3219936f109ad5feca36088abcd12ac77d6a346ebf1e96de0a2f58dfa99beadc.scope: Consumed 1.289s CPU time.
Dec  3 18:04:17 compute-0 podman[206414]: 2025-12-03 18:04:17.669340814 +0000 UTC m=+1.479794779 container died 3219936f109ad5feca36088abcd12ac77d6a346ebf1e96de0a2f58dfa99beadc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-0-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  3 18:04:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-a10eb3ec1c2a8d45af61044b79748c07e1e1c3c8b4241f8c35f37ef16d2d529b-merged.mount: Deactivated successfully.
Dec  3 18:04:17 compute-0 podman[206414]: 2025-12-03 18:04:17.736959294 +0000 UTC m=+1.547413249 container remove 3219936f109ad5feca36088abcd12ac77d6a346ebf1e96de0a2f58dfa99beadc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-0-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
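Inside the osd-0-activate container, ceph-volume performs a raw activate: prime-osd-dir populates /var/lib/ceph/osd/ceph-0 from the BlueStore label, the block symlink is created, and ownership is fixed, all without systemd (the host unit supervises the daemon). Matching the usage text printed by the activate-test earlier, the effective call looks like:

    # osd id and fsid as reported by the lvm listing earlier in this log
    ceph-volume activate --osd-id 0 \
      --osd-uuid 973fbbc8-5aff-4a53-bee8-42e5a6788dd6 \
      --no-systemd --no-tmpfs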
Dec  3 18:04:17 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 18:04:18 compute-0 podman[206625]: 2025-12-03 18:04:18.056991925 +0000 UTC m=+0.062407774 container create 3a6bbdaae9a72961c5b8f347529df1e3b3d612802770aeb1c69618d2321fd5b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  3 18:04:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cc84377fec523f7e4cb8d484dd5441c8cf54ba2b734a3f188a032fffaaed975/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cc84377fec523f7e4cb8d484dd5441c8cf54ba2b734a3f188a032fffaaed975/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cc84377fec523f7e4cb8d484dd5441c8cf54ba2b734a3f188a032fffaaed975/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cc84377fec523f7e4cb8d484dd5441c8cf54ba2b734a3f188a032fffaaed975/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cc84377fec523f7e4cb8d484dd5441c8cf54ba2b734a3f188a032fffaaed975/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:18 compute-0 podman[206625]: 2025-12-03 18:04:18.024992439 +0000 UTC m=+0.030408308 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:04:18 compute-0 podman[206625]: 2025-12-03 18:04:18.143729382 +0000 UTC m=+0.149145281 container init 3a6bbdaae9a72961c5b8f347529df1e3b3d612802770aeb1c69618d2321fd5b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec  3 18:04:18 compute-0 podman[206625]: 2025-12-03 18:04:18.16661305 +0000 UTC m=+0.172028899 container start 3a6bbdaae9a72961c5b8f347529df1e3b3d612802770aeb1c69618d2321fd5b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec  3 18:04:18 compute-0 bash[206625]: 3a6bbdaae9a72961c5b8f347529df1e3b3d612802770aeb1c69618d2321fd5b7
Dec  3 18:04:18 compute-0 systemd[1]: Started Ceph osd.0 for c1caf3ba-b2a5-5005-a11e-e955c344dccc.
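[annotation] The create/init/start trio from podman[206625], the container ID echoed by bash, and the systemd "Started Ceph osd.0" line are one unit activation. A minimal sketch for following the same lifecycle events live; the container name is taken from the log, while podman's JSON event stream and its field names ("Status", "ID") are assumptions about the installed podman, not something the log confirms:

```python
import json
import subprocess

# Follow podman's event stream for the OSD container seen above
# (create -> init -> start). Field names are assumed, not from the log.
name = "ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-0"
proc = subprocess.Popen(
    ["podman", "events", "--format", "json", "--filter", f"container={name}"],
    stdout=subprocess.PIPE, text=True)
for line in proc.stdout:
    ev = json.loads(line)
    print(ev.get("Status"), str(ev.get("ID", ""))[:12])
```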
Dec  3 18:04:18 compute-0 podman[206637]: 2025-12-03 18:04:18.203534814 +0000 UTC m=+0.099853041 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  3 18:04:18 compute-0 ceph-osd[206694]: set uid:gid to 167:167 (ceph:ceph)
Dec  3 18:04:18 compute-0 ceph-osd[206694]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Dec  3 18:04:18 compute-0 ceph-osd[206694]: pidfile_write: ignore empty --pid-file
Dec  3 18:04:18 compute-0 podman[206641]: 2025-12-03 18:04:18.221085774 +0000 UTC m=+0.110510527 container health_status 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., config_id=edpm, io.buildah.version=1.33.7, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, version=9.6, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, vcs-type=git, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec  3 18:04:18 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:04:18 compute-0 ceph-osd[206694]: bdev(0x55fc15007800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec  3 18:04:18 compute-0 ceph-osd[206694]: bdev(0x55fc15007800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec  3 18:04:18 compute-0 ceph-osd[206694]: bdev(0x55fc15007800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 18:04:18 compute-0 ceph-osd[206694]: bdev(0x55fc15007800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 18:04:18 compute-0 ceph-osd[206694]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
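[annotation] Two of the numbers above can be cross-checked. The bdev open line reports the same size in hex and decimal (rounded to "20 GiB"), and _set_cache_sizes splits the 1 GiB cache by the logged ratios; the kv share reappears later as the RocksDB block_cache capacity. A sketch of both calculations, using only values from the log:

```python
# bdev open size: hex and decimal agree; "20 GiB" is the rounded value.
size = 0x4FFC00000
assert size == 21470642176
print(size / 2**30)  # 19.996... GiB

# BlueStore cache split: ratios from the _set_cache_sizes line.
cache_size = 1073741824  # 1 GiB
shares = {"meta": 0.45, "kv": 0.45, "kv_onode": 0.04, "data": 0.06}
split = {k: int(cache_size * v) for k, v in shares.items()}
print(split["kv"])  # 483183820 -- the block_cache capacity reported in
                    # the RocksDB table_factory options further down
```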
Dec  3 18:04:18 compute-0 ceph-osd[206694]: bdev(0x55fc15e49800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec  3 18:04:18 compute-0 ceph-osd[206694]: bdev(0x55fc15e49800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec  3 18:04:18 compute-0 ceph-osd[206694]: bdev(0x55fc15e49800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 18:04:18 compute-0 ceph-osd[206694]: bdev(0x55fc15e49800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 18:04:18 compute-0 ceph-osd[206694]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Dec  3 18:04:18 compute-0 ceph-osd[206694]: bdev(0x55fc15e49800 /var/lib/ceph/osd/ceph-0/block) close
Dec  3 18:04:18 compute-0 podman[206657]: 2025-12-03 18:04:18.240925609 +0000 UTC m=+0.100516947 container health_status ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, config_id=edpm, maintainer=Red Hat, Inc., version=9.4, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., container_name=kepler, io.openshift.expose-services=, release-0.7.12=, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, managed_by=edpm_ansible, com.redhat.component=ubi9-container, distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, release=1214.1726694543)
Dec  3 18:04:18 compute-0 podman[206650]: 2025-12-03 18:04:18.242652411 +0000 UTC m=+0.119213495 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  3 18:04:18 compute-0 podman[206643]: 2025-12-03 18:04:18.247060016 +0000 UTC m=+0.127149205 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
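[annotation] The health_status events above (ceilometer_agent_ipmi, openstack_network_exporter, kepler, ceilometer_agent_compute, ovn_controller) embed their config_data as a Python dict literal, single quotes and True/False included, so it can be parsed back programmatically. A sketch using ast.literal_eval on a trimmed sample from the kepler event; the parsing approach is an assumption, only the sample values come from the log:

```python
import ast

# config_data in the health_status events is printed as a Python literal;
# this sample is trimmed from the kepler line above.
config_data = ("{'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', "
               "'ports': ['8888:8888'], 'net': 'host'}")
cfg = ast.literal_eval(config_data)
print(cfg["image"], cfg["ports"])  # image reference and port mapping
```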
Dec  3 18:04:18 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:04:18 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:04:18 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:04:18 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) v1
Dec  3 18:04:18 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Dec  3 18:04:18 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:04:18 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:04:18 compute-0 ceph-mgr[193091]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Dec  3 18:04:18 compute-0 ceph-mgr[193091]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Dec  3 18:04:18 compute-0 ceph-osd[206694]: bdev(0x55fc15007800 /var/lib/ceph/osd/ceph-0/block) close
Dec  3 18:04:18 compute-0 ceph-osd[206694]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Dec  3 18:04:18 compute-0 ceph-osd[206694]: load: jerasure load: lrc 
Dec  3 18:04:18 compute-0 ceph-osd[206694]: bdev(0x55fc151d0c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec  3 18:04:18 compute-0 ceph-osd[206694]: bdev(0x55fc151d0c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec  3 18:04:18 compute-0 ceph-osd[206694]: bdev(0x55fc151d0c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 18:04:18 compute-0 ceph-osd[206694]: bdev(0x55fc151d0c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 18:04:18 compute-0 ceph-osd[206694]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  3 18:04:18 compute-0 ceph-osd[206694]: bdev(0x55fc151d0c00 /var/lib/ceph/osd/ceph-0/block) close
Dec  3 18:04:19 compute-0 ceph-osd[206694]: bdev(0x55fc151d0c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec  3 18:04:19 compute-0 ceph-osd[206694]: bdev(0x55fc151d0c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec  3 18:04:19 compute-0 ceph-osd[206694]: bdev(0x55fc151d0c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 18:04:19 compute-0 ceph-osd[206694]: bdev(0x55fc151d0c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 18:04:19 compute-0 ceph-osd[206694]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  3 18:04:19 compute-0 ceph-osd[206694]: bdev(0x55fc151d0c00 /var/lib/ceph/osd/ceph-0/block) close
Dec  3 18:04:19 compute-0 ceph-osd[206694]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Dec  3 18:04:19 compute-0 ceph-osd[206694]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
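[annotation] The mClock numbers above are consistent with the reef defaults for rotational devices. A hedged reconstruction, assuming osd_mclock_max_sequential_bandwidth_hdd = 150 MiB/s and osd_mclock_max_capacity_iops_hdd = 315 (both are defaults; neither value is printed in the log):

```python
# Assumed reef defaults for a rotational OSD (not printed in the log):
bandwidth = 150 * 2**20  # osd_mclock_max_sequential_bandwidth_hdd -> 157286400 B/s
iops = 315               # osd_mclock_max_capacity_iops_hdd
print(round(bandwidth / iops, 2))  # 499321.9 bytes/io, the logged cost
print(bandwidth)                   # 157286400 bytes/second per shard
```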
Dec  3 18:04:19 compute-0 ceph-osd[206694]: bdev(0x55fc151d0c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec  3 18:04:19 compute-0 ceph-osd[206694]: bdev(0x55fc151d0c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec  3 18:04:19 compute-0 ceph-osd[206694]: bdev(0x55fc151d0c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 18:04:19 compute-0 ceph-osd[206694]: bdev(0x55fc151d0c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 18:04:19 compute-0 ceph-osd[206694]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  3 18:04:19 compute-0 ceph-osd[206694]: bdev(0x55fc151d1400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec  3 18:04:19 compute-0 ceph-osd[206694]: bdev(0x55fc151d1400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec  3 18:04:19 compute-0 ceph-osd[206694]: bdev(0x55fc151d1400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 18:04:19 compute-0 ceph-osd[206694]: bdev(0x55fc151d1400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 18:04:19 compute-0 ceph-osd[206694]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Dec  3 18:04:19 compute-0 ceph-osd[206694]: bluefs mount
Dec  3 18:04:19 compute-0 ceph-osd[206694]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: bluefs mount shared_bdev_used = 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
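[annotation] The db_paths sizes also follow from the device size: 20397110067 is 95% of the 21470642176-byte block device, consistent with BlueStore keeping a 5% margin when pointing RocksDB's db and db.slow paths at the shared device (the 0.95 factor is an inference from the arithmetic, not stated in the log):

```python
# db / db.slow path size vs. the block device size logged earlier.
block_size = 21470642176
print(int(block_size * 0.95))  # 20397110067, the value in the db_paths line
```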
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: RocksDB version: 7.9.2
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Git sha 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Compile date 2025-05-06 23:30:25
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: DB SUMMARY
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: DB Session ID:  BF8AWTFOS2DCVHXDMY3C
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: CURRENT file:  CURRENT
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: IDENTITY file:  IDENTITY
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                         Options.error_if_exists: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                       Options.create_if_missing: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                         Options.paranoid_checks: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                                     Options.env: 0x55fc15e9bd50
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                                Options.info_log: 0x55fc15092800
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.max_file_opening_threads: 16
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                              Options.statistics: (nil)
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                               Options.use_fsync: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                       Options.max_log_file_size: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                       Options.keep_log_file_num: 1000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                    Options.recycle_log_file_num: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                         Options.allow_fallocate: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                        Options.allow_mmap_reads: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                       Options.allow_mmap_writes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                        Options.use_direct_reads: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.create_missing_column_families: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                              Options.db_log_dir: 
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                                 Options.wal_dir: db.wal
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.table_cache_numshardbits: 6
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.advise_random_on_open: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                    Options.db_write_buffer_size: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                    Options.write_buffer_manager: 0x55fc15f84460
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                            Options.rate_limiter: (nil)
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                       Options.wal_recovery_mode: 2
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.enable_thread_tracking: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.enable_pipelined_write: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.unordered_write: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                               Options.row_cache: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                              Options.wal_filter: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.allow_ingest_behind: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.two_write_queues: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.manual_wal_flush: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.wal_compression: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.atomic_flush: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                 Options.log_readahead_size: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                 Options.best_efforts_recovery: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.allow_data_in_errors: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.db_host_id: __hostname__
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.enforce_single_del_contracts: true
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.max_background_jobs: 4
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.max_background_compactions: -1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.max_subcompactions: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.delayed_write_rate : 16777216
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                          Options.max_open_files: -1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                          Options.bytes_per_sync: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.max_background_flushes: -1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Compression algorithms supported:
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: 	kZSTD supported: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: 	kXpressCompression supported: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: 	kBZip2Compression supported: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: 	kLZ4Compression supported: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: 	kZlibCompression supported: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: 	kLZ4HCCompression supported: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: 	kSnappyCompression supported: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Fast CRC32 supported: Supported on x86
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: DMutex implementation: pthread_mutex_t
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.compaction_filter: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fc15092e80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55fc1507add0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.compression: LZ4
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.num_levels: 7
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                           Options.bloom_locality: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                               Options.ttl: 2592000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                       Options.enable_blob_files: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                           Options.min_blob_size: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:           Options.merge_operator: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.compaction_filter: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fc15092e80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55fc1507add0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.compression: LZ4
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.num_levels: 7
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 18:04:19 compute-0 podman[206873]: 2025-12-03 18:04:19.122508785 +0000 UTC m=+0.085473687 container health_status f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                           Options.bloom_locality: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                               Options.ttl: 2592000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                       Options.enable_blob_files: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                           Options.min_blob_size: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:           Options.merge_operator: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.compaction_filter: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fc15092e80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55fc1507add0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.compression: LZ4
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.num_levels: 7
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                           Options.bloom_locality: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                               Options.ttl: 2592000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                       Options.enable_blob_files: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                           Options.min_blob_size: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
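A note on the level sizing shared by all of these column families: with level_compaction_dynamic_level_bytes: 0, RocksDB sizes levels statically, growing from max_bytes_for_level_base by max_bytes_for_level_multiplier per level (the addtl[i] factors are all 1 here, so they change nothing). A quick sanity check of what a 1 GiB base and an 8x multiplier across 7 levels imply:

    # Static level-size targets implied by the logged options
    # (level_compaction_dynamic_level_bytes: 0). L0 is file-count driven,
    # so sizing starts at L1.
    base = 1073741824              # Options.max_bytes_for_level_base (1 GiB)
    mult = 8.0                     # Options.max_bytes_for_level_multiplier
    for n in range(1, 7):          # L1..L6 (Options.num_levels: 7)
        print(f"L{n}: {base * mult ** (n - 1) / 2**30:.0f} GiB")
    # -> L1: 1, L2: 8, L3: 64, L4: 512, L5: 4096, L6: 32768 (GiB)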
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:           Options.merge_operator: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.compaction_filter: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fc15092e80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55fc1507add0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.compression: LZ4
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.num_levels: 7
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                           Options.bloom_locality: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                               Options.ttl: 2592000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                       Options.enable_blob_files: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                           Options.min_blob_size: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
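Also shared across these dumps is the memtable arithmetic: write_buffer_size caps each memtable at 16 MiB, min_write_buffer_number_to_merge: 6 means a flush normally merges six immutable memtables into one L0 file, and max_write_buffer_number: 64 bounds how many memtables a column family may hold before writes stall. A back-of-the-envelope check:

    # Per-column-family memtable budget from the logged options.
    write_buffer_size = 16777216   # Options.write_buffer_size: 16 MiB each
    merge_min = 6                  # Options.min_write_buffer_number_to_merge
    max_buffers = 64               # Options.max_write_buffer_number

    print(f"{write_buffer_size * merge_min / 2**20:.0f} MiB per L0 flush")   # 96 MiB
    print(f"{write_buffer_size * max_buffers / 2**30:.0f} GiB worst case")   # 1 GiB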
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:           Options.merge_operator: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.compaction_filter: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fc15092e80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55fc1507add0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.compression: LZ4
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.num_levels: 7
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                           Options.bloom_locality: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                               Options.ttl: 2592000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                       Options.enable_blob_files: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                           Options.min_blob_size: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
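Two details of the shared block cache are easy to miss inside the table_factory dump above: the BinnedLRUCache capacity, 483183820 bytes, is 45% of 1 GiB (plausibly a ratio-derived slice of a larger BlueStore cache budget; that is an inference, the log does not say), and num_shard_bits: 4 splits it into 16 independently locked shards:

    capacity = 483183820                 # block_cache_options capacity (bytes)
    shards = 2 ** 4                      # num_shard_bits: 4 -> 16 shards
    print(capacity / 2**30)              # 0.449999...  (45% of 1 GiB)
    print(f"{capacity / shards / 2**20:.1f} MiB per shard")   # 28.8 MiB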
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:           Options.merge_operator: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.compaction_filter: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fc15092e80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55fc1507add0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.compression: LZ4
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.num_levels: 7
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                           Options.bloom_locality: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                               Options.ttl: 2592000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                       Options.enable_blob_files: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                           Options.min_blob_size: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
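The table_properties_collectors line in each dump names RocksDB's CompactOnDeletionCollector with a 32768-entry sliding window and a 16384-deletion trigger (deletion ratio 0, so the ratio-based path is off): a file is marked for compaction once any 32768 consecutive entries contain at least 16384 tombstones. A toy re-implementation of the window check, illustrative only and not RocksDB's actual code:

    from collections import deque

    def needs_compaction(entries, window=32768, trigger=16384):
        """entries: iterable of booleans, True = deletion tombstone.
        Mirrors the sliding-window rule logged above: flag the file once
        any window-sized run contains >= trigger deletions."""
        win = deque(maxlen=window)
        deletions = 0
        for is_delete in entries:
            if len(win) == window:       # oldest entry about to fall out
                deletions -= win[0]
            win.append(is_delete)
            deletions += is_delete
            if deletions >= trigger:
                return True
        return False

    # e.g. needs_compaction([True] * 16384) -> True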
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:           Options.merge_operator: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.compaction_filter: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fc15092e80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55fc1507add0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.compression: LZ4
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.num_levels: 7
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                           Options.bloom_locality: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                               Options.ttl: 2592000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                       Options.enable_blob_files: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                           Options.min_blob_size: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
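The [O-0] block that follows is the first one to differ materially from the others: it points at a second BinnedLRUCache instance (block_cache: 0x55fc1507a430 rather than 0x55fc1507add0) with capacity 536870912 bytes, i.e. exactly 512 MiB, while the m-* and p-* families share the 45%-of-1-GiB cache noted earlier. A one-line check of those constants:

    assert 536870912 == 512 * 2**20          # [O-0] cache: exactly 512 MiB
    assert 483183820 == int(0.45 * 2**30)    # m-*/p-* cache: 45% of 1 GiB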
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:           Options.merge_operator: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.compaction_filter: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fc15092e60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55fc1507a430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.compression: LZ4
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.num_levels: 7
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                           Options.bloom_locality: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                               Options.ttl: 2592000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                       Options.enable_blob_files: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                           Options.min_blob_size: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:           Options.merge_operator: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.compaction_filter: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fc15092e60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55fc1507a430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.compression: LZ4
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.num_levels: 7
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                           Options.bloom_locality: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                               Options.ttl: 2592000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                       Options.enable_blob_files: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                           Options.min_blob_size: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:           Options.merge_operator: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.compaction_filter: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fc15092e60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55fc1507a430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.compression: LZ4
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.num_levels: 7
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                           Options.bloom_locality: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                               Options.ttl: 2592000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                       Options.enable_blob_files: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                           Options.min_blob_size: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
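The dumps for column families [O-0], [O-1], and [O-2] above are identical, down to the shared BinnedLRUCache block cache of 536870912 bytes (512 MiB). A minimal Python sketch, assuming only the stdlib and the "Options.<name>: <value>" line shape shown above (function names are illustrative), that folds one dump into a dict and diffs two of them:

    import re

    OPT = re.compile(r'rocksdb:\s+Options\.([\w.\[\]]+):\s+(.*)$')

    def parse_cf_dump(lines):
        # One dump = the lines between two "Options for column family" headers.
        opts = {}
        for line in lines:
            m = OPT.search(line)
            if m:
                opts[m.group(1)] = m.group(2).strip()
        return opts

    def diff_opts(a, b):
        # Return only the keys whose values differ between two dumps.
        keys = set(a) | set(b)
        return {k: (a.get(k), b.get(k)) for k in keys if a.get(k) != b.get(k)}

Applied to the [O-0] and [O-1] blocks above, diff_opts() returns an empty dict.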
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: [db/column_family.cc:635] (skipping printing options)
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: [db/column_family.cc:635] (skipping printing options)
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file: db/MANIFEST-000032 succeeded, manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5, prev_log_number is 0, max_column_family is 11, min_log_number_to_keep is 5
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: ff3149ee-5b51-4d1e-8ef8-21d7b5a5f753
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764785059110836, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764785059111232, "job": 1, "event": "recovery_finished"}
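Both EVENT_LOG_v1 lines above carry a machine-readable JSON payload after the "EVENT_LOG_v1 " marker. A minimal sketch (Python stdlib only; names are illustrative) for extracting those records from a syslog stream:

    import json

    MARKER = "EVENT_LOG_v1 "

    def iter_events(lines):
        # Yield each EVENT_LOG_v1 payload as a parsed dict.
        for line in lines:
            i = line.find(MARKER)
            if i != -1:
                yield json.loads(line[i + len(MARKER):])

For the pair above this yields a "recovery_started" event naming WAL file 31 (matching the "Recovering log #31" line) and a "recovery_finished" event 396 microseconds later (1764785059111232 - 1764785059110836).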
Dec  3 18:04:19 compute-0 ceph-osd[206694]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
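The _open_db line above echoes the comma-separated RocksDB tuning string BlueStore was given (the shape used by Ceph's bluestore_rocksdb_options setting). Assuming plain key=value pairs with no embedded commas, a quick parse recovers the values repeated in the per-column-family dumps:

    opts_str = ("compression=kLZ4Compression,max_write_buffer_number=64,"
                "min_write_buffer_number_to_merge=6,"
                "compaction_style=kCompactionStyleLevel,"
                "write_buffer_size=16777216,max_background_jobs=4,"
                "level0_file_num_compaction_trigger=8,"
                "max_bytes_for_level_base=1073741824,"
                "max_bytes_for_level_multiplier=8,"
                "compaction_readahead_size=2MB,"
                "max_total_wal_size=1073741824,writable_file_max_buffer_size=0")
    opts = dict(kv.split("=", 1) for kv in opts_str.split(","))
    assert opts["write_buffer_size"] == "16777216"

Note that compaction_readahead_size stays "2MB" here; RocksDB expands it to 2097152 bytes in the option dump below.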
Dec  3 18:04:19 compute-0 ceph-osd[206694]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Dec  3 18:04:19 compute-0 ceph-osd[206694]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Dec  3 18:04:19 compute-0 ceph-osd[206694]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Dec  3 18:04:19 compute-0 ceph-osd[206694]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: freelist init
Dec  3 18:04:19 compute-0 ceph-osd[206694]: freelist _read_cfg
Dec  3 18:04:19 compute-0 ceph-osd[206694]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
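The allocator figures above are self-consistent: capacity 0x4ffc00000 is 21470642176 bytes (the "20 GiB" reported for the block device below), and capacity minus free is 0x3000, i.e. only 12 KiB allocated, which squares with the 1.9e-07 fragmentation. Worked out in Python:

    capacity = 0x4ffc00000      # 21470642176 bytes
    free = 0x4ffbfd000
    print(capacity / 2**30)     # 19.996..., reported as "20 GiB"
    print(capacity - free)      # 12288 bytes = 0x3000 = 12 KiB in use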
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec  3 18:04:19 compute-0 ceph-osd[206694]: bluefs umount
Dec  3 18:04:19 compute-0 ceph-osd[206694]: bdev(0x55fc151d1400 /var/lib/ceph/osd/ceph-0/block) close
Dec  3 18:04:19 compute-0 podman[207116]: 2025-12-03 18:04:19.27012775 +0000 UTC m=+0.061556995 container create 55897a56c76e074a78cf563b21d85f33d6325c124f29cce6b3e44cd3d7a63b93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mahavira, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:04:19 compute-0 podman[207116]: 2025-12-03 18:04:19.242926188 +0000 UTC m=+0.034355453 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: bdev(0x55fc151d1400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec  3 18:04:19 compute-0 ceph-osd[206694]: bdev(0x55fc151d1400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec  3 18:04:19 compute-0 ceph-osd[206694]: bdev(0x55fc151d1400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 18:04:19 compute-0 ceph-osd[206694]: bdev(0x55fc151d1400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 18:04:19 compute-0 ceph-osd[206694]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Dec  3 18:04:19 compute-0 ceph-osd[206694]: bluefs mount
Dec  3 18:04:19 compute-0 ceph-osd[206694]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: bluefs mount shared_bdev_used = 4718592
Dec  3 18:04:19 compute-0 ceph-osd[206694]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
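Both db_paths entries above are sized 20397110067 bytes, which is exactly 95% of the 21470642176-byte shared device under integer truncation. This is consistent with BlueStore budgeting 95% of a shared block device for the db and db.slow paths when no dedicated DB device exists (an inference from the arithmetic, not something this log states):

    device_bytes = 21470642176          # bdev size from the open line above
    print(device_bytes * 95 // 100)     # 20397110067, matching both db_paths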
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: RocksDB version: 7.9.2
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Git sha 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Compile date 2025-05-06 23:30:25
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: DB SUMMARY
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: DB Session ID:  BF8AWTFOS2DCVHXDMY3D
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: CURRENT file:  CURRENT
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: IDENTITY file:  IDENTITY
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                         Options.error_if_exists: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                       Options.create_if_missing: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                         Options.paranoid_checks: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                                     Options.env: 0x55fc160383f0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                                Options.info_log: 0x55fc150931c0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.max_file_opening_threads: 16
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                              Options.statistics: (nil)
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                               Options.use_fsync: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                       Options.max_log_file_size: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                       Options.keep_log_file_num: 1000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                    Options.recycle_log_file_num: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                         Options.allow_fallocate: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                        Options.allow_mmap_reads: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                       Options.allow_mmap_writes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                        Options.use_direct_reads: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.create_missing_column_families: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                              Options.db_log_dir: 
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                                 Options.wal_dir: db.wal
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.table_cache_numshardbits: 6
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.advise_random_on_open: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                    Options.db_write_buffer_size: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                    Options.write_buffer_manager: 0x55fc15f84460
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                            Options.rate_limiter: (nil)
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                       Options.wal_recovery_mode: 2
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.enable_thread_tracking: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.enable_pipelined_write: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.unordered_write: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                               Options.row_cache: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                              Options.wal_filter: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.allow_ingest_behind: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.two_write_queues: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.manual_wal_flush: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.wal_compression: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.atomic_flush: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                 Options.log_readahead_size: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                 Options.best_efforts_recovery: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.allow_data_in_errors: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.db_host_id: __hostname__
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.enforce_single_del_contracts: true
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.max_background_jobs: 4
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.max_background_compactions: -1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.max_subcompactions: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.delayed_write_rate: 16777216
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                          Options.max_open_files: -1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                          Options.bytes_per_sync: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.max_background_flushes: -1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Compression algorithms supported:
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:     kZSTD supported: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:     kXpressCompression supported: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:     kBZip2Compression supported: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:     kZSTDNotFinalCompression supported: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:     kLZ4Compression supported: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:     kZlibCompression supported: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:     kLZ4HCCompression supported: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:     kSnappyCompression supported: 1
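This build supports LZ4, Zlib, LZ4HC, and Snappy, but not ZSTD, so the kLZ4Compression requested in the _open_db options string is in fact available. A small sketch (illustrative names) that folds the matrix above into a dict:

    import re

    SUP = re.compile(r'k(\w+) supported: ([01])')

    def supported_algos(lines):
        # Map e.g. "kLZ4Compression" -> True from the support-matrix lines.
        algos = {}
        for line in lines:
            m = SUP.search(line)
            if m:
                algos["k" + m.group(1)] = m.group(2) == "1"
        return algos

    # For the eight lines above: kLZ4Compression -> True, kZSTD -> False.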
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Fast CRC32 supported: Supported on x86
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: DMutex implementation: pthread_mutex_t
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.compaction_filter: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fc150929c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55fc1507add0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.compression: LZ4
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.num_levels: 7
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                           Options.bloom_locality: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                               Options.ttl: 2592000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                       Options.enable_blob_files: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                           Options.min_blob_size: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
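Everything from the write_buffer_size line down to experimental_mempurge_threshold is RocksDB echoing the effective per-column-family options at open time. As a rough map from this dump to code, here is a minimal C++ sketch that sets the same headline values by hand; the helper name MakeCfOptions is ours, and this illustrates the stock RocksDB API rather than the exact path ceph-osd takes, since BlueStore assembles its options from a serialized option string:

    #include <rocksdb/options.h>

    // Mirror the headline values printed in the dump above.
    rocksdb::ColumnFamilyOptions MakeCfOptions() {
      rocksdb::ColumnFamilyOptions cf;
      cf.write_buffer_size = 16 << 20;             // Options.write_buffer_size: 16777216
      cf.max_write_buffer_number = 64;             // Options.max_write_buffer_number: 64
      cf.min_write_buffer_number_to_merge = 6;     // merge 6 memtables per flush
      cf.compression = rocksdb::kLZ4Compression;   // Options.compression: LZ4
      cf.bottommost_compression = rocksdb::kDisableCompressionOption;
      cf.num_levels = 7;
      cf.level0_file_num_compaction_trigger = 8;   // start compacting L0 at 8 files
      cf.level0_slowdown_writes_trigger = 20;      // throttle writes at 20 L0 files
      cf.level0_stop_writes_trigger = 36;          // stall writes at 36 L0 files
      cf.target_file_size_base = 64ULL << 20;      // 67108864-byte target SSTs
      cf.max_bytes_for_level_base = 1ULL << 30;    // L1 holds ~1 GiB
      cf.max_bytes_for_level_multiplier = 8.0;     // each level 8x the previous
      cf.compaction_style = rocksdb::kCompactionStyleLevel;
      cf.compaction_pri = rocksdb::kMinOverlappingRatio;
      cf.ttl = 2592000;                            // Options.ttl: 30 days
      return cf;
    }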
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:           Options.merge_operator: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.compaction_filter: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fc150929c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55fc1507add0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.compression: LZ4
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.num_levels: 7
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                           Options.bloom_locality: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                               Options.ttl: 2592000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                       Options.enable_blob_files: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                           Options.min_blob_size: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
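The table_factory block above is the BlockBasedTable configuration shared by these shards: 4 KiB blocks, index and filter blocks kept in the block cache, whole-key bloom filtering, and format_version 5. A hedged C++ approximation follows; BinnedLRUCache is Ceph's own cache implementation, so the stock sharded LRU cache stands in for it here, and the bloom bits-per-key is not printed in the dump, so 10 is an assumed common default:

    #include <rocksdb/cache.h>
    #include <rocksdb/filter_policy.h>
    #include <rocksdb/table.h>

    // Approximate the table_factory settings printed above.
    rocksdb::BlockBasedTableOptions MakeTableOptions() {
      rocksdb::BlockBasedTableOptions t;
      t.block_size = 4096;                     // block_size: 4096
      t.cache_index_and_filter_blocks = true;  // cache_index_and_filter_blocks: 1
      t.pin_top_level_index_and_filter = true; // pin_top_level_index_and_filter: 1
      t.whole_key_filtering = true;            // whole_key_filtering: 1
      t.format_version = 5;                    // format_version: 5
      t.block_restart_interval = 16;           // block_restart_interval: 16
      t.metadata_block_size = 4096;            // metadata_block_size: 4096
      // bits-per-key is an assumption; the dump only says "bloomfilter".
      t.filter_policy.reset(rocksdb::NewBloomFilterPolicy(10));
      // Stand-in for Ceph's BinnedLRUCache: same capacity and shard bits.
      t.block_cache = rocksdb::NewLRUCache(483183820, /*num_shard_bits=*/4);
      return t;
    }
    // Attach with: cf.table_factory.reset(rocksdb::NewBlockBasedTableFactory(t));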
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:           Options.merge_operator: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.compaction_filter: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fc150929c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55fc1507add0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.compression: LZ4
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.num_levels: 7
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                           Options.bloom_locality: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                               Options.ttl: 2592000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                       Options.enable_blob_files: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                           Options.min_blob_size: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
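The shard names in these headers (m-0 through m-2, then p-0 onward) come from BlueStore splitting its keyspace across RocksDB column families; the shard layout and the option string that produces dumps like the above are driven by the Ceph options bluestore_rocksdb_cfs and bluestore_rocksdb_options. RocksDB ships a parser for that serialized form, sketched below with a hand-written example string (not the one this OSD booted with):

    #include <cassert>
    #include <rocksdb/convenience.h>
    #include <rocksdb/options.h>

    // Build ColumnFamilyOptions from a serialized option string, the same
    // general mechanism Ceph uses to apply its rocksdb option strings.
    rocksdb::ColumnFamilyOptions FromOptionString() {
      rocksdb::ColumnFamilyOptions base, out;
      rocksdb::Status s = rocksdb::GetColumnFamilyOptionsFromString(
          base,
          "write_buffer_size=16777216;max_write_buffer_number=64;"
          "min_write_buffer_number_to_merge=6;compression=kLZ4Compression",
          &out);
      assert(s.ok());  // newer RocksDB releases prefer the ConfigOptions overload
      return out;
    }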
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:           Options.merge_operator: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.compaction_filter: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fc150929c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55fc1507add0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.compression: LZ4
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.num_levels: 7
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                           Options.bloom_locality: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                               Options.ttl: 2592000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                       Options.enable_blob_files: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                           Options.min_blob_size: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
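With level_compaction_dynamic_level_bytes: 0, the per-level capacity targets follow directly from max_bytes_for_level_base and max_bytes_for_level_multiplier (the addtl[] factors are all 1 here): L1 targets 1 GiB and each deeper level is 8x larger, so L2 = 8 GiB, L3 = 64 GiB, and so on up to num_levels - 1. A small self-contained sketch of the arithmetic:

    #include <cstdio>

    // Level capacity targets implied by the dump above.
    int main() {
      const int num_levels = 7;        // Options.num_levels: 7
      double target = 1073741824.0;    // Options.max_bytes_for_level_base
      const double mult = 8.0;         // Options.max_bytes_for_level_multiplier
      for (int level = 1; level < num_levels; ++level) {
        std::printf("L%d target: %.0f bytes\n", level, target);
        target *= mult;                // addtl[] factors are all 1
      }
      return 0;
    }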
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:           Options.merge_operator: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.compaction_filter: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fc150929c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55fc1507add0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.compression: LZ4
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.num_levels: 7
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                           Options.bloom_locality: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                               Options.ttl: 2592000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                       Options.enable_blob_files: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                           Options.min_blob_size: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
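Once per-shard options exist, the OSD opens a single RocksDB instance with all column families at once, which is why one startup emits one dump per shard. The sketch below shows the generic multi-column-family open against the shard names seen in this log; the path is a placeholder, since BlueStore actually drives RocksDB through its own BlueFS-backed Env rather than a plain directory:

    #include <cassert>
    #include <vector>
    #include <rocksdb/db.h>

    // Generic multi-column-family open mirroring the shard names above.
    int main() {
      rocksdb::DBOptions db_opts;
      db_opts.create_if_missing = true;
      db_opts.create_missing_column_families = true;
      rocksdb::ColumnFamilyOptions cf_opts;  // per-shard options go here

      std::vector<rocksdb::ColumnFamilyDescriptor> cfs;
      cfs.emplace_back(rocksdb::kDefaultColumnFamilyName, cf_opts);
      for (const char* name : {"m-0", "m-1", "m-2", "p-0", "p-1"})
        cfs.emplace_back(name, cf_opts);

      std::vector<rocksdb::ColumnFamilyHandle*> handles;
      rocksdb::DB* db = nullptr;
      rocksdb::Status s =
          rocksdb::DB::Open(db_opts, "/tmp/example-db", cfs, &handles, &db);
      assert(s.ok());
      for (auto* h : handles) delete h;  // release handles before the DB
      delete db;
      return 0;
    }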
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:           Options.merge_operator: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.compaction_filter: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fc150929c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55fc1507add0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.compression: LZ4
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.num_levels: 7
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                           Options.bloom_locality: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                               Options.ttl: 2592000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                       Options.enable_blob_files: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                           Options.min_blob_size: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
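(The per-column-family dumps above and below are RocksDB echoing back the effective ColumnFamilyOptions for each BlueStore shard; the same block repeats nearly verbatim for p-2, O-0, O-1 and O-2, differing only in which block cache the shard points at. As a rough reconstruction only, not the OSD's actual code (Ceph derives these values from configuration such as bluestore_rocksdb_options), the printed values map onto RocksDB's public C++ API like so:

    #include <rocksdb/options.h>

    // Hypothetical reconstruction of the options printed above; the real
    // values come from the OSD's configuration, not from code like this.
    rocksdb::ColumnFamilyOptions MakeShardOptions() {
      rocksdb::ColumnFamilyOptions cf;
      cf.write_buffer_size = 16 * 1024 * 1024;          // write_buffer_size: 16777216
      cf.max_write_buffer_number = 64;
      cf.min_write_buffer_number_to_merge = 6;          // merge 6 memtables per flush
      cf.compression = rocksdb::kLZ4Compression;        // compression: LZ4
      cf.num_levels = 7;
      cf.level0_file_num_compaction_trigger = 8;
      cf.level0_slowdown_writes_trigger = 20;
      cf.level0_stop_writes_trigger = 36;
      cf.target_file_size_base = 64ull << 20;           // 67108864
      cf.max_bytes_for_level_base = 1ull << 30;         // 1073741824
      cf.max_bytes_for_level_multiplier = 8.0;
      cf.compaction_style = rocksdb::kCompactionStyleLevel;
      cf.compaction_pri = rocksdb::kMinOverlappingRatio;
      cf.ttl = 2592000;                                 // 30 days
      cf.force_consistency_checks = true;
      return cf;
    }

End of annotation; the log continues with the next column family.)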
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:           Options.merge_operator: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.compaction_filter: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fc150929c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55fc1507add0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.compression: LZ4
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.num_levels: 7
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                           Options.bloom_locality: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                               Options.ttl: 2592000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                       Options.enable_blob_files: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                           Options.min_blob_size: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
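(Each block's table_factory line describes the BlockBasedTable layer. BinnedLRUCache is a cache implementation that ships inside Ceph rather than stock RocksDB; an approximation of the same shape using upstream APIs, with values copied from the p-2 dump above, rocksdb::NewLRUCache standing in for the binned cache, and an assumed 10 bits/key for the bloom filter (the dump does not state it), looks like:

    #include <rocksdb/cache.h>
    #include <rocksdb/filter_policy.h>
    #include <rocksdb/options.h>
    #include <rocksdb/table.h>

    // Approximate stand-in for the table_factory dump; BinnedLRUCache is
    // Ceph-internal, so upstream LRUCache is used here instead.
    rocksdb::ColumnFamilyOptions WithBlockTable(rocksdb::ColumnFamilyOptions cf) {
      rocksdb::BlockBasedTableOptions t;
      t.block_size = 4096;
      t.cache_index_and_filter_blocks = true;
      t.pin_top_level_index_and_filter = true;
      t.block_cache = rocksdb::NewLRUCache(483183820, /*num_shard_bits=*/4);
      t.filter_policy.reset(rocksdb::NewBloomFilterPolicy(10));  // bits/key assumed
      t.whole_key_filtering = true;
      t.format_version = 5;
      t.checksum = rocksdb::kXXH3;  // "checksum: 4" is kXXH3 in this enum
      cf.table_factory.reset(rocksdb::NewBlockBasedTableFactory(t));
      return cf;
    }

End of annotation.)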
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:           Options.merge_operator: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.compaction_filter: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fc15092f60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55fc1507a430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.compression: LZ4
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.num_levels: 7
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                           Options.bloom_locality: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                               Options.ttl: 2592000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                       Options.enable_blob_files: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                           Options.min_blob_size: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:           Options.merge_operator: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.compaction_filter: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fc15092f60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55fc1507a430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.compression: LZ4
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.num_levels: 7
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                           Options.bloom_locality: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                               Options.ttl: 2592000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                       Options.enable_blob_files: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                           Options.min_blob_size: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:           Options.merge_operator: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.compaction_filter: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fc15092f60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55fc1507a430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.compression: LZ4
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.num_levels: 7
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                           Options.bloom_locality: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                               Options.ttl: 2592000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                       Options.enable_blob_files: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                           Options.min_blob_size: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
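(The table_properties_collectors line in every block, with sliding window 32768 and deletion trigger 16384, matches RocksDB's stock CompactOnDeletionCollector factory, which flags an SST file for compaction once enough of a sliding window of recent entries are tombstones. A minimal sketch of wiring it up:

    #include <rocksdb/options.h>
    #include <rocksdb/utilities/table_properties_collectors.h>

    // Mark a file for compaction once 16384 of any 32768 consecutive
    // entries are deletions, as the dumped collector options describe.
    rocksdb::ColumnFamilyOptions WithDeletionTrigger(rocksdb::ColumnFamilyOptions cf) {
      cf.table_properties_collector_factories.push_back(
          rocksdb::NewCompactOnDeletionCollectorFactory(
              /*sliding_window_size=*/32768,
              /*deletion_trigger=*/16384,
              /*deletion_ratio=*/0.0));
      return cf;
    }

End of annotation.)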
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: [db/column_family.cc:635]     (skipping printing options)
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: [db/column_family.cc:635]     (skipping printing options)
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
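(Recovery has to name every column family recorded in the manifest before the database will open. A generic client would discover and open them like the sketch below, with a placeholder path and default options; this is not how ceph-osd itself opens BlueStore's embedded DB:

    #include <rocksdb/db.h>

    #include <iostream>
    #include <string>
    #include <vector>

    // Sketch: enumerate and open the column families named in the manifest,
    // the same set the recovery lines above report (default, m-*, p-*, O-*, L, P).
    int main() {
      const std::string path = "/path/to/db";  // placeholder
      std::vector<std::string> names;
      rocksdb::Status s =
          rocksdb::DB::ListColumnFamilies(rocksdb::DBOptions(), path, &names);
      if (!s.ok()) { std::cerr << s.ToString() << "\n"; return 1; }

      std::vector<rocksdb::ColumnFamilyDescriptor> descs;
      for (const auto& n : names)
        descs.emplace_back(n, rocksdb::ColumnFamilyOptions());

      std::vector<rocksdb::ColumnFamilyHandle*> handles;
      rocksdb::DB* db = nullptr;
      s = rocksdb::DB::Open(rocksdb::DBOptions(), path, descs, &handles, &db);
      if (!s.ok()) { std::cerr << s.ToString() << "\n"; return 1; }

      for (const auto& n : names) std::cout << n << "\n";
      for (auto* h : handles) db->DestroyColumnFamilyHandle(h);
      delete db;
    }

End of annotation.)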
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: ff3149ee-5b51-4d1e-8ef8-21d7b5a5f753
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764785059388010, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
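(The "mode 2" in the line above is the numeric value of RocksDB's WALRecoveryMode enum; in the public headers, 2 is kPointInTimeRecovery, meaning the WAL is replayed up to the last consistent record. It is also the default; requesting it explicitly is one line:

    #include <rocksdb/options.h>

    rocksdb::DBOptions RecoveryOpts() {
      rocksdb::DBOptions opts;
      // "mode 2" == kPointInTimeRecovery: replay the WAL up to the last
      // consistent record; this is RocksDB's default mode.
      opts.wal_recovery_mode = rocksdb::WALRecoveryMode::kPointInTimeRecovery;
      return opts;
    }

End of annotation.)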
Dec  3 18:04:19 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:04:19 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:04:19 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Dec  3 18:04:19 compute-0 ceph-mon[192802]: Deploying daemon osd.1 on compute-0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764785059422190, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764785059, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff3149ee-5b51-4d1e-8ef8-21d7b5a5f753", "db_session_id": "BF8AWTFOS2DCVHXDMY3D", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764785059426684, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764785059, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff3149ee-5b51-4d1e-8ef8-21d7b5a5f753", "db_session_id": "BF8AWTFOS2DCVHXDMY3D", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764785059434649, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764785059, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff3149ee-5b51-4d1e-8ef8-21d7b5a5f753", "db_session_id": "BF8AWTFOS2DCVHXDMY3D", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764785059437942, "job": 1, "event": "recovery_finished"}
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
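The rocksdb EVENT_LOG_v1 records above are single-line JSON documents appended after a fixed marker, so they can be recovered mechanically from the journal. A minimal sketch in Python, assuming only the log format shown here (the marker string and field names are copied from the lines above; this is not cephadm or rocksdb tooling):

    import json
    import sys

    MARKER = "rocksdb: EVENT_LOG_v1 "

    def iter_event_logs(path):
        # Yield each EVENT_LOG_v1 JSON payload found in a syslog file.
        with open(path, errors="replace") as fh:
            for line in fh:
                idx = line.find(MARKER)
                if idx == -1:
                    continue
                try:
                    yield json.loads(line[idx + len(MARKER):])
                except json.JSONDecodeError:
                    pass  # skip records wrapped across physical lines

    for ev in iter_event_logs(sys.argv[1]):
        if ev.get("event") == "table_file_creation":
            # cf_name, file_number and file_size sit at the top level,
            # exactly as in the three records above
            print(ev["cf_name"], ev["file_number"], ev["file_size"])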
Dec  3 18:04:19 compute-0 systemd[1]: Started libpod-conmon-55897a56c76e074a78cf563b21d85f33d6325c124f29cce6b3e44cd3d7a63b93.scope.
Dec  3 18:04:19 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:04:19 compute-0 podman[207116]: 2025-12-03 18:04:19.524795597 +0000 UTC m=+0.316224872 container init 55897a56c76e074a78cf563b21d85f33d6325c124f29cce6b3e44cd3d7a63b93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mahavira, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:04:19 compute-0 podman[207116]: 2025-12-03 18:04:19.536765183 +0000 UTC m=+0.328194428 container start 55897a56c76e074a78cf563b21d85f33d6325c124f29cce6b3e44cd3d7a63b93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mahavira, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  3 18:04:19 compute-0 podman[207116]: 2025-12-03 18:04:19.541916806 +0000 UTC m=+0.333346071 container attach 55897a56c76e074a78cf563b21d85f33d6325c124f29cce6b3e44cd3d7a63b93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mahavira, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55fc1607a000
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: DB pointer 0x55fc150b5a00
Dec  3 18:04:19 compute-0 ceph-osd[206694]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec  3 18:04:19 compute-0 ceph-osd[206694]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Dec  3 18:04:19 compute-0 ceph-osd[206694]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
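The _open_db line packs the effective rocksdb options into a single comma-separated key=value string. A small sketch for splitting it during log analysis; the excerpt is shortened from the line above, and values are kept as strings because some carry unit suffixes (e.g. 2MB):

    # Shortened copy of the options string logged by _open_db above.
    opts_str = ("compression=kLZ4Compression,max_write_buffer_number=64,"
                "min_write_buffer_number_to_merge=6,write_buffer_size=16777216,"
                "compaction_readahead_size=2MB")

    # Split on commas, then on the first '=' of each pair.
    opts = dict(kv.split("=", 1) for kv in opts_str.split(","))
    assert opts["write_buffer_size"] == "16777216"      # 16 MiB memtables
    assert opts["compaction_readahead_size"] == "2MB"   # unit suffix preserved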
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 18:04:19 compute-0 ceph-osd[206694]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.2 total, 0.2 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.034       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.034       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.034       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.034       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.2 total, 0.2 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55fc1507add0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.2 total, 0.2 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55fc1507add0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.2 total, 0.2 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012
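The stats dump is hard to read because rsyslog escapes embedded newlines as #012 (octal 012 is \n) when a daemon emits a multi-line message. Undoing that escape restores rocksdb's original table layout; a one-line sketch:

    def unescape_syslog(record: str) -> str:
        # rsyslog's control-character escaping renders each newline as "#012"
        return record.replace("#012", "\n")

    sample = "** DB Stats **#012Uptime(secs): 0.2 total, 0.2 interval"
    print(unescape_syslog(sample))  # prints the two lines rocksdb wrote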
Dec  3 18:04:19 compute-0 kind_mahavira[207314]: 167 167
Dec  3 18:04:19 compute-0 systemd[1]: libpod-55897a56c76e074a78cf563b21d85f33d6325c124f29cce6b3e44cd3d7a63b93.scope: Deactivated successfully.
Dec  3 18:04:19 compute-0 podman[207116]: 2025-12-03 18:04:19.547017658 +0000 UTC m=+0.338446903 container died 55897a56c76e074a78cf563b21d85f33d6325c124f29cce6b3e44cd3d7a63b93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec  3 18:04:19 compute-0 ceph-osd[206694]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Dec  3 18:04:19 compute-0 ceph-osd[206694]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Dec  3 18:04:19 compute-0 ceph-osd[206694]: _get_class not permitted to load lua
Dec  3 18:04:19 compute-0 ceph-osd[206694]: _get_class not permitted to load sdk
Dec  3 18:04:19 compute-0 ceph-osd[206694]: _get_class not permitted to load test_remote_reads
Dec  3 18:04:19 compute-0 ceph-osd[206694]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Dec  3 18:04:19 compute-0 ceph-osd[206694]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Dec  3 18:04:19 compute-0 ceph-osd[206694]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Dec  3 18:04:19 compute-0 ceph-osd[206694]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Dec  3 18:04:19 compute-0 ceph-osd[206694]: osd.0 0 load_pgs
Dec  3 18:04:19 compute-0 ceph-osd[206694]: osd.0 0 load_pgs opened 0 pgs
Dec  3 18:04:19 compute-0 ceph-osd[206694]: osd.0 0 log_to_monitors true
Dec  3 18:04:19 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-0[206640]: 2025-12-03T18:04:19.553+0000 7f21dd85f740 -1 osd.0 0 log_to_monitors true
Dec  3 18:04:19 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0) v1
Dec  3 18:04:19 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3545763200,v1:192.168.122.100:6803/3545763200]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Dec  3 18:04:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-165b0fd48ce55f074d61c79edad1bedd7d53ba3b399705ad733973e01d058105-merged.mount: Deactivated successfully.
Dec  3 18:04:19 compute-0 podman[207116]: 2025-12-03 18:04:19.597311822 +0000 UTC m=+0.388741067 container remove 55897a56c76e074a78cf563b21d85f33d6325c124f29cce6b3e44cd3d7a63b93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mahavira, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:04:19 compute-0 systemd[1]: libpod-conmon-55897a56c76e074a78cf563b21d85f33d6325c124f29cce6b3e44cd3d7a63b93.scope: Deactivated successfully.
Dec  3 18:04:19 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 18:04:20 compute-0 podman[207380]: 2025-12-03 18:04:20.278688705 +0000 UTC m=+0.127410091 container create 0052fd4a0afc8234fa12d03e3e75b64f79d904ec1dbcf02d13fd92dcc0367c7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-1-activate-test, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507)
Dec  3 18:04:20 compute-0 podman[207380]: 2025-12-03 18:04:20.188389043 +0000 UTC m=+0.037110439 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:04:20 compute-0 systemd[1]: Started libpod-conmon-0052fd4a0afc8234fa12d03e3e75b64f79d904ec1dbcf02d13fd92dcc0367c7d.scope.
Dec  3 18:04:20 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:04:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d7806b708af687b1e4a40cea005f761c4c2ad5e5cef449cd3b57fe4f05eb296/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d7806b708af687b1e4a40cea005f761c4c2ad5e5cef449cd3b57fe4f05eb296/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d7806b708af687b1e4a40cea005f761c4c2ad5e5cef449cd3b57fe4f05eb296/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d7806b708af687b1e4a40cea005f761c4c2ad5e5cef449cd3b57fe4f05eb296/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d7806b708af687b1e4a40cea005f761c4c2ad5e5cef449cd3b57fe4f05eb296/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:20 compute-0 podman[207380]: 2025-12-03 18:04:20.395744948 +0000 UTC m=+0.244466344 container init 0052fd4a0afc8234fa12d03e3e75b64f79d904ec1dbcf02d13fd92dcc0367c7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-1-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  3 18:04:20 compute-0 podman[207380]: 2025-12-03 18:04:20.415215403 +0000 UTC m=+0.263936769 container start 0052fd4a0afc8234fa12d03e3e75b64f79d904ec1dbcf02d13fd92dcc0367c7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-1-activate-test, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Dec  3 18:04:20 compute-0 podman[207380]: 2025-12-03 18:04:20.420694555 +0000 UTC m=+0.269415921 container attach 0052fd4a0afc8234fa12d03e3e75b64f79d904ec1dbcf02d13fd92dcc0367c7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-1-activate-test, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:04:20 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Dec  3 18:04:20 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  3 18:04:20 compute-0 ceph-mon[192802]: from='osd.0 [v2:192.168.122.100:6802/3545763200,v1:192.168.122.100:6803/3545763200]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Dec  3 18:04:20 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3545763200,v1:192.168.122.100:6803/3545763200]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Dec  3 18:04:20 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e7 e7: 3 total, 0 up, 3 in
Dec  3 18:04:20 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e7: 3 total, 0 up, 3 in
Dec  3 18:04:20 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Dec  3 18:04:20 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3545763200,v1:192.168.122.100:6803/3545763200]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Dec  3 18:04:20 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.0' initial_weight 0.0195 at location {host=compute-0,root=default}
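The weight 0.0195 in the create-or-move call follows the usual Ceph convention that an OSD's initial CRUSH weight is its capacity expressed in TiB, which is consistent with a roughly 20 GiB test volume here. A quick check under that assumption:

    size_gib = 20                             # assumed device size (not in the log)
    crush_weight = round(size_gib / 1024, 4)  # capacity in TiB
    print(crush_weight)                       # 0.0195, matching the mon command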
Dec  3 18:04:20 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec  3 18:04:20 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  3 18:04:20 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec  3 18:04:20 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  3 18:04:20 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec  3 18:04:20 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  3 18:04:20 compute-0 ceph-mgr[193091]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  3 18:04:20 compute-0 ceph-mgr[193091]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  3 18:04:20 compute-0 ceph-mgr[193091]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  3 18:04:20 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Dec  3 18:04:20 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Dec  3 18:04:21 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-1-activate-test[207396]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Dec  3 18:04:21 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-1-activate-test[207396]:                            [--no-systemd] [--no-tmpfs]
Dec  3 18:04:21 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-1-activate-test[207396]: ceph-volume activate: error: unrecognized arguments: --bad-option
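The osd-1-activate-test container above fails on purpose: passing a deliberately invalid flag makes ceph-volume print its usage text, and the option list in that text tells the caller whether this image's activate subcommand supports flags such as --no-tmpfs before the real activation runs. A hedged sketch of that probe pattern (the command and flag are modeled on this log, not taken from cephadm's source):

    import subprocess

    def supports_flag(base_cmd, flag):
        # argparse writes its usage text, listing the valid options, to stderr
        proc = subprocess.run(base_cmd + ["--bad-option"],
                              capture_output=True, text=True)
        return flag in proc.stderr

    # e.g. supports_flag(["ceph-volume", "activate"], "--no-tmpfs")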
Dec  3 18:04:21 compute-0 systemd[1]: libpod-0052fd4a0afc8234fa12d03e3e75b64f79d904ec1dbcf02d13fd92dcc0367c7d.scope: Deactivated successfully.
Dec  3 18:04:21 compute-0 conmon[207396]: conmon 0052fd4a0afc8234fa12 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0052fd4a0afc8234fa12d03e3e75b64f79d904ec1dbcf02d13fd92dcc0367c7d.scope/container/memory.events
Dec  3 18:04:21 compute-0 podman[207380]: 2025-12-03 18:04:21.129635398 +0000 UTC m=+0.978356794 container died 0052fd4a0afc8234fa12d03e3e75b64f79d904ec1dbcf02d13fd92dcc0367c7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-1-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  3 18:04:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d7806b708af687b1e4a40cea005f761c4c2ad5e5cef449cd3b57fe4f05eb296-merged.mount: Deactivated successfully.
Dec  3 18:04:21 compute-0 podman[207380]: 2025-12-03 18:04:21.212773448 +0000 UTC m=+1.061494804 container remove 0052fd4a0afc8234fa12d03e3e75b64f79d904ec1dbcf02d13fd92dcc0367c7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-1-activate-test, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec  3 18:04:21 compute-0 systemd[1]: libpod-conmon-0052fd4a0afc8234fa12d03e3e75b64f79d904ec1dbcf02d13fd92dcc0367c7d.scope: Deactivated successfully.
Dec  3 18:04:21 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Dec  3 18:04:21 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  3 18:04:21 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3545763200,v1:192.168.122.100:6803/3545763200]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec  3 18:04:21 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e8 e8: 3 total, 0 up, 3 in
Dec  3 18:04:21 compute-0 ceph-osd[206694]: osd.0 0 done with init, starting boot process
Dec  3 18:04:21 compute-0 ceph-osd[206694]: osd.0 0 start_boot
Dec  3 18:04:21 compute-0 ceph-osd[206694]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Dec  3 18:04:21 compute-0 ceph-osd[206694]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Dec  3 18:04:21 compute-0 ceph-osd[206694]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Dec  3 18:04:21 compute-0 ceph-osd[206694]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Dec  3 18:04:21 compute-0 ceph-osd[206694]: osd.0 0  bench count 12288000 bsize 4 KiB
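The bench line records the short self-test the OSD runs at start_boot, apparently to calibrate the mclock settings adjusted just above: count is the total bytes to write and bsize the IO size, so this bench issues 3000 writes of 4 KiB each:

    count = 12_288_000     # total bytes, from the bench line above
    bsize = 4 * 1024       # 4 KiB block size
    print(count // bsize)  # 3000 write operations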
Dec  3 18:04:21 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e8: 3 total, 0 up, 3 in
Dec  3 18:04:21 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec  3 18:04:21 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  3 18:04:21 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec  3 18:04:21 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  3 18:04:21 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec  3 18:04:21 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  3 18:04:21 compute-0 ceph-mgr[193091]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  3 18:04:21 compute-0 ceph-mgr[193091]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  3 18:04:21 compute-0 ceph-mgr[193091]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  3 18:04:21 compute-0 ceph-mon[192802]: from='osd.0 [v2:192.168.122.100:6802/3545763200,v1:192.168.122.100:6803/3545763200]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Dec  3 18:04:21 compute-0 ceph-mon[192802]: from='osd.0 [v2:192.168.122.100:6802/3545763200,v1:192.168.122.100:6803/3545763200]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Dec  3 18:04:21 compute-0 ceph-mgr[193091]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3545763200; not ready for session (expect reconnect)
Dec  3 18:04:21 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec  3 18:04:21 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  3 18:04:21 compute-0 ceph-mgr[193091]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  3 18:04:21 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:04:21 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 18:04:22 compute-0 ceph-mgr[193091]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3545763200; not ready for session (expect reconnect)
Dec  3 18:04:22 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec  3 18:04:22 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  3 18:04:22 compute-0 ceph-mgr[193091]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  3 18:04:22 compute-0 systemd[1]: Reloading.
Dec  3 18:04:22 compute-0 ceph-mon[192802]: from='osd.0 [v2:192.168.122.100:6802/3545763200,v1:192.168.122.100:6803/3545763200]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec  3 18:04:22 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 18:04:22 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 18:04:23 compute-0 systemd[1]: Reloading.
Dec  3 18:04:23 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 18:04:23 compute-0 python3[207499]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c1caf3ba-b2a5-5005-a11e-e955c344dccc -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:04:23 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 18:04:23 compute-0 podman[207537]: 2025-12-03 18:04:23.318534453 +0000 UTC m=+0.081792269 container create 4e11c987cea467efa8d9ca88e0cab9c2e7010431febddc08958ad665c5521389 (image=quay.io/ceph/ceph:v18, name=tender_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:04:23 compute-0 podman[207537]: 2025-12-03 18:04:23.27374851 +0000 UTC m=+0.037006376 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:04:23 compute-0 ceph-mgr[193091]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3545763200; not ready for session (expect reconnect)
Dec  3 18:04:23 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec  3 18:04:23 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  3 18:04:23 compute-0 ceph-mgr[193091]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  3 18:04:23 compute-0 systemd[1]: Started libpod-conmon-4e11c987cea467efa8d9ca88e0cab9c2e7010431febddc08958ad665c5521389.scope.
Dec  3 18:04:23 compute-0 systemd[1]: Starting Ceph osd.1 for c1caf3ba-b2a5-5005-a11e-e955c344dccc...
Dec  3 18:04:23 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:04:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fcbf2ffddac8e483c01031814b5592f52e071d0331c4f9ae05694e474a78542/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fcbf2ffddac8e483c01031814b5592f52e071d0331c4f9ae05694e474a78542/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fcbf2ffddac8e483c01031814b5592f52e071d0331c4f9ae05694e474a78542/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:23 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 18:04:23 compute-0 podman[207537]: 2025-12-03 18:04:23.951971677 +0000 UTC m=+0.715229593 container init 4e11c987cea467efa8d9ca88e0cab9c2e7010431febddc08958ad665c5521389 (image=quay.io/ceph/ceph:v18, name=tender_elbakyan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  3 18:04:23 compute-0 podman[207537]: 2025-12-03 18:04:23.962316586 +0000 UTC m=+0.725574402 container start 4e11c987cea467efa8d9ca88e0cab9c2e7010431febddc08958ad665c5521389 (image=quay.io/ceph/ceph:v18, name=tender_elbakyan, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:04:23 compute-0 podman[207537]: 2025-12-03 18:04:23.988136844 +0000 UTC m=+0.751394700 container attach 4e11c987cea467efa8d9ca88e0cab9c2e7010431febddc08958ad665c5521389 (image=quay.io/ceph/ceph:v18, name=tender_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  3 18:04:24 compute-0 podman[207603]: 2025-12-03 18:04:24.231692965 +0000 UTC m=+0.034185819 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:04:24 compute-0 ceph-mgr[193091]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3545763200; not ready for session (expect reconnect)
Dec  3 18:04:24 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec  3 18:04:24 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  3 18:04:24 compute-0 ceph-mgr[193091]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  3 18:04:24 compute-0 podman[207603]: 2025-12-03 18:04:24.470883281 +0000 UTC m=+0.273376135 container create 90d5b28a79a9b9cc03a84102ac0a8a439f30bd0af2ee3bef4505cfaafc48a8fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-1-activate, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec  3 18:04:24 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Dec  3 18:04:24 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2492820827' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec  3 18:04:24 compute-0 tender_elbakyan[207555]: 
Dec  3 18:04:24 compute-0 tender_elbakyan[207555]: {"fsid":"c1caf3ba-b2a5-5005-a11e-e955c344dccc","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":118,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":8,"num_osds":3,"num_up_osds":0,"osd_up_since":0,"num_in_osds":3,"osd_in_since":1764785045,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-12-03T18:04:15.769428+0000","services":{}},"progress_events":{}}
Dec  3 18:04:24 compute-0 systemd[1]: libpod-4e11c987cea467efa8d9ca88e0cab9c2e7010431febddc08958ad665c5521389.scope: Deactivated successfully.
Dec  3 18:04:24 compute-0 podman[207537]: 2025-12-03 18:04:24.643046663 +0000 UTC m=+1.406304479 container died 4e11c987cea467efa8d9ca88e0cab9c2e7010431febddc08958ad665c5521389 (image=quay.io/ceph/ceph:v18, name=tender_elbakyan, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:04:24 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:04:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bc63c1abc0615618e244320a4ab34dd3b28a9ef89f5d65b61fd91d7f920bdbf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bc63c1abc0615618e244320a4ab34dd3b28a9ef89f5d65b61fd91d7f920bdbf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bc63c1abc0615618e244320a4ab34dd3b28a9ef89f5d65b61fd91d7f920bdbf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bc63c1abc0615618e244320a4ab34dd3b28a9ef89f5d65b61fd91d7f920bdbf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bc63c1abc0615618e244320a4ab34dd3b28a9ef89f5d65b61fd91d7f920bdbf/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:24 compute-0 podman[207603]: 2025-12-03 18:04:24.823549955 +0000 UTC m=+0.626042839 container init 90d5b28a79a9b9cc03a84102ac0a8a439f30bd0af2ee3bef4505cfaafc48a8fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-1-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec  3 18:04:24 compute-0 podman[207603]: 2025-12-03 18:04:24.835955852 +0000 UTC m=+0.638448706 container start 90d5b28a79a9b9cc03a84102ac0a8a439f30bd0af2ee3bef4505cfaafc48a8fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-1-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default)
Dec  3 18:04:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-7fcbf2ffddac8e483c01031814b5592f52e071d0331c4f9ae05694e474a78542-merged.mount: Deactivated successfully.
Dec  3 18:04:24 compute-0 podman[207603]: 2025-12-03 18:04:24.86970293 +0000 UTC m=+0.672195824 container attach 90d5b28a79a9b9cc03a84102ac0a8a439f30bd0af2ee3bef4505cfaafc48a8fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-1-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec  3 18:04:24 compute-0 podman[207537]: 2025-12-03 18:04:24.966572429 +0000 UTC m=+1.729830255 container remove 4e11c987cea467efa8d9ca88e0cab9c2e7010431febddc08958ad665c5521389 (image=quay.io/ceph/ceph:v18, name=tender_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:04:24 compute-0 systemd[1]: libpod-conmon-4e11c987cea467efa8d9ca88e0cab9c2e7010431febddc08958ad665c5521389.scope: Deactivated successfully.
Dec  3 18:04:25 compute-0 ceph-mgr[193091]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3545763200; not ready for session (expect reconnect)
Dec  3 18:04:25 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec  3 18:04:25 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  3 18:04:25 compute-0 ceph-mgr[193091]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  3 18:04:25 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 18:04:25 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-1-activate[207640]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec  3 18:04:25 compute-0 bash[207603]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec  3 18:04:25 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-1-activate[207640]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Dec  3 18:04:25 compute-0 bash[207603]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Dec  3 18:04:25 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-1-activate[207640]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Dec  3 18:04:25 compute-0 bash[207603]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Dec  3 18:04:25 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-1-activate[207640]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec  3 18:04:25 compute-0 bash[207603]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec  3 18:04:25 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-1-activate[207640]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Dec  3 18:04:25 compute-0 bash[207603]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Dec  3 18:04:25 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-1-activate[207640]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec  3 18:04:25 compute-0 bash[207603]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec  3 18:04:25 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-1-activate[207640]: --> ceph-volume raw activate successful for osd ID: 1
Dec  3 18:04:25 compute-0 bash[207603]: --> ceph-volume raw activate successful for osd ID: 1
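The activate container's output spells out the whole `ceph-volume raw activate` sequence for osd.1: fix ownership, prime the OSD directory from the BlueStore device with ceph-bluestore-tool, symlink the block device into place, and fix ownership again. A sketch that replays those exact commands (paths and IDs copied from the log; illustrative only, not a substitute for ceph-volume):

    import subprocess

    osd_dir = "/var/lib/ceph/osd/ceph-1"
    dev = "/dev/mapper/ceph_vg1-ceph_lv1"

    for cmd in (
        ["chown", "-R", "ceph:ceph", osd_dir],
        ["ceph-bluestore-tool", "prime-osd-dir",
         "--path", osd_dir, "--no-mon-config", "--dev", dev],
        ["chown", "-h", "ceph:ceph", dev],
        ["chown", "-R", "ceph:ceph", "/dev/dm-1"],  # the dm node backing the LV
        ["ln", "-s", dev, osd_dir + "/block"],
        ["chown", "-R", "ceph:ceph", osd_dir],
    ):
        subprocess.run(cmd, check=True)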
Dec  3 18:04:25 compute-0 systemd[1]: libpod-90d5b28a79a9b9cc03a84102ac0a8a439f30bd0af2ee3bef4505cfaafc48a8fc.scope: Deactivated successfully.
Dec  3 18:04:25 compute-0 systemd[1]: libpod-90d5b28a79a9b9cc03a84102ac0a8a439f30bd0af2ee3bef4505cfaafc48a8fc.scope: Consumed 1.176s CPU time.
Dec  3 18:04:26 compute-0 podman[207603]: 2025-12-03 18:04:25.999962709 +0000 UTC m=+1.802455533 container died 90d5b28a79a9b9cc03a84102ac0a8a439f30bd0af2ee3bef4505cfaafc48a8fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-1-activate, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:04:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-6bc63c1abc0615618e244320a4ab34dd3b28a9ef89f5d65b61fd91d7f920bdbf-merged.mount: Deactivated successfully.
Dec  3 18:04:26 compute-0 podman[207603]: 2025-12-03 18:04:26.180290426 +0000 UTC m=+1.982783280 container remove 90d5b28a79a9b9cc03a84102ac0a8a439f30bd0af2ee3bef4505cfaafc48a8fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-1-activate, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
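
That activation ran in a short-lived podman container which lived for under two seconds (m=+1.80 at death, m=+1.98 at removal), so its output survives only in this journal. While such a transient container still exists it can be listed by the cephadm name prefix seen above; podman ps is standard, the name filter is this cluster's fsid:

    # List ceph containers for this cluster, including exited ones
    podman ps -a --filter name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc
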
Dec  3 18:04:26 compute-0 ceph-mgr[193091]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3545763200; not ready for session (expect reconnect)
Dec  3 18:04:26 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec  3 18:04:26 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  3 18:04:26 compute-0 ceph-mgr[193091]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
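
The mgr is polling the mons for osd.0's metadata while that OSD is still reconnecting (see the "expect reconnect" line above), so the ENOENT here is transient bring-up noise rather than a data problem. The same query is available from the CLI and succeeds once the daemon has registered:

    ceph osd metadata 0    # returns ENOENT until osd.0 checks back in with the mons
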
Dec  3 18:04:26 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:04:26 compute-0 podman[207833]: 2025-12-03 18:04:26.640336391 +0000 UTC m=+0.101029091 container create 831ecf787892e31a0d033c36f2819e8ee22833d441f7e7b95e288e6036d2be8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec  3 18:04:26 compute-0 podman[207833]: 2025-12-03 18:04:26.594199966 +0000 UTC m=+0.054892656 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:04:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0684bd525cdda8d327c11f0101c62147d750eb67bc400cee4f3155fe7709097/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0684bd525cdda8d327c11f0101c62147d750eb67bc400cee4f3155fe7709097/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0684bd525cdda8d327c11f0101c62147d750eb67bc400cee4f3155fe7709097/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0684bd525cdda8d327c11f0101c62147d750eb67bc400cee4f3155fe7709097/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0684bd525cdda8d327c11f0101c62147d750eb67bc400cee4f3155fe7709097/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
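
The xfs notices are informational: these filesystems carry 32-bit inode timestamps, valid up to 0x7fffffff seconds after the epoch (19 January 2038). xfs created with the bigtime feature extends that range; one way to check each mounted xfs filesystem, assuming xfsprogs new enough to report the flag:

    # Print the bigtime flag for every mounted xfs filesystem
    findmnt -t xfs -no TARGET | while read -r m; do
        printf '%s %s\n' "$m" "$(xfs_info "$m" | grep -o 'bigtime=[01]')"
    done
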
Dec  3 18:04:26 compute-0 podman[207833]: 2025-12-03 18:04:26.817304477 +0000 UTC m=+0.277997087 container init 831ecf787892e31a0d033c36f2819e8ee22833d441f7e7b95e288e6036d2be8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:04:26 compute-0 podman[207833]: 2025-12-03 18:04:26.842428299 +0000 UTC m=+0.303120919 container start 831ecf787892e31a0d033c36f2819e8ee22833d441f7e7b95e288e6036d2be8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:04:26 compute-0 bash[207833]: 831ecf787892e31a0d033c36f2819e8ee22833d441f7e7b95e288e6036d2be8c
Dec  3 18:04:26 compute-0 systemd[1]: Started Ceph osd.1 for c1caf3ba-b2a5-5005-a11e-e955c344dccc.
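
From here on the OSD runs as a regular systemd service. Assuming cephadm's usual ceph-<fsid>@<daemon> unit naming, which matches the "Started Ceph osd.1 for <fsid>" line above, status and logs can be followed with:

    systemctl status 'ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc@osd.1.service'
    journalctl -fu 'ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc@osd.1.service'
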
Dec  3 18:04:26 compute-0 ceph-osd[207851]: set uid:gid to 167:167 (ceph:ceph)
Dec  3 18:04:26 compute-0 ceph-osd[207851]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Dec  3 18:04:26 compute-0 ceph-osd[207851]: pidfile_write: ignore empty --pid-file
Dec  3 18:04:26 compute-0 ceph-osd[207851]: bdev(0x5562f645f800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  3 18:04:26 compute-0 ceph-osd[207851]: bdev(0x5562f645f800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  3 18:04:26 compute-0 ceph-osd[207851]: bdev(0x5562f645f800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 18:04:26 compute-0 ceph-osd[207851]: bdev(0x5562f645f800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 18:04:26 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  3 18:04:26 compute-0 ceph-osd[207851]: bdev(0x5562f72a1800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  3 18:04:26 compute-0 ceph-osd[207851]: bdev(0x5562f72a1800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  3 18:04:26 compute-0 ceph-osd[207851]: bdev(0x5562f72a1800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 18:04:26 compute-0 ceph-osd[207851]: bdev(0x5562f72a1800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 18:04:26 compute-0 ceph-osd[207851]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Dec  3 18:04:26 compute-0 ceph-osd[207851]: bdev(0x5562f72a1800 /var/lib/ceph/osd/ceph-1/block) close
Dec  3 18:04:26 compute-0 ceph-osd[207851]: bdev(0x5562f645f800 /var/lib/ceph/osd/ceph-1/block) close
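
This first open/close pair is only a probe of the BlueStore superblock. Two details are worth decoding: the F_SET_FILE_RW_HINT EINVAL is harmless (the dm device simply does not accept write-lifetime hints), and the "20 GiB" is rounded, since 0x4ffc00000 is 4 MiB short of an exact 20 GiB:

    printf '%d\n' 0x4ffc00000                # 21470642176 bytes, as logged
    echo $(( 0x500000000 - 0x4ffc00000 ))    # 4194304 -> 4 MiB below 20 GiB exactly
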
Dec  3 18:04:26 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:04:26 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:04:26 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:04:26 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:04:26 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0) v1
Dec  3 18:04:26 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Dec  3 18:04:27 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:04:27 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:04:27 compute-0 ceph-mgr[193091]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-0
Dec  3 18:04:27 compute-0 ceph-mgr[193091]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-0
Dec  3 18:04:27 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:04:27 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:04:27 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
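
Before "Deploying daemon osd.2", the mgr fetches the new OSD's keyring and a minimal ceph.conf from the mons; those are the two audited commands above. The same calls from the CLI:

    ceph auth get osd.2                  # keyring placed into the new daemon's data dir
    ceph config generate-minimal-conf    # minimal ceph.conf handed to the new daemon
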
Dec  3 18:04:27 compute-0 ceph-osd[207851]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Dec  3 18:04:27 compute-0 ceph-osd[207851]: load: jerasure load: lrc 
Dec  3 18:04:27 compute-0 ceph-osd[207851]: bdev(0x5562f7322c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  3 18:04:27 compute-0 ceph-osd[207851]: bdev(0x5562f7322c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  3 18:04:27 compute-0 ceph-osd[207851]: bdev(0x5562f7322c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 18:04:27 compute-0 ceph-osd[207851]: bdev(0x5562f7322c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 18:04:27 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  3 18:04:27 compute-0 ceph-osd[207851]: bdev(0x5562f7322c00 /var/lib/ceph/osd/ceph-1/block) close
Dec  3 18:04:27 compute-0 ceph-mgr[193091]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3545763200; not ready for session (expect reconnect)
Dec  3 18:04:27 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec  3 18:04:27 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  3 18:04:27 compute-0 ceph-mgr[193091]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  3 18:04:27 compute-0 ceph-osd[207851]: bdev(0x5562f7322c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  3 18:04:27 compute-0 ceph-osd[207851]: bdev(0x5562f7322c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  3 18:04:27 compute-0 ceph-osd[207851]: bdev(0x5562f7322c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 18:04:27 compute-0 ceph-osd[207851]: bdev(0x5562f7322c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 18:04:27 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  3 18:04:27 compute-0 ceph-osd[207851]: bdev(0x5562f7322c00 /var/lib/ceph/osd/ceph-1/block) close
Dec  3 18:04:27 compute-0 ceph-osd[207851]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Dec  3 18:04:27 compute-0 ceph-osd[207851]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
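
The mClock line is internally consistent: 157286400 bytes/s is exactly 150 MiB/s, and dividing it by the per-IO cost recovers about 315 IOPS, which matches the Reef default osd_mclock_max_capacity_iops_hdd for rotational devices. A sanity check, assuming that default:

    echo $(( 157286400 / 1024 / 1024 ))                          # 150 MiB/s per shard
    awk 'BEGIN { printf "%.1f IOPS\n", 157286400 / 499321.90 }'  # ~315.0
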
Dec  3 18:04:27 compute-0 ceph-osd[207851]: bdev(0x5562f7322c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  3 18:04:27 compute-0 ceph-osd[207851]: bdev(0x5562f7322c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  3 18:04:27 compute-0 ceph-osd[207851]: bdev(0x5562f7322c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 18:04:27 compute-0 ceph-osd[207851]: bdev(0x5562f7322c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 18:04:27 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  3 18:04:27 compute-0 ceph-osd[207851]: bdev(0x5562f7323400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  3 18:04:27 compute-0 ceph-osd[207851]: bdev(0x5562f7323400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  3 18:04:27 compute-0 ceph-osd[207851]: bdev(0x5562f7323400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 18:04:27 compute-0 ceph-osd[207851]: bdev(0x5562f7323400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 18:04:27 compute-0 ceph-osd[207851]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Dec  3 18:04:27 compute-0 ceph-osd[207851]: bluefs mount
Dec  3 18:04:27 compute-0 ceph-osd[207851]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec  3 18:04:27 compute-0 ceph-osd[207851]: bluefs mount shared_bdev_used = 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
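
With a single shared device, db and db.slow both resolve onto the same BlueFS space. The logged 20397110067 bytes is 95% of the 21470642176-byte block device, which looks like a headroom factor applied when RocksDB shares the main device (an observation from the numbers here, not a documented contract):

    echo $(( 21470642176 * 95 / 100 ))   # 20397110067, matching the db_paths size above
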
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: RocksDB version: 7.9.2
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Git sha 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Compile date 2025-05-06 23:30:25
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: DB SUMMARY
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: DB Session ID:  13YRN684H3IBL1RWATLD
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: CURRENT file:  CURRENT
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: IDENTITY file:  IDENTITY
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
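
The DB SUMMARY describes a near-empty store: one SST file and a 5 KiB write-ahead log, all living inside BlueFS rather than on a POSIX filesystem. For offline inspection the BlueFS namespace can be exported to plain files, with the OSD stopped (under cephadm, run it from inside cephadm shell):

    # Copy db/, db.slow/ and db.wal/ out of BlueFS into an ordinary directory
    ceph-bluestore-tool bluefs-export --path /var/lib/ceph/osd/ceph-1 --out-dir /tmp/osd1-bluefs
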
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                         Options.error_if_exists: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                       Options.create_if_missing: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                         Options.paranoid_checks: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                                     Options.env: 0x5562f72f3d50
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                                Options.info_log: 0x5562f64e67e0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                Options.max_file_opening_threads: 16
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                              Options.statistics: (nil)
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                               Options.use_fsync: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                       Options.max_log_file_size: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                       Options.keep_log_file_num: 1000
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                    Options.recycle_log_file_num: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                         Options.allow_fallocate: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                        Options.allow_mmap_reads: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                       Options.allow_mmap_writes: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                        Options.use_direct_reads: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:          Options.create_missing_column_families: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                              Options.db_log_dir: 
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                                 Options.wal_dir: db.wal
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                Options.table_cache_numshardbits: 6
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                   Options.advise_random_on_open: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                    Options.db_write_buffer_size: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                    Options.write_buffer_manager: 0x5562f73f8460
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                            Options.rate_limiter: (nil)
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                       Options.wal_recovery_mode: 2
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                  Options.enable_thread_tracking: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                  Options.enable_pipelined_write: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                  Options.unordered_write: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                               Options.row_cache: None
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                              Options.wal_filter: None
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:             Options.allow_ingest_behind: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:             Options.two_write_queues: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:             Options.manual_wal_flush: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:             Options.wal_compression: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:             Options.atomic_flush: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                 Options.log_readahead_size: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                 Options.best_efforts_recovery: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:             Options.allow_data_in_errors: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:             Options.db_host_id: __hostname__
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:             Options.enforce_single_del_contracts: true
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:             Options.max_background_jobs: 4
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:             Options.max_background_compactions: -1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:             Options.max_subcompactions: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:             Options.delayed_write_rate : 16777216
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                          Options.max_open_files: -1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                          Options.bytes_per_sync: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                  Options.max_background_flushes: -1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Compression algorithms supported:
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:     kZSTD supported: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:     kXpressCompression supported: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:     kBZip2Compression supported: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:     kZSTDNotFinalCompression supported: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:     kLZ4Compression supported: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:     kZlibCompression supported: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:     kLZ4HCCompression supported: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:     kSnappyCompression supported: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Fast CRC32 supported: Supported on x86
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: DMutex implementation: pthread_mutex_t
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:        Options.compaction_filter: None
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5562f64e6200)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5562f64d31f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:          Options.compression: LZ4
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:             Options.num_levels: 7
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                           Options.bloom_locality: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                               Options.ttl: 2592000
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                       Options.enable_blob_files: false
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                           Options.min_blob_size: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
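
The [default] block above is repeated, nearly verbatim, for each of BlueStore's sharded column families ([m-0] and [m-1] below, and so on), so the dump is long but low-information. The sharding layout and the shared tuning string come from two OSD options, readable at runtime (option names as in Reef):

    ceph config get osd.1 bluestore_rocksdb_cfs       # column-family sharding spec
    ceph config get osd.1 bluestore_rocksdb_options   # base RocksDB tuning string
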
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:           Options.merge_operator: None
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:        Options.compaction_filter: None
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5562f64e6200)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5562f64d31f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:          Options.compression: LZ4
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:             Options.num_levels: 7
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                           Options.bloom_locality: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                               Options.ttl: 2592000
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                       Options.enable_blob_files: false
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                           Options.min_blob_size: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:           Options.merge_operator: None
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:        Options.compaction_filter: None
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5562f64e6200)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5562f64d31f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:          Options.compression: LZ4
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:             Options.num_levels: 7
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                           Options.bloom_locality: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                               Options.ttl: 2592000
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                       Options.enable_blob_files: false
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                           Options.min_blob_size: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
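The sizing options in the dump above pin down the LSM level targets for this column family. With level_compaction_dynamic_level_bytes at 0, RocksDB sizes levels statically: L1 gets max_bytes_for_level_base, and each deeper level grows by max_bytes_for_level_multiplier times the per-level addtl factor (all 1 here, so they drop out). A minimal back-of-the-envelope sketch in plain Python, values copied from the log (the recurrence is my reading of the static rule, not quoted from RocksDB source):

    # Static level targets from the options logged above.
    base = 1073741824              # Options.max_bytes_for_level_base (1 GiB)
    multiplier = 8.0               # Options.max_bytes_for_level_multiplier
    addtl = [1, 1, 1, 1, 1, 1, 1]  # Options.max_bytes_for_level_multiplier_addtl
    num_levels = 7                 # Options.num_levels

    target = float(base)
    for level in range(1, num_levels):
        print(f"L{level} target: {target / 2**30:g} GiB")
        target *= multiplier * addtl[level - 1]

Run as-is this prints 1, 8, 64, 512, 4096 and 32768 GiB for L1 through L6; target_file_size_base (67108864) then fixes how each level is cut into roughly 64 MiB SSTs.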
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:           Options.merge_operator: None
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:        Options.compaction_filter: None
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5562f64e6200)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5562f64d31f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:          Options.compression: LZ4
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:             Options.num_levels: 7
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                           Options.bloom_locality: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                               Options.ttl: 2592000
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                       Options.enable_blob_files: false
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                           Options.min_blob_size: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
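The write-path knobs repeated in each of these dumps also yield some useful arithmetic. A small sketch (plain Python; the assumption that one flush merges min_write_buffer_number_to_merge full memtables into a single L0 file is mine):

    # Memtable and L0 arithmetic from the column-family options above.
    write_buffer_size = 16777216   # Options.write_buffer_size (16 MiB)
    min_merge = 6                  # Options.min_write_buffer_number_to_merge
    max_buffers = 64               # Options.max_write_buffer_number
    l0_trigger, l0_slow, l0_stop = 8, 20, 36  # Options.level0_* triggers

    flush = write_buffer_size * min_merge
    print(f"typical L0 file:         {flush / 2**20:.0f} MiB")
    print(f"worst-case memtable RAM: {write_buffer_size * max_buffers / 2**30:.0f} GiB")
    print(f"L0 at compaction start:  {l0_trigger * flush / 2**20:.0f} MiB")
    print(f"L0 at write slowdown:    {l0_slow * flush / 2**20:.0f} MiB")
    print(f"L0 at write stop:        {l0_stop * flush / 2**20:.0f} MiB")

So under these settings a flush lands roughly 96 MiB in L0, compaction kicks in around 768 MiB of L0 data, writes throttle at 20 L0 files and stop entirely at 36.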
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:           Options.merge_operator: None
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:        Options.compaction_filter: None
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5562f64e6200)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5562f64d31f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:          Options.compression: LZ4
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:             Options.num_levels: 7
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:27 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                           Options.bloom_locality: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                               Options.ttl: 2592000
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                       Options.enable_blob_files: false
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                           Options.min_blob_size: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
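Since every column family logs the same set of Options lines, diffing them by eye is error-prone, but the key: value shape parses easily. A self-contained sketch (Python standard library only; the /var/log path in the usage comment is illustrative):

    import re
    from collections import defaultdict

    CF_RE = re.compile(r"Options for column family \[([^\]]+)\]")
    OPT_RE = re.compile(r"rocksdb:\s+(Options\.[\w.\[\]]+):\s+(.+?)\s*$")

    def parse_cf_options(lines):
        """Group the 'Options.<name>: <value>' log lines by column family."""
        opts, cf = defaultdict(dict), None
        for line in lines:
            hdr = CF_RE.search(line)
            if hdr:
                cf = hdr.group(1)
                continue
            kv = OPT_RE.search(line)
            if kv and cf is not None:
                opts[cf][kv.group(1)] = kv.group(2)
        return opts

    # Usage: confirm that p-0 and p-1 really are configured identically.
    # with open("/var/log/messages") as f:
    #     opts = parse_cf_options(f)
    # print({k: (opts["p-0"][k], opts["p-1"].get(k))
    #        for k in opts["p-0"] if opts["p-0"][k] != opts["p-1"].get(k)})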
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:           Options.merge_operator: None
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:        Options.compaction_filter: None
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5562f64e6200)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5562f64d31f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:          Options.compression: LZ4
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:             Options.num_levels: 7
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                           Options.bloom_locality: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                               Options.ttl: 2592000
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                       Options.enable_blob_files: false
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                           Options.min_blob_size: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
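Each dump also registers a CompactOnDeletionCollector (sliding window 32768, deletion trigger 16384, ratio 0), presumably so tombstone-heavy SSTs get compacted early. A sketch of that rule as I read it (assumption: a file is flagged once any window of 32768 consecutive entries holds at least 16384 deletes; ratio 0 disables the whole-file ratio check):

    from collections import deque

    WINDOW, TRIGGER = 32768, 16384   # values from the collector line above

    def flags_for_compaction(entries):
        """entries: iterable of bools, True meaning a deletion tombstone."""
        window, deletes = deque(), 0
        for is_delete in entries:
            window.append(is_delete)
            deletes += is_delete
            if len(window) > WINDOW:
                deletes -= window.popleft()
            if deletes >= TRIGGER:
                return True
        return False

    print(flags_for_compaction([True] * 16384))    # True: dense tombstone run
    print(flags_for_compaction([False] * 100000))  # False: no deletes at all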
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:           Options.merge_operator: None
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:        Options.compaction_filter: None
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5562f64e6200)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5562f64d31f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:          Options.compression: LZ4
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:             Options.num_levels: 7
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                           Options.bloom_locality: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                               Options.ttl: 2592000
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                       Options.enable_blob_files: false
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                           Options.min_blob_size: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
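The long table_factory lines are single syslog records with embedded control characters escaped as #<octal>: #012 is a newline, #011 a tab. Undoing the escaping restores the multi-line BlockBasedTable dump RocksDB actually wrote:

    def unescape_syslog(msg: str) -> str:
        """Reverse rsyslog's #<octal> control-character escaping."""
        return msg.replace("#012", "\n").replace("#011", "\t")

    sample = "block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4"
    print(unescape_syslog(sample))

Decoded this way, the [O-0] family that follows can be seen to get its own BinnedLRUCache (block_cache 0x5562f64d3090, capacity 536870912), while the m-* and p-* families all share the cache at 0x5562f64d31f0 (capacity 483183820).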
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:           Options.merge_operator: None
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:        Options.compaction_filter: None
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5562f64e6180)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5562f64d3090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:          Options.compression: LZ4
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:             Options.num_levels: 7
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                           Options.bloom_locality: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                               Options.ttl: 2592000
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                       Options.enable_blob_files: false
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                           Options.min_blob_size: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:           Options.merge_operator: None
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:        Options.compaction_filter: None
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5562f64e6180)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5562f64d3090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:          Options.compression: LZ4
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:             Options.num_levels: 7
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                           Options.bloom_locality: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                               Options.ttl: 2592000
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                       Options.enable_blob_files: false
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                           Options.min_blob_size: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:           Options.merge_operator: None
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:        Options.compaction_filter: None
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5562f64e6180)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5562f64d3090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:          Options.compression: LZ4
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:             Options.num_levels: 7
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                           Options.bloom_locality: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                               Options.ttl: 2592000
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                       Options.enable_blob_files: false
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                           Options.min_blob_size: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: [db/column_family.cc:635]     (skipping printing options)
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: [db/column_family.cc:635]     (skipping printing options)
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file: db/MANIFEST-000032 succeeded, manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5, prev_log_number is 0, max_column_family is 11, min_log_number_to_keep is 5
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: c4115f09-5dc7-4349-89f5-f532c1c1fd46
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764785067814581, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764785067814857, "job": 1, "event": "recovery_finished"}
Dec  3 18:04:27 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec  3 18:04:27 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Dec  3 18:04:27 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Dec  3 18:04:27 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Dec  3 18:04:27 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Dec  3 18:04:27 compute-0 ceph-osd[207851]: freelist init
Dec  3 18:04:27 compute-0 ceph-osd[207851]: freelist _read_cfg
Dec  3 18:04:27 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec  3 18:04:27 compute-0 ceph-osd[207851]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec  3 18:04:27 compute-0 ceph-osd[207851]: bluefs umount
Dec  3 18:04:27 compute-0 ceph-osd[207851]: bdev(0x5562f7323400 /var/lib/ceph/osd/ceph-1/block) close
Dec  3 18:04:27 compute-0 podman[208208]: 2025-12-03 18:04:27.941645665 +0000 UTC m=+0.071888762 container create 6874e72a57a68a4e32a7614893795556fad38df8ea92d7fd341683099eb6a0ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec  3 18:04:27 compute-0 systemd[1]: Started libpod-conmon-6874e72a57a68a4e32a7614893795556fad38df8ea92d7fd341683099eb6a0ce.scope.
Dec  3 18:04:28 compute-0 podman[208208]: 2025-12-03 18:04:27.907030886 +0000 UTC m=+0.037274063 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: bdev(0x5562f7323400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  3 18:04:28 compute-0 ceph-osd[207851]: bdev(0x5562f7323400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  3 18:04:28 compute-0 ceph-osd[207851]: bdev(0x5562f7323400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 18:04:28 compute-0 ceph-osd[207851]: bdev(0x5562f7323400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 18:04:28 compute-0 ceph-osd[207851]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Dec  3 18:04:28 compute-0 ceph-osd[207851]: bluefs mount
Dec  3 18:04:28 compute-0 ceph-osd[207851]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec  3 18:04:28 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:04:28 compute-0 ceph-osd[207851]: bluefs mount shared_bdev_used = 4718592
Dec  3 18:04:28 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: RocksDB version: 7.9.2
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Git sha 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Compile date 2025-05-06 23:30:25
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: DB SUMMARY
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: DB Session ID:  13YRN684H3IBL1RWATLC
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: CURRENT file:  CURRENT
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: IDENTITY file:  IDENTITY
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                         Options.error_if_exists: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                       Options.create_if_missing: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                         Options.paranoid_checks: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                                     Options.env: 0x5562f663b960
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                                Options.info_log: 0x5562f67acec0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                Options.max_file_opening_threads: 16
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                              Options.statistics: (nil)
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                               Options.use_fsync: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                       Options.max_log_file_size: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                       Options.keep_log_file_num: 1000
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                    Options.recycle_log_file_num: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                         Options.allow_fallocate: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                        Options.allow_mmap_reads: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                       Options.allow_mmap_writes: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                        Options.use_direct_reads: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:          Options.create_missing_column_families: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                              Options.db_log_dir: 
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                                 Options.wal_dir: db.wal
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                Options.table_cache_numshardbits: 6
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                   Options.advise_random_on_open: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                    Options.db_write_buffer_size: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                    Options.write_buffer_manager: 0x5562f73f86e0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                            Options.rate_limiter: (nil)
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                       Options.wal_recovery_mode: 2
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                  Options.enable_thread_tracking: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                  Options.enable_pipelined_write: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                  Options.unordered_write: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                               Options.row_cache: None
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                              Options.wal_filter: None
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:             Options.allow_ingest_behind: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:             Options.two_write_queues: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:             Options.manual_wal_flush: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:             Options.wal_compression: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:             Options.atomic_flush: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                 Options.log_readahead_size: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                 Options.best_efforts_recovery: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:             Options.allow_data_in_errors: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:             Options.db_host_id: __hostname__
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:             Options.enforce_single_del_contracts: true
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:             Options.max_background_jobs: 4
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:             Options.max_background_compactions: -1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:             Options.max_subcompactions: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:             Options.delayed_write_rate : 16777216
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                          Options.max_open_files: -1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                          Options.bytes_per_sync: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                  Options.max_background_flushes: -1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Compression algorithms supported:
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:     kZSTD supported: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:     kXpressCompression supported: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:     kBZip2Compression supported: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:     kZSTDNotFinalCompression supported: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:     kLZ4Compression supported: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:     kZlibCompression supported: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:     kLZ4HCCompression supported: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:     kSnappyCompression supported: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Fast CRC32 supported: Supported on x86
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: DMutex implementation: pthread_mutex_t
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:        Options.compaction_filter: None
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5562f72ef740)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5562f64d34b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:          Options.compression: LZ4
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:             Options.num_levels: 7
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                           Options.bloom_locality: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                               Options.ttl: 2592000
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                       Options.enable_blob_files: false
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                           Options.min_blob_size: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:           Options.merge_operator: None
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:        Options.compaction_filter: None
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5562f72ef740)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5562f64d34b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:          Options.compression: LZ4
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:             Options.num_levels: 7
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                           Options.bloom_locality: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                               Options.ttl: 2592000
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                       Options.enable_blob_files: false
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                           Options.min_blob_size: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:           Options.merge_operator: None
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:        Options.compaction_filter: None
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5562f72ef740)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5562f64d34b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:          Options.compression: LZ4
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:             Options.num_levels: 7
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                           Options.bloom_locality: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                               Options.ttl: 2592000
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                       Options.enable_blob_files: false
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                           Options.min_blob_size: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:           Options.merge_operator: None
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:        Options.compaction_filter: None
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5562f72ef740)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5562f64d34b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:          Options.compression: LZ4
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:             Options.num_levels: 7
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                           Options.bloom_locality: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                               Options.ttl: 2592000
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                       Options.enable_blob_files: false
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                           Options.min_blob_size: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:           Options.merge_operator: None
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:        Options.compaction_filter: None
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5562f72ef740)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5562f64d34b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:          Options.compression: LZ4
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:             Options.num_levels: 7
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                           Options.bloom_locality: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                               Options.ttl: 2592000
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                       Options.enable_blob_files: false
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                           Options.min_blob_size: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:           Options.merge_operator: None
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:        Options.compaction_filter: None
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5562f72ef740)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5562f64d34b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:          Options.compression: LZ4
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:             Options.num_levels: 7
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                           Options.bloom_locality: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                               Options.ttl: 2592000
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                       Options.enable_blob_files: false
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                           Options.min_blob_size: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
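
The block that ends above fixes level sizing at max_bytes_for_level_base = 1073741824 (1 GiB) with max_bytes_for_level_multiplier = 8 and every addtl multiplier at 1, and level_compaction_dynamic_level_bytes is 0, so the per-level targets compose statically. A minimal sketch of that arithmetic (Python; the variable names are ours, and the addtl indexing is approximate, which is harmless here since every entry is 1):

    # Per-level target capacity under static level sizing, using the values
    # logged above. Illustrative arithmetic, not a RocksDB API.
    base = 1073741824          # Options.max_bytes_for_level_base (1 GiB)
    mult = 8.0                 # Options.max_bytes_for_level_multiplier
    addtl = [1] * 7            # Options.max_bytes_for_level_multiplier_addtl[0..6]
    num_levels = 7             # Options.num_levels

    for level in range(1, num_levels):
        cap = base
        for l in range(1, level):
            cap *= mult * addtl[l]
        print(f"L{level}: {cap / 2**30:.0f} GiB")
    # L1: 1 GiB, L2: 8 GiB, L3: 64 GiB, L4: 512 GiB, L5: 4096 GiB, L6: 32768 GiB
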
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:           Options.merge_operator: None
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:        Options.compaction_filter: None
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5562f72ef740)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5562f64d34b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:          Options.compression: LZ4
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:             Options.num_levels: 7
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                           Options.bloom_locality: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                               Options.ttl: 2592000
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                       Options.enable_blob_files: false
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                           Options.min_blob_size: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
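
Each column family prints the same flat "Options.<name>: <value>" listing, so auditing an OSD's effective settings is easiest done programmatically. A minimal sketch (Python; the regex and function name are ours, tied to the line shape in this log, and feeding it a whole file would merge all column families into one dict):

    import re

    # Fold "Options.<name>: <value>" journal lines into a dict.
    OPT_RE = re.compile(r"rocksdb:\s+Options\.([\w.\[\]]+):\s+(.*)$")

    def parse_cf_options(lines):
        opts = {}
        for line in lines:
            m = OPT_RE.search(line)
            if m:
                opts[m.group(1)] = m.group(2).strip()
        return opts

    # e.g. parse_cf_options(open("messages"))["max_write_buffer_number"] == "64"
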
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:           Options.merge_operator: None
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:        Options.compaction_filter: None
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5562f72ef760)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5562f64d3350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:          Options.compression: LZ4
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:             Options.num_levels: 7
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                           Options.bloom_locality: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                               Options.ttl: 2592000
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                       Options.enable_blob_files: false
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                           Options.min_blob_size: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
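
The write-path settings repeated above (write_buffer_size 16777216, max_write_buffer_number 64, min_write_buffer_number_to_merge 6) bound the memtable footprint per column family: a flush is built from six 16 MiB memtables, and up to 64 may exist before writes stall. A back-of-the-envelope sketch (Python, plain arithmetic on the logged values):

    write_buffer_size = 16_777_216        # 16 MiB per memtable
    max_write_buffer_number = 64          # cap on live + immutable memtables
    min_to_merge = 6                      # immutable memtables merged per flush

    flush_batch = write_buffer_size * min_to_merge             # ~96 MiB per flush
    worst_case = write_buffer_size * max_write_buffer_number   # ~1 GiB ceiling
    print(f"flush batch: {flush_batch / 2**20:.0f} MiB, "
          f"worst case: {worst_case / 2**30:.0f} GiB")
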
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:           Options.merge_operator: None
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:        Options.compaction_filter: None
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5562f72ef760)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5562f64d3350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:          Options.compression: LZ4
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:             Options.num_levels: 7
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                           Options.bloom_locality: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                               Options.ttl: 2592000
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                       Options.enable_blob_files: false
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                           Options.min_blob_size: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:           Options.merge_operator: None
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:        Options.compaction_filter: None
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 18:04:28 compute-0 podman[208208]: 2025-12-03 18:04:28.061665079 +0000 UTC m=+0.191908186 container init 6874e72a57a68a4e32a7614893795556fad38df8ea92d7fd341683099eb6a0ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:04:28 compute-0 podman[208208]: 2025-12-03 18:04:28.071523045 +0000 UTC m=+0.201766162 container start 6874e72a57a68a4e32a7614893795556fad38df8ea92d7fd341683099eb6a0ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_murdock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5562f72ef760)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5562f64d3350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:          Options.compression: LZ4
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:             Options.num_levels: 7
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                           Options.bloom_locality: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                               Options.ttl: 2592000
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                       Options.enable_blob_files: false
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                           Options.min_blob_size: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec  3 18:04:28 compute-0 flamboyant_murdock[208224]: 167 167
Dec  3 18:04:28 compute-0 systemd[1]: libpod-6874e72a57a68a4e32a7614893795556fad38df8ea92d7fd341683099eb6a0ce.scope: Deactivated successfully.
Dec  3 18:04:28 compute-0 conmon[208224]: conmon 6874e72a57a68a4e32a7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6874e72a57a68a4e32a7614893795556fad38df8ea92d7fd341683099eb6a0ce.scope/container/memory.events
Dec  3 18:04:28 compute-0 podman[208208]: 2025-12-03 18:04:28.082148268 +0000 UTC m=+0.212391365 container attach 6874e72a57a68a4e32a7614893795556fad38df8ea92d7fd341683099eb6a0ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_murdock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Dec  3 18:04:28 compute-0 podman[208208]: 2025-12-03 18:04:28.082944578 +0000 UTC m=+0.213187695 container died 6874e72a57a68a4e32a7614893795556fad38df8ea92d7fd341683099eb6a0ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
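
The podman lines above trace a short-lived helper container (init at .061, start at .071, attach and died at .082), which is consistent with a cephadm-style one-shot utility run rather than a long-lived daemon. A minimal sketch (Python; the helper is ours and assumes podman's "2025-12-03 18:04:28.061665079 +0000 UTC" timestamp shape) of measuring that lifetime:

    from datetime import datetime

    # Trim nanoseconds to microseconds and normalise "+0000 UTC" so that
    # datetime.fromisoformat() can parse the podman timestamps above.
    def parse_podman_ts(ts: str) -> datetime:
        date, time, tz, _utc = ts.split()
        return datetime.fromisoformat(f"{date} {time[:15]}{tz[:3]}:{tz[3:]}")

    init = parse_podman_ts("2025-12-03 18:04:28.061665079 +0000 UTC")
    died = parse_podman_ts("2025-12-03 18:04:28.082944578 +0000 UTC")
    print((died - init).total_seconds())   # ~0.021 s from init to died
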
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
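
Recovery enumerates all twelve column families (default, m-0..m-2, p-0..p-2, O-0..O-2, L, P; IDs 0-11, matching the max_column_family of 11 reported above), i.e. the sharded layout BlueStore creates in its RocksDB. A minimal sketch (Python; the regex is tailored to the version_set lines above) for pulling that list out of a log:

    import re

    # Extract column-family names and IDs from the recovery lines above.
    CF_RE = re.compile(r"Column family \[([^\]]+)\] \(ID (\d+)\)")

    def column_families(lines):
        return {m.group(1): int(m.group(2))
                for line in lines if (m := CF_RE.search(line))}

    # -> {'default': 0, 'm-0': 1, ..., 'L': 10, 'P': 11}
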
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: c4115f09-5dc7-4349-89f5-f532c1c1fd46
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764785068093555, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764785068103931, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764785068, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4115f09-5dc7-4349-89f5-f532c1c1fd46", "db_session_id": "13YRN684H3IBL1RWATLC", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764785068120612, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764785068, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4115f09-5dc7-4349-89f5-f532c1c1fd46", "db_session_id": "13YRN684H3IBL1RWATLC", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Dec  3 18:04:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-866105624e018986533765543d61918b1af2b5d65a45c07cb71d4e4e891fbfe8-merged.mount: Deactivated successfully.
Dec  3 18:04:28 compute-0 ceph-mon[192802]: Deploying daemon osd.2 on compute-0
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764785068135052, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764785068, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4115f09-5dc7-4349-89f5-f532c1c1fd46", "db_session_id": "13YRN684H3IBL1RWATLC", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764785068144576, "job": 1, "event": "recovery_finished"}
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Dec  3 18:04:28 compute-0 podman[208208]: 2025-12-03 18:04:28.163904426 +0000 UTC m=+0.294147523 container remove 6874e72a57a68a4e32a7614893795556fad38df8ea92d7fd341683099eb6a0ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_murdock, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec  3 18:04:28 compute-0 systemd[1]: libpod-conmon-6874e72a57a68a4e32a7614893795556fad38df8ea92d7fd341683099eb6a0ce.scope: Deactivated successfully.
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5562f6641c00
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: DB pointer 0x5562f73dda00
Dec  3 18:04:28 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
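The option string in the line above is BlueStore's effective RocksDB tuning, which is driven by the bluestore_rocksdb_options setting. A minimal sketch for inspecting it on a live cluster follows; the commands are illustrative, not taken from this log, and assume an admin keyring (and, for the second form, access to the daemon's admin socket on this host):

    ceph config get osd.1 bluestore_rocksdb_options
    ceph daemon osd.1 config show | grep bluestore_rocksdb_options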
Dec  3 18:04:28 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Dec  3 18:04:28 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 18:04:28 compute-0 ceph-osd[207851]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 0.2 total, 0.2 interval
Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.2 total, 0.2 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5562f64d34b0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.2 total, 0.2 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5562f64d34b0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.2 total, 0.2 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Dec  3 18:04:28 compute-0 ceph-osd[207851]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Dec  3 18:04:28 compute-0 ceph-osd[207851]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Dec  3 18:04:28 compute-0 ceph-osd[207851]: _get_class not permitted to load lua
Dec  3 18:04:28 compute-0 ceph-osd[207851]: _get_class not permitted to load sdk
Dec  3 18:04:28 compute-0 ceph-osd[207851]: _get_class not permitted to load test_remote_reads
Dec  3 18:04:28 compute-0 ceph-osd[207851]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Dec  3 18:04:28 compute-0 ceph-osd[207851]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Dec  3 18:04:28 compute-0 ceph-osd[207851]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Dec  3 18:04:28 compute-0 ceph-osd[207851]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Dec  3 18:04:28 compute-0 ceph-osd[207851]: osd.1 0 load_pgs
Dec  3 18:04:28 compute-0 ceph-osd[207851]: osd.1 0 load_pgs opened 0 pgs
Dec  3 18:04:28 compute-0 ceph-osd[207851]: osd.1 0 log_to_monitors true
Dec  3 18:04:28 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-1[207847]: 2025-12-03T18:04:28.220+0000 7f954b671740 -1 osd.1 0 log_to_monitors true
Dec  3 18:04:28 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0) v1
Dec  3 18:04:28 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1880366476,v1:192.168.122.100:6807/1880366476]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Dec  3 18:04:28 compute-0 ceph-osd[206694]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 18.680 iops: 4782.010 elapsed_sec: 0.627
Dec  3 18:04:28 compute-0 ceph-osd[206694]: log_channel(cluster) log [WRN] : OSD bench result of 4782.010009 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
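The warning above is the mClock scheduler's sanity check: the measured 4782 IOPS is implausibly high for an HDD-class device (outside the 50.0-500.0 IOPS window), so the default capacity of 315 IOPS is kept. A minimal sketch of the remediation the message itself suggests, assuming an HDD-backed OSD; the fio target path and the final 120 IOPS figure are placeholders, not values from this log:

    # Measure steady-state random-write IOPS with fio (direct I/O, libaio)
    fio --name=osd-bench --filename=/path/to/osd-scratch --ioengine=libaio --direct=1 \
        --rw=randwrite --bs=4k --iodepth=16 --runtime=60 --time_based

    # Pin the measured capacity for this OSD, overriding the default
    ceph config set osd.0 osd_mclock_max_capacity_iops_hdd 120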
Dec  3 18:04:28 compute-0 ceph-osd[206694]: osd.0 0 waiting for initial osdmap
Dec  3 18:04:28 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-0[206640]: 2025-12-03T18:04:28.240+0000 7f21d97df640 -1 osd.0 0 waiting for initial osdmap
Dec  3 18:04:28 compute-0 ceph-osd[206694]: osd.0 8 crush map has features 288514050185494528, adjusting msgr requires for clients
Dec  3 18:04:28 compute-0 ceph-osd[206694]: osd.0 8 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Dec  3 18:04:28 compute-0 ceph-osd[206694]: osd.0 8 crush map has features 3314932999778484224, adjusting msgr requires for osds
Dec  3 18:04:28 compute-0 ceph-osd[206694]: osd.0 8 check_osdmap_features require_osd_release unknown -> reef
Dec  3 18:04:28 compute-0 ceph-osd[206694]: osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec  3 18:04:28 compute-0 ceph-osd[206694]: osd.0 8 set_numa_affinity not setting numa affinity
Dec  3 18:04:28 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-0[206640]: 2025-12-03T18:04:28.273+0000 7f21d4e07640 -1 osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec  3 18:04:28 compute-0 ceph-osd[206694]: osd.0 8 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Dec  3 18:04:28 compute-0 podman[208471]: 2025-12-03 18:04:28.442840324 +0000 UTC m=+0.049895866 container create e02545e4ab0433cc06f10b5584bf878c863655696e1c58f387dde520faa6f36b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-2-activate-test, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:04:28 compute-0 ceph-mgr[193091]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3545763200; not ready for session (expect reconnect)
Dec  3 18:04:28 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec  3 18:04:28 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  3 18:04:28 compute-0 ceph-mgr[193091]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  3 18:04:28 compute-0 systemd[1]: Started libpod-conmon-e02545e4ab0433cc06f10b5584bf878c863655696e1c58f387dde520faa6f36b.scope.
Dec  3 18:04:28 compute-0 podman[208471]: 2025-12-03 18:04:28.425496599 +0000 UTC m=+0.032552161 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:04:28 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:04:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afb3ea4870898b6d3cbc9d1d2f858f4211434a7c7c48a18d39eae678b14bf4e5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afb3ea4870898b6d3cbc9d1d2f858f4211434a7c7c48a18d39eae678b14bf4e5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afb3ea4870898b6d3cbc9d1d2f858f4211434a7c7c48a18d39eae678b14bf4e5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afb3ea4870898b6d3cbc9d1d2f858f4211434a7c7c48a18d39eae678b14bf4e5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afb3ea4870898b6d3cbc9d1d2f858f4211434a7c7c48a18d39eae678b14bf4e5/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:28 compute-0 podman[208471]: 2025-12-03 18:04:28.588514472 +0000 UTC m=+0.195570064 container init e02545e4ab0433cc06f10b5584bf878c863655696e1c58f387dde520faa6f36b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-2-activate-test, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:04:28 compute-0 podman[208471]: 2025-12-03 18:04:28.619411081 +0000 UTC m=+0.226466623 container start e02545e4ab0433cc06f10b5584bf878c863655696e1c58f387dde520faa6f36b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-2-activate-test, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  3 18:04:28 compute-0 podman[208471]: 2025-12-03 18:04:28.626506591 +0000 UTC m=+0.233562173 container attach e02545e4ab0433cc06f10b5584bf878c863655696e1c58f387dde520faa6f36b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-2-activate-test, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  3 18:04:29 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Dec  3 18:04:29 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  3 18:04:29 compute-0 ceph-mon[192802]: from='osd.1 [v2:192.168.122.100:6806/1880366476,v1:192.168.122.100:6807/1880366476]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Dec  3 18:04:29 compute-0 ceph-mon[192802]: OSD bench result of 4782.010009 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec  3 18:04:29 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1880366476,v1:192.168.122.100:6807/1880366476]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Dec  3 18:04:29 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e9 e9: 3 total, 1 up, 3 in
Dec  3 18:04:29 compute-0 ceph-mon[192802]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/3545763200,v1:192.168.122.100:6803/3545763200] boot
Dec  3 18:04:29 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Dec  3 18:04:29 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Dec  3 18:04:29 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e9: 3 total, 1 up, 3 in
Dec  3 18:04:29 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Dec  3 18:04:29 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1880366476,v1:192.168.122.100:6807/1880366476]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Dec  3 18:04:29 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e9 create-or-move crush item name 'osd.1' initial_weight 0.0195 at location {host=compute-0,root=default}
Dec  3 18:04:29 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec  3 18:04:29 compute-0 ceph-mgr[193091]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  3 18:04:29 compute-0 ceph-mgr[193091]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  3 18:04:29 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  3 18:04:29 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec  3 18:04:29 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  3 18:04:29 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec  3 18:04:29 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  3 18:04:29 compute-0 ceph-osd[206694]: osd.0 8 tick checking mon for new map
Dec  3 18:04:29 compute-0 ceph-osd[206694]: osd.0 9 state: booting -> active
Dec  3 18:04:29 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-2-activate-test[208486]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Dec  3 18:04:29 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-2-activate-test[208486]:                            [--no-systemd] [--no-tmpfs]
Dec  3 18:04:29 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-2-activate-test[208486]: ceph-volume activate: error: unrecognized arguments: --bad-option
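The container above (note the -activate-test suffix in its name) exits on a deliberately unrecognized flag, which appears to be the deployment tooling probing ceph-volume's usage output rather than a fault. Judging from the usage text printed just before the error, a valid invocation would look like the sketch below; the UUID is a placeholder, not a value from this log:

    ceph-volume activate --osd-id 2 --osd-uuid <osd-fsid> --no-systemd --no-tmpfs

The real activation that follows (the osd-2-activate container further down) reaches the same result via ceph-volume raw activate.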
Dec  3 18:04:29 compute-0 systemd[1]: libpod-e02545e4ab0433cc06f10b5584bf878c863655696e1c58f387dde520faa6f36b.scope: Deactivated successfully.
Dec  3 18:04:29 compute-0 podman[208471]: 2025-12-03 18:04:29.317040283 +0000 UTC m=+0.924095855 container died e02545e4ab0433cc06f10b5584bf878c863655696e1c58f387dde520faa6f36b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-2-activate-test, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  3 18:04:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-afb3ea4870898b6d3cbc9d1d2f858f4211434a7c7c48a18d39eae678b14bf4e5-merged.mount: Deactivated successfully.
Dec  3 18:04:29 compute-0 podman[208471]: 2025-12-03 18:04:29.393259938 +0000 UTC m=+1.000315480 container remove e02545e4ab0433cc06f10b5584bf878c863655696e1c58f387dde520faa6f36b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-2-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec  3 18:04:29 compute-0 systemd[1]: libpod-conmon-e02545e4ab0433cc06f10b5584bf878c863655696e1c58f387dde520faa6f36b.scope: Deactivated successfully.
Dec  3 18:04:29 compute-0 podman[158200]: time="2025-12-03T18:04:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:04:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:04:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27362 "" "Go-http-client/1.1"
Dec  3 18:04:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:04:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5325 "" "Go-http-client/1.1"
Dec  3 18:04:29 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v38: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Dec  3 18:04:29 compute-0 systemd[1]: Reloading.
Dec  3 18:04:29 compute-0 ceph-mgr[193091]: [devicehealth INFO root] creating mgr pool
Dec  3 18:04:29 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0) v1
Dec  3 18:04:29 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Dec  3 18:04:30 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 18:04:30 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 18:04:30 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Dec  3 18:04:30 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  3 18:04:30 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1880366476,v1:192.168.122.100:6807/1880366476]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec  3 18:04:30 compute-0 ceph-osd[207851]: osd.1 0 done with init, starting boot process
Dec  3 18:04:30 compute-0 ceph-osd[207851]: osd.1 0 start_boot
Dec  3 18:04:30 compute-0 ceph-osd[207851]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Dec  3 18:04:30 compute-0 ceph-osd[207851]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Dec  3 18:04:30 compute-0 ceph-osd[207851]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Dec  3 18:04:30 compute-0 ceph-osd[207851]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Dec  3 18:04:30 compute-0 ceph-osd[207851]: osd.1 0  bench count 12288000 bsize 4 KiB
Dec  3 18:04:30 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Dec  3 18:04:30 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e10 e10: 3 total, 1 up, 3 in
Dec  3 18:04:30 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e10 crush map has features 3314933000852226048, adjusting msgr requires
Dec  3 18:04:30 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Dec  3 18:04:30 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Dec  3 18:04:30 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Dec  3 18:04:30 compute-0 ceph-mon[192802]: from='osd.1 [v2:192.168.122.100:6806/1880366476,v1:192.168.122.100:6807/1880366476]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Dec  3 18:04:30 compute-0 ceph-mon[192802]: osd.0 [v2:192.168.122.100:6802/3545763200,v1:192.168.122.100:6803/3545763200] boot
Dec  3 18:04:30 compute-0 ceph-mon[192802]: from='osd.1 [v2:192.168.122.100:6806/1880366476,v1:192.168.122.100:6807/1880366476]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Dec  3 18:04:30 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Dec  3 18:04:30 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e10: 3 total, 1 up, 3 in
Dec  3 18:04:30 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec  3 18:04:30 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  3 18:04:30 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec  3 18:04:30 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  3 18:04:30 compute-0 ceph-mgr[193091]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  3 18:04:30 compute-0 ceph-mgr[193091]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  3 18:04:30 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0) v1
Dec  3 18:04:30 compute-0 ceph-osd[206694]: osd.0 10 crush map has features 288514051259236352, adjusting msgr requires for clients
Dec  3 18:04:30 compute-0 ceph-osd[206694]: osd.0 10 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Dec  3 18:04:30 compute-0 ceph-osd[206694]: osd.0 10 crush map has features 3314933000852226048, adjusting msgr requires for osds
Dec  3 18:04:30 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Dec  3 18:04:30 compute-0 ceph-mgr[193091]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1880366476; not ready for session (expect reconnect)
Dec  3 18:04:30 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec  3 18:04:30 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  3 18:04:30 compute-0 ceph-mgr[193091]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  3 18:04:30 compute-0 systemd[1]: Reloading.
Dec  3 18:04:30 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 18:04:30 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 18:04:30 compute-0 systemd[1]: Starting Ceph osd.2 for c1caf3ba-b2a5-5005-a11e-e955c344dccc...
Dec  3 18:04:31 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Dec  3 18:04:31 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Dec  3 18:04:31 compute-0 podman[208646]: 2025-12-03 18:04:31.233192369 +0000 UTC m=+0.089986446 container create 3dce2bab55719d99535b95b3abb5e9a0757867c28ae06ff2b229ea51c8ebf61e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-2-activate, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  3 18:04:31 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e11 e11: 3 total, 1 up, 3 in
Dec  3 18:04:31 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e11: 3 total, 1 up, 3 in
Dec  3 18:04:31 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec  3 18:04:31 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  3 18:04:31 compute-0 ceph-mgr[193091]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  3 18:04:31 compute-0 ceph-mgr[193091]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  3 18:04:31 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec  3 18:04:31 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  3 18:04:31 compute-0 ceph-mgr[193091]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1880366476; not ready for session (expect reconnect)
Dec  3 18:04:31 compute-0 ceph-mon[192802]: from='osd.1 [v2:192.168.122.100:6806/1880366476,v1:192.168.122.100:6807/1880366476]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec  3 18:04:31 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Dec  3 18:04:31 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Dec  3 18:04:31 compute-0 podman[208646]: 2025-12-03 18:04:31.186677745 +0000 UTC m=+0.043471812 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:04:31 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec  3 18:04:31 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  3 18:04:31 compute-0 ceph-mgr[193091]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  3 18:04:31 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:04:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5d621ab4fa0b9cda78e676f38de825d17da4afc916ccdbcbbb5b05dd62f49e5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5d621ab4fa0b9cda78e676f38de825d17da4afc916ccdbcbbb5b05dd62f49e5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5d621ab4fa0b9cda78e676f38de825d17da4afc916ccdbcbbb5b05dd62f49e5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5d621ab4fa0b9cda78e676f38de825d17da4afc916ccdbcbbb5b05dd62f49e5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5d621ab4fa0b9cda78e676f38de825d17da4afc916ccdbcbbb5b05dd62f49e5/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:31 compute-0 podman[208646]: 2025-12-03 18:04:31.386385016 +0000 UTC m=+0.243179073 container init 3dce2bab55719d99535b95b3abb5e9a0757867c28ae06ff2b229ea51c8ebf61e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-2-activate, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:04:31 compute-0 podman[208646]: 2025-12-03 18:04:31.405283209 +0000 UTC m=+0.262077276 container start 3dce2bab55719d99535b95b3abb5e9a0757867c28ae06ff2b229ea51c8ebf61e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-2-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:04:31 compute-0 openstack_network_exporter[160319]: ERROR   18:04:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:04:31 compute-0 openstack_network_exporter[160319]: ERROR   18:04:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:04:31 compute-0 openstack_network_exporter[160319]: ERROR   18:04:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:04:31 compute-0 openstack_network_exporter[160319]: ERROR   18:04:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:04:31 compute-0 openstack_network_exporter[160319]: ERROR   18:04:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:04:31 compute-0 podman[208646]: 2025-12-03 18:04:31.428881304 +0000 UTC m=+0.285675431 container attach 3dce2bab55719d99535b95b3abb5e9a0757867c28ae06ff2b229ea51c8ebf61e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-2-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef)
Dec  3 18:04:31 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e11 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:04:31 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v41: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Dec  3 18:04:32 compute-0 ceph-mgr[193091]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1880366476; not ready for session (expect reconnect)
Dec  3 18:04:32 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec  3 18:04:32 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  3 18:04:32 compute-0 ceph-mgr[193091]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  3 18:04:32 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Dec  3 18:04:32 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-2-activate[208660]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec  3 18:04:32 compute-0 bash[208646]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec  3 18:04:32 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-2-activate[208660]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Dec  3 18:04:32 compute-0 bash[208646]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Dec  3 18:04:32 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-2-activate[208660]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Dec  3 18:04:32 compute-0 bash[208646]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Dec  3 18:04:32 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-2-activate[208660]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Dec  3 18:04:32 compute-0 bash[208646]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Dec  3 18:04:32 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-2-activate[208660]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Dec  3 18:04:32 compute-0 bash[208646]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Dec  3 18:04:32 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-2-activate[208660]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec  3 18:04:32 compute-0 bash[208646]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec  3 18:04:32 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-2-activate[208660]: --> ceph-volume raw activate successful for osd ID: 2
Dec  3 18:04:32 compute-0 bash[208646]: --> ceph-volume raw activate successful for osd ID: 2
Dec  3 18:04:32 compute-0 systemd[1]: libpod-3dce2bab55719d99535b95b3abb5e9a0757867c28ae06ff2b229ea51c8ebf61e.scope: Deactivated successfully.
Dec  3 18:04:32 compute-0 podman[208646]: 2025-12-03 18:04:32.562644327 +0000 UTC m=+1.419438364 container died 3dce2bab55719d99535b95b3abb5e9a0757867c28ae06ff2b229ea51c8ebf61e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-2-activate, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:04:32 compute-0 systemd[1]: libpod-3dce2bab55719d99535b95b3abb5e9a0757867c28ae06ff2b229ea51c8ebf61e.scope: Consumed 1.171s CPU time.
Dec  3 18:04:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5d621ab4fa0b9cda78e676f38de825d17da4afc916ccdbcbbb5b05dd62f49e5-merged.mount: Deactivated successfully.
Dec  3 18:04:32 compute-0 podman[208646]: 2025-12-03 18:04:32.737575615 +0000 UTC m=+1.594369652 container remove 3dce2bab55719d99535b95b3abb5e9a0757867c28ae06ff2b229ea51c8ebf61e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-2-activate, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:04:32 compute-0 podman[208813]: 2025-12-03 18:04:32.947643464 +0000 UTC m=+0.106410569 container health_status 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 18:04:33 compute-0 podman[208863]: 2025-12-03 18:04:33.0748746 +0000 UTC m=+0.065140970 container create 9f3fd301463eb2f589ec920ec14be6b0634651d166e999ee7010680c8b9618f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:04:33 compute-0 podman[208863]: 2025-12-03 18:04:33.048341265 +0000 UTC m=+0.038607675 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:04:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f7bc9e53f4030e5b7173b74f623b55c5f9c77fcea5b7ba21ed3c172736e3b2a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f7bc9e53f4030e5b7173b74f623b55c5f9c77fcea5b7ba21ed3c172736e3b2a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f7bc9e53f4030e5b7173b74f623b55c5f9c77fcea5b7ba21ed3c172736e3b2a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f7bc9e53f4030e5b7173b74f623b55c5f9c77fcea5b7ba21ed3c172736e3b2a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f7bc9e53f4030e5b7173b74f623b55c5f9c77fcea5b7ba21ed3c172736e3b2a/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:33 compute-0 podman[208863]: 2025-12-03 18:04:33.199057903 +0000 UTC m=+0.189324293 container init 9f3fd301463eb2f589ec920ec14be6b0634651d166e999ee7010680c8b9618f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  3 18:04:33 compute-0 podman[208863]: 2025-12-03 18:04:33.216349757 +0000 UTC m=+0.206616127 container start 9f3fd301463eb2f589ec920ec14be6b0634651d166e999ee7010680c8b9618f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  3 18:04:33 compute-0 bash[208863]: 9f3fd301463eb2f589ec920ec14be6b0634651d166e999ee7010680c8b9618f4
Dec  3 18:04:33 compute-0 systemd[1]: Started Ceph osd.2 for c1caf3ba-b2a5-5005-a11e-e955c344dccc.
Dec  3 18:04:33 compute-0 ceph-osd[208881]: set uid:gid to 167:167 (ceph:ceph)
Dec  3 18:04:33 compute-0 ceph-osd[208881]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Dec  3 18:04:33 compute-0 ceph-osd[208881]: pidfile_write: ignore empty --pid-file
Dec  3 18:04:33 compute-0 ceph-osd[208881]: bdev(0x55ab998d1800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec  3 18:04:33 compute-0 ceph-osd[208881]: bdev(0x55ab998d1800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec  3 18:04:33 compute-0 ceph-osd[208881]: bdev(0x55ab998d1800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 18:04:33 compute-0 ceph-osd[208881]: bdev(0x55ab998d1800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 18:04:33 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  3 18:04:33 compute-0 ceph-osd[208881]: bdev(0x55ab9a713800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec  3 18:04:33 compute-0 ceph-osd[208881]: bdev(0x55ab9a713800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec  3 18:04:33 compute-0 ceph-osd[208881]: bdev(0x55ab9a713800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 18:04:33 compute-0 ceph-osd[208881]: bdev(0x55ab9a713800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 18:04:33 compute-0 ceph-osd[208881]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Dec  3 18:04:33 compute-0 ceph-osd[208881]: bdev(0x55ab9a713800 /var/lib/ceph/osd/ceph-2/block) close
Dec  3 18:04:33 compute-0 ceph-mgr[193091]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1880366476; not ready for session (expect reconnect)
Dec  3 18:04:33 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec  3 18:04:33 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  3 18:04:33 compute-0 ceph-mgr[193091]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  3 18:04:33 compute-0 ceph-osd[208881]: bdev(0x55ab998d1800 /var/lib/ceph/osd/ceph-2/block) close
Dec  3 18:04:33 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:04:33 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:04:33 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:04:33 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:04:33 compute-0 ceph-osd[208881]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Dec  3 18:04:33 compute-0 ceph-osd[208881]: load: jerasure load: lrc 
Dec  3 18:04:33 compute-0 ceph-osd[208881]: bdev(0x55ab9a794c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec  3 18:04:33 compute-0 ceph-osd[208881]: bdev(0x55ab9a794c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec  3 18:04:33 compute-0 ceph-osd[208881]: bdev(0x55ab9a794c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 18:04:33 compute-0 ceph-osd[208881]: bdev(0x55ab9a794c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 18:04:33 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  3 18:04:33 compute-0 ceph-osd[208881]: bdev(0x55ab9a794c00 /var/lib/ceph/osd/ceph-2/block) close
Dec  3 18:04:33 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v42: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Dec  3 18:04:33 compute-0 ceph-osd[208881]: bdev(0x55ab9a794c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec  3 18:04:33 compute-0 ceph-osd[208881]: bdev(0x55ab9a794c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec  3 18:04:33 compute-0 ceph-osd[208881]: bdev(0x55ab9a794c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 18:04:33 compute-0 ceph-osd[208881]: bdev(0x55ab9a794c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 18:04:33 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  3 18:04:33 compute-0 ceph-osd[208881]: bdev(0x55ab9a794c00 /var/lib/ceph/osd/ceph-2/block) close
Dec  3 18:04:34 compute-0 ceph-osd[208881]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Dec  3 18:04:34 compute-0 ceph-osd[208881]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Dec  3 18:04:34 compute-0 ceph-osd[208881]: bdev(0x55ab9a794c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec  3 18:04:34 compute-0 ceph-osd[208881]: bdev(0x55ab9a794c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec  3 18:04:34 compute-0 ceph-osd[208881]: bdev(0x55ab9a794c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 18:04:34 compute-0 ceph-osd[208881]: bdev(0x55ab9a794c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 18:04:34 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  3 18:04:34 compute-0 ceph-osd[208881]: bdev(0x55ab9a795400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec  3 18:04:34 compute-0 ceph-osd[208881]: bdev(0x55ab9a795400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec  3 18:04:34 compute-0 ceph-osd[208881]: bdev(0x55ab9a795400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 18:04:34 compute-0 ceph-osd[208881]: bdev(0x55ab9a795400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 18:04:34 compute-0 ceph-osd[208881]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Dec  3 18:04:34 compute-0 ceph-osd[208881]: bluefs mount
Dec  3 18:04:34 compute-0 ceph-osd[208881]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: bluefs mount shared_bdev_used = 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: RocksDB version: 7.9.2
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Git sha 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Compile date 2025-05-06 23:30:25
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: DB SUMMARY
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: DB Session ID:  CL0777HS1EGP1841EK2F
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: CURRENT file:  CURRENT
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: IDENTITY file:  IDENTITY
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                         Options.error_if_exists: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                       Options.create_if_missing: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                         Options.paranoid_checks: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                                     Options.env: 0x55ab9a765c00
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                                Options.info_log: 0x55ab999588a0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.max_file_opening_threads: 16
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                              Options.statistics: (nil)
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                               Options.use_fsync: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                       Options.max_log_file_size: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                       Options.keep_log_file_num: 1000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                    Options.recycle_log_file_num: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                         Options.allow_fallocate: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                        Options.allow_mmap_reads: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                       Options.allow_mmap_writes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                        Options.use_direct_reads: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.create_missing_column_families: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                              Options.db_log_dir: 
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                                 Options.wal_dir: db.wal
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.table_cache_numshardbits: 6
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.advise_random_on_open: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                    Options.db_write_buffer_size: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                    Options.write_buffer_manager: 0x55ab9a86a460
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                            Options.rate_limiter: (nil)
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                       Options.wal_recovery_mode: 2
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.enable_thread_tracking: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.enable_pipelined_write: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.unordered_write: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                               Options.row_cache: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                              Options.wal_filter: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.allow_ingest_behind: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.two_write_queues: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.manual_wal_flush: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.wal_compression: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.atomic_flush: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                 Options.log_readahead_size: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                 Options.best_efforts_recovery: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.allow_data_in_errors: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.db_host_id: __hostname__
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.enforce_single_del_contracts: true
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.max_background_jobs: 4
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.max_background_compactions: -1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.max_subcompactions: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.delayed_write_rate : 16777216
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                          Options.max_open_files: -1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                          Options.bytes_per_sync: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.max_background_flushes: -1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Compression algorithms supported:
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         kZSTD supported: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         kXpressCompression supported: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         kBZip2Compression supported: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         kLZ4Compression supported: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         kZlibCompression supported: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         kLZ4HCCompression supported: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         kSnappyCompression supported: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Fast CRC32 supported: Supported on x86
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: DMutex implementation: pthread_mutex_t
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.compaction_filter: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab999582c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ab999451f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.compression: LZ4
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.num_levels: 7
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                           Options.bloom_locality: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                               Options.ttl: 2592000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                       Options.enable_blob_files: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                           Options.min_blob_size: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:           Options.merge_operator: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.compaction_filter: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab999582c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ab999451f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.compression: LZ4
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.num_levels: 7
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                           Options.bloom_locality: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                               Options.ttl: 2592000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                       Options.enable_blob_files: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                           Options.min_blob_size: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:           Options.merge_operator: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.compaction_filter: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab999582c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ab999451f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.compression: LZ4
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.num_levels: 7
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                           Options.bloom_locality: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                               Options.ttl: 2592000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                       Options.enable_blob_files: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                           Options.min_blob_size: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:           Options.merge_operator: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.compaction_filter: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab999582c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ab999451f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.compression: LZ4
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.num_levels: 7
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                           Options.bloom_locality: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                               Options.ttl: 2592000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                       Options.enable_blob_files: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                           Options.min_blob_size: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:           Options.merge_operator: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.compaction_filter: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab999582c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ab999451f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.compression: LZ4
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.num_levels: 7
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                           Options.bloom_locality: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                               Options.ttl: 2592000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                       Options.enable_blob_files: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                           Options.min_blob_size: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:           Options.merge_operator: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.compaction_filter: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab999582c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ab999451f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.compression: LZ4
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.num_levels: 7
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                           Options.bloom_locality: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                               Options.ttl: 2592000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                       Options.enable_blob_files: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                           Options.min_blob_size: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:           Options.merge_operator: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.compaction_filter: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab999582c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ab999451f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.compression: LZ4
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.num_levels: 7
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                           Options.bloom_locality: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                               Options.ttl: 2592000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                       Options.enable_blob_files: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                           Options.min_blob_size: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:           Options.merge_operator: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.compaction_filter: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab99958240)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ab99945090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.compression: LZ4
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.num_levels: 7
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                           Options.bloom_locality: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                               Options.ttl: 2592000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                       Options.enable_blob_files: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                           Options.min_blob_size: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:           Options.merge_operator: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.compaction_filter: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab99958240)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ab99945090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.compression: LZ4
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.num_levels: 7
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                           Options.bloom_locality: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                               Options.ttl: 2592000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                       Options.enable_blob_files: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                           Options.min_blob_size: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:           Options.merge_operator: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.compaction_filter: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab99958240)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ab99945090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.compression: LZ4
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.num_levels: 7
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                           Options.bloom_locality: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                               Options.ttl: 2592000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                       Options.enable_blob_files: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                           Options.min_blob_size: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
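
These per-column-family dumps pin down the LSM level targets: with max_bytes_for_level_base = 1073741824 (1 GiB), max_bytes_for_level_multiplier = 8, num_levels = 7, and level_compaction_dynamic_level_bytes off, the static target size grows by a factor of 8 per level (the addtl multipliers are all 1 here). A minimal sketch of that arithmetic:

    # Static level sizing implied by the options logged above.
    base, mult, num_levels = 1073741824, 8.0, 7
    for level in range(1, num_levels):
        target = base * mult ** (level - 1)
        print(f"L{level}: {target / 2**30:.0f} GiB")
    # L1: 1 GiB, L2: 8 GiB, L3: 64 GiB, L4: 512 GiB, L5: 4096 GiB, L6: 32768 GiB
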
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: [db/column_family.cc:635]     (skipping printing options)
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: [db/column_family.cc:635]     (skipping printing options)
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file: db/MANIFEST-000032 succeeded, manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5, prev_log_number is 0, max_column_family is 11, min_log_number_to_keep is 5
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
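
The recovery lines above enumerate all twelve column families (IDs 0 through 11, matching max_column_family in the manifest line). When cross-checking a log like this one, the names can be scraped directly; a hypothetical parser, written for this exact line shape:

    import re

    # Matches e.g. "Column family [O-1] (ID 8), log number is 5"
    CF_RE = re.compile(r"Column family \[([^\]]+)\] \(ID (\d+)\)")

    def column_families(lines):
        """Yield (name, id) for every CF recovery line seen."""
        for line in lines:
            m = CF_RE.search(line)
            if m:
                yield m.group(1), int(m.group(2))
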
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: b0ae0352-58bf-4232-b5cd-3560775d3039
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764785074205476, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764785074206348, "job": 1, "event": "recovery_finished"}
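
The EVENT_LOG_v1 entries are plain JSON after a fixed prefix, so recovery timing falls straight out of the two events above: replay of WAL 000031.log took 1764785074206348 - 1764785074205476 = 872 microseconds. A sketch of the extraction:

    import json

    def parse_event(line):
        """Return the JSON payload of a rocksdb EVENT_LOG_v1 line."""
        _, payload = line.split("EVENT_LOG_v1", 1)
        return json.loads(payload)

    # start["time_micros"] = 1764785074205476, finish["time_micros"] = 1764785074206348
    print(1764785074206348 - 1764785074205476, "us")   # 872 us
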
Dec  3 18:04:34 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
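
The _open_db line echoes the effective bluestore_rocksdb_options string, a flat comma-separated key=value list. Splitting it is enough to inspect individual settings; note that compaction_readahead_size=2MB carries a unit suffix, so values are best kept as strings:

    opts = ("compression=kLZ4Compression,max_write_buffer_number=64,"
            "min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,"
            "write_buffer_size=16777216,max_background_jobs=4,"
            "level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,"
            "max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,"
            "max_total_wal_size=1073741824,writable_file_max_buffer_size=0")

    parsed = dict(kv.split("=", 1) for kv in opts.split(","))
    assert parsed["write_buffer_size"] == "16777216"   # matches the dump above
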
Dec  3 18:04:34 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Dec  3 18:04:34 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Dec  3 18:04:34 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Dec  3 18:04:34 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: freelist init
Dec  3 18:04:34 compute-0 ceph-osd[208881]: freelist _read_cfg
Dec  3 18:04:34 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
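
The allocator line is internally consistent: capacity 0x4ffc00000 is the 21470642176-byte (~20 GiB) block device, and free trails capacity by just three 4 KiB blocks:

    capacity, free, block = 0x4ffc00000, 0x4ffbfd000, 0x1000
    print(capacity)                      # 21470642176 bytes
    print(capacity / 2**30)              # ~19.996 GiB
    print((capacity - free) // block)    # 3 blocks (12 KiB) allocated
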
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec  3 18:04:34 compute-0 ceph-osd[208881]: bluefs umount
Dec  3 18:04:34 compute-0 ceph-osd[208881]: bdev(0x55ab9a795400 /var/lib/ceph/osd/ceph-2/block) close
Dec  3 18:04:34 compute-0 ceph-mgr[193091]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1880366476; not ready for session (expect reconnect)
Dec  3 18:04:34 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec  3 18:04:34 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  3 18:04:34 compute-0 ceph-mgr[193091]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
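
The "(2) No such file or directory" is ENOENT from the mon: osd.1 has not stored its metadata yet (it is mid-restart), so the "osd metadata" command the mgr dispatched returns nothing. The same query can be re-issued once the OSD is up; a sketch assuming the ceph CLI and an admin keyring are available on the host:

    import json, subprocess

    # Re-issue the mon command seen in the audit line above.
    out = subprocess.run(
        ["ceph", "osd", "metadata", "1", "--format", "json"],
        capture_output=True, text=True,
    )
    if out.returncode != 0:
        print("metadata not available yet:", out.stderr.strip())
    else:
        print(json.loads(out.stdout).get("hostname"))
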
Dec  3 18:04:34 compute-0 podman[209235]: 2025-12-03 18:04:34.276661343 +0000 UTC m=+0.059037206 container create 55e0784fde3438075e7d11301b3ff7cbc52a4936095bc99ced00f0a505627ce2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  3 18:04:34 compute-0 systemd[1]: Started libpod-conmon-55e0784fde3438075e7d11301b3ff7cbc52a4936095bc99ced00f0a505627ce2.scope.
Dec  3 18:04:34 compute-0 podman[209235]: 2025-12-03 18:04:34.245390733 +0000 UTC m=+0.027766596 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:04:34 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:04:34 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:04:34 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:04:34 compute-0 podman[209235]: 2025-12-03 18:04:34.38933023 +0000 UTC m=+0.171706123 container init 55e0784fde3438075e7d11301b3ff7cbc52a4936095bc99ced00f0a505627ce2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_dijkstra, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:04:34 compute-0 podman[209235]: 2025-12-03 18:04:34.401743807 +0000 UTC m=+0.184119670 container start 55e0784fde3438075e7d11301b3ff7cbc52a4936095bc99ced00f0a505627ce2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_dijkstra, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:04:34 compute-0 podman[209235]: 2025-12-03 18:04:34.407222948 +0000 UTC m=+0.189598811 container attach 55e0784fde3438075e7d11301b3ff7cbc52a4936095bc99ced00f0a505627ce2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec  3 18:04:34 compute-0 interesting_dijkstra[209251]: 167 167
Dec  3 18:04:34 compute-0 systemd[1]: libpod-55e0784fde3438075e7d11301b3ff7cbc52a4936095bc99ced00f0a505627ce2.scope: Deactivated successfully.
Dec  3 18:04:34 compute-0 podman[209235]: 2025-12-03 18:04:34.412145476 +0000 UTC m=+0.194521349 container died 55e0784fde3438075e7d11301b3ff7cbc52a4936095bc99ced00f0a505627ce2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_dijkstra, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
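
This short-lived container ran for about 135 ms from create to died; the m=+... monotonic offsets on the podman events above give the lifetime without any timestamp parsing:

    create = 0.059037206   # m=+0.059037206 on the "container create" event
    died   = 0.194521349   # m=+0.194521349 on the "container died" event
    print(f"{died - create:.3f} s")   # 0.135 s
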
Dec  3 18:04:34 compute-0 ceph-osd[208881]: bdev(0x55ab9a795400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec  3 18:04:34 compute-0 ceph-osd[208881]: bdev(0x55ab9a795400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec  3 18:04:34 compute-0 ceph-osd[208881]: bdev(0x55ab9a795400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 18:04:34 compute-0 ceph-osd[208881]: bdev(0x55ab9a795400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 18:04:34 compute-0 ceph-osd[208881]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Dec  3 18:04:34 compute-0 ceph-osd[208881]: bluefs mount
Dec  3 18:04:34 compute-0 ceph-osd[208881]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: bluefs mount shared_bdev_used = 4718592
Dec  3 18:04:34 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: RocksDB version: 7.9.2
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Git sha 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Compile date 2025-05-06 23:30:25
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: DB SUMMARY
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: DB Session ID:  CL0777HS1EGP1841EK2E
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: CURRENT file:  CURRENT
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: IDENTITY file:  IDENTITY
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                         Options.error_if_exists: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                       Options.create_if_missing: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                         Options.paranoid_checks: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                                     Options.env: 0x55ab9a9163f0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                                Options.info_log: 0x55ab99958620
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.max_file_opening_threads: 16
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                              Options.statistics: (nil)
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                               Options.use_fsync: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                       Options.max_log_file_size: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                       Options.keep_log_file_num: 1000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                    Options.recycle_log_file_num: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                         Options.allow_fallocate: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                        Options.allow_mmap_reads: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                       Options.allow_mmap_writes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                        Options.use_direct_reads: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.create_missing_column_families: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                              Options.db_log_dir: 
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                                 Options.wal_dir: db.wal
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.table_cache_numshardbits: 6
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.advise_random_on_open: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                    Options.db_write_buffer_size: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                    Options.write_buffer_manager: 0x55ab9a86a460
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                            Options.rate_limiter: (nil)
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                       Options.wal_recovery_mode: 2
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.enable_thread_tracking: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.enable_pipelined_write: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.unordered_write: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                               Options.row_cache: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                              Options.wal_filter: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.allow_ingest_behind: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.two_write_queues: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.manual_wal_flush: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.wal_compression: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.atomic_flush: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                 Options.log_readahead_size: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                 Options.best_efforts_recovery: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.allow_data_in_errors: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.db_host_id: __hostname__
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.enforce_single_del_contracts: true
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.max_background_jobs: 4
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.max_background_compactions: -1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.max_subcompactions: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.delayed_write_rate : 16777216
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                          Options.max_open_files: -1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                          Options.bytes_per_sync: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.max_background_flushes: -1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Compression algorithms supported:
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:     kZSTD supported: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:     kXpressCompression supported: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:     kBZip2Compression supported: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:     kZSTDNotFinalCompression supported: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:     kLZ4Compression supported: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:     kZlibCompression supported: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:     kLZ4HCCompression supported: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:     kSnappyCompression supported: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Fast CRC32 supported: Supported on x86
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: DMutex implementation: pthread_mutex_t
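
Only four of the eight algorithms report support in this build, consistent with the options pinning compression to LZ4. A hypothetical filter over these capability lines:

    def supported_algorithms(lines):
        """Names from capability lines that end in 'supported: 1'."""
        return [line.split(" supported: 1")[0].split()[-1]
                for line in lines if " supported: 1" in line]
    # Against this section: ['kLZ4Compression', 'kZlibCompression',
    #                        'kLZ4HCCompression', 'kSnappyCompression']
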
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.compaction_filter: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab99958a20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ab999451f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.compression: LZ4
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.num_levels: 7
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                           Options.bloom_locality: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                               Options.ttl: 2592000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                       Options.enable_blob_files: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                           Options.min_blob_size: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
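
Compared with the first open earlier in this log (block cache capacity 536870912, i.e. 0.50 x 2^30), this reopen dumps a block cache of 483183820 bytes, which is 0.45 x 2^30 to the byte. Reading that as a cache deliberately sized at 45% of 1 GiB is an inference from the numbers, not something the log states:

    first, second = 536870912, 483183820
    print(first / 2**30)    # 0.5
    print(second / 2**30)   # ~0.45  (int(0.45 * 2**30) == 483183820)
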
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:           Options.merge_operator: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.compaction_filter: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab99958a20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ab999451f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.compression: LZ4
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.num_levels: 7
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                           Options.bloom_locality: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                               Options.ttl: 2592000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                       Options.enable_blob_files: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                           Options.min_blob_size: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:           Options.merge_operator: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.compaction_filter: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab99958a20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ab999451f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.compression: LZ4
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.num_levels: 7
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                           Options.bloom_locality: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                               Options.ttl: 2592000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                       Options.enable_blob_files: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                           Options.min_blob_size: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:           Options.merge_operator: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.compaction_filter: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab99958a20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ab999451f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.compression: LZ4
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.num_levels: 7
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                           Options.bloom_locality: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                               Options.ttl: 2592000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                       Options.enable_blob_files: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                           Options.min_blob_size: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:           Options.merge_operator: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.compaction_filter: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab99958a20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ab999451f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.compression: LZ4
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.num_levels: 7
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                           Options.bloom_locality: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                               Options.ttl: 2592000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                       Options.enable_blob_files: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                           Options.min_blob_size: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:           Options.merge_operator: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.compaction_filter: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab99958a20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ab999451f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.compression: LZ4
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.num_levels: 7
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                           Options.bloom_locality: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                               Options.ttl: 2592000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                       Options.enable_blob_files: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                           Options.min_blob_size: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 18:04:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-89043fdb8c711898c5450b1723b03e8087a1da64bc2f0fe93a0f7cbefe987cde-merged.mount: Deactivated successfully.
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:           Options.merge_operator: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.compaction_filter: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab99958a20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ab999451f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.compression: LZ4
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.num_levels: 7
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                           Options.bloom_locality: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                               Options.ttl: 2592000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                       Options.enable_blob_files: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                           Options.min_blob_size: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
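
The very long table_factory lines in these dumps are single journal records whose embedded newlines syslog escapes as #012 (octal 012 is the newline character). Undoing that escape restores the readable multi-line listing of block-based-table settings; a tiny sketch, with payload standing in for the text after "table_factory options:":

    # "#012" is syslog's escape for an embedded newline (octal 012 == "\n").
    payload = ("flush_block_policy_factory: FlushBlockBySizePolicyFactory"
               " (0x55ab99958a20)#012  cache_index_and_filter_blocks: 1#012  ...")
    print(payload.replace("#012", "\n"))

The same replacement applies to the DUMPING STATS record further down.
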
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:           Options.merge_operator: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.compaction_filter: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab99958380)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55ab99945090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.compression: LZ4
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.num_levels: 7
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                           Options.bloom_locality: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                               Options.ttl: 2592000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                       Options.enable_blob_files: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                           Options.min_blob_size: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 18:04:34 compute-0 podman[209235]: 2025-12-03 18:04:34.488392361 +0000 UTC m=+0.270768224 container remove 55e0784fde3438075e7d11301b3ff7cbc52a4936095bc99ced00f0a505627ce2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:           Options.merge_operator: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.compaction_filter: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab99958380)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55ab99945090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.compression: LZ4
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.num_levels: 7
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 18:04:34 compute-0 systemd[1]: libpod-conmon-55e0784fde3438075e7d11301b3ff7cbc52a4936095bc99ced00f0a505627ce2.scope: Deactivated successfully.
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                           Options.bloom_locality: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                               Options.ttl: 2592000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                       Options.enable_blob_files: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                           Options.min_blob_size: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:           Options.merge_operator: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.compaction_filter: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab99958380)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55ab99945090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.compression: LZ4
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.num_levels: 7
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                           Options.bloom_locality: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                               Options.ttl: 2592000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                       Options.enable_blob_files: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                           Options.min_blob_size: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
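
Recovery from MANIFEST-000032 lists twelve column families: default plus the m-0..2, p-0..2, O-0..2, L, and P shards, matching the max_column_family of 11 reported above. This is BlueStore's sharded RocksDB layout, in current Ceph releases driven by the bluestore_rocksdb_cfs option; the exact key-prefix-to-shard mapping is not stated in this log. A small sketch to pull the (ID, name) pairs back out of a capture file (osd.log again assumed):

    import re

    CF_LINE = re.compile(r"Column family \[([^\]]+)\] \(ID (\d+)\)")

    def recovered_cfs(path):
        """(id, name) pairs from the version_set recovery lines."""
        with open(path) as fh:
            return sorted((int(m.group(2)), m.group(1))
                          for line in fh
                          if (m := CF_LINE.search(line)))

    # For the lines above: [(0, 'default'), (1, 'm-0'), ..., (11, 'P')]
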
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: b0ae0352-58bf-4232-b5cd-3560775d3039
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764785074500124, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764785074513405, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764785074, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b0ae0352-58bf-4232-b5cd-3560775d3039", "db_session_id": "CL0777HS1EGP1841EK2E", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764785074521314, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764785074, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b0ae0352-58bf-4232-b5cd-3560775d3039", "db_session_id": "CL0777HS1EGP1841EK2E", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764785074532434, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764785074, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b0ae0352-58bf-4232-b5cd-3560775d3039", "db_session_id": "CL0777HS1EGP1841EK2E", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764785074535956, "job": 1, "event": "recovery_finished"}
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55ab99ab2000
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: DB pointer 0x55ab9a84fa00
Dec  3 18:04:34 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec  3 18:04:34 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Dec  3 18:04:34 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
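
The _open_db line condenses the DB-wide settings into a single comma-separated option string; note the mixed units, raw bytes for most sizes but "2MB" for compaction_readahead_size. Splitting it into a dict is enough for quick checks; the string below is copied from the line above:

    opts = ("compression=kLZ4Compression,max_write_buffer_number=64,"
            "min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,"
            "write_buffer_size=16777216,max_background_jobs=4,"
            "level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,"
            "max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,"
            "max_total_wal_size=1073741824,writable_file_max_buffer_size=0")

    parsed = dict(kv.split("=", 1) for kv in opts.split(","))
    print(parsed["write_buffer_size"])          # "16777216" (16 MiB per memtable)
    print(parsed["compaction_readahead_size"])  # "2MB"
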
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 18:04:34 compute-0 ceph-osd[208881]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 0.2 total, 0.2 interval
Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.013       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.013       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.013       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.013       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.2 total, 0.2 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55ab999451f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.7e-05 secs_since: 0
Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.2 total, 0.2 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55ab999451f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.7e-05 secs_since: 0
Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.2 total, 0.2 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55ab999451f0#2 capacity: 460.80 MB usag
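RocksDB emits the stats dump above as one multi-line message; syslog transports such messages as a single line with embedded newlines escaped as #012 (octal for LF, per rsyslog's control-character escaping). A minimal sketch, assuming that default escaping, for unfolding such lines when reading a raw log file:

    # Minimal sketch: unfold rsyslog-style "#012" (octal 012 = '\n') escapes so
    # multi-line dumps like the RocksDB stats above become readable.
    # The log path below is an assumption, not taken from this capture.
    def unfold(line: str) -> str:
        # "#011" is an escaped tab; table dumps may use both.
        return line.replace("#012", "\n").replace("#011", "\t")

    with open("/var/log/messages") as f:
        for raw in f:
            if "#012" in raw:
                print(unfold(raw), end="")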
Dec  3 18:04:34 compute-0 ceph-osd[208881]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Dec  3 18:04:34 compute-0 ceph-osd[208881]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Dec  3 18:04:34 compute-0 ceph-osd[208881]: _get_class not permitted to load lua
Dec  3 18:04:34 compute-0 ceph-osd[208881]: _get_class not permitted to load sdk
Dec  3 18:04:34 compute-0 ceph-osd[208881]: _get_class not permitted to load test_remote_reads
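The three "_get_class not permitted" lines show the OSD refusing to load object classes that are not on its allow-list; this appears to be governed by the osd_class_load_list option, whose default excludes classes such as lua. A hedged sketch for checking the effective allow-list (the option name is real; whether it was customized here is not shown in the log):

    # Query which object classes the OSDs may load, via the standard
    # "ceph config get" CLI.
    import subprocess

    out = subprocess.run(
        ["ceph", "config", "get", "osd", "osd_class_load_list"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print("allowed classes:", out)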
Dec  3 18:04:34 compute-0 ceph-osd[208881]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Dec  3 18:04:34 compute-0 ceph-osd[208881]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Dec  3 18:04:34 compute-0 ceph-osd[208881]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
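The "crush map has features N" values are bitmasks of CEPH_FEATURE_* flags that the messenger requires peers to support. A small sketch, purely arithmetic, that shows which bit positions are set in the masks logged above; mapping positions to feature names would require Ceph's include/ceph_features.h and is deliberately omitted:

    # Decode a Ceph feature bitmask into its set bit positions.
    def feature_bits(mask: int) -> list[int]:
        return [i for i in range(64) if mask >> i & 1]

    for mask in (288232575208783872, 8705):
        print(f"{mask:#018x} -> bits {feature_bits(mask)}")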
Dec  3 18:04:34 compute-0 ceph-osd[208881]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Dec  3 18:04:34 compute-0 ceph-osd[208881]: osd.2 0 load_pgs
Dec  3 18:04:34 compute-0 ceph-osd[208881]: osd.2 0 load_pgs opened 0 pgs
Dec  3 18:04:34 compute-0 ceph-osd[208881]: osd.2 0 log_to_monitors true
Dec  3 18:04:34 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-2[208877]: 2025-12-03T18:04:34.603+0000 7fcda15f7740 -1 osd.2 0 log_to_monitors true
Dec  3 18:04:34 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0) v1
Dec  3 18:04:34 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/3822832381,v1:192.168.122.100:6811/3822832381]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Dec  3 18:04:34 compute-0 podman[209460]: 2025-12-03 18:04:34.674090927 +0000 UTC m=+0.058979093 container create 91c7caee778d5a1368776b2dc8c9767456f34973bb9e9ac355c02c84e073039c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:04:34 compute-0 systemd[1]: Started libpod-conmon-91c7caee778d5a1368776b2dc8c9767456f34973bb9e9ac355c02c84e073039c.scope.
Dec  3 18:04:34 compute-0 podman[209460]: 2025-12-03 18:04:34.654211501 +0000 UTC m=+0.039099677 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:04:34 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:04:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b44416a3ca955efbc32cba2b4c20a64d2825beaa5bda1ecec49a23202ad0d6ce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b44416a3ca955efbc32cba2b4c20a64d2825beaa5bda1ecec49a23202ad0d6ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b44416a3ca955efbc32cba2b4c20a64d2825beaa5bda1ecec49a23202ad0d6ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b44416a3ca955efbc32cba2b4c20a64d2825beaa5bda1ecec49a23202ad0d6ce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:34 compute-0 podman[209460]: 2025-12-03 18:04:34.819193561 +0000 UTC m=+0.204081757 container init 91c7caee778d5a1368776b2dc8c9767456f34973bb9e9ac355c02c84e073039c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  3 18:04:34 compute-0 podman[209460]: 2025-12-03 18:04:34.837706024 +0000 UTC m=+0.222594200 container start 91c7caee778d5a1368776b2dc8c9767456f34973bb9e9ac355c02c84e073039c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_brattain, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:04:34 compute-0 podman[209460]: 2025-12-03 18:04:34.843509563 +0000 UTC m=+0.228397759 container attach 91c7caee778d5a1368776b2dc8c9767456f34973bb9e9ac355c02c84e073039c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_brattain, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  3 18:04:34 compute-0 ceph-osd[207851]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 17.973 iops: 4601.038 elapsed_sec: 0.652
Dec  3 18:04:34 compute-0 ceph-osd[207851]: log_channel(cluster) log [WRN] : OSD bench result of 4601.037723 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
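This warning comes from the mClock scheduler sanity-checking the automatic "osd bench" result (4601 IOPS) against the plausible range for the device class (50-500 IOPS here) and keeping the 315 IOPS default instead. The message itself recommends benchmarking externally and overriding; a hedged sketch of that override, where the 4600 figure is purely illustrative rather than a measured value:

    # Following the log's own recommendation: after measuring the device with an
    # external tool (e.g. fio), pin the mClock IOPS capacity for this hdd-class OSD.
    import subprocess

    subprocess.run(
        ["ceph", "config", "set", "osd.1", "osd_mclock_max_capacity_iops_hdd", "4600"],
        check=True,
    )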
Dec  3 18:04:34 compute-0 ceph-osd[207851]: osd.1 0 waiting for initial osdmap
Dec  3 18:04:34 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-1[207847]: 2025-12-03T18:04:34.843+0000 7f9547e08640 -1 osd.1 0 waiting for initial osdmap
Dec  3 18:04:34 compute-0 ceph-osd[207851]: osd.1 11 crush map has features 288514051259236352, adjusting msgr requires for clients
Dec  3 18:04:34 compute-0 ceph-osd[207851]: osd.1 11 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Dec  3 18:04:34 compute-0 ceph-osd[207851]: osd.1 11 crush map has features 3314933000852226048, adjusting msgr requires for osds
Dec  3 18:04:34 compute-0 ceph-osd[207851]: osd.1 11 check_osdmap_features require_osd_release unknown -> reef
Dec  3 18:04:34 compute-0 ceph-osd[207851]: osd.1 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec  3 18:04:34 compute-0 ceph-osd[207851]: osd.1 11 set_numa_affinity not setting numa affinity
Dec  3 18:04:34 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-1[207847]: 2025-12-03T18:04:34.869+0000 7f9542c19640 -1 osd.1 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec  3 18:04:34 compute-0 ceph-osd[207851]: osd.1 11 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial
Dec  3 18:04:35 compute-0 ceph-mgr[193091]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1880366476; not ready for session (expect reconnect)
Dec  3 18:04:35 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec  3 18:04:35 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  3 18:04:35 compute-0 ceph-mgr[193091]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  3 18:04:35 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Dec  3 18:04:35 compute-0 ceph-mon[192802]: from='osd.2 [v2:192.168.122.100:6810/3822832381,v1:192.168.122.100:6811/3822832381]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Dec  3 18:04:35 compute-0 ceph-mon[192802]: OSD bench result of 4601.037723 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec  3 18:04:35 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/3822832381,v1:192.168.122.100:6811/3822832381]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Dec  3 18:04:35 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e12 e12: 3 total, 2 up, 3 in
Dec  3 18:04:35 compute-0 ceph-mon[192802]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6806/1880366476,v1:192.168.122.100:6807/1880366476] boot
Dec  3 18:04:35 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e12: 3 total, 2 up, 3 in
Dec  3 18:04:35 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Dec  3 18:04:35 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/3822832381,v1:192.168.122.100:6811/3822832381]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Dec  3 18:04:35 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e12 create-or-move crush item name 'osd.2' initial_weight 0.0195 at location {host=compute-0,root=default}
Dec  3 18:04:35 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec  3 18:04:35 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  3 18:04:35 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec  3 18:04:35 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  3 18:04:35 compute-0 ceph-mgr[193091]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
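The repeated "failed to return metadata for osd.2: (2) No such file or directory" lines are the mgr polling "osd metadata" through a mon command before the OSD has registered its metadata; the mon answers ENOENT until osd.2 finishes booting (at 18:04:41 below), after which the polling stops. A hedged python-rados sketch issuing the same command as the mgr:

    # Issue the "osd metadata" mon command seen in the audit log, via python-rados.
    # The conffile path is an assumption. A -2 (ENOENT) return is expected while
    # the OSD has not finished booting.
    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    cmd = json.dumps({"prefix": "osd metadata", "id": 2, "format": "json"})
    ret, outbuf, outs = cluster.mon_command(cmd, b"")
    print(ret, outs if ret else json.loads(outbuf))
    cluster.shutdown()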
Dec  3 18:04:35 compute-0 ceph-osd[207851]: osd.1 12 state: booting -> active
Dec  3 18:04:35 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 12 pg[1.0( empty local-lis/les=0/0 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=12) [1] r=0 lpr=12 pi=[10,12)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:04:35 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Dec  3 18:04:35 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Dec  3 18:04:35 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v44: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Dec  3 18:04:35 compute-0 brave_brattain[209504]: {
Dec  3 18:04:35 compute-0 brave_brattain[209504]:    "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
Dec  3 18:04:35 compute-0 brave_brattain[209504]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:04:35 compute-0 brave_brattain[209504]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 18:04:35 compute-0 brave_brattain[209504]:        "osd_id": 1,
Dec  3 18:04:35 compute-0 brave_brattain[209504]:        "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:04:35 compute-0 brave_brattain[209504]:        "type": "bluestore"
Dec  3 18:04:35 compute-0 brave_brattain[209504]:    },
Dec  3 18:04:35 compute-0 brave_brattain[209504]:    "2abec9de-afba-437e-9a17-384a1dd8cd50": {
Dec  3 18:04:35 compute-0 brave_brattain[209504]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:04:35 compute-0 brave_brattain[209504]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 18:04:35 compute-0 brave_brattain[209504]:        "osd_id": 2,
Dec  3 18:04:35 compute-0 brave_brattain[209504]:        "osd_uuid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:04:35 compute-0 brave_brattain[209504]:        "type": "bluestore"
Dec  3 18:04:35 compute-0 brave_brattain[209504]:    },
Dec  3 18:04:35 compute-0 brave_brattain[209504]:    "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": {
Dec  3 18:04:35 compute-0 brave_brattain[209504]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:04:35 compute-0 brave_brattain[209504]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 18:04:35 compute-0 brave_brattain[209504]:        "osd_id": 0,
Dec  3 18:04:35 compute-0 brave_brattain[209504]:        "osd_uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:04:35 compute-0 brave_brattain[209504]:        "type": "bluestore"
Dec  3 18:04:35 compute-0 brave_brattain[209504]:    }
Dec  3 18:04:35 compute-0 brave_brattain[209504]: }
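The JSON printed by the brave_brattain container above is a ceph-volume style listing keyed by OSD uuid, mapping each OSD to its bluestore LV device. A minimal sketch parsing that structure into an osd_id-to-device map; the literal below is abbreviated to one entry from the output above:

    # Parse the ceph-volume style listing above into {osd_id: device}.
    import json

    listing = json.loads("""{
      "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
        "osd_id": 1,
        "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
        "type": "bluestore"
      }
    }""")

    by_osd = {v["osd_id"]: v["device"] for v in listing.values()}
    print(by_osd)  # {1: '/dev/mapper/ceph_vg1-ceph_lv1'}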
Dec  3 18:04:35 compute-0 systemd[1]: libpod-91c7caee778d5a1368776b2dc8c9767456f34973bb9e9ac355c02c84e073039c.scope: Deactivated successfully.
Dec  3 18:04:35 compute-0 systemd[1]: libpod-91c7caee778d5a1368776b2dc8c9767456f34973bb9e9ac355c02c84e073039c.scope: Consumed 1.070s CPU time.
Dec  3 18:04:35 compute-0 podman[209460]: 2025-12-03 18:04:35.908341447 +0000 UTC m=+1.293229633 container died 91c7caee778d5a1368776b2dc8c9767456f34973bb9e9ac355c02c84e073039c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:04:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-b44416a3ca955efbc32cba2b4c20a64d2825beaa5bda1ecec49a23202ad0d6ce-merged.mount: Deactivated successfully.
Dec  3 18:04:35 compute-0 podman[209460]: 2025-12-03 18:04:35.994717175 +0000 UTC m=+1.379605341 container remove 91c7caee778d5a1368776b2dc8c9767456f34973bb9e9ac355c02c84e073039c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_brattain, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:04:36 compute-0 systemd[1]: libpod-conmon-91c7caee778d5a1368776b2dc8c9767456f34973bb9e9ac355c02c84e073039c.scope: Deactivated successfully.
Dec  3 18:04:36 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:04:36 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:04:36 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:04:36 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:04:36 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Dec  3 18:04:36 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/3822832381,v1:192.168.122.100:6811/3822832381]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec  3 18:04:36 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e13 e13: 3 total, 2 up, 3 in
Dec  3 18:04:36 compute-0 ceph-osd[208881]: osd.2 0 done with init, starting boot process
Dec  3 18:04:36 compute-0 ceph-osd[208881]: osd.2 0 start_boot
Dec  3 18:04:36 compute-0 ceph-osd[208881]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Dec  3 18:04:36 compute-0 ceph-osd[208881]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Dec  3 18:04:36 compute-0 ceph-osd[208881]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Dec  3 18:04:36 compute-0 ceph-osd[208881]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
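The four maybe_override_options_for_qos lines show the mClock scheduler pinning the recovery and backfill throttles (osd_max_backfills, osd_recovery_max_active and its hdd/ssd variants) itself rather than honoring manual values. Which trade-off those pins encode is selected by the osd_mclock_profile option; a hedged sketch of switching it, with the profile value as an example rather than a recommendation for this cluster:

    # Re-tune the mClock-managed recovery/backfill knobs by switching profile.
    # Documented values include "balanced", "high_client_ops", "high_recovery_ops".
    import subprocess

    subprocess.run(
        ["ceph", "config", "set", "osd", "osd_mclock_profile", "high_recovery_ops"],
        check=True,
    )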
Dec  3 18:04:36 compute-0 ceph-osd[208881]: osd.2 0  bench count 12288000 bsize 4 KiB
Dec  3 18:04:36 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e13: 3 total, 2 up, 3 in
Dec  3 18:04:36 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec  3 18:04:36 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  3 18:04:36 compute-0 ceph-mgr[193091]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  3 18:04:36 compute-0 ceph-mon[192802]: from='osd.2 [v2:192.168.122.100:6810/3822832381,v1:192.168.122.100:6811/3822832381]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Dec  3 18:04:36 compute-0 ceph-mon[192802]: osd.1 [v2:192.168.122.100:6806/1880366476,v1:192.168.122.100:6807/1880366476] boot
Dec  3 18:04:36 compute-0 ceph-mon[192802]: from='osd.2 [v2:192.168.122.100:6810/3822832381,v1:192.168.122.100:6811/3822832381]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Dec  3 18:04:36 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:04:36 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:04:36 compute-0 ceph-mgr[193091]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3822832381; not ready for session (expect reconnect)
Dec  3 18:04:36 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec  3 18:04:36 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  3 18:04:36 compute-0 ceph-mgr[193091]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  3 18:04:36 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 13 pg[1.0( empty local-lis/les=12/13 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=12) [1] r=0 lpr=12 pi=[10,12)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:04:36 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e13 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:04:36 compute-0 ceph-mgr[193091]: [devicehealth INFO root] creating main.db for devicehealth
Dec  3 18:04:37 compute-0 ceph-mgr[193091]: [devicehealth INFO root] Check health
Dec  3 18:04:37 compute-0 ceph-mgr[193091]: [devicehealth ERROR root] Fail to parse JSON result from daemon osd.2 ()
Dec  3 18:04:37 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Dec  3 18:04:37 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Dec  3 18:04:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Dec  3 18:04:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec  3 18:04:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Dec  3 18:04:37 compute-0 ceph-mgr[193091]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3822832381; not ready for session (expect reconnect)
Dec  3 18:04:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec  3 18:04:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  3 18:04:37 compute-0 ceph-mgr[193091]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  3 18:04:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e14 e14: 3 total, 2 up, 3 in
Dec  3 18:04:37 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e14: 3 total, 2 up, 3 in
Dec  3 18:04:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec  3 18:04:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  3 18:04:37 compute-0 ceph-mgr[193091]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  3 18:04:37 compute-0 podman[209778]: 2025-12-03 18:04:37.551012043 +0000 UTC m=+0.142004380 container exec c4418ca0ee5df95c133db330bc8714b98e7c86be83b29540d0d4d94c3c723743 (image=quay.io/ceph/ceph:v18, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mon-compute-0, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:04:37 compute-0 ceph-mon[192802]: from='osd.2 [v2:192.168.122.100:6810/3822832381,v1:192.168.122.100:6811/3822832381]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec  3 18:04:37 compute-0 ceph-mon[192802]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Dec  3 18:04:37 compute-0 ceph-mon[192802]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Dec  3 18:04:37 compute-0 podman[209778]: 2025-12-03 18:04:37.644559163 +0000 UTC m=+0.235551520 container exec_died c4418ca0ee5df95c133db330bc8714b98e7c86be83b29540d0d4d94c3c723743 (image=quay.io/ceph/ceph:v18, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mon-compute-0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec  3 18:04:37 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v47: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Dec  3 18:04:38 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:04:38 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:04:38 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:04:38 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:04:38 compute-0 ceph-mgr[193091]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3822832381; not ready for session (expect reconnect)
Dec  3 18:04:38 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec  3 18:04:38 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  3 18:04:38 compute-0 ceph-mgr[193091]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  3 18:04:38 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.etccde(active, since 84s)
Dec  3 18:04:38 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:04:38 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:04:39 compute-0 ceph-mgr[193091]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3822832381; not ready for session (expect reconnect)
Dec  3 18:04:39 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec  3 18:04:39 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  3 18:04:39 compute-0 ceph-mgr[193091]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  3 18:04:39 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v48: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Dec  3 18:04:40 compute-0 ceph-mgr[193091]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3822832381; not ready for session (expect reconnect)
Dec  3 18:04:40 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec  3 18:04:40 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  3 18:04:40 compute-0 ceph-mgr[193091]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  3 18:04:40 compute-0 podman[210156]: 2025-12-03 18:04:40.626215685 +0000 UTC m=+0.089865912 container create 2adf124b0b1718e57e2356495660d44320d569b2e53618cc36d515c54d88cfff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:04:40 compute-0 systemd[1]: Started libpod-conmon-2adf124b0b1718e57e2356495660d44320d569b2e53618cc36d515c54d88cfff.scope.
Dec  3 18:04:40 compute-0 podman[210156]: 2025-12-03 18:04:40.59507035 +0000 UTC m=+0.058720597 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:04:40 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:04:40 compute-0 podman[210156]: 2025-12-03 18:04:40.744532008 +0000 UTC m=+0.208182255 container init 2adf124b0b1718e57e2356495660d44320d569b2e53618cc36d515c54d88cfff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chandrasekhar, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS)
Dec  3 18:04:40 compute-0 podman[210156]: 2025-12-03 18:04:40.753949754 +0000 UTC m=+0.217599981 container start 2adf124b0b1718e57e2356495660d44320d569b2e53618cc36d515c54d88cfff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:04:40 compute-0 competent_chandrasekhar[210171]: 167 167
Dec  3 18:04:40 compute-0 systemd[1]: libpod-2adf124b0b1718e57e2356495660d44320d569b2e53618cc36d515c54d88cfff.scope: Deactivated successfully.
Dec  3 18:04:40 compute-0 conmon[210171]: conmon 2adf124b0b1718e57e23 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2adf124b0b1718e57e2356495660d44320d569b2e53618cc36d515c54d88cfff.scope/container/memory.events
Dec  3 18:04:40 compute-0 podman[210156]: 2025-12-03 18:04:40.764569628 +0000 UTC m=+0.228219855 container attach 2adf124b0b1718e57e2356495660d44320d569b2e53618cc36d515c54d88cfff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chandrasekhar, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:04:40 compute-0 podman[210156]: 2025-12-03 18:04:40.764878796 +0000 UTC m=+0.228529023 container died 2adf124b0b1718e57e2356495660d44320d569b2e53618cc36d515c54d88cfff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec  3 18:04:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-86ecc537011b57357da211f464d5fd3c080984a8021ebb34365bbaea01d67293-merged.mount: Deactivated successfully.
Dec  3 18:04:40 compute-0 podman[210156]: 2025-12-03 18:04:40.839085482 +0000 UTC m=+0.302735709 container remove 2adf124b0b1718e57e2356495660d44320d569b2e53618cc36d515c54d88cfff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:04:40 compute-0 systemd[1]: libpod-conmon-2adf124b0b1718e57e2356495660d44320d569b2e53618cc36d515c54d88cfff.scope: Deactivated successfully.
Dec  3 18:04:41 compute-0 podman[210198]: 2025-12-03 18:04:41.028322853 +0000 UTC m=+0.064000534 container create 152f75a5185680b6d28b6f39e5b58ce4bafd325cce12be728c7b4c1c9d4aa65f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_neumann, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:04:41 compute-0 systemd[1]: Started libpod-conmon-152f75a5185680b6d28b6f39e5b58ce4bafd325cce12be728c7b4c1c9d4aa65f.scope.
Dec  3 18:04:41 compute-0 podman[210198]: 2025-12-03 18:04:41.003613111 +0000 UTC m=+0.039290822 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:04:41 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:04:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baf3cf932b23e222ae2bbe16f8d5631955efceb81f40caae5d043377310515ca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baf3cf932b23e222ae2bbe16f8d5631955efceb81f40caae5d043377310515ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baf3cf932b23e222ae2bbe16f8d5631955efceb81f40caae5d043377310515ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baf3cf932b23e222ae2bbe16f8d5631955efceb81f40caae5d043377310515ca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:41 compute-0 ceph-osd[208881]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 18.613 iops: 4764.827 elapsed_sec: 0.630
Dec  3 18:04:41 compute-0 ceph-osd[208881]: log_channel(cluster) log [WRN] : OSD bench result of 4764.826637 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec  3 18:04:41 compute-0 ceph-osd[208881]: osd.2 0 waiting for initial osdmap
Dec  3 18:04:41 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-2[208877]: 2025-12-03T18:04:41.134+0000 7fcd9d577640 -1 osd.2 0 waiting for initial osdmap
Dec  3 18:04:41 compute-0 podman[210198]: 2025-12-03 18:04:41.143822447 +0000 UTC m=+0.179500138 container init 152f75a5185680b6d28b6f39e5b58ce4bafd325cce12be728c7b4c1c9d4aa65f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_neumann, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:04:41 compute-0 ceph-osd[208881]: osd.2 14 crush map has features 288514051259236352, adjusting msgr requires for clients
Dec  3 18:04:41 compute-0 ceph-osd[208881]: osd.2 14 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Dec  3 18:04:41 compute-0 ceph-osd[208881]: osd.2 14 crush map has features 3314933000852226048, adjusting msgr requires for osds
Dec  3 18:04:41 compute-0 ceph-osd[208881]: osd.2 14 check_osdmap_features require_osd_release unknown -> reef
Dec  3 18:04:41 compute-0 podman[210198]: 2025-12-03 18:04:41.164064583 +0000 UTC m=+0.199742274 container start 152f75a5185680b6d28b6f39e5b58ce4bafd325cce12be728c7b4c1c9d4aa65f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  3 18:04:41 compute-0 podman[210198]: 2025-12-03 18:04:41.169249056 +0000 UTC m=+0.204926777 container attach 152f75a5185680b6d28b6f39e5b58ce4bafd325cce12be728c7b4c1c9d4aa65f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_neumann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:04:41 compute-0 ceph-osd[208881]: osd.2 14 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec  3 18:04:41 compute-0 ceph-osd[208881]: osd.2 14 set_numa_affinity not setting numa affinity
Dec  3 18:04:41 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-osd-2[208877]: 2025-12-03T18:04:41.168+0000 7fcd98b9f640 -1 osd.2 14 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec  3 18:04:41 compute-0 ceph-osd[208881]: osd.2 14 _collect_metadata loop5:  no unique device id for loop5: fallback method has no model nor serial
Dec  3 18:04:41 compute-0 ceph-mgr[193091]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3822832381; not ready for session (expect reconnect)
Dec  3 18:04:41 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec  3 18:04:41 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  3 18:04:41 compute-0 ceph-mgr[193091]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  3 18:04:41 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e14 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:04:41 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Dec  3 18:04:41 compute-0 ceph-mon[192802]: OSD bench result of 4764.826637 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec  3 18:04:41 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e15 e15: 3 total, 3 up, 3 in
Dec  3 18:04:41 compute-0 ceph-mon[192802]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.100:6810/3822832381,v1:192.168.122.100:6811/3822832381] boot
Dec  3 18:04:41 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e15: 3 total, 3 up, 3 in
Dec  3 18:04:41 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec  3 18:04:41 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  3 18:04:41 compute-0 ceph-osd[208881]: osd.2 15 state: booting -> active
Dec  3 18:04:41 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Dec  3 18:04:42 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Dec  3 18:04:42 compute-0 ceph-mon[192802]: osd.2 [v2:192.168.122.100:6810/3822832381,v1:192.168.122.100:6811/3822832381] boot
Dec  3 18:04:42 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e16 e16: 3 total, 3 up, 3 in
Dec  3 18:04:42 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 3 up, 3 in
Dec  3 18:04:43 compute-0 charming_neumann[210213]: [
Dec  3 18:04:43 compute-0 charming_neumann[210213]:    {
Dec  3 18:04:43 compute-0 charming_neumann[210213]:        "available": false,
Dec  3 18:04:43 compute-0 charming_neumann[210213]:        "ceph_device": false,
Dec  3 18:04:43 compute-0 charming_neumann[210213]:        "device_id": "QEMU_DVD-ROM_QM00001",
Dec  3 18:04:43 compute-0 charming_neumann[210213]:        "lsm_data": {},
Dec  3 18:04:43 compute-0 charming_neumann[210213]:        "lvs": [],
Dec  3 18:04:43 compute-0 charming_neumann[210213]:        "path": "/dev/sr0",
Dec  3 18:04:43 compute-0 charming_neumann[210213]:        "rejected_reasons": [
Dec  3 18:04:43 compute-0 charming_neumann[210213]:            "Insufficient space (<5GB)",
Dec  3 18:04:43 compute-0 charming_neumann[210213]:            "Has a FileSystem"
Dec  3 18:04:43 compute-0 charming_neumann[210213]:        ],
Dec  3 18:04:43 compute-0 charming_neumann[210213]:        "sys_api": {
Dec  3 18:04:43 compute-0 charming_neumann[210213]:            "actuators": null,
Dec  3 18:04:43 compute-0 charming_neumann[210213]:            "device_nodes": "sr0",
Dec  3 18:04:43 compute-0 charming_neumann[210213]:            "devname": "sr0",
Dec  3 18:04:43 compute-0 charming_neumann[210213]:            "human_readable_size": "482.00 KB",
Dec  3 18:04:43 compute-0 charming_neumann[210213]:            "id_bus": "ata",
Dec  3 18:04:43 compute-0 charming_neumann[210213]:            "model": "QEMU DVD-ROM",
Dec  3 18:04:43 compute-0 charming_neumann[210213]:            "nr_requests": "2",
Dec  3 18:04:43 compute-0 charming_neumann[210213]:            "parent": "/dev/sr0",
Dec  3 18:04:43 compute-0 charming_neumann[210213]:            "partitions": {},
Dec  3 18:04:43 compute-0 charming_neumann[210213]:            "path": "/dev/sr0",
Dec  3 18:04:43 compute-0 charming_neumann[210213]:            "removable": "1",
Dec  3 18:04:43 compute-0 charming_neumann[210213]:            "rev": "2.5+",
Dec  3 18:04:43 compute-0 charming_neumann[210213]:            "ro": "0",
Dec  3 18:04:43 compute-0 charming_neumann[210213]:            "rotational": "1",
Dec  3 18:04:43 compute-0 charming_neumann[210213]:            "sas_address": "",
Dec  3 18:04:43 compute-0 charming_neumann[210213]:            "sas_device_handle": "",
Dec  3 18:04:43 compute-0 charming_neumann[210213]:            "scheduler_mode": "mq-deadline",
Dec  3 18:04:43 compute-0 charming_neumann[210213]:            "sectors": 0,
Dec  3 18:04:43 compute-0 charming_neumann[210213]:            "sectorsize": "2048",
Dec  3 18:04:43 compute-0 charming_neumann[210213]:            "size": 493568.0,
Dec  3 18:04:43 compute-0 charming_neumann[210213]:            "support_discard": "2048",
Dec  3 18:04:43 compute-0 charming_neumann[210213]:            "type": "disk",
Dec  3 18:04:43 compute-0 charming_neumann[210213]:            "vendor": "QEMU"
Dec  3 18:04:43 compute-0 charming_neumann[210213]:        }
Dec  3 18:04:43 compute-0 charming_neumann[210213]:    }
Dec  3 18:04:43 compute-0 charming_neumann[210213]: ]
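The charming_neumann output above is a ceph-volume inventory style device report: /dev/sr0 is marked unavailable with explicit rejected_reasons (under 5 GB and an existing filesystem). A minimal sketch splitting such a report into usable devices and rejection reasons; the literal is trimmed to the fields used:

    # Filter an inventory-style report (as printed above) by availability.
    import json

    report = json.loads("""[
      {"available": false, "path": "/dev/sr0",
       "rejected_reasons": ["Insufficient space (<5GB)", "Has a FileSystem"]}
    ]""")

    usable = [d["path"] for d in report if d["available"]]
    rejected = {d["path"]: d["rejected_reasons"] for d in report if not d["available"]}
    print("usable:", usable)      # []
    print("rejected:", rejected)  # {'/dev/sr0': [...]}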
Dec  3 18:04:43 compute-0 systemd[1]: libpod-152f75a5185680b6d28b6f39e5b58ce4bafd325cce12be728c7b4c1c9d4aa65f.scope: Deactivated successfully.
Dec  3 18:04:43 compute-0 systemd[1]: libpod-152f75a5185680b6d28b6f39e5b58ce4bafd325cce12be728c7b4c1c9d4aa65f.scope: Consumed 2.305s CPU time.
Dec  3 18:04:43 compute-0 podman[210198]: 2025-12-03 18:04:43.353763725 +0000 UTC m=+2.389441496 container died 152f75a5185680b6d28b6f39e5b58ce4bafd325cce12be728c7b4c1c9d4aa65f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Dec  3 18:04:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-baf3cf932b23e222ae2bbe16f8d5631955efceb81f40caae5d043377310515ca-merged.mount: Deactivated successfully.
Dec  3 18:04:43 compute-0 podman[210198]: 2025-12-03 18:04:43.435095075 +0000 UTC m=+2.470772766 container remove 152f75a5185680b6d28b6f39e5b58ce4bafd325cce12be728c7b4c1c9d4aa65f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_neumann, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:04:43 compute-0 systemd[1]: libpod-conmon-152f75a5185680b6d28b6f39e5b58ce4bafd325cce12be728c7b4c1c9d4aa65f.scope: Deactivated successfully.
Dec  3 18:04:43 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:04:43 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:04:43 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:04:43 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:04:43 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) v1
Dec  3 18:04:43 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Dec  3 18:04:43 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) v1
Dec  3 18:04:43 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Dec  3 18:04:43 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) v1
Dec  3 18:04:43 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Dec  3 18:04:43 compute-0 ceph-mgr[193091]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43688k
Dec  3 18:04:43 compute-0 ceph-mgr[193091]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43688k
Dec  3 18:04:43 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Dec  3 18:04:43 compute-0 ceph-mgr[193091]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44737331: error parsing value: Value '44737331' is below minimum 939524096
Dec  3 18:04:43 compute-0 ceph-mgr[193091]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44737331: error parsing value: Value '44737331' is below minimum 939524096
Dec  3 18:04:43 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:04:43 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:04:43 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 18:04:43 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:04:43 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 18:04:43 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:04:43 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 8e17b846-d43b-4276-827a-5c9a12ac7bd7 does not exist
Dec  3 18:04:43 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev d0990ac7-f95c-423f-8fe5-e5183d17a6ae does not exist
Dec  3 18:04:43 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 9d2b56ec-6737-4f84-8d76-f8c001dac50c does not exist
Dec  3 18:04:43 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 18:04:43 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 18:04:43 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 18:04:43 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:04:43 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:04:43 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:04:43 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:04:43 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:04:43 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Dec  3 18:04:43 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Dec  3 18:04:43 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Dec  3 18:04:43 compute-0 ceph-mon[192802]: Adjusting osd_memory_target on compute-0 to 43688k
Dec  3 18:04:43 compute-0 ceph-mon[192802]: Unable to set osd_memory_target on compute-0 to 44737331: error parsing value: Value '44737331' is below minimum 939524096
Dec  3 18:04:43 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:04:43 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:04:43 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:04:43 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Dec  3 18:04:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:04:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:04:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:04:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:04:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:04:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:04:44 compute-0 podman[212614]: 2025-12-03 18:04:44.421475554 +0000 UTC m=+0.064641736 container create 42245adb9a8c34fadb2f310d71a74761a9bcb48330365fc425dedcd17ec6a7e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_proskuriakova, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  3 18:04:44 compute-0 systemd[1]: Started libpod-conmon-42245adb9a8c34fadb2f310d71a74761a9bcb48330365fc425dedcd17ec6a7e4.scope.
Dec  3 18:04:44 compute-0 podman[212614]: 2025-12-03 18:04:44.397403012 +0000 UTC m=+0.040569274 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:04:44 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:04:44 compute-0 podman[212614]: 2025-12-03 18:04:44.537109015 +0000 UTC m=+0.180275227 container init 42245adb9a8c34fadb2f310d71a74761a9bcb48330365fc425dedcd17ec6a7e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  3 18:04:44 compute-0 podman[212614]: 2025-12-03 18:04:44.548696926 +0000 UTC m=+0.191863148 container start 42245adb9a8c34fadb2f310d71a74761a9bcb48330365fc425dedcd17ec6a7e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_proskuriakova, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  3 18:04:44 compute-0 podman[212614]: 2025-12-03 18:04:44.555261735 +0000 UTC m=+0.198427937 container attach 42245adb9a8c34fadb2f310d71a74761a9bcb48330365fc425dedcd17ec6a7e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507)
Dec  3 18:04:44 compute-0 vigorous_proskuriakova[212631]: 167 167
Dec  3 18:04:44 compute-0 systemd[1]: libpod-42245adb9a8c34fadb2f310d71a74761a9bcb48330365fc425dedcd17ec6a7e4.scope: Deactivated successfully.
Dec  3 18:04:44 compute-0 podman[212614]: 2025-12-03 18:04:44.557424047 +0000 UTC m=+0.200590259 container died 42245adb9a8c34fadb2f310d71a74761a9bcb48330365fc425dedcd17ec6a7e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_proskuriakova, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  3 18:04:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-d3a52cfa0a28e9c5f9cbc0dfd415d7d8266ca1e8fc0c9b5b4a1f0a1b00b9ce49-merged.mount: Deactivated successfully.
Dec  3 18:04:44 compute-0 podman[212614]: 2025-12-03 18:04:44.647966441 +0000 UTC m=+0.291132623 container remove 42245adb9a8c34fadb2f310d71a74761a9bcb48330365fc425dedcd17ec6a7e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:04:44 compute-0 systemd[1]: libpod-conmon-42245adb9a8c34fadb2f310d71a74761a9bcb48330365fc425dedcd17ec6a7e4.scope: Deactivated successfully.
Dec  3 18:04:44 compute-0 podman[212654]: 2025-12-03 18:04:44.835073851 +0000 UTC m=+0.048392652 container create d23d40309eb2490c49e55976b8f16972b7adf9c99ea06dfa6302afc0764da64d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_poitras, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  3 18:04:44 compute-0 systemd[1]: Started libpod-conmon-d23d40309eb2490c49e55976b8f16972b7adf9c99ea06dfa6302afc0764da64d.scope.
Dec  3 18:04:44 compute-0 podman[212654]: 2025-12-03 18:04:44.816185514 +0000 UTC m=+0.029504345 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:04:44 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:04:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aae7e7797f336a19297b3f2070eb38efa852e526c1be6b36e7f2a49446e5be5f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aae7e7797f336a19297b3f2070eb38efa852e526c1be6b36e7f2a49446e5be5f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aae7e7797f336a19297b3f2070eb38efa852e526c1be6b36e7f2a49446e5be5f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aae7e7797f336a19297b3f2070eb38efa852e526c1be6b36e7f2a49446e5be5f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aae7e7797f336a19297b3f2070eb38efa852e526c1be6b36e7f2a49446e5be5f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:44 compute-0 podman[212654]: 2025-12-03 18:04:44.974856897 +0000 UTC m=+0.188175718 container init d23d40309eb2490c49e55976b8f16972b7adf9c99ea06dfa6302afc0764da64d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec  3 18:04:44 compute-0 podman[212654]: 2025-12-03 18:04:44.996170874 +0000 UTC m=+0.209489675 container start d23d40309eb2490c49e55976b8f16972b7adf9c99ea06dfa6302afc0764da64d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_poitras, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec  3 18:04:45 compute-0 podman[212654]: 2025-12-03 18:04:45.001905323 +0000 UTC m=+0.215224164 container attach d23d40309eb2490c49e55976b8f16972b7adf9c99ea06dfa6302afc0764da64d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_poitras, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:04:45 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:04:46 compute-0 frosty_poitras[212671]: --> passed data devices: 0 physical, 3 LVM
Dec  3 18:04:46 compute-0 frosty_poitras[212671]: --> relative data size: 1.0
Dec  3 18:04:46 compute-0 frosty_poitras[212671]: --> All data devices are unavailable
Dec  3 18:04:46 compute-0 systemd[1]: libpod-d23d40309eb2490c49e55976b8f16972b7adf9c99ea06dfa6302afc0764da64d.scope: Deactivated successfully.
Dec  3 18:04:46 compute-0 systemd[1]: libpod-d23d40309eb2490c49e55976b8f16972b7adf9c99ea06dfa6302afc0764da64d.scope: Consumed 1.109s CPU time.
Dec  3 18:04:46 compute-0 podman[212654]: 2025-12-03 18:04:46.179038192 +0000 UTC m=+1.392357033 container died d23d40309eb2490c49e55976b8f16972b7adf9c99ea06dfa6302afc0764da64d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  3 18:04:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-aae7e7797f336a19297b3f2070eb38efa852e526c1be6b36e7f2a49446e5be5f-merged.mount: Deactivated successfully.
Dec  3 18:04:46 compute-0 podman[212654]: 2025-12-03 18:04:46.273602423 +0000 UTC m=+1.486921214 container remove d23d40309eb2490c49e55976b8f16972b7adf9c99ea06dfa6302afc0764da64d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  3 18:04:46 compute-0 systemd[1]: libpod-conmon-d23d40309eb2490c49e55976b8f16972b7adf9c99ea06dfa6302afc0764da64d.scope: Deactivated successfully.
Dec  3 18:04:46 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e16 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:04:47 compute-0 podman[212850]: 2025-12-03 18:04:47.269124445 +0000 UTC m=+0.060610449 container create 4f14480f384a754a980721e5186018faf1684ecc8422e273601e439a36363698 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_curie, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  3 18:04:47 compute-0 systemd[1]: Started libpod-conmon-4f14480f384a754a980721e5186018faf1684ecc8422e273601e439a36363698.scope.
Dec  3 18:04:47 compute-0 podman[212850]: 2025-12-03 18:04:47.247691466 +0000 UTC m=+0.039177470 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:04:47 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:04:47 compute-0 podman[212850]: 2025-12-03 18:04:47.394629465 +0000 UTC m=+0.186115519 container init 4f14480f384a754a980721e5186018faf1684ecc8422e273601e439a36363698 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_curie, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  3 18:04:47 compute-0 podman[212850]: 2025-12-03 18:04:47.40722303 +0000 UTC m=+0.198709064 container start 4f14480f384a754a980721e5186018faf1684ecc8422e273601e439a36363698 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_curie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Dec  3 18:04:47 compute-0 podman[212850]: 2025-12-03 18:04:47.413051821 +0000 UTC m=+0.204537865 container attach 4f14480f384a754a980721e5186018faf1684ecc8422e273601e439a36363698 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_curie, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:04:47 compute-0 keen_curie[212865]: 167 167
Dec  3 18:04:47 compute-0 systemd[1]: libpod-4f14480f384a754a980721e5186018faf1684ecc8422e273601e439a36363698.scope: Deactivated successfully.
Dec  3 18:04:47 compute-0 podman[212850]: 2025-12-03 18:04:47.419409644 +0000 UTC m=+0.210895678 container died 4f14480f384a754a980721e5186018faf1684ecc8422e273601e439a36363698 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_curie, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:04:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc56b8cddd39ea26a88ab909d11779c6b0747e9e29588e65cedf1caeff0ffa35-merged.mount: Deactivated successfully.
Dec  3 18:04:47 compute-0 podman[212850]: 2025-12-03 18:04:47.484707876 +0000 UTC m=+0.276193870 container remove 4f14480f384a754a980721e5186018faf1684ecc8422e273601e439a36363698 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_curie, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:04:47 compute-0 systemd[1]: libpod-conmon-4f14480f384a754a980721e5186018faf1684ecc8422e273601e439a36363698.scope: Deactivated successfully.
Dec  3 18:04:47 compute-0 podman[212887]: 2025-12-03 18:04:47.673518269 +0000 UTC m=+0.051272883 container create cb4ea686076e89944345f8a5a033b51aed9f9f8e8da5dce46d5972e14b16c76d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_cerf, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec  3 18:04:47 compute-0 systemd[1]: Started libpod-conmon-cb4ea686076e89944345f8a5a033b51aed9f9f8e8da5dce46d5972e14b16c76d.scope.
Dec  3 18:04:47 compute-0 podman[212887]: 2025-12-03 18:04:47.650155533 +0000 UTC m=+0.027910177 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:04:47 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:04:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a72c61044082f3a95d77550379f823c54769517d92950a3980f52dd8cc00353c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:47 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:04:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a72c61044082f3a95d77550379f823c54769517d92950a3980f52dd8cc00353c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a72c61044082f3a95d77550379f823c54769517d92950a3980f52dd8cc00353c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a72c61044082f3a95d77550379f823c54769517d92950a3980f52dd8cc00353c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:47 compute-0 podman[212887]: 2025-12-03 18:04:47.811823389 +0000 UTC m=+0.189578033 container init cb4ea686076e89944345f8a5a033b51aed9f9f8e8da5dce46d5972e14b16c76d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_cerf, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec  3 18:04:47 compute-0 podman[212887]: 2025-12-03 18:04:47.837753527 +0000 UTC m=+0.215508141 container start cb4ea686076e89944345f8a5a033b51aed9f9f8e8da5dce46d5972e14b16c76d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_cerf, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:04:47 compute-0 podman[212887]: 2025-12-03 18:04:47.842739558 +0000 UTC m=+0.220494212 container attach cb4ea686076e89944345f8a5a033b51aed9f9f8e8da5dce46d5972e14b16c76d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_cerf, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]: {
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:    "0": [
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:        {
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:            "devices": [
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:                "/dev/loop3"
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:            ],
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:            "lv_name": "ceph_lv0",
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:            "lv_size": "21470642176",
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=973fbbc8-5aff-4a53-bee8-42e5a6788dd6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:            "lv_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:            "name": "ceph_lv0",
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:            "tags": {
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:                "ceph.block_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:                "ceph.cluster_name": "ceph",
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:                "ceph.crush_device_class": "",
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:                "ceph.encrypted": "0",
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:                "ceph.osd_fsid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:                "ceph.osd_id": "0",
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:                "ceph.type": "block",
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:                "ceph.vdo": "0"
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:            },
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:            "type": "block",
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:            "vg_name": "ceph_vg0"
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:        }
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:    ],
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:    "1": [
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:        {
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:            "devices": [
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:                "/dev/loop4"
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:            ],
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:            "lv_name": "ceph_lv1",
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:            "lv_size": "21470642176",
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1e2b0083-5293-47cb-a3d1-bc27cedc4ede,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:            "lv_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:            "name": "ceph_lv1",
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:            "tags": {
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:                "ceph.block_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:                "ceph.cluster_name": "ceph",
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:                "ceph.crush_device_class": "",
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:                "ceph.encrypted": "0",
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:                "ceph.osd_fsid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:                "ceph.osd_id": "1",
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:                "ceph.type": "block",
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:                "ceph.vdo": "0"
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:            },
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:            "type": "block",
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:            "vg_name": "ceph_vg1"
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:        }
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:    ],
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:    "2": [
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:        {
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:            "devices": [
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:                "/dev/loop5"
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:            ],
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:            "lv_name": "ceph_lv2",
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:            "lv_size": "21470642176",
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2abec9de-afba-437e-9a17-384a1dd8cd50,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:            "lv_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:            "name": "ceph_lv2",
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:            "tags": {
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:                "ceph.block_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:                "ceph.cluster_name": "ceph",
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:                "ceph.crush_device_class": "",
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:                "ceph.encrypted": "0",
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:                "ceph.osd_fsid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:                "ceph.osd_id": "2",
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:                "ceph.type": "block",
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:                "ceph.vdo": "0"
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:            },
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:            "type": "block",
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:            "vg_name": "ceph_vg2"
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:        }
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]:    ]
Dec  3 18:04:48 compute-0 optimistic_cerf[212903]: }
Dec  3 18:04:48 compute-0 systemd[1]: libpod-cb4ea686076e89944345f8a5a033b51aed9f9f8e8da5dce46d5972e14b16c76d.scope: Deactivated successfully.
Dec  3 18:04:48 compute-0 podman[212887]: 2025-12-03 18:04:48.620434993 +0000 UTC m=+0.998189617 container died cb4ea686076e89944345f8a5a033b51aed9f9f8e8da5dce46d5972e14b16c76d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_cerf, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  3 18:04:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-a72c61044082f3a95d77550379f823c54769517d92950a3980f52dd8cc00353c-merged.mount: Deactivated successfully.
Dec  3 18:04:48 compute-0 podman[212887]: 2025-12-03 18:04:48.749216042 +0000 UTC m=+1.126970646 container remove cb4ea686076e89944345f8a5a033b51aed9f9f8e8da5dce46d5972e14b16c76d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_cerf, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True)
Dec  3 18:04:48 compute-0 systemd[1]: libpod-conmon-cb4ea686076e89944345f8a5a033b51aed9f9f8e8da5dce46d5972e14b16c76d.scope: Deactivated successfully.
Dec  3 18:04:48 compute-0 podman[212920]: 2025-12-03 18:04:48.792496061 +0000 UTC m=+0.131578558 container health_status 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., managed_by=edpm_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=minimal rhel9, config_id=edpm, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, io.buildah.version=1.33.7, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec  3 18:04:48 compute-0 podman[212926]: 2025-12-03 18:04:48.805576447 +0000 UTC m=+0.121185555 container health_status ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=base rhel9, container_name=kepler, managed_by=edpm_ansible, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.buildah.version=1.29.0, version=9.4, architecture=x86_64, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9)
Dec  3 18:04:48 compute-0 podman[212925]: 2025-12-03 18:04:48.816196975 +0000 UTC m=+0.150566098 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  3 18:04:48 compute-0 podman[212913]: 2025-12-03 18:04:48.819426503 +0000 UTC m=+0.160086988 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:04:48 compute-0 podman[212922]: 2025-12-03 18:04:48.883133186 +0000 UTC m=+0.218316769 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Dec  3 18:04:48 compute-0 rsyslogd[188590]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  3 18:04:49 compute-0 podman[213158]: 2025-12-03 18:04:49.469914568 +0000 UTC m=+0.056184742 container create 1b4921595fca06b33b415dea77544966fee8f5c67a9df7337f694a85da508274 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_hoover, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  3 18:04:49 compute-0 systemd[1]: Started libpod-conmon-1b4921595fca06b33b415dea77544966fee8f5c67a9df7337f694a85da508274.scope.
Dec  3 18:04:49 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:04:49 compute-0 podman[213158]: 2025-12-03 18:04:49.453538831 +0000 UTC m=+0.039809025 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:04:49 compute-0 podman[213158]: 2025-12-03 18:04:49.552247852 +0000 UTC m=+0.138518026 container init 1b4921595fca06b33b415dea77544966fee8f5c67a9df7337f694a85da508274 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_hoover, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:04:49 compute-0 podman[213158]: 2025-12-03 18:04:49.561759452 +0000 UTC m=+0.148029626 container start 1b4921595fca06b33b415dea77544966fee8f5c67a9df7337f694a85da508274 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_hoover, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:04:49 compute-0 relaxed_hoover[213176]: 167 167
Dec  3 18:04:49 compute-0 podman[213158]: 2025-12-03 18:04:49.568020854 +0000 UTC m=+0.154291048 container attach 1b4921595fca06b33b415dea77544966fee8f5c67a9df7337f694a85da508274 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_hoover, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  3 18:04:49 compute-0 systemd[1]: libpod-1b4921595fca06b33b415dea77544966fee8f5c67a9df7337f694a85da508274.scope: Deactivated successfully.
Dec  3 18:04:49 compute-0 podman[213158]: 2025-12-03 18:04:49.569685814 +0000 UTC m=+0.155955988 container died 1b4921595fca06b33b415dea77544966fee8f5c67a9df7337f694a85da508274 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_hoover, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:04:49 compute-0 podman[213173]: 2025-12-03 18:04:49.597702963 +0000 UTC m=+0.080510741 container health_status f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 18:04:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-794359a700467c55e63da1543e5aafc3622abfc4651be6a2b2f915c84ed17157-merged.mount: Deactivated successfully.
Dec  3 18:04:49 compute-0 podman[213158]: 2025-12-03 18:04:49.628052978 +0000 UTC m=+0.214323172 container remove 1b4921595fca06b33b415dea77544966fee8f5c67a9df7337f694a85da508274 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_hoover, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec  3 18:04:49 compute-0 systemd[1]: libpod-conmon-1b4921595fca06b33b415dea77544966fee8f5c67a9df7337f694a85da508274.scope: Deactivated successfully.
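relaxed_hoover above is one of cephadm's short-lived helper containers: created, started, prints "167 167", and is torn down within a second. The output is consistent with cephadm resolving the uid and gid of the ceph user inside the image (167:167 in these builds), plausibly by stat-ing /var/lib/ceph; a sketch of an equivalent one-shot probe under that assumption:

    # Print the owner uid/gid of /var/lib/ceph baked into the image,
    # mirroring relaxed_hoover's "167 167" output.
    podman run --rm --entrypoint stat \
        quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 \
        -c '%u %g' /var/lib/ceph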
Dec  3 18:04:49 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:04:49 compute-0 podman[213220]: 2025-12-03 18:04:49.818647564 +0000 UTC m=+0.053527967 container create a922fa22be4f5544470b3464349b036e06a2402df61b2e8652873bcd4a88005e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_banzai, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec  3 18:04:49 compute-0 systemd[1]: Started libpod-conmon-a922fa22be4f5544470b3464349b036e06a2402df61b2e8652873bcd4a88005e.scope.
Dec  3 18:04:49 compute-0 podman[213220]: 2025-12-03 18:04:49.800812322 +0000 UTC m=+0.035692745 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:04:49 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:04:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7071a8d458c7daed80be6e5693e9c97de810b1d49c03ac0de5bec1320d0486f9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7071a8d458c7daed80be6e5693e9c97de810b1d49c03ac0de5bec1320d0486f9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7071a8d458c7daed80be6e5693e9c97de810b1d49c03ac0de5bec1320d0486f9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7071a8d458c7daed80be6e5693e9c97de810b1d49c03ac0de5bec1320d0486f9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:49 compute-0 podman[213220]: 2025-12-03 18:04:49.945123927 +0000 UTC m=+0.180004360 container init a922fa22be4f5544470b3464349b036e06a2402df61b2e8652873bcd4a88005e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_banzai, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  3 18:04:49 compute-0 podman[213220]: 2025-12-03 18:04:49.959609199 +0000 UTC m=+0.194489632 container start a922fa22be4f5544470b3464349b036e06a2402df61b2e8652873bcd4a88005e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_banzai, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:04:49 compute-0 podman[213220]: 2025-12-03 18:04:49.966664649 +0000 UTC m=+0.201545082 container attach a922fa22be4f5544470b3464349b036e06a2402df61b2e8652873bcd4a88005e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_banzai, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec  3 18:04:51 compute-0 nice_banzai[213237]: {
Dec  3 18:04:51 compute-0 nice_banzai[213237]:    "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
Dec  3 18:04:51 compute-0 nice_banzai[213237]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:04:51 compute-0 nice_banzai[213237]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 18:04:51 compute-0 nice_banzai[213237]:        "osd_id": 1,
Dec  3 18:04:51 compute-0 nice_banzai[213237]:        "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:04:51 compute-0 nice_banzai[213237]:        "type": "bluestore"
Dec  3 18:04:51 compute-0 nice_banzai[213237]:    },
Dec  3 18:04:51 compute-0 nice_banzai[213237]:    "2abec9de-afba-437e-9a17-384a1dd8cd50": {
Dec  3 18:04:51 compute-0 nice_banzai[213237]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:04:51 compute-0 nice_banzai[213237]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 18:04:51 compute-0 nice_banzai[213237]:        "osd_id": 2,
Dec  3 18:04:51 compute-0 nice_banzai[213237]:        "osd_uuid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:04:51 compute-0 nice_banzai[213237]:        "type": "bluestore"
Dec  3 18:04:51 compute-0 nice_banzai[213237]:    },
Dec  3 18:04:51 compute-0 nice_banzai[213237]:    "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": {
Dec  3 18:04:51 compute-0 nice_banzai[213237]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:04:51 compute-0 nice_banzai[213237]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 18:04:51 compute-0 nice_banzai[213237]:        "osd_id": 0,
Dec  3 18:04:51 compute-0 nice_banzai[213237]:        "osd_uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:04:51 compute-0 nice_banzai[213237]:        "type": "bluestore"
Dec  3 18:04:51 compute-0 nice_banzai[213237]:    }
Dec  3 18:04:51 compute-0 nice_banzai[213237]: }
Dec  3 18:04:51 compute-0 systemd[1]: libpod-a922fa22be4f5544470b3464349b036e06a2402df61b2e8652873bcd4a88005e.scope: Deactivated successfully.
Dec  3 18:04:51 compute-0 systemd[1]: libpod-a922fa22be4f5544470b3464349b036e06a2402df61b2e8652873bcd4a88005e.scope: Consumed 1.136s CPU time.
Dec  3 18:04:51 compute-0 podman[213220]: 2025-12-03 18:04:51.093296146 +0000 UTC m=+1.328176549 container died a922fa22be4f5544470b3464349b036e06a2402df61b2e8652873bcd4a88005e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  3 18:04:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-7071a8d458c7daed80be6e5693e9c97de810b1d49c03ac0de5bec1320d0486f9-merged.mount: Deactivated successfully.
Dec  3 18:04:51 compute-0 podman[213220]: 2025-12-03 18:04:51.57653977 +0000 UTC m=+1.811420173 container remove a922fa22be4f5544470b3464349b036e06a2402df61b2e8652873bcd4a88005e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  3 18:04:51 compute-0 systemd[1]: libpod-conmon-a922fa22be4f5544470b3464349b036e06a2402df61b2e8652873bcd4a88005e.scope: Deactivated successfully.
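The JSON printed by nice_banzai enumerates this host's three BlueStore OSDs, keyed by osd_uuid, each carrying the cluster fsid and its backing LVM device (ceph_vg0-2/ceph_lv0-2). The shape matches ceph-volume's raw list report; a hedged way to reproduce it outside cephadm's helper container, assuming the cephadm CLI and the same image are available on the host:

    # List BlueStore OSDs on this host as JSON via a throwaway
    # ceph container (arguments after -- go to ceph-volume).
    cephadm --image quay.io/ceph/ceph:v18 ceph-volume -- raw list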
Dec  3 18:04:51 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:04:51 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:04:51 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e16 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:04:51 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:04:51 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:04:51 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:04:52 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0) v1
Dec  3 18:04:52 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:04:52 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0) v1
Dec  3 18:04:52 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:04:52 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0) v1
Dec  3 18:04:52 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:04:52 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0) v1
Dec  3 18:04:52 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:04:52 compute-0 ceph-mgr[193091]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Dec  3 18:04:52 compute-0 ceph-mgr[193091]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Dec  3 18:04:52 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Dec  3 18:04:52 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec  3 18:04:52 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Dec  3 18:04:52 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec  3 18:04:52 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:04:52 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:04:52 compute-0 ceph-mgr[193091]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Dec  3 18:04:52 compute-0 ceph-mgr[193091]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
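The sequence above is cephadm's serve loop reconfiguring mon.compute-0: it fetches the mon keyring (auth get), reads public_network, and regenerates the minimal ceph.conf that is written into the daemon's directory. The same minimal conf can be printed by hand with the mon command seen in the audit entries; a sketch, assuming the usual admin keyring path:

    # Print the minimal cluster config cephadm distributes to daemons
    # (the "config generate-minimal-conf" dispatched above).
    ceph -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        config generate-minimal-conf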
Dec  3 18:04:52 compute-0 podman[213445]: 2025-12-03 18:04:52.849964483 +0000 UTC m=+0.048677131 container create 8616060d1dd2e77b9dd94ee915d984cb6d77bf7e6f9b268018e0621636fd0c0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_jang, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:04:52 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:04:52 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:04:52 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:04:52 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:04:52 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:04:52 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:04:52 compute-0 ceph-mon[192802]: Reconfiguring mon.compute-0 (unknown last config time)...
Dec  3 18:04:52 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec  3 18:04:52 compute-0 ceph-mon[192802]: Reconfiguring daemon mon.compute-0 on compute-0
Dec  3 18:04:52 compute-0 systemd[1]: Started libpod-conmon-8616060d1dd2e77b9dd94ee915d984cb6d77bf7e6f9b268018e0621636fd0c0b.scope.
Dec  3 18:04:52 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:04:52 compute-0 podman[213445]: 2025-12-03 18:04:52.83254684 +0000 UTC m=+0.031259508 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:04:52 compute-0 podman[213445]: 2025-12-03 18:04:52.941777136 +0000 UTC m=+0.140489814 container init 8616060d1dd2e77b9dd94ee915d984cb6d77bf7e6f9b268018e0621636fd0c0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_jang, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  3 18:04:52 compute-0 podman[213445]: 2025-12-03 18:04:52.949736519 +0000 UTC m=+0.148449167 container start 8616060d1dd2e77b9dd94ee915d984cb6d77bf7e6f9b268018e0621636fd0c0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:04:52 compute-0 podman[213445]: 2025-12-03 18:04:52.953931201 +0000 UTC m=+0.152643889 container attach 8616060d1dd2e77b9dd94ee915d984cb6d77bf7e6f9b268018e0621636fd0c0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_jang, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  3 18:04:52 compute-0 awesome_jang[213461]: 167 167
Dec  3 18:04:52 compute-0 systemd[1]: libpod-8616060d1dd2e77b9dd94ee915d984cb6d77bf7e6f9b268018e0621636fd0c0b.scope: Deactivated successfully.
Dec  3 18:04:52 compute-0 podman[213445]: 2025-12-03 18:04:52.956109624 +0000 UTC m=+0.154822272 container died 8616060d1dd2e77b9dd94ee915d984cb6d77bf7e6f9b268018e0621636fd0c0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  3 18:04:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-c359cac73718183bfdbf3122eac727e2a6fa6d87172161463a5f9ac7dd651b3a-merged.mount: Deactivated successfully.
Dec  3 18:04:52 compute-0 podman[213445]: 2025-12-03 18:04:52.99769646 +0000 UTC m=+0.196409108 container remove 8616060d1dd2e77b9dd94ee915d984cb6d77bf7e6f9b268018e0621636fd0c0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_jang, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec  3 18:04:53 compute-0 systemd[1]: libpod-conmon-8616060d1dd2e77b9dd94ee915d984cb6d77bf7e6f9b268018e0621636fd0c0b.scope: Deactivated successfully.
Dec  3 18:04:53 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:04:53 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:04:53 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:04:53 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:04:53 compute-0 ceph-mgr[193091]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.etccde (unknown last config time)...
Dec  3 18:04:53 compute-0 ceph-mgr[193091]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.etccde (unknown last config time)...
Dec  3 18:04:53 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.etccde", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Dec  3 18:04:53 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.etccde", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec  3 18:04:53 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Dec  3 18:04:53 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "mgr services"}]: dispatch
Dec  3 18:04:53 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:04:53 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:04:53 compute-0 ceph-mgr[193091]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.etccde on compute-0
Dec  3 18:04:53 compute-0 ceph-mgr[193091]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.etccde on compute-0
Dec  3 18:04:53 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:04:54 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:04:54 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:04:54 compute-0 ceph-mon[192802]: Reconfiguring mgr.compute-0.etccde (unknown last config time)...
Dec  3 18:04:54 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.etccde", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec  3 18:04:54 compute-0 ceph-mon[192802]: Reconfiguring daemon mgr.compute-0.etccde on compute-0
Dec  3 18:04:54 compute-0 podman[213594]: 2025-12-03 18:04:54.279633249 +0000 UTC m=+0.033132634 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:04:54 compute-0 podman[213594]: 2025-12-03 18:04:54.437274458 +0000 UTC m=+0.190773823 container create 0488fa595aba207fe858977485fe79d828beeb40c295c347238c25a29e3061f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_goldwasser, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec  3 18:04:54 compute-0 systemd[1]: Started libpod-conmon-0488fa595aba207fe858977485fe79d828beeb40c295c347238c25a29e3061f7.scope.
Dec  3 18:04:54 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:04:54 compute-0 podman[213594]: 2025-12-03 18:04:54.674063853 +0000 UTC m=+0.427563308 container init 0488fa595aba207fe858977485fe79d828beeb40c295c347238c25a29e3061f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_goldwasser, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:04:54 compute-0 podman[213594]: 2025-12-03 18:04:54.695583554 +0000 UTC m=+0.449082959 container start 0488fa595aba207fe858977485fe79d828beeb40c295c347238c25a29e3061f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:04:54 compute-0 eloquent_goldwasser[213609]: 167 167
Dec  3 18:04:54 compute-0 podman[213594]: 2025-12-03 18:04:54.704942071 +0000 UTC m=+0.458441526 container attach 0488fa595aba207fe858977485fe79d828beeb40c295c347238c25a29e3061f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:04:54 compute-0 systemd[1]: libpod-0488fa595aba207fe858977485fe79d828beeb40c295c347238c25a29e3061f7.scope: Deactivated successfully.
Dec  3 18:04:54 compute-0 podman[213594]: 2025-12-03 18:04:54.706033266 +0000 UTC m=+0.459532671 container died 0488fa595aba207fe858977485fe79d828beeb40c295c347238c25a29e3061f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:04:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e8c9a372238a43809b3a4f63865227a8191dfb876f1a48d45a79dfe7581977d-merged.mount: Deactivated successfully.
Dec  3 18:04:54 compute-0 podman[213594]: 2025-12-03 18:04:54.766613944 +0000 UTC m=+0.520113309 container remove 0488fa595aba207fe858977485fe79d828beeb40c295c347238c25a29e3061f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_goldwasser, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:04:54 compute-0 systemd[1]: libpod-conmon-0488fa595aba207fe858977485fe79d828beeb40c295c347238c25a29e3061f7.scope: Deactivated successfully.
Dec  3 18:04:54 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:04:54 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:04:54 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:04:54 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:04:55 compute-0 python3[213730]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c1caf3ba-b2a5-5005-a11e-e955c344dccc -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
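The ansible task above shells out (_uses_shell=True) to a one-shot ceph client container and pipes the cluster status JSON through jq to count the up OSDs. The same pipeline, unwrapped for readability with the flags exactly as logged:

    podman run --rm --net=host --ipc=host \
        --volume /etc/ceph:/etc/ceph:z \
        --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
        --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z \
        --entrypoint ceph quay.io/ceph/ceph:v18 \
        --fsid c1caf3ba-b2a5-5005-a11e-e955c344dccc \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        status --format json | jq .osdmap.num_up_osds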
Dec  3 18:04:55 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:04:55 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:04:55 compute-0 podman[213754]: 2025-12-03 18:04:55.387674136 +0000 UTC m=+0.079287061 container create b1aec102123cc9d83e7828d384b437f2d77dab823000cc775fae339813a664d7 (image=quay.io/ceph/ceph:v18, name=hardcore_varahamihira, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec  3 18:04:55 compute-0 systemd[1]: Started libpod-conmon-b1aec102123cc9d83e7828d384b437f2d77dab823000cc775fae339813a664d7.scope.
Dec  3 18:04:55 compute-0 podman[213754]: 2025-12-03 18:04:55.36225199 +0000 UTC m=+0.053864935 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:04:55 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:04:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea48135070ae30c7bfc400f3d0ab2ec033fcb819c0a687e1d49718094332f423/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea48135070ae30c7bfc400f3d0ab2ec033fcb819c0a687e1d49718094332f423/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea48135070ae30c7bfc400f3d0ab2ec033fcb819c0a687e1d49718094332f423/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:55 compute-0 podman[213754]: 2025-12-03 18:04:55.501035241 +0000 UTC m=+0.192648176 container init b1aec102123cc9d83e7828d384b437f2d77dab823000cc775fae339813a664d7 (image=quay.io/ceph/ceph:v18, name=hardcore_varahamihira, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec  3 18:04:55 compute-0 podman[213754]: 2025-12-03 18:04:55.514862106 +0000 UTC m=+0.206475061 container start b1aec102123cc9d83e7828d384b437f2d77dab823000cc775fae339813a664d7 (image=quay.io/ceph/ceph:v18, name=hardcore_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  3 18:04:55 compute-0 podman[213754]: 2025-12-03 18:04:55.523086286 +0000 UTC m=+0.214699221 container attach b1aec102123cc9d83e7828d384b437f2d77dab823000cc775fae339813a664d7 (image=quay.io/ceph/ceph:v18, name=hardcore_varahamihira, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  3 18:04:55 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:04:55 compute-0 podman[213861]: 2025-12-03 18:04:55.988328633 +0000 UTC m=+0.105582857 container exec c4418ca0ee5df95c133db330bc8714b98e7c86be83b29540d0d4d94c3c723743 (image=quay.io/ceph/ceph:v18, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mon-compute-0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:04:56 compute-0 podman[213861]: 2025-12-03 18:04:56.150374688 +0000 UTC m=+0.267628922 container exec_died c4418ca0ee5df95c133db330bc8714b98e7c86be83b29540d0d4d94c3c723743 (image=quay.io/ceph/ceph:v18, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec  3 18:04:56 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Dec  3 18:04:56 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1151659692' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec  3 18:04:56 compute-0 hardcore_varahamihira[213792]: 
Dec  3 18:04:56 compute-0 hardcore_varahamihira[213792]: {"fsid":"c1caf3ba-b2a5-5005-a11e-e955c344dccc","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":149,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":16,"num_osds":3,"num_up_osds":3,"osd_up_since":1764785081,"num_in_osds":3,"osd_in_since":1764785045,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":1}],"num_pgs":1,"num_pools":1,"num_objects":2,"data_bytes":459280,"bytes_used":502738944,"bytes_avail":63909187584,"bytes_total":64411926528},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-12-03T18:04:15.769428+0000","services":{}},"progress_events":{}}
Dec  3 18:04:56 compute-0 systemd[1]: libpod-b1aec102123cc9d83e7828d384b437f2d77dab823000cc775fae339813a664d7.scope: Deactivated successfully.
Dec  3 18:04:56 compute-0 podman[213754]: 2025-12-03 18:04:56.197973441 +0000 UTC m=+0.889586356 container died b1aec102123cc9d83e7828d384b437f2d77dab823000cc775fae339813a664d7 (image=quay.io/ceph/ceph:v18, name=hardcore_varahamihira, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  3 18:04:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea48135070ae30c7bfc400f3d0ab2ec033fcb819c0a687e1d49718094332f423-merged.mount: Deactivated successfully.
Dec  3 18:04:56 compute-0 podman[213754]: 2025-12-03 18:04:56.252577244 +0000 UTC m=+0.944190169 container remove b1aec102123cc9d83e7828d384b437f2d77dab823000cc775fae339813a664d7 (image=quay.io/ceph/ceph:v18, name=hardcore_varahamihira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Dec  3 18:04:56 compute-0 systemd[1]: libpod-conmon-b1aec102123cc9d83e7828d384b437f2d77dab823000cc775fae339813a664d7.scope: Deactivated successfully.
Dec  3 18:04:56 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:04:56 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:04:56 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:04:56 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:04:56 compute-0 python3[214007]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c1caf3ba-b2a5-5005-a11e-e955c344dccc -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
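The follow-up task creates the 'vms' pool through the same one-shot container pattern, this time without a shell (_uses_shell=False). Reduced to the ceph CLI invocation it runs inside the container:

    # Create the replicated 'vms' pool with the PG autoscaler enabled,
    # using the 'replicated_rule' CRUSH rule, as passed by the playbook.
    ceph --fsid c1caf3ba-b2a5-5005-a11e-e955c344dccc \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        osd pool create vms replicated_rule --autoscale-mode on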
Dec  3 18:04:56 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:04:56 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:04:56 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 18:04:56 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:04:56 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 18:04:56 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:04:56 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev c4628fb9-c3e8-44f1-8f78-1c18a9d9ac08 does not exist
Dec  3 18:04:56 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 53a3a82f-ea8f-4bb1-b080-d950fc148f78 does not exist
Dec  3 18:04:56 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev b4d4213d-fa44-4763-a87c-0be7f4b540ed does not exist
Dec  3 18:04:56 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 18:04:56 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 18:04:56 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 18:04:56 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:04:56 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:04:56 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:04:56 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e16 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:04:56 compute-0 podman[214017]: 2025-12-03 18:04:56.857976306 +0000 UTC m=+0.048050244 container create 4316c658534b30a77ec1195a6038143e83ee4ed37b93b897e828d354b47e7dcf (image=quay.io/ceph/ceph:v18, name=friendly_cori, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  3 18:04:56 compute-0 systemd[1]: Started libpod-conmon-4316c658534b30a77ec1195a6038143e83ee4ed37b93b897e828d354b47e7dcf.scope.
Dec  3 18:04:56 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:04:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a80ad911332d7d8c47e0e284ff26666c5fa0e9fff4052c726d78da6fd3b6da00/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a80ad911332d7d8c47e0e284ff26666c5fa0e9fff4052c726d78da6fd3b6da00/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:56 compute-0 podman[214017]: 2025-12-03 18:04:56.841698503 +0000 UTC m=+0.031772451 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:04:56 compute-0 podman[214017]: 2025-12-03 18:04:56.965848639 +0000 UTC m=+0.155922607 container init 4316c658534b30a77ec1195a6038143e83ee4ed37b93b897e828d354b47e7dcf (image=quay.io/ceph/ceph:v18, name=friendly_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:04:56 compute-0 podman[214017]: 2025-12-03 18:04:56.975327799 +0000 UTC m=+0.165401757 container start 4316c658534b30a77ec1195a6038143e83ee4ed37b93b897e828d354b47e7dcf (image=quay.io/ceph/ceph:v18, name=friendly_cori, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:04:56 compute-0 podman[214017]: 2025-12-03 18:04:56.981053507 +0000 UTC m=+0.171127455 container attach 4316c658534b30a77ec1195a6038143e83ee4ed37b93b897e828d354b47e7dcf (image=quay.io/ceph/ceph:v18, name=friendly_cori, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:04:57 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:04:57 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:04:57 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:04:57 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:04:57 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:04:57 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Dec  3 18:04:57 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1341612024' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  3 18:04:57 compute-0 podman[214189]: 2025-12-03 18:04:57.644888205 +0000 UTC m=+0.081080374 container create d630753c8ca574c46bb60ba117276732067f869e6a5c38f232474aeece29f8a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_tesla, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:04:57 compute-0 podman[214189]: 2025-12-03 18:04:57.610851171 +0000 UTC m=+0.047043390 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:04:57 compute-0 systemd[1]: Started libpod-conmon-d630753c8ca574c46bb60ba117276732067f869e6a5c38f232474aeece29f8a2.scope.
Dec  3 18:04:57 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:04:57 compute-0 podman[214189]: 2025-12-03 18:04:57.779797113 +0000 UTC m=+0.215989302 container init d630753c8ca574c46bb60ba117276732067f869e6a5c38f232474aeece29f8a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_tesla, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec  3 18:04:57 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:04:57 compute-0 podman[214189]: 2025-12-03 18:04:57.789032266 +0000 UTC m=+0.225224435 container start d630753c8ca574c46bb60ba117276732067f869e6a5c38f232474aeece29f8a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_tesla, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:04:57 compute-0 podman[214189]: 2025-12-03 18:04:57.795311119 +0000 UTC m=+0.231503288 container attach d630753c8ca574c46bb60ba117276732067f869e6a5c38f232474aeece29f8a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_tesla, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:04:57 compute-0 distracted_tesla[214208]: 167 167
Dec  3 18:04:57 compute-0 systemd[1]: libpod-d630753c8ca574c46bb60ba117276732067f869e6a5c38f232474aeece29f8a2.scope: Deactivated successfully.
Dec  3 18:04:57 compute-0 podman[214189]: 2025-12-03 18:04:57.797554093 +0000 UTC m=+0.233746232 container died d630753c8ca574c46bb60ba117276732067f869e6a5c38f232474aeece29f8a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:04:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-77c7243a2a115e194c33b636b1636ea6b61e75635a47b14f1b00f46b6e55330e-merged.mount: Deactivated successfully.
Dec  3 18:04:57 compute-0 podman[214189]: 2025-12-03 18:04:57.864155356 +0000 UTC m=+0.300347485 container remove d630753c8ca574c46bb60ba117276732067f869e6a5c38f232474aeece29f8a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_tesla, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  3 18:04:57 compute-0 systemd[1]: libpod-conmon-d630753c8ca574c46bb60ba117276732067f869e6a5c38f232474aeece29f8a2.scope: Deactivated successfully.
Dec  3 18:04:58 compute-0 podman[214230]: 2025-12-03 18:04:58.057246583 +0000 UTC m=+0.073882631 container create f74dc7b820febece5df30941f312b3743b939123376a79996113f5e67aff0f0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_robinson, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec  3 18:04:58 compute-0 podman[214230]: 2025-12-03 18:04:58.020181755 +0000 UTC m=+0.036817893 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:04:58 compute-0 systemd[1]: Started libpod-conmon-f74dc7b820febece5df30941f312b3743b939123376a79996113f5e67aff0f0d.scope.
Dec  3 18:04:58 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:04:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ea6121232524592b9eba5eb5d4168650f2105d09b1df21bd01efaf0558f9eab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ea6121232524592b9eba5eb5d4168650f2105d09b1df21bd01efaf0558f9eab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ea6121232524592b9eba5eb5d4168650f2105d09b1df21bd01efaf0558f9eab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ea6121232524592b9eba5eb5d4168650f2105d09b1df21bd01efaf0558f9eab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ea6121232524592b9eba5eb5d4168650f2105d09b1df21bd01efaf0558f9eab/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:58 compute-0 podman[214230]: 2025-12-03 18:04:58.219087363 +0000 UTC m=+0.235723441 container init f74dc7b820febece5df30941f312b3743b939123376a79996113f5e67aff0f0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec  3 18:04:58 compute-0 podman[214230]: 2025-12-03 18:04:58.234752752 +0000 UTC m=+0.251388820 container start f74dc7b820febece5df30941f312b3743b939123376a79996113f5e67aff0f0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_robinson, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:04:58 compute-0 podman[214230]: 2025-12-03 18:04:58.240150203 +0000 UTC m=+0.256786281 container attach f74dc7b820febece5df30941f312b3743b939123376a79996113f5e67aff0f0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_robinson, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:04:58 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Dec  3 18:04:58 compute-0 ceph-mon[192802]: from='client.? 192.168.122.100:0/1341612024' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  3 18:04:58 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1341612024' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  3 18:04:58 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e17 e17: 3 total, 3 up, 3 in
Dec  3 18:04:58 compute-0 friendly_cori[214053]: pool 'vms' created
Dec  3 18:04:58 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 3 up, 3 in
Dec  3 18:04:58 compute-0 systemd[1]: libpod-4316c658534b30a77ec1195a6038143e83ee4ed37b93b897e828d354b47e7dcf.scope: Deactivated successfully.
Dec  3 18:04:58 compute-0 conmon[214053]: conmon 4316c658534b30a77ec1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4316c658534b30a77ec1195a6038143e83ee4ed37b93b897e828d354b47e7dcf.scope/container/memory.events
Dec  3 18:04:58 compute-0 podman[214252]: 2025-12-03 18:04:58.476697863 +0000 UTC m=+0.041053746 container died 4316c658534b30a77ec1195a6038143e83ee4ed37b93b897e828d354b47e7dcf (image=quay.io/ceph/ceph:v18, name=friendly_cori, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec  3 18:04:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-a80ad911332d7d8c47e0e284ff26666c5fa0e9fff4052c726d78da6fd3b6da00-merged.mount: Deactivated successfully.
Dec  3 18:04:58 compute-0 podman[214252]: 2025-12-03 18:04:58.551883353 +0000 UTC m=+0.116239176 container remove 4316c658534b30a77ec1195a6038143e83ee4ed37b93b897e828d354b47e7dcf (image=quay.io/ceph/ceph:v18, name=friendly_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec  3 18:04:58 compute-0 systemd[1]: libpod-conmon-4316c658534b30a77ec1195a6038143e83ee4ed37b93b897e828d354b47e7dcf.scope: Deactivated successfully.
Dec  3 18:04:58 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 17 pg[2.0( empty local-lis/les=0/0 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [2] r=0 lpr=17 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:04:59 compute-0 python3[214292]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c1caf3ba-b2a5-5005-a11e-e955c344dccc -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
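This Ansible task shells out to a one-shot podman container just to run a single `ceph osd pool create`. Where the python-rados binding is available on the host, the same result can be had in-process; a sketch under that assumption, with the pool names taken from the surrounding log (the 'rbd' application tag is an illustrative choice, not confirmed by this log, but tagging something is what clears the POOL_APP_NOT_ENABLED health warning raised below):

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')  # assumed config path
    cluster.connect()

    for pool in ('vms', 'volumes', 'backups'):  # pools created in this log
        if not cluster.pool_exists(pool):
            cluster.create_pool(pool)
        # Tag an application so the pool does not trip POOL_APP_NOT_ENABLED;
        # 'rbd' is an assumed application name for these OpenStack pools.
        cmd = json.dumps({"prefix": "osd pool application enable",
                          "pool": pool, "app": "rbd"})
        cluster.mon_command(cmd, b'')
    cluster.shutdown()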
Dec  3 18:04:59 compute-0 podman[214300]: 2025-12-03 18:04:59.107829368 +0000 UTC m=+0.091471386 container create 9ecef91beb56d7c38c88774f1d7e2e2ec69b39e67af900453ff29dbfec0bfa22 (image=quay.io/ceph/ceph:v18, name=busy_cohen, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:04:59 compute-0 podman[214300]: 2025-12-03 18:04:59.0740741 +0000 UTC m=+0.057716168 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:04:59 compute-0 systemd[1]: Started libpod-conmon-9ecef91beb56d7c38c88774f1d7e2e2ec69b39e67af900453ff29dbfec0bfa22.scope.
Dec  3 18:04:59 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:04:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b5deebd025e78f4365562ab58141610563b7f9b001d0e628780bb6fc9a104cd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b5deebd025e78f4365562ab58141610563b7f9b001d0e628780bb6fc9a104cd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:04:59 compute-0 podman[214300]: 2025-12-03 18:04:59.26275336 +0000 UTC m=+0.246395468 container init 9ecef91beb56d7c38c88774f1d7e2e2ec69b39e67af900453ff29dbfec0bfa22 (image=quay.io/ceph/ceph:v18, name=busy_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:04:59 compute-0 podman[214300]: 2025-12-03 18:04:59.271992135 +0000 UTC m=+0.255634143 container start 9ecef91beb56d7c38c88774f1d7e2e2ec69b39e67af900453ff29dbfec0bfa22 (image=quay.io/ceph/ceph:v18, name=busy_cohen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True)
Dec  3 18:04:59 compute-0 podman[214300]: 2025-12-03 18:04:59.276866142 +0000 UTC m=+0.260508190 container attach 9ecef91beb56d7c38c88774f1d7e2e2ec69b39e67af900453ff29dbfec0bfa22 (image=quay.io/ceph/ceph:v18, name=busy_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:04:59 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Dec  3 18:04:59 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e18 e18: 3 total, 3 up, 3 in
Dec  3 18:04:59 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 3 up, 3 in
Dec  3 18:04:59 compute-0 ceph-mon[192802]: from='client.? 192.168.122.100:0/1341612024' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  3 18:04:59 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 18 pg[2.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [2] r=0 lpr=17 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:04:59 compute-0 funny_robinson[214246]: --> passed data devices: 0 physical, 3 LVM
Dec  3 18:04:59 compute-0 funny_robinson[214246]: --> relative data size: 1.0
Dec  3 18:04:59 compute-0 funny_robinson[214246]: --> All data devices are unavailable
Dec  3 18:04:59 compute-0 systemd[1]: libpod-f74dc7b820febece5df30941f312b3743b939123376a79996113f5e67aff0f0d.scope: Deactivated successfully.
Dec  3 18:04:59 compute-0 systemd[1]: libpod-f74dc7b820febece5df30941f312b3743b939123376a79996113f5e67aff0f0d.scope: Consumed 1.139s CPU time.
Dec  3 18:04:59 compute-0 podman[214230]: 2025-12-03 18:04:59.466089125 +0000 UTC m=+1.482725203 container died f74dc7b820febece5df30941f312b3743b939123376a79996113f5e67aff0f0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec  3 18:04:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ea6121232524592b9eba5eb5d4168650f2105d09b1df21bd01efaf0558f9eab-merged.mount: Deactivated successfully.
Dec  3 18:04:59 compute-0 podman[214230]: 2025-12-03 18:04:59.586068041 +0000 UTC m=+1.602704099 container remove f74dc7b820febece5df30941f312b3743b939123376a79996113f5e67aff0f0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_robinson, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:04:59 compute-0 systemd[1]: libpod-conmon-f74dc7b820febece5df30941f312b3743b939123376a79996113f5e67aff0f0d.scope: Deactivated successfully.
Dec  3 18:04:59 compute-0 podman[158200]: time="2025-12-03T18:04:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:04:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:04:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30682 "" "Go-http-client/1.1"
Dec  3 18:04:59 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v62: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:04:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:04:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6242 "" "Go-http-client/1.1"
Dec  3 18:04:59 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Dec  3 18:04:59 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1109244179' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  3 18:05:00 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Dec  3 18:05:00 compute-0 ceph-mon[192802]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  3 18:05:00 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1109244179' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  3 18:05:00 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e19 e19: 3 total, 3 up, 3 in
Dec  3 18:05:00 compute-0 busy_cohen[214321]: pool 'volumes' created
Dec  3 18:05:00 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 3 up, 3 in
Dec  3 18:05:00 compute-0 ceph-mon[192802]: from='client.? 192.168.122.100:0/1109244179' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  3 18:05:00 compute-0 systemd[1]: libpod-9ecef91beb56d7c38c88774f1d7e2e2ec69b39e67af900453ff29dbfec0bfa22.scope: Deactivated successfully.
Dec  3 18:05:00 compute-0 podman[214300]: 2025-12-03 18:05:00.465546992 +0000 UTC m=+1.449189000 container died 9ecef91beb56d7c38c88774f1d7e2e2ec69b39e67af900453ff29dbfec0bfa22 (image=quay.io/ceph/ceph:v18, name=busy_cohen, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  3 18:05:00 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 19 pg[3.0( empty local-lis/les=0/0 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [1] r=0 lpr=19 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b5deebd025e78f4365562ab58141610563b7f9b001d0e628780bb6fc9a104cd-merged.mount: Deactivated successfully.
Dec  3 18:05:00 compute-0 podman[214300]: 2025-12-03 18:05:00.516950837 +0000 UTC m=+1.500592845 container remove 9ecef91beb56d7c38c88774f1d7e2e2ec69b39e67af900453ff29dbfec0bfa22 (image=quay.io/ceph/ceph:v18, name=busy_cohen, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  3 18:05:00 compute-0 systemd[1]: libpod-conmon-9ecef91beb56d7c38c88774f1d7e2e2ec69b39e67af900453ff29dbfec0bfa22.scope: Deactivated successfully.
Dec  3 18:05:00 compute-0 podman[214513]: 2025-12-03 18:05:00.566500367 +0000 UTC m=+0.059850531 container create 19d268f3d718d5876edfa9acf541c4b9652f26b6ab429b36b20d0fad7f114b7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 18:05:00 compute-0 systemd[1]: Started libpod-conmon-19d268f3d718d5876edfa9acf541c4b9652f26b6ab429b36b20d0fad7f114b7b.scope.
Dec  3 18:05:00 compute-0 podman[214513]: 2025-12-03 18:05:00.545627321 +0000 UTC m=+0.038977535 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:05:00 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:05:00 compute-0 podman[214513]: 2025-12-03 18:05:00.674534353 +0000 UTC m=+0.167884537 container init 19d268f3d718d5876edfa9acf541c4b9652f26b6ab429b36b20d0fad7f114b7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hermann, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:05:00 compute-0 podman[214513]: 2025-12-03 18:05:00.684773302 +0000 UTC m=+0.178123476 container start 19d268f3d718d5876edfa9acf541c4b9652f26b6ab429b36b20d0fad7f114b7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hermann, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec  3 18:05:00 compute-0 podman[214513]: 2025-12-03 18:05:00.689318402 +0000 UTC m=+0.182668566 container attach 19d268f3d718d5876edfa9acf541c4b9652f26b6ab429b36b20d0fad7f114b7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hermann, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec  3 18:05:00 compute-0 practical_hermann[214530]: 167 167
Dec  3 18:05:00 compute-0 systemd[1]: libpod-19d268f3d718d5876edfa9acf541c4b9652f26b6ab429b36b20d0fad7f114b7b.scope: Deactivated successfully.
Dec  3 18:05:00 compute-0 podman[214513]: 2025-12-03 18:05:00.69297314 +0000 UTC m=+0.186323334 container died 19d268f3d718d5876edfa9acf541c4b9652f26b6ab429b36b20d0fad7f114b7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec  3 18:05:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-48f69754b0942f86fce6cc6e306f1d36ba0370aacf8fa5ce9eebe2d67415ed89-merged.mount: Deactivated successfully.
Dec  3 18:05:00 compute-0 podman[214513]: 2025-12-03 18:05:00.769192856 +0000 UTC m=+0.262543030 container remove 19d268f3d718d5876edfa9acf541c4b9652f26b6ab429b36b20d0fad7f114b7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hermann, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec  3 18:05:00 compute-0 systemd[1]: libpod-conmon-19d268f3d718d5876edfa9acf541c4b9652f26b6ab429b36b20d0fad7f114b7b.scope: Deactivated successfully.
Dec  3 18:05:00 compute-0 python3[214570]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c1caf3ba-b2a5-5005-a11e-e955c344dccc -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:05:00 compute-0 podman[214578]: 2025-12-03 18:05:00.992571776 +0000 UTC m=+0.080355737 container create 2181e192989d6b8ba9dc39eac4044a9e1a71993bf930d579a8c8e9d3ccea209c (image=quay.io/ceph/ceph:v18, name=clever_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  3 18:05:01 compute-0 podman[214579]: 2025-12-03 18:05:01.002697272 +0000 UTC m=+0.089679093 container create 1523a46153f1694178d9d2311ec2a74b207cb2acb70c0fd24f36d957ea80b3e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_swirles, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  3 18:05:01 compute-0 podman[214579]: 2025-12-03 18:05:00.948712394 +0000 UTC m=+0.035694265 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:05:01 compute-0 podman[214578]: 2025-12-03 18:05:00.958945342 +0000 UTC m=+0.046729373 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:05:01 compute-0 systemd[1]: Started libpod-conmon-2181e192989d6b8ba9dc39eac4044a9e1a71993bf930d579a8c8e9d3ccea209c.scope.
Dec  3 18:05:01 compute-0 systemd[1]: Started libpod-conmon-1523a46153f1694178d9d2311ec2a74b207cb2acb70c0fd24f36d957ea80b3e6.scope.
Dec  3 18:05:01 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:05:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd4a2b60daa4b11038e69bd36dbe5596a9f9b10c0c4309e253ba50a7a0e87c53/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd4a2b60daa4b11038e69bd36dbe5596a9f9b10c0c4309e253ba50a7a0e87c53/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:01 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:05:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f53d004c2eb48636cf5c8f07a58c432826f136a8d000f6392ca87f01169220b3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f53d004c2eb48636cf5c8f07a58c432826f136a8d000f6392ca87f01169220b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f53d004c2eb48636cf5c8f07a58c432826f136a8d000f6392ca87f01169220b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f53d004c2eb48636cf5c8f07a58c432826f136a8d000f6392ca87f01169220b3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:01 compute-0 podman[214578]: 2025-12-03 18:05:01.11864897 +0000 UTC m=+0.206432931 container init 2181e192989d6b8ba9dc39eac4044a9e1a71993bf930d579a8c8e9d3ccea209c (image=quay.io/ceph/ceph:v18, name=clever_hoover, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True)
Dec  3 18:05:01 compute-0 podman[214578]: 2025-12-03 18:05:01.127887464 +0000 UTC m=+0.215671405 container start 2181e192989d6b8ba9dc39eac4044a9e1a71993bf930d579a8c8e9d3ccea209c (image=quay.io/ceph/ceph:v18, name=clever_hoover, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:05:01 compute-0 podman[214579]: 2025-12-03 18:05:01.130509237 +0000 UTC m=+0.217491078 container init 1523a46153f1694178d9d2311ec2a74b207cb2acb70c0fd24f36d957ea80b3e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_swirles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Dec  3 18:05:01 compute-0 podman[214578]: 2025-12-03 18:05:01.135747145 +0000 UTC m=+0.223531086 container attach 2181e192989d6b8ba9dc39eac4044a9e1a71993bf930d579a8c8e9d3ccea209c (image=quay.io/ceph/ceph:v18, name=clever_hoover, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:05:01 compute-0 podman[214579]: 2025-12-03 18:05:01.149183669 +0000 UTC m=+0.236165490 container start 1523a46153f1694178d9d2311ec2a74b207cb2acb70c0fd24f36d957ea80b3e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_swirles, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec  3 18:05:01 compute-0 podman[214579]: 2025-12-03 18:05:01.155300608 +0000 UTC m=+0.242282439 container attach 1523a46153f1694178d9d2311ec2a74b207cb2acb70c0fd24f36d957ea80b3e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_swirles, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:05:01 compute-0 openstack_network_exporter[160319]: ERROR   18:05:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:05:01 compute-0 openstack_network_exporter[160319]: ERROR   18:05:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:05:01 compute-0 openstack_network_exporter[160319]: ERROR   18:05:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:05:01 compute-0 openstack_network_exporter[160319]: ERROR   18:05:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:05:01 compute-0 openstack_network_exporter[160319]: 
Dec  3 18:05:01 compute-0 openstack_network_exporter[160319]: ERROR   18:05:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:05:01 compute-0 openstack_network_exporter[160319]: 
Dec  3 18:05:01 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Dec  3 18:05:01 compute-0 ceph-mon[192802]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  3 18:05:01 compute-0 ceph-mon[192802]: from='client.? 192.168.122.100:0/1109244179' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  3 18:05:01 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e20 e20: 3 total, 3 up, 3 in
Dec  3 18:05:01 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 3 up, 3 in
Dec  3 18:05:01 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 20 pg[3.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [1] r=0 lpr=19 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:01 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Dec  3 18:05:01 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1270012011' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  3 18:05:01 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v65: 3 pgs: 1 creating+peering, 1 active+clean, 1 unknown; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:05:01 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e20 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
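The thirsty_swirles[214611] lines that follow are one JSON document wrapped line-by-line by the journal; judging from the lv_tags/osd_id fields it is consistent with `ceph-volume lvm list --format json` output from the one-shot container started at 18:05:01 (an inference, not stated in the log). To recover the document from a saved log, strip the syslog prefix and re-join the payloads; a stdlib-only sketch, with a hypothetical filename:

    import json

    payload = []
    with open('compute-0-messages.log') as fh:  # hypothetical log extract
        for line in fh:
            # The syslog prefix ends at the first "]: " after the PID bracket.
            if 'thirsty_swirles[214611]:' in line:
                payload.append(line.split(']: ', 1)[1].rstrip('\n'))

    inventory = json.loads('\n'.join(payload))
    for osd_id, lvs in inventory.items():
        print(osd_id, lvs[0]['lv_path'])

This assumes the capture contains the complete document; the excerpt below breaks off mid-object, so it would need the full log to parse.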
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]: {
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:    "0": [
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:        {
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:            "devices": [
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:                "/dev/loop3"
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:            ],
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:            "lv_name": "ceph_lv0",
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:            "lv_size": "21470642176",
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=973fbbc8-5aff-4a53-bee8-42e5a6788dd6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:            "lv_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:            "name": "ceph_lv0",
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:            "tags": {
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:                "ceph.block_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:                "ceph.cluster_name": "ceph",
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:                "ceph.crush_device_class": "",
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:                "ceph.encrypted": "0",
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:                "ceph.osd_fsid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:                "ceph.osd_id": "0",
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:                "ceph.type": "block",
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:                "ceph.vdo": "0"
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:            },
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:            "type": "block",
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:            "vg_name": "ceph_vg0"
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:        }
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:    ],
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:    "1": [
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:        {
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:            "devices": [
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:                "/dev/loop4"
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:            ],
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:            "lv_name": "ceph_lv1",
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:            "lv_size": "21470642176",
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1e2b0083-5293-47cb-a3d1-bc27cedc4ede,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:            "lv_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:            "name": "ceph_lv1",
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:            "tags": {
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:                "ceph.block_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:                "ceph.cluster_name": "ceph",
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:                "ceph.crush_device_class": "",
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:                "ceph.encrypted": "0",
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:                "ceph.osd_fsid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:                "ceph.osd_id": "1",
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:                "ceph.type": "block",
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:                "ceph.vdo": "0"
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:            },
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:            "type": "block",
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:            "vg_name": "ceph_vg1"
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:        }
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:    ],
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:    "2": [
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:        {
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:            "devices": [
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:                "/dev/loop5"
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:            ],
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:            "lv_name": "ceph_lv2",
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:            "lv_size": "21470642176",
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2abec9de-afba-437e-9a17-384a1dd8cd50,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:            "lv_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:            "name": "ceph_lv2",
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:            "tags": {
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:                "ceph.block_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:                "ceph.cluster_name": "ceph",
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:                "ceph.crush_device_class": "",
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:                "ceph.encrypted": "0",
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:                "ceph.osd_fsid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:                "ceph.osd_id": "2",
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:                "ceph.type": "block",
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:                "ceph.vdo": "0"
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:            },
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:            "type": "block",
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:            "vg_name": "ceph_vg2"
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:        }
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]:    ]
Dec  3 18:05:01 compute-0 thirsty_swirles[214611]: }
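The JSON block above has the shape of a `ceph-volume lvm list --format json` report: one key per OSD id, each mapping to the logical volumes (with their ceph.* LV tags) that back that OSD. A minimal sketch of pulling the useful fields out of such a report, with the values below copied from OSD 0 in the log and trimmed to one entry for brevity:

    import json

    # Trimmed to one OSD; the real report has keys "0", "1", "2".
    report = json.loads("""
    {
      "0": [
        {
          "devices": ["/dev/loop3"],
          "lv_path": "/dev/ceph_vg0/ceph_lv0",
          "lv_size": "21470642176",
          "tags": {"ceph.osd_fsid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6"}
        }
      ]
    }
    """)

    for osd_id, lvs in sorted(report.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            size_gib = int(lv["lv_size"]) / 2**30
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"({', '.join(lv['devices'])}, {size_gib:.0f} GiB, "
                  f"fsid {lv['tags']['ceph.osd_fsid']})")

Run against the full report this prints one line per OSD; the 21470642176-byte lv_size works out to roughly 20 GiB per loop-device-backed OSD.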
Dec  3 18:05:01 compute-0 systemd[1]: libpod-1523a46153f1694178d9d2311ec2a74b207cb2acb70c0fd24f36d957ea80b3e6.scope: Deactivated successfully.
Dec  3 18:05:01 compute-0 podman[214579]: 2025-12-03 18:05:01.930870363 +0000 UTC m=+1.017852194 container died 1523a46153f1694178d9d2311ec2a74b207cb2acb70c0fd24f36d957ea80b3e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_swirles, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec  3 18:05:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-f53d004c2eb48636cf5c8f07a58c432826f136a8d000f6392ca87f01169220b3-merged.mount: Deactivated successfully.
Dec  3 18:05:02 compute-0 podman[214579]: 2025-12-03 18:05:02.023776352 +0000 UTC m=+1.110758183 container remove 1523a46153f1694178d9d2311ec2a74b207cb2acb70c0fd24f36d957ea80b3e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_swirles, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:05:02 compute-0 systemd[1]: libpod-conmon-1523a46153f1694178d9d2311ec2a74b207cb2acb70c0fd24f36d957ea80b3e6.scope: Deactivated successfully.
Dec  3 18:05:02 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Dec  3 18:05:02 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1270012011' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  3 18:05:02 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e21 e21: 3 total, 3 up, 3 in
Dec  3 18:05:02 compute-0 clever_hoover[214609]: pool 'backups' created
Dec  3 18:05:02 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 3 up, 3 in
Dec  3 18:05:02 compute-0 ceph-mon[192802]: from='client.? 192.168.122.100:0/1270012011' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  3 18:05:02 compute-0 systemd[1]: libpod-2181e192989d6b8ba9dc39eac4044a9e1a71993bf930d579a8c8e9d3ccea209c.scope: Deactivated successfully.
Dec  3 18:05:02 compute-0 podman[214578]: 2025-12-03 18:05:02.49437913 +0000 UTC m=+1.582163131 container died 2181e192989d6b8ba9dc39eac4044a9e1a71993bf930d579a8c8e9d3ccea209c (image=quay.io/ceph/ceph:v18, name=clever_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:05:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd4a2b60daa4b11038e69bd36dbe5596a9f9b10c0c4309e253ba50a7a0e87c53-merged.mount: Deactivated successfully.
Dec  3 18:05:02 compute-0 podman[214578]: 2025-12-03 18:05:02.562186342 +0000 UTC m=+1.649970283 container remove 2181e192989d6b8ba9dc39eac4044a9e1a71993bf930d579a8c8e9d3ccea209c (image=quay.io/ceph/ceph:v18, name=clever_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  3 18:05:02 compute-0 systemd[1]: libpod-conmon-2181e192989d6b8ba9dc39eac4044a9e1a71993bf930d579a8c8e9d3ccea209c.scope: Deactivated successfully.
Dec  3 18:05:02 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 21 pg[4.0( empty local-lis/les=0/0 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [0] r=0 lpr=21 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:02 compute-0 podman[214834]: 2025-12-03 18:05:02.845392472 +0000 UTC m=+0.050736461 container create 8ddecd873c5b1f99c84b168988213281828e98b7175153c6bd67af18b91a9c7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec  3 18:05:02 compute-0 systemd[1]: Started libpod-conmon-8ddecd873c5b1f99c84b168988213281828e98b7175153c6bd67af18b91a9c7b.scope.
Dec  3 18:05:02 compute-0 python3[214833]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c1caf3ba-b2a5-5005-a11e-e955c344dccc -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
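The Ansible task above shows the pattern used for every pool-creation step in this run: the `ceph` CLI is executed inside a throwaway `podman run --rm` container that bind-mounts /etc/ceph, so the admin keyring and ceph.conf are visible inside the container. A hedged sketch of the same wrapper in Python — the `ceph()` helper name is ours; the image, fsid, and mounts are taken from the logged command:

    import subprocess

    IMAGE = "quay.io/ceph/ceph:v18"
    FSID = "c1caf3ba-b2a5-5005-a11e-e955c344dccc"

    def ceph(*args: str) -> str:
        """Run one ceph CLI call inside a disposable container."""
        cmd = [
            "podman", "run", "--rm", "--net=host", "--ipc=host",
            "--volume", "/etc/ceph:/etc/ceph:z",
            "--entrypoint", "ceph", IMAGE,
            "--fsid", FSID,
            "-c", "/etc/ceph/ceph.conf",
            "-k", "/etc/ceph/ceph.client.admin.keyring",
            *args,
        ]
        return subprocess.run(cmd, check=True, capture_output=True,
                              text=True).stdout

    # e.g. the pool creation dispatched above:
    # print(ceph("osd", "pool", "create", "images",
    #            "replicated_rule", "--autoscale-mode", "on"))

Because the container is removed on exit (`--rm`), each such call produces the create/init/start/attach/died/remove lifecycle that podman and systemd log around it.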
Dec  3 18:05:02 compute-0 podman[214834]: 2025-12-03 18:05:02.820740955 +0000 UTC m=+0.026084944 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:05:02 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:05:02 compute-0 podman[214834]: 2025-12-03 18:05:02.953218314 +0000 UTC m=+0.158562303 container init 8ddecd873c5b1f99c84b168988213281828e98b7175153c6bd67af18b91a9c7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_satoshi, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 18:05:02 compute-0 podman[214834]: 2025-12-03 18:05:02.962713874 +0000 UTC m=+0.168057843 container start 8ddecd873c5b1f99c84b168988213281828e98b7175153c6bd67af18b91a9c7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:05:02 compute-0 vibrant_satoshi[214850]: 167 167
Dec  3 18:05:02 compute-0 podman[214834]: 2025-12-03 18:05:02.968721689 +0000 UTC m=+0.174065688 container attach 8ddecd873c5b1f99c84b168988213281828e98b7175153c6bd67af18b91a9c7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_satoshi, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec  3 18:05:02 compute-0 systemd[1]: libpod-8ddecd873c5b1f99c84b168988213281828e98b7175153c6bd67af18b91a9c7b.scope: Deactivated successfully.
Dec  3 18:05:02 compute-0 podman[214834]: 2025-12-03 18:05:02.970805479 +0000 UTC m=+0.176149438 container died 8ddecd873c5b1f99c84b168988213281828e98b7175153c6bd67af18b91a9c7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_satoshi, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:05:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-df4c2621f65b5f4b945a6dec3d014e16c4b6680af4e914cd88b45cf803c890d0-merged.mount: Deactivated successfully.
Dec  3 18:05:03 compute-0 podman[214853]: 2025-12-03 18:05:03.014640381 +0000 UTC m=+0.069850352 container create 7ed785cdce75f8a20fe8cd8ccd0a56f561686904c56335b2d5f286cf90a30bcc (image=quay.io/ceph/ceph:v18, name=elated_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Dec  3 18:05:03 compute-0 podman[214834]: 2025-12-03 18:05:03.03727798 +0000 UTC m=+0.242621949 container remove 8ddecd873c5b1f99c84b168988213281828e98b7175153c6bd67af18b91a9c7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:05:03 compute-0 systemd[1]: libpod-conmon-8ddecd873c5b1f99c84b168988213281828e98b7175153c6bd67af18b91a9c7b.scope: Deactivated successfully.
Dec  3 18:05:03 compute-0 systemd[1]: Started libpod-conmon-7ed785cdce75f8a20fe8cd8ccd0a56f561686904c56335b2d5f286cf90a30bcc.scope.
Dec  3 18:05:03 compute-0 podman[214853]: 2025-12-03 18:05:02.991021559 +0000 UTC m=+0.046231560 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:05:03 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:05:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27212747a263c47309ba27bb377db989fdc40d4c76a984792b96334f94818284/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27212747a263c47309ba27bb377db989fdc40d4c76a984792b96334f94818284/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:03 compute-0 podman[214853]: 2025-12-03 18:05:03.123508648 +0000 UTC m=+0.178718679 container init 7ed785cdce75f8a20fe8cd8ccd0a56f561686904c56335b2d5f286cf90a30bcc (image=quay.io/ceph/ceph:v18, name=elated_austin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  3 18:05:03 compute-0 podman[214853]: 2025-12-03 18:05:03.136851631 +0000 UTC m=+0.192061622 container start 7ed785cdce75f8a20fe8cd8ccd0a56f561686904c56335b2d5f286cf90a30bcc (image=quay.io/ceph/ceph:v18, name=elated_austin, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  3 18:05:03 compute-0 podman[214865]: 2025-12-03 18:05:03.143320837 +0000 UTC m=+0.144247625 container health_status 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  3 18:05:03 compute-0 podman[214853]: 2025-12-03 18:05:03.14674132 +0000 UTC m=+0.201951351 container attach 7ed785cdce75f8a20fe8cd8ccd0a56f561686904c56335b2d5f286cf90a30bcc (image=quay.io/ceph/ceph:v18, name=elated_austin, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  3 18:05:03 compute-0 podman[214914]: 2025-12-03 18:05:03.288952745 +0000 UTC m=+0.070664833 container create 061b7c5852fc512ce5e625e46ee03c274c88fa164ae9901c0c2209a14b13b902 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_banach, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec  3 18:05:03 compute-0 podman[214914]: 2025-12-03 18:05:03.255134685 +0000 UTC m=+0.036846813 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:05:03 compute-0 systemd[1]: Started libpod-conmon-061b7c5852fc512ce5e625e46ee03c274c88fa164ae9901c0c2209a14b13b902.scope.
Dec  3 18:05:03 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:05:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bec63bdb7f9b9fb541b0b95f83772e890b6678547a4f54858d71d18e30f37d38/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bec63bdb7f9b9fb541b0b95f83772e890b6678547a4f54858d71d18e30f37d38/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bec63bdb7f9b9fb541b0b95f83772e890b6678547a4f54858d71d18e30f37d38/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bec63bdb7f9b9fb541b0b95f83772e890b6678547a4f54858d71d18e30f37d38/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
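The repeated xfs notices above are the kernel pointing out that these overlay bind mounts sit on an xfs filesystem whose on-disk inode timestamps are 32-bit, i.e. capped at 0x7fffffff seconds past the epoch. A quick check of what that limit means in calendar terms:

    import datetime

    limit = 0x7FFFFFFF  # the value printed by the kernel
    print(datetime.datetime.fromtimestamp(limit, tz=datetime.timezone.utc))
    # -> 2038-01-19 03:14:07+00:00, the classic y2038 cutoff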
Dec  3 18:05:03 compute-0 podman[214914]: 2025-12-03 18:05:03.423291178 +0000 UTC m=+0.205003246 container init 061b7c5852fc512ce5e625e46ee03c274c88fa164ae9901c0c2209a14b13b902 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_banach, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec  3 18:05:03 compute-0 podman[214914]: 2025-12-03 18:05:03.432231765 +0000 UTC m=+0.213943853 container start 061b7c5852fc512ce5e625e46ee03c274c88fa164ae9901c0c2209a14b13b902 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:05:03 compute-0 podman[214914]: 2025-12-03 18:05:03.447881384 +0000 UTC m=+0.229593442 container attach 061b7c5852fc512ce5e625e46ee03c274c88fa164ae9901c0c2209a14b13b902 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_banach, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec  3 18:05:03 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Dec  3 18:05:03 compute-0 ceph-mon[192802]: from='client.? 192.168.122.100:0/1270012011' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  3 18:05:03 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e22 e22: 3 total, 3 up, 3 in
Dec  3 18:05:03 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 3 up, 3 in
Dec  3 18:05:03 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 22 pg[4.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [0] r=0 lpr=21 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.697 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is larger than the number of worker threads available to execute them. Therefore, one can expect the polling process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.698 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.698 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f55bc8ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.699 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f3f52673fe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.701 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:05:03 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Dec  3 18:05:03 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2502029268' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
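The audit lines show the exact JSON mon_command that the CLI dispatches to the monitor. The same command can be issued directly with the python-rados binding, skipping the container wrapper entirely; a sketch, assuming python3-rados is installed and the admin keyring is readable:

    import json
    import rados

    cluster = rados.Rados(
        conffile="/etc/ceph/ceph.conf",
        conf={"keyring": "/etc/ceph/ceph.client.admin.keyring"})
    cluster.connect()
    cmd = {"prefix": "osd pool create", "pool": "images",
           "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}
    # mon_command takes the command as a JSON string plus an input buffer.
    ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b"")
    print(ret, outs)  # 0 and a "pool 'images' created" status on success
    cluster.shutdown()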
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.716 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f562c3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f55bc8ef0>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.716 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f55bc8ef0>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.717 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f55bc8ef0>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.717 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f526739b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f55bc8ef0>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.717 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f55bc8ef0>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.718 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673a40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f55bc8ef0>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.717 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f3f5271c620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.718 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52671a60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f55bc8ef0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.719 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673a70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f55bc8ef0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.718 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.719 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f55bc8ef0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.720 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f55bc8ef0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.720 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f3f5271c0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.721 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f562d33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f55bc8ef0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.722 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f526733b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f55bc8ef0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.721 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.722 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f55bc8ef0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.723 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f526734d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f55bc8ef0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.723 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f3f5271c140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.724 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.724 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f3f52673980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.724 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.724 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f3f5271c1d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.724 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.725 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f3f52673a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.725 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.725 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f3f52672390>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.725 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.725 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f3f526739e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.726 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.726 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f3f5271c260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.726 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.726 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f3f5271c2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.727 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.723 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f565c04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f55bc8ef0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'disk.device.allocation': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.727 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f55bc8ef0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'disk.device.allocation': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.728 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f55bc8ef0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'disk.device.allocation': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.728 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f55bc8ef0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'disk.device.allocation': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.729 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f526735f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f55bc8ef0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'disk.device.allocation': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.727 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f3f52671ca0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.729 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f55bc8ef0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'disk.device.allocation': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.730 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f526736b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f55bc8ef0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'disk.device.allocation': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.730 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.731 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f3f52673470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.730 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f55bc8ef0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'disk.device.allocation': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.732 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f55bc8ef0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'disk.device.allocation': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.732 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f55bc8ef0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'disk.device.allocation': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.733 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f55bc8ef0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'disk.device.allocation': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.731 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.734 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f3f5271c380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.734 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.734 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f3f526734a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.734 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.735 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f3f52671a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.735 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.735 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f3f52673ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.735 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.736 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f3f52673500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.736 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.736 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f3f52673560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.736 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.736 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f3f526735c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.737 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.737 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f3f52673620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.737 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.737 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f3f52673680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.738 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.738 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f3f526736e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.738 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.738 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f3f52673f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.739 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.739 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f3f52673740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.739 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.739 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f3f52673f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.740 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.740 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.740 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.740 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.740 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.741 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.741 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.741 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.741 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.742 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.742 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.742 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.743 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.743 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.743 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.743 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.743 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.744 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.744 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.744 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.744 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.745 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.745 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.745 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.746 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.746 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:05:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:05:03.746 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
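
The ceilometer lines above trace one complete polling cycle on this compute node: each pollster is registered against a shared ThreadPoolExecutor, discovery runs per pollster via the local_instances method, every pollster is skipped because discovery returns no instances (none are running yet), and each is then marked finished. A minimal sketch of that register/discover/skip/finish flow, using hypothetical discover and run_pollster helpers; this illustrates the pattern visible in the log, not ceilometer's actual implementation:

    from concurrent.futures import ThreadPoolExecutor

    def discover(method):
        # Hypothetical stand-in: the log shows local_instances yielding [].
        return {"local_instances": []}[method]

    def run_pollster(name, discovery_method="local_instances"):
        resources = discover(discovery_method)
        if not resources:
            print(f"Skip pollster {name}, no resources found this cycle")
            return
        # ... otherwise each discovered resource would be polled here ...

    executor = ThreadPoolExecutor()
    # Per-pollster sample history, empty at this point in the log.
    history = {name: [] for name in ("cpu", "memory.usage", "disk.device.read.bytes")}
    for name in history:
        executor.submit(run_pollster, name)  # one task per registered pollster
    executor.shutdown(wait=True)
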
Dec  3 18:05:03 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v68: 4 pgs: 1 creating+peering, 2 active+clean, 1 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:05:04 compute-0 cranky_banach[214931]: {
Dec  3 18:05:04 compute-0 cranky_banach[214931]:    "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
Dec  3 18:05:04 compute-0 cranky_banach[214931]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:05:04 compute-0 cranky_banach[214931]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 18:05:04 compute-0 cranky_banach[214931]:        "osd_id": 1,
Dec  3 18:05:04 compute-0 cranky_banach[214931]:        "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:05:04 compute-0 cranky_banach[214931]:        "type": "bluestore"
Dec  3 18:05:04 compute-0 cranky_banach[214931]:    },
Dec  3 18:05:04 compute-0 cranky_banach[214931]:    "2abec9de-afba-437e-9a17-384a1dd8cd50": {
Dec  3 18:05:04 compute-0 cranky_banach[214931]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:05:04 compute-0 cranky_banach[214931]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 18:05:04 compute-0 cranky_banach[214931]:        "osd_id": 2,
Dec  3 18:05:04 compute-0 cranky_banach[214931]:        "osd_uuid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:05:04 compute-0 cranky_banach[214931]:        "type": "bluestore"
Dec  3 18:05:04 compute-0 cranky_banach[214931]:    },
Dec  3 18:05:04 compute-0 cranky_banach[214931]:    "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": {
Dec  3 18:05:04 compute-0 cranky_banach[214931]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:05:04 compute-0 cranky_banach[214931]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 18:05:04 compute-0 cranky_banach[214931]:        "osd_id": 0,
Dec  3 18:05:04 compute-0 cranky_banach[214931]:        "osd_uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:05:04 compute-0 cranky_banach[214931]:        "type": "bluestore"
Dec  3 18:05:04 compute-0 cranky_banach[214931]:    }
Dec  3 18:05:04 compute-0 cranky_banach[214931]: }
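
The JSON block above is the OSD inventory reported by the containerized ceph-volume run: each key is an osd_uuid mapping to its ceph_fsid, backing device, osd_id, and store type. A short sketch of pulling an osd_id-to-device table out of output with this shape, assuming the JSON has been captured into the (hypothetical) inventory string below, truncated to one of the three entries shown in the log:

    import json

    inventory = """{
        "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
            "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
            "device": "/dev/mapper/ceph_vg1-ceph_lv1",
            "osd_id": 1,
            "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
            "type": "bluestore"
        }
    }"""

    osds = json.loads(inventory)
    # Map osd_id -> backing device path.
    by_id = {entry["osd_id"]: entry["device"] for entry in osds.values()}
    print(by_id)  # e.g. {1: '/dev/mapper/ceph_vg1-ceph_lv1'}
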
Dec  3 18:05:04 compute-0 systemd[1]: libpod-061b7c5852fc512ce5e625e46ee03c274c88fa164ae9901c0c2209a14b13b902.scope: Deactivated successfully.
Dec  3 18:05:04 compute-0 systemd[1]: libpod-061b7c5852fc512ce5e625e46ee03c274c88fa164ae9901c0c2209a14b13b902.scope: Consumed 1.035s CPU time.
Dec  3 18:05:04 compute-0 podman[214914]: 2025-12-03 18:05:04.466564837 +0000 UTC m=+1.248276895 container died 061b7c5852fc512ce5e625e46ee03c274c88fa164ae9901c0c2209a14b13b902 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:05:04 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Dec  3 18:05:04 compute-0 ceph-mon[192802]: from='client.? 192.168.122.100:0/2502029268' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  3 18:05:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-bec63bdb7f9b9fb541b0b95f83772e890b6678547a4f54858d71d18e30f37d38-merged.mount: Deactivated successfully.
Dec  3 18:05:04 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2502029268' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  3 18:05:04 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e23 e23: 3 total, 3 up, 3 in
Dec  3 18:05:04 compute-0 elated_austin[214894]: pool 'images' created
Dec  3 18:05:04 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 3 up, 3 in
Dec  3 18:05:04 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 23 pg[5.0( empty local-lis/les=0/0 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [2] r=0 lpr=23 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:04 compute-0 systemd[1]: libpod-7ed785cdce75f8a20fe8cd8ccd0a56f561686904c56335b2d5f286cf90a30bcc.scope: Deactivated successfully.
Dec  3 18:05:04 compute-0 podman[214914]: 2025-12-03 18:05:04.575229788 +0000 UTC m=+1.356941856 container remove 061b7c5852fc512ce5e625e46ee03c274c88fa164ae9901c0c2209a14b13b902 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_banach, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  3 18:05:04 compute-0 podman[214853]: 2025-12-03 18:05:04.578561139 +0000 UTC m=+1.633771130 container died 7ed785cdce75f8a20fe8cd8ccd0a56f561686904c56335b2d5f286cf90a30bcc (image=quay.io/ceph/ceph:v18, name=elated_austin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  3 18:05:04 compute-0 systemd[1]: libpod-conmon-061b7c5852fc512ce5e625e46ee03c274c88fa164ae9901c0c2209a14b13b902.scope: Deactivated successfully.
Dec  3 18:05:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-27212747a263c47309ba27bb377db989fdc40d4c76a984792b96334f94818284-merged.mount: Deactivated successfully.
Dec  3 18:05:04 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:05:04 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:05:04 compute-0 podman[214853]: 2025-12-03 18:05:04.644030004 +0000 UTC m=+1.699239975 container remove 7ed785cdce75f8a20fe8cd8ccd0a56f561686904c56335b2d5f286cf90a30bcc (image=quay.io/ceph/ceph:v18, name=elated_austin, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:05:04 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:05:04 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:05:04 compute-0 systemd[1]: libpod-conmon-7ed785cdce75f8a20fe8cd8ccd0a56f561686904c56335b2d5f286cf90a30bcc.scope: Deactivated successfully.
Dec  3 18:05:05 compute-0 python3[215082]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c1caf3ba-b2a5-5005-a11e-e955c344dccc -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
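
Every pool operation in this section follows the same shape as the command above: ansible shells out to podman, which runs the ceph CLI from the quay.io/ceph/ceph:v18 image against fsid c1caf3ba-b2a5-5005-a11e-e955c344dccc, authenticating with the mounted admin keyring. A compact sketch of that invocation as a Python helper; run_ceph is hypothetical, not part of the deployment tooling, and the extra assimilate_ceph.conf mount is omitted:

    import subprocess

    FSID = "c1caf3ba-b2a5-5005-a11e-e955c344dccc"
    IMAGE = "quay.io/ceph/ceph:v18"

    def run_ceph(*args):
        # Mirrors the logged podman invocation: host networking, /etc/ceph
        # mounted in, and the client.admin keyring for authentication.
        cmd = [
            "podman", "run", "--rm", "--net=host", "--ipc=host",
            "--volume", "/etc/ceph:/etc/ceph:z",
            "--entrypoint", "ceph", IMAGE,
            "--fsid", FSID,
            "-c", "/etc/ceph/ceph.conf",
            "-k", "/etc/ceph/ceph.client.admin.keyring",
            *args,
        ]
        return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

    # The sequence this log goes on to show: create pools, then tag each with
    # the application that will use it (clearing POOL_APP_NOT_ENABLED).
    run_ceph("osd", "pool", "create", "cephfs.cephfs.meta", "replicated_rule",
             "--autoscale-mode", "on")
    run_ceph("osd", "pool", "application", "enable", "vms", "rbd")
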
Dec  3 18:05:05 compute-0 podman[215085]: 2025-12-03 18:05:05.126433499 +0000 UTC m=+0.070516759 container create 035099598e55be724493b8ea5ba0a9ffd098f334dda4f1c637b60807ddc6fc0f (image=quay.io/ceph/ceph:v18, name=flamboyant_carver, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:05:05 compute-0 systemd[1]: Started libpod-conmon-035099598e55be724493b8ea5ba0a9ffd098f334dda4f1c637b60807ddc6fc0f.scope.
Dec  3 18:05:05 compute-0 podman[215085]: 2025-12-03 18:05:05.097418766 +0000 UTC m=+0.041502076 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:05:05 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:05:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ace27739c67111daffc2732b0d7991d1f390cdb4410ef69b6dc435eb30c8815/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ace27739c67111daffc2732b0d7991d1f390cdb4410ef69b6dc435eb30c8815/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:05 compute-0 podman[215085]: 2025-12-03 18:05:05.253195378 +0000 UTC m=+0.197278748 container init 035099598e55be724493b8ea5ba0a9ffd098f334dda4f1c637b60807ddc6fc0f (image=quay.io/ceph/ceph:v18, name=flamboyant_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  3 18:05:05 compute-0 podman[215085]: 2025-12-03 18:05:05.270771584 +0000 UTC m=+0.214854854 container start 035099598e55be724493b8ea5ba0a9ffd098f334dda4f1c637b60807ddc6fc0f (image=quay.io/ceph/ceph:v18, name=flamboyant_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec  3 18:05:05 compute-0 podman[215085]: 2025-12-03 18:05:05.276240967 +0000 UTC m=+0.220324277 container attach 035099598e55be724493b8ea5ba0a9ffd098f334dda4f1c637b60807ddc6fc0f (image=quay.io/ceph/ceph:v18, name=flamboyant_carver, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec  3 18:05:05 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Dec  3 18:05:05 compute-0 ceph-mon[192802]: from='client.? 192.168.122.100:0/2502029268' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  3 18:05:05 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:05:05 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:05:05 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e24 e24: 3 total, 3 up, 3 in
Dec  3 18:05:05 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 3 up, 3 in
Dec  3 18:05:05 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 24 pg[5.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [2] r=0 lpr=23 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:05 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v71: 5 pgs: 1 unknown, 1 creating+peering, 3 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:05:05 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Dec  3 18:05:05 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1237838448' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  3 18:05:06 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Dec  3 18:05:06 compute-0 ceph-mon[192802]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  3 18:05:06 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1237838448' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  3 18:05:06 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e25 e25: 3 total, 3 up, 3 in
Dec  3 18:05:06 compute-0 flamboyant_carver[215100]: pool 'cephfs.cephfs.meta' created
Dec  3 18:05:06 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 3 up, 3 in
Dec  3 18:05:06 compute-0 ceph-mon[192802]: from='client.? 192.168.122.100:0/1237838448' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  3 18:05:06 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 25 pg[6.0( empty local-lis/les=0/0 n=0 ec=25/25 lis/c=0/0 les/c/f=0/0/0 sis=25) [0] r=0 lpr=25 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:06 compute-0 systemd[1]: libpod-035099598e55be724493b8ea5ba0a9ffd098f334dda4f1c637b60807ddc6fc0f.scope: Deactivated successfully.
Dec  3 18:05:06 compute-0 podman[215085]: 2025-12-03 18:05:06.597271662 +0000 UTC m=+1.541354952 container died 035099598e55be724493b8ea5ba0a9ffd098f334dda4f1c637b60807ddc6fc0f (image=quay.io/ceph/ceph:v18, name=flamboyant_carver, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:05:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ace27739c67111daffc2732b0d7991d1f390cdb4410ef69b6dc435eb30c8815-merged.mount: Deactivated successfully.
Dec  3 18:05:06 compute-0 podman[215085]: 2025-12-03 18:05:06.680495528 +0000 UTC m=+1.624578838 container remove 035099598e55be724493b8ea5ba0a9ffd098f334dda4f1c637b60807ddc6fc0f (image=quay.io/ceph/ceph:v18, name=flamboyant_carver, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:05:06 compute-0 systemd[1]: libpod-conmon-035099598e55be724493b8ea5ba0a9ffd098f334dda4f1c637b60807ddc6fc0f.scope: Deactivated successfully.
Dec  3 18:05:06 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e25 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:05:07 compute-0 python3[215163]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c1caf3ba-b2a5-5005-a11e-e955c344dccc -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:05:07 compute-0 podman[215164]: 2025-12-03 18:05:07.239901726 +0000 UTC m=+0.098806033 container create a03dd613e8a0dd3bda8905c1c89061b155cc14a3e64f52e374e9fbb54c19030b (image=quay.io/ceph/ceph:v18, name=gifted_cannon, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec  3 18:05:07 compute-0 podman[215164]: 2025-12-03 18:05:07.202311076 +0000 UTC m=+0.061215453 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:05:07 compute-0 systemd[1]: Started libpod-conmon-a03dd613e8a0dd3bda8905c1c89061b155cc14a3e64f52e374e9fbb54c19030b.scope.
Dec  3 18:05:07 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:05:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58c5c6df60b1f03f131c5fb06eaa91028997ad7ce10337cf46c673fbbd0dc7db/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58c5c6df60b1f03f131c5fb06eaa91028997ad7ce10337cf46c673fbbd0dc7db/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:07 compute-0 podman[215164]: 2025-12-03 18:05:07.410264233 +0000 UTC m=+0.269168620 container init a03dd613e8a0dd3bda8905c1c89061b155cc14a3e64f52e374e9fbb54c19030b (image=quay.io/ceph/ceph:v18, name=gifted_cannon, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:05:07 compute-0 podman[215164]: 2025-12-03 18:05:07.420123662 +0000 UTC m=+0.279027979 container start a03dd613e8a0dd3bda8905c1c89061b155cc14a3e64f52e374e9fbb54c19030b (image=quay.io/ceph/ceph:v18, name=gifted_cannon, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True)
Dec  3 18:05:07 compute-0 podman[215164]: 2025-12-03 18:05:07.427146432 +0000 UTC m=+0.286050819 container attach a03dd613e8a0dd3bda8905c1c89061b155cc14a3e64f52e374e9fbb54c19030b (image=quay.io/ceph/ceph:v18, name=gifted_cannon, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec  3 18:05:07 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Dec  3 18:05:07 compute-0 ceph-mon[192802]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  3 18:05:07 compute-0 ceph-mon[192802]: from='client.? 192.168.122.100:0/1237838448' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  3 18:05:07 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e26 e26: 3 total, 3 up, 3 in
Dec  3 18:05:07 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 3 up, 3 in
Dec  3 18:05:07 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 26 pg[6.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=0/0 les/c/f=0/0/0 sis=25) [0] r=0 lpr=25 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:07 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v74: 6 pgs: 1 unknown, 5 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:05:07 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Dec  3 18:05:07 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2059972605' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  3 18:05:08 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Dec  3 18:05:08 compute-0 ceph-mon[192802]: from='client.? 192.168.122.100:0/2059972605' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  3 18:05:08 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2059972605' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  3 18:05:08 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e27 e27: 3 total, 3 up, 3 in
Dec  3 18:05:08 compute-0 gifted_cannon[215179]: pool 'cephfs.cephfs.data' created
Dec  3 18:05:08 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 3 up, 3 in
Dec  3 18:05:08 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 27 pg[7.0( empty local-lis/les=0/0 n=0 ec=27/27 lis/c=0/0 les/c/f=0/0/0 sis=27) [1] r=0 lpr=27 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:08 compute-0 systemd[1]: libpod-a03dd613e8a0dd3bda8905c1c89061b155cc14a3e64f52e374e9fbb54c19030b.scope: Deactivated successfully.
Dec  3 18:05:08 compute-0 podman[215164]: 2025-12-03 18:05:08.661667662 +0000 UTC m=+1.520571999 container died a03dd613e8a0dd3bda8905c1c89061b155cc14a3e64f52e374e9fbb54c19030b (image=quay.io/ceph/ceph:v18, name=gifted_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec  3 18:05:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-58c5c6df60b1f03f131c5fb06eaa91028997ad7ce10337cf46c673fbbd0dc7db-merged.mount: Deactivated successfully.
Dec  3 18:05:08 compute-0 podman[215164]: 2025-12-03 18:05:08.73383234 +0000 UTC m=+1.592736647 container remove a03dd613e8a0dd3bda8905c1c89061b155cc14a3e64f52e374e9fbb54c19030b (image=quay.io/ceph/ceph:v18, name=gifted_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec  3 18:05:08 compute-0 systemd[1]: libpod-conmon-a03dd613e8a0dd3bda8905c1c89061b155cc14a3e64f52e374e9fbb54c19030b.scope: Deactivated successfully.
Dec  3 18:05:09 compute-0 python3[215243]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c1caf3ba-b2a5-5005-a11e-e955c344dccc -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:05:09 compute-0 podman[215244]: 2025-12-03 18:05:09.284920477 +0000 UTC m=+0.078767078 container create 5d3cd09882f4dd52592599514a5ada33721b313adf7ffacaa065e4dc26a274b2 (image=quay.io/ceph/ceph:v18, name=eloquent_heyrovsky, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  3 18:05:09 compute-0 podman[215244]: 2025-12-03 18:05:09.255203798 +0000 UTC m=+0.049050419 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:05:09 compute-0 systemd[1]: Started libpod-conmon-5d3cd09882f4dd52592599514a5ada33721b313adf7ffacaa065e4dc26a274b2.scope.
Dec  3 18:05:09 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:05:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/796070082cef7c6f7e809d0f476f5b4c7388b2fc7501b970bda9fdd50a9e19c8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/796070082cef7c6f7e809d0f476f5b4c7388b2fc7501b970bda9fdd50a9e19c8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:09 compute-0 podman[215244]: 2025-12-03 18:05:09.454601207 +0000 UTC m=+0.248447858 container init 5d3cd09882f4dd52592599514a5ada33721b313adf7ffacaa065e4dc26a274b2 (image=quay.io/ceph/ceph:v18, name=eloquent_heyrovsky, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec  3 18:05:09 compute-0 podman[215244]: 2025-12-03 18:05:09.471636459 +0000 UTC m=+0.265483070 container start 5d3cd09882f4dd52592599514a5ada33721b313adf7ffacaa065e4dc26a274b2 (image=quay.io/ceph/ceph:v18, name=eloquent_heyrovsky, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec  3 18:05:09 compute-0 podman[215244]: 2025-12-03 18:05:09.478923755 +0000 UTC m=+0.272770406 container attach 5d3cd09882f4dd52592599514a5ada33721b313adf7ffacaa065e4dc26a274b2 (image=quay.io/ceph/ceph:v18, name=eloquent_heyrovsky, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Dec  3 18:05:09 compute-0 ceph-mon[192802]: from='client.? 192.168.122.100:0/2059972605' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  3 18:05:09 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Dec  3 18:05:09 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e28 e28: 3 total, 3 up, 3 in
Dec  3 18:05:09 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 3 up, 3 in
Dec  3 18:05:09 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 28 pg[7.0( empty local-lis/les=27/28 n=0 ec=27/27 lis/c=0/0 les/c/f=0/0/0 sis=27) [1] r=0 lpr=27 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:09 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v77: 7 pgs: 1 unknown, 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:05:10 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0) v1
Dec  3 18:05:10 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3193957886' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Dec  3 18:05:10 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Dec  3 18:05:10 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3193957886' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Dec  3 18:05:10 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Dec  3 18:05:10 compute-0 eloquent_heyrovsky[215259]: enabled application 'rbd' on pool 'vms'
Dec  3 18:05:10 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Dec  3 18:05:10 compute-0 ceph-mon[192802]: from='client.? 192.168.122.100:0/3193957886' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Dec  3 18:05:10 compute-0 systemd[1]: libpod-5d3cd09882f4dd52592599514a5ada33721b313adf7ffacaa065e4dc26a274b2.scope: Deactivated successfully.
Dec  3 18:05:10 compute-0 podman[215244]: 2025-12-03 18:05:10.676369138 +0000 UTC m=+1.470215739 container died 5d3cd09882f4dd52592599514a5ada33721b313adf7ffacaa065e4dc26a274b2 (image=quay.io/ceph/ceph:v18, name=eloquent_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:05:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-796070082cef7c6f7e809d0f476f5b4c7388b2fc7501b970bda9fdd50a9e19c8-merged.mount: Deactivated successfully.
Dec  3 18:05:10 compute-0 podman[215244]: 2025-12-03 18:05:10.745889581 +0000 UTC m=+1.539736182 container remove 5d3cd09882f4dd52592599514a5ada33721b313adf7ffacaa065e4dc26a274b2 (image=quay.io/ceph/ceph:v18, name=eloquent_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:05:10 compute-0 systemd[1]: libpod-conmon-5d3cd09882f4dd52592599514a5ada33721b313adf7ffacaa065e4dc26a274b2.scope: Deactivated successfully.
Dec  3 18:05:11 compute-0 python3[215321]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c1caf3ba-b2a5-5005-a11e-e955c344dccc -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:05:11 compute-0 podman[215322]: 2025-12-03 18:05:11.286877755 +0000 UTC m=+0.089108510 container create d3e421c2ac3a88f41e43fad493a430fa89486fdd71e5e060e5c9f7e828bdd9b9 (image=quay.io/ceph/ceph:v18, name=flamboyant_pasteur, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:05:11 compute-0 podman[215322]: 2025-12-03 18:05:11.251395475 +0000 UTC m=+0.053626290 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:05:11 compute-0 systemd[1]: Started libpod-conmon-d3e421c2ac3a88f41e43fad493a430fa89486fdd71e5e060e5c9f7e828bdd9b9.scope.
Dec  3 18:05:11 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:05:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd9d9adfd3aac11af33b5aef408483edbc65cb8584aed6b106c68ee9e0019290/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd9d9adfd3aac11af33b5aef408483edbc65cb8584aed6b106c68ee9e0019290/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:11 compute-0 podman[215322]: 2025-12-03 18:05:11.422766136 +0000 UTC m=+0.224996951 container init d3e421c2ac3a88f41e43fad493a430fa89486fdd71e5e060e5c9f7e828bdd9b9 (image=quay.io/ceph/ceph:v18, name=flamboyant_pasteur, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:05:11 compute-0 podman[215322]: 2025-12-03 18:05:11.431370154 +0000 UTC m=+0.233600869 container start d3e421c2ac3a88f41e43fad493a430fa89486fdd71e5e060e5c9f7e828bdd9b9 (image=quay.io/ceph/ceph:v18, name=flamboyant_pasteur, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:05:11 compute-0 podman[215322]: 2025-12-03 18:05:11.435541185 +0000 UTC m=+0.237771980 container attach d3e421c2ac3a88f41e43fad493a430fa89486fdd71e5e060e5c9f7e828bdd9b9 (image=quay.io/ceph/ceph:v18, name=flamboyant_pasteur, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:05:11 compute-0 ceph-mon[192802]: from='client.? 192.168.122.100:0/3193957886' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Dec  3 18:05:11 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v79: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:05:11 compute-0 ceph-mon[192802]: log_channel(cluster) log [WRN] : Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  3 18:05:11 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:05:11 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0) v1
Dec  3 18:05:11 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/272685552' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Dec  3 18:05:12 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Dec  3 18:05:12 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/272685552' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Dec  3 18:05:12 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Dec  3 18:05:12 compute-0 flamboyant_pasteur[215336]: enabled application 'rbd' on pool 'volumes'
Dec  3 18:05:12 compute-0 ceph-mon[192802]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  3 18:05:12 compute-0 ceph-mon[192802]: from='client.? 192.168.122.100:0/272685552' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Dec  3 18:05:12 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Dec  3 18:05:12 compute-0 systemd[1]: libpod-d3e421c2ac3a88f41e43fad493a430fa89486fdd71e5e060e5c9f7e828bdd9b9.scope: Deactivated successfully.
Dec  3 18:05:12 compute-0 podman[215322]: 2025-12-03 18:05:12.702962822 +0000 UTC m=+1.505193537 container died d3e421c2ac3a88f41e43fad493a430fa89486fdd71e5e060e5c9f7e828bdd9b9 (image=quay.io/ceph/ceph:v18, name=flamboyant_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  3 18:05:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd9d9adfd3aac11af33b5aef408483edbc65cb8584aed6b106c68ee9e0019290-merged.mount: Deactivated successfully.
Dec  3 18:05:12 compute-0 podman[215322]: 2025-12-03 18:05:12.762711209 +0000 UTC m=+1.564941924 container remove d3e421c2ac3a88f41e43fad493a430fa89486fdd71e5e060e5c9f7e828bdd9b9 (image=quay.io/ceph/ceph:v18, name=flamboyant_pasteur, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec  3 18:05:12 compute-0 systemd[1]: libpod-conmon-d3e421c2ac3a88f41e43fad493a430fa89486fdd71e5e060e5c9f7e828bdd9b9.scope: Deactivated successfully.
Dec  3 18:05:13 compute-0 python3[215399]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c1caf3ba-b2a5-5005-a11e-e955c344dccc -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:05:13 compute-0 podman[215400]: 2025-12-03 18:05:13.229083984 +0000 UTC m=+0.087401478 container create 22f9f4d4c832821ee22b89385d7fbc7ecc41f14d851a08257c8f7e31677d290c (image=quay.io/ceph/ceph:v18, name=upbeat_wu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  3 18:05:13 compute-0 podman[215400]: 2025-12-03 18:05:13.191881373 +0000 UTC m=+0.050198917 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:05:13 compute-0 systemd[1]: Started libpod-conmon-22f9f4d4c832821ee22b89385d7fbc7ecc41f14d851a08257c8f7e31677d290c.scope.
Dec  3 18:05:13 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:05:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04850c510231d1ca641d5de9ae451b0757f411c06ecceef15aa62797e507d7ca/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04850c510231d1ca641d5de9ae451b0757f411c06ecceef15aa62797e507d7ca/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:13 compute-0 podman[215400]: 2025-12-03 18:05:13.368429469 +0000 UTC m=+0.226746953 container init 22f9f4d4c832821ee22b89385d7fbc7ecc41f14d851a08257c8f7e31677d290c (image=quay.io/ceph/ceph:v18, name=upbeat_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec  3 18:05:13 compute-0 podman[215400]: 2025-12-03 18:05:13.379868417 +0000 UTC m=+0.238185851 container start 22f9f4d4c832821ee22b89385d7fbc7ecc41f14d851a08257c8f7e31677d290c (image=quay.io/ceph/ceph:v18, name=upbeat_wu, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec  3 18:05:13 compute-0 podman[215400]: 2025-12-03 18:05:13.385060022 +0000 UTC m=+0.243377506 container attach 22f9f4d4c832821ee22b89385d7fbc7ecc41f14d851a08257c8f7e31677d290c (image=quay.io/ceph/ceph:v18, name=upbeat_wu, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec  3 18:05:13 compute-0 ceph-mon[192802]: from='client.? 192.168.122.100:0/272685552' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Dec  3 18:05:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_18:05:13
Dec  3 18:05:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 18:05:13 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 18:05:13 compute-0 ceph-mgr[193091]: [balancer INFO root] pools ['vms', 'images', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', '.mgr', 'backups']
Dec  3 18:05:13 compute-0 ceph-mgr[193091]: [balancer INFO root] prepared 0/10 changes
Dec  3 18:05:13 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v81: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:05:13 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 18:05:13 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:05:13 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 18:05:13 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:05:13 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec  3 18:05:13 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:05:13 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec  3 18:05:13 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:05:13 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec  3 18:05:13 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:05:13 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec  3 18:05:13 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:05:13 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec  3 18:05:13 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:05:13 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec  3 18:05:13 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0) v1
Dec  3 18:05:13 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Dec  3 18:05:13 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 18:05:13 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:05:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:05:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:05:13 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:05:13 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 18:05:13 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:05:13 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:05:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:05:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:05:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:05:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:05:13 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0) v1
Dec  3 18:05:13 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1235616113' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Dec  3 18:05:14 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Dec  3 18:05:14 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Dec  3 18:05:14 compute-0 ceph-mon[192802]: from='client.? 192.168.122.100:0/1235616113' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Dec  3 18:05:14 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Dec  3 18:05:14 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1235616113' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Dec  3 18:05:14 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Dec  3 18:05:14 compute-0 upbeat_wu[215415]: enabled application 'rbd' on pool 'backups'
Dec  3 18:05:14 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Dec  3 18:05:14 compute-0 ceph-mgr[193091]: [progress INFO root] update: starting ev 9e754801-f991-481c-a600-68d154243f77 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Dec  3 18:05:14 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0) v1
Dec  3 18:05:14 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Dec  3 18:05:14 compute-0 podman[215400]: 2025-12-03 18:05:14.724750429 +0000 UTC m=+1.583067863 container died 22f9f4d4c832821ee22b89385d7fbc7ecc41f14d851a08257c8f7e31677d290c (image=quay.io/ceph/ceph:v18, name=upbeat_wu, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:05:14 compute-0 systemd[1]: libpod-22f9f4d4c832821ee22b89385d7fbc7ecc41f14d851a08257c8f7e31677d290c.scope: Deactivated successfully.
Dec  3 18:05:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-04850c510231d1ca641d5de9ae451b0757f411c06ecceef15aa62797e507d7ca-merged.mount: Deactivated successfully.
Dec  3 18:05:14 compute-0 podman[215400]: 2025-12-03 18:05:14.774176026 +0000 UTC m=+1.632493460 container remove 22f9f4d4c832821ee22b89385d7fbc7ecc41f14d851a08257c8f7e31677d290c (image=quay.io/ceph/ceph:v18, name=upbeat_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec  3 18:05:14 compute-0 systemd[1]: libpod-conmon-22f9f4d4c832821ee22b89385d7fbc7ecc41f14d851a08257c8f7e31677d290c.scope: Deactivated successfully.
Dec  3 18:05:15 compute-0 python3[215476]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c1caf3ba-b2a5-5005-a11e-e955c344dccc -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:05:15 compute-0 podman[215477]: 2025-12-03 18:05:15.240721986 +0000 UTC m=+0.077396416 container create 7f995cbe7ddf35ee7ce6d7a9d88be77b0b06e90bffdfe89b862051ed4196dcc4 (image=quay.io/ceph/ceph:v18, name=confident_vaughan, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec  3 18:05:15 compute-0 podman[215477]: 2025-12-03 18:05:15.204822226 +0000 UTC m=+0.041496706 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:05:15 compute-0 systemd[1]: Started libpod-conmon-7f995cbe7ddf35ee7ce6d7a9d88be77b0b06e90bffdfe89b862051ed4196dcc4.scope.
Dec  3 18:05:15 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:05:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b561a1f540e74179d09d135f54f519af5a169cb551d870dd7d28829b552ad782/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b561a1f540e74179d09d135f54f519af5a169cb551d870dd7d28829b552ad782/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:15 compute-0 podman[215477]: 2025-12-03 18:05:15.3899434 +0000 UTC m=+0.226617810 container init 7f995cbe7ddf35ee7ce6d7a9d88be77b0b06e90bffdfe89b862051ed4196dcc4 (image=quay.io/ceph/ceph:v18, name=confident_vaughan, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:05:15 compute-0 podman[215477]: 2025-12-03 18:05:15.399400059 +0000 UTC m=+0.236074469 container start 7f995cbe7ddf35ee7ce6d7a9d88be77b0b06e90bffdfe89b862051ed4196dcc4 (image=quay.io/ceph/ceph:v18, name=confident_vaughan, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:05:15 compute-0 podman[215477]: 2025-12-03 18:05:15.403827556 +0000 UTC m=+0.240501986 container attach 7f995cbe7ddf35ee7ce6d7a9d88be77b0b06e90bffdfe89b862051ed4196dcc4 (image=quay.io/ceph/ceph:v18, name=confident_vaughan, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  3 18:05:15 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Dec  3 18:05:15 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Dec  3 18:05:15 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Dec  3 18:05:15 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Dec  3 18:05:15 compute-0 ceph-mgr[193091]: [progress INFO root] update: starting ev f348c879-2332-4a45-937e-485e817f58fd (PG autoscaler increasing pool 3 PGs from 1 to 32)
Dec  3 18:05:15 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0) v1
Dec  3 18:05:15 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Dec  3 18:05:15 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Dec  3 18:05:15 compute-0 ceph-mon[192802]: from='client.? 192.168.122.100:0/1235616113' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Dec  3 18:05:15 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Dec  3 18:05:15 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v84: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:05:15 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec  3 18:05:15 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  3 18:05:15 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec  3 18:05:15 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  3 18:05:16 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0) v1
Dec  3 18:05:16 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1162296916' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Dec  3 18:05:16 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Dec  3 18:05:16 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Dec  3 18:05:16 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Dec  3 18:05:16 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Dec  3 18:05:16 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1162296916' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Dec  3 18:05:16 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Dec  3 18:05:16 compute-0 confident_vaughan[215492]: enabled application 'rbd' on pool 'images'
Dec  3 18:05:16 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Dec  3 18:05:16 compute-0 ceph-mgr[193091]: [progress INFO root] update: starting ev 1cfdb686-7700-41fe-994e-130bba33e391 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Dec  3 18:05:16 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 33 pg[2.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=33 pruub=14.694299698s) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active pruub 56.824367523s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:16 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0) v1
Dec  3 18:05:16 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Dec  3 18:05:16 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 33 pg[2.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=33 pruub=14.694299698s) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown pruub 56.824367523s@ mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:16 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Dec  3 18:05:16 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Dec  3 18:05:16 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  3 18:05:16 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  3 18:05:16 compute-0 ceph-mon[192802]: from='client.? 192.168.122.100:0/1162296916' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Dec  3 18:05:16 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Dec  3 18:05:16 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Dec  3 18:05:16 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Dec  3 18:05:16 compute-0 ceph-mon[192802]: from='client.? 192.168.122.100:0/1162296916' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Dec  3 18:05:16 compute-0 systemd[1]: libpod-7f995cbe7ddf35ee7ce6d7a9d88be77b0b06e90bffdfe89b862051ed4196dcc4.scope: Deactivated successfully.
Dec  3 18:05:16 compute-0 podman[215517]: 2025-12-03 18:05:16.815332732 +0000 UTC m=+0.039628120 container died 7f995cbe7ddf35ee7ce6d7a9d88be77b0b06e90bffdfe89b862051ed4196dcc4 (image=quay.io/ceph/ceph:v18, name=confident_vaughan, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec  3 18:05:16 compute-0 ceph-mon[192802]: log_channel(cluster) log [WRN] : Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  3 18:05:16 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:05:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-b561a1f540e74179d09d135f54f519af5a169cb551d870dd7d28829b552ad782-merged.mount: Deactivated successfully.
Dec  3 18:05:16 compute-0 podman[215517]: 2025-12-03 18:05:16.885235935 +0000 UTC m=+0.109531323 container remove 7f995cbe7ddf35ee7ce6d7a9d88be77b0b06e90bffdfe89b862051ed4196dcc4 (image=quay.io/ceph/ceph:v18, name=confident_vaughan, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  3 18:05:16 compute-0 systemd[1]: libpod-conmon-7f995cbe7ddf35ee7ce6d7a9d88be77b0b06e90bffdfe89b862051ed4196dcc4.scope: Deactivated successfully.
Dec  3 18:05:17 compute-0 python3[215556]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c1caf3ba-b2a5-5005-a11e-e955c344dccc -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:05:17 compute-0 podman[215557]: 2025-12-03 18:05:17.349246914 +0000 UTC m=+0.051319924 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:05:17 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Dec  3 18:05:17 compute-0 podman[215557]: 2025-12-03 18:05:17.780747965 +0000 UTC m=+0.482820905 container create 596209f630b14788c65e62ee9d516c4742fd18e133e8f3df20cf97bb52d91987 (image=quay.io/ceph/ceph:v18, name=cranky_clarke, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec  3 18:05:17 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Dec  3 18:05:17 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Dec  3 18:05:17 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v87: 69 pgs: 1 peering, 62 unknown, 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:05:17 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Dec  3 18:05:17 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec  3 18:05:17 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  3 18:05:17 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec  3 18:05:17 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  3 18:05:17 compute-0 ceph-mgr[193091]: [progress INFO root] update: starting ev 84b40669-1de7-4037-8fbe-dc5205035363 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Dec  3 18:05:17 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"} v 0) v1
Dec  3 18:05:17 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Dec  3 18:05:17 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 34 pg[2.1d( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:17 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 34 pg[2.1f( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:17 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 34 pg[2.1e( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:17 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 34 pg[2.1c( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:17 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 34 pg[2.b( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:17 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 34 pg[2.9( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:17 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 34 pg[2.8( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:17 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 34 pg[2.6( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:17 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 34 pg[2.a( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:17 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 34 pg[2.5( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:17 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 34 pg[2.4( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:17 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 34 pg[2.3( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:17 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 34 pg[2.2( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:17 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 34 pg[2.1( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:17 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 34 pg[2.7( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:17 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 34 pg[2.d( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:17 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 34 pg[2.c( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:17 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 34 pg[2.f( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:17 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 34 pg[2.10( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:17 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 34 pg[2.11( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:17 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 34 pg[2.12( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:17 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 34 pg[2.13( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:17 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 34 pg[2.14( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:17 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 34 pg[2.e( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:17 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 34 pg[2.17( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:17 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 34 pg[2.18( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:17 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 34 pg[2.19( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:17 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 34 pg[2.1a( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:17 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 34 pg[2.1b( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:17 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 34 pg[2.15( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:17 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 34 pg[2.16( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:17 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Dec  3 18:05:17 compute-0 ceph-mon[192802]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  3 18:05:17 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 34 pg[2.1d( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:17 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 34 pg[2.1c( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:17 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 34 pg[2.1f( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:17 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 34 pg[2.b( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:17 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 34 pg[2.9( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:17 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 34 pg[2.a( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:17 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 34 pg[2.5( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:17 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 34 pg[2.1e( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:17 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 34 pg[2.2( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:17 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 34 pg[2.6( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:17 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 34 pg[2.1( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:17 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 34 pg[2.8( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:17 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 34 pg[2.7( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:17 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 34 pg[2.0( empty local-lis/les=33/34 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:17 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 34 pg[2.4( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:17 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 34 pg[2.d( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:17 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 34 pg[2.f( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:17 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 34 pg[2.3( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:17 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 34 pg[2.11( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:17 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 34 pg[2.c( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:17 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 34 pg[2.12( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:17 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 34 pg[2.14( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:17 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 34 pg[2.13( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:17 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 34 pg[2.e( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:17 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 34 pg[2.17( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:17 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 34 pg[2.19( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:17 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 34 pg[2.1b( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:17 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 34 pg[2.1a( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:17 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 34 pg[2.16( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:17 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 34 pg[2.15( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:17 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 34 pg[2.18( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:17 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 34 pg[2.10( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:17 compute-0 systemd[1]: Started libpod-conmon-596209f630b14788c65e62ee9d516c4742fd18e133e8f3df20cf97bb52d91987.scope.
Dec  3 18:05:17 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:05:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7a6abdd83020fec4a58c22dd054f176a407678febdbe8ca36885c8cb59221e6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7a6abdd83020fec4a58c22dd054f176a407678febdbe8ca36885c8cb59221e6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:18 compute-0 podman[215557]: 2025-12-03 18:05:18.07541146 +0000 UTC m=+0.777484400 container init 596209f630b14788c65e62ee9d516c4742fd18e133e8f3df20cf97bb52d91987 (image=quay.io/ceph/ceph:v18, name=cranky_clarke, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:05:18 compute-0 podman[215557]: 2025-12-03 18:05:18.094975054 +0000 UTC m=+0.797047994 container start 596209f630b14788c65e62ee9d516c4742fd18e133e8f3df20cf97bb52d91987 (image=quay.io/ceph/ceph:v18, name=cranky_clarke, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  3 18:05:18 compute-0 podman[215557]: 2025-12-03 18:05:18.100704033 +0000 UTC m=+0.802777013 container attach 596209f630b14788c65e62ee9d516c4742fd18e133e8f3df20cf97bb52d91987 (image=quay.io/ceph/ceph:v18, name=cranky_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:05:18 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0) v1
Dec  3 18:05:18 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3234865755' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Dec  3 18:05:18 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 2.1 deep-scrub starts
Dec  3 18:05:18 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 2.1 deep-scrub ok
Dec  3 18:05:18 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Dec  3 18:05:18 compute-0 ceph-mgr[193091]: [progress WARNING root] Starting Global Recovery Event,63 pgs not in active + clean state
Dec  3 18:05:18 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Dec  3 18:05:18 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Dec  3 18:05:18 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Dec  3 18:05:18 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3234865755' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Dec  3 18:05:18 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Dec  3 18:05:18 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Dec  3 18:05:18 compute-0 cranky_clarke[215572]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Dec  3 18:05:18 compute-0 ceph-mgr[193091]: [progress INFO root] update: starting ev 991aa2ff-c566-4ad7-bf07-5bf1083d8492 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Dec  3 18:05:18 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0) v1
Dec  3 18:05:18 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Dec  3 18:05:18 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Dec  3 18:05:18 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  3 18:05:18 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  3 18:05:18 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Dec  3 18:05:18 compute-0 ceph-mon[192802]: from='client.? 192.168.122.100:0/3234865755' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Dec  3 18:05:18 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 35 pg[5.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=35 pruub=10.598105431s) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active pruub 54.981109619s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:18 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 35 pg[5.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=35 pruub=10.598105431s) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown pruub 54.981109619s@ mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 systemd[1]: libpod-596209f630b14788c65e62ee9d516c4742fd18e133e8f3df20cf97bb52d91987.scope: Deactivated successfully.
Dec  3 18:05:19 compute-0 podman[215557]: 2025-12-03 18:05:19.004112893 +0000 UTC m=+1.706185823 container died 596209f630b14788c65e62ee9d516c4742fd18e133e8f3df20cf97bb52d91987 (image=quay.io/ceph/ceph:v18, name=cranky_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:05:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-f7a6abdd83020fec4a58c22dd054f176a407678febdbe8ca36885c8cb59221e6-merged.mount: Deactivated successfully.
Dec  3 18:05:19 compute-0 podman[215557]: 2025-12-03 18:05:19.101497592 +0000 UTC m=+1.803570522 container remove 596209f630b14788c65e62ee9d516c4742fd18e133e8f3df20cf97bb52d91987 (image=quay.io/ceph/ceph:v18, name=cranky_clarke, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec  3 18:05:19 compute-0 systemd[1]: libpod-conmon-596209f630b14788c65e62ee9d516c4742fd18e133e8f3df20cf97bb52d91987.scope: Deactivated successfully.
Dec  3 18:05:19 compute-0 podman[215598]: 2025-12-03 18:05:19.16499316 +0000 UTC m=+0.129054577 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=ceilometer_agent_ipmi, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec  3 18:05:19 compute-0 podman[215608]: 2025-12-03 18:05:19.170417782 +0000 UTC m=+0.115666993 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Dec  3 18:05:19 compute-0 podman[215606]: 2025-12-03 18:05:19.186707376 +0000 UTC m=+0.122683853 container health_status 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, managed_by=edpm_ansible, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, build-date=2025-08-20T13:12:41, distribution-scope=public, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, container_name=openstack_network_exporter, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec  3 18:05:19 compute-0 podman[215614]: 2025-12-03 18:05:19.186963692 +0000 UTC m=+0.132436678 container health_status ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, vcs-type=git, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2024-09-18T21:23:30)
Dec  3 18:05:19 compute-0 podman[215607]: 2025-12-03 18:05:19.189991256 +0000 UTC m=+0.148456057 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 18:05:19 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 35 pg[4.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=35 pruub=8.309394836s) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active pruub 67.986961365s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:19 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 35 pg[4.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=35 pruub=8.309394836s) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown pruub 67.986961365s@ mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 33 pg[3.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=33 pruub=14.251163483s) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active pruub 65.264953613s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:19 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 35 pg[3.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=33 pruub=14.251163483s) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown pruub 65.264953613s@ mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 35 pg[3.17( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 35 pg[3.18( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 35 pg[3.1d( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 35 pg[3.1e( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 35 pg[3.1b( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 35 pg[3.1c( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 35 pg[3.1f( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 35 pg[3.3( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 35 pg[3.4( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 35 pg[3.19( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 35 pg[3.1a( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 35 pg[3.13( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 35 pg[3.14( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 35 pg[3.7( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 35 pg[3.8( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 35 pg[3.9( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 35 pg[3.a( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 35 pg[3.15( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 35 pg[3.16( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 35 pg[3.f( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 35 pg[3.10( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 35 pg[3.5( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 35 pg[3.6( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 35 pg[3.b( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 35 pg[3.c( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 35 pg[3.d( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 35 pg[3.e( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 35 pg[3.11( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 35 pg[3.12( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 35 pg[3.1( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 35 pg[3.2( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 python3[215730]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c1caf3ba-b2a5-5005-a11e-e955c344dccc -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:05:19 compute-0 podman[215731]: 2025-12-03 18:05:19.539556472 +0000 UTC m=+0.076587246 container create c5338b1432c1aa9fb9af57cc8d6fd2996b50a39a2353979f7b031c812df6df23 (image=quay.io/ceph/ceph:v18, name=amazing_mendeleev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  3 18:05:19 compute-0 systemd[1]: Started libpod-conmon-c5338b1432c1aa9fb9af57cc8d6fd2996b50a39a2353979f7b031c812df6df23.scope.
Dec  3 18:05:19 compute-0 podman[215731]: 2025-12-03 18:05:19.51471433 +0000 UTC m=+0.051745084 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:05:19 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:05:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad5cc121e62f21a6f3e9df118da0ff6ed0f58d8739d0c9716f746eed853104a5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad5cc121e62f21a6f3e9df118da0ff6ed0f58d8739d0c9716f746eed853104a5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:19 compute-0 podman[215731]: 2025-12-03 18:05:19.669222902 +0000 UTC m=+0.206253686 container init c5338b1432c1aa9fb9af57cc8d6fd2996b50a39a2353979f7b031c812df6df23 (image=quay.io/ceph/ceph:v18, name=amazing_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec  3 18:05:19 compute-0 podman[215731]: 2025-12-03 18:05:19.676527829 +0000 UTC m=+0.213558563 container start c5338b1432c1aa9fb9af57cc8d6fd2996b50a39a2353979f7b031c812df6df23 (image=quay.io/ceph/ceph:v18, name=amazing_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  3 18:05:19 compute-0 podman[215731]: 2025-12-03 18:05:19.682693588 +0000 UTC m=+0.219724372 container attach c5338b1432c1aa9fb9af57cc8d6fd2996b50a39a2353979f7b031c812df6df23 (image=quay.io/ceph/ceph:v18, name=amazing_mendeleev, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:05:19 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Dec  3 18:05:19 compute-0 podman[215750]: 2025-12-03 18:05:19.733654193 +0000 UTC m=+0.082556121 container health_status f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  3 18:05:19 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Dec  3 18:05:19 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v89: 131 pgs: 2 peering, 124 unknown, 5 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:05:19 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec  3 18:05:19 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  3 18:05:19 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Dec  3 18:05:19 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Dec  3 18:05:19 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec  3 18:05:19 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Dec  3 18:05:19 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Dec  3 18:05:19 compute-0 ceph-mgr[193091]: [progress INFO root] update: starting ev 8d4e3b72-da51-4c6b-be3b-74f63d620172 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Dec  3 18:05:19 compute-0 ceph-mgr[193091]: [progress INFO root] complete: finished ev 9e754801-f991-481c-a600-68d154243f77 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Dec  3 18:05:19 compute-0 ceph-mgr[193091]: [progress INFO root] Completed event 9e754801-f991-481c-a600-68d154243f77 (PG autoscaler increasing pool 2 PGs from 1 to 32) in 5 seconds
Dec  3 18:05:19 compute-0 ceph-mgr[193091]: [progress INFO root] complete: finished ev f348c879-2332-4a45-937e-485e817f58fd (PG autoscaler increasing pool 3 PGs from 1 to 32)
Dec  3 18:05:19 compute-0 ceph-mgr[193091]: [progress INFO root] Completed event f348c879-2332-4a45-937e-485e817f58fd (PG autoscaler increasing pool 3 PGs from 1 to 32) in 4 seconds
Dec  3 18:05:19 compute-0 ceph-mgr[193091]: [progress INFO root] complete: finished ev 1cfdb686-7700-41fe-994e-130bba33e391 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Dec  3 18:05:19 compute-0 ceph-mgr[193091]: [progress INFO root] Completed event 1cfdb686-7700-41fe-994e-130bba33e391 (PG autoscaler increasing pool 4 PGs from 1 to 32) in 3 seconds
Dec  3 18:05:19 compute-0 ceph-mgr[193091]: [progress INFO root] complete: finished ev 84b40669-1de7-4037-8fbe-dc5205035363 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Dec  3 18:05:19 compute-0 ceph-mgr[193091]: [progress INFO root] Completed event 84b40669-1de7-4037-8fbe-dc5205035363 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 2 seconds
Dec  3 18:05:19 compute-0 ceph-mgr[193091]: [progress INFO root] complete: finished ev 991aa2ff-c566-4ad7-bf07-5bf1083d8492 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Dec  3 18:05:19 compute-0 ceph-mgr[193091]: [progress INFO root] Completed event 991aa2ff-c566-4ad7-bf07-5bf1083d8492 (PG autoscaler increasing pool 6 PGs from 1 to 32) in 1 seconds
Dec  3 18:05:19 compute-0 ceph-mgr[193091]: [progress INFO root] complete: finished ev 8d4e3b72-da51-4c6b-be3b-74f63d620172 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Dec  3 18:05:19 compute-0 ceph-mgr[193091]: [progress INFO root] Completed event 8d4e3b72-da51-4c6b-be3b-74f63d620172 (PG autoscaler increasing pool 7 PGs from 1 to 32) in 0 seconds
Dec  3 18:05:19 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 36 pg[5.1c( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 36 pg[5.1d( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 36 pg[4.17( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 36 pg[4.16( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 36 pg[4.15( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 36 pg[4.12( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 36 pg[4.11( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 36 pg[4.10( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 36 pg[4.f( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 36 pg[4.14( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 36 pg[4.13( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 36 pg[4.e( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 36 pg[4.d( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 36 pg[4.c( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 36 pg[4.2( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 36 pg[6.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=36 pruub=11.626026154s) [0] r=0 lpr=36 pi=[25,36)/1 crt=0'0 mlcod 0'0 active pruub 72.068252563s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:19 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 36 pg[4.1( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Dec  3 18:05:19 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Dec  3 18:05:19 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Dec  3 18:05:19 compute-0 ceph-mon[192802]: from='client.? 192.168.122.100:0/3234865755' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Dec  3 18:05:19 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Dec  3 18:05:19 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  3 18:05:19 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Dec  3 18:05:19 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec  3 18:05:19 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 36 pg[4.3( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 36 pg[4.18( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 36 pg[4.19( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 36 pg[4.9( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 36 pg[4.4( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 36 pg[4.5( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 36 pg[4.1a( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 36 pg[4.a( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 36 pg[4.1b( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 36 pg[4.6( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 36 pg[4.b( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 36 pg[4.8( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 36 pg[4.7( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 36 pg[4.1c( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 36 pg[4.1d( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 36 pg[4.1e( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 36 pg[4.1f( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 36 pg[5.1f( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 36 pg[5.10( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 36 pg[5.11( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 36 pg[5.12( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 36 pg[5.13( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 36 pg[5.1e( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 36 pg[5.15( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 36 pg[5.14( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 36 pg[5.16( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 36 pg[5.17( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 36 pg[5.8( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 36 pg[5.9( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 36 pg[5.a( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 36 pg[5.b( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 36 pg[5.7( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 36 pg[5.6( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 36 pg[5.5( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 36 pg[5.4( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 36 pg[5.3( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 36 pg[5.2( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 36 pg[5.1( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 36 pg[5.f( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 36 pg[5.e( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 36 pg[5.d( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 36 pg[5.c( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 36 pg[5.1b( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 36 pg[5.19( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 36 pg[5.18( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 36 pg[5.1a( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:19 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 36 pg[5.1d( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:19 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 36 pg[5.1c( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:19 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 36 pg[5.10( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:19 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 36 pg[5.11( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:19 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 36 pg[5.12( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:19 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 36 pg[5.13( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:19 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 36 pg[5.15( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:19 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 36 pg[5.1f( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:19 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 36 pg[5.17( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:19 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 36 pg[5.8( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:19 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 36 pg[5.14( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:19 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 36 pg[5.9( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:19 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 36 pg[5.a( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:19 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 36 pg[5.b( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:19 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 36 pg[5.16( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:19 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 36 pg[5.0( empty local-lis/les=35/36 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:19 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 36 pg[5.1e( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:19 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 36 pg[5.6( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:19 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 36 pg[5.4( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:19 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 36 pg[5.3( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:19 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 36 pg[5.1( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:19 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 36 pg[6.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=36 pruub=11.626026154s) [0] r=0 lpr=36 pi=[25,36)/1 crt=0'0 mlcod 0'0 unknown pruub 72.068252563s@ mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:20 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 36 pg[4.17( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 36 pg[5.f( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 36 pg[5.2( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 36 pg[5.e( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 36 pg[5.5( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 36 pg[5.d( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 36 pg[5.1b( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 36 pg[5.19( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 36 pg[5.18( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 36 pg[3.1d( empty local-lis/les=33/36 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 36 pg[4.15( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 36 pg[4.11( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 36 pg[4.12( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 36 pg[4.10( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 36 pg[4.14( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 36 pg[4.16( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 36 pg[4.13( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 36 pg[4.e( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 36 pg[4.0( empty local-lis/les=35/36 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 36 pg[4.d( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 36 pg[4.1( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 36 pg[4.2( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 36 pg[4.3( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 36 pg[4.19( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 36 pg[4.c( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 36 pg[4.9( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 36 pg[4.4( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 36 pg[4.1a( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 36 pg[4.5( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 36 pg[4.1b( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 36 pg[4.a( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 36 pg[4.f( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 36 pg[4.7( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 36 pg[4.6( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 36 pg[4.1f( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 36 pg[4.1e( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 36 pg[4.8( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 36 pg[4.1d( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 36 pg[4.b( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 36 pg[4.1c( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 36 pg[4.18( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 36 pg[5.c( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 36 pg[5.1a( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 36 pg[5.7( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 36 pg[3.1f( empty local-lis/les=33/36 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 36 pg[3.1b( empty local-lis/les=33/36 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 36 pg[3.a( empty local-lis/les=33/36 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 36 pg[3.9( empty local-lis/les=33/36 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 36 pg[3.1e( empty local-lis/les=33/36 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 36 pg[3.8( empty local-lis/les=33/36 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 36 pg[3.1c( empty local-lis/les=33/36 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 36 pg[3.7( empty local-lis/les=33/36 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 36 pg[3.5( empty local-lis/les=33/36 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 36 pg[3.3( empty local-lis/les=33/36 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 36 pg[3.1( empty local-lis/les=33/36 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 36 pg[3.6( empty local-lis/les=33/36 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 36 pg[3.0( empty local-lis/les=33/36 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 36 pg[3.2( empty local-lis/les=33/36 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 36 pg[3.b( empty local-lis/les=33/36 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 36 pg[3.d( empty local-lis/les=33/36 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 36 pg[3.e( empty local-lis/les=33/36 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 36 pg[3.c( empty local-lis/les=33/36 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 36 pg[3.f( empty local-lis/les=33/36 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 36 pg[3.4( empty local-lis/les=33/36 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 36 pg[3.11( empty local-lis/les=33/36 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 36 pg[3.10( empty local-lis/les=33/36 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 36 pg[3.14( empty local-lis/les=33/36 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 36 pg[3.13( empty local-lis/les=33/36 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 36 pg[3.12( empty local-lis/les=33/36 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 36 pg[3.15( empty local-lis/les=33/36 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 36 pg[3.18( empty local-lis/les=33/36 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 36 pg[3.16( empty local-lis/les=33/36 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 36 pg[3.19( empty local-lis/les=33/36 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 36 pg[3.17( empty local-lis/les=33/36 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 36 pg[3.1a( empty local-lis/les=33/36 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:20 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0) v1
Dec  3 18:05:20 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1581635100' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Dec  3 18:05:20 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Dec  3 18:05:20 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1581635100' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Dec  3 18:05:20 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Dec  3 18:05:20 compute-0 amazing_mendeleev[215747]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Dec  3 18:05:21 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Dec  3 18:05:21 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 37 pg[6.1a( empty local-lis/les=25/26 n=0 ec=36/25 lis/c=25/25 les/c/f=26/26/0 sis=36) [0] r=0 lpr=36 pi=[25,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:21 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 37 pg[6.15( empty local-lis/les=25/26 n=0 ec=36/25 lis/c=25/25 les/c/f=26/26/0 sis=36) [0] r=0 lpr=36 pi=[25,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:21 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 37 pg[6.14( empty local-lis/les=25/26 n=0 ec=36/25 lis/c=25/25 les/c/f=26/26/0 sis=36) [0] r=0 lpr=36 pi=[25,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:21 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 37 pg[6.17( empty local-lis/les=25/26 n=0 ec=36/25 lis/c=25/25 les/c/f=26/26/0 sis=36) [0] r=0 lpr=36 pi=[25,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:21 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 37 pg[6.16( empty local-lis/les=25/26 n=0 ec=36/25 lis/c=25/25 les/c/f=26/26/0 sis=36) [0] r=0 lpr=36 pi=[25,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:21 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 37 pg[6.11( empty local-lis/les=25/26 n=0 ec=36/25 lis/c=25/25 les/c/f=26/26/0 sis=36) [0] r=0 lpr=36 pi=[25,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:21 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 37 pg[6.10( empty local-lis/les=25/26 n=0 ec=36/25 lis/c=25/25 les/c/f=26/26/0 sis=36) [0] r=0 lpr=36 pi=[25,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:21 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 37 pg[6.13( empty local-lis/les=25/26 n=0 ec=36/25 lis/c=25/25 les/c/f=26/26/0 sis=36) [0] r=0 lpr=36 pi=[25,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:21 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 37 pg[6.12( empty local-lis/les=25/26 n=0 ec=36/25 lis/c=25/25 les/c/f=26/26/0 sis=36) [0] r=0 lpr=36 pi=[25,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:21 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 37 pg[6.d( empty local-lis/les=25/26 n=0 ec=36/25 lis/c=25/25 les/c/f=26/26/0 sis=36) [0] r=0 lpr=36 pi=[25,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:21 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 37 pg[6.c( empty local-lis/les=25/26 n=0 ec=36/25 lis/c=25/25 les/c/f=26/26/0 sis=36) [0] r=0 lpr=36 pi=[25,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:21 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 37 pg[6.f( empty local-lis/les=25/26 n=0 ec=36/25 lis/c=25/25 les/c/f=26/26/0 sis=36) [0] r=0 lpr=36 pi=[25,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:21 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 37 pg[6.e( empty local-lis/les=25/26 n=0 ec=36/25 lis/c=25/25 les/c/f=26/26/0 sis=36) [0] r=0 lpr=36 pi=[25,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:21 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 37 pg[6.2( empty local-lis/les=25/26 n=0 ec=36/25 lis/c=25/25 les/c/f=26/26/0 sis=36) [0] r=0 lpr=36 pi=[25,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:21 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 37 pg[6.1b( empty local-lis/les=25/26 n=0 ec=36/25 lis/c=25/25 les/c/f=26/26/0 sis=36) [0] r=0 lpr=36 pi=[25,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:21 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 37 pg[6.6( empty local-lis/les=25/26 n=0 ec=36/25 lis/c=25/25 les/c/f=26/26/0 sis=36) [0] r=0 lpr=36 pi=[25,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:21 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 37 pg[6.1( empty local-lis/les=25/26 n=0 ec=36/25 lis/c=25/25 les/c/f=26/26/0 sis=36) [0] r=0 lpr=36 pi=[25,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:21 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 37 pg[6.b( empty local-lis/les=25/26 n=0 ec=36/25 lis/c=25/25 les/c/f=26/26/0 sis=36) [0] r=0 lpr=36 pi=[25,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:21 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 37 pg[6.3( empty local-lis/les=25/26 n=0 ec=36/25 lis/c=25/25 les/c/f=26/26/0 sis=36) [0] r=0 lpr=36 pi=[25,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:21 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 37 pg[6.18( empty local-lis/les=25/26 n=0 ec=36/25 lis/c=25/25 les/c/f=26/26/0 sis=36) [0] r=0 lpr=36 pi=[25,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:21 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 37 pg[6.7( empty local-lis/les=25/26 n=0 ec=36/25 lis/c=25/25 les/c/f=26/26/0 sis=36) [0] r=0 lpr=36 pi=[25,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:21 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 37 pg[6.8( empty local-lis/les=25/26 n=0 ec=36/25 lis/c=25/25 les/c/f=26/26/0 sis=36) [0] r=0 lpr=36 pi=[25,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:21 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 37 pg[6.4( empty local-lis/les=25/26 n=0 ec=36/25 lis/c=25/25 les/c/f=26/26/0 sis=36) [0] r=0 lpr=36 pi=[25,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:21 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 37 pg[6.19( empty local-lis/les=25/26 n=0 ec=36/25 lis/c=25/25 les/c/f=26/26/0 sis=36) [0] r=0 lpr=36 pi=[25,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:21 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 37 pg[6.5( empty local-lis/les=25/26 n=0 ec=36/25 lis/c=25/25 les/c/f=26/26/0 sis=36) [0] r=0 lpr=36 pi=[25,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:21 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 37 pg[6.9( empty local-lis/les=25/26 n=0 ec=36/25 lis/c=25/25 les/c/f=26/26/0 sis=36) [0] r=0 lpr=36 pi=[25,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:21 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 37 pg[6.1e( empty local-lis/les=25/26 n=0 ec=36/25 lis/c=25/25 les/c/f=26/26/0 sis=36) [0] r=0 lpr=36 pi=[25,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:21 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 37 pg[6.a( empty local-lis/les=25/26 n=0 ec=36/25 lis/c=25/25 les/c/f=26/26/0 sis=36) [0] r=0 lpr=36 pi=[25,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:21 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 37 pg[6.1d( empty local-lis/les=25/26 n=0 ec=36/25 lis/c=25/25 les/c/f=26/26/0 sis=36) [0] r=0 lpr=36 pi=[25,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:21 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 37 pg[6.1f( empty local-lis/les=25/26 n=0 ec=36/25 lis/c=25/25 les/c/f=26/26/0 sis=36) [0] r=0 lpr=36 pi=[25,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:21 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 37 pg[6.1c( empty local-lis/les=25/26 n=0 ec=36/25 lis/c=25/25 les/c/f=26/26/0 sis=36) [0] r=0 lpr=36 pi=[25,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:21 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 37 pg[6.1a( empty local-lis/les=36/37 n=0 ec=36/25 lis/c=25/25 les/c/f=26/26/0 sis=36) [0] r=0 lpr=36 pi=[25,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:21 compute-0 systemd[1]: libpod-c5338b1432c1aa9fb9af57cc8d6fd2996b50a39a2353979f7b031c812df6df23.scope: Deactivated successfully.
Dec  3 18:05:21 compute-0 podman[215731]: 2025-12-03 18:05:21.026829063 +0000 UTC m=+1.563859797 container died c5338b1432c1aa9fb9af57cc8d6fd2996b50a39a2353979f7b031c812df6df23 (image=quay.io/ceph/ceph:v18, name=amazing_mendeleev, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:05:21 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 37 pg[6.14( empty local-lis/les=36/37 n=0 ec=36/25 lis/c=25/25 les/c/f=26/26/0 sis=36) [0] r=0 lpr=36 pi=[25,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:21 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 37 pg[6.16( empty local-lis/les=36/37 n=0 ec=36/25 lis/c=25/25 les/c/f=26/26/0 sis=36) [0] r=0 lpr=36 pi=[25,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:21 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 37 pg[6.15( empty local-lis/les=36/37 n=0 ec=36/25 lis/c=25/25 les/c/f=26/26/0 sis=36) [0] r=0 lpr=36 pi=[25,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:21 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 37 pg[6.11( empty local-lis/les=36/37 n=0 ec=36/25 lis/c=25/25 les/c/f=26/26/0 sis=36) [0] r=0 lpr=36 pi=[25,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:21 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 37 pg[6.10( empty local-lis/les=36/37 n=0 ec=36/25 lis/c=25/25 les/c/f=26/26/0 sis=36) [0] r=0 lpr=36 pi=[25,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:21 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 37 pg[6.12( empty local-lis/les=36/37 n=0 ec=36/25 lis/c=25/25 les/c/f=26/26/0 sis=36) [0] r=0 lpr=36 pi=[25,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:21 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 37 pg[6.f( empty local-lis/les=36/37 n=0 ec=36/25 lis/c=25/25 les/c/f=26/26/0 sis=36) [0] r=0 lpr=36 pi=[25,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:21 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 37 pg[6.c( empty local-lis/les=36/37 n=0 ec=36/25 lis/c=25/25 les/c/f=26/26/0 sis=36) [0] r=0 lpr=36 pi=[25,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:21 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 37 pg[6.2( empty local-lis/les=36/37 n=0 ec=36/25 lis/c=25/25 les/c/f=26/26/0 sis=36) [0] r=0 lpr=36 pi=[25,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:21 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 37 pg[6.13( empty local-lis/les=36/37 n=0 ec=36/25 lis/c=25/25 les/c/f=26/26/0 sis=36) [0] r=0 lpr=36 pi=[25,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:21 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 37 pg[6.0( empty local-lis/les=36/37 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=36) [0] r=0 lpr=36 pi=[25,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:21 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 37 pg[6.e( empty local-lis/les=36/37 n=0 ec=36/25 lis/c=25/25 les/c/f=26/26/0 sis=36) [0] r=0 lpr=36 pi=[25,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:21 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 37 pg[6.1b( empty local-lis/les=36/37 n=0 ec=36/25 lis/c=25/25 les/c/f=26/26/0 sis=36) [0] r=0 lpr=36 pi=[25,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:21 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 37 pg[6.1( empty local-lis/les=36/37 n=0 ec=36/25 lis/c=25/25 les/c/f=26/26/0 sis=36) [0] r=0 lpr=36 pi=[25,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:21 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 37 pg[6.18( empty local-lis/les=36/37 n=0 ec=36/25 lis/c=25/25 les/c/f=26/26/0 sis=36) [0] r=0 lpr=36 pi=[25,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:21 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 37 pg[6.d( empty local-lis/les=36/37 n=0 ec=36/25 lis/c=25/25 les/c/f=26/26/0 sis=36) [0] r=0 lpr=36 pi=[25,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:21 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 37 pg[6.7( empty local-lis/les=36/37 n=0 ec=36/25 lis/c=25/25 les/c/f=26/26/0 sis=36) [0] r=0 lpr=36 pi=[25,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:21 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 37 pg[6.3( empty local-lis/les=36/37 n=0 ec=36/25 lis/c=25/25 les/c/f=26/26/0 sis=36) [0] r=0 lpr=36 pi=[25,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:21 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 37 pg[6.4( empty local-lis/les=36/37 n=0 ec=36/25 lis/c=25/25 les/c/f=26/26/0 sis=36) [0] r=0 lpr=36 pi=[25,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:21 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 37 pg[6.6( empty local-lis/les=36/37 n=0 ec=36/25 lis/c=25/25 les/c/f=26/26/0 sis=36) [0] r=0 lpr=36 pi=[25,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:21 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 37 pg[6.5( empty local-lis/les=36/37 n=0 ec=36/25 lis/c=25/25 les/c/f=26/26/0 sis=36) [0] r=0 lpr=36 pi=[25,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:21 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 37 pg[6.8( empty local-lis/les=36/37 n=0 ec=36/25 lis/c=25/25 les/c/f=26/26/0 sis=36) [0] r=0 lpr=36 pi=[25,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:21 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 37 pg[6.b( empty local-lis/les=36/37 n=0 ec=36/25 lis/c=25/25 les/c/f=26/26/0 sis=36) [0] r=0 lpr=36 pi=[25,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:21 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 37 pg[6.1e( empty local-lis/les=36/37 n=0 ec=36/25 lis/c=25/25 les/c/f=26/26/0 sis=36) [0] r=0 lpr=36 pi=[25,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:21 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 37 pg[6.a( empty local-lis/les=36/37 n=0 ec=36/25 lis/c=25/25 les/c/f=26/26/0 sis=36) [0] r=0 lpr=36 pi=[25,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:21 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 37 pg[6.1d( empty local-lis/les=36/37 n=0 ec=36/25 lis/c=25/25 les/c/f=26/26/0 sis=36) [0] r=0 lpr=36 pi=[25,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:21 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 37 pg[6.1f( empty local-lis/les=36/37 n=0 ec=36/25 lis/c=25/25 les/c/f=26/26/0 sis=36) [0] r=0 lpr=36 pi=[25,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:21 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 37 pg[6.1c( empty local-lis/les=36/37 n=0 ec=36/25 lis/c=25/25 les/c/f=26/26/0 sis=36) [0] r=0 lpr=36 pi=[25,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:21 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 37 pg[6.17( empty local-lis/les=36/37 n=0 ec=36/25 lis/c=25/25 les/c/f=26/26/0 sis=36) [0] r=0 lpr=36 pi=[25,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:21 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 37 pg[6.9( empty local-lis/les=36/37 n=0 ec=36/25 lis/c=25/25 les/c/f=26/26/0 sis=36) [0] r=0 lpr=36 pi=[25,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:21 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 37 pg[6.19( empty local-lis/les=36/37 n=0 ec=36/25 lis/c=25/25 les/c/f=26/26/0 sis=36) [0] r=0 lpr=36 pi=[25,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:21 compute-0 ceph-mon[192802]: from='client.? 192.168.122.100:0/1581635100' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Dec  3 18:05:21 compute-0 ceph-mon[192802]: from='client.? 192.168.122.100:0/1581635100' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Dec  3 18:05:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad5cc121e62f21a6f3e9df118da0ff6ed0f58d8739d0c9716f746eed853104a5-merged.mount: Deactivated successfully.
Dec  3 18:05:21 compute-0 podman[215731]: 2025-12-03 18:05:21.287022235 +0000 UTC m=+1.824052979 container remove c5338b1432c1aa9fb9af57cc8d6fd2996b50a39a2353979f7b031c812df6df23 (image=quay.io/ceph/ceph:v18, name=amazing_mendeleev, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:05:21 compute-0 systemd[1]: libpod-conmon-c5338b1432c1aa9fb9af57cc8d6fd2996b50a39a2353979f7b031c812df6df23.scope: Deactivated successfully.
Dec  3 18:05:21 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Dec  3 18:05:21 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Dec  3 18:05:21 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v92: 162 pgs: 2 peering, 124 unknown, 36 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:05:21 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec  3 18:05:21 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  3 18:05:21 compute-0 ceph-mon[192802]: log_channel(cluster) log [WRN] : Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  3 18:05:21 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e37 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:05:22 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Dec  3 18:05:22 compute-0 ceph-mon[192802]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec  3 18:05:22 compute-0 ceph-mon[192802]: log_channel(cluster) log [INF] : Cluster is now healthy
Dec  3 18:05:22 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Dec  3 18:05:22 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Dec  3 18:05:22 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Dec  3 18:05:22 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 38 pg[7.0( empty local-lis/les=27/28 n=0 ec=27/27 lis/c=27/27 les/c/f=28/28/0 sis=38 pruub=11.397875786s) [1] r=0 lpr=38 pi=[27,38)/1 crt=0'0 mlcod 0'0 active pruub 65.434112549s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:22 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 38 pg[7.0( empty local-lis/les=27/28 n=0 ec=27/27 lis/c=27/27 les/c/f=28/28/0 sis=38 pruub=11.397875786s) [1] r=0 lpr=38 pi=[27,38)/1 crt=0'0 mlcod 0'0 unknown pruub 65.434112549s@ mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:22 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  3 18:05:22 compute-0 ceph-mon[192802]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  3 18:05:22 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Dec  3 18:05:22 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Dec  3 18:05:22 compute-0 python3[215884]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  3 18:05:22 compute-0 python3[215955]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764785122.0382442-37753-123336536791619/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=0a1ea65aada399f80274d3cc2047646f2797712b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:05:23 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Dec  3 18:05:23 compute-0 ceph-mon[192802]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec  3 18:05:23 compute-0 ceph-mon[192802]: Cluster is now healthy
Dec  3 18:05:23 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Dec  3 18:05:23 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Dec  3 18:05:23 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Dec  3 18:05:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 39 pg[7.1e( empty local-lis/les=27/28 n=0 ec=38/27 lis/c=27/27 les/c/f=28/28/0 sis=38) [1] r=0 lpr=38 pi=[27,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 39 pg[7.1c( empty local-lis/les=27/28 n=0 ec=38/27 lis/c=27/27 les/c/f=28/28/0 sis=38) [1] r=0 lpr=38 pi=[27,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 39 pg[7.1d( empty local-lis/les=27/28 n=0 ec=38/27 lis/c=27/27 les/c/f=28/28/0 sis=38) [1] r=0 lpr=38 pi=[27,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 39 pg[7.13( empty local-lis/les=27/28 n=0 ec=38/27 lis/c=27/27 les/c/f=28/28/0 sis=38) [1] r=0 lpr=38 pi=[27,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 39 pg[7.12( empty local-lis/les=27/28 n=0 ec=38/27 lis/c=27/27 les/c/f=28/28/0 sis=38) [1] r=0 lpr=38 pi=[27,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 39 pg[7.11( empty local-lis/les=27/28 n=0 ec=38/27 lis/c=27/27 les/c/f=28/28/0 sis=38) [1] r=0 lpr=38 pi=[27,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 39 pg[7.10( empty local-lis/les=27/28 n=0 ec=38/27 lis/c=27/27 les/c/f=28/28/0 sis=38) [1] r=0 lpr=38 pi=[27,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 39 pg[7.16( empty local-lis/les=27/28 n=0 ec=38/27 lis/c=27/27 les/c/f=28/28/0 sis=38) [1] r=0 lpr=38 pi=[27,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 39 pg[7.15( empty local-lis/les=27/28 n=0 ec=38/27 lis/c=27/27 les/c/f=28/28/0 sis=38) [1] r=0 lpr=38 pi=[27,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 39 pg[7.14( empty local-lis/les=27/28 n=0 ec=38/27 lis/c=27/27 les/c/f=28/28/0 sis=38) [1] r=0 lpr=38 pi=[27,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 39 pg[7.17( empty local-lis/les=27/28 n=0 ec=38/27 lis/c=27/27 les/c/f=28/28/0 sis=38) [1] r=0 lpr=38 pi=[27,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 39 pg[7.b( empty local-lis/les=27/28 n=0 ec=38/27 lis/c=27/27 les/c/f=28/28/0 sis=38) [1] r=0 lpr=38 pi=[27,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 39 pg[7.a( empty local-lis/les=27/28 n=0 ec=38/27 lis/c=27/27 les/c/f=28/28/0 sis=38) [1] r=0 lpr=38 pi=[27,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 39 pg[7.9( empty local-lis/les=27/28 n=0 ec=38/27 lis/c=27/27 les/c/f=28/28/0 sis=38) [1] r=0 lpr=38 pi=[27,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 39 pg[7.8( empty local-lis/les=27/28 n=0 ec=38/27 lis/c=27/27 les/c/f=28/28/0 sis=38) [1] r=0 lpr=38 pi=[27,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 39 pg[7.f( empty local-lis/les=27/28 n=0 ec=38/27 lis/c=27/27 les/c/f=28/28/0 sis=38) [1] r=0 lpr=38 pi=[27,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 39 pg[7.6( empty local-lis/les=27/28 n=0 ec=38/27 lis/c=27/27 les/c/f=28/28/0 sis=38) [1] r=0 lpr=38 pi=[27,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 39 pg[7.4( empty local-lis/les=27/28 n=0 ec=38/27 lis/c=27/27 les/c/f=28/28/0 sis=38) [1] r=0 lpr=38 pi=[27,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 39 pg[7.5( empty local-lis/les=27/28 n=0 ec=38/27 lis/c=27/27 les/c/f=28/28/0 sis=38) [1] r=0 lpr=38 pi=[27,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 39 pg[7.1( empty local-lis/les=27/28 n=0 ec=38/27 lis/c=27/27 les/c/f=28/28/0 sis=38) [1] r=0 lpr=38 pi=[27,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 39 pg[7.2( empty local-lis/les=27/28 n=0 ec=38/27 lis/c=27/27 les/c/f=28/28/0 sis=38) [1] r=0 lpr=38 pi=[27,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 39 pg[7.3( empty local-lis/les=27/28 n=0 ec=38/27 lis/c=27/27 les/c/f=28/28/0 sis=38) [1] r=0 lpr=38 pi=[27,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 39 pg[7.c( empty local-lis/les=27/28 n=0 ec=38/27 lis/c=27/27 les/c/f=28/28/0 sis=38) [1] r=0 lpr=38 pi=[27,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 39 pg[7.e( empty local-lis/les=27/28 n=0 ec=38/27 lis/c=27/27 les/c/f=28/28/0 sis=38) [1] r=0 lpr=38 pi=[27,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 39 pg[7.1f( empty local-lis/les=27/28 n=0 ec=38/27 lis/c=27/27 les/c/f=28/28/0 sis=38) [1] r=0 lpr=38 pi=[27,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 39 pg[7.d( empty local-lis/les=27/28 n=0 ec=38/27 lis/c=27/27 les/c/f=28/28/0 sis=38) [1] r=0 lpr=38 pi=[27,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 39 pg[7.18( empty local-lis/les=27/28 n=0 ec=38/27 lis/c=27/27 les/c/f=28/28/0 sis=38) [1] r=0 lpr=38 pi=[27,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 39 pg[7.19( empty local-lis/les=27/28 n=0 ec=38/27 lis/c=27/27 les/c/f=28/28/0 sis=38) [1] r=0 lpr=38 pi=[27,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 39 pg[7.1a( empty local-lis/les=27/28 n=0 ec=38/27 lis/c=27/27 les/c/f=28/28/0 sis=38) [1] r=0 lpr=38 pi=[27,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 39 pg[7.1b( empty local-lis/les=27/28 n=0 ec=38/27 lis/c=27/27 les/c/f=28/28/0 sis=38) [1] r=0 lpr=38 pi=[27,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 39 pg[7.7( empty local-lis/les=27/28 n=0 ec=38/27 lis/c=27/27 les/c/f=28/28/0 sis=38) [1] r=0 lpr=38 pi=[27,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 39 pg[7.1d( empty local-lis/les=38/39 n=0 ec=38/27 lis/c=27/27 les/c/f=28/28/0 sis=38) [1] r=0 lpr=38 pi=[27,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 39 pg[7.1e( empty local-lis/les=38/39 n=0 ec=38/27 lis/c=27/27 les/c/f=28/28/0 sis=38) [1] r=0 lpr=38 pi=[27,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 39 pg[7.1c( empty local-lis/les=38/39 n=0 ec=38/27 lis/c=27/27 les/c/f=28/28/0 sis=38) [1] r=0 lpr=38 pi=[27,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 39 pg[7.11( empty local-lis/les=38/39 n=0 ec=38/27 lis/c=27/27 les/c/f=28/28/0 sis=38) [1] r=0 lpr=38 pi=[27,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 39 pg[7.12( empty local-lis/les=38/39 n=0 ec=38/27 lis/c=27/27 les/c/f=28/28/0 sis=38) [1] r=0 lpr=38 pi=[27,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 39 pg[7.10( empty local-lis/les=38/39 n=0 ec=38/27 lis/c=27/27 les/c/f=28/28/0 sis=38) [1] r=0 lpr=38 pi=[27,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 39 pg[7.13( empty local-lis/les=38/39 n=0 ec=38/27 lis/c=27/27 les/c/f=28/28/0 sis=38) [1] r=0 lpr=38 pi=[27,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 39 pg[7.15( empty local-lis/les=38/39 n=0 ec=38/27 lis/c=27/27 les/c/f=28/28/0 sis=38) [1] r=0 lpr=38 pi=[27,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 39 pg[7.16( empty local-lis/les=38/39 n=0 ec=38/27 lis/c=27/27 les/c/f=28/28/0 sis=38) [1] r=0 lpr=38 pi=[27,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 39 pg[7.b( empty local-lis/les=38/39 n=0 ec=38/27 lis/c=27/27 les/c/f=28/28/0 sis=38) [1] r=0 lpr=38 pi=[27,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 39 pg[7.14( empty local-lis/les=38/39 n=0 ec=38/27 lis/c=27/27 les/c/f=28/28/0 sis=38) [1] r=0 lpr=38 pi=[27,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 39 pg[7.17( empty local-lis/les=38/39 n=0 ec=38/27 lis/c=27/27 les/c/f=28/28/0 sis=38) [1] r=0 lpr=38 pi=[27,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 39 pg[7.9( empty local-lis/les=38/39 n=0 ec=38/27 lis/c=27/27 les/c/f=28/28/0 sis=38) [1] r=0 lpr=38 pi=[27,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 39 pg[7.f( empty local-lis/les=38/39 n=0 ec=38/27 lis/c=27/27 les/c/f=28/28/0 sis=38) [1] r=0 lpr=38 pi=[27,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 39 pg[7.a( empty local-lis/les=38/39 n=0 ec=38/27 lis/c=27/27 les/c/f=28/28/0 sis=38) [1] r=0 lpr=38 pi=[27,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 39 pg[7.6( empty local-lis/les=38/39 n=0 ec=38/27 lis/c=27/27 les/c/f=28/28/0 sis=38) [1] r=0 lpr=38 pi=[27,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 39 pg[7.0( empty local-lis/les=38/39 n=0 ec=27/27 lis/c=27/27 les/c/f=28/28/0 sis=38) [1] r=0 lpr=38 pi=[27,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 39 pg[7.4( empty local-lis/les=38/39 n=0 ec=38/27 lis/c=27/27 les/c/f=28/28/0 sis=38) [1] r=0 lpr=38 pi=[27,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 39 pg[7.5( empty local-lis/les=38/39 n=0 ec=38/27 lis/c=27/27 les/c/f=28/28/0 sis=38) [1] r=0 lpr=38 pi=[27,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 39 pg[7.1( empty local-lis/les=38/39 n=0 ec=38/27 lis/c=27/27 les/c/f=28/28/0 sis=38) [1] r=0 lpr=38 pi=[27,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 39 pg[7.2( empty local-lis/les=38/39 n=0 ec=38/27 lis/c=27/27 les/c/f=28/28/0 sis=38) [1] r=0 lpr=38 pi=[27,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 39 pg[7.3( empty local-lis/les=38/39 n=0 ec=38/27 lis/c=27/27 les/c/f=28/28/0 sis=38) [1] r=0 lpr=38 pi=[27,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 39 pg[7.c( empty local-lis/les=38/39 n=0 ec=38/27 lis/c=27/27 les/c/f=28/28/0 sis=38) [1] r=0 lpr=38 pi=[27,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 39 pg[7.1f( empty local-lis/les=38/39 n=0 ec=38/27 lis/c=27/27 les/c/f=28/28/0 sis=38) [1] r=0 lpr=38 pi=[27,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 39 pg[7.d( empty local-lis/les=38/39 n=0 ec=38/27 lis/c=27/27 les/c/f=28/28/0 sis=38) [1] r=0 lpr=38 pi=[27,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 39 pg[7.18( empty local-lis/les=38/39 n=0 ec=38/27 lis/c=27/27 les/c/f=28/28/0 sis=38) [1] r=0 lpr=38 pi=[27,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 39 pg[7.19( empty local-lis/les=38/39 n=0 ec=38/27 lis/c=27/27 les/c/f=28/28/0 sis=38) [1] r=0 lpr=38 pi=[27,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 39 pg[7.1a( empty local-lis/les=38/39 n=0 ec=38/27 lis/c=27/27 les/c/f=28/28/0 sis=38) [1] r=0 lpr=38 pi=[27,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 39 pg[7.1b( empty local-lis/les=38/39 n=0 ec=38/27 lis/c=27/27 les/c/f=28/28/0 sis=38) [1] r=0 lpr=38 pi=[27,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 39 pg[7.e( empty local-lis/les=38/39 n=0 ec=38/27 lis/c=27/27 les/c/f=28/28/0 sis=38) [1] r=0 lpr=38 pi=[27,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 39 pg[7.8( empty local-lis/les=38/39 n=0 ec=38/27 lis/c=27/27 les/c/f=28/28/0 sis=38) [1] r=0 lpr=38 pi=[27,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 39 pg[7.7( empty local-lis/les=38/39 n=0 ec=38/27 lis/c=27/27 les/c/f=28/28/0 sis=38) [1] r=0 lpr=38 pi=[27,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:23 compute-0 python3[216057]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  3 18:05:23 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v95: 193 pgs: 1 peering, 93 unknown, 99 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:05:23 compute-0 ceph-mgr[193091]: [progress INFO root] Writing back 9 completed events
Dec  3 18:05:23 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Dec  3 18:05:23 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:05:24 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:05:24 compute-0 python3[216132]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764785123.3665078-37767-238921576340515/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=4c1959fe09320d5198789613ac43e844fb5fd47d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:05:24 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Dec  3 18:05:24 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Dec  3 18:05:24 compute-0 python3[216182]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c1caf3ba-b2a5-5005-a11e-e955c344dccc -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:05:24 compute-0 podman[216183]: 2025-12-03 18:05:24.823857608 +0000 UTC m=+0.056903409 container create 7ee056a3e85ce4c84f934f17bd9bf2eb8da7bb57833841a21a80d28141b9d3e8 (image=quay.io/ceph/ceph:v18, name=vibrant_lovelace, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  3 18:05:24 compute-0 systemd[194616]: Starting Mark boot as successful...
Dec  3 18:05:24 compute-0 systemd[194616]: Finished Mark boot as successful.
Dec  3 18:05:24 compute-0 podman[216183]: 2025-12-03 18:05:24.800966514 +0000 UTC m=+0.034012295 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:05:24 compute-0 systemd[1]: Started libpod-conmon-7ee056a3e85ce4c84f934f17bd9bf2eb8da7bb57833841a21a80d28141b9d3e8.scope.
Dec  3 18:05:24 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:05:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99d0772b37ed259e22f485183d854022566dff0d8f5fb265e159edcdaf5a28e4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99d0772b37ed259e22f485183d854022566dff0d8f5fb265e159edcdaf5a28e4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99d0772b37ed259e22f485183d854022566dff0d8f5fb265e159edcdaf5a28e4/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:24 compute-0 podman[216183]: 2025-12-03 18:05:24.968802039 +0000 UTC m=+0.201847850 container init 7ee056a3e85ce4c84f934f17bd9bf2eb8da7bb57833841a21a80d28141b9d3e8 (image=quay.io/ceph/ceph:v18, name=vibrant_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:05:24 compute-0 podman[216183]: 2025-12-03 18:05:24.978950735 +0000 UTC m=+0.211996496 container start 7ee056a3e85ce4c84f934f17bd9bf2eb8da7bb57833841a21a80d28141b9d3e8 (image=quay.io/ceph/ceph:v18, name=vibrant_lovelace, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:05:24 compute-0 podman[216183]: 2025-12-03 18:05:24.984244413 +0000 UTC m=+0.217290194 container attach 7ee056a3e85ce4c84f934f17bd9bf2eb8da7bb57833841a21a80d28141b9d3e8 (image=quay.io/ceph/ceph:v18, name=vibrant_lovelace, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:05:25 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Dec  3 18:05:25 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Dec  3 18:05:25 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Dec  3 18:05:25 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3257006915' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec  3 18:05:25 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3257006915' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec  3 18:05:25 compute-0 vibrant_lovelace[216199]: 
Dec  3 18:05:25 compute-0 vibrant_lovelace[216199]: [global]
Dec  3 18:05:25 compute-0 vibrant_lovelace[216199]: 	fsid = c1caf3ba-b2a5-5005-a11e-e955c344dccc
Dec  3 18:05:25 compute-0 vibrant_lovelace[216199]: 	mon_host = 192.168.122.100
Dec  3 18:05:25 compute-0 systemd[1]: libpod-7ee056a3e85ce4c84f934f17bd9bf2eb8da7bb57833841a21a80d28141b9d3e8.scope: Deactivated successfully.
Dec  3 18:05:25 compute-0 podman[216183]: 2025-12-03 18:05:25.613340039 +0000 UTC m=+0.846385811 container died 7ee056a3e85ce4c84f934f17bd9bf2eb8da7bb57833841a21a80d28141b9d3e8 (image=quay.io/ceph/ceph:v18, name=vibrant_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:05:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-99d0772b37ed259e22f485183d854022566dff0d8f5fb265e159edcdaf5a28e4-merged.mount: Deactivated successfully.
Dec  3 18:05:25 compute-0 podman[216183]: 2025-12-03 18:05:25.700600033 +0000 UTC m=+0.933645844 container remove 7ee056a3e85ce4c84f934f17bd9bf2eb8da7bb57833841a21a80d28141b9d3e8 (image=quay.io/ceph/ceph:v18, name=vibrant_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:05:25 compute-0 systemd[1]: libpod-conmon-7ee056a3e85ce4c84f934f17bd9bf2eb8da7bb57833841a21a80d28141b9d3e8.scope: Deactivated successfully.
Dec  3 18:05:25 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v96: 193 pgs: 31 unknown, 162 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:05:26 compute-0 python3[216334]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c1caf3ba-b2a5-5005-a11e-e955c344dccc -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:05:26 compute-0 podman[216360]: 2025-12-03 18:05:26.238998333 +0000 UTC m=+0.098098417 container create db56488ddac04c0fd797c176aefad94e8d4ef3ffaf503c0629a5408c03f9918b (image=quay.io/ceph/ceph:v18, name=gracious_varahamihira, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  3 18:05:26 compute-0 systemd[1]: Started libpod-conmon-db56488ddac04c0fd797c176aefad94e8d4ef3ffaf503c0629a5408c03f9918b.scope.
Dec  3 18:05:26 compute-0 ceph-mon[192802]: from='client.? 192.168.122.100:0/3257006915' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec  3 18:05:26 compute-0 ceph-mon[192802]: from='client.? 192.168.122.100:0/3257006915' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec  3 18:05:26 compute-0 podman[216360]: 2025-12-03 18:05:26.213778802 +0000 UTC m=+0.072878916 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:05:26 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:05:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bcf091fc19b684c0fe9b72d6ba148301be31b97ee5cdf1f5d786c4df93c0de4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bcf091fc19b684c0fe9b72d6ba148301be31b97ee5cdf1f5d786c4df93c0de4/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bcf091fc19b684c0fe9b72d6ba148301be31b97ee5cdf1f5d786c4df93c0de4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:26 compute-0 podman[216360]: 2025-12-03 18:05:26.339202569 +0000 UTC m=+0.198302663 container init db56488ddac04c0fd797c176aefad94e8d4ef3ffaf503c0629a5408c03f9918b (image=quay.io/ceph/ceph:v18, name=gracious_varahamihira, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:05:26 compute-0 podman[216360]: 2025-12-03 18:05:26.351758804 +0000 UTC m=+0.210858878 container start db56488ddac04c0fd797c176aefad94e8d4ef3ffaf503c0629a5408c03f9918b (image=quay.io/ceph/ceph:v18, name=gracious_varahamihira, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec  3 18:05:26 compute-0 podman[216360]: 2025-12-03 18:05:26.357172275 +0000 UTC m=+0.216272359 container attach db56488ddac04c0fd797c176aefad94e8d4ef3ffaf503c0629a5408c03f9918b (image=quay.io/ceph/ceph:v18, name=gracious_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec  3 18:05:26 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Dec  3 18:05:26 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Dec  3 18:05:26 compute-0 podman[216447]: 2025-12-03 18:05:26.788782058 +0000 UTC m=+0.095538174 container exec c4418ca0ee5df95c133db330bc8714b98e7c86be83b29540d0d4d94c3c723743 (image=quay.io/ceph/ceph:v18, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Dec  3 18:05:26 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:05:26 compute-0 podman[216447]: 2025-12-03 18:05:26.910144028 +0000 UTC m=+0.216900034 container exec_died c4418ca0ee5df95c133db330bc8714b98e7c86be83b29540d0d4d94c3c723743 (image=quay.io/ceph/ceph:v18, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mon-compute-0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  3 18:05:27 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0) v1
Dec  3 18:05:27 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3820359179' entity='client.admin' 
Dec  3 18:05:27 compute-0 gracious_varahamihira[216393]: set ssl_option
Dec  3 18:05:27 compute-0 systemd[1]: libpod-db56488ddac04c0fd797c176aefad94e8d4ef3ffaf503c0629a5408c03f9918b.scope: Deactivated successfully.
Dec  3 18:05:27 compute-0 podman[216360]: 2025-12-03 18:05:27.145061538 +0000 UTC m=+1.004161612 container died db56488ddac04c0fd797c176aefad94e8d4ef3ffaf503c0629a5408c03f9918b (image=quay.io/ceph/ceph:v18, name=gracious_varahamihira, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:05:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-4bcf091fc19b684c0fe9b72d6ba148301be31b97ee5cdf1f5d786c4df93c0de4-merged.mount: Deactivated successfully.
Dec  3 18:05:27 compute-0 podman[216360]: 2025-12-03 18:05:27.235070248 +0000 UTC m=+1.094170322 container remove db56488ddac04c0fd797c176aefad94e8d4ef3ffaf503c0629a5408c03f9918b (image=quay.io/ceph/ceph:v18, name=gracious_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:05:27 compute-0 systemd[1]: libpod-conmon-db56488ddac04c0fd797c176aefad94e8d4ef3ffaf503c0629a5408c03f9918b.scope: Deactivated successfully.
Dec  3 18:05:27 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:05:27 compute-0 ceph-mon[192802]: from='client.? 192.168.122.100:0/3820359179' entity='client.admin' 
Dec  3 18:05:27 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:05:27 compute-0 python3[216612]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c1caf3ba-b2a5-5005-a11e-e955c344dccc -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:05:27 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:05:27 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:05:27 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:05:27 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:05:27 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 18:05:27 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:05:27 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 18:05:27 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:05:27 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev ba079d2c-1bbc-4f67-b1d5-bcaf51f31c10 does not exist
Dec  3 18:05:27 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev e249887e-d16f-434c-ac25-38c008a5a164 does not exist
Dec  3 18:05:27 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 885e1ec1-dba7-45a6-9d1f-b72f3a94135b does not exist
Dec  3 18:05:27 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 18:05:27 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 18:05:27 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 18:05:27 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:05:27 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:05:27 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:05:27 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Dec  3 18:05:27 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Dec  3 18:05:27 compute-0 podman[216620]: 2025-12-03 18:05:27.764111441 +0000 UTC m=+0.078288687 container create 91066e120848624adfd3c83a91240effecdaf0371df3816734a76e9a82286756 (image=quay.io/ceph/ceph:v18, name=objective_gates, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:05:27 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v97: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:05:27 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec  3 18:05:27 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  3 18:05:27 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec  3 18:05:27 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  3 18:05:27 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec  3 18:05:27 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  3 18:05:27 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec  3 18:05:27 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  3 18:05:27 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec  3 18:05:27 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  3 18:05:27 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec  3 18:05:27 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  3 18:05:27 compute-0 podman[216620]: 2025-12-03 18:05:27.726369047 +0000 UTC m=+0.040546303 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:05:27 compute-0 systemd[1]: Started libpod-conmon-91066e120848624adfd3c83a91240effecdaf0371df3816734a76e9a82286756.scope.
Dec  3 18:05:27 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:05:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2503f736745d91d7746dbd6de859107c8705558af7a3f861689c27baa796057a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2503f736745d91d7746dbd6de859107c8705558af7a3f861689c27baa796057a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2503f736745d91d7746dbd6de859107c8705558af7a3f861689c27baa796057a/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:27 compute-0 podman[216620]: 2025-12-03 18:05:27.903345623 +0000 UTC m=+0.217522899 container init 91066e120848624adfd3c83a91240effecdaf0371df3816734a76e9a82286756 (image=quay.io/ceph/ceph:v18, name=objective_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:05:27 compute-0 podman[216620]: 2025-12-03 18:05:27.914952195 +0000 UTC m=+0.229129421 container start 91066e120848624adfd3c83a91240effecdaf0371df3816734a76e9a82286756 (image=quay.io/ceph/ceph:v18, name=objective_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Dec  3 18:05:27 compute-0 podman[216620]: 2025-12-03 18:05:27.921076103 +0000 UTC m=+0.235253349 container attach 91066e120848624adfd3c83a91240effecdaf0371df3816734a76e9a82286756 (image=quay.io/ceph/ceph:v18, name=objective_gates, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec  3 18:05:28 compute-0 ceph-mgr[193091]: log_channel(audit) log [DBG] : from='client.14244 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 18:05:28 compute-0 ceph-mgr[193091]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0
Dec  3 18:05:28 compute-0 ceph-mgr[193091]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Dec  3 18:05:28 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Dec  3 18:05:28 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:05:28 compute-0 objective_gates[216658]: Scheduled rgw.rgw update...
Dec  3 18:05:28 compute-0 systemd[1]: libpod-91066e120848624adfd3c83a91240effecdaf0371df3816734a76e9a82286756.scope: Deactivated successfully.
Dec  3 18:05:28 compute-0 podman[216620]: 2025-12-03 18:05:28.611024823 +0000 UTC m=+0.925202029 container died 91066e120848624adfd3c83a91240effecdaf0371df3816734a76e9a82286756 (image=quay.io/ceph/ceph:v18, name=objective_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:05:28 compute-0 podman[216794]: 2025-12-03 18:05:28.620421141 +0000 UTC m=+0.078625016 container create 10f5e866ddc4e38bb1870d0680ca95caf9af98e872918ca711945b7c23eb1161 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_poincare, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  3 18:05:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-2503f736745d91d7746dbd6de859107c8705558af7a3f861689c27baa796057a-merged.mount: Deactivated successfully.
Dec  3 18:05:28 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:05:28 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:05:28 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:05:28 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:05:28 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:05:28 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  3 18:05:28 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  3 18:05:28 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  3 18:05:28 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  3 18:05:28 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  3 18:05:28 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  3 18:05:28 compute-0 ceph-mon[192802]: Saving service rgw.rgw spec with placement compute-0
Dec  3 18:05:28 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:05:28 compute-0 podman[216794]: 2025-12-03 18:05:28.580412572 +0000 UTC m=+0.038616517 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:05:28 compute-0 podman[216620]: 2025-12-03 18:05:28.683880428 +0000 UTC m=+0.998057634 container remove 91066e120848624adfd3c83a91240effecdaf0371df3816734a76e9a82286756 (image=quay.io/ceph/ceph:v18, name=objective_gates, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec  3 18:05:28 compute-0 systemd[1]: Started libpod-conmon-10f5e866ddc4e38bb1870d0680ca95caf9af98e872918ca711945b7c23eb1161.scope.
Dec  3 18:05:28 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Dec  3 18:05:28 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  3 18:05:28 compute-0 systemd[1]: libpod-conmon-91066e120848624adfd3c83a91240effecdaf0371df3816734a76e9a82286756.scope: Deactivated successfully.
Dec  3 18:05:28 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  3 18:05:28 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  3 18:05:28 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  3 18:05:28 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  3 18:05:28 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  3 18:05:28 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Dec  3 18:05:28 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[4.18( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.302166939s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 84.468978882s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[6.15( empty local-lis/les=36/37 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=8.315695763s) [2] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 77.482727051s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[6.15( empty local-lis/les=36/37 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=8.315574646s) [2] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.482727051s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[4.18( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.302057266s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.468978882s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[6.14( empty local-lis/les=36/37 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=8.315176010s) [2] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 77.482505798s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[6.14( empty local-lis/les=36/37 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=8.315126419s) [2] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.482505798s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[6.17( empty local-lis/les=36/37 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=8.316167831s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 77.483642578s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[4.13( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.301255226s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 84.469032288s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[6.11( empty local-lis/les=36/37 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=8.314930916s) [2] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 77.482757568s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[6.11( empty local-lis/les=36/37 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=8.314911842s) [2] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.482757568s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[4.12( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.300615311s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 84.468612671s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[4.14( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.300608635s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 84.468666077s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[6.17( empty local-lis/les=36/37 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=8.315401077s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.483642578s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[4.11( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.300095558s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 84.468635559s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[4.11( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.300073624s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.468635559s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[4.13( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.300281525s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.469032288s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[6.13( empty local-lis/les=36/37 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=8.314218521s) [2] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 77.483016968s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[4.10( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.299783707s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 84.468635559s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[6.13( empty local-lis/les=36/37 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=8.314167976s) [2] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.483016968s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[4.10( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.299762726s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.468635559s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[6.d( empty local-lis/les=36/37 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=8.313810349s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 77.482925415s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[6.d( empty local-lis/les=36/37 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=8.313775063s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.482925415s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[4.12( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.299430847s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.468612671s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[4.f( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.300135612s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 84.469497681s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[6.c( empty local-lis/les=36/37 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=8.313516617s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 77.482887268s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[4.f( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.300101280s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.469497681s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[6.c( empty local-lis/les=36/37 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=8.313456535s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.482887268s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[4.d( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.299386978s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 84.468856812s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[4.d( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.299364090s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.468856812s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[6.f( empty local-lis/les=36/37 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=8.313424110s) [2] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 77.482986450s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[4.e( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.299222946s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 84.468818665s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[6.f( empty local-lis/les=36/37 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=8.313361168s) [2] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.482986450s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[6.e( empty local-lis/les=36/37 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=8.313353539s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 77.483039856s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[6.e( empty local-lis/les=36/37 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=8.313325882s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.483039856s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[6.2( empty local-lis/les=36/37 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=8.313216209s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 77.483001709s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[4.e( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.299197197s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.468818665s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[6.2( empty local-lis/les=36/37 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=8.313192368s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.483001709s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[4.2( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.298916817s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 84.468864441s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[4.1( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.298929214s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 84.468925476s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[6.1( empty local-lis/les=36/37 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=8.312700272s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 77.483070374s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[6.1( empty local-lis/les=36/37 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=8.312678337s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.483070374s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[4.4( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.298852921s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 84.469413757s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[4.4( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.298824310s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.469413757s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[6.6( empty local-lis/les=36/37 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=8.312730789s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 77.483421326s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[6.6( empty local-lis/les=36/37 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=8.312714577s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.483421326s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[4.1( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.298893929s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.468925476s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[4.9( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.298564911s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 84.469398499s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[4.9( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.298547745s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.469398499s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[4.1a( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.298415184s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 84.469421387s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[4.1a( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.298398018s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.469421387s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[6.b( empty local-lis/les=36/37 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=8.312441826s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 77.483543396s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[6.b( empty local-lis/les=36/37 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=8.312416077s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.483543396s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[4.5( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.298268318s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 84.469429016s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[4.5( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.298251152s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.469429016s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[4.a( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.298203468s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 84.469459534s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[6.8( empty local-lis/les=36/37 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=8.312228203s) [2] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 77.483512878s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[4.a( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.298179626s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.469459534s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[6.8( empty local-lis/les=36/37 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=8.312211037s) [2] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.483512878s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[4.2( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.298778534s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.468864441s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[6.4( empty local-lis/les=36/37 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=8.311794281s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 77.483261108s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[6.4( empty local-lis/les=36/37 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=8.311776161s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.483261108s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[4.1b( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.298057556s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 84.469482422s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[4.8( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.297897339s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 84.469558716s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[4.8( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.297880173s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.469558716s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[4.1b( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.297759056s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.469482422s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[4.7( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.297665596s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 84.469497681s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[4.7( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.297639847s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.469497681s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[4.1c( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.297627449s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 84.469573975s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[4.1c( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.297602654s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.469573975s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[6.1f( empty local-lis/les=36/37 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=8.311573029s) [2] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 77.483612061s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[6.1f( empty local-lis/les=36/37 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=8.311549187s) [2] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.483612061s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[6.1c( empty local-lis/les=36/37 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=8.311483383s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 77.483612061s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[6.1c( empty local-lis/les=36/37 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=8.311459541s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.483612061s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[6.1d( empty local-lis/les=36/37 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=8.311224937s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 77.483581543s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[6.1d( empty local-lis/les=36/37 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=8.311146736s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.483581543s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[4.14( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.298544884s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.468666077s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[6.1e( empty local-lis/les=36/37 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=8.310258865s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 77.483551025s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[6.1e( empty local-lis/les=36/37 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=8.310232162s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.483551025s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[6.1e( empty local-lis/les=0/0 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[4.18( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[4.1b( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[4.d( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[4.1a( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[6.c( empty local-lis/les=0/0 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[6.d( empty local-lis/les=0/0 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[4.f( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[6.2( empty local-lis/les=0/0 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[4.2( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[4.4( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[6.6( empty local-lis/les=0/0 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[6.4( empty local-lis/les=0/0 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[6.1( empty local-lis/les=0/0 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[4.7( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[4.5( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[6.e( empty local-lis/les=0/0 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[6.f( empty local-lis/les=0/0 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40) [2] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[4.9( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[6.b( empty local-lis/les=0/0 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[4.8( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[4.e( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[6.17( empty local-lis/les=0/0 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 podman[216794]: 2025-12-03 18:05:28.73598895 +0000 UTC m=+0.194192815 container init 10f5e866ddc4e38bb1870d0680ca95caf9af98e872918ca711945b7c23eb1161 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_poincare, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[4.1( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[4.a( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[6.8( empty local-lis/les=0/0 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40) [2] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[6.14( empty local-lis/les=0/0 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40) [2] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[6.15( empty local-lis/les=0/0 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40) [2] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[6.11( empty local-lis/les=0/0 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40) [2] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[4.13( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[4.11( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[4.14( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[6.13( empty local-lis/les=0/0 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40) [2] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[4.1c( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[6.1f( empty local-lis/les=0/0 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40) [2] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[2.1b( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=13.135746956s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 67.273025513s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[2.1b( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=13.135724068s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.273025513s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[5.1d( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.261055946s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 69.398468018s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[5.1d( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.261041641s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.398468018s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.263997078s) [0] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 69.401504517s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.263978958s) [0] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.401504517s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[2.19( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=13.135408401s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 67.273025513s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[2.18( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=13.135385513s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 67.273048401s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[2.18( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=13.135369301s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.273048401s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[2.19( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=13.135364532s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.273025513s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[2.16( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=13.135253906s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 67.273040771s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[2.16( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=13.135235786s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.273040771s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.261137962s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 69.399024963s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.261116028s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.399024963s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[2.15( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=13.135097504s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 67.273040771s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[2.15( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=13.135076523s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.273040771s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.260938644s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 69.399093628s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.260915756s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.399093628s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[5.13( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.261351585s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 69.399559021s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[5.13( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.261318207s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.399559021s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[2.13( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=13.134469032s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 67.272758484s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[2.13( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=13.134449005s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.272758484s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[5.15( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.261028290s) [0] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 69.399459839s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.261771202s) [0] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 69.400184631s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[5.15( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.261010170s) [0] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.399459839s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.261713982s) [0] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.400184631s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[2.11( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=13.134242058s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 67.272750854s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[2.11( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=13.134203911s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.272750854s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[5.16( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.262729645s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 69.401321411s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[5.16( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.262708664s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.401321411s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[2.f( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=13.134043694s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 67.272743225s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[2.f( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=13.134019852s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.272743225s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.261449814s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 69.400215149s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[2.d( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=13.133963585s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 67.272735596s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[2.d( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=13.133937836s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.272735596s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.261420250s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.400215149s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.283205986s) [0] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 69.422203064s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[2.7( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=13.133700371s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 67.272727966s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.283179283s) [0] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.422203064s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[2.7( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=13.133676529s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.272727966s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[2.2( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=13.133505821s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 67.272712708s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[2.2( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=13.133482933s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.272712708s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[5.5( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.268222809s) [0] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 69.409797668s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[4.12( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[5.5( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.268177032s) [0] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.409797668s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[2.3( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=13.130011559s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 67.272743225s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[2.3( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=13.129958153s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.272743225s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[2.4( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=13.129796028s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 67.272735596s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[5.3( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.259225845s) [0] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 69.402221680s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[2.4( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=13.129757881s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.272735596s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[5.3( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.259202957s) [0] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.402221680s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[2.5( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=13.129505157s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 67.272651672s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[2.5( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=13.129483223s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.272651672s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[5.2( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.266590118s) [0] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 69.409782410s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[4.10( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[6.1d( empty local-lis/les=0/0 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[5.2( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.266555786s) [0] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.409782410s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[2.6( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=13.129394531s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 67.272682190s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[2.6( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=13.129374504s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.272682190s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.258827209s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 69.402236938s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.258803368s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.402236938s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[2.8( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=13.129278183s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 67.272720337s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[2.8( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=13.129243851s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.272720337s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.266123772s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 69.409690857s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.266101837s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.409690857s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[2.9( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=13.120742798s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 67.264419556s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[2.9( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=13.120721817s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.264419556s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[2.a( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=13.128849983s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 67.272628784s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[2.a( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=13.128818512s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.272628784s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[2.b( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=13.120501518s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 67.264411926s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[2.b( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=13.120478630s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.264411926s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.278129578s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 69.422195435s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[2.1c( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=13.119702339s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 67.263809204s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.278097153s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.422195435s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[2.1c( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=13.119681358s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.263809204s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[2.1d( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=13.119544029s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 67.263793945s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[2.1d( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=13.119528770s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.263793945s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.277879715s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 69.422187805s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.277845383s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.422187805s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.265938759s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 69.410308838s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.265921593s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.410308838s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[5.4( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.257072449s) [0] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 69.402206421s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[5.4( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.257008553s) [0] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.402206421s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[2.1f( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=13.118527412s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 67.263809204s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[2.1f( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=13.118486404s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.263809204s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[5.18( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.264472008s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 69.410324097s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[5.18( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=15.264445305s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.410324097s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[6.1c( empty local-lis/les=0/0 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[2.17( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=13.126325607s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 67.273010254s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[2.17( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=13.126139641s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.273010254s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[7.1c( empty local-lis/les=38/39 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=10.559973717s) [2] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active pruub 71.091911316s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[3.17( empty local-lis/les=33/36 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40 pruub=15.293321609s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 75.825584412s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[3.17( empty local-lis/les=33/36 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40 pruub=15.293284416s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.825584412s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[3.18( empty local-lis/les=33/36 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40 pruub=15.293202400s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 75.825363159s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[3.18( empty local-lis/les=33/36 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40 pruub=15.292822838s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.825363159s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[7.13( empty local-lis/les=38/39 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=10.559462547s) [0] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active pruub 71.092048645s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[7.13( empty local-lis/les=38/39 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=10.559431076s) [0] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.092048645s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[5.1e( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[7.11( empty local-lis/les=38/39 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=10.558578491s) [2] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active pruub 71.091941833s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[7.11( empty local-lis/les=38/39 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=10.558546066s) [2] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.091941833s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 podman[216794]: 2025-12-03 18:05:28.744825034 +0000 UTC m=+0.203028899 container start 10f5e866ddc4e38bb1870d0680ca95caf9af98e872918ca711945b7c23eb1161 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_poincare, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[3.11( empty local-lis/les=33/36 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40 pruub=15.291584969s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 75.825256348s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[3.11( empty local-lis/les=33/36 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40 pruub=15.291543961s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.825256348s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[3.15( empty local-lis/les=33/36 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40 pruub=15.291277885s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 75.825340271s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[3.15( empty local-lis/les=33/36 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40 pruub=15.291206360s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.825340271s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[3.f( empty local-lis/les=33/36 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40 pruub=15.290868759s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 75.825263977s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[3.f( empty local-lis/les=33/36 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40 pruub=15.290851593s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.825263977s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[3.e( empty local-lis/les=33/36 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40 pruub=15.290108681s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 75.824630737s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[3.e( empty local-lis/les=33/36 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40 pruub=15.290092468s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.824630737s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[2.18( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[2.19( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[2.16( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[2.13( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[5.15( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[5.14( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[2.11( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[2.f( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[5.7( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[3.18( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40) [2] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[7.a( empty local-lis/les=38/39 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=10.557706833s) [2] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active pruub 71.092369080s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[7.a( empty local-lis/les=38/39 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=10.557682991s) [2] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.092369080s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[7.15( empty local-lis/les=38/39 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=10.558147430s) [2] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active pruub 71.092086792s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[3.12( empty local-lis/les=33/36 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40 pruub=15.290525436s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 75.825332642s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[7.9( empty local-lis/les=38/39 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=10.557490349s) [0] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active pruub 71.092315674s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[7.15( empty local-lis/les=38/39 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=10.557270050s) [2] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.092086792s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[3.12( empty local-lis/les=33/36 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40 pruub=15.290450096s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.825332642s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[3.c( empty local-lis/les=33/36 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40 pruub=15.289463043s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 75.824516296s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[7.9( empty local-lis/les=38/39 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=10.557249069s) [0] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.092315674s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[7.8( empty local-lis/les=38/39 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=10.557606697s) [2] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active pruub 71.092681885s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[3.c( empty local-lis/les=33/36 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40 pruub=15.289424896s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.824516296s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[7.8( empty local-lis/les=38/39 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=10.557577133s) [2] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.092681885s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[7.6( empty local-lis/les=38/39 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=10.557050705s) [0] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active pruub 71.092399597s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[3.16( empty local-lis/les=33/36 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40 pruub=15.290632248s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 75.825370789s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[7.4( empty local-lis/les=38/39 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=10.557082176s) [0] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active pruub 71.092437744s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[7.4( empty local-lis/les=38/39 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=10.557049751s) [0] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.092437744s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[7.6( empty local-lis/les=38/39 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=10.557011604s) [0] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.092399597s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[7.f( empty local-lis/les=38/39 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=10.556877136s) [0] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active pruub 71.092346191s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[7.f( empty local-lis/les=38/39 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=10.556849480s) [0] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.092346191s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[3.1( empty local-lis/les=33/36 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40 pruub=15.288784027s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 75.824348450s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[7.5( empty local-lis/les=38/39 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=10.556850433s) [2] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active pruub 71.092453003s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[3.1( empty local-lis/les=33/36 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40 pruub=15.288754463s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.824348450s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[3.3( empty local-lis/les=33/36 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40 pruub=15.289958954s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 75.825599670s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[3.16( empty local-lis/les=33/36 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40 pruub=15.289982796s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.825370789s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[7.5( empty local-lis/les=38/39 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=10.556820869s) [2] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.092453003s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[3.3( empty local-lis/les=33/36 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40 pruub=15.289934158s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.825599670s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[7.1( empty local-lis/les=38/39 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=10.556675911s) [2] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active pruub 71.092468262s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[3.5( empty local-lis/les=33/36 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40 pruub=15.288517952s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 75.824333191s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[7.1( empty local-lis/les=38/39 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=10.556651115s) [2] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.092468262s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[3.5( empty local-lis/les=33/36 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40 pruub=15.288486481s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.824333191s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[3.6( empty local-lis/les=33/36 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40 pruub=15.288381577s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 75.824348450s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[3.6( empty local-lis/les=33/36 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40 pruub=15.288358688s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.824348450s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[7.2( empty local-lis/les=38/39 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=10.556389809s) [2] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active pruub 71.092491150s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[7.2( empty local-lis/les=38/39 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=10.556359291s) [2] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.092491150s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[3.7( empty local-lis/les=33/36 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40 pruub=15.288134575s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 75.824295044s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[3.7( empty local-lis/les=33/36 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40 pruub=15.288062096s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.824295044s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[7.3( empty local-lis/les=38/39 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=10.556128502s) [0] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active pruub 71.092514038s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[7.3( empty local-lis/les=38/39 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=10.556102753s) [0] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.092514038s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[3.8( empty local-lis/les=33/36 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40 pruub=15.287674904s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 75.824218750s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[3.8( empty local-lis/les=33/36 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40 pruub=15.287646294s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.824218750s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[7.c( empty local-lis/les=38/39 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=10.555829048s) [2] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active pruub 71.092529297s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[3.9( empty local-lis/les=33/36 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40 pruub=15.287457466s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 75.824180603s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[7.c( empty local-lis/les=38/39 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=10.555799484s) [2] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.092529297s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[3.9( empty local-lis/les=33/36 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40 pruub=15.287427902s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.824180603s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[3.a( empty local-lis/les=33/36 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40 pruub=15.286482811s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 75.823371887s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[7.e( empty local-lis/les=38/39 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=10.555768013s) [2] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active pruub 71.092666626s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[7.e( empty local-lis/les=38/39 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=10.555725098s) [2] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.092666626s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[7.1f( empty local-lis/les=38/39 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=10.555481911s) [0] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active pruub 71.092544556s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[7.1f( empty local-lis/les=38/39 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=10.555465698s) [0] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.092544556s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[3.1b( empty local-lis/les=33/36 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40 pruub=15.284371376s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 75.821525574s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[3.1b( empty local-lis/les=33/36 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40 pruub=15.284357071s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.821525574s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[7.18( empty local-lis/les=38/39 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=10.555077553s) [0] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active pruub 71.092597961s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[7.18( empty local-lis/les=38/39 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=10.555060387s) [0] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.092597961s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[7.1c( empty local-lis/les=38/39 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=10.559931755s) [2] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.091911316s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[3.1e( empty local-lis/les=33/36 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40 pruub=15.285154343s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 75.824211121s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[3.1e( empty local-lis/les=33/36 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40 pruub=15.285119057s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.824211121s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[3.1d( empty local-lis/les=33/36 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40 pruub=15.260359764s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 75.799591064s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[3.1d( empty local-lis/les=33/36 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40 pruub=15.260309219s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.799591064s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[7.1a( empty local-lis/les=38/39 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=10.553306580s) [2] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active pruub 71.092636108s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[7.1a( empty local-lis/les=38/39 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=10.553284645s) [2] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.092636108s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 podman[216794]: 2025-12-03 18:05:28.751053625 +0000 UTC m=+0.209257490 container attach 10f5e866ddc4e38bb1870d0680ca95caf9af98e872918ca711945b7c23eb1161 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_poincare, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[5.5( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[7.11( empty local-lis/les=0/0 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[5.3( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[3.11( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40) [2] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[5.2( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[3.e( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40) [2] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[2.8( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[7.a( empty local-lis/les=0/0 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[2.b( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[7.15( empty local-lis/les=0/0 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[7.8( empty local-lis/les=0/0 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[2.1c( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[7.5( empty local-lis/les=0/0 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[2.1d( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[2.2( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[5.4( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[2.1f( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[3.1f( empty local-lis/les=33/36 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40 pruub=15.280842781s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 75.821662903s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[3.1f( empty local-lis/les=33/36 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40 pruub=15.280816078s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.821662903s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[2.1b( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[5.1d( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 objective_poincare[216824]: 167 167
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[5.11( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[5.12( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[5.13( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[3.17( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[7.13( empty local-lis/les=0/0 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40) [0] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[3.15( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 podman[216794]: 2025-12-03 18:05:28.754766115 +0000 UTC m=+0.212969980 container died 10f5e866ddc4e38bb1870d0680ca95caf9af98e872918ca711945b7c23eb1161 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_poincare, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:05:28 compute-0 systemd[1]: libpod-10f5e866ddc4e38bb1870d0680ca95caf9af98e872918ca711945b7c23eb1161.scope: Deactivated successfully.
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[3.f( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[3.12( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[3.5( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40) [2] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[3.7( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40) [2] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[7.1( empty local-lis/les=0/0 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[3.8( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40) [2] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[7.c( empty local-lis/les=0/0 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[7.e( empty local-lis/les=0/0 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[3.16( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40) [2] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[7.2( empty local-lis/les=0/0 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[7.1c( empty local-lis/les=0/0 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[7.9( empty local-lis/les=0/0 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40) [0] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[3.c( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[7.4( empty local-lis/les=0/0 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40) [0] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[3.a( empty local-lis/les=33/36 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40 pruub=15.283442497s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.823371887s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[7.6( empty local-lis/les=0/0 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40) [0] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[7.f( empty local-lis/les=0/0 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40) [0] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[7.1b( empty local-lis/les=38/39 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=10.545796394s) [0] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active pruub 71.092651367s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[5.16( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[3.1( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[3.1d( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40) [2] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[7.1a( empty local-lis/les=0/0 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 40 pg[3.1e( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40) [2] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[3.3( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[7.1b( empty local-lis/les=38/39 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=10.544795990s) [0] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.092651367s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[2.d( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[5.9( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[2.7( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[3.6( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[2.15( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[7.3( empty local-lis/les=0/0 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40) [0] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[3.9( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[7.1f( empty local-lis/les=0/0 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40) [0] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[3.1b( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[2.3( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[2.4( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[2.5( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[2.6( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[5.1( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[5.f( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[7.18( empty local-lis/les=0/0 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40) [0] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[2.9( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[2.a( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[5.c( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[5.1a( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[5.19( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[5.18( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 40 pg[2.17( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[3.1f( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[3.a( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 40 pg[7.1b( empty local-lis/les=0/0 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40) [0] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-91bdd1ccfcdca30b831926944ea2e9de70ecfb4572b38b8e286558fd2706b3a9-merged.mount: Deactivated successfully.
Dec  3 18:05:28 compute-0 podman[216794]: 2025-12-03 18:05:28.80533605 +0000 UTC m=+0.263539905 container remove 10f5e866ddc4e38bb1870d0680ca95caf9af98e872918ca711945b7c23eb1161 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec  3 18:05:28 compute-0 systemd[1]: libpod-conmon-10f5e866ddc4e38bb1870d0680ca95caf9af98e872918ca711945b7c23eb1161.scope: Deactivated successfully.
Dec  3 18:05:28 compute-0 ceph-mgr[193091]: [progress INFO root] Completed event de4dd7c6-501f-40e6-a61d-8f159d2a1d7a (Global Recovery Event) in 10 seconds
Dec  3 18:05:28 compute-0 podman[216848]: 2025-12-03 18:05:28.986817565 +0000 UTC m=+0.053503087 container create a5eb695753f0870ca2f70a6ec5f0bcb62f9bcd547bf86c6d122275127c153932 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_ellis, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec  3 18:05:29 compute-0 systemd[1]: Started libpod-conmon-a5eb695753f0870ca2f70a6ec5f0bcb62f9bcd547bf86c6d122275127c153932.scope.
Dec  3 18:05:29 compute-0 podman[216848]: 2025-12-03 18:05:28.967106277 +0000 UTC m=+0.033791819 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:05:29 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:05:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ce196d46a77dabb0ab33978f41f6393536dd13610f4c766ddd6ce35032348c6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ce196d46a77dabb0ab33978f41f6393536dd13610f4c766ddd6ce35032348c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ce196d46a77dabb0ab33978f41f6393536dd13610f4c766ddd6ce35032348c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ce196d46a77dabb0ab33978f41f6393536dd13610f4c766ddd6ce35032348c6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ce196d46a77dabb0ab33978f41f6393536dd13610f4c766ddd6ce35032348c6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:29 compute-0 podman[216848]: 2025-12-03 18:05:29.117866589 +0000 UTC m=+0.184552141 container init a5eb695753f0870ca2f70a6ec5f0bcb62f9bcd547bf86c6d122275127c153932 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec  3 18:05:29 compute-0 podman[216848]: 2025-12-03 18:05:29.137551256 +0000 UTC m=+0.204236778 container start a5eb695753f0870ca2f70a6ec5f0bcb62f9bcd547bf86c6d122275127c153932 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_ellis, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:05:29 compute-0 podman[216848]: 2025-12-03 18:05:29.142615769 +0000 UTC m=+0.209301391 container attach a5eb695753f0870ca2f70a6ec5f0bcb62f9bcd547bf86c6d122275127c153932 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  3 18:05:29 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Dec  3 18:05:29 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Dec  3 18:05:29 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  3 18:05:29 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  3 18:05:29 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  3 18:05:29 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  3 18:05:29 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  3 18:05:29 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  3 18:05:29 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Dec  3 18:05:29 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Dec  3 18:05:29 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Dec  3 18:05:29 compute-0 podman[158200]: time="2025-12-03T18:05:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:05:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:05:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30881 "" "Go-http-client/1.1"
Dec  3 18:05:29 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 41 pg[3.1f( empty local-lis/les=40/41 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 41 pg[5.14( empty local-lis/les=40/41 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 41 pg[3.12( empty local-lis/les=40/41 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 41 pg[2.13( empty local-lis/les=40/41 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 41 pg[5.15( empty local-lis/les=40/41 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 41 pg[2.11( empty local-lis/les=40/41 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 41 pg[7.1b( empty local-lis/les=40/41 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40) [0] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 41 pg[3.15( empty local-lis/les=40/41 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 41 pg[7.13( empty local-lis/les=40/41 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40) [0] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 41 pg[2.16( empty local-lis/les=40/41 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 41 pg[3.9( empty local-lis/les=40/41 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 41 pg[2.8( empty local-lis/les=40/41 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 41 pg[2.b( empty local-lis/les=40/41 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 41 pg[3.a( empty local-lis/les=40/41 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 41 pg[7.f( empty local-lis/les=40/41 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40) [0] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 41 pg[7.3( empty local-lis/les=40/41 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40) [0] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 41 pg[3.6( empty local-lis/les=40/41 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 41 pg[5.3( empty local-lis/les=40/41 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 41 pg[5.2( empty local-lis/les=40/41 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 41 pg[3.3( empty local-lis/les=40/41 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 41 pg[2.1f( empty local-lis/les=40/41 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 41 pg[2.2( empty local-lis/les=40/41 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 41 pg[5.5( empty local-lis/les=40/41 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 41 pg[2.f( empty local-lis/les=40/41 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 41 pg[2.1c( empty local-lis/les=40/41 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 41 pg[7.6( empty local-lis/les=40/41 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40) [0] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 41 pg[5.4( empty local-lis/les=40/41 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 41 pg[3.17( empty local-lis/les=40/41 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 41 pg[7.18( empty local-lis/les=40/41 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40) [0] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 41 pg[2.1d( empty local-lis/les=40/41 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 41 pg[7.9( empty local-lis/les=40/41 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40) [0] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 41 pg[3.c( empty local-lis/les=40/41 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 41 pg[3.1( empty local-lis/les=40/41 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 41 pg[7.4( empty local-lis/les=40/41 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40) [0] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 41 pg[3.1b( empty local-lis/les=40/41 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 41 pg[7.1f( empty local-lis/les=40/41 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40) [0] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 41 pg[2.18( empty local-lis/les=40/41 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 41 pg[2.19( empty local-lis/les=40/41 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 41 pg[3.f( empty local-lis/les=40/41 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 41 pg[5.1e( empty local-lis/les=40/41 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 41 pg[5.7( empty local-lis/les=40/41 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 41 pg[4.18( empty local-lis/les=40/41 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 41 pg[6.f( empty local-lis/les=40/41 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40) [2] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 41 pg[4.1a( empty local-lis/les=40/41 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 41 pg[4.e( empty local-lis/les=40/41 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 41 pg[4.1( empty local-lis/les=40/41 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 2.c deep-scrub starts
Dec  3 18:05:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:05:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6259 "" "Go-http-client/1.1"
Dec  3 18:05:29 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 41 pg[6.1e( empty local-lis/les=40/41 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 41 pg[5.1d( empty local-lis/les=40/41 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 41 pg[4.a( empty local-lis/les=40/41 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 41 pg[6.8( empty local-lis/les=40/41 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40) [2] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 41 pg[6.15( empty local-lis/les=40/41 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40) [2] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 41 pg[6.14( empty local-lis/les=40/41 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40) [2] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 41 pg[6.11( empty local-lis/les=40/41 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40) [2] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 41 pg[4.13( empty local-lis/les=40/41 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 41 pg[4.11( empty local-lis/les=40/41 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 41 pg[4.1c( empty local-lis/les=40/41 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 41 pg[7.1c( empty local-lis/les=40/41 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 41 pg[6.13( empty local-lis/les=40/41 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40) [2] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 41 pg[3.16( empty local-lis/les=40/41 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40) [2] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 41 pg[7.11( empty local-lis/les=40/41 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 41 pg[7.15( empty local-lis/les=40/41 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 41 pg[3.18( empty local-lis/les=40/41 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40) [2] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 41 pg[3.11( empty local-lis/les=40/41 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40) [2] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 41 pg[7.a( empty local-lis/les=40/41 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 41 pg[4.d( empty local-lis/les=40/41 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 41 pg[6.c( empty local-lis/les=40/41 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 41 pg[6.d( empty local-lis/les=40/41 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 41 pg[4.f( empty local-lis/les=40/41 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 41 pg[6.2( empty local-lis/les=40/41 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 41 pg[2.7( empty local-lis/les=40/41 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 41 pg[4.2( empty local-lis/les=40/41 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 41 pg[2.4( empty local-lis/les=40/41 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 41 pg[4.4( empty local-lis/les=40/41 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 41 pg[4.7( empty local-lis/les=40/41 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 41 pg[4.5( empty local-lis/les=40/41 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 41 pg[6.e( empty local-lis/les=40/41 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 41 pg[2.d( empty local-lis/les=40/41 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 41 pg[4.9( empty local-lis/les=40/41 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 41 pg[6.b( empty local-lis/les=40/41 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 41 pg[4.8( empty local-lis/les=40/41 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 41 pg[5.9( empty local-lis/les=40/41 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 41 pg[5.16( empty local-lis/les=40/41 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 41 pg[6.17( empty local-lis/les=40/41 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 41 pg[4.14( empty local-lis/les=40/41 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 41 pg[5.12( empty local-lis/les=40/41 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 41 pg[2.15( empty local-lis/les=40/41 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 41 pg[4.12( empty local-lis/les=40/41 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 41 pg[5.13( empty local-lis/les=40/41 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 41 pg[2.17( empty local-lis/les=40/41 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 41 pg[4.10( empty local-lis/les=40/41 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 41 pg[5.11( empty local-lis/les=40/41 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 41 pg[6.1d( empty local-lis/les=40/41 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 41 pg[6.1c( empty local-lis/les=40/41 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 41 pg[2.1b( empty local-lis/les=40/41 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 41 pg[2.a( empty local-lis/les=40/41 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 41 pg[2.3( empty local-lis/les=40/41 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 41 pg[2.5( empty local-lis/les=40/41 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 41 pg[5.1( empty local-lis/les=40/41 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 41 pg[2.6( empty local-lis/les=40/41 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 41 pg[2.9( empty local-lis/les=40/41 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 41 pg[5.f( empty local-lis/les=40/41 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 41 pg[5.c( empty local-lis/les=40/41 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 41 pg[5.1a( empty local-lis/les=40/41 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 41 pg[5.18( empty local-lis/les=40/41 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 41 pg[5.19( empty local-lis/les=40/41 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 41 pg[6.6( empty local-lis/les=40/41 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 41 pg[6.4( empty local-lis/les=40/41 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 41 pg[7.8( empty local-lis/les=40/41 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 41 pg[7.5( empty local-lis/les=40/41 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 41 pg[7.2( empty local-lis/les=40/41 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 41 pg[7.1( empty local-lis/les=40/41 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 41 pg[3.5( empty local-lis/les=40/41 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40) [2] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 41 pg[3.e( empty local-lis/les=40/41 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40) [2] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 41 pg[3.7( empty local-lis/les=40/41 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40) [2] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 41 pg[7.c( empty local-lis/les=40/41 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 41 pg[3.8( empty local-lis/les=40/41 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40) [2] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 41 pg[7.e( empty local-lis/les=40/41 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 41 pg[3.1d( empty local-lis/les=40/41 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40) [2] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 41 pg[4.1b( empty local-lis/les=40/41 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 41 pg[3.1e( empty local-lis/les=40/41 n=0 ec=33/19 lis/c=33/33 les/c/f=36/36/0 sis=40) [2] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 41 pg[7.1a( empty local-lis/les=40/41 n=0 ec=38/27 lis/c=38/38 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 41 pg[6.1f( empty local-lis/les=40/41 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40) [2] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 2.c deep-scrub ok
Dec  3 18:05:29 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 41 pg[6.1( empty local-lis/les=40/41 n=0 ec=36/25 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:29 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v100: 193 pgs: 41 peering, 152 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:05:29 compute-0 python3[216944]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  3 18:05:30 compute-0 friendly_ellis[216863]: --> passed data devices: 0 physical, 3 LVM
Dec  3 18:05:30 compute-0 friendly_ellis[216863]: --> relative data size: 1.0
Dec  3 18:05:30 compute-0 friendly_ellis[216863]: --> All data devices are unavailable
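friendly_ellis is a short-lived ceph container (its removal is logged a few lines below); its "All data devices are unavailable" almost certainly means the three LVM data devices are already consumed by osd.0-2, not that they are missing. A hedged way to confirm that reading:

    ceph orch device ls --wide   # per-device availability and reject reasons
    ceph-volume lvm list         # inside a ceph container: maps OSD ids to the LVs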
Dec  3 18:05:30 compute-0 python3[217033]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764785129.5117607-37812-112024026731358/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=e359e26d9e42bc107a0de03375144cf8590b6f68 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:05:30 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Dec  3 18:05:30 compute-0 systemd[1]: libpod-a5eb695753f0870ca2f70a6ec5f0bcb62f9bcd547bf86c6d122275127c153932.scope: Deactivated successfully.
Dec  3 18:05:30 compute-0 systemd[1]: libpod-a5eb695753f0870ca2f70a6ec5f0bcb62f9bcd547bf86c6d122275127c153932.scope: Consumed 1.165s CPU time.
Dec  3 18:05:30 compute-0 podman[216848]: 2025-12-03 18:05:30.388841592 +0000 UTC m=+1.455527114 container died a5eb695753f0870ca2f70a6ec5f0bcb62f9bcd547bf86c6d122275127c153932 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:05:30 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Dec  3 18:05:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-4ce196d46a77dabb0ab33978f41f6393536dd13610f4c766ddd6ce35032348c6-merged.mount: Deactivated successfully.
Dec  3 18:05:30 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 3.b scrub starts
Dec  3 18:05:30 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 3.b scrub ok
Dec  3 18:05:30 compute-0 podman[216848]: 2025-12-03 18:05:30.494523081 +0000 UTC m=+1.561208603 container remove a5eb695753f0870ca2f70a6ec5f0bcb62f9bcd547bf86c6d122275127c153932 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec  3 18:05:30 compute-0 systemd[1]: libpod-conmon-a5eb695753f0870ca2f70a6ec5f0bcb62f9bcd547bf86c6d122275127c153932.scope: Deactivated successfully.
Dec  3 18:05:30 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 2.e scrub starts
Dec  3 18:05:30 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 2.e scrub ok
Dec  3 18:05:30 compute-0 python3[217173]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c1caf3ba-b2a5-5005-a11e-e955c344dccc -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 '#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
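Stripped of the Ansible wrapping, the task above runs the ceph CLI once in a throwaway container; reconstructed verbatim from _raw_params (note the trailing space preserved inside '--placement=compute-0 ', which resurfaces in the mgr's placement string below):

    podman run --rm --net=host --ipc=host \
      --volume /etc/ceph:/etc/ceph:z \
      --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
      --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z \
      --entrypoint ceph quay.io/ceph/ceph:v18 \
      --fsid c1caf3ba-b2a5-5005-a11e-e955c344dccc \
      -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
      fs volume create cephfs '--placement=compute-0 '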
Dec  3 18:05:31 compute-0 podman[217200]: 2025-12-03 18:05:31.086776656 +0000 UTC m=+0.079851095 container create d4e4756eabd45313f8d5312f0aa00912765c64316fea0445b20c98a10d92eda1 (image=quay.io/ceph/ceph:v18, name=laughing_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:05:31 compute-0 podman[217200]: 2025-12-03 18:05:31.0538904 +0000 UTC m=+0.046964929 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:05:31 compute-0 systemd[1]: Started libpod-conmon-d4e4756eabd45313f8d5312f0aa00912765c64316fea0445b20c98a10d92eda1.scope.
Dec  3 18:05:31 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:05:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d56c75bf7377d8730a242b39fbbecd5d1825df5c4849a2701a8f214abf59271/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d56c75bf7377d8730a242b39fbbecd5d1825df5c4849a2701a8f214abf59271/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d56c75bf7377d8730a242b39fbbecd5d1825df5c4849a2701a8f214abf59271/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:31 compute-0 podman[217200]: 2025-12-03 18:05:31.273395305 +0000 UTC m=+0.266469794 container init d4e4756eabd45313f8d5312f0aa00912765c64316fea0445b20c98a10d92eda1 (image=quay.io/ceph/ceph:v18, name=laughing_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:05:31 compute-0 podman[217200]: 2025-12-03 18:05:31.283040369 +0000 UTC m=+0.276114818 container start d4e4756eabd45313f8d5312f0aa00912765c64316fea0445b20c98a10d92eda1 (image=quay.io/ceph/ceph:v18, name=laughing_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:05:31 compute-0 podman[217200]: 2025-12-03 18:05:31.299536489 +0000 UTC m=+0.292610948 container attach d4e4756eabd45313f8d5312f0aa00912765c64316fea0445b20c98a10d92eda1 (image=quay.io/ceph/ceph:v18, name=laughing_franklin, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec  3 18:05:31 compute-0 openstack_network_exporter[160319]: ERROR   18:05:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:05:31 compute-0 openstack_network_exporter[160319]: ERROR   18:05:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:05:31 compute-0 openstack_network_exporter[160319]: ERROR   18:05:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:05:31 compute-0 openstack_network_exporter[160319]: ERROR   18:05:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:05:31 compute-0 openstack_network_exporter[160319]: ERROR   18:05:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
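These openstack_network_exporter errors recur throughout the log and are independent of the Ceph work: the exporter polls ovsdb-server/ovn-northd control sockets and a userspace (PMD) datapath, none of which exist on this node. A quick check, assuming the default OVS runtime directory:

    ls /var/run/openvswitch/                 # *.ctl control sockets would live here
    ovs-appctl dpif-netdev/pmd-perf-show     # needs a running ovs-vswitchd with a userspace datapath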
Dec  3 18:05:31 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 3.d scrub starts
Dec  3 18:05:31 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 3.d scrub ok
Dec  3 18:05:31 compute-0 podman[217256]: 2025-12-03 18:05:31.525282566 +0000 UTC m=+0.091751963 container create 7fc2d46f4cdf025f2474015da37778b146f50e44001db2d5f11b0747dd0817f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_moore, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:05:31 compute-0 podman[217256]: 2025-12-03 18:05:31.475801958 +0000 UTC m=+0.042271415 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:05:31 compute-0 systemd[1]: Started libpod-conmon-7fc2d46f4cdf025f2474015da37778b146f50e44001db2d5f11b0747dd0817f5.scope.
Dec  3 18:05:31 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:05:31 compute-0 podman[217256]: 2025-12-03 18:05:31.73643553 +0000 UTC m=+0.302904927 container init 7fc2d46f4cdf025f2474015da37778b146f50e44001db2d5f11b0747dd0817f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:05:31 compute-0 podman[217256]: 2025-12-03 18:05:31.747938259 +0000 UTC m=+0.314407666 container start 7fc2d46f4cdf025f2474015da37778b146f50e44001db2d5f11b0747dd0817f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_moore, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec  3 18:05:31 compute-0 eager_moore[217290]: 167 167
Dec  3 18:05:31 compute-0 systemd[1]: libpod-7fc2d46f4cdf025f2474015da37778b146f50e44001db2d5f11b0747dd0817f5.scope: Deactivated successfully.
Dec  3 18:05:31 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Dec  3 18:05:31 compute-0 podman[217256]: 2025-12-03 18:05:31.764309296 +0000 UTC m=+0.330778723 container attach 7fc2d46f4cdf025f2474015da37778b146f50e44001db2d5f11b0747dd0817f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  3 18:05:31 compute-0 podman[217256]: 2025-12-03 18:05:31.765489024 +0000 UTC m=+0.331958411 container died 7fc2d46f4cdf025f2474015da37778b146f50e44001db2d5f11b0747dd0817f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_moore, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:05:31 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Dec  3 18:05:31 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v101: 193 pgs: 41 peering, 152 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:05:31 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:05:31 compute-0 ceph-mgr[193091]: log_channel(audit) log [DBG] : from='client.14246 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 18:05:31 compute-0 ceph-mgr[193091]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Dec  3 18:05:31 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0) v1
Dec  3 18:05:31 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Dec  3 18:05:31 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0) v1
Dec  3 18:05:31 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Dec  3 18:05:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-bdc2139e89375608c68e7f462fe69913044efddd3b541cdfd01f36ccad18d363-merged.mount: Deactivated successfully.
Dec  3 18:05:31 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0) v1
Dec  3 18:05:31 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Dec  3 18:05:31 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Dec  3 18:05:31 compute-0 ceph-mon[192802]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec  3 18:05:31 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mon-compute-0[192798]: 2025-12-03T18:05:31.929+0000 7ff376da0640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec  3 18:05:31 compute-0 ceph-mon[192802]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Dec  3 18:05:31 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Dec  3 18:05:31 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).mds e2 new map
Dec  3 18:05:31 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).mds e2 print_map
Dec  3 18:05:31 compute-0 ceph-mon[192802]: e2
Dec  3 18:05:31 compute-0 ceph-mon[192802]: enable_multiple, ever_enabled_multiple: 1,1
Dec  3 18:05:31 compute-0 ceph-mon[192802]: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
Dec  3 18:05:31 compute-0 ceph-mon[192802]: legacy client fscid: 1
Dec  3 18:05:31 compute-0 ceph-mon[192802]: Filesystem 'cephfs' (1)
Dec  3 18:05:31 compute-0 ceph-mon[192802]: fs_name  cephfs
Dec  3 18:05:31 compute-0 ceph-mon[192802]: epoch  2
Dec  3 18:05:31 compute-0 ceph-mon[192802]: flags  12 joinable allow_snaps allow_multimds_snaps
Dec  3 18:05:31 compute-0 ceph-mon[192802]: created  2025-12-03T18:05:31.930236+0000
Dec  3 18:05:31 compute-0 ceph-mon[192802]: modified  2025-12-03T18:05:31.930775+0000
Dec  3 18:05:31 compute-0 ceph-mon[192802]: tableserver  0
Dec  3 18:05:31 compute-0 ceph-mon[192802]: root  0
Dec  3 18:05:31 compute-0 ceph-mon[192802]: session_timeout  60
Dec  3 18:05:31 compute-0 ceph-mon[192802]: session_autoclose  300
Dec  3 18:05:31 compute-0 ceph-mon[192802]: max_file_size  1099511627776
Dec  3 18:05:31 compute-0 ceph-mon[192802]: max_xattr_size  65536
Dec  3 18:05:31 compute-0 ceph-mon[192802]: required_client_features  {}
Dec  3 18:05:31 compute-0 ceph-mon[192802]: last_failure  0
Dec  3 18:05:31 compute-0 ceph-mon[192802]: last_failure_osd_epoch  0
Dec  3 18:05:31 compute-0 ceph-mon[192802]: compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
Dec  3 18:05:31 compute-0 ceph-mon[192802]: max_mds  1
Dec  3 18:05:31 compute-0 ceph-mon[192802]: in
Dec  3 18:05:31 compute-0 ceph-mon[192802]: up  {}
Dec  3 18:05:31 compute-0 ceph-mon[192802]: failed
Dec  3 18:05:31 compute-0 ceph-mon[192802]: damaged
Dec  3 18:05:31 compute-0 ceph-mon[192802]: stopped
Dec  3 18:05:31 compute-0 ceph-mon[192802]: data_pools  [7]
Dec  3 18:05:31 compute-0 ceph-mon[192802]: metadata_pool  6
Dec  3 18:05:31 compute-0 ceph-mon[192802]: inline_data  disabled
Dec  3 18:05:31 compute-0 ceph-mon[192802]: balancer
Dec  3 18:05:31 compute-0 ceph-mon[192802]: bal_rank_mask  -1
Dec  3 18:05:31 compute-0 ceph-mon[192802]: standby_count_wanted  0
Dec  3 18:05:31 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Dec  3 18:05:31 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Dec  3 18:05:31 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : fsmap cephfs:0
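MDS_ALL_DOWN firing immediately after "fs new" is expected here: the filesystem now exists (fsmap above: max_mds 1, up {}) but no MDS daemon has been deployed yet; the mds.cephfs spec is only being saved. A sketch for watching the warning clear once an MDS joins:

    ceph health detail | grep -E 'MDS_ALL_DOWN|MDS_UP_LESS_THAN_MAX'
    ceph fs status cephfs     # rank 0 should become active once mds.cephfs starts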
Dec  3 18:05:31 compute-0 ceph-mgr[193091]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Dec  3 18:05:31 compute-0 ceph-mgr[193091]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Dec  3 18:05:31 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Dec  3 18:05:32 compute-0 podman[217256]: 2025-12-03 18:05:32.000991588 +0000 UTC m=+0.567461055 container remove 7fc2d46f4cdf025f2474015da37778b146f50e44001db2d5f11b0747dd0817f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_moore, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec  3 18:05:32 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:05:32 compute-0 ceph-mgr[193091]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Dec  3 18:05:32 compute-0 systemd[1]: libpod-conmon-7fc2d46f4cdf025f2474015da37778b146f50e44001db2d5f11b0747dd0817f5.scope: Deactivated successfully.
Dec  3 18:05:32 compute-0 systemd[1]: libpod-d4e4756eabd45313f8d5312f0aa00912765c64316fea0445b20c98a10d92eda1.scope: Deactivated successfully.
Dec  3 18:05:32 compute-0 podman[217200]: 2025-12-03 18:05:32.038984148 +0000 UTC m=+1.032058667 container died d4e4756eabd45313f8d5312f0aa00912765c64316fea0445b20c98a10d92eda1 (image=quay.io/ceph/ceph:v18, name=laughing_franklin, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec  3 18:05:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d56c75bf7377d8730a242b39fbbecd5d1825df5c4849a2701a8f214abf59271-merged.mount: Deactivated successfully.
Dec  3 18:05:32 compute-0 podman[217200]: 2025-12-03 18:05:32.24590626 +0000 UTC m=+1.238980709 container remove d4e4756eabd45313f8d5312f0aa00912765c64316fea0445b20c98a10d92eda1 (image=quay.io/ceph/ceph:v18, name=laughing_franklin, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec  3 18:05:32 compute-0 systemd[1]: libpod-conmon-d4e4756eabd45313f8d5312f0aa00912765c64316fea0445b20c98a10d92eda1.scope: Deactivated successfully.
Dec  3 18:05:32 compute-0 podman[217328]: 2025-12-03 18:05:32.332308223 +0000 UTC m=+0.152494775 container create 7cc690efbfd19e09791a926c76197322547a2dd1ea7c045b1eb82f873e9980eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_yonath, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec  3 18:05:32 compute-0 podman[217328]: 2025-12-03 18:05:32.26036558 +0000 UTC m=+0.080552182 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:05:32 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Dec  3 18:05:32 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Dec  3 18:05:32 compute-0 systemd[1]: Started libpod-conmon-7cc690efbfd19e09791a926c76197322547a2dd1ea7c045b1eb82f873e9980eb.scope.
Dec  3 18:05:32 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:05:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d3fd357177f04c77adf99cc21117f88c2374df2c43f90b2c76a5a622ff9031b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d3fd357177f04c77adf99cc21117f88c2374df2c43f90b2c76a5a622ff9031b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d3fd357177f04c77adf99cc21117f88c2374df2c43f90b2c76a5a622ff9031b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d3fd357177f04c77adf99cc21117f88c2374df2c43f90b2c76a5a622ff9031b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:32 compute-0 podman[217328]: 2025-12-03 18:05:32.587417861 +0000 UTC m=+0.407604413 container init 7cc690efbfd19e09791a926c76197322547a2dd1ea7c045b1eb82f873e9980eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_yonath, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:05:32 compute-0 podman[217328]: 2025-12-03 18:05:32.601883421 +0000 UTC m=+0.422069943 container start 7cc690efbfd19e09791a926c76197322547a2dd1ea7c045b1eb82f873e9980eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_yonath, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Dec  3 18:05:32 compute-0 podman[217328]: 2025-12-03 18:05:32.669611252 +0000 UTC m=+0.489797794 container attach 7cc690efbfd19e09791a926c76197322547a2dd1ea7c045b1eb82f873e9980eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_yonath, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:05:32 compute-0 python3[217372]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c1caf3ba-b2a5-5005-a11e-e955c344dccc -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
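This second one-shot container feeds /tmp/ceph_mds.yml to "ceph orch apply --in-file". The spec's contents are never printed in the log; given "Saving service mds.cephfs spec with placement compute-0", a plausible reconstruction is the standard cephadm service spec below (hypothetical, values assumed from the audit lines):

    # Hypothetical reconstruction of /tmp/ceph_mds.yml (rendered from ceph_mds.yml.j2)
    cat > /tmp/ceph_mds.yml <<'EOF'
    service_type: mds
    service_id: cephfs
    placement:
      hosts:
        - compute-0
    EOF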
Dec  3 18:05:32 compute-0 podman[217375]: 2025-12-03 18:05:32.75373751 +0000 UTC m=+0.053402644 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:05:32 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Dec  3 18:05:32 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Dec  3 18:05:32 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Dec  3 18:05:32 compute-0 ceph-mon[192802]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec  3 18:05:32 compute-0 ceph-mon[192802]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Dec  3 18:05:32 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Dec  3 18:05:32 compute-0 ceph-mon[192802]: Saving service mds.cephfs spec with placement compute-0
Dec  3 18:05:32 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:05:32 compute-0 podman[217375]: 2025-12-03 18:05:32.926804051 +0000 UTC m=+0.226469145 container create 4372015e385a15ec63c0768b6c093abe31538ed7b41ba1c4850fa8be0e6f94cc (image=quay.io/ceph/ceph:v18, name=cranky_hopper, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec  3 18:05:33 compute-0 systemd[1]: Started libpod-conmon-4372015e385a15ec63c0768b6c093abe31538ed7b41ba1c4850fa8be0e6f94cc.scope.
Dec  3 18:05:33 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:05:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9d661a67ed13669901a7e60895ececb0f1822cb0da01d3fa809c5877669e5b4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9d661a67ed13669901a7e60895ececb0f1822cb0da01d3fa809c5877669e5b4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9d661a67ed13669901a7e60895ececb0f1822cb0da01d3fa809c5877669e5b4/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:33 compute-0 podman[217375]: 2025-12-03 18:05:33.281256865 +0000 UTC m=+0.580921959 container init 4372015e385a15ec63c0768b6c093abe31538ed7b41ba1c4850fa8be0e6f94cc (image=quay.io/ceph/ceph:v18, name=cranky_hopper, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:05:33 compute-0 podman[217375]: 2025-12-03 18:05:33.299233091 +0000 UTC m=+0.598898155 container start 4372015e385a15ec63c0768b6c093abe31538ed7b41ba1c4850fa8be0e6f94cc (image=quay.io/ceph/ceph:v18, name=cranky_hopper, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:05:33 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 4.b scrub starts
Dec  3 18:05:33 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 4.b scrub ok
Dec  3 18:05:33 compute-0 podman[217375]: 2025-12-03 18:05:33.37019909 +0000 UTC m=+0.669864234 container attach 4372015e385a15ec63c0768b6c093abe31538ed7b41ba1c4850fa8be0e6f94cc (image=quay.io/ceph/ceph:v18, name=cranky_hopper, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:05:33 compute-0 fervent_yonath[217344]: {
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:    "0": [
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:        {
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:            "devices": [
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:                "/dev/loop3"
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:            ],
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:            "lv_name": "ceph_lv0",
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:            "lv_size": "21470642176",
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=973fbbc8-5aff-4a53-bee8-42e5a6788dd6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:            "lv_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:            "name": "ceph_lv0",
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:            "tags": {
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:                "ceph.block_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:                "ceph.cluster_name": "ceph",
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:                "ceph.crush_device_class": "",
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:                "ceph.encrypted": "0",
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:                "ceph.osd_fsid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:                "ceph.osd_id": "0",
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:                "ceph.type": "block",
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:                "ceph.vdo": "0"
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:            },
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:            "type": "block",
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:            "vg_name": "ceph_vg0"
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:        }
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:    ],
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:    "1": [
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:        {
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:            "devices": [
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:                "/dev/loop4"
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:            ],
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:            "lv_name": "ceph_lv1",
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:            "lv_size": "21470642176",
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1e2b0083-5293-47cb-a3d1-bc27cedc4ede,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:            "lv_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:            "name": "ceph_lv1",
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:            "tags": {
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:                "ceph.block_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:                "ceph.cluster_name": "ceph",
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:                "ceph.crush_device_class": "",
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:                "ceph.encrypted": "0",
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:                "ceph.osd_fsid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:                "ceph.osd_id": "1",
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:                "ceph.type": "block",
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:                "ceph.vdo": "0"
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:            },
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:            "type": "block",
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:            "vg_name": "ceph_vg1"
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:        }
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:    ],
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:    "2": [
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:        {
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:            "devices": [
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:                "/dev/loop5"
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:            ],
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:            "lv_name": "ceph_lv2",
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:            "lv_size": "21470642176",
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2abec9de-afba-437e-9a17-384a1dd8cd50,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:            "lv_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:            "name": "ceph_lv2",
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:            "tags": {
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:                "ceph.block_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:                "ceph.cluster_name": "ceph",
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:                "ceph.crush_device_class": "",
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:                "ceph.encrypted": "0",
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:                "ceph.osd_fsid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:                "ceph.osd_id": "2",
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:                "ceph.type": "block",
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:                "ceph.vdo": "0"
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:            },
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:            "type": "block",
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:            "vg_name": "ceph_vg2"
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:        }
Dec  3 18:05:33 compute-0 fervent_yonath[217344]:    ]
Dec  3 18:05:33 compute-0 fervent_yonath[217344]: }
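The JSON block above matches the output of "ceph-volume lvm list --format json": one entry per OSD id, each LV carrying the ceph.* lv_tags that bind osd_fsid, cluster_fsid and osdspec_affinity to the device. The same mapping can be read back with plain LVM tools (VG names taken from the listing):

    sudo lvs -o lv_name,vg_name,lv_tags ceph_vg0 ceph_vg1 ceph_vg2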
Dec  3 18:05:33 compute-0 systemd[1]: libpod-7cc690efbfd19e09791a926c76197322547a2dd1ea7c045b1eb82f873e9980eb.scope: Deactivated successfully.
Dec  3 18:05:33 compute-0 conmon[217344]: conmon 7cc690efbfd19e09791a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7cc690efbfd19e09791a926c76197322547a2dd1ea7c045b1eb82f873e9980eb.scope/container/memory.events
Dec  3 18:05:33 compute-0 podman[217328]: 2025-12-03 18:05:33.549157305 +0000 UTC m=+1.369343867 container died 7cc690efbfd19e09791a926c76197322547a2dd1ea7c045b1eb82f873e9980eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:05:33 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Dec  3 18:05:33 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Dec  3 18:05:33 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v103: 193 pgs: 41 peering, 152 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:05:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d3fd357177f04c77adf99cc21117f88c2374df2c43f90b2c76a5a622ff9031b-merged.mount: Deactivated successfully.
Dec  3 18:05:33 compute-0 ceph-mgr[193091]: log_channel(audit) log [DBG] : from='client.14248 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 18:05:33 compute-0 ceph-mgr[193091]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Dec  3 18:05:33 compute-0 ceph-mgr[193091]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Dec  3 18:05:33 compute-0 ceph-mgr[193091]: [progress INFO root] Writing back 10 completed events
Dec  3 18:05:33 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Dec  3 18:05:33 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Dec  3 18:05:34 compute-0 podman[217328]: 2025-12-03 18:05:34.084735626 +0000 UTC m=+1.904922178 container remove 7cc690efbfd19e09791a926c76197322547a2dd1ea7c045b1eb82f873e9980eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_yonath, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  3 18:05:34 compute-0 systemd[1]: libpod-conmon-7cc690efbfd19e09791a926c76197322547a2dd1ea7c045b1eb82f873e9980eb.scope: Deactivated successfully.
Dec  3 18:05:34 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:05:34 compute-0 cranky_hopper[217390]: Scheduled mds.cephfs update...
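"Scheduled mds.cephfs update..." is only the orchestrator acknowledging the spec; the daemon itself is created asynchronously on a later cephadm serve pass. A sketch for following up:

    ceph orch ls mds                  # spec, placement, and running/expected counts
    ceph orch ps --daemon-type mds    # the mds.cephfs.<host>.<suffix> daemon, once placed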
Dec  3 18:05:34 compute-0 systemd[1]: libpod-4372015e385a15ec63c0768b6c093abe31538ed7b41ba1c4850fa8be0e6f94cc.scope: Deactivated successfully.
Dec  3 18:05:34 compute-0 podman[217375]: 2025-12-03 18:05:34.175664218 +0000 UTC m=+1.475329302 container died 4372015e385a15ec63c0768b6c093abe31538ed7b41ba1c4850fa8be0e6f94cc (image=quay.io/ceph/ceph:v18, name=cranky_hopper, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:05:34 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:05:34 compute-0 podman[217398]: 2025-12-03 18:05:34.391571827 +0000 UTC m=+0.818192977 container health_status 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
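The health_status line is podman's periodic healthcheck of the long-running podman_exporter container (test command /openstack/healthcheck, per its config_data). The same check can be run by hand:

    podman healthcheck run podman_exporter && echo healthy   # exit 0 means healthy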
Dec  3 18:05:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9d661a67ed13669901a7e60895ececb0f1822cb0da01d3fa809c5877669e5b4-merged.mount: Deactivated successfully.
Dec  3 18:05:34 compute-0 podman[217375]: 2025-12-03 18:05:34.770220949 +0000 UTC m=+2.069886003 container remove 4372015e385a15ec63c0768b6c093abe31538ed7b41ba1c4850fa8be0e6f94cc (image=quay.io/ceph/ceph:v18, name=cranky_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec  3 18:05:34 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Dec  3 18:05:34 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Dec  3 18:05:34 compute-0 systemd[1]: libpod-conmon-4372015e385a15ec63c0768b6c093abe31538ed7b41ba1c4850fa8be0e6f94cc.scope: Deactivated successfully.
Dec  3 18:05:35 compute-0 podman[217605]: 2025-12-03 18:05:35.17199172 +0000 UTC m=+0.099002829 container create 6eaa757dec9021db267d8691f1b731471a151cb29e5443919001059d2d880ec6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  3 18:05:35 compute-0 ceph-mon[192802]: Saving service mds.cephfs spec with placement compute-0
Dec  3 18:05:35 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:05:35 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:05:35 compute-0 podman[217605]: 2025-12-03 18:05:35.117582642 +0000 UTC m=+0.044593741 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:05:35 compute-0 systemd[1]: Started libpod-conmon-6eaa757dec9021db267d8691f1b731471a151cb29e5443919001059d2d880ec6.scope.
Dec  3 18:05:35 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:05:35 compute-0 podman[217605]: 2025-12-03 18:05:35.420923209 +0000 UTC m=+0.347934308 container init 6eaa757dec9021db267d8691f1b731471a151cb29e5443919001059d2d880ec6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_pasteur, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:05:35 compute-0 podman[217605]: 2025-12-03 18:05:35.43088077 +0000 UTC m=+0.357891849 container start 6eaa757dec9021db267d8691f1b731471a151cb29e5443919001059d2d880ec6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:05:35 compute-0 gracious_pasteur[217672]: 167 167
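The "167 167" printed by this short-lived container is the uid/gid of the ceph account inside the image (167:167 in Red Hat/CentOS Ceph packaging); the playbook reuses exactly those numbers a second later when it copies ceph.client.openstack.keyring with owner=167 group=167. A minimal sketch of an equivalent probe, assuming a stat of /var/lib/ceph is what the deployment tool runs here:

    # hypothetical uid/gid probe; prints "167 167" for this image
    podman run --rm --entrypoint stat \
        quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 \
        -c '%u %g' /var/lib/ceph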
Dec  3 18:05:35 compute-0 systemd[1]: libpod-6eaa757dec9021db267d8691f1b731471a151cb29e5443919001059d2d880ec6.scope: Deactivated successfully.
Dec  3 18:05:35 compute-0 podman[217605]: 2025-12-03 18:05:35.475383108 +0000 UTC m=+0.402394247 container attach 6eaa757dec9021db267d8691f1b731471a151cb29e5443919001059d2d880ec6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_pasteur, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  3 18:05:35 compute-0 podman[217605]: 2025-12-03 18:05:35.476346241 +0000 UTC m=+0.403357340 container died 6eaa757dec9021db267d8691f1b731471a151cb29e5443919001059d2d880ec6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_pasteur, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:05:35 compute-0 python3[217700]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  3 18:05:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-b38d33e964b905da30531ebf8d54a19d709ce8b3fb04d0dccb43cac6abf5e6d4-merged.mount: Deactivated successfully.
Dec  3 18:05:35 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Dec  3 18:05:35 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v104: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:05:35 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Dec  3 18:05:36 compute-0 python3[217787]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764785135.1884744-37842-247930804882943/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=3c08993ec5e3d5368a201a3a43b02eb90e20f943 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:05:36 compute-0 podman[217605]: 2025-12-03 18:05:36.186521211 +0000 UTC m=+1.113532280 container remove 6eaa757dec9021db267d8691f1b731471a151cb29e5443919001059d2d880ec6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_pasteur, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:05:36 compute-0 systemd[1]: libpod-conmon-6eaa757dec9021db267d8691f1b731471a151cb29e5443919001059d2d880ec6.scope: Deactivated successfully.
Dec  3 18:05:36 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Dec  3 18:05:36 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Dec  3 18:05:36 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 4.c scrub starts
Dec  3 18:05:36 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 4.c scrub ok
Dec  3 18:05:36 compute-0 podman[217819]: 2025-12-03 18:05:36.417626898 +0000 UTC m=+0.048774381 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:05:36 compute-0 podman[217819]: 2025-12-03 18:05:36.621536437 +0000 UTC m=+0.252683860 container create 8cf8d361c09dbb356112e6e2449b6f105fdcb28149ed4fa38c1e17704c59ec89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_burnell, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec  3 18:05:36 compute-0 python3[217858]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c1caf3ba-b2a5-5005-a11e-e955c344dccc -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:05:36 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:05:36 compute-0 systemd[1]: Started libpod-conmon-8cf8d361c09dbb356112e6e2449b6f105fdcb28149ed4fa38c1e17704c59ec89.scope.
Dec  3 18:05:36 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:05:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74c101558cdc48756bd84b15408cf2358c0ed4c2979bad109831f286eb5561dc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74c101558cdc48756bd84b15408cf2358c0ed4c2979bad109831f286eb5561dc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74c101558cdc48756bd84b15408cf2358c0ed4c2979bad109831f286eb5561dc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74c101558cdc48756bd84b15408cf2358c0ed4c2979bad109831f286eb5561dc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:36 compute-0 podman[217859]: 2025-12-03 18:05:36.845095002 +0000 UTC m=+0.051392986 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:05:36 compute-0 podman[217859]: 2025-12-03 18:05:36.975868319 +0000 UTC m=+0.182166263 container create f8adfa8a0304ad8d20d3205f27be60e7ae95785efc63e39860236c5a36ef9db9 (image=quay.io/ceph/ceph:v18, name=strange_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:05:37 compute-0 podman[217819]: 2025-12-03 18:05:37.043870577 +0000 UTC m=+0.675017980 container init 8cf8d361c09dbb356112e6e2449b6f105fdcb28149ed4fa38c1e17704c59ec89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec  3 18:05:37 compute-0 podman[217819]: 2025-12-03 18:05:37.061046863 +0000 UTC m=+0.692194286 container start 8cf8d361c09dbb356112e6e2449b6f105fdcb28149ed4fa38c1e17704c59ec89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_burnell, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:05:37 compute-0 systemd[1]: Started libpod-conmon-f8adfa8a0304ad8d20d3205f27be60e7ae95785efc63e39860236c5a36ef9db9.scope.
Dec  3 18:05:37 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:05:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d9d02bc60535333b3f5541061a2f4ce2497a0c2a6c255e9673e74b112e3473f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d9d02bc60535333b3f5541061a2f4ce2497a0c2a6c255e9673e74b112e3473f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:37 compute-0 podman[217819]: 2025-12-03 18:05:37.272872273 +0000 UTC m=+0.904019666 container attach 8cf8d361c09dbb356112e6e2449b6f105fdcb28149ed4fa38c1e17704c59ec89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_burnell, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec  3 18:05:37 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Dec  3 18:05:37 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Dec  3 18:05:37 compute-0 podman[217859]: 2025-12-03 18:05:37.437540461 +0000 UTC m=+0.643838445 container init f8adfa8a0304ad8d20d3205f27be60e7ae95785efc63e39860236c5a36ef9db9 (image=quay.io/ceph/ceph:v18, name=strange_tu, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec  3 18:05:37 compute-0 podman[217859]: 2025-12-03 18:05:37.456290925 +0000 UTC m=+0.662588899 container start f8adfa8a0304ad8d20d3205f27be60e7ae95785efc63e39860236c5a36ef9db9 (image=quay.io/ceph/ceph:v18, name=strange_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:05:37 compute-0 podman[217859]: 2025-12-03 18:05:37.538269061 +0000 UTC m=+0.744567085 container attach f8adfa8a0304ad8d20d3205f27be60e7ae95785efc63e39860236c5a36ef9db9 (image=quay.io/ceph/ceph:v18, name=strange_tu, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:05:37 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v105: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:05:38 compute-0 interesting_burnell[217874]: {
Dec  3 18:05:38 compute-0 interesting_burnell[217874]:    "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
Dec  3 18:05:38 compute-0 interesting_burnell[217874]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:05:38 compute-0 interesting_burnell[217874]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 18:05:38 compute-0 interesting_burnell[217874]:        "osd_id": 1,
Dec  3 18:05:38 compute-0 interesting_burnell[217874]:        "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:05:38 compute-0 interesting_burnell[217874]:        "type": "bluestore"
Dec  3 18:05:38 compute-0 interesting_burnell[217874]:    },
Dec  3 18:05:38 compute-0 interesting_burnell[217874]:    "2abec9de-afba-437e-9a17-384a1dd8cd50": {
Dec  3 18:05:38 compute-0 interesting_burnell[217874]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:05:38 compute-0 interesting_burnell[217874]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 18:05:38 compute-0 interesting_burnell[217874]:        "osd_id": 2,
Dec  3 18:05:38 compute-0 interesting_burnell[217874]:        "osd_uuid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:05:38 compute-0 interesting_burnell[217874]:        "type": "bluestore"
Dec  3 18:05:38 compute-0 interesting_burnell[217874]:    },
Dec  3 18:05:38 compute-0 interesting_burnell[217874]:    "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": {
Dec  3 18:05:38 compute-0 interesting_burnell[217874]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:05:38 compute-0 interesting_burnell[217874]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 18:05:38 compute-0 interesting_burnell[217874]:        "osd_id": 0,
Dec  3 18:05:38 compute-0 interesting_burnell[217874]:        "osd_uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:05:38 compute-0 interesting_burnell[217874]:        "type": "bluestore"
Dec  3 18:05:38 compute-0 interesting_burnell[217874]:    }
Dec  3 18:05:38 compute-0 interesting_burnell[217874]: }
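The JSON block above is the device inventory reported by the interesting_burnell run: each key is an OSD uuid, and each entry ties one of the three bluestore OSDs to its LVM device and the cluster fsid c1caf3ba-b2a5-5005-a11e-e955c344dccc; cephadm persists this a moment later via the config-key set of mgr/cephadm/host.compute-0.devices.0. Assuming the blob is saved to a file (inventory.json is a hypothetical name), jq flattens it for a quick check:

    jq -r 'to_entries[] | "osd.\(.value.osd_id)  \(.value.device)  \(.value.type)"' inventory.json
    # osd.1  /dev/mapper/ceph_vg1-ceph_lv1  bluestore
    # osd.2  /dev/mapper/ceph_vg2-ceph_lv2  bluestore
    # osd.0  /dev/mapper/ceph_vg0-ceph_lv0  bluestore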
Dec  3 18:05:38 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0) v1
Dec  3 18:05:38 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3707696525' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Dec  3 18:05:38 compute-0 systemd[1]: libpod-8cf8d361c09dbb356112e6e2449b6f105fdcb28149ed4fa38c1e17704c59ec89.scope: Deactivated successfully.
Dec  3 18:05:38 compute-0 systemd[1]: libpod-8cf8d361c09dbb356112e6e2449b6f105fdcb28149ed4fa38c1e17704c59ec89.scope: Consumed 1.121s CPU time.
Dec  3 18:05:38 compute-0 conmon[217874]: conmon 8cf8d361c09dbb356112 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8cf8d361c09dbb356112e6e2449b6f105fdcb28149ed4fa38c1e17704c59ec89.scope/container/memory.events
Dec  3 18:05:38 compute-0 podman[217819]: 2025-12-03 18:05:38.185526487 +0000 UTC m=+1.816673930 container died 8cf8d361c09dbb356112e6e2449b6f105fdcb28149ed4fa38c1e17704c59ec89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec  3 18:05:38 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3707696525' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Dec  3 18:05:38 compute-0 systemd[1]: libpod-f8adfa8a0304ad8d20d3205f27be60e7ae95785efc63e39860236c5a36ef9db9.scope: Deactivated successfully.
Dec  3 18:05:38 compute-0 conmon[217881]: conmon f8adfa8a0304ad8d20d3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f8adfa8a0304ad8d20d3205f27be60e7ae95785efc63e39860236c5a36ef9db9.scope/container/memory.events
Dec  3 18:05:38 compute-0 podman[217859]: 2025-12-03 18:05:38.308326322 +0000 UTC m=+1.514624296 container died f8adfa8a0304ad8d20d3205f27be60e7ae95785efc63e39860236c5a36ef9db9 (image=quay.io/ceph/ceph:v18, name=strange_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:05:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-74c101558cdc48756bd84b15408cf2358c0ed4c2979bad109831f286eb5561dc-merged.mount: Deactivated successfully.
Dec  3 18:05:38 compute-0 ceph-mon[192802]: from='client.? 192.168.122.100:0/3707696525' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Dec  3 18:05:38 compute-0 ceph-mon[192802]: from='client.? 192.168.122.100:0/3707696525' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
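This dispatch/finished pair closes out the keyring import started by the Ansible task at 18:05:36; the monitor records each audit event twice here, once on log_channel(audit) and once in its plain log. The underlying command, trimmed from that task's _raw_params:

    podman run --rm --net=host --ipc=host \
        --volume /etc/ceph:/etc/ceph:z \
        --entrypoint ceph quay.io/ceph/ceph:v18 \
        --fsid c1caf3ba-b2a5-5005-a11e-e955c344dccc \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        auth import -i /etc/ceph/ceph.client.openstack.keyring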
Dec  3 18:05:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d9d02bc60535333b3f5541061a2f4ce2497a0c2a6c255e9673e74b112e3473f-merged.mount: Deactivated successfully.
Dec  3 18:05:38 compute-0 podman[217819]: 2025-12-03 18:05:38.971022332 +0000 UTC m=+2.602169745 container remove 8cf8d361c09dbb356112e6e2449b6f105fdcb28149ed4fa38c1e17704c59ec89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_burnell, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:05:38 compute-0 systemd[1]: libpod-conmon-8cf8d361c09dbb356112e6e2449b6f105fdcb28149ed4fa38c1e17704c59ec89.scope: Deactivated successfully.
Dec  3 18:05:39 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:05:39 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:05:39 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:05:39 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:05:39 compute-0 podman[217859]: 2025-12-03 18:05:39.173817914 +0000 UTC m=+2.380115848 container remove f8adfa8a0304ad8d20d3205f27be60e7ae95785efc63e39860236c5a36ef9db9 (image=quay.io/ceph/ceph:v18, name=strange_tu, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  3 18:05:39 compute-0 systemd[1]: libpod-conmon-f8adfa8a0304ad8d20d3205f27be60e7ae95785efc63e39860236c5a36ef9db9.scope: Deactivated successfully.
Dec  3 18:05:39 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Dec  3 18:05:39 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Dec  3 18:05:39 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v106: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:05:39 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:05:39 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:05:39 compute-0 python3[218131]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c1caf3ba-b2a5-5005-a11e-e955c344dccc -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:05:40 compute-0 podman[218143]: 2025-12-03 18:05:40.135133208 +0000 UTC m=+0.131316463 container create bd41d8175c2a0b7385af0e844f19e2c129eedec86e210bd22474e207d9f852ec (image=quay.io/ceph/ceph:v18, name=optimistic_knuth, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:05:40 compute-0 podman[218143]: 2025-12-03 18:05:40.059108515 +0000 UTC m=+0.055291800 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:05:40 compute-0 systemd[1]: Started libpod-conmon-bd41d8175c2a0b7385af0e844f19e2c129eedec86e210bd22474e207d9f852ec.scope.
Dec  3 18:05:40 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:05:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7046d23ec3786920dcf001896605cd0cfcfa448760f8d8df955894941a0ee48/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7046d23ec3786920dcf001896605cd0cfcfa448760f8d8df955894941a0ee48/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:40 compute-0 podman[218143]: 2025-12-03 18:05:40.478378891 +0000 UTC m=+0.474562226 container init bd41d8175c2a0b7385af0e844f19e2c129eedec86e210bd22474e207d9f852ec (image=quay.io/ceph/ceph:v18, name=optimistic_knuth, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  3 18:05:40 compute-0 podman[218143]: 2025-12-03 18:05:40.487592053 +0000 UTC m=+0.483775338 container start bd41d8175c2a0b7385af0e844f19e2c129eedec86e210bd22474e207d9f852ec (image=quay.io/ceph/ceph:v18, name=optimistic_knuth, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:05:40 compute-0 podman[218143]: 2025-12-03 18:05:40.567873158 +0000 UTC m=+0.564056473 container attach bd41d8175c2a0b7385af0e844f19e2c129eedec86e210bd22474e207d9f852ec (image=quay.io/ceph/ceph:v18, name=optimistic_knuth, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  3 18:05:41 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Dec  3 18:05:41 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1645774712' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec  3 18:05:41 compute-0 podman[218237]: 2025-12-03 18:05:41.13362006 +0000 UTC m=+0.233966657 container exec c4418ca0ee5df95c133db330bc8714b98e7c86be83b29540d0d4d94c3c723743 (image=quay.io/ceph/ceph:v18, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:05:41 compute-0 optimistic_knuth[218171]: 
Dec  3 18:05:41 compute-0 optimistic_knuth[218171]: {"fsid":"c1caf3ba-b2a5-5005-a11e-e955c344dccc","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":194,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":42,"num_osds":3,"num_up_osds":3,"osd_up_since":1764785081,"num_in_osds":3,"osd_in_since":1764785045,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":193}],"num_pgs":193,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":84176896,"bytes_avail":64327749632,"bytes_total":64411926528},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":4,"modified":"2025-12-03T18:05:37.798653+0000","services":{"osd":{"daemons":{"summary":"","2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
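This is the ceph status JSON requested at 18:05:39; piped through jq .monmap.num_mons as in that task, it yields 1, the lone monitor on compute-0. The HEALTH_ERR is expected at this stage: the mds.cephfs spec was only saved at 18:05:35 and no MDS daemon is up yet ("up":0, "up:standby":0 in fsmap), hence MDS_ALL_DOWN and MDS_UP_LESS_THAN_MAX. The extraction, trimmed from the task's _raw_params:

    podman run --rm --net=host --ipc=host \
        --volume /etc/ceph:/etc/ceph:z \
        --entrypoint ceph quay.io/ceph/ceph:v18 \
        --fsid c1caf3ba-b2a5-5005-a11e-e955c344dccc \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        status --format json | jq .monmap.num_mons
    # -> 1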
Dec  3 18:05:41 compute-0 systemd[1]: libpod-bd41d8175c2a0b7385af0e844f19e2c129eedec86e210bd22474e207d9f852ec.scope: Deactivated successfully.
Dec  3 18:05:41 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Dec  3 18:05:41 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Dec  3 18:05:41 compute-0 podman[218143]: 2025-12-03 18:05:41.219416778 +0000 UTC m=+1.215600033 container died bd41d8175c2a0b7385af0e844f19e2c129eedec86e210bd22474e207d9f852ec (image=quay.io/ceph/ceph:v18, name=optimistic_knuth, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:05:41 compute-0 podman[218237]: 2025-12-03 18:05:41.434232291 +0000 UTC m=+0.534578908 container exec_died c4418ca0ee5df95c133db330bc8714b98e7c86be83b29540d0d4d94c3c723743 (image=quay.io/ceph/ceph:v18, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mon-compute-0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec  3 18:05:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-e7046d23ec3786920dcf001896605cd0cfcfa448760f8d8df955894941a0ee48-merged.mount: Deactivated successfully.
Dec  3 18:05:41 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v107: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:05:41 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:05:41 compute-0 podman[218143]: 2025-12-03 18:05:41.870733463 +0000 UTC m=+1.866916718 container remove bd41d8175c2a0b7385af0e844f19e2c129eedec86e210bd22474e207d9f852ec (image=quay.io/ceph/ceph:v18, name=optimistic_knuth, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:05:41 compute-0 systemd[1]: libpod-conmon-bd41d8175c2a0b7385af0e844f19e2c129eedec86e210bd22474e207d9f852ec.scope: Deactivated successfully.
Dec  3 18:05:42 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 3.19 deep-scrub starts
Dec  3 18:05:42 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 3.19 deep-scrub ok
Dec  3 18:05:42 compute-0 python3[218343]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c1caf3ba-b2a5-5005-a11e-e955c344dccc -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:05:42 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Dec  3 18:05:42 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Dec  3 18:05:42 compute-0 podman[218361]: 2025-12-03 18:05:42.324077973 +0000 UTC m=+0.040758628 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:05:42 compute-0 podman[218361]: 2025-12-03 18:05:42.44081833 +0000 UTC m=+0.157498955 container create 78e061e2adb95cbeb4d198ed1f025d3ccc4f456cd0266b9db8e601eb1edeb7b3 (image=quay.io/ceph/ceph:v18, name=exciting_snyder, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Dec  3 18:05:42 compute-0 systemd[1]: Started libpod-conmon-78e061e2adb95cbeb4d198ed1f025d3ccc4f456cd0266b9db8e601eb1edeb7b3.scope.
Dec  3 18:05:42 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:05:42 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:05:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49da3cf1d9482a84884c8cc15e821f9902fb132a223c498b15d96497807fe081/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49da3cf1d9482a84884c8cc15e821f9902fb132a223c498b15d96497807fe081/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:42 compute-0 podman[218361]: 2025-12-03 18:05:42.846967277 +0000 UTC m=+0.563647942 container init 78e061e2adb95cbeb4d198ed1f025d3ccc4f456cd0266b9db8e601eb1edeb7b3 (image=quay.io/ceph/ceph:v18, name=exciting_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  3 18:05:42 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:05:42 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:05:42 compute-0 podman[218361]: 2025-12-03 18:05:42.879394093 +0000 UTC m=+0.596074738 container start 78e061e2adb95cbeb4d198ed1f025d3ccc4f456cd0266b9db8e601eb1edeb7b3 (image=quay.io/ceph/ceph:v18, name=exciting_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  3 18:05:43 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:05:43 compute-0 podman[218361]: 2025-12-03 18:05:43.037033101 +0000 UTC m=+0.753713736 container attach 78e061e2adb95cbeb4d198ed1f025d3ccc4f456cd0266b9db8e601eb1edeb7b3 (image=quay.io/ceph/ceph:v18, name=exciting_snyder, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:05:43 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 18:05:43 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2000444240' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 18:05:43 compute-0 exciting_snyder[218407]: 
Dec  3 18:05:43 compute-0 exciting_snyder[218407]: {"epoch":1,"fsid":"c1caf3ba-b2a5-5005-a11e-e955c344dccc","modified":"2025-12-03T18:02:19.459889Z","created":"2025-12-03T18:02:19.459889Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
Dec  3 18:05:43 compute-0 exciting_snyder[218407]: dumped monmap epoch 1
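The monmap confirms a single-monitor quorum: rank 0 is compute-0, reachable over v2 at 192.168.122.100:3300 and v1 at 192.168.122.100:6789, with min_mon_release 18 (reef). The odd "disallowed_leaders: " and "removed_ranks: " keys, trailing colon and space included, appear to be verbatim Ceph output rather than log corruption. A jq sketch to pull the addresses out of the JSON above (monmap.json is a hypothetical capture file):

    jq -r '.mons[] | .name as $n | .public_addrs.addrvec[] | "\($n) \(.type) \(.addr)"' monmap.json
    # compute-0 v2 192.168.122.100:3300
    # compute-0 v1 192.168.122.100:6789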
Dec  3 18:05:43 compute-0 systemd[1]: libpod-78e061e2adb95cbeb4d198ed1f025d3ccc4f456cd0266b9db8e601eb1edeb7b3.scope: Deactivated successfully.
Dec  3 18:05:43 compute-0 podman[218361]: 2025-12-03 18:05:43.508699594 +0000 UTC m=+1.225380259 container died 78e061e2adb95cbeb4d198ed1f025d3ccc4f456cd0266b9db8e601eb1edeb7b3 (image=quay.io/ceph/ceph:v18, name=exciting_snyder, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  3 18:05:43 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:05:43 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:05:43 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v108: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:05:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-49da3cf1d9482a84884c8cc15e821f9902fb132a223c498b15d96497807fe081-merged.mount: Deactivated successfully.
Dec  3 18:05:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:05:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:05:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:05:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:05:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:05:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:05:44 compute-0 podman[218361]: 2025-12-03 18:05:44.017938468 +0000 UTC m=+1.734619093 container remove 78e061e2adb95cbeb4d198ed1f025d3ccc4f456cd0266b9db8e601eb1edeb7b3 (image=quay.io/ceph/ceph:v18, name=exciting_snyder, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:05:44 compute-0 systemd[1]: libpod-conmon-78e061e2adb95cbeb4d198ed1f025d3ccc4f456cd0266b9db8e601eb1edeb7b3.scope: Deactivated successfully.
Dec  3 18:05:44 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:05:44 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:05:44 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 18:05:44 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:05:44 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 18:05:44 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:05:44 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 20921496-960c-4797-9930-44a359f95302 does not exist
Dec  3 18:05:44 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev b3674b50-a44e-4fa5-b508-6f603fca8b1f does not exist
Dec  3 18:05:44 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 98e71b6a-429d-4d1e-9926-52d7541cedd8 does not exist
Dec  3 18:05:44 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 18:05:44 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 18:05:44 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 18:05:44 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:05:44 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:05:44 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:05:44 compute-0 python3[218666]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c1caf3ba-b2a5-5005-a11e-e955c344dccc -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:05:44 compute-0 podman[218699]: 2025-12-03 18:05:44.789945836 +0000 UTC m=+0.046420085 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:05:44 compute-0 podman[218699]: 2025-12-03 18:05:44.922278332 +0000 UTC m=+0.178752561 container create 3e4862efb46d95ea7d2c411a7b2ec8ee43a9b7f40ee3c9fd5d3ca718a9fc1857 (image=quay.io/ceph/ceph:v18, name=strange_cartwright, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 18:05:45 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:05:45 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:05:45 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:05:45 compute-0 systemd[1]: Started libpod-conmon-3e4862efb46d95ea7d2c411a7b2ec8ee43a9b7f40ee3c9fd5d3ca718a9fc1857.scope.
Dec  3 18:05:45 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:05:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05b8e331c85ab8d106ad1e651cb9fa48a55166c6192bbbef3d691870f692409c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05b8e331c85ab8d106ad1e651cb9fa48a55166c6192bbbef3d691870f692409c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:45 compute-0 podman[218699]: 2025-12-03 18:05:45.426063843 +0000 UTC m=+0.682538102 container init 3e4862efb46d95ea7d2c411a7b2ec8ee43a9b7f40ee3c9fd5d3ca718a9fc1857 (image=quay.io/ceph/ceph:v18, name=strange_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:05:45 compute-0 podman[218699]: 2025-12-03 18:05:45.439037658 +0000 UTC m=+0.695511917 container start 3e4862efb46d95ea7d2c411a7b2ec8ee43a9b7f40ee3c9fd5d3ca718a9fc1857 (image=quay.io/ceph/ceph:v18, name=strange_cartwright, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  3 18:05:45 compute-0 podman[218699]: 2025-12-03 18:05:45.534415338 +0000 UTC m=+0.790889647 container attach 3e4862efb46d95ea7d2c411a7b2ec8ee43a9b7f40ee3c9fd5d3ca718a9fc1857 (image=quay.io/ceph/ceph:v18, name=strange_cartwright, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec  3 18:05:45 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v109: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:05:45 compute-0 podman[218753]: 2025-12-03 18:05:45.714607312 +0000 UTC m=+0.038035883 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:05:45 compute-0 podman[218753]: 2025-12-03 18:05:45.841928125 +0000 UTC m=+0.165356666 container create cb134fa15e14d2a9b5b0b29cfd28fcd6c94faf29f007b8f0f846cbc10c539ef2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:05:45 compute-0 systemd[1]: Started libpod-conmon-cb134fa15e14d2a9b5b0b29cfd28fcd6c94faf29f007b8f0f846cbc10c539ef2.scope.
Dec  3 18:05:46 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:05:46 compute-0 podman[218753]: 2025-12-03 18:05:46.086240643 +0000 UTC m=+0.409669264 container init cb134fa15e14d2a9b5b0b29cfd28fcd6c94faf29f007b8f0f846cbc10c539ef2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec  3 18:05:46 compute-0 podman[218753]: 2025-12-03 18:05:46.095039096 +0000 UTC m=+0.418467637 container start cb134fa15e14d2a9b5b0b29cfd28fcd6c94faf29f007b8f0f846cbc10c539ef2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:05:46 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0) v1
Dec  3 18:05:46 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4039142608' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Dec  3 18:05:46 compute-0 strange_cartwright[218736]: [client.openstack]
Dec  3 18:05:46 compute-0 strange_cartwright[218736]: 	key = AQACezBpAAAAABAA4kVqiGBXa9CJeGAfqr4Yjw==
Dec  3 18:05:46 compute-0 strange_cartwright[218736]: 	caps mgr = "allow *"
Dec  3 18:05:46 compute-0 strange_cartwright[218736]: 	caps mon = "profile rbd"
Dec  3 18:05:46 compute-0 strange_cartwright[218736]: 	caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
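[annotation] The keyring above is the stdout of the one-shot strange_cartwright container running "ceph auth get client.openstack"; the mon audit line at 18:05:46 shows the dispatched command. A minimal sketch of reproducing it by hand, assuming the same image tag and config/keyring paths that appear in the podman command at 18:05:48 in this log:

    # Hedged sketch: one-shot "auth get" mirroring this log's own podman pattern;
    # image tag and mounted paths are taken from this log, not verified elsewhere.
    podman run --rm --net=host \
      --volume /etc/ceph:/etc/ceph:z \
      --entrypoint ceph quay.io/ceph/ceph:v18 \
      -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
      auth get client.openstack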
Dec  3 18:05:46 compute-0 boring_poitras[218788]: 167 167
Dec  3 18:05:46 compute-0 systemd[1]: libpod-cb134fa15e14d2a9b5b0b29cfd28fcd6c94faf29f007b8f0f846cbc10c539ef2.scope: Deactivated successfully.
Dec  3 18:05:46 compute-0 conmon[218788]: conmon cb134fa15e14d2a9b5b0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cb134fa15e14d2a9b5b0b29cfd28fcd6c94faf29f007b8f0f846cbc10c539ef2.scope/container/memory.events
Dec  3 18:05:46 compute-0 systemd[1]: libpod-3e4862efb46d95ea7d2c411a7b2ec8ee43a9b7f40ee3c9fd5d3ca718a9fc1857.scope: Deactivated successfully.
Dec  3 18:05:46 compute-0 podman[218753]: 2025-12-03 18:05:46.179600404 +0000 UTC m=+0.503028995 container attach cb134fa15e14d2a9b5b0b29cfd28fcd6c94faf29f007b8f0f846cbc10c539ef2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:05:46 compute-0 podman[218699]: 2025-12-03 18:05:46.180315751 +0000 UTC m=+1.436790020 container died 3e4862efb46d95ea7d2c411a7b2ec8ee43a9b7f40ee3c9fd5d3ca718a9fc1857 (image=quay.io/ceph/ceph:v18, name=strange_cartwright, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  3 18:05:46 compute-0 podman[218753]: 2025-12-03 18:05:46.180308651 +0000 UTC m=+0.503737192 container died cb134fa15e14d2a9b5b0b29cfd28fcd6c94faf29f007b8f0f846cbc10c539ef2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_poitras, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Dec  3 18:05:46 compute-0 ceph-mon[192802]: from='client.? 192.168.122.100:0/4039142608' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Dec  3 18:05:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-05b8e331c85ab8d106ad1e651cb9fa48a55166c6192bbbef3d691870f692409c-merged.mount: Deactivated successfully.
Dec  3 18:05:46 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:05:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-70b938527c7e6d66eb8c667e11d9dc7727a55645004193c9abe96329cbdaf692-merged.mount: Deactivated successfully.
Dec  3 18:05:46 compute-0 podman[218753]: 2025-12-03 18:05:46.945144495 +0000 UTC m=+1.268573076 container remove cb134fa15e14d2a9b5b0b29cfd28fcd6c94faf29f007b8f0f846cbc10c539ef2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_poitras, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:05:47 compute-0 podman[218801]: 2025-12-03 18:05:47.003020547 +0000 UTC m=+0.864199022 container remove 3e4862efb46d95ea7d2c411a7b2ec8ee43a9b7f40ee3c9fd5d3ca718a9fc1857 (image=quay.io/ceph/ceph:v18, name=strange_cartwright, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  3 18:05:47 compute-0 systemd[1]: libpod-conmon-3e4862efb46d95ea7d2c411a7b2ec8ee43a9b7f40ee3c9fd5d3ca718a9fc1857.scope: Deactivated successfully.
Dec  3 18:05:47 compute-0 systemd[1]: libpod-conmon-cb134fa15e14d2a9b5b0b29cfd28fcd6c94faf29f007b8f0f846cbc10c539ef2.scope: Deactivated successfully.
Dec  3 18:05:47 compute-0 podman[218826]: 2025-12-03 18:05:47.205233232 +0000 UTC m=+0.092143178 container create aa6edb356db37795500b590c0ed61c7d8ef62f1e35abf58c3fcdc34c43c395f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_noyce, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  3 18:05:47 compute-0 podman[218826]: 2025-12-03 18:05:47.151545467 +0000 UTC m=+0.038455443 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:05:47 compute-0 systemd[1]: Started libpod-conmon-aa6edb356db37795500b590c0ed61c7d8ef62f1e35abf58c3fcdc34c43c395f8.scope.
Dec  3 18:05:47 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:05:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10e9acf322356785ba4abdc469f8e7b196db860e3cb9ca82150e8e249aefadc1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10e9acf322356785ba4abdc469f8e7b196db860e3cb9ca82150e8e249aefadc1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10e9acf322356785ba4abdc469f8e7b196db860e3cb9ca82150e8e249aefadc1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10e9acf322356785ba4abdc469f8e7b196db860e3cb9ca82150e8e249aefadc1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10e9acf322356785ba4abdc469f8e7b196db860e3cb9ca82150e8e249aefadc1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:47 compute-0 podman[218826]: 2025-12-03 18:05:47.41782029 +0000 UTC m=+0.304730306 container init aa6edb356db37795500b590c0ed61c7d8ef62f1e35abf58c3fcdc34c43c395f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  3 18:05:47 compute-0 podman[218826]: 2025-12-03 18:05:47.428214374 +0000 UTC m=+0.315124360 container start aa6edb356db37795500b590c0ed61c7d8ef62f1e35abf58c3fcdc34c43c395f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_noyce, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:05:47 compute-0 podman[218826]: 2025-12-03 18:05:47.451947055 +0000 UTC m=+0.338857001 container attach aa6edb356db37795500b590c0ed61c7d8ef62f1e35abf58c3fcdc34c43c395f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_noyce, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  3 18:05:47 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v110: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:05:48 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Dec  3 18:05:48 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Dec  3 18:05:48 compute-0 happy_noyce[218840]: --> passed data devices: 0 physical, 3 LVM
Dec  3 18:05:48 compute-0 happy_noyce[218840]: --> relative data size: 1.0
Dec  3 18:05:48 compute-0 happy_noyce[218840]: --> All data devices are unavailable
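[annotation] The three happy_noyce lines are ceph-volume's batch report: three pre-created LVs (no raw disks) were passed as data devices, and all are reported unavailable, presumably because they already carry OSD tags (see the "lvm list" output at 18:05:51 below), so there is nothing new to deploy. A minimal sketch of re-running that report, assuming the VG/LV names shown later in this log and a simplified mount set (cephadm mounts more than /dev):

    # Hedged sketch: ceph-volume batch report over the three existing LVs;
    # names come from the "ceph-volume lvm list" output later in this log.
    podman run --rm --privileged -v /dev:/dev \
      --entrypoint ceph-volume quay.io/ceph/ceph:v18 \
      lvm batch --report /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2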
Dec  3 18:05:48 compute-0 systemd[1]: libpod-aa6edb356db37795500b590c0ed61c7d8ef62f1e35abf58c3fcdc34c43c395f8.scope: Deactivated successfully.
Dec  3 18:05:48 compute-0 systemd[1]: libpod-aa6edb356db37795500b590c0ed61c7d8ef62f1e35abf58c3fcdc34c43c395f8.scope: Consumed 1.051s CPU time.
Dec  3 18:05:48 compute-0 podman[218826]: 2025-12-03 18:05:48.562087028 +0000 UTC m=+1.448997024 container died aa6edb356db37795500b590c0ed61c7d8ef62f1e35abf58c3fcdc34c43c395f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_noyce, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:05:48 compute-0 ansible-async_wrapper.py[219017]: Invoked with j378768730868 30 /home/zuul/.ansible/tmp/ansible-tmp-1764785147.9779806-37918-9780646477950/AnsiballZ_command.py _
Dec  3 18:05:48 compute-0 ansible-async_wrapper.py[219032]: Starting module and watcher
Dec  3 18:05:48 compute-0 ansible-async_wrapper.py[219032]: Start watching 219033 (30)
Dec  3 18:05:48 compute-0 ansible-async_wrapper.py[219033]: Start module (219033)
Dec  3 18:05:48 compute-0 ansible-async_wrapper.py[219017]: Return async_wrapper task started.
Dec  3 18:05:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-10e9acf322356785ba4abdc469f8e7b196db860e3cb9ca82150e8e249aefadc1-merged.mount: Deactivated successfully.
Dec  3 18:05:48 compute-0 python3[219034]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c1caf3ba-b2a5-5005-a11e-e955c344dccc -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:05:48 compute-0 podman[218826]: 2025-12-03 18:05:48.851572549 +0000 UTC m=+1.738482535 container remove aa6edb356db37795500b590c0ed61c7d8ef62f1e35abf58c3fcdc34c43c395f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_noyce, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec  3 18:05:48 compute-0 systemd[1]: libpod-conmon-aa6edb356db37795500b590c0ed61c7d8ef62f1e35abf58c3fcdc34c43c395f8.scope: Deactivated successfully.
Dec  3 18:05:48 compute-0 podman[219038]: 2025-12-03 18:05:48.944037144 +0000 UTC m=+0.078065694 container create 4b195fa1913eeb3d48110e352c0eb9f96c83f9c6d6a7715d9bc84927b50d3961 (image=quay.io/ceph/ceph:v18, name=condescending_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:05:49 compute-0 systemd[1]: Started libpod-conmon-4b195fa1913eeb3d48110e352c0eb9f96c83f9c6d6a7715d9bc84927b50d3961.scope.
Dec  3 18:05:49 compute-0 podman[219038]: 2025-12-03 18:05:48.91814685 +0000 UTC m=+0.052175400 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:05:49 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:05:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d9a20faa1c4d0cab85fcf3dcaa79cb4cb19d11debf51c67561cc1282fe2938e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d9a20faa1c4d0cab85fcf3dcaa79cb4cb19d11debf51c67561cc1282fe2938e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:49 compute-0 podman[219038]: 2025-12-03 18:05:49.085378886 +0000 UTC m=+0.219407496 container init 4b195fa1913eeb3d48110e352c0eb9f96c83f9c6d6a7715d9bc84927b50d3961 (image=quay.io/ceph/ceph:v18, name=condescending_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec  3 18:05:49 compute-0 podman[219038]: 2025-12-03 18:05:49.09413384 +0000 UTC m=+0.228162400 container start 4b195fa1913eeb3d48110e352c0eb9f96c83f9c6d6a7715d9bc84927b50d3961 (image=quay.io/ceph/ceph:v18, name=condescending_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 18:05:49 compute-0 podman[219038]: 2025-12-03 18:05:49.101836009 +0000 UTC m=+0.235864549 container attach 4b195fa1913eeb3d48110e352c0eb9f96c83f9c6d6a7715d9bc84927b50d3961 (image=quay.io/ceph/ceph:v18, name=condescending_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:05:49 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Dec  3 18:05:49 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Dec  3 18:05:49 compute-0 podman[219156]: 2025-12-03 18:05:49.376331473 +0000 UTC m=+0.102199695 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi)
Dec  3 18:05:49 compute-0 podman[219160]: 2025-12-03 18:05:49.382703418 +0000 UTC m=+0.098722418 container health_status ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.openshift.tags=base rhel9, config_id=edpm, managed_by=edpm_ansible, vcs-type=git, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., version=9.4, com.redhat.component=ubi9-container, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, name=ubi9, release-0.7.12=, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30)
Dec  3 18:05:49 compute-0 podman[219157]: 2025-12-03 18:05:49.395303177 +0000 UTC m=+0.119635691 container health_status 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, com.redhat.component=ubi9-minimal-container, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, distribution-scope=public, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_id=edpm, io.buildah.version=1.33.7, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec  3 18:05:49 compute-0 podman[219159]: 2025-12-03 18:05:49.404166635 +0000 UTC m=+0.117367796 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4)
Dec  3 18:05:49 compute-0 podman[219158]: 2025-12-03 18:05:49.420946256 +0000 UTC m=+0.143937837 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
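[annotation] The five health_status=healthy events above are podman's periodic healthchecks executing each container's configured test (the 'healthcheck' entry in config_data); every service reports healthy with a failing streak of 0. A minimal sketch for triggering and filtering such checks manually, with container names taken from the events above:

    # Hedged sketch: run one container's healthcheck on demand, then list
    # currently healthy containers; names are assumptions from this log.
    podman healthcheck run ceilometer_agent_ipmi && echo healthy
    podman ps --filter health=healthy --format '{{.Names}}'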
Dec  3 18:05:49 compute-0 podman[219310]: 2025-12-03 18:05:49.661581169 +0000 UTC m=+0.071392889 container create 0e252e8628985d0b604a4fe712e8d33496c4f957cf2d6d23b4597f2c6b4121d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_herschel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:05:49 compute-0 ceph-mgr[193091]: log_channel(audit) log [DBG] : from='client.14258 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec  3 18:05:49 compute-0 condescending_herschel[219080]: 
Dec  3 18:05:49 compute-0 condescending_herschel[219080]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
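[annotation] The blank line plus JSON above is the stdout of the "ceph orch status --format json" run launched by Ansible at 18:05:48; the task presumably gates on "available" being true. A minimal sketch of that gate, assuming jq is present on the host and reusing the exact podman invocation from this log:

    # Hedged sketch: exit non-zero unless the cephadm orchestrator is available.
    podman run --rm --net=host --volume /etc/ceph:/etc/ceph:z \
      --entrypoint ceph quay.io/ceph/ceph:v18 \
      --fsid c1caf3ba-b2a5-5005-a11e-e955c344dccc \
      -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
      orch status --format json | jq -e '.available == true'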
Dec  3 18:05:49 compute-0 podman[219310]: 2025-12-03 18:05:49.622236425 +0000 UTC m=+0.032048135 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:05:49 compute-0 systemd[1]: Started libpod-conmon-0e252e8628985d0b604a4fe712e8d33496c4f957cf2d6d23b4597f2c6b4121d4.scope.
Dec  3 18:05:49 compute-0 systemd[1]: libpod-4b195fa1913eeb3d48110e352c0eb9f96c83f9c6d6a7715d9bc84927b50d3961.scope: Deactivated successfully.
Dec  3 18:05:49 compute-0 podman[219038]: 2025-12-03 18:05:49.72815734 +0000 UTC m=+0.862185870 container died 4b195fa1913eeb3d48110e352c0eb9f96c83f9c6d6a7715d9bc84927b50d3961 (image=quay.io/ceph/ceph:v18, name=condescending_herschel, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Dec  3 18:05:49 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:05:49 compute-0 podman[219310]: 2025-12-03 18:05:49.783986808 +0000 UTC m=+0.193798528 container init 0e252e8628985d0b604a4fe712e8d33496c4f957cf2d6d23b4597f2c6b4121d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_herschel, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Dec  3 18:05:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d9a20faa1c4d0cab85fcf3dcaa79cb4cb19d11debf51c67561cc1282fe2938e-merged.mount: Deactivated successfully.
Dec  3 18:05:49 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v111: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:05:49 compute-0 podman[219310]: 2025-12-03 18:05:49.807859393 +0000 UTC m=+0.217671083 container start 0e252e8628985d0b604a4fe712e8d33496c4f957cf2d6d23b4597f2c6b4121d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_herschel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  3 18:05:49 compute-0 friendly_herschel[219333]: 167 167
Dec  3 18:05:49 compute-0 systemd[1]: libpod-0e252e8628985d0b604a4fe712e8d33496c4f957cf2d6d23b4597f2c6b4121d4.scope: Deactivated successfully.
Dec  3 18:05:49 compute-0 podman[219310]: 2025-12-03 18:05:49.81674452 +0000 UTC m=+0.226556240 container attach 0e252e8628985d0b604a4fe712e8d33496c4f957cf2d6d23b4597f2c6b4121d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_herschel, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:05:49 compute-0 podman[219310]: 2025-12-03 18:05:49.817230032 +0000 UTC m=+0.227041722 container died 0e252e8628985d0b604a4fe712e8d33496c4f957cf2d6d23b4597f2c6b4121d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec  3 18:05:49 compute-0 podman[219038]: 2025-12-03 18:05:49.836137805 +0000 UTC m=+0.970166315 container remove 4b195fa1913eeb3d48110e352c0eb9f96c83f9c6d6a7715d9bc84927b50d3961 (image=quay.io/ceph/ceph:v18, name=condescending_herschel, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:05:49 compute-0 systemd[1]: libpod-conmon-4b195fa1913eeb3d48110e352c0eb9f96c83f9c6d6a7715d9bc84927b50d3961.scope: Deactivated successfully.
Dec  3 18:05:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-04525ae71df42bdbdbc6cfe6b3fb1f108269b1fade67137c8d585ce02ab8ebdf-merged.mount: Deactivated successfully.
Dec  3 18:05:49 compute-0 ansible-async_wrapper.py[219033]: Module complete (219033)
Dec  3 18:05:49 compute-0 podman[219310]: 2025-12-03 18:05:49.87961754 +0000 UTC m=+0.289429230 container remove 0e252e8628985d0b604a4fe712e8d33496c4f957cf2d6d23b4597f2c6b4121d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:05:49 compute-0 podman[219357]: 2025-12-03 18:05:49.885271788 +0000 UTC m=+0.099103658 container health_status f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 18:05:49 compute-0 systemd[1]: libpod-conmon-0e252e8628985d0b604a4fe712e8d33496c4f957cf2d6d23b4597f2c6b4121d4.scope: Deactivated successfully.
Dec  3 18:05:50 compute-0 python3[219426]: ansible-ansible.legacy.async_status Invoked with jid=j378768730868.219017 mode=status _async_dir=/root/.ansible_async
Dec  3 18:05:50 compute-0 podman[219432]: 2025-12-03 18:05:50.091760017 +0000 UTC m=+0.077968601 container create a5f7ca537fab2254bd293f6ae2428599d2687cb9cfb40199043d729f973f4833 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_hertz, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  3 18:05:50 compute-0 podman[219432]: 2025-12-03 18:05:50.061582617 +0000 UTC m=+0.047791231 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:05:50 compute-0 systemd[1]: Started libpod-conmon-a5f7ca537fab2254bd293f6ae2428599d2687cb9cfb40199043d729f973f4833.scope.
Dec  3 18:05:50 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:05:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/842346cb02bf803a87a999d0d32e21f47bf95ea2a1dfcb38dcbf61e5e2304b4d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/842346cb02bf803a87a999d0d32e21f47bf95ea2a1dfcb38dcbf61e5e2304b4d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/842346cb02bf803a87a999d0d32e21f47bf95ea2a1dfcb38dcbf61e5e2304b4d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/842346cb02bf803a87a999d0d32e21f47bf95ea2a1dfcb38dcbf61e5e2304b4d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:50 compute-0 podman[219432]: 2025-12-03 18:05:50.38175242 +0000 UTC m=+0.367961024 container init a5f7ca537fab2254bd293f6ae2428599d2687cb9cfb40199043d729f973f4833 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_hertz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Dec  3 18:05:50 compute-0 podman[219432]: 2025-12-03 18:05:50.39400362 +0000 UTC m=+0.380212204 container start a5f7ca537fab2254bd293f6ae2428599d2687cb9cfb40199043d729f973f4833 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:05:50 compute-0 python3[219499]: ansible-ansible.legacy.async_status Invoked with jid=j378768730868.219017 mode=cleanup _async_dir=/root/.ansible_async
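[annotation] Lines 18:05:48-18:05:50 trace Ansible's async lifecycle end to end: async_wrapper forks the module and a watcher (job j378768730868, timeout 30s), async_status polls it with mode=status, and mode=cleanup then removes the job file under /root/.ansible_async. A minimal sketch of polling that job ad hoc, assuming the inventory name compute-0 and root privileges (the job file lives under /root):

    # Hedged sketch: ad-hoc poll of the async job seen above; inventory
    # naming and privilege escalation are assumptions.
    ansible compute-0 --become -m async_status -a "jid=j378768730868.219017"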
Dec  3 18:05:50 compute-0 podman[219432]: 2025-12-03 18:05:50.446304761 +0000 UTC m=+0.432513365 container attach a5f7ca537fab2254bd293f6ae2428599d2687cb9cfb40199043d729f973f4833 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec  3 18:05:51 compute-0 python3[219527]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c1caf3ba-b2a5-5005-a11e-e955c344dccc -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:05:51 compute-0 recursing_hertz[219479]: {
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:    "0": [
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:        {
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:            "devices": [
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:                "/dev/loop3"
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:            ],
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:            "lv_name": "ceph_lv0",
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:            "lv_size": "21470642176",
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=973fbbc8-5aff-4a53-bee8-42e5a6788dd6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:            "lv_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:            "name": "ceph_lv0",
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:            "tags": {
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:                "ceph.block_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:                "ceph.cluster_name": "ceph",
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:                "ceph.crush_device_class": "",
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:                "ceph.encrypted": "0",
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:                "ceph.osd_fsid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:                "ceph.osd_id": "0",
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:                "ceph.type": "block",
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:                "ceph.vdo": "0"
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:            },
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:            "type": "block",
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:            "vg_name": "ceph_vg0"
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:        }
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:    ],
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:    "1": [
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:        {
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:            "devices": [
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:                "/dev/loop4"
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:            ],
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:            "lv_name": "ceph_lv1",
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:            "lv_size": "21470642176",
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1e2b0083-5293-47cb-a3d1-bc27cedc4ede,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:            "lv_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:            "name": "ceph_lv1",
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:            "tags": {
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:                "ceph.block_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:                "ceph.cluster_name": "ceph",
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:                "ceph.crush_device_class": "",
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:                "ceph.encrypted": "0",
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:                "ceph.osd_fsid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:                "ceph.osd_id": "1",
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:                "ceph.type": "block",
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:                "ceph.vdo": "0"
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:            },
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:            "type": "block",
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:            "vg_name": "ceph_vg1"
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:        }
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:    ],
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:    "2": [
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:        {
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:            "devices": [
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:                "/dev/loop5"
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:            ],
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:            "lv_name": "ceph_lv2",
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:            "lv_size": "21470642176",
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2abec9de-afba-437e-9a17-384a1dd8cd50,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:            "lv_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:            "name": "ceph_lv2",
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:            "tags": {
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:                "ceph.block_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:                "ceph.cluster_name": "ceph",
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:                "ceph.crush_device_class": "",
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:                "ceph.encrypted": "0",
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:                "ceph.osd_fsid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:                "ceph.osd_id": "2",
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:                "ceph.type": "block",
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:                "ceph.vdo": "0"
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:            },
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:            "type": "block",
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:            "vg_name": "ceph_vg2"
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:        }
Dec  3 18:05:51 compute-0 recursing_hertz[219479]:    ]
Dec  3 18:05:51 compute-0 recursing_hertz[219479]: }
Dec  3 18:05:51 compute-0 systemd[1]: libpod-a5f7ca537fab2254bd293f6ae2428599d2687cb9cfb40199043d729f973f4833.scope: Deactivated successfully.
Dec  3 18:05:51 compute-0 podman[219432]: 2025-12-03 18:05:51.19621189 +0000 UTC m=+1.182420514 container died a5f7ca537fab2254bd293f6ae2428599d2687cb9cfb40199043d729f973f4833 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_hertz, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 18:05:51 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Dec  3 18:05:51 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Dec  3 18:05:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-842346cb02bf803a87a999d0d32e21f47bf95ea2a1dfcb38dcbf61e5e2304b4d-merged.mount: Deactivated successfully.
Dec  3 18:05:51 compute-0 podman[219432]: 2025-12-03 18:05:51.41137699 +0000 UTC m=+1.397585584 container remove a5f7ca537fab2254bd293f6ae2428599d2687cb9cfb40199043d729f973f4833 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_hertz, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:05:51 compute-0 systemd[1]: libpod-conmon-a5f7ca537fab2254bd293f6ae2428599d2687cb9cfb40199043d729f973f4833.scope: Deactivated successfully.
Dec  3 18:05:51 compute-0 podman[219532]: 2025-12-03 18:05:51.464242795 +0000 UTC m=+0.301245960 container create 6889f55444d383dce8aa479dab8b865eaf31257e87cf14dab18f441107f9b6d4 (image=quay.io/ceph/ceph:v18, name=gallant_shirley, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec  3 18:05:51 compute-0 systemd[1]: Started libpod-conmon-6889f55444d383dce8aa479dab8b865eaf31257e87cf14dab18f441107f9b6d4.scope.
Dec  3 18:05:51 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:05:51 compute-0 podman[219532]: 2025-12-03 18:05:51.443847355 +0000 UTC m=+0.280850540 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:05:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28de96d10d1162d7c2efb3198ab30767e02b1d020e417c156d143c8649fbfc41/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28de96d10d1162d7c2efb3198ab30767e02b1d020e417c156d143c8649fbfc41/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:51 compute-0 podman[219532]: 2025-12-03 18:05:51.574424464 +0000 UTC m=+0.411427649 container init 6889f55444d383dce8aa479dab8b865eaf31257e87cf14dab18f441107f9b6d4 (image=quay.io/ceph/ceph:v18, name=gallant_shirley, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec  3 18:05:51 compute-0 podman[219532]: 2025-12-03 18:05:51.584823659 +0000 UTC m=+0.421826824 container start 6889f55444d383dce8aa479dab8b865eaf31257e87cf14dab18f441107f9b6d4 (image=quay.io/ceph/ceph:v18, name=gallant_shirley, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  3 18:05:51 compute-0 podman[219532]: 2025-12-03 18:05:51.590038617 +0000 UTC m=+0.427041822 container attach 6889f55444d383dce8aa479dab8b865eaf31257e87cf14dab18f441107f9b6d4 (image=quay.io/ceph/ceph:v18, name=gallant_shirley, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  3 18:05:51 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v112: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:05:51 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:05:52 compute-0 ceph-mgr[193091]: log_channel(audit) log [DBG] : from='client.14260 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec  3 18:05:52 compute-0 gallant_shirley[219574]: 
Dec  3 18:05:52 compute-0 gallant_shirley[219574]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec  3 18:05:52 compute-0 systemd[1]: libpod-6889f55444d383dce8aa479dab8b865eaf31257e87cf14dab18f441107f9b6d4.scope: Deactivated successfully.
Dec  3 18:05:52 compute-0 podman[219532]: 2025-12-03 18:05:52.200728025 +0000 UTC m=+1.037731200 container died 6889f55444d383dce8aa479dab8b865eaf31257e87cf14dab18f441107f9b6d4 (image=quay.io/ceph/ceph:v18, name=gallant_shirley, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  3 18:05:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-28de96d10d1162d7c2efb3198ab30767e02b1d020e417c156d143c8649fbfc41-merged.mount: Deactivated successfully.
Dec  3 18:05:52 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Dec  3 18:05:52 compute-0 podman[219532]: 2025-12-03 18:05:52.271138229 +0000 UTC m=+1.108141394 container remove 6889f55444d383dce8aa479dab8b865eaf31257e87cf14dab18f441107f9b6d4 (image=quay.io/ceph/ceph:v18, name=gallant_shirley, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Dec  3 18:05:52 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Dec  3 18:05:52 compute-0 systemd[1]: libpod-conmon-6889f55444d383dce8aa479dab8b865eaf31257e87cf14dab18f441107f9b6d4.scope: Deactivated successfully.
Dec  3 18:05:52 compute-0 podman[219720]: 2025-12-03 18:05:52.293239151 +0000 UTC m=+0.068714454 container create 6dd1841fff757dce8d46788b26026e0885f67a578ba0328c93f6f9799e522083 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_spence, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec  3 18:05:52 compute-0 systemd[1]: Started libpod-conmon-6dd1841fff757dce8d46788b26026e0885f67a578ba0328c93f6f9799e522083.scope.
Dec  3 18:05:52 compute-0 podman[219720]: 2025-12-03 18:05:52.266026395 +0000 UTC m=+0.041501728 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:05:52 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:05:52 compute-0 podman[219720]: 2025-12-03 18:05:52.393143268 +0000 UTC m=+0.168618601 container init 6dd1841fff757dce8d46788b26026e0885f67a578ba0328c93f6f9799e522083 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_spence, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec  3 18:05:52 compute-0 podman[219720]: 2025-12-03 18:05:52.401855692 +0000 UTC m=+0.177331005 container start 6dd1841fff757dce8d46788b26026e0885f67a578ba0328c93f6f9799e522083 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_spence, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:05:52 compute-0 podman[219720]: 2025-12-03 18:05:52.407710105 +0000 UTC m=+0.183185448 container attach 6dd1841fff757dce8d46788b26026e0885f67a578ba0328c93f6f9799e522083 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_spence, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec  3 18:05:52 compute-0 happy_spence[219745]: 167 167
Dec  3 18:05:52 compute-0 systemd[1]: libpod-6dd1841fff757dce8d46788b26026e0885f67a578ba0328c93f6f9799e522083.scope: Deactivated successfully.
Dec  3 18:05:52 compute-0 podman[219720]: 2025-12-03 18:05:52.411702053 +0000 UTC m=+0.187177356 container died 6dd1841fff757dce8d46788b26026e0885f67a578ba0328c93f6f9799e522083 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_spence, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:05:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d6549ec703b9943b84f65ae1fcf8325b42f9e305acce70d9d658387aa8a8f19-merged.mount: Deactivated successfully.
Dec  3 18:05:52 compute-0 podman[219720]: 2025-12-03 18:05:52.472041511 +0000 UTC m=+0.247516854 container remove 6dd1841fff757dce8d46788b26026e0885f67a578ba0328c93f6f9799e522083 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_spence, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec  3 18:05:52 compute-0 systemd[1]: libpod-conmon-6dd1841fff757dce8d46788b26026e0885f67a578ba0328c93f6f9799e522083.scope: Deactivated successfully.
Dec  3 18:05:52 compute-0 podman[219769]: 2025-12-03 18:05:52.659685087 +0000 UTC m=+0.049405562 container create 6415134be6d0e53318189802032f1203b2312f3031f64dc8824232903fb1484a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_chaplygin, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:05:52 compute-0 systemd[1]: Started libpod-conmon-6415134be6d0e53318189802032f1203b2312f3031f64dc8824232903fb1484a.scope.
Dec  3 18:05:52 compute-0 podman[219769]: 2025-12-03 18:05:52.6406339 +0000 UTC m=+0.030354405 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:05:52 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:05:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19ce062f8c6f3769091d4023a4dbdb135da7b85bc63d97675096addf64d1dd5c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19ce062f8c6f3769091d4023a4dbdb135da7b85bc63d97675096addf64d1dd5c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19ce062f8c6f3769091d4023a4dbdb135da7b85bc63d97675096addf64d1dd5c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19ce062f8c6f3769091d4023a4dbdb135da7b85bc63d97675096addf64d1dd5c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:52 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Dec  3 18:05:52 compute-0 podman[219769]: 2025-12-03 18:05:52.782017743 +0000 UTC m=+0.171738248 container init 6415134be6d0e53318189802032f1203b2312f3031f64dc8824232903fb1484a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  3 18:05:52 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Dec  3 18:05:52 compute-0 podman[219769]: 2025-12-03 18:05:52.803330715 +0000 UTC m=+0.193051230 container start 6415134be6d0e53318189802032f1203b2312f3031f64dc8824232903fb1484a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_chaplygin, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:05:52 compute-0 podman[219769]: 2025-12-03 18:05:52.808551363 +0000 UTC m=+0.198271858 container attach 6415134be6d0e53318189802032f1203b2312f3031f64dc8824232903fb1484a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_chaplygin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:05:53 compute-0 python3[219814]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c1caf3ba-b2a5-5005-a11e-e955c344dccc -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:05:53 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 4.1e deep-scrub starts
Dec  3 18:05:53 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 4.1e deep-scrub ok
Dec  3 18:05:53 compute-0 podman[219815]: 2025-12-03 18:05:53.243094567 +0000 UTC m=+0.033078231 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:05:53 compute-0 podman[219815]: 2025-12-03 18:05:53.377670704 +0000 UTC m=+0.167654348 container create 5bddc97919637ddad625b428c1d3a060f391dadbe81bda48d3243eeb6ce42a82 (image=quay.io/ceph/ceph:v18, name=determined_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:05:53 compute-0 systemd[1]: Started libpod-conmon-5bddc97919637ddad625b428c1d3a060f391dadbe81bda48d3243eeb6ce42a82.scope.
Dec  3 18:05:53 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:05:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acccc55a8e77599014cea70c0ae0a0fc54668dc012749c564e1e8a5076d65967/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acccc55a8e77599014cea70c0ae0a0fc54668dc012749c564e1e8a5076d65967/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:53 compute-0 podman[219815]: 2025-12-03 18:05:53.474039794 +0000 UTC m=+0.264023428 container init 5bddc97919637ddad625b428c1d3a060f391dadbe81bda48d3243eeb6ce42a82 (image=quay.io/ceph/ceph:v18, name=determined_darwin, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  3 18:05:53 compute-0 podman[219815]: 2025-12-03 18:05:53.481426095 +0000 UTC m=+0.271409739 container start 5bddc97919637ddad625b428c1d3a060f391dadbe81bda48d3243eeb6ce42a82 (image=quay.io/ceph/ceph:v18, name=determined_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec  3 18:05:53 compute-0 podman[219815]: 2025-12-03 18:05:53.486250123 +0000 UTC m=+0.276233767 container attach 5bddc97919637ddad625b428c1d3a060f391dadbe81bda48d3243eeb6ce42a82 (image=quay.io/ceph/ceph:v18, name=determined_darwin, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec  3 18:05:53 compute-0 ansible-async_wrapper.py[219032]: Done in kid B.
Dec  3 18:05:53 compute-0 compassionate_chaplygin[219784]: {
Dec  3 18:05:53 compute-0 compassionate_chaplygin[219784]:    "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
Dec  3 18:05:53 compute-0 compassionate_chaplygin[219784]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:05:53 compute-0 compassionate_chaplygin[219784]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 18:05:53 compute-0 compassionate_chaplygin[219784]:        "osd_id": 1,
Dec  3 18:05:53 compute-0 compassionate_chaplygin[219784]:        "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:05:53 compute-0 compassionate_chaplygin[219784]:        "type": "bluestore"
Dec  3 18:05:53 compute-0 compassionate_chaplygin[219784]:    },
Dec  3 18:05:53 compute-0 compassionate_chaplygin[219784]:    "2abec9de-afba-437e-9a17-384a1dd8cd50": {
Dec  3 18:05:53 compute-0 compassionate_chaplygin[219784]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:05:53 compute-0 compassionate_chaplygin[219784]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 18:05:53 compute-0 compassionate_chaplygin[219784]:        "osd_id": 2,
Dec  3 18:05:53 compute-0 compassionate_chaplygin[219784]:        "osd_uuid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:05:53 compute-0 compassionate_chaplygin[219784]:        "type": "bluestore"
Dec  3 18:05:53 compute-0 compassionate_chaplygin[219784]:    },
Dec  3 18:05:53 compute-0 compassionate_chaplygin[219784]:    "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": {
Dec  3 18:05:53 compute-0 compassionate_chaplygin[219784]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:05:53 compute-0 compassionate_chaplygin[219784]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 18:05:53 compute-0 compassionate_chaplygin[219784]:        "osd_id": 0,
Dec  3 18:05:53 compute-0 compassionate_chaplygin[219784]:        "osd_uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:05:53 compute-0 compassionate_chaplygin[219784]:        "type": "bluestore"
Dec  3 18:05:53 compute-0 compassionate_chaplygin[219784]:    }
Dec  3 18:05:53 compute-0 compassionate_chaplygin[219784]: }
Dec  3 18:05:53 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v113: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:05:53 compute-0 systemd[1]: libpod-6415134be6d0e53318189802032f1203b2312f3031f64dc8824232903fb1484a.scope: Deactivated successfully.
Dec  3 18:05:53 compute-0 systemd[1]: libpod-6415134be6d0e53318189802032f1203b2312f3031f64dc8824232903fb1484a.scope: Consumed 1.023s CPU time.
Dec  3 18:05:53 compute-0 podman[219769]: 2025-12-03 18:05:53.839819564 +0000 UTC m=+1.229540039 container died 6415134be6d0e53318189802032f1203b2312f3031f64dc8824232903fb1484a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:05:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-19ce062f8c6f3769091d4023a4dbdb135da7b85bc63d97675096addf64d1dd5c-merged.mount: Deactivated successfully.
Dec  3 18:05:53 compute-0 podman[219769]: 2025-12-03 18:05:53.90950146 +0000 UTC m=+1.299221925 container remove 6415134be6d0e53318189802032f1203b2312f3031f64dc8824232903fb1484a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_chaplygin, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Dec  3 18:05:53 compute-0 systemd[1]: libpod-conmon-6415134be6d0e53318189802032f1203b2312f3031f64dc8824232903fb1484a.scope: Deactivated successfully.
Dec  3 18:05:53 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:05:53 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:05:53 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:05:53 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:05:53 compute-0 ceph-mgr[193091]: [progress INFO root] update: starting ev 261ba13f-1fc7-42b4-a463-f7a8bb925172 (Updating rgw.rgw deployment (+1 -> 1))
Dec  3 18:05:53 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.pnhstw", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Dec  3 18:05:53 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.pnhstw", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec  3 18:05:53 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.pnhstw", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec  3 18:05:53 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Dec  3 18:05:53 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:05:53 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:05:53 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:05:53 compute-0 ceph-mgr[193091]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.pnhstw on compute-0
Dec  3 18:05:53 compute-0 ceph-mgr[193091]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.pnhstw on compute-0
Dec  3 18:05:54 compute-0 ceph-mgr[193091]: log_channel(audit) log [DBG] : from='client.14262 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec  3 18:05:54 compute-0 determined_darwin[219829]: 
Dec  3 18:05:54 compute-0 determined_darwin[219829]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
Dec  3 18:05:54 compute-0 systemd[1]: libpod-5bddc97919637ddad625b428c1d3a060f391dadbe81bda48d3243eeb6ce42a82.scope: Deactivated successfully.
Dec  3 18:05:54 compute-0 podman[219815]: 2025-12-03 18:05:54.101525875 +0000 UTC m=+0.891509619 container died 5bddc97919637ddad625b428c1d3a060f391dadbe81bda48d3243eeb6ce42a82 (image=quay.io/ceph/ceph:v18, name=determined_darwin, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec  3 18:05:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-acccc55a8e77599014cea70c0ae0a0fc54668dc012749c564e1e8a5076d65967-merged.mount: Deactivated successfully.
Dec  3 18:05:54 compute-0 podman[219815]: 2025-12-03 18:05:54.169637912 +0000 UTC m=+0.959621556 container remove 5bddc97919637ddad625b428c1d3a060f391dadbe81bda48d3243eeb6ce42a82 (image=quay.io/ceph/ceph:v18, name=determined_darwin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:05:54 compute-0 systemd[1]: libpod-conmon-5bddc97919637ddad625b428c1d3a060f391dadbe81bda48d3243eeb6ce42a82.scope: Deactivated successfully.
Dec  3 18:05:54 compute-0 podman[220044]: 2025-12-03 18:05:54.773867373 +0000 UTC m=+0.049237497 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:05:55 compute-0 python3[220081]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c1caf3ba-b2a5-5005-a11e-e955c344dccc -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:05:55 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Dec  3 18:05:55 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Dec  3 18:05:55 compute-0 podman[220044]: 2025-12-03 18:05:55.289991715 +0000 UTC m=+0.565361849 container create 1e2d76ebbc83926a3ab4cc43a621033e74cafde69b2bb8d0493e60e4dcd704c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec  3 18:05:55 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Dec  3 18:05:55 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Dec  3 18:05:55 compute-0 podman[220082]: 2025-12-03 18:05:55.32407005 +0000 UTC m=+0.133788898 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:05:55 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:05:55 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:05:55 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.pnhstw", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec  3 18:05:55 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.pnhstw", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec  3 18:05:55 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:05:55 compute-0 ceph-mon[192802]: Deploying daemon rgw.rgw.compute-0.pnhstw on compute-0
Dec  3 18:05:55 compute-0 podman[220082]: 2025-12-03 18:05:55.485065974 +0000 UTC m=+0.294784732 container create 0b49cf6c42d5da9a8fe7febaaac636f83db5e9ad329e539403a03b1c1aab3174 (image=quay.io/ceph/ceph:v18, name=exciting_tesla, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec  3 18:05:55 compute-0 systemd[1]: Started libpod-conmon-1e2d76ebbc83926a3ab4cc43a621033e74cafde69b2bb8d0493e60e4dcd704c0.scope.
Dec  3 18:05:55 compute-0 systemd[1]: Started libpod-conmon-0b49cf6c42d5da9a8fe7febaaac636f83db5e9ad329e539403a03b1c1aab3174.scope.
Dec  3 18:05:55 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:05:55 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:05:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d399726bad405f9b7e906552b537a418c17a539e36b8a2b2a36b97a3ca4d832/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d399726bad405f9b7e906552b537a418c17a539e36b8a2b2a36b97a3ca4d832/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:55 compute-0 podman[220044]: 2025-12-03 18:05:55.587041532 +0000 UTC m=+0.862411716 container init 1e2d76ebbc83926a3ab4cc43a621033e74cafde69b2bb8d0493e60e4dcd704c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_volhard, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:05:55 compute-0 podman[220044]: 2025-12-03 18:05:55.599536017 +0000 UTC m=+0.874906111 container start 1e2d76ebbc83926a3ab4cc43a621033e74cafde69b2bb8d0493e60e4dcd704c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_volhard, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:05:55 compute-0 funny_volhard[220098]: 167 167
Dec  3 18:05:55 compute-0 systemd[1]: libpod-1e2d76ebbc83926a3ab4cc43a621033e74cafde69b2bb8d0493e60e4dcd704c0.scope: Deactivated successfully.
Dec  3 18:05:55 compute-0 podman[220082]: 2025-12-03 18:05:55.60739511 +0000 UTC m=+0.417113898 container init 0b49cf6c42d5da9a8fe7febaaac636f83db5e9ad329e539403a03b1c1aab3174 (image=quay.io/ceph/ceph:v18, name=exciting_tesla, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:05:55 compute-0 podman[220082]: 2025-12-03 18:05:55.61435466 +0000 UTC m=+0.424073418 container start 0b49cf6c42d5da9a8fe7febaaac636f83db5e9ad329e539403a03b1c1aab3174 (image=quay.io/ceph/ceph:v18, name=exciting_tesla, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:05:55 compute-0 podman[220044]: 2025-12-03 18:05:55.620521341 +0000 UTC m=+0.895891455 container attach 1e2d76ebbc83926a3ab4cc43a621033e74cafde69b2bb8d0493e60e4dcd704c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_volhard, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:05:55 compute-0 podman[220044]: 2025-12-03 18:05:55.621792712 +0000 UTC m=+0.897162796 container died 1e2d76ebbc83926a3ab4cc43a621033e74cafde69b2bb8d0493e60e4dcd704c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  3 18:05:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-561fd0573b665bb1797caec110ad10baf1db39b4a16e6bfbd4e438c690cca325-merged.mount: Deactivated successfully.
Dec  3 18:05:55 compute-0 podman[220082]: 2025-12-03 18:05:55.674123455 +0000 UTC m=+0.483842243 container attach 0b49cf6c42d5da9a8fe7febaaac636f83db5e9ad329e539403a03b1c1aab3174 (image=quay.io/ceph/ceph:v18, name=exciting_tesla, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec  3 18:05:55 compute-0 podman[220044]: 2025-12-03 18:05:55.705879602 +0000 UTC m=+0.981249696 container remove 1e2d76ebbc83926a3ab4cc43a621033e74cafde69b2bb8d0493e60e4dcd704c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  3 18:05:55 compute-0 systemd[1]: libpod-conmon-1e2d76ebbc83926a3ab4cc43a621033e74cafde69b2bb8d0493e60e4dcd704c0.scope: Deactivated successfully.
Dec  3 18:05:55 compute-0 systemd[1]: Reloading.
Dec  3 18:05:55 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v114: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:05:55 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 18:05:55 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 18:05:56 compute-0 systemd[1]: Reloading.
Dec  3 18:05:56 compute-0 ceph-mgr[193091]: log_channel(audit) log [DBG] : from='client.14264 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec  3 18:05:56 compute-0 exciting_tesla[220102]: 
Dec  3 18:05:56 compute-0 exciting_tesla[220102]: [{"container_id": "9a4822c45260", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "0.37%", "created": "2025-12-03T18:03:48.274332Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2025-12-03T18:03:48.340742Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-03T18:05:42.645149Z", "memory_usage": 11639193, "ports": [], "service_name": "crash", "started": "2025-12-03T18:03:48.123693Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc@crash.compute-0", "version": "18.2.7"}, {"container_id": "6854398d0c07", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "26.10%", "created": "2025-12-03T18:02:29.158488Z", "daemon_id": "compute-0.etccde", "daemon_name": "mgr.compute-0.etccde", "daemon_type": "mgr", "events": ["2025-12-03T18:04:54.852832Z daemon:mgr.compute-0.etccde [INFO] \"Reconfigured mgr.compute-0.etccde on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-03T18:05:42.644978Z", "memory_usage": 550082969, "ports": [9283, 8765], "service_name": "mgr", "started": "2025-12-03T18:02:28.980982Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc@mgr.compute-0.etccde", "version": "18.2.7"}, {"container_id": "c4418ca0ee5d", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "2.86%", "created": "2025-12-03T18:02:22.512332Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2025-12-03T18:04:53.571694Z daemon:mon.compute-0 [INFO] \"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-03T18:05:42.644681Z", "memory_request": 2147483648, "memory_usage": 42708500, "ports": [], "service_name": "mon", "started": "2025-12-03T18:02:26.051635Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc@mon.compute-0", "version": "18.2.7"}, {"container_id": "3a6bbdaae9a7", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "2.99%", "created": "2025-12-03T18:04:18.183027Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2025-12-03T18:04:18.400308Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-03T18:05:42.645329Z", "memory_request": 4294967296, "memory_usage": 66710405, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-12-03T18:04:18.029910Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc@osd.0", "version": "18.2.7"}, {"container_id": "831ecf787892", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "3.50%", "created": "2025-12-03T18:04:26.881229Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2025-12-03T18:04:26.995818Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-03T18:05:42.645659Z", "memory_request": 4294967296, "memory_usage": 67570237, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-12-03T18:04:26.602290Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc@osd.1", "version": "18.2.7"}, {"container_id": "9f3fd301463e", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "3.31%", "created": "2025-12-03T18:04:33.254377Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2025-12-03T18:04:33.434261Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-03T18:05:42.645795Z", "memory_request": 4294967296, "memory_usage": 65672314, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-12-03T18:04:33.055290Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc@osd.2", "version": "18.2.7"}]
Dec  3 18:05:56 compute-0 podman[220082]: 2025-12-03 18:05:56.252414879 +0000 UTC m=+1.062133647 container died 0b49cf6c42d5da9a8fe7febaaac636f83db5e9ad329e539403a03b1c1aab3174 (image=quay.io/ceph/ceph:v18, name=exciting_tesla, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:05:56 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 6.3 deep-scrub starts
Dec  3 18:05:56 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 6.3 deep-scrub ok
Dec  3 18:05:56 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 18:05:56 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 18:05:56 compute-0 systemd[1]: libpod-0b49cf6c42d5da9a8fe7febaaac636f83db5e9ad329e539403a03b1c1aab3174.scope: Deactivated successfully.
Dec  3 18:05:56 compute-0 systemd[1]: Starting Ceph rgw.rgw.compute-0.pnhstw for c1caf3ba-b2a5-5005-a11e-e955c344dccc...
Dec  3 18:05:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d399726bad405f9b7e906552b537a418c17a539e36b8a2b2a36b97a3ca4d832-merged.mount: Deactivated successfully.
Dec  3 18:05:56 compute-0 podman[220082]: 2025-12-03 18:05:56.702321699 +0000 UTC m=+1.512040457 container remove 0b49cf6c42d5da9a8fe7febaaac636f83db5e9ad329e539403a03b1c1aab3174 (image=quay.io/ceph/ceph:v18, name=exciting_tesla, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:05:56 compute-0 systemd[1]: libpod-conmon-0b49cf6c42d5da9a8fe7febaaac636f83db5e9ad329e539403a03b1c1aab3174.scope: Deactivated successfully.
Dec  3 18:05:56 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Dec  3 18:05:56 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Dec  3 18:05:56 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:05:56 compute-0 podman[220275]: 2025-12-03 18:05:56.970344895 +0000 UTC m=+0.052027546 container create 56b98747da49d3ed96139437cadb2e24637738785623eacaa21e40ec9afa0e76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-rgw-rgw-compute-0-pnhstw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec  3 18:05:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c4f7b1726086973855a96d2e40a6c55800c146d9818488281904516de014133/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c4f7b1726086973855a96d2e40a6c55800c146d9818488281904516de014133/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c4f7b1726086973855a96d2e40a6c55800c146d9818488281904516de014133/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c4f7b1726086973855a96d2e40a6c55800c146d9818488281904516de014133/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.pnhstw supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:57 compute-0 podman[220275]: 2025-12-03 18:05:56.949236097 +0000 UTC m=+0.030918828 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:05:57 compute-0 podman[220275]: 2025-12-03 18:05:57.054546247 +0000 UTC m=+0.136228918 container init 56b98747da49d3ed96139437cadb2e24637738785623eacaa21e40ec9afa0e76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-rgw-rgw-compute-0-pnhstw, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec  3 18:05:57 compute-0 podman[220275]: 2025-12-03 18:05:57.067160687 +0000 UTC m=+0.148843338 container start 56b98747da49d3ed96139437cadb2e24637738785623eacaa21e40ec9afa0e76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-rgw-rgw-compute-0-pnhstw, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:05:57 compute-0 bash[220275]: 56b98747da49d3ed96139437cadb2e24637738785623eacaa21e40ec9afa0e76
Dec  3 18:05:57 compute-0 systemd[1]: Started Ceph rgw.rgw.compute-0.pnhstw for c1caf3ba-b2a5-5005-a11e-e955c344dccc.
Dec  3 18:05:57 compute-0 radosgw[220294]: deferred set uid:gid to 167:167 (ceph:ceph)
Dec  3 18:05:57 compute-0 radosgw[220294]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process radosgw, pid 2
Dec  3 18:05:57 compute-0 radosgw[220294]: framework: beast
Dec  3 18:05:57 compute-0 radosgw[220294]: framework conf key: endpoint, val: 192.168.122.100:8082
Dec  3 18:05:57 compute-0 radosgw[220294]: init_numa not setting numa affinity
Dec  3 18:05:57 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:05:57 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:05:57 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:05:57 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:05:57 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Dec  3 18:05:57 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:05:57 compute-0 ceph-mgr[193091]: [progress INFO root] complete: finished ev 261ba13f-1fc7-42b4-a463-f7a8bb925172 (Updating rgw.rgw deployment (+1 -> 1))
Dec  3 18:05:57 compute-0 ceph-mgr[193091]: [progress INFO root] Completed event 261ba13f-1fc7-42b4-a463-f7a8bb925172 (Updating rgw.rgw deployment (+1 -> 1)) in 3 seconds
Dec  3 18:05:57 compute-0 ceph-mgr[193091]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0
Dec  3 18:05:57 compute-0 ceph-mgr[193091]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Dec  3 18:05:57 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Dec  3 18:05:57 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:05:57 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Dec  3 18:05:57 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:05:57 compute-0 ceph-mgr[193091]: [progress INFO root] update: starting ev a694fc5e-85f4-4423-b962-3e457d69a9ae (Updating mds.cephfs deployment (+1 -> 1))
Dec  3 18:05:57 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.oeacqo", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Dec  3 18:05:57 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.oeacqo", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec  3 18:05:57 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.oeacqo", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec  3 18:05:57 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:05:57 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:05:57 compute-0 ceph-mgr[193091]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.oeacqo on compute-0
Dec  3 18:05:57 compute-0 ceph-mgr[193091]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.oeacqo on compute-0
Dec  3 18:05:57 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:05:57 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:05:57 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:05:57 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:05:57 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:05:57 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.oeacqo", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec  3 18:05:57 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.oeacqo", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec  3 18:05:57 compute-0 python3[220481]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c1caf3ba-b2a5-5005-a11e-e955c344dccc -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:05:57 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v115: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:05:57 compute-0 podman[220515]: 2025-12-03 18:05:57.838764037 +0000 UTC m=+0.052530929 container create e5502c4bde1c842651ec6375410c3650a475e7dbe7ed03627369ffdb221814ad (image=quay.io/ceph/ceph:v18, name=epic_taussig, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec  3 18:05:57 compute-0 podman[220531]: 2025-12-03 18:05:57.880737725 +0000 UTC m=+0.054215120 container create d21830f8f9976c4900912c55e890142351484b761db4071329ea26ce41b28360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_poitras, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec  3 18:05:57 compute-0 systemd[1]: Started libpod-conmon-e5502c4bde1c842651ec6375410c3650a475e7dbe7ed03627369ffdb221814ad.scope.
Dec  3 18:05:57 compute-0 podman[220515]: 2025-12-03 18:05:57.8213946 +0000 UTC m=+0.035161472 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:05:57 compute-0 systemd[1]: Started libpod-conmon-d21830f8f9976c4900912c55e890142351484b761db4071329ea26ce41b28360.scope.
Dec  3 18:05:57 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:05:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bad6247a7334531198aa08aac6e7ac70fe9edbaabe29f93d1e59d97adcb523e2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bad6247a7334531198aa08aac6e7ac70fe9edbaabe29f93d1e59d97adcb523e2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:57 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:05:57 compute-0 podman[220515]: 2025-12-03 18:05:57.953903667 +0000 UTC m=+0.167670569 container init e5502c4bde1c842651ec6375410c3650a475e7dbe7ed03627369ffdb221814ad (image=quay.io/ceph/ceph:v18, name=epic_taussig, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:05:57 compute-0 podman[220531]: 2025-12-03 18:05:57.861961794 +0000 UTC m=+0.035439219 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:05:57 compute-0 podman[220531]: 2025-12-03 18:05:57.962830125 +0000 UTC m=+0.136307540 container init d21830f8f9976c4900912c55e890142351484b761db4071329ea26ce41b28360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_poitras, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:05:57 compute-0 podman[220515]: 2025-12-03 18:05:57.964540788 +0000 UTC m=+0.178307670 container start e5502c4bde1c842651ec6375410c3650a475e7dbe7ed03627369ffdb221814ad (image=quay.io/ceph/ceph:v18, name=epic_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  3 18:05:57 compute-0 podman[220515]: 2025-12-03 18:05:57.968122365 +0000 UTC m=+0.181889247 container attach e5502c4bde1c842651ec6375410c3650a475e7dbe7ed03627369ffdb221814ad (image=quay.io/ceph/ceph:v18, name=epic_taussig, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec  3 18:05:57 compute-0 podman[220531]: 2025-12-03 18:05:57.971960789 +0000 UTC m=+0.145438184 container start d21830f8f9976c4900912c55e890142351484b761db4071329ea26ce41b28360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_poitras, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec  3 18:05:57 compute-0 podman[220531]: 2025-12-03 18:05:57.976035979 +0000 UTC m=+0.149513424 container attach d21830f8f9976c4900912c55e890142351484b761db4071329ea26ce41b28360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:05:57 compute-0 competent_poitras[220553]: 167 167
Dec  3 18:05:57 compute-0 systemd[1]: libpod-d21830f8f9976c4900912c55e890142351484b761db4071329ea26ce41b28360.scope: Deactivated successfully.
Dec  3 18:05:57 compute-0 podman[220531]: 2025-12-03 18:05:57.977926965 +0000 UTC m=+0.151404360 container died d21830f8f9976c4900912c55e890142351484b761db4071329ea26ce41b28360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:05:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-94c2cee86974afdf63bcbbf858187ec78295b17ca75f98ae0bb7634d2edd2394-merged.mount: Deactivated successfully.
Dec  3 18:05:58 compute-0 podman[220531]: 2025-12-03 18:05:58.022418135 +0000 UTC m=+0.195895520 container remove d21830f8f9976c4900912c55e890142351484b761db4071329ea26ce41b28360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:05:58 compute-0 systemd[1]: libpod-conmon-d21830f8f9976c4900912c55e890142351484b761db4071329ea26ce41b28360.scope: Deactivated successfully.
Dec  3 18:05:58 compute-0 systemd[1]: Reloading.
Dec  3 18:05:58 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Dec  3 18:05:58 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Dec  3 18:05:58 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Dec  3 18:05:58 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0) v1
Dec  3 18:05:58 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/654959158' entity='client.rgw.rgw.compute-0.pnhstw' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Dec  3 18:05:58 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 18:05:58 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 18:05:58 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 43 pg[8.0( empty local-lis/les=0/0 n=0 ec=43/43 lis/c=0/0 les/c/f=0/0/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:05:58 compute-0 ceph-mon[192802]: Saving service rgw.rgw spec with placement compute-0
Dec  3 18:05:58 compute-0 ceph-mon[192802]: Deploying daemon mds.cephfs.compute-0.oeacqo on compute-0
Dec  3 18:05:58 compute-0 ceph-mon[192802]: from='client.? 192.168.122.100:0/654959158' entity='client.rgw.rgw.compute-0.pnhstw' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Dec  3 18:05:58 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Dec  3 18:05:58 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3290528147' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec  3 18:05:58 compute-0 epic_taussig[220548]: 
Dec  3 18:05:58 compute-0 epic_taussig[220548]: {"fsid":"c1caf3ba-b2a5-5005-a11e-e955c344dccc","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":212,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":43,"num_osds":3,"num_up_osds":3,"osd_up_since":1764785081,"num_in_osds":3,"osd_in_since":1764785045,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":193}],"num_pgs":193,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":84168704,"bytes_avail":64327757824,"bytes_total":64411926528},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":5,"modified":"2025-12-03T18:05:43.801527+0000","services":{}},"progress_events":{"a694fc5e-85f4-4423-b962-3e457d69a9ae":{"message":"Updating mds.cephfs deployment (+1 -> 1) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Dec  3 18:05:58 compute-0 systemd[1]: Reloading.
Dec  3 18:05:58 compute-0 podman[220515]: 2025-12-03 18:05:58.627567848 +0000 UTC m=+0.841334750 container died e5502c4bde1c842651ec6375410c3650a475e7dbe7ed03627369ffdb221814ad (image=quay.io/ceph/ceph:v18, name=epic_taussig, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  3 18:05:58 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 18:05:58 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 18:05:59 compute-0 systemd[1]: libpod-e5502c4bde1c842651ec6375410c3650a475e7dbe7ed03627369ffdb221814ad.scope: Deactivated successfully.
Dec  3 18:05:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-bad6247a7334531198aa08aac6e7ac70fe9edbaabe29f93d1e59d97adcb523e2-merged.mount: Deactivated successfully.
Dec  3 18:05:59 compute-0 podman[220515]: 2025-12-03 18:05:59.048634992 +0000 UTC m=+1.262401874 container remove e5502c4bde1c842651ec6375410c3650a475e7dbe7ed03627369ffdb221814ad (image=quay.io/ceph/ceph:v18, name=epic_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec  3 18:05:59 compute-0 systemd[1]: Starting Ceph mds.cephfs.compute-0.oeacqo for c1caf3ba-b2a5-5005-a11e-e955c344dccc...
Dec  3 18:05:59 compute-0 systemd[1]: libpod-conmon-e5502c4bde1c842651ec6375410c3650a475e7dbe7ed03627369ffdb221814ad.scope: Deactivated successfully.
Dec  3 18:05:59 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Dec  3 18:05:59 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/654959158' entity='client.rgw.rgw.compute-0.pnhstw' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Dec  3 18:05:59 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Dec  3 18:05:59 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Dec  3 18:05:59 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 44 pg[8.0( empty local-lis/les=43/44 n=0 ec=43/43 lis/c=0/0 les/c/f=0/0/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:05:59 compute-0 ceph-mgr[193091]: [progress INFO root] Writing back 11 completed events
Dec  3 18:05:59 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Dec  3 18:05:59 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:05:59 compute-0 ceph-mgr[193091]: [progress WARNING root] Starting Global Recovery Event,1 pgs not in active + clean state
Dec  3 18:05:59 compute-0 podman[220729]: 2025-12-03 18:05:59.446745153 +0000 UTC m=+0.071417670 container create 185f532fab0a605878bd84949b0d506db0213f758d7bc5e3ed2f921dd531981e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mds-cephfs-compute-0-oeacqo, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:05:59 compute-0 ceph-mon[192802]: from='client.? 192.168.122.100:0/654959158' entity='client.rgw.rgw.compute-0.pnhstw' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Dec  3 18:05:59 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:05:59 compute-0 podman[220729]: 2025-12-03 18:05:59.420703276 +0000 UTC m=+0.045375813 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:05:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/053fa79134bc709b05e56760c0520a669ef05e06873ee37cfba1aaaac5c29ae9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/053fa79134bc709b05e56760c0520a669ef05e06873ee37cfba1aaaac5c29ae9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/053fa79134bc709b05e56760c0520a669ef05e06873ee37cfba1aaaac5c29ae9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/053fa79134bc709b05e56760c0520a669ef05e06873ee37cfba1aaaac5c29ae9/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.oeacqo supports timestamps until 2038 (0x7fffffff)
Dec  3 18:05:59 compute-0 podman[220729]: 2025-12-03 18:05:59.588665 +0000 UTC m=+0.213337497 container init 185f532fab0a605878bd84949b0d506db0213f758d7bc5e3ed2f921dd531981e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mds-cephfs-compute-0-oeacqo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:05:59 compute-0 podman[220729]: 2025-12-03 18:05:59.603003101 +0000 UTC m=+0.227675578 container start 185f532fab0a605878bd84949b0d506db0213f758d7bc5e3ed2f921dd531981e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mds-cephfs-compute-0-oeacqo, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:05:59 compute-0 bash[220729]: 185f532fab0a605878bd84949b0d506db0213f758d7bc5e3ed2f921dd531981e
Dec  3 18:05:59 compute-0 systemd[1]: Started Ceph mds.cephfs.compute-0.oeacqo for c1caf3ba-b2a5-5005-a11e-e955c344dccc.
Dec  3 18:05:59 compute-0 ceph-mds[220747]: set uid:gid to 167:167 (ceph:ceph)
Dec  3 18:05:59 compute-0 ceph-mds[220747]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Dec  3 18:05:59 compute-0 ceph-mds[220747]: main not setting numa affinity
Dec  3 18:05:59 compute-0 ceph-mds[220747]: pidfile_write: ignore empty --pid-file
Dec  3 18:05:59 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mds-cephfs-compute-0-oeacqo[220743]: starting mds.cephfs.compute-0.oeacqo at 
Dec  3 18:05:59 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:05:59 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:05:59 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:05:59 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:05:59 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Dec  3 18:05:59 compute-0 ceph-mds[220747]: mds.cephfs.compute-0.oeacqo Updating MDS map to version 2 from mon.0
Dec  3 18:05:59 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:05:59 compute-0 ceph-mgr[193091]: [progress INFO root] complete: finished ev a694fc5e-85f4-4423-b962-3e457d69a9ae (Updating mds.cephfs deployment (+1 -> 1))
Dec  3 18:05:59 compute-0 ceph-mgr[193091]: [progress INFO root] Completed event a694fc5e-85f4-4423-b962-3e457d69a9ae (Updating mds.cephfs deployment (+1 -> 1)) in 3 seconds
Dec  3 18:05:59 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0) v1
Dec  3 18:05:59 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:05:59 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Dec  3 18:05:59 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:05:59 compute-0 podman[158200]: time="2025-12-03T18:05:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:05:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:05:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32819 "" "Go-http-client/1.1"
Dec  3 18:05:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:05:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6774 "" "Go-http-client/1.1"
Dec  3 18:05:59 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v118: 194 pgs: 1 unknown, 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:06:00 compute-0 python3[220842]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c1caf3ba-b2a5-5005-a11e-e955c344dccc -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:06:00 compute-0 podman[220898]: 2025-12-03 18:06:00.11412649 +0000 UTC m=+0.047200067 container create b1d68533254c2ec44c46aa13a89d6a0f9cb8fa438a308ce699623fbec071616d (image=quay.io/ceph/ceph:v18, name=epic_borg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:06:00 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Dec  3 18:06:00 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Dec  3 18:06:00 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Dec  3 18:06:00 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Dec  3 18:06:00 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/654959158' entity='client.rgw.rgw.compute-0.pnhstw' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec  3 18:06:00 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 45 pg[9.0( empty local-lis/les=0/0 n=0 ec=45/45 lis/c=0/0 les/c/f=0/0/0 sis=45) [1] r=0 lpr=45 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:00 compute-0 systemd[1]: Started libpod-conmon-b1d68533254c2ec44c46aa13a89d6a0f9cb8fa438a308ce699623fbec071616d.scope.
Dec  3 18:06:00 compute-0 podman[220898]: 2025-12-03 18:06:00.096642452 +0000 UTC m=+0.029716049 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:06:00 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:06:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/222d96aec00f732da3e78962df46d65e5ac0f45bf28da3fc1607cd80553b1243/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:06:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/222d96aec00f732da3e78962df46d65e5ac0f45bf28da3fc1607cd80553b1243/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:06:00 compute-0 podman[220898]: 2025-12-03 18:06:00.225715154 +0000 UTC m=+0.158788731 container init b1d68533254c2ec44c46aa13a89d6a0f9cb8fa438a308ce699623fbec071616d (image=quay.io/ceph/ceph:v18, name=epic_borg, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:06:00 compute-0 podman[220898]: 2025-12-03 18:06:00.236508148 +0000 UTC m=+0.169581725 container start b1d68533254c2ec44c46aa13a89d6a0f9cb8fa438a308ce699623fbec071616d (image=quay.io/ceph/ceph:v18, name=epic_borg, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec  3 18:06:00 compute-0 podman[220898]: 2025-12-03 18:06:00.240558837 +0000 UTC m=+0.173632414 container attach b1d68533254c2ec44c46aa13a89d6a0f9cb8fa438a308ce699623fbec071616d (image=quay.io/ceph/ceph:v18, name=epic_borg, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec  3 18:06:00 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Dec  3 18:06:00 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Dec  3 18:06:00 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:06:00 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:06:00 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:06:00 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:06:00 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:06:00 compute-0 ceph-mon[192802]: from='client.? 192.168.122.100:0/654959158' entity='client.rgw.rgw.compute-0.pnhstw' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec  3 18:06:00 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).mds e3 new map
Dec  3 18:06:00 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).mds e3 print_map#012e3#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-12-03T18:05:31.930236+0000#012modified#0112025-12-03T18:05:31.930775+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.oeacqo{-1:14271} state up:standby seq 1 addr [v2:192.168.122.100:6814/793185457,v1:192.168.122.100:6815/793185457] compat {c=[1],r=[1],i=[7ff]}]
Dec  3 18:06:00 compute-0 ceph-mds[220747]: mds.cephfs.compute-0.oeacqo Updating MDS map to version 3 from mon.0
Dec  3 18:06:00 compute-0 ceph-mds[220747]: mds.cephfs.compute-0.oeacqo Monitors have assigned me to become a standby.
Dec  3 18:06:00 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/793185457,v1:192.168.122.100:6815/793185457] up:boot
Dec  3 18:06:00 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.100:6814/793185457,v1:192.168.122.100:6815/793185457] as mds.0
Dec  3 18:06:00 compute-0 ceph-mon[192802]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.oeacqo assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Dec  3 18:06:00 compute-0 ceph-mon[192802]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Dec  3 18:06:00 compute-0 ceph-mon[192802]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Dec  3 18:06:00 compute-0 ceph-mon[192802]: log_channel(cluster) log [INF] : Cluster is now healthy
Dec  3 18:06:00 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Dec  3 18:06:00 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.oeacqo"} v 0) v1
Dec  3 18:06:00 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.oeacqo"}]: dispatch
Dec  3 18:06:00 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).mds e3 all = 0
Dec  3 18:06:00 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).mds e4 new map
Dec  3 18:06:00 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).mds e4 print_map#012e4#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-12-03T18:05:31.930236+0000#012modified#0112025-12-03T18:06:00.702570+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=14271}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-0.oeacqo{0:14271} state up:creating seq 1 addr [v2:192.168.122.100:6814/793185457,v1:192.168.122.100:6815/793185457] compat {c=[1],r=[1],i=[7ff]}]#012 #012 
Dec  3 18:06:00 compute-0 ceph-mds[220747]: mds.cephfs.compute-0.oeacqo Updating MDS map to version 4 from mon.0
Dec  3 18:06:00 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.oeacqo=up:creating}
Dec  3 18:06:00 compute-0 ceph-mds[220747]: mds.0.4 handle_mds_map i am now mds.0.4
Dec  3 18:06:00 compute-0 ceph-mds[220747]: mds.0.4 handle_mds_map state change up:standby --> up:creating
Dec  3 18:06:00 compute-0 ceph-mds[220747]: mds.0.cache creating system inode with ino:0x1
Dec  3 18:06:00 compute-0 ceph-mds[220747]: mds.0.cache creating system inode with ino:0x100
Dec  3 18:06:00 compute-0 ceph-mds[220747]: mds.0.cache creating system inode with ino:0x600
Dec  3 18:06:00 compute-0 ceph-mds[220747]: mds.0.cache creating system inode with ino:0x601
Dec  3 18:06:00 compute-0 ceph-mds[220747]: mds.0.cache creating system inode with ino:0x602
Dec  3 18:06:00 compute-0 ceph-mds[220747]: mds.0.cache creating system inode with ino:0x603
Dec  3 18:06:00 compute-0 ceph-mds[220747]: mds.0.cache creating system inode with ino:0x604
Dec  3 18:06:00 compute-0 ceph-mds[220747]: mds.0.cache creating system inode with ino:0x605
Dec  3 18:06:00 compute-0 ceph-mds[220747]: mds.0.cache creating system inode with ino:0x606
Dec  3 18:06:00 compute-0 ceph-mds[220747]: mds.0.cache creating system inode with ino:0x607
Dec  3 18:06:00 compute-0 ceph-mds[220747]: mds.0.cache creating system inode with ino:0x608
Dec  3 18:06:00 compute-0 ceph-mds[220747]: mds.0.cache creating system inode with ino:0x609
Dec  3 18:06:00 compute-0 ceph-mds[220747]: mds.0.4 creating_done
Dec  3 18:06:00 compute-0 ceph-mon[192802]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.oeacqo is now active in filesystem cephfs as rank 0
Dec  3 18:06:00 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Dec  3 18:06:00 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4041003957' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec  3 18:06:00 compute-0 epic_borg[220949]: 
Dec  3 18:06:00 compute-0 epic_borg[220949]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insec
ure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"6","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr_standby_modules","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.pnhstw","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Dec  3 18:06:00 compute-0 systemd[1]: libpod-b1d68533254c2ec44c46aa13a89d6a0f9cb8fa438a308ce699623fbec071616d.scope: Deactivated successfully.
Dec  3 18:06:00 compute-0 podman[220898]: 2025-12-03 18:06:00.83384936 +0000 UTC m=+0.766922937 container died b1d68533254c2ec44c46aa13a89d6a0f9cb8fa438a308ce699623fbec071616d (image=quay.io/ceph/ceph:v18, name=epic_borg, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec  3 18:06:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-222d96aec00f732da3e78962df46d65e5ac0f45bf28da3fc1607cd80553b1243-merged.mount: Deactivated successfully.
Dec  3 18:06:00 compute-0 podman[220898]: 2025-12-03 18:06:00.89550926 +0000 UTC m=+0.828582857 container remove b1d68533254c2ec44c46aa13a89d6a0f9cb8fa438a308ce699623fbec071616d (image=quay.io/ceph/ceph:v18, name=epic_borg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:06:00 compute-0 podman[221060]: 2025-12-03 18:06:00.899102487 +0000 UTC m=+0.089604195 container exec c4418ca0ee5df95c133db330bc8714b98e7c86be83b29540d0d4d94c3c723743 (image=quay.io/ceph/ceph:v18, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec  3 18:06:00 compute-0 systemd[1]: libpod-conmon-b1d68533254c2ec44c46aa13a89d6a0f9cb8fa438a308ce699623fbec071616d.scope: Deactivated successfully.
Dec  3 18:06:00 compute-0 podman[221060]: 2025-12-03 18:06:00.992613968 +0000 UTC m=+0.183115676 container exec_died c4418ca0ee5df95c133db330bc8714b98e7c86be83b29540d0d4d94c3c723743 (image=quay.io/ceph/ceph:v18, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec  3 18:06:01 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Dec  3 18:06:01 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/654959158' entity='client.rgw.rgw.compute-0.pnhstw' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec  3 18:06:01 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Dec  3 18:06:01 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Dec  3 18:06:01 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 46 pg[9.0( empty local-lis/les=45/46 n=0 ec=45/45 lis/c=0/0 les/c/f=0/0/0 sis=45) [1] r=0 lpr=45 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:01 compute-0 openstack_network_exporter[160319]: ERROR   18:06:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:06:01 compute-0 openstack_network_exporter[160319]: ERROR   18:06:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:06:01 compute-0 openstack_network_exporter[160319]: ERROR   18:06:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:06:01 compute-0 openstack_network_exporter[160319]: ERROR   18:06:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:06:01 compute-0 openstack_network_exporter[160319]: 
Dec  3 18:06:01 compute-0 openstack_network_exporter[160319]: ERROR   18:06:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:06:01 compute-0 openstack_network_exporter[160319]: 
Dec  3 18:06:01 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:06:01 compute-0 ceph-mon[192802]: daemon mds.cephfs.compute-0.oeacqo assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Dec  3 18:06:01 compute-0 ceph-mon[192802]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Dec  3 18:06:01 compute-0 ceph-mon[192802]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Dec  3 18:06:01 compute-0 ceph-mon[192802]: Cluster is now healthy
Dec  3 18:06:01 compute-0 ceph-mon[192802]: daemon mds.cephfs.compute-0.oeacqo is now active in filesystem cephfs as rank 0
Dec  3 18:06:01 compute-0 ceph-mon[192802]: from='client.? 192.168.122.100:0/654959158' entity='client.rgw.rgw.compute-0.pnhstw' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec  3 18:06:01 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:06:01 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:06:01 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:06:01 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:06:01 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:06:01 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).mds e5 new map
Dec  3 18:06:01 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).mds e5 print_map#012e5#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-12-03T18:05:31.930236+0000#012modified#0112025-12-03T18:06:01.724068+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=14271}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-0.oeacqo{0:14271} state up:active seq 2 join_fscid=1 addr [v2:192.168.122.100:6814/793185457,v1:192.168.122.100:6815/793185457] compat {c=[1],r=[1],i=[7ff]}]#012 #012 
Dec  3 18:06:01 compute-0 ceph-mds[220747]: mds.cephfs.compute-0.oeacqo Updating MDS map to version 5 from mon.0
Dec  3 18:06:01 compute-0 ceph-mds[220747]: mds.0.4 handle_mds_map i am now mds.0.4
Dec  3 18:06:01 compute-0 ceph-mds[220747]: mds.0.4 handle_mds_map state change up:creating --> up:active
Dec  3 18:06:01 compute-0 ceph-mds[220747]: mds.0.4 recovery_done -- successful recovery!
Dec  3 18:06:01 compute-0 ceph-mds[220747]: mds.0.4 active_start
Dec  3 18:06:01 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/793185457,v1:192.168.122.100:6815/793185457] up:active
Dec  3 18:06:01 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.oeacqo=up:active}
Dec  3 18:06:01 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 18:06:01 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:06:01 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 18:06:01 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:06:01 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 0c8d5a3c-6bae-4c2e-af2a-1009ecc76eb0 does not exist
Dec  3 18:06:01 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev a4c23a9d-56f9-4725-8571-bbba189d20f3 does not exist
Dec  3 18:06:01 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 3c39d69f-43a4-43c1-8849-ab724057a8a8 does not exist
Dec  3 18:06:01 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 18:06:01 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 18:06:01 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 18:06:01 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:06:01 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:06:01 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:06:01 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v121: 195 pgs: 195 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 1.2 KiB/s wr, 4 op/s
Dec  3 18:06:01 compute-0 python3[221257]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c1caf3ba-b2a5-5005-a11e-e955c344dccc -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:06:01 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:06:01 compute-0 podman[221284]: 2025-12-03 18:06:01.952673915 +0000 UTC m=+0.090081657 container create c49f257f08a12061429ed5aa18c9c39a45b0924cadfce5c3dac4ee6159917964 (image=quay.io/ceph/ceph:v18, name=loving_nightingale, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec  3 18:06:01 compute-0 podman[221284]: 2025-12-03 18:06:01.90100275 +0000 UTC m=+0.038410502 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:06:02 compute-0 systemd[1]: Started libpod-conmon-c49f257f08a12061429ed5aa18c9c39a45b0924cadfce5c3dac4ee6159917964.scope.
Dec  3 18:06:02 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:06:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/592afecf32ba6448e7105f3d21231f196c319a63f81d6a8545c9084f517f95a3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:06:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/592afecf32ba6448e7105f3d21231f196c319a63f81d6a8545c9084f517f95a3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:06:02 compute-0 podman[221284]: 2025-12-03 18:06:02.093164496 +0000 UTC m=+0.230572248 container init c49f257f08a12061429ed5aa18c9c39a45b0924cadfce5c3dac4ee6159917964 (image=quay.io/ceph/ceph:v18, name=loving_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec  3 18:06:02 compute-0 podman[221284]: 2025-12-03 18:06:02.111434264 +0000 UTC m=+0.248841976 container start c49f257f08a12061429ed5aa18c9c39a45b0924cadfce5c3dac4ee6159917964 (image=quay.io/ceph/ceph:v18, name=loving_nightingale, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec  3 18:06:02 compute-0 podman[221284]: 2025-12-03 18:06:02.116091948 +0000 UTC m=+0.253499660 container attach c49f257f08a12061429ed5aa18c9c39a45b0924cadfce5c3dac4ee6159917964 (image=quay.io/ceph/ceph:v18, name=loving_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:06:02 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Dec  3 18:06:02 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Dec  3 18:06:02 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Dec  3 18:06:02 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Dec  3 18:06:02 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/654959158' entity='client.rgw.rgw.compute-0.pnhstw' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec  3 18:06:02 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 47 pg[10.0( empty local-lis/les=0/0 n=0 ec=47/47 lis/c=0/0 les/c/f=0/0/0 sis=47) [2] r=0 lpr=47 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:02 compute-0 podman[221437]: 2025-12-03 18:06:02.587869924 +0000 UTC m=+0.066204394 container create c20ed9e59da87abcfe6daf7d4a5b04cbeefcd83b3cd36a2246f0700782b1b598 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_fermi, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:06:02 compute-0 podman[221437]: 2025-12-03 18:06:02.551901712 +0000 UTC m=+0.030236272 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:06:02 compute-0 systemd[1]: Started libpod-conmon-c20ed9e59da87abcfe6daf7d4a5b04cbeefcd83b3cd36a2246f0700782b1b598.scope.
Dec  3 18:06:02 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:06:02 compute-0 podman[221437]: 2025-12-03 18:06:02.70451816 +0000 UTC m=+0.182852670 container init c20ed9e59da87abcfe6daf7d4a5b04cbeefcd83b3cd36a2246f0700782b1b598 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_fermi, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  3 18:06:02 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0) v1
Dec  3 18:06:02 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3666972080' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Dec  3 18:06:02 compute-0 podman[221437]: 2025-12-03 18:06:02.717055318 +0000 UTC m=+0.195389788 container start c20ed9e59da87abcfe6daf7d4a5b04cbeefcd83b3cd36a2246f0700782b1b598 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_fermi, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec  3 18:06:02 compute-0 loving_nightingale[221344]: mimic
Dec  3 18:06:02 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:06:02 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:06:02 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:06:02 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:06:02 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:06:02 compute-0 ceph-mon[192802]: from='client.? 192.168.122.100:0/654959158' entity='client.rgw.rgw.compute-0.pnhstw' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec  3 18:06:02 compute-0 podman[221437]: 2025-12-03 18:06:02.725762411 +0000 UTC m=+0.204096911 container attach c20ed9e59da87abcfe6daf7d4a5b04cbeefcd83b3cd36a2246f0700782b1b598 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_fermi, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec  3 18:06:02 compute-0 hungry_fermi[221453]: 167 167
Dec  3 18:06:02 compute-0 systemd[1]: libpod-c20ed9e59da87abcfe6daf7d4a5b04cbeefcd83b3cd36a2246f0700782b1b598.scope: Deactivated successfully.
Dec  3 18:06:02 compute-0 conmon[221453]: conmon c20ed9e59da87abcfe6d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c20ed9e59da87abcfe6daf7d4a5b04cbeefcd83b3cd36a2246f0700782b1b598.scope/container/memory.events
Dec  3 18:06:02 compute-0 podman[221437]: 2025-12-03 18:06:02.736278569 +0000 UTC m=+0.214613059 container died c20ed9e59da87abcfe6daf7d4a5b04cbeefcd83b3cd36a2246f0700782b1b598 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_fermi, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:06:02 compute-0 systemd[1]: libpod-c49f257f08a12061429ed5aa18c9c39a45b0924cadfce5c3dac4ee6159917964.scope: Deactivated successfully.
Dec  3 18:06:02 compute-0 podman[221284]: 2025-12-03 18:06:02.749694177 +0000 UTC m=+0.887101909 container died c49f257f08a12061429ed5aa18c9c39a45b0924cadfce5c3dac4ee6159917964 (image=quay.io/ceph/ceph:v18, name=loving_nightingale, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  3 18:06:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-5055c1057d63f8995b1f960b0e241a1b29b2b28c30d8d566d1dca3bb1793b6c2-merged.mount: Deactivated successfully.
Dec  3 18:06:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-592afecf32ba6448e7105f3d21231f196c319a63f81d6a8545c9084f517f95a3-merged.mount: Deactivated successfully.
Dec  3 18:06:02 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 5.8 deep-scrub starts
Dec  3 18:06:02 compute-0 podman[221437]: 2025-12-03 18:06:02.857954869 +0000 UTC m=+0.336289339 container remove c20ed9e59da87abcfe6daf7d4a5b04cbeefcd83b3cd36a2246f0700782b1b598 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_fermi, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:06:02 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 5.8 deep-scrub ok
Dec  3 18:06:02 compute-0 systemd[1]: libpod-conmon-c20ed9e59da87abcfe6daf7d4a5b04cbeefcd83b3cd36a2246f0700782b1b598.scope: Deactivated successfully.
Dec  3 18:06:02 compute-0 podman[221284]: 2025-12-03 18:06:02.874102044 +0000 UTC m=+1.011509756 container remove c49f257f08a12061429ed5aa18c9c39a45b0924cadfce5c3dac4ee6159917964 (image=quay.io/ceph/ceph:v18, name=loving_nightingale, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec  3 18:06:02 compute-0 systemd[1]: libpod-conmon-c49f257f08a12061429ed5aa18c9c39a45b0924cadfce5c3dac4ee6159917964.scope: Deactivated successfully.
Dec  3 18:06:03 compute-0 podman[221489]: 2025-12-03 18:06:03.07680423 +0000 UTC m=+0.057165662 container create e918f8afb68d1b9e14c28f6ffa86956c2cab371cdef829e71ee4c52e724db7cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_visvesvaraya, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:06:03 compute-0 systemd[1]: Started libpod-conmon-e918f8afb68d1b9e14c28f6ffa86956c2cab371cdef829e71ee4c52e724db7cc.scope.
Dec  3 18:06:03 compute-0 podman[221489]: 2025-12-03 18:06:03.054554415 +0000 UTC m=+0.034915877 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:06:03 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Dec  3 18:06:03 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/654959158' entity='client.rgw.rgw.compute-0.pnhstw' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec  3 18:06:03 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Dec  3 18:06:03 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:06:03 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Dec  3 18:06:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d6822a20c018d9e7b35e3d27466907f3dea69476bd24e28134da5c27298b016/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:06:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d6822a20c018d9e7b35e3d27466907f3dea69476bd24e28134da5c27298b016/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:06:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d6822a20c018d9e7b35e3d27466907f3dea69476bd24e28134da5c27298b016/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:06:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d6822a20c018d9e7b35e3d27466907f3dea69476bd24e28134da5c27298b016/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:06:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d6822a20c018d9e7b35e3d27466907f3dea69476bd24e28134da5c27298b016/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 18:06:03 compute-0 podman[221489]: 2025-12-03 18:06:03.198871919 +0000 UTC m=+0.179233351 container init e918f8afb68d1b9e14c28f6ffa86956c2cab371cdef829e71ee4c52e724db7cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_visvesvaraya, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:06:03 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 48 pg[10.0( empty local-lis/les=47/48 n=0 ec=47/47 lis/c=0/0 les/c/f=0/0/0 sis=47) [2] r=0 lpr=47 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:03 compute-0 podman[221489]: 2025-12-03 18:06:03.215959428 +0000 UTC m=+0.196320850 container start e918f8afb68d1b9e14c28f6ffa86956c2cab371cdef829e71ee4c52e724db7cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec  3 18:06:03 compute-0 podman[221489]: 2025-12-03 18:06:03.221670678 +0000 UTC m=+0.202032090 container attach e918f8afb68d1b9e14c28f6ffa86956c2cab371cdef829e71ee4c52e724db7cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_visvesvaraya, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec  3 18:06:03 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 7.b scrub starts
Dec  3 18:06:03 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 7.b scrub ok
Dec  3 18:06:03 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v124: 196 pgs: 1 creating+peering, 195 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 4.5 KiB/s wr, 9 op/s
Dec  3 18:06:03 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 5.a deep-scrub starts
Dec  3 18:06:03 compute-0 python3[221549]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c1caf3ba-b2a5-5005-a11e-e955c344dccc -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:06:03 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 5.a deep-scrub ok
Dec  3 18:06:03 compute-0 podman[221552]: 2025-12-03 18:06:03.9451655 +0000 UTC m=+0.048927510 container create c2aa25ceb255dcbd92f31a7d52baabbbfeb888ca8ce0831dd09395115936ced2 (image=quay.io/ceph/ceph:v18, name=laughing_napier, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  3 18:06:03 compute-0 systemd[1]: Started libpod-conmon-c2aa25ceb255dcbd92f31a7d52baabbbfeb888ca8ce0831dd09395115936ced2.scope.
Dec  3 18:06:04 compute-0 podman[221552]: 2025-12-03 18:06:03.927340863 +0000 UTC m=+0.031102893 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:06:04 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:06:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd892ffba80c7920897fb32061faa673b53122a8451627d1247ba345cfbf506e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:06:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd892ffba80c7920897fb32061faa673b53122a8451627d1247ba345cfbf506e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:06:04 compute-0 podman[221552]: 2025-12-03 18:06:04.066947023 +0000 UTC m=+0.170709053 container init c2aa25ceb255dcbd92f31a7d52baabbbfeb888ca8ce0831dd09395115936ced2 (image=quay.io/ceph/ceph:v18, name=laughing_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec  3 18:06:04 compute-0 podman[221552]: 2025-12-03 18:06:04.074930419 +0000 UTC m=+0.178692429 container start c2aa25ceb255dcbd92f31a7d52baabbbfeb888ca8ce0831dd09395115936ced2 (image=quay.io/ceph/ceph:v18, name=laughing_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 18:06:04 compute-0 podman[221552]: 2025-12-03 18:06:04.078593658 +0000 UTC m=+0.182355668 container attach c2aa25ceb255dcbd92f31a7d52baabbbfeb888ca8ce0831dd09395115936ced2 (image=quay.io/ceph/ceph:v18, name=laughing_napier, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  3 18:06:04 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Dec  3 18:06:04 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Dec  3 18:06:04 compute-0 ceph-mon[192802]: from='client.? 192.168.122.100:0/654959158' entity='client.rgw.rgw.compute-0.pnhstw' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec  3 18:06:04 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Dec  3 18:06:04 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Dec  3 18:06:04 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1218343518' entity='client.rgw.rgw.compute-0.pnhstw' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec  3 18:06:04 compute-0 ceph-mgr[193091]: [progress INFO root] Writing back 12 completed events
Dec  3 18:06:04 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Dec  3 18:06:04 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:06:04 compute-0 ceph-mgr[193091]: [progress INFO root] Completed event 55e88e6b-35b2-4dc4-8d50-1db38169806f (Global Recovery Event) in 5 seconds
Dec  3 18:06:04 compute-0 nostalgic_visvesvaraya[221506]: --> passed data devices: 0 physical, 3 LVM
Dec  3 18:06:04 compute-0 nostalgic_visvesvaraya[221506]: --> relative data size: 1.0
Dec  3 18:06:04 compute-0 nostalgic_visvesvaraya[221506]: --> All data devices are unavailable
Dec  3 18:06:04 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 7.d scrub starts
Dec  3 18:06:04 compute-0 systemd[1]: libpod-e918f8afb68d1b9e14c28f6ffa86956c2cab371cdef829e71ee4c52e724db7cc.scope: Deactivated successfully.
Dec  3 18:06:04 compute-0 systemd[1]: libpod-e918f8afb68d1b9e14c28f6ffa86956c2cab371cdef829e71ee4c52e724db7cc.scope: Consumed 1.037s CPU time.
Dec  3 18:06:04 compute-0 conmon[221506]: conmon e918f8afb68d1b9e14c2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e918f8afb68d1b9e14c28f6ffa86956c2cab371cdef829e71ee4c52e724db7cc.scope/container/memory.events
Dec  3 18:06:04 compute-0 podman[221489]: 2025-12-03 18:06:04.3579102 +0000 UTC m=+1.338271622 container died e918f8afb68d1b9e14c28f6ffa86956c2cab371cdef829e71ee4c52e724db7cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_visvesvaraya, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec  3 18:06:04 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 49 pg[11.0( empty local-lis/les=0/0 n=0 ec=49/49 lis/c=0/0 les/c/f=0/0/0 sis=49) [1] r=0 lpr=49 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:04 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 7.d scrub ok
Dec  3 18:06:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d6822a20c018d9e7b35e3d27466907f3dea69476bd24e28134da5c27298b016-merged.mount: Deactivated successfully.
Dec  3 18:06:04 compute-0 podman[221489]: 2025-12-03 18:06:04.443992188 +0000 UTC m=+1.424353620 container remove e918f8afb68d1b9e14c28f6ffa86956c2cab371cdef829e71ee4c52e724db7cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:06:04 compute-0 systemd[1]: libpod-conmon-e918f8afb68d1b9e14c28f6ffa86956c2cab371cdef829e71ee4c52e724db7cc.scope: Deactivated successfully.
Dec  3 18:06:04 compute-0 podman[221620]: 2025-12-03 18:06:04.592982738 +0000 UTC m=+0.106822148 container health_status 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  3 18:06:04 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions", "format": "json"} v 0) v1
Dec  3 18:06:04 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2626482310' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Dec  3 18:06:04 compute-0 laughing_napier[221570]: 
Dec  3 18:06:04 compute-0 systemd[1]: libpod-c2aa25ceb255dcbd92f31a7d52baabbbfeb888ca8ce0831dd09395115936ced2.scope: Deactivated successfully.
Dec  3 18:06:04 compute-0 laughing_napier[221570]: {"mon":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"mgr":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"osd":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"mds":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"overall":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":6}}
Dec  3 18:06:04 compute-0 podman[221552]: 2025-12-03 18:06:04.78293697 +0000 UTC m=+0.886698980 container died c2aa25ceb255dcbd92f31a7d52baabbbfeb888ca8ce0831dd09395115936ced2 (image=quay.io/ceph/ceph:v18, name=laughing_napier, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:06:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd892ffba80c7920897fb32061faa673b53122a8451627d1247ba345cfbf506e-merged.mount: Deactivated successfully.
Dec  3 18:06:04 compute-0 podman[221552]: 2025-12-03 18:06:04.861111475 +0000 UTC m=+0.964873485 container remove c2aa25ceb255dcbd92f31a7d52baabbbfeb888ca8ce0831dd09395115936ced2 (image=quay.io/ceph/ceph:v18, name=laughing_napier, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:06:04 compute-0 systemd[1]: libpod-conmon-c2aa25ceb255dcbd92f31a7d52baabbbfeb888ca8ce0831dd09395115936ced2.scope: Deactivated successfully.
Dec  3 18:06:05 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Dec  3 18:06:05 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Dec  3 18:06:05 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Dec  3 18:06:05 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1218343518' entity='client.rgw.rgw.compute-0.pnhstw' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec  3 18:06:05 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Dec  3 18:06:05 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Dec  3 18:06:05 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Dec  3 18:06:05 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1218343518' entity='client.rgw.rgw.compute-0.pnhstw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec  3 18:06:05 compute-0 ceph-mon[192802]: from='client.? 192.168.122.100:0/1218343518' entity='client.rgw.rgw.compute-0.pnhstw' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec  3 18:06:05 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:06:05 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 50 pg[11.0( empty local-lis/les=49/50 n=0 ec=49/49 lis/c=0/0 les/c/f=0/0/0 sis=49) [1] r=0 lpr=49 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:05 compute-0 podman[221800]: 2025-12-03 18:06:05.331831595 +0000 UTC m=+0.078311239 container create 2168f64489f84c3d53b9350a4f0fb94ae5283abd5d5110491c168895a6cf189d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec  3 18:06:05 compute-0 systemd[1]: Started libpod-conmon-2168f64489f84c3d53b9350a4f0fb94ae5283abd5d5110491c168895a6cf189d.scope.
Dec  3 18:06:05 compute-0 podman[221800]: 2025-12-03 18:06:05.301735848 +0000 UTC m=+0.048215532 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:06:05 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:06:05 compute-0 podman[221800]: 2025-12-03 18:06:05.458900598 +0000 UTC m=+0.205380212 container init 2168f64489f84c3d53b9350a4f0fb94ae5283abd5d5110491c168895a6cf189d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_khayyam, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:06:05 compute-0 podman[221800]: 2025-12-03 18:06:05.469967659 +0000 UTC m=+0.216447263 container start 2168f64489f84c3d53b9350a4f0fb94ae5283abd5d5110491c168895a6cf189d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_khayyam, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:06:05 compute-0 podman[221800]: 2025-12-03 18:06:05.474735465 +0000 UTC m=+0.221215069 container attach 2168f64489f84c3d53b9350a4f0fb94ae5283abd5d5110491c168895a6cf189d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_khayyam, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec  3 18:06:05 compute-0 bold_khayyam[221815]: 167 167
Dec  3 18:06:05 compute-0 systemd[1]: libpod-2168f64489f84c3d53b9350a4f0fb94ae5283abd5d5110491c168895a6cf189d.scope: Deactivated successfully.
Dec  3 18:06:05 compute-0 podman[221800]: 2025-12-03 18:06:05.478004996 +0000 UTC m=+0.224484610 container died 2168f64489f84c3d53b9350a4f0fb94ae5283abd5d5110491c168895a6cf189d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True)
Dec  3 18:06:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-443228003f6b4c7d63d519536cf9d235ffd0d4fe7fd3d675bcc314db3715beca-merged.mount: Deactivated successfully.
Dec  3 18:06:05 compute-0 podman[221800]: 2025-12-03 18:06:05.53736424 +0000 UTC m=+0.283843854 container remove 2168f64489f84c3d53b9350a4f0fb94ae5283abd5d5110491c168895a6cf189d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_khayyam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:06:05 compute-0 systemd[1]: libpod-conmon-2168f64489f84c3d53b9350a4f0fb94ae5283abd5d5110491c168895a6cf189d.scope: Deactivated successfully.
Dec  3 18:06:05 compute-0 podman[221838]: 2025-12-03 18:06:05.737365399 +0000 UTC m=+0.078386621 container create 38b44240c87796ce214796b4e95e86b31c8bcd9abf461881ec5dab4d5dc0044b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_babbage, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:06:05 compute-0 podman[221838]: 2025-12-03 18:06:05.707020985 +0000 UTC m=+0.048042247 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:06:05 compute-0 systemd[1]: Started libpod-conmon-38b44240c87796ce214796b4e95e86b31c8bcd9abf461881ec5dab4d5dc0044b.scope.
Dec  3 18:06:05 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v127: 197 pgs: 1 unknown, 1 creating+peering, 195 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s wr, 8 op/s
Dec  3 18:06:05 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:06:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a313277a64ef31897ca49b2bd323e308ac3618542a28eed404b6d25017c8a660/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:06:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a313277a64ef31897ca49b2bd323e308ac3618542a28eed404b6d25017c8a660/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:06:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a313277a64ef31897ca49b2bd323e308ac3618542a28eed404b6d25017c8a660/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:06:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a313277a64ef31897ca49b2bd323e308ac3618542a28eed404b6d25017c8a660/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:06:05 compute-0 podman[221838]: 2025-12-03 18:06:05.872188231 +0000 UTC m=+0.213209423 container init 38b44240c87796ce214796b4e95e86b31c8bcd9abf461881ec5dab4d5dc0044b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  3 18:06:05 compute-0 podman[221838]: 2025-12-03 18:06:05.924335529 +0000 UTC m=+0.265356731 container start 38b44240c87796ce214796b4e95e86b31c8bcd9abf461881ec5dab4d5dc0044b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_babbage, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:06:05 compute-0 podman[221838]: 2025-12-03 18:06:05.928397537 +0000 UTC m=+0.269418759 container attach 38b44240c87796ce214796b4e95e86b31c8bcd9abf461881ec5dab4d5dc0044b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Dec  3 18:06:06 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Dec  3 18:06:06 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Dec  3 18:06:06 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Dec  3 18:06:06 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1218343518' entity='client.rgw.rgw.compute-0.pnhstw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec  3 18:06:06 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Dec  3 18:06:06 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Dec  3 18:06:06 compute-0 ceph-mon[192802]: from='client.? 192.168.122.100:0/1218343518' entity='client.rgw.rgw.compute-0.pnhstw' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec  3 18:06:06 compute-0 ceph-mon[192802]: from='client.? 192.168.122.100:0/1218343518' entity='client.rgw.rgw.compute-0.pnhstw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec  3 18:06:06 compute-0 ceph-mon[192802]: from='client.? 192.168.122.100:0/1218343518' entity='client.rgw.rgw.compute-0.pnhstw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec  3 18:06:06 compute-0 radosgw[220294]: LDAP not started since no server URIs were provided in the configuration.
Dec  3 18:06:06 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-rgw-rgw-compute-0-pnhstw[220290]: 2025-12-03T18:06:06.443+0000 7fefc55df940 -1 LDAP not started since no server URIs were provided in the configuration.
Dec  3 18:06:06 compute-0 radosgw[220294]: framework: beast
Dec  3 18:06:06 compute-0 radosgw[220294]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Dec  3 18:06:06 compute-0 radosgw[220294]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Dec  3 18:06:06 compute-0 radosgw[220294]: starting handler: beast
Dec  3 18:06:06 compute-0 radosgw[220294]: set uid:gid to 167:167 (ceph:ceph)
Dec  3 18:06:06 compute-0 radosgw[220294]: mgrc service_daemon_register rgw.14277 metadata {arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.pnhstw,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025,kernel_version=5.14.0-645.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864312,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=3e3bee26-652d-4ff2-98e4-325ab7fbc3aa,zone_name=default,zonegroup_id=f3f416b8-1a59-4563-a4ab-c1c6455d3de7,zonegroup_name=default}
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]: {
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:    "0": [
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:        {
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:            "devices": [
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:                "/dev/loop3"
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:            ],
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:            "lv_name": "ceph_lv0",
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:            "lv_size": "21470642176",
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=973fbbc8-5aff-4a53-bee8-42e5a6788dd6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:            "lv_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:            "name": "ceph_lv0",
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:            "tags": {
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:                "ceph.block_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:                "ceph.cluster_name": "ceph",
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:                "ceph.crush_device_class": "",
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:                "ceph.encrypted": "0",
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:                "ceph.osd_fsid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:                "ceph.osd_id": "0",
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:                "ceph.type": "block",
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:                "ceph.vdo": "0"
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:            },
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:            "type": "block",
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:            "vg_name": "ceph_vg0"
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:        }
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:    ],
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:    "1": [
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:        {
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:            "devices": [
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:                "/dev/loop4"
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:            ],
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:            "lv_name": "ceph_lv1",
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:            "lv_size": "21470642176",
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1e2b0083-5293-47cb-a3d1-bc27cedc4ede,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:            "lv_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:            "name": "ceph_lv1",
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:            "tags": {
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:                "ceph.block_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:                "ceph.cluster_name": "ceph",
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:                "ceph.crush_device_class": "",
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:                "ceph.encrypted": "0",
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:                "ceph.osd_fsid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:                "ceph.osd_id": "1",
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:                "ceph.type": "block",
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:                "ceph.vdo": "0"
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:            },
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:            "type": "block",
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:            "vg_name": "ceph_vg1"
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:        }
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:    ],
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:    "2": [
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:        {
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:            "devices": [
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:                "/dev/loop5"
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:            ],
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:            "lv_name": "ceph_lv2",
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:            "lv_size": "21470642176",
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2abec9de-afba-437e-9a17-384a1dd8cd50,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:            "lv_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:            "name": "ceph_lv2",
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:            "tags": {
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:                "ceph.block_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:                "ceph.cluster_name": "ceph",
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:                "ceph.crush_device_class": "",
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:                "ceph.encrypted": "0",
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:                "ceph.osd_fsid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:                "ceph.osd_id": "2",
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:                "ceph.type": "block",
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:                "ceph.vdo": "0"
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:            },
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:            "type": "block",
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:            "vg_name": "ceph_vg2"
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:        }
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]:    ]
Dec  3 18:06:06 compute-0 thirsty_babbage[221853]: }
Dec  3 18:06:06 compute-0 systemd[1]: libpod-38b44240c87796ce214796b4e95e86b31c8bcd9abf461881ec5dab4d5dc0044b.scope: Deactivated successfully.
Dec  3 18:06:06 compute-0 podman[221838]: 2025-12-03 18:06:06.763896783 +0000 UTC m=+1.104917975 container died 38b44240c87796ce214796b4e95e86b31c8bcd9abf461881ec5dab4d5dc0044b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_babbage, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:06:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-a313277a64ef31897ca49b2bd323e308ac3618542a28eed404b6d25017c8a660-merged.mount: Deactivated successfully.
Dec  3 18:06:06 compute-0 podman[221838]: 2025-12-03 18:06:06.832992546 +0000 UTC m=+1.174013738 container remove 38b44240c87796ce214796b4e95e86b31c8bcd9abf461881ec5dab4d5dc0044b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:06:06 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:06:06 compute-0 systemd[1]: libpod-conmon-38b44240c87796ce214796b4e95e86b31c8bcd9abf461881ec5dab4d5dc0044b.scope: Deactivated successfully.
Dec  3 18:06:06 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 5.b scrub starts
Dec  3 18:06:06 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 5.b scrub ok
Dec  3 18:06:07 compute-0 podman[222555]: 2025-12-03 18:06:07.603382066 +0000 UTC m=+0.052000905 container create 99d843ab77a8336d04b954de0cb7c536fed7ab4d2c7a0b955105fee0bf2e558b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:06:07 compute-0 systemd[1]: Started libpod-conmon-99d843ab77a8336d04b954de0cb7c536fed7ab4d2c7a0b955105fee0bf2e558b.scope.
Dec  3 18:06:07 compute-0 podman[222555]: 2025-12-03 18:06:07.577879522 +0000 UTC m=+0.026498361 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:06:07 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:06:07 compute-0 podman[222555]: 2025-12-03 18:06:07.71417334 +0000 UTC m=+0.162792169 container init 99d843ab77a8336d04b954de0cb7c536fed7ab4d2c7a0b955105fee0bf2e558b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_euler, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  3 18:06:07 compute-0 podman[222555]: 2025-12-03 18:06:07.723129349 +0000 UTC m=+0.171748158 container start 99d843ab77a8336d04b954de0cb7c536fed7ab4d2c7a0b955105fee0bf2e558b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:06:07 compute-0 podman[222555]: 2025-12-03 18:06:07.726788579 +0000 UTC m=+0.175407418 container attach 99d843ab77a8336d04b954de0cb7c536fed7ab4d2c7a0b955105fee0bf2e558b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_euler, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:06:07 compute-0 jovial_euler[222570]: 167 167
Dec  3 18:06:07 compute-0 systemd[1]: libpod-99d843ab77a8336d04b954de0cb7c536fed7ab4d2c7a0b955105fee0bf2e558b.scope: Deactivated successfully.
Dec  3 18:06:07 compute-0 podman[222555]: 2025-12-03 18:06:07.731860643 +0000 UTC m=+0.180479452 container died 99d843ab77a8336d04b954de0cb7c536fed7ab4d2c7a0b955105fee0bf2e558b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:06:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-a1f866fdd037b375ac0a56458ecf0b38475a4366ba8535fe99121bff235622ca-merged.mount: Deactivated successfully.
Dec  3 18:06:07 compute-0 podman[222555]: 2025-12-03 18:06:07.785929477 +0000 UTC m=+0.234548286 container remove 99d843ab77a8336d04b954de0cb7c536fed7ab4d2c7a0b955105fee0bf2e558b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_euler, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:06:07 compute-0 systemd[1]: libpod-conmon-99d843ab77a8336d04b954de0cb7c536fed7ab4d2c7a0b955105fee0bf2e558b.scope: Deactivated successfully.
Dec  3 18:06:07 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v129: 197 pgs: 197 active+clean; 453 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 220 B/s rd, 2.6 KiB/s wr, 13 op/s
Dec  3 18:06:07 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 5.d scrub starts
Dec  3 18:06:07 compute-0 podman[222593]: 2025-12-03 18:06:07.962327658 +0000 UTC m=+0.057166902 container create 969e833ccaea08a5d81b7d8d8e8f40577ec74c85d77269d51e70b5e4aec92f7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:06:07 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 5.d scrub ok
Dec  3 18:06:08 compute-0 systemd[1]: Started libpod-conmon-969e833ccaea08a5d81b7d8d8e8f40577ec74c85d77269d51e70b5e4aec92f7c.scope.
Dec  3 18:06:08 compute-0 podman[222593]: 2025-12-03 18:06:07.940114554 +0000 UTC m=+0.034953778 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:06:08 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:06:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f00d2c8735c813a38efd21383793ee2b5b9b78dffd994e2f5f2f82875153990/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:06:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f00d2c8735c813a38efd21383793ee2b5b9b78dffd994e2f5f2f82875153990/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:06:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f00d2c8735c813a38efd21383793ee2b5b9b78dffd994e2f5f2f82875153990/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:06:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f00d2c8735c813a38efd21383793ee2b5b9b78dffd994e2f5f2f82875153990/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:06:08 compute-0 podman[222593]: 2025-12-03 18:06:08.082521782 +0000 UTC m=+0.177361066 container init 969e833ccaea08a5d81b7d8d8e8f40577ec74c85d77269d51e70b5e4aec92f7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec  3 18:06:08 compute-0 podman[222593]: 2025-12-03 18:06:08.099265002 +0000 UTC m=+0.194104216 container start 969e833ccaea08a5d81b7d8d8e8f40577ec74c85d77269d51e70b5e4aec92f7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_lewin, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec  3 18:06:08 compute-0 podman[222593]: 2025-12-03 18:06:08.104054039 +0000 UTC m=+0.198893283 container attach 969e833ccaea08a5d81b7d8d8e8f40577ec74c85d77269d51e70b5e4aec92f7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_lewin, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:06:08 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Dec  3 18:06:08 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Dec  3 18:06:09 compute-0 distracted_lewin[222609]: {
Dec  3 18:06:09 compute-0 distracted_lewin[222609]:    "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
Dec  3 18:06:09 compute-0 distracted_lewin[222609]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:06:09 compute-0 distracted_lewin[222609]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 18:06:09 compute-0 distracted_lewin[222609]:        "osd_id": 1,
Dec  3 18:06:09 compute-0 distracted_lewin[222609]:        "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:06:09 compute-0 distracted_lewin[222609]:        "type": "bluestore"
Dec  3 18:06:09 compute-0 distracted_lewin[222609]:    },
Dec  3 18:06:09 compute-0 distracted_lewin[222609]:    "2abec9de-afba-437e-9a17-384a1dd8cd50": {
Dec  3 18:06:09 compute-0 distracted_lewin[222609]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:06:09 compute-0 distracted_lewin[222609]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 18:06:09 compute-0 distracted_lewin[222609]:        "osd_id": 2,
Dec  3 18:06:09 compute-0 distracted_lewin[222609]:        "osd_uuid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:06:09 compute-0 distracted_lewin[222609]:        "type": "bluestore"
Dec  3 18:06:09 compute-0 distracted_lewin[222609]:    },
Dec  3 18:06:09 compute-0 distracted_lewin[222609]:    "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": {
Dec  3 18:06:09 compute-0 distracted_lewin[222609]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:06:09 compute-0 distracted_lewin[222609]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 18:06:09 compute-0 distracted_lewin[222609]:        "osd_id": 0,
Dec  3 18:06:09 compute-0 distracted_lewin[222609]:        "osd_uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:06:09 compute-0 distracted_lewin[222609]:        "type": "bluestore"
Dec  3 18:06:09 compute-0 distracted_lewin[222609]:    }
Dec  3 18:06:09 compute-0 distracted_lewin[222609]: }
Dec  3 18:06:09 compute-0 systemd[1]: libpod-969e833ccaea08a5d81b7d8d8e8f40577ec74c85d77269d51e70b5e4aec92f7c.scope: Deactivated successfully.
Dec  3 18:06:09 compute-0 systemd[1]: libpod-969e833ccaea08a5d81b7d8d8e8f40577ec74c85d77269d51e70b5e4aec92f7c.scope: Consumed 1.149s CPU time.
Dec  3 18:06:09 compute-0 ceph-mgr[193091]: [progress INFO root] Writing back 13 completed events
Dec  3 18:06:09 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Dec  3 18:06:09 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:06:09 compute-0 podman[222642]: 2025-12-03 18:06:09.308215745 +0000 UTC m=+0.034649889 container died 969e833ccaea08a5d81b7d8d8e8f40577ec74c85d77269d51e70b5e4aec92f7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_lewin, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec  3 18:06:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f00d2c8735c813a38efd21383793ee2b5b9b78dffd994e2f5f2f82875153990-merged.mount: Deactivated successfully.
Dec  3 18:06:09 compute-0 podman[222642]: 2025-12-03 18:06:09.408936422 +0000 UTC m=+0.135370546 container remove 969e833ccaea08a5d81b7d8d8e8f40577ec74c85d77269d51e70b5e4aec92f7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_lewin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  3 18:06:09 compute-0 systemd[1]: libpod-conmon-969e833ccaea08a5d81b7d8d8e8f40577ec74c85d77269d51e70b5e4aec92f7c.scope: Deactivated successfully.
Dec  3 18:06:09 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:06:09 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:06:09 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:06:09 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:06:09 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev a42bf125-c3ae-4627-9b59-cb8599046864 does not exist
Dec  3 18:06:09 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 4397ecad-3726-4d70-939a-e319a5a92c47 does not exist
Dec  3 18:06:09 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v130: 197 pgs: 197 active+clean; 453 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 2.0 KiB/s wr, 9 op/s
Dec  3 18:06:10 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:06:10 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:06:10 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:06:10 compute-0 podman[222874]: 2025-12-03 18:06:10.816234793 +0000 UTC m=+0.092910707 container exec c4418ca0ee5df95c133db330bc8714b98e7c86be83b29540d0d4d94c3c723743 (image=quay.io/ceph/ceph:v18, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Dec  3 18:06:10 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 5.e scrub starts
Dec  3 18:06:10 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 5.e scrub ok
Dec  3 18:06:10 compute-0 podman[222874]: 2025-12-03 18:06:10.968079323 +0000 UTC m=+0.244755237 container exec_died c4418ca0ee5df95c133db330bc8714b98e7c86be83b29540d0d4d94c3c723743 (image=quay.io/ceph/ceph:v18, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec  3 18:06:11 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v131: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 4.5 KiB/s wr, 146 op/s
Dec  3 18:06:11 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:06:11 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Dec  3 18:06:11 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Dec  3 18:06:12 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 6.a deep-scrub starts
Dec  3 18:06:12 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 6.a deep-scrub ok
Dec  3 18:06:12 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:06:12 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:06:12 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:06:12 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:06:12 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:06:12 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:06:12 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 18:06:12 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:06:12 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 18:06:12 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:06:12 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 11944b22-691b-40b2-87fb-69ec63175d95 does not exist
Dec  3 18:06:12 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 12bcf989-9c35-4117-ab6b-5727c2cc08a5 does not exist
Dec  3 18:06:12 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 89946337-5d5b-4058-9886-8c0a2d9b3c02 does not exist
Dec  3 18:06:12 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 18:06:12 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 18:06:12 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 18:06:12 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:06:12 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:06:12 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:06:12 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:06:12 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:06:12 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:06:12 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:06:12 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:06:12 compute-0 python3[223108]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid c1caf3ba-b2a5-5005-a11e-e955c344dccc -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:06:12 compute-0 podman[223146]: 2025-12-03 18:06:12.631995959 +0000 UTC m=+0.053813318 container create f7867067e7a7456d6c8cbf8105cf7a4eafc5486ff46263b9a4f2909e34b3cddb (image=quay.io/ceph/ceph:v18, name=recursing_newton, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:06:12 compute-0 systemd[1]: Started libpod-conmon-f7867067e7a7456d6c8cbf8105cf7a4eafc5486ff46263b9a4f2909e34b3cddb.scope.
Dec  3 18:06:12 compute-0 podman[223146]: 2025-12-03 18:06:12.613674361 +0000 UTC m=+0.035491740 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:06:12 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:06:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32ca1213260e76e27d6d2f4186d203160166e4d52325622b71cfc34e4d15181f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:06:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32ca1213260e76e27d6d2f4186d203160166e4d52325622b71cfc34e4d15181f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:06:12 compute-0 podman[223146]: 2025-12-03 18:06:12.768121894 +0000 UTC m=+0.189939333 container init f7867067e7a7456d6c8cbf8105cf7a4eafc5486ff46263b9a4f2909e34b3cddb (image=quay.io/ceph/ceph:v18, name=recursing_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:06:12 compute-0 podman[223146]: 2025-12-03 18:06:12.78880975 +0000 UTC m=+0.210627129 container start f7867067e7a7456d6c8cbf8105cf7a4eafc5486ff46263b9a4f2909e34b3cddb (image=quay.io/ceph/ceph:v18, name=recursing_newton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Dec  3 18:06:12 compute-0 podman[223146]: 2025-12-03 18:06:12.79490373 +0000 UTC m=+0.216721139 container attach f7867067e7a7456d6c8cbf8105cf7a4eafc5486ff46263b9a4f2909e34b3cddb (image=quay.io/ceph/ceph:v18, name=recursing_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef)
Dec  3 18:06:12 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Dec  3 18:06:12 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Dec  3 18:06:13 compute-0 recursing_newton[223163]: could not fetch user info: no user info saved
Dec  3 18:06:13 compute-0 podman[223268]: 2025-12-03 18:06:13.037778149 +0000 UTC m=+0.072049915 container create 85fd2ae078bcf06f521baaa070ec6f876f130623e941ba2b9996aa697fb8db70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_volhard, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Dec  3 18:06:13 compute-0 systemd[1]: Started libpod-conmon-85fd2ae078bcf06f521baaa070ec6f876f130623e941ba2b9996aa697fb8db70.scope.
Dec  3 18:06:13 compute-0 systemd[1]: libpod-f7867067e7a7456d6c8cbf8105cf7a4eafc5486ff46263b9a4f2909e34b3cddb.scope: Deactivated successfully.
Dec  3 18:06:13 compute-0 podman[223146]: 2025-12-03 18:06:13.095792731 +0000 UTC m=+0.517610100 container died f7867067e7a7456d6c8cbf8105cf7a4eafc5486ff46263b9a4f2909e34b3cddb (image=quay.io/ceph/ceph:v18, name=recursing_newton, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  3 18:06:13 compute-0 podman[223268]: 2025-12-03 18:06:13.008308387 +0000 UTC m=+0.042580153 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:06:13 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:06:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-32ca1213260e76e27d6d2f4186d203160166e4d52325622b71cfc34e4d15181f-merged.mount: Deactivated successfully.
Dec  3 18:06:13 compute-0 podman[223146]: 2025-12-03 18:06:13.160942186 +0000 UTC m=+0.582759545 container remove f7867067e7a7456d6c8cbf8105cf7a4eafc5486ff46263b9a4f2909e34b3cddb (image=quay.io/ceph/ceph:v18, name=recursing_newton, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  3 18:06:13 compute-0 systemd[1]: libpod-conmon-f7867067e7a7456d6c8cbf8105cf7a4eafc5486ff46263b9a4f2909e34b3cddb.scope: Deactivated successfully.
Dec  3 18:06:13 compute-0 podman[223268]: 2025-12-03 18:06:13.178205438 +0000 UTC m=+0.212477204 container init 85fd2ae078bcf06f521baaa070ec6f876f130623e941ba2b9996aa697fb8db70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_volhard, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  3 18:06:13 compute-0 podman[223268]: 2025-12-03 18:06:13.187816364 +0000 UTC m=+0.222088110 container start 85fd2ae078bcf06f521baaa070ec6f876f130623e941ba2b9996aa697fb8db70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:06:13 compute-0 podman[223268]: 2025-12-03 18:06:13.191690929 +0000 UTC m=+0.225962695 container attach 85fd2ae078bcf06f521baaa070ec6f876f130623e941ba2b9996aa697fb8db70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  3 18:06:13 compute-0 interesting_volhard[223301]: 167 167
Dec  3 18:06:13 compute-0 systemd[1]: libpod-85fd2ae078bcf06f521baaa070ec6f876f130623e941ba2b9996aa697fb8db70.scope: Deactivated successfully.
Dec  3 18:06:13 compute-0 podman[223268]: 2025-12-03 18:06:13.194296683 +0000 UTC m=+0.228568439 container died 85fd2ae078bcf06f521baaa070ec6f876f130623e941ba2b9996aa697fb8db70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:06:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-993082180ef2a7e7478e7b95d49f02f59277984f5146a2fe86f36e7402015bfa-merged.mount: Deactivated successfully.
Dec  3 18:06:13 compute-0 podman[223268]: 2025-12-03 18:06:13.253014551 +0000 UTC m=+0.287286297 container remove 85fd2ae078bcf06f521baaa070ec6f876f130623e941ba2b9996aa697fb8db70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_volhard, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Dec  3 18:06:13 compute-0 systemd[1]: libpod-conmon-85fd2ae078bcf06f521baaa070ec6f876f130623e941ba2b9996aa697fb8db70.scope: Deactivated successfully.
Dec  3 18:06:13 compute-0 podman[223365]: 2025-12-03 18:06:13.478058234 +0000 UTC m=+0.071780970 container create eec3760c05ef2afc975351e1dc207ff41b11fa7aaeb1edaf18f4e716c7945434 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_benz, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:06:13 compute-0 podman[223365]: 2025-12-03 18:06:13.442288778 +0000 UTC m=+0.036011554 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:06:13 compute-0 python3[223367]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid c1caf3ba-b2a5-5005-a11e-e955c344dccc -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:06:13 compute-0 systemd[1]: Started libpod-conmon-eec3760c05ef2afc975351e1dc207ff41b11fa7aaeb1edaf18f4e716c7945434.scope.
Dec  3 18:06:13 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:06:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d395f1c0568a6c09b19d8c77652ad98f736365c2b3dcb92b1c3146d0e16602f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:06:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d395f1c0568a6c09b19d8c77652ad98f736365c2b3dcb92b1c3146d0e16602f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:06:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d395f1c0568a6c09b19d8c77652ad98f736365c2b3dcb92b1c3146d0e16602f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:06:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d395f1c0568a6c09b19d8c77652ad98f736365c2b3dcb92b1c3146d0e16602f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:06:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d395f1c0568a6c09b19d8c77652ad98f736365c2b3dcb92b1c3146d0e16602f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 18:06:13 compute-0 podman[223382]: 2025-12-03 18:06:13.6248797 +0000 UTC m=+0.063194219 container create 643e5e39eceb86aa2c6573a801877e0f595e9b82003e34841fbec36d5730563f (image=quay.io/ceph/ceph:v18, name=sharp_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec  3 18:06:13 compute-0 podman[223365]: 2025-12-03 18:06:13.629838111 +0000 UTC m=+0.223560797 container init eec3760c05ef2afc975351e1dc207ff41b11fa7aaeb1edaf18f4e716c7945434 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_benz, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:06:13 compute-0 podman[223365]: 2025-12-03 18:06:13.647975426 +0000 UTC m=+0.241698112 container start eec3760c05ef2afc975351e1dc207ff41b11fa7aaeb1edaf18f4e716c7945434 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_benz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:06:13 compute-0 podman[223365]: 2025-12-03 18:06:13.654405663 +0000 UTC m=+0.248128389 container attach eec3760c05ef2afc975351e1dc207ff41b11fa7aaeb1edaf18f4e716c7945434 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_benz, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  3 18:06:13 compute-0 systemd[1]: Started libpod-conmon-643e5e39eceb86aa2c6573a801877e0f595e9b82003e34841fbec36d5730563f.scope.
Dec  3 18:06:13 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:06:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cd368552357a49c5f62f648dee45c17bbd2083463ba400cb2d139391cc874fe/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:06:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cd368552357a49c5f62f648dee45c17bbd2083463ba400cb2d139391cc874fe/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:06:13 compute-0 podman[223382]: 2025-12-03 18:06:13.600932993 +0000 UTC m=+0.039247532 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 18:06:13 compute-0 podman[223382]: 2025-12-03 18:06:13.715421088 +0000 UTC m=+0.153735647 container init 643e5e39eceb86aa2c6573a801877e0f595e9b82003e34841fbec36d5730563f (image=quay.io/ceph/ceph:v18, name=sharp_bouman, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  3 18:06:13 compute-0 podman[223382]: 2025-12-03 18:06:13.730813115 +0000 UTC m=+0.169127634 container start 643e5e39eceb86aa2c6573a801877e0f595e9b82003e34841fbec36d5730563f (image=quay.io/ceph/ceph:v18, name=sharp_bouman, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:06:13 compute-0 podman[223382]: 2025-12-03 18:06:13.73591157 +0000 UTC m=+0.174226159 container attach 643e5e39eceb86aa2c6573a801877e0f595e9b82003e34841fbec36d5730563f (image=quay.io/ceph/ceph:v18, name=sharp_bouman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:06:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_18:06:13
Dec  3 18:06:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 18:06:13 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 18:06:13 compute-0 ceph-mgr[193091]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.meta', 'images', '.mgr', 'volumes', 'vms', 'default.rgw.log']
Dec  3 18:06:13 compute-0 ceph-mgr[193091]: [balancer INFO root] prepared 0/10 changes
Dec  3 18:06:13 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v132: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 3.9 KiB/s wr, 128 op/s
Dec  3 18:06:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:06:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:06:13 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 18:06:13 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:06:13 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 18:06:13 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:06:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:06:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:06:13 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:06:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:06:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:06:13 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:06:13 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:06:13 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:06:13 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:06:13 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:06:14 compute-0 sharp_bouman[223399]: {
Dec  3 18:06:14 compute-0 sharp_bouman[223399]:    "user_id": "openstack",
Dec  3 18:06:14 compute-0 sharp_bouman[223399]:    "display_name": "openstack",
Dec  3 18:06:14 compute-0 sharp_bouman[223399]:    "email": "",
Dec  3 18:06:14 compute-0 sharp_bouman[223399]:    "suspended": 0,
Dec  3 18:06:14 compute-0 sharp_bouman[223399]:    "max_buckets": 1000,
Dec  3 18:06:14 compute-0 sharp_bouman[223399]:    "subusers": [],
Dec  3 18:06:14 compute-0 sharp_bouman[223399]:    "keys": [
Dec  3 18:06:14 compute-0 sharp_bouman[223399]:        {
Dec  3 18:06:14 compute-0 sharp_bouman[223399]:            "user": "openstack",
Dec  3 18:06:14 compute-0 sharp_bouman[223399]:            "access_key": "ZP4ZX57UEUCATQGEE0RT",
Dec  3 18:06:14 compute-0 sharp_bouman[223399]:            "secret_key": "9aYck5qSDGK3biE6cWz5OQr5NFUSMG99slO8kHHF"
Dec  3 18:06:14 compute-0 sharp_bouman[223399]:        }
Dec  3 18:06:14 compute-0 sharp_bouman[223399]:    ],
Dec  3 18:06:14 compute-0 sharp_bouman[223399]:    "swift_keys": [],
Dec  3 18:06:14 compute-0 sharp_bouman[223399]:    "caps": [],
Dec  3 18:06:14 compute-0 sharp_bouman[223399]:    "op_mask": "read, write, delete",
Dec  3 18:06:14 compute-0 sharp_bouman[223399]:    "default_placement": "",
Dec  3 18:06:14 compute-0 sharp_bouman[223399]:    "default_storage_class": "",
Dec  3 18:06:14 compute-0 sharp_bouman[223399]:    "placement_tags": [],
Dec  3 18:06:14 compute-0 sharp_bouman[223399]:    "bucket_quota": {
Dec  3 18:06:14 compute-0 sharp_bouman[223399]:        "enabled": false,
Dec  3 18:06:14 compute-0 sharp_bouman[223399]:        "check_on_raw": false,
Dec  3 18:06:14 compute-0 sharp_bouman[223399]:        "max_size": -1,
Dec  3 18:06:14 compute-0 sharp_bouman[223399]:        "max_size_kb": 0,
Dec  3 18:06:14 compute-0 sharp_bouman[223399]:        "max_objects": -1
Dec  3 18:06:14 compute-0 sharp_bouman[223399]:    },
Dec  3 18:06:14 compute-0 sharp_bouman[223399]:    "user_quota": {
Dec  3 18:06:14 compute-0 sharp_bouman[223399]:        "enabled": false,
Dec  3 18:06:14 compute-0 sharp_bouman[223399]:        "check_on_raw": false,
Dec  3 18:06:14 compute-0 sharp_bouman[223399]:        "max_size": -1,
Dec  3 18:06:14 compute-0 sharp_bouman[223399]:        "max_size_kb": 0,
Dec  3 18:06:14 compute-0 sharp_bouman[223399]:        "max_objects": -1
Dec  3 18:06:14 compute-0 sharp_bouman[223399]:    },
Dec  3 18:06:14 compute-0 sharp_bouman[223399]:    "temp_url_keys": [],
Dec  3 18:06:14 compute-0 sharp_bouman[223399]:    "type": "rgw",
Dec  3 18:06:14 compute-0 sharp_bouman[223399]:    "mfa_ids": []
Dec  3 18:06:14 compute-0 sharp_bouman[223399]: }
Dec  3 18:06:14 compute-0 sharp_bouman[223399]: 
Dec  3 18:06:14 compute-0 systemd[1]: libpod-643e5e39eceb86aa2c6573a801877e0f595e9b82003e34841fbec36d5730563f.scope: Deactivated successfully.
Dec  3 18:06:14 compute-0 conmon[223399]: conmon 643e5e39eceb86aa2c65 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-643e5e39eceb86aa2c6573a801877e0f595e9b82003e34841fbec36d5730563f.scope/container/memory.events
Dec  3 18:06:14 compute-0 podman[223382]: 2025-12-03 18:06:14.143318589 +0000 UTC m=+0.581633138 container died 643e5e39eceb86aa2c6573a801877e0f595e9b82003e34841fbec36d5730563f (image=quay.io/ceph/ceph:v18, name=sharp_bouman, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec  3 18:06:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-3cd368552357a49c5f62f648dee45c17bbd2083463ba400cb2d139391cc874fe-merged.mount: Deactivated successfully.
Dec  3 18:06:14 compute-0 podman[223382]: 2025-12-03 18:06:14.21685366 +0000 UTC m=+0.655168159 container remove 643e5e39eceb86aa2c6573a801877e0f595e9b82003e34841fbec36d5730563f (image=quay.io/ceph/ceph:v18, name=sharp_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Dec  3 18:06:14 compute-0 systemd[1]: libpod-conmon-643e5e39eceb86aa2c6573a801877e0f595e9b82003e34841fbec36d5730563f.scope: Deactivated successfully.
Dec  3 18:06:14 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Dec  3 18:06:14 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Dec  3 18:06:14 compute-0 suspicious_benz[223383]: --> passed data devices: 0 physical, 3 LVM
Dec  3 18:06:14 compute-0 suspicious_benz[223383]: --> relative data size: 1.0
Dec  3 18:06:14 compute-0 suspicious_benz[223383]: --> All data devices are unavailable
Dec  3 18:06:14 compute-0 systemd[1]: libpod-eec3760c05ef2afc975351e1dc207ff41b11fa7aaeb1edaf18f4e716c7945434.scope: Deactivated successfully.
Dec  3 18:06:14 compute-0 podman[223365]: 2025-12-03 18:06:14.933255408 +0000 UTC m=+1.526978104 container died eec3760c05ef2afc975351e1dc207ff41b11fa7aaeb1edaf18f4e716c7945434 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_benz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec  3 18:06:14 compute-0 systemd[1]: libpod-eec3760c05ef2afc975351e1dc207ff41b11fa7aaeb1edaf18f4e716c7945434.scope: Consumed 1.203s CPU time.
Dec  3 18:06:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d395f1c0568a6c09b19d8c77652ad98f736365c2b3dcb92b1c3146d0e16602f-merged.mount: Deactivated successfully.
Dec  3 18:06:15 compute-0 podman[223365]: 2025-12-03 18:06:15.015075932 +0000 UTC m=+1.608798618 container remove eec3760c05ef2afc975351e1dc207ff41b11fa7aaeb1edaf18f4e716c7945434 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_benz, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:06:15 compute-0 systemd[1]: libpod-conmon-eec3760c05ef2afc975351e1dc207ff41b11fa7aaeb1edaf18f4e716c7945434.scope: Deactivated successfully.
Dec  3 18:06:15 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v133: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 3.4 KiB/s wr, 110 op/s
Dec  3 18:06:15 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Dec  3 18:06:15 compute-0 podman[223673]: 2025-12-03 18:06:15.888732652 +0000 UTC m=+0.045743971 container create 85b8c0340242f2f1ac135c4301bc18797ad89cc8cb70a394717018a22eab08df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  3 18:06:15 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Dec  3 18:06:15 compute-0 systemd[1]: Started libpod-conmon-85b8c0340242f2f1ac135c4301bc18797ad89cc8cb70a394717018a22eab08df.scope.
Dec  3 18:06:15 compute-0 podman[223673]: 2025-12-03 18:06:15.871002518 +0000 UTC m=+0.028013857 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:06:15 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:06:16 compute-0 podman[223673]: 2025-12-03 18:06:16.001403072 +0000 UTC m=+0.158414441 container init 85b8c0340242f2f1ac135c4301bc18797ad89cc8cb70a394717018a22eab08df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:06:16 compute-0 podman[223673]: 2025-12-03 18:06:16.018090101 +0000 UTC m=+0.175101460 container start 85b8c0340242f2f1ac135c4301bc18797ad89cc8cb70a394717018a22eab08df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_haslett, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec  3 18:06:16 compute-0 nice_haslett[223688]: 167 167
Dec  3 18:06:16 compute-0 systemd[1]: libpod-85b8c0340242f2f1ac135c4301bc18797ad89cc8cb70a394717018a22eab08df.scope: Deactivated successfully.
Dec  3 18:06:16 compute-0 podman[223673]: 2025-12-03 18:06:16.0249655 +0000 UTC m=+0.181976859 container attach 85b8c0340242f2f1ac135c4301bc18797ad89cc8cb70a394717018a22eab08df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:06:16 compute-0 conmon[223688]: conmon 85b8c0340242f2f1ac13 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-85b8c0340242f2f1ac135c4301bc18797ad89cc8cb70a394717018a22eab08df.scope/container/memory.events
Dec  3 18:06:16 compute-0 podman[223673]: 2025-12-03 18:06:16.027945393 +0000 UTC m=+0.184956772 container died 85b8c0340242f2f1ac135c4301bc18797ad89cc8cb70a394717018a22eab08df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_haslett, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:06:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-46c593eeaf54f3fb63588f0c1fb7f1b92b3cfbcc24fb783ed97d0ba712d5f439-merged.mount: Deactivated successfully.
Dec  3 18:06:16 compute-0 podman[223673]: 2025-12-03 18:06:16.104702873 +0000 UTC m=+0.261714232 container remove 85b8c0340242f2f1ac135c4301bc18797ad89cc8cb70a394717018a22eab08df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_haslett, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:06:16 compute-0 systemd[1]: libpod-conmon-85b8c0340242f2f1ac135c4301bc18797ad89cc8cb70a394717018a22eab08df.scope: Deactivated successfully.
Dec  3 18:06:16 compute-0 podman[223716]: 2025-12-03 18:06:16.322823365 +0000 UTC m=+0.064264445 container create 258db36cd66102c6b654bae680ea9dc101e4f4ec386c4347de3572cedbf2a026 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:06:16 compute-0 systemd[1]: Started libpod-conmon-258db36cd66102c6b654bae680ea9dc101e4f4ec386c4347de3572cedbf2a026.scope.
Dec  3 18:06:16 compute-0 podman[223716]: 2025-12-03 18:06:16.299163715 +0000 UTC m=+0.040604805 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:06:16 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:06:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d23ed77f5e60d15896fef496c5ab2f3f7f57e72f2b5659850908e69f1e55b12/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:06:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d23ed77f5e60d15896fef496c5ab2f3f7f57e72f2b5659850908e69f1e55b12/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:06:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d23ed77f5e60d15896fef496c5ab2f3f7f57e72f2b5659850908e69f1e55b12/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:06:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d23ed77f5e60d15896fef496c5ab2f3f7f57e72f2b5659850908e69f1e55b12/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:06:16 compute-0 podman[223716]: 2025-12-03 18:06:16.44668942 +0000 UTC m=+0.188130590 container init 258db36cd66102c6b654bae680ea9dc101e4f4ec386c4347de3572cedbf2a026 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_vaughan, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec  3 18:06:16 compute-0 podman[223716]: 2025-12-03 18:06:16.460881057 +0000 UTC m=+0.202322177 container start 258db36cd66102c6b654bae680ea9dc101e4f4ec386c4347de3572cedbf2a026 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec  3 18:06:16 compute-0 podman[223716]: 2025-12-03 18:06:16.467385047 +0000 UTC m=+0.208826157 container attach 258db36cd66102c6b654bae680ea9dc101e4f4ec386c4347de3572cedbf2a026 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Dec  3 18:06:16 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]: {
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:    "0": [
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:        {
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:            "devices": [
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:                "/dev/loop3"
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:            ],
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:            "lv_name": "ceph_lv0",
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:            "lv_size": "21470642176",
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=973fbbc8-5aff-4a53-bee8-42e5a6788dd6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:            "lv_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:            "name": "ceph_lv0",
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:            "tags": {
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:                "ceph.block_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:                "ceph.cluster_name": "ceph",
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:                "ceph.crush_device_class": "",
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:                "ceph.encrypted": "0",
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:                "ceph.osd_fsid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:                "ceph.osd_id": "0",
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:                "ceph.type": "block",
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:                "ceph.vdo": "0"
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:            },
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:            "type": "block",
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:            "vg_name": "ceph_vg0"
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:        }
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:    ],
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:    "1": [
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:        {
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:            "devices": [
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:                "/dev/loop4"
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:            ],
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:            "lv_name": "ceph_lv1",
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:            "lv_size": "21470642176",
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1e2b0083-5293-47cb-a3d1-bc27cedc4ede,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:            "lv_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:            "name": "ceph_lv1",
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:            "tags": {
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:                "ceph.block_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:                "ceph.cluster_name": "ceph",
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:                "ceph.crush_device_class": "",
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:                "ceph.encrypted": "0",
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:                "ceph.osd_fsid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:                "ceph.osd_id": "1",
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:                "ceph.type": "block",
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:                "ceph.vdo": "0"
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:            },
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:            "type": "block",
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:            "vg_name": "ceph_vg1"
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:        }
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:    ],
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:    "2": [
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:        {
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:            "devices": [
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:                "/dev/loop5"
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:            ],
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:            "lv_name": "ceph_lv2",
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:            "lv_size": "21470642176",
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2abec9de-afba-437e-9a17-384a1dd8cd50,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:            "lv_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:            "name": "ceph_lv2",
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:            "tags": {
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:                "ceph.block_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:                "ceph.cluster_name": "ceph",
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:                "ceph.crush_device_class": "",
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:                "ceph.encrypted": "0",
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:                "ceph.osd_fsid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:                "ceph.osd_id": "2",
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:                "ceph.type": "block",
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:                "ceph.vdo": "0"
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:            },
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:            "type": "block",
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:            "vg_name": "ceph_vg2"
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:        }
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]:    ]
Dec  3 18:06:17 compute-0 nervous_vaughan[223732]: }
Dec  3 18:06:17 compute-0 systemd[1]: libpod-258db36cd66102c6b654bae680ea9dc101e4f4ec386c4347de3572cedbf2a026.scope: Deactivated successfully.
Dec  3 18:06:17 compute-0 podman[223716]: 2025-12-03 18:06:17.272913867 +0000 UTC m=+1.014354987 container died 258db36cd66102c6b654bae680ea9dc101e4f4ec386c4347de3572cedbf2a026 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_vaughan, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:06:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-7d23ed77f5e60d15896fef496c5ab2f3f7f57e72f2b5659850908e69f1e55b12-merged.mount: Deactivated successfully.
Dec  3 18:06:17 compute-0 podman[223716]: 2025-12-03 18:06:17.375838318 +0000 UTC m=+1.117279388 container remove 258db36cd66102c6b654bae680ea9dc101e4f4ec386c4347de3572cedbf2a026 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:06:17 compute-0 systemd[1]: libpod-conmon-258db36cd66102c6b654bae680ea9dc101e4f4ec386c4347de3572cedbf2a026.scope: Deactivated successfully.
Dec  3 18:06:17 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v134: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 3.1 KiB/s wr, 96 op/s
Dec  3 18:06:17 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Dec  3 18:06:17 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Dec  3 18:06:18 compute-0 podman[223891]: 2025-12-03 18:06:18.364544747 +0000 UTC m=+0.071696988 container create bc4903157c0e08c89e8a9791bb5c66c0075ed179c6ed589ad125caa0e37b6b65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:06:18 compute-0 systemd[1]: Started libpod-conmon-bc4903157c0e08c89e8a9791bb5c66c0075ed179c6ed589ad125caa0e37b6b65.scope.
Dec  3 18:06:18 compute-0 podman[223891]: 2025-12-03 18:06:18.337219887 +0000 UTC m=+0.044372168 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:06:18 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:06:18 compute-0 podman[223891]: 2025-12-03 18:06:18.49741952 +0000 UTC m=+0.204571791 container init bc4903157c0e08c89e8a9791bb5c66c0075ed179c6ed589ad125caa0e37b6b65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_swirles, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Dec  3 18:06:18 compute-0 podman[223891]: 2025-12-03 18:06:18.506949914 +0000 UTC m=+0.214102165 container start bc4903157c0e08c89e8a9791bb5c66c0075ed179c6ed589ad125caa0e37b6b65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_swirles, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:06:18 compute-0 podman[223891]: 2025-12-03 18:06:18.511422663 +0000 UTC m=+0.218574934 container attach bc4903157c0e08c89e8a9791bb5c66c0075ed179c6ed589ad125caa0e37b6b65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_swirles, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:06:18 compute-0 optimistic_swirles[223905]: 167 167
Dec  3 18:06:18 compute-0 systemd[1]: libpod-bc4903157c0e08c89e8a9791bb5c66c0075ed179c6ed589ad125caa0e37b6b65.scope: Deactivated successfully.
Dec  3 18:06:18 compute-0 podman[223891]: 2025-12-03 18:06:18.516357185 +0000 UTC m=+0.223509436 container died bc4903157c0e08c89e8a9791bb5c66c0075ed179c6ed589ad125caa0e37b6b65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_swirles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:06:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-3540bf960e60bcb740b204422763a3a40d3c06d812c4643e07083c9b9c1bdabf-merged.mount: Deactivated successfully.
Dec  3 18:06:18 compute-0 podman[223891]: 2025-12-03 18:06:18.567897587 +0000 UTC m=+0.275049838 container remove bc4903157c0e08c89e8a9791bb5c66c0075ed179c6ed589ad125caa0e37b6b65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_swirles, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:06:18 compute-0 systemd[1]: libpod-conmon-bc4903157c0e08c89e8a9791bb5c66c0075ed179c6ed589ad125caa0e37b6b65.scope: Deactivated successfully.
Dec  3 18:06:18 compute-0 podman[223928]: 2025-12-03 18:06:18.774372144 +0000 UTC m=+0.065761972 container create 9b530bd38f8c966b365c01c23fff6c7a2a9a6a5a439a623400f9866dc2febf9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:06:18 compute-0 systemd[1]: Started libpod-conmon-9b530bd38f8c966b365c01c23fff6c7a2a9a6a5a439a623400f9866dc2febf9f.scope.
Dec  3 18:06:18 compute-0 podman[223928]: 2025-12-03 18:06:18.747810334 +0000 UTC m=+0.039200152 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:06:18 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:06:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5678b95c949fc91100adb30e8df5e9485824891e5c1d9f5bfd969e88c56f82f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:06:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5678b95c949fc91100adb30e8df5e9485824891e5c1d9f5bfd969e88c56f82f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:06:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5678b95c949fc91100adb30e8df5e9485824891e5c1d9f5bfd969e88c56f82f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:06:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5678b95c949fc91100adb30e8df5e9485824891e5c1d9f5bfd969e88c56f82f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:06:18 compute-0 podman[223928]: 2025-12-03 18:06:18.926189334 +0000 UTC m=+0.217579202 container init 9b530bd38f8c966b365c01c23fff6c7a2a9a6a5a439a623400f9866dc2febf9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_chaplygin, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:06:18 compute-0 podman[223928]: 2025-12-03 18:06:18.94565153 +0000 UTC m=+0.237041358 container start 9b530bd38f8c966b365c01c23fff6c7a2a9a6a5a439a623400f9866dc2febf9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_chaplygin, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:06:18 compute-0 podman[223928]: 2025-12-03 18:06:18.953711827 +0000 UTC m=+0.245101645 container attach 9b530bd38f8c966b365c01c23fff6c7a2a9a6a5a439a623400f9866dc2febf9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  3 18:06:19 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v135: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 2.0 KiB/s wr, 89 op/s
Dec  3 18:06:19 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Dec  3 18:06:19 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Dec  3 18:06:19 compute-0 podman[223972]: 2025-12-03 18:06:19.968777001 +0000 UTC m=+0.115197403 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Dec  3 18:06:19 compute-0 podman[223969]: 2025-12-03 18:06:19.979804011 +0000 UTC m=+0.138448262 container health_status 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., version=9.6, io.openshift.tags=minimal rhel9, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, name=ubi9-minimal, distribution-scope=public, io.openshift.expose-services=)
Dec  3 18:06:19 compute-0 podman[223965]: 2025-12-03 18:06:19.981267577 +0000 UTC m=+0.133371548 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 18:06:19 compute-0 podman[223977]: 2025-12-03 18:06:19.983400009 +0000 UTC m=+0.109143844 container health_status ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, distribution-scope=public, release-0.7.12=, build-date=2024-09-18T21:23:30, container_name=kepler, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  3 18:06:19 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 18:06:19 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:06:19 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 18:06:19 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:06:19 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:06:19 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:06:19 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:06:19 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:06:19 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:06:19 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:06:19 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:06:19 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:06:19 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 18:06:19 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:06:19 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:06:19 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:06:19 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 1)
Dec  3 18:06:19 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:06:19 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 1)
Dec  3 18:06:19 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:06:19 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec  3 18:06:19 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:06:19 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 1)
Dec  3 18:06:19 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0) v1
Dec  3 18:06:19 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Dec  3 18:06:20 compute-0 fervent_chaplygin[223945]: {
Dec  3 18:06:20 compute-0 fervent_chaplygin[223945]:    "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
Dec  3 18:06:20 compute-0 fervent_chaplygin[223945]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:06:20 compute-0 fervent_chaplygin[223945]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 18:06:20 compute-0 fervent_chaplygin[223945]:        "osd_id": 1,
Dec  3 18:06:20 compute-0 fervent_chaplygin[223945]:        "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:06:20 compute-0 fervent_chaplygin[223945]:        "type": "bluestore"
Dec  3 18:06:20 compute-0 fervent_chaplygin[223945]:    },
Dec  3 18:06:20 compute-0 fervent_chaplygin[223945]:    "2abec9de-afba-437e-9a17-384a1dd8cd50": {
Dec  3 18:06:20 compute-0 fervent_chaplygin[223945]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:06:20 compute-0 fervent_chaplygin[223945]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 18:06:20 compute-0 fervent_chaplygin[223945]:        "osd_id": 2,
Dec  3 18:06:20 compute-0 fervent_chaplygin[223945]:        "osd_uuid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:06:20 compute-0 fervent_chaplygin[223945]:        "type": "bluestore"
Dec  3 18:06:20 compute-0 fervent_chaplygin[223945]:    },
Dec  3 18:06:20 compute-0 fervent_chaplygin[223945]:    "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": {
Dec  3 18:06:20 compute-0 fervent_chaplygin[223945]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:06:20 compute-0 fervent_chaplygin[223945]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 18:06:20 compute-0 fervent_chaplygin[223945]:        "osd_id": 0,
Dec  3 18:06:20 compute-0 fervent_chaplygin[223945]:        "osd_uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:06:20 compute-0 fervent_chaplygin[223945]:        "type": "bluestore"
Dec  3 18:06:20 compute-0 fervent_chaplygin[223945]:    }
Dec  3 18:06:20 compute-0 fervent_chaplygin[223945]: }
Dec  3 18:06:20 compute-0 podman[223970]: 2025-12-03 18:06:20.039500614 +0000 UTC m=+0.179834477 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 18:06:20 compute-0 systemd[1]: libpod-9b530bd38f8c966b365c01c23fff6c7a2a9a6a5a439a623400f9866dc2febf9f.scope: Deactivated successfully.
Dec  3 18:06:20 compute-0 systemd[1]: libpod-9b530bd38f8c966b365c01c23fff6c7a2a9a6a5a439a623400f9866dc2febf9f.scope: Consumed 1.074s CPU time.
Dec  3 18:06:20 compute-0 podman[223928]: 2025-12-03 18:06:20.045253344 +0000 UTC m=+1.336643142 container died 9b530bd38f8c966b365c01c23fff6c7a2a9a6a5a439a623400f9866dc2febf9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  3 18:06:20 compute-0 podman[224061]: 2025-12-03 18:06:20.044873835 +0000 UTC m=+0.071023371 container health_status f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 18:06:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-c5678b95c949fc91100adb30e8df5e9485824891e5c1d9f5bfd969e88c56f82f-merged.mount: Deactivated successfully.
Dec  3 18:06:20 compute-0 podman[223928]: 2025-12-03 18:06:20.102690191 +0000 UTC m=+1.394079989 container remove 9b530bd38f8c966b365c01c23fff6c7a2a9a6a5a439a623400f9866dc2febf9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:06:20 compute-0 systemd[1]: libpod-conmon-9b530bd38f8c966b365c01c23fff6c7a2a9a6a5a439a623400f9866dc2febf9f.scope: Deactivated successfully.
Dec  3 18:06:20 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:06:20 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:06:20 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:06:20 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:06:20 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 07ab90e8-b191-4445-9bf3-6ab0fdb63716 does not exist
Dec  3 18:06:20 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev a0ab347c-63f6-42ff-9392-8a3145cf01a5 does not exist
Dec  3 18:06:20 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Dec  3 18:06:20 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Dec  3 18:06:20 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Dec  3 18:06:20 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Dec  3 18:06:20 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:06:20 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:06:20 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Dec  3 18:06:20 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Dec  3 18:06:20 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Dec  3 18:06:20 compute-0 ceph-mgr[193091]: [progress INFO root] update: starting ev 7395aba7-7423-4d23-9c94-30b07a004277 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Dec  3 18:06:20 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0) v1
Dec  3 18:06:20 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Dec  3 18:06:21 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 7.14 deep-scrub starts
Dec  3 18:06:21 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 7.14 deep-scrub ok
Dec  3 18:06:21 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Dec  3 18:06:21 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Dec  3 18:06:21 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Dec  3 18:06:21 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Dec  3 18:06:21 compute-0 ceph-mgr[193091]: [progress INFO root] update: starting ev e461d117-f1a1-4c73-82b8-7074d8ae804d (PG autoscaler increasing pool 9 PGs from 1 to 32)
Dec  3 18:06:21 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0) v1
Dec  3 18:06:21 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Dec  3 18:06:21 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Dec  3 18:06:21 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Dec  3 18:06:21 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v138: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 255 B/s wr, 2 op/s
Dec  3 18:06:21 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec  3 18:06:21 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  3 18:06:21 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec  3 18:06:21 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  3 18:06:21 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:06:21 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 6.10 scrub starts
Dec  3 18:06:21 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 6.10 scrub ok
Dec  3 18:06:22 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 7.16 deep-scrub starts
Dec  3 18:06:22 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 7.16 deep-scrub ok
Dec  3 18:06:22 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Dec  3 18:06:22 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Dec  3 18:06:22 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Dec  3 18:06:22 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Dec  3 18:06:22 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Dec  3 18:06:22 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Dec  3 18:06:22 compute-0 ceph-mgr[193091]: [progress INFO root] update: starting ev c7cad6d5-170a-4b1c-a5c7-79f46df4bf3f (PG autoscaler increasing pool 10 PGs from 1 to 32)
Dec  3 18:06:22 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0) v1
Dec  3 18:06:22 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Dec  3 18:06:22 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 54 pg[9.0( v 51'389 (0'0,51'389] local-lis/les=45/46 n=177 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=10.513069153s) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'389 lcod 51'388 mlcod 51'388 active pruub 124.966751099s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:22 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 54 pg[8.0( v 44'4 (0'0,44'4] local-lis/les=43/44 n=4 ec=43/43 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=8.506900787s) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 44'3 mlcod 44'3 active pruub 122.961227417s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:22 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 54 pg[8.0( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=43/43 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=8.506900787s) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 44'3 mlcod 0'0 unknown pruub 122.961227417s@ mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:22 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Dec  3 18:06:22 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Dec  3 18:06:22 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  3 18:06:22 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  3 18:06:22 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Dec  3 18:06:22 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Dec  3 18:06:22 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Dec  3 18:06:22 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 54 pg[9.0( v 51'389 lc 0'0 (0'0,51'389] local-lis/les=45/46 n=5 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=10.513069153s) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'389 lcod 51'388 mlcod 0'0 unknown pruub 124.966751099s@ mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:23 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Dec  3 18:06:23 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Dec  3 18:06:23 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Dec  3 18:06:23 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Dec  3 18:06:23 compute-0 ceph-mgr[193091]: [progress INFO root] update: starting ev 565a445f-ddf9-493e-a410-cca37c9061e6 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Dec  3 18:06:23 compute-0 ceph-mgr[193091]: [progress INFO root] complete: finished ev 7395aba7-7423-4d23-9c94-30b07a004277 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Dec  3 18:06:23 compute-0 ceph-mgr[193091]: [progress INFO root] Completed event 7395aba7-7423-4d23-9c94-30b07a004277 (PG autoscaler increasing pool 8 PGs from 1 to 32) in 3 seconds
Dec  3 18:06:23 compute-0 ceph-mgr[193091]: [progress INFO root] complete: finished ev e461d117-f1a1-4c73-82b8-7074d8ae804d (PG autoscaler increasing pool 9 PGs from 1 to 32)
Dec  3 18:06:23 compute-0 ceph-mgr[193091]: [progress INFO root] Completed event e461d117-f1a1-4c73-82b8-7074d8ae804d (PG autoscaler increasing pool 9 PGs from 1 to 32) in 2 seconds
Dec  3 18:06:23 compute-0 ceph-mgr[193091]: [progress INFO root] complete: finished ev c7cad6d5-170a-4b1c-a5c7-79f46df4bf3f (PG autoscaler increasing pool 10 PGs from 1 to 32)
Dec  3 18:06:23 compute-0 ceph-mgr[193091]: [progress INFO root] Completed event c7cad6d5-170a-4b1c-a5c7-79f46df4bf3f (PG autoscaler increasing pool 10 PGs from 1 to 32) in 1 seconds
Dec  3 18:06:23 compute-0 ceph-mgr[193091]: [progress INFO root] complete: finished ev 565a445f-ddf9-493e-a410-cca37c9061e6 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Dec  3 18:06:23 compute-0 ceph-mgr[193091]: [progress INFO root] Completed event 565a445f-ddf9-493e-a410-cca37c9061e6 (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[8.14( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[9.15( v 51'389 lc 0'0 (0'0,51'389] local-lis/les=45/46 n=5 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[9.14( v 51'389 lc 0'0 (0'0,51'389] local-lis/les=45/46 n=5 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[8.16( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[8.15( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[9.17( v 51'389 lc 0'0 (0'0,51'389] local-lis/les=45/46 n=5 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[8.17( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[9.16( v 51'389 lc 0'0 (0'0,51'389] local-lis/les=45/46 n=5 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[8.10( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[9.11( v 51'389 lc 0'0 (0'0,51'389] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[8.1( v 44'4 (0'0,44'4] local-lis/les=43/44 n=1 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[8.2( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=1 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[9.3( v 51'389 lc 0'0 (0'0,51'389] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[8.3( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=1 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[9.2( v 51'389 lc 0'0 (0'0,51'389] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[9.d( v 51'389 lc 0'0 (0'0,51'389] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[8.c( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[8.d( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[9.c( v 51'389 lc 0'0 (0'0,51'389] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[8.e( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[8.8( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[9.f( v 51'389 lc 0'0 (0'0,51'389] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[8.a( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[9.b( v 51'389 lc 0'0 (0'0,51'389] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[8.f( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[9.9( v 51'389 lc 0'0 (0'0,51'389] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[9.e( v 51'389 lc 0'0 (0'0,51'389] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[8.b( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[9.a( v 51'389 lc 0'0 (0'0,51'389] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[9.8( v 51'389 lc 0'0 (0'0,51'389] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[8.9( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[9.1( v 51'389 lc 0'0 (0'0,51'389] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[8.7( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[9.6( v 51'389 lc 0'0 (0'0,51'389] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[8.6( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[8.5( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[9.7( v 51'389 lc 0'0 (0'0,51'389] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[9.4( v 51'389 lc 0'0 (0'0,51'389] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[8.4( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=1 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[9.1a( v 51'389 lc 0'0 (0'0,51'389] local-lis/les=45/46 n=5 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[9.5( v 51'389 lc 0'0 (0'0,51'389] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[9.18( v 51'389 lc 0'0 (0'0,51'389] local-lis/les=45/46 n=5 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[8.1b( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[8.19( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[9.19( v 51'389 lc 0'0 (0'0,51'389] local-lis/les=45/46 n=5 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[8.18( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[9.1e( v 51'389 lc 0'0 (0'0,51'389] local-lis/les=45/46 n=5 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[8.1f( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[9.1f( v 51'389 lc 0'0 (0'0,51'389] local-lis/les=45/46 n=5 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[9.1c( v 51'389 lc 0'0 (0'0,51'389] local-lis/les=45/46 n=5 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[8.1e( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[8.1d( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[8.1c( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[8.13( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[9.1d( v 51'389 lc 0'0 (0'0,51'389] local-lis/les=45/46 n=5 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[9.12( v 51'389 lc 0'0 (0'0,51'389] local-lis/les=45/46 n=5 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[8.12( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[8.11( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[9.1b( v 51'389 lc 0'0 (0'0,51'389] local-lis/les=45/46 n=5 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[8.1a( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[9.10( v 51'389 lc 0'0 (0'0,51'389] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[9.13( v 51'389 lc 0'0 (0'0,51'389] local-lis/les=45/46 n=5 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:23 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Dec  3 18:06:23 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[8.16( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[9.15( v 51'389 (0'0,51'389] local-lis/les=54/55 n=5 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[8.14( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[9.14( v 51'389 (0'0,51'389] local-lis/les=54/55 n=5 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[8.17( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[9.16( v 51'389 (0'0,51'389] local-lis/les=54/55 n=5 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[8.10( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[9.17( v 51'389 (0'0,51'389] local-lis/les=54/55 n=5 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[8.15( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[9.0( v 51'389 (0'0,51'389] local-lis/les=54/55 n=5 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'389 lcod 51'388 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[9.11( v 51'389 (0'0,51'389] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[8.2( v 44'4 (0'0,44'4] local-lis/les=54/55 n=1 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[9.2( v 51'389 (0'0,51'389] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[8.1( v 44'4 (0'0,44'4] local-lis/les=54/55 n=1 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[8.c( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[9.d( v 51'389 (0'0,51'389] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[8.3( v 44'4 (0'0,44'4] local-lis/les=54/55 n=1 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[8.d( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[9.c( v 51'389 (0'0,51'389] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[8.e( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[9.f( v 51'389 (0'0,51'389] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[8.8( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[8.a( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[9.b( v 51'389 (0'0,51'389] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[8.f( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[9.e( v 51'389 (0'0,51'389] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[9.a( v 51'389 (0'0,51'389] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[8.b( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[9.8( v 51'389 (0'0,51'389] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[8.0( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=43/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 44'3 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[9.9( v 51'389 (0'0,51'389] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[9.1( v 51'389 (0'0,51'389] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[8.9( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[8.7( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[8.6( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[8.5( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[9.6( v 51'389 (0'0,51'389] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[8.4( v 44'4 (0'0,44'4] local-lis/les=54/55 n=1 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[9.1a( v 51'389 (0'0,51'389] local-lis/les=54/55 n=5 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[9.18( v 51'389 (0'0,51'389] local-lis/les=54/55 n=5 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[9.4( v 51'389 (0'0,51'389] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[9.5( v 51'389 (0'0,51'389] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[9.19( v 51'389 (0'0,51'389] local-lis/les=54/55 n=5 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[8.1b( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[8.18( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[9.1e( v 51'389 (0'0,51'389] local-lis/les=54/55 n=5 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[9.1c( v 51'389 (0'0,51'389] local-lis/les=54/55 n=5 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[8.1f( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[9.1f( v 51'389 (0'0,51'389] local-lis/les=54/55 n=5 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[8.1e( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[8.19( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[9.3( v 51'389 (0'0,51'389] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[9.7( v 51'389 (0'0,51'389] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[8.13( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[8.1d( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[9.1d( v 51'389 (0'0,51'389] local-lis/les=54/55 n=5 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[9.12( v 51'389 (0'0,51'389] local-lis/les=54/55 n=5 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[8.1c( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[8.11( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[9.1b( v 51'389 (0'0,51'389] local-lis/les=54/55 n=5 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[8.1a( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[9.13( v 51'389 (0'0,51'389] local-lis/les=54/55 n=5 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[9.10( v 51'389 (0'0,51'389] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:23 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 55 pg[8.12( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:23 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v141: 259 pgs: 62 unknown, 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:06:23 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec  3 18:06:23 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  3 18:06:23 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec  3 18:06:23 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  3 18:06:23 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 6.12 deep-scrub starts
Dec  3 18:06:23 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 6.12 deep-scrub ok
Dec  3 18:06:24 compute-0 ceph-mgr[193091]: [progress INFO root] Writing back 17 completed events
Dec  3 18:06:24 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Dec  3 18:06:24 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:06:24 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Dec  3 18:06:24 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  3 18:06:24 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  3 18:06:24 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:06:24 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Dec  3 18:06:24 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec  3 18:06:24 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Dec  3 18:06:24 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Dec  3 18:06:24 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 56 pg[10.0( v 51'64 (0'0,51'64] local-lis/les=47/48 n=8 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=56 pruub=10.493165970s) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 51'63 mlcod 51'63 active pruub 120.610099792s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:24 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 56 pg[10.0( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=56 pruub=10.493165970s) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 51'63 mlcod 0'0 unknown pruub 120.610099792s@ mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:24 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 6.16 scrub starts
Dec  3 18:06:24 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 6.16 scrub ok
Dec  3 18:06:25 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Dec  3 18:06:25 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Dec  3 18:06:25 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Dec  3 18:06:25 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Dec  3 18:06:25 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Dec  3 18:06:25 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Dec  3 18:06:25 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec  3 18:06:25 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 57 pg[10.1e( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:25 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 57 pg[10.1b( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:25 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 57 pg[10.d( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:25 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 57 pg[10.b( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:25 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 57 pg[10.a( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:25 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 57 pg[10.12( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:25 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 57 pg[10.11( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:25 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 57 pg[10.13( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:25 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 57 pg[10.10( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:25 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 57 pg[10.1d( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:25 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 57 pg[10.1c( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:25 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 57 pg[10.1a( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:25 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 57 pg[10.19( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:25 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 57 pg[10.18( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:25 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 57 pg[10.7( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:25 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 57 pg[10.6( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:25 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 57 pg[10.5( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:25 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 57 pg[10.4( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:25 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 57 pg[10.8( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:25 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 57 pg[10.9( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:25 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 57 pg[10.f( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:25 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 57 pg[10.c( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:25 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 57 pg[10.e( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:25 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 57 pg[10.1( v 51'64 (0'0,51'64] local-lis/les=47/48 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:25 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 57 pg[10.2( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:25 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 57 pg[10.3( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:25 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 57 pg[10.16( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:25 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 57 pg[10.14( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:25 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 57 pg[10.15( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:25 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 57 pg[10.17( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:25 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 57 pg[10.1f( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:25 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 57 pg[10.1b( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:25 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 57 pg[10.d( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:25 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 57 pg[10.a( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:25 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 57 pg[10.b( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:25 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 57 pg[10.1e( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:25 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 57 pg[10.12( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:25 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 57 pg[10.13( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:25 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 57 pg[10.18( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:25 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 57 pg[10.1a( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:25 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 57 pg[10.19( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:25 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 57 pg[10.10( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:25 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 57 pg[10.5( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:25 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 57 pg[10.6( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:25 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 57 pg[10.7( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:25 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 57 pg[10.4( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:25 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 57 pg[10.9( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:25 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 57 pg[10.c( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:25 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 57 pg[10.f( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:25 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 57 pg[10.8( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:25 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 57 pg[10.0( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 51'63 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:25 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 57 pg[10.1( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:25 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 57 pg[10.2( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:25 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 57 pg[10.e( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:25 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 57 pg[10.3( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:25 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 57 pg[10.14( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:25 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 57 pg[10.15( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:25 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 57 pg[10.16( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:25 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 57 pg[10.17( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:25 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 57 pg[10.1f( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:25 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 57 pg[10.11( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:25 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 57 pg[10.1c( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:25 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 57 pg[10.1d( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:25 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v144: 321 pgs: 124 unknown, 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:06:26 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Dec  3 18:06:26 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Dec  3 18:06:26 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 56 pg[11.0( v 51'2 (0'0,51'2] local-lis/les=49/50 n=2 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=56 pruub=10.856311798s) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 51'1 mlcod 51'1 active pruub 129.014770508s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:26 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 57 pg[11.0( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=56 pruub=10.856311798s) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 51'1 mlcod 0'0 unknown pruub 129.014770508s@ mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:26 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 57 pg[11.3( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:26 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 57 pg[11.4( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:26 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 57 pg[11.17( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:26 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 57 pg[11.18( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:26 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 57 pg[11.19( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:26 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 57 pg[11.1a( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:26 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 57 pg[11.2( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=1 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:26 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 57 pg[11.1( v 51'2 (0'0,51'2] local-lis/les=49/50 n=1 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:26 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 57 pg[11.13( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:26 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 57 pg[11.14( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:26 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 57 pg[11.15( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:26 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 57 pg[11.16( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:26 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 57 pg[11.9( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:26 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 57 pg[11.a( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:26 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 57 pg[11.7( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:26 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 57 pg[11.8( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:26 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 57 pg[11.1f( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:26 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 57 pg[11.6( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:26 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 57 pg[11.1b( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:26 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 57 pg[11.1c( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:26 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 57 pg[11.5( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:26 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 57 pg[11.1d( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:26 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 57 pg[11.1e( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:26 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 57 pg[11.d( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:26 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 57 pg[11.e( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:26 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 57 pg[11.f( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:26 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 57 pg[11.c( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:26 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 57 pg[11.11( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:26 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 57 pg[11.12( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:26 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 57 pg[11.10( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:26 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 57 pg[11.b( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:26 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Dec  3 18:06:26 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Dec  3 18:06:26 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Dec  3 18:06:26 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 58 pg[11.17( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:26 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 58 pg[11.16( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:26 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 58 pg[11.15( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:26 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 58 pg[11.13( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:26 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 58 pg[11.14( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:26 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 58 pg[11.2( v 51'2 (0'0,51'2] local-lis/les=56/58 n=1 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:26 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 58 pg[11.1( v 51'2 (0'0,51'2] local-lis/les=56/58 n=1 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:26 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 58 pg[11.0( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 51'1 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:26 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 58 pg[11.f( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:26 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 58 pg[11.e( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:26 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 58 pg[11.b( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:26 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 58 pg[11.9( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:26 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 58 pg[11.c( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:26 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 58 pg[11.d( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:26 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 58 pg[11.8( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:26 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 58 pg[11.a( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:26 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 58 pg[11.4( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:26 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 58 pg[11.3( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:26 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 58 pg[11.6( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:26 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 58 pg[11.5( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:26 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 58 pg[11.18( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:26 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 58 pg[11.7( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:26 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 58 pg[11.1a( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:26 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 58 pg[11.1b( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:26 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 58 pg[11.1c( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:26 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 58 pg[11.1d( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:26 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 58 pg[11.11( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:26 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 58 pg[11.1e( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:26 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 58 pg[11.1f( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:26 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 58 pg[11.19( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:26 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 58 pg[11.12( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:26 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 58 pg[11.10( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:26 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:06:26 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 6.18 scrub starts
Dec  3 18:06:26 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 6.18 scrub ok
Dec  3 18:06:27 compute-0 systemd-logind[784]: New session 41 of user zuul.
Dec  3 18:06:27 compute-0 systemd[1]: Started Session 41 of User zuul.
Dec  3 18:06:27 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v146: 321 pgs: 31 unknown, 290 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:06:28 compute-0 python3.9[224316]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 18:06:28 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 6.f scrub starts
Dec  3 18:06:29 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 6.f scrub ok
Dec  3 18:06:29 compute-0 podman[158200]: time="2025-12-03T18:06:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:06:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:06:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32819 "" "Go-http-client/1.1"
Dec  3 18:06:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:06:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6792 "" "Go-http-client/1.1"
Dec  3 18:06:29 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v147: 321 pgs: 31 unknown, 290 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:06:30 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Dec  3 18:06:30 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Dec  3 18:06:31 compute-0 python3.9[224542]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:06:31 compute-0 openstack_network_exporter[160319]: ERROR   18:06:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:06:31 compute-0 openstack_network_exporter[160319]: ERROR   18:06:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:06:31 compute-0 openstack_network_exporter[160319]: ERROR   18:06:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:06:31 compute-0 openstack_network_exporter[160319]: ERROR   18:06:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:06:31 compute-0 openstack_network_exporter[160319]: 
Dec  3 18:06:31 compute-0 openstack_network_exporter[160319]: ERROR   18:06:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:06:31 compute-0 openstack_network_exporter[160319]: 
Dec  3 18:06:31 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Dec  3 18:06:31 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Dec  3 18:06:31 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v148: 321 pgs: 321 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:06:31 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:06:31 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec  3 18:06:31 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  3 18:06:31 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec  3 18:06:31 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  3 18:06:31 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0) v1
Dec  3 18:06:31 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Dec  3 18:06:31 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec  3 18:06:31 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  3 18:06:31 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Dec  3 18:06:31 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  3 18:06:31 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  3 18:06:31 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Dec  3 18:06:31 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  3 18:06:31 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Dec  3 18:06:31 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Dec  3 18:06:31 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 59 pg[10.1e( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=9.784678459s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 127.133094788s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:31 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 59 pg[10.1e( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=9.784598351s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.133094788s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:31 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 59 pg[10.d( v 58'65 (0'0,58'65] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=9.784351349s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=51'64 lcod 51'64 mlcod 51'64 active pruub 127.132987976s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:31 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 59 pg[10.d( v 58'65 (0'0,58'65] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=9.784227371s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=51'64 lcod 51'64 mlcod 0'0 unknown NOTIFY pruub 127.132987976s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:31 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 59 pg[10.b( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=9.784066200s) [1] r=-1 lpr=59 pi=[56,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 127.133041382s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:31 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 59 pg[10.b( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=9.784045219s) [1] r=-1 lpr=59 pi=[56,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.133041382s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:31 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 59 pg[10.12( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=9.784187317s) [1] r=-1 lpr=59 pi=[56,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 127.133178711s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:31 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 59 pg[10.12( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=9.784128189s) [1] r=-1 lpr=59 pi=[56,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.133178711s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:31 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 59 pg[10.11( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=9.791540146s) [1] r=-1 lpr=59 pi=[56,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 127.140693665s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:31 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 59 pg[10.11( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=9.791522980s) [1] r=-1 lpr=59 pi=[56,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.140693665s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:31 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 59 pg[10.10( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=9.791370392s) [1] r=-1 lpr=59 pi=[56,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 127.140617371s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:31 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 59 pg[10.10( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=9.791349411s) [1] r=-1 lpr=59 pi=[56,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.140617371s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:31 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 59 pg[10.1a( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=9.791568756s) [1] r=-1 lpr=59 pi=[56,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 127.140945435s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:31 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 59 pg[10.1a( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=9.791554451s) [1] r=-1 lpr=59 pi=[56,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.140945435s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:31 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 59 pg[10.19( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=9.791540146s) [1] r=-1 lpr=59 pi=[56,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 127.140968323s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:31 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 59 pg[10.19( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=9.791521072s) [1] r=-1 lpr=59 pi=[56,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.140968323s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:31 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 59 pg[10.7( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=9.791456223s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 127.141029358s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:31 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 59 pg[10.7( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=9.791440964s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.141029358s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:31 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 59 pg[10.13( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=9.790830612s) [1] r=-1 lpr=59 pi=[56,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 127.140586853s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:31 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 59 pg[10.6( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=9.791258812s) [1] r=-1 lpr=59 pi=[56,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 127.141029358s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:31 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 59 pg[10.13( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=9.790812492s) [1] r=-1 lpr=59 pi=[56,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.140586853s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:31 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 59 pg[10.6( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=9.791235924s) [1] r=-1 lpr=59 pi=[56,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.141029358s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:31 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 59 pg[10.4( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=9.791110039s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 127.141059875s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:31 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 59 pg[10.4( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=9.791094780s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.141059875s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:31 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 59 pg[10.8( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=9.790997505s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 127.141151428s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:31 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 59 pg[10.8( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=9.790979385s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.141151428s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:31 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 59 pg[10.f( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=9.790753365s) [1] r=-1 lpr=59 pi=[56,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 127.141090393s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:31 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 59 pg[10.9( v 58'65 (0'0,58'65] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=9.790696144s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=51'64 lcod 51'64 mlcod 51'64 active pruub 127.141059875s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:31 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 59 pg[10.f( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=9.790729523s) [1] r=-1 lpr=59 pi=[56,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.141090393s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:31 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 59 pg[10.9( v 58'65 (0'0,58'65] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=9.790663719s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=51'64 lcod 51'64 mlcod 0'0 unknown NOTIFY pruub 127.141059875s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:31 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 59 pg[10.e( v 58'65 (0'0,58'65] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=9.790658951s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=51'64 lcod 51'64 mlcod 51'64 active pruub 127.141250610s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:31 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 59 pg[10.e( v 58'65 (0'0,58'65] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=9.790615082s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=51'64 lcod 51'64 mlcod 0'0 unknown NOTIFY pruub 127.141250610s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:31 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 59 pg[10.1( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=9.790542603s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 127.141227722s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:31 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 59 pg[10.1( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=9.790525436s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.141227722s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:31 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 59 pg[10.2( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=9.790403366s) [1] r=-1 lpr=59 pi=[56,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 127.141250610s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:31 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 59 pg[10.2( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=9.790388107s) [1] r=-1 lpr=59 pi=[56,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.141250610s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:31 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 59 pg[10.14( v 58'65 (0'0,58'65] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=9.790319443s) [1] r=-1 lpr=59 pi=[56,59)/1 crt=51'64 lcod 51'64 mlcod 51'64 active pruub 127.141288757s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:31 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 59 pg[10.14( v 58'65 (0'0,58'65] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=9.790300369s) [1] r=-1 lpr=59 pi=[56,59)/1 crt=51'64 lcod 51'64 mlcod 0'0 unknown NOTIFY pruub 127.141288757s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:31 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 59 pg[10.15( v 58'65 (0'0,58'65] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=9.790223122s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=51'64 lcod 51'64 mlcod 51'64 active pruub 127.141304016s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:31 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 59 pg[10.15( v 58'65 (0'0,58'65] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=9.790204048s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=51'64 lcod 51'64 mlcod 0'0 unknown NOTIFY pruub 127.141304016s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:31 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 59 pg[10.16( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=9.790169716s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 127.141319275s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:31 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 59 pg[10.17( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=9.790118217s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 127.141342163s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:31 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 59 pg[10.16( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=9.790114403s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.141319275s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:31 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 59 pg[10.17( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=9.790102959s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.141342163s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:31 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 59 pg[10.9( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59) [0] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:31 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 59 pg[10.8( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59) [0] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:31 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 59 pg[10.15( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59) [0] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:31 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 59 pg[10.4( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59) [0] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:31 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 59 pg[10.7( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59) [0] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:31 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 59 pg[10.17( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59) [0] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:31 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 59 pg[10.d( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59) [0] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:31 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 59 pg[10.e( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59) [0] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:31 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 59 pg[10.1e( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59) [0] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:31 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 59 pg[10.16( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59) [0] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:31 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 59 pg[10.1( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59) [0] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:31 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[10.13( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:31 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[10.10( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:31 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[10.11( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:31 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[10.1a( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:31 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[10.19( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:31 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[10.6( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:31 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[10.2( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:31 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[10.b( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:31 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[10.f( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:31 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[10.12( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:31 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[10.14( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:31 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[11.17( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.797781944s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 134.562927246s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:31 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[11.17( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.797695160s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.562927246s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:31 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[8.14( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=15.702232361s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 139.467605591s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:31 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[8.14( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=15.701779366s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.467605591s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:31 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 59 pg[11.17( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:31 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 59 pg[8.15( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59) [2] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:31 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 59 pg[11.15( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:31 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[9.15( v 51'389 (0'0,51'389] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=15.701043129s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 139.467544556s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:31 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[9.15( v 51'389 (0'0,51'389] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=15.701012611s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.467544556s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:31 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[8.15( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=15.700944901s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 139.467666626s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:31 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[8.15( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=15.700917244s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.467666626s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:31 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[11.15( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.803165436s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 134.570083618s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:31 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[11.15( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.803138733s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.570083618s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:31 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[9.17( v 51'389 (0'0,51'389] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=15.700536728s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 139.467666626s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:31 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[9.17( v 51'389 (0'0,51'389] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=15.700510025s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.467666626s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:31 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[11.14( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.802146912s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 134.570144653s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:31 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[11.14( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.802121162s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.570144653s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:31 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[8.10( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=15.699078560s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 139.467651367s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:31 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[8.10( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=15.698991776s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.467651367s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:31 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[11.2( v 51'2 (0'0,51'2] local-lis/les=56/58 n=1 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.800395966s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 134.570175171s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:31 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[11.2( v 51'2 (0'0,51'2] local-lis/les=56/58 n=1 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.800306320s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.570175171s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:31 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[9.11( v 51'389 (0'0,51'389] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=15.696998596s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 139.467681885s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:31 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 59 pg[11.2( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:31 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 59 pg[8.14( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:31 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[9.11( v 51'389 (0'0,51'389] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=15.696924210s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.467681885s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:31 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 59 pg[9.15( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:31 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 59 pg[9.17( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:31 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 59 pg[11.14( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:31 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 59 pg[8.10( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:31 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 59 pg[9.11( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:31 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[11.1( v 51'2 (0'0,51'2] local-lis/les=56/58 n=1 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.789646149s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 134.570327759s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:31 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[11.1( v 51'2 (0'0,51'2] local-lis/les=56/58 n=1 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.789602280s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.570327759s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:31 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[9.3( v 51'389 (0'0,51'389] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=15.694416046s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 139.475372314s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:31 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[9.3( v 51'389 (0'0,51'389] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=15.694350243s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.475372314s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:31 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[11.f( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.789126396s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 134.570404053s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:31 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[11.f( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.789091110s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.570404053s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:31 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[8.c( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=15.692017555s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 139.473541260s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:31 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[8.2( v 44'4 (0'0,44'4] local-lis/les=54/55 n=1 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=15.686176300s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 139.467697144s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:31 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[9.d( v 51'389 (0'0,51'389] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=15.691952705s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 139.473495483s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:31 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[9.d( v 51'389 (0'0,51'389] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=15.691919327s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.473495483s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:31 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[8.c( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=15.691904068s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.473541260s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:31 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[8.2( v 44'4 (0'0,44'4] local-lis/les=54/55 n=1 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=15.685964584s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.467697144s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:31 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[11.e( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.788655281s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 134.570449829s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:31 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 59 pg[8.2( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59) [2] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:31 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[11.e( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.788614273s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.570449829s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:31 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[8.d( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=15.691658974s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 139.473541260s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:31 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[8.d( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=15.691617966s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.473541260s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:31 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[9.f( v 51'389 (0'0,51'389] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=15.691483498s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 139.473571777s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:31 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[11.d( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.788412094s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 134.570541382s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:31 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[11.d( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.788385391s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.570541382s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:31 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[11.b( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.788283348s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 134.570465088s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:31 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[11.b( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.788159370s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.570465088s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:31 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[9.f( v 51'389 (0'0,51'389] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=15.691017151s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.473571777s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:31 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 59 pg[8.d( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59) [2] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:31 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 59 pg[11.d( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:31 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 59 pg[11.b( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:31 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[9.9( v 51'389 (0'0,51'389] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=15.689795494s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 139.473648071s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:31 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[9.9( v 51'389 (0'0,51'389] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=15.689764023s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.473648071s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:31 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[11.9( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.786478043s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 134.570495605s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:31 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[11.9( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.786454201s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.570495605s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:31 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[9.b( v 51'389 (0'0,51'389] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=15.689304352s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 139.473587036s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:31 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[9.b( v 51'389 (0'0,51'389] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=15.689278603s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.473587036s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:31 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[8.f( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=15.689083099s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 139.473602295s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:31 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[8.f( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=15.689055443s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.473602295s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:31 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 59 pg[11.1( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:31 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[11.8( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.785840034s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 134.570571899s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:31 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[11.8( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.785816193s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.570571899s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:31 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[8.b( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=15.688707352s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 139.473602295s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:31 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[8.b( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=15.688685417s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.473602295s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:31 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[8.9( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=15.688529968s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 139.473693848s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:31 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[8.9( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=15.688450813s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.473693848s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:31 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 59 pg[9.3( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:31 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[11.3( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.785207748s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 134.570617676s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:31 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[11.3( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.785177231s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.570617676s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:31 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[9.1( v 51'389 (0'0,51'389] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=15.687989235s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 139.473678589s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:31 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[9.1( v 51'389 (0'0,51'389] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=15.687957764s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.473678589s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:31 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 59 pg[11.f( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:31 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[11.4( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.784506798s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 134.570617676s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:31 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 59 pg[9.d( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:31 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 59 pg[8.c( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:31 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 59 pg[11.e( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:32 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 59 pg[11.9( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:32 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 59 pg[11.8( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:32 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 59 pg[11.3( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[8.6( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=15.686215401s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 139.473709106s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[8.6( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=15.686134338s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.473709106s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[11.6( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.782879829s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 134.570632935s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[9.7( v 51'389 (0'0,51'389] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=15.686728477s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 139.474487305s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[11.6( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.782850266s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.570632935s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[9.7( v 51'389 (0'0,51'389] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=15.686672211s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.474487305s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[11.4( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.784473419s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.570617676s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[8.4( v 44'4 (0'0,44'4] local-lis/les=54/55 n=1 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=15.685750008s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 139.473876953s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[8.4( v 44'4 (0'0,44'4] local-lis/les=54/55 n=1 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=15.685722351s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.473876953s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[9.5( v 51'389 (0'0,51'389] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=15.686282158s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 139.474578857s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[11.18( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.782306671s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 134.570663452s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[9.5( v 51'389 (0'0,51'389] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=15.686249733s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.474578857s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[11.18( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.782278061s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.570663452s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[11.1a( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.782172203s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 134.570724487s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[11.1a( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.782147408s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.570724487s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[8.1b( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=15.686049461s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 139.474655151s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[8.1b( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=15.686017036s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.474655151s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[11.1b( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.781702995s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 134.570739746s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[11.1b( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.781674385s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.570739746s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[9.19( v 51'389 (0'0,51'389] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=15.685386658s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 139.474655151s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[9.19( v 51'389 (0'0,51'389] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=15.685359001s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.474655151s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[8.18( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=15.685277939s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 139.474655151s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[11.1c( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.781343460s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 134.570770264s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[8.18( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=15.685235023s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.474655151s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[11.1c( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.781302452s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.570770264s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[8.1f( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=15.685083389s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 139.474685669s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[8.1f( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=15.685057640s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.474685669s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[9.1f( v 51'389 (0'0,51'389] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=15.684932709s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 139.474700928s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[9.1f( v 51'389 (0'0,51'389] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=15.684908867s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.474700928s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:32 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 59 pg[8.4( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59) [2] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[11.1e( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.780808449s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 134.570816040s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[11.1e( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.780773163s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.570816040s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[8.e( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=15.684215546s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 139.473571777s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[8.e( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=15.683137894s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.473571777s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[8.1d( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=15.685791016s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 139.476470947s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:32 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 59 pg[9.f( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[8.1d( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=15.685533524s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.476470947s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[9.1d( v 51'389 (0'0,51'389] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=15.685420990s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 139.476470947s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[8.1c( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=15.685995102s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 139.477096558s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[9.1d( v 51'389 (0'0,51'389] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=15.685394287s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.476470947s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[8.1c( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=15.685972214s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.477096558s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[11.10( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.779592514s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 134.570892334s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[11.11( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.779459000s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 134.570816040s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[11.10( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.779553413s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.570892334s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[11.11( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.779432297s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.570816040s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[11.1f( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.779240608s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 134.570846558s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[8.12( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=15.685263634s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 139.476943970s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[11.1f( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.779191017s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.570846558s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[8.12( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=15.685227394s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.476943970s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[9.13( v 51'389 (0'0,51'389] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=15.685137749s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 139.477035522s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[9.13( v 51'389 (0'0,51'389] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=15.685106277s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.477035522s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[11.12( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.778711319s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 134.570861816s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[11.12( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.778630257s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.570861816s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[8.11( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=15.684641838s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 139.476974487s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[8.11( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=15.684614182s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.476974487s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[9.1b( v 51'389 (0'0,51'389] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=15.684518814s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 139.477005005s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[11.19( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.778359413s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 134.570861816s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[9.1b( v 51'389 (0'0,51'389] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=15.684497833s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.477005005s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[11.19( v 51'2 (0'0,51'2] local-lis/les=56/58 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.778324127s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.570861816s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:32 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 59 pg[11.18( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:32 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 59 pg[11.1a( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:32 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 59 pg[8.1b( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59) [2] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:32 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 59 pg[9.9( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:32 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 59 pg[9.b( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:32 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 59 pg[11.1b( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:32 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 59 pg[8.f( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:32 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 59 pg[11.1c( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:32 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 59 pg[11.1e( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:32 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 59 pg[8.b( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:32 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 59 pg[8.9( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:32 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 59 pg[8.1c( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59) [2] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:32 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 59 pg[11.11( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:32 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 59 pg[8.12( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59) [2] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[8.1a( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=15.680522919s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 139.477020264s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:32 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 59 pg[9.1( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 59 pg[8.1a( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=15.680466652s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.477020264s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:32 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 59 pg[11.12( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:32 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 59 pg[8.11( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59) [2] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:32 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 59 pg[11.1f( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:32 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 59 pg[8.6( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:32 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 59 pg[11.6( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:32 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 59 pg[9.7( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:32 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 59 pg[9.5( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:32 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 59 pg[11.4( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:32 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 59 pg[9.19( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:32 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 59 pg[8.18( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:32 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 59 pg[8.1f( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:32 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 59 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:32 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 59 pg[8.e( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:32 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 59 pg[8.1d( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:32 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 59 pg[9.1d( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:32 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 59 pg[11.10( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:32 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 59 pg[9.13( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:32 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 59 pg[9.1b( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:32 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 59 pg[11.19( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:32 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 59 pg[8.1a( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:32 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  3 18:06:32 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  3 18:06:32 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Dec  3 18:06:32 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  3 18:06:32 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  3 18:06:32 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  3 18:06:32 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Dec  3 18:06:32 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  3 18:06:32 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Dec  3 18:06:32 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Dec  3 18:06:32 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 60 pg[9.15( v 51'389 (0'0,51'389] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] r=0 lpr=60 pi=[54,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 60 pg[9.15( v 51'389 (0'0,51'389] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] r=0 lpr=60 pi=[54,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 60 pg[9.17( v 51'389 (0'0,51'389] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] r=0 lpr=60 pi=[54,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 60 pg[9.17( v 51'389 (0'0,51'389] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] r=0 lpr=60 pi=[54,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 60 pg[9.11( v 51'389 (0'0,51'389] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] r=0 lpr=60 pi=[54,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 60 pg[9.11( v 51'389 (0'0,51'389] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] r=0 lpr=60 pi=[54,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 60 pg[9.3( v 51'389 (0'0,51'389] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] r=0 lpr=60 pi=[54,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 60 pg[9.3( v 51'389 (0'0,51'389] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] r=0 lpr=60 pi=[54,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 60 pg[9.d( v 51'389 (0'0,51'389] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] r=0 lpr=60 pi=[54,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 60 pg[9.d( v 51'389 (0'0,51'389] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] r=0 lpr=60 pi=[54,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 60 pg[9.f( v 51'389 (0'0,51'389] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] r=0 lpr=60 pi=[54,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 60 pg[9.f( v 51'389 (0'0,51'389] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] r=0 lpr=60 pi=[54,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 60 pg[9.9( v 51'389 (0'0,51'389] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] r=0 lpr=60 pi=[54,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 60 pg[9.9( v 51'389 (0'0,51'389] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] r=0 lpr=60 pi=[54,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 60 pg[9.b( v 51'389 (0'0,51'389] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] r=0 lpr=60 pi=[54,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 60 pg[9.b( v 51'389 (0'0,51'389] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] r=0 lpr=60 pi=[54,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 60 pg[9.1( v 51'389 (0'0,51'389] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] r=0 lpr=60 pi=[54,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 60 pg[9.1( v 51'389 (0'0,51'389] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] r=0 lpr=60 pi=[54,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 60 pg[9.7( v 51'389 (0'0,51'389] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] r=0 lpr=60 pi=[54,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 60 pg[9.7( v 51'389 (0'0,51'389] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] r=0 lpr=60 pi=[54,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 60 pg[9.5( v 51'389 (0'0,51'389] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] r=0 lpr=60 pi=[54,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 60 pg[9.5( v 51'389 (0'0,51'389] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] r=0 lpr=60 pi=[54,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 60 pg[9.19( v 51'389 (0'0,51'389] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] r=0 lpr=60 pi=[54,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 60 pg[9.19( v 51'389 (0'0,51'389] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] r=0 lpr=60 pi=[54,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 60 pg[9.1d( v 51'389 (0'0,51'389] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] r=0 lpr=60 pi=[54,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 60 pg[9.1d( v 51'389 (0'0,51'389] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] r=0 lpr=60 pi=[54,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 60 pg[9.13( v 51'389 (0'0,51'389] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] r=0 lpr=60 pi=[54,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 60 pg[9.13( v 51'389 (0'0,51'389] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] r=0 lpr=60 pi=[54,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 60 pg[9.1b( v 51'389 (0'0,51'389] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] r=0 lpr=60 pi=[54,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 60 pg[9.1b( v 51'389 (0'0,51'389] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] r=0 lpr=60 pi=[54,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 60 pg[9.1f( v 51'389 (0'0,51'389] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] r=0 lpr=60 pi=[54,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 60 pg[9.1f( v 51'389 (0'0,51'389] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] r=0 lpr=60 pi=[54,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:32 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 60 pg[9.5( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] r=-1 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:32 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 60 pg[9.5( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] r=-1 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:32 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 60 pg[9.b( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] r=-1 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:32 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 60 pg[9.b( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] r=-1 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:32 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 60 pg[8.15( v 44'4 (0'0,44'4] local-lis/les=59/60 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59) [2] r=0 lpr=59 pi=[54,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:32 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 60 pg[11.15( v 51'2 (0'0,51'2] local-lis/les=59/60 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:32 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 60 pg[11.2( v 51'2 (0'0,51'2] local-lis/les=59/60 n=1 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:32 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 60 pg[8.2( v 44'4 (0'0,44'4] local-lis/les=59/60 n=1 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59) [2] r=0 lpr=59 pi=[54,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:32 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 60 pg[11.3( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=59/60 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=51'2 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:32 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 60 pg[11.d( v 51'2 (0'0,51'2] local-lis/les=59/60 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:32 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 60 pg[8.d( v 44'4 (0'0,44'4] local-lis/les=59/60 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59) [2] r=0 lpr=59 pi=[54,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:32 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 60 pg[8.4( v 44'4 (0'0,44'4] local-lis/les=59/60 n=1 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59) [2] r=0 lpr=59 pi=[54,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:32 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 60 pg[11.8( v 51'2 (0'0,51'2] local-lis/les=59/60 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:32 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 60 pg[11.18( v 51'2 (0'0,51'2] local-lis/les=59/60 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:32 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 60 pg[8.1b( v 44'4 (0'0,44'4] local-lis/les=59/60 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59) [2] r=0 lpr=59 pi=[54,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:32 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 60 pg[11.9( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=59/60 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=51'2 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:32 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 60 pg[11.1b( v 51'2 (0'0,51'2] local-lis/les=59/60 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:32 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 60 pg[11.1c( v 51'2 (0'0,51'2] local-lis/les=59/60 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:32 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 60 pg[11.1e( v 51'2 (0'0,51'2] local-lis/les=59/60 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:32 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 60 pg[11.11( v 51'2 (0'0,51'2] local-lis/les=59/60 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:32 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 60 pg[11.12( v 51'2 (0'0,51'2] local-lis/les=59/60 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:32 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 60 pg[11.b( v 51'2 (0'0,51'2] local-lis/les=59/60 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 60 pg[10.14( v 58'65 lc 51'54 (0'0,58'65] local-lis/les=59/60 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=58'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 60 pg[10.12( v 51'64 (0'0,51'64] local-lis/les=59/60 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 60 pg[10.f( v 51'64 (0'0,51'64] local-lis/les=59/60 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 60 pg[10.b( v 51'64 (0'0,51'64] local-lis/les=59/60 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 60 pg[10.2( v 51'64 (0'0,51'64] local-lis/les=59/60 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 60 pg[10.6( v 51'64 (0'0,51'64] local-lis/les=59/60 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 60 pg[10.19( v 51'64 (0'0,51'64] local-lis/les=59/60 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 60 pg[10.1a( v 51'64 (0'0,51'64] local-lis/les=59/60 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 60 pg[10.11( v 51'64 (0'0,51'64] local-lis/les=59/60 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 60 pg[10.10( v 51'64 (0'0,51'64] local-lis/les=59/60 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 60 pg[10.13( v 51'64 (0'0,51'64] local-lis/les=59/60 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:32 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 60 pg[9.7( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] r=-1 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:32 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 60 pg[9.7( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] r=-1 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:32 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 60 pg[9.17( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] r=-1 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:32 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 60 pg[9.17( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] r=-1 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:32 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 60 pg[9.9( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] r=-1 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:32 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 60 pg[9.9( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] r=-1 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:32 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 60 pg[9.f( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] r=-1 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:32 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 60 pg[9.f( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] r=-1 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:32 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 60 pg[9.d( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] r=-1 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:32 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 60 pg[9.d( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] r=-1 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:32 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 60 pg[9.1( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] r=-1 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:32 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 60 pg[9.1( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] r=-1 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:32 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 60 pg[9.3( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] r=-1 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:32 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 60 pg[9.3( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] r=-1 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:32 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 60 pg[9.1d( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] r=-1 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:32 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 60 pg[9.1d( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] r=-1 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:32 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 60 pg[9.13( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] r=-1 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:32 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 60 pg[9.13( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] r=-1 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:32 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 60 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] r=-1 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:32 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 60 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] r=-1 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:32 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 60 pg[9.11( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] r=-1 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:32 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 60 pg[9.11( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] r=-1 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:32 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 60 pg[9.19( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] r=-1 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:32 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 60 pg[9.19( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] r=-1 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:32 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 60 pg[9.15( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] r=-1 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:32 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 60 pg[9.15( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] r=-1 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:32 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 60 pg[11.1a( v 51'2 (0'0,51'2] local-lis/les=59/60 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:32 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 60 pg[11.1f( v 51'2 (0'0,51'2] local-lis/les=59/60 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:32 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 60 pg[8.1c( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=59/60 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59) [2] r=0 lpr=59 pi=[54,59)/1 crt=44'4 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:32 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 60 pg[8.11( v 44'4 (0'0,44'4] local-lis/les=59/60 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59) [2] r=0 lpr=59 pi=[54,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:32 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 60 pg[8.12( v 44'4 (0'0,44'4] local-lis/les=59/60 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59) [2] r=0 lpr=59 pi=[54,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:32 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 60 pg[9.1b( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] r=-1 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:32 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 60 pg[9.1b( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] r=-1 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:32 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 60 pg[11.10( v 51'2 (0'0,51'2] local-lis/les=59/60 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[56,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:32 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 60 pg[10.9( v 58'65 lc 51'56 (0'0,58'65] local-lis/les=59/60 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59) [0] r=0 lpr=59 pi=[56,59)/1 crt=58'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:32 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 60 pg[8.b( v 44'4 (0'0,44'4] local-lis/les=59/60 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:32 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 60 pg[11.4( v 51'2 (0'0,51'2] local-lis/les=59/60 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[56,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:32 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 60 pg[10.8( v 51'64 (0'0,51'64] local-lis/les=59/60 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59) [0] r=0 lpr=59 pi=[56,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:32 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 60 pg[11.14( v 51'2 (0'0,51'2] local-lis/les=59/60 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[56,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:32 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 60 pg[10.4( v 51'64 (0'0,51'64] local-lis/les=59/60 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59) [0] r=0 lpr=59 pi=[56,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:32 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 60 pg[8.6( v 44'4 (0'0,44'4] local-lis/les=59/60 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:32 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 60 pg[10.7( v 51'64 (0'0,51'64] local-lis/les=59/60 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59) [0] r=0 lpr=59 pi=[56,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:32 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 60 pg[8.9( v 44'4 (0'0,44'4] local-lis/les=59/60 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:32 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 60 pg[11.6( v 51'2 (0'0,51'2] local-lis/les=59/60 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[56,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:32 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 60 pg[10.17( v 51'64 (0'0,51'64] local-lis/les=59/60 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59) [0] r=0 lpr=59 pi=[56,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:32 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 60 pg[11.e( v 51'2 (0'0,51'2] local-lis/les=59/60 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[56,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:32 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 60 pg[8.e( v 44'4 (0'0,44'4] local-lis/les=59/60 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:32 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 60 pg[10.d( v 58'65 lc 51'50 (0'0,58'65] local-lis/les=59/60 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59) [0] r=0 lpr=59 pi=[56,59)/1 crt=58'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:33 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 60 pg[10.e( v 58'65 lc 51'48 (0'0,58'65] local-lis/les=59/60 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59) [0] r=0 lpr=59 pi=[56,59)/1 crt=58'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:33 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 60 pg[8.10( v 44'4 (0'0,44'4] local-lis/les=59/60 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:33 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 60 pg[11.f( v 51'2 (0'0,51'2] local-lis/les=59/60 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[56,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:33 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 60 pg[11.1( v 51'2 (0'0,51'2] local-lis/les=59/60 n=1 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[56,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:33 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 60 pg[10.1e( v 51'64 (0'0,51'64] local-lis/les=59/60 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59) [0] r=0 lpr=59 pi=[56,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:33 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 60 pg[8.18( v 44'4 (0'0,44'4] local-lis/les=59/60 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:33 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 60 pg[8.1a( v 44'4 (0'0,44'4] local-lis/les=59/60 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:33 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 60 pg[11.19( v 51'2 (0'0,51'2] local-lis/les=59/60 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[56,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:33 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 60 pg[10.16( v 51'64 (0'0,51'64] local-lis/les=59/60 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59) [0] r=0 lpr=59 pi=[56,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:33 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 60 pg[11.17( v 51'2 (0'0,51'2] local-lis/les=59/60 n=0 ec=56/49 lis/c=56/56 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[56,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:33 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 60 pg[8.14( v 44'4 (0'0,44'4] local-lis/les=59/60 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:33 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 60 pg[10.1( v 51'64 (0'0,51'64] local-lis/les=59/60 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59) [0] r=0 lpr=59 pi=[56,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:33 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 60 pg[8.1d( v 44'4 (0'0,44'4] local-lis/les=59/60 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:33 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 60 pg[10.15( v 58'65 lc 51'46 (0'0,58'65] local-lis/les=59/60 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=59) [0] r=0 lpr=59 pi=[56,59)/1 crt=58'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:33 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 60 pg[8.c( v 44'4 (0'0,44'4] local-lis/les=59/60 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:33 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 60 pg[8.1f( v 44'4 (0'0,44'4] local-lis/les=59/60 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:33 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 60 pg[8.f( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=59/60 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=44'4 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:33 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v151: 321 pgs: 23 peering, 298 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:06:33 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Dec  3 18:06:33 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Dec  3 18:06:33 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Dec  3 18:06:33 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 61 pg[9.11( v 51'389 (0'0,51'389] local-lis/les=60/61 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] async=[0] r=0 lpr=60 pi=[54,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:33 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 61 pg[9.15( v 51'389 (0'0,51'389] local-lis/les=60/61 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] async=[0] r=0 lpr=60 pi=[54,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:33 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 61 pg[9.3( v 51'389 (0'0,51'389] local-lis/les=60/61 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] async=[0] r=0 lpr=60 pi=[54,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:33 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 61 pg[9.17( v 51'389 (0'0,51'389] local-lis/les=60/61 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] async=[0] r=0 lpr=60 pi=[54,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:33 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 61 pg[9.f( v 51'389 (0'0,51'389] local-lis/les=60/61 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] async=[0] r=0 lpr=60 pi=[54,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:33 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 61 pg[9.9( v 51'389 (0'0,51'389] local-lis/les=60/61 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] async=[0] r=0 lpr=60 pi=[54,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:33 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 61 pg[9.7( v 51'389 (0'0,51'389] local-lis/les=60/61 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] async=[0] r=0 lpr=60 pi=[54,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:33 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 61 pg[9.b( v 51'389 (0'0,51'389] local-lis/les=60/61 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] async=[0] r=0 lpr=60 pi=[54,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:33 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 61 pg[9.d( v 51'389 (0'0,51'389] local-lis/les=60/61 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] async=[0] r=0 lpr=60 pi=[54,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:33 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 61 pg[9.5( v 51'389 (0'0,51'389] local-lis/les=60/61 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] async=[0] r=0 lpr=60 pi=[54,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:33 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 61 pg[9.13( v 51'389 (0'0,51'389] local-lis/les=60/61 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] async=[0] r=0 lpr=60 pi=[54,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:33 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 61 pg[9.1d( v 51'389 (0'0,51'389] local-lis/les=60/61 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] async=[0] r=0 lpr=60 pi=[54,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:33 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 61 pg[9.1b( v 51'389 (0'0,51'389] local-lis/les=60/61 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] async=[0] r=0 lpr=60 pi=[54,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:33 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 61 pg[9.1( v 51'389 (0'0,51'389] local-lis/les=60/61 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] async=[0] r=0 lpr=60 pi=[54,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:33 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 61 pg[9.1f( v 51'389 (0'0,51'389] local-lis/les=60/61 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] async=[0] r=0 lpr=60 pi=[54,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:33 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 61 pg[9.19( v 51'389 (0'0,51'389] local-lis/les=60/61 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=60) [0]/[1] async=[0] r=0 lpr=60 pi=[54,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:34 compute-0 podman[224562]: 2025-12-03 18:06:34.915791222 +0000 UTC m=+0.088888798 container health_status 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  3 18:06:34 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Dec  3 18:06:34 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Dec  3 18:06:34 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Dec  3 18:06:34 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 62 pg[9.15( v 51'389 (0'0,51'389] local-lis/les=60/61 n=5 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=62 pruub=14.998260498s) [0] async=[0] r=-1 lpr=62 pi=[54,62)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 141.773178101s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:34 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 62 pg[9.15( v 51'389 (0'0,51'389] local-lis/les=60/61 n=5 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=62 pruub=14.998186111s) [0] r=-1 lpr=62 pi=[54,62)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.773178101s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:34 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 62 pg[9.17( v 51'389 (0'0,51'389] local-lis/les=60/61 n=5 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=62 pruub=14.998112679s) [0] async=[0] r=-1 lpr=62 pi=[54,62)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 141.773284912s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:34 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 62 pg[9.17( v 51'389 (0'0,51'389] local-lis/les=60/61 n=5 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=62 pruub=14.998016357s) [0] r=-1 lpr=62 pi=[54,62)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.773284912s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:34 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 62 pg[9.11( v 51'389 (0'0,51'389] local-lis/les=60/61 n=6 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=62 pruub=14.995538712s) [0] async=[0] r=-1 lpr=62 pi=[54,62)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 141.770950317s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:34 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 62 pg[9.11( v 51'389 (0'0,51'389] local-lis/les=60/61 n=6 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=62 pruub=14.995498657s) [0] r=-1 lpr=62 pi=[54,62)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.770950317s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:34 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 62 pg[9.3( v 51'389 (0'0,51'389] local-lis/les=60/61 n=6 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=62 pruub=14.997687340s) [0] async=[0] r=-1 lpr=62 pi=[54,62)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 141.773269653s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:34 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 62 pg[9.3( v 51'389 (0'0,51'389] local-lis/les=60/61 n=6 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=62 pruub=14.997631073s) [0] r=-1 lpr=62 pi=[54,62)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.773269653s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:34 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 62 pg[9.d( v 51'389 (0'0,51'389] local-lis/les=60/61 n=6 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=62 pruub=14.997692108s) [0] async=[0] r=-1 lpr=62 pi=[54,62)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 141.773513794s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:34 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 62 pg[9.f( v 51'389 (0'0,51'389] local-lis/les=60/61 n=6 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=62 pruub=14.997484207s) [0] async=[0] r=-1 lpr=62 pi=[54,62)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 141.773391724s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:34 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 62 pg[9.f( v 51'389 (0'0,51'389] local-lis/les=60/61 n=6 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=62 pruub=14.997435570s) [0] r=-1 lpr=62 pi=[54,62)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.773391724s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:34 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 62 pg[9.9( v 51'389 (0'0,51'389] local-lis/les=60/61 n=6 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=62 pruub=14.997289658s) [0] async=[0] r=-1 lpr=62 pi=[54,62)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 141.773437500s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:34 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 62 pg[9.9( v 51'389 (0'0,51'389] local-lis/les=60/61 n=6 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=62 pruub=14.997178078s) [0] r=-1 lpr=62 pi=[54,62)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.773437500s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:34 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 62 pg[9.b( v 51'389 (0'0,51'389] local-lis/les=60/61 n=6 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=62 pruub=14.997139931s) [0] async=[0] r=-1 lpr=62 pi=[54,62)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 141.773437500s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:34 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 62 pg[9.b( v 51'389 (0'0,51'389] local-lis/les=60/61 n=6 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=62 pruub=14.997065544s) [0] r=-1 lpr=62 pi=[54,62)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.773437500s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:34 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 62 pg[9.d( v 51'389 (0'0,51'389] local-lis/les=60/61 n=6 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=62 pruub=14.997603416s) [0] r=-1 lpr=62 pi=[54,62)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.773513794s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:34 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 62 pg[9.1( v 51'389 (0'0,51'389] local-lis/les=60/61 n=6 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=62 pruub=14.997568130s) [0] async=[0] r=-1 lpr=62 pi=[54,62)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 141.774276733s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:34 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 62 pg[9.1( v 51'389 (0'0,51'389] local-lis/les=60/61 n=6 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=62 pruub=14.997515678s) [0] r=-1 lpr=62 pi=[54,62)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.774276733s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:34 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 62 pg[9.5( v 51'389 (0'0,51'389] local-lis/les=60/61 n=6 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=62 pruub=14.996658325s) [0] async=[0] r=-1 lpr=62 pi=[54,62)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 141.773513794s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:34 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 62 pg[9.7( v 51'389 (0'0,51'389] local-lis/les=60/61 n=6 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=62 pruub=14.996567726s) [0] async=[0] r=-1 lpr=62 pi=[54,62)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 141.773452759s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:34 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 62 pg[9.5( v 51'389 (0'0,51'389] local-lis/les=60/61 n=6 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=62 pruub=14.996613503s) [0] r=-1 lpr=62 pi=[54,62)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.773513794s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:34 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 62 pg[9.7( v 51'389 (0'0,51'389] local-lis/les=60/61 n=6 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=62 pruub=14.996520996s) [0] r=-1 lpr=62 pi=[54,62)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.773452759s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:34 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 62 pg[9.13( v 51'389 (0'0,51'389] local-lis/les=60/61 n=5 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=62 pruub=14.996158600s) [0] async=[0] r=-1 lpr=62 pi=[54,62)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 141.773574829s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:34 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 62 pg[9.1d( v 51'389 (0'0,51'389] local-lis/les=60/61 n=5 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=62 pruub=14.996686935s) [0] async=[0] r=-1 lpr=62 pi=[54,62)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 141.774124146s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:34 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 62 pg[9.13( v 51'389 (0'0,51'389] local-lis/les=60/61 n=5 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=62 pruub=14.996124268s) [0] r=-1 lpr=62 pi=[54,62)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.773574829s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:34 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 62 pg[9.1b( v 51'389 (0'0,51'389] local-lis/les=60/61 n=5 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=62 pruub=14.996702194s) [0] async=[0] r=-1 lpr=62 pi=[54,62)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 141.774169922s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:34 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 62 pg[9.1d( v 51'389 (0'0,51'389] local-lis/les=60/61 n=5 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=62 pruub=14.996627808s) [0] r=-1 lpr=62 pi=[54,62)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.774124146s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:34 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 62 pg[9.1b( v 51'389 (0'0,51'389] local-lis/les=60/61 n=5 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=62 pruub=14.996670723s) [0] r=-1 lpr=62 pi=[54,62)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.774169922s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:34 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 62 pg[9.13( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 luod=0'0 crt=51'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:34 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 62 pg[9.13( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:34 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 62 pg[9.11( v 51'389 (0'0,51'389] local-lis/les=0/0 n=6 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 luod=0'0 crt=51'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:34 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 62 pg[9.11( v 51'389 (0'0,51'389] local-lis/les=0/0 n=6 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:34 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 62 pg[9.5( v 51'389 (0'0,51'389] local-lis/les=0/0 n=6 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 luod=0'0 crt=51'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:34 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 62 pg[9.7( v 51'389 (0'0,51'389] local-lis/les=0/0 n=6 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 luod=0'0 crt=51'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:34 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 62 pg[9.17( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 luod=0'0 crt=51'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:34 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 62 pg[9.17( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:35 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 62 pg[9.9( v 51'389 (0'0,51'389] local-lis/les=0/0 n=6 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 luod=0'0 crt=51'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:35 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 62 pg[9.f( v 51'389 (0'0,51'389] local-lis/les=0/0 n=6 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 luod=0'0 crt=51'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:35 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 62 pg[9.f( v 51'389 (0'0,51'389] local-lis/les=0/0 n=6 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:35 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 62 pg[9.b( v 51'389 (0'0,51'389] local-lis/les=0/0 n=6 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 luod=0'0 crt=51'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:35 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 62 pg[9.b( v 51'389 (0'0,51'389] local-lis/les=0/0 n=6 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:35 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 62 pg[9.5( v 51'389 (0'0,51'389] local-lis/les=0/0 n=6 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:35 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 62 pg[9.d( v 51'389 (0'0,51'389] local-lis/les=0/0 n=6 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 luod=0'0 crt=51'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:35 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 62 pg[9.d( v 51'389 (0'0,51'389] local-lis/les=0/0 n=6 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:35 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 62 pg[9.1( v 51'389 (0'0,51'389] local-lis/les=0/0 n=6 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 luod=0'0 crt=51'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:35 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 62 pg[9.1( v 51'389 (0'0,51'389] local-lis/les=0/0 n=6 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:35 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 62 pg[9.3( v 51'389 (0'0,51'389] local-lis/les=0/0 n=6 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 luod=0'0 crt=51'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:35 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 62 pg[9.3( v 51'389 (0'0,51'389] local-lis/les=0/0 n=6 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:35 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 62 pg[9.7( v 51'389 (0'0,51'389] local-lis/les=0/0 n=6 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:35 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 62 pg[9.1b( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 luod=0'0 crt=51'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:35 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 62 pg[9.1b( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:35 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 62 pg[9.15( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 luod=0'0 crt=51'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:35 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 62 pg[9.15( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:35 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 62 pg[9.9( v 51'389 (0'0,51'389] local-lis/les=0/0 n=6 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:35 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 62 pg[9.1d( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 luod=0'0 crt=51'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:35 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 62 pg[9.1d( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:35 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 4.e deep-scrub starts
Dec  3 18:06:35 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 4.e deep-scrub ok
Dec  3 18:06:35 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v154: 321 pgs: 23 peering, 298 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 22 B/s, 1 objects/s recovering
Dec  3 18:06:35 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Dec  3 18:06:35 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 4.a deep-scrub starts
Dec  3 18:06:35 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Dec  3 18:06:35 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 4.a deep-scrub ok
Dec  3 18:06:36 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Dec  3 18:06:36 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 63 pg[9.19( v 51'389 (0'0,51'389] local-lis/les=60/61 n=5 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=63 pruub=13.968683243s) [0] async=[0] r=-1 lpr=63 pi=[54,63)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 141.778335571s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:36 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 63 pg[9.19( v 51'389 (0'0,51'389] local-lis/les=60/61 n=5 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=63 pruub=13.968599319s) [0] r=-1 lpr=63 pi=[54,63)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.778335571s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:36 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 63 pg[9.1f( v 51'389 (0'0,51'389] local-lis/les=60/61 n=5 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=63 pruub=13.965738297s) [0] async=[0] r=-1 lpr=63 pi=[54,63)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 141.775573730s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:36 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 63 pg[9.1f( v 51'389 (0'0,51'389] local-lis/les=60/61 n=5 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=63 pruub=13.965644836s) [0] r=-1 lpr=63 pi=[54,63)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.775573730s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:36 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 63 pg[9.19( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 luod=0'0 crt=51'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:36 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 63 pg[9.19( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:36 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 63 pg[9.1f( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 luod=0'0 crt=51'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:36 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 63 pg[9.1f( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:36 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 63 pg[9.11( v 51'389 (0'0,51'389] local-lis/les=62/63 n=6 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 crt=51'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:36 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 63 pg[9.5( v 51'389 (0'0,51'389] local-lis/les=62/63 n=6 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 crt=51'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:36 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 63 pg[9.b( v 51'389 (0'0,51'389] local-lis/les=62/63 n=6 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 crt=51'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:36 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 63 pg[9.9( v 51'389 (0'0,51'389] local-lis/les=62/63 n=6 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 crt=51'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:36 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 63 pg[9.7( v 51'389 (0'0,51'389] local-lis/les=62/63 n=6 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 crt=51'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:36 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 63 pg[9.f( v 51'389 (0'0,51'389] local-lis/les=62/63 n=6 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 crt=51'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:36 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 63 pg[9.1( v 51'389 (0'0,51'389] local-lis/les=62/63 n=6 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 crt=51'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:36 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 63 pg[9.d( v 51'389 (0'0,51'389] local-lis/les=62/63 n=6 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 crt=51'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:36 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 63 pg[9.17( v 51'389 (0'0,51'389] local-lis/les=62/63 n=5 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 crt=51'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:36 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 63 pg[9.3( v 51'389 (0'0,51'389] local-lis/les=62/63 n=6 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 crt=51'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:36 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 63 pg[9.15( v 51'389 (0'0,51'389] local-lis/les=62/63 n=5 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 crt=51'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:36 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 63 pg[9.13( v 51'389 (0'0,51'389] local-lis/les=62/63 n=5 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 crt=51'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:36 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 63 pg[9.1b( v 51'389 (0'0,51'389] local-lis/les=62/63 n=5 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 crt=51'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:36 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 63 pg[9.1d( v 51'389 (0'0,51'389] local-lis/les=62/63 n=5 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 crt=51'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:36 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Dec  3 18:06:36 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Dec  3 18:06:36 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:06:36 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Dec  3 18:06:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Dec  3 18:06:37 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Dec  3 18:06:37 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 64 pg[9.19( v 51'389 (0'0,51'389] local-lis/les=63/64 n=5 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=51'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:37 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 64 pg[9.1f( v 51'389 (0'0,51'389] local-lis/les=63/64 n=5 ec=54/45 lis/c=60/54 les/c/f=61/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=51'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:37 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v157: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 222 B/s, 2 objects/s recovering
Dec  3 18:06:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0) v1
Dec  3 18:06:37 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Dec  3 18:06:37 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 6.19 scrub starts
Dec  3 18:06:37 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 6.19 scrub ok
Dec  3 18:06:38 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Dec  3 18:06:38 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Dec  3 18:06:38 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Dec  3 18:06:38 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Dec  3 18:06:38 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Dec  3 18:06:38 compute-0 systemd[1]: session-41.scope: Deactivated successfully.
Dec  3 18:06:38 compute-0 systemd[1]: session-41.scope: Consumed 9.641s CPU time.
Dec  3 18:06:38 compute-0 systemd-logind[784]: Session 41 logged out. Waiting for processes to exit.
Dec  3 18:06:38 compute-0 systemd-logind[784]: Removed session 41.
Dec  3 18:06:39 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Dec  3 18:06:39 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v159: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 626 B/s, 18 objects/s recovering
Dec  3 18:06:39 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0) v1
Dec  3 18:06:39 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Dec  3 18:06:39 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 6.1a scrub starts
Dec  3 18:06:39 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 6.1a scrub ok
Dec  3 18:06:39 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 6.8 deep-scrub starts
Dec  3 18:06:40 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 6.8 deep-scrub ok
Dec  3 18:06:40 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Dec  3 18:06:40 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Dec  3 18:06:40 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Dec  3 18:06:40 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Dec  3 18:06:40 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Dec  3 18:06:40 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 6.1b scrub starts
Dec  3 18:06:40 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 6.1b scrub ok
Dec  3 18:06:41 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Dec  3 18:06:41 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Dec  3 18:06:41 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Dec  3 18:06:41 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v161: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 521 B/s, 15 objects/s recovering
Dec  3 18:06:41 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0) v1
Dec  3 18:06:41 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Dec  3 18:06:41 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e66 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:06:42 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Dec  3 18:06:42 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Dec  3 18:06:42 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Dec  3 18:06:42 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Dec  3 18:06:42 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Dec  3 18:06:42 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 6.1e scrub starts
Dec  3 18:06:42 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 6.1e scrub ok
Dec  3 18:06:43 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Dec  3 18:06:43 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v163: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 373 B/s, 14 objects/s recovering
Dec  3 18:06:43 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0) v1
Dec  3 18:06:43 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Dec  3 18:06:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:06:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:06:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:06:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:06:43 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Dec  3 18:06:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:06:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:06:43 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Dec  3 18:06:43 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Dec  3 18:06:43 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Dec  3 18:06:44 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Dec  3 18:06:44 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Dec  3 18:06:44 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Dec  3 18:06:44 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Dec  3 18:06:44 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Dec  3 18:06:44 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Dec  3 18:06:44 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Dec  3 18:06:45 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Dec  3 18:06:45 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 4.f scrub starts
Dec  3 18:06:45 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 4.f scrub ok
Dec  3 18:06:45 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v165: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:06:45 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0) v1
Dec  3 18:06:45 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Dec  3 18:06:45 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Dec  3 18:06:45 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Dec  3 18:06:45 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 6.15 scrub starts
Dec  3 18:06:45 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 6.15 scrub ok
Dec  3 18:06:46 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Dec  3 18:06:46 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Dec  3 18:06:46 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Dec  3 18:06:46 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Dec  3 18:06:46 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Dec  3 18:06:46 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e69 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:06:46 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 5.14 deep-scrub starts
Dec  3 18:06:46 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 5.14 deep-scrub ok
Dec  3 18:06:46 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 6.14 scrub starts
Dec  3 18:06:46 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 6.14 scrub ok
Dec  3 18:06:47 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Dec  3 18:06:47 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 6.c scrub starts
Dec  3 18:06:47 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 6.c scrub ok
Dec  3 18:06:47 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v167: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:06:47 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0) v1
Dec  3 18:06:47 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Dec  3 18:06:47 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Dec  3 18:06:47 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Dec  3 18:06:48 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 69 pg[9.16( v 51'389 (0'0,51'389] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=69 pruub=15.677559853s) [2] r=-1 lpr=69 pi=[54,69)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 155.468872070s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:48 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 69 pg[9.16( v 51'389 (0'0,51'389] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=69 pruub=15.677458763s) [2] r=-1 lpr=69 pi=[54,69)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 155.468872070s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:48 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 69 pg[9.e( v 51'389 (0'0,51'389] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=69 pruub=15.682682037s) [2] r=-1 lpr=69 pi=[54,69)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 155.474746704s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:48 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 69 pg[9.e( v 51'389 (0'0,51'389] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=69 pruub=15.682643890s) [2] r=-1 lpr=69 pi=[54,69)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 155.474746704s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:48 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 69 pg[9.6( v 51'389 (0'0,51'389] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=69 pruub=15.682267189s) [2] r=-1 lpr=69 pi=[54,69)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 155.474822998s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:48 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 69 pg[9.6( v 51'389 (0'0,51'389] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=69 pruub=15.682225227s) [2] r=-1 lpr=69 pi=[54,69)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 155.474822998s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:48 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 69 pg[9.1e( v 51'389 (0'0,51'389] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=69 pruub=15.682492256s) [2] r=-1 lpr=69 pi=[54,69)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 155.475509644s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:48 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 69 pg[9.1e( v 51'389 (0'0,51'389] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=69 pruub=15.682465553s) [2] r=-1 lpr=69 pi=[54,69)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 155.475509644s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:48 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 69 pg[9.16( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=69) [2] r=0 lpr=69 pi=[54,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:48 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 69 pg[9.e( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=69) [2] r=0 lpr=69 pi=[54,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:48 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 69 pg[9.6( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=69) [2] r=0 lpr=69 pi=[54,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:48 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 69 pg[9.1e( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=69) [2] r=0 lpr=69 pi=[54,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:48 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Dec  3 18:06:48 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Dec  3 18:06:48 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Dec  3 18:06:48 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Dec  3 18:06:48 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Dec  3 18:06:48 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 70 pg[9.f( v 51'389 (0'0,51'389] local-lis/les=62/63 n=6 ec=54/45 lis/c=62/62 les/c/f=63/63/0 sis=70 pruub=11.810425758s) [2] r=-1 lpr=70 pi=[62,70)/1 crt=51'389 mlcod 0'0 active pruub 160.490631104s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:48 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 70 pg[9.1f( v 51'389 (0'0,51'389] local-lis/les=63/64 n=5 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=70 pruub=12.822167397s) [2] r=-1 lpr=70 pi=[63,70)/1 crt=51'389 mlcod 0'0 active pruub 161.502410889s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:48 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 70 pg[9.1f( v 51'389 (0'0,51'389] local-lis/les=63/64 n=5 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=70 pruub=12.822136879s) [2] r=-1 lpr=70 pi=[63,70)/1 crt=51'389 mlcod 0'0 unknown NOTIFY pruub 161.502410889s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:48 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 70 pg[9.f( v 51'389 (0'0,51'389] local-lis/les=62/63 n=6 ec=54/45 lis/c=62/62 les/c/f=63/63/0 sis=70 pruub=11.810358047s) [2] r=-1 lpr=70 pi=[62,70)/1 crt=51'389 mlcod 0'0 unknown NOTIFY pruub 160.490631104s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:48 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 70 pg[9.e( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=70) [2]/[1] r=-1 lpr=70 pi=[54,70)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:48 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 70 pg[9.e( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=70) [2]/[1] r=-1 lpr=70 pi=[54,70)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:48 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 70 pg[9.7( v 51'389 (0'0,51'389] local-lis/les=62/63 n=6 ec=54/45 lis/c=62/62 les/c/f=63/63/0 sis=70 pruub=11.809233665s) [2] r=-1 lpr=70 pi=[62,70)/1 crt=51'389 mlcod 0'0 active pruub 160.490615845s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:48 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 70 pg[9.7( v 51'389 (0'0,51'389] local-lis/les=62/63 n=6 ec=54/45 lis/c=62/62 les/c/f=63/63/0 sis=70 pruub=11.809117317s) [2] r=-1 lpr=70 pi=[62,70)/1 crt=51'389 mlcod 0'0 unknown NOTIFY pruub 160.490615845s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:48 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 70 pg[9.17( v 51'389 (0'0,51'389] local-lis/les=62/63 n=5 ec=54/45 lis/c=62/62 les/c/f=63/63/0 sis=70 pruub=11.808784485s) [2] r=-1 lpr=70 pi=[62,70)/1 crt=51'389 mlcod 0'0 active pruub 160.490783691s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:48 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 70 pg[9.1e( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=70) [2]/[1] r=-1 lpr=70 pi=[54,70)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:48 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 70 pg[9.1e( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=70) [2]/[1] r=-1 lpr=70 pi=[54,70)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:48 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 70 pg[9.6( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=70) [2]/[1] r=-1 lpr=70 pi=[54,70)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:48 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 70 pg[9.6( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=70) [2]/[1] r=-1 lpr=70 pi=[54,70)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:48 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 70 pg[9.16( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=70) [2]/[1] r=-1 lpr=70 pi=[54,70)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:48 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 70 pg[9.16( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=70) [2]/[1] r=-1 lpr=70 pi=[54,70)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:48 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 70 pg[9.f( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=62/62 les/c/f=63/63/0 sis=70) [2] r=0 lpr=70 pi=[62,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:48 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 70 pg[9.17( v 51'389 (0'0,51'389] local-lis/les=62/63 n=5 ec=54/45 lis/c=62/62 les/c/f=63/63/0 sis=70 pruub=11.808737755s) [2] r=-1 lpr=70 pi=[62,70)/1 crt=51'389 mlcod 0'0 unknown NOTIFY pruub 160.490783691s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:48 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 70 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=70) [2] r=0 lpr=70 pi=[63,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:48 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 70 pg[9.7( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=62/62 les/c/f=63/63/0 sis=70) [2] r=0 lpr=70 pi=[62,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:48 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 70 pg[9.17( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=62/62 les/c/f=63/63/0 sis=70) [2] r=0 lpr=70 pi=[62,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:48 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 70 pg[9.1e( v 51'389 (0'0,51'389] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=70) [2]/[1] r=0 lpr=70 pi=[54,70)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:48 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 70 pg[9.6( v 51'389 (0'0,51'389] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=70) [2]/[1] r=0 lpr=70 pi=[54,70)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:48 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 70 pg[9.1e( v 51'389 (0'0,51'389] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=70) [2]/[1] r=0 lpr=70 pi=[54,70)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:48 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 70 pg[9.6( v 51'389 (0'0,51'389] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=70) [2]/[1] r=0 lpr=70 pi=[54,70)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:48 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 70 pg[9.e( v 51'389 (0'0,51'389] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=70) [2]/[1] r=0 lpr=70 pi=[54,70)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:48 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 70 pg[9.e( v 51'389 (0'0,51'389] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=70) [2]/[1] r=0 lpr=70 pi=[54,70)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:48 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 70 pg[9.16( v 51'389 (0'0,51'389] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=70) [2]/[1] r=0 lpr=70 pi=[54,70)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:48 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 70 pg[9.16( v 51'389 (0'0,51'389] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=70) [2]/[1] r=0 lpr=70 pi=[54,70)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:48 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 4.d scrub starts
Dec  3 18:06:48 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 4.d scrub ok
Dec  3 18:06:48 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Dec  3 18:06:48 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Dec  3 18:06:49 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Dec  3 18:06:49 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Dec  3 18:06:49 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Dec  3 18:06:49 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Dec  3 18:06:49 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 71 pg[9.f( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=62/62 les/c/f=63/63/0 sis=71) [2]/[0] r=-1 lpr=71 pi=[62,71)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:49 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 71 pg[9.f( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=62/62 les/c/f=63/63/0 sis=71) [2]/[0] r=-1 lpr=71 pi=[62,71)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:49 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 71 pg[9.7( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=62/62 les/c/f=63/63/0 sis=71) [2]/[0] r=-1 lpr=71 pi=[62,71)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:49 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 71 pg[9.7( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=62/62 les/c/f=63/63/0 sis=71) [2]/[0] r=-1 lpr=71 pi=[62,71)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:49 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 71 pg[9.17( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=62/62 les/c/f=63/63/0 sis=71) [2]/[0] r=-1 lpr=71 pi=[62,71)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:49 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 71 pg[9.17( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=62/62 les/c/f=63/63/0 sis=71) [2]/[0] r=-1 lpr=71 pi=[62,71)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:49 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 71 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=71) [2]/[0] r=-1 lpr=71 pi=[63,71)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:49 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 71 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=71) [2]/[0] r=-1 lpr=71 pi=[63,71)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:49 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 71 pg[9.7( v 51'389 (0'0,51'389] local-lis/les=62/63 n=6 ec=54/45 lis/c=62/62 les/c/f=63/63/0 sis=71) [2]/[0] r=0 lpr=71 pi=[62,71)/1 crt=51'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:49 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 71 pg[9.1f( v 51'389 (0'0,51'389] local-lis/les=63/64 n=5 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=71) [2]/[0] r=0 lpr=71 pi=[63,71)/1 crt=51'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:49 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 71 pg[9.17( v 51'389 (0'0,51'389] local-lis/les=62/63 n=5 ec=54/45 lis/c=62/62 les/c/f=63/63/0 sis=71) [2]/[0] r=0 lpr=71 pi=[62,71)/1 crt=51'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:49 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 71 pg[9.17( v 51'389 (0'0,51'389] local-lis/les=62/63 n=5 ec=54/45 lis/c=62/62 les/c/f=63/63/0 sis=71) [2]/[0] r=0 lpr=71 pi=[62,71)/1 crt=51'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:49 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 71 pg[9.1f( v 51'389 (0'0,51'389] local-lis/les=63/64 n=5 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=71) [2]/[0] r=0 lpr=71 pi=[63,71)/1 crt=51'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:49 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 71 pg[9.f( v 51'389 (0'0,51'389] local-lis/les=62/63 n=6 ec=54/45 lis/c=62/62 les/c/f=63/63/0 sis=71) [2]/[0] r=0 lpr=71 pi=[62,71)/1 crt=51'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:49 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 71 pg[9.7( v 51'389 (0'0,51'389] local-lis/les=62/63 n=6 ec=54/45 lis/c=62/62 les/c/f=63/63/0 sis=71) [2]/[0] r=0 lpr=71 pi=[62,71)/1 crt=51'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:49 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 71 pg[9.f( v 51'389 (0'0,51'389] local-lis/les=62/63 n=6 ec=54/45 lis/c=62/62 les/c/f=63/63/0 sis=71) [2]/[0] r=0 lpr=71 pi=[62,71)/1 crt=51'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:49 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 71 pg[9.1e( v 51'389 (0'0,51'389] local-lis/les=70/71 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=70) [2]/[1] async=[2] r=0 lpr=70 pi=[54,70)/1 crt=51'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:49 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 71 pg[9.16( v 51'389 (0'0,51'389] local-lis/les=70/71 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=70) [2]/[1] async=[2] r=0 lpr=70 pi=[54,70)/1 crt=51'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:49 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 71 pg[9.6( v 51'389 (0'0,51'389] local-lis/les=70/71 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=70) [2]/[1] async=[2] r=0 lpr=70 pi=[54,70)/1 crt=51'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:49 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 71 pg[9.e( v 51'389 (0'0,51'389] local-lis/les=70/71 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=70) [2]/[1] async=[2] r=0 lpr=70 pi=[54,70)/1 crt=51'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:49 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 6.d scrub starts
Dec  3 18:06:49 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 6.d scrub ok
Dec  3 18:06:49 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v170: 321 pgs: 4 remapped+peering, 317 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:06:49 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 2.f scrub starts
Dec  3 18:06:49 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 2.f scrub ok
Dec  3 18:06:50 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Dec  3 18:06:50 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Dec  3 18:06:50 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Dec  3 18:06:50 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 72 pg[9.16( v 51'389 (0'0,51'389] local-lis/les=70/71 n=5 ec=54/45 lis/c=70/54 les/c/f=71/55/0 sis=72 pruub=15.003287315s) [2] async=[2] r=-1 lpr=72 pi=[54,72)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 157.062591553s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:50 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 72 pg[9.16( v 51'389 (0'0,51'389] local-lis/les=70/71 n=5 ec=54/45 lis/c=70/54 les/c/f=71/55/0 sis=72 pruub=15.003129005s) [2] r=-1 lpr=72 pi=[54,72)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 157.062591553s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:50 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 72 pg[9.e( v 51'389 (0'0,51'389] local-lis/les=70/71 n=6 ec=54/45 lis/c=70/54 les/c/f=71/55/0 sis=72 pruub=15.002741814s) [2] async=[2] r=-1 lpr=72 pi=[54,72)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 157.062728882s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:50 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 72 pg[9.e( v 51'389 (0'0,51'389] local-lis/les=70/71 n=6 ec=54/45 lis/c=70/54 les/c/f=71/55/0 sis=72 pruub=15.002677917s) [2] r=-1 lpr=72 pi=[54,72)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 157.062728882s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:50 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 72 pg[9.6( v 51'389 (0'0,51'389] local-lis/les=70/71 n=6 ec=54/45 lis/c=70/54 les/c/f=71/55/0 sis=72 pruub=15.002356529s) [2] async=[2] r=-1 lpr=72 pi=[54,72)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 157.062652588s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:50 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 72 pg[9.6( v 51'389 (0'0,51'389] local-lis/les=70/71 n=6 ec=54/45 lis/c=70/54 les/c/f=71/55/0 sis=72 pruub=15.002267838s) [2] r=-1 lpr=72 pi=[54,72)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 157.062652588s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:50 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 72 pg[9.1e( v 51'389 (0'0,51'389] local-lis/les=70/71 n=5 ec=54/45 lis/c=70/54 les/c/f=71/55/0 sis=72 pruub=14.996127129s) [2] async=[2] r=-1 lpr=72 pi=[54,72)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 157.056869507s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:50 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 72 pg[9.1e( v 51'389 (0'0,51'389] local-lis/les=70/71 n=5 ec=54/45 lis/c=70/54 les/c/f=71/55/0 sis=72 pruub=14.996011734s) [2] r=-1 lpr=72 pi=[54,72)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 157.056869507s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:50 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 72 pg[9.16( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=54/45 lis/c=70/54 les/c/f=71/55/0 sis=72) [2] r=0 lpr=72 pi=[54,72)/1 luod=0'0 crt=51'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:50 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 72 pg[9.16( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=54/45 lis/c=70/54 les/c/f=71/55/0 sis=72) [2] r=0 lpr=72 pi=[54,72)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:50 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 72 pg[9.6( v 51'389 (0'0,51'389] local-lis/les=0/0 n=6 ec=54/45 lis/c=70/54 les/c/f=71/55/0 sis=72) [2] r=0 lpr=72 pi=[54,72)/1 luod=0'0 crt=51'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:50 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 72 pg[9.6( v 51'389 (0'0,51'389] local-lis/les=0/0 n=6 ec=54/45 lis/c=70/54 les/c/f=71/55/0 sis=72) [2] r=0 lpr=72 pi=[54,72)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:50 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 72 pg[9.e( v 51'389 (0'0,51'389] local-lis/les=0/0 n=6 ec=54/45 lis/c=70/54 les/c/f=71/55/0 sis=72) [2] r=0 lpr=72 pi=[54,72)/1 luod=0'0 crt=51'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:50 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 72 pg[9.e( v 51'389 (0'0,51'389] local-lis/les=0/0 n=6 ec=54/45 lis/c=70/54 les/c/f=71/55/0 sis=72) [2] r=0 lpr=72 pi=[54,72)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:50 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 72 pg[9.1e( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=54/45 lis/c=70/54 les/c/f=71/55/0 sis=72) [2] r=0 lpr=72 pi=[54,72)/1 luod=0'0 crt=51'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:50 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 72 pg[9.1e( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=54/45 lis/c=70/54 les/c/f=71/55/0 sis=72) [2] r=0 lpr=72 pi=[54,72)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:50 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 72 pg[9.17( v 51'389 (0'0,51'389] local-lis/les=71/72 n=5 ec=54/45 lis/c=62/62 les/c/f=63/63/0 sis=71) [2]/[0] async=[2] r=0 lpr=71 pi=[62,71)/1 crt=51'389 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:50 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 72 pg[9.f( v 51'389 (0'0,51'389] local-lis/les=71/72 n=6 ec=54/45 lis/c=62/62 les/c/f=63/63/0 sis=71) [2]/[0] async=[2] r=0 lpr=71 pi=[62,71)/1 crt=51'389 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:50 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 72 pg[9.1f( v 51'389 (0'0,51'389] local-lis/les=71/72 n=5 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=71) [2]/[0] async=[2] r=0 lpr=71 pi=[63,71)/1 crt=51'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:50 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 72 pg[9.7( v 51'389 (0'0,51'389] local-lis/les=71/72 n=6 ec=54/45 lis/c=62/62 les/c/f=63/63/0 sis=71) [2]/[0] async=[2] r=0 lpr=71 pi=[62,71)/1 crt=51'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:50 compute-0 podman[224638]: 2025-12-03 18:06:50.97271177 +0000 UTC m=+0.100745669 container health_status ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, version=9.4, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, config_id=edpm, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, vendor=Red Hat, Inc., name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Dec  3 18:06:50 compute-0 podman[224621]: 2025-12-03 18:06:50.984359265 +0000 UTC m=+0.146165311 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Dec  3 18:06:50 compute-0 podman[224629]: 2025-12-03 18:06:50.988463026 +0000 UTC m=+0.116576237 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4)
Dec  3 18:06:51 compute-0 podman[224623]: 2025-12-03 18:06:51.00171074 +0000 UTC m=+0.141500897 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 18:06:51 compute-0 podman[224630]: 2025-12-03 18:06:51.003966225 +0000 UTC m=+0.126085549 container health_status f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  3 18:06:51 compute-0 podman[224622]: 2025-12-03 18:06:51.008002774 +0000 UTC m=+0.146319605 container health_status 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, maintainer=Red Hat, Inc., release=1755695350, config_id=edpm, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, distribution-scope=public, name=ubi9-minimal, io.openshift.expose-services=, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
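[editor's note] The podman health_status records above embed each container's edpm-generated configuration as a Python-literal dict in the config_data label. A minimal sketch for pulling that dict back out of such a line with ast.literal_eval (the truncated sample string is illustrative, not a full record; brace-depth scanning is used because a greedy regex would overrun the nested dicts):

import ast

# One health_status journal line, truncated to the parts this sketch needs.
line = ("... container health_status ffbd969f0751... (image=quay.io/..., "
        "config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', "
        "'net': 'host', 'ports': ['8888:8888']}, io.buildah.version=1.29.0, ...)")

# config_data is a balanced {...} literal; walk it tracking brace depth.
start = line.index("config_data=") + len("config_data=")
depth, end = 0, start
for i, ch in enumerate(line[start:], start):
    depth += ch == "{"
    depth -= ch == "}"
    if depth == 0:
        end = i + 1
        break

config = ast.literal_eval(line[start:end])  # literals only, no eval()
print(config["image"], config.get("ports"))

The same extraction works for any of the health_status lines above, since edpm_ansible serializes every container's config the same way.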
Dec  3 18:06:51 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Dec  3 18:06:51 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Dec  3 18:06:51 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Dec  3 18:06:51 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 73 pg[9.1f( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=54/45 lis/c=71/63 les/c/f=72/64/0 sis=73) [2] r=0 lpr=73 pi=[63,73)/1 luod=0'0 crt=51'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:51 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 73 pg[9.1f( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=54/45 lis/c=71/63 les/c/f=72/64/0 sis=73) [2] r=0 lpr=73 pi=[63,73)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:51 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 73 pg[9.7( v 51'389 (0'0,51'389] local-lis/les=0/0 n=6 ec=54/45 lis/c=71/62 les/c/f=72/63/0 sis=73) [2] r=0 lpr=73 pi=[62,73)/1 luod=0'0 crt=51'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:51 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 73 pg[9.f( v 51'389 (0'0,51'389] local-lis/les=0/0 n=6 ec=54/45 lis/c=71/62 les/c/f=72/63/0 sis=73) [2] r=0 lpr=73 pi=[62,73)/1 luod=0'0 crt=51'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:51 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 73 pg[9.7( v 51'389 (0'0,51'389] local-lis/les=0/0 n=6 ec=54/45 lis/c=71/62 les/c/f=72/63/0 sis=73) [2] r=0 lpr=73 pi=[62,73)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:51 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 73 pg[9.f( v 51'389 (0'0,51'389] local-lis/les=0/0 n=6 ec=54/45 lis/c=71/62 les/c/f=72/63/0 sis=73) [2] r=0 lpr=73 pi=[62,73)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:51 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 73 pg[9.17( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=54/45 lis/c=71/62 les/c/f=72/63/0 sis=73) [2] r=0 lpr=73 pi=[62,73)/1 luod=0'0 crt=51'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:51 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 73 pg[9.17( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=54/45 lis/c=71/62 les/c/f=72/63/0 sis=73) [2] r=0 lpr=73 pi=[62,73)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:51 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 73 pg[9.7( v 51'389 (0'0,51'389] local-lis/les=71/72 n=6 ec=54/45 lis/c=71/62 les/c/f=72/63/0 sis=73 pruub=15.004814148s) [2] async=[2] r=-1 lpr=73 pi=[62,73)/1 crt=51'389 mlcod 51'389 active pruub 166.764587402s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:51 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 73 pg[9.1f( v 51'389 (0'0,51'389] local-lis/les=71/72 n=5 ec=54/45 lis/c=71/63 les/c/f=72/64/0 sis=73 pruub=15.002381325s) [2] async=[2] r=-1 lpr=73 pi=[63,73)/1 crt=51'389 mlcod 51'389 active pruub 166.762161255s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:51 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 73 pg[9.7( v 51'389 (0'0,51'389] local-lis/les=71/72 n=6 ec=54/45 lis/c=71/62 les/c/f=72/63/0 sis=73 pruub=15.004748344s) [2] r=-1 lpr=73 pi=[62,73)/1 crt=51'389 mlcod 0'0 unknown NOTIFY pruub 166.764587402s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:51 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 73 pg[9.1f( v 51'389 (0'0,51'389] local-lis/les=71/72 n=5 ec=54/45 lis/c=71/63 les/c/f=72/64/0 sis=73 pruub=15.002309799s) [2] r=-1 lpr=73 pi=[63,73)/1 crt=51'389 mlcod 0'0 unknown NOTIFY pruub 166.762161255s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:51 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 73 pg[9.f( v 51'389 (0'0,51'389] local-lis/les=71/72 n=6 ec=54/45 lis/c=71/62 les/c/f=72/63/0 sis=73 pruub=15.001922607s) [2] async=[2] r=-1 lpr=73 pi=[62,73)/1 crt=51'389 mlcod 51'389 active pruub 166.762008667s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:51 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 73 pg[9.f( v 51'389 (0'0,51'389] local-lis/les=71/72 n=6 ec=54/45 lis/c=71/62 les/c/f=72/63/0 sis=73 pruub=15.001875877s) [2] r=-1 lpr=73 pi=[62,73)/1 crt=51'389 mlcod 0'0 unknown NOTIFY pruub 166.762008667s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:51 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 73 pg[9.17( v 51'389 (0'0,51'389] local-lis/les=71/72 n=5 ec=54/45 lis/c=71/62 les/c/f=72/63/0 sis=73 pruub=15.001715660s) [2] async=[2] r=-1 lpr=73 pi=[62,73)/1 crt=51'389 mlcod 51'389 active pruub 166.761871338s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:51 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 73 pg[9.17( v 51'389 (0'0,51'389] local-lis/les=71/72 n=5 ec=54/45 lis/c=71/62 les/c/f=72/63/0 sis=73 pruub=15.001668930s) [2] r=-1 lpr=73 pi=[62,73)/1 crt=51'389 mlcod 0'0 unknown NOTIFY pruub 166.761871338s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:51 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 73 pg[9.e( v 51'389 (0'0,51'389] local-lis/les=72/73 n=6 ec=54/45 lis/c=70/54 les/c/f=71/55/0 sis=72) [2] r=0 lpr=72 pi=[54,72)/1 crt=51'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:51 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 73 pg[9.1e( v 51'389 (0'0,51'389] local-lis/les=72/73 n=5 ec=54/45 lis/c=70/54 les/c/f=71/55/0 sis=72) [2] r=0 lpr=72 pi=[54,72)/1 crt=51'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:51 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 73 pg[9.6( v 51'389 (0'0,51'389] local-lis/les=72/73 n=6 ec=54/45 lis/c=70/54 les/c/f=71/55/0 sis=72) [2] r=0 lpr=72 pi=[54,72)/1 crt=51'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:51 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 73 pg[9.16( v 51'389 (0'0,51'389] local-lis/les=72/73 n=5 ec=54/45 lis/c=70/54 les/c/f=71/55/0 sis=72) [2] r=0 lpr=72 pi=[54,72)/1 crt=51'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:51 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Dec  3 18:06:51 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Dec  3 18:06:51 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v173: 321 pgs: 4 remapped+peering, 317 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:06:51 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:06:52 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Dec  3 18:06:52 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Dec  3 18:06:52 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Dec  3 18:06:52 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 74 pg[9.1f( v 51'389 (0'0,51'389] local-lis/les=73/74 n=5 ec=54/45 lis/c=71/63 les/c/f=72/64/0 sis=73) [2] r=0 lpr=73 pi=[63,73)/1 crt=51'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:52 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 74 pg[9.7( v 51'389 (0'0,51'389] local-lis/les=73/74 n=6 ec=54/45 lis/c=71/62 les/c/f=72/63/0 sis=73) [2] r=0 lpr=73 pi=[62,73)/1 crt=51'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:52 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 74 pg[9.17( v 51'389 (0'0,51'389] local-lis/les=73/74 n=5 ec=54/45 lis/c=71/62 les/c/f=72/63/0 sis=73) [2] r=0 lpr=73 pi=[62,73)/1 crt=51'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:52 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 74 pg[9.f( v 51'389 (0'0,51'389] local-lis/les=73/74 n=6 ec=54/45 lis/c=71/62 les/c/f=72/63/0 sis=73) [2] r=0 lpr=73 pi=[62,73)/1 crt=51'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:53 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v175: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 167 B/s, 8 objects/s recovering
Dec  3 18:06:53 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0) v1
Dec  3 18:06:53 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Dec  3 18:06:53 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 4.1c deep-scrub starts
Dec  3 18:06:53 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 4.1c deep-scrub ok
Dec  3 18:06:54 compute-0 systemd-logind[784]: New session 42 of user zuul.
Dec  3 18:06:54 compute-0 systemd[1]: Started Session 42 of User zuul.
Dec  3 18:06:54 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Dec  3 18:06:54 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Dec  3 18:06:54 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Dec  3 18:06:54 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Dec  3 18:06:54 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
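[editor's note] The mon_command JSON dispatched by the mgr above ({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}, then "10" shortly after) is the pg_autoscaler stepping pgp_num toward pg_num one PG at a time, which is what keeps generating new osdmap epochs and the peering churn logged around it. A sketch of the equivalent hand-run command, wrapping the ceph CLI from Python (assumes admin keyring access on the node; this mirrors, not replays, what the mgr dispatched):

import subprocess

# Copied from the audit log line above.
cmd = {"prefix": "osd pool set", "pool": "default.rgw.log",
       "var": "pgp_num_actual", "val": "9"}

# Equivalent CLI: ceph osd pool set default.rgw.log pgp_num_actual 9
subprocess.run(
    ["ceph", "osd", "pool", "set", cmd["pool"], cmd["var"], cmd["val"]],
    check=True,
)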
Dec  3 18:06:54 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 75 pg[9.8( v 51'389 (0'0,51'389] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=75 pruub=9.310611725s) [2] r=-1 lpr=75 pi=[54,75)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 155.474746704s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:54 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 75 pg[9.8( v 51'389 (0'0,51'389] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=75 pruub=9.310567856s) [2] r=-1 lpr=75 pi=[54,75)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 155.474746704s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:54 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 75 pg[9.18( v 51'389 (0'0,51'389] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=75 pruub=9.310557365s) [2] r=-1 lpr=75 pi=[54,75)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 155.475296021s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:54 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 75 pg[9.18( v 51'389 (0'0,51'389] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=75 pruub=9.310462952s) [2] r=-1 lpr=75 pi=[54,75)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 155.475296021s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:54 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 75 pg[9.18( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=75) [2] r=0 lpr=75 pi=[54,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:54 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 75 pg[9.8( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=75) [2] r=0 lpr=75 pi=[54,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:54 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 6.6 deep-scrub starts
Dec  3 18:06:54 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 6.6 deep-scrub ok
Dec  3 18:06:54 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 6.13 deep-scrub starts
Dec  3 18:06:54 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 6.13 deep-scrub ok
Dec  3 18:06:55 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Dec  3 18:06:55 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Dec  3 18:06:55 compute-0 python3.9[224893]: ansible-ansible.legacy.ping Invoked with data=pong
Dec  3 18:06:55 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Dec  3 18:06:55 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Dec  3 18:06:55 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 76 pg[9.8( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[54,76)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:55 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 76 pg[9.18( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[54,76)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:55 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 76 pg[9.18( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[54,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:55 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 76 pg[9.8( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[54,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:55 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 76 pg[9.18( v 51'389 (0'0,51'389] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=76) [2]/[1] r=0 lpr=76 pi=[54,76)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:55 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 76 pg[9.18( v 51'389 (0'0,51'389] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=76) [2]/[1] r=0 lpr=76 pi=[54,76)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:55 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 76 pg[9.8( v 51'389 (0'0,51'389] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=76) [2]/[1] r=0 lpr=76 pi=[54,76)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:55 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 76 pg[9.8( v 51'389 (0'0,51'389] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=76) [2]/[1] r=0 lpr=76 pi=[54,76)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:55 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v178: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 169 B/s, 8 objects/s recovering
Dec  3 18:06:55 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0) v1
Dec  3 18:06:55 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Dec  3 18:06:56 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Dec  3 18:06:56 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Dec  3 18:06:56 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Dec  3 18:06:56 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Dec  3 18:06:56 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Dec  3 18:06:56 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 77 pg[9.18( v 51'389 (0'0,51'389] local-lis/les=76/77 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[54,76)/1 crt=51'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:56 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 77 pg[9.8( v 51'389 (0'0,51'389] local-lis/les=76/77 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[54,76)/1 crt=51'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:56 compute-0 python3.9[225067]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 18:06:56 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e77 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:06:56 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Dec  3 18:06:56 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Dec  3 18:06:56 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Dec  3 18:06:56 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 78 pg[9.18( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=54/45 lis/c=76/54 les/c/f=77/55/0 sis=78) [2] r=0 lpr=78 pi=[54,78)/1 luod=0'0 crt=51'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:56 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 78 pg[9.18( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=54/45 lis/c=76/54 les/c/f=77/55/0 sis=78) [2] r=0 lpr=78 pi=[54,78)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:57 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 78 pg[9.18( v 51'389 (0'0,51'389] local-lis/les=76/77 n=5 ec=54/45 lis/c=76/54 les/c/f=77/55/0 sis=78 pruub=15.574701309s) [2] async=[2] r=-1 lpr=78 pi=[54,78)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 164.375549316s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:57 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 78 pg[9.18( v 51'389 (0'0,51'389] local-lis/les=76/77 n=5 ec=54/45 lis/c=76/54 les/c/f=77/55/0 sis=78 pruub=15.574609756s) [2] r=-1 lpr=78 pi=[54,78)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 164.375549316s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:57 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Dec  3 18:06:57 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v181: 321 pgs: 1 peering, 1 remapped+peering, 319 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 0 objects/s recovering
Dec  3 18:06:57 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Dec  3 18:06:57 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Dec  3 18:06:57 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Dec  3 18:06:57 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 79 pg[9.8( v 51'389 (0'0,51'389] local-lis/les=76/77 n=6 ec=54/45 lis/c=76/54 les/c/f=77/55/0 sis=79 pruub=14.696228981s) [2] async=[2] r=-1 lpr=79 pi=[54,79)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 164.380981445s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:57 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 79 pg[9.8( v 51'389 (0'0,51'389] local-lis/les=76/77 n=6 ec=54/45 lis/c=76/54 les/c/f=77/55/0 sis=79 pruub=14.696031570s) [2] r=-1 lpr=79 pi=[54,79)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 164.380981445s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:06:57 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 79 pg[9.8( v 51'389 (0'0,51'389] local-lis/les=0/0 n=6 ec=54/45 lis/c=76/54 les/c/f=77/55/0 sis=79) [2] r=0 lpr=79 pi=[54,79)/1 luod=0'0 crt=51'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:06:57 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 79 pg[9.8( v 51'389 (0'0,51'389] local-lis/les=0/0 n=6 ec=54/45 lis/c=76/54 les/c/f=77/55/0 sis=79) [2] r=0 lpr=79 pi=[54,79)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:06:57 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 79 pg[9.18( v 51'389 (0'0,51'389] local-lis/les=78/79 n=5 ec=54/45 lis/c=76/54 les/c/f=77/55/0 sis=78) [2] r=0 lpr=78 pi=[54,78)/1 crt=51'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:58 compute-0 python3.9[225223]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:06:58 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Dec  3 18:06:58 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Dec  3 18:06:58 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Dec  3 18:06:58 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 80 pg[9.8( v 51'389 (0'0,51'389] local-lis/les=79/80 n=6 ec=54/45 lis/c=76/54 les/c/f=77/55/0 sis=79) [2] r=0 lpr=79 pi=[54,79)/1 crt=51'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:06:59 compute-0 podman[158200]: time="2025-12-03T18:06:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:06:59 compute-0 python3.9[225376]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 18:06:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:06:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32819 "" "Go-http-client/1.1"
Dec  3 18:06:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:06:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6785 "" "Go-http-client/1.1"
Dec  3 18:06:59 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v184: 321 pgs: 1 peering, 1 remapped+peering, 319 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 0 objects/s recovering
Dec  3 18:06:59 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 6.11 scrub starts
Dec  3 18:06:59 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 6.11 scrub ok
Dec  3 18:07:00 compute-0 python3.9[225530]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:07:01 compute-0 openstack_network_exporter[160319]: ERROR   18:07:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:07:01 compute-0 openstack_network_exporter[160319]: ERROR   18:07:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:07:01 compute-0 openstack_network_exporter[160319]: ERROR   18:07:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:07:01 compute-0 openstack_network_exporter[160319]: ERROR   18:07:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:07:01 compute-0 openstack_network_exporter[160319]: ERROR   18:07:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:07:01 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Dec  3 18:07:01 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Dec  3 18:07:01 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v185: 321 pgs: 1 peering, 1 remapped+peering, 319 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 20 B/s, 0 objects/s recovering
Dec  3 18:07:01 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e80 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:07:01 compute-0 python3.9[225682]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:07:02 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 6.1f scrub starts
Dec  3 18:07:02 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 6.1f scrub ok
Dec  3 18:07:03 compute-0 python3.9[225832]: ansible-ansible.builtin.service_facts Invoked
Dec  3 18:07:03 compute-0 network[225849]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  3 18:07:03 compute-0 network[225850]: 'network-scripts' will be removed from distribution in near future.
Dec  3 18:07:03 compute-0 network[225851]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.698 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.699 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.699 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f55bc8ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.700 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f3f52673fe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.701 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f562c3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f55bc8ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.701 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f55bc8ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.702 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f55bc8ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.702 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f526739b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f55bc8ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.702 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f55bc8ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.702 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673a40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f55bc8ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.702 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52671a60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f55bc8ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.703 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673a70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f55bc8ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.703 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f55bc8ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.703 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f55bc8ef0>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f562d33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f55bc8ef0>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.705 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f526733b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f55bc8ef0>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.705 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f55bc8ef0>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.705 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f526734d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f55bc8ef0>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.706 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f565c04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f55bc8ef0>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.704 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.706 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f3f5271c620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.707 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.706 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f55bc8ef0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.708 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f55bc8ef0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.708 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f55bc8ef0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.708 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f526735f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f55bc8ef0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.709 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f55bc8ef0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.709 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f526736b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f55bc8ef0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.709 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f55bc8ef0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.710 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f55bc8ef0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.710 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f55bc8ef0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.710 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f55bc8ef0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.707 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f3f5271c0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.711 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.711 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f3f5271c140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.711 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.711 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f3f52673980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.712 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.712 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f3f5271c1d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.712 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.712 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f3f52673a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.712 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.713 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f3f52672390>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.713 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.713 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f3f526739e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.713 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.713 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f3f5271c260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.713 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.714 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f3f5271c2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.714 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.714 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f3f52671ca0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.714 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.714 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f3f52673470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.715 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.715 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f3f5271c380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.715 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.715 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f3f526734a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.715 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.715 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f3f52671a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.716 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.716 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f3f52673ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.716 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.717 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f3f52673500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.717 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.717 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f3f52673560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.717 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.717 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f3f526735c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.718 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.718 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f3f52673620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.718 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.718 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f3f52673680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.718 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.719 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f3f526736e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.719 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.719 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f3f52673f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.719 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.719 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f3f52673740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.720 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.720 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f3f52673f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.720 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.720 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.721 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.721 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.721 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.721 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.722 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.722 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.722 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.722 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.723 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.723 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.723 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.723 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.723 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.724 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.724 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.724 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.724 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.724 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.725 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.725 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.725 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.725 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.725 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.726 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:07:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:07:03.726 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:07:03 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Dec  3 18:07:03 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v186: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 31 B/s, 1 objects/s recovering
Dec  3 18:07:03 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0) v1
Dec  3 18:07:03 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Dec  3 18:07:03 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Dec  3 18:07:03 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Dec  3 18:07:04 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Dec  3 18:07:04 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Dec  3 18:07:04 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Dec  3 18:07:04 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Dec  3 18:07:04 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Dec  3 18:07:04 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Dec  3 18:07:04 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Dec  3 18:07:04 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Dec  3 18:07:05 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Dec  3 18:07:05 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Dec  3 18:07:05 compute-0 podman[225888]: 2025-12-03 18:07:05.098158361 +0000 UTC m=+0.103201033 container health_status 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  3 18:07:05 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Dec  3 18:07:05 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v188: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 0 objects/s recovering
Dec  3 18:07:05 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0) v1
Dec  3 18:07:05 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Dec  3 18:07:06 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Dec  3 18:07:06 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Dec  3 18:07:06 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Dec  3 18:07:06 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Dec  3 18:07:06 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Dec  3 18:07:06 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:07:06 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Dec  3 18:07:06 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Dec  3 18:07:07 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Dec  3 18:07:07 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Dec  3 18:07:07 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Dec  3 18:07:07 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v190: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 0 objects/s recovering
Dec  3 18:07:07 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0) v1
Dec  3 18:07:07 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Dec  3 18:07:08 compute-0 python3.9[226146]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:07:08 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Dec  3 18:07:08 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Dec  3 18:07:08 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Dec  3 18:07:08 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Dec  3 18:07:08 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Dec  3 18:07:08 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 6.e scrub starts
Dec  3 18:07:08 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 6.e scrub ok
Dec  3 18:07:08 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 3.e scrub starts
Dec  3 18:07:09 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 3.e scrub ok
Dec  3 18:07:09 compute-0 python3.9[226296]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 18:07:09 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Dec  3 18:07:09 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Dec  3 18:07:09 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Dec  3 18:07:09 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v192: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:07:09 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0) v1
Dec  3 18:07:09 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Dec  3 18:07:09 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 83 pg[9.1c( v 51'389 (0'0,51'389] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=83 pruub=9.737943649s) [2] r=-1 lpr=83 pi=[54,83)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 171.476974487s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:07:09 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 83 pg[9.1c( v 51'389 (0'0,51'389] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=83 pruub=9.737874985s) [2] r=-1 lpr=83 pi=[54,83)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 171.476974487s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:07:09 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 83 pg[9.c( v 51'389 (0'0,51'389] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=83 pruub=9.736538887s) [2] r=-1 lpr=83 pi=[54,83)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 171.475234985s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:07:09 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 83 pg[9.c( v 51'389 (0'0,51'389] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=83 pruub=9.734955788s) [2] r=-1 lpr=83 pi=[54,83)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 171.475234985s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:07:09 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 83 pg[9.1c( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=83) [2] r=0 lpr=83 pi=[54,83)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:07:09 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 83 pg[9.c( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=83) [2] r=0 lpr=83 pi=[54,83)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:07:10 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Dec  3 18:07:10 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Dec  3 18:07:10 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Dec  3 18:07:10 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Dec  3 18:07:10 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Dec  3 18:07:10 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 84 pg[9.1c( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[54,84)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:07:10 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 84 pg[9.1c( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[54,84)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  3 18:07:10 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 84 pg[9.c( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[54,84)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:07:10 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 84 pg[9.c( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[54,84)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  3 18:07:10 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 84 pg[9.1c( v 51'389 (0'0,51'389] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=84) [2]/[1] r=0 lpr=84 pi=[54,84)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:07:10 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 84 pg[9.c( v 51'389 (0'0,51'389] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=84) [2]/[1] r=0 lpr=84 pi=[54,84)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:07:10 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 84 pg[9.c( v 51'389 (0'0,51'389] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=84) [2]/[1] r=0 lpr=84 pi=[54,84)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  3 18:07:10 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 84 pg[9.1c( v 51'389 (0'0,51'389] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=84) [2]/[1] r=0 lpr=84 pi=[54,84)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  3 18:07:11 compute-0 python3.9[226451]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 18:07:11 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Dec  3 18:07:11 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Dec  3 18:07:11 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Dec  3 18:07:11 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Dec  3 18:07:11 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v194: 321 pgs: 2 remapped+peering, 319 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:07:11 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Dec  3 18:07:11 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Dec  3 18:07:11 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:07:11 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 85 pg[9.c( v 51'389 (0'0,51'389] local-lis/les=84/85 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[54,84)/1 crt=51'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:07:11 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 85 pg[9.1c( v 51'389 (0'0,51'389] local-lis/les=84/85 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[54,84)/1 crt=51'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:07:12 compute-0 python3.9[226609]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  3 18:07:12 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Dec  3 18:07:12 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Dec  3 18:07:12 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Dec  3 18:07:12 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 86 pg[9.c( v 51'389 (0'0,51'389] local-lis/les=84/85 n=6 ec=54/45 lis/c=84/54 les/c/f=85/55/0 sis=86 pruub=14.991031647s) [2] async=[2] r=-1 lpr=86 pi=[54,86)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 179.673843384s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:07:12 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 86 pg[9.c( v 51'389 (0'0,51'389] local-lis/les=84/85 n=6 ec=54/45 lis/c=84/54 les/c/f=85/55/0 sis=86 pruub=14.990921021s) [2] r=-1 lpr=86 pi=[54,86)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.673843384s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:07:12 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 86 pg[9.1c( v 51'389 (0'0,51'389] local-lis/les=84/85 n=5 ec=54/45 lis/c=84/54 les/c/f=85/55/0 sis=86 pruub=14.995708466s) [2] async=[2] r=-1 lpr=86 pi=[54,86)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 179.679565430s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:07:12 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 86 pg[9.1c( v 51'389 (0'0,51'389] local-lis/les=84/85 n=5 ec=54/45 lis/c=84/54 les/c/f=85/55/0 sis=86 pruub=14.995616913s) [2] r=-1 lpr=86 pi=[54,86)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.679565430s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:07:12 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 86 pg[9.1c( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=54/45 lis/c=84/54 les/c/f=85/55/0 sis=86) [2] r=0 lpr=86 pi=[54,86)/1 luod=0'0 crt=51'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:07:12 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 86 pg[9.1c( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=54/45 lis/c=84/54 les/c/f=85/55/0 sis=86) [2] r=0 lpr=86 pi=[54,86)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:07:12 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 86 pg[9.c( v 51'389 (0'0,51'389] local-lis/les=0/0 n=6 ec=54/45 lis/c=84/54 les/c/f=85/55/0 sis=86) [2] r=0 lpr=86 pi=[54,86)/1 luod=0'0 crt=51'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:07:12 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 86 pg[9.c( v 51'389 (0'0,51'389] local-lis/les=0/0 n=6 ec=54/45 lis/c=84/54 les/c/f=85/55/0 sis=86) [2] r=0 lpr=86 pi=[54,86)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:07:13 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Dec  3 18:07:13 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Dec  3 18:07:13 compute-0 python3.9[226693]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  3 18:07:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_18:07:13
Dec  3 18:07:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 18:07:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Some PGs (0.006231) are inactive; try again later
Dec  3 18:07:13 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v197: 321 pgs: 2 remapped+peering, 319 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:07:13 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Dec  3 18:07:13 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Dec  3 18:07:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:07:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:07:13 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Dec  3 18:07:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:07:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:07:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:07:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:07:13 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 18:07:13 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:07:13 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 18:07:13 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 87 pg[9.1c( v 51'389 (0'0,51'389] local-lis/les=86/87 n=5 ec=54/45 lis/c=84/54 les/c/f=85/55/0 sis=86) [2] r=0 lpr=86 pi=[54,86)/1 crt=51'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:07:13 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 87 pg[9.c( v 51'389 (0'0,51'389] local-lis/les=86/87 n=6 ec=54/45 lis/c=84/54 les/c/f=85/55/0 sis=86) [2] r=0 lpr=86 pi=[54,86)/1 crt=51'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:07:13 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:07:13 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:07:13 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:07:13 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:07:13 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:07:13 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:07:13 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:07:14 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 6.b scrub starts
Dec  3 18:07:14 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 6.b scrub ok
Dec  3 18:07:15 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 2.b scrub starts
Dec  3 18:07:15 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 2.b scrub ok
Dec  3 18:07:15 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 4.9 deep-scrub starts
Dec  3 18:07:15 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v199: 321 pgs: 2 remapped+peering, 319 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:07:15 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 4.9 deep-scrub ok
Dec  3 18:07:16 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 6.17 deep-scrub starts
Dec  3 18:07:16 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 6.17 deep-scrub ok
Dec  3 18:07:16 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:07:17 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v200: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Dec  3 18:07:17 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0) v1
Dec  3 18:07:17 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Dec  3 18:07:17 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Dec  3 18:07:17 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Dec  3 18:07:17 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Dec  3 18:07:17 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Dec  3 18:07:17 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Dec  3 18:07:17 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Dec  3 18:07:17 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Dec  3 18:07:18 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 4.12 deep-scrub starts
Dec  3 18:07:18 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 4.12 deep-scrub ok
Dec  3 18:07:18 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Dec  3 18:07:18 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Dec  3 18:07:18 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Dec  3 18:07:19 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Dec  3 18:07:19 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Dec  3 18:07:19 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Dec  3 18:07:19 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Dec  3 18:07:19 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v202: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 15 B/s, 1 objects/s recovering
Dec  3 18:07:19 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0) v1
Dec  3 18:07:19 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Dec  3 18:07:19 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Dec  3 18:07:20 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Dec  3 18:07:20 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Dec  3 18:07:20 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Dec  3 18:07:20 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Dec  3 18:07:20 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Dec  3 18:07:20 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Dec  3 18:07:21 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Dec  3 18:07:21 compute-0 podman[226902]: 2025-12-03 18:07:21.426703231 +0000 UTC m=+0.139628119 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:07:21 compute-0 podman[226904]: 2025-12-03 18:07:21.439588454 +0000 UTC m=+0.156205333 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  3 18:07:21 compute-0 podman[226906]: 2025-12-03 18:07:21.448628824 +0000 UTC m=+0.153795464 container health_status f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 18:07:21 compute-0 podman[226905]: 2025-12-03 18:07:21.451164955 +0000 UTC m=+0.149962930 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.4)
Dec  3 18:07:21 compute-0 podman[226903]: 2025-12-03 18:07:21.46288766 +0000 UTC m=+0.173044811 container health_status 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.buildah.version=1.33.7, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.expose-services=, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  3 18:07:21 compute-0 podman[226910]: 2025-12-03 18:07:21.463155028 +0000 UTC m=+0.155439294 container health_status ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, architecture=x86_64, vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, name=ubi9, version=9.4, config_id=edpm, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public)
Dec  3 18:07:21 compute-0 podman[227044]: 2025-12-03 18:07:21.559703677 +0000 UTC m=+0.088643829 container exec c4418ca0ee5df95c133db330bc8714b98e7c86be83b29540d0d4d94c3c723743 (image=quay.io/ceph/ceph:v18, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mon-compute-0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:07:21 compute-0 podman[227044]: 2025-12-03 18:07:21.674329637 +0000 UTC m=+0.203269849 container exec_died c4418ca0ee5df95c133db330bc8714b98e7c86be83b29540d0d4d94c3c723743 (image=quay.io/ceph/ceph:v18, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec  3 18:07:21 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Dec  3 18:07:21 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Dec  3 18:07:21 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v204: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 1 objects/s recovering
Dec  3 18:07:21 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0) v1
Dec  3 18:07:21 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Dec  3 18:07:21 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:07:22 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Dec  3 18:07:22 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Dec  3 18:07:22 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Dec  3 18:07:22 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Dec  3 18:07:22 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Dec  3 18:07:22 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:07:22 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:07:22 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:07:22 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:07:23 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 3.7 deep-scrub starts
Dec  3 18:07:23 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 3.7 deep-scrub ok
Dec  3 18:07:23 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Dec  3 18:07:23 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:07:23 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:07:23 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:07:23 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:07:23 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 18:07:23 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:07:23 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 18:07:23 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:07:23 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 5b0a92d0-5d78-49d7-9540-8cd72f2e6ead does not exist
Dec  3 18:07:23 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 6c0f1f31-0034-4514-a9ff-1d72a746644f does not exist
Dec  3 18:07:23 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 8d6388de-1b68-45c2-b7e6-7a533e763f56 does not exist
Dec  3 18:07:23 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 18:07:23 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 18:07:23 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 18:07:23 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:07:23 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:07:23 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:07:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 18:07:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:07:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 18:07:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:07:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:07:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:07:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:07:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:07:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:07:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:07:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:07:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:07:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 18:07:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:07:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:07:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:07:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 18:07:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:07:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 18:07:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:07:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:07:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:07:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 18:07:23 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v206: 321 pgs: 321 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:07:23 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0) v1
Dec  3 18:07:23 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Dec  3 18:07:23 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Dec  3 18:07:23 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Dec  3 18:07:24 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:07:24 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:07:24 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:07:24 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Dec  3 18:07:24 compute-0 podman[227464]: 2025-12-03 18:07:24.53786349 +0000 UTC m=+0.077212400 container create 83982d5441e7f0eaa58059f1caa6f066fd682b7af677505677fac6652602844f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec  3 18:07:24 compute-0 systemd[1]: Started libpod-conmon-83982d5441e7f0eaa58059f1caa6f066fd682b7af677505677fac6652602844f.scope.
Dec  3 18:07:24 compute-0 podman[227464]: 2025-12-03 18:07:24.505876221 +0000 UTC m=+0.045225211 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:07:24 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Dec  3 18:07:24 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Dec  3 18:07:24 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Dec  3 18:07:24 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:07:24 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Dec  3 18:07:24 compute-0 podman[227464]: 2025-12-03 18:07:24.676700148 +0000 UTC m=+0.216049078 container init 83982d5441e7f0eaa58059f1caa6f066fd682b7af677505677fac6652602844f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_joliot, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:07:24 compute-0 podman[227464]: 2025-12-03 18:07:24.702878396 +0000 UTC m=+0.242227296 container start 83982d5441e7f0eaa58059f1caa6f066fd682b7af677505677fac6652602844f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:07:24 compute-0 podman[227464]: 2025-12-03 18:07:24.707686642 +0000 UTC m=+0.247035552 container attach 83982d5441e7f0eaa58059f1caa6f066fd682b7af677505677fac6652602844f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  3 18:07:24 compute-0 reverent_joliot[227480]: 167 167
Dec  3 18:07:24 compute-0 systemd[1]: libpod-83982d5441e7f0eaa58059f1caa6f066fd682b7af677505677fac6652602844f.scope: Deactivated successfully.
Dec  3 18:07:24 compute-0 podman[227464]: 2025-12-03 18:07:24.716921857 +0000 UTC m=+0.256270797 container died 83982d5441e7f0eaa58059f1caa6f066fd682b7af677505677fac6652602844f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:07:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-146346e8e9e44b612ceb97c666878ce5192530246516201902849a45de06b4f0-merged.mount: Deactivated successfully.
Dec  3 18:07:24 compute-0 podman[227464]: 2025-12-03 18:07:24.804725463 +0000 UTC m=+0.344074393 container remove 83982d5441e7f0eaa58059f1caa6f066fd682b7af677505677fac6652602844f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  3 18:07:24 compute-0 systemd[1]: libpod-conmon-83982d5441e7f0eaa58059f1caa6f066fd682b7af677505677fac6652602844f.scope: Deactivated successfully.
Dec  3 18:07:25 compute-0 podman[227502]: 2025-12-03 18:07:25.025837904 +0000 UTC m=+0.058067984 container create 69cd5900d858da9088e28cd8537df02c69617ffc99e4b9ffade20c153cc3d5a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True)
Dec  3 18:07:25 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Dec  3 18:07:25 compute-0 podman[227502]: 2025-12-03 18:07:25.006014662 +0000 UTC m=+0.038244762 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:07:25 compute-0 systemd[1]: Started libpod-conmon-69cd5900d858da9088e28cd8537df02c69617ffc99e4b9ffade20c153cc3d5a4.scope.
Dec  3 18:07:25 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:07:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80ee01c373aac9c360030f85b8dd44b5e18f6ddf3fddcc83cd75918f29ccf57d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:07:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80ee01c373aac9c360030f85b8dd44b5e18f6ddf3fddcc83cd75918f29ccf57d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:07:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80ee01c373aac9c360030f85b8dd44b5e18f6ddf3fddcc83cd75918f29ccf57d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:07:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80ee01c373aac9c360030f85b8dd44b5e18f6ddf3fddcc83cd75918f29ccf57d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:07:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80ee01c373aac9c360030f85b8dd44b5e18f6ddf3fddcc83cd75918f29ccf57d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 18:07:25 compute-0 podman[227502]: 2025-12-03 18:07:25.193667978 +0000 UTC m=+0.225898088 container init 69cd5900d858da9088e28cd8537df02c69617ffc99e4b9ffade20c153cc3d5a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_haslett, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:07:25 compute-0 podman[227502]: 2025-12-03 18:07:25.21591217 +0000 UTC m=+0.248142290 container start 69cd5900d858da9088e28cd8537df02c69617ffc99e4b9ffade20c153cc3d5a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  3 18:07:25 compute-0 podman[227502]: 2025-12-03 18:07:25.223235388 +0000 UTC m=+0.255465498 container attach 69cd5900d858da9088e28cd8537df02c69617ffc99e4b9ffade20c153cc3d5a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec  3 18:07:25 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v208: 321 pgs: 321 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:07:25 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0) v1
Dec  3 18:07:25 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Dec  3 18:07:26 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Dec  3 18:07:26 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Dec  3 18:07:26 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Dec  3 18:07:26 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Dec  3 18:07:26 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Dec  3 18:07:26 compute-0 trusting_haslett[227519]: --> passed data devices: 0 physical, 3 LVM
Dec  3 18:07:26 compute-0 trusting_haslett[227519]: --> relative data size: 1.0
Dec  3 18:07:26 compute-0 trusting_haslett[227519]: --> All data devices are unavailable
Dec  3 18:07:26 compute-0 systemd[1]: libpod-69cd5900d858da9088e28cd8537df02c69617ffc99e4b9ffade20c153cc3d5a4.scope: Deactivated successfully.
Dec  3 18:07:26 compute-0 systemd[1]: libpod-69cd5900d858da9088e28cd8537df02c69617ffc99e4b9ffade20c153cc3d5a4.scope: Consumed 1.103s CPU time.
Dec  3 18:07:26 compute-0 podman[227502]: 2025-12-03 18:07:26.391653201 +0000 UTC m=+1.423883321 container died 69cd5900d858da9088e28cd8537df02c69617ffc99e4b9ffade20c153cc3d5a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_haslett, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:07:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-80ee01c373aac9c360030f85b8dd44b5e18f6ddf3fddcc83cd75918f29ccf57d-merged.mount: Deactivated successfully.
Dec  3 18:07:26 compute-0 podman[227502]: 2025-12-03 18:07:26.485988796 +0000 UTC m=+1.518218876 container remove 69cd5900d858da9088e28cd8537df02c69617ffc99e4b9ffade20c153cc3d5a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_haslett, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  3 18:07:26 compute-0 systemd[1]: libpod-conmon-69cd5900d858da9088e28cd8537df02c69617ffc99e4b9ffade20c153cc3d5a4.scope: Deactivated successfully.
Dec  3 18:07:26 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:07:26 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Dec  3 18:07:26 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Dec  3 18:07:27 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Dec  3 18:07:27 compute-0 podman[227704]: 2025-12-03 18:07:27.518651636 +0000 UTC m=+0.124922801 container create 2c7c72e50b4354f55e52f6e6014ccf2ad3906766397bbee828fa9f6e9747fb45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_euclid, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  3 18:07:27 compute-0 podman[227704]: 2025-12-03 18:07:27.431319701 +0000 UTC m=+0.037590926 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:07:27 compute-0 systemd[1]: Started libpod-conmon-2c7c72e50b4354f55e52f6e6014ccf2ad3906766397bbee828fa9f6e9747fb45.scope.
Dec  3 18:07:27 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:07:27 compute-0 podman[227704]: 2025-12-03 18:07:27.666149715 +0000 UTC m=+0.272420900 container init 2c7c72e50b4354f55e52f6e6014ccf2ad3906766397bbee828fa9f6e9747fb45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:07:27 compute-0 podman[227704]: 2025-12-03 18:07:27.684761298 +0000 UTC m=+0.291032473 container start 2c7c72e50b4354f55e52f6e6014ccf2ad3906766397bbee828fa9f6e9747fb45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_euclid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:07:27 compute-0 podman[227704]: 2025-12-03 18:07:27.694574717 +0000 UTC m=+0.300845882 container attach 2c7c72e50b4354f55e52f6e6014ccf2ad3906766397bbee828fa9f6e9747fb45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_euclid, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec  3 18:07:27 compute-0 focused_euclid[227719]: 167 167
Dec  3 18:07:27 compute-0 systemd[1]: libpod-2c7c72e50b4354f55e52f6e6014ccf2ad3906766397bbee828fa9f6e9747fb45.scope: Deactivated successfully.
Dec  3 18:07:27 compute-0 podman[227704]: 2025-12-03 18:07:27.701317361 +0000 UTC m=+0.307588536 container died 2c7c72e50b4354f55e52f6e6014ccf2ad3906766397bbee828fa9f6e9747fb45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec  3 18:07:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-576c0d1fb2f50d24c714f7a63794ad6d4221f8f0a11387a23eac70834f6a5e88-merged.mount: Deactivated successfully.
Dec  3 18:07:27 compute-0 podman[227704]: 2025-12-03 18:07:27.803292713 +0000 UTC m=+0.409563868 container remove 2c7c72e50b4354f55e52f6e6014ccf2ad3906766397bbee828fa9f6e9747fb45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_euclid, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  3 18:07:27 compute-0 systemd[1]: libpod-conmon-2c7c72e50b4354f55e52f6e6014ccf2ad3906766397bbee828fa9f6e9747fb45.scope: Deactivated successfully.
Dec  3 18:07:27 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Dec  3 18:07:27 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Dec  3 18:07:27 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v210: 321 pgs: 321 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:07:27 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0) v1
Dec  3 18:07:27 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Dec  3 18:07:27 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 7.8 deep-scrub starts
Dec  3 18:07:27 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 7.8 deep-scrub ok
Dec  3 18:07:28 compute-0 podman[227742]: 2025-12-03 18:07:28.094575471 +0000 UTC m=+0.074153456 container create cf940c0e2547c10a6796577ffdffa41e1a20225043da7ef298472576a9e049cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_wescoff, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec  3 18:07:28 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Dec  3 18:07:28 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Dec  3 18:07:28 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Dec  3 18:07:28 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Dec  3 18:07:28 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Dec  3 18:07:28 compute-0 podman[227742]: 2025-12-03 18:07:28.070321901 +0000 UTC m=+0.049899896 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:07:28 compute-0 systemd[1]: Started libpod-conmon-cf940c0e2547c10a6796577ffdffa41e1a20225043da7ef298472576a9e049cd.scope.
Dec  3 18:07:28 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:07:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6139781d4828526d018222936547bc946eba8fdf3185106daf85292106184d4e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:07:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6139781d4828526d018222936547bc946eba8fdf3185106daf85292106184d4e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:07:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6139781d4828526d018222936547bc946eba8fdf3185106daf85292106184d4e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:07:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6139781d4828526d018222936547bc946eba8fdf3185106daf85292106184d4e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:07:28 compute-0 podman[227742]: 2025-12-03 18:07:28.276252662 +0000 UTC m=+0.255830697 container init cf940c0e2547c10a6796577ffdffa41e1a20225043da7ef298472576a9e049cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_wescoff, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec  3 18:07:28 compute-0 podman[227742]: 2025-12-03 18:07:28.289423962 +0000 UTC m=+0.269001917 container start cf940c0e2547c10a6796577ffdffa41e1a20225043da7ef298472576a9e049cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_wescoff, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:07:28 compute-0 podman[227742]: 2025-12-03 18:07:28.293730627 +0000 UTC m=+0.273308672 container attach cf940c0e2547c10a6796577ffdffa41e1a20225043da7ef298472576a9e049cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_wescoff, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  3 18:07:28 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 7.a scrub starts
Dec  3 18:07:28 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 7.a scrub ok
Dec  3 18:07:29 compute-0 focused_wescoff[227757]: {
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:    "0": [
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:        {
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:            "devices": [
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:                "/dev/loop3"
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:            ],
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:            "lv_name": "ceph_lv0",
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:            "lv_size": "21470642176",
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=973fbbc8-5aff-4a53-bee8-42e5a6788dd6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:            "lv_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:            "name": "ceph_lv0",
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:            "tags": {
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:                "ceph.block_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:                "ceph.cluster_name": "ceph",
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:                "ceph.crush_device_class": "",
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:                "ceph.encrypted": "0",
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:                "ceph.osd_fsid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:                "ceph.osd_id": "0",
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:                "ceph.type": "block",
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:                "ceph.vdo": "0"
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:            },
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:            "type": "block",
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:            "vg_name": "ceph_vg0"
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:        }
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:    ],
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:    "1": [
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:        {
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:            "devices": [
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:                "/dev/loop4"
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:            ],
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:            "lv_name": "ceph_lv1",
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:            "lv_size": "21470642176",
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1e2b0083-5293-47cb-a3d1-bc27cedc4ede,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:            "lv_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:            "name": "ceph_lv1",
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:            "tags": {
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:                "ceph.block_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:                "ceph.cluster_name": "ceph",
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:                "ceph.crush_device_class": "",
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:                "ceph.encrypted": "0",
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:                "ceph.osd_fsid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:                "ceph.osd_id": "1",
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:                "ceph.type": "block",
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:                "ceph.vdo": "0"
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:            },
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:            "type": "block",
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:            "vg_name": "ceph_vg1"
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:        }
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:    ],
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:    "2": [
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:        {
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:            "devices": [
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:                "/dev/loop5"
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:            ],
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:            "lv_name": "ceph_lv2",
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:            "lv_size": "21470642176",
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2abec9de-afba-437e-9a17-384a1dd8cd50,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:            "lv_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:            "name": "ceph_lv2",
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:            "tags": {
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:                "ceph.block_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:                "ceph.cluster_name": "ceph",
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:                "ceph.crush_device_class": "",
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:                "ceph.encrypted": "0",
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:                "ceph.osd_fsid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:                "ceph.osd_id": "2",
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:                "ceph.type": "block",
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:                "ceph.vdo": "0"
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:            },
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:            "type": "block",
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:            "vg_name": "ceph_vg2"
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:        }
Dec  3 18:07:29 compute-0 focused_wescoff[227757]:    ]
Dec  3 18:07:29 compute-0 focused_wescoff[227757]: }
Dec  3 18:07:29 compute-0 systemd[1]: libpod-cf940c0e2547c10a6796577ffdffa41e1a20225043da7ef298472576a9e049cd.scope: Deactivated successfully.
Dec  3 18:07:29 compute-0 podman[227742]: 2025-12-03 18:07:29.124383761 +0000 UTC m=+1.103961746 container died cf940c0e2547c10a6796577ffdffa41e1a20225043da7ef298472576a9e049cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  3 18:07:29 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Dec  3 18:07:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-6139781d4828526d018222936547bc946eba8fdf3185106daf85292106184d4e-merged.mount: Deactivated successfully.
Dec  3 18:07:29 compute-0 podman[227742]: 2025-12-03 18:07:29.212517095 +0000 UTC m=+1.192095070 container remove cf940c0e2547c10a6796577ffdffa41e1a20225043da7ef298472576a9e049cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_wescoff, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:07:29 compute-0 systemd[1]: libpod-conmon-cf940c0e2547c10a6796577ffdffa41e1a20225043da7ef298472576a9e049cd.scope: Deactivated successfully.
Dec  3 18:07:29 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 93 pg[9.13( v 51'389 (0'0,51'389] local-lis/les=62/63 n=5 ec=54/45 lis/c=62/62 les/c/f=63/63/0 sis=93 pruub=10.777581215s) [2] r=-1 lpr=93 pi=[62,93)/1 crt=51'389 mlcod 0'0 active pruub 200.494186401s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:07:29 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 93 pg[9.13( v 51'389 (0'0,51'389] local-lis/les=62/63 n=5 ec=54/45 lis/c=62/62 les/c/f=63/63/0 sis=93 pruub=10.777521133s) [2] r=-1 lpr=93 pi=[62,93)/1 crt=51'389 mlcod 0'0 unknown NOTIFY pruub 200.494186401s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:07:29 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 93 pg[9.13( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=62/62 les/c/f=63/63/0 sis=93) [2] r=0 lpr=93 pi=[62,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:07:29 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 2.1f deep-scrub starts
Dec  3 18:07:29 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 2.1f deep-scrub ok
Dec  3 18:07:29 compute-0 podman[158200]: time="2025-12-03T18:07:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:07:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:07:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32819 "" "Go-http-client/1.1"
Dec  3 18:07:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:07:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6795 "" "Go-http-client/1.1"
Dec  3 18:07:29 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v212: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:07:29 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0) v1
Dec  3 18:07:29 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Dec  3 18:07:30 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Dec  3 18:07:30 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Dec  3 18:07:30 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Dec  3 18:07:30 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Dec  3 18:07:30 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Dec  3 18:07:30 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 94 pg[9.13( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=62/62 les/c/f=63/63/0 sis=94) [2]/[0] r=-1 lpr=94 pi=[62,94)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:07:30 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 94 pg[9.13( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=62/62 les/c/f=63/63/0 sis=94) [2]/[0] r=-1 lpr=94 pi=[62,94)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  3 18:07:30 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 94 pg[9.13( v 51'389 (0'0,51'389] local-lis/les=62/63 n=5 ec=54/45 lis/c=62/62 les/c/f=63/63/0 sis=94) [2]/[0] r=0 lpr=94 pi=[62,94)/1 crt=51'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:07:30 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 94 pg[9.13( v 51'389 (0'0,51'389] local-lis/les=62/63 n=5 ec=54/45 lis/c=62/62 les/c/f=63/63/0 sis=94) [2]/[0] r=0 lpr=94 pi=[62,94)/1 crt=51'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  3 18:07:30 compute-0 podman[227926]: 2025-12-03 18:07:30.299633881 +0000 UTC m=+0.091419156 container create 54bdc20dd3a2c488cb0a9330c8d9acda90d81775ddaf62ac84d04669d227d083 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_villani, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec  3 18:07:30 compute-0 podman[227926]: 2025-12-03 18:07:30.264899335 +0000 UTC m=+0.056684650 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:07:30 compute-0 systemd[1]: Started libpod-conmon-54bdc20dd3a2c488cb0a9330c8d9acda90d81775ddaf62ac84d04669d227d083.scope.
Dec  3 18:07:30 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:07:30 compute-0 podman[227926]: 2025-12-03 18:07:30.420670595 +0000 UTC m=+0.212455860 container init 54bdc20dd3a2c488cb0a9330c8d9acda90d81775ddaf62ac84d04669d227d083 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  3 18:07:30 compute-0 podman[227926]: 2025-12-03 18:07:30.436112411 +0000 UTC m=+0.227897656 container start 54bdc20dd3a2c488cb0a9330c8d9acda90d81775ddaf62ac84d04669d227d083 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_villani, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec  3 18:07:30 compute-0 podman[227926]: 2025-12-03 18:07:30.441240476 +0000 UTC m=+0.233025751 container attach 54bdc20dd3a2c488cb0a9330c8d9acda90d81775ddaf62ac84d04669d227d083 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_villani, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec  3 18:07:30 compute-0 strange_villani[227948]: 167 167
Dec  3 18:07:30 compute-0 systemd[1]: libpod-54bdc20dd3a2c488cb0a9330c8d9acda90d81775ddaf62ac84d04669d227d083.scope: Deactivated successfully.
Dec  3 18:07:30 compute-0 podman[227926]: 2025-12-03 18:07:30.44590678 +0000 UTC m=+0.237692055 container died 54bdc20dd3a2c488cb0a9330c8d9acda90d81775ddaf62ac84d04669d227d083 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_villani, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:07:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-e501774fc705fca3316926077498a21430da7ddc7c179843773662481c34dd0a-merged.mount: Deactivated successfully.
Dec  3 18:07:30 compute-0 podman[227926]: 2025-12-03 18:07:30.532536878 +0000 UTC m=+0.324322153 container remove 54bdc20dd3a2c488cb0a9330c8d9acda90d81775ddaf62ac84d04669d227d083 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_villani, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:07:30 compute-0 systemd[1]: libpod-conmon-54bdc20dd3a2c488cb0a9330c8d9acda90d81775ddaf62ac84d04669d227d083.scope: Deactivated successfully.
Dec  3 18:07:30 compute-0 podman[227976]: 2025-12-03 18:07:30.810868021 +0000 UTC m=+0.087443669 container create 95421d217957001be274d37c31b004cd3b2fa68953b1fef70da4e12ed430559f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:07:30 compute-0 podman[227976]: 2025-12-03 18:07:30.77753042 +0000 UTC m=+0.054106148 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:07:30 compute-0 systemd[1]: Started libpod-conmon-95421d217957001be274d37c31b004cd3b2fa68953b1fef70da4e12ed430559f.scope.
Dec  3 18:07:30 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:07:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6b7f206a0afded349037ceec1044ea22943a555699b0680551aec36ef0e534a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:07:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6b7f206a0afded349037ceec1044ea22943a555699b0680551aec36ef0e534a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:07:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6b7f206a0afded349037ceec1044ea22943a555699b0680551aec36ef0e534a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:07:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6b7f206a0afded349037ceec1044ea22943a555699b0680551aec36ef0e534a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:07:30 compute-0 podman[227976]: 2025-12-03 18:07:30.981230537 +0000 UTC m=+0.257806245 container init 95421d217957001be274d37c31b004cd3b2fa68953b1fef70da4e12ed430559f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_edison, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  3 18:07:30 compute-0 podman[227976]: 2025-12-03 18:07:30.996626642 +0000 UTC m=+0.273202310 container start 95421d217957001be274d37c31b004cd3b2fa68953b1fef70da4e12ed430559f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_edison, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec  3 18:07:31 compute-0 podman[227976]: 2025-12-03 18:07:31.003323124 +0000 UTC m=+0.279898792 container attach 95421d217957001be274d37c31b004cd3b2fa68953b1fef70da4e12ed430559f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:07:31 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Dec  3 18:07:31 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Dec  3 18:07:31 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Dec  3 18:07:31 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Dec  3 18:07:31 compute-0 openstack_network_exporter[160319]: ERROR   18:07:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:07:31 compute-0 openstack_network_exporter[160319]: ERROR   18:07:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:07:31 compute-0 openstack_network_exporter[160319]: ERROR   18:07:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:07:31 compute-0 openstack_network_exporter[160319]: ERROR   18:07:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:07:31 compute-0 openstack_network_exporter[160319]: ERROR   18:07:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:07:31 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 95 pg[9.13( v 51'389 (0'0,51'389] local-lis/les=94/95 n=5 ec=54/45 lis/c=62/62 les/c/f=63/63/0 sis=94) [2]/[0] async=[2] r=0 lpr=94 pi=[62,94)/1 crt=51'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:07:31 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v215: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:07:31 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0) v1
Dec  3 18:07:31 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Dec  3 18:07:31 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:07:31 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Dec  3 18:07:31 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec  3 18:07:31 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Dec  3 18:07:31 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Dec  3 18:07:31 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 96 pg[9.13( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=54/45 lis/c=94/62 les/c/f=95/63/0 sis=96) [2] r=0 lpr=96 pi=[62,96)/1 luod=0'0 crt=51'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:07:31 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 96 pg[9.13( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=54/45 lis/c=94/62 les/c/f=95/63/0 sis=96) [2] r=0 lpr=96 pi=[62,96)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:07:31 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 96 pg[9.13( v 51'389 (0'0,51'389] local-lis/les=94/95 n=5 ec=54/45 lis/c=94/62 les/c/f=95/63/0 sis=96 pruub=15.662326813s) [2] async=[2] r=-1 lpr=96 pi=[62,96)/1 crt=51'389 mlcod 51'389 active pruub 208.015579224s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:07:31 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 96 pg[9.13( v 51'389 (0'0,51'389] local-lis/les=94/95 n=5 ec=54/45 lis/c=94/62 les/c/f=95/63/0 sis=96 pruub=15.662241936s) [2] r=-1 lpr=96 pi=[62,96)/1 crt=51'389 mlcod 0'0 unknown NOTIFY pruub 208.015579224s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:07:31 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 96 pg[9.15( v 51'389 (0'0,51'389] local-lis/les=62/63 n=5 ec=54/45 lis/c=62/62 les/c/f=63/63/0 sis=96 pruub=8.137102127s) [1] r=-1 lpr=96 pi=[62,96)/1 crt=51'389 mlcod 0'0 active pruub 200.494186401s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:07:31 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 96 pg[9.15( v 51'389 (0'0,51'389] local-lis/les=62/63 n=5 ec=54/45 lis/c=62/62 les/c/f=63/63/0 sis=96 pruub=8.137040138s) [1] r=-1 lpr=96 pi=[62,96)/1 crt=51'389 mlcod 0'0 unknown NOTIFY pruub 200.494186401s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:07:31 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 96 pg[9.15( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=62/62 les/c/f=63/63/0 sis=96) [1] r=0 lpr=96 pi=[62,96)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:07:31 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 5.1d deep-scrub starts
Dec  3 18:07:31 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 5.1d deep-scrub ok
Dec  3 18:07:32 compute-0 laughing_edison[227992]: {
Dec  3 18:07:32 compute-0 laughing_edison[227992]:    "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
Dec  3 18:07:32 compute-0 laughing_edison[227992]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:07:32 compute-0 laughing_edison[227992]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 18:07:32 compute-0 laughing_edison[227992]:        "osd_id": 1,
Dec  3 18:07:32 compute-0 laughing_edison[227992]:        "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:07:32 compute-0 laughing_edison[227992]:        "type": "bluestore"
Dec  3 18:07:32 compute-0 laughing_edison[227992]:    },
Dec  3 18:07:32 compute-0 laughing_edison[227992]:    "2abec9de-afba-437e-9a17-384a1dd8cd50": {
Dec  3 18:07:32 compute-0 laughing_edison[227992]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:07:32 compute-0 laughing_edison[227992]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 18:07:32 compute-0 laughing_edison[227992]:        "osd_id": 2,
Dec  3 18:07:32 compute-0 laughing_edison[227992]:        "osd_uuid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:07:32 compute-0 laughing_edison[227992]:        "type": "bluestore"
Dec  3 18:07:32 compute-0 laughing_edison[227992]:    },
Dec  3 18:07:32 compute-0 laughing_edison[227992]:    "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": {
Dec  3 18:07:32 compute-0 laughing_edison[227992]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:07:32 compute-0 laughing_edison[227992]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 18:07:32 compute-0 laughing_edison[227992]:        "osd_id": 0,
Dec  3 18:07:32 compute-0 laughing_edison[227992]:        "osd_uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:07:32 compute-0 laughing_edison[227992]:        "type": "bluestore"
Dec  3 18:07:32 compute-0 laughing_edison[227992]:    }
Dec  3 18:07:32 compute-0 laughing_edison[227992]: }
Dec  3 18:07:32 compute-0 systemd[1]: libpod-95421d217957001be274d37c31b004cd3b2fa68953b1fef70da4e12ed430559f.scope: Deactivated successfully.
Dec  3 18:07:32 compute-0 systemd[1]: libpod-95421d217957001be274d37c31b004cd3b2fa68953b1fef70da4e12ed430559f.scope: Consumed 1.113s CPU time.
Dec  3 18:07:32 compute-0 podman[227976]: 2025-12-03 18:07:32.143072269 +0000 UTC m=+1.419647917 container died 95421d217957001be274d37c31b004cd3b2fa68953b1fef70da4e12ed430559f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_edison, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec  3 18:07:32 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Dec  3 18:07:32 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec  3 18:07:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-b6b7f206a0afded349037ceec1044ea22943a555699b0680551aec36ef0e534a-merged.mount: Deactivated successfully.
Dec  3 18:07:32 compute-0 podman[227976]: 2025-12-03 18:07:32.433979099 +0000 UTC m=+1.710554747 container remove 95421d217957001be274d37c31b004cd3b2fa68953b1fef70da4e12ed430559f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_edison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec  3 18:07:32 compute-0 systemd[1]: libpod-conmon-95421d217957001be274d37c31b004cd3b2fa68953b1fef70da4e12ed430559f.scope: Deactivated successfully.
Dec  3 18:07:32 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:07:32 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:07:32 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:07:32 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:07:32 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 7ad5352a-41be-4c7f-bcf7-c6aca5736d8c does not exist
Dec  3 18:07:32 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 34411452-b602-4ec4-b3ff-3432af0ab834 does not exist
Dec  3 18:07:32 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Dec  3 18:07:32 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Dec  3 18:07:32 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Dec  3 18:07:32 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Dec  3 18:07:32 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Dec  3 18:07:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 97 pg[9.15( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=62/62 les/c/f=63/63/0 sis=97) [1]/[0] r=-1 lpr=97 pi=[62,97)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:07:32 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 97 pg[9.15( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=62/62 les/c/f=63/63/0 sis=97) [1]/[0] r=-1 lpr=97 pi=[62,97)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  3 18:07:32 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 97 pg[9.13( v 51'389 (0'0,51'389] local-lis/les=96/97 n=5 ec=54/45 lis/c=94/62 les/c/f=95/63/0 sis=96) [2] r=0 lpr=96 pi=[62,96)/1 crt=51'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:07:32 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 97 pg[9.15( v 51'389 (0'0,51'389] local-lis/les=62/63 n=5 ec=54/45 lis/c=62/62 les/c/f=63/63/0 sis=97) [1]/[0] r=0 lpr=97 pi=[62,97)/1 crt=51'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:07:32 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 97 pg[9.15( v 51'389 (0'0,51'389] local-lis/les=62/63 n=5 ec=54/45 lis/c=62/62 les/c/f=63/63/0 sis=97) [1]/[0] r=0 lpr=97 pi=[62,97)/1 crt=51'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  3 18:07:33 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:07:33 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:07:33 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v218: 321 pgs: 1 peering, 320 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Dec  3 18:07:33 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Dec  3 18:07:33 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Dec  3 18:07:33 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Dec  3 18:07:33 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 98 pg[9.15( v 51'389 (0'0,51'389] local-lis/les=97/98 n=5 ec=54/45 lis/c=62/62 les/c/f=63/63/0 sis=97) [1]/[0] async=[1] r=0 lpr=97 pi=[62,97)/1 crt=51'389 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:07:34 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Dec  3 18:07:34 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Dec  3 18:07:34 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Dec  3 18:07:34 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Dec  3 18:07:34 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Dec  3 18:07:34 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 99 pg[9.15( v 51'389 (0'0,51'389] local-lis/les=97/98 n=5 ec=54/45 lis/c=97/62 les/c/f=98/63/0 sis=99 pruub=15.006257057s) [1] async=[1] r=-1 lpr=99 pi=[62,99)/1 crt=51'389 mlcod 51'389 active pruub 210.379028320s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:07:34 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 99 pg[9.15( v 51'389 (0'0,51'389] local-lis/les=97/98 n=5 ec=54/45 lis/c=97/62 les/c/f=98/63/0 sis=99 pruub=15.006033897s) [1] r=-1 lpr=99 pi=[62,99)/1 crt=51'389 mlcod 0'0 unknown NOTIFY pruub 210.379028320s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:07:34 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 99 pg[9.15( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=54/45 lis/c=97/62 les/c/f=98/63/0 sis=99) [1] r=0 lpr=99 pi=[62,99)/1 luod=0'0 crt=51'389 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:07:34 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 99 pg[9.15( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=54/45 lis/c=97/62 les/c/f=98/63/0 sis=99) [1] r=0 lpr=99 pi=[62,99)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:07:35 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Dec  3 18:07:35 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Dec  3 18:07:35 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v221: 321 pgs: 1 active+remapped, 1 peering, 319 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 2 objects/s recovering
Dec  3 18:07:35 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Dec  3 18:07:35 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Dec  3 18:07:35 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Dec  3 18:07:35 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 100 pg[9.15( v 51'389 (0'0,51'389] local-lis/les=99/100 n=5 ec=54/45 lis/c=97/62 les/c/f=98/63/0 sis=99) [1] r=0 lpr=99 pi=[62,99)/1 crt=51'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:07:35 compute-0 podman[228106]: 2025-12-03 18:07:35.969836783 +0000 UTC m=+0.119595762 container health_status 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  3 18:07:35 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 7.c scrub starts
Dec  3 18:07:36 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 7.c scrub ok
Dec  3 18:07:36 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Dec  3 18:07:36 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Dec  3 18:07:36 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e100 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:07:37 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v223: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 22 B/s, 0 objects/s recovering
Dec  3 18:07:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0) v1
Dec  3 18:07:37 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Dec  3 18:07:38 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Dec  3 18:07:38 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Dec  3 18:07:38 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Dec  3 18:07:38 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Dec  3 18:07:38 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Dec  3 18:07:38 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 101 pg[9.16( v 51'389 (0'0,51'389] local-lis/les=72/73 n=5 ec=54/45 lis/c=72/72 les/c/f=73/73/0 sis=101 pruub=8.923291206s) [0] r=-1 lpr=101 pi=[72,101)/1 crt=51'389 mlcod 0'0 active pruub 192.721023560s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:07:38 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 101 pg[9.16( v 51'389 (0'0,51'389] local-lis/les=72/73 n=5 ec=54/45 lis/c=72/72 les/c/f=73/73/0 sis=101 pruub=8.920324326s) [0] r=-1 lpr=101 pi=[72,101)/1 crt=51'389 mlcod 0'0 unknown NOTIFY pruub 192.721023560s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:07:38 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 101 pg[9.16( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=72/72 les/c/f=73/73/0 sis=101) [0] r=0 lpr=101 pi=[72,101)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:07:39 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Dec  3 18:07:39 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Dec  3 18:07:39 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Dec  3 18:07:39 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Dec  3 18:07:39 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 102 pg[9.16( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=72/72 les/c/f=73/73/0 sis=102) [0]/[2] r=-1 lpr=102 pi=[72,102)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:07:39 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 102 pg[9.16( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=72/72 les/c/f=73/73/0 sis=102) [0]/[2] r=-1 lpr=102 pi=[72,102)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  3 18:07:39 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 102 pg[9.16( v 51'389 (0'0,51'389] local-lis/les=72/73 n=5 ec=54/45 lis/c=72/72 les/c/f=73/73/0 sis=102) [0]/[2] r=0 lpr=102 pi=[72,102)/1 crt=51'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:07:39 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 102 pg[9.16( v 51'389 (0'0,51'389] local-lis/les=72/73 n=5 ec=54/45 lis/c=72/72 les/c/f=73/73/0 sis=102) [0]/[2] r=0 lpr=102 pi=[72,102)/1 crt=51'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  3 18:07:39 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Dec  3 18:07:39 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v226: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:07:39 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0) v1
Dec  3 18:07:39 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Dec  3 18:07:39 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Dec  3 18:07:40 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 7.e scrub starts
Dec  3 18:07:40 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 7.e scrub ok
Dec  3 18:07:40 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Dec  3 18:07:40 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Dec  3 18:07:40 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Dec  3 18:07:40 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Dec  3 18:07:40 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Dec  3 18:07:40 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Dec  3 18:07:40 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Dec  3 18:07:40 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 103 pg[9.16( v 51'389 (0'0,51'389] local-lis/les=102/103 n=5 ec=54/45 lis/c=72/72 les/c/f=73/73/0 sis=102) [0]/[2] async=[0] r=0 lpr=102 pi=[72,102)/1 crt=51'389 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:07:41 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Dec  3 18:07:41 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Dec  3 18:07:41 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Dec  3 18:07:41 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Dec  3 18:07:41 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 104 pg[9.16( v 51'389 (0'0,51'389] local-lis/les=102/103 n=5 ec=54/45 lis/c=102/72 les/c/f=103/73/0 sis=104 pruub=15.123895645s) [0] async=[0] r=-1 lpr=104 pi=[72,104)/1 crt=51'389 mlcod 51'389 active pruub 201.999313354s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:07:41 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 104 pg[9.16( v 51'389 (0'0,51'389] local-lis/les=102/103 n=5 ec=54/45 lis/c=102/72 les/c/f=103/73/0 sis=104 pruub=15.123725891s) [0] r=-1 lpr=104 pi=[72,104)/1 crt=51'389 mlcod 0'0 unknown NOTIFY pruub 201.999313354s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:07:41 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 104 pg[9.16( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=54/45 lis/c=102/72 les/c/f=103/73/0 sis=104) [0] r=0 lpr=104 pi=[72,104)/1 luod=0'0 crt=51'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:07:41 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 104 pg[9.16( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=54/45 lis/c=102/72 les/c/f=103/73/0 sis=104) [0] r=0 lpr=104 pi=[72,104)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:07:41 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v229: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:07:41 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0) v1
Dec  3 18:07:41 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Dec  3 18:07:41 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e104 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:07:42 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Dec  3 18:07:42 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Dec  3 18:07:42 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Dec  3 18:07:42 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Dec  3 18:07:42 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Dec  3 18:07:42 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Dec  3 18:07:42 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Dec  3 18:07:42 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 105 pg[9.16( v 51'389 (0'0,51'389] local-lis/les=104/105 n=5 ec=54/45 lis/c=102/72 les/c/f=103/73/0 sis=104) [0] r=0 lpr=104 pi=[72,104)/1 crt=51'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:07:42 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Dec  3 18:07:42 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Dec  3 18:07:43 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Dec  3 18:07:43 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Dec  3 18:07:43 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Dec  3 18:07:43 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v231: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:07:43 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0) v1
Dec  3 18:07:43 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Dec  3 18:07:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:07:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:07:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:07:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:07:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:07:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:07:44 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Dec  3 18:07:44 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Dec  3 18:07:44 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Dec  3 18:07:44 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Dec  3 18:07:44 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Dec  3 18:07:44 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Dec  3 18:07:44 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Dec  3 18:07:44 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 106 pg[9.19( v 51'389 (0'0,51'389] local-lis/les=63/64 n=5 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=106 pruub=12.540604591s) [2] r=-1 lpr=106 pi=[63,106)/1 crt=51'389 mlcod 0'0 active pruub 217.498977661s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:07:44 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 106 pg[9.19( v 51'389 (0'0,51'389] local-lis/les=63/64 n=5 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=106 pruub=12.539573669s) [2] r=-1 lpr=106 pi=[63,106)/1 crt=51'389 mlcod 0'0 unknown NOTIFY pruub 217.498977661s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:07:44 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 106 pg[9.19( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=106) [2] r=0 lpr=106 pi=[63,106)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:07:44 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Dec  3 18:07:44 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Dec  3 18:07:45 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 3.1d deep-scrub starts
Dec  3 18:07:45 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 3.1d deep-scrub ok
Dec  3 18:07:45 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Dec  3 18:07:45 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Dec  3 18:07:45 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Dec  3 18:07:45 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Dec  3 18:07:45 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 107 pg[9.19( v 51'389 (0'0,51'389] local-lis/les=63/64 n=5 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=107) [2]/[0] r=0 lpr=107 pi=[63,107)/1 crt=51'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:07:45 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 107 pg[9.19( v 51'389 (0'0,51'389] local-lis/les=63/64 n=5 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=107) [2]/[0] r=0 lpr=107 pi=[63,107)/1 crt=51'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  3 18:07:45 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 107 pg[9.19( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=107) [2]/[0] r=-1 lpr=107 pi=[63,107)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:07:45 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 107 pg[9.19( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=107) [2]/[0] r=-1 lpr=107 pi=[63,107)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  3 18:07:45 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v234: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 24 B/s, 0 objects/s recovering
Dec  3 18:07:45 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0) v1
Dec  3 18:07:45 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Dec  3 18:07:45 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Dec  3 18:07:45 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Dec  3 18:07:46 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Dec  3 18:07:46 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Dec  3 18:07:46 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Dec  3 18:07:46 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Dec  3 18:07:46 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Dec  3 18:07:46 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Dec  3 18:07:46 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Dec  3 18:07:46 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 108 pg[9.19( v 51'389 (0'0,51'389] local-lis/les=107/108 n=5 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=107) [2]/[0] async=[2] r=0 lpr=107 pi=[63,107)/1 crt=51'389 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:07:46 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:07:46 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Dec  3 18:07:46 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Dec  3 18:07:46 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Dec  3 18:07:46 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 109 pg[9.19( v 51'389 (0'0,51'389] local-lis/les=107/108 n=5 ec=54/45 lis/c=107/63 les/c/f=108/64/0 sis=109 pruub=15.670583725s) [2] async=[2] r=-1 lpr=109 pi=[63,109)/1 crt=51'389 mlcod 51'389 active pruub 223.030578613s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:07:46 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 109 pg[9.19( v 51'389 (0'0,51'389] local-lis/les=107/108 n=5 ec=54/45 lis/c=107/63 les/c/f=108/64/0 sis=109 pruub=15.669569969s) [2] r=-1 lpr=109 pi=[63,109)/1 crt=51'389 mlcod 0'0 unknown NOTIFY pruub 223.030578613s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:07:46 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 109 pg[9.19( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=54/45 lis/c=107/63 les/c/f=108/64/0 sis=109) [2] r=0 lpr=109 pi=[63,109)/1 luod=0'0 crt=51'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:07:46 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 109 pg[9.19( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=54/45 lis/c=107/63 les/c/f=108/64/0 sis=109) [2] r=0 lpr=109 pi=[63,109)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:07:47 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Dec  3 18:07:47 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Dec  3 18:07:47 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Dec  3 18:07:47 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v237: 321 pgs: 1 peering, 320 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 2 objects/s recovering
Dec  3 18:07:47 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Dec  3 18:07:47 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Dec  3 18:07:47 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Dec  3 18:07:47 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 110 pg[9.19( v 51'389 (0'0,51'389] local-lis/les=109/110 n=5 ec=54/45 lis/c=107/63 les/c/f=108/64/0 sis=109) [2] r=0 lpr=109 pi=[63,109)/1 crt=51'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:07:48 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Dec  3 18:07:48 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Dec  3 18:07:49 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v239: 321 pgs: 1 peering, 320 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 25 B/s, 1 objects/s recovering
Dec  3 18:07:49 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 6.1d scrub starts
Dec  3 18:07:49 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 6.1d scrub ok
Dec  3 18:07:50 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Dec  3 18:07:50 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Dec  3 18:07:51 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 7.1a deep-scrub starts
Dec  3 18:07:51 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 7.1a deep-scrub ok
Dec  3 18:07:51 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v240: 321 pgs: 1 peering, 320 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Dec  3 18:07:51 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:07:51 compute-0 podman[228171]: 2025-12-03 18:07:51.943831356 +0000 UTC m=+0.084292022 container health_status f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 18:07:51 compute-0 podman[228164]: 2025-12-03 18:07:51.966361804 +0000 UTC m=+0.110419547 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  3 18:07:51 compute-0 podman[228161]: 2025-12-03 18:07:51.977490565 +0000 UTC m=+0.135076248 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  3 18:07:51 compute-0 podman[228162]: 2025-12-03 18:07:51.979438113 +0000 UTC m=+0.133864589 container health_status 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, version=9.6, managed_by=edpm_ansible, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, architecture=x86_64, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, name=ubi9-minimal)
Dec  3 18:07:51 compute-0 podman[228163]: 2025-12-03 18:07:51.993367972 +0000 UTC m=+0.141415543 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  3 18:07:51 compute-0 podman[228177]: 2025-12-03 18:07:51.996808565 +0000 UTC m=+0.130249481 container health_status ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., io.buildah.version=1.29.0, managed_by=edpm_ansible, io.openshift.tags=base rhel9, release=1214.1726694543, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, config_id=edpm, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, distribution-scope=public, version=9.4, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.openshift.expose-services=, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec  3 18:07:53 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 6.1c scrub starts
Dec  3 18:07:53 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 6.1c scrub ok
Dec  3 18:07:53 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 3.f scrub starts
Dec  3 18:07:53 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 3.f scrub ok
Dec  3 18:07:53 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v241: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 15 B/s, 0 objects/s recovering
Dec  3 18:07:53 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Dec  3 18:07:53 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Dec  3 18:07:54 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 2.d scrub starts
Dec  3 18:07:54 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 2.d scrub ok
Dec  3 18:07:54 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Dec  3 18:07:54 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Dec  3 18:07:54 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Dec  3 18:07:54 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Dec  3 18:07:54 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Dec  3 18:07:54 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 7.9 deep-scrub starts
Dec  3 18:07:54 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 7.9 deep-scrub ok
Dec  3 18:07:55 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Dec  3 18:07:55 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v243: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:07:55 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0) v1
Dec  3 18:07:55 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Dec  3 18:07:56 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Dec  3 18:07:56 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Dec  3 18:07:56 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Dec  3 18:07:56 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Dec  3 18:07:56 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Dec  3 18:07:56 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Dec  3 18:07:56 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Dec  3 18:07:56 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:07:57 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 112 pg[9.1c( v 51'389 (0'0,51'389] local-lis/les=86/87 n=5 ec=54/45 lis/c=86/86 les/c/f=87/87/0 sis=112 pruub=12.249700546s) [0] r=-1 lpr=112 pi=[86,112)/1 crt=51'389 mlcod 0'0 active pruub 215.341690063s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:07:57 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 112 pg[9.1c( v 51'389 (0'0,51'389] local-lis/les=86/87 n=5 ec=54/45 lis/c=86/86 les/c/f=87/87/0 sis=112 pruub=12.249653816s) [0] r=-1 lpr=112 pi=[86,112)/1 crt=51'389 mlcod 0'0 unknown NOTIFY pruub 215.341690063s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:07:57 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 112 pg[9.1c( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=86/86 les/c/f=87/87/0 sis=112) [0] r=0 lpr=112 pi=[86,112)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:07:57 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Dec  3 18:07:57 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Dec  3 18:07:57 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Dec  3 18:07:57 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Dec  3 18:07:57 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 113 pg[9.1c( v 51'389 (0'0,51'389] local-lis/les=86/87 n=5 ec=54/45 lis/c=86/86 les/c/f=87/87/0 sis=113) [0]/[2] r=0 lpr=113 pi=[86,113)/1 crt=51'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:07:57 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 113 pg[9.1c( v 51'389 (0'0,51'389] local-lis/les=86/87 n=5 ec=54/45 lis/c=86/86 les/c/f=87/87/0 sis=113) [0]/[2] r=0 lpr=113 pi=[86,113)/1 crt=51'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  3 18:07:57 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 113 pg[9.1c( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=86/86 les/c/f=87/87/0 sis=113) [0]/[2] r=-1 lpr=113 pi=[86,113)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:07:57 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 113 pg[9.1c( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=86/86 les/c/f=87/87/0 sis=113) [0]/[2] r=-1 lpr=113 pi=[86,113)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  3 18:07:57 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v246: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:07:57 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0) v1
Dec  3 18:07:57 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Dec  3 18:07:58 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Dec  3 18:07:58 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Dec  3 18:07:58 compute-0 python3.9[228433]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:07:58 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Dec  3 18:07:58 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Dec  3 18:07:58 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Dec  3 18:07:58 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Dec  3 18:07:58 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Dec  3 18:07:58 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 114 pg[9.1c( v 51'389 (0'0,51'389] local-lis/les=113/114 n=5 ec=54/45 lis/c=86/86 les/c/f=87/87/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[86,113)/1 crt=51'389 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:07:59 compute-0 podman[158200]: time="2025-12-03T18:07:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:07:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:07:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32819 "" "Go-http-client/1.1"
Dec  3 18:07:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:07:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6801 "" "Go-http-client/1.1"
Dec  3 18:07:59 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Dec  3 18:07:59 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Dec  3 18:07:59 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v249: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:07:59 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Dec  3 18:07:59 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 115 pg[9.1c( v 51'389 (0'0,51'389] local-lis/les=113/114 n=5 ec=54/45 lis/c=113/86 les/c/f=114/87/0 sis=115 pruub=14.989594460s) [0] async=[0] r=-1 lpr=115 pi=[86,115)/1 crt=51'389 mlcod 51'389 active pruub 220.268875122s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:07:59 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 115 pg[9.1c( v 51'389 (0'0,51'389] local-lis/les=113/114 n=5 ec=54/45 lis/c=113/86 les/c/f=114/87/0 sis=115 pruub=14.989488602s) [0] r=-1 lpr=115 pi=[86,115)/1 crt=51'389 mlcod 0'0 unknown NOTIFY pruub 220.268875122s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:07:59 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0) v1
Dec  3 18:07:59 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Dec  3 18:07:59 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Dec  3 18:07:59 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 115 pg[9.1c( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=54/45 lis/c=113/86 les/c/f=114/87/0 sis=115) [0] r=0 lpr=115 pi=[86,115)/1 luod=0'0 crt=51'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:07:59 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 115 pg[9.1c( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=54/45 lis/c=113/86 les/c/f=114/87/0 sis=115) [0] r=0 lpr=115 pi=[86,115)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:08:00 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Dec  3 18:08:00 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Dec  3 18:08:00 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Dec  3 18:08:00 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Dec  3 18:08:00 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Dec  3 18:08:00 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Dec  3 18:08:00 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 116 pg[9.1e( v 51'389 (0'0,51'389] local-lis/les=72/73 n=5 ec=54/45 lis/c=72/72 les/c/f=73/73/0 sis=116 pruub=10.432622910s) [0] r=-1 lpr=116 pi=[72,116)/1 crt=51'389 mlcod 0'0 active pruub 216.722610474s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:08:00 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 116 pg[9.1e( v 51'389 (0'0,51'389] local-lis/les=72/73 n=5 ec=54/45 lis/c=72/72 les/c/f=73/73/0 sis=116 pruub=10.432572365s) [0] r=-1 lpr=116 pi=[72,116)/1 crt=51'389 mlcod 0'0 unknown NOTIFY pruub 216.722610474s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:08:00 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 116 pg[9.1e( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=72/72 les/c/f=73/73/0 sis=116) [0] r=0 lpr=116 pi=[72,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:08:00 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Dec  3 18:08:00 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Dec  3 18:08:00 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 116 pg[9.1c( v 51'389 (0'0,51'389] local-lis/les=115/116 n=5 ec=54/45 lis/c=113/86 les/c/f=114/87/0 sis=115) [0] r=0 lpr=115 pi=[86,115)/1 crt=51'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:08:01 compute-0 python3.9[228720]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Dec  3 18:08:01 compute-0 openstack_network_exporter[160319]: ERROR   18:08:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:08:01 compute-0 openstack_network_exporter[160319]: ERROR   18:08:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:08:01 compute-0 openstack_network_exporter[160319]: ERROR   18:08:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:08:01 compute-0 openstack_network_exporter[160319]: ERROR   18:08:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:08:01 compute-0 openstack_network_exporter[160319]: ERROR   18:08:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:08:01 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 3.c scrub starts
Dec  3 18:08:01 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 3.c scrub ok
Dec  3 18:08:01 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v251: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:08:01 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec  3 18:08:01 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  3 18:08:01 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Dec  3 18:08:01 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  3 18:08:01 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Dec  3 18:08:01 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Dec  3 18:08:01 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 117 pg[9.1e( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=72/72 les/c/f=73/73/0 sis=117) [0]/[2] r=-1 lpr=117 pi=[72,117)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:08:01 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 117 pg[9.1e( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=72/72 les/c/f=73/73/0 sis=117) [0]/[2] r=-1 lpr=117 pi=[72,117)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  3 18:08:01 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  3 18:08:01 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  3 18:08:01 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 117 pg[9.1e( v 51'389 (0'0,51'389] local-lis/les=72/73 n=5 ec=54/45 lis/c=72/72 les/c/f=73/73/0 sis=117) [0]/[2] r=0 lpr=117 pi=[72,117)/1 crt=51'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:08:01 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 117 pg[9.1e( v 51'389 (0'0,51'389] local-lis/les=72/73 n=5 ec=54/45 lis/c=72/72 les/c/f=73/73/0 sis=117) [0]/[2] r=0 lpr=117 pi=[72,117)/1 crt=51'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  3 18:08:01 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 117 pg[9.1f( v 51'389 (0'0,51'389] local-lis/les=73/74 n=5 ec=54/45 lis/c=73/73 les/c/f=74/74/0 sis=117 pruub=10.440087318s) [1] r=-1 lpr=117 pi=[73,117)/1 crt=51'389 mlcod 0'0 active pruub 217.762069702s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:08:01 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 117 pg[9.1f( v 51'389 (0'0,51'389] local-lis/les=73/74 n=5 ec=54/45 lis/c=73/73 les/c/f=74/74/0 sis=117 pruub=10.440027237s) [1] r=-1 lpr=117 pi=[73,117)/1 crt=51'389 mlcod 0'0 unknown NOTIFY pruub 217.762069702s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:08:01 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 117 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=73/73 les/c/f=74/74/0 sis=117) [1] r=0 lpr=117 pi=[73,117)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:08:02 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Dec  3 18:08:02 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Dec  3 18:08:02 compute-0 python3.9[228873]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Dec  3 18:08:02 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Dec  3 18:08:02 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Dec  3 18:08:02 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Dec  3 18:08:02 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 118 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=73/73 les/c/f=74/74/0 sis=118) [1]/[2] r=-1 lpr=118 pi=[73,118)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:08:02 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 118 pg[9.1f( v 51'389 (0'0,51'389] local-lis/les=73/74 n=5 ec=54/45 lis/c=73/73 les/c/f=74/74/0 sis=118) [1]/[2] r=0 lpr=118 pi=[73,118)/1 crt=51'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:08:02 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 118 pg[9.1f( v 51'389 (0'0,51'389] local-lis/les=73/74 n=5 ec=54/45 lis/c=73/73 les/c/f=74/74/0 sis=118) [1]/[2] r=0 lpr=118 pi=[73,118)/1 crt=51'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  3 18:08:02 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 118 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=73/73 les/c/f=74/74/0 sis=118) [1]/[2] r=-1 lpr=118 pi=[73,118)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  3 18:08:02 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 118 pg[9.1e( v 51'389 (0'0,51'389] local-lis/les=117/118 n=5 ec=54/45 lis/c=72/72 les/c/f=73/73/0 sis=117) [0]/[2] async=[0] r=0 lpr=117 pi=[72,117)/1 crt=51'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:08:03 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Dec  3 18:08:03 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Dec  3 18:08:03 compute-0 python3.9[229025]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:08:03 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v254: 321 pgs: 1 remapped+peering, 320 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:08:03 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Dec  3 18:08:03 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Dec  3 18:08:03 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Dec  3 18:08:03 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 119 pg[9.1e( v 51'389 (0'0,51'389] local-lis/les=117/118 n=5 ec=54/45 lis/c=117/72 les/c/f=118/73/0 sis=119 pruub=15.000247002s) [0] async=[0] r=-1 lpr=119 pi=[72,119)/1 crt=51'389 mlcod 51'389 active pruub 224.348632812s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:08:03 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 119 pg[9.1e( v 51'389 (0'0,51'389] local-lis/les=117/118 n=5 ec=54/45 lis/c=117/72 les/c/f=118/73/0 sis=119 pruub=15.000160217s) [0] r=-1 lpr=119 pi=[72,119)/1 crt=51'389 mlcod 0'0 unknown NOTIFY pruub 224.348632812s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:08:03 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 119 pg[9.1e( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=54/45 lis/c=117/72 les/c/f=118/73/0 sis=119) [0] r=0 lpr=119 pi=[72,119)/1 luod=0'0 crt=51'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:08:03 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 119 pg[9.1e( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=54/45 lis/c=117/72 les/c/f=118/73/0 sis=119) [0] r=0 lpr=119 pi=[72,119)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:08:03 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 119 pg[9.1f( v 51'389 (0'0,51'389] local-lis/les=118/119 n=5 ec=54/45 lis/c=73/73 les/c/f=74/74/0 sis=118) [1]/[2] async=[1] r=0 lpr=118 pi=[73,118)/1 crt=51'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:08:04 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Dec  3 18:08:04 compute-0 python3.9[229177]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Dec  3 18:08:04 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Dec  3 18:08:04 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Dec  3 18:08:04 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Dec  3 18:08:04 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Dec  3 18:08:04 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 120 pg[9.1f( v 51'389 (0'0,51'389] local-lis/les=118/119 n=5 ec=54/45 lis/c=118/73 les/c/f=119/74/0 sis=120 pruub=14.983204842s) [1] async=[1] r=-1 lpr=120 pi=[73,120)/1 crt=51'389 mlcod 51'389 active pruub 225.361709595s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:08:04 compute-0 ceph-osd[208881]: osd.2 pg_epoch: 120 pg[9.1f( v 51'389 (0'0,51'389] local-lis/les=118/119 n=5 ec=54/45 lis/c=118/73 les/c/f=119/74/0 sis=120 pruub=14.979428291s) [1] r=-1 lpr=120 pi=[73,120)/1 crt=51'389 mlcod 0'0 unknown NOTIFY pruub 225.361709595s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 18:08:04 compute-0 ceph-osd[206694]: osd.0 pg_epoch: 120 pg[9.1e( v 51'389 (0'0,51'389] local-lis/les=119/120 n=5 ec=54/45 lis/c=117/72 les/c/f=118/73/0 sis=119) [0] r=0 lpr=119 pi=[72,119)/1 crt=51'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:08:04 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 120 pg[9.1f( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=54/45 lis/c=118/73 les/c/f=119/74/0 sis=120) [1] r=0 lpr=120 pi=[73,120)/1 luod=0'0 crt=51'389 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 18:08:04 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 120 pg[9.1f( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=54/45 lis/c=118/73 les/c/f=119/74/0 sis=120) [1] r=0 lpr=120 pi=[73,120)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 18:08:05 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Dec  3 18:08:05 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Dec  3 18:08:05 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Dec  3 18:08:05 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Dec  3 18:08:05 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Dec  3 18:08:05 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Dec  3 18:08:05 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v257: 321 pgs: 1 peering, 320 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 2 objects/s recovering
Dec  3 18:08:05 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Dec  3 18:08:05 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Dec  3 18:08:05 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Dec  3 18:08:06 compute-0 ceph-osd[207851]: osd.1 pg_epoch: 121 pg[9.1f( v 51'389 (0'0,51'389] local-lis/les=120/121 n=5 ec=54/45 lis/c=118/73 les/c/f=119/74/0 sis=120) [1] r=0 lpr=120 pi=[73,120)/1 crt=51'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 18:08:06 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 10.a scrub starts
Dec  3 18:08:06 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 10.a scrub ok
Dec  3 18:08:06 compute-0 podman[229329]: 2025-12-03 18:08:06.1500051 +0000 UTC m=+0.105756225 container health_status 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  3 18:08:06 compute-0 python3.9[229330]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:08:06 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 7.f deep-scrub starts
Dec  3 18:08:06 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 7.f deep-scrub ok
Dec  3 18:08:06 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:08:07 compute-0 python3.9[229504]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:08:07 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Dec  3 18:08:07 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Dec  3 18:08:07 compute-0 python3.9[229582]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:08:07 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v259: 321 pgs: 1 peering, 320 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 44 B/s, 3 objects/s recovering
Dec  3 18:08:08 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 10.c scrub starts
Dec  3 18:08:08 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 10.c scrub ok
Dec  3 18:08:08 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 3.3 deep-scrub starts
Dec  3 18:08:08 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 3.3 deep-scrub ok
Dec  3 18:08:09 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 10.18 deep-scrub starts
Dec  3 18:08:09 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 10.18 deep-scrub ok
Dec  3 18:08:09 compute-0 python3.9[229734]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 18:08:09 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v260: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 36 B/s, 2 objects/s recovering
Dec  3 18:08:10 compute-0 python3.9[229888]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Dec  3 18:08:11 compute-0 python3.9[230041]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Dec  3 18:08:11 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v261: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 2 objects/s recovering
Dec  3 18:08:11 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:08:12 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 2.7 deep-scrub starts
Dec  3 18:08:12 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 2.7 deep-scrub ok
Dec  3 18:08:12 compute-0 python3.9[230194]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  3 18:08:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_18:08:13
Dec  3 18:08:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 18:08:13 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 18:08:13 compute-0 ceph-mgr[193091]: [balancer INFO root] pools ['volumes', '.mgr', 'images', 'vms', 'default.rgw.log', '.rgw.root', 'backups', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.meta', 'cephfs.cephfs.data']
Dec  3 18:08:13 compute-0 ceph-mgr[193091]: [balancer INFO root] prepared 0/10 changes
Dec  3 18:08:13 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v262: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 12 B/s, 0 objects/s recovering
Dec  3 18:08:13 compute-0 python3.9[230346]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Dec  3 18:08:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:08:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:08:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:08:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:08:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:08:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:08:13 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 18:08:13 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:08:13 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 18:08:13 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:08:13 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:08:13 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:08:13 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:08:13 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:08:13 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:08:13 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:08:15 compute-0 python3.9[230498]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  3 18:08:15 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Dec  3 18:08:15 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Dec  3 18:08:15 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v263: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 10 B/s, 0 objects/s recovering
Dec  3 18:08:16 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Dec  3 18:08:16 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Dec  3 18:08:16 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:08:17 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Dec  3 18:08:17 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Dec  3 18:08:17 compute-0 python3.9[230651]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:08:17 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v264: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 9 B/s, 0 objects/s recovering
Dec  3 18:08:18 compute-0 python3.9[230803]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:08:18 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Dec  3 18:08:18 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Dec  3 18:08:19 compute-0 python3.9[230881]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:08:19 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 10.1c deep-scrub starts
Dec  3 18:08:19 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 10.1c deep-scrub ok
Dec  3 18:08:19 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 3.1b deep-scrub starts
Dec  3 18:08:19 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 3.1b deep-scrub ok
Dec  3 18:08:19 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v265: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:08:20 compute-0 python3.9[231034]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:08:20 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Dec  3 18:08:20 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Dec  3 18:08:20 compute-0 python3.9[231112]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:08:21 compute-0 python3.9[231264]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  3 18:08:21 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v266: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:08:21 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:08:22 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Dec  3 18:08:22 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Dec  3 18:08:22 compute-0 podman[231266]: 2025-12-03 18:08:22.936837862 +0000 UTC m=+0.105588341 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi)
Dec  3 18:08:22 compute-0 podman[231269]: 2025-12-03 18:08:22.942429408 +0000 UTC m=+0.100341273 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Dec  3 18:08:22 compute-0 podman[231267]: 2025-12-03 18:08:22.97579189 +0000 UTC m=+0.141606587 container health_status 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=edpm_ansible, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, name=ubi9-minimal, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, vcs-type=git, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64)
Dec  3 18:08:22 compute-0 podman[231282]: 2025-12-03 18:08:22.978973817 +0000 UTC m=+0.127877643 container health_status ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=kepler, release=1214.1726694543, vcs-type=git, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, name=ubi9, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, managed_by=edpm_ansible, com.redhat.component=ubi9-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec  3 18:08:22 compute-0 podman[231268]: 2025-12-03 18:08:22.989027561 +0000 UTC m=+0.150636126 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec  3 18:08:22 compute-0 podman[231270]: 2025-12-03 18:08:22.992381213 +0000 UTC m=+0.138310496 container health_status f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  3 18:08:23 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Dec  3 18:08:23 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Dec  3 18:08:23 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Dec  3 18:08:23 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Dec  3 18:08:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 18:08:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:08:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 18:08:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:08:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:08:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:08:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:08:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:08:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:08:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:08:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:08:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:08:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 18:08:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:08:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:08:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:08:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 18:08:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:08:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 18:08:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:08:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:08:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:08:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 18:08:23 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v267: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:08:24 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Dec  3 18:08:24 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Dec  3 18:08:24 compute-0 python3.9[231536]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 18:08:24 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Dec  3 18:08:24 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Dec  3 18:08:25 compute-0 python3.9[231688]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Dec  3 18:08:25 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v268: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:08:26 compute-0 python3.9[231838]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 18:08:27 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Dec  3 18:08:27 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Dec  3 18:08:27 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:08:27 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 5.1 deep-scrub starts
Dec  3 18:08:27 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 5.1 deep-scrub ok
Dec  3 18:08:27 compute-0 python3.9[231990]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 18:08:27 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec  3 18:08:27 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v269: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:08:27 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Dec  3 18:08:27 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec  3 18:08:27 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec  3 18:08:28 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Dec  3 18:08:28 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Dec  3 18:08:28 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Dec  3 18:08:29 compute-0 python3.9[232151]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Dec  3 18:08:29 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Dec  3 18:08:29 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Dec  3 18:08:29 compute-0 podman[158200]: time="2025-12-03T18:08:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:08:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:08:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32819 "" "Go-http-client/1.1"
Dec  3 18:08:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:08:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6814 "" "Go-http-client/1.1"
Dec  3 18:08:29 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v270: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:08:30 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Dec  3 18:08:30 compute-0 ceph-osd[207851]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Dec  3 18:08:30 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Dec  3 18:08:30 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Dec  3 18:08:31 compute-0 openstack_network_exporter[160319]: ERROR   18:08:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:08:31 compute-0 openstack_network_exporter[160319]: ERROR   18:08:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:08:31 compute-0 openstack_network_exporter[160319]: ERROR   18:08:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:08:31 compute-0 openstack_network_exporter[160319]: ERROR   18:08:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:08:31 compute-0 openstack_network_exporter[160319]: ERROR   18:08:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:08:31 compute-0 python3.9[232303]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 18:08:31 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v271: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:08:32 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:08:33 compute-0 python3.9[232457]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 18:08:33 compute-0 systemd[1]: session-42.scope: Deactivated successfully.
Dec  3 18:08:33 compute-0 systemd[1]: session-42.scope: Consumed 1min 18.568s CPU time.
Dec  3 18:08:33 compute-0 systemd-logind[784]: Session 42 logged out. Waiting for processes to exit.
Dec  3 18:08:33 compute-0 systemd-logind[784]: Removed session 42.
Dec  3 18:08:33 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:08:33 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:08:33 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 18:08:33 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:08:33 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 18:08:33 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:08:33 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 1db3f3f7-991b-4683-9a41-8a3fca713561 does not exist
Dec  3 18:08:33 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 4e942ff6-a282-421c-b9dd-b8d8cadbbf24 does not exist
Dec  3 18:08:33 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev a4fee3b2-1c14-495d-a47a-ac3c0303b0b2 does not exist
Dec  3 18:08:33 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 18:08:33 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 18:08:33 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 18:08:33 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:08:33 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:08:33 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:08:33 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v272: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:08:34 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:08:34 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:08:34 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:08:34 compute-0 podman[232752]: 2025-12-03 18:08:34.54978101 +0000 UTC m=+0.057011139 container create 9f5d4eef46792820282ce792e4349957594c40289be1b10d62e1d4ebb67f8eb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True)
Dec  3 18:08:34 compute-0 systemd[194616]: Created slice User Background Tasks Slice.
Dec  3 18:08:34 compute-0 systemd[194616]: Starting Cleanup of User's Temporary Files and Directories...
Dec  3 18:08:34 compute-0 systemd[1]: Started libpod-conmon-9f5d4eef46792820282ce792e4349957594c40289be1b10d62e1d4ebb67f8eb9.scope.
Dec  3 18:08:34 compute-0 systemd[194616]: Finished Cleanup of User's Temporary Files and Directories.
Dec  3 18:08:34 compute-0 podman[232752]: 2025-12-03 18:08:34.525716384 +0000 UTC m=+0.032946533 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:08:34 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:08:34 compute-0 podman[232752]: 2025-12-03 18:08:34.709140998 +0000 UTC m=+0.216371147 container init 9f5d4eef46792820282ce792e4349957594c40289be1b10d62e1d4ebb67f8eb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jackson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  3 18:08:34 compute-0 podman[232752]: 2025-12-03 18:08:34.728598011 +0000 UTC m=+0.235828130 container start 9f5d4eef46792820282ce792e4349957594c40289be1b10d62e1d4ebb67f8eb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jackson, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  3 18:08:34 compute-0 podman[232752]: 2025-12-03 18:08:34.734426513 +0000 UTC m=+0.241656642 container attach 9f5d4eef46792820282ce792e4349957594c40289be1b10d62e1d4ebb67f8eb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jackson, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Dec  3 18:08:34 compute-0 hardcore_jackson[232769]: 167 167
Dec  3 18:08:34 compute-0 systemd[1]: libpod-9f5d4eef46792820282ce792e4349957594c40289be1b10d62e1d4ebb67f8eb9.scope: Deactivated successfully.
Dec  3 18:08:34 compute-0 podman[232752]: 2025-12-03 18:08:34.739776813 +0000 UTC m=+0.247006932 container died 9f5d4eef46792820282ce792e4349957594c40289be1b10d62e1d4ebb67f8eb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jackson, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec  3 18:08:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-b420b7d74d06391a578d66bb7a72df1f37e5bf4a82dda887aed9244c4427bde3-merged.mount: Deactivated successfully.
Dec  3 18:08:34 compute-0 podman[232752]: 2025-12-03 18:08:34.828212915 +0000 UTC m=+0.335443064 container remove 9f5d4eef46792820282ce792e4349957594c40289be1b10d62e1d4ebb67f8eb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jackson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec  3 18:08:34 compute-0 systemd[1]: libpod-conmon-9f5d4eef46792820282ce792e4349957594c40289be1b10d62e1d4ebb67f8eb9.scope: Deactivated successfully.
Dec  3 18:08:35 compute-0 podman[232793]: 2025-12-03 18:08:35.094595337 +0000 UTC m=+0.092294007 container create 1b8b1107a7a7180ccc63b31d5c9e43c58cdfd2e53b94cb5aaf9c1cd258449296 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Dec  3 18:08:35 compute-0 podman[232793]: 2025-12-03 18:08:35.048387193 +0000 UTC m=+0.046085843 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:08:35 compute-0 systemd[1]: Started libpod-conmon-1b8b1107a7a7180ccc63b31d5c9e43c58cdfd2e53b94cb5aaf9c1cd258449296.scope.
Dec  3 18:08:35 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:08:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f30a717a5ecd30a561d9b0fa575ec6a33b5e5fc58c70185fb088d80767bef17/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:08:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f30a717a5ecd30a561d9b0fa575ec6a33b5e5fc58c70185fb088d80767bef17/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:08:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f30a717a5ecd30a561d9b0fa575ec6a33b5e5fc58c70185fb088d80767bef17/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:08:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f30a717a5ecd30a561d9b0fa575ec6a33b5e5fc58c70185fb088d80767bef17/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:08:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f30a717a5ecd30a561d9b0fa575ec6a33b5e5fc58c70185fb088d80767bef17/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 18:08:35 compute-0 podman[232793]: 2025-12-03 18:08:35.237660509 +0000 UTC m=+0.235359179 container init 1b8b1107a7a7180ccc63b31d5c9e43c58cdfd2e53b94cb5aaf9c1cd258449296 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  3 18:08:35 compute-0 podman[232793]: 2025-12-03 18:08:35.262246357 +0000 UTC m=+0.259944977 container start 1b8b1107a7a7180ccc63b31d5c9e43c58cdfd2e53b94cb5aaf9c1cd258449296 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_mccarthy, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:08:35 compute-0 podman[232793]: 2025-12-03 18:08:35.267147307 +0000 UTC m=+0.264846007 container attach 1b8b1107a7a7180ccc63b31d5c9e43c58cdfd2e53b94cb5aaf9c1cd258449296 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_mccarthy, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec  3 18:08:36 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v273: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:08:36 compute-0 optimistic_mccarthy[232809]: --> passed data devices: 0 physical, 3 LVM
Dec  3 18:08:36 compute-0 optimistic_mccarthy[232809]: --> relative data size: 1.0
Dec  3 18:08:36 compute-0 optimistic_mccarthy[232809]: --> All data devices are unavailable
Dec  3 18:08:36 compute-0 systemd[1]: libpod-1b8b1107a7a7180ccc63b31d5c9e43c58cdfd2e53b94cb5aaf9c1cd258449296.scope: Deactivated successfully.
Dec  3 18:08:36 compute-0 systemd[1]: libpod-1b8b1107a7a7180ccc63b31d5c9e43c58cdfd2e53b94cb5aaf9c1cd258449296.scope: Consumed 1.090s CPU time.
Dec  3 18:08:36 compute-0 podman[232793]: 2025-12-03 18:08:36.684917928 +0000 UTC m=+1.682616588 container died 1b8b1107a7a7180ccc63b31d5c9e43c58cdfd2e53b94cb5aaf9c1cd258449296 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef)
Dec  3 18:08:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f30a717a5ecd30a561d9b0fa575ec6a33b5e5fc58c70185fb088d80767bef17-merged.mount: Deactivated successfully.
Dec  3 18:08:36 compute-0 podman[232793]: 2025-12-03 18:08:36.806766103 +0000 UTC m=+1.804464723 container remove 1b8b1107a7a7180ccc63b31d5c9e43c58cdfd2e53b94cb5aaf9c1cd258449296 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:08:36 compute-0 systemd[1]: libpod-conmon-1b8b1107a7a7180ccc63b31d5c9e43c58cdfd2e53b94cb5aaf9c1cd258449296.scope: Deactivated successfully.
Dec  3 18:08:36 compute-0 podman[232839]: 2025-12-03 18:08:36.840586306 +0000 UTC m=+0.108701906 container health_status 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  3 18:08:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:08:37 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Dec  3 18:08:37 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Dec  3 18:08:37 compute-0 podman[233009]: 2025-12-03 18:08:37.736773994 +0000 UTC m=+0.075912748 container create a47ccf4c904ac4037195f2c48b24aac3514873a7f6a5e9259a43f0822274ba2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:08:37 compute-0 systemd[1]: Started libpod-conmon-a47ccf4c904ac4037195f2c48b24aac3514873a7f6a5e9259a43f0822274ba2c.scope.
Dec  3 18:08:37 compute-0 podman[233009]: 2025-12-03 18:08:37.713180151 +0000 UTC m=+0.052318925 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:08:37 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:08:37 compute-0 podman[233009]: 2025-12-03 18:08:37.843658896 +0000 UTC m=+0.182797640 container init a47ccf4c904ac4037195f2c48b24aac3514873a7f6a5e9259a43f0822274ba2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  3 18:08:37 compute-0 podman[233009]: 2025-12-03 18:08:37.851550137 +0000 UTC m=+0.190688861 container start a47ccf4c904ac4037195f2c48b24aac3514873a7f6a5e9259a43f0822274ba2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  3 18:08:37 compute-0 podman[233009]: 2025-12-03 18:08:37.85537893 +0000 UTC m=+0.194517684 container attach a47ccf4c904ac4037195f2c48b24aac3514873a7f6a5e9259a43f0822274ba2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_torvalds, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec  3 18:08:37 compute-0 jolly_torvalds[233024]: 167 167
Dec  3 18:08:37 compute-0 systemd[1]: libpod-a47ccf4c904ac4037195f2c48b24aac3514873a7f6a5e9259a43f0822274ba2c.scope: Deactivated successfully.
Dec  3 18:08:37 compute-0 podman[233009]: 2025-12-03 18:08:37.8590496 +0000 UTC m=+0.198188324 container died a47ccf4c904ac4037195f2c48b24aac3514873a7f6a5e9259a43f0822274ba2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_torvalds, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec  3 18:08:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-25b31abd04e23667fe0ec05482d8fdbe68d7c789bb7d733fb6d0dcebf0acacfc-merged.mount: Deactivated successfully.
Dec  3 18:08:37 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v274: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:08:37 compute-0 podman[233009]: 2025-12-03 18:08:37.922436892 +0000 UTC m=+0.261575616 container remove a47ccf4c904ac4037195f2c48b24aac3514873a7f6a5e9259a43f0822274ba2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:08:37 compute-0 systemd[1]: libpod-conmon-a47ccf4c904ac4037195f2c48b24aac3514873a7f6a5e9259a43f0822274ba2c.scope: Deactivated successfully.
Dec  3 18:08:38 compute-0 podman[233047]: 2025-12-03 18:08:38.112703433 +0000 UTC m=+0.054688802 container create 1b11bf8b2f8691f55fd018eca724b888be560f3df0bf6f604e9e85b743c6ff25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_shamir, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec  3 18:08:38 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Dec  3 18:08:38 compute-0 ceph-osd[208881]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Dec  3 18:08:38 compute-0 systemd[1]: Started libpod-conmon-1b11bf8b2f8691f55fd018eca724b888be560f3df0bf6f604e9e85b743c6ff25.scope.
Dec  3 18:08:38 compute-0 podman[233047]: 2025-12-03 18:08:38.09330535 +0000 UTC m=+0.035290739 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:08:38 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:08:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee7e9eb165980f31f65cc036f37ef5019ef4e269f08d3bb79fb4d69d3c80f21a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:08:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee7e9eb165980f31f65cc036f37ef5019ef4e269f08d3bb79fb4d69d3c80f21a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:08:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee7e9eb165980f31f65cc036f37ef5019ef4e269f08d3bb79fb4d69d3c80f21a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:08:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee7e9eb165980f31f65cc036f37ef5019ef4e269f08d3bb79fb4d69d3c80f21a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:08:38 compute-0 podman[233047]: 2025-12-03 18:08:38.240313138 +0000 UTC m=+0.182298537 container init 1b11bf8b2f8691f55fd018eca724b888be560f3df0bf6f604e9e85b743c6ff25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_shamir, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:08:38 compute-0 podman[233047]: 2025-12-03 18:08:38.264328182 +0000 UTC m=+0.206313591 container start 1b11bf8b2f8691f55fd018eca724b888be560f3df0bf6f604e9e85b743c6ff25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec  3 18:08:38 compute-0 podman[233047]: 2025-12-03 18:08:38.27165295 +0000 UTC m=+0.213638329 container attach 1b11bf8b2f8691f55fd018eca724b888be560f3df0bf6f604e9e85b743c6ff25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default)
Dec  3 18:08:38 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Dec  3 18:08:38 compute-0 ceph-osd[206694]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Dec  3 18:08:39 compute-0 sharp_shamir[233064]: {
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:    "0": [
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:        {
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:            "devices": [
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:                "/dev/loop3"
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:            ],
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:            "lv_name": "ceph_lv0",
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:            "lv_size": "21470642176",
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=973fbbc8-5aff-4a53-bee8-42e5a6788dd6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:            "lv_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:            "name": "ceph_lv0",
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:            "tags": {
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:                "ceph.block_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:                "ceph.cluster_name": "ceph",
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:                "ceph.crush_device_class": "",
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:                "ceph.encrypted": "0",
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:                "ceph.osd_fsid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:                "ceph.osd_id": "0",
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:                "ceph.type": "block",
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:                "ceph.vdo": "0"
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:            },
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:            "type": "block",
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:            "vg_name": "ceph_vg0"
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:        }
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:    ],
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:    "1": [
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:        {
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:            "devices": [
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:                "/dev/loop4"
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:            ],
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:            "lv_name": "ceph_lv1",
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:            "lv_size": "21470642176",
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1e2b0083-5293-47cb-a3d1-bc27cedc4ede,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:            "lv_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:            "name": "ceph_lv1",
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:            "tags": {
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:                "ceph.block_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:                "ceph.cluster_name": "ceph",
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:                "ceph.crush_device_class": "",
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:                "ceph.encrypted": "0",
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:                "ceph.osd_fsid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:                "ceph.osd_id": "1",
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:                "ceph.type": "block",
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:                "ceph.vdo": "0"
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:            },
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:            "type": "block",
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:            "vg_name": "ceph_vg1"
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:        }
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:    ],
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:    "2": [
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:        {
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:            "devices": [
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:                "/dev/loop5"
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:            ],
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:            "lv_name": "ceph_lv2",
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:            "lv_size": "21470642176",
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2abec9de-afba-437e-9a17-384a1dd8cd50,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:            "lv_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:            "name": "ceph_lv2",
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:            "tags": {
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:                "ceph.block_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:                "ceph.cluster_name": "ceph",
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:                "ceph.crush_device_class": "",
Dec  3 18:08:39 compute-0 sharp_shamir[233064]:                "ceph.encrypted": "0",
Dec  3 18:10:59 compute-0 podman[248059]: 2025-12-03 18:10:59.739098357 +0000 UTC m=+0.195253895 container start 57e359d20a4e05dc658b97b89b194588c345f38784ac3d142b43bd2ddb3d496a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_jennings, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:10:59 compute-0 podman[248059]: 2025-12-03 18:10:59.743821483 +0000 UTC m=+0.199977031 container attach 57e359d20a4e05dc658b97b89b194588c345f38784ac3d142b43bd2ddb3d496a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec  3 18:10:59 compute-0 podman[158200]: time="2025-12-03T18:10:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:10:59 compute-0 inspiring_jennings[248076]: 167 167
Dec  3 18:10:59 compute-0 systemd[1]: libpod-57e359d20a4e05dc658b97b89b194588c345f38784ac3d142b43bd2ddb3d496a.scope: Deactivated successfully.
Dec  3 18:10:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:10:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 34198 "" "Go-http-client/1.1"
Dec  3 18:10:59 compute-0 podman[248059]: 2025-12-03 18:10:59.755060027 +0000 UTC m=+0.211215575 container died 57e359d20a4e05dc658b97b89b194588c345f38784ac3d142b43bd2ddb3d496a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_jennings, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec  3 18:10:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-3264043d234904173123715365cec2b7537c7e89ad722bace29fefad5ebef120-merged.mount: Deactivated successfully.
Dec  3 18:10:59 compute-0 podman[248059]: 2025-12-03 18:10:59.833271875 +0000 UTC m=+0.289427413 container remove 57e359d20a4e05dc658b97b89b194588c345f38784ac3d142b43bd2ddb3d496a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_jennings, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:10:59 compute-0 systemd[1]: libpod-conmon-57e359d20a4e05dc658b97b89b194588c345f38784ac3d142b43bd2ddb3d496a.scope: Deactivated successfully.
Dec  3 18:10:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:10:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6832 "" "Go-http-client/1.1"
Dec  3 18:10:59 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v345: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:11:00 compute-0 rsyslogd[188590]: imjournal: 1961 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Dec  3 18:11:00 compute-0 podman[248151]: 2025-12-03 18:11:00.098342493 +0000 UTC m=+0.082989316 container create 36892112115bc9f701157e307584016bf693ebac3531bbb7ea9f7ce10dc30381 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  3 18:11:00 compute-0 podman[248151]: 2025-12-03 18:11:00.069872359 +0000 UTC m=+0.054519182 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:11:00 compute-0 systemd[1]: Started libpod-conmon-36892112115bc9f701157e307584016bf693ebac3531bbb7ea9f7ce10dc30381.scope.
Dec  3 18:11:00 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:11:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/212174e24793849f3e9f37ac563e48f55425c0a654e700ffadc60c83ffbc2b3f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:11:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/212174e24793849f3e9f37ac563e48f55425c0a654e700ffadc60c83ffbc2b3f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:11:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/212174e24793849f3e9f37ac563e48f55425c0a654e700ffadc60c83ffbc2b3f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:11:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/212174e24793849f3e9f37ac563e48f55425c0a654e700ffadc60c83ffbc2b3f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:11:00 compute-0 podman[248151]: 2025-12-03 18:11:00.256697817 +0000 UTC m=+0.241344650 container init 36892112115bc9f701157e307584016bf693ebac3531bbb7ea9f7ce10dc30381 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_shannon, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec  3 18:11:00 compute-0 podman[248151]: 2025-12-03 18:11:00.273430505 +0000 UTC m=+0.258077328 container start 36892112115bc9f701157e307584016bf693ebac3531bbb7ea9f7ce10dc30381 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_shannon, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:11:00 compute-0 podman[248151]: 2025-12-03 18:11:00.278414466 +0000 UTC m=+0.263061289 container attach 36892112115bc9f701157e307584016bf693ebac3531bbb7ea9f7ce10dc30381 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_shannon, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  3 18:11:00 compute-0 python3.9[248192]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:11:01 compute-0 python3.9[248365]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:11:01 compute-0 openstack_network_exporter[160319]: ERROR   18:11:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:11:01 compute-0 openstack_network_exporter[160319]: ERROR   18:11:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:11:01 compute-0 openstack_network_exporter[160319]: ERROR   18:11:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:11:01 compute-0 openstack_network_exporter[160319]: ERROR   18:11:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:11:01 compute-0 openstack_network_exporter[160319]: 
Dec  3 18:11:01 compute-0 openstack_network_exporter[160319]: ERROR   18:11:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:11:01 compute-0 openstack_network_exporter[160319]: 
Dec  3 18:11:01 compute-0 goofy_shannon[248195]: {
Dec  3 18:11:01 compute-0 goofy_shannon[248195]:    "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
Dec  3 18:11:01 compute-0 goofy_shannon[248195]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:11:01 compute-0 goofy_shannon[248195]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 18:11:01 compute-0 goofy_shannon[248195]:        "osd_id": 1,
Dec  3 18:11:01 compute-0 goofy_shannon[248195]:        "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:11:01 compute-0 goofy_shannon[248195]:        "type": "bluestore"
Dec  3 18:11:01 compute-0 goofy_shannon[248195]:    },
Dec  3 18:11:01 compute-0 goofy_shannon[248195]:    "2abec9de-afba-437e-9a17-384a1dd8cd50": {
Dec  3 18:11:01 compute-0 goofy_shannon[248195]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:11:01 compute-0 goofy_shannon[248195]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 18:11:01 compute-0 goofy_shannon[248195]:        "osd_id": 2,
Dec  3 18:11:01 compute-0 goofy_shannon[248195]:        "osd_uuid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:11:01 compute-0 goofy_shannon[248195]:        "type": "bluestore"
Dec  3 18:11:01 compute-0 goofy_shannon[248195]:    },
Dec  3 18:11:01 compute-0 goofy_shannon[248195]:    "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": {
Dec  3 18:11:01 compute-0 goofy_shannon[248195]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:11:01 compute-0 goofy_shannon[248195]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 18:11:01 compute-0 goofy_shannon[248195]:        "osd_id": 0,
Dec  3 18:11:01 compute-0 goofy_shannon[248195]:        "osd_uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:11:01 compute-0 goofy_shannon[248195]:        "type": "bluestore"
Dec  3 18:11:01 compute-0 goofy_shannon[248195]:    }
Dec  3 18:11:01 compute-0 goofy_shannon[248195]: }
Dec  3 18:11:01 compute-0 systemd[1]: libpod-36892112115bc9f701157e307584016bf693ebac3531bbb7ea9f7ce10dc30381.scope: Deactivated successfully.
Dec  3 18:11:01 compute-0 systemd[1]: libpod-36892112115bc9f701157e307584016bf693ebac3531bbb7ea9f7ce10dc30381.scope: Consumed 1.212s CPU time.
Dec  3 18:11:01 compute-0 podman[248151]: 2025-12-03 18:11:01.493583577 +0000 UTC m=+1.478230380 container died 36892112115bc9f701157e307584016bf693ebac3531bbb7ea9f7ce10dc30381 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_shannon, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:11:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-212174e24793849f3e9f37ac563e48f55425c0a654e700ffadc60c83ffbc2b3f-merged.mount: Deactivated successfully.
Dec  3 18:11:01 compute-0 podman[248151]: 2025-12-03 18:11:01.578121231 +0000 UTC m=+1.562768014 container remove 36892112115bc9f701157e307584016bf693ebac3531bbb7ea9f7ce10dc30381 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  3 18:11:01 compute-0 systemd[1]: libpod-conmon-36892112115bc9f701157e307584016bf693ebac3531bbb7ea9f7ce10dc30381.scope: Deactivated successfully.
Dec  3 18:11:01 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:11:01 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:11:01 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:11:01 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:11:01 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 2092e67b-064c-4477-9b77-f98fa47ae0f7 does not exist
Dec  3 18:11:01 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev f1bbb404-c7cd-494c-9843-5fa5a7277544 does not exist
Dec  3 18:11:01 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v346: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:11:02 compute-0 python3.9[248516]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.460r3yu4 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:11:02 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:11:02 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:11:02 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:11:02 compute-0 python3.9[248672]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:11:03 compute-0 python3.9[248750]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.699 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.701 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.701 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.702 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f3f52673fe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.703 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f562c3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.703 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f526739b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673a40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52671a60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.705 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673a70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.705 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.705 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.705 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f562d33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.705 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.706 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f3f5271c620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.706 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.706 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f3f5271c0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.706 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.707 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f3f5271c140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.707 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.707 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f3f52673980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.707 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.707 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f3f5271c1d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.707 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.707 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f3f52673a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.707 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.708 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f3f52672390>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.706 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f526733b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.708 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.709 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f3f526739e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.708 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'disk.device.allocation': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.709 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.710 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f3f5271c260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.710 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.710 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f3f5271c2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.710 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.709 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f526734d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'disk.device.allocation': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.710 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f3f52671ca0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.711 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.711 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f3f52673470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.711 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.711 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f3f5271c380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.711 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.711 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f3f526734a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.711 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.710 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f565c04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'disk.device.allocation': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.712 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'disk.device.allocation': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.713 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'disk.device.allocation': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.713 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'disk.device.allocation': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.713 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f526735f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'disk.device.allocation': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.713 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'disk.device.allocation': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.712 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f3f52671a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.714 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.714 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f3f52673ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.714 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.714 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f526736b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'disk.device.allocation': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'cpu': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.715 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'disk.device.allocation': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'cpu': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.715 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'disk.device.allocation': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'cpu': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.715 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'disk.device.allocation': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'cpu': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.716 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'disk.device.allocation': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'cpu': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.714 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f3f52673500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.716 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.716 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f3f52673560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.716 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.716 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f3f526735c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.716 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.717 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f3f52673620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.717 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.717 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f3f52673680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.717 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.717 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f3f526736e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.718 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.718 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f3f52673f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.718 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.718 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f3f52673740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.718 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.718 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f3f52673f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.718 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.719 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.719 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.719 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.719 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.719 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.720 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.720 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.720 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.720 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.720 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.720 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.720 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.721 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.721 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.721 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.721 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.721 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.721 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.721 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.722 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.722 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.722 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.722 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.722 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.722 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:11:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:11:03.722 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:11:03 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v347: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:11:04 compute-0 python3.9[248903]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:11:05 compute-0 python3[249056]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  3 18:11:05 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v348: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:11:06 compute-0 python3.9[249209]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:11:07 compute-0 python3.9[249288]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:11:07 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:11:07 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v349: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:11:08 compute-0 python3.9[249440]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:11:08 compute-0 python3.9[249518]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:11:09 compute-0 podman[249642]: 2025-12-03 18:11:09.708309879 +0000 UTC m=+0.098463013 container health_status 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 18:11:09 compute-0 python3.9[249693]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:11:09 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v350: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:11:10 compute-0 python3.9[249772]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:11:11 compute-0 python3.9[249924]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:11:11 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v351: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:11:12 compute-0 python3.9[250002]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:11:12 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:11:13 compute-0 python3.9[250154]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:11:13 compute-0 systemd[1]: session-25.scope: Deactivated successfully.
Dec  3 18:11:13 compute-0 systemd-logind[784]: Session 25 logged out. Waiting for processes to exit.
Dec  3 18:11:13 compute-0 systemd[1]: session-25.scope: Consumed 2min 36.232s CPU time.
Dec  3 18:11:13 compute-0 systemd-logind[784]: Removed session 25.
Dec  3 18:11:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_18:11:13
Dec  3 18:11:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 18:11:13 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 18:11:13 compute-0 ceph-mgr[193091]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.meta', 'backups', 'default.rgw.control', 'images', '.mgr', 'volumes', 'cephfs.cephfs.meta', '.rgw.root', 'vms', 'default.rgw.log']
Dec  3 18:11:13 compute-0 ceph-mgr[193091]: [balancer INFO root] prepared 0/10 changes
Dec  3 18:11:13 compute-0 rsyslogd[188590]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  3 18:11:13 compute-0 python3.9[250232]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:11:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:11:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:11:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:11:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:11:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:11:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:11:13 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v352: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:11:13 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 18:11:13 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:11:13 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 18:11:13 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:11:13 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:11:13 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:11:13 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:11:13 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:11:13 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:11:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:11:14 compute-0 python3.9[250385]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:11:15 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v353: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:11:16 compute-0 python3.9[250540]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:11:17 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:11:17 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v354: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:11:19 compute-0 python3.9[250692]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:11:19 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v355: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:11:20 compute-0 python3.9[250845]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:11:21 compute-0 python3.9[250997]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec  3 18:11:21 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v356: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:11:22 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:11:22 compute-0 python3.9[251149]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec  3 18:11:22 compute-0 systemd[1]: session-47.scope: Deactivated successfully.
Dec  3 18:11:22 compute-0 systemd[1]: session-47.scope: Consumed 44.490s CPU time.
Dec  3 18:11:22 compute-0 systemd-logind[784]: Session 47 logged out. Waiting for processes to exit.
Dec  3 18:11:22 compute-0 systemd-logind[784]: Removed session 47.
Dec  3 18:11:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 18:11:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:11:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 18:11:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:11:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:11:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:11:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:11:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:11:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:11:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:11:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:11:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:11:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 18:11:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:11:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:11:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:11:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 18:11:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:11:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 18:11:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:11:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:11:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:11:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 18:11:23 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v357: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:11:25 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v358: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:11:26 compute-0 podman[251174]: 2025-12-03 18:11:26.980348475 +0000 UTC m=+0.121704370 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec  3 18:11:26 compute-0 podman[251178]: 2025-12-03 18:11:26.98674009 +0000 UTC m=+0.129687033 container health_status f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 18:11:26 compute-0 podman[251177]: 2025-12-03 18:11:26.991583226 +0000 UTC m=+0.133022883 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Dec  3 18:11:27 compute-0 podman[251175]: 2025-12-03 18:11:26.999940788 +0000 UTC m=+0.141858976 container health_status 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, name=ubi9-minimal, release=1755695350, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git, distribution-scope=public, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  3 18:11:27 compute-0 podman[251179]: 2025-12-03 18:11:27.005470162 +0000 UTC m=+0.142225055 container health_status ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, com.redhat.component=ubi9-container, io.openshift.expose-services=, io.buildah.version=1.29.0, vendor=Red Hat, Inc., version=9.4, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, maintainer=Red Hat, Inc., release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, io.openshift.tags=base rhel9, name=ubi9)
Dec  3 18:11:27 compute-0 podman[251176]: 2025-12-03 18:11:27.011951298 +0000 UTC m=+0.146834166 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 18:11:27 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:11:27 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec  3 18:11:27 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v359: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:11:28 compute-0 systemd-logind[784]: New session 48 of user zuul.
Dec  3 18:11:28 compute-0 systemd[1]: Started Session 48 of User zuul.
Dec  3 18:11:29 compute-0 python3.9[251450]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Dec  3 18:11:29 compute-0 podman[158200]: time="2025-12-03T18:11:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:11:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:11:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32819 "" "Go-http-client/1.1"
Dec  3 18:11:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:11:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6838 "" "Go-http-client/1.1"
Dec  3 18:11:29 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v360: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:11:30 compute-0 python3.9[251602]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 18:11:31 compute-0 openstack_network_exporter[160319]: ERROR   18:11:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:11:31 compute-0 openstack_network_exporter[160319]: ERROR   18:11:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:11:31 compute-0 openstack_network_exporter[160319]: ERROR   18:11:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:11:31 compute-0 openstack_network_exporter[160319]: ERROR   18:11:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:11:31 compute-0 openstack_network_exporter[160319]: ERROR   18:11:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:11:31 compute-0 python3.9[251756]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Dec  3 18:11:31 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v361: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:11:32 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:11:32 compute-0 python3.9[251908]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.gu95f_ff follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:11:33 compute-0 python3.9[252033]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.gu95f_ff mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764785492.0494292-44-31594611072637/.source.gu95f_ff _original_basename=.v49d1j44 follow=False checksum=918f786728b2575660b7464f5e6f5af8f992f15a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:11:33 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v362: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:11:34 compute-0 python3.9[252185]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 18:11:35 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v363: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:11:36 compute-0 python3.9[252337]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDbjCJvulNQFoK9EDJkYMUpjmLdBbv8twOlkjrDwwZXYa6o4yvBrNIfIb2p0fEoMZ3HrJqI70KDITEDyHkMCLZqAu0qoue0ESJm+cxnuWsei974RYQSC2dZp3FAlkkh8Oe/2ShyNhNO4fZ436DKHDqAEgh6Bkfsk2rbZY/QAeeXXXePZzl9fpjUyRwOf5zf7+NTY1S6IQ8sPho08YY9ikbkKKxy8ioiyRSxsMIZFq7aM/jI++GFUMUVBkAWz0n9mywg2Z05glbO6YyrTLuEb9EBnFtwzYTbAIr9cZxyW7klLru4WvKK+gDPOOE/g0RW66n1JSCQ/HOG4uumVR7ivXMJu3+K/pqdXiq7MQuS8NCE/RRQagh9u793DJ12Q/tnyJENOsYmWzvEb0xUud56cPvxZG2uRremuHxuBADGFkiui9Hjb4Obw/9nMsNA/58q5wEX1YqYXilc35uhV2xfS9odHTReQGfFNOoAObFcXQAVrzbltLo7RBgnW7vwTOfXqdc=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDga0dlcKFfXn6U1UHkHIyOLqdO6IBiPXa8xAuL28XMM#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGby1UAcqXsYegna+DufYvoZLrvWEcwQSpfRsN2Eer8IseIipIrVobBbBXr8E3TSR8/RubLA6TojHG2/nfFshtg=#012 create=True mode=0644 path=/tmp/ansible.gu95f_ff state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:11:37 compute-0 python3.9[252489]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.gu95f_ff' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:11:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:11:37 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v364: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:11:38 compute-0 python3.9[252643]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.gu95f_ff state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:11:38 compute-0 systemd-logind[784]: Session 48 logged out. Waiting for processes to exit.
Dec  3 18:11:38 compute-0 systemd[1]: session-48.scope: Deactivated successfully.
Dec  3 18:11:38 compute-0 systemd[1]: session-48.scope: Consumed 7.351s CPU time.
Dec  3 18:11:38 compute-0 systemd-logind[784]: Removed session 48.
Dec  3 18:11:39 compute-0 podman[252668]: 2025-12-03 18:11:39.958503287 +0000 UTC m=+0.124894697 container health_status 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  3 18:11:39 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v365: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:11:41 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v366: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:11:42 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:11:43 compute-0 systemd-logind[784]: New session 49 of user zuul.
Dec  3 18:11:43 compute-0 systemd[1]: Started Session 49 of User zuul.
Dec  3 18:11:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:11:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:11:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:11:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:11:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:11:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:11:43 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v367: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:11:44 compute-0 python3.9[252850]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 18:11:45 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v368: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:11:46 compute-0 python3.9[253006]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec  3 18:11:47 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:11:47 compute-0 python3.9[253162]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  3 18:11:47 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v369: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:11:48 compute-0 python3.9[253315]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:11:49 compute-0 python3.9[253471]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 18:11:49 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v370: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:11:50 compute-0 python3.9[253623]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:11:51 compute-0 systemd[1]: session-49.scope: Deactivated successfully.
Dec  3 18:11:51 compute-0 systemd[1]: session-49.scope: Consumed 5.478s CPU time.
Dec  3 18:11:51 compute-0 systemd-logind[784]: Session 49 logged out. Waiting for processes to exit.
Dec  3 18:11:51 compute-0 systemd-logind[784]: Removed session 49.
Dec  3 18:11:51 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v371: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:11:52 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:11:53 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v372: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:11:55 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v373: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:11:56 compute-0 systemd-logind[784]: New session 50 of user zuul.
Dec  3 18:11:56 compute-0 systemd[1]: Started Session 50 of User zuul.
Dec  3 18:11:57 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:11:57 compute-0 podman[253778]: 2025-12-03 18:11:57.790029183 +0000 UTC m=+0.112528478 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  3 18:11:57 compute-0 podman[253776]: 2025-12-03 18:11:57.805371383 +0000 UTC m=+0.123324248 container health_status 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, release=1755695350, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, distribution-scope=public, name=ubi9-minimal, vendor=Red Hat, Inc., version=9.6, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, managed_by=edpm_ansible)
Dec  3 18:11:57 compute-0 podman[253775]: 2025-12-03 18:11:57.80690395 +0000 UTC m=+0.133101785 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 18:11:57 compute-0 podman[253779]: 2025-12-03 18:11:57.810409106 +0000 UTC m=+0.100147890 container health_status f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 18:11:57 compute-0 podman[253777]: 2025-12-03 18:11:57.829738531 +0000 UTC m=+0.143485155 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  3 18:11:57 compute-0 podman[253784]: 2025-12-03 18:11:57.834918587 +0000 UTC m=+0.134129550 container health_status ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, config_id=edpm, release=1214.1726694543, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, distribution-scope=public, maintainer=Red Hat, Inc., vcs-type=git, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, name=ubi9, version=9.4, architecture=x86_64, release-0.7.12=, build-date=2024-09-18T21:23:30)
Dec  3 18:11:57 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v374: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:11:58 compute-0 python3.9[253872]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 18:11:59 compute-0 python3.9[254071]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  3 18:11:59 compute-0 podman[158200]: time="2025-12-03T18:11:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:11:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:11:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32819 "" "Go-http-client/1.1"
Dec  3 18:11:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:11:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6828 "" "Go-http-client/1.1"
Dec  3 18:11:59 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v375: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:12:00 compute-0 python3.9[254155]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  3 18:12:01 compute-0 openstack_network_exporter[160319]: ERROR   18:12:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:12:01 compute-0 openstack_network_exporter[160319]: ERROR   18:12:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:12:01 compute-0 openstack_network_exporter[160319]: ERROR   18:12:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:12:01 compute-0 openstack_network_exporter[160319]: ERROR   18:12:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:12:01 compute-0 openstack_network_exporter[160319]: ERROR   18:12:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:12:01 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v376: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:12:02 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:12:02 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:12:02 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:12:02 compute-0 python3.9[254424]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:12:02 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 18:12:02 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:12:02 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 18:12:02 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:12:02 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 473a539e-9700-42be-9f91-22060de17927 does not exist
Dec  3 18:12:02 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev ee81a29b-3e51-4ab5-a103-17ee819795db does not exist
Dec  3 18:12:02 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 0648286b-c2ae-4ee8-81f8-de1b38cc716c does not exist
Dec  3 18:12:02 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 18:12:02 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 18:12:02 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 18:12:02 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:12:02 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:12:02 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:12:03 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:12:03 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:12:03 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:12:03 compute-0 podman[254639]: 2025-12-03 18:12:03.706142568 +0000 UTC m=+0.027964496 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:12:03 compute-0 podman[254639]: 2025-12-03 18:12:03.885866759 +0000 UTC m=+0.207688677 container create 9bb6dae9a8aaf9326171883a58dda2e32cd8535411fdf5dcef29435043200d57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:12:04 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v377: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:12:04 compute-0 systemd[1]: Started libpod-conmon-9bb6dae9a8aaf9326171883a58dda2e32cd8535411fdf5dcef29435043200d57.scope.
Dec  3 18:12:04 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:12:04 compute-0 podman[254639]: 2025-12-03 18:12:04.14932174 +0000 UTC m=+0.471143648 container init 9bb6dae9a8aaf9326171883a58dda2e32cd8535411fdf5dcef29435043200d57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_edison, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  3 18:12:04 compute-0 podman[254639]: 2025-12-03 18:12:04.159921576 +0000 UTC m=+0.481743484 container start 9bb6dae9a8aaf9326171883a58dda2e32cd8535411fdf5dcef29435043200d57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_edison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec  3 18:12:04 compute-0 ecstatic_edison[254717]: 167 167
Dec  3 18:12:04 compute-0 systemd[1]: libpod-9bb6dae9a8aaf9326171883a58dda2e32cd8535411fdf5dcef29435043200d57.scope: Deactivated successfully.
Dec  3 18:12:04 compute-0 python3.9[254749]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  3 18:12:04 compute-0 podman[254639]: 2025-12-03 18:12:04.690061597 +0000 UTC m=+1.011883535 container attach 9bb6dae9a8aaf9326171883a58dda2e32cd8535411fdf5dcef29435043200d57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_edison, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec  3 18:12:04 compute-0 podman[254639]: 2025-12-03 18:12:04.690485227 +0000 UTC m=+1.012307135 container died 9bb6dae9a8aaf9326171883a58dda2e32cd8535411fdf5dcef29435043200d57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_edison, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec  3 18:12:05 compute-0 python3.9[254909]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 18:12:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c27fa5a56bdd683a9fbfccb55880ec5ce641bb95a8e8641b6e28d418b723463-merged.mount: Deactivated successfully.
Dec  3 18:12:06 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v378: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:12:06 compute-0 podman[254639]: 2025-12-03 18:12:06.396852841 +0000 UTC m=+2.718674749 container remove 9bb6dae9a8aaf9326171883a58dda2e32cd8535411fdf5dcef29435043200d57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_edison, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:12:06 compute-0 systemd[1]: libpod-conmon-9bb6dae9a8aaf9326171883a58dda2e32cd8535411fdf5dcef29435043200d57.scope: Deactivated successfully.
Dec  3 18:12:06 compute-0 python3.9[255059]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 18:12:06 compute-0 podman[255067]: 2025-12-03 18:12:06.611934875 +0000 UTC m=+0.041430972 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:12:06 compute-0 podman[255067]: 2025-12-03 18:12:06.981116689 +0000 UTC m=+0.410612786 container create 46e9a94f5e3c037acca5800341b84d9e7acac8243a24ee3cf887741da98e7d14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mcclintock, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:12:07 compute-0 systemd[1]: session-50.scope: Deactivated successfully.
Dec  3 18:12:07 compute-0 systemd[1]: session-50.scope: Consumed 8.117s CPU time.
Dec  3 18:12:07 compute-0 systemd-logind[784]: Session 50 logged out. Waiting for processes to exit.
Dec  3 18:12:07 compute-0 systemd-logind[784]: Removed session 50.
Dec  3 18:12:07 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:12:07 compute-0 systemd[1]: Started libpod-conmon-46e9a94f5e3c037acca5800341b84d9e7acac8243a24ee3cf887741da98e7d14.scope.
Dec  3 18:12:07 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:12:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95f79861a6baef1dafd54d8f1275627c521fb897a5ae432f92d53bdd3b1343a5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:12:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95f79861a6baef1dafd54d8f1275627c521fb897a5ae432f92d53bdd3b1343a5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:12:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95f79861a6baef1dafd54d8f1275627c521fb897a5ae432f92d53bdd3b1343a5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:12:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95f79861a6baef1dafd54d8f1275627c521fb897a5ae432f92d53bdd3b1343a5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:12:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95f79861a6baef1dafd54d8f1275627c521fb897a5ae432f92d53bdd3b1343a5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 18:12:08 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v379: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:12:08 compute-0 podman[255067]: 2025-12-03 18:12:08.44888799 +0000 UTC m=+1.878384097 container init 46e9a94f5e3c037acca5800341b84d9e7acac8243a24ee3cf887741da98e7d14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mcclintock, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec  3 18:12:08 compute-0 podman[255067]: 2025-12-03 18:12:08.460285126 +0000 UTC m=+1.889781223 container start 46e9a94f5e3c037acca5800341b84d9e7acac8243a24ee3cf887741da98e7d14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mcclintock, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:12:08 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Dec  3 18:12:08 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:12:08.464736) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 18:12:08 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Dec  3 18:12:08 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764785528464837, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1702, "num_deletes": 251, "total_data_size": 2363426, "memory_usage": 2403864, "flush_reason": "Manual Compaction"}
Dec  3 18:12:08 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Dec  3 18:12:08 compute-0 podman[255067]: 2025-12-03 18:12:08.467867659 +0000 UTC m=+1.897363756 container attach 46e9a94f5e3c037acca5800341b84d9e7acac8243a24ee3cf887741da98e7d14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  3 18:12:08 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764785528487383, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 1394175, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7307, "largest_seqno": 9008, "table_properties": {"data_size": 1388515, "index_size": 2543, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16952, "raw_average_key_size": 21, "raw_value_size": 1374952, "raw_average_value_size": 1708, "num_data_blocks": 119, "num_entries": 805, "num_filter_entries": 805, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764785373, "oldest_key_time": 1764785373, "file_creation_time": 1764785528, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a1ac3b74-8599-4a51-8b4c-6fd35a134427", "db_session_id": "TYOLZSJOOVNJYKF8Y1CE", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Dec  3 18:12:08 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 22770 microseconds, and 8124 cpu microseconds.
Dec  3 18:12:08 compute-0 ceph-mon[192802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 18:12:08 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:12:08.487509) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 1394175 bytes OK
Dec  3 18:12:08 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:12:08.487530) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Dec  3 18:12:08 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:12:08.490122) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Dec  3 18:12:08 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:12:08.490138) EVENT_LOG_v1 {"time_micros": 1764785528490133, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 18:12:08 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:12:08.490157) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 18:12:08 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 2355773, prev total WAL file size 2355773, number of live WAL files 2.
Dec  3 18:12:08 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 18:12:08 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:12:08.491286) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323532' seq:0, type:0; will stop at (end)
Dec  3 18:12:08 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 18:12:08 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(1361KB)], [20(6925KB)]
Dec  3 18:12:08 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764785528491320, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 8486053, "oldest_snapshot_seqno": -1}
Dec  3 18:12:08 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3385 keys, 6773579 bytes, temperature: kUnknown
Dec  3 18:12:08 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764785528746791, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 6773579, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6747667, "index_size": 16331, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8517, "raw_key_size": 81017, "raw_average_key_size": 23, "raw_value_size": 6683223, "raw_average_value_size": 1974, "num_data_blocks": 725, "num_entries": 3385, "num_filter_entries": 3385, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764784942, "oldest_key_time": 0, "file_creation_time": 1764785528, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a1ac3b74-8599-4a51-8b4c-6fd35a134427", "db_session_id": "TYOLZSJOOVNJYKF8Y1CE", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Dec  3 18:12:08 compute-0 ceph-mon[192802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 18:12:08 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:12:08.747794) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 6773579 bytes
Dec  3 18:12:08 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:12:08.767759) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 33.2 rd, 26.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 6.8 +0.0 blob) out(6.5 +0.0 blob), read-write-amplify(10.9) write-amplify(4.9) OK, records in: 3828, records dropped: 443 output_compression: NoCompression
Dec  3 18:12:08 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:12:08.767796) EVENT_LOG_v1 {"time_micros": 1764785528767780, "job": 6, "event": "compaction_finished", "compaction_time_micros": 255569, "compaction_time_cpu_micros": 22390, "output_level": 6, "num_output_files": 1, "total_output_size": 6773579, "num_input_records": 3828, "num_output_records": 3385, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 18:12:08 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 18:12:08 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764785528768291, "job": 6, "event": "table_file_deletion", "file_number": 22}
Dec  3 18:12:08 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 18:12:08 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764785528770060, "job": 6, "event": "table_file_deletion", "file_number": 20}
Dec  3 18:12:08 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:12:08.491163) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:12:08 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:12:08.770374) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:12:08 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:12:08.770382) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:12:08 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:12:08.770385) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:12:08 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:12:08.770387) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:12:08 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:12:08.770389) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
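The [JOB 6] summary above carries its own arithmetic: the amplification factors and throughput figures are derived from the byte counts in the flush and compaction events. A minimal check in Python, using only numbers copied verbatim from the EVENT_LOG_v1 entries above (this recomputes the logged figures; it does not talk to RocksDB):

```python
# Recompute the [JOB 6] compaction summary from the logged event data.
flush_output      = 1_394_175   # table #22 file_size (the L0 input)
input_data_size   = 8_486_053   # compaction_started: L0 file #22 + L6 file #20
compaction_output = 6_773_579   # table #23 file_size
wall_micros       = 255_569     # compaction_time_micros

# write-amplify(4.9): bytes written to L6 per byte ingested at L0
write_amp = compaction_output / flush_output
# read-write-amplify(10.9): (bytes read + bytes written) per L0 byte
rw_amp = (input_data_size + compaction_output) / flush_output
# "MB/sec: 33.2 rd, 26.5 wr" (the logged figures match decimal megabytes)
rd_mb_s = input_data_size / wall_micros
wr_mb_s = compaction_output / wall_micros

print(f"write-amplify {write_amp:.1f}, read-write-amplify {rw_amp:.1f}")
print(f"{rd_mb_s:.1f} MB/s rd, {wr_mb_s:.1f} MB/s wr")
# -> write-amplify 4.9, read-write-amplify 10.9
# -> 33.2 MB/s rd, 26.5 MB/s wr
```

The "records dropped: 443" is likewise just num_input_records 3828 minus num_output_records 3385, and the final lsm_state [0, 0, 0, 0, 0, 0, 1] is the file count per level L0..L6 after the manual compaction.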
Dec  3 18:12:09 compute-0 dreamy_mcclintock[255110]: --> passed data devices: 0 physical, 3 LVM
Dec  3 18:12:09 compute-0 dreamy_mcclintock[255110]: --> relative data size: 1.0
Dec  3 18:12:09 compute-0 dreamy_mcclintock[255110]: --> All data devices are unavailable
Dec  3 18:12:09 compute-0 systemd[1]: libpod-46e9a94f5e3c037acca5800341b84d9e7acac8243a24ee3cf887741da98e7d14.scope: Deactivated successfully.
Dec  3 18:12:09 compute-0 systemd[1]: libpod-46e9a94f5e3c037acca5800341b84d9e7acac8243a24ee3cf887741da98e7d14.scope: Consumed 1.121s CPU time.
Dec  3 18:12:09 compute-0 podman[255067]: 2025-12-03 18:12:09.648613 +0000 UTC m=+3.078109127 container died 46e9a94f5e3c037acca5800341b84d9e7acac8243a24ee3cf887741da98e7d14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mcclintock, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 18:12:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-95f79861a6baef1dafd54d8f1275627c521fb897a5ae432f92d53bdd3b1343a5-merged.mount: Deactivated successfully.
Dec  3 18:12:09 compute-0 podman[255067]: 2025-12-03 18:12:09.739141336 +0000 UTC m=+3.168637403 container remove 46e9a94f5e3c037acca5800341b84d9e7acac8243a24ee3cf887741da98e7d14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mcclintock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 18:12:09 compute-0 systemd[1]: libpod-conmon-46e9a94f5e3c037acca5800341b84d9e7acac8243a24ee3cf887741da98e7d14.scope: Deactivated successfully.
Dec  3 18:12:10 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v380: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:12:10 compute-0 podman[255201]: 2025-12-03 18:12:10.130594109 +0000 UTC m=+0.082198867 container health_status 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 18:12:10 compute-0 podman[255311]: 2025-12-03 18:12:10.637171571 +0000 UTC m=+0.055639875 container create 1bf7b61c60d4655851560a556f0e2e5bd0088d890d1fc70a3f2b0d6dacafea6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_volhard, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:12:10 compute-0 systemd[1]: Started libpod-conmon-1bf7b61c60d4655851560a556f0e2e5bd0088d890d1fc70a3f2b0d6dacafea6e.scope.
Dec  3 18:12:10 compute-0 podman[255311]: 2025-12-03 18:12:10.616307157 +0000 UTC m=+0.034775491 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:12:10 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:12:10 compute-0 podman[255311]: 2025-12-03 18:12:10.735954706 +0000 UTC m=+0.154423070 container init 1bf7b61c60d4655851560a556f0e2e5bd0088d890d1fc70a3f2b0d6dacafea6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_volhard, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  3 18:12:10 compute-0 podman[255311]: 2025-12-03 18:12:10.74688952 +0000 UTC m=+0.165357824 container start 1bf7b61c60d4655851560a556f0e2e5bd0088d890d1fc70a3f2b0d6dacafea6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_volhard, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:12:10 compute-0 podman[255311]: 2025-12-03 18:12:10.751290406 +0000 UTC m=+0.169758720 container attach 1bf7b61c60d4655851560a556f0e2e5bd0088d890d1fc70a3f2b0d6dacafea6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_volhard, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 18:12:10 compute-0 boring_volhard[255327]: 167 167
Dec  3 18:12:10 compute-0 systemd[1]: libpod-1bf7b61c60d4655851560a556f0e2e5bd0088d890d1fc70a3f2b0d6dacafea6e.scope: Deactivated successfully.
Dec  3 18:12:10 compute-0 podman[255311]: 2025-12-03 18:12:10.758203873 +0000 UTC m=+0.176672217 container died 1bf7b61c60d4655851560a556f0e2e5bd0088d890d1fc70a3f2b0d6dacafea6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec  3 18:12:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-f1b1956174f0b80b0c96fcc121ead59ec110a1258fea32d3e8b06f1d3389ff24-merged.mount: Deactivated successfully.
Dec  3 18:12:10 compute-0 podman[255311]: 2025-12-03 18:12:10.814667726 +0000 UTC m=+0.233136030 container remove 1bf7b61c60d4655851560a556f0e2e5bd0088d890d1fc70a3f2b0d6dacafea6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_volhard, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:12:10 compute-0 systemd[1]: libpod-conmon-1bf7b61c60d4655851560a556f0e2e5bd0088d890d1fc70a3f2b0d6dacafea6e.scope: Deactivated successfully.
Dec  3 18:12:11 compute-0 podman[255352]: 2025-12-03 18:12:11.011320255 +0000 UTC m=+0.056929076 container create b01e4ae8d28917811cb6da561cd32e5c19263a55f1882ceec160cdc6ed8e7820 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_black, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 18:12:11 compute-0 systemd[1]: Started libpod-conmon-b01e4ae8d28917811cb6da561cd32e5c19263a55f1882ceec160cdc6ed8e7820.scope.
Dec  3 18:12:11 compute-0 podman[255352]: 2025-12-03 18:12:10.995411311 +0000 UTC m=+0.041020152 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:12:11 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:12:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7184539e6f0f5d9c71c5e06b98fe5904068c26288429aef4221d88eed4d5f900/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:12:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7184539e6f0f5d9c71c5e06b98fe5904068c26288429aef4221d88eed4d5f900/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:12:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7184539e6f0f5d9c71c5e06b98fe5904068c26288429aef4221d88eed4d5f900/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:12:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7184539e6f0f5d9c71c5e06b98fe5904068c26288429aef4221d88eed4d5f900/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
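The four xfs warnings above are informational: on a filesystem without the xfs bigtime feature, inode timestamps are 32-bit signed seconds, so they run out at 0x7fffffff. What that limit means in calendar terms:

```python
# 0x7fffffff is the largest 32-bit signed epoch timestamp, i.e. the
# classic year-2038 limit the kernel is warning about for these mounts.
from datetime import datetime, timezone
print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
# -> 2038-01-19 03:14:07+00:00
```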
Dec  3 18:12:11 compute-0 podman[255352]: 2025-12-03 18:12:11.129103549 +0000 UTC m=+0.174712390 container init b01e4ae8d28917811cb6da561cd32e5c19263a55f1882ceec160cdc6ed8e7820 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec  3 18:12:11 compute-0 podman[255352]: 2025-12-03 18:12:11.137249586 +0000 UTC m=+0.182858407 container start b01e4ae8d28917811cb6da561cd32e5c19263a55f1882ceec160cdc6ed8e7820 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec  3 18:12:11 compute-0 podman[255352]: 2025-12-03 18:12:11.141373726 +0000 UTC m=+0.186982567 container attach b01e4ae8d28917811cb6da561cd32e5c19263a55f1882ceec160cdc6ed8e7820 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_black, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec  3 18:12:11 compute-0 systemd-logind[784]: New session 51 of user zuul.
Dec  3 18:12:11 compute-0 systemd[1]: Started Session 51 of User zuul.
Dec  3 18:12:11 compute-0 festive_black[255368]: {
Dec  3 18:12:11 compute-0 festive_black[255368]:    "0": [
Dec  3 18:12:11 compute-0 festive_black[255368]:        {
Dec  3 18:12:11 compute-0 festive_black[255368]:            "devices": [
Dec  3 18:12:11 compute-0 festive_black[255368]:                "/dev/loop3"
Dec  3 18:12:11 compute-0 festive_black[255368]:            ],
Dec  3 18:12:11 compute-0 festive_black[255368]:            "lv_name": "ceph_lv0",
Dec  3 18:12:11 compute-0 festive_black[255368]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:12:11 compute-0 festive_black[255368]:            "lv_size": "21470642176",
Dec  3 18:12:11 compute-0 festive_black[255368]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=973fbbc8-5aff-4a53-bee8-42e5a6788dd6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:12:11 compute-0 festive_black[255368]:            "lv_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:12:11 compute-0 festive_black[255368]:            "name": "ceph_lv0",
Dec  3 18:12:11 compute-0 festive_black[255368]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:12:11 compute-0 festive_black[255368]:            "tags": {
Dec  3 18:12:11 compute-0 festive_black[255368]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:12:11 compute-0 festive_black[255368]:                "ceph.block_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:12:11 compute-0 festive_black[255368]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:12:11 compute-0 festive_black[255368]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:12:11 compute-0 festive_black[255368]:                "ceph.cluster_name": "ceph",
Dec  3 18:12:11 compute-0 festive_black[255368]:                "ceph.crush_device_class": "",
Dec  3 18:12:11 compute-0 festive_black[255368]:                "ceph.encrypted": "0",
Dec  3 18:12:11 compute-0 festive_black[255368]:                "ceph.osd_fsid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:12:11 compute-0 festive_black[255368]:                "ceph.osd_id": "0",
Dec  3 18:12:11 compute-0 festive_black[255368]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:12:11 compute-0 festive_black[255368]:                "ceph.type": "block",
Dec  3 18:12:11 compute-0 festive_black[255368]:                "ceph.vdo": "0"
Dec  3 18:12:11 compute-0 festive_black[255368]:            },
Dec  3 18:12:11 compute-0 festive_black[255368]:            "type": "block",
Dec  3 18:12:11 compute-0 festive_black[255368]:            "vg_name": "ceph_vg0"
Dec  3 18:12:11 compute-0 festive_black[255368]:        }
Dec  3 18:12:11 compute-0 festive_black[255368]:    ],
Dec  3 18:12:11 compute-0 festive_black[255368]:    "1": [
Dec  3 18:12:11 compute-0 festive_black[255368]:        {
Dec  3 18:12:11 compute-0 festive_black[255368]:            "devices": [
Dec  3 18:12:11 compute-0 festive_black[255368]:                "/dev/loop4"
Dec  3 18:12:11 compute-0 festive_black[255368]:            ],
Dec  3 18:12:11 compute-0 festive_black[255368]:            "lv_name": "ceph_lv1",
Dec  3 18:12:11 compute-0 festive_black[255368]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:12:11 compute-0 festive_black[255368]:            "lv_size": "21470642176",
Dec  3 18:12:11 compute-0 festive_black[255368]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1e2b0083-5293-47cb-a3d1-bc27cedc4ede,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:12:11 compute-0 festive_black[255368]:            "lv_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:12:11 compute-0 festive_black[255368]:            "name": "ceph_lv1",
Dec  3 18:12:11 compute-0 festive_black[255368]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:12:11 compute-0 festive_black[255368]:            "tags": {
Dec  3 18:12:11 compute-0 festive_black[255368]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:12:11 compute-0 festive_black[255368]:                "ceph.block_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:12:11 compute-0 festive_black[255368]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:12:11 compute-0 festive_black[255368]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:12:11 compute-0 festive_black[255368]:                "ceph.cluster_name": "ceph",
Dec  3 18:12:11 compute-0 festive_black[255368]:                "ceph.crush_device_class": "",
Dec  3 18:12:11 compute-0 festive_black[255368]:                "ceph.encrypted": "0",
Dec  3 18:12:11 compute-0 festive_black[255368]:                "ceph.osd_fsid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:12:11 compute-0 festive_black[255368]:                "ceph.osd_id": "1",
Dec  3 18:12:11 compute-0 festive_black[255368]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:12:11 compute-0 festive_black[255368]:                "ceph.type": "block",
Dec  3 18:12:11 compute-0 festive_black[255368]:                "ceph.vdo": "0"
Dec  3 18:12:11 compute-0 festive_black[255368]:            },
Dec  3 18:12:11 compute-0 festive_black[255368]:            "type": "block",
Dec  3 18:12:11 compute-0 festive_black[255368]:            "vg_name": "ceph_vg1"
Dec  3 18:12:11 compute-0 festive_black[255368]:        }
Dec  3 18:12:11 compute-0 festive_black[255368]:    ],
Dec  3 18:12:11 compute-0 festive_black[255368]:    "2": [
Dec  3 18:12:11 compute-0 festive_black[255368]:        {
Dec  3 18:12:11 compute-0 festive_black[255368]:            "devices": [
Dec  3 18:12:11 compute-0 festive_black[255368]:                "/dev/loop5"
Dec  3 18:12:11 compute-0 festive_black[255368]:            ],
Dec  3 18:12:11 compute-0 festive_black[255368]:            "lv_name": "ceph_lv2",
Dec  3 18:12:11 compute-0 festive_black[255368]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:12:11 compute-0 festive_black[255368]:            "lv_size": "21470642176",
Dec  3 18:12:11 compute-0 festive_black[255368]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2abec9de-afba-437e-9a17-384a1dd8cd50,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:12:11 compute-0 festive_black[255368]:            "lv_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:12:11 compute-0 festive_black[255368]:            "name": "ceph_lv2",
Dec  3 18:12:11 compute-0 festive_black[255368]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:12:11 compute-0 festive_black[255368]:            "tags": {
Dec  3 18:12:11 compute-0 festive_black[255368]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:12:11 compute-0 festive_black[255368]:                "ceph.block_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:12:11 compute-0 festive_black[255368]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:12:11 compute-0 festive_black[255368]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:12:11 compute-0 festive_black[255368]:                "ceph.cluster_name": "ceph",
Dec  3 18:12:11 compute-0 festive_black[255368]:                "ceph.crush_device_class": "",
Dec  3 18:12:11 compute-0 festive_black[255368]:                "ceph.encrypted": "0",
Dec  3 18:12:11 compute-0 festive_black[255368]:                "ceph.osd_fsid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:12:11 compute-0 festive_black[255368]:                "ceph.osd_id": "2",
Dec  3 18:12:11 compute-0 festive_black[255368]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:12:11 compute-0 festive_black[255368]:                "ceph.type": "block",
Dec  3 18:12:11 compute-0 festive_black[255368]:                "ceph.vdo": "0"
Dec  3 18:12:11 compute-0 festive_black[255368]:            },
Dec  3 18:12:11 compute-0 festive_black[255368]:            "type": "block",
Dec  3 18:12:11 compute-0 festive_black[255368]:            "vg_name": "ceph_vg2"
Dec  3 18:12:11 compute-0 festive_black[255368]:        }
Dec  3 18:12:11 compute-0 festive_black[255368]:    ]
Dec  3 18:12:11 compute-0 festive_black[255368]: }
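festive_black's JSON is the LVM-backed OSD inventory for this host; it has the shape of `ceph-volume lvm list --format json` output: a map from OSD id to its logical volumes, with the ceph.* LV tags expanded under "tags". A sketch of consuming such a report, assuming the JSON above has been captured to lvm_list.json (a placeholder filename):

```python
# Map each OSD id from the listing above to its LV path, backing device
# and fsid. lvm_list.json is assumed to hold the JSON printed above.
import json

with open("lvm_list.json") as f:
    report = json.load(f)

for osd_id, lvs in sorted(report.items(), key=lambda kv: int(kv[0])):
    for lv in lvs:
        tags = lv["tags"]
        print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
              f"(osd_fsid={tags['ceph.osd_fsid']})")
# -> osd.0: /dev/ceph_vg0/ceph_lv0 on /dev/loop3 (osd_fsid=973fbbc8-...)
```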
Dec  3 18:12:12 compute-0 systemd[1]: libpod-b01e4ae8d28917811cb6da561cd32e5c19263a55f1882ceec160cdc6ed8e7820.scope: Deactivated successfully.
Dec  3 18:12:12 compute-0 podman[255352]: 2025-12-03 18:12:12.010780769 +0000 UTC m=+1.056389590 container died b01e4ae8d28917811cb6da561cd32e5c19263a55f1882ceec160cdc6ed8e7820 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_black, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:12:12 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v381: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:12:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-7184539e6f0f5d9c71c5e06b98fe5904068c26288429aef4221d88eed4d5f900-merged.mount: Deactivated successfully.
Dec  3 18:12:12 compute-0 podman[255352]: 2025-12-03 18:12:12.090982176 +0000 UTC m=+1.136591007 container remove b01e4ae8d28917811cb6da561cd32e5c19263a55f1882ceec160cdc6ed8e7820 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_black, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec  3 18:12:12 compute-0 systemd[1]: libpod-conmon-b01e4ae8d28917811cb6da561cd32e5c19263a55f1882ceec160cdc6ed8e7820.scope: Deactivated successfully.
Dec  3 18:12:12 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:12:12 compute-0 podman[255655]: 2025-12-03 18:12:12.9496591 +0000 UTC m=+0.072441150 container create 6fa8fe097a6dab0c705a9d988f26e282603cc636d7eff930927d2296a44c7ba0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_napier, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec  3 18:12:13 compute-0 podman[255655]: 2025-12-03 18:12:12.913668401 +0000 UTC m=+0.036450491 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:12:13 compute-0 systemd[1]: Started libpod-conmon-6fa8fe097a6dab0c705a9d988f26e282603cc636d7eff930927d2296a44c7ba0.scope.
Dec  3 18:12:13 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:12:13 compute-0 podman[255655]: 2025-12-03 18:12:13.076344969 +0000 UTC m=+0.199127049 container init 6fa8fe097a6dab0c705a9d988f26e282603cc636d7eff930927d2296a44c7ba0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_napier, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  3 18:12:13 compute-0 podman[255655]: 2025-12-03 18:12:13.093656737 +0000 UTC m=+0.216438767 container start 6fa8fe097a6dab0c705a9d988f26e282603cc636d7eff930927d2296a44c7ba0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_napier, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  3 18:12:13 compute-0 podman[255655]: 2025-12-03 18:12:13.099273323 +0000 UTC m=+0.222055353 container attach 6fa8fe097a6dab0c705a9d988f26e282603cc636d7eff930927d2296a44c7ba0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_napier, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  3 18:12:13 compute-0 serene_napier[255698]: 167 167
Dec  3 18:12:13 compute-0 systemd[1]: libpod-6fa8fe097a6dab0c705a9d988f26e282603cc636d7eff930927d2296a44c7ba0.scope: Deactivated successfully.
Dec  3 18:12:13 compute-0 podman[255655]: 2025-12-03 18:12:13.102665565 +0000 UTC m=+0.225447585 container died 6fa8fe097a6dab0c705a9d988f26e282603cc636d7eff930927d2296a44c7ba0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_napier, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:12:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-54e031aee70d3c7ce6abc2dd2cff3b944a9a8e8af33adbc57c22f32ef36ecfb5-merged.mount: Deactivated successfully.
Dec  3 18:12:13 compute-0 podman[255655]: 2025-12-03 18:12:13.1567417 +0000 UTC m=+0.279523730 container remove 6fa8fe097a6dab0c705a9d988f26e282603cc636d7eff930927d2296a44c7ba0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  3 18:12:13 compute-0 systemd[1]: libpod-conmon-6fa8fe097a6dab0c705a9d988f26e282603cc636d7eff930927d2296a44c7ba0.scope: Deactivated successfully.
Dec  3 18:12:13 compute-0 python3.9[255695]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 18:12:13 compute-0 podman[255725]: 2025-12-03 18:12:13.375711668 +0000 UTC m=+0.064277753 container create 6bc06b50228811f061acd47d561dec076afdf69958d67bcf7cb33db04a76b0d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_dirac, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  3 18:12:13 compute-0 systemd[1]: Started libpod-conmon-6bc06b50228811f061acd47d561dec076afdf69958d67bcf7cb33db04a76b0d4.scope.
Dec  3 18:12:13 compute-0 podman[255725]: 2025-12-03 18:12:13.347053016 +0000 UTC m=+0.035619101 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:12:13 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:12:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2b152d71c0fbd204f21aa9cc4dab636622eeba8f130f986f082e806dc44a51e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:12:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2b152d71c0fbd204f21aa9cc4dab636622eeba8f130f986f082e806dc44a51e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:12:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2b152d71c0fbd204f21aa9cc4dab636622eeba8f130f986f082e806dc44a51e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:12:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2b152d71c0fbd204f21aa9cc4dab636622eeba8f130f986f082e806dc44a51e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:12:13 compute-0 podman[255725]: 2025-12-03 18:12:13.51162036 +0000 UTC m=+0.200186495 container init 6bc06b50228811f061acd47d561dec076afdf69958d67bcf7cb33db04a76b0d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_dirac, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  3 18:12:13 compute-0 podman[255725]: 2025-12-03 18:12:13.529111532 +0000 UTC m=+0.217677627 container start 6bc06b50228811f061acd47d561dec076afdf69958d67bcf7cb33db04a76b0d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_dirac, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef)
Dec  3 18:12:13 compute-0 podman[255725]: 2025-12-03 18:12:13.539729669 +0000 UTC m=+0.228295814 container attach 6bc06b50228811f061acd47d561dec076afdf69958d67bcf7cb33db04a76b0d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_dirac, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:12:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_18:12:13
Dec  3 18:12:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 18:12:13 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 18:12:13 compute-0 ceph-mgr[193091]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.meta', 'backups', 'cephfs.cephfs.data', '.mgr', 'images', '.rgw.root', 'default.rgw.control', 'volumes', 'vms', 'default.rgw.meta']
Dec  3 18:12:13 compute-0 ceph-mgr[193091]: [balancer INFO root] prepared 0/10 changes
Dec  3 18:12:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:12:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:12:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:12:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:12:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:12:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:12:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 18:12:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:12:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 18:12:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:12:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:12:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:12:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:12:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:12:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:12:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:12:14 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v382: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:12:14 compute-0 goofy_dirac[255742]: {
Dec  3 18:12:14 compute-0 goofy_dirac[255742]:    "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
Dec  3 18:12:14 compute-0 goofy_dirac[255742]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:12:14 compute-0 goofy_dirac[255742]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 18:12:14 compute-0 goofy_dirac[255742]:        "osd_id": 1,
Dec  3 18:12:14 compute-0 goofy_dirac[255742]:        "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:12:14 compute-0 goofy_dirac[255742]:        "type": "bluestore"
Dec  3 18:12:14 compute-0 goofy_dirac[255742]:    },
Dec  3 18:12:14 compute-0 goofy_dirac[255742]:    "2abec9de-afba-437e-9a17-384a1dd8cd50": {
Dec  3 18:12:14 compute-0 goofy_dirac[255742]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:12:14 compute-0 goofy_dirac[255742]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 18:12:14 compute-0 goofy_dirac[255742]:        "osd_id": 2,
Dec  3 18:12:14 compute-0 goofy_dirac[255742]:        "osd_uuid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:12:14 compute-0 goofy_dirac[255742]:        "type": "bluestore"
Dec  3 18:12:14 compute-0 goofy_dirac[255742]:    },
Dec  3 18:12:14 compute-0 goofy_dirac[255742]:    "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": {
Dec  3 18:12:14 compute-0 goofy_dirac[255742]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:12:14 compute-0 goofy_dirac[255742]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 18:12:14 compute-0 goofy_dirac[255742]:        "osd_id": 0,
Dec  3 18:12:14 compute-0 goofy_dirac[255742]:        "osd_uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:12:14 compute-0 goofy_dirac[255742]:        "type": "bluestore"
Dec  3 18:12:14 compute-0 goofy_dirac[255742]:    }
Dec  3 18:12:14 compute-0 goofy_dirac[255742]: }
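goofy_dirac reports the same three OSDs keyed by osd_uuid, with the activated device-mapper paths and store type (the shape of `ceph-volume raw list` output). The two listings join on the OSD fsid; a sketch, reusing the placeholder lvm_list.json from above plus raw_list.json for this output:

```python
# Join the two OSD listings: lvm_list.json keyed by osd id (festive_black)
# and raw_list.json keyed by osd_uuid (goofy_dirac).
import json

lvm = json.load(open("lvm_list.json"))
raw = json.load(open("raw_list.json"))

by_fsid = {lv["tags"]["ceph.osd_fsid"]: (osd_id, lv)
           for osd_id, lvs in lvm.items() for lv in lvs}

for fsid, entry in raw.items():
    osd_id, lv = by_fsid[fsid]
    assert int(osd_id) == entry["osd_id"]   # the two reports agree
    print(f"osd.{osd_id} ({entry['type']}): {entry['device']} "
          f"<- {lv['lv_path']} <- {','.join(lv['devices'])}")
# -> osd.1 (bluestore): /dev/mapper/ceph_vg1-ceph_lv1 <- /dev/ceph_vg1/ceph_lv1 <- /dev/loop4
```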
Dec  3 18:12:14 compute-0 systemd[1]: libpod-6bc06b50228811f061acd47d561dec076afdf69958d67bcf7cb33db04a76b0d4.scope: Deactivated successfully.
Dec  3 18:12:14 compute-0 conmon[255742]: conmon 6bc06b50228811f061ac <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6bc06b50228811f061acd47d561dec076afdf69958d67bcf7cb33db04a76b0d4.scope/container/memory.events
Dec  3 18:12:14 compute-0 systemd[1]: libpod-6bc06b50228811f061acd47d561dec076afdf69958d67bcf7cb33db04a76b0d4.scope: Consumed 1.145s CPU time.
Dec  3 18:12:14 compute-0 podman[255725]: 2025-12-03 18:12:14.689094522 +0000 UTC m=+1.377660607 container died 6bc06b50228811f061acd47d561dec076afdf69958d67bcf7cb33db04a76b0d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_dirac, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec  3 18:12:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-c2b152d71c0fbd204f21aa9cc4dab636622eeba8f130f986f082e806dc44a51e-merged.mount: Deactivated successfully.
Dec  3 18:12:14 compute-0 podman[255725]: 2025-12-03 18:12:14.765480486 +0000 UTC m=+1.454046571 container remove 6bc06b50228811f061acd47d561dec076afdf69958d67bcf7cb33db04a76b0d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_dirac, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  3 18:12:14 compute-0 systemd[1]: libpod-conmon-6bc06b50228811f061acd47d561dec076afdf69958d67bcf7cb33db04a76b0d4.scope: Deactivated successfully.
Dec  3 18:12:14 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:12:14 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:12:14 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:12:14 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:12:14 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 2be3bc55-2f6d-4c95-bf3c-5392c81e9547 does not exist
Dec  3 18:12:14 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 31f58740-0818-41bc-a43b-3fe836f586fa does not exist
Dec  3 18:12:14 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:12:14 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:12:15 compute-0 python3.9[255986]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:12:16 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v383: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:12:16 compute-0 python3.9[256138]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:12:17 compute-0 python3.9[256290]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:12:17 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:12:18 compute-0 python3.9[256368]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/ovn/default/tls.crt _original_basename=compute-0.ctlplane.example.com-tls.crt recurse=False state=file path=/var/lib/openstack/certs/ovn/default/tls.crt force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:12:18 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v384: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:12:18 compute-0 python3.9[256520]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:12:19 compute-0 python3.9[256598]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/ovn/default/ca.crt _original_basename=compute-0.ctlplane.example.com-ca.crt recurse=False state=file path=/var/lib/openstack/certs/ovn/default/ca.crt force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:12:20 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v385: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:12:20 compute-0 python3.9[256751]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:12:21 compute-0 python3.9[256829]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/ovn/default/tls.key _original_basename=compute-0.ctlplane.example.com-tls.key recurse=False state=file path=/var/lib/openstack/certs/ovn/default/tls.key force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:12:22 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v386: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:12:22 compute-0 python3.9[256981]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:12:22 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:12:23 compute-0 python3.9[257133]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:12:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 18:12:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:12:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 18:12:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:12:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:12:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:12:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:12:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:12:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:12:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:12:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:12:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:12:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 18:12:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:12:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:12:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:12:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 18:12:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:12:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 18:12:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:12:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:12:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:12:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 18:12:24 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v387: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:12:24 compute-0 python3.9[257285]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:12:24 compute-0 python3.9[257363]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/telemetry/default/tls.crt _original_basename=compute-0.ctlplane.example.com-tls.crt recurse=False state=file path=/var/lib/openstack/certs/telemetry/default/tls.crt force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:12:26 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v388: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:12:26 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 18:12:26 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 2027 writes, 9009 keys, 2027 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s#012Cumulative WAL: 2026 writes, 2026 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2027 writes, 9009 keys, 2027 commit groups, 1.0 writes per commit group, ingest: 10.86 MB, 0.02 MB/s#012Interval WAL: 2026 writes, 2026 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     82.3      0.10              0.04         3    0.033       0      0       0.0       0.0#012  L6      1/0    6.46 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.6     46.9     41.7      0.32              0.04         2    0.159    7141    733       0.0       0.0#012 Sum      1/0    6.46 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     35.8     51.3      0.42              0.08         5    0.083    7141    733       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     36.5     52.1      0.41              0.08         4    0.102    7141    733       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0     46.9     41.7      0.32              0.04         2    0.159    7141    733       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     88.5      0.09              0.04         2    0.046       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      6.8      0.01              0.00         1    0.008       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.008, interval 0.008#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.02 GB write, 0.04 MB/s write, 0.01 GB read, 0.02 MB/s read, 0.4 seconds#012Interval compaction: 0.02 GB write, 0.04 MB/s write, 0.01 GB read, 0.02 MB/s read, 0.4 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55911062f1f0#2 capacity: 308.00 MB usage: 485.36 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 7.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(35,398.39 KB,0.126316%) FilterBlock(6,27.80 KB,0.00881344%) IndexBlock(6,59.17 KB,0.0187614%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Dec  3 18:12:27 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:12:27 compute-0 python3.9[257515]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:12:27 compute-0 podman[257522]: 2025-12-03 18:12:27.961401229 +0000 UTC m=+0.090219720 container health_status f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  3 18:12:27 compute-0 podman[257519]: 2025-12-03 18:12:27.970016547 +0000 UTC m=+0.111426022 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  3 18:12:27 compute-0 podman[257518]: 2025-12-03 18:12:27.974862994 +0000 UTC m=+0.127970991 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.build-date=20251125)
Dec  3 18:12:27 compute-0 podman[257542]: 2025-12-03 18:12:27.975817487 +0000 UTC m=+0.093531150 container health_status ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, managed_by=edpm_ansible, release=1214.1726694543, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, architecture=x86_64, config_id=edpm, name=ubi9, release-0.7.12=, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., container_name=kepler, distribution-scope=public, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec  3 18:12:27 compute-0 podman[257537]: 2025-12-03 18:12:27.993998586 +0000 UTC m=+0.122006657 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Dec  3 18:12:28 compute-0 podman[257520]: 2025-12-03 18:12:28.003354102 +0000 UTC m=+0.154213184 container health_status 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, vcs-type=git, com.redhat.component=ubi9-minimal-container, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, name=ubi9-minimal, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, io.openshift.expose-services=)
Dec  3 18:12:28 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v389: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:12:28 compute-0 python3.9[257711]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/telemetry/default/ca.crt _original_basename=compute-0.ctlplane.example.com-ca.crt recurse=False state=file path=/var/lib/openstack/certs/telemetry/default/ca.crt force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:12:29 compute-0 python3.9[257863]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:12:29 compute-0 podman[158200]: time="2025-12-03T18:12:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:12:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:12:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32819 "" "Go-http-client/1.1"
Dec  3 18:12:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:12:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6829 "" "Go-http-client/1.1"
Dec  3 18:12:29 compute-0 python3.9[257941]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/telemetry/default/tls.key _original_basename=compute-0.ctlplane.example.com-tls.key recurse=False state=file path=/var/lib/openstack/certs/telemetry/default/tls.key force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:12:30 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v390: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:12:30 compute-0 python3.9[258093]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:12:31 compute-0 openstack_network_exporter[160319]: ERROR   18:12:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:12:31 compute-0 openstack_network_exporter[160319]: ERROR   18:12:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:12:31 compute-0 openstack_network_exporter[160319]: ERROR   18:12:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:12:31 compute-0 openstack_network_exporter[160319]: ERROR   18:12:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:12:31 compute-0 openstack_network_exporter[160319]: 
Dec  3 18:12:31 compute-0 openstack_network_exporter[160319]: ERROR   18:12:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:12:31 compute-0 openstack_network_exporter[160319]: 
Dec  3 18:12:31 compute-0 python3.9[258245]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:12:32 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v391: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:12:32 compute-0 python3.9[258397]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:12:32 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:12:33 compute-0 python3.9[258520]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764785551.9015677-165-184373223920822/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=500a5ae284c164b4f6c3476b2b19345254a4862f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:12:34 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v392: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:12:34 compute-0 python3.9[258672]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:12:35 compute-0 python3.9[258795]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764785553.7847931-165-187859413633225/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=51aae6c674a856b81951449221fef5c7d3abbd8c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:12:36 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v393: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:12:36 compute-0 python3.9[258947]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:12:36 compute-0 python3.9[259070]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764785555.4730651-165-43253333299547/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=f4444dbad40381d7b43450b91e0cb7b748f13f3c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:12:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:12:38 compute-0 python3.9[259222]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:12:38 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v394: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:12:38 compute-0 python3.9[259374]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:12:39 compute-0 python3.9[259526]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:12:40 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v395: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:12:40 compute-0 podman[259604]: 2025-12-03 18:12:40.403963397 +0000 UTC m=+0.147124035 container health_status 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 18:12:40 compute-0 python3.9[259605]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/libvirt/default/tls.crt _original_basename=compute-0.ctlplane.example.com-tls.crt recurse=False state=file path=/var/lib/openstack/certs/libvirt/default/tls.crt force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:12:41 compute-0 python3.9[259780]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:12:41 compute-0 python3.9[259858]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/libvirt/default/ca.crt _original_basename=compute-0.ctlplane.example.com-ca.crt recurse=False state=file path=/var/lib/openstack/certs/libvirt/default/ca.crt force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:12:42 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v396: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:12:42 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:12:42 compute-0 python3.9[260010]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:12:43 compute-0 python3.9[260088]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/libvirt/default/tls.key _original_basename=compute-0.ctlplane.example.com-tls.key recurse=False state=file path=/var/lib/openstack/certs/libvirt/default/tls.key force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:12:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:12:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:12:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:12:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:12:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:12:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:12:44 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v397: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:12:44 compute-0 python3.9[260240]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:12:45 compute-0 python3.9[260392]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:12:46 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v398: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:12:46 compute-0 python3.9[260544]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:12:46 compute-0 python3.9[260622]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt _original_basename=compute-0.ctlplane.example.com-tls.crt recurse=False state=file path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:12:47 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:12:47 compute-0 python3.9[260774]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:12:48 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v399: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:12:48 compute-0 python3.9[260852]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt _original_basename=compute-0.ctlplane.example.com-ca.crt recurse=False state=file path=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:12:49 compute-0 python3.9[261004]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:12:50 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v400: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:12:50 compute-0 python3.9[261083]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key _original_basename=compute-0.ctlplane.example.com-tls.key recurse=False state=file path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:12:51 compute-0 python3.9[261235]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry-power-monitoring setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:12:52 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v401: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:12:52 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:12:52 compute-0 python3.9[261387]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:12:53 compute-0 python3.9[261465]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:12:54 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v402: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:12:54 compute-0 python3.9[261617]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:12:56 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v403: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:12:58 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v404: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:12:58 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:12:58 compute-0 python3.9[261769]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:12:58 compute-0 podman[261823]: 2025-12-03 18:12:58.883592901 +0000 UTC m=+0.091385777 container health_status f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  3 18:12:58 compute-0 podman[261824]: 2025-12-03 18:12:58.895394826 +0000 UTC m=+0.107366943 container health_status ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, release-0.7.12=, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, io.buildah.version=1.29.0, io.openshift.expose-services=, distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, com.redhat.component=ubi9-container, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, maintainer=Red Hat, Inc., vendor=Red Hat, Inc.)
Dec  3 18:12:58 compute-0 podman[261819]: 2025-12-03 18:12:58.899054125 +0000 UTC m=+0.117293844 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:12:58 compute-0 podman[261822]: 2025-12-03 18:12:58.908021751 +0000 UTC m=+0.122033538 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Dec  3 18:12:58 compute-0 podman[261820]: 2025-12-03 18:12:58.915184105 +0000 UTC m=+0.134036148 container health_status 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, release=1755695350, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, config_id=edpm, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.openshift.expose-services=, version=9.6, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, container_name=openstack_network_exporter)
Dec  3 18:12:58 compute-0 podman[261821]: 2025-12-03 18:12:58.949660617 +0000 UTC m=+0.170917238 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  3 18:12:59 compute-0 python3.9[261927]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:12:59 compute-0 podman[158200]: time="2025-12-03T18:12:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:12:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:12:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32819 "" "Go-http-client/1.1"
Dec  3 18:12:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:12:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6837 "" "Go-http-client/1.1"
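
The @ - - access-log lines show the podman system service answering libpod REST calls over its Unix socket. A stdlib-only sketch of the same containers/json query; the socket path is an assumption (rootful default), rootless podman listens on $XDG_RUNTIME_DIR/podman/podman.sock instead:

    # Sketch: issue the libpod API call seen in the access log above over
    # podman's Unix socket, using only the standard library.
    import http.client
    import json
    import socket

    SOCKET_PATH = "/run/podman/podman.sock"  # assumption: rootful default

    class UnixHTTPConnection(http.client.HTTPConnection):
        def connect(self):
            # Swap the TCP socket for the podman service's AF_UNIX socket.
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(SOCKET_PATH)

    conn = UnixHTTPConnection("localhost")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    for c in containers:
        print(c["Names"], c["State"])
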
Dec  3 18:13:00 compute-0 python3.9[262117]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
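
Each ansible-ansible.builtin.file invocation above creates a directory with fixed ownership, mode, and SELinux type. Roughly the same effect in plain Python; shelling out to chcon is an assumption, as the real module uses the libselinux bindings when present:

    # Sketch: roughly what the ansible.builtin.file task above does.
    import os
    import shutil
    import subprocess

    path = "/var/lib/openstack/cacerts/nova"
    os.makedirs(path, exist_ok=True)          # state=directory
    os.chmod(path, 0o755)                     # mode=0755
    shutil.chown(path, user="root", group="root")
    # setype=container_file_t; chcon stands in for the selinux bindings.
    subprocess.run(["chcon", "-t", "container_file_t", path], check=True)
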
Dec  3 18:13:00 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v405: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
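
The recurring ceph-mgr pgmap lines are cluster heartbeats summarizing placement-group state and capacity. A small parsing sketch for trend analysis; the regex is an assumption tailored to the exact format logged above:

    # Sketch: extract PG count and state from a ceph-mgr pgmap debug line.
    import re

    PGMAP_RE = re.compile(
        r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: (?P<states>[^;]+); "
        r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
        r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail"
    )

    line = ("pgmap v405: 321 pgs: 321 active+clean; 456 KiB data, "
            "148 MiB used, 60 GiB / 60 GiB avail")
    m = PGMAP_RE.search(line)
    if m:
        print(m.group("ver"), m.group("pgs"), m.group("states"))
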
Dec  3 18:13:00 compute-0 python3.9[262269]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:13:01 compute-0 openstack_network_exporter[160319]: ERROR   18:13:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:13:01 compute-0 openstack_network_exporter[160319]: ERROR   18:13:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:13:01 compute-0 openstack_network_exporter[160319]: ERROR   18:13:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:13:01 compute-0 openstack_network_exporter[160319]: ERROR   18:13:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:13:01 compute-0 openstack_network_exporter[160319]: ERROR   18:13:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
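
The exporter errors above are benign on this node: appctl-style calls need a <daemon>.<pid>.ctl control socket in the daemon's run directory, and ovn-northd does not run on a compute host. A sketch of the kind of probe that produces the "no control socket files found" message; the paths are assumptions based on the exporter's volume mounts (/run/openvswitch and /run/ovn inside the container):

    # Sketch: look for OVS/OVN control sockets the way the exporter does
    # before calling into a daemon.
    import glob

    def find_ctl(rundir, daemon):
        # OVS/OVN daemons create <daemon>.<pid>.ctl sockets in their run
        # directory; without one, there is nothing for appctl to talk to.
        return glob.glob(f"{rundir}/{daemon}.*.ctl")

    for rundir, daemon in (("/run/openvswitch", "ovsdb-server"),
                           ("/run/ovn", "ovn-northd")):
        if not find_ctl(rundir, daemon):
            print(f"no control socket files found for {daemon}")
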
Dec  3 18:13:01 compute-0 python3.9[262392]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764785580.2681787-375-54739718603259/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=3ad59a59e9b9a4b3219c10debf9b016d113b8fe9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
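
The stat/copy pair above is ansible's idempotency pattern: checksum the destination first, ship the file only when it differs (note the sha1 checksum in the copy invocation). The same pattern in miniature, with illustrative paths:

    # Sketch: checksum-gated copy, mirroring the ansible stat + copy pair
    # logged above. Paths are illustrative.
    import hashlib
    import shutil
    from pathlib import Path

    def sha1(path):
        if not path.exists():
            return None
        return hashlib.sha1(path.read_bytes()).hexdigest()

    src = Path("/home/zuul/tls-ca-bundle.pem")   # assumed staging copy
    dest = Path("/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem")
    if src.exists() and sha1(src) != sha1(dest):
        shutil.copy2(src, dest)                  # only copy on change
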
Dec  3 18:13:02 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v406: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:13:02 compute-0 python3.9[262544]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:13:03 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:13:03 compute-0 python3.9[262696]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.700 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.701 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.701 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.702 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f3f52673fe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.702 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f562c3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.703 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.703 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.703 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f526739b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.703 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.703 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673a40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.703 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52671a60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.703 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673a70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.703 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f562d33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f526733b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f526734d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f565c04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.705 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.705 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f526735f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.705 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.706 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f3f5271c620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.706 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.706 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f3f5271c0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.706 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.706 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f3f5271c140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.706 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.706 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f3f52673980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.706 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.706 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f3f5271c1d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.707 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.705 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.707 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f3f52673a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.708 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.709 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f3f52672390>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.709 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.709 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f3f526739e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.709 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.709 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f3f5271c260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.709 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.709 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f3f5271c2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.709 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.709 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f3f52671ca0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.709 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.710 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f3f52673470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.710 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.710 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f3f5271c380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.708 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f526736b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'disk.device.allocation': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.710 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.711 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f3f526734a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.711 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.711 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'disk.device.allocation': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.712 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'disk.device.allocation': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.712 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'disk.device.allocation': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.713 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'disk.device.allocation': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.712 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f3f52671a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.713 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.713 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f3f52673ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.713 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.713 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f3f52673500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.713 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.714 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f3f52673560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.714 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.714 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f3f526735c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.714 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.714 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f3f52673620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.714 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.714 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f3f52673680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.714 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.715 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f3f526736e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.715 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.715 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f3f52673f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.715 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.715 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f3f52673740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.715 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.715 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f3f52673f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.715 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.716 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.716 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.716 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.716 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.716 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.717 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.717 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.717 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.717 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.717 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.717 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.717 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.718 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.718 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.718 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.718 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.718 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.718 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.718 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.719 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.719 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.719 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.719 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.719 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.719 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:13:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:13:03.720 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
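
The ceilometer block above is one complete polling cycle: every pollster from the [pollsters] source is registered against a single-worker ThreadPoolExecutor, the local_instances discovery returns an empty list (no VMs on this host yet), and each meter is therefore skipped. A condensed sketch of that register/discover/skip flow, with illustrative names rather than ceilometer's real classes:

    # Sketch: the register -> discover -> skip cycle visible in the
    # ceilometer debug output above. Names are illustrative.
    from concurrent.futures import ThreadPoolExecutor

    def discover_local_instances():
        return []   # no instances on this host yet, as in the log

    def run_pollster(meter, discovery_cache):
        # Discovery results are cached per cycle and shared by pollsters.
        resources = discovery_cache.setdefault(
            "local_instances", discover_local_instances())
        if not resources:
            print(f"Skip pollster {meter}, no resources found this cycle")
            return
        # ... poll each resource and emit samples ...

    meters = ["memory.usage", "cpu", "network.incoming.bytes"]
    discovery_cache = {}
    # max_workers=1 matches "Processing pollsters ... with [1] threads".
    with ThreadPoolExecutor(max_workers=1) as executor:
        for meter in meters:
            executor.submit(run_pollster, meter, discovery_cache)
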
Dec  3 18:13:04 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v407: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:13:04 compute-0 python3.9[262775]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:13:05 compute-0 python3.9[262927]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:13:06 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v408: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:13:06 compute-0 python3.9[263079]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:13:06 compute-0 python3.9[263157]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:13:07 compute-0 python3.9[263309]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:13:08 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v409: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:13:08 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:13:08 compute-0 python3.9[263461]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:13:09 compute-0 python3.9[263584]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764785588.10337-441-200900198434423/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=3ad59a59e9b9a4b3219c10debf9b016d113b8fe9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:13:09 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Dec  3 18:13:09 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:13:09.588182) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 18:13:09 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Dec  3 18:13:09 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764785589588269, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 695, "num_deletes": 251, "total_data_size": 892561, "memory_usage": 906792, "flush_reason": "Manual Compaction"}
Dec  3 18:13:09 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Dec  3 18:13:09 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764785589600768, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 884927, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9009, "largest_seqno": 9703, "table_properties": {"data_size": 881282, "index_size": 1489, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 7735, "raw_average_key_size": 18, "raw_value_size": 874058, "raw_average_value_size": 2076, "num_data_blocks": 69, "num_entries": 421, "num_filter_entries": 421, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764785528, "oldest_key_time": 1764785528, "file_creation_time": 1764785589, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a1ac3b74-8599-4a51-8b4c-6fd35a134427", "db_session_id": "TYOLZSJOOVNJYKF8Y1CE", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Dec  3 18:13:09 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 12672 microseconds, and 7472 cpu microseconds.
Dec  3 18:13:09 compute-0 ceph-mon[192802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 18:13:09 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:13:09.600854) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 884927 bytes OK
Dec  3 18:13:09 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:13:09.600884) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Dec  3 18:13:09 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:13:09.603914) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Dec  3 18:13:09 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:13:09.603945) EVENT_LOG_v1 {"time_micros": 1764785589603935, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 18:13:09 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:13:09.603977) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 18:13:09 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 888957, prev total WAL file size 888957, number of live WAL files 2.
Dec  3 18:13:09 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 18:13:09 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:13:09.605997) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Dec  3 18:13:09 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 18:13:09 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(864KB)], [23(6614KB)]
Dec  3 18:13:09 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764785589606117, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 7658506, "oldest_snapshot_seqno": -1}
Dec  3 18:13:09 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3293 keys, 6125599 bytes, temperature: kUnknown
Dec  3 18:13:09 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764785589662524, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 6125599, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6101546, "index_size": 14728, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8261, "raw_key_size": 79878, "raw_average_key_size": 24, "raw_value_size": 6039899, "raw_average_value_size": 1834, "num_data_blocks": 643, "num_entries": 3293, "num_filter_entries": 3293, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764784942, "oldest_key_time": 0, "file_creation_time": 1764785589, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a1ac3b74-8599-4a51-8b4c-6fd35a134427", "db_session_id": "TYOLZSJOOVNJYKF8Y1CE", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Dec  3 18:13:09 compute-0 ceph-mon[192802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 18:13:09 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:13:09.662957) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 6125599 bytes
Dec  3 18:13:09 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:13:09.665555) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 135.2 rd, 108.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 6.5 +0.0 blob) out(5.8 +0.0 blob), read-write-amplify(15.6) write-amplify(6.9) OK, records in: 3806, records dropped: 513 output_compression: NoCompression
Dec  3 18:13:09 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:13:09.665576) EVENT_LOG_v1 {"time_micros": 1764785589665566, "job": 8, "event": "compaction_finished", "compaction_time_micros": 56662, "compaction_time_cpu_micros": 31426, "output_level": 6, "num_output_files": 1, "total_output_size": 6125599, "num_input_records": 3806, "num_output_records": 3293, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 18:13:09 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 18:13:09 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764785589665863, "job": 8, "event": "table_file_deletion", "file_number": 25}
Dec  3 18:13:09 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 18:13:09 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764785589667335, "job": 8, "event": "table_file_deletion", "file_number": 23}
Dec  3 18:13:09 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:13:09.605184) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:13:09 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:13:09.667713) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:13:09 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:13:09.667720) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:13:09 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:13:09.667723) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:13:09 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:13:09.667727) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:13:09 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:13:09.667730) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
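
The ceph-mon RocksDB store emits machine-readable EVENT_LOG_v1 JSON alongside the prose lines, so the flush/compaction above (job 7 feeding job 8) can be post-processed directly from a capture like this one. A minimal sketch, assuming only the Python stdlib and that each payload is the valid JSON RocksDB prints here; the arithmetic reproduces the 6.9 / 15.6 figures in the job 8 summary:

    import json, re

    EVENT_RE = re.compile(r"EVENT_LOG_v1 (\{.*\})")

    def iter_events(log_lines):
        # Yield each decoded EVENT_LOG_v1 payload found in raw syslog lines.
        for line in log_lines:
            m = EVENT_RE.search(line)
            if m:
                yield json.loads(m.group(1))

    # Recompute the two amplification figures job 8 reports above:
    l0_bytes  = 884927    # flush output: table_file_creation of file 25
    in_bytes  = 7658506   # compaction_started input_data_size (L0 + L6)
    out_bytes = 6125599   # compaction_finished total_output_size
    print(round(out_bytes / l0_bytes, 1))               # 6.9  (write-amplify)
    print(round((in_bytes + out_bytes) / l0_bytes, 1))  # 15.6 (read-write-amplify)

write-amplify is output bytes over the freshly flushed L0 bytes, and read-write-amplify adds everything read plus everything written over the same denominator, which is why a small L0 file compacting into a 6.5 MB L6 file costs 15.6x here.
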
Dec  3 18:13:10 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v410: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:13:10 compute-0 python3.9[263736]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:13:10 compute-0 podman[263784]: 2025-12-03 18:13:10.914866878 +0000 UTC m=+0.078475605 container health_status 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
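
podman's health events land in the journal as a single line with key=value pairs inside the parenthesized label list (health_status, health_failing_streak, the full config_data dict). A small sketch for pulling the status back out of such lines; the regex is an assumption about the layout shown here, not a podman API:

    import re

    def health_status(journal_line):
        # podman embeds "health_status=healthy" in the label list above.
        m = re.search(r"health_status=(\w+)", journal_line)
        return m.group(1) if m else None

    # On the podman_exporter line above this returns "healthy".
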
Dec  3 18:13:11 compute-0 python3.9[263912]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:13:11 compute-0 python3.9[263990]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:13:12 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v411: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:13:12 compute-0 python3.9[264142]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:13:13 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:13:13 compute-0 python3.9[264294]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:13:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_18:13:13
Dec  3 18:13:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 18:13:13 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 18:13:13 compute-0 ceph-mgr[193091]: [balancer INFO root] pools ['default.rgw.meta', '.rgw.root', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta', 'backups', 'volumes', '.mgr', 'default.rgw.log', 'images', 'cephfs.cephfs.data']
Dec  3 18:13:13 compute-0 ceph-mgr[193091]: [balancer INFO root] prepared 0/10 changes
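
The balancer pass above (mode upmap, max misplaced 0.050000, prepared 0/10 changes) means no PG remapping was needed across the eleven pools it scanned. The same state can be queried back from the mgr; a sketch using the ceph CLI from Python, assuming an admin keyring on the host and noting that the exact JSON shape of "balancer status" varies by release:

    import json, subprocess

    out = subprocess.check_output(
        ["ceph", "balancer", "status", "--format", "json"], text=True)
    status = json.loads(out)
    print(status.get("mode"), status.get("active"))  # expect: upmap True
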
Dec  3 18:13:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:13:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:13:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:13:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:13:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:13:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:13:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 18:13:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:13:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 18:13:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:13:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:13:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:13:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:13:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:13:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:13:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
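
Here rbd_support reloads its schedules per pool (vms, volumes, backups, images), once for the TrashPurgeScheduleHandler and once for the MirrorSnapshotScheduleHandler. What is actually configured can be listed pool by pool; a sketch, assuming the Reef-era "rbd trash purge schedule list" subcommand and its --pool/--recursive options:

    import subprocess

    for pool in ("vms", "volumes", "backups", "images"):
        out = subprocess.run(
            ["rbd", "trash", "purge", "schedule", "list",
             "--pool", pool, "--recursive"],
            capture_output=True, text=True)
        print(pool, out.stdout.strip() or "<no schedules>")
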
Dec  3 18:13:14 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v412: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:13:14 compute-0 python3.9[264372]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:13:14 compute-0 systemd[1]: session-51.scope: Deactivated successfully.
Dec  3 18:13:14 compute-0 systemd[1]: session-51.scope: Consumed 49.080s CPU time.
Dec  3 18:13:14 compute-0 systemd-logind[784]: Session 51 logged out. Waiting for processes to exit.
Dec  3 18:13:14 compute-0 systemd-logind[784]: Removed session 51.
Dec  3 18:13:15 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:13:15 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:13:16 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 18:13:16 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:13:16 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 18:13:16 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:13:16 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 1c5926d2-2a15-4d90-bf56-92468cd10b6c does not exist
Dec  3 18:13:16 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 6cc5b7be-61b8-4091-b07a-d9d3c719927e does not exist
Dec  3 18:13:16 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 68de7556-a06a-4d69-82ff-e4881307c1a3 does not exist
Dec  3 18:13:16 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 18:13:16 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 18:13:16 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 18:13:16 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:13:16 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:13:16 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:13:16 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v413: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:13:16 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:13:16 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:13:16 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
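
The audit burst above is the cephadm mgr module doing periodic reconciliation: regenerating a minimal ceph.conf, fetching the client.admin and client.bootstrap-osd keyrings, persisting its osd_remove_queue via config-key, and checking for destroyed OSDs. Each mon_command maps to a plain CLI call; a sketch of two of them, run via subprocess purely for illustration:

    import json, subprocess

    # Same request as {"prefix": "config generate-minimal-conf"} above.
    print(subprocess.check_output(
        ["ceph", "config", "generate-minimal-conf"], text=True))

    # Same request as {"prefix": "osd tree", "states": ["destroyed"]};
    # an empty node list means nothing is pending removal.
    tree = json.loads(subprocess.check_output(
        ["ceph", "osd", "tree", "destroyed", "--format", "json"], text=True))
    print([n.get("name") for n in tree.get("nodes", [])])
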
Dec  3 18:13:16 compute-0 podman[264666]: 2025-12-03 18:13:16.81927296 +0000 UTC m=+0.068694299 container create 3174157e6a1ccf713c0e8314936cd2b81864609cc73c517ac8c693c0a5a7676b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_rosalind, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:13:16 compute-0 systemd[1]: Started libpod-conmon-3174157e6a1ccf713c0e8314936cd2b81864609cc73c517ac8c693c0a5a7676b.scope.
Dec  3 18:13:16 compute-0 podman[264666]: 2025-12-03 18:13:16.796679675 +0000 UTC m=+0.046101004 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:13:16 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:13:16 compute-0 podman[264666]: 2025-12-03 18:13:16.945927228 +0000 UTC m=+0.195348627 container init 3174157e6a1ccf713c0e8314936cd2b81864609cc73c517ac8c693c0a5a7676b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_rosalind, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:13:16 compute-0 podman[264666]: 2025-12-03 18:13:16.957831086 +0000 UTC m=+0.207252435 container start 3174157e6a1ccf713c0e8314936cd2b81864609cc73c517ac8c693c0a5a7676b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_rosalind, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  3 18:13:16 compute-0 podman[264666]: 2025-12-03 18:13:16.96426075 +0000 UTC m=+0.213682109 container attach 3174157e6a1ccf713c0e8314936cd2b81864609cc73c517ac8c693c0a5a7676b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_rosalind, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  3 18:13:16 compute-0 sweet_rosalind[264681]: 167 167
Dec  3 18:13:16 compute-0 systemd[1]: libpod-3174157e6a1ccf713c0e8314936cd2b81864609cc73c517ac8c693c0a5a7676b.scope: Deactivated successfully.
Dec  3 18:13:17 compute-0 podman[264686]: 2025-12-03 18:13:17.03133684 +0000 UTC m=+0.039649748 container died 3174157e6a1ccf713c0e8314936cd2b81864609cc73c517ac8c693c0a5a7676b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:13:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-70e730426c2551d4ac90cff6ffe656780b2c6c01af97d3aee627e4139b01705c-merged.mount: Deactivated successfully.
Dec  3 18:13:17 compute-0 podman[264686]: 2025-12-03 18:13:17.12283649 +0000 UTC m=+0.131149358 container remove 3174157e6a1ccf713c0e8314936cd2b81864609cc73c517ac8c693c0a5a7676b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  3 18:13:17 compute-0 systemd[1]: libpod-conmon-3174157e6a1ccf713c0e8314936cd2b81864609cc73c517ac8c693c0a5a7676b.scope: Deactivated successfully.
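
The create/init/start/attach/died/remove lines above trace one short-lived cephadm helper container (sweet_rosalind) through its whole life in under a second; the "167 167" it printed is the ceph uid/gid probe. The same stream is available programmatically; a sketch watching podman's event feed, where the JSON field names (Status, Name) are an assumption about the current podman event format:

    import json, subprocess

    proc = subprocess.Popen(
        ["podman", "events", "--format", "json", "--filter", "type=container"],
        stdout=subprocess.PIPE, text=True)
    for raw in proc.stdout:
        ev = json.loads(raw)
        print(ev.get("Status"), ev.get("Name"))  # e.g. "create sweet_rosalind"
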
Dec  3 18:13:17 compute-0 podman[264706]: 2025-12-03 18:13:17.353363856 +0000 UTC m=+0.071133288 container create d6296b442eafc8b2607462412127fe71e66579d023758288921dcaeb860fb1a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_babbage, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:13:17 compute-0 podman[264706]: 2025-12-03 18:13:17.327777489 +0000 UTC m=+0.045546981 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:13:17 compute-0 systemd[1]: Started libpod-conmon-d6296b442eafc8b2607462412127fe71e66579d023758288921dcaeb860fb1a2.scope.
Dec  3 18:13:17 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:13:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6b344df9caba9d4b052fd58276d055a57bda714b3a4e65e9f776d1f066705ab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:13:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6b344df9caba9d4b052fd58276d055a57bda714b3a4e65e9f776d1f066705ab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:13:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6b344df9caba9d4b052fd58276d055a57bda714b3a4e65e9f776d1f066705ab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:13:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6b344df9caba9d4b052fd58276d055a57bda714b3a4e65e9f776d1f066705ab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:13:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6b344df9caba9d4b052fd58276d055a57bda714b3a4e65e9f776d1f066705ab/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 18:13:17 compute-0 podman[264706]: 2025-12-03 18:13:17.480823624 +0000 UTC m=+0.198593076 container init d6296b442eafc8b2607462412127fe71e66579d023758288921dcaeb860fb1a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_babbage, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default)
Dec  3 18:13:17 compute-0 podman[264706]: 2025-12-03 18:13:17.495636652 +0000 UTC m=+0.213406104 container start d6296b442eafc8b2607462412127fe71e66579d023758288921dcaeb860fb1a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec  3 18:13:17 compute-0 podman[264706]: 2025-12-03 18:13:17.501144675 +0000 UTC m=+0.218914127 container attach d6296b442eafc8b2607462412127fe71e66579d023758288921dcaeb860fb1a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  3 18:13:18 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v414: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:13:18 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:13:18 compute-0 practical_babbage[264721]: --> passed data devices: 0 physical, 3 LVM
Dec  3 18:13:18 compute-0 practical_babbage[264721]: --> relative data size: 1.0
Dec  3 18:13:18 compute-0 practical_babbage[264721]: --> All data devices are unavailable
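
"All data devices are unavailable" from this ceph-volume batch run is expected here: the three LVM devices it was passed already carry OSDs (see the lvm list output printed at 18:13:21 below), so there is nothing new to deploy. A sketch of the equivalent availability check, assuming ceph-volume's JSON inventory output with path/available/rejected_reasons fields:

    import json, subprocess

    inv = json.loads(subprocess.check_output(
        ["ceph-volume", "inventory", "--format", "json"], text=True))
    for dev in inv:
        # A device already consumed by an LV/OSD reports available=False.
        print(dev["path"], dev["available"], dev.get("rejected_reasons"))
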
Dec  3 18:13:18 compute-0 systemd[1]: libpod-d6296b442eafc8b2607462412127fe71e66579d023758288921dcaeb860fb1a2.scope: Deactivated successfully.
Dec  3 18:13:18 compute-0 systemd[1]: libpod-d6296b442eafc8b2607462412127fe71e66579d023758288921dcaeb860fb1a2.scope: Consumed 1.165s CPU time.
Dec  3 18:13:18 compute-0 podman[264750]: 2025-12-03 18:13:18.815146114 +0000 UTC m=+0.071002745 container died d6296b442eafc8b2607462412127fe71e66579d023758288921dcaeb860fb1a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_babbage, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:13:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6b344df9caba9d4b052fd58276d055a57bda714b3a4e65e9f776d1f066705ab-merged.mount: Deactivated successfully.
Dec  3 18:13:18 compute-0 podman[264750]: 2025-12-03 18:13:18.989109375 +0000 UTC m=+0.244965996 container remove d6296b442eafc8b2607462412127fe71e66579d023758288921dcaeb860fb1a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_babbage, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:13:19 compute-0 systemd[1]: libpod-conmon-d6296b442eafc8b2607462412127fe71e66579d023758288921dcaeb860fb1a2.scope: Deactivated successfully.
Dec  3 18:13:19 compute-0 podman[264906]: 2025-12-03 18:13:19.96007079 +0000 UTC m=+0.069855548 container create 932ad102d7db6e34dfc9bf58c3796bd0d33c9bcc3597d860e181d1ca06720974 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_darwin, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec  3 18:13:20 compute-0 systemd[1]: Started libpod-conmon-932ad102d7db6e34dfc9bf58c3796bd0d33c9bcc3597d860e181d1ca06720974.scope.
Dec  3 18:13:20 compute-0 podman[264906]: 2025-12-03 18:13:19.938109159 +0000 UTC m=+0.047893947 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:13:20 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v415: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:13:20 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:13:20 compute-0 podman[264906]: 2025-12-03 18:13:20.102360906 +0000 UTC m=+0.212145674 container init 932ad102d7db6e34dfc9bf58c3796bd0d33c9bcc3597d860e181d1ca06720974 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:13:20 compute-0 podman[264906]: 2025-12-03 18:13:20.112428169 +0000 UTC m=+0.222212967 container start 932ad102d7db6e34dfc9bf58c3796bd0d33c9bcc3597d860e181d1ca06720974 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_darwin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec  3 18:13:20 compute-0 podman[264906]: 2025-12-03 18:13:20.119316056 +0000 UTC m=+0.229100834 container attach 932ad102d7db6e34dfc9bf58c3796bd0d33c9bcc3597d860e181d1ca06720974 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_darwin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True)
Dec  3 18:13:20 compute-0 systemd-logind[784]: New session 52 of user zuul.
Dec  3 18:13:20 compute-0 naughty_darwin[264924]: 167 167
Dec  3 18:13:20 compute-0 podman[264906]: 2025-12-03 18:13:20.125670888 +0000 UTC m=+0.235455646 container died 932ad102d7db6e34dfc9bf58c3796bd0d33c9bcc3597d860e181d1ca06720974 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  3 18:13:20 compute-0 systemd[1]: Started Session 52 of User zuul.
Dec  3 18:13:20 compute-0 systemd[1]: libpod-932ad102d7db6e34dfc9bf58c3796bd0d33c9bcc3597d860e181d1ca06720974.scope: Deactivated successfully.
Dec  3 18:13:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c5068e467139ca4517dcfbb4c2299316641c4978d71c1dd711e5214be486b6b-merged.mount: Deactivated successfully.
Dec  3 18:13:20 compute-0 podman[264906]: 2025-12-03 18:13:20.182049021 +0000 UTC m=+0.291833779 container remove 932ad102d7db6e34dfc9bf58c3796bd0d33c9bcc3597d860e181d1ca06720974 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:13:20 compute-0 systemd[1]: libpod-conmon-932ad102d7db6e34dfc9bf58c3796bd0d33c9bcc3597d860e181d1ca06720974.scope: Deactivated successfully.
Dec  3 18:13:20 compute-0 podman[264973]: 2025-12-03 18:13:20.395151686 +0000 UTC m=+0.072175774 container create b6aec5ebe274c252aaa6a771cc15b670a4565f8155edf9e4c3eca41480f83869 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_ishizaka, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:13:20 compute-0 systemd[1]: Started libpod-conmon-b6aec5ebe274c252aaa6a771cc15b670a4565f8155edf9e4c3eca41480f83869.scope.
Dec  3 18:13:20 compute-0 podman[264973]: 2025-12-03 18:13:20.369806374 +0000 UTC m=+0.046830512 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:13:20 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:13:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e1bf2474406fe931da960e6048c190e656c23f473945236672f7f22cf95ef95/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:13:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e1bf2474406fe931da960e6048c190e656c23f473945236672f7f22cf95ef95/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:13:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e1bf2474406fe931da960e6048c190e656c23f473945236672f7f22cf95ef95/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:13:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e1bf2474406fe931da960e6048c190e656c23f473945236672f7f22cf95ef95/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:13:20 compute-0 podman[264973]: 2025-12-03 18:13:20.574852555 +0000 UTC m=+0.251876733 container init b6aec5ebe274c252aaa6a771cc15b670a4565f8155edf9e4c3eca41480f83869 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:13:20 compute-0 podman[264973]: 2025-12-03 18:13:20.593234859 +0000 UTC m=+0.270258977 container start b6aec5ebe274c252aaa6a771cc15b670a4565f8155edf9e4c3eca41480f83869 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:13:20 compute-0 podman[264973]: 2025-12-03 18:13:20.60116744 +0000 UTC m=+0.278191608 container attach b6aec5ebe274c252aaa6a771cc15b670a4565f8155edf9e4c3eca41480f83869 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec  3 18:13:21 compute-0 python3.9[265122]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]: {
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:    "0": [
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:        {
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:            "devices": [
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:                "/dev/loop3"
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:            ],
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:            "lv_name": "ceph_lv0",
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:            "lv_size": "21470642176",
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=973fbbc8-5aff-4a53-bee8-42e5a6788dd6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:            "lv_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:            "name": "ceph_lv0",
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:            "tags": {
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:                "ceph.block_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:                "ceph.cluster_name": "ceph",
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:                "ceph.crush_device_class": "",
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:                "ceph.encrypted": "0",
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:                "ceph.osd_fsid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:                "ceph.osd_id": "0",
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:                "ceph.type": "block",
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:                "ceph.vdo": "0"
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:            },
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:            "type": "block",
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:            "vg_name": "ceph_vg0"
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:        }
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:    ],
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:    "1": [
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:        {
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:            "devices": [
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:                "/dev/loop4"
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:            ],
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:            "lv_name": "ceph_lv1",
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:            "lv_size": "21470642176",
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1e2b0083-5293-47cb-a3d1-bc27cedc4ede,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:            "lv_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:            "name": "ceph_lv1",
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:            "tags": {
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:                "ceph.block_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:                "ceph.cluster_name": "ceph",
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:                "ceph.crush_device_class": "",
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:                "ceph.encrypted": "0",
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:                "ceph.osd_fsid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:                "ceph.osd_id": "1",
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:                "ceph.type": "block",
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:                "ceph.vdo": "0"
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:            },
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:            "type": "block",
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:            "vg_name": "ceph_vg1"
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:        }
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:    ],
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:    "2": [
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:        {
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:            "devices": [
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:                "/dev/loop5"
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:            ],
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:            "lv_name": "ceph_lv2",
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:            "lv_size": "21470642176",
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2abec9de-afba-437e-9a17-384a1dd8cd50,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:            "lv_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:            "name": "ceph_lv2",
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:            "tags": {
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:                "ceph.block_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:                "ceph.cluster_name": "ceph",
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:                "ceph.crush_device_class": "",
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:                "ceph.encrypted": "0",
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:                "ceph.osd_fsid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:                "ceph.osd_id": "2",
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:                "ceph.type": "block",
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:                "ceph.vdo": "0"
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:            },
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:            "type": "block",
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:            "vg_name": "ceph_vg2"
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:        }
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]:    ]
Dec  3 18:13:21 compute-0 cranky_ishizaka[265018]: }
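
The JSON block above is what "ceph-volume lvm list --format json" prints: a map of OSD id to the LVs backing it, with the ceph.* LV tags expanded. A sketch that folds it into an osd -> device table and cross-checks capacity (lvm_list.json is a hypothetical capture of the container output above):

    import json

    listing = json.loads(open("lvm_list.json").read())
    total = 0
    for osd_id, lvs in sorted(listing.items()):
        for lv in lvs:
            total += int(lv["lv_size"])
            print(f"osd.{osd_id}", lv["lv_path"], lv["devices"][0])
    print(total / 2**30)  # ~59.99: three 21470642176-byte LVs

Three LVs of 21470642176 bytes come to just under 60 GiB, which matches the "60 GiB / 60 GiB avail" the pgmap lines keep reporting.
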
Dec  3 18:13:21 compute-0 systemd[1]: libpod-b6aec5ebe274c252aaa6a771cc15b670a4565f8155edf9e4c3eca41480f83869.scope: Deactivated successfully.
Dec  3 18:13:21 compute-0 podman[264973]: 2025-12-03 18:13:21.489506422 +0000 UTC m=+1.166530520 container died b6aec5ebe274c252aaa6a771cc15b670a4565f8155edf9e4c3eca41480f83869 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_ishizaka, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec  3 18:13:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e1bf2474406fe931da960e6048c190e656c23f473945236672f7f22cf95ef95-merged.mount: Deactivated successfully.
Dec  3 18:13:21 compute-0 podman[264973]: 2025-12-03 18:13:21.585874332 +0000 UTC m=+1.262898430 container remove b6aec5ebe274c252aaa6a771cc15b670a4565f8155edf9e4c3eca41480f83869 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  3 18:13:21 compute-0 systemd[1]: libpod-conmon-b6aec5ebe274c252aaa6a771cc15b670a4565f8155edf9e4c3eca41480f83869.scope: Deactivated successfully.
Dec  3 18:13:22 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v416: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:13:22 compute-0 python3.9[265400]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:13:22 compute-0 podman[265429]: 2025-12-03 18:13:22.419991458 +0000 UTC m=+0.062833057 container create 85aa73eb9be75b7d48df56b5d2dd65df71691aeb5ef3b57d7805fff49b65d810 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_rosalind, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:13:22 compute-0 systemd[1]: Started libpod-conmon-85aa73eb9be75b7d48df56b5d2dd65df71691aeb5ef3b57d7805fff49b65d810.scope.
Dec  3 18:13:22 compute-0 podman[265429]: 2025-12-03 18:13:22.391880386 +0000 UTC m=+0.034722075 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:13:22 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:13:22 compute-0 podman[265429]: 2025-12-03 18:13:22.522313423 +0000 UTC m=+0.165155042 container init 85aa73eb9be75b7d48df56b5d2dd65df71691aeb5ef3b57d7805fff49b65d810 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_rosalind, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:13:22 compute-0 podman[265429]: 2025-12-03 18:13:22.533347101 +0000 UTC m=+0.176188700 container start 85aa73eb9be75b7d48df56b5d2dd65df71691aeb5ef3b57d7805fff49b65d810 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:13:22 compute-0 podman[265429]: 2025-12-03 18:13:22.53865708 +0000 UTC m=+0.181498709 container attach 85aa73eb9be75b7d48df56b5d2dd65df71691aeb5ef3b57d7805fff49b65d810 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:13:22 compute-0 gallant_rosalind[265470]: 167 167
Dec  3 18:13:22 compute-0 systemd[1]: libpod-85aa73eb9be75b7d48df56b5d2dd65df71691aeb5ef3b57d7805fff49b65d810.scope: Deactivated successfully.
Dec  3 18:13:22 compute-0 conmon[265470]: conmon 85aa73eb9be75b7d48df <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-85aa73eb9be75b7d48df56b5d2dd65df71691aeb5ef3b57d7805fff49b65d810.scope/container/memory.events
Dec  3 18:13:22 compute-0 podman[265429]: 2025-12-03 18:13:22.542990795 +0000 UTC m=+0.185832434 container died 85aa73eb9be75b7d48df56b5d2dd65df71691aeb5ef3b57d7805fff49b65d810 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_rosalind, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec  3 18:13:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-f5f991c6ddc005b2076e495a5d8d3c1d3de7319c847fdab1b1f500c8b44d9e77-merged.mount: Deactivated successfully.
Dec  3 18:13:22 compute-0 podman[265429]: 2025-12-03 18:13:22.607610214 +0000 UTC m=+0.250451833 container remove 85aa73eb9be75b7d48df56b5d2dd65df71691aeb5ef3b57d7805fff49b65d810 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_rosalind, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  3 18:13:22 compute-0 systemd[1]: libpod-conmon-85aa73eb9be75b7d48df56b5d2dd65df71691aeb5ef3b57d7805fff49b65d810.scope: Deactivated successfully.
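
The two runs above (cranky_ishizaka, then gallant_rosalind) show podman's complete lifecycle for a short-lived cephadm helper container: one logged event each for create, init, start, attach, died, and remove, plus the matching systemd scope and overlay-mount teardown. As a minimal sketch (assuming `podman events --format json` is available on the host and emits podman 4.x-style keys), the same event stream can be followed live:

    #!/usr/bin/env python3
    """Follow podman lifecycle events like the ones logged above."""
    import json
    import subprocess

    # Streams one JSON object per event until interrupted (Ctrl-C).
    proc = subprocess.Popen(
        ["podman", "events", "--format", "json"],
        stdout=subprocess.PIPE, text=True,
    )
    for line in proc.stdout:
        ev = json.loads(line)
        # Status cycles through create/init/start/attach/died/remove as above.
        print(ev.get("Time"), ev.get("Status"), ev.get("Name"), str(ev.get("ID", ""))[:12])
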
Dec  3 18:13:22 compute-0 podman[265531]: 2025-12-03 18:13:22.853601498 +0000 UTC m=+0.089038793 container create 647ac2ed0f4c1e6daebd941118b76aeed31d004886cd38568dc76801b5b4eafd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_solomon, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:13:22 compute-0 podman[265531]: 2025-12-03 18:13:22.829108393 +0000 UTC m=+0.064545738 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:13:22 compute-0 systemd[1]: Started libpod-conmon-647ac2ed0f4c1e6daebd941118b76aeed31d004886cd38568dc76801b5b4eafd.scope.
Dec  3 18:13:22 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:13:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7f77b0045285ae562948893ee05dbf58199e73455b7c6c300a1a17325900357/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:13:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7f77b0045285ae562948893ee05dbf58199e73455b7c6c300a1a17325900357/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:13:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7f77b0045285ae562948893ee05dbf58199e73455b7c6c300a1a17325900357/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:13:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7f77b0045285ae562948893ee05dbf58199e73455b7c6c300a1a17325900357/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:13:23 compute-0 podman[265531]: 2025-12-03 18:13:23.028243418 +0000 UTC m=+0.263680753 container init 647ac2ed0f4c1e6daebd941118b76aeed31d004886cd38568dc76801b5b4eafd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_solomon, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  3 18:13:23 compute-0 podman[265531]: 2025-12-03 18:13:23.058218487 +0000 UTC m=+0.293655822 container start 647ac2ed0f4c1e6daebd941118b76aeed31d004886cd38568dc76801b5b4eafd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_solomon, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 18:13:23 compute-0 podman[265531]: 2025-12-03 18:13:23.065175846 +0000 UTC m=+0.300613191 container attach 647ac2ed0f4c1e6daebd941118b76aeed31d004886cd38568dc76801b5b4eafd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_solomon, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:13:23 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:13:23 compute-0 python3.9[265610]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764785601.586248-34-216924184406117/.source.conf _original_basename=ceph.conf follow=False checksum=78a6a010734ae0b757289673402732c00a3a0c95 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:13:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 18:13:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:13:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 18:13:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:13:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:13:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:13:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:13:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:13:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:13:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:13:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:13:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:13:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 18:13:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:13:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:13:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:13:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 18:13:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:13:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 18:13:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:13:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:13:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:13:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
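
Each pg_autoscaler pass logs, per pool, the fraction of raw capacity used, the pool's bias, the resulting raw pg target, and that target quantized to a power of two. The sketch below illustrates the quantization step only; it is not Ceph's actual implementation (the real autoscaler also applies per-pool floors and thresholds, which is why the idle pools above simply stay at 32), and the one-halving-per-pass shrink is an assumption made to match the two transitions visible in this log:

    def quantize(raw_target: float, current: int, minimum: int = 1) -> int:
        """Illustrative power-of-two quantization, not Ceph's code."""
        ideal = minimum
        while ideal < raw_target:   # round the raw target up to a power of two
            ideal *= 2
        if ideal >= current:
            return ideal
        return max(ideal, current // 2)  # assumed: shrink at most one halving per pass

    print(quantize(0.0021557249951162337, current=1))   # -> 1, pool '.mgr'
    print(quantize(0.0006104707950771635, current=32))  # -> 16, 'cephfs.cephfs.meta'
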
Dec  3 18:13:24 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v417: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:13:24 compute-0 jolly_solomon[265577]: {
Dec  3 18:13:24 compute-0 jolly_solomon[265577]:    "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
Dec  3 18:13:24 compute-0 jolly_solomon[265577]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:13:24 compute-0 jolly_solomon[265577]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 18:13:24 compute-0 jolly_solomon[265577]:        "osd_id": 1,
Dec  3 18:13:24 compute-0 jolly_solomon[265577]:        "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:13:24 compute-0 jolly_solomon[265577]:        "type": "bluestore"
Dec  3 18:13:24 compute-0 jolly_solomon[265577]:    },
Dec  3 18:13:24 compute-0 jolly_solomon[265577]:    "2abec9de-afba-437e-9a17-384a1dd8cd50": {
Dec  3 18:13:24 compute-0 jolly_solomon[265577]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:13:24 compute-0 jolly_solomon[265577]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 18:13:24 compute-0 jolly_solomon[265577]:        "osd_id": 2,
Dec  3 18:13:24 compute-0 jolly_solomon[265577]:        "osd_uuid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:13:24 compute-0 jolly_solomon[265577]:        "type": "bluestore"
Dec  3 18:13:24 compute-0 jolly_solomon[265577]:    },
Dec  3 18:13:24 compute-0 jolly_solomon[265577]:    "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": {
Dec  3 18:13:24 compute-0 jolly_solomon[265577]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:13:24 compute-0 jolly_solomon[265577]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 18:13:24 compute-0 jolly_solomon[265577]:        "osd_id": 0,
Dec  3 18:13:24 compute-0 jolly_solomon[265577]:        "osd_uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:13:24 compute-0 jolly_solomon[265577]:        "type": "bluestore"
Dec  3 18:13:24 compute-0 jolly_solomon[265577]:    }
Dec  3 18:13:24 compute-0 jolly_solomon[265577]: }
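
The JSON block printed by jolly_solomon is a ceph-volume-style inventory keyed by OSD UUID, giving each OSD's cluster fsid, backing logical volume, id, and store type. A self-contained sketch parsing it (abbreviated here to one of the three entries logged above):

    import json

    inventory = json.loads("""
    {
      "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": {
        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
        "osd_id": 0,
        "osd_uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
        "type": "bluestore"
      }
    }
    """)

    for osd in sorted(inventory.values(), key=lambda o: o["osd_id"]):
        print(f"osd.{osd['osd_id']}: {osd['device']} ({osd['type']})")
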
Dec  3 18:13:24 compute-0 python3.9[265779]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:13:24 compute-0 systemd[1]: libpod-647ac2ed0f4c1e6daebd941118b76aeed31d004886cd38568dc76801b5b4eafd.scope: Deactivated successfully.
Dec  3 18:13:24 compute-0 systemd[1]: libpod-647ac2ed0f4c1e6daebd941118b76aeed31d004886cd38568dc76801b5b4eafd.scope: Consumed 1.146s CPU time.
Dec  3 18:13:24 compute-0 podman[265791]: 2025-12-03 18:13:24.268874476 +0000 UTC m=+0.039122820 container died 647ac2ed0f4c1e6daebd941118b76aeed31d004886cd38568dc76801b5b4eafd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_solomon, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:13:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-f7f77b0045285ae562948893ee05dbf58199e73455b7c6c300a1a17325900357-merged.mount: Deactivated successfully.
Dec  3 18:13:24 compute-0 podman[265791]: 2025-12-03 18:13:24.339760677 +0000 UTC m=+0.110009001 container remove 647ac2ed0f4c1e6daebd941118b76aeed31d004886cd38568dc76801b5b4eafd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  3 18:13:24 compute-0 systemd[1]: libpod-conmon-647ac2ed0f4c1e6daebd941118b76aeed31d004886cd38568dc76801b5b4eafd.scope: Deactivated successfully.
Dec  3 18:13:24 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:13:24 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:13:24 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:13:24 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:13:24 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev b4d877a0-7517-4119-910e-95ebcbfb1778 does not exist
Dec  3 18:13:24 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 5813aa2a-0572-48e5-a768-8dffa48ed936 does not exist
Dec  3 18:13:24 compute-0 python3.9[265976]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764785603.5404181-34-118595453059512/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=3c08993ec5e3d5368a201a3a43b02eb90e20f943 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:13:25 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:13:25 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:13:25 compute-0 systemd[1]: session-52.scope: Deactivated successfully.
Dec  3 18:13:25 compute-0 systemd[1]: session-52.scope: Consumed 3.955s CPU time.
Dec  3 18:13:25 compute-0 systemd-logind[784]: Session 52 logged out. Waiting for processes to exit.
Dec  3 18:13:25 compute-0 systemd-logind[784]: Removed session 52.
Dec  3 18:13:26 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v418: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:13:28 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v419: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:13:28 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:13:29 compute-0 podman[158200]: time="2025-12-03T18:13:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:13:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:13:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32819 "" "Go-http-client/1.1"
Dec  3 18:13:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:13:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6838 "" "Go-http-client/1.1"
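
The two GET lines above are a client polling podman's REST API over its Unix socket. A stdlib-only sketch issuing the same containers/json request; the socket path is an assumption taken from the CONTAINER_HOST=unix:///run/podman/podman.sock setting in the podman_exporter config logged later in this section:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """Plain HTTP over a Unix socket."""
        def __init__(self, path: str):
            super().__init__("localhost")
            self._path = path
        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self._path)
            self.sock = s

    conn = UnixHTTPConnection("/run/podman/podman.sock")  # assumed rootful socket
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c["Names"][0], c["State"])
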
Dec  3 18:13:29 compute-0 podman[266001]: 2025-12-03 18:13:29.936989058 +0000 UTC m=+0.106032686 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 18:13:29 compute-0 podman[266002]: 2025-12-03 18:13:29.942240195 +0000 UTC m=+0.097470777 container health_status 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, name=ubi9-minimal, vcs-type=git, version=9.6, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible)
Dec  3 18:13:29 compute-0 podman[266012]: 2025-12-03 18:13:29.962395395 +0000 UTC m=+0.106430706 container health_status ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., version=9.4, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., architecture=x86_64, managed_by=edpm_ansible, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, name=ubi9, release-0.7.12=, release=1214.1726694543, com.redhat.component=ubi9-container)
Dec  3 18:13:29 compute-0 podman[266011]: 2025-12-03 18:13:29.967120579 +0000 UTC m=+0.119558234 container health_status f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  3 18:13:29 compute-0 podman[266004]: 2025-12-03 18:13:29.968206616 +0000 UTC m=+0.128544073 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm)
Dec  3 18:13:29 compute-0 podman[266003]: 2025-12-03 18:13:29.977309087 +0000 UTC m=+0.127747204 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 18:13:30 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v420: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:13:31 compute-0 systemd-logind[784]: New session 53 of user zuul.
Dec  3 18:13:31 compute-0 systemd[1]: Started Session 53 of User zuul.
Dec  3 18:13:31 compute-0 openstack_network_exporter[160319]: ERROR   18:13:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:13:31 compute-0 openstack_network_exporter[160319]: ERROR   18:13:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:13:31 compute-0 openstack_network_exporter[160319]: ERROR   18:13:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:13:31 compute-0 openstack_network_exporter[160319]: ERROR   18:13:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:13:31 compute-0 openstack_network_exporter[160319]: ERROR   18:13:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
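
These exporter errors indicate that no appctl control sockets were found for ovsdb-server or ovn-northd under the paths the container mounts, and that no userspace (dpif-netdev) datapath exists to answer the pmd-*-show calls; ovn-northd in particular runs on the control plane, not on a compute node, so on this host the messages are expected noise. A quick probe of the two host directories the container maps in (paths taken from its config_data above; illustrative only):

    import glob

    # Host-side directories mounted into openstack_network_exporter as
    # /run/openvswitch and /run/ovn; appctl control sockets are named *.ctl.
    for pattern in ("/var/run/openvswitch/*.ctl", "/var/lib/openvswitch/ovn/*.ctl"):
        hits = glob.glob(pattern)
        print(pattern, "->", hits or "no control sockets (matches the errors above)")
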
Dec  3 18:13:32 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v421: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:13:32 compute-0 python3.9[266277]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 18:13:33 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:13:33 compute-0 python3.9[266433]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:13:34 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v422: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:13:34 compute-0 python3.9[266585]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:13:37 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v423: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:13:37 compute-0 python3.9[266735]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 18:13:38 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v424: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:13:38 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:13:38 compute-0 python3.9[266887]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec  3 18:13:40 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v425: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:13:40 compute-0 python3.9[267039]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  3 18:13:41 compute-0 podman[267095]: 2025-12-03 18:13:41.759523474 +0000 UTC m=+0.147729269 container health_status 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 18:13:42 compute-0 python3.9[267147]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  3 18:13:42 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v426: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:13:43 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:13:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:13:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:13:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:13:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:13:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:13:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:13:44 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v427: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:13:44 compute-0 python3.9[267300]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  3 18:13:46 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v428: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:13:46 compute-0 python3[267455]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
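
In these records journald escapes embedded newlines as #012 (octal 12, i.e. "\n"), so the content= parameter above is a flattened YAML document. Decoding it recovers the snippet the task writes to /var/lib/edpm-config/firewall/ovn.yaml; a small sketch, abbreviated to the first two rules:

    # "#012" is journald's escape for a newline (octal 012).
    escaped = (
        '- rule_name: 118 neutron vxlan networks#012'
        '  rule:#012'
        '    proto: udp#012'
        '    dport: 4789#012'
        '- rule_name: 119 neutron geneve networks#012'
        '  rule:#012'
        '    proto: udp#012'
        '    dport: 6081#012'
        '    state: ["UNTRACKED"]'
    )
    print(escaped.replace('#012', '\n'))  # prints the original YAML rules
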
Dec  3 18:13:47 compute-0 python3.9[267607]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:13:48 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v429: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:13:48 compute-0 python3.9[267759]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:13:48 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:13:48 compute-0 python3.9[267837]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:13:49 compute-0 python3.9[267989]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:13:50 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v430: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:13:50 compute-0 python3.9[268068]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.4flv93w4 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:13:51 compute-0 python3.9[268220]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:13:51 compute-0 python3.9[268298]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:13:52 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v431: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:13:52 compute-0 python3.9[268450]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
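
Before rewriting the edpm-*.nft files, the playbook snapshots the live ruleset with `nft -j list ruleset`, which emits one JSON document whose "nftables" array mixes table, chain, and rule objects. A short sketch running the same command and listing just the chains:

    import json
    import subprocess

    # Same command the task above runs (requires root on a host with nftables).
    out = subprocess.run(
        ["nft", "-j", "list", "ruleset"],
        capture_output=True, text=True, check=True,
    ).stdout

    for item in json.loads(out)["nftables"]:
        if "chain" in item:
            c = item["chain"]
            print(f'{c["family"]} {c["table"]} {c["name"]}')
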
Dec  3 18:13:53 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:13:53 compute-0 python3[268603]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  3 18:13:54 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v432: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:13:54 compute-0 python3.9[268755]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:13:55 compute-0 python3.9[268833]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:13:56 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v433: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:13:56 compute-0 python3.9[268985]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:13:56 compute-0 python3.9[269063]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:13:57 compute-0 python3.9[269215]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:13:58 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v434: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:13:58 compute-0 python3.9[269293]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:13:58 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:13:59 compute-0 python3.9[269445]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:13:59 compute-0 podman[158200]: time="2025-12-03T18:13:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:13:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:13:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32819 "" "Go-http-client/1.1"
Dec  3 18:13:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:13:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6846 "" "Go-http-client/1.1"
Dec  3 18:13:59 compute-0 python3.9[269523]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:14:00 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v435: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:14:00 compute-0 podman[269648]: 2025-12-03 18:14:00.708370021 +0000 UTC m=+0.124580027 container health_status 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, name=ubi9-minimal, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1755695350, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, vcs-type=git, com.redhat.component=ubi9-minimal-container, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, architecture=x86_64, vendor=Red Hat, Inc.)
Dec  3 18:14:00 compute-0 podman[269647]: 2025-12-03 18:14:00.70835721 +0000 UTC m=+0.120641070 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=edpm)
Dec  3 18:14:00 compute-0 podman[269661]: 2025-12-03 18:14:00.711869276 +0000 UTC m=+0.099668612 container health_status f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 18:14:00 compute-0 podman[269649]: 2025-12-03 18:14:00.740512091 +0000 UTC m=+0.142758068 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller)
Dec  3 18:14:00 compute-0 podman[269655]: 2025-12-03 18:14:00.744304904 +0000 UTC m=+0.136418265 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.license=GPLv2, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec  3 18:14:00 compute-0 podman[269671]: 2025-12-03 18:14:00.744337254 +0000 UTC m=+0.123926581 container health_status ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, io.openshift.tags=base rhel9, name=ubi9, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, managed_by=edpm_ansible, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public)
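[editor's note] The health_status events above are podman healthchecks firing on schedule: per each container's config_data, the host directory /var/lib/openstack/healthchecks/<name> is mounted read-only at /openstack and the configured '/openstack/healthcheck ...' test is run, with health_status and health_failing_streak recorded on the event. One of these checks can be re-run by hand; the container name is taken from the log, the rest is stock podman usage:

    # Execute the container's configured healthcheck once; exit 0 means healthy.
    podman healthcheck run openstack_network_exporter; echo "exit=$?"
    # Show the recorded health state and failing streak for the same container.
    podman inspect openstack_network_exporter | grep -i -A4 '"Health'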
Dec  3 18:14:00 compute-0 python3.9[269785]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:14:01 compute-0 openstack_network_exporter[160319]: ERROR   18:14:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:14:01 compute-0 openstack_network_exporter[160319]: ERROR   18:14:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:14:01 compute-0 openstack_network_exporter[160319]: ERROR   18:14:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:14:01 compute-0 openstack_network_exporter[160319]: ERROR   18:14:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:14:01 compute-0 openstack_network_exporter[160319]: ERROR   18:14:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
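[editor's note] These exporter errors repeat on every poll: openstack_network_exporter (config_data above) expects ovsdb-server and ovn-northd control sockets under the rundirs it mounts (/var/run/openvswitch and /var/lib/openvswitch/ovn) and finds neither, and the dpif-netdev/pmd-* calls fail because no userspace datapath exists, which is normal when the kernel datapath is in use. On a node where OVS is healthy, the sockets can be checked from the host, assuming the conventional <daemon>.<pid>.ctl layout:

    # Control sockets live in the OVS/OVN rundirs as <daemon>.<pid>.ctl.
    ls /var/run/openvswitch/*.ctl /run/ovn/*.ctl 2>/dev/null
    # Query ovsdb-server directly through its control socket.
    ovs-appctl -t /var/run/openvswitch/ovsdb-server.$(cat /var/run/openvswitch/ovsdb-server.pid).ctl version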
Dec  3 18:14:01 compute-0 python3.9[269873]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:14:02 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v436: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:14:02 compute-0 python3.9[270025]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:14:03 compute-0 python3.9[270180]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
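[editor's note] The blockinfile task keeps a marked include block in /etc/sysconfig/nftables.conf, validating the candidate file with 'nft -c -f %s' before moving it into place. Decoding the #012 (newline) escapes in the invocation, the managed block it maintains is:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK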
Dec  3 18:14:03 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:14:04 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v437: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:14:04 compute-0 python3.9[270332]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
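[editor's note] Together with the task at 18:14:02, this is a check-then-apply pattern: the full edpm ruleset is concatenated and dry-run through 'nft -c -f -' (parse and reference checks only, nothing committed), and only the chain definitions are then loaded. The equivalent shell, reconstructed from the two command invocations:

    set -o pipefail
    # Dry-run the assembled ruleset; -c checks it without applying.
    cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft | nft -c -f -
    # Apply just the chains file.
    nft -f /etc/nftables/edpm-chains.nft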
Dec  3 18:14:06 compute-0 python3.9[270485]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 18:14:06 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v438: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:14:07 compute-0 python3.9[270637]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:14:08 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v439: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:14:08 compute-0 python3.9[270787]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 18:14:08 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:14:09 compute-0 python3.9[270940]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:1e:0a:f2:93:49:d5" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:14:09 compute-0 ovs-vsctl[270941]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:1e:0a:f2:93:49:d5 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Dec  3 18:14:10 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v440: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:14:10 compute-0 python3.9[271093]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
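[editor's note] The ovs-vsctl call at 18:14:09 seeds the Open_vSwitch external_ids that ovn-controller reads (SB remote, encap IP/type, bridge and chassis-MAC mappings), and the follow-up pipeline above merely greps 'ovs-vsctl show' for a Manager record. The same settings can be read back individually or as a whole map:

    # Read back single keys written by the configuration task.
    ovs-vsctl get Open_vSwitch . external_ids:ovn-remote
    ovs-vsctl get Open_vSwitch . external_ids:ovn-encap-ip
    # Dump the full map, then repeat the Manager presence check.
    ovs-vsctl --columns=external_ids list Open_vSwitch
    ovs-vsctl show | grep -q "Manager" && echo "manager configured"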
Dec  3 18:14:11 compute-0 python3.9[271246]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 18:14:11 compute-0 podman[271296]: 2025-12-03 18:14:11.910112882 +0000 UTC m=+0.077280639 container health_status 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 18:14:12 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v441: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:14:12 compute-0 python3.9[271425]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:14:13 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:14:13 compute-0 python3.9[271577]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:14:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_18:14:13
Dec  3 18:14:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 18:14:13 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 18:14:13 compute-0 ceph-mgr[193091]: [balancer INFO root] pools ['vms', 'backups', 'cephfs.cephfs.data', 'images', 'default.rgw.meta', 'volumes', '.mgr', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.log']
Dec  3 18:14:13 compute-0 ceph-mgr[193091]: [balancer INFO root] prepared 0/10 changes
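[editor's note] A routine balancer pass: mode upmap, max misplaced 5%, and 'prepared 0/10 changes' meaning the PG placement across the listed pools needed no optimization. The same information is available on demand (standard ceph CLI, admin keyring assumed):

    # Current mode, last optimization and plan state.
    ceph balancer status
    # Score the current distribution (lower is better).
    ceph balancer eval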
Dec  3 18:14:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:14:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:14:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:14:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:14:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:14:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:14:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 18:14:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:14:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 18:14:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:14:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:14:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:14:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:14:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:14:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:14:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:14:14 compute-0 python3.9[271655]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:14:14 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v442: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:14:14 compute-0 python3.9[271807]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:14:15 compute-0 python3.9[271885]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:14:16 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v443: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:14:16 compute-0 python3.9[272037]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:14:17 compute-0 python3.9[272189]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:14:18 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v444: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:14:18 compute-0 python3.9[272267]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:14:18 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:14:19 compute-0 python3.9[272419]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:14:19 compute-0 ceph-osd[206694]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 18:14:19 compute-0 ceph-osd[206694]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.2 total, 600.0 interval#012Cumulative writes: 5388 writes, 23K keys, 5388 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 5388 writes, 766 syncs, 7.03 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5388 writes, 23K keys, 5388 commit groups, 1.0 writes per commit group, ingest: 18.43 MB, 0.03 MB/s#012Interval WAL: 5388 writes, 766 syncs, 7.03 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.034       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.034       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.034       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55fc1507add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55fc1507add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Dec  3 18:14:20 compute-0 python3.9[272498]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:14:20 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v445: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:14:20 compute-0 python3.9[272650]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 18:14:20 compute-0 systemd[1]: Reloading.
Dec  3 18:14:21 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 18:14:21 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 18:14:22 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v446: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:14:22 compute-0 python3.9[272840]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:14:22 compute-0 python3.9[272918]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:14:23 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:14:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 18:14:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:14:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 18:14:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:14:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:14:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:14:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:14:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:14:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:14:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:14:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:14:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:14:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 18:14:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:14:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:14:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:14:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 18:14:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:14:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 18:14:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:14:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:14:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:14:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
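[editor's note] The pg_autoscaler lines all follow one formula: pg target = (fraction of space used) × bias × (target PGs per OSD × OSD count), then quantized to a power of two. The logged numbers are consistent with a factor of 300, i.e. the default mon_target_pg_per_osd=100 on this 3-OSD cluster (an inference from the arithmetic, not stated in the log): for '.mgr', 7.185749983720779e-06 × 1.0 × 300 = 0.0021557..., matching its logged pg target, and for 'cephfs.cephfs.meta', 5.087256625643029e-07 × 4.0 × 300 = 0.00061047..., matching as well before quantization to 16.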
Dec  3 18:14:23 compute-0 python3.9[273072]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:14:24 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v447: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:14:24 compute-0 python3.9[273150]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:14:25 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:14:25 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:14:25 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:14:25 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:14:25 compute-0 python3.9[273421]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 18:14:25 compute-0 systemd[1]: Reloading.
Dec  3 18:14:25 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 18:14:25 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 18:14:25 compute-0 systemd[1]: Starting Create netns directory...
Dec  3 18:14:26 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec  3 18:14:26 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec  3 18:14:26 compute-0 systemd[1]: Finished Create netns directory.
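[editor's note] The netns-placeholder rollout mirrors edpm-container-shutdown just above: install a unit file plus a 91-*.preset under /etc/systemd/system-preset, then let the ansible systemd module daemon-reload, enable and start it (the 'Reloading.' and generator messages are side effects of the reload). A rough manual equivalent:

    systemctl daemon-reload
    systemctl enable --now netns-placeholder.service
    # The unit is a oneshot that creates the netns directory and exits,
    # hence the immediate 'Deactivated successfully' / 'Finished' pair above.
    systemctl status netns-placeholder.service --no-pager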
Dec  3 18:14:26 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v448: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:14:26 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:14:26 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:14:26 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:14:26 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:14:26 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 18:14:26 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:14:26 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 18:14:26 compute-0 python3.9[273745]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:14:26 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:14:26 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 8eb92cb1-9cd2-44c7-aa2d-587da6d713b9 does not exist
Dec  3 18:14:26 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev ee86546c-2d18-4b75-952e-3a355985cc2c does not exist
Dec  3 18:14:26 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 57ce6b7e-c470-4318-932e-4ada3d34a554 does not exist
Dec  3 18:14:26 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 18:14:26 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 18:14:26 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 18:14:26 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:14:26 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:14:26 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:14:27 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:14:27 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:14:27 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:14:27 compute-0 python3.9[274020]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:14:27 compute-0 podman[274035]: 2025-12-03 18:14:27.779169901 +0000 UTC m=+0.033143535 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:14:27 compute-0 podman[274035]: 2025-12-03 18:14:27.897392483 +0000 UTC m=+0.151366087 container create 12357176dabd60f650aa5f65b49866c108b8c0b665af3827982b23ffdbe88eb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:14:27 compute-0 systemd[1]: Started libpod-conmon-12357176dabd60f650aa5f65b49866c108b8c0b665af3827982b23ffdbe88eb7.scope.
Dec  3 18:14:27 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:14:28 compute-0 podman[274035]: 2025-12-03 18:14:28.055289947 +0000 UTC m=+0.309263581 container init 12357176dabd60f650aa5f65b49866c108b8c0b665af3827982b23ffdbe88eb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:14:28 compute-0 podman[274035]: 2025-12-03 18:14:28.064808868 +0000 UTC m=+0.318782472 container start 12357176dabd60f650aa5f65b49866c108b8c0b665af3827982b23ffdbe88eb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Dec  3 18:14:28 compute-0 podman[274035]: 2025-12-03 18:14:28.07023481 +0000 UTC m=+0.324208424 container attach 12357176dabd60f650aa5f65b49866c108b8c0b665af3827982b23ffdbe88eb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec  3 18:14:28 compute-0 brave_ritchie[274074]: 167 167
Dec  3 18:14:28 compute-0 systemd[1]: libpod-12357176dabd60f650aa5f65b49866c108b8c0b665af3827982b23ffdbe88eb7.scope: Deactivated successfully.
Dec  3 18:14:28 compute-0 podman[274035]: 2025-12-03 18:14:28.074225347 +0000 UTC m=+0.328198951 container died 12357176dabd60f650aa5f65b49866c108b8c0b665af3827982b23ffdbe88eb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ritchie, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec  3 18:14:28 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v449: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:14:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-3362de5edcba3c65b041bbd5969874da4d5f17fa9e9ebb53031707d19410731e-merged.mount: Deactivated successfully.
Dec  3 18:14:28 compute-0 podman[274035]: 2025-12-03 18:14:28.181282417 +0000 UTC m=+0.435256021 container remove 12357176dabd60f650aa5f65b49866c108b8c0b665af3827982b23ffdbe88eb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:14:28 compute-0 ceph-osd[207851]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 18:14:28 compute-0 ceph-osd[207851]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.2 total, 600.0 interval#012Cumulative writes: 6596 writes, 27K keys, 6596 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 6596 writes, 1148 syncs, 5.75 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 6596 writes, 27K keys, 6596 commit groups, 1.0 writes per commit group, ingest: 19.28 MB, 0.03 MB/s#012Interval WAL: 6596 writes, 1148 syncs, 5.75 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5562f64d34b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5562f64d34b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Dec  3 18:14:28 compute-0 systemd[1]: libpod-conmon-12357176dabd60f650aa5f65b49866c108b8c0b665af3827982b23ffdbe88eb7.scope: Deactivated successfully.
Dec  3 18:14:28 compute-0 python3.9[274143]: ansible-ansible.legacy.file Invoked with group=zuul mode=0700 owner=zuul setype=container_file_t dest=/var/lib/openstack/healthchecks/ovn_controller/ _original_basename=healthcheck recurse=False state=file path=/var/lib/openstack/healthchecks/ovn_controller/ force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:14:28 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:14:28 compute-0 podman[274151]: 2025-12-03 18:14:28.401954265 +0000 UTC m=+0.094408033 container create a0689081f70e16f9330877530a9444ee20505818450576bd539d591edcee8193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_raman, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef)
Dec  3 18:14:28 compute-0 podman[274151]: 2025-12-03 18:14:28.336137217 +0000 UTC m=+0.028591005 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:14:28 compute-0 systemd[1]: Started libpod-conmon-a0689081f70e16f9330877530a9444ee20505818450576bd539d591edcee8193.scope.
Dec  3 18:14:28 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:14:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d30df66e255e948215c5e9565136de02e44fe77f0cac2e9d098d4a74bd0c438/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:14:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d30df66e255e948215c5e9565136de02e44fe77f0cac2e9d098d4a74bd0c438/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:14:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d30df66e255e948215c5e9565136de02e44fe77f0cac2e9d098d4a74bd0c438/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:14:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d30df66e255e948215c5e9565136de02e44fe77f0cac2e9d098d4a74bd0c438/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:14:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d30df66e255e948215c5e9565136de02e44fe77f0cac2e9d098d4a74bd0c438/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 18:14:28 compute-0 podman[274151]: 2025-12-03 18:14:28.521764815 +0000 UTC m=+0.214218583 container init a0689081f70e16f9330877530a9444ee20505818450576bd539d591edcee8193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_raman, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  3 18:14:28 compute-0 podman[274151]: 2025-12-03 18:14:28.536694537 +0000 UTC m=+0.229148305 container start a0689081f70e16f9330877530a9444ee20505818450576bd539d591edcee8193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_raman, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  3 18:14:28 compute-0 podman[274151]: 2025-12-03 18:14:28.577219891 +0000 UTC m=+0.269673699 container attach a0689081f70e16f9330877530a9444ee20505818450576bd539d591edcee8193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:14:29 compute-0 python3.9[274323]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:14:29 compute-0 angry_raman[274191]: --> passed data devices: 0 physical, 3 LVM
Dec  3 18:14:29 compute-0 angry_raman[274191]: --> relative data size: 1.0
Dec  3 18:14:29 compute-0 angry_raman[274191]: --> All data devices are unavailable
Dec  3 18:14:29 compute-0 systemd[1]: libpod-a0689081f70e16f9330877530a9444ee20505818450576bd539d591edcee8193.scope: Deactivated successfully.
Dec  3 18:14:29 compute-0 podman[274151]: 2025-12-03 18:14:29.635358257 +0000 UTC m=+1.327812045 container died a0689081f70e16f9330877530a9444ee20505818450576bd539d591edcee8193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_raman, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec  3 18:14:29 compute-0 systemd[1]: libpod-a0689081f70e16f9330877530a9444ee20505818450576bd539d591edcee8193.scope: Consumed 1.039s CPU time.
Dec  3 18:14:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d30df66e255e948215c5e9565136de02e44fe77f0cac2e9d098d4a74bd0c438-merged.mount: Deactivated successfully.
Dec  3 18:14:29 compute-0 podman[274151]: 2025-12-03 18:14:29.7092017 +0000 UTC m=+1.401655468 container remove a0689081f70e16f9330877530a9444ee20505818450576bd539d591edcee8193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_raman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:14:29 compute-0 systemd[1]: libpod-conmon-a0689081f70e16f9330877530a9444ee20505818450576bd539d591edcee8193.scope: Deactivated successfully.
Dec  3 18:14:29 compute-0 podman[158200]: time="2025-12-03T18:14:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:14:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:14:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32819 "" "Go-http-client/1.1"
Dec  3 18:14:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:14:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6837 "" "Go-http-client/1.1"
Dec  3 18:14:30 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v450: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:14:30 compute-0 python3.9[274611]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:14:30 compute-0 podman[274679]: 2025-12-03 18:14:30.520853631 +0000 UTC m=+0.063328230 container create 0e14ce600049ad267876d8d59a446d1c56d3274522c9c026dbda5fc9561f063d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hypatia, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec  3 18:14:30 compute-0 systemd[1]: Started libpod-conmon-0e14ce600049ad267876d8d59a446d1c56d3274522c9c026dbda5fc9561f063d.scope.
Dec  3 18:14:30 compute-0 podman[274679]: 2025-12-03 18:14:30.495898435 +0000 UTC m=+0.038373074 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:14:30 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:14:30 compute-0 podman[274679]: 2025-12-03 18:14:30.638879487 +0000 UTC m=+0.181354126 container init 0e14ce600049ad267876d8d59a446d1c56d3274522c9c026dbda5fc9561f063d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hypatia, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec  3 18:14:30 compute-0 podman[274679]: 2025-12-03 18:14:30.650626201 +0000 UTC m=+0.193100810 container start 0e14ce600049ad267876d8d59a446d1c56d3274522c9c026dbda5fc9561f063d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  3 18:14:30 compute-0 podman[274679]: 2025-12-03 18:14:30.656236958 +0000 UTC m=+0.198711597 container attach 0e14ce600049ad267876d8d59a446d1c56d3274522c9c026dbda5fc9561f063d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hypatia, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec  3 18:14:30 compute-0 hardcore_hypatia[274731]: 167 167
Dec  3 18:14:30 compute-0 podman[274679]: 2025-12-03 18:14:30.65959304 +0000 UTC m=+0.202067659 container died 0e14ce600049ad267876d8d59a446d1c56d3274522c9c026dbda5fc9561f063d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:14:30 compute-0 systemd[1]: libpod-0e14ce600049ad267876d8d59a446d1c56d3274522c9c026dbda5fc9561f063d.scope: Deactivated successfully.
Dec  3 18:14:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e2b7722e947a85e8c194b7fcc173a957713c6c051acaddf81a88c516dd20165-merged.mount: Deactivated successfully.
Dec  3 18:14:30 compute-0 podman[274679]: 2025-12-03 18:14:30.719035923 +0000 UTC m=+0.261510532 container remove 0e14ce600049ad267876d8d59a446d1c56d3274522c9c026dbda5fc9561f063d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec  3 18:14:30 compute-0 systemd[1]: libpod-conmon-0e14ce600049ad267876d8d59a446d1c56d3274522c9c026dbda5fc9561f063d.scope: Deactivated successfully.
Dec  3 18:14:30 compute-0 podman[274763]: 2025-12-03 18:14:30.875684157 +0000 UTC m=+0.102679074 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, managed_by=edpm_ansible)
Dec  3 18:14:30 compute-0 podman[274772]: 2025-12-03 18:14:30.895860177 +0000 UTC m=+0.097706793 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  3 18:14:30 compute-0 podman[274765]: 2025-12-03 18:14:30.901469183 +0000 UTC m=+0.118232912 container health_status f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 18:14:30 compute-0 python3.9[274748]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/var/lib/kolla/config_files/ovn_controller.json _original_basename=.96yi9182 recurse=False state=file path=/var/lib/kolla/config_files/ovn_controller.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:14:30 compute-0 podman[274777]: 2025-12-03 18:14:30.922926584 +0000 UTC m=+0.127842466 container health_status ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, release-0.7.12=, managed_by=edpm_ansible, release=1214.1726694543, com.redhat.component=ubi9-container, distribution-scope=public, vendor=Red Hat, Inc., config_id=edpm, io.openshift.expose-services=, vcs-type=git, build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  3 18:14:30 compute-0 podman[274764]: 2025-12-03 18:14:30.932715932 +0000 UTC m=+0.142785398 container health_status 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, maintainer=Red Hat, Inc., release=1755695350, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, architecture=x86_64, build-date=2025-08-20T13:12:41, config_id=edpm, io.openshift.expose-services=)
Dec  3 18:14:30 compute-0 podman[274838]: 2025-12-03 18:14:30.946774834 +0000 UTC m=+0.067733987 container create 91599cd20ed1ddb61dc69c8c29be294c28fd6aee598b85d21bd7b8570afcfa84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_brahmagupta, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  3 18:14:30 compute-0 podman[274766]: 2025-12-03 18:14:30.95649966 +0000 UTC m=+0.169238371 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec  3 18:14:31 compute-0 systemd[1]: Started libpod-conmon-91599cd20ed1ddb61dc69c8c29be294c28fd6aee598b85d21bd7b8570afcfa84.scope.
Dec  3 18:14:31 compute-0 podman[274838]: 2025-12-03 18:14:30.922396131 +0000 UTC m=+0.043355304 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:14:31 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:14:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51eb2e00541efebd7838146c9e84dc5e16709a2dad07acc6fd665e2c3786b858/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:14:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51eb2e00541efebd7838146c9e84dc5e16709a2dad07acc6fd665e2c3786b858/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:14:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51eb2e00541efebd7838146c9e84dc5e16709a2dad07acc6fd665e2c3786b858/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:14:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51eb2e00541efebd7838146c9e84dc5e16709a2dad07acc6fd665e2c3786b858/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:14:31 compute-0 podman[274838]: 2025-12-03 18:14:31.0948721 +0000 UTC m=+0.215831283 container init 91599cd20ed1ddb61dc69c8c29be294c28fd6aee598b85d21bd7b8570afcfa84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_brahmagupta, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:14:31 compute-0 podman[274838]: 2025-12-03 18:14:31.10477233 +0000 UTC m=+0.225731493 container start 91599cd20ed1ddb61dc69c8c29be294c28fd6aee598b85d21bd7b8570afcfa84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_brahmagupta, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec  3 18:14:31 compute-0 podman[274838]: 2025-12-03 18:14:31.109482004 +0000 UTC m=+0.230441187 container attach 91599cd20ed1ddb61dc69c8c29be294c28fd6aee598b85d21bd7b8570afcfa84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  3 18:14:31 compute-0 openstack_network_exporter[160319]: ERROR   18:14:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:14:31 compute-0 openstack_network_exporter[160319]: ERROR   18:14:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:14:31 compute-0 openstack_network_exporter[160319]: ERROR   18:14:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:14:31 compute-0 openstack_network_exporter[160319]: ERROR   18:14:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:14:31 compute-0 openstack_network_exporter[160319]: ERROR   18:14:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:14:31 compute-0 python3.9[275055]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]: {
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:    "0": [
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:        {
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:            "devices": [
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:                "/dev/loop3"
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:            ],
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:            "lv_name": "ceph_lv0",
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:            "lv_size": "21470642176",
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=973fbbc8-5aff-4a53-bee8-42e5a6788dd6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:            "lv_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:            "name": "ceph_lv0",
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:            "tags": {
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:                "ceph.block_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:                "ceph.cluster_name": "ceph",
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:                "ceph.crush_device_class": "",
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:                "ceph.encrypted": "0",
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:                "ceph.osd_fsid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:                "ceph.osd_id": "0",
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:                "ceph.type": "block",
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:                "ceph.vdo": "0"
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:            },
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:            "type": "block",
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:            "vg_name": "ceph_vg0"
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:        }
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:    ],
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:    "1": [
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:        {
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:            "devices": [
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:                "/dev/loop4"
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:            ],
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:            "lv_name": "ceph_lv1",
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:            "lv_size": "21470642176",
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1e2b0083-5293-47cb-a3d1-bc27cedc4ede,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:            "lv_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:            "name": "ceph_lv1",
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:            "tags": {
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:                "ceph.block_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:                "ceph.cluster_name": "ceph",
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:                "ceph.crush_device_class": "",
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:                "ceph.encrypted": "0",
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:                "ceph.osd_fsid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:                "ceph.osd_id": "1",
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:                "ceph.type": "block",
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:                "ceph.vdo": "0"
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:            },
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:            "type": "block",
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:            "vg_name": "ceph_vg1"
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:        }
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:    ],
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:    "2": [
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:        {
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:            "devices": [
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:                "/dev/loop5"
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:            ],
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:            "lv_name": "ceph_lv2",
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:            "lv_size": "21470642176",
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2abec9de-afba-437e-9a17-384a1dd8cd50,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:            "lv_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:            "name": "ceph_lv2",
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:            "tags": {
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:                "ceph.block_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:                "ceph.cluster_name": "ceph",
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:                "ceph.crush_device_class": "",
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:                "ceph.encrypted": "0",
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:                "ceph.osd_fsid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:                "ceph.osd_id": "2",
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:                "ceph.type": "block",
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:                "ceph.vdo": "0"
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:            },
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:            "type": "block",
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:            "vg_name": "ceph_vg2"
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:        }
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]:    ]
Dec  3 18:14:31 compute-0 cranky_brahmagupta[274911]: }
Dec  3 18:14:32 compute-0 systemd[1]: libpod-91599cd20ed1ddb61dc69c8c29be294c28fd6aee598b85d21bd7b8570afcfa84.scope: Deactivated successfully.
Dec  3 18:14:32 compute-0 podman[274838]: 2025-12-03 18:14:32.016208254 +0000 UTC m=+1.137167497 container died 91599cd20ed1ddb61dc69c8c29be294c28fd6aee598b85d21bd7b8570afcfa84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_brahmagupta, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec  3 18:14:32 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v451: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:14:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-51eb2e00541efebd7838146c9e84dc5e16709a2dad07acc6fd665e2c3786b858-merged.mount: Deactivated successfully.
Dec  3 18:14:32 compute-0 podman[274838]: 2025-12-03 18:14:32.202338413 +0000 UTC m=+1.323297566 container remove 91599cd20ed1ddb61dc69c8c29be294c28fd6aee598b85d21bd7b8570afcfa84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef)
Dec  3 18:14:32 compute-0 systemd[1]: libpod-conmon-91599cd20ed1ddb61dc69c8c29be294c28fd6aee598b85d21bd7b8570afcfa84.scope: Deactivated successfully.
Dec  3 18:14:33 compute-0 podman[275389]: 2025-12-03 18:14:33.022229103 +0000 UTC m=+0.046667594 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:14:33 compute-0 podman[275389]: 2025-12-03 18:14:33.141274684 +0000 UTC m=+0.165713135 container create e6edc955b800c02c3cd456d4ebb4968098aefe1b3145b1ed6e11943710dc6b3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_mirzakhani, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:14:33 compute-0 systemd[1]: Started libpod-conmon-e6edc955b800c02c3cd456d4ebb4968098aefe1b3145b1ed6e11943710dc6b3c.scope.
Dec  3 18:14:33 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:14:33 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:14:33 compute-0 podman[275389]: 2025-12-03 18:14:33.40054729 +0000 UTC m=+0.424985761 container init e6edc955b800c02c3cd456d4ebb4968098aefe1b3145b1ed6e11943710dc6b3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec  3 18:14:33 compute-0 podman[275389]: 2025-12-03 18:14:33.418837084 +0000 UTC m=+0.443275565 container start e6edc955b800c02c3cd456d4ebb4968098aefe1b3145b1ed6e11943710dc6b3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  3 18:14:33 compute-0 zen_mirzakhani[275458]: 167 167
Dec  3 18:14:33 compute-0 systemd[1]: libpod-e6edc955b800c02c3cd456d4ebb4968098aefe1b3145b1ed6e11943710dc6b3c.scope: Deactivated successfully.
Dec  3 18:14:33 compute-0 podman[275389]: 2025-12-03 18:14:33.437481317 +0000 UTC m=+0.461919798 container attach e6edc955b800c02c3cd456d4ebb4968098aefe1b3145b1ed6e11943710dc6b3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_mirzakhani, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  3 18:14:33 compute-0 podman[275389]: 2025-12-03 18:14:33.43803795 +0000 UTC m=+0.462476421 container died e6edc955b800c02c3cd456d4ebb4968098aefe1b3145b1ed6e11943710dc6b3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_mirzakhani, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  3 18:14:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-f01f25ac3e6ee9660c45ce51f5460e0dfc4412f931d8786be3392d86b958caa2-merged.mount: Deactivated successfully.
Dec  3 18:14:33 compute-0 podman[275389]: 2025-12-03 18:14:33.506421221 +0000 UTC m=+0.530859682 container remove e6edc955b800c02c3cd456d4ebb4968098aefe1b3145b1ed6e11943710dc6b3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:14:33 compute-0 systemd[1]: libpod-conmon-e6edc955b800c02c3cd456d4ebb4968098aefe1b3145b1ed6e11943710dc6b3c.scope: Deactivated successfully.
Dec  3 18:14:33 compute-0 podman[275506]: 2025-12-03 18:14:33.814586114 +0000 UTC m=+0.141199929 container create 0e50b72c1e9bbe68f711081530c1c99c99a34d5ee6c586044894661b602eeb84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec  3 18:14:33 compute-0 podman[275506]: 2025-12-03 18:14:33.732568132 +0000 UTC m=+0.059181977 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:14:33 compute-0 systemd[1]: Started libpod-conmon-0e50b72c1e9bbe68f711081530c1c99c99a34d5ee6c586044894661b602eeb84.scope.
Dec  3 18:14:33 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:14:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cb50dc8526b4de942ceb3ef890bdee674bf4d37547770cc3dff073b247dffbd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:14:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cb50dc8526b4de942ceb3ef890bdee674bf4d37547770cc3dff073b247dffbd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:14:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cb50dc8526b4de942ceb3ef890bdee674bf4d37547770cc3dff073b247dffbd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:14:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cb50dc8526b4de942ceb3ef890bdee674bf4d37547770cc3dff073b247dffbd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:14:34 compute-0 podman[275506]: 2025-12-03 18:14:34.021360476 +0000 UTC m=+0.347974321 container init 0e50b72c1e9bbe68f711081530c1c99c99a34d5ee6c586044894661b602eeb84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_jepsen, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True)
Dec  3 18:14:34 compute-0 podman[275506]: 2025-12-03 18:14:34.030922357 +0000 UTC m=+0.357536172 container start 0e50b72c1e9bbe68f711081530c1c99c99a34d5ee6c586044894661b602eeb84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  3 18:14:34 compute-0 podman[275506]: 2025-12-03 18:14:34.038717837 +0000 UTC m=+0.365331752 container attach 0e50b72c1e9bbe68f711081530c1c99c99a34d5ee6c586044894661b602eeb84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  3 18:14:34 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v452: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:14:34 compute-0 ceph-osd[208881]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 18:14:34 compute-0 ceph-osd[208881]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.2 total, 600.0 interval#012Cumulative writes: 5526 writes, 23K keys, 5526 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 5526 writes, 821 syncs, 6.73 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5526 writes, 23K keys, 5526 commit groups, 1.0 writes per commit group, ingest: 18.46 MB, 0.03 MB/s#012Interval WAL: 5526 writes, 821 syncs, 6.73 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.013       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.013       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.013       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55ab999451f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55ab999451f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Dec  3 18:14:34 compute-0 python3.9[275655]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Dec  3 18:14:35 compute-0 goofy_jepsen[275554]: {
Dec  3 18:14:35 compute-0 goofy_jepsen[275554]:    "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
Dec  3 18:14:35 compute-0 goofy_jepsen[275554]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:14:35 compute-0 goofy_jepsen[275554]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 18:14:35 compute-0 goofy_jepsen[275554]:        "osd_id": 1,
Dec  3 18:14:35 compute-0 goofy_jepsen[275554]:        "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:14:35 compute-0 goofy_jepsen[275554]:        "type": "bluestore"
Dec  3 18:14:35 compute-0 goofy_jepsen[275554]:    },
Dec  3 18:14:35 compute-0 goofy_jepsen[275554]:    "2abec9de-afba-437e-9a17-384a1dd8cd50": {
Dec  3 18:14:35 compute-0 goofy_jepsen[275554]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:14:35 compute-0 goofy_jepsen[275554]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 18:14:35 compute-0 goofy_jepsen[275554]:        "osd_id": 2,
Dec  3 18:14:35 compute-0 goofy_jepsen[275554]:        "osd_uuid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:14:35 compute-0 goofy_jepsen[275554]:        "type": "bluestore"
Dec  3 18:14:35 compute-0 goofy_jepsen[275554]:    },
Dec  3 18:14:35 compute-0 goofy_jepsen[275554]:    "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": {
Dec  3 18:14:35 compute-0 goofy_jepsen[275554]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:14:35 compute-0 goofy_jepsen[275554]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 18:14:35 compute-0 goofy_jepsen[275554]:        "osd_id": 0,
Dec  3 18:14:35 compute-0 goofy_jepsen[275554]:        "osd_uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:14:35 compute-0 goofy_jepsen[275554]:        "type": "bluestore"
Dec  3 18:14:35 compute-0 goofy_jepsen[275554]:    }
Dec  3 18:14:35 compute-0 goofy_jepsen[275554]: }
Dec  3 18:14:35 compute-0 systemd[1]: libpod-0e50b72c1e9bbe68f711081530c1c99c99a34d5ee6c586044894661b602eeb84.scope: Deactivated successfully.
Dec  3 18:14:35 compute-0 systemd[1]: libpod-0e50b72c1e9bbe68f711081530c1c99c99a34d5ee6c586044894661b602eeb84.scope: Consumed 1.013s CPU time.
Dec  3 18:14:35 compute-0 podman[275506]: 2025-12-03 18:14:35.040485604 +0000 UTC m=+1.367099459 container died 0e50b72c1e9bbe68f711081530c1c99c99a34d5ee6c586044894661b602eeb84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_jepsen, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:14:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-7cb50dc8526b4de942ceb3ef890bdee674bf4d37547770cc3dff073b247dffbd-merged.mount: Deactivated successfully.
Dec  3 18:14:35 compute-0 podman[275506]: 2025-12-03 18:14:35.114097511 +0000 UTC m=+1.440711326 container remove 0e50b72c1e9bbe68f711081530c1c99c99a34d5ee6c586044894661b602eeb84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_jepsen, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  3 18:14:35 compute-0 systemd[1]: libpod-conmon-0e50b72c1e9bbe68f711081530c1c99c99a34d5ee6c586044894661b602eeb84.scope: Deactivated successfully.
Dec  3 18:14:35 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:14:35 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:14:35 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:14:35 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:14:35 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 807d31ce-b447-4c2a-a3c3-8306239471f2 does not exist
Dec  3 18:14:35 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 283cb4ce-9a02-4c03-abae-13c96af26c52 does not exist
Dec  3 18:14:36 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:14:36 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:14:36 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v453: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:14:36 compute-0 python3.9[275898]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  3 18:14:37 compute-0 ceph-mgr[193091]: [devicehealth INFO root] Check health
Dec  3 18:14:37 compute-0 python3.9[276050]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec  3 18:14:38 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v454: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:14:38 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:14:39 compute-0 python3[276228]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec  3 18:14:40 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v455: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:14:40 compute-0 python3[276228]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012     {#012          "Id": "3a37a52861b2e44ebd2a63ca2589a7c9d8e4119e5feace9d19c6312ed9b8421c",#012          "Digest": "sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c",#012          "RepoTags": [#012               "quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified"#012          ],#012          "RepoDigests": [#012               "quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c"#012          ],#012          "Parent": "",#012          "Comment": "",#012          "Created": "2025-12-01T06:38:47.246477714Z",#012          "Config": {#012               "User": "root",#012               "Env": [#012                    "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012                    "LANG=en_US.UTF-8",#012                    "TZ=UTC",#012                    "container=oci"#012               ],#012               "Entrypoint": [#012                    "dumb-init",#012                    "--single-child",#012                    "--"#012               ],#012               "Cmd": [#012                    "kolla_start"#012               ],#012               "Labels": {#012                    "io.buildah.version": "1.41.3",#012                    "maintainer": "OpenStack Kubernetes Operator team",#012                    "org.label-schema.build-date": "20251125",#012                    "org.label-schema.license": "GPLv2",#012                    "org.label-schema.name": "CentOS Stream 9 Base Image",#012                    "org.label-schema.schema-version": "1.0",#012                    "org.label-schema.vendor": "CentOS",#012                    "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",#012                    "tcib_managed": "true"#012               },#012               "StopSignal": "SIGTERM"#012          },#012          "Version": "",#012          "Author": "",#012          "Architecture": "amd64",#012          "Os": "linux",#012          "Size": 345722821,#012          "VirtualSize": 345722821,#012          "GraphDriver": {#012               "Name": "overlay",#012               "Data": {#012                    "LowerDir": "/var/lib/containers/storage/overlay/06baa34adcac19ffd1cac321f0c14e5e32037c7b357d2eb54e065b4d177d72fd/diff:/var/lib/containers/storage/overlay/ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9/diff:/var/lib/containers/storage/overlay/cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa/diff",#012                    "UpperDir": "/var/lib/containers/storage/overlay/0dae0ae2501f0b947a8e64948b264823feec8c7ddb8b7849cb102fbfe0c75da8/diff",#012                    "WorkDir": "/var/lib/containers/storage/overlay/0dae0ae2501f0b947a8e64948b264823feec8c7ddb8b7849cb102fbfe0c75da8/work"#012               }#012          },#012          "RootFS": {#012               "Type": "layers",#012               "Layers": [#012                    "sha256:cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa",#012                    "sha256:d26dbee55abfd9d572bfbbd4b765c5624affd9ef117ad108fb34be41e199a619",#012                    "sha256:ba9362d2aeb297e34b0679b2fc8168350c70a5b0ec414daf293bf2bc013e9088",#012                    "sha256:aae3b8a85314314b9db80a043fdf3f3b1d0b69927faca0303c73969a23dddd0f"#012               ]#012          },#012          "Labels": {#012               "io.buildah.version": "1.41.3",#012               "maintainer": "OpenStack Kubernetes Operator team",#012               "org.label-schema.build-date": "20251125",#012               "org.label-schema.license": "GPLv2",#012               "org.label-schema.name": "CentOS Stream 9 Base Image",#012               "org.label-schema.schema-version": "1.0",#012               "org.label-schema.vendor": "CentOS",#012               "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",#012               "tcib_managed": "true"#012          },#012          "Annotations": {},#012          "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012          "User": "root",#012          "History": [#012               {#012                    "created": "2025-11-25T04:02:36.223494528Z",#012                    "created_by": "/bin/sh -c #(nop) ADD file:cacf1a97b4abfca5db2db22f7ddbca8fd7daa5076a559639c109f09aaf55871d in / ",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-25T04:02:36.223562059Z",#012                    "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\"     org.label-schema.name=\"CentOS Stream 9 Base Image\"     org.label-schema.vendor=\"CentOS\"     org.label-schema.license=\"GPLv2\"     org.label-schema.build-date=\"20251125\"",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-25T04:02:39.054452717Z",#012                    "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012               },#012               {#012                    "created": "2025-12-01T06:09:28.025707917Z",#012                    "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012                    "comment": "FROM quay.io/centos/centos:stream9",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-12-01T06:09:28.025744608Z",#012                    "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-12-01T06:09:28.025767729Z",#012                    "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-12-01T06:09:28.025791379Z",#012                    "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-12-01T06:09:28.02581523Z",#012                    "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-12-01T06:09:28.025867611Z",#012                    "created_by": "/bin/sh -c #(nop) USER root",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-12-01T06:09:28.469442331Z",#012                    "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-12-01T06:10:02.029095017Z",#012                    "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-12-01T06:10:05.672474685Z",#012                    "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-l
Dec  3 18:14:41 compute-0 python3.9[276435]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 18:14:42 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v456: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:14:42 compute-0 podman[276561]: 2025-12-03 18:14:42.359361172 +0000 UTC m=+0.104382326 container health_status 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  3 18:14:42 compute-0 python3.9[276610]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:14:43 compute-0 python3.9[276687]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 18:14:43 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:14:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:14:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:14:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:14:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:14:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:14:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:14:44 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v457: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:14:44 compute-0 python3.9[276838]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764785683.2033296-536-174537883915282/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:14:44 compute-0 python3.9[276914]: ansible-systemd Invoked with state=started name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 18:14:46 compute-0 python3.9[277068]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:14:46 compute-0 ovs-vsctl[277069]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Dec  3 18:14:46 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v458: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:14:46 compute-0 python3.9[277221]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:14:46 compute-0 ovs-vsctl[277223]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Dec  3 18:14:48 compute-0 python3.9[277376]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:14:48 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v459: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:14:48 compute-0 ovs-vsctl[277377]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Dec  3 18:14:48 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:14:48 compute-0 systemd[1]: session-53.scope: Deactivated successfully.
Dec  3 18:14:48 compute-0 systemd[1]: session-53.scope: Consumed 1min 365ms CPU time.
Dec  3 18:14:48 compute-0 systemd-logind[784]: Session 53 logged out. Waiting for processes to exit.
Dec  3 18:14:48 compute-0 systemd-logind[784]: Removed session 53.
Dec  3 18:14:50 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v460: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:14:52 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v461: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:14:53 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:14:54 compute-0 systemd-logind[784]: New session 54 of user zuul.
Dec  3 18:14:54 compute-0 systemd[1]: Started Session 54 of User zuul.
Dec  3 18:14:54 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v462: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:14:55 compute-0 python3.9[277556]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 18:14:56 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v463: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:14:57 compute-0 python3.9[277712]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:14:58 compute-0 python3.9[277864]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:14:58 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v464: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:14:58 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:14:58 compute-0 python3.9[278016]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:14:59 compute-0 podman[158200]: time="2025-12-03T18:14:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:14:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:14:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32819 "" "Go-http-client/1.1"
Dec  3 18:14:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:14:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6831 "" "Go-http-client/1.1"
Dec  3 18:15:00 compute-0 python3.9[278168]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:15:00 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v465: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:15:01 compute-0 python3.9[278321]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:15:01 compute-0 podman[278320]: 2025-12-03 18:15:01.099976733 +0000 UTC m=+0.166459953 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Dec  3 18:15:01 compute-0 podman[278348]: 2025-12-03 18:15:01.234798387 +0000 UTC m=+0.109712805 container health_status f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  3 18:15:01 compute-0 podman[278342]: 2025-12-03 18:15:01.242934404 +0000 UTC m=+0.125619521 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  3 18:15:01 compute-0 podman[278349]: 2025-12-03 18:15:01.243249402 +0000 UTC m=+0.100278647 container health_status ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, io.buildah.version=1.29.0, name=ubi9, config_id=edpm, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, version=9.4, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container)
Dec  3 18:15:01 compute-0 podman[278340]: 2025-12-03 18:15:01.249884813 +0000 UTC m=+0.130147561 container health_status 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, vendor=Red Hat, Inc., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_id=edpm, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, distribution-scope=public, io.openshift.expose-services=)
Dec  3 18:15:01 compute-0 podman[278341]: 2025-12-03 18:15:01.251566454 +0000 UTC m=+0.142754898 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  3 18:15:01 compute-0 openstack_network_exporter[160319]: ERROR   18:15:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:15:01 compute-0 openstack_network_exporter[160319]: ERROR   18:15:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:15:01 compute-0 openstack_network_exporter[160319]: ERROR   18:15:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:15:01 compute-0 openstack_network_exporter[160319]: ERROR   18:15:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:15:01 compute-0 openstack_network_exporter[160319]: 
Dec  3 18:15:01 compute-0 openstack_network_exporter[160319]: ERROR   18:15:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:15:01 compute-0 openstack_network_exporter[160319]: 
Dec  3 18:15:02 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v466: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:15:02 compute-0 python3.9[278591]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 18:15:03 compute-0 python3.9[278743]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec  3 18:15:03 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.701 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.701 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.701 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.702 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f3f52673fe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.702 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f562c3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.702 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.703 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.703 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f526739b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.703 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.703 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673a40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.703 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52671a60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.703 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673a70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.703 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.703 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.703 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f562d33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.703 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f526733b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.703 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f526734d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f565c04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f526735f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f526736b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.705 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.705 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f3f5271c620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.706 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.706 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f3f5271c0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.706 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.706 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f3f5271c140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.706 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.706 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f3f52673980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.706 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.706 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f3f5271c1d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.707 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.707 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f3f52673a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.707 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.707 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f3f52672390>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.707 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.707 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f3f526739e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.707 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.707 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f3f5271c260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.707 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.707 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f3f5271c2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.708 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.708 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f3f52671ca0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.708 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.708 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f3f52673470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.708 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.708 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f3f5271c380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.708 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.708 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f3f526734a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.708 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.708 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f3f52671a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.709 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.709 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f3f52673ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.709 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.709 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f3f52673500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.709 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.709 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f3f52673560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.709 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.709 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f3f526735c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.709 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.709 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f3f52673620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.710 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.710 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f3f52673680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.710 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.710 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f3f526736e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.710 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.710 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f3f52673f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.710 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.710 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f3f52673740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.710 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.710 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f3f52673f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.711 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.711 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.711 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.711 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.711 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.711 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.711 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.711 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.711 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.711 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.711 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.711 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.711 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.712 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.712 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.712 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.712 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.712 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.712 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.712 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.712 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.712 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.712 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.713 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.713 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.713 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:15:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:15:03.713 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
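    # The ceilometer block above traces one polling cycle: every pollster is
    # registered onto a single shared ThreadPoolExecutor ("with [1] threads"),
    # discovery via local_instances runs per pollster, and with no VMs on the
    # node each pollster is skipped and then reported finished. A minimal
    # sketch of that flow (hypothetical names, not the ceilometer API):
    from concurrent.futures import ThreadPoolExecutor

    def discover_local_instances():
        return []  # no instances on this compute node this cycle

    def run_pollster(name):
        resources = discover_local_instances()
        if not resources:
            return f"Skip pollster {name}, no resources found this cycle"
        return f"Polled {name} for {len(resources)} resources"

    pollsters = ["cpu", "memory.usage", "network.incoming.bytes"]
    with ThreadPoolExecutor(max_workers=1) as executor:
        for result in executor.map(run_pollster, pollsters):
            print(result)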
Dec  3 18:15:04 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v467: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:15:06 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v468: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:15:08 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v469: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:15:08 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:15:10 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v470: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:15:12 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v471: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:15:12 compute-0 podman[278852]: 2025-12-03 18:15:12.951315498 +0000 UTC m=+0.098567514 container health_status 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
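    # The health_status=healthy events above come from podman's built-in
    # healthchecks (the configured test is '/openstack/healthcheck
    # podman_exporter'). The same check can be re-run by hand; "podman
    # healthcheck run" executes the container's test and exits 0 on healthy:
    import subprocess

    result = subprocess.run(
        ["podman", "healthcheck", "run", "podman_exporter"],
        capture_output=True, text=True,
    )
    print("healthy" if result.returncode == 0 else "unhealthy")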
Dec  3 18:15:13 compute-0 python3.9[278918]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:15:13 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:15:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_18:15:13
Dec  3 18:15:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 18:15:13 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 18:15:13 compute-0 ceph-mgr[193091]: [balancer INFO root] pools ['default.rgw.meta', '.mgr', 'cephfs.cephfs.meta', 'backups', 'default.rgw.log', 'cephfs.cephfs.data', 'images', 'volumes', 'default.rgw.control', 'vms', '.rgw.root']
Dec  3 18:15:13 compute-0 ceph-mgr[193091]: [balancer INFO root] prepared 0/10 changes
Dec  3 18:15:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:15:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:15:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:15:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:15:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:15:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:15:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 18:15:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:15:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 18:15:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:15:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:15:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:15:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:15:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:15:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:15:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:15:14 compute-0 python3.9[279039]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764785712.2969892-86-227004984214378/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:15:14 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v472: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:15:15 compute-0 python3.9[279189]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:15:15 compute-0 python3.9[279310]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764785714.5389836-101-160755903569296/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:15:16 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v473: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:15:17 compute-0 python3.9[279462]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  3 18:15:18 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v474: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:15:18 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:15:18 compute-0 python3.9[279546]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  3 18:15:20 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v475: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:15:21 compute-0 python3.9[279700]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  3 18:15:22 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v476: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:15:22 compute-0 python3.9[279853]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:15:23 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:15:23 compute-0 python3.9[279974]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764785722.2742913-138-82607677230550/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:15:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 18:15:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:15:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 18:15:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:15:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:15:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:15:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:15:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:15:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:15:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:15:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:15:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:15:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 18:15:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:15:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:15:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:15:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 18:15:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:15:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 18:15:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:15:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:15:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:15:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
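    # The pg_autoscaler figures above are reproducible as
    # capacity_ratio * bias * (target PGs per OSD * OSD count), assuming the
    # default mon_target_pg_per_osd=100 and 3 OSDs (300 total) -- an
    # assumption, but one that matches the logged numbers, e.g. for '.mgr':
    # 7.185749983720779e-06 * 1.0 * 300 ~= 0.0021557249951162337. The raw
    # target is then quantized to a power of two subject to per-pool
    # minimums; the real autoscaler additionally keeps the current pg_num
    # unless the change crosses its adjustment threshold.
    def raw_pg_target(capacity_ratio, bias, pg_per_osd=100, num_osds=3):
        return capacity_ratio * bias * pg_per_osd * num_osds

    def quantize(target, minimum=1):
        pg = minimum
        while pg < target:
            pg *= 2
        return pg

    print(quantize(raw_pg_target(7.185749983720779e-06, 1.0)))              # 1  ('.mgr')
    print(quantize(raw_pg_target(5.087256625643029e-07, 4.0), minimum=16))  # 16 (cephfs meta)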
Dec  3 18:15:24 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v477: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:15:24 compute-0 python3.9[280124]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:15:25 compute-0 python3.9[280245]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764785723.9040487-138-281141722084526/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:15:26 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v478: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:15:26 compute-0 python3.9[280395]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:15:27 compute-0 python3.9[280516]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764785726.1200078-182-111836582457645/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:15:28 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v479: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:15:28 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:15:28 compute-0 python3.9[280666]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:15:29 compute-0 python3.9[280787]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764785727.9112499-182-242666707039962/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
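    # An assumption, not confirmed by the log: the numeric prefixes on the
    # generated snippets above (01-rootwrap.conf,
    # 01-neutron-ovn-metadata-agent.conf, 05-nova-metadata.conf,
    # 10-neutron-metadata.conf) suggest they are consumed in lexical order,
    # later files overriding duplicate keys, as with oslo.config
    # --config-dir semantics. This just prints that load order:
    import glob

    CONF_DIR = "/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent"
    for path in sorted(glob.glob(f"{CONF_DIR}/*.conf")):
        print(path)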
Dec  3 18:15:29 compute-0 podman[158200]: time="2025-12-03T18:15:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:15:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:15:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32819 "" "Go-http-client/1.1"
Dec  3 18:15:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:15:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6845 "" "Go-http-client/1.1"
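    # The two GET lines above are the podman_exporter scraping podman's
    # libpod REST API over the unix socket. The same "list containers" call
    # can be reproduced with curl's --unix-socket support (the "d" hostname
    # is an arbitrary placeholder, as the socket path selects the server):
    import json
    import subprocess

    out = subprocess.run(
        ["curl", "-s", "--unix-socket", "/run/podman/podman.sock",
         "http://d/v4.9.3/libpod/containers/json?all=true"],
        check=True, capture_output=True, text=True,
    ).stdout
    print(len(json.loads(out)), "containers")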
Dec  3 18:15:30 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v480: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:15:30 compute-0 python3.9[280937]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 18:15:31 compute-0 podman[281063]: 2025-12-03 18:15:31.293869543 +0000 UTC m=+0.126721648 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Dec  3 18:15:31 compute-0 podman[281110]: 2025-12-03 18:15:31.397225975 +0000 UTC m=+0.087943675 container health_status f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 18:15:31 compute-0 openstack_network_exporter[160319]: ERROR   18:15:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:15:31 compute-0 podman[281108]: 2025-12-03 18:15:31.416730229 +0000 UTC m=+0.116590147 container health_status 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, distribution-scope=public, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, vendor=Red Hat, Inc., io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, version=9.6)
Dec  3 18:15:31 compute-0 openstack_network_exporter[160319]: ERROR   18:15:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:15:31 compute-0 openstack_network_exporter[160319]: ERROR   18:15:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:15:31 compute-0 openstack_network_exporter[160319]: ERROR   18:15:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:15:31 compute-0 podman[281111]: 2025-12-03 18:15:31.437961905 +0000 UTC m=+0.123843680 container health_status ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, managed_by=edpm_ansible, distribution-scope=public, io.buildah.version=1.29.0, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, name=ubi9, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30)
Dec  3 18:15:31 compute-0 podman[281109]: 2025-12-03 18:15:31.440903646 +0000 UTC m=+0.144791080 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  3 18:15:31 compute-0 podman[281112]: 2025-12-03 18:15:31.451280813 +0000 UTC m=+0.134703959 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  3 18:15:31 compute-0 python3.9[281122]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:15:32 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v481: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:15:32 compute-0 python3.9[281361]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:15:32 compute-0 python3.9[281439]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:15:33 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:15:33 compute-0 python3.9[281591]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:15:34 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v482: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:15:34 compute-0 python3.9[281669]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:15:35 compute-0 python3.9[281821]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:15:36 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v483: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:15:36 compute-0 python3.9[281977]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:15:36 compute-0 python3.9[282164]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:15:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Dec  3 18:15:37 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  3 18:15:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:15:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:15:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 18:15:37 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:15:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 18:15:37 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:15:37 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 89ef0b7f-991a-4d1d-b8f6-0bfb3afe5a8b does not exist
Dec  3 18:15:37 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 026b5e3b-9d26-4da3-97a1-fb19f02e3a7e does not exist
Dec  3 18:15:37 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 460c2dc7-c21c-4364-b9a8-dbb29e91e911 does not exist
Dec  3 18:15:37 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  3 18:15:37 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:15:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 18:15:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 18:15:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 18:15:37 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:15:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:15:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:15:38 compute-0 python3.9[282431]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:15:38 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v484: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:15:38 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:15:38 compute-0 podman[282497]: 2025-12-03 18:15:38.391919272 +0000 UTC m=+0.077642169 container create 2c0e6471ae1271fcb28149e68bee9f7fef14d5e22b9d60d10bd0604fe116e3c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_morse, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec  3 18:15:38 compute-0 systemd[1]: Started libpod-conmon-2c0e6471ae1271fcb28149e68bee9f7fef14d5e22b9d60d10bd0604fe116e3c9.scope.
Dec  3 18:15:38 compute-0 podman[282497]: 2025-12-03 18:15:38.364276164 +0000 UTC m=+0.049999081 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:15:38 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:15:38 compute-0 podman[282497]: 2025-12-03 18:15:38.524993841 +0000 UTC m=+0.210716768 container init 2c0e6471ae1271fcb28149e68bee9f7fef14d5e22b9d60d10bd0604fe116e3c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_morse, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec  3 18:15:38 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:15:38 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:15:38 compute-0 podman[282497]: 2025-12-03 18:15:38.540160523 +0000 UTC m=+0.225883430 container start 2c0e6471ae1271fcb28149e68bee9f7fef14d5e22b9d60d10bd0604fe116e3c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_morse, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  3 18:15:38 compute-0 podman[282497]: 2025-12-03 18:15:38.545628503 +0000 UTC m=+0.231351430 container attach 2c0e6471ae1271fcb28149e68bee9f7fef14d5e22b9d60d10bd0604fe116e3c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec  3 18:15:38 compute-0 goofy_morse[282552]: 167 167
Dec  3 18:15:38 compute-0 systemd[1]: libpod-2c0e6471ae1271fcb28149e68bee9f7fef14d5e22b9d60d10bd0604fe116e3c9.scope: Deactivated successfully.
Dec  3 18:15:38 compute-0 podman[282497]: 2025-12-03 18:15:38.550891889 +0000 UTC m=+0.236614806 container died 2c0e6471ae1271fcb28149e68bee9f7fef14d5e22b9d60d10bd0604fe116e3c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_morse, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:15:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-052e7158696eff816c20f909d9a37b4d689f8b8146f4d82fcfa3e26149add669-merged.mount: Deactivated successfully.
Dec  3 18:15:38 compute-0 podman[282497]: 2025-12-03 18:15:38.631053757 +0000 UTC m=+0.316776664 container remove 2c0e6471ae1271fcb28149e68bee9f7fef14d5e22b9d60d10bd0604fe116e3c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_morse, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:15:38 compute-0 systemd[1]: libpod-conmon-2c0e6471ae1271fcb28149e68bee9f7fef14d5e22b9d60d10bd0604fe116e3c9.scope: Deactivated successfully.
Dec  3 18:15:38 compute-0 python3.9[282566]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:15:38 compute-0 podman[282588]: 2025-12-03 18:15:38.860066251 +0000 UTC m=+0.083038109 container create ab05c3550d1f9377b36658cbff1b7e4a94587a98eb4cab5b2618f6988237282b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  3 18:15:38 compute-0 podman[282588]: 2025-12-03 18:15:38.826043111 +0000 UTC m=+0.049015019 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:15:38 compute-0 systemd[1]: Started libpod-conmon-ab05c3550d1f9377b36658cbff1b7e4a94587a98eb4cab5b2618f6988237282b.scope.
Dec  3 18:15:38 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:15:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4a6a4b44cc839f790fb1d5bf282122da1f27c13fe1645fe1798755fba21d412/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:15:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4a6a4b44cc839f790fb1d5bf282122da1f27c13fe1645fe1798755fba21d412/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:15:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4a6a4b44cc839f790fb1d5bf282122da1f27c13fe1645fe1798755fba21d412/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:15:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4a6a4b44cc839f790fb1d5bf282122da1f27c13fe1645fe1798755fba21d412/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:15:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4a6a4b44cc839f790fb1d5bf282122da1f27c13fe1645fe1798755fba21d412/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 18:15:38 compute-0 podman[282588]: 2025-12-03 18:15:38.993732584 +0000 UTC m=+0.216704452 container init ab05c3550d1f9377b36658cbff1b7e4a94587a98eb4cab5b2618f6988237282b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec  3 18:15:39 compute-0 podman[282588]: 2025-12-03 18:15:39.007536553 +0000 UTC m=+0.230508391 container start ab05c3550d1f9377b36658cbff1b7e4a94587a98eb4cab5b2618f6988237282b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Dec  3 18:15:39 compute-0 podman[282588]: 2025-12-03 18:15:39.012713976 +0000 UTC m=+0.235685814 container attach ab05c3550d1f9377b36658cbff1b7e4a94587a98eb4cab5b2618f6988237282b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_mirzakhani, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec  3 18:15:39 compute-0 python3.9[282759]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 18:15:39 compute-0 systemd[1]: Reloading.
Dec  3 18:15:40 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 18:15:40 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 18:15:40 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v485: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:15:40 compute-0 beautiful_mirzakhani[282627]: --> passed data devices: 0 physical, 3 LVM
Dec  3 18:15:40 compute-0 beautiful_mirzakhani[282627]: --> relative data size: 1.0
Dec  3 18:15:40 compute-0 beautiful_mirzakhani[282627]: --> All data devices are unavailable
Dec  3 18:15:40 compute-0 podman[282588]: 2025-12-03 18:15:40.232782882 +0000 UTC m=+1.455754730 container died ab05c3550d1f9377b36658cbff1b7e4a94587a98eb4cab5b2618f6988237282b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:15:40 compute-0 systemd[1]: libpod-ab05c3550d1f9377b36658cbff1b7e4a94587a98eb4cab5b2618f6988237282b.scope: Deactivated successfully.
Dec  3 18:15:40 compute-0 systemd[1]: libpod-ab05c3550d1f9377b36658cbff1b7e4a94587a98eb4cab5b2618f6988237282b.scope: Consumed 1.140s CPU time.
Dec  3 18:15:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-a4a6a4b44cc839f790fb1d5bf282122da1f27c13fe1645fe1798755fba21d412-merged.mount: Deactivated successfully.
Dec  3 18:15:40 compute-0 podman[282588]: 2025-12-03 18:15:40.552025105 +0000 UTC m=+1.774996973 container remove ab05c3550d1f9377b36658cbff1b7e4a94587a98eb4cab5b2618f6988237282b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec  3 18:15:40 compute-0 systemd[1]: libpod-conmon-ab05c3550d1f9377b36658cbff1b7e4a94587a98eb4cab5b2618f6988237282b.scope: Deactivated successfully.
Dec  3 18:15:41 compute-0 python3.9[283094]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:15:41 compute-0 podman[283121]: 2025-12-03 18:15:41.457356065 +0000 UTC m=+0.065794878 container create 0640c4d407e4d0564ddbc01ba94e9b4b7f04cd2112479dbf3bda902002c06b06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_williamson, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:15:41 compute-0 systemd[1]: Started libpod-conmon-0640c4d407e4d0564ddbc01ba94e9b4b7f04cd2112479dbf3bda902002c06b06.scope.
Dec  3 18:15:41 compute-0 podman[283121]: 2025-12-03 18:15:41.42526237 +0000 UTC m=+0.033701223 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:15:41 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:15:41 compute-0 podman[283121]: 2025-12-03 18:15:41.571439072 +0000 UTC m=+0.179877935 container init 0640c4d407e4d0564ddbc01ba94e9b4b7f04cd2112479dbf3bda902002c06b06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef)
Dec  3 18:15:41 compute-0 podman[283121]: 2025-12-03 18:15:41.588026357 +0000 UTC m=+0.196465170 container start 0640c4d407e4d0564ddbc01ba94e9b4b7f04cd2112479dbf3bda902002c06b06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_williamson, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec  3 18:15:41 compute-0 podman[283121]: 2025-12-03 18:15:41.596055198 +0000 UTC m=+0.204494051 container attach 0640c4d407e4d0564ddbc01ba94e9b4b7f04cd2112479dbf3bda902002c06b06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:15:41 compute-0 eager_williamson[283156]: 167 167
Dec  3 18:15:41 compute-0 systemd[1]: libpod-0640c4d407e4d0564ddbc01ba94e9b4b7f04cd2112479dbf3bda902002c06b06.scope: Deactivated successfully.
Dec  3 18:15:41 compute-0 podman[283121]: 2025-12-03 18:15:41.600181966 +0000 UTC m=+0.208620779 container died 0640c4d407e4d0564ddbc01ba94e9b4b7f04cd2112479dbf3bda902002c06b06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_williamson, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  3 18:15:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-906d0a7ca4cdced7ceb0bedf2cb381a8bc17eb3f369276104a09f82547950968-merged.mount: Deactivated successfully.
Dec  3 18:15:41 compute-0 podman[283121]: 2025-12-03 18:15:41.646901389 +0000 UTC m=+0.255340222 container remove 0640c4d407e4d0564ddbc01ba94e9b4b7f04cd2112479dbf3bda902002c06b06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_williamson, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:15:41 compute-0 systemd[1]: libpod-conmon-0640c4d407e4d0564ddbc01ba94e9b4b7f04cd2112479dbf3bda902002c06b06.scope: Deactivated successfully.
Dec  3 18:15:41 compute-0 podman[283239]: 2025-12-03 18:15:41.902658479 +0000 UTC m=+0.071824111 container create 21fd8b93f25db7d807cb3b74b8d94ce9eea833d6aab9dbd5600854f2f7d86bd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_northcutt, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:15:41 compute-0 systemd[1]: Started libpod-conmon-21fd8b93f25db7d807cb3b74b8d94ce9eea833d6aab9dbd5600854f2f7d86bd4.scope.
Dec  3 18:15:41 compute-0 podman[283239]: 2025-12-03 18:15:41.875232847 +0000 UTC m=+0.044398489 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:15:41 compute-0 python3.9[283234]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:15:41 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:15:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51a5f7a0cab148b3b1932c7556a3dcd2ca628093599221366df7eb4345c14e0e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:15:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51a5f7a0cab148b3b1932c7556a3dcd2ca628093599221366df7eb4345c14e0e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:15:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51a5f7a0cab148b3b1932c7556a3dcd2ca628093599221366df7eb4345c14e0e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:15:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51a5f7a0cab148b3b1932c7556a3dcd2ca628093599221366df7eb4345c14e0e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:15:42 compute-0 podman[283239]: 2025-12-03 18:15:42.032923481 +0000 UTC m=+0.202089093 container init 21fd8b93f25db7d807cb3b74b8d94ce9eea833d6aab9dbd5600854f2f7d86bd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec  3 18:15:42 compute-0 podman[283239]: 2025-12-03 18:15:42.052239072 +0000 UTC m=+0.221404664 container start 21fd8b93f25db7d807cb3b74b8d94ce9eea833d6aab9dbd5600854f2f7d86bd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:15:42 compute-0 podman[283239]: 2025-12-03 18:15:42.057038497 +0000 UTC m=+0.226204109 container attach 21fd8b93f25db7d807cb3b74b8d94ce9eea833d6aab9dbd5600854f2f7d86bd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_northcutt, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:15:42 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v486: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:15:42 compute-0 python3.9[283411]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]: {
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:    "0": [
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:        {
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:            "devices": [
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:                "/dev/loop3"
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:            ],
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:            "lv_name": "ceph_lv0",
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:            "lv_size": "21470642176",
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=973fbbc8-5aff-4a53-bee8-42e5a6788dd6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:            "lv_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:            "name": "ceph_lv0",
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:            "tags": {
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:                "ceph.block_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:                "ceph.cluster_name": "ceph",
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:                "ceph.crush_device_class": "",
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:                "ceph.encrypted": "0",
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:                "ceph.osd_fsid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:                "ceph.osd_id": "0",
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:                "ceph.type": "block",
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:                "ceph.vdo": "0"
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:            },
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:            "type": "block",
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:            "vg_name": "ceph_vg0"
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:        }
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:    ],
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:    "1": [
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:        {
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:            "devices": [
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:                "/dev/loop4"
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:            ],
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:            "lv_name": "ceph_lv1",
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:            "lv_size": "21470642176",
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1e2b0083-5293-47cb-a3d1-bc27cedc4ede,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:            "lv_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:            "name": "ceph_lv1",
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:            "tags": {
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:                "ceph.block_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:                "ceph.cluster_name": "ceph",
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:                "ceph.crush_device_class": "",
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:                "ceph.encrypted": "0",
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:                "ceph.osd_fsid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:                "ceph.osd_id": "1",
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:                "ceph.type": "block",
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:                "ceph.vdo": "0"
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:            },
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:            "type": "block",
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:            "vg_name": "ceph_vg1"
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:        }
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:    ],
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:    "2": [
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:        {
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:            "devices": [
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:                "/dev/loop5"
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:            ],
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:            "lv_name": "ceph_lv2",
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:            "lv_size": "21470642176",
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2abec9de-afba-437e-9a17-384a1dd8cd50,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:            "lv_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:            "name": "ceph_lv2",
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:            "tags": {
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:                "ceph.block_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:                "ceph.cluster_name": "ceph",
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:                "ceph.crush_device_class": "",
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:                "ceph.encrypted": "0",
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:                "ceph.osd_fsid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:                "ceph.osd_id": "2",
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:                "ceph.type": "block",
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:                "ceph.vdo": "0"
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:            },
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:            "type": "block",
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:            "vg_name": "ceph_vg2"
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:        }
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]:    ]
Dec  3 18:15:42 compute-0 wizardly_northcutt[283255]: }
Dec  3 18:15:42 compute-0 systemd[1]: libpod-21fd8b93f25db7d807cb3b74b8d94ce9eea833d6aab9dbd5600854f2f7d86bd4.scope: Deactivated successfully.
Dec  3 18:15:42 compute-0 podman[283239]: 2025-12-03 18:15:42.931115262 +0000 UTC m=+1.100280854 container died 21fd8b93f25db7d807cb3b74b8d94ce9eea833d6aab9dbd5600854f2f7d86bd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec  3 18:15:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-51a5f7a0cab148b3b1932c7556a3dcd2ca628093599221366df7eb4345c14e0e-merged.mount: Deactivated successfully.
Dec  3 18:15:43 compute-0 podman[283239]: 2025-12-03 18:15:43.016315031 +0000 UTC m=+1.185480623 container remove 21fd8b93f25db7d807cb3b74b8d94ce9eea833d6aab9dbd5600854f2f7d86bd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_northcutt, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:15:43 compute-0 systemd[1]: libpod-conmon-21fd8b93f25db7d807cb3b74b8d94ce9eea833d6aab9dbd5600854f2f7d86bd4.scope: Deactivated successfully.
Dec  3 18:15:43 compute-0 podman[283452]: 2025-12-03 18:15:43.082259542 +0000 UTC m=+0.082365793 container health_status 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  3 18:15:43 compute-0 python3.9[283560]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:15:43 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:15:43 compute-0 podman[283719]: 2025-12-03 18:15:43.882289384 +0000 UTC m=+0.053074235 container create b1a2e862c3ca770097ec95cc36b69b8555f5b67eaf40c5636d21e4085e5effa6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_wilbur, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec  3 18:15:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:15:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:15:43 compute-0 systemd[1]: Started libpod-conmon-b1a2e862c3ca770097ec95cc36b69b8555f5b67eaf40c5636d21e4085e5effa6.scope.
Dec  3 18:15:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:15:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:15:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:15:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:15:43 compute-0 podman[283719]: 2025-12-03 18:15:43.863705652 +0000 UTC m=+0.034490513 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:15:43 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:15:43 compute-0 podman[283719]: 2025-12-03 18:15:43.987150641 +0000 UTC m=+0.157935522 container init b1a2e862c3ca770097ec95cc36b69b8555f5b67eaf40c5636d21e4085e5effa6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_wilbur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:15:43 compute-0 podman[283719]: 2025-12-03 18:15:43.99675109 +0000 UTC m=+0.167535961 container start b1a2e862c3ca770097ec95cc36b69b8555f5b67eaf40c5636d21e4085e5effa6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_wilbur, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:15:44 compute-0 podman[283719]: 2025-12-03 18:15:44.002431265 +0000 UTC m=+0.173216106 container attach b1a2e862c3ca770097ec95cc36b69b8555f5b67eaf40c5636d21e4085e5effa6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_wilbur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:15:44 compute-0 elegant_wilbur[283765]: 167 167
Dec  3 18:15:44 compute-0 systemd[1]: libpod-b1a2e862c3ca770097ec95cc36b69b8555f5b67eaf40c5636d21e4085e5effa6.scope: Deactivated successfully.
Dec  3 18:15:44 compute-0 podman[283719]: 2025-12-03 18:15:44.006132784 +0000 UTC m=+0.176917645 container died b1a2e862c3ca770097ec95cc36b69b8555f5b67eaf40c5636d21e4085e5effa6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_wilbur, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec  3 18:15:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-f80bb520ae5d53755323af2dda93d479a32cfe310f4f6521796c4aa0bc791d7c-merged.mount: Deactivated successfully.
Dec  3 18:15:44 compute-0 podman[283719]: 2025-12-03 18:15:44.070803404 +0000 UTC m=+0.241588235 container remove b1a2e862c3ca770097ec95cc36b69b8555f5b67eaf40c5636d21e4085e5effa6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_wilbur, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  3 18:15:44 compute-0 systemd[1]: libpod-conmon-b1a2e862c3ca770097ec95cc36b69b8555f5b67eaf40c5636d21e4085e5effa6.scope: Deactivated successfully.
Dec  3 18:15:44 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v487: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:15:44 compute-0 podman[283856]: 2025-12-03 18:15:44.287856083 +0000 UTC m=+0.066636699 container create 3e079d9a0376ba8f8b03cbc56cff7161dfedf70a5b550f6f17bc8eb7742a90ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_tharp, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:15:44 compute-0 podman[283856]: 2025-12-03 18:15:44.260375098 +0000 UTC m=+0.039155704 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:15:44 compute-0 systemd[1]: Started libpod-conmon-3e079d9a0376ba8f8b03cbc56cff7161dfedf70a5b550f6f17bc8eb7742a90ad.scope.
Dec  3 18:15:44 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:15:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b6f67a1c191ecc798e4772930fe3c5fda322adf0f6892b4b1ac129952bad9bf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:15:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b6f67a1c191ecc798e4772930fe3c5fda322adf0f6892b4b1ac129952bad9bf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:15:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b6f67a1c191ecc798e4772930fe3c5fda322adf0f6892b4b1ac129952bad9bf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:15:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b6f67a1c191ecc798e4772930fe3c5fda322adf0f6892b4b1ac129952bad9bf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:15:44 compute-0 podman[283856]: 2025-12-03 18:15:44.440276873 +0000 UTC m=+0.219057479 container init 3e079d9a0376ba8f8b03cbc56cff7161dfedf70a5b550f6f17bc8eb7742a90ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_tharp, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  3 18:15:44 compute-0 podman[283856]: 2025-12-03 18:15:44.452088454 +0000 UTC m=+0.230869030 container start 3e079d9a0376ba8f8b03cbc56cff7161dfedf70a5b550f6f17bc8eb7742a90ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  3 18:15:44 compute-0 podman[283856]: 2025-12-03 18:15:44.464408377 +0000 UTC m=+0.243189063 container attach 3e079d9a0376ba8f8b03cbc56cff7161dfedf70a5b550f6f17bc8eb7742a90ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_tharp, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:15:44 compute-0 python3.9[283866]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 18:15:44 compute-0 systemd[1]: Reloading.
Dec  3 18:15:44 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 18:15:44 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 18:15:45 compute-0 systemd[1]: Starting Create netns directory...
Dec  3 18:15:45 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec  3 18:15:45 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec  3 18:15:45 compute-0 systemd[1]: Finished Create netns directory.
Dec  3 18:15:45 compute-0 sleepy_tharp[283876]: {
Dec  3 18:15:45 compute-0 sleepy_tharp[283876]:    "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
Dec  3 18:15:45 compute-0 sleepy_tharp[283876]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:15:45 compute-0 sleepy_tharp[283876]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 18:15:45 compute-0 sleepy_tharp[283876]:        "osd_id": 1,
Dec  3 18:15:45 compute-0 sleepy_tharp[283876]:        "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:15:45 compute-0 sleepy_tharp[283876]:        "type": "bluestore"
Dec  3 18:15:45 compute-0 sleepy_tharp[283876]:    },
Dec  3 18:15:45 compute-0 sleepy_tharp[283876]:    "2abec9de-afba-437e-9a17-384a1dd8cd50": {
Dec  3 18:15:45 compute-0 sleepy_tharp[283876]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:15:45 compute-0 sleepy_tharp[283876]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 18:15:45 compute-0 sleepy_tharp[283876]:        "osd_id": 2,
Dec  3 18:15:45 compute-0 sleepy_tharp[283876]:        "osd_uuid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:15:45 compute-0 sleepy_tharp[283876]:        "type": "bluestore"
Dec  3 18:15:45 compute-0 sleepy_tharp[283876]:    },
Dec  3 18:15:45 compute-0 sleepy_tharp[283876]:    "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": {
Dec  3 18:15:45 compute-0 sleepy_tharp[283876]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:15:45 compute-0 sleepy_tharp[283876]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 18:15:45 compute-0 sleepy_tharp[283876]:        "osd_id": 0,
Dec  3 18:15:45 compute-0 sleepy_tharp[283876]:        "osd_uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:15:45 compute-0 sleepy_tharp[283876]:        "type": "bluestore"
Dec  3 18:15:45 compute-0 sleepy_tharp[283876]:    }
Dec  3 18:15:45 compute-0 sleepy_tharp[283876]: }
Dec  3 18:15:45 compute-0 systemd[1]: libpod-3e079d9a0376ba8f8b03cbc56cff7161dfedf70a5b550f6f17bc8eb7742a90ad.scope: Deactivated successfully.
Dec  3 18:15:45 compute-0 systemd[1]: libpod-3e079d9a0376ba8f8b03cbc56cff7161dfedf70a5b550f6f17bc8eb7742a90ad.scope: Consumed 1.055s CPU time.
Dec  3 18:15:45 compute-0 podman[283856]: 2025-12-03 18:15:45.523330465 +0000 UTC m=+1.302111101 container died 3e079d9a0376ba8f8b03cbc56cff7161dfedf70a5b550f6f17bc8eb7742a90ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_tharp, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:15:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b6f67a1c191ecc798e4772930fe3c5fda322adf0f6892b4b1ac129952bad9bf-merged.mount: Deactivated successfully.
Dec  3 18:15:45 compute-0 podman[283856]: 2025-12-03 18:15:45.630697242 +0000 UTC m=+1.409477828 container remove 3e079d9a0376ba8f8b03cbc56cff7161dfedf70a5b550f6f17bc8eb7742a90ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_tharp, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:15:45 compute-0 systemd[1]: libpod-conmon-3e079d9a0376ba8f8b03cbc56cff7161dfedf70a5b550f6f17bc8eb7742a90ad.scope: Deactivated successfully.
Dec  3 18:15:45 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:15:45 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:15:45 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:15:45 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:15:45 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev e663a8ad-9fc1-4ebb-9d96-f53f2320f403 does not exist
Dec  3 18:15:45 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 728d9021-4f79-4f17-b7ee-1841c42e35b2 does not exist
Dec  3 18:15:46 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v488: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:15:46 compute-0 python3.9[284160]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:15:46 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:15:46 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:15:47 compute-0 python3.9[284312]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:15:48 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v489: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:15:48 compute-0 python3.9[284435]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764785746.7193165-333-224284134211515/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:15:48 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:15:49 compute-0 python3.9[284587]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:15:50 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v490: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:15:51 compute-0 python3.9[284740]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:15:52 compute-0 python3.9[284863]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764785750.497632-358-216121287180108/.source.json _original_basename=.p7cwq6vf follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:15:52 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v491: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:15:53 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:15:53 compute-0 python3.9[285015]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:15:54 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v492: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:15:56 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v493: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:15:56 compute-0 python3.9[285442]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Dec  3 18:15:57 compute-0 python3.9[285594]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  3 18:15:58 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v494: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:15:58 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:15:59 compute-0 python3.9[285746]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec  3 18:15:59 compute-0 podman[158200]: time="2025-12-03T18:15:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:15:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:15:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32819 "" "Go-http-client/1.1"
Dec  3 18:15:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:15:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6835 "" "Go-http-client/1.1"
Dec  3 18:16:00 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v495: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:16:01 compute-0 openstack_network_exporter[160319]: ERROR   18:16:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:16:01 compute-0 openstack_network_exporter[160319]: ERROR   18:16:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:16:01 compute-0 openstack_network_exporter[160319]: ERROR   18:16:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:16:01 compute-0 openstack_network_exporter[160319]: ERROR   18:16:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:16:01 compute-0 openstack_network_exporter[160319]: ERROR   18:16:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:16:01 compute-0 python3[285923]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec  3 18:16:01 compute-0 podman[285949]: 2025-12-03 18:16:01.948565491 +0000 UTC m=+0.099351197 container health_status 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., config_id=edpm, vcs-type=git, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, container_name=openstack_network_exporter, managed_by=edpm_ansible, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public)
Dec  3 18:16:01 compute-0 podman[285968]: 2025-12-03 18:16:01.955409913 +0000 UTC m=+0.081751008 container health_status ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, distribution-scope=public, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, name=ubi9, release-0.7.12=, vcs-type=git, container_name=kepler)
Dec  3 18:16:01 compute-0 podman[285948]: 2025-12-03 18:16:01.957759189 +0000 UTC m=+0.113107225 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  3 18:16:01 compute-0 podman[285961]: 2025-12-03 18:16:01.965109804 +0000 UTC m=+0.105828001 container health_status f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  3 18:16:01 compute-0 podman[285951]: 2025-12-03 18:16:01.970731168 +0000 UTC m=+0.111441495 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2)
Dec  3 18:16:01 compute-0 podman[285950]: 2025-12-03 18:16:01.981838562 +0000 UTC m=+0.135108898 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec  3 18:16:02 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v496: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:16:03 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:16:04 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v497: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:16:06 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v498: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:16:08 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v499: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:16:08 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:16:10 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v500: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 37 op/s
Dec  3 18:16:12 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v501: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 56 op/s
Dec  3 18:16:13 compute-0 podman[285935]: 2025-12-03 18:16:13.181318313 +0000 UTC m=+11.537181204 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  3 18:16:13 compute-0 podman[286159]: 2025-12-03 18:16:13.390077455 +0000 UTC m=+0.057722156 container create eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  3 18:16:13 compute-0 podman[286159]: 2025-12-03 18:16:13.359190879 +0000 UTC m=+0.026835600 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  3 18:16:13 compute-0 python3[285923]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  3 18:16:13 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:16:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_18:16:13
Dec  3 18:16:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 18:16:13 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 18:16:13 compute-0 ceph-mgr[193091]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'backups', 'images', 'vms', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.log', 'default.rgw.meta', 'volumes']
Dec  3 18:16:13 compute-0 ceph-mgr[193091]: [balancer INFO root] prepared 0/10 changes
Dec  3 18:16:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:16:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:16:13 compute-0 podman[286221]: 2025-12-03 18:16:13.920687951 +0000 UTC m=+0.087271299 container health_status 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 18:16:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:16:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:16:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:16:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:16:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 18:16:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:16:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 18:16:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:16:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:16:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:16:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:16:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:16:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:16:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:16:14 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v502: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  3 18:16:14 compute-0 python3.9[286367]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 18:16:15 compute-0 python3.9[286521]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:16:15 compute-0 python3.9[286597]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 18:16:16 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v503: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  3 18:16:16 compute-0 python3.9[286748]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764785776.1191678-446-207655743741804/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:16:17 compute-0 python3.9[286824]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  3 18:16:17 compute-0 systemd[1]: Reloading.
Dec  3 18:16:17 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 18:16:17 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 18:16:18 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v504: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  3 18:16:19 compute-0 python3.9[286936]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 18:16:19 compute-0 systemd[1]: Reloading.
Dec  3 18:16:19 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:16:19 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 18:16:19 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
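
The two generator messages repeat on every daemon-reload: systemd-rc-local-generator skips /etc/rc.d/rc.local because it is not marked executable, and systemd-sysv-generator keeps synthesizing a compatibility unit for the legacy network init script. The SysV generator's check amounts to "does a native unit with the same name exist"; a simplified sketch of that logic, using the RHEL 9 paths seen on this host:

    # Simplified version of the systemd-sysv-generator check: list SysV
    # init scripts that have no native .service unit of the same name.
    import os

    SYSV_DIR = "/etc/rc.d/init.d"
    UNIT_DIRS = ("/etc/systemd/system", "/usr/lib/systemd/system")

    def sysv_without_native_unit():
        if not os.path.isdir(SYSV_DIR):
            return []
        return [
            name for name in os.listdir(SYSV_DIR)
            if not any(os.path.exists(os.path.join(d, name + ".service"))
                       for d in UNIT_DIRS)
        ]

    print(sysv_without_native_unit())  # on this host: ['network']
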
Dec  3 18:16:20 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v505: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  3 18:16:20 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Dec  3 18:16:20 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:16:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90d49bd33e304cfa171f6c56d838ead4f70fe1b610bcf3aa9f5e94a53d99ab29/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Dec  3 18:16:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90d49bd33e304cfa171f6c56d838ead4f70fe1b610bcf3aa9f5e94a53d99ab29/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  3 18:16:20 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b.
Dec  3 18:16:20 compute-0 podman[286980]: 2025-12-03 18:16:20.594295231 +0000 UTC m=+0.378322751 container init eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Dec  3 18:16:20 compute-0 ovn_metadata_agent[286994]: + sudo -E kolla_set_configs
Dec  3 18:16:20 compute-0 podman[286980]: 2025-12-03 18:16:20.63876355 +0000 UTC m=+0.422791030 container start eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, tcib_managed=true)
Dec  3 18:16:20 compute-0 edpm-start-podman-container[286980]: ovn_metadata_agent
Dec  3 18:16:20 compute-0 podman[287001]: 2025-12-03 18:16:20.721962592 +0000 UTC m=+0.071417752 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec  3 18:16:20 compute-0 edpm-start-podman-container[286979]: Creating additional drop-in dependency for "ovn_metadata_agent" (eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b)
Dec  3 18:16:20 compute-0 ovn_metadata_agent[286994]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  3 18:16:20 compute-0 ovn_metadata_agent[286994]: INFO:__main__:Validating config file
Dec  3 18:16:20 compute-0 ovn_metadata_agent[286994]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  3 18:16:20 compute-0 ovn_metadata_agent[286994]: INFO:__main__:Copying service configuration files
Dec  3 18:16:20 compute-0 ovn_metadata_agent[286994]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Dec  3 18:16:20 compute-0 ovn_metadata_agent[286994]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Dec  3 18:16:20 compute-0 ovn_metadata_agent[286994]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Dec  3 18:16:20 compute-0 ovn_metadata_agent[286994]: INFO:__main__:Writing out command to execute
Dec  3 18:16:20 compute-0 ovn_metadata_agent[286994]: INFO:__main__:Setting permission for /var/lib/neutron
Dec  3 18:16:20 compute-0 ovn_metadata_agent[286994]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Dec  3 18:16:20 compute-0 ovn_metadata_agent[286994]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Dec  3 18:16:20 compute-0 ovn_metadata_agent[286994]: INFO:__main__:Setting permission for /var/lib/neutron/external
Dec  3 18:16:20 compute-0 ovn_metadata_agent[286994]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Dec  3 18:16:20 compute-0 ovn_metadata_agent[286994]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Dec  3 18:16:20 compute-0 ovn_metadata_agent[286994]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Dec  3 18:16:20 compute-0 systemd[1]: Reloading.
Dec  3 18:16:20 compute-0 ovn_metadata_agent[286994]: ++ cat /run_command
Dec  3 18:16:20 compute-0 ovn_metadata_agent[286994]: + CMD=neutron-ovn-metadata-agent
Dec  3 18:16:20 compute-0 ovn_metadata_agent[286994]: + ARGS=
Dec  3 18:16:20 compute-0 ovn_metadata_agent[286994]: + sudo kolla_copy_cacerts
Dec  3 18:16:20 compute-0 ovn_metadata_agent[286994]: + [[ ! -n '' ]]
Dec  3 18:16:20 compute-0 ovn_metadata_agent[286994]: + . kolla_extend_start
Dec  3 18:16:20 compute-0 ovn_metadata_agent[286994]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Dec  3 18:16:20 compute-0 ovn_metadata_agent[286994]: Running command: 'neutron-ovn-metadata-agent'
Dec  3 18:16:20 compute-0 ovn_metadata_agent[286994]: + umask 0022
Dec  3 18:16:20 compute-0 ovn_metadata_agent[286994]: + exec neutron-ovn-metadata-agent
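
The xtrace above is the kolla startup wrapper at work: kolla_set_configs loads /var/lib/kolla/config_files/config.json, copies each configured file into place, fixes permissions, and writes the service command to /run_command; the wrapper then cats that file and execs neutron-ovn-metadata-agent. A condensed sketch of the COPY_ALWAYS flow (the real kolla_set_configs also handles globs, ownership, and merge strategies; this keeps only the skeleton visible in the log):

    # Condensed COPY_ALWAYS flow, mirroring the INFO lines above.
    import json
    import os
    import shutil

    def set_configs(path="/var/lib/kolla/config_files/config.json"):
        with open(path) as f:                      # "Loading config file at ..."
            cfg = json.load(f)
        for item in cfg.get("config_files", []):
            dest = item["dest"]
            if os.path.exists(dest):
                os.remove(dest)                    # "Deleting <dest>"
            shutil.copy(item["source"], dest)      # "Copying <src> to <dest>"
            os.chmod(dest, int(item.get("perm", "0644"), 8))  # "Setting permission"
        with open("/run_command", "w") as f:       # "Writing out command to execute"
            f.write(cfg["command"])
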
Dec  3 18:16:20 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 18:16:20 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 18:16:21 compute-0 systemd[1]: Started ovn_metadata_agent container.
Dec  3 18:16:21 compute-0 systemd[1]: session-54.scope: Deactivated successfully.
Dec  3 18:16:21 compute-0 systemd[1]: session-54.scope: Consumed 1min 25.257s CPU time.
Dec  3 18:16:21 compute-0 systemd-logind[784]: Session 54 logged out. Waiting for processes to exit.
Dec  3 18:16:21 compute-0 systemd-logind[784]: Removed session 54.
Dec  3 18:16:22 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v506: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 0 B/s wr, 22 op/s
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.231 286999 INFO neutron.common.config [-] Logging enabled!#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.233 286999 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.233 286999 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
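
Everything from the row of asterisks below through the end of the option listing is produced by a single oslo.config call: because debug is enabled, the agent invokes log_opt_values(), which walks every registered option group and logs each resolved value at DEBUG (hence the cfg.py:2589-2609 source references on every line). A minimal reproduction, registering two options that mirror entries in the dump; the registration here is illustrative, since neutron's real options are defined across its own modules:

    # Minimal reproduction of the option dump using the same oslo.config
    # API neutron calls (ConfigOpts.log_opt_values in oslo_config/cfg.py).
    import logging
    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    CONF = cfg.CONF
    CONF.register_opts([
        cfg.IntOpt("agent_down_time", default=75),      # mirrors the dump
        cfg.BoolOpt("filter_validation", default=True),  # mirrors the dump
    ])
    CONF([])  # resolve defaults against an empty command line
    CONF.log_opt_values(logging.getLogger(__name__), logging.DEBUG)
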
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.234 286999 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.234 286999 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.235 286999 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.235 286999 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.235 286999 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.236 286999 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.236 286999 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.236 286999 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.237 286999 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.237 286999 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.237 286999 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.238 286999 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.238 286999 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.238 286999 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.239 286999 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.239 286999 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.239 286999 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.239 286999 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.240 286999 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.240 286999 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.240 286999 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.241 286999 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.241 286999 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.241 286999 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.242 286999 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.242 286999 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.242 286999 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.243 286999 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.243 286999 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.243 286999 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.244 286999 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.244 286999 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.244 286999 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.244 286999 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.245 286999 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.245 286999 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.245 286999 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.246 286999 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.246 286999 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.246 286999 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.247 286999 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.247 286999 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.247 286999 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.248 286999 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.248 286999 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.248 286999 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.248 286999 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.249 286999 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.249 286999 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.249 286999 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.250 286999 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.250 286999 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.250 286999 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.251 286999 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.251 286999 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.251 286999 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.252 286999 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.252 286999 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.252 286999 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.252 286999 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.253 286999 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.253 286999 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.253 286999 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.254 286999 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.254 286999 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.254 286999 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.255 286999 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.255 286999 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.255 286999 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.256 286999 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.256 286999 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.256 286999 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.256 286999 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.257 286999 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.257 286999 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.257 286999 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.258 286999 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.258 286999 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.258 286999 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.259 286999 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.259 286999 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.259 286999 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.259 286999 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.260 286999 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.260 286999 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.260 286999 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.261 286999 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.261 286999 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.261 286999 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.261 286999 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.262 286999 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.262 286999 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.263 286999 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.263 286999 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.263 286999 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.264 286999 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.264 286999 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.264 286999 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.264 286999 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.264 286999 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.264 286999 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.264 286999 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.264 286999 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.265 286999 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.265 286999 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.265 286999 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.265 286999 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.265 286999 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.265 286999 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.265 286999 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.266 286999 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.266 286999 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.266 286999 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.266 286999 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.266 286999 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.266 286999 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.266 286999 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.266 286999 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.267 286999 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.267 286999 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.267 286999 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.267 286999 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.267 286999 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.268 286999 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.268 286999 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.268 286999 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.268 286999 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.268 286999 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.268 286999 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.269 286999 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.269 286999 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.269 286999 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.269 286999 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.269 286999 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.269 286999 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.270 286999 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.270 286999 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.270 286999 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.270 286999 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.270 286999 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.270 286999 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.270 286999 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.271 286999 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.271 286999 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.271 286999 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.271 286999 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.271 286999 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.271 286999 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.271 286999 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.272 286999 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.272 286999 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.272 286999 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.272 286999 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.272 286999 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.272 286999 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.273 286999 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.273 286999 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.273 286999 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.273 286999 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.273 286999 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.273 286999 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.274 286999 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.274 286999 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.274 286999 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.274 286999 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.274 286999 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.274 286999 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.274 286999 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.275 286999 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.275 286999 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.275 286999 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.275 286999 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.275 286999 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.275 286999 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.275 286999 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.276 286999 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.276 286999 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.276 286999 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.276 286999 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.276 286999 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.276 286999 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.276 286999 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.276 286999 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.277 286999 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.277 286999 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.277 286999 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.277 286999 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.277 286999 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.277 286999 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.277 286999 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.277 286999 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.278 286999 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.278 286999 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.278 286999 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.278 286999 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.278 286999 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.278 286999 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.278 286999 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.278 286999 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.279 286999 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.279 286999 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.279 286999 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.279 286999 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.279 286999 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.279 286999 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.279 286999 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.279 286999 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.280 286999 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.280 286999 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.280 286999 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.280 286999 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.280 286999 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.280 286999 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.280 286999 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.281 286999 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.281 286999 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.281 286999 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.281 286999 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.281 286999 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.281 286999 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.282 286999 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.282 286999 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.282 286999 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.282 286999 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.282 286999 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.282 286999 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.282 286999 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.282 286999 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.283 286999 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.283 286999 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.283 286999 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.283 286999 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.283 286999 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.283 286999 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.283 286999 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.284 286999 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.284 286999 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.284 286999 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.284 286999 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.284 286999 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.284 286999 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.285 286999 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.285 286999 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.285 286999 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.285 286999 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.285 286999 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.286 286999 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.286 286999 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.286 286999 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.286 286999 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.286 286999 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.286 286999 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.287 286999 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.287 286999 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.287 286999 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.287 286999 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.287 286999 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.287 286999 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.288 286999 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.288 286999 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.288 286999 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.288 286999 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.288 286999 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.288 286999 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.289 286999 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.289 286999 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.289 286999 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.289 286999 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.289 286999 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.289 286999 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.290 286999 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.290 286999 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.290 286999 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.290 286999 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.290 286999 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.290 286999 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.291 286999 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.291 286999 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.291 286999 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.291 286999 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.291 286999 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.292 286999 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.292 286999 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.292 286999 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.292 286999 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.292 286999 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.292 286999 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.293 286999 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.293 286999 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.293 286999 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.293 286999 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.293 286999 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.294 286999 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.294 286999 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.294 286999 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.294 286999 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.294 286999 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.294 286999 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.295 286999 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.295 286999 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.295 286999 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
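The block ending here, bracketed by the asterisk rows, is oslo.config's standard option dump: once the agent has parsed its configuration, it calls CONF.log_opt_values() at DEBUG level, which prints every registered option, masking secrets (note transport_url = ****). A minimal sketch of the same mechanism, with hypothetical options rather than neutron's real ones:

    import logging
    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    # hypothetical options, only so the dump has something to show
    cfg.CONF.register_opts([
        cfg.IntOpt('agent_down_time', default=75),
        cfg.StrOpt('transport_url', secret=True),  # secret=True prints ****
    ])
    cfg.CONF([])  # parse an empty argv

    # emits the 'option = value ... log_opt_values cfg.py:NNNN' lines seen above
    cfg.CONF.log_opt_values(LOG, logging.DEBUG)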
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.306 286999 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.306 286999 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.306 286999 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.307 286999 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.307 286999 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.321 286999 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 1ac9fd0d-196b-4ea8-9a9a-8aa831092805 (UUID: 1ac9fd0d-196b-4ea8-9a9a-8aa831092805) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.349 286999 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.350 286999 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.350 286999 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.350 286999 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.354 286999 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.361 286999 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.372 286999 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '1ac9fd0d-196b-4ea8-9a9a-8aa831092805'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f81e3e96760>], external_ids={}, name=1ac9fd0d-196b-4ea8-9a9a-8aa831092805, nb_cfg_timestamp=1764784272055, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.374 286999 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f81e3e96ee0>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.375 286999 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.375 286999 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.375 286999 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.376 286999 INFO oslo_service.service [-] Starting 1 workers
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.381 286999 DEBUG oslo_service.service [-] Started child 287105 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.385 286999 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpopg6plpg/privsep.sock']
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.388 287105 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-8362694'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.416 287105 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.416 287105 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.416 287105 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.420 287105 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.427 287105 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Dec  3 18:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.434 287105 INFO eventlet.wsgi.server [-] (287105) wsgi starting up on http:/var/lib/neutron/metadata_proxy
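The odd-looking URL in the line above is eventlet reporting a UNIX domain socket rather than a TCP endpoint: the metadata worker listens on the filesystem path /var/lib/neutron/metadata_proxy, to which the per-network proxy processes forward instance requests. A hedged probe of such a socket (the real handler expects OVN-specific headers, so a bare GET will normally just draw an error response; this only demonstrates the transport):

    import socket

    # socket path taken from the log line above
    SOCK = '/var/lib/neutron/metadata_proxy'

    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect(SOCK)
    s.sendall(b'GET / HTTP/1.1\r\n'
              b'Host: metadata\r\n'
              b'Connection: close\r\n\r\n')
    # read whatever the WSGI app answers (likely 400/404 without OVN headers)
    print(s.makefile('rb').read().decode(errors='replace'))
    s.close()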
Dec  3 18:16:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 18:16:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:16:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 18:16:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:16:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:16:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:16:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:16:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:16:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:16:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:16:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:16:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:16:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 18:16:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:16:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:16:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:16:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 18:16:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:16:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 18:16:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:16:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:16:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:16:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
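The pg_autoscaler lines above are internally consistent: each "pg target" equals the pool's share of raw space times its bias times a cluster-wide PG budget. Assuming the default mon_target_pg_per_osd = 100 and three OSDs (consistent with the ~60 GiB total in the pgmap line below), the budget is 300, which reproduces the logged numbers, e.g. 7.185749983720779e-06 x 1.0 x 300 ~ 0.0021557 for '.mgr' and 5.087256625643029e-07 x 4.0 x 300 ~ 0.00061047 for 'cephfs.cephfs.meta'. A rough sketch of that arithmetic (OSD count and per-OSD target are inferred, and the final quantization to a power of two with its change thresholds is more involved than shown):

    # inferred: 3 OSDs, default mon_target_pg_per_osd = 100
    PG_BUDGET = 3 * 100

    def raw_pg_target(usage_ratio, bias):
        """Reproduce the 'pg target' column of the pg_autoscaler log lines."""
        return usage_ratio * bias * PG_BUDGET

    print(raw_pg_target(7.185749983720779e-06, 1.0))   # ~0.0021557  -> quantized to 1
    print(raw_pg_target(5.087256625643029e-07, 4.0))   # ~0.00061047 -> quantized to 16
    print(raw_pg_target(2.1620840658982875e-06, 1.0))  # ~0.00064863 -> stays at 32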
Dec  3 18:16:24 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:24.022 286999 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec  3 18:16:24 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:24.023 286999 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpopg6plpg/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec  3 18:16:24 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.891 287110 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec  3 18:16:24 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.901 287110 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec  3 18:16:24 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.905 287110 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Dec  3 18:16:24 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:23.905 287110 INFO oslo.privsep.daemon [-] privsep daemon running as pid 287110
Dec  3 18:16:24 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:24.025 287110 DEBUG oslo.privsep.daemon [-] privsep: reply[235e2947-b4d8-429b-9402-960a41728dba]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
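The capability reported by the daemon matches the config dump earlier: privsep_namespace.capabilities = [21] is CAP_SYS_ADMIN (the [12] in the conntrack and link contexts is CAP_NET_ADMIN), and the helper command logged at 18:16:23.385 is how this daemon was forked via sudo and rootwrap. A sketch of how such a context is typically declared with oslo.privsep, under the assumption that PrivContext accepts these keyword arguments (names illustrative; neutron's real contexts live in neutron.privileged):

    from oslo_privsep import capabilities, priv_context

    # illustrative context mirroring the [privsep_namespace] section above
    namespace_cmd = priv_context.PrivContext(
        __name__,
        cfg_section='privsep_namespace',
        pfile=__file__,
        capabilities=[capabilities.CAP_SYS_ADMIN],  # numeric value 21 in the dump
    )

    @namespace_cmd.entrypoint
    def run_privileged(arg):
        # executes inside the forked privsep daemon with CAP_SYS_ADMIN
        ...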
Dec  3 18:16:24 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v507: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 0 B/s wr, 3 op/s
Dec  3 18:16:24 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:24.523 287110 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:16:24 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:24.523 287110 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:16:24 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:24.523 287110 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:16:24 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.080 287110 DEBUG oslo.privsep.daemon [-] privsep: reply[12140c84-b290-4bb6-8fdf-2a1d91402783]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.084 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=1ac9fd0d-196b-4ea8-9a9a-8aa831092805, column=external_ids, values=({'neutron:ovn-metadata-id': 'c9036f91-163d-5938-bfb1-29d543ff3a9a'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.144 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1ac9fd0d-196b-4ea8-9a9a-8aa831092805, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.184 286999 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.185 286999 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.185 286999 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.185 286999 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.186 286999 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.186 286999 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.186 286999 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.187 286999 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.187 286999 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.188 286999 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.188 286999 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.188 286999 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.189 286999 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.189 286999 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.189 286999 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.190 286999 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.191 286999 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.191 286999 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.191 286999 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.192 286999 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.192 286999 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.192 286999 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.193 286999 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.193 286999 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.193 286999 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.194 286999 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.195 286999 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.195 286999 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.195 286999 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.196 286999 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.196 286999 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.196 286999 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.197 286999 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.197 286999 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.198 286999 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.198 286999 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.198 286999 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.199 286999 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.199 286999 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.200 286999 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.200 286999 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.200 286999 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.201 286999 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.201 286999 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.201 286999 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.202 286999 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.202 286999 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.203 286999 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.203 286999 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.203 286999 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.204 286999 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.204 286999 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.204 286999 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.205 286999 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.205 286999 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.205 286999 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.206 286999 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.206 286999 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.207 286999 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.207 286999 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.207 286999 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.208 286999 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.208 286999 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.208 286999 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.209 286999 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.209 286999 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.209 286999 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.210 286999 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.210 286999 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.211 286999 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.211 286999 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.211 286999 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.212 286999 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.212 286999 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.212 286999 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.213 286999 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.213 286999 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.214 286999 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.214 286999 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.214 286999 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.215 286999 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.215 286999 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.215 286999 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.216 286999 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.216 286999 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.216 286999 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.216 286999 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.216 286999 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.216 286999 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.217 286999 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.217 286999 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.217 286999 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.217 286999 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.217 286999 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.218 286999 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.218 286999 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.218 286999 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.218 286999 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.219 286999 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.219 286999 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.219 286999 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.219 286999 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.219 286999 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.219 286999 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.220 286999 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.220 286999 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.220 286999 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.220 286999 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.220 286999 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.221 286999 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.221 286999 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.221 286999 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.221 286999 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.221 286999 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.222 286999 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.222 286999 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.222 286999 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.222 286999 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.222 286999 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.223 286999 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.223 286999 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.223 286999 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.223 286999 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.223 286999 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.224 286999 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.224 286999 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.224 286999 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.224 286999 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.225 286999 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.225 286999 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.225 286999 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.225 286999 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.225 286999 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.226 286999 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.226 286999 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.226 286999 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.226 286999 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.227 286999 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.227 286999 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.227 286999 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.227 286999 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.228 286999 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.228 286999 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.228 286999 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.228 286999 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.228 286999 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.228 286999 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.229 286999 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.229 286999 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.229 286999 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.229 286999 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.229 286999 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.229 286999 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.230 286999 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.230 286999 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.230 286999 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.230 286999 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.230 286999 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.230 286999 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.231 286999 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.231 286999 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.231 286999 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.231 286999 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.231 286999 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.231 286999 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.232 286999 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.232 286999 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.232 286999 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.232 286999 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.232 286999 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.232 286999 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.233 286999 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.233 286999 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.233 286999 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.233 286999 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.233 286999 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.234 286999 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.234 286999 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.234 286999 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.234 286999 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.234 286999 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.235 286999 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.235 286999 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.235 286999 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.235 286999 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.235 286999 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.236 286999 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.236 286999 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.236 286999 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.236 286999 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.236 286999 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.236 286999 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.237 286999 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.237 286999 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.237 286999 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.237 286999 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.237 286999 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.238 286999 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.238 286999 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.238 286999 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.238 286999 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.238 286999 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.239 286999 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.239 286999 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.239 286999 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.239 286999 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.239 286999 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.239 286999 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.240 286999 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.240 286999 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.240 286999 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.240 286999 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.240 286999 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.240 286999 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.241 286999 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.241 286999 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.241 286999 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.241 286999 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.241 286999 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.241 286999 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.242 286999 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.242 286999 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.242 286999 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.242 286999 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.242 286999 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.243 286999 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.243 286999 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.243 286999 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.243 286999 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.243 286999 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.243 286999 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.244 286999 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.244 286999 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.244 286999 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.244 286999 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.244 286999 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.244 286999 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.245 286999 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.245 286999 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.245 286999 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.245 286999 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.245 286999 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.245 286999 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.246 286999 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.246 286999 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.246 286999 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.246 286999 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.246 286999 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.247 286999 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.247 286999 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.247 286999 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.247 286999 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.247 286999 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.247 286999 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.248 286999 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.248 286999 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.248 286999 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.248 286999 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.248 286999 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.249 286999 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.249 286999 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.249 286999 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.249 286999 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.249 286999 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.249 286999 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.250 286999 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.250 286999 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.250 286999 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.250 286999 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.251 286999 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.251 286999 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.251 286999 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.251 286999 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.251 286999 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.251 286999 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.252 286999 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.252 286999 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.252 286999 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.252 286999 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.252 286999 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.253 286999 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.253 286999 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.253 286999 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.253 286999 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.253 286999 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.254 286999 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.254 286999 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.254 286999 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.254 286999 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.254 286999 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.255 286999 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.255 286999 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.255 286999 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.255 286999 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.255 286999 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.255 286999 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.256 286999 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.256 286999 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.256 286999 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.256 286999 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.256 286999 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.256 286999 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.257 286999 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:16:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:16:25.257 286999 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
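The block above is oslo.config's startup dump of every resolved option, produced by the agent's `log_opt_values` call (cfg.py:2609 per line, terminated by the asterisk row from cfg.py:2613; secrets such as transport_url are masked). A minimal, self-contained sketch of how any oslo.config-based service produces such a dump; the option names and defaults below are illustrative registrations, not the agent's actual code:

```python
# Minimal sketch, assuming oslo.config is installed. The [ovn] options here
# are hypothetical stand-ins mirroring two entries from the dump above.
import logging

from oslo_config import cfg

CONF = cfg.CONF
CONF.register_opts(
    [
        cfg.StrOpt("ovn_sb_connection",
                   default="ssl:ovsdbserver-sb.openstack.svc:6642"),
        cfg.IntOpt("ovsdb_connection_timeout", default=180),
    ],
    group="ovn",
)

logging.basicConfig(level=logging.DEBUG)
CONF([], project="neutron")  # parse CLI/config files (none here)

# Emits one DEBUG line per option and a closing row of asterisks --
# the same shape as the ovn_metadata_agent dump above.
CONF.log_opt_values(logging.getLogger(__name__), logging.DEBUG)
```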
Dec  3 18:16:26 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v508: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:16:26 compute-0 systemd-logind[784]: New session 55 of user zuul.
Dec  3 18:16:26 compute-0 systemd[1]: Started Session 55 of User zuul.
Dec  3 18:16:28 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v509: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:16:28 compute-0 python3.9[287268]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 18:16:29 compute-0 python3.9[287424]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:16:29 compute-0 podman[158200]: time="2025-12-03T18:16:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:16:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:16:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35731 "" "Go-http-client/1.1"
Dec  3 18:16:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:16:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7284 "" "Go-http-client/1.1"
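The two podman API lines above show the podman service answering the `ansible.legacy.command` task that checks whether a `nova_virtlogd` container exists. A hedged sketch of the same check done directly on the host, assuming `podman` is on PATH (arguments copied from the logged invocation):

```python
# Sketch: does a container named exactly "nova_virtlogd" exist (any state)?
import subprocess

result = subprocess.run(
    ["podman", "ps", "-a", "--filter", "name=^nova_virtlogd$",
     "--format", "{{.Names}}"],
    capture_output=True, text=True, check=True,
)
exists = "nova_virtlogd" in result.stdout.split()
print("nova_virtlogd container present:", exists)
```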
Dec  3 18:16:29 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:16:30 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v510: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:16:31 compute-0 python3.9[287586]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  3 18:16:31 compute-0 systemd[1]: Reloading.
Dec  3 18:16:31 compute-0 openstack_network_exporter[160319]: ERROR   18:16:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:16:31 compute-0 openstack_network_exporter[160319]: ERROR   18:16:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:16:31 compute-0 openstack_network_exporter[160319]: ERROR   18:16:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:16:31 compute-0 openstack_network_exporter[160319]: 
Dec  3 18:16:31 compute-0 openstack_network_exporter[160319]: ERROR   18:16:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:16:31 compute-0 openstack_network_exporter[160319]: ERROR   18:16:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:16:31 compute-0 openstack_network_exporter[160319]: 
Dec  3 18:16:31 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 18:16:31 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 18:16:32 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v511: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:16:32 compute-0 podman[287749]: 2025-12-03 18:16:32.747854965 +0000 UTC m=+0.099570742 container health_status 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, distribution-scope=public, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, architecture=x86_64, version=9.6)
Dec  3 18:16:32 compute-0 podman[287756]: 2025-12-03 18:16:32.763815155 +0000 UTC m=+0.125604262 container health_status f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  3 18:16:32 compute-0 podman[287745]: 2025-12-03 18:16:32.769823898 +0000 UTC m=+0.126085584 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  3 18:16:32 compute-0 podman[287752]: 2025-12-03 18:16:32.770436713 +0000 UTC m=+0.132026695 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec  3 18:16:32 compute-0 podman[287758]: 2025-12-03 18:16:32.772583154 +0000 UTC m=+0.123172495 container health_status ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, release-0.7.12=, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, managed_by=edpm_ansible, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, version=9.4, distribution-scope=public)
Dec  3 18:16:32 compute-0 podman[287754]: 2025-12-03 18:16:32.774198783 +0000 UTC m=+0.131718358 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=edpm)
Dec  3 18:16:32 compute-0 python3.9[287813]: ansible-ansible.builtin.service_facts Invoked
Dec  3 18:16:32 compute-0 network[287904]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  3 18:16:32 compute-0 network[287905]: 'network-scripts' will be removed from distribution in near future.
Dec  3 18:16:32 compute-0 network[287906]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  3 18:16:34 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v512: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:16:34 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:16:36 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v513: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:16:38 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v514: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:16:38 compute-0 python3.9[288177]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 18:16:39 compute-0 python3.9[288330]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 18:16:39 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:16:40 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v515: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:16:40 compute-0 python3.9[288483]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 18:16:41 compute-0 python3.9[288636]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 18:16:42 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v516: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:16:42 compute-0 python3.9[288789]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 18:16:43 compute-0 python3.9[288942]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 18:16:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:16:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:16:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:16:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:16:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:16:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:16:44 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v517: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:16:44 compute-0 podman[289067]: 2025-12-03 18:16:44.431105279 +0000 UTC m=+0.087979476 container health_status 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 18:16:44 compute-0 python3.9[289117]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
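The `ansible.builtin.systemd_service` invocations above stop and disable each legacy tripleo_nova_* unit in turn. A sketch of the equivalent host-side loop; the unit list is copied from the logged tasks, and failures are tolerated the way `state=stopped` on an already-stopped unit would be:

```python
# Sketch of what the repeated systemd_service tasks amount to on the host.
import subprocess

UNITS = [
    "tripleo_nova_libvirt.target",
    "tripleo_nova_virtlogd_wrapper.service",
    "tripleo_nova_virtnodedevd.service",
    "tripleo_nova_virtproxyd.service",
    "tripleo_nova_virtqemud.service",
    "tripleo_nova_virtsecretd.service",
    "tripleo_nova_virtstoraged.service",
]

for unit in UNITS:
    # Equivalent to: systemd_service name=<unit> state=stopped enabled=False
    subprocess.run(["systemctl", "stop", unit], check=False)
    subprocess.run(["systemctl", "disable", unit], check=False)
```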
Dec  3 18:16:44 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:16:46 compute-0 python3.9[289270]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:16:46 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v518: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:16:46 compute-0 python3.9[289543]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:16:47 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:16:47 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:16:47 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 18:16:47 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:16:47 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 18:16:47 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:16:47 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 84467a1d-2023-4692-92fa-347772fa842e does not exist
Dec  3 18:16:47 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev dd574587-9669-4059-b72e-3b2822f9df99 does not exist
Dec  3 18:16:47 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 3c130091-9f3d-4ca8-9f9e-c7ba559e6c7c does not exist
Dec  3 18:16:47 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 18:16:47 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 18:16:47 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 18:16:47 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:16:47 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:16:47 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:16:47 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:16:47 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:16:47 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
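The audited mon_command traffic above comes from the mgr (entity mgr.compute-0.etccde) refreshing its minimal config and keyrings. A hedged sketch of issuing the same commands from the CLI, assuming a working admin keyring on the host; exit codes only, no parsing:

```python
# Sketch: replay the mon commands seen in the audit log via the ceph CLI.
import subprocess

for cmd in (
    ["ceph", "config", "generate-minimal-conf"],
    ["ceph", "auth", "get", "client.admin"],
    ["ceph", "auth", "get", "client.bootstrap-osd"],
):
    out = subprocess.run(cmd, capture_output=True, text=True)
    print(" ".join(cmd), "-> rc", out.returncode)
```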
Dec  3 18:16:47 compute-0 python3.9[289809]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:16:48 compute-0 podman[289863]: 2025-12-03 18:16:48.069415094 +0000 UTC m=+0.066865003 container create 6b4c44afcbb729afee254e36cb318cc5c3171f19babbe503255b83ca8612ff33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_poincare, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:16:48 compute-0 systemd[1]: Started libpod-conmon-6b4c44afcbb729afee254e36cb318cc5c3171f19babbe503255b83ca8612ff33.scope.
Dec  3 18:16:48 compute-0 podman[289863]: 2025-12-03 18:16:48.044790948 +0000 UTC m=+0.042240857 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:16:48 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:16:48 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v519: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:16:48 compute-0 podman[289863]: 2025-12-03 18:16:48.181979395 +0000 UTC m=+0.179429334 container init 6b4c44afcbb729afee254e36cb318cc5c3171f19babbe503255b83ca8612ff33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_poincare, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:16:48 compute-0 podman[289863]: 2025-12-03 18:16:48.195236981 +0000 UTC m=+0.192686890 container start 6b4c44afcbb729afee254e36cb318cc5c3171f19babbe503255b83ca8612ff33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_poincare, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:16:48 compute-0 podman[289863]: 2025-12-03 18:16:48.199913462 +0000 UTC m=+0.197363381 container attach 6b4c44afcbb729afee254e36cb318cc5c3171f19babbe503255b83ca8612ff33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_poincare, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec  3 18:16:48 compute-0 frosty_poincare[289912]: 167 167
Dec  3 18:16:48 compute-0 systemd[1]: libpod-6b4c44afcbb729afee254e36cb318cc5c3171f19babbe503255b83ca8612ff33.scope: Deactivated successfully.
Dec  3 18:16:48 compute-0 podman[289863]: 2025-12-03 18:16:48.204302537 +0000 UTC m=+0.201752446 container died 6b4c44afcbb729afee254e36cb318cc5c3171f19babbe503255b83ca8612ff33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:16:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-c12fa2ae85d89ccb38ab5f1107416de12b2c3f4f1992c8caba2fca95aa3aaad1-merged.mount: Deactivated successfully.
Dec  3 18:16:48 compute-0 podman[289863]: 2025-12-03 18:16:48.265282239 +0000 UTC m=+0.262732148 container remove 6b4c44afcbb729afee254e36cb318cc5c3171f19babbe503255b83ca8612ff33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  3 18:16:48 compute-0 systemd[1]: libpod-conmon-6b4c44afcbb729afee254e36cb318cc5c3171f19babbe503255b83ca8612ff33.scope: Deactivated successfully.
Dec  3 18:16:48 compute-0 podman[290007]: 2025-12-03 18:16:48.479687895 +0000 UTC m=+0.076287779 container create f1ddf849944c532e0777459d5ea28321aaa50ea67b6cd1584e7c665f8853f02a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_lamarr, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:16:48 compute-0 podman[290007]: 2025-12-03 18:16:48.446565236 +0000 UTC m=+0.043165220 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:16:48 compute-0 systemd[1]: Started libpod-conmon-f1ddf849944c532e0777459d5ea28321aaa50ea67b6cd1584e7c665f8853f02a.scope.
Dec  3 18:16:48 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:16:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ddbf58bc81e3220f647961762b80980b4170bc6840dd760c1a9270e9055b943/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:16:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ddbf58bc81e3220f647961762b80980b4170bc6840dd760c1a9270e9055b943/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:16:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ddbf58bc81e3220f647961762b80980b4170bc6840dd760c1a9270e9055b943/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:16:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ddbf58bc81e3220f647961762b80980b4170bc6840dd760c1a9270e9055b943/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:16:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ddbf58bc81e3220f647961762b80980b4170bc6840dd760c1a9270e9055b943/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 18:16:48 compute-0 podman[290007]: 2025-12-03 18:16:48.601361342 +0000 UTC m=+0.197961246 container init f1ddf849944c532e0777459d5ea28321aaa50ea67b6cd1584e7c665f8853f02a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec  3 18:16:48 compute-0 podman[290007]: 2025-12-03 18:16:48.612294732 +0000 UTC m=+0.208894616 container start f1ddf849944c532e0777459d5ea28321aaa50ea67b6cd1584e7c665f8853f02a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:16:48 compute-0 podman[290007]: 2025-12-03 18:16:48.616568633 +0000 UTC m=+0.213168537 container attach f1ddf849944c532e0777459d5ea28321aaa50ea67b6cd1584e7c665f8853f02a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:16:48 compute-0 python3.9[290057]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:16:49 compute-0 python3.9[290218]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:16:49 compute-0 stupefied_lamarr[290053]: --> passed data devices: 0 physical, 3 LVM
Dec  3 18:16:49 compute-0 stupefied_lamarr[290053]: --> relative data size: 1.0
Dec  3 18:16:49 compute-0 stupefied_lamarr[290053]: --> All data devices are unavailable
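The stupefied_lamarr output above ("passed data devices: 0 physical, 3 LVM ... All data devices are unavailable") matches the report phase of `ceph-volume lvm batch` run inside a short-lived ceph container. A hedged sketch of a comparable dry run driven through cephadm; the device paths are illustrative only, not taken from this host:

```python
# Sketch, assuming cephadm manages this node: report-only ceph-volume batch.
import subprocess

subprocess.run(
    ["cephadm", "shell", "--", "ceph-volume", "lvm", "batch",
     "--report", "/dev/vdb", "/dev/vdc", "/dev/vdd"],  # illustrative devices
    check=False,
)
```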
Dec  3 18:16:49 compute-0 systemd[1]: libpod-f1ddf849944c532e0777459d5ea28321aaa50ea67b6cd1584e7c665f8853f02a.scope: Deactivated successfully.
Dec  3 18:16:49 compute-0 systemd[1]: libpod-f1ddf849944c532e0777459d5ea28321aaa50ea67b6cd1584e7c665f8853f02a.scope: Consumed 1.159s CPU time.
Dec  3 18:16:49 compute-0 podman[290007]: 2025-12-03 18:16:49.849558567 +0000 UTC m=+1.446158461 container died f1ddf849944c532e0777459d5ea28321aaa50ea67b6cd1584e7c665f8853f02a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec  3 18:16:49 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:16:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ddbf58bc81e3220f647961762b80980b4170bc6840dd760c1a9270e9055b943-merged.mount: Deactivated successfully.
Dec  3 18:16:49 compute-0 podman[290007]: 2025-12-03 18:16:49.944677332 +0000 UTC m=+1.541277216 container remove f1ddf849944c532e0777459d5ea28321aaa50ea67b6cd1584e7c665f8853f02a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_lamarr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  3 18:16:49 compute-0 systemd[1]: libpod-conmon-f1ddf849944c532e0777459d5ea28321aaa50ea67b6cd1584e7c665f8853f02a.scope: Deactivated successfully.
Dec  3 18:16:50 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v520: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:16:50 compute-0 python3.9[290480]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
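The `ansible.builtin.file state=absent` tasks above delete the retired tripleo unit files; together with the earlier `daemon_reload=True` task this is the usual unit-retirement pattern. A sketch under that assumption, with paths copied from the logged invocations in this section:

```python
# Sketch: remove the now-unused unit files, then reload systemd.
import os
import subprocess

UNIT_FILES = [
    "/usr/lib/systemd/system/tripleo_nova_libvirt.target",
    "/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service",
    "/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service",
    "/usr/lib/systemd/system/tripleo_nova_virtproxyd.service",
    "/usr/lib/systemd/system/tripleo_nova_virtqemud.service",
    "/usr/lib/systemd/system/tripleo_nova_virtsecretd.service",
]

for path in UNIT_FILES:
    try:
        os.unlink(path)        # file module: state=absent
    except FileNotFoundError:
        pass                   # already absent; idempotent like ansible

subprocess.run(["systemctl", "daemon-reload"], check=True)
```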
Dec  3 18:16:50 compute-0 podman[290589]: 2025-12-03 18:16:50.809917487 +0000 UTC m=+0.063838611 container create 75377ce32fc1004d767ae78ecb393e5ed26bd57609b1685d14c43851353d0f52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_gagarin, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec  3 18:16:50 compute-0 systemd[1]: Started libpod-conmon-75377ce32fc1004d767ae78ecb393e5ed26bd57609b1685d14c43851353d0f52.scope.
Dec  3 18:16:50 compute-0 podman[290589]: 2025-12-03 18:16:50.786143451 +0000 UTC m=+0.040064575 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:16:50 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:16:50 compute-0 podman[290589]: 2025-12-03 18:16:50.93263802 +0000 UTC m=+0.186559184 container init 75377ce32fc1004d767ae78ecb393e5ed26bd57609b1685d14c43851353d0f52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_gagarin, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  3 18:16:50 compute-0 podman[290589]: 2025-12-03 18:16:50.943932699 +0000 UTC m=+0.197853843 container start 75377ce32fc1004d767ae78ecb393e5ed26bd57609b1685d14c43851353d0f52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_gagarin, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec  3 18:16:50 compute-0 podman[290589]: 2025-12-03 18:16:50.949382949 +0000 UTC m=+0.203304113 container attach 75377ce32fc1004d767ae78ecb393e5ed26bd57609b1685d14c43851353d0f52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  3 18:16:50 compute-0 keen_gagarin[290657]: 167 167
Dec  3 18:16:50 compute-0 systemd[1]: libpod-75377ce32fc1004d767ae78ecb393e5ed26bd57609b1685d14c43851353d0f52.scope: Deactivated successfully.
Dec  3 18:16:50 compute-0 podman[290589]: 2025-12-03 18:16:50.950955506 +0000 UTC m=+0.204876630 container died 75377ce32fc1004d767ae78ecb393e5ed26bd57609b1685d14c43851353d0f52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_gagarin, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  3 18:16:50 compute-0 podman[290626]: 2025-12-03 18:16:50.96162825 +0000 UTC m=+0.129019473 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent)
Dec  3 18:16:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-3fdc8054c59865299b972f544b77e347820844ff446a8224ef1fa9c71585ee60-merged.mount: Deactivated successfully.
Dec  3 18:16:51 compute-0 podman[290589]: 2025-12-03 18:16:51.007022562 +0000 UTC m=+0.260943686 container remove 75377ce32fc1004d767ae78ecb393e5ed26bd57609b1685d14c43851353d0f52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_gagarin, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:16:51 compute-0 systemd[1]: libpod-conmon-75377ce32fc1004d767ae78ecb393e5ed26bd57609b1685d14c43851353d0f52.scope: Deactivated successfully.
Dec  3 18:16:51 compute-0 podman[290745]: 2025-12-03 18:16:51.197652411 +0000 UTC m=+0.063368429 container create ee3173c12c918a74a0b91ffc638001851846f0127f1ca3efdedabc6fbc819da2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_noether, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:16:51 compute-0 systemd[1]: Started libpod-conmon-ee3173c12c918a74a0b91ffc638001851846f0127f1ca3efdedabc6fbc819da2.scope.
Dec  3 18:16:51 compute-0 podman[290745]: 2025-12-03 18:16:51.170121165 +0000 UTC m=+0.035837223 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:16:51 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:16:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11908cde01b45c904da8f8450bfa3e375e1832bcbc9d6c029690913132b301fb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:16:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11908cde01b45c904da8f8450bfa3e375e1832bcbc9d6c029690913132b301fb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:16:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11908cde01b45c904da8f8450bfa3e375e1832bcbc9d6c029690913132b301fb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:16:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11908cde01b45c904da8f8450bfa3e375e1832bcbc9d6c029690913132b301fb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:16:51 compute-0 python3.9[290747]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:16:51 compute-0 podman[290745]: 2025-12-03 18:16:51.316239766 +0000 UTC m=+0.181955844 container init ee3173c12c918a74a0b91ffc638001851846f0127f1ca3efdedabc6fbc819da2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_noether, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  3 18:16:51 compute-0 podman[290745]: 2025-12-03 18:16:51.328583409 +0000 UTC m=+0.194299407 container start ee3173c12c918a74a0b91ffc638001851846f0127f1ca3efdedabc6fbc819da2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_noether, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:16:51 compute-0 podman[290745]: 2025-12-03 18:16:51.333715771 +0000 UTC m=+0.199431809 container attach ee3173c12c918a74a0b91ffc638001851846f0127f1ca3efdedabc6fbc819da2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_noether, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  3 18:16:52 compute-0 vibrant_noether[290762]: {
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:    "0": [
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:        {
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:            "devices": [
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:                "/dev/loop3"
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:            ],
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:            "lv_name": "ceph_lv0",
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:            "lv_size": "21470642176",
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=973fbbc8-5aff-4a53-bee8-42e5a6788dd6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:            "lv_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:            "name": "ceph_lv0",
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:            "tags": {
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:                "ceph.block_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:                "ceph.cluster_name": "ceph",
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:                "ceph.crush_device_class": "",
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:                "ceph.encrypted": "0",
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:                "ceph.osd_fsid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:                "ceph.osd_id": "0",
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:                "ceph.type": "block",
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:                "ceph.vdo": "0"
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:            },
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:            "type": "block",
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:            "vg_name": "ceph_vg0"
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:        }
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:    ],
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:    "1": [
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:        {
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:            "devices": [
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:                "/dev/loop4"
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:            ],
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:            "lv_name": "ceph_lv1",
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:            "lv_size": "21470642176",
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1e2b0083-5293-47cb-a3d1-bc27cedc4ede,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:            "lv_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:            "name": "ceph_lv1",
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:            "tags": {
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:                "ceph.block_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:                "ceph.cluster_name": "ceph",
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:                "ceph.crush_device_class": "",
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:                "ceph.encrypted": "0",
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:                "ceph.osd_fsid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:                "ceph.osd_id": "1",
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:                "ceph.type": "block",
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:                "ceph.vdo": "0"
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:            },
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:            "type": "block",
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:            "vg_name": "ceph_vg1"
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:        }
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:    ],
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:    "2": [
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:        {
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:            "devices": [
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:                "/dev/loop5"
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:            ],
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:            "lv_name": "ceph_lv2",
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:            "lv_size": "21470642176",
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2abec9de-afba-437e-9a17-384a1dd8cd50,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:            "lv_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:            "name": "ceph_lv2",
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:            "tags": {
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:                "ceph.block_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:                "ceph.cluster_name": "ceph",
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:                "ceph.crush_device_class": "",
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:                "ceph.encrypted": "0",
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:                "ceph.osd_fsid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:                "ceph.osd_id": "2",
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:                "ceph.type": "block",
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:                "ceph.vdo": "0"
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:            },
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:            "type": "block",
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:            "vg_name": "ceph_vg2"
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:        }
Dec  3 18:16:52 compute-0 vibrant_noether[290762]:    ]
Dec  3 18:16:52 compute-0 vibrant_noether[290762]: }
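[Annotation] The JSON dump above has the shape of `ceph-volume lvm list --format json` output: top-level keys are OSD ids, each mapping to the logical volumes backing that OSD, with the ceph.* LV tags broken out under "tags". As a minimal sketch (assuming a host or cephadm shell where ceph-volume is installed; this command is inferred from the output shape, not quoted from the log), the same listing can be reduced to an OSD-to-device map:

    import json
    import subprocess

    # Sketch: produce a listing like the one above and summarize it.
    # Assumes `ceph-volume` is on PATH (e.g. inside a cephadm shell).
    out = subprocess.run(
        ["ceph-volume", "lvm", "list", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    lvs_by_osd = json.loads(out)

    for osd_id, lvs in sorted(lvs_by_osd.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv.get("tags", {})
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"(devices={','.join(lv['devices'])}, "
                  f"osd_fsid={tags.get('ceph.osd_fsid', '')})")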
Dec  3 18:16:52 compute-0 systemd[1]: libpod-ee3173c12c918a74a0b91ffc638001851846f0127f1ca3efdedabc6fbc819da2.scope: Deactivated successfully.
Dec  3 18:16:52 compute-0 podman[290745]: 2025-12-03 18:16:52.155946952 +0000 UTC m=+1.021662980 container died ee3173c12c918a74a0b91ffc638001851846f0127f1ca3efdedabc6fbc819da2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_noether, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec  3 18:16:52 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v521: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:16:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-11908cde01b45c904da8f8450bfa3e375e1832bcbc9d6c029690913132b301fb-merged.mount: Deactivated successfully.
Dec  3 18:16:52 compute-0 python3.9[290920]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:16:52 compute-0 podman[290745]: 2025-12-03 18:16:52.247574185 +0000 UTC m=+1.113290183 container remove ee3173c12c918a74a0b91ffc638001851846f0127f1ca3efdedabc6fbc819da2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_noether, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:16:52 compute-0 systemd[1]: libpod-conmon-ee3173c12c918a74a0b91ffc638001851846f0127f1ca3efdedabc6fbc819da2.scope: Deactivated successfully.
Dec  3 18:16:53 compute-0 python3.9[291196]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:16:53 compute-0 podman[291226]: 2025-12-03 18:16:53.142916407 +0000 UTC m=+0.047439831 container create d1d5b4db87156e066fffb9345818d34f5fdfaf855128842e3c8e21fab8c02363 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  3 18:16:53 compute-0 systemd[1]: Started libpod-conmon-d1d5b4db87156e066fffb9345818d34f5fdfaf855128842e3c8e21fab8c02363.scope.
Dec  3 18:16:53 compute-0 podman[291226]: 2025-12-03 18:16:53.122226814 +0000 UTC m=+0.026750258 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:16:53 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:16:53 compute-0 podman[291226]: 2025-12-03 18:16:53.244279761 +0000 UTC m=+0.148803235 container init d1d5b4db87156e066fffb9345818d34f5fdfaf855128842e3c8e21fab8c02363 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_satoshi, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:16:53 compute-0 podman[291226]: 2025-12-03 18:16:53.253837508 +0000 UTC m=+0.158360932 container start d1d5b4db87156e066fffb9345818d34f5fdfaf855128842e3c8e21fab8c02363 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec  3 18:16:53 compute-0 podman[291226]: 2025-12-03 18:16:53.258809047 +0000 UTC m=+0.163332471 container attach d1d5b4db87156e066fffb9345818d34f5fdfaf855128842e3c8e21fab8c02363 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  3 18:16:53 compute-0 intelligent_satoshi[291273]: 167 167
Dec  3 18:16:53 compute-0 systemd[1]: libpod-d1d5b4db87156e066fffb9345818d34f5fdfaf855128842e3c8e21fab8c02363.scope: Deactivated successfully.
Dec  3 18:16:53 compute-0 podman[291299]: 2025-12-03 18:16:53.310350135 +0000 UTC m=+0.031656486 container died d1d5b4db87156e066fffb9345818d34f5fdfaf855128842e3c8e21fab8c02363 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_satoshi, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:16:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-f1c027225bde883afc25429e7eef41c552157808f67893e509a3ce68a03b0a4a-merged.mount: Deactivated successfully.
Dec  3 18:16:53 compute-0 podman[291299]: 2025-12-03 18:16:53.373958849 +0000 UTC m=+0.095265200 container remove d1d5b4db87156e066fffb9345818d34f5fdfaf855128842e3c8e21fab8c02363 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:16:53 compute-0 systemd[1]: libpod-conmon-d1d5b4db87156e066fffb9345818d34f5fdfaf855128842e3c8e21fab8c02363.scope: Deactivated successfully.
Dec  3 18:16:53 compute-0 podman[291388]: 2025-12-03 18:16:53.573783538 +0000 UTC m=+0.058618657 container create 5d5170b2d523673a16e4779a87ad138dd2afd4c272dd11b83c089d26c49c6978 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_agnesi, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:16:53 compute-0 podman[291388]: 2025-12-03 18:16:53.553278999 +0000 UTC m=+0.038114148 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:16:53 compute-0 systemd[1]: Started libpod-conmon-5d5170b2d523673a16e4779a87ad138dd2afd4c272dd11b83c089d26c49c6978.scope.
Dec  3 18:16:53 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:16:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caa22c838de126e10b845653e9d3dd8103590066b622686fbd63407969a7caff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:16:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caa22c838de126e10b845653e9d3dd8103590066b622686fbd63407969a7caff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:16:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caa22c838de126e10b845653e9d3dd8103590066b622686fbd63407969a7caff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:16:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caa22c838de126e10b845653e9d3dd8103590066b622686fbd63407969a7caff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:16:53 compute-0 podman[291388]: 2025-12-03 18:16:53.723241088 +0000 UTC m=+0.208076217 container init 5d5170b2d523673a16e4779a87ad138dd2afd4c272dd11b83c089d26c49c6978 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_agnesi, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  3 18:16:53 compute-0 podman[291388]: 2025-12-03 18:16:53.737873636 +0000 UTC m=+0.222708735 container start 5d5170b2d523673a16e4779a87ad138dd2afd4c272dd11b83c089d26c49c6978 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_agnesi, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 18:16:53 compute-0 podman[291388]: 2025-12-03 18:16:53.741960083 +0000 UTC m=+0.226795182 container attach 5d5170b2d523673a16e4779a87ad138dd2afd4c272dd11b83c089d26c49c6978 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_agnesi, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Dec  3 18:16:53 compute-0 python3.9[291433]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:16:54 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v522: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:16:54 compute-0 python3.9[291594]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:16:54 compute-0 dreamy_agnesi[291434]: {
Dec  3 18:16:54 compute-0 dreamy_agnesi[291434]:    "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
Dec  3 18:16:54 compute-0 dreamy_agnesi[291434]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:16:54 compute-0 dreamy_agnesi[291434]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 18:16:54 compute-0 dreamy_agnesi[291434]:        "osd_id": 1,
Dec  3 18:16:54 compute-0 dreamy_agnesi[291434]:        "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:16:54 compute-0 dreamy_agnesi[291434]:        "type": "bluestore"
Dec  3 18:16:54 compute-0 dreamy_agnesi[291434]:    },
Dec  3 18:16:54 compute-0 dreamy_agnesi[291434]:    "2abec9de-afba-437e-9a17-384a1dd8cd50": {
Dec  3 18:16:54 compute-0 dreamy_agnesi[291434]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:16:54 compute-0 dreamy_agnesi[291434]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 18:16:54 compute-0 dreamy_agnesi[291434]:        "osd_id": 2,
Dec  3 18:16:54 compute-0 dreamy_agnesi[291434]:        "osd_uuid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:16:54 compute-0 dreamy_agnesi[291434]:        "type": "bluestore"
Dec  3 18:16:54 compute-0 dreamy_agnesi[291434]:    },
Dec  3 18:16:54 compute-0 dreamy_agnesi[291434]:    "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": {
Dec  3 18:16:54 compute-0 dreamy_agnesi[291434]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:16:54 compute-0 dreamy_agnesi[291434]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 18:16:54 compute-0 dreamy_agnesi[291434]:        "osd_id": 0,
Dec  3 18:16:54 compute-0 dreamy_agnesi[291434]:        "osd_uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:16:54 compute-0 dreamy_agnesi[291434]:        "type": "bluestore"
Dec  3 18:16:54 compute-0 dreamy_agnesi[291434]:    }
Dec  3 18:16:54 compute-0 dreamy_agnesi[291434]: }
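[Annotation] This second dump matches the shape of `ceph-volume raw list` output: a map keyed by OSD uuid giving the resolved /dev/mapper device, the owning cluster fsid, and the store type (bluestore). A companion sketch, under the same assumptions as the previous one, that summarizes it per OSD:

    import json
    import subprocess

    # Sketch: `ceph-volume raw list` prints JSON keyed by OSD uuid.
    # Assumes the same environment as the previous example.
    raw = json.loads(subprocess.run(
        ["ceph-volume", "raw", "list"],
        capture_output=True, text=True, check=True,
    ).stdout)

    for osd_uuid, info in sorted(raw.items(), key=lambda kv: kv[1]["osd_id"]):
        print(f"osd.{info['osd_id']} ({info['type']}): "
              f"device={info['device']}, ceph_fsid={info['ceph_fsid']}")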
Dec  3 18:16:54 compute-0 systemd[1]: libpod-5d5170b2d523673a16e4779a87ad138dd2afd4c272dd11b83c089d26c49c6978.scope: Deactivated successfully.
Dec  3 18:16:54 compute-0 systemd[1]: libpod-5d5170b2d523673a16e4779a87ad138dd2afd4c272dd11b83c089d26c49c6978.scope: Consumed 1.125s CPU time.
Dec  3 18:16:54 compute-0 podman[291388]: 2025-12-03 18:16:54.869010524 +0000 UTC m=+1.353845633 container died 5d5170b2d523673a16e4779a87ad138dd2afd4c272dd11b83c089d26c49c6978 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_agnesi, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:16:54 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:16:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-caa22c838de126e10b845653e9d3dd8103590066b622686fbd63407969a7caff-merged.mount: Deactivated successfully.
Dec  3 18:16:54 compute-0 podman[291388]: 2025-12-03 18:16:54.961838664 +0000 UTC m=+1.446673773 container remove 5d5170b2d523673a16e4779a87ad138dd2afd4c272dd11b83c089d26c49c6978 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_agnesi, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:16:54 compute-0 systemd[1]: libpod-conmon-5d5170b2d523673a16e4779a87ad138dd2afd4c272dd11b83c089d26c49c6978.scope: Deactivated successfully.
Dec  3 18:16:55 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:16:55 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:16:55 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:16:55 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:16:55 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev e8d52015-61d0-47d2-b892-65922def98fb does not exist
Dec  3 18:16:55 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev fae9d4b0-52d8-43cb-94bb-9cb89009a975 does not exist
Dec  3 18:16:55 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:16:55 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:16:55 compute-0 python3.9[291832]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:16:56 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v523: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:16:56 compute-0 python3.9[291984]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:16:57 compute-0 python3.9[292136]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:16:58 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v524: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:16:58 compute-0 python3.9[292288]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
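[Annotation] In the command above, #012 is the syslog escape for an embedded newline; decoded, the shell snippet the task executed reads:

    if systemctl is-active certmonger.service; then
      systemctl disable --now certmonger.service
      test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
    fi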
Dec  3 18:16:59 compute-0 python3.9[292440]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  3 18:16:59 compute-0 podman[158200]: time="2025-12-03T18:16:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:16:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:16:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35731 "" "Go-http-client/1.1"
Dec  3 18:16:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:16:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7274 "" "Go-http-client/1.1"
Dec  3 18:16:59 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:17:00 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v525: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:17:00 compute-0 python3.9[292592]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  3 18:17:00 compute-0 systemd[1]: Reloading.
Dec  3 18:17:00 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 18:17:00 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 18:17:01 compute-0 openstack_network_exporter[160319]: ERROR   18:17:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:17:01 compute-0 openstack_network_exporter[160319]: ERROR   18:17:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:17:01 compute-0 openstack_network_exporter[160319]: ERROR   18:17:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:17:01 compute-0 openstack_network_exporter[160319]: ERROR   18:17:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:17:01 compute-0 openstack_network_exporter[160319]: ERROR   18:17:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:17:02 compute-0 python3.9[292780]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:17:02 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v526: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:17:02 compute-0 python3.9[292933]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:17:02 compute-0 podman[292938]: 2025-12-03 18:17:02.946927307 +0000 UTC m=+0.088059737 container health_status f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 18:17:02 compute-0 podman[292949]: 2025-12-03 18:17:02.949851748 +0000 UTC m=+0.102479373 container health_status ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, container_name=kepler, io.buildah.version=1.29.0, architecture=x86_64, io.openshift.expose-services=, vcs-type=git, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, managed_by=edpm_ansible, release=1214.1726694543, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec  3 18:17:02 compute-0 podman[292934]: 2025-12-03 18:17:02.967300223 +0000 UTC m=+0.136528853 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi)
Dec  3 18:17:02 compute-0 podman[292935]: 2025-12-03 18:17:02.973106231 +0000 UTC m=+0.141054381 container health_status 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., distribution-scope=public, vendor=Red Hat, Inc., managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_id=edpm, io.buildah.version=1.33.7, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter)
Dec  3 18:17:02 compute-0 podman[292937]: 2025-12-03 18:17:02.980279791 +0000 UTC m=+0.125583981 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, tcib_managed=true)
Dec  3 18:17:02 compute-0 podman[292936]: 2025-12-03 18:17:02.99154206 +0000 UTC m=+0.146906470 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.701 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.701 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.702 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.702 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f3f52673fe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.703 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f562c3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f526739b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673a40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52671a60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673a70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.705 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.705 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f562d33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.705 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f526733b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.705 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.705 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f526734d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.705 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f565c04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.705 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.705 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.705 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.705 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f526735f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.705 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.706 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f526736b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.706 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.706 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.706 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.706 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.706 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.707 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f3f5271c620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.707 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.707 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f3f5271c0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.707 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.707 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f3f5271c140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.708 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.708 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f3f52673980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.708 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.708 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f3f5271c1d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.708 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.708 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f3f52673a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.709 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.709 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f3f52672390>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.709 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.709 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f3f526739e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.709 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.710 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f3f5271c260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.710 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.710 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f3f5271c2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.710 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.710 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f3f52671ca0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.710 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.711 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f3f52673470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.711 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.711 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f3f5271c380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.711 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.711 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f3f526734a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.712 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.712 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f3f52671a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.712 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.713 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f3f52673ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.713 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.713 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f3f52673500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.713 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.713 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f3f52673560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.714 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.714 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f3f526735c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.714 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.714 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f3f52673620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.714 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.715 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f3f52673680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.715 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.715 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f3f526736e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.715 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.715 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f3f52673f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.716 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.716 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f3f52673740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.716 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.716 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f3f52673f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.716 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.717 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.717 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.717 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.717 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.717 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.717 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.717 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.717 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.717 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.717 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.718 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.718 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.718 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.718 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.718 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.718 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.718 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.718 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.718 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.718 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.718 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.718 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.718 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.718 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.718 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:17:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:17:03.718 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:17:03 compute-0 python3.9[293204]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:17:04 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v527: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:17:04 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Dec  3 18:17:04 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:17:04.536959) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 18:17:04 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Dec  3 18:17:04 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764785824537025, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2044, "num_deletes": 251, "total_data_size": 3543557, "memory_usage": 3606976, "flush_reason": "Manual Compaction"}
Dec  3 18:17:04 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Dec  3 18:17:04 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764785824567402, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 3468185, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9704, "largest_seqno": 11747, "table_properties": {"data_size": 3458828, "index_size": 5979, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17882, "raw_average_key_size": 19, "raw_value_size": 3440354, "raw_average_value_size": 3747, "num_data_blocks": 271, "num_entries": 918, "num_filter_entries": 918, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764785590, "oldest_key_time": 1764785590, "file_creation_time": 1764785824, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a1ac3b74-8599-4a51-8b4c-6fd35a134427", "db_session_id": "TYOLZSJOOVNJYKF8Y1CE", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Dec  3 18:17:04 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 30636 microseconds, and 15918 cpu microseconds.
Dec  3 18:17:04 compute-0 ceph-mon[192802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 18:17:04 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:17:04.567590) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 3468185 bytes OK
Dec  3 18:17:04 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:17:04.567617) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Dec  3 18:17:04 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:17:04.570796) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Dec  3 18:17:04 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:17:04.570818) EVENT_LOG_v1 {"time_micros": 1764785824570810, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 18:17:04 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:17:04.570841) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 18:17:04 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3535022, prev total WAL file size 3535022, number of live WAL files 2.
Dec  3 18:17:04 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 18:17:04 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:17:04.572935) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Dec  3 18:17:04 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 18:17:04 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(3386KB)], [26(5982KB)]
Dec  3 18:17:04 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764785824573039, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 9593784, "oldest_snapshot_seqno": -1}
Dec  3 18:17:04 compute-0 python3.9[293358]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:17:04 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3697 keys, 7890550 bytes, temperature: kUnknown
Dec  3 18:17:04 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764785824647278, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 7890550, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7862090, "index_size": 18110, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9285, "raw_key_size": 88835, "raw_average_key_size": 24, "raw_value_size": 7791613, "raw_average_value_size": 2107, "num_data_blocks": 784, "num_entries": 3697, "num_filter_entries": 3697, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764784942, "oldest_key_time": 0, "file_creation_time": 1764785824, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a1ac3b74-8599-4a51-8b4c-6fd35a134427", "db_session_id": "TYOLZSJOOVNJYKF8Y1CE", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Dec  3 18:17:04 compute-0 ceph-mon[192802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 18:17:04 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:17:04.647695) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 7890550 bytes
Dec  3 18:17:04 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:17:04.649565) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 129.1 rd, 106.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 5.8 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(5.0) write-amplify(2.3) OK, records in: 4211, records dropped: 514 output_compression: NoCompression
Dec  3 18:17:04 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:17:04.649593) EVENT_LOG_v1 {"time_micros": 1764785824649581, "job": 10, "event": "compaction_finished", "compaction_time_micros": 74330, "compaction_time_cpu_micros": 17363, "output_level": 6, "num_output_files": 1, "total_output_size": 7890550, "num_input_records": 4211, "num_output_records": 3697, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 18:17:04 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 18:17:04 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764785824650349, "job": 10, "event": "table_file_deletion", "file_number": 28}
Dec  3 18:17:04 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 18:17:04 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764785824651667, "job": 10, "event": "table_file_deletion", "file_number": 26}
Dec  3 18:17:04 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:17:04.572711) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:17:04 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:17:04.651891) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:17:04 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:17:04.651899) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:17:04 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:17:04.651901) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:17:04 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:17:04.651903) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:17:04 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:17:04.651905) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:17:04 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:17:05 compute-0 python3.9[293511]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:17:06 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v528: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:17:06 compute-0 python3.9[293664]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:17:07 compute-0 python3.9[293817]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:17:08 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v529: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:17:08 compute-0 python3.9[293970]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Dec  3 18:17:09 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:17:10 compute-0 python3.9[294123]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  3 18:17:10 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v530: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:17:11 compute-0 python3.9[294207]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  3 18:17:12 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v531: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:17:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_18:17:13
Dec  3 18:17:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 18:17:13 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 18:17:13 compute-0 ceph-mgr[193091]: [balancer INFO root] pools ['vms', 'backups', 'default.rgw.control', '.mgr', 'images', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.log', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta']
Dec  3 18:17:13 compute-0 ceph-mgr[193091]: [balancer INFO root] prepared 0/10 changes
Dec  3 18:17:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:17:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:17:13 compute-0 python3.9[294360]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  3 18:17:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:17:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:17:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:17:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:17:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 18:17:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:17:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 18:17:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:17:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:17:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:17:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:17:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:17:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:17:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:17:14 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v532: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:17:14 compute-0 podman[294487]: 2025-12-03 18:17:14.710856562 +0000 UTC m=+0.097639276 container health_status 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  3 18:17:14 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:17:14 compute-0 python3.9[294535]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  3 18:17:16 compute-0 python3.9[294695]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  3 18:17:16 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v533: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:17:17 compute-0 python3.9[294850]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  3 18:17:18 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v534: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:17:18 compute-0 python3.9[295005]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 18:17:19 compute-0 python3.9[295160]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 18:17:19 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:17:20 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v535: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:17:20 compute-0 python3.9[295316]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 18:17:21 compute-0 podman[295443]: 2025-12-03 18:17:21.621775144 +0000 UTC m=+0.091380877 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  3 18:17:21 compute-0 python3.9[295488]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 18:17:22 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v536: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:17:23 compute-0 python3.9[295643]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 18:17:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:17:23.299 286999 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:17:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:17:23.300 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:17:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:17:23.300 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:17:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 18:17:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:17:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 18:17:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:17:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:17:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:17:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:17:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:17:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:17:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:17:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:17:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:17:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 18:17:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:17:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:17:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:17:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 18:17:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:17:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 18:17:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:17:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:17:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:17:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 18:17:24 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v537: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:17:24 compute-0 python3.9[295798]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  3 18:17:24 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:17:25 compute-0 python3.9[295953]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 18:17:26 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v538: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:17:26 compute-0 python3.9[296108]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 18:17:27 compute-0 python3.9[296263]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 18:17:28 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v539: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:17:28 compute-0 python3.9[296418]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 18:17:29 compute-0 podman[158200]: time="2025-12-03T18:17:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:17:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:17:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35731 "" "Go-http-client/1.1"
Dec  3 18:17:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:17:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7279 "" "Go-http-client/1.1"
Dec  3 18:17:29 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:17:30 compute-0 python3.9[296573]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 18:17:30 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v540: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:17:31 compute-0 python3.9[296728]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 18:17:31 compute-0 openstack_network_exporter[160319]: ERROR   18:17:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:17:31 compute-0 openstack_network_exporter[160319]: ERROR   18:17:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:17:31 compute-0 openstack_network_exporter[160319]: ERROR   18:17:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:17:31 compute-0 openstack_network_exporter[160319]: ERROR   18:17:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:17:31 compute-0 openstack_network_exporter[160319]: ERROR   18:17:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:17:32 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v541: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:17:32 compute-0 python3.9[296883]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 18:17:33 compute-0 python3.9[297038]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 18:17:33 compute-0 podman[297054]: 2025-12-03 18:17:33.41435267 +0000 UTC m=+0.089674656 container health_status f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  3 18:17:33 compute-0 podman[297041]: 2025-12-03 18:17:33.414787831 +0000 UTC m=+0.104900590 container health_status 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, name=ubi9-minimal, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible)
Dec  3 18:17:33 compute-0 podman[297043]: 2025-12-03 18:17:33.419235406 +0000 UTC m=+0.101841907 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, org.label-schema.build-date=20251125, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true)
Dec  3 18:17:33 compute-0 podman[297063]: 2025-12-03 18:17:33.441983398 +0000 UTC m=+0.107043130 container health_status ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, release=1214.1726694543, com.redhat.component=ubi9-container, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, maintainer=Red Hat, Inc., config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, io.buildah.version=1.29.0, distribution-scope=public, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc.)
Dec  3 18:17:33 compute-0 podman[297040]: 2025-12-03 18:17:33.447760646 +0000 UTC m=+0.134964436 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  3 18:17:33 compute-0 podman[297042]: 2025-12-03 18:17:33.47019146 +0000 UTC m=+0.160792580 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  3 18:17:34 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v542: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:17:34 compute-0 python3.9[297310]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 18:17:34 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:17:35 compute-0 python3.9[297465]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 18:17:36 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v543: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:17:36 compute-0 python3.9[297620]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 18:17:38 compute-0 python3.9[297775]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 18:17:38 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v544: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:17:39 compute-0 python3.9[297930]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 18:17:39 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:17:40 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v545: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:17:40 compute-0 python3.9[298085]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 18:17:41 compute-0 python3.9[298240]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:17:42 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v546: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:17:42 compute-0 python3.9[298392]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:17:43 compute-0 python3.9[298544]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:17:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:17:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:17:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:17:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:17:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:17:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:17:44 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v547: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:17:44 compute-0 python3.9[298696]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:17:44 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:17:44 compute-0 podman[298755]: 2025-12-03 18:17:44.93177603 +0000 UTC m=+0.101689023 container health_status 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 18:17:45 compute-0 python3.9[298871]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:17:46 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v548: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:17:46 compute-0 python3.9[299023]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:17:47 compute-0 python3.9[299175]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:17:48 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v549: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:17:48 compute-0 python3.9[299253]: ansible-ansible.legacy.file Invoked with group=libvirt mode=0640 owner=libvirt dest=/etc/libvirt/virtlogd.conf _original_basename=virtlogd.conf recurse=False state=file path=/etc/libvirt/virtlogd.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:17:49 compute-0 python3.9[299405]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:17:49 compute-0 python3.9[299484]: ansible-ansible.legacy.file Invoked with group=libvirt mode=0640 owner=libvirt dest=/etc/libvirt/virtnodedevd.conf _original_basename=virtnodedevd.conf recurse=False state=file path=/etc/libvirt/virtnodedevd.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:17:49 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:17:50 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v550: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:17:50 compute-0 python3.9[299636]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:17:51 compute-0 python3.9[299714]: ansible-ansible.legacy.file Invoked with group=libvirt mode=0640 owner=libvirt dest=/etc/libvirt/virtproxyd.conf _original_basename=virtproxyd.conf recurse=False state=file path=/etc/libvirt/virtproxyd.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:17:51 compute-0 podman[299795]: 2025-12-03 18:17:51.940358807 +0000 UTC m=+0.099784745 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  3 18:17:52 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v551: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:17:52 compute-0 python3.9[299885]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:17:53 compute-0 python3.9[299963]: ansible-ansible.legacy.file Invoked with group=libvirt mode=0640 owner=libvirt dest=/etc/libvirt/virtqemud.conf _original_basename=virtqemud.conf recurse=False state=file path=/etc/libvirt/virtqemud.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:17:53 compute-0 python3.9[300115]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:17:54 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v552: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:17:54 compute-0 python3.9[300193]: ansible-ansible.legacy.file Invoked with group=libvirt mode=0640 owner=libvirt dest=/etc/libvirt/qemu.conf _original_basename=qemu.conf.j2 recurse=False state=file path=/etc/libvirt/qemu.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:17:54 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:17:55 compute-0 python3.9[300345]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:17:55 compute-0 python3.9[300523]: ansible-ansible.legacy.file Invoked with group=libvirt mode=0640 owner=libvirt dest=/etc/libvirt/virtsecretd.conf _original_basename=virtsecretd.conf recurse=False state=file path=/etc/libvirt/virtsecretd.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:17:56 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v553: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:17:56 compute-0 podman[300650]: 2025-12-03 18:17:56.263210007 +0000 UTC m=+0.099128889 container exec c4418ca0ee5df95c133db330bc8714b98e7c86be83b29540d0d4d94c3c723743 (image=quay.io/ceph/ceph:v18, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mon-compute-0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Dec  3 18:17:56 compute-0 podman[300650]: 2025-12-03 18:17:56.391578752 +0000 UTC m=+0.227497634 container exec_died c4418ca0ee5df95c133db330bc8714b98e7c86be83b29540d0d4d94c3c723743 (image=quay.io/ceph/ceph:v18, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:17:56 compute-0 python3.9[300789]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:17:57 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:17:57 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:17:57 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:17:57 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:17:57 compute-0 python3.9[300943]: ansible-ansible.legacy.file Invoked with group=libvirt mode=0600 owner=libvirt dest=/etc/libvirt/auth.conf _original_basename=auth.conf recurse=False state=file path=/etc/libvirt/auth.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:17:57 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:17:57 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:17:58 compute-0 python3.9[301234]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:17:58 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:17:58 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:17:58 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 18:17:58 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:17:58 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 18:17:58 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:17:58 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 72718271-e551-4714-8d4e-fd304e349f97 does not exist
Dec  3 18:17:58 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev c9f1639a-e5fc-4dbe-937c-ac72e4c81126 does not exist
Dec  3 18:17:58 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev d0edb05e-ea20-4a13-bf59-08ffba3fcf85 does not exist
Dec  3 18:17:58 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 18:17:58 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 18:17:58 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 18:17:58 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:17:58 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:17:58 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:17:58 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v554: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:17:58 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:17:58 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:17:58 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:17:58 compute-0 python3.9[301417]: ansible-ansible.legacy.file Invoked with group=libvirt mode=0640 owner=libvirt dest=/etc/sasl2/libvirt.conf _original_basename=sasl_libvirt.conf recurse=False state=file path=/etc/sasl2/libvirt.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:17:59 compute-0 podman[301494]: 2025-12-03 18:17:59.057926709 +0000 UTC m=+0.055751799 container create 3718b01c8b907146a873ec619ab4cd270e9f31540423dad269d4a2b71fbc44b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:17:59 compute-0 systemd[1]: Started libpod-conmon-3718b01c8b907146a873ec619ab4cd270e9f31540423dad269d4a2b71fbc44b8.scope.
Dec  3 18:17:59 compute-0 podman[301494]: 2025-12-03 18:17:59.033036389 +0000 UTC m=+0.030861509 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:17:59 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:17:59 compute-0 podman[301494]: 2025-12-03 18:17:59.184307425 +0000 UTC m=+0.182132515 container init 3718b01c8b907146a873ec619ab4cd270e9f31540423dad269d4a2b71fbc44b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_wescoff, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:17:59 compute-0 podman[301494]: 2025-12-03 18:17:59.193679899 +0000 UTC m=+0.191505009 container start 3718b01c8b907146a873ec619ab4cd270e9f31540423dad269d4a2b71fbc44b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_wescoff, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True)
Dec  3 18:17:59 compute-0 podman[301494]: 2025-12-03 18:17:59.199328539 +0000 UTC m=+0.197153639 container attach 3718b01c8b907146a873ec619ab4cd270e9f31540423dad269d4a2b71fbc44b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:17:59 compute-0 practical_wescoff[301554]: 167 167
Dec  3 18:17:59 compute-0 systemd[1]: libpod-3718b01c8b907146a873ec619ab4cd270e9f31540423dad269d4a2b71fbc44b8.scope: Deactivated successfully.
Dec  3 18:17:59 compute-0 podman[301494]: 2025-12-03 18:17:59.205418951 +0000 UTC m=+0.203244041 container died 3718b01c8b907146a873ec619ab4cd270e9f31540423dad269d4a2b71fbc44b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_wescoff, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  3 18:17:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd17f857856fcbe7a26868b8200c7ae0d375f485fedb17c54ed29e5ee8b561fc-merged.mount: Deactivated successfully.
Dec  3 18:17:59 compute-0 podman[301494]: 2025-12-03 18:17:59.2620447 +0000 UTC m=+0.259869790 container remove 3718b01c8b907146a873ec619ab4cd270e9f31540423dad269d4a2b71fbc44b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_wescoff, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:17:59 compute-0 systemd[1]: libpod-conmon-3718b01c8b907146a873ec619ab4cd270e9f31540423dad269d4a2b71fbc44b8.scope: Deactivated successfully.
Dec  3 18:17:59 compute-0 podman[301632]: 2025-12-03 18:17:59.453866156 +0000 UTC m=+0.070720861 container create c7e2263f8b9b7d39ff3fd0b190423d3259b0cbdf389bcfda4e6713a24e064f81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:17:59 compute-0 podman[301632]: 2025-12-03 18:17:59.425841998 +0000 UTC m=+0.042696703 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:17:59 compute-0 systemd[1]: Started libpod-conmon-c7e2263f8b9b7d39ff3fd0b190423d3259b0cbdf389bcfda4e6713a24e064f81.scope.
Dec  3 18:17:59 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:17:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/990852b0551995d58ea358e019ffac0755557a637467ef15380869eb7f2b245f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:17:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/990852b0551995d58ea358e019ffac0755557a637467ef15380869eb7f2b245f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:17:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/990852b0551995d58ea358e019ffac0755557a637467ef15380869eb7f2b245f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:17:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/990852b0551995d58ea358e019ffac0755557a637467ef15380869eb7f2b245f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:17:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/990852b0551995d58ea358e019ffac0755557a637467ef15380869eb7f2b245f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 18:17:59 compute-0 podman[301632]: 2025-12-03 18:17:59.631415887 +0000 UTC m=+0.248270582 container init c7e2263f8b9b7d39ff3fd0b190423d3259b0cbdf389bcfda4e6713a24e064f81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_haslett, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:17:59 compute-0 podman[301632]: 2025-12-03 18:17:59.647512498 +0000 UTC m=+0.264367163 container start c7e2263f8b9b7d39ff3fd0b190423d3259b0cbdf389bcfda4e6713a24e064f81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_haslett, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec  3 18:17:59 compute-0 podman[301632]: 2025-12-03 18:17:59.652327177 +0000 UTC m=+0.269181842 container attach c7e2263f8b9b7d39ff3fd0b190423d3259b0cbdf389bcfda4e6713a24e064f81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_haslett, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:17:59 compute-0 python3.9[301673]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Dec  3 18:17:59 compute-0 podman[158200]: time="2025-12-03T18:17:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:17:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:17:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 37437 "" "Go-http-client/1.1"
Dec  3 18:17:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:17:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7697 "" "Go-http-client/1.1"
Dec  3 18:17:59 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:18:00 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v555: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:18:00 compute-0 python3.9[301842]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:18:00 compute-0 strange_haslett[301677]: --> passed data devices: 0 physical, 3 LVM
Dec  3 18:18:00 compute-0 strange_haslett[301677]: --> relative data size: 1.0
Dec  3 18:18:00 compute-0 strange_haslett[301677]: --> All data devices are unavailable
Dec  3 18:18:00 compute-0 systemd[1]: libpod-c7e2263f8b9b7d39ff3fd0b190423d3259b0cbdf389bcfda4e6713a24e064f81.scope: Deactivated successfully.
Dec  3 18:18:00 compute-0 systemd[1]: libpod-c7e2263f8b9b7d39ff3fd0b190423d3259b0cbdf389bcfda4e6713a24e064f81.scope: Consumed 1.243s CPU time.
Dec  3 18:18:00 compute-0 podman[301632]: 2025-12-03 18:18:00.97424659 +0000 UTC m=+1.591101285 container died c7e2263f8b9b7d39ff3fd0b190423d3259b0cbdf389bcfda4e6713a24e064f81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Dec  3 18:18:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-990852b0551995d58ea358e019ffac0755557a637467ef15380869eb7f2b245f-merged.mount: Deactivated successfully.
Dec  3 18:18:01 compute-0 podman[301632]: 2025-12-03 18:18:01.063114573 +0000 UTC m=+1.679969238 container remove c7e2263f8b9b7d39ff3fd0b190423d3259b0cbdf389bcfda4e6713a24e064f81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_haslett, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:18:01 compute-0 systemd[1]: libpod-conmon-c7e2263f8b9b7d39ff3fd0b190423d3259b0cbdf389bcfda4e6713a24e064f81.scope: Deactivated successfully.
Dec  3 18:18:01 compute-0 openstack_network_exporter[160319]: ERROR   18:18:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:18:01 compute-0 openstack_network_exporter[160319]: ERROR   18:18:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:18:01 compute-0 openstack_network_exporter[160319]: ERROR   18:18:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:18:01 compute-0 openstack_network_exporter[160319]: ERROR   18:18:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:18:01 compute-0 openstack_network_exporter[160319]: ERROR   18:18:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:18:01 compute-0 python3.9[302122]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:18:01 compute-0 podman[302203]: 2025-12-03 18:18:01.950821715 +0000 UTC m=+0.055644027 container create 0cc218751591177113bc31289f11489c30adacd97aefe8d5e6133b2e672ddd7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  3 18:18:02 compute-0 systemd[1]: Started libpod-conmon-0cc218751591177113bc31289f11489c30adacd97aefe8d5e6133b2e672ddd7e.scope.
Dec  3 18:18:02 compute-0 podman[302203]: 2025-12-03 18:18:01.924785366 +0000 UTC m=+0.029607688 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:18:02 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:18:02 compute-0 podman[302203]: 2025-12-03 18:18:02.070131375 +0000 UTC m=+0.174953727 container init 0cc218751591177113bc31289f11489c30adacd97aefe8d5e6133b2e672ddd7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec  3 18:18:02 compute-0 podman[302203]: 2025-12-03 18:18:02.085329274 +0000 UTC m=+0.190151576 container start 0cc218751591177113bc31289f11489c30adacd97aefe8d5e6133b2e672ddd7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:18:02 compute-0 podman[302203]: 2025-12-03 18:18:02.090290957 +0000 UTC m=+0.195113309 container attach 0cc218751591177113bc31289f11489c30adacd97aefe8d5e6133b2e672ddd7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_wilbur, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:18:02 compute-0 reverent_wilbur[302247]: 167 167
Dec  3 18:18:02 compute-0 systemd[1]: libpod-0cc218751591177113bc31289f11489c30adacd97aefe8d5e6133b2e672ddd7e.scope: Deactivated successfully.
Dec  3 18:18:02 compute-0 podman[302203]: 2025-12-03 18:18:02.098691266 +0000 UTC m=+0.203513588 container died 0cc218751591177113bc31289f11489c30adacd97aefe8d5e6133b2e672ddd7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_wilbur, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  3 18:18:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-5cb5be8094fecb4af948ae6b0f7447889f5afb7635f4b666347dd13ab8435170-merged.mount: Deactivated successfully.
Dec  3 18:18:02 compute-0 podman[302203]: 2025-12-03 18:18:02.152981038 +0000 UTC m=+0.257803380 container remove 0cc218751591177113bc31289f11489c30adacd97aefe8d5e6133b2e672ddd7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_wilbur, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec  3 18:18:02 compute-0 systemd[1]: libpod-conmon-0cc218751591177113bc31289f11489c30adacd97aefe8d5e6133b2e672ddd7e.scope: Deactivated successfully.
Dec  3 18:18:02 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v556: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:18:02 compute-0 podman[302324]: 2025-12-03 18:18:02.384307997 +0000 UTC m=+0.061158773 container create 0e502febd345c3ad5c55cb8124b29fe6449e238cfc05a142afef02a26c09c83e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec  3 18:18:02 compute-0 systemd[1]: Started libpod-conmon-0e502febd345c3ad5c55cb8124b29fe6449e238cfc05a142afef02a26c09c83e.scope.
Dec  3 18:18:02 compute-0 podman[302324]: 2025-12-03 18:18:02.362271229 +0000 UTC m=+0.039122035 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:18:02 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:18:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bd376309869c01740f5ad1564b3cab2bde0e3d983cbbefb4dbb78075a32ca9a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:18:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bd376309869c01740f5ad1564b3cab2bde0e3d983cbbefb4dbb78075a32ca9a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:18:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bd376309869c01740f5ad1564b3cab2bde0e3d983cbbefb4dbb78075a32ca9a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:18:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bd376309869c01740f5ad1564b3cab2bde0e3d983cbbefb4dbb78075a32ca9a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:18:02 compute-0 podman[302324]: 2025-12-03 18:18:02.505358862 +0000 UTC m=+0.182209668 container init 0e502febd345c3ad5c55cb8124b29fe6449e238cfc05a142afef02a26c09c83e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:18:02 compute-0 podman[302324]: 2025-12-03 18:18:02.514644853 +0000 UTC m=+0.191495629 container start 0e502febd345c3ad5c55cb8124b29fe6449e238cfc05a142afef02a26c09c83e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_fermi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:18:02 compute-0 podman[302324]: 2025-12-03 18:18:02.5197616 +0000 UTC m=+0.196612396 container attach 0e502febd345c3ad5c55cb8124b29fe6449e238cfc05a142afef02a26c09c83e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_fermi, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef)
Dec  3 18:18:02 compute-0 python3.9[302362]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:18:03 compute-0 boring_fermi[302365]: {
Dec  3 18:18:03 compute-0 boring_fermi[302365]:    "0": [
Dec  3 18:18:03 compute-0 boring_fermi[302365]:        {
Dec  3 18:18:03 compute-0 boring_fermi[302365]:            "devices": [
Dec  3 18:18:03 compute-0 boring_fermi[302365]:                "/dev/loop3"
Dec  3 18:18:03 compute-0 boring_fermi[302365]:            ],
Dec  3 18:18:03 compute-0 boring_fermi[302365]:            "lv_name": "ceph_lv0",
Dec  3 18:18:03 compute-0 boring_fermi[302365]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:18:03 compute-0 boring_fermi[302365]:            "lv_size": "21470642176",
Dec  3 18:18:03 compute-0 boring_fermi[302365]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=973fbbc8-5aff-4a53-bee8-42e5a6788dd6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:18:03 compute-0 boring_fermi[302365]:            "lv_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:18:03 compute-0 boring_fermi[302365]:            "name": "ceph_lv0",
Dec  3 18:18:03 compute-0 boring_fermi[302365]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:18:03 compute-0 boring_fermi[302365]:            "tags": {
Dec  3 18:18:03 compute-0 boring_fermi[302365]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:18:03 compute-0 boring_fermi[302365]:                "ceph.block_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:18:03 compute-0 boring_fermi[302365]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:18:03 compute-0 boring_fermi[302365]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:18:03 compute-0 boring_fermi[302365]:                "ceph.cluster_name": "ceph",
Dec  3 18:18:03 compute-0 boring_fermi[302365]:                "ceph.crush_device_class": "",
Dec  3 18:18:03 compute-0 boring_fermi[302365]:                "ceph.encrypted": "0",
Dec  3 18:18:03 compute-0 boring_fermi[302365]:                "ceph.osd_fsid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:18:03 compute-0 boring_fermi[302365]:                "ceph.osd_id": "0",
Dec  3 18:18:03 compute-0 boring_fermi[302365]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:18:03 compute-0 boring_fermi[302365]:                "ceph.type": "block",
Dec  3 18:18:03 compute-0 boring_fermi[302365]:                "ceph.vdo": "0"
Dec  3 18:18:03 compute-0 boring_fermi[302365]:            },
Dec  3 18:18:03 compute-0 boring_fermi[302365]:            "type": "block",
Dec  3 18:18:03 compute-0 boring_fermi[302365]:            "vg_name": "ceph_vg0"
Dec  3 18:18:03 compute-0 boring_fermi[302365]:        }
Dec  3 18:18:03 compute-0 boring_fermi[302365]:    ],
Dec  3 18:18:03 compute-0 boring_fermi[302365]:    "1": [
Dec  3 18:18:03 compute-0 boring_fermi[302365]:        {
Dec  3 18:18:03 compute-0 boring_fermi[302365]:            "devices": [
Dec  3 18:18:03 compute-0 boring_fermi[302365]:                "/dev/loop4"
Dec  3 18:18:03 compute-0 boring_fermi[302365]:            ],
Dec  3 18:18:03 compute-0 boring_fermi[302365]:            "lv_name": "ceph_lv1",
Dec  3 18:18:03 compute-0 boring_fermi[302365]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:18:03 compute-0 boring_fermi[302365]:            "lv_size": "21470642176",
Dec  3 18:18:03 compute-0 boring_fermi[302365]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1e2b0083-5293-47cb-a3d1-bc27cedc4ede,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:18:03 compute-0 boring_fermi[302365]:            "lv_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:18:03 compute-0 boring_fermi[302365]:            "name": "ceph_lv1",
Dec  3 18:18:03 compute-0 boring_fermi[302365]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:18:03 compute-0 boring_fermi[302365]:            "tags": {
Dec  3 18:18:03 compute-0 boring_fermi[302365]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:18:03 compute-0 boring_fermi[302365]:                "ceph.block_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:18:03 compute-0 boring_fermi[302365]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:18:03 compute-0 boring_fermi[302365]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:18:03 compute-0 boring_fermi[302365]:                "ceph.cluster_name": "ceph",
Dec  3 18:18:03 compute-0 boring_fermi[302365]:                "ceph.crush_device_class": "",
Dec  3 18:18:03 compute-0 boring_fermi[302365]:                "ceph.encrypted": "0",
Dec  3 18:18:03 compute-0 boring_fermi[302365]:                "ceph.osd_fsid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:18:03 compute-0 boring_fermi[302365]:                "ceph.osd_id": "1",
Dec  3 18:18:03 compute-0 boring_fermi[302365]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:18:03 compute-0 boring_fermi[302365]:                "ceph.type": "block",
Dec  3 18:18:03 compute-0 boring_fermi[302365]:                "ceph.vdo": "0"
Dec  3 18:18:03 compute-0 boring_fermi[302365]:            },
Dec  3 18:18:03 compute-0 boring_fermi[302365]:            "type": "block",
Dec  3 18:18:03 compute-0 boring_fermi[302365]:            "vg_name": "ceph_vg1"
Dec  3 18:18:03 compute-0 boring_fermi[302365]:        }
Dec  3 18:18:03 compute-0 boring_fermi[302365]:    ],
Dec  3 18:18:03 compute-0 boring_fermi[302365]:    "2": [
Dec  3 18:18:03 compute-0 boring_fermi[302365]:        {
Dec  3 18:18:03 compute-0 boring_fermi[302365]:            "devices": [
Dec  3 18:18:03 compute-0 boring_fermi[302365]:                "/dev/loop5"
Dec  3 18:18:03 compute-0 boring_fermi[302365]:            ],
Dec  3 18:18:03 compute-0 boring_fermi[302365]:            "lv_name": "ceph_lv2",
Dec  3 18:18:03 compute-0 boring_fermi[302365]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:18:03 compute-0 boring_fermi[302365]:            "lv_size": "21470642176",
Dec  3 18:18:03 compute-0 boring_fermi[302365]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2abec9de-afba-437e-9a17-384a1dd8cd50,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:18:03 compute-0 boring_fermi[302365]:            "lv_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:18:03 compute-0 boring_fermi[302365]:            "name": "ceph_lv2",
Dec  3 18:18:03 compute-0 boring_fermi[302365]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:18:03 compute-0 boring_fermi[302365]:            "tags": {
Dec  3 18:18:03 compute-0 boring_fermi[302365]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:18:03 compute-0 boring_fermi[302365]:                "ceph.block_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:18:03 compute-0 boring_fermi[302365]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:18:03 compute-0 boring_fermi[302365]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:18:03 compute-0 boring_fermi[302365]:                "ceph.cluster_name": "ceph",
Dec  3 18:18:03 compute-0 boring_fermi[302365]:                "ceph.crush_device_class": "",
Dec  3 18:18:03 compute-0 boring_fermi[302365]:                "ceph.encrypted": "0",
Dec  3 18:18:03 compute-0 boring_fermi[302365]:                "ceph.osd_fsid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:18:03 compute-0 boring_fermi[302365]:                "ceph.osd_id": "2",
Dec  3 18:18:03 compute-0 boring_fermi[302365]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:18:03 compute-0 boring_fermi[302365]:                "ceph.type": "block",
Dec  3 18:18:03 compute-0 boring_fermi[302365]:                "ceph.vdo": "0"
Dec  3 18:18:03 compute-0 boring_fermi[302365]:            },
Dec  3 18:18:03 compute-0 boring_fermi[302365]:            "type": "block",
Dec  3 18:18:03 compute-0 boring_fermi[302365]:            "vg_name": "ceph_vg2"
Dec  3 18:18:03 compute-0 boring_fermi[302365]:        }
Dec  3 18:18:03 compute-0 boring_fermi[302365]:    ]
Dec  3 18:18:03 compute-0 boring_fermi[302365]: }
Dec  3 18:18:03 compute-0 systemd[1]: libpod-0e502febd345c3ad5c55cb8124b29fe6449e238cfc05a142afef02a26c09c83e.scope: Deactivated successfully.
Dec  3 18:18:03 compute-0 podman[302324]: 2025-12-03 18:18:03.369776613 +0000 UTC m=+1.046627389 container died 0e502febd345c3ad5c55cb8124b29fe6449e238cfc05a142afef02a26c09c83e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_fermi, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec  3 18:18:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-5bd376309869c01740f5ad1564b3cab2bde0e3d983cbbefb4dbb78075a32ca9a-merged.mount: Deactivated successfully.
Dec  3 18:18:03 compute-0 podman[302324]: 2025-12-03 18:18:03.441941341 +0000 UTC m=+1.118792117 container remove 0e502febd345c3ad5c55cb8124b29fe6449e238cfc05a142afef02a26c09c83e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_fermi, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:18:03 compute-0 python3.9[302523]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:18:03 compute-0 systemd[1]: libpod-conmon-0e502febd345c3ad5c55cb8124b29fe6449e238cfc05a142afef02a26c09c83e.scope: Deactivated successfully.
Dec  3 18:18:03 compute-0 podman[302540]: 2025-12-03 18:18:03.628511896 +0000 UTC m=+0.124632105 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  3 18:18:03 compute-0 podman[302542]: 2025-12-03 18:18:03.641794107 +0000 UTC m=+0.113631191 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Dec  3 18:18:03 compute-0 podman[302547]: 2025-12-03 18:18:03.645772955 +0000 UTC m=+0.122503121 container health_status ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., release=1214.1726694543, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, config_id=edpm, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, managed_by=edpm_ansible, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, distribution-scope=public, release-0.7.12=)
Dec  3 18:18:03 compute-0 podman[302544]: 2025-12-03 18:18:03.651639062 +0000 UTC m=+0.137738641 container health_status f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  3 18:18:03 compute-0 podman[302541]: 2025-12-03 18:18:03.666261965 +0000 UTC m=+0.155950393 container health_status 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, architecture=x86_64, vcs-type=git, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vendor=Red Hat, Inc., name=ubi9-minimal, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, version=9.6)
Dec  3 18:18:03 compute-0 podman[302560]: 2025-12-03 18:18:03.697064152 +0000 UTC m=+0.172133846 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec  3 18:18:04 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v557: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:18:04 compute-0 python3.9[302936]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:18:04 compute-0 podman[302944]: 2025-12-03 18:18:04.28538812 +0000 UTC m=+0.090244057 container create 4baf204f71c546aa0ee803bc553ae1552c159915451fcdabc7909d38b6cc3275 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_swirles, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  3 18:18:04 compute-0 podman[302944]: 2025-12-03 18:18:04.250585923 +0000 UTC m=+0.055441880 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:18:04 compute-0 systemd[1]: Started libpod-conmon-4baf204f71c546aa0ee803bc553ae1552c159915451fcdabc7909d38b6cc3275.scope.
Dec  3 18:18:04 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:18:04 compute-0 podman[302944]: 2025-12-03 18:18:04.408901005 +0000 UTC m=+0.213757032 container init 4baf204f71c546aa0ee803bc553ae1552c159915451fcdabc7909d38b6cc3275 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  3 18:18:04 compute-0 podman[302944]: 2025-12-03 18:18:04.421979201 +0000 UTC m=+0.226835138 container start 4baf204f71c546aa0ee803bc553ae1552c159915451fcdabc7909d38b6cc3275 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_swirles, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec  3 18:18:04 compute-0 podman[302944]: 2025-12-03 18:18:04.427374086 +0000 UTC m=+0.232230103 container attach 4baf204f71c546aa0ee803bc553ae1552c159915451fcdabc7909d38b6cc3275 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_swirles, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  3 18:18:04 compute-0 unruffled_swirles[302961]: 167 167
Dec  3 18:18:04 compute-0 systemd[1]: libpod-4baf204f71c546aa0ee803bc553ae1552c159915451fcdabc7909d38b6cc3275.scope: Deactivated successfully.
Dec  3 18:18:04 compute-0 podman[302944]: 2025-12-03 18:18:04.44441531 +0000 UTC m=+0.249271277 container died 4baf204f71c546aa0ee803bc553ae1552c159915451fcdabc7909d38b6cc3275 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:18:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac6ea618ef26dc8e5790a2126331a1bf4e3b637407647ea0577cd0af72040f2c-merged.mount: Deactivated successfully.
Dec  3 18:18:04 compute-0 podman[302944]: 2025-12-03 18:18:04.514926175 +0000 UTC m=+0.319782112 container remove 4baf204f71c546aa0ee803bc553ae1552c159915451fcdabc7909d38b6cc3275 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_swirles, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:18:04 compute-0 systemd[1]: libpod-conmon-4baf204f71c546aa0ee803bc553ae1552c159915451fcdabc7909d38b6cc3275.scope: Deactivated successfully.
Dec  3 18:18:04 compute-0 podman[303061]: 2025-12-03 18:18:04.716245918 +0000 UTC m=+0.047687339 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:18:04 compute-0 podman[303061]: 2025-12-03 18:18:04.808363591 +0000 UTC m=+0.139804922 container create 6ac4bfcc4c0fb0b78ef66d6c1622933cfa79a1f16ba8c36cf0625bbf76267417 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_easley, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:18:04 compute-0 systemd[1]: Started libpod-conmon-6ac4bfcc4c0fb0b78ef66d6c1622933cfa79a1f16ba8c36cf0625bbf76267417.scope.
Dec  3 18:18:04 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:18:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5f175e1880cc94bd246b8329c952ac7e09643a7d09f28815597ad047bc951b8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:18:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5f175e1880cc94bd246b8329c952ac7e09643a7d09f28815597ad047bc951b8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:18:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5f175e1880cc94bd246b8329c952ac7e09643a7d09f28815597ad047bc951b8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:18:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5f175e1880cc94bd246b8329c952ac7e09643a7d09f28815597ad047bc951b8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:18:04 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:18:04 compute-0 podman[303061]: 2025-12-03 18:18:04.943332341 +0000 UTC m=+0.274773682 container init 6ac4bfcc4c0fb0b78ef66d6c1622933cfa79a1f16ba8c36cf0625bbf76267417 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:18:04 compute-0 podman[303061]: 2025-12-03 18:18:04.958573441 +0000 UTC m=+0.290014772 container start 6ac4bfcc4c0fb0b78ef66d6c1622933cfa79a1f16ba8c36cf0625bbf76267417 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_easley, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  3 18:18:04 compute-0 podman[303061]: 2025-12-03 18:18:04.963433622 +0000 UTC m=+0.294874983 container attach 6ac4bfcc4c0fb0b78ef66d6c1622933cfa79a1f16ba8c36cf0625bbf76267417 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec  3 18:18:05 compute-0 python3.9[303158]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:18:06 compute-0 unruffled_easley[303125]: {
Dec  3 18:18:06 compute-0 unruffled_easley[303125]:    "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
Dec  3 18:18:06 compute-0 unruffled_easley[303125]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:18:06 compute-0 unruffled_easley[303125]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 18:18:06 compute-0 unruffled_easley[303125]:        "osd_id": 1,
Dec  3 18:18:06 compute-0 unruffled_easley[303125]:        "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:18:06 compute-0 unruffled_easley[303125]:        "type": "bluestore"
Dec  3 18:18:06 compute-0 unruffled_easley[303125]:    },
Dec  3 18:18:06 compute-0 unruffled_easley[303125]:    "2abec9de-afba-437e-9a17-384a1dd8cd50": {
Dec  3 18:18:06 compute-0 unruffled_easley[303125]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:18:06 compute-0 unruffled_easley[303125]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 18:18:06 compute-0 unruffled_easley[303125]:        "osd_id": 2,
Dec  3 18:18:06 compute-0 unruffled_easley[303125]:        "osd_uuid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:18:06 compute-0 unruffled_easley[303125]:        "type": "bluestore"
Dec  3 18:18:06 compute-0 unruffled_easley[303125]:    },
Dec  3 18:18:06 compute-0 unruffled_easley[303125]:    "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": {
Dec  3 18:18:06 compute-0 unruffled_easley[303125]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:18:06 compute-0 unruffled_easley[303125]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 18:18:06 compute-0 unruffled_easley[303125]:        "osd_id": 0,
Dec  3 18:18:06 compute-0 unruffled_easley[303125]:        "osd_uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:18:06 compute-0 unruffled_easley[303125]:        "type": "bluestore"
Dec  3 18:18:06 compute-0 unruffled_easley[303125]:    }
Dec  3 18:18:06 compute-0 unruffled_easley[303125]: }
Dec  3 18:18:06 compute-0 systemd[1]: libpod-6ac4bfcc4c0fb0b78ef66d6c1622933cfa79a1f16ba8c36cf0625bbf76267417.scope: Deactivated successfully.
Dec  3 18:18:06 compute-0 podman[303061]: 2025-12-03 18:18:06.090599616 +0000 UTC m=+1.422040947 container died 6ac4bfcc4c0fb0b78ef66d6c1622933cfa79a1f16ba8c36cf0625bbf76267417 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_easley, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:18:06 compute-0 systemd[1]: libpod-6ac4bfcc4c0fb0b78ef66d6c1622933cfa79a1f16ba8c36cf0625bbf76267417.scope: Consumed 1.121s CPU time.
Dec  3 18:18:06 compute-0 python3.9[303327]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:18:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-c5f175e1880cc94bd246b8329c952ac7e09643a7d09f28815597ad047bc951b8-merged.mount: Deactivated successfully.
Dec  3 18:18:06 compute-0 podman[303061]: 2025-12-03 18:18:06.155942953 +0000 UTC m=+1.487384284 container remove 6ac4bfcc4c0fb0b78ef66d6c1622933cfa79a1f16ba8c36cf0625bbf76267417 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_easley, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:18:06 compute-0 systemd[1]: libpod-conmon-6ac4bfcc4c0fb0b78ef66d6c1622933cfa79a1f16ba8c36cf0625bbf76267417.scope: Deactivated successfully.
Dec  3 18:18:06 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:18:06 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:18:06 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:18:06 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:18:06 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v558: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:18:06 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev c2d8a9b6-6af1-425c-9b66-d2bf323145e9 does not exist
Dec  3 18:18:06 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 46dd2835-a3bb-4152-a6ed-863073773fc9 does not exist
Dec  3 18:18:06 compute-0 python3.9[303551]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:18:07 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:18:07 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:18:07 compute-0 python3.9[303703]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:18:08 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v559: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:18:08 compute-0 python3.9[303855]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:18:09 compute-0 python3.9[304007]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:18:09 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:18:10 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v560: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:18:10 compute-0 python3.9[304159]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:18:11 compute-0 python3.9[304311]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:18:12 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v561: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:18:12 compute-0 python3.9[304463]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:18:13 compute-0 python3.9[304615]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:18:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_18:18:13
Dec  3 18:18:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 18:18:13 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 18:18:13 compute-0 ceph-mgr[193091]: [balancer INFO root] pools ['volumes', 'backups', '.rgw.root', '.mgr', 'default.rgw.control', 'vms', 'images', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta']
Dec  3 18:18:13 compute-0 ceph-mgr[193091]: [balancer INFO root] prepared 0/10 changes
Dec  3 18:18:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:18:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:18:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:18:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:18:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:18:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:18:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 18:18:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:18:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 18:18:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:18:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:18:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:18:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:18:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:18:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:18:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:18:14 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v562: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:18:14 compute-0 python3.9[304693]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtlogd.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtlogd.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:18:14 compute-0 ceph-mgr[193091]: client.0 ms_handle_reset on v2:192.168.122.100:6800/817799961
Dec  3 18:18:14 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:18:15 compute-0 python3.9[304845]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:18:15 compute-0 podman[304895]: 2025-12-03 18:18:15.583708833 +0000 UTC m=+0.098025982 container health_status 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  3 18:18:15 compute-0 python3.9[304946]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:18:16 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v563: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:18:16 compute-0 python3.9[305098]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:18:17 compute-0 python3.9[305176]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtnodedevd.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:18:18 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v564: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:18:18 compute-0 python3.9[305328]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:18:18 compute-0 python3.9[305406]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:18:19 compute-0 python3.9[305559]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:18:19 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:18:20 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v565: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:18:20 compute-0 python3.9[305637]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:18:21 compute-0 python3.9[305789]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:18:22 compute-0 python3.9[305867]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtproxyd.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtproxyd.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:18:22 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v566: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:18:22 compute-0 podman[305991]: 2025-12-03 18:18:22.716536823 +0000 UTC m=+0.088809081 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec  3 18:18:22 compute-0 python3.9[306036]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:18:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:18:23.302 286999 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:18:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:18:23.303 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:18:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:18:23.303 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:18:23 compute-0 python3.9[306115]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:18:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 18:18:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:18:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 18:18:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:18:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:18:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:18:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:18:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:18:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:18:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:18:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:18:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:18:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 18:18:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:18:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:18:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:18:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 18:18:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:18:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 18:18:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:18:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:18:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:18:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 18:18:24 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v567: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:18:24 compute-0 python3.9[306267]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:18:24 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:18:25 compute-0 python3.9[306345]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:18:26 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v568: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:18:26 compute-0 python3.9[306497]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:18:26 compute-0 python3.9[306575]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtqemud.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtqemud.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:18:27 compute-0 python3.9[306727]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:18:28 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v569: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:18:28 compute-0 python3.9[306805]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:18:29 compute-0 python3.9[306957]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:18:29 compute-0 podman[158200]: time="2025-12-03T18:18:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:18:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:18:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35731 "" "Go-http-client/1.1"
Dec  3 18:18:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:18:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7289 "" "Go-http-client/1.1"
Dec  3 18:18:29 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:18:30 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v570: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:18:30 compute-0 python3.9[307035]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:18:31 compute-0 python3.9[307187]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:18:31 compute-0 openstack_network_exporter[160319]: ERROR   18:18:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:18:31 compute-0 openstack_network_exporter[160319]: ERROR   18:18:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:18:31 compute-0 openstack_network_exporter[160319]: ERROR   18:18:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:18:31 compute-0 openstack_network_exporter[160319]: ERROR   18:18:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:18:31 compute-0 openstack_network_exporter[160319]: ERROR   18:18:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:18:31 compute-0 python3.9[307265]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtsecretd.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtsecretd.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:18:32 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v571: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:18:33 compute-0 python3.9[307417]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:18:33 compute-0 podman[307496]: 2025-12-03 18:18:33.924824024 +0000 UTC m=+0.108333708 container health_status 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, io.openshift.expose-services=, architecture=x86_64, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., config_id=edpm, vendor=Red Hat, Inc., vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal)
Dec  3 18:18:33 compute-0 podman[307504]: 2025-12-03 18:18:33.938188927 +0000 UTC m=+0.103475927 container health_status f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 18:18:33 compute-0 podman[307495]: 2025-12-03 18:18:33.94073667 +0000 UTC m=+0.142067168 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  3 18:18:33 compute-0 podman[307505]: 2025-12-03 18:18:33.948614356 +0000 UTC m=+0.107310633 container health_status ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, release=1214.1726694543, io.buildah.version=1.29.0, io.openshift.expose-services=, name=ubi9, version=9.4, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, distribution-scope=public, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, release-0.7.12=, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm)
Dec  3 18:18:33 compute-0 podman[307500]: 2025-12-03 18:18:33.974139422 +0000 UTC m=+0.146007446 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec  3 18:18:33 compute-0 podman[307497]: 2025-12-03 18:18:33.981765612 +0000 UTC m=+0.167091152 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec  3 18:18:34 compute-0 python3.9[307517]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:18:34 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v572: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:18:34 compute-0 python3.9[307762]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:18:34 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:18:35 compute-0 python3.9[307840]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:18:36 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v573: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:18:36 compute-0 python3.9[307990]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
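The #012 sequences in ansible command entries are syslog-escaped newlines. Decoded, the task above runs the following shell to verify that files under /run/libvirt carry container SELinux types:

    set -o pipefail
    ls -lRZ /run/libvirt | grep -E ':container_\S+_t'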
Dec  3 18:18:38 compute-0 python3.9[308145]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
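The seboolean task above (name=os_enable_vtpm, persistent=True, state=True) corresponds to the standard SELinux tooling; shown here as an illustrative sketch, with -P mapping to persistent=True:

    setsebool -P os_enable_vtpm on
    getsebool os_enable_vtpm    # verify the boolean took effect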
Dec  3 18:18:38 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v574: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:18:39 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:18:39 compute-0 python3.9[308297]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:18:40 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v575: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:18:40 compute-0 python3.9[308449]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:18:41 compute-0 python3.9[308601]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:18:42 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v576: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:18:42 compute-0 python3.9[308753]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:18:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:18:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:18:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:18:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:18:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:18:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:18:44 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v577: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:18:44 compute-0 python3.9[308905]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:18:44 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:18:45 compute-0 python3.9[309057]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:18:45 compute-0 podman[309058]: 2025-12-03 18:18:45.945080171 +0000 UTC m=+0.104061371 container health_status 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  3 18:18:46 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v578: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:18:47 compute-0 python3.9[309232]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:18:48 compute-0 python3.9[309384]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:18:48 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v579: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:18:48 compute-0 python3.9[309536]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:18:49 compute-0 python3.9[309688]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
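The copy tasks above install the same tls.crt/tls.key/ca.crt material under both the libvirt and qemu PKI paths with differing owners and modes. A quick sanity check of the installed certificates, using standard openssl and the destination paths from the log:

    openssl x509 -noout -subject -enddate -in /etc/pki/libvirt/servercert.pem
    openssl x509 -noout -subject -enddate -in /etc/pki/qemu/ca-cert.pem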
Dec  3 18:18:49 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:18:50 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v580: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:18:50 compute-0 python3.9[309841]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:18:51 compute-0 python3.9[309993]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  3 18:18:52 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v581: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:18:52 compute-0 python3.9[310145]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
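Decoded from its #012 escapes, the command task above prints the cluster name and extracts the fsid from the generated ceph.conf:

    set -o pipefail;
    echo ceph
    awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs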
Dec  3 18:18:52 compute-0 podman[310174]: 2025-12-03 18:18:52.983806019 +0000 UTC m=+0.136500989 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec  3 18:18:53 compute-0 python3.9[310317]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  3 18:18:54 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v582: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:18:54 compute-0 python3.9[310467]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:18:54 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:18:55 compute-0 python3.9[310588]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764785933.9163215-1017-122103280746636/.source.xml follow=False _original_basename=secret.xml.j2 checksum=b70a8b2c7d6e9468dc649d953ae0f87075e4ba78 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:18:56 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v583: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:18:56 compute-0 python3.9[310740]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine c1caf3ba-b2a5-5005-a11e-e955c344dccc#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
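Decoded, the task above rotates the libvirt secret used for Ceph authentication (UUID taken from the log line; the secret value itself is marked NOT_LOGGING_PARAMETER and is not captured here):

    virsh secret-undefine c1caf3ba-b2a5-5005-a11e-e955c344dccc
    virsh secret-define --file /tmp/secret.xml
    # the key material would normally be attached afterwards with
    # 'virsh secret-set-value' (not visible in this log)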
Dec  3 18:18:56 compute-0 systemd[1]: Starting libvirt secret daemon...
Dec  3 18:18:56 compute-0 systemd[1]: Started libvirt secret daemon.
Dec  3 18:18:58 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v584: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:18:58 compute-0 python3.9[310921]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:18:59 compute-0 podman[158200]: time="2025-12-03T18:18:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:18:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:18:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35731 "" "Go-http-client/1.1"
Dec  3 18:18:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:18:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7282 "" "Go-http-client/1.1"
Dec  3 18:18:59 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:19:00 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v585: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:19:01 compute-0 openstack_network_exporter[160319]: ERROR   18:19:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:19:01 compute-0 openstack_network_exporter[160319]: ERROR   18:19:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:19:01 compute-0 openstack_network_exporter[160319]: ERROR   18:19:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:19:01 compute-0 openstack_network_exporter[160319]: ERROR   18:19:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:19:01 compute-0 openstack_network_exporter[160319]: ERROR   18:19:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
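The exporter errors above mean it found no control sockets for ovsdb-server or ovn-northd; on a compute node this is largely expected, since ovn-northd typically runs on control-plane hosts. A first check, assuming the standard Open vSwitch runtime directory:

    ls /var/run/openvswitch/*.ctl   # control sockets, present when the daemons are up
    ovs-appctl version              # talks to ovs-vswitchd via its default socket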
Dec  3 18:19:02 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v586: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:19:02 compute-0 python3.9[311384]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.702 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.703 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
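The two DEBUG lines above report that the [pollsters] source defines more pollsters than worker threads, so the polling task proceeds with a single thread. A sketch of raising the worker count, assuming the threads_to_process_pollsters option of recent ceilometer releases (the option name is an assumption here; verify against the installed version):

    # hypothetical tuning via crudini; option name assumed
    crudini --set /etc/ceilometer/ceilometer.conf polling threads_to_process_pollsters 4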
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.703 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.703 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f3f52673fe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f562c3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.705 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f526739b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.705 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.705 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673a40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.705 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52671a60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.705 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673a70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.705 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.706 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.706 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f562d33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.706 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f526733b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.706 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.706 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f526734d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.707 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f565c04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.707 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.707 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.707 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.708 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.708 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f3f5271c620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.709 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.709 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f3f5271c0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.709 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.709 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f3f5271c140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.709 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.709 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f3f52673980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.709 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.709 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f3f5271c1d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.709 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.709 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f3f52673a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.709 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.709 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f3f52672390>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.710 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.710 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f3f526739e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.710 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.710 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f3f5271c260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.710 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.710 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f3f5271c2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.708 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f526735f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'disk.device.allocation': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.710 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.711 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f3f52671ca0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.710 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'disk.device.allocation': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.711 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.712 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f3f52673470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.711 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f526736b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'disk.device.allocation': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.712 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'disk.device.allocation': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.712 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.713 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f3f5271c380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.712 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'disk.device.allocation': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.713 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.713 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f3f526734a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.713 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'disk.device.allocation': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.713 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.714 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f3f52671a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.714 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.714 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'disk.device.allocation': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.714 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f3f52673ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.715 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.715 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f3f52673500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.715 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.715 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f3f52673560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.715 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.715 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f3f526735c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.715 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.715 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f3f52673620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.715 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.715 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f3f52673680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.715 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.715 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f3f526736e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.716 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.716 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f3f52673f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.716 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.716 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f3f52673740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.716 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.716 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f3f52673f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.716 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.716 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.716 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.717 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.717 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.717 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.717 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.717 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.717 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.717 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.718 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.718 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.718 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.718 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.718 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.718 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.718 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.719 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.719 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.719 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.719 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.719 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.719 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.719 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.719 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.720 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:19:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:19:03.720 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:19:03 compute-0 python3.9[311537]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:19:04 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v587: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:19:04 compute-0 podman[311592]: 2025-12-03 18:19:04.417612624 +0000 UTC m=+0.105346764 container health_status 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, architecture=x86_64, release=1755695350, distribution-scope=public, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, config_id=edpm, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.openshift.expose-services=, name=ubi9-minimal, version=9.6)
Dec  3 18:19:04 compute-0 podman[311588]: 2025-12-03 18:19:04.429437019 +0000 UTC m=+0.114738937 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec  3 18:19:04 compute-0 podman[311597]: 2025-12-03 18:19:04.448148424 +0000 UTC m=+0.130577232 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  3 18:19:04 compute-0 podman[311598]: 2025-12-03 18:19:04.452006321 +0000 UTC m=+0.133492865 container health_status f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 18:19:04 compute-0 podman[311595]: 2025-12-03 18:19:04.453324303 +0000 UTC m=+0.139165795 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec  3 18:19:04 compute-0 podman[311600]: 2025-12-03 18:19:04.454319518 +0000 UTC m=+0.128161622 container health_status ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, name=ubi9, vcs-type=git, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., managed_by=edpm_ansible, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, release=1214.1726694543, maintainer=Red Hat, Inc., release-0.7.12=, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, architecture=x86_64, distribution-scope=public)
Dec  3 18:19:04 compute-0 python3.9[311671]: ansible-ansible.legacy.file Invoked with mode=0640 dest=/var/lib/edpm-config/firewall/libvirt.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/libvirt.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:19:04 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:19:05 compute-0 python3.9[311886]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:19:06 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v588: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:19:06 compute-0 python3.9[312038]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:19:07 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:19:07 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:19:07 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 18:19:07 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:19:07 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 18:19:07 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:19:07 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 9f2f59f5-00fb-45b8-b890-a28ea9481261 does not exist
Dec  3 18:19:07 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 6fdc2009-ed6c-44f8-9e8a-82d5faa97d86 does not exist
Dec  3 18:19:07 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 573de29f-593c-4d88-96c8-2fdcb34eb982 does not exist
Dec  3 18:19:07 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 18:19:07 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 18:19:07 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:19:07 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:19:07 compute-0 python3.9[312232]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:19:07 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 18:19:07 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:19:07 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:19:07 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:19:08 compute-0 podman[312513]: 2025-12-03 18:19:08.122672502 +0000 UTC m=+0.059897572 container create 1f5144d21a5caa92923233dec5df7b51726a534a67ac611e4a8d697e12ebe78b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_haslett, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:19:08 compute-0 systemd[1]: Started libpod-conmon-1f5144d21a5caa92923233dec5df7b51726a534a67ac611e4a8d697e12ebe78b.scope.
Dec  3 18:19:08 compute-0 podman[312513]: 2025-12-03 18:19:08.098944281 +0000 UTC m=+0.036169361 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:19:08 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:19:08 compute-0 podman[312513]: 2025-12-03 18:19:08.233569793 +0000 UTC m=+0.170794873 container init 1f5144d21a5caa92923233dec5df7b51726a534a67ac611e4a8d697e12ebe78b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_haslett, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True)
Dec  3 18:19:08 compute-0 podman[312513]: 2025-12-03 18:19:08.249755406 +0000 UTC m=+0.186980456 container start 1f5144d21a5caa92923233dec5df7b51726a534a67ac611e4a8d697e12ebe78b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_haslett, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  3 18:19:08 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v589: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:19:08 compute-0 podman[312513]: 2025-12-03 18:19:08.254576726 +0000 UTC m=+0.191801836 container attach 1f5144d21a5caa92923233dec5df7b51726a534a67ac611e4a8d697e12ebe78b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_haslett, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:19:08 compute-0 reverent_haslett[312555]: 167 167
Dec  3 18:19:08 compute-0 systemd[1]: libpod-1f5144d21a5caa92923233dec5df7b51726a534a67ac611e4a8d697e12ebe78b.scope: Deactivated successfully.
Dec  3 18:19:08 compute-0 podman[312513]: 2025-12-03 18:19:08.260414921 +0000 UTC m=+0.197639971 container died 1f5144d21a5caa92923233dec5df7b51726a534a67ac611e4a8d697e12ebe78b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_haslett, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Dec  3 18:19:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-91a4f60df84499ca76528e303d574f58800395c9b2a29b81da54040e9043f852-merged.mount: Deactivated successfully.
Dec  3 18:19:08 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:19:08 compute-0 python3.9[312552]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:19:08 compute-0 podman[312513]: 2025-12-03 18:19:08.344022813 +0000 UTC m=+0.281247863 container remove 1f5144d21a5caa92923233dec5df7b51726a534a67ac611e4a8d697e12ebe78b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:19:08 compute-0 systemd[1]: libpod-conmon-1f5144d21a5caa92923233dec5df7b51726a534a67ac611e4a8d697e12ebe78b.scope: Deactivated successfully.
Dec  3 18:19:08 compute-0 podman[312584]: 2025-12-03 18:19:08.542762982 +0000 UTC m=+0.060450536 container create b6e1c8a6a07513b774db5431ce1cd2ac97a1328113bb6e889ba4d524f4bc5364 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_fermi, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec  3 18:19:08 compute-0 systemd[1]: Started libpod-conmon-b6e1c8a6a07513b774db5431ce1cd2ac97a1328113bb6e889ba4d524f4bc5364.scope.
Dec  3 18:19:08 compute-0 podman[312584]: 2025-12-03 18:19:08.516089157 +0000 UTC m=+0.033776721 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:19:08 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:19:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48262b6a336028043a8f7afd5a69ae180ce40c5ccd4befea0ba02b2db7d24176/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:19:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48262b6a336028043a8f7afd5a69ae180ce40c5ccd4befea0ba02b2db7d24176/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:19:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48262b6a336028043a8f7afd5a69ae180ce40c5ccd4befea0ba02b2db7d24176/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:19:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48262b6a336028043a8f7afd5a69ae180ce40c5ccd4befea0ba02b2db7d24176/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:19:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48262b6a336028043a8f7afd5a69ae180ce40c5ccd4befea0ba02b2db7d24176/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 18:19:08 compute-0 podman[312584]: 2025-12-03 18:19:08.665187319 +0000 UTC m=+0.182874863 container init b6e1c8a6a07513b774db5431ce1cd2ac97a1328113bb6e889ba4d524f4bc5364 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_fermi, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef)
Dec  3 18:19:08 compute-0 podman[312584]: 2025-12-03 18:19:08.683651109 +0000 UTC m=+0.201338643 container start b6e1c8a6a07513b774db5431ce1cd2ac97a1328113bb6e889ba4d524f4bc5364 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:19:08 compute-0 podman[312584]: 2025-12-03 18:19:08.688544931 +0000 UTC m=+0.206232465 container attach b6e1c8a6a07513b774db5431ce1cd2ac97a1328113bb6e889ba4d524f4bc5364 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:19:09 compute-0 python3.9[312680]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.d40xkyha recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:19:09 compute-0 lucid_fermi[312600]: --> passed data devices: 0 physical, 3 LVM
Dec  3 18:19:09 compute-0 lucid_fermi[312600]: --> relative data size: 1.0
Dec  3 18:19:09 compute-0 lucid_fermi[312600]: --> All data devices are unavailable
Dec  3 18:19:09 compute-0 systemd[1]: libpod-b6e1c8a6a07513b774db5431ce1cd2ac97a1328113bb6e889ba4d524f4bc5364.scope: Deactivated successfully.
Dec  3 18:19:09 compute-0 systemd[1]: libpod-b6e1c8a6a07513b774db5431ce1cd2ac97a1328113bb6e889ba4d524f4bc5364.scope: Consumed 1.067s CPU time.
Dec  3 18:19:09 compute-0 podman[312584]: 2025-12-03 18:19:09.828093723 +0000 UTC m=+1.345781297 container died b6e1c8a6a07513b774db5431ce1cd2ac97a1328113bb6e889ba4d524f4bc5364 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_fermi, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  3 18:19:09 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:19:10 compute-0 python3.9[312867]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:19:10 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v590: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:19:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-48262b6a336028043a8f7afd5a69ae180ce40c5ccd4befea0ba02b2db7d24176-merged.mount: Deactivated successfully.
Dec  3 18:19:10 compute-0 python3.9[312946]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:19:11 compute-0 python3.9[313098]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:19:12 compute-0 podman[312584]: 2025-12-03 18:19:12.080199945 +0000 UTC m=+3.597887509 container remove b6e1c8a6a07513b774db5431ce1cd2ac97a1328113bb6e889ba4d524f4bc5364 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_fermi, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 18:19:12 compute-0 systemd[1]: libpod-conmon-b6e1c8a6a07513b774db5431ce1cd2ac97a1328113bb6e889ba4d524f4bc5364.scope: Deactivated successfully.
Dec  3 18:19:12 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v591: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:19:12 compute-0 python3[313358]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  3 18:19:13 compute-0 podman[313388]: 2025-12-03 18:19:13.104549929 +0000 UTC m=+0.060931908 container create eb5b78d259f0655482f9851133de2ff5cc7766c118f26d0dabf4d5ba22b225d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_chatelet, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec  3 18:19:13 compute-0 podman[313388]: 2025-12-03 18:19:13.067031325 +0000 UTC m=+0.023413304 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:19:13 compute-0 systemd[1]: Started libpod-conmon-eb5b78d259f0655482f9851133de2ff5cc7766c118f26d0dabf4d5ba22b225d4.scope.
Dec  3 18:19:13 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:19:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_18:19:13
Dec  3 18:19:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 18:19:13 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 18:19:13 compute-0 ceph-mgr[193091]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.log', 'backups', 'default.rgw.meta', 'images', '.mgr', 'volumes', 'cephfs.cephfs.data', '.rgw.root', 'vms']
Dec  3 18:19:13 compute-0 ceph-mgr[193091]: [balancer INFO root] prepared 0/10 changes
Dec  3 18:19:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:19:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:19:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:19:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:19:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:19:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:19:14 compute-0 python3.9[313558]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:19:14 compute-0 podman[313388]: 2025-12-03 18:19:14.04044012 +0000 UTC m=+0.996822149 container init eb5b78d259f0655482f9851133de2ff5cc7766c118f26d0dabf4d5ba22b225d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:19:14 compute-0 podman[313388]: 2025-12-03 18:19:14.051006384 +0000 UTC m=+1.007388383 container start eb5b78d259f0655482f9851133de2ff5cc7766c118f26d0dabf4d5ba22b225d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_chatelet, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:19:14 compute-0 distracted_chatelet[313504]: 167 167
Dec  3 18:19:14 compute-0 systemd[1]: libpod-eb5b78d259f0655482f9851133de2ff5cc7766c118f26d0dabf4d5ba22b225d4.scope: Deactivated successfully.
Dec  3 18:19:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 18:19:14 compute-0 podman[313388]: 2025-12-03 18:19:14.068683223 +0000 UTC m=+1.025065312 container attach eb5b78d259f0655482f9851133de2ff5cc7766c118f26d0dabf4d5ba22b225d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_chatelet, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:19:14 compute-0 podman[313388]: 2025-12-03 18:19:14.069698789 +0000 UTC m=+1.026080828 container died eb5b78d259f0655482f9851133de2ff5cc7766c118f26d0dabf4d5ba22b225d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_chatelet, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  3 18:19:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:19:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 18:19:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:19:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:19:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:19:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:19:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:19:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:19:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:19:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-c872b7e4cc8f1af483543de344a30c93adc2d6470571541c23f54adb889655fb-merged.mount: Deactivated successfully.
Dec  3 18:19:14 compute-0 podman[313388]: 2025-12-03 18:19:14.176356894 +0000 UTC m=+1.132738873 container remove eb5b78d259f0655482f9851133de2ff5cc7766c118f26d0dabf4d5ba22b225d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_chatelet, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec  3 18:19:14 compute-0 systemd[1]: libpod-conmon-eb5b78d259f0655482f9851133de2ff5cc7766c118f26d0dabf4d5ba22b225d4.scope: Deactivated successfully.
Dec  3 18:19:14 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v592: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:19:14 compute-0 podman[313633]: 2025-12-03 18:19:14.369826451 +0000 UTC m=+0.058575398 container create 7f1a3dfbe86ad31aaf6ab2a6c78ae4464ff72d2c76447bb13abe9e440d374bbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_chaplygin, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:19:14 compute-0 systemd[1]: Started libpod-conmon-7f1a3dfbe86ad31aaf6ab2a6c78ae4464ff72d2c76447bb13abe9e440d374bbe.scope.
Dec  3 18:19:14 compute-0 podman[313633]: 2025-12-03 18:19:14.347993358 +0000 UTC m=+0.036742375 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:19:14 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:19:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f5dbf43d6b0b86434883b7a3e8e2a23cecb9108fde1887cef5995dbd820c9cd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:19:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f5dbf43d6b0b86434883b7a3e8e2a23cecb9108fde1887cef5995dbd820c9cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:19:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f5dbf43d6b0b86434883b7a3e8e2a23cecb9108fde1887cef5995dbd820c9cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:19:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f5dbf43d6b0b86434883b7a3e8e2a23cecb9108fde1887cef5995dbd820c9cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:19:14 compute-0 python3.9[313670]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:19:14 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:19:15 compute-0 podman[313633]: 2025-12-03 18:19:15.228629914 +0000 UTC m=+0.917378931 container init 7f1a3dfbe86ad31aaf6ab2a6c78ae4464ff72d2c76447bb13abe9e440d374bbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  3 18:19:15 compute-0 podman[313633]: 2025-12-03 18:19:15.245165326 +0000 UTC m=+0.933914283 container start 7f1a3dfbe86ad31aaf6ab2a6c78ae4464ff72d2c76447bb13abe9e440d374bbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Dec  3 18:19:15 compute-0 podman[313633]: 2025-12-03 18:19:15.349886443 +0000 UTC m=+1.038635470 container attach 7f1a3dfbe86ad31aaf6ab2a6c78ae4464ff72d2c76447bb13abe9e440d374bbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_chaplygin, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:19:15 compute-0 python3.9[313829]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]: {
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:    "0": [
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:        {
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:            "devices": [
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:                "/dev/loop3"
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:            ],
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:            "lv_name": "ceph_lv0",
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:            "lv_size": "21470642176",
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=973fbbc8-5aff-4a53-bee8-42e5a6788dd6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:            "lv_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:            "name": "ceph_lv0",
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:            "tags": {
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:                "ceph.block_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:                "ceph.cluster_name": "ceph",
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:                "ceph.crush_device_class": "",
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:                "ceph.encrypted": "0",
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:                "ceph.osd_fsid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:                "ceph.osd_id": "0",
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:                "ceph.type": "block",
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:                "ceph.vdo": "0"
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:            },
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:            "type": "block",
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:            "vg_name": "ceph_vg0"
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:        }
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:    ],
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:    "1": [
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:        {
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:            "devices": [
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:                "/dev/loop4"
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:            ],
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:            "lv_name": "ceph_lv1",
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:            "lv_size": "21470642176",
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1e2b0083-5293-47cb-a3d1-bc27cedc4ede,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:            "lv_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:            "name": "ceph_lv1",
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:            "tags": {
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:                "ceph.block_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:                "ceph.cluster_name": "ceph",
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:                "ceph.crush_device_class": "",
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:                "ceph.encrypted": "0",
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:                "ceph.osd_fsid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:                "ceph.osd_id": "1",
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:                "ceph.type": "block",
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:                "ceph.vdo": "0"
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:            },
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:            "type": "block",
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:            "vg_name": "ceph_vg1"
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:        }
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:    ],
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:    "2": [
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:        {
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:            "devices": [
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:                "/dev/loop5"
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:            ],
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:            "lv_name": "ceph_lv2",
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:            "lv_size": "21470642176",
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2abec9de-afba-437e-9a17-384a1dd8cd50,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:            "lv_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:            "name": "ceph_lv2",
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:            "tags": {
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:                "ceph.block_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:                "ceph.cluster_name": "ceph",
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:                "ceph.crush_device_class": "",
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:                "ceph.encrypted": "0",
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:                "ceph.osd_fsid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:                "ceph.osd_id": "2",
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:                "ceph.type": "block",
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:                "ceph.vdo": "0"
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:            },
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:            "type": "block",
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:            "vg_name": "ceph_vg2"
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:        }
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]:    ]
Dec  3 18:19:16 compute-0 wonderful_chaplygin[313673]: }
Dec  3 18:19:16 compute-0 systemd[1]: libpod-7f1a3dfbe86ad31aaf6ab2a6c78ae4464ff72d2c76447bb13abe9e440d374bbe.scope: Deactivated successfully.
Dec  3 18:19:16 compute-0 podman[313633]: 2025-12-03 18:19:16.072371011 +0000 UTC m=+1.761119948 container died 7f1a3dfbe86ad31aaf6ab2a6c78ae4464ff72d2c76447bb13abe9e440d374bbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_chaplygin, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:19:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f5dbf43d6b0b86434883b7a3e8e2a23cecb9108fde1887cef5995dbd820c9cd-merged.mount: Deactivated successfully.
Dec  3 18:19:16 compute-0 podman[313633]: 2025-12-03 18:19:16.146426815 +0000 UTC m=+1.835175752 container remove 7f1a3dfbe86ad31aaf6ab2a6c78ae4464ff72d2c76447bb13abe9e440d374bbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_chaplygin, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:19:16 compute-0 systemd[1]: libpod-conmon-7f1a3dfbe86ad31aaf6ab2a6c78ae4464ff72d2c76447bb13abe9e440d374bbe.scope: Deactivated successfully.
Dec  3 18:19:16 compute-0 podman[313883]: 2025-12-03 18:19:16.206845349 +0000 UTC m=+0.116194014 container health_status 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 18:19:16 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v593: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:19:16 compute-0 python3.9[313938]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:19:17 compute-0 podman[314178]: 2025-12-03 18:19:16.952209327 +0000 UTC m=+0.031370332 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:19:17 compute-0 podman[314178]: 2025-12-03 18:19:17.260666517 +0000 UTC m=+0.339827502 container create 3019a6a69e2e138f43d2855e8386b50590da923b9fa43871a0412adf1e7fb646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  3 18:19:17 compute-0 python3.9[314249]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:19:17 compute-0 systemd[1]: Started libpod-conmon-3019a6a69e2e138f43d2855e8386b50590da923b9fa43871a0412adf1e7fb646.scope.
Dec  3 18:19:17 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:19:18 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v594: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:19:18 compute-0 python3.9[314332]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:19:18 compute-0 podman[314178]: 2025-12-03 18:19:18.817752255 +0000 UTC m=+1.896913290 container init 3019a6a69e2e138f43d2855e8386b50590da923b9fa43871a0412adf1e7fb646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_antonelli, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec  3 18:19:18 compute-0 podman[314178]: 2025-12-03 18:19:18.837119067 +0000 UTC m=+1.916280082 container start 3019a6a69e2e138f43d2855e8386b50590da923b9fa43871a0412adf1e7fb646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_antonelli, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:19:18 compute-0 beautiful_antonelli[314254]: 167 167
Dec  3 18:19:18 compute-0 systemd[1]: libpod-3019a6a69e2e138f43d2855e8386b50590da923b9fa43871a0412adf1e7fb646.scope: Deactivated successfully.
Dec  3 18:19:18 compute-0 podman[314178]: 2025-12-03 18:19:18.871207775 +0000 UTC m=+1.950368850 container attach 3019a6a69e2e138f43d2855e8386b50590da923b9fa43871a0412adf1e7fb646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec  3 18:19:18 compute-0 podman[314178]: 2025-12-03 18:19:18.87259937 +0000 UTC m=+1.951760365 container died 3019a6a69e2e138f43d2855e8386b50590da923b9fa43871a0412adf1e7fb646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  3 18:19:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-99ffbbedff41edd7b39e3a729ea8cecd689c5e30cef2990b9153712c8817b5f4-merged.mount: Deactivated successfully.
Dec  3 18:19:18 compute-0 podman[314178]: 2025-12-03 18:19:18.955018822 +0000 UTC m=+2.034179797 container remove 3019a6a69e2e138f43d2855e8386b50590da923b9fa43871a0412adf1e7fb646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_antonelli, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:19:18 compute-0 systemd[1]: libpod-conmon-3019a6a69e2e138f43d2855e8386b50590da923b9fa43871a0412adf1e7fb646.scope: Deactivated successfully.
Dec  3 18:19:19 compute-0 podman[314495]: 2025-12-03 18:19:19.158128519 +0000 UTC m=+0.060016975 container create 4f3541d07a53d44778d66b64ee33ade49fba5fd4bacd05986dab6f0bfe37017f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mayer, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:19:19 compute-0 systemd[1]: Started libpod-conmon-4f3541d07a53d44778d66b64ee33ade49fba5fd4bacd05986dab6f0bfe37017f.scope.
Dec  3 18:19:19 compute-0 podman[314495]: 2025-12-03 18:19:19.137554588 +0000 UTC m=+0.039443014 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:19:19 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:19:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9db488c960a5b492a890852d21793fda821bd79f85992c46a460d06202b37eff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:19:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9db488c960a5b492a890852d21793fda821bd79f85992c46a460d06202b37eff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:19:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9db488c960a5b492a890852d21793fda821bd79f85992c46a460d06202b37eff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:19:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9db488c960a5b492a890852d21793fda821bd79f85992c46a460d06202b37eff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:19:19 compute-0 python3.9[314517]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:19:19 compute-0 podman[314495]: 2025-12-03 18:19:19.315829276 +0000 UTC m=+0.217717662 container init 4f3541d07a53d44778d66b64ee33ade49fba5fd4bacd05986dab6f0bfe37017f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mayer, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True)
Dec  3 18:19:19 compute-0 podman[314495]: 2025-12-03 18:19:19.340797008 +0000 UTC m=+0.242685404 container start 4f3541d07a53d44778d66b64ee33ade49fba5fd4bacd05986dab6f0bfe37017f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mayer, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  3 18:19:19 compute-0 podman[314495]: 2025-12-03 18:19:19.347164606 +0000 UTC m=+0.249052972 container attach 4f3541d07a53d44778d66b64ee33ade49fba5fd4bacd05986dab6f0bfe37017f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:19:19 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:19:20 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v595: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:19:20 compute-0 python3.9[314615]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:19:20 compute-0 zealous_mayer[314524]: {
Dec  3 18:19:20 compute-0 zealous_mayer[314524]:    "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
Dec  3 18:19:20 compute-0 zealous_mayer[314524]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:19:20 compute-0 zealous_mayer[314524]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 18:19:20 compute-0 zealous_mayer[314524]:        "osd_id": 1,
Dec  3 18:19:20 compute-0 zealous_mayer[314524]:        "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:19:20 compute-0 zealous_mayer[314524]:        "type": "bluestore"
Dec  3 18:19:20 compute-0 zealous_mayer[314524]:    },
Dec  3 18:19:20 compute-0 zealous_mayer[314524]:    "2abec9de-afba-437e-9a17-384a1dd8cd50": {
Dec  3 18:19:20 compute-0 zealous_mayer[314524]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:19:20 compute-0 zealous_mayer[314524]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 18:19:20 compute-0 zealous_mayer[314524]:        "osd_id": 2,
Dec  3 18:19:20 compute-0 zealous_mayer[314524]:        "osd_uuid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:19:20 compute-0 zealous_mayer[314524]:        "type": "bluestore"
Dec  3 18:19:20 compute-0 zealous_mayer[314524]:    },
Dec  3 18:19:20 compute-0 zealous_mayer[314524]:    "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": {
Dec  3 18:19:20 compute-0 zealous_mayer[314524]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:19:20 compute-0 zealous_mayer[314524]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 18:19:20 compute-0 zealous_mayer[314524]:        "osd_id": 0,
Dec  3 18:19:20 compute-0 zealous_mayer[314524]:        "osd_uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:19:20 compute-0 zealous_mayer[314524]:        "type": "bluestore"
Dec  3 18:19:20 compute-0 zealous_mayer[314524]:    }
Dec  3 18:19:20 compute-0 zealous_mayer[314524]: }
Dec  3 18:19:20 compute-0 systemd[1]: libpod-4f3541d07a53d44778d66b64ee33ade49fba5fd4bacd05986dab6f0bfe37017f.scope: Deactivated successfully.
Dec  3 18:19:20 compute-0 systemd[1]: libpod-4f3541d07a53d44778d66b64ee33ade49fba5fd4bacd05986dab6f0bfe37017f.scope: Consumed 1.068s CPU time.
Dec  3 18:19:20 compute-0 podman[314495]: 2025-12-03 18:19:20.411699101 +0000 UTC m=+1.313587497 container died 4f3541d07a53d44778d66b64ee33ade49fba5fd4bacd05986dab6f0bfe37017f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  3 18:19:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-9db488c960a5b492a890852d21793fda821bd79f85992c46a460d06202b37eff-merged.mount: Deactivated successfully.
Dec  3 18:19:20 compute-0 podman[314495]: 2025-12-03 18:19:20.484261647 +0000 UTC m=+1.386150013 container remove 4f3541d07a53d44778d66b64ee33ade49fba5fd4bacd05986dab6f0bfe37017f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec  3 18:19:20 compute-0 systemd[1]: libpod-conmon-4f3541d07a53d44778d66b64ee33ade49fba5fd4bacd05986dab6f0bfe37017f.scope: Deactivated successfully.
Dec  3 18:19:20 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:19:20 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:19:20 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:19:20 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:19:20 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 820141ec-fd15-4688-ac3f-58d156faf3b2 does not exist
Dec  3 18:19:20 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 532134d5-c594-4789-9f96-3c194131bd9d does not exist
Dec  3 18:19:21 compute-0 python3.9[314848]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:19:21 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:19:21 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:19:21 compute-0 python3.9[314926]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:19:22 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v596: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:19:22 compute-0 python3.9[315078]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:19:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:19:23.303 286999 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:19:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:19:23.305 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:19:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:19:23.305 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:19:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 18:19:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:19:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 18:19:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:19:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:19:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:19:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:19:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:19:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:19:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:19:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:19:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:19:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 18:19:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:19:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:19:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:19:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 18:19:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:19:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 18:19:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:19:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:19:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:19:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 18:19:23 compute-0 podman[315202]: 2025-12-03 18:19:23.93692997 +0000 UTC m=+0.103321132 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:19:24 compute-0 python3.9[315251]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:19:24 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v597: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:19:24 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:19:25 compute-0 python3.9[315403]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:19:26 compute-0 python3.9[315556]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 18:19:26 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v598: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:19:27 compute-0 python3.9[315708]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:19:28 compute-0 python3.9[315860]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:19:28 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v599: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:19:28 compute-0 python3.9[315938]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/systemd/system/edpm_libvirt.target _original_basename=edpm_libvirt.target recurse=False state=file path=/etc/systemd/system/edpm_libvirt.target force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:19:29 compute-0 podman[158200]: time="2025-12-03T18:19:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:19:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:19:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35731 "" "Go-http-client/1.1"
Dec  3 18:19:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:19:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7282 "" "Go-http-client/1.1"
Dec  3 18:19:29 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:19:30 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v600: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:19:30 compute-0 python3.9[316090]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:19:31 compute-0 python3.9[316168]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/systemd/system/edpm_libvirt_guests.service _original_basename=edpm_libvirt_guests.service recurse=False state=file path=/etc/systemd/system/edpm_libvirt_guests.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:19:31 compute-0 openstack_network_exporter[160319]: ERROR   18:19:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:19:31 compute-0 openstack_network_exporter[160319]: ERROR   18:19:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:19:31 compute-0 openstack_network_exporter[160319]: ERROR   18:19:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:19:31 compute-0 openstack_network_exporter[160319]: ERROR   18:19:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:19:31 compute-0 openstack_network_exporter[160319]: 
Dec  3 18:19:31 compute-0 openstack_network_exporter[160319]: ERROR   18:19:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:19:31 compute-0 openstack_network_exporter[160319]: 
Dec  3 18:19:32 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v601: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:19:32 compute-0 python3.9[316320]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:19:33 compute-0 python3.9[316398]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/systemd/system/virt-guest-shutdown.target _original_basename=virt-guest-shutdown.target recurse=False state=file path=/etc/systemd/system/virt-guest-shutdown.target force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:19:33 compute-0 systemd[1]: session-55.scope: Deactivated successfully.
Dec  3 18:19:33 compute-0 systemd[1]: session-55.scope: Consumed 2min 32.462s CPU time.
Dec  3 18:19:33 compute-0 systemd-logind[784]: Session 55 logged out. Waiting for processes to exit.
Dec  3 18:19:33 compute-0 systemd-logind[784]: Removed session 55.
Dec  3 18:19:34 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v602: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:19:34 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:19:34 compute-0 podman[316423]: 2025-12-03 18:19:34.966997695 +0000 UTC m=+0.105681571 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 18:19:34 compute-0 podman[316424]: 2025-12-03 18:19:34.980737607 +0000 UTC m=+0.119420873 container health_status 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., release=1755695350, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container)
Dec  3 18:19:34 compute-0 podman[316428]: 2025-12-03 18:19:34.981190188 +0000 UTC m=+0.112127572 container health_status ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, config_id=edpm, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, io.buildah.version=1.29.0, vcs-type=git, vendor=Red Hat, Inc., managed_by=edpm_ansible, io.openshift.expose-services=)
Dec  3 18:19:34 compute-0 podman[316426]: 2025-12-03 18:19:34.988594303 +0000 UTC m=+0.105911598 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Dec  3 18:19:35 compute-0 podman[316427]: 2025-12-03 18:19:35.005161265 +0000 UTC m=+0.135845403 container health_status f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  3 18:19:35 compute-0 podman[316425]: 2025-12-03 18:19:35.027706487 +0000 UTC m=+0.166164918 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
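The four health_status events above come from podman's periodic container health checks: a transient systemd timer runs each container's configured 'test' command (for example '/openstack/healthcheck kepler') inside the container and records the verdict. A minimal Python sketch that triggers the same checks on demand, using the standard `podman healthcheck run` subcommand and the container names from the log:

    import subprocess

    # Container names taken from the health_status events above.
    CONTAINERS = ["kepler", "ceilometer_agent_compute", "node_exporter", "ovn_controller"]

    for name in CONTAINERS:
        # `podman healthcheck run` executes the container's configured
        # healthcheck command and exits 0 when the check is healthy.
        result = subprocess.run(["podman", "healthcheck", "run", name])
        print(name, "healthy" if result.returncode == 0 else "unhealthy")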
Dec  3 18:19:36 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v603: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:19:38 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v604: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
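The ceph-mgr pgmap lines recur every couple of seconds in a fixed shape, so they reduce easily to structured records for trending. A hypothetical parser; the regular expression is inferred from the visible format and is not anything ceph ships:

    import re

    # Regex inferred from the "pgmap vNNN: ..." lines above (an assumption,
    # not a ceph-defined format guarantee).
    PGMAP_RE = re.compile(
        r"pgmap v(?P<version>\d+): (?P<pgs>\d+) pgs: (?P<states>[^;]+); "
        r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
        r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail"
    )

    line = ("pgmap v604: 321 pgs: 321 active+clean; 456 KiB data, "
            "148 MiB used, 60 GiB / 60 GiB avail")
    match = PGMAP_RE.search(line)
    if match:
        print(match.groupdict())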
Dec  3 18:19:39 compute-0 systemd-logind[784]: New session 56 of user zuul.
Dec  3 18:19:39 compute-0 systemd[1]: Started Session 56 of User zuul.
Dec  3 18:19:39 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:19:40 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v605: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:19:41 compute-0 python3.9[316695]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 18:19:42 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v606: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:19:43 compute-0 python3.9[316849]: ansible-ansible.builtin.service_facts Invoked
Dec  3 18:19:43 compute-0 network[316866]: You are using the 'network' service provided by 'network-scripts', which is now deprecated.
Dec  3 18:19:43 compute-0 network[316867]: 'network-scripts' will be removed from the distribution in the near future.
Dec  3 18:19:43 compute-0 network[316868]: It is advised to switch to 'NetworkManager' for network management.
Dec  3 18:19:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections...
Dec  3 18:19:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:19:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections...
Dec  3 18:19:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:19:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections...
Dec  3 18:19:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:19:44 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v607: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:19:44 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:19:46 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v608: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:19:46 compute-0 podman[316972]: 2025-12-03 18:19:46.331885664 +0000 UTC m=+0.076115706 container health_status 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 18:19:47 compute-0 python3.9[317164]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  3 18:19:48 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v609: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:19:48 compute-0 python3.9[317248]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  3 18:19:49 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:19:50 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v610: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:19:51 compute-0 python3.9[317402]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 18:19:52 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v611: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:19:52 compute-0 python3.9[317554]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:19:54 compute-0 podman[317679]: 2025-12-03 18:19:54.208331313 +0000 UTC m=+0.073751029 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 18:19:54 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v612: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:19:54 compute-0 python3.9[317726]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 18:19:54 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:19:55 compute-0 python3.9[317878]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:19:56 compute-0 python3.9[318031]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:19:56 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v613: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:19:57 compute-0 python3.9[318154]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764785995.6403873-95-112460772433442/.source.iscsi _original_basename=.zg5lhj_5 follow=False checksum=d1c0a42b2394a1ef79f9ae4fa77adb920153541e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:19:58 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v614: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:19:58 compute-0 python3.9[318306]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:19:59 compute-0 python3.9[318458]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
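The sequence from 18:19:51 to 18:19:59 reconfigures the iSCSI initiator: a fresh name is generated with /usr/sbin/iscsi-iname, written to /etc/iscsi/initiatorname.iscsi (mode 0644), a .initiator_reset marker is touched (mode 0600), and a node.session.auth.chap_algs setting is ensured in /etc/iscsi/iscsid.conf. A root-only Python sketch of the same steps, assuming the same paths; the InitiatorName= file format is an assumption:

    import os
    import re
    import subprocess

    # Generate a new initiator name with the stock utility.
    iqn = subprocess.run(["/usr/sbin/iscsi-iname"], capture_output=True,
                         text=True, check=True).stdout.strip()

    # Persist it (mode 0644, matching the copy task above).
    with open("/etc/iscsi/initiatorname.iscsi", "w") as f:
        f.write(f"InitiatorName={iqn}\n")
    os.chmod("/etc/iscsi/initiatorname.iscsi", 0o644)

    # Marker recording that the initiator name was reset (mode 0600).
    open("/etc/iscsi/.initiator_reset", "a").close()
    os.chmod("/etc/iscsi/.initiator_reset", 0o600)

    # Mirror the lineinfile task: ensure the chap_algs line exists,
    # inserting it after the commented default when it is missing.
    conf_path = "/etc/iscsi/iscsid.conf"
    wanted = "node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5"
    lines = open(conf_path).read().splitlines()
    if not any(re.match(r"node\.session\.auth\.chap_algs", ln) for ln in lines):
        idx = next((i for i, ln in enumerate(lines)
                    if re.match(r"#node.session.auth.chap.algs", ln)),
                   len(lines) - 1)
        lines.insert(idx + 1, wanted)
        with open(conf_path, "w") as f:
            f.write("\n".join(lines) + "\n")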
Dec  3 18:19:59 compute-0 podman[158200]: time="2025-12-03T18:19:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:19:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:19:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35731 "" "Go-http-client/1.1"
Dec  3 18:19:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:19:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7283 "" "Go-http-client/1.1"
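The two GET requests above are served by the podman API service on /run/podman/podman.sock, the same socket the podman_exporter container mounts via CONTAINER_HOST. A stdlib-only sketch of an equivalent query; the UnixHTTPConnection helper is mine, while the socket path and endpoint are copied from the log:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client connection that dials a UNIX socket instead of TCP."""
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.unix_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for ctr in json.loads(conn.getresponse().read()):
        print(ctr["Names"], ctr["State"])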
Dec  3 18:19:59 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:20:00 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v615: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:20:00 compute-0 python3.9[318610]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 18:20:01 compute-0 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Dec  3 18:20:01 compute-0 openstack_network_exporter[160319]: ERROR   18:20:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:20:01 compute-0 openstack_network_exporter[160319]: ERROR   18:20:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:20:01 compute-0 openstack_network_exporter[160319]: ERROR   18:20:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:20:01 compute-0 openstack_network_exporter[160319]: ERROR   18:20:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:20:01 compute-0 openstack_network_exporter[160319]: ERROR   18:20:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:20:02 compute-0 python3.9[318766]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 18:20:02 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v616: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:20:02 compute-0 systemd[1]: Reloading.
Dec  3 18:20:02 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 18:20:02 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file in order to make it safer and more robust.
Dec  3 18:20:02 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Dec  3 18:20:02 compute-0 systemd[1]: Starting Open-iSCSI...
Dec  3 18:20:02 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Dec  3 18:20:02 compute-0 systemd[1]: Started Open-iSCSI.
Dec  3 18:20:02 compute-0 systemd[1]: Starting Logout of all iSCSI sessions on shutdown...
Dec  3 18:20:02 compute-0 systemd[1]: Finished Logout of all iSCSI sessions on shutdown.
Dec  3 18:20:04 compute-0 python3.9[318967]: ansible-ansible.builtin.service_facts Invoked
Dec  3 18:20:04 compute-0 network[318984]: You are using the 'network' service provided by 'network-scripts', which is now deprecated.
Dec  3 18:20:04 compute-0 network[318985]: 'network-scripts' will be removed from the distribution in the near future.
Dec  3 18:20:04 compute-0 network[318986]: It is advised to switch to 'NetworkManager' for network management.
Dec  3 18:20:04 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v617: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:20:04 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:20:05 compute-0 podman[318993]: 2025-12-03 18:20:05.219119734 +0000 UTC m=+0.127283557 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec  3 18:20:05 compute-0 podman[319004]: 2025-12-03 18:20:05.228708266 +0000 UTC m=+0.104135276 container health_status ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, version=9.4, vcs-type=git, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, config_id=edpm, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., architecture=x86_64, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec  3 18:20:05 compute-0 podman[318996]: 2025-12-03 18:20:05.236277079 +0000 UTC m=+0.122911481 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true)
Dec  3 18:20:05 compute-0 podman[319003]: 2025-12-03 18:20:05.245884933 +0000 UTC m=+0.124504770 container health_status f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  3 18:20:05 compute-0 podman[318994]: 2025-12-03 18:20:05.254350748 +0000 UTC m=+0.143523951 container health_status 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, name=ubi9-minimal, io.buildah.version=1.33.7, io.openshift.expose-services=, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, release=1755695350, version=9.6, config_id=edpm, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc.)
Dec  3 18:20:05 compute-0 podman[318995]: 2025-12-03 18:20:05.286322823 +0000 UTC m=+0.174779238 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  3 18:20:06 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v618: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:20:08 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v619: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:20:09 compute-0 python3.9[319376]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec  3 18:20:09 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:20:10 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v620: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:20:10 compute-0 python3.9[319528]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Dec  3 18:20:11 compute-0 python3.9[319684]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:20:12 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v621: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:20:13 compute-0 python3.9[319807]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764786011.1547189-172-159765105204618/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:20:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_18:20:13
Dec  3 18:20:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 18:20:13 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 18:20:13 compute-0 ceph-mgr[193091]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.meta', 'backups', 'cephfs.cephfs.data', '.mgr', 'images', 'volumes', 'default.rgw.control', 'default.rgw.log']
Dec  3 18:20:13 compute-0 ceph-mgr[193091]: [balancer INFO root] prepared 0/10 changes
Dec  3 18:20:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections...
Dec  3 18:20:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:20:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections...
Dec  3 18:20:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:20:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections...
Dec  3 18:20:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:20:14 compute-0 python3.9[319959]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:20:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 18:20:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:20:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 18:20:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:20:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:20:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:20:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:20:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:20:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:20:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:20:14 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v622: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:20:14 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:20:15 compute-0 python3.9[320111]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  3 18:20:15 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec  3 18:20:15 compute-0 systemd[1]: Stopped Load Kernel Modules.
Dec  3 18:20:15 compute-0 systemd[1]: Stopping Load Kernel Modules...
Dec  3 18:20:15 compute-0 systemd[1]: Starting Load Kernel Modules...
Dec  3 18:20:15 compute-0 systemd[1]: Finished Load Kernel Modules.
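Taken together, the modprobe call, the modules-load.d file, the /etc/modules line, and the systemd-modules-load restart above make dm-multipath available now and on every boot. A sketch of the same sequence, assuming the same paths and root privileges:

    import subprocess

    # Load the module immediately.
    subprocess.run(["modprobe", "dm-multipath"], check=True)

    # Have systemd-modules-load pick it up on every boot.
    with open("/etc/modules-load.d/dm-multipath.conf", "w") as f:
        f.write("dm-multipath\n")

    # The role also appends it to the legacy /etc/modules list.
    with open("/etc/modules", "a+") as f:
        f.seek(0)
        if "dm-multipath" not in f.read():
            f.write("dm-multipath\n")

    # Restart the loader so the change takes effect, as logged above.
    subprocess.run(["systemctl", "restart", "systemd-modules-load.service"],
                   check=True)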
Dec  3 18:20:16 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v623: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:20:16 compute-0 podman[320239]: 2025-12-03 18:20:16.701947598 +0000 UTC m=+0.069541117 container health_status 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  3 18:20:16 compute-0 python3.9[320291]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:20:17 compute-0 python3.9[320443]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 18:20:18 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v624: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:20:18 compute-0 python3.9[320595]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 18:20:19 compute-0 python3.9[320748]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:20:19 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:20:20 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v625: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:20:20 compute-0 python3.9[320871]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764786019.1297152-230-133106173263169/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:20:21 compute-0 python3.9[321123]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:20:21 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:20:21 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:20:21 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 18:20:21 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:20:21 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 18:20:21 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:20:21 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 2ee62a2d-ea86-4e4e-ad04-56988ad6d2b6 does not exist
Dec  3 18:20:21 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 31eab8f2-dbc9-4c5c-a7e2-1c214d0ec559 does not exist
Dec  3 18:20:21 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 47674875-ea07-49fd-8d43-a91358072b22 does not exist
Dec  3 18:20:21 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 18:20:21 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 18:20:21 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 18:20:21 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:20:21 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:20:21 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:20:21 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:20:21 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:20:21 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:20:22 compute-0 python3.9[321379]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:20:22 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v626: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:20:22 compute-0 podman[321489]: 2025-12-03 18:20:22.441409366 +0000 UTC m=+0.055440125 container create 8d57a98eda49cd37033ddacc1280ec22d0f1e30a40f55189ebdfca3d88210127 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_hermann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:20:22 compute-0 systemd[1]: Started libpod-conmon-8d57a98eda49cd37033ddacc1280ec22d0f1e30a40f55189ebdfca3d88210127.scope.
Dec  3 18:20:22 compute-0 podman[321489]: 2025-12-03 18:20:22.418517371 +0000 UTC m=+0.032548160 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:20:22 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:20:22 compute-0 podman[321489]: 2025-12-03 18:20:22.548350188 +0000 UTC m=+0.162381037 container init 8d57a98eda49cd37033ddacc1280ec22d0f1e30a40f55189ebdfca3d88210127 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_hermann, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:20:22 compute-0 podman[321489]: 2025-12-03 18:20:22.56160348 +0000 UTC m=+0.175634279 container start 8d57a98eda49cd37033ddacc1280ec22d0f1e30a40f55189ebdfca3d88210127 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_hermann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  3 18:20:22 compute-0 podman[321489]: 2025-12-03 18:20:22.570858384 +0000 UTC m=+0.184889183 container attach 8d57a98eda49cd37033ddacc1280ec22d0f1e30a40f55189ebdfca3d88210127 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True)
Dec  3 18:20:22 compute-0 objective_hermann[321534]: 167 167
Dec  3 18:20:22 compute-0 systemd[1]: libpod-8d57a98eda49cd37033ddacc1280ec22d0f1e30a40f55189ebdfca3d88210127.scope: Deactivated successfully.
Dec  3 18:20:22 compute-0 podman[321489]: 2025-12-03 18:20:22.577078275 +0000 UTC m=+0.191109144 container died 8d57a98eda49cd37033ddacc1280ec22d0f1e30a40f55189ebdfca3d88210127 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_hermann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:20:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-106eda41a003f389bdaa5224d093066d7497282c9c70d2da229a063416c8e481-merged.mount: Deactivated successfully.
Dec  3 18:20:22 compute-0 podman[321489]: 2025-12-03 18:20:22.658912428 +0000 UTC m=+0.272943227 container remove 8d57a98eda49cd37033ddacc1280ec22d0f1e30a40f55189ebdfca3d88210127 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_hermann, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True)
Dec  3 18:20:22 compute-0 systemd[1]: libpod-conmon-8d57a98eda49cd37033ddacc1280ec22d0f1e30a40f55189ebdfca3d88210127.scope: Deactivated successfully.
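The create/init/start/attach/died/remove burst above is the footprint of a one-shot container, the pattern cephadm uses to probe the ceph image; the "167 167" written to the attached stdout looks like a uid/gid pair. A sketch of such a probe with `podman run --rm`; the image digest is copied from the log, while the stat command and path are assumptions:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # Run a throwaway container that prints the owner uid/gid of
    # /var/lib/ceph inside the image, then removes itself.
    result = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True)
    print(result.stdout.strip())  # expected: "167 167"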
Dec  3 18:20:22 compute-0 podman[321579]: 2025-12-03 18:20:22.881124206 +0000 UTC m=+0.058077809 container create a71b901171deac2f2469a3409a608b7951e1a279550b7bece2b12e84f32c1e3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_gagarin, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec  3 18:20:22 compute-0 systemd[1]: Started libpod-conmon-a71b901171deac2f2469a3409a608b7951e1a279550b7bece2b12e84f32c1e3d.scope.
Dec  3 18:20:22 compute-0 podman[321579]: 2025-12-03 18:20:22.861307886 +0000 UTC m=+0.038261499 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:20:22 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:20:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c1f135afa92d2967612260012ac3a930d2ca023b8ab35bcbc9a15567d7ffa23/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:20:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c1f135afa92d2967612260012ac3a930d2ca023b8ab35bcbc9a15567d7ffa23/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:20:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c1f135afa92d2967612260012ac3a930d2ca023b8ab35bcbc9a15567d7ffa23/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:20:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c1f135afa92d2967612260012ac3a930d2ca023b8ab35bcbc9a15567d7ffa23/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:20:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c1f135afa92d2967612260012ac3a930d2ca023b8ab35bcbc9a15567d7ffa23/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 18:20:23 compute-0 podman[321579]: 2025-12-03 18:20:23.031883851 +0000 UTC m=+0.208837484 container init a71b901171deac2f2469a3409a608b7951e1a279550b7bece2b12e84f32c1e3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_gagarin, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec  3 18:20:23 compute-0 podman[321579]: 2025-12-03 18:20:23.042870078 +0000 UTC m=+0.219823691 container start a71b901171deac2f2469a3409a608b7951e1a279550b7bece2b12e84f32c1e3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_gagarin, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:20:23 compute-0 podman[321579]: 2025-12-03 18:20:23.049223631 +0000 UTC m=+0.226177224 container attach a71b901171deac2f2469a3409a608b7951e1a279550b7bece2b12e84f32c1e3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  3 18:20:23 compute-0 python3.9[321651]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:20:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:20:23.306 286999 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:20:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:20:23.307 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:20:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:20:23.307 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:20:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 18:20:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:20:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 18:20:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:20:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:20:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:20:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:20:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:20:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:20:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:20:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:20:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:20:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 18:20:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:20:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:20:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:20:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 18:20:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:20:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 18:20:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:20:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:20:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:20:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 18:20:24 compute-0 python3.9[321816]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:20:24 compute-0 distracted_gagarin[321630]: --> passed data devices: 0 physical, 3 LVM
Dec  3 18:20:24 compute-0 distracted_gagarin[321630]: --> relative data size: 1.0
Dec  3 18:20:24 compute-0 distracted_gagarin[321630]: --> All data devices are unavailable
Dec  3 18:20:24 compute-0 systemd[1]: libpod-a71b901171deac2f2469a3409a608b7951e1a279550b7bece2b12e84f32c1e3d.scope: Deactivated successfully.
Dec  3 18:20:24 compute-0 podman[321579]: 2025-12-03 18:20:24.167600075 +0000 UTC m=+1.344553688 container died a71b901171deac2f2469a3409a608b7951e1a279550b7bece2b12e84f32c1e3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_gagarin, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:20:24 compute-0 systemd[1]: libpod-a71b901171deac2f2469a3409a608b7951e1a279550b7bece2b12e84f32c1e3d.scope: Consumed 1.052s CPU time.
Dec  3 18:20:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c1f135afa92d2967612260012ac3a930d2ca023b8ab35bcbc9a15567d7ffa23-merged.mount: Deactivated successfully.
Dec  3 18:20:24 compute-0 podman[321579]: 2025-12-03 18:20:24.255418094 +0000 UTC m=+1.432371697 container remove a71b901171deac2f2469a3409a608b7951e1a279550b7bece2b12e84f32c1e3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_gagarin, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec  3 18:20:24 compute-0 systemd[1]: libpod-conmon-a71b901171deac2f2469a3409a608b7951e1a279550b7bece2b12e84f32c1e3d.scope: Deactivated successfully.
Dec  3 18:20:24 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v627: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:20:24 compute-0 podman[321840]: 2025-12-03 18:20:24.391339339 +0000 UTC m=+0.081838476 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  3 18:20:24 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:20:25 compute-0 podman[322070]: 2025-12-03 18:20:25.065991525 +0000 UTC m=+0.051600112 container create b43806c4328730a00b7f3fc0ccae9e0855b6d8fb286c11712601dc3ce168bd3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_bell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  3 18:20:25 compute-0 systemd[1]: Started libpod-conmon-b43806c4328730a00b7f3fc0ccae9e0855b6d8fb286c11712601dc3ce168bd3c.scope.
Dec  3 18:20:25 compute-0 podman[322070]: 2025-12-03 18:20:25.046516003 +0000 UTC m=+0.032124620 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:20:25 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:20:25 compute-0 podman[322070]: 2025-12-03 18:20:25.178308228 +0000 UTC m=+0.163916845 container init b43806c4328730a00b7f3fc0ccae9e0855b6d8fb286c11712601dc3ce168bd3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_bell, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:20:25 compute-0 podman[322070]: 2025-12-03 18:20:25.189624392 +0000 UTC m=+0.175232989 container start b43806c4328730a00b7f3fc0ccae9e0855b6d8fb286c11712601dc3ce168bd3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_bell, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:20:25 compute-0 podman[322070]: 2025-12-03 18:20:25.193980888 +0000 UTC m=+0.179589515 container attach b43806c4328730a00b7f3fc0ccae9e0855b6d8fb286c11712601dc3ce168bd3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Dec  3 18:20:25 compute-0 magical_bell[322111]: 167 167
Dec  3 18:20:25 compute-0 podman[322070]: 2025-12-03 18:20:25.199588194 +0000 UTC m=+0.185196791 container died b43806c4328730a00b7f3fc0ccae9e0855b6d8fb286c11712601dc3ce168bd3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_bell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:20:25 compute-0 systemd[1]: libpod-b43806c4328730a00b7f3fc0ccae9e0855b6d8fb286c11712601dc3ce168bd3c.scope: Deactivated successfully.
Dec  3 18:20:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-482639644c1ee79202acb47ef43b04de4ff8b214f9a0464bd985b22c3a0ec775-merged.mount: Deactivated successfully.
Dec  3 18:20:25 compute-0 podman[322070]: 2025-12-03 18:20:25.280765382 +0000 UTC m=+0.266373999 container remove b43806c4328730a00b7f3fc0ccae9e0855b6d8fb286c11712601dc3ce168bd3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_bell, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec  3 18:20:25 compute-0 systemd[1]: libpod-conmon-b43806c4328730a00b7f3fc0ccae9e0855b6d8fb286c11712601dc3ce168bd3c.scope: Deactivated successfully.
Dec  3 18:20:25 compute-0 podman[322184]: 2025-12-03 18:20:25.514510509 +0000 UTC m=+0.059167215 container create ec7b61ecb1cd31a6b7c2c2f7deeccafcc72ff5d3fe400918529c08dcd8d0fc0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_austin, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  3 18:20:25 compute-0 python3.9[322178]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:20:25 compute-0 podman[322184]: 2025-12-03 18:20:25.491809839 +0000 UTC m=+0.036466575 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:20:25 compute-0 systemd[1]: Started libpod-conmon-ec7b61ecb1cd31a6b7c2c2f7deeccafcc72ff5d3fe400918529c08dcd8d0fc0d.scope.
Dec  3 18:20:25 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:20:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/049e161547eddc91cb2c8c3f3bb3ef939d49a525dde12ffb2ef44a6a914f3e08/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:20:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/049e161547eddc91cb2c8c3f3bb3ef939d49a525dde12ffb2ef44a6a914f3e08/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:20:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/049e161547eddc91cb2c8c3f3bb3ef939d49a525dde12ffb2ef44a6a914f3e08/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:20:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/049e161547eddc91cb2c8c3f3bb3ef939d49a525dde12ffb2ef44a6a914f3e08/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:20:25 compute-0 podman[322184]: 2025-12-03 18:20:25.660104309 +0000 UTC m=+0.204761025 container init ec7b61ecb1cd31a6b7c2c2f7deeccafcc72ff5d3fe400918529c08dcd8d0fc0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_austin, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec  3 18:20:25 compute-0 podman[322184]: 2025-12-03 18:20:25.669305322 +0000 UTC m=+0.213962028 container start ec7b61ecb1cd31a6b7c2c2f7deeccafcc72ff5d3fe400918529c08dcd8d0fc0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:20:25 compute-0 podman[322184]: 2025-12-03 18:20:25.673554605 +0000 UTC m=+0.218211311 container attach ec7b61ecb1cd31a6b7c2c2f7deeccafcc72ff5d3fe400918529c08dcd8d0fc0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_austin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  3 18:20:26 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v628: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:20:26 compute-0 python3.9[322357]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:20:26 compute-0 jolly_austin[322201]: {
Dec  3 18:20:26 compute-0 jolly_austin[322201]:    "0": [
Dec  3 18:20:26 compute-0 jolly_austin[322201]:        {
Dec  3 18:20:26 compute-0 jolly_austin[322201]:            "devices": [
Dec  3 18:20:26 compute-0 jolly_austin[322201]:                "/dev/loop3"
Dec  3 18:20:26 compute-0 jolly_austin[322201]:            ],
Dec  3 18:20:26 compute-0 jolly_austin[322201]:            "lv_name": "ceph_lv0",
Dec  3 18:20:26 compute-0 jolly_austin[322201]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:20:26 compute-0 jolly_austin[322201]:            "lv_size": "21470642176",
Dec  3 18:20:26 compute-0 jolly_austin[322201]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=973fbbc8-5aff-4a53-bee8-42e5a6788dd6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:20:26 compute-0 jolly_austin[322201]:            "lv_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:20:26 compute-0 jolly_austin[322201]:            "name": "ceph_lv0",
Dec  3 18:20:26 compute-0 jolly_austin[322201]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:20:26 compute-0 jolly_austin[322201]:            "tags": {
Dec  3 18:20:26 compute-0 jolly_austin[322201]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:20:26 compute-0 jolly_austin[322201]:                "ceph.block_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:20:26 compute-0 jolly_austin[322201]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:20:26 compute-0 jolly_austin[322201]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:20:26 compute-0 jolly_austin[322201]:                "ceph.cluster_name": "ceph",
Dec  3 18:20:26 compute-0 jolly_austin[322201]:                "ceph.crush_device_class": "",
Dec  3 18:20:26 compute-0 jolly_austin[322201]:                "ceph.encrypted": "0",
Dec  3 18:20:26 compute-0 jolly_austin[322201]:                "ceph.osd_fsid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:20:26 compute-0 jolly_austin[322201]:                "ceph.osd_id": "0",
Dec  3 18:20:26 compute-0 jolly_austin[322201]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:20:26 compute-0 jolly_austin[322201]:                "ceph.type": "block",
Dec  3 18:20:26 compute-0 jolly_austin[322201]:                "ceph.vdo": "0"
Dec  3 18:20:26 compute-0 jolly_austin[322201]:            },
Dec  3 18:20:26 compute-0 jolly_austin[322201]:            "type": "block",
Dec  3 18:20:26 compute-0 jolly_austin[322201]:            "vg_name": "ceph_vg0"
Dec  3 18:20:26 compute-0 jolly_austin[322201]:        }
Dec  3 18:20:26 compute-0 jolly_austin[322201]:    ],
Dec  3 18:20:26 compute-0 jolly_austin[322201]:    "1": [
Dec  3 18:20:26 compute-0 jolly_austin[322201]:        {
Dec  3 18:20:26 compute-0 jolly_austin[322201]:            "devices": [
Dec  3 18:20:26 compute-0 jolly_austin[322201]:                "/dev/loop4"
Dec  3 18:20:26 compute-0 jolly_austin[322201]:            ],
Dec  3 18:20:26 compute-0 jolly_austin[322201]:            "lv_name": "ceph_lv1",
Dec  3 18:20:26 compute-0 jolly_austin[322201]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:20:26 compute-0 jolly_austin[322201]:            "lv_size": "21470642176",
Dec  3 18:20:26 compute-0 jolly_austin[322201]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1e2b0083-5293-47cb-a3d1-bc27cedc4ede,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:20:26 compute-0 jolly_austin[322201]:            "lv_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:20:26 compute-0 jolly_austin[322201]:            "name": "ceph_lv1",
Dec  3 18:20:26 compute-0 jolly_austin[322201]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:20:26 compute-0 jolly_austin[322201]:            "tags": {
Dec  3 18:20:26 compute-0 jolly_austin[322201]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:20:26 compute-0 jolly_austin[322201]:                "ceph.block_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:20:26 compute-0 jolly_austin[322201]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:20:26 compute-0 jolly_austin[322201]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:20:26 compute-0 jolly_austin[322201]:                "ceph.cluster_name": "ceph",
Dec  3 18:20:26 compute-0 jolly_austin[322201]:                "ceph.crush_device_class": "",
Dec  3 18:20:26 compute-0 jolly_austin[322201]:                "ceph.encrypted": "0",
Dec  3 18:20:26 compute-0 jolly_austin[322201]:                "ceph.osd_fsid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:20:26 compute-0 jolly_austin[322201]:                "ceph.osd_id": "1",
Dec  3 18:20:26 compute-0 jolly_austin[322201]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:20:26 compute-0 jolly_austin[322201]:                "ceph.type": "block",
Dec  3 18:20:26 compute-0 jolly_austin[322201]:                "ceph.vdo": "0"
Dec  3 18:20:26 compute-0 jolly_austin[322201]:            },
Dec  3 18:20:26 compute-0 jolly_austin[322201]:            "type": "block",
Dec  3 18:20:26 compute-0 jolly_austin[322201]:            "vg_name": "ceph_vg1"
Dec  3 18:20:26 compute-0 jolly_austin[322201]:        }
Dec  3 18:20:26 compute-0 jolly_austin[322201]:    ],
Dec  3 18:20:26 compute-0 jolly_austin[322201]:    "2": [
Dec  3 18:20:26 compute-0 jolly_austin[322201]:        {
Dec  3 18:20:26 compute-0 jolly_austin[322201]:            "devices": [
Dec  3 18:20:26 compute-0 jolly_austin[322201]:                "/dev/loop5"
Dec  3 18:20:26 compute-0 jolly_austin[322201]:            ],
Dec  3 18:20:26 compute-0 jolly_austin[322201]:            "lv_name": "ceph_lv2",
Dec  3 18:20:26 compute-0 jolly_austin[322201]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:20:26 compute-0 jolly_austin[322201]:            "lv_size": "21470642176",
Dec  3 18:20:26 compute-0 jolly_austin[322201]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2abec9de-afba-437e-9a17-384a1dd8cd50,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:20:26 compute-0 jolly_austin[322201]:            "lv_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:20:26 compute-0 jolly_austin[322201]:            "name": "ceph_lv2",
Dec  3 18:20:26 compute-0 jolly_austin[322201]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:20:26 compute-0 jolly_austin[322201]:            "tags": {
Dec  3 18:20:26 compute-0 jolly_austin[322201]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:20:26 compute-0 jolly_austin[322201]:                "ceph.block_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:20:26 compute-0 jolly_austin[322201]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:20:26 compute-0 jolly_austin[322201]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:20:26 compute-0 jolly_austin[322201]:                "ceph.cluster_name": "ceph",
Dec  3 18:20:26 compute-0 jolly_austin[322201]:                "ceph.crush_device_class": "",
Dec  3 18:20:26 compute-0 jolly_austin[322201]:                "ceph.encrypted": "0",
Dec  3 18:20:26 compute-0 jolly_austin[322201]:                "ceph.osd_fsid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:20:26 compute-0 jolly_austin[322201]:                "ceph.osd_id": "2",
Dec  3 18:20:26 compute-0 jolly_austin[322201]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:20:26 compute-0 jolly_austin[322201]:                "ceph.type": "block",
Dec  3 18:20:26 compute-0 jolly_austin[322201]:                "ceph.vdo": "0"
Dec  3 18:20:26 compute-0 jolly_austin[322201]:            },
Dec  3 18:20:26 compute-0 jolly_austin[322201]:            "type": "block",
Dec  3 18:20:26 compute-0 jolly_austin[322201]:            "vg_name": "ceph_vg2"
Dec  3 18:20:26 compute-0 jolly_austin[322201]:        }
Dec  3 18:20:26 compute-0 jolly_austin[322201]:    ]
Dec  3 18:20:26 compute-0 jolly_austin[322201]: }
Dec  3 18:20:26 compute-0 systemd[1]: libpod-ec7b61ecb1cd31a6b7c2c2f7deeccafcc72ff5d3fe400918529c08dcd8d0fc0d.scope: Deactivated successfully.
Dec  3 18:20:26 compute-0 podman[322184]: 2025-12-03 18:20:26.553218961 +0000 UTC m=+1.097875687 container died ec7b61ecb1cd31a6b7c2c2f7deeccafcc72ff5d3fe400918529c08dcd8d0fc0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_austin, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:20:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-049e161547eddc91cb2c8c3f3bb3ef939d49a525dde12ffb2ef44a6a914f3e08-merged.mount: Deactivated successfully.
Dec  3 18:20:26 compute-0 podman[322184]: 2025-12-03 18:20:26.630989656 +0000 UTC m=+1.175646362 container remove ec7b61ecb1cd31a6b7c2c2f7deeccafcc72ff5d3fe400918529c08dcd8d0fc0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_austin, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:20:26 compute-0 systemd[1]: libpod-conmon-ec7b61ecb1cd31a6b7c2c2f7deeccafcc72ff5d3fe400918529c08dcd8d0fc0d.scope: Deactivated successfully.
Dec  3 18:20:27 compute-0 podman[322665]: 2025-12-03 18:20:27.62234236 +0000 UTC m=+0.061771139 container create 035e68fe7e7ef34860aa00b63f560926ce0bbf1b6410490a11f76f2d390b6e9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_black, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec  3 18:20:27 compute-0 systemd[1]: Started libpod-conmon-035e68fe7e7ef34860aa00b63f560926ce0bbf1b6410490a11f76f2d390b6e9e.scope.
Dec  3 18:20:27 compute-0 podman[322665]: 2025-12-03 18:20:27.596258418 +0000 UTC m=+0.035687237 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:20:27 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:20:27 compute-0 podman[322665]: 2025-12-03 18:20:27.730876102 +0000 UTC m=+0.170304911 container init 035e68fe7e7ef34860aa00b63f560926ce0bbf1b6410490a11f76f2d390b6e9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_black, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:20:27 compute-0 python3.9[322673]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:20:27 compute-0 podman[322665]: 2025-12-03 18:20:27.746423818 +0000 UTC m=+0.185852637 container start 035e68fe7e7ef34860aa00b63f560926ce0bbf1b6410490a11f76f2d390b6e9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_black, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:20:27 compute-0 podman[322665]: 2025-12-03 18:20:27.752606398 +0000 UTC m=+0.192035207 container attach 035e68fe7e7ef34860aa00b63f560926ce0bbf1b6410490a11f76f2d390b6e9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_black, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:20:27 compute-0 silly_black[322683]: 167 167
Dec  3 18:20:27 compute-0 systemd[1]: libpod-035e68fe7e7ef34860aa00b63f560926ce0bbf1b6410490a11f76f2d390b6e9e.scope: Deactivated successfully.
Dec  3 18:20:27 compute-0 podman[322665]: 2025-12-03 18:20:27.756142124 +0000 UTC m=+0.195570903 container died 035e68fe7e7ef34860aa00b63f560926ce0bbf1b6410490a11f76f2d390b6e9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_black, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:20:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-abfcfc5a88e8666a5e043d29c5cf06669c5a552d92ea06287e641bca8e171dfc-merged.mount: Deactivated successfully.
Dec  3 18:20:27 compute-0 podman[322665]: 2025-12-03 18:20:27.814992361 +0000 UTC m=+0.254421140 container remove 035e68fe7e7ef34860aa00b63f560926ce0bbf1b6410490a11f76f2d390b6e9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:20:27 compute-0 systemd[1]: libpod-conmon-035e68fe7e7ef34860aa00b63f560926ce0bbf1b6410490a11f76f2d390b6e9e.scope: Deactivated successfully.
Dec  3 18:20:28 compute-0 podman[322750]: 2025-12-03 18:20:28.044585907 +0000 UTC m=+0.079025026 container create 3ee4c6c55a8b0b8116fd2c0187893ae259c37ae1ff79c3becce3fd85f0c0cd38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:20:28 compute-0 podman[322750]: 2025-12-03 18:20:28.018245538 +0000 UTC m=+0.052684697 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:20:28 compute-0 systemd[1]: Started libpod-conmon-3ee4c6c55a8b0b8116fd2c0187893ae259c37ae1ff79c3becce3fd85f0c0cd38.scope.
Dec  3 18:20:28 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:20:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f7faef5604b71766f01938d7cda750e8ded3305fa4e3ecab6968ccc6386ceec/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:20:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f7faef5604b71766f01938d7cda750e8ded3305fa4e3ecab6968ccc6386ceec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:20:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f7faef5604b71766f01938d7cda750e8ded3305fa4e3ecab6968ccc6386ceec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:20:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f7faef5604b71766f01938d7cda750e8ded3305fa4e3ecab6968ccc6386ceec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:20:28 compute-0 podman[322750]: 2025-12-03 18:20:28.193290462 +0000 UTC m=+0.227729621 container init 3ee4c6c55a8b0b8116fd2c0187893ae259c37ae1ff79c3becce3fd85f0c0cd38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_poitras, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:20:28 compute-0 podman[322750]: 2025-12-03 18:20:28.213925822 +0000 UTC m=+0.248364981 container start 3ee4c6c55a8b0b8116fd2c0187893ae259c37ae1ff79c3becce3fd85f0c0cd38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:20:28 compute-0 podman[322750]: 2025-12-03 18:20:28.221544737 +0000 UTC m=+0.255983886 container attach 3ee4c6c55a8b0b8116fd2c0187893ae259c37ae1ff79c3becce3fd85f0c0cd38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_poitras, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec  3 18:20:28 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v629: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:20:28 compute-0 python3.9[322877]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:20:29 compute-0 funny_poitras[322797]: {
Dec  3 18:20:29 compute-0 funny_poitras[322797]:    "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
Dec  3 18:20:29 compute-0 funny_poitras[322797]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:20:29 compute-0 funny_poitras[322797]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 18:20:29 compute-0 funny_poitras[322797]:        "osd_id": 1,
Dec  3 18:20:29 compute-0 funny_poitras[322797]:        "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:20:29 compute-0 funny_poitras[322797]:        "type": "bluestore"
Dec  3 18:20:29 compute-0 funny_poitras[322797]:    },
Dec  3 18:20:29 compute-0 funny_poitras[322797]:    "2abec9de-afba-437e-9a17-384a1dd8cd50": {
Dec  3 18:20:29 compute-0 funny_poitras[322797]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:20:29 compute-0 funny_poitras[322797]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 18:20:29 compute-0 funny_poitras[322797]:        "osd_id": 2,
Dec  3 18:20:29 compute-0 funny_poitras[322797]:        "osd_uuid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:20:29 compute-0 funny_poitras[322797]:        "type": "bluestore"
Dec  3 18:20:29 compute-0 funny_poitras[322797]:    },
Dec  3 18:20:29 compute-0 funny_poitras[322797]:    "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": {
Dec  3 18:20:29 compute-0 funny_poitras[322797]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:20:29 compute-0 funny_poitras[322797]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 18:20:29 compute-0 funny_poitras[322797]:        "osd_id": 0,
Dec  3 18:20:29 compute-0 funny_poitras[322797]:        "osd_uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:20:29 compute-0 funny_poitras[322797]:        "type": "bluestore"
Dec  3 18:20:29 compute-0 funny_poitras[322797]:    }
Dec  3 18:20:29 compute-0 funny_poitras[322797]: }
Dec  3 18:20:29 compute-0 systemd[1]: libpod-3ee4c6c55a8b0b8116fd2c0187893ae259c37ae1ff79c3becce3fd85f0c0cd38.scope: Deactivated successfully.
Dec  3 18:20:29 compute-0 podman[322750]: 2025-12-03 18:20:29.394154026 +0000 UTC m=+1.428593155 container died 3ee4c6c55a8b0b8116fd2c0187893ae259c37ae1ff79c3becce3fd85f0c0cd38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:20:29 compute-0 systemd[1]: libpod-3ee4c6c55a8b0b8116fd2c0187893ae259c37ae1ff79c3becce3fd85f0c0cd38.scope: Consumed 1.177s CPU time.
Dec  3 18:20:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f7faef5604b71766f01938d7cda750e8ded3305fa4e3ecab6968ccc6386ceec-merged.mount: Deactivated successfully.
Dec  3 18:20:29 compute-0 podman[322750]: 2025-12-03 18:20:29.492899359 +0000 UTC m=+1.527338468 container remove 3ee4c6c55a8b0b8116fd2c0187893ae259c37ae1ff79c3becce3fd85f0c0cd38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_poitras, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:20:29 compute-0 systemd[1]: libpod-conmon-3ee4c6c55a8b0b8116fd2c0187893ae259c37ae1ff79c3becce3fd85f0c0cd38.scope: Deactivated successfully.
Dec  3 18:20:29 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:20:29 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:20:29 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:20:29 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:20:29 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev fc23b0a7-5b56-48cc-8701-e18986b24097 does not exist
Dec  3 18:20:29 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 7b50d237-d6ed-49ab-9052-412cf15b5a4d does not exist
Dec  3 18:20:29 compute-0 podman[158200]: time="2025-12-03T18:20:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:20:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:20:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35731 "" "Go-http-client/1.1"
Dec  3 18:20:29 compute-0 python3.9[323069]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 18:20:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:20:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7287 "" "Go-http-client/1.1"
Dec  3 18:20:29 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:20:29 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Dec  3 18:20:29 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:20:29.961911) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 18:20:29 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Dec  3 18:20:29 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764786029961971, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1801, "num_deletes": 250, "total_data_size": 3047015, "memory_usage": 3089584, "flush_reason": "Manual Compaction"}
Dec  3 18:20:29 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Dec  3 18:20:29 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764786029979150, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1726128, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11748, "largest_seqno": 13548, "table_properties": {"data_size": 1720201, "index_size": 3002, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14676, "raw_average_key_size": 20, "raw_value_size": 1707193, "raw_average_value_size": 2338, "num_data_blocks": 139, "num_entries": 730, "num_filter_entries": 730, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764785825, "oldest_key_time": 1764785825, "file_creation_time": 1764786029, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a1ac3b74-8599-4a51-8b4c-6fd35a134427", "db_session_id": "TYOLZSJOOVNJYKF8Y1CE", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Dec  3 18:20:29 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 17340 microseconds, and 7568 cpu microseconds.
Dec  3 18:20:29 compute-0 ceph-mon[192802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 18:20:29 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:20:29.979245) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1726128 bytes OK
Dec  3 18:20:29 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:20:29.979273) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Dec  3 18:20:29 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:20:29.981821) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Dec  3 18:20:29 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:20:29.981837) EVENT_LOG_v1 {"time_micros": 1764786029981831, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 18:20:29 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:20:29.981855) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 18:20:29 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 3039390, prev total WAL file size 3039390, number of live WAL files 2.
Dec  3 18:20:29 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 18:20:29 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:20:29.983425) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323531' seq:72057594037927935, type:22 .. '6D67727374617400353032' seq:0, type:0; will stop at (end)
Dec  3 18:20:29 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 18:20:29 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1685KB)], [29(7705KB)]
Dec  3 18:20:29 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764786029983596, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 9616678, "oldest_snapshot_seqno": -1}
Dec  3 18:20:30 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4011 keys, 7578811 bytes, temperature: kUnknown
Dec  3 18:20:30 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764786030063954, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 7578811, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7550128, "index_size": 17565, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10053, "raw_key_size": 95442, "raw_average_key_size": 23, "raw_value_size": 7475946, "raw_average_value_size": 1863, "num_data_blocks": 765, "num_entries": 4011, "num_filter_entries": 4011, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764784942, "oldest_key_time": 0, "file_creation_time": 1764786029, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a1ac3b74-8599-4a51-8b4c-6fd35a134427", "db_session_id": "TYOLZSJOOVNJYKF8Y1CE", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Dec  3 18:20:30 compute-0 ceph-mon[192802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 18:20:30 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:20:30.064179) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 7578811 bytes
Dec  3 18:20:30 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:20:30.066659) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 119.6 rd, 94.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 7.5 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(10.0) write-amplify(4.4) OK, records in: 4427, records dropped: 416 output_compression: NoCompression
Dec  3 18:20:30 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:20:30.066681) EVENT_LOG_v1 {"time_micros": 1764786030066670, "job": 12, "event": "compaction_finished", "compaction_time_micros": 80419, "compaction_time_cpu_micros": 40041, "output_level": 6, "num_output_files": 1, "total_output_size": 7578811, "num_input_records": 4427, "num_output_records": 4011, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 18:20:30 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 18:20:30 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764786030067156, "job": 12, "event": "table_file_deletion", "file_number": 31}
Dec  3 18:20:30 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 18:20:30 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764786030068996, "job": 12, "event": "table_file_deletion", "file_number": 29}
Dec  3 18:20:30 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:20:29.983069) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:20:30 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:20:30.069155) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:20:30 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:20:30.069160) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:20:30 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:20:30.069162) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:20:30 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:20:30.069163) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:20:30 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:20:30.069165) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:20:30 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v630: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:20:30 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:20:30 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:20:30 compute-0 python3.9[323273]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:20:31 compute-0 openstack_network_exporter[160319]: ERROR   18:20:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:20:31 compute-0 openstack_network_exporter[160319]: ERROR   18:20:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:20:31 compute-0 openstack_network_exporter[160319]: ERROR   18:20:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:20:31 compute-0 openstack_network_exporter[160319]: ERROR   18:20:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:20:31 compute-0 openstack_network_exporter[160319]: ERROR   18:20:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:20:31 compute-0 python3.9[323425]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:20:32 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v631: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:20:32 compute-0 python3.9[323577]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:20:33 compute-0 python3.9[323655]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:20:34 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v632: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:20:34 compute-0 python3.9[323807]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:20:34 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:20:35 compute-0 python3.9[323885]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:20:35 compute-0 podman[324018]: 2025-12-03 18:20:35.744241994 +0000 UTC m=+0.103315386 container health_status f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 18:20:35 compute-0 podman[324010]: 2025-12-03 18:20:35.763309627 +0000 UTC m=+0.135343373 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125)
Dec  3 18:20:35 compute-0 podman[324020]: 2025-12-03 18:20:35.767315434 +0000 UTC m=+0.132074174 container health_status ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, vcs-type=git, config_id=edpm, managed_by=edpm_ansible, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, version=9.4, io.buildah.version=1.29.0, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, maintainer=Red Hat, Inc., name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9.)
Dec  3 18:20:35 compute-0 podman[324014]: 2025-12-03 18:20:35.768124353 +0000 UTC m=+0.134951562 container health_status 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, managed_by=edpm_ansible, io.buildah.version=1.33.7, version=9.6, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, vendor=Red Hat, Inc., vcs-type=git)
Dec  3 18:20:35 compute-0 podman[324017]: 2025-12-03 18:20:35.769588359 +0000 UTC m=+0.142166808 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.build-date=20251125)
Dec  3 18:20:35 compute-0 podman[324015]: 2025-12-03 18:20:35.771763882 +0000 UTC m=+0.131500539 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  3 18:20:35 compute-0 python3.9[324124]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:20:36 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v633: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:20:37 compute-0 python3.9[324304]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:20:38 compute-0 python3.9[324382]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:20:38 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v634: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:20:39 compute-0 python3.9[324534]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:20:39 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:20:40 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v635: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:20:40 compute-0 python3.9[324612]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:20:41 compute-0 python3.9[324764]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 18:20:41 compute-0 systemd[1]: Reloading.
Dec  3 18:20:41 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 18:20:41 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 18:20:42 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v636: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:20:43 compute-0 python3.9[324954]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:20:43 compute-0 python3.9[325032]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:20:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:20:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:20:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:20:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:20:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:20:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:20:44 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v637: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:20:44 compute-0 python3.9[325184]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:20:44 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:20:45 compute-0 python3.9[325262]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:20:46 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v638: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:20:46 compute-0 python3.9[325414]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 18:20:46 compute-0 systemd[1]: Reloading.
Dec  3 18:20:46 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 18:20:46 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 18:20:47 compute-0 systemd[1]: Starting Create netns directory...
Dec  3 18:20:47 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec  3 18:20:47 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec  3 18:20:47 compute-0 systemd[1]: Finished Create netns directory.
Dec  3 18:20:47 compute-0 podman[325452]: 2025-12-03 18:20:47.234765647 +0000 UTC m=+0.116508295 container health_status 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  3 18:20:48 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v639: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:20:48 compute-0 python3.9[325632]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:20:49 compute-0 python3.9[325785]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:20:49 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:20:50 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v640: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:20:51 compute-0 python3.9[325908]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764786049.1608722-437-233479409819253/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:20:52 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v641: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:20:52 compute-0 python3.9[326060]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:20:53 compute-0 python3.9[326212]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:20:54 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v642: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:20:54 compute-0 python3.9[326335]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764786052.9534483-462-219444241645093/.source.json _original_basename=.dzju1vtu follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:20:54 compute-0 podman[326435]: 2025-12-03 18:20:54.946270893 +0000 UTC m=+0.114467097 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec  3 18:20:54 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:20:55 compute-0 python3.9[326505]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:20:56 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v643: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:20:58 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v644: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:20:58 compute-0 python3.9[326932]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Dec  3 18:20:59 compute-0 python3.9[327084]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  3 18:20:59 compute-0 podman[158200]: time="2025-12-03T18:20:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:20:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:20:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35731 "" "Go-http-client/1.1"
Dec  3 18:20:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:20:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7279 "" "Go-http-client/1.1"
Dec  3 18:20:59 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:21:00 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v645: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:21:01 compute-0 python3.9[327236]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec  3 18:21:01 compute-0 openstack_network_exporter[160319]: ERROR   18:21:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:21:01 compute-0 openstack_network_exporter[160319]: ERROR   18:21:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:21:01 compute-0 openstack_network_exporter[160319]: ERROR   18:21:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:21:01 compute-0 openstack_network_exporter[160319]: ERROR   18:21:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:21:01 compute-0 openstack_network_exporter[160319]: ERROR   18:21:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:21:02 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec  3 18:21:02 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v646: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.703 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.704 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.705 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f3f52673fe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.706 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f562c3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.706 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.707 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.707 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f526739b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.707 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.708 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673a40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.708 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52671a60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.708 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673a70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.708 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.709 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.709 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f562d33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.709 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f526733b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.710 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.708 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.710 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f3f5271c620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.711 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.711 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f3f5271c0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.711 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.712 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f3f5271c140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.712 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.712 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f3f52673980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.712 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.712 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f3f5271c1d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.713 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.710 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f526734d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.713 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f3f52673a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.714 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.714 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f3f52672390>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.715 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.715 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f3f526739e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.715 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.715 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f3f5271c260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.716 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.716 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f3f5271c2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.716 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.716 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f3f52671ca0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.717 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.717 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f3f52673470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.717 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.717 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f3f5271c380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.717 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.718 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f3f526734a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.718 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.713 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f565c04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'disk.device.allocation': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.718 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'disk.device.allocation': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.719 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'disk.device.allocation': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.719 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'disk.device.allocation': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.720 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f526735f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'disk.device.allocation': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.720 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'disk.device.allocation': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.720 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f3f52671a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.721 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.721 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f526736b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'disk.device.allocation': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.722 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'disk.device.allocation': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.722 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f3f52673ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.724 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.724 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f3f52673500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.723 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'disk.device.allocation': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'cpu': [], 'network.incoming.bytes.rate': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.725 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'disk.device.allocation': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'cpu': [], 'network.incoming.bytes.rate': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.724 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.725 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'disk.device.allocation': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'cpu': [], 'network.incoming.bytes.rate': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.726 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f3f52673560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.726 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.727 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f3f526735c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.727 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.727 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f3f52673620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.727 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.728 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f3f52673680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.728 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.728 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f3f526736e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.728 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.729 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f3f52673f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.729 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.729 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f3f52673740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.729 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.730 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f3f52673f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.730 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.730 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.730 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.730 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.731 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.731 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.731 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.731 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.731 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.731 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.731 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.731 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.731 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.731 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.731 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.731 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.731 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.731 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.731 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.731 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.732 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.732 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.732 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.732 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.732 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.732 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:21:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:21:03.732 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
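[annotation] The block above is one complete ceilometer polling cycle: each pollster is registered against the shared ThreadPoolExecutor, runs the local_instances discovery, and is then skipped for the cycle because discovery returns no instances on this host (the pollster history stays a dict of empty lists). A minimal sketch of that skip logic follows; the names are illustrative only, this is not the actual ceilometer.polling.manager code:

    # Hypothetical sketch of the cycle logged above: discovery runs once
    # per cycle (cached under 'local_instances'), and any pollster whose
    # discovery returned no resources is skipped.
    from concurrent.futures import ThreadPoolExecutor

    def discover_local_instances():
        # No instances run on this host, hence every "Skip pollster" above.
        return []

    def run_pollster(name, history, discovery_cache):
        if 'local_instances' not in discovery_cache:
            discovery_cache['local_instances'] = discover_local_instances()
        resources = discovery_cache['local_instances']
        if not resources:
            print(f"Skip pollster {name}, no resources found this cycle")
            return
        history.setdefault(name, []).append(resources)

    # One shared history and discovery cache per polling task, as in the
    # "pollster history [...]" and "discovery cache [...]" entries above.
    history, discovery_cache = {}, {}
    with ThreadPoolExecutor() as executor:
        for name in ('memory.usage', 'cpu', 'disk.device.read.bytes'):
            executor.submit(run_pollster, name, history, discovery_cache)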
Dec  3 18:21:04 compute-0 python3[327416]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec  3 18:21:04 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v647: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:21:04 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
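[annotation] For reference, the three allocations in the ceph-mon line above account for essentially the whole cache budget: 348127232 + 348127232 + 322961408 = 1019215872 of the 1020054731-byte cache_size. A quick arithmetic check:

    # Sanity-check that inc_alloc + full_alloc + kv_alloc ~= cache_size
    cache_size = 1020054731
    parts = 348127232 + 348127232 + 322961408   # inc, full, kv
    print(parts, cache_size - parts)            # 1019215872, 838859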
Dec  3 18:21:05 compute-0 podman[327428]: 2025-12-03 18:21:05.82531111 +0000 UTC m=+1.527050722 image pull 9af6aa52ee187025bc25565b66d3eefb486acac26f9281e33f4cce76a40d21f7 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec  3 18:21:05 compute-0 podman[327466]: 2025-12-03 18:21:05.964282359 +0000 UTC m=+0.110104801 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Dec  3 18:21:05 compute-0 podman[327471]: 2025-12-03 18:21:05.970956691 +0000 UTC m=+0.106658368 container health_status f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 18:21:05 compute-0 podman[327463]: 2025-12-03 18:21:05.979520359 +0000 UTC m=+0.131412068 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Dec  3 18:21:05 compute-0 podman[327492]: 2025-12-03 18:21:05.993133838 +0000 UTC m=+0.118036972 container health_status ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, architecture=x86_64, container_name=kepler, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, release=1214.1726694543, build-date=2024-09-18T21:23:30, config_id=edpm, managed_by=edpm_ansible, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, vendor=Red Hat, Inc., version=9.4, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc.)
Dec  3 18:21:05 compute-0 podman[327464]: 2025-12-03 18:21:05.994267676 +0000 UTC m=+0.146684268 container health_status 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, container_name=openstack_network_exporter, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public)
Dec  3 18:21:06 compute-0 podman[327465]: 2025-12-03 18:21:06.000896727 +0000 UTC m=+0.153641337 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
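[annotation] The health_status events above are emitted by the transient "/usr/bin/podman healthcheck run" units that systemd starts for each container (one is visible for multipathd further below). The same state can be read back on demand; an illustrative helper, not part of the EDPM tooling:

    # Read back a container's health state as reported in the events above.
    import json
    import subprocess

    def health_status(container):
        out = subprocess.run(["podman", "inspect", container],
                             capture_output=True, text=True, check=True).stdout
        state = json.loads(out)[0]["State"]
        # "Health" is only present for containers created with a healthcheck.
        return state.get("Health", {}).get("Status", "none")

    print(health_status("ceilometer_agent_compute"))  # e.g. "healthy"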
Dec  3 18:21:06 compute-0 podman[327595]: 2025-12-03 18:21:06.033719612 +0000 UTC m=+0.058842118 container create 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, config_id=multipathd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  3 18:21:06 compute-0 podman[327595]: 2025-12-03 18:21:06.005905357 +0000 UTC m=+0.031027883 image pull 9af6aa52ee187025bc25565b66d3eefb486acac26f9281e33f4cce76a40d21f7 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec  3 18:21:06 compute-0 python3[327416]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
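[annotation] The PODMAN-CONTAINER-DEBUG line shows how edpm_container_manage flattens its config_data dict into "podman create" flags: environment becomes --env, net becomes --network, privileged becomes --privileged=True, and each volumes entry becomes --volume. A simplified sketch of that mapping; the real module handles many more keys (labels, healthcheck, restart policy, log driver):

    # Simplified, hypothetical mapping from a config_data dict like the
    # one logged above to "podman create" arguments.
    def to_podman_args(name, cfg):
        args = ["podman", "create", "--name", name]
        for key, value in cfg.get("environment", {}).items():
            args += ["--env", f"{key}={value}"]
        if cfg.get("net") == "host":
            args += ["--network", "host"]
        if cfg.get("privileged"):
            args.append("--privileged=True")
        for volume in cfg.get("volumes", []):
            args += ["--volume", volume]
        args.append(cfg["image"])
        return args

    cfg = {"environment": {"KOLLA_CONFIG_STRATEGY": "COPY_ALWAYS"},
           "net": "host", "privileged": True,
           "volumes": ["/dev:/dev", "/sys:/sys"],
           "image": "quay.io/podified-antelope-centos9/"
                    "openstack-multipathd:current-podified"}
    print(" ".join(to_podman_args("multipathd", cfg)))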
Dec  3 18:21:06 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v648: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:21:07 compute-0 python3.9[327784]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 18:21:08 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v649: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:21:08 compute-0 python3.9[327938]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:21:08 compute-0 python3.9[328014]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 18:21:09 compute-0 python3.9[328165]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764786069.062114-550-200272867719484/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:21:09 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:21:10 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v650: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:21:10 compute-0 python3.9[328241]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  3 18:21:10 compute-0 systemd[1]: Reloading.
Dec  3 18:21:10 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 18:21:10 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 18:21:11 compute-0 python3.9[328351]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 18:21:11 compute-0 systemd[1]: Reloading.
Dec  3 18:21:12 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 18:21:12 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
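[annotation] The two ansible-systemd invocations above (first daemon_reload=True, then state=restarted enabled=True for edpm_multipathd.service after the unit file was copied into place) reduce, roughly, to the following systemctl calls; sketched here for clarity, not taken from the EDPM playbooks:

    # What the ansible-systemd tasks above roughly amount to.
    import subprocess

    for cmd in (["systemctl", "daemon-reload"],
                ["systemctl", "enable", "edpm_multipathd.service"],
                ["systemctl", "restart", "edpm_multipathd.service"]):
        subprocess.run(cmd, check=True)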
Dec  3 18:21:12 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v651: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:21:12 compute-0 systemd[1]: Starting multipathd container...
Dec  3 18:21:12 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:21:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bb2c3fb1874980b91c83b0f82daf5aadc1d518273081a0bb5186f27d6fd47cf/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec  3 18:21:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bb2c3fb1874980b91c83b0f82daf5aadc1d518273081a0bb5186f27d6fd47cf/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec  3 18:21:12 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7.
Dec  3 18:21:12 compute-0 podman[328391]: 2025-12-03 18:21:12.725666199 +0000 UTC m=+0.209649644 container init 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:21:12 compute-0 multipathd[328404]: + sudo -E kolla_set_configs
Dec  3 18:21:12 compute-0 podman[328391]: 2025-12-03 18:21:12.763578328 +0000 UTC m=+0.247561683 container start 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  3 18:21:12 compute-0 podman[328391]: multipathd
Dec  3 18:21:12 compute-0 systemd[1]: Started multipathd container.
Dec  3 18:21:12 compute-0 multipathd[328404]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  3 18:21:12 compute-0 multipathd[328404]: INFO:__main__:Validating config file
Dec  3 18:21:12 compute-0 multipathd[328404]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  3 18:21:12 compute-0 multipathd[328404]: INFO:__main__:Writing out command to execute
Dec  3 18:21:12 compute-0 multipathd[328404]: ++ cat /run_command
Dec  3 18:21:12 compute-0 multipathd[328404]: + CMD='/usr/sbin/multipathd -d'
Dec  3 18:21:12 compute-0 multipathd[328404]: + ARGS=
Dec  3 18:21:12 compute-0 multipathd[328404]: + sudo kolla_copy_cacerts
Dec  3 18:21:12 compute-0 podman[328411]: 2025-12-03 18:21:12.875716027 +0000 UTC m=+0.097738111 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  3 18:21:12 compute-0 systemd[1]: 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7-723de04243caa5b9.service: Main process exited, code=exited, status=1/FAILURE
Dec  3 18:21:12 compute-0 systemd[1]: 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7-723de04243caa5b9.service: Failed with result 'exit-code'.
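[annotation] The failed unit above is the transient service systemd creates around "/usr/bin/podman healthcheck run <container-id>" (the "<id>-<hash>.service" pair started two lines earlier). Its exit status 1 lines up with the podman event reporting health_status=starting and health_failing_streak=1: the check ran before multipathd finished coming up, which is why the same pattern repeats after the restart below. A minimal sketch of what that transient unit executes, using the container ID from the log:

```python
# Sketch of the command the transient healthcheck unit wraps. "podman
# healthcheck run" exits 0 when the configured check passes and nonzero
# otherwise; a nonzero exit while the container is still starting is benign.
import subprocess

CID = "6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7"
rc = subprocess.run(["podman", "healthcheck", "run", CID]).returncode
print("healthy" if rc == 0 else f"not healthy yet (rc={rc})")
```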
Dec  3 18:21:12 compute-0 multipathd[328404]: + [[ ! -n '' ]]
Dec  3 18:21:12 compute-0 multipathd[328404]: + . kolla_extend_start
Dec  3 18:21:12 compute-0 multipathd[328404]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Dec  3 18:21:12 compute-0 multipathd[328404]: Running command: '/usr/sbin/multipathd -d'
Dec  3 18:21:12 compute-0 multipathd[328404]: + umask 0022
Dec  3 18:21:12 compute-0 multipathd[328404]: + exec /usr/sbin/multipathd -d
Dec  3 18:21:12 compute-0 multipathd[328404]: 4352.905104 | --------start up--------
Dec  3 18:21:12 compute-0 multipathd[328404]: 4352.905147 | read /etc/multipath.conf
Dec  3 18:21:12 compute-0 multipathd[328404]: 4352.915800 | path checkers start up
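[annotation] The shell trace above is the standard Kolla container entrypoint: kolla_set_configs applies /var/lib/kolla/config_files/config.json with the COPY_ALWAYS strategy, the command to run is read back from /run_command, and the wrapper exec's it so multipathd becomes the container's main process. A minimal sketch of that flow, mirroring only the steps visible in the trace (the real kolla scripts are shell and are not shown in the log):

```python
# Minimal sketch of the kolla_start flow logged above; every step maps to a
# "+" trace line or an INFO:__main__ message in the log.
import json
import os

def kolla_start_sketch():
    # kolla_set_configs: load and validate config.json (COPY_ALWAYS strategy)
    with open("/var/lib/kolla/config_files/config.json") as f:
        json.load(f)                       # "Validating config file"
    # the generated command is written out, then read back from /run_command
    with open("/run_command") as f:
        cmd = f.read().strip()             # "/usr/sbin/multipathd -d"
    print(f"Running command: '{cmd}'")
    os.umask(0o022)                        # matches "+ umask 0022"
    argv = cmd.split()
    os.execvp(argv[0], argv)               # "+ exec /usr/sbin/multipathd -d"
```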
Dec  3 18:21:13 compute-0 python3.9[328592]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 18:21:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_18:21:13
Dec  3 18:21:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 18:21:13 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 18:21:13 compute-0 ceph-mgr[193091]: [balancer INFO root] pools ['default.rgw.control', 'vms', 'cephfs.cephfs.data', 'volumes', 'default.rgw.log', 'cephfs.cephfs.meta', 'backups', 'images', '.mgr', '.rgw.root', 'default.rgw.meta']
Dec  3 18:21:13 compute-0 ceph-mgr[193091]: [balancer INFO root] prepared 0/10 changes
Dec  3 18:21:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:21:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:21:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:21:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:21:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:21:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:21:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 18:21:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:21:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 18:21:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:21:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:21:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:21:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:21:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:21:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:21:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:21:14 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v652: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:21:14 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:21:15 compute-0 python3.9[328746]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:21:16 compute-0 python3.9[328909]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  3 18:21:16 compute-0 systemd[1]: Stopping multipathd container...
Dec  3 18:21:16 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v653: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:21:16 compute-0 multipathd[328404]: 4356.719972 | exit (signal)
Dec  3 18:21:16 compute-0 multipathd[328404]: 4356.720803 | --------shut down-------
Dec  3 18:21:16 compute-0 systemd[1]: libpod-6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7.scope: Deactivated successfully.
Dec  3 18:21:16 compute-0 conmon[328404]: conmon 6b6179f2bc75659bb207 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7.scope/container/memory.events
Dec  3 18:21:16 compute-0 podman[328913]: 2025-12-03 18:21:16.769044665 +0000 UTC m=+0.473028349 container died 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:21:16 compute-0 systemd[1]: 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7-723de04243caa5b9.timer: Deactivated successfully.
Dec  3 18:21:16 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7.
Dec  3 18:21:16 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7-userdata-shm.mount: Deactivated successfully.
Dec  3 18:21:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-0bb2c3fb1874980b91c83b0f82daf5aadc1d518273081a0bb5186f27d6fd47cf-merged.mount: Deactivated successfully.
Dec  3 18:21:16 compute-0 podman[328913]: 2025-12-03 18:21:16.85920588 +0000 UTC m=+0.563189544 container cleanup 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Dec  3 18:21:16 compute-0 podman[328913]: multipathd
Dec  3 18:21:16 compute-0 podman[328939]: multipathd
Dec  3 18:21:16 compute-0 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Dec  3 18:21:16 compute-0 systemd[1]: Stopped multipathd container.
Dec  3 18:21:17 compute-0 systemd[1]: Starting multipathd container...
Dec  3 18:21:17 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:21:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bb2c3fb1874980b91c83b0f82daf5aadc1d518273081a0bb5186f27d6fd47cf/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec  3 18:21:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bb2c3fb1874980b91c83b0f82daf5aadc1d518273081a0bb5186f27d6fd47cf/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec  3 18:21:17 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7.
Dec  3 18:21:17 compute-0 podman[328952]: 2025-12-03 18:21:17.224289172 +0000 UTC m=+0.200972594 container init 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec  3 18:21:17 compute-0 multipathd[328967]: + sudo -E kolla_set_configs
Dec  3 18:21:17 compute-0 podman[328952]: 2025-12-03 18:21:17.283254681 +0000 UTC m=+0.259938113 container start 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  3 18:21:17 compute-0 podman[328952]: multipathd
Dec  3 18:21:17 compute-0 systemd[1]: Started multipathd container.
Dec  3 18:21:17 compute-0 multipathd[328967]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  3 18:21:17 compute-0 multipathd[328967]: INFO:__main__:Validating config file
Dec  3 18:21:17 compute-0 multipathd[328967]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  3 18:21:17 compute-0 multipathd[328967]: INFO:__main__:Writing out command to execute
Dec  3 18:21:17 compute-0 multipathd[328967]: ++ cat /run_command
Dec  3 18:21:17 compute-0 multipathd[328967]: + CMD='/usr/sbin/multipathd -d'
Dec  3 18:21:17 compute-0 multipathd[328967]: + ARGS=
Dec  3 18:21:17 compute-0 multipathd[328967]: + sudo kolla_copy_cacerts
Dec  3 18:21:17 compute-0 podman[328974]: 2025-12-03 18:21:17.386946225 +0000 UTC m=+0.099565355 container health_status 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  3 18:21:17 compute-0 multipathd[328967]: + [[ ! -n '' ]]
Dec  3 18:21:17 compute-0 multipathd[328967]: + . kolla_extend_start
Dec  3 18:21:17 compute-0 multipathd[328967]: Running command: '/usr/sbin/multipathd -d'
Dec  3 18:21:17 compute-0 multipathd[328967]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Dec  3 18:21:17 compute-0 multipathd[328967]: + umask 0022
Dec  3 18:21:17 compute-0 multipathd[328967]: + exec /usr/sbin/multipathd -d
Dec  3 18:21:17 compute-0 multipathd[328967]: 4357.405392 | --------start up--------
Dec  3 18:21:17 compute-0 multipathd[328967]: 4357.405431 | read /etc/multipath.conf
Dec  3 18:21:17 compute-0 multipathd[328967]: 4357.415530 | path checkers start up
Dec  3 18:21:17 compute-0 podman[328976]: 2025-12-03 18:21:17.441563069 +0000 UTC m=+0.143846589 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, config_id=multipathd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true)
Dec  3 18:21:17 compute-0 systemd[1]: 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7-5bc7ef68be3a4477.service: Main process exited, code=exited, status=1/FAILURE
Dec  3 18:21:17 compute-0 systemd[1]: 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7-5bc7ef68be3a4477.service: Failed with result 'exit-code'.
Dec  3 18:21:18 compute-0 python3.9[329179]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
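[annotation] Taken together, the tasks from 18:21:13 to 18:21:18 implement a restart-required flag pattern: stat /etc/multipath/.multipath_restart_required, list the containers mounting /etc/multipath.conf, restart edpm_multipathd, then delete the flag so the restart is not repeated on the next run. A sketch of the same logic with a hypothetical helper name (the playbook itself is not shown in the log):

```python
# Restart-required flag pattern, as reconstructed from the logged ansible
# tasks. Paths, the podman filter, and the unit name all come from the log;
# restart_if_required() itself is a hypothetical wrapper.
import os
import subprocess

FLAG = "/etc/multipath/.multipath_restart_required"

def restart_if_required():
    if not os.path.exists(FLAG):                       # ansible.builtin.stat
        return
    names = subprocess.run(
        ["podman", "ps", "--filter", "volume=/etc/multipath.conf",
         "--format", "{{.Names}}"],
        capture_output=True, text=True, check=True,
    ).stdout.split()                                   # -> ["multipathd"]
    if names:                                          # ansible.builtin.systemd
        subprocess.run(["systemctl", "restart", "edpm_multipathd"], check=True)
    os.remove(FLAG)                        # ansible.builtin.file state=absent
```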
Dec  3 18:21:18 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v654: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:21:19 compute-0 python3.9[329331]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec  3 18:21:19 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:21:20 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v655: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:21:20 compute-0 python3.9[329484]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Dec  3 18:21:20 compute-0 kernel: Key type psk registered
Dec  3 18:21:21 compute-0 python3.9[329646]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:21:22 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v656: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:21:22 compute-0 python3.9[329769]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764786080.828395-630-123109595357552/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:21:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:21:23.307 286999 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:21:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:21:23.308 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:21:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:21:23.308 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:21:23 compute-0 python3.9[329921]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:21:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 18:21:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:21:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 18:21:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:21:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:21:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:21:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:21:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:21:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:21:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:21:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:21:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:21:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 18:21:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:21:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:21:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:21:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 18:21:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:21:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 18:21:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:21:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:21:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:21:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
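[annotation] The pg_autoscaler figures above are internally consistent with pg_target = usage_fraction x bias x 300, where 300 plausibly decomposes as the Ceph default mon_target_pg_per_osd (100) times the three OSDs backed by ceph_vg0..ceph_vg2 later in the log; that decomposition is an inference, not stated in the log. A quick check against three of the logged values:

```python
# Reproduce logged pg_target values; the factor 300 (assumed to be
# mon_target_pg_per_osd=100 x 3 OSDs) is inferred, not taken from the log.
CASES = [
    # (pool, usage_fraction, bias, logged pg target)
    (".mgr",               7.185749983720779e-06, 1.0, 0.0021557249951162337),
    ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0, 0.0006104707950771635),
    ("default.rgw.log",    2.1620840658982875e-06, 1.0, 0.0006486252197694863),
]
for pool, usage, bias, logged in CASES:
    computed = usage * bias * 300
    assert abs(computed - logged) < 1e-12, pool
    print(f"{pool}: {computed:.16g} matches logged {logged:.16g}")
```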
Dec  3 18:21:24 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v657: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:21:24 compute-0 python3.9[330073]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  3 18:21:24 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec  3 18:21:24 compute-0 systemd[1]: Stopped Load Kernel Modules.
Dec  3 18:21:24 compute-0 systemd[1]: Stopping Load Kernel Modules...
Dec  3 18:21:24 compute-0 systemd[1]: Starting Load Kernel Modules...
Dec  3 18:21:24 compute-0 systemd[1]: Finished Load Kernel Modules.
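[annotation] This block persists the nvme-fabrics module three ways: an immediate modprobe, a drop-in under /etc/modules-load.d/, and a line in /etc/modules, followed by a restart of systemd-modules-load.service to confirm the configuration loads cleanly. A sketch of the equivalent steps (the wrapper function is hypothetical; paths and the unit name are from the log):

```python
# Persist a kernel module the way the logged tasks do. Each step maps to an
# ansible module invocation above (modprobe, copy, lineinfile, systemd).
import subprocess
from pathlib import Path

def persist_module(name: str = "nvme-fabrics") -> None:
    subprocess.run(["modprobe", name], check=True)   # community.general.modprobe
    # ansible.legacy.copy -> /etc/modules-load.d/nvme-fabrics.conf (mode 0644)
    Path(f"/etc/modules-load.d/{name}.conf").write_text(name + "\n")
    # ansible.builtin.lineinfile on /etc/modules with create=True
    modules = Path("/etc/modules")
    lines = modules.read_text().splitlines() if modules.exists() else []
    if name not in lines:
        modules.write_text("\n".join(lines + [name]) + "\n")
    # ansible.builtin.systemd state=restarted
    subprocess.run(["systemctl", "restart", "systemd-modules-load.service"],
                   check=True)
```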
Dec  3 18:21:24 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:21:25 compute-0 podman[330201]: 2025-12-03 18:21:25.231058233 +0000 UTC m=+0.090265859 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Dec  3 18:21:25 compute-0 python3.9[330246]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  3 18:21:26 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v658: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:21:27 compute-0 systemd[1]: Reloading.
Dec  3 18:21:27 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 18:21:27 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 18:21:28 compute-0 systemd[1]: Reloading.
Dec  3 18:21:28 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v659: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:21:28 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 18:21:28 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 18:21:28 compute-0 systemd-logind[784]: Watching system buttons on /dev/input/event0 (Power Button)
Dec  3 18:21:28 compute-0 systemd-logind[784]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec  3 18:21:28 compute-0 lvm[330363]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  3 18:21:28 compute-0 lvm[330362]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  3 18:21:28 compute-0 lvm[330361]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  3 18:21:28 compute-0 lvm[330362]: VG ceph_vg1 finished
Dec  3 18:21:28 compute-0 lvm[330363]: VG ceph_vg2 finished
Dec  3 18:21:28 compute-0 lvm[330361]: VG ceph_vg0 finished
Dec  3 18:21:29 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  3 18:21:29 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec  3 18:21:29 compute-0 systemd[1]: Reloading.
Dec  3 18:21:29 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 18:21:29 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 18:21:29 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  3 18:21:29 compute-0 podman[158200]: time="2025-12-03T18:21:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:21:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:21:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 38320 "" "Go-http-client/1.1"
Dec  3 18:21:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:21:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7697 "" "Go-http-client/1.1"
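[annotation] These HTTP lines are the podman system service answering libpod REST calls over the unix socket that the podman_exporter container mounts (CONTAINER_HOST=unix:///run/podman/podman.sock in its config above). A sketch of the same containers/json query, assuming the requests-unixsocket package is available (it is not shown in the log); it must run as a user that can read the socket:

```python
# Mirror the logged GET /v4.9.3/libpod/containers/json?all=true request over
# /run/podman/podman.sock. Assumes the requests-unixsocket package, which
# URL-encodes the socket path into an http+unix:// scheme.
import requests_unixsocket

session = requests_unixsocket.Session()
resp = session.get(
    "http+unix://%2Frun%2Fpodman%2Fpodman.sock"
    "/v4.9.3/libpod/containers/json?all=true&external=false"
)
resp.raise_for_status()
for ctr in resp.json():
    print(ctr["Names"], ctr["State"])    # e.g. ['multipathd'] running
```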
Dec  3 18:21:29 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:21:30 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v660: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:21:30 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:21:30 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:21:30 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 18:21:30 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:21:30 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 18:21:30 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:21:30 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev fe0fc825-c5c6-41ad-800a-4803a6bdb139 does not exist
Dec  3 18:21:30 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev c76d2641-dee6-4ff7-8772-aa4eeab77f3f does not exist
Dec  3 18:21:30 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev c9ca723c-5e52-41db-9b23-72c5f3f5ee22 does not exist
Dec  3 18:21:30 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 18:21:30 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 18:21:30 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 18:21:30 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:21:30 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:21:30 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:21:30 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:21:30 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:21:30 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:21:30 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  3 18:21:30 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec  3 18:21:30 compute-0 systemd[1]: man-db-cache-update.service: Consumed 2.055s CPU time.
Dec  3 18:21:30 compute-0 systemd[1]: run-ra0ae82480a114ba0b4b5fac350edde67.service: Deactivated successfully.
Dec  3 18:21:31 compute-0 python3.9[331923]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  3 18:21:31 compute-0 systemd[1]: Stopping Open-iSCSI...
Dec  3 18:21:31 compute-0 iscsid[318806]: iscsid shutting down.
Dec  3 18:21:31 compute-0 systemd[1]: iscsid.service: Deactivated successfully.
Dec  3 18:21:31 compute-0 systemd[1]: Stopped Open-iSCSI.
Dec  3 18:21:31 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Dec  3 18:21:31 compute-0 systemd[1]: Starting Open-iSCSI...
Dec  3 18:21:31 compute-0 systemd[1]: Started Open-iSCSI.
Dec  3 18:21:31 compute-0 openstack_network_exporter[160319]: ERROR   18:21:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:21:31 compute-0 openstack_network_exporter[160319]: ERROR   18:21:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:21:31 compute-0 openstack_network_exporter[160319]: 
Dec  3 18:21:31 compute-0 openstack_network_exporter[160319]: ERROR   18:21:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:21:31 compute-0 openstack_network_exporter[160319]: ERROR   18:21:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:21:31 compute-0 openstack_network_exporter[160319]: 
Dec  3 18:21:31 compute-0 openstack_network_exporter[160319]: ERROR   18:21:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:21:31 compute-0 podman[331981]: 2025-12-03 18:21:31.462278442 +0000 UTC m=+0.077962512 container create 9465da3187be7d0b2c71d2331cc2249996e5e799c0050e107af3dcd49401cc7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec  3 18:21:31 compute-0 podman[331981]: 2025-12-03 18:21:31.410636579 +0000 UTC m=+0.026320639 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:21:31 compute-0 systemd[1]: Started libpod-conmon-9465da3187be7d0b2c71d2331cc2249996e5e799c0050e107af3dcd49401cc7c.scope.
Dec  3 18:21:31 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:21:31 compute-0 podman[331981]: 2025-12-03 18:21:31.586707158 +0000 UTC m=+0.202391218 container init 9465da3187be7d0b2c71d2331cc2249996e5e799c0050e107af3dcd49401cc7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:21:31 compute-0 podman[331981]: 2025-12-03 18:21:31.602701176 +0000 UTC m=+0.218385216 container start 9465da3187be7d0b2c71d2331cc2249996e5e799c0050e107af3dcd49401cc7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_antonelli, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  3 18:21:31 compute-0 podman[331981]: 2025-12-03 18:21:31.607102272 +0000 UTC m=+0.222786332 container attach 9465da3187be7d0b2c71d2331cc2249996e5e799c0050e107af3dcd49401cc7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_antonelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec  3 18:21:31 compute-0 kind_antonelli[332019]: 167 167
Dec  3 18:21:31 compute-0 systemd[1]: libpod-9465da3187be7d0b2c71d2331cc2249996e5e799c0050e107af3dcd49401cc7c.scope: Deactivated successfully.
Dec  3 18:21:31 compute-0 podman[331981]: 2025-12-03 18:21:31.613958389 +0000 UTC m=+0.229642459 container died 9465da3187be7d0b2c71d2331cc2249996e5e799c0050e107af3dcd49401cc7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec  3 18:21:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-18e6526c9b8d6ec76fc1a2571c87dcc4f20f27ba14ae22f76ef0fa2f93de3614-merged.mount: Deactivated successfully.
Dec  3 18:21:31 compute-0 podman[331981]: 2025-12-03 18:21:31.685959475 +0000 UTC m=+0.301643525 container remove 9465da3187be7d0b2c71d2331cc2249996e5e799c0050e107af3dcd49401cc7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_antonelli, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec  3 18:21:31 compute-0 systemd[1]: libpod-conmon-9465da3187be7d0b2c71d2331cc2249996e5e799c0050e107af3dcd49401cc7c.scope: Deactivated successfully.
Dec  3 18:21:31 compute-0 podman[332096]: 2025-12-03 18:21:31.911040611 +0000 UTC m=+0.082649695 container create 590c9684c33f4a722ed9a664c2f74ddd1234ace454e7604b22bb0e163fc1790c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:21:31 compute-0 systemd[1]: Started libpod-conmon-590c9684c33f4a722ed9a664c2f74ddd1234ace454e7604b22bb0e163fc1790c.scope.
Dec  3 18:21:31 compute-0 podman[332096]: 2025-12-03 18:21:31.8829759 +0000 UTC m=+0.054585074 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:21:31 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:21:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbf6e39482d66df7006aac0fb9a7bc3639271adc5dbd4e75661e19ca8e461daf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:21:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbf6e39482d66df7006aac0fb9a7bc3639271adc5dbd4e75661e19ca8e461daf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:21:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbf6e39482d66df7006aac0fb9a7bc3639271adc5dbd4e75661e19ca8e461daf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:21:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbf6e39482d66df7006aac0fb9a7bc3639271adc5dbd4e75661e19ca8e461daf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:21:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbf6e39482d66df7006aac0fb9a7bc3639271adc5dbd4e75661e19ca8e461daf/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 18:21:32 compute-0 podman[332096]: 2025-12-03 18:21:32.017640015 +0000 UTC m=+0.189249119 container init 590c9684c33f4a722ed9a664c2f74ddd1234ace454e7604b22bb0e163fc1790c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_colden, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec  3 18:21:32 compute-0 podman[332096]: 2025-12-03 18:21:32.032949716 +0000 UTC m=+0.204558800 container start 590c9684c33f4a722ed9a664c2f74ddd1234ace454e7604b22bb0e163fc1790c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_colden, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec  3 18:21:32 compute-0 podman[332096]: 2025-12-03 18:21:32.038172083 +0000 UTC m=+0.209781167 container attach 590c9684c33f4a722ed9a664c2f74ddd1234ace454e7604b22bb0e163fc1790c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:21:32 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v661: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:21:32 compute-0 python3.9[332190]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 18:21:33 compute-0 practical_colden[332137]: --> passed data devices: 0 physical, 3 LVM
Dec  3 18:21:33 compute-0 practical_colden[332137]: --> relative data size: 1.0
Dec  3 18:21:33 compute-0 practical_colden[332137]: --> All data devices are unavailable
Dec  3 18:21:33 compute-0 systemd[1]: libpod-590c9684c33f4a722ed9a664c2f74ddd1234ace454e7604b22bb0e163fc1790c.scope: Deactivated successfully.
Dec  3 18:21:33 compute-0 podman[332096]: 2025-12-03 18:21:33.19998296 +0000 UTC m=+1.371592044 container died 590c9684c33f4a722ed9a664c2f74ddd1234ace454e7604b22bb0e163fc1790c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_colden, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:21:33 compute-0 systemd[1]: libpod-590c9684c33f4a722ed9a664c2f74ddd1234ace454e7604b22bb0e163fc1790c.scope: Consumed 1.108s CPU time.
Dec  3 18:21:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-bbf6e39482d66df7006aac0fb9a7bc3639271adc5dbd4e75661e19ca8e461daf-merged.mount: Deactivated successfully.
Dec  3 18:21:33 compute-0 podman[332096]: 2025-12-03 18:21:33.277175181 +0000 UTC m=+1.448784265 container remove 590c9684c33f4a722ed9a664c2f74ddd1234ace454e7604b22bb0e163fc1790c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_colden, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:21:33 compute-0 systemd[1]: libpod-conmon-590c9684c33f4a722ed9a664c2f74ddd1234ace454e7604b22bb0e163fc1790c.scope: Deactivated successfully.
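
The podman and systemd lines around practical_colden trace the one-shot container pattern cephadm uses for host probes: image pull, then container create, init, start, attach, died, remove, each bracketed by a libpod-*.scope unit starting and deactivating. A self-contained sketch that condenses journal text of this shape into one event list per container:

    # Summarize podman lifecycle events from captured journal text.
    import re
    from collections import defaultdict

    EVENT = re.compile(
        r"container (create|init|start|attach|died|remove) "
        r"[0-9a-f]{64} .*?name=([a-z_]+)"
    )

    def lifecycle(journal_text: str) -> dict[str, list[str]]:
        events = defaultdict(list)
        for action, name in EVENT.findall(journal_text):
            events[name].append(action)
        return dict(events)

    # For the excerpt above, lifecycle(text)["practical_colden"] yields
    # ['start', 'attach', 'died', 'remove'] (create/init happened earlier).
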
Dec  3 18:21:33 compute-0 python3.9[332410]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:21:34 compute-0 podman[332561]: 2025-12-03 18:21:34.187391588 +0000 UTC m=+0.054285088 container create 20dabbd558ad21ff81462e9ba32a21063f12c56b8e1117728e5d1b8579dc7f30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  3 18:21:34 compute-0 systemd[1]: Started libpod-conmon-20dabbd558ad21ff81462e9ba32a21063f12c56b8e1117728e5d1b8579dc7f30.scope.
Dec  3 18:21:34 compute-0 podman[332561]: 2025-12-03 18:21:34.165271651 +0000 UTC m=+0.032165141 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:21:34 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:21:34 compute-0 podman[332561]: 2025-12-03 18:21:34.305118992 +0000 UTC m=+0.172012492 container init 20dabbd558ad21ff81462e9ba32a21063f12c56b8e1117728e5d1b8579dc7f30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_darwin, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:21:34 compute-0 podman[332561]: 2025-12-03 18:21:34.320860233 +0000 UTC m=+0.187753713 container start 20dabbd558ad21ff81462e9ba32a21063f12c56b8e1117728e5d1b8579dc7f30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:21:34 compute-0 podman[332561]: 2025-12-03 18:21:34.325406464 +0000 UTC m=+0.192299954 container attach 20dabbd558ad21ff81462e9ba32a21063f12c56b8e1117728e5d1b8579dc7f30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_darwin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:21:34 compute-0 wizardly_darwin[332606]: 167 167
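
The bare "167 167" printed by wizardly_darwin matches the uid and gid of the ceph user in CentOS-based Ceph images, so this throwaway container looks like an ownership probe run before host paths are chowned to match. A hypothetical replay, assuming podman, the image digest from these lines, and /var/lib/ceph as the probed path:

    # Hypothetical uid/gid probe; the path and the stat invocation are
    # assumptions -- only the expected "167 167" comes from the log.
    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    ).stdout.split()

    uid, gid = map(int, out)
    print(uid, gid)  # expected: 167 167, as logged above
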
Dec  3 18:21:34 compute-0 systemd[1]: libpod-20dabbd558ad21ff81462e9ba32a21063f12c56b8e1117728e5d1b8579dc7f30.scope: Deactivated successfully.
Dec  3 18:21:34 compute-0 podman[332561]: 2025-12-03 18:21:34.329990305 +0000 UTC m=+0.196883795 container died 20dabbd558ad21ff81462e9ba32a21063f12c56b8e1117728e5d1b8579dc7f30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_darwin, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec  3 18:21:34 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v662: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:21:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-2bca0edcf55a0f6223acf26a61269af5d819e26a1388f8e358eb6d826e3dfe12-merged.mount: Deactivated successfully.
Dec  3 18:21:34 compute-0 podman[332561]: 2025-12-03 18:21:34.37309841 +0000 UTC m=+0.239991900 container remove 20dabbd558ad21ff81462e9ba32a21063f12c56b8e1117728e5d1b8579dc7f30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  3 18:21:34 compute-0 systemd[1]: libpod-conmon-20dabbd558ad21ff81462e9ba32a21063f12c56b8e1117728e5d1b8579dc7f30.scope: Deactivated successfully.
Dec  3 18:21:34 compute-0 podman[332697]: 2025-12-03 18:21:34.55541769 +0000 UTC m=+0.059550475 container create d6a917e33d28993fd67c025c2acc6e99220a788064550192e234452c137a5c07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec  3 18:21:34 compute-0 systemd[1]: Started libpod-conmon-d6a917e33d28993fd67c025c2acc6e99220a788064550192e234452c137a5c07.scope.
Dec  3 18:21:34 compute-0 podman[332697]: 2025-12-03 18:21:34.533899219 +0000 UTC m=+0.038031994 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:21:34 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:21:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37346fc6f490407dfcf6477b11b0d753a007a1817dab3a4af921d41f57b19b50/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:21:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37346fc6f490407dfcf6477b11b0d753a007a1817dab3a4af921d41f57b19b50/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:21:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37346fc6f490407dfcf6477b11b0d753a007a1817dab3a4af921d41f57b19b50/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:21:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37346fc6f490407dfcf6477b11b0d753a007a1817dab3a4af921d41f57b19b50/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:21:34 compute-0 podman[332697]: 2025-12-03 18:21:34.699132065 +0000 UTC m=+0.203264880 container init d6a917e33d28993fd67c025c2acc6e99220a788064550192e234452c137a5c07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_swartz, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec  3 18:21:34 compute-0 podman[332697]: 2025-12-03 18:21:34.714862506 +0000 UTC m=+0.218995281 container start d6a917e33d28993fd67c025c2acc6e99220a788064550192e234452c137a5c07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_swartz, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:21:34 compute-0 podman[332697]: 2025-12-03 18:21:34.718966715 +0000 UTC m=+0.223099510 container attach d6a917e33d28993fd67c025c2acc6e99220a788064550192e234452c137a5c07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:21:34 compute-0 python3.9[332712]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  3 18:21:34 compute-0 systemd[1]: Reloading.
Dec  3 18:21:34 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:21:35 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 18:21:35 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 18:21:35 compute-0 nifty_swartz[332716]: {
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:    "0": [
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:        {
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:            "devices": [
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:                "/dev/loop3"
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:            ],
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:            "lv_name": "ceph_lv0",
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:            "lv_size": "21470642176",
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=973fbbc8-5aff-4a53-bee8-42e5a6788dd6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:            "lv_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:            "name": "ceph_lv0",
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:            "tags": {
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:                "ceph.block_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:                "ceph.cluster_name": "ceph",
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:                "ceph.crush_device_class": "",
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:                "ceph.encrypted": "0",
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:                "ceph.osd_fsid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:                "ceph.osd_id": "0",
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:                "ceph.type": "block",
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:                "ceph.vdo": "0"
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:            },
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:            "type": "block",
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:            "vg_name": "ceph_vg0"
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:        }
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:    ],
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:    "1": [
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:        {
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:            "devices": [
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:                "/dev/loop4"
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:            ],
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:            "lv_name": "ceph_lv1",
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:            "lv_size": "21470642176",
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1e2b0083-5293-47cb-a3d1-bc27cedc4ede,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:            "lv_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:            "name": "ceph_lv1",
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:            "tags": {
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:                "ceph.block_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:                "ceph.cluster_name": "ceph",
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:                "ceph.crush_device_class": "",
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:                "ceph.encrypted": "0",
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:                "ceph.osd_fsid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:                "ceph.osd_id": "1",
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:                "ceph.type": "block",
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:                "ceph.vdo": "0"
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:            },
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:            "type": "block",
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:            "vg_name": "ceph_vg1"
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:        }
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:    ],
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:    "2": [
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:        {
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:            "devices": [
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:                "/dev/loop5"
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:            ],
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:            "lv_name": "ceph_lv2",
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:            "lv_size": "21470642176",
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2abec9de-afba-437e-9a17-384a1dd8cd50,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:            "lv_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:            "name": "ceph_lv2",
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:            "tags": {
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:                "ceph.block_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:                "ceph.cluster_name": "ceph",
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:                "ceph.crush_device_class": "",
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:                "ceph.encrypted": "0",
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:                "ceph.osd_fsid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:                "ceph.osd_id": "2",
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:                "ceph.type": "block",
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:                "ceph.vdo": "0"
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:            },
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:            "type": "block",
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:            "vg_name": "ceph_vg2"
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:        }
Dec  3 18:21:35 compute-0 nifty_swartz[332716]:    ]
Dec  3 18:21:35 compute-0 nifty_swartz[332716]: }
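
The nifty_swartz block above is a pretty-printed `ceph-volume lvm list --format json` document captured line by line by the journal: one key per OSD id, each entry naming the logical volume, its backing loop device, and the ceph.* LV tags (cluster fsid, OSD fsid, encryption flag, and so on). A short sketch that reduces a reassembled copy of that JSON to a flat inventory:

    import json

    def osd_inventory(lvm_list_json: str) -> list[tuple[int, str, str]]:
        """Return (osd_id, lv_path, osd_fsid) rows from a
        `ceph-volume lvm list --format json` document; for the log above
        the first row is (0, '/dev/ceph_vg0/ceph_lv0', '973fbbc8-...')."""
        doc = json.loads(lvm_list_json)
        rows = []
        for osd_id, lvs in doc.items():
            for lv in lvs:  # exactly one LV per OSD in this deployment
                rows.append((int(osd_id), lv["lv_path"],
                             lv["tags"]["ceph.osd_fsid"]))
        return sorted(rows)
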
Dec  3 18:21:35 compute-0 systemd[1]: libpod-d6a917e33d28993fd67c025c2acc6e99220a788064550192e234452c137a5c07.scope: Deactivated successfully.
Dec  3 18:21:35 compute-0 podman[332697]: 2025-12-03 18:21:35.595030364 +0000 UTC m=+1.099163149 container died d6a917e33d28993fd67c025c2acc6e99220a788064550192e234452c137a5c07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:21:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-37346fc6f490407dfcf6477b11b0d753a007a1817dab3a4af921d41f57b19b50-merged.mount: Deactivated successfully.
Dec  3 18:21:35 compute-0 podman[332697]: 2025-12-03 18:21:35.691968694 +0000 UTC m=+1.196101509 container remove d6a917e33d28993fd67c025c2acc6e99220a788064550192e234452c137a5c07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_swartz, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec  3 18:21:35 compute-0 systemd[1]: libpod-conmon-d6a917e33d28993fd67c025c2acc6e99220a788064550192e234452c137a5c07.scope: Deactivated successfully.
Dec  3 18:21:36 compute-0 podman[332970]: 2025-12-03 18:21:36.201034335 +0000 UTC m=+0.120386499 container health_status 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, io.openshift.expose-services=, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, release=1755695350)
Dec  3 18:21:36 compute-0 podman[332969]: 2025-12-03 18:21:36.203809413 +0000 UTC m=+0.121215849 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  3 18:21:36 compute-0 podman[332980]: 2025-12-03 18:21:36.210129836 +0000 UTC m=+0.108883600 container health_status ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, release=1214.1726694543, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, container_name=kepler, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, release-0.7.12=, architecture=x86_64, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., distribution-scope=public, version=9.4)
Dec  3 18:21:36 compute-0 podman[332974]: 2025-12-03 18:21:36.226922904 +0000 UTC m=+0.123266650 container health_status f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  3 18:21:36 compute-0 podman[332972]: 2025-12-03 18:21:36.24410815 +0000 UTC m=+0.159719173 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  3 18:21:36 compute-0 podman[332973]: 2025-12-03 18:21:36.244993042 +0000 UTC m=+0.152607131 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true)
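
The five health_status events above are podman healthchecks firing on the edpm-managed containers; each event embeds the current status, the failing streak, and the full config_data the container was created from. A sketch that extracts a compact health table from journal text of this shape:

    # Pull (container, status, failing-streak) out of health_status events.
    import re

    NAME = re.compile(r"\(image=[^,]+, name=(\w+)")
    STATUS = re.compile(r"health_status=(\w+)")
    STREAK = re.compile(r"health_failing_streak=(\d+)")

    def health_table(journal_text: str) -> list[tuple[str, str, int]]:
        rows = []
        for line in journal_text.splitlines():
            if "container health_status" not in line:
                continue
            name, status, streak = (NAME.search(line), STATUS.search(line),
                                    STREAK.search(line))
            if name and status and streak:
                rows.append((name.group(1), status.group(1),
                             int(streak.group(1))))
        return rows

    # Excerpt above -> [('openstack_network_exporter', 'healthy', 0),
    #                   ('ceilometer_agent_ipmi', 'healthy', 0), ...]
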
Dec  3 18:21:36 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v663: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:21:36 compute-0 python3.9[333088]: ansible-ansible.builtin.service_facts Invoked
Dec  3 18:21:36 compute-0 network[333180]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  3 18:21:36 compute-0 network[333181]: 'network-scripts' will be removed from distribution in near future.
Dec  3 18:21:36 compute-0 network[333182]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  3 18:21:36 compute-0 podman[333201]: 2025-12-03 18:21:36.666939371 +0000 UTC m=+0.053326914 container create 35ddd420275f090edd1acb14c049bebc7a1f5b2c2fc35ef99346ab097e8ef692 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_bouman, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:21:36 compute-0 podman[333201]: 2025-12-03 18:21:36.64095267 +0000 UTC m=+0.027340233 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:21:37 compute-0 systemd[1]: Started libpod-conmon-35ddd420275f090edd1acb14c049bebc7a1f5b2c2fc35ef99346ab097e8ef692.scope.
Dec  3 18:21:38 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:21:38 compute-0 podman[333201]: 2025-12-03 18:21:38.051253832 +0000 UTC m=+1.437641385 container init 35ddd420275f090edd1acb14c049bebc7a1f5b2c2fc35ef99346ab097e8ef692 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_bouman, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:21:38 compute-0 podman[333201]: 2025-12-03 18:21:38.067116256 +0000 UTC m=+1.453503799 container start 35ddd420275f090edd1acb14c049bebc7a1f5b2c2fc35ef99346ab097e8ef692 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_bouman, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:21:38 compute-0 podman[333201]: 2025-12-03 18:21:38.07180815 +0000 UTC m=+1.458195763 container attach 35ddd420275f090edd1acb14c049bebc7a1f5b2c2fc35ef99346ab097e8ef692 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_bouman, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec  3 18:21:38 compute-0 elegant_bouman[333218]: 167 167
Dec  3 18:21:38 compute-0 podman[333201]: 2025-12-03 18:21:38.076006171 +0000 UTC m=+1.462393724 container died 35ddd420275f090edd1acb14c049bebc7a1f5b2c2fc35ef99346ab097e8ef692 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_bouman, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec  3 18:21:38 compute-0 systemd[1]: libpod-35ddd420275f090edd1acb14c049bebc7a1f5b2c2fc35ef99346ab097e8ef692.scope: Deactivated successfully.
Dec  3 18:21:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-02bb0eee4fc6d60f983b1bce5aab5c53ebbf7d40323130133593341271da2677-merged.mount: Deactivated successfully.
Dec  3 18:21:38 compute-0 podman[333201]: 2025-12-03 18:21:38.122439128 +0000 UTC m=+1.508826671 container remove 35ddd420275f090edd1acb14c049bebc7a1f5b2c2fc35ef99346ab097e8ef692 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:21:38 compute-0 systemd[1]: libpod-conmon-35ddd420275f090edd1acb14c049bebc7a1f5b2c2fc35ef99346ab097e8ef692.scope: Deactivated successfully.
Dec  3 18:21:38 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v664: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:21:38 compute-0 podman[333250]: 2025-12-03 18:21:38.360796766 +0000 UTC m=+0.091786826 container create 5bdcafb70a55f31991cd5c47af23ff4a4ee6304e34be71bd92ff0d8ce08c944d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_sutherland, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:21:38 compute-0 podman[333250]: 2025-12-03 18:21:38.329364654 +0000 UTC m=+0.060354704 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:21:38 compute-0 systemd[1]: Started libpod-conmon-5bdcafb70a55f31991cd5c47af23ff4a4ee6304e34be71bd92ff0d8ce08c944d.scope.
Dec  3 18:21:38 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:21:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e483d2392f1d71c4095ef9ca4c41cf2aacf690c3458d9945fcef57644886086/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:21:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e483d2392f1d71c4095ef9ca4c41cf2aacf690c3458d9945fcef57644886086/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:21:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e483d2392f1d71c4095ef9ca4c41cf2aacf690c3458d9945fcef57644886086/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:21:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e483d2392f1d71c4095ef9ca4c41cf2aacf690c3458d9945fcef57644886086/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:21:38 compute-0 podman[333250]: 2025-12-03 18:21:38.485213673 +0000 UTC m=+0.216203743 container init 5bdcafb70a55f31991cd5c47af23ff4a4ee6304e34be71bd92ff0d8ce08c944d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:21:38 compute-0 podman[333250]: 2025-12-03 18:21:38.50987774 +0000 UTC m=+0.240867760 container start 5bdcafb70a55f31991cd5c47af23ff4a4ee6304e34be71bd92ff0d8ce08c944d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:21:38 compute-0 podman[333250]: 2025-12-03 18:21:38.514683817 +0000 UTC m=+0.245673927 container attach 5bdcafb70a55f31991cd5c47af23ff4a4ee6304e34be71bd92ff0d8ce08c944d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_sutherland, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  3 18:21:39 compute-0 sad_sutherland[333274]: {
Dec  3 18:21:39 compute-0 sad_sutherland[333274]:    "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
Dec  3 18:21:39 compute-0 sad_sutherland[333274]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:21:39 compute-0 sad_sutherland[333274]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 18:21:39 compute-0 sad_sutherland[333274]:        "osd_id": 1,
Dec  3 18:21:39 compute-0 sad_sutherland[333274]:        "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:21:39 compute-0 sad_sutherland[333274]:        "type": "bluestore"
Dec  3 18:21:39 compute-0 sad_sutherland[333274]:    },
Dec  3 18:21:39 compute-0 sad_sutherland[333274]:    "2abec9de-afba-437e-9a17-384a1dd8cd50": {
Dec  3 18:21:39 compute-0 sad_sutherland[333274]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:21:39 compute-0 sad_sutherland[333274]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 18:21:39 compute-0 sad_sutherland[333274]:        "osd_id": 2,
Dec  3 18:21:39 compute-0 sad_sutherland[333274]:        "osd_uuid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:21:39 compute-0 sad_sutherland[333274]:        "type": "bluestore"
Dec  3 18:21:39 compute-0 sad_sutherland[333274]:    },
Dec  3 18:21:39 compute-0 sad_sutherland[333274]:    "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": {
Dec  3 18:21:39 compute-0 sad_sutherland[333274]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:21:39 compute-0 sad_sutherland[333274]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 18:21:39 compute-0 sad_sutherland[333274]:        "osd_id": 0,
Dec  3 18:21:39 compute-0 sad_sutherland[333274]:        "osd_uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:21:39 compute-0 sad_sutherland[333274]:        "type": "bluestore"
Dec  3 18:21:39 compute-0 sad_sutherland[333274]:    }
Dec  3 18:21:39 compute-0 sad_sutherland[333274]: }
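
sad_sutherland's output has the shape of a bluestore device listing (for example `ceph-volume raw list`): keyed by OSD UUID, with the device-mapper path, OSD id, and store type for each OSD. Since the OSD UUID equals the ceph.osd_fsid tag in the lvm list output above, joining the two documents is a quick consistency check; a self-contained sketch under that assumption:

    import json

    def cross_check(lvm_list_json: str, raw_list_json: str) -> None:
        """Assert every OSD from `lvm list` also appears in the bluestore
        listing, joined on osd_fsid == osd_uuid (as in the log above)."""
        lvm = json.loads(lvm_list_json)
        raw = json.loads(raw_list_json)
        by_fsid = {
            lv["tags"]["ceph.osd_fsid"]: (int(osd_id), lv["lv_path"])
            for osd_id, lvs in lvm.items() for lv in lvs
        }
        for osd_uuid, entry in raw.items():
            osd_id, lv_path = by_fsid[osd_uuid]  # KeyError means drift
            assert osd_id == entry["osd_id"], "osd_id mismatch"
            print(f"osd.{osd_id}: {lv_path} -> {entry['device']} ok")
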
Dec  3 18:21:39 compute-0 systemd[1]: libpod-5bdcafb70a55f31991cd5c47af23ff4a4ee6304e34be71bd92ff0d8ce08c944d.scope: Deactivated successfully.
Dec  3 18:21:39 compute-0 systemd[1]: libpod-5bdcafb70a55f31991cd5c47af23ff4a4ee6304e34be71bd92ff0d8ce08c944d.scope: Consumed 1.074s CPU time.
Dec  3 18:21:39 compute-0 podman[333250]: 2025-12-03 18:21:39.581298315 +0000 UTC m=+1.312288335 container died 5bdcafb70a55f31991cd5c47af23ff4a4ee6304e34be71bd92ff0d8ce08c944d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec  3 18:21:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e483d2392f1d71c4095ef9ca4c41cf2aacf690c3458d9945fcef57644886086-merged.mount: Deactivated successfully.
Dec  3 18:21:39 compute-0 podman[333250]: 2025-12-03 18:21:39.64708727 +0000 UTC m=+1.378077290 container remove 5bdcafb70a55f31991cd5c47af23ff4a4ee6304e34be71bd92ff0d8ce08c944d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  3 18:21:39 compute-0 systemd[1]: libpod-conmon-5bdcafb70a55f31991cd5c47af23ff4a4ee6304e34be71bd92ff0d8ce08c944d.scope: Deactivated successfully.
Dec  3 18:21:39 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:21:39 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:21:39 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:21:39 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:21:39 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 88e57051-734b-4328-b2b3-04518659d2a3 does not exist
Dec  3 18:21:39 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev cf5a304f-33ac-4d1d-aec3-37ade30601f4 does not exist
Dec  3 18:21:39 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
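The recurring _set_new_cache_sizes line is the mon's memory autotuner redistributing its cache target between the incremental-osdmap, full-osdmap, and rocksdb kv caches. The three allocations nearly sum to cache_size; the small remainder looks like rounding to whole MiB, though the log itself does not say so:

    inc_alloc, full_alloc, kv_alloc = 348127232, 348127232, 322961408
    cache_size = 1020054731
    print(inc_alloc + full_alloc + kv_alloc)  # 1019215872, ~0.1% under cache_size
    print(inc_alloc // 2**20, full_alloc // 2**20, kv_alloc // 2**20)  # 332 332 308 MiB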
Dec  3 18:21:40 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v665: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:21:40 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:21:40 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:21:42 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v666: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:21:42 compute-0 python3.9[333617]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 18:21:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:21:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:21:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:21:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:21:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:21:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:21:43 compute-0 python3.9[333770]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 18:21:44 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v667: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:21:44 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:21:45 compute-0 python3.9[333923]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 18:21:46 compute-0 python3.9[334076]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 18:21:46 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v668: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:21:47 compute-0 python3.9[334229]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 18:21:47 compute-0 podman[334354]: 2025-12-03 18:21:47.965501731 +0000 UTC m=+0.120637727 container health_status 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  3 18:21:47 compute-0 podman[334347]: 2025-12-03 18:21:47.970735887 +0000 UTC m=+0.125097634 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3)
Dec  3 18:21:48 compute-0 python3.9[334424]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 18:21:48 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v669: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:21:49 compute-0 python3.9[334578]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 18:21:49 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:21:50 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v670: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:21:50 compute-0 python3.9[334732]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
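Interleaved with the Ceph housekeeping, the play has been stopping and disabling every legacy tripleo_nova_* unit via ansible.builtin.systemd_service (enabled=False, state=stopped). A minimal Python sketch of the same step, using the eight unit names seen in the invocations above:

    import subprocess

    UNITS = [f"tripleo_nova_{s}.service" for s in (
        "compute", "migration_target", "api_cron", "api",
        "conductor", "metadata", "scheduler", "vnc_proxy")]

    for unit in UNITS:
        # state=stopped / enabled=False from the module invocations above
        subprocess.run(["systemctl", "stop", unit], check=False)
        subprocess.run(["systemctl", "disable", unit], check=False)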
Dec  3 18:21:51 compute-0 python3.9[334885]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:21:52 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v671: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:21:52 compute-0 python3.9[335037]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:21:53 compute-0 python3.9[335189]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:21:54 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v672: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:21:54 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:21:55 compute-0 python3.9[335341]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:21:55 compute-0 podman[335441]: 2025-12-03 18:21:55.947595815 +0000 UTC m=+0.106106444 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent)
Dec  3 18:21:56 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v673: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:21:57 compute-0 python3.9[335511]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:21:58 compute-0 python3.9[335663]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:21:58 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v674: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:21:59 compute-0 python3.9[335815]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:21:59 compute-0 podman[158200]: time="2025-12-03T18:21:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:21:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:21:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 38319 "" "Go-http-client/1.1"
Dec  3 18:21:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:21:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7703 "" "Go-http-client/1.1"
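The two GET lines are the podman system service's access log for the prometheus-podman-exporter scraping /libpod/containers/json and /libpod/containers/stats. The same endpoint can be queried by hand over the API socket (the socket path comes from the exporter's volume list above); a sketch shelling out to curl:

    import subprocess

    # "http://d" is a dummy host: with --unix-socket, curl sends the request
    # over /run/podman/podman.sock instead of TCP.
    subprocess.run([
        "curl", "--unix-socket", "/run/podman/podman.sock",
        "http://d/v4.9.3/libpod/containers/json?all=true",
    ], check=True)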
Dec  3 18:21:59 compute-0 python3.9[335967]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:21:59 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:22:00 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v675: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:22:00 compute-0 python3.9[336119]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:22:01 compute-0 openstack_network_exporter[160319]: ERROR   18:22:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:22:01 compute-0 openstack_network_exporter[160319]: ERROR   18:22:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:22:01 compute-0 openstack_network_exporter[160319]: ERROR   18:22:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:22:01 compute-0 openstack_network_exporter[160319]: ERROR   18:22:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:22:01 compute-0 openstack_network_exporter[160319]: ERROR   18:22:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:22:01 compute-0 python3.9[336271]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:22:02 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v676: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:22:02 compute-0 python3.9[336423]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:22:03 compute-0 python3.9[336575]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:22:04 compute-0 python3.9[336727]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:22:04 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v677: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:22:04 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:22:05 compute-0 python3.9[336879]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:22:05 compute-0 python3.9[337031]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:22:06 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v678: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:22:06 compute-0 podman[337163]: 2025-12-03 18:22:06.566917141 +0000 UTC m=+0.099749843 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ceilometer_agent_compute)
Dec  3 18:22:06 compute-0 podman[337155]: 2025-12-03 18:22:06.577997311 +0000 UTC m=+0.123602515 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec  3 18:22:06 compute-0 podman[337156]: 2025-12-03 18:22:06.587108493 +0000 UTC m=+0.115708562 container health_status 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.openshift.tags=minimal rhel9, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, distribution-scope=public, maintainer=Red Hat, Inc., release=1755695350, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, vendor=Red Hat, Inc., config_id=edpm, architecture=x86_64, io.buildah.version=1.33.7)
Dec  3 18:22:06 compute-0 podman[337164]: 2025-12-03 18:22:06.623144382 +0000 UTC m=+0.129319785 container health_status f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  3 18:22:06 compute-0 podman[337165]: 2025-12-03 18:22:06.626175625 +0000 UTC m=+0.140164768 container health_status ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., architecture=x86_64, release=1214.1726694543, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, version=9.4, managed_by=edpm_ansible, container_name=kepler, io.openshift.expose-services=, vcs-type=git)
Dec  3 18:22:06 compute-0 podman[337157]: 2025-12-03 18:22:06.631125527 +0000 UTC m=+0.147197991 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
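The burst of health_status=healthy events at 18:22:06 is the per-container healthcheck timers all firing within the same second. The checks can also be triggered manually; a sketch over a few of the container names seen above:

    import subprocess

    for name in ("multipathd", "ovn_controller", "node_exporter", "kepler"):
        # `podman healthcheck run` exits 0 when the configured check passes
        rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
        print(name, "healthy" if rc == 0 else "unhealthy")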
Dec  3 18:22:06 compute-0 python3.9[337272]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
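With the units stopped, the ansible.builtin.file tasks (state=absent) delete the leftover unit files, first from /usr/lib/systemd/system and then from /etc/systemd/system. The same cleanup as a sketch:

    import os

    UNITS = [f"tripleo_nova_{s}.service" for s in (
        "compute", "migration_target", "api_cron", "api",
        "conductor", "metadata", "scheduler", "vnc_proxy")]

    for unit_dir in ("/usr/lib/systemd/system", "/etc/systemd/system"):
        for unit in UNITS:
            path = os.path.join(unit_dir, unit)
            if os.path.lexists(path):
                os.remove(path)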
Dec  3 18:22:08 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v679: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:22:08 compute-0 python3.9[337455]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
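The certmonger task is a multi-line shell snippet flattened into one journal record; #012 is the syslog escape for an embedded newline (octal 012 = \n). Decoding the _raw_params recovers the script the task ran:

    raw = ("if systemctl is-active certmonger.service; then#012"
           "  systemctl disable --now certmonger.service#012"
           "  test -f /etc/systemd/system/certmonger.service"
           " || systemctl mask certmonger.service#012fi#012")
    print(raw.replace("#012", "\n"))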
Dec  3 18:22:09 compute-0 python3.9[337607]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  3 18:22:09 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:22:09 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Dec  3 18:22:09 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:22:09.982980) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 18:22:09 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Dec  3 18:22:09 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764786129983068, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1267, "num_deletes": 506, "total_data_size": 1468062, "memory_usage": 1499080, "flush_reason": "Manual Compaction"}
Dec  3 18:22:09 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Dec  3 18:22:09 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764786129999264, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 1453897, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13549, "largest_seqno": 14815, "table_properties": {"data_size": 1448274, "index_size": 2507, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 14207, "raw_average_key_size": 17, "raw_value_size": 1435135, "raw_average_value_size": 1807, "num_data_blocks": 115, "num_entries": 794, "num_filter_entries": 794, "num_deletions": 506, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764786030, "oldest_key_time": 1764786030, "file_creation_time": 1764786129, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a1ac3b74-8599-4a51-8b4c-6fd35a134427", "db_session_id": "TYOLZSJOOVNJYKF8Y1CE", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Dec  3 18:22:10 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 16351 microseconds, and 10263 cpu microseconds.
Dec  3 18:22:10 compute-0 ceph-mon[192802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 18:22:10 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:22:09.999327) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 1453897 bytes OK
Dec  3 18:22:10 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:22:09.999348) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Dec  3 18:22:10 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:22:10.001158) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Dec  3 18:22:10 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:22:10.001173) EVENT_LOG_v1 {"time_micros": 1764786130001167, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 18:22:10 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:22:10.001190) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 18:22:10 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 1461279, prev total WAL file size 1461279, number of live WAL files 2.
Dec  3 18:22:10 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 18:22:10 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:22:10.002344) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323532' seq:0, type:0; will stop at (end)
Dec  3 18:22:10 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 18:22:10 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(1419KB)], [32(7401KB)]
Dec  3 18:22:10 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764786130002508, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 9032708, "oldest_snapshot_seqno": -1}
Dec  3 18:22:10 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 3780 keys, 7128314 bytes, temperature: kUnknown
Dec  3 18:22:10 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764786130070125, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 7128314, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7101201, "index_size": 16571, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9477, "raw_key_size": 92704, "raw_average_key_size": 24, "raw_value_size": 7030839, "raw_average_value_size": 1860, "num_data_blocks": 703, "num_entries": 3780, "num_filter_entries": 3780, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764784942, "oldest_key_time": 0, "file_creation_time": 1764786130, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a1ac3b74-8599-4a51-8b4c-6fd35a134427", "db_session_id": "TYOLZSJOOVNJYKF8Y1CE", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec  3 18:22:10 compute-0 ceph-mon[192802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 18:22:10 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:22:10.070438) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 7128314 bytes
Dec  3 18:22:10 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:22:10.073062) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 133.4 rd, 105.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 7.2 +0.0 blob) out(6.8 +0.0 blob), read-write-amplify(11.1) write-amplify(4.9) OK, records in: 4805, records dropped: 1025 output_compression: NoCompression
Dec  3 18:22:10 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:22:10.073093) EVENT_LOG_v1 {"time_micros": 1764786130073079, "job": 14, "event": "compaction_finished", "compaction_time_micros": 67717, "compaction_time_cpu_micros": 42917, "output_level": 6, "num_output_files": 1, "total_output_size": 7128314, "num_input_records": 4805, "num_output_records": 3780, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 18:22:10 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 18:22:10 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764786130073786, "job": 14, "event": "table_file_deletion", "file_number": 34}
Dec  3 18:22:10 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 18:22:10 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764786130076352, "job": 14, "event": "table_file_deletion", "file_number": 32}
Dec  3 18:22:10 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:22:10.001937) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:22:10 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:22:10.076810) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:22:10 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:22:10.076818) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:22:10 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:22:10.076822) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:22:10 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:22:10.076825) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:22:10 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:22:10.076828) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
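Jobs 13 and 14 are the mon's periodic rocksdb maintenance: flush the memtable to an L0 table, then manually compact it into the single L6 file. The amplification figures in the compaction summary can be reproduced from the byte counts in the EVENT_LOG lines:

    l0_in = 1453897        # flushed L0 table #34
    total_in = 9032708     # "input_data_size": L0 + L6 inputs
    out = 7128314          # new L6 table #35
    print(round(out / l0_in, 1))               # 4.9  -> write-amplify(4.9)
    print(round((total_in + out) / l0_in, 1))  # 11.1 -> read-write-amplify(11.1)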
Dec  3 18:22:10 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v680: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:22:10 compute-0 python3.9[337759]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  3 18:22:11 compute-0 systemd[1]: Reloading.
Dec  3 18:22:11 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 18:22:11 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 18:22:12 compute-0 python3.9[337948]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:22:12 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v681: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:22:13 compute-0 python3.9[338101]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:22:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_18:22:13
Dec  3 18:22:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 18:22:13 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 18:22:13 compute-0 ceph-mgr[193091]: [balancer INFO root] pools ['default.rgw.meta', 'vms', 'cephfs.cephfs.meta', 'default.rgw.control', 'backups', 'default.rgw.log', 'volumes', '.rgw.root', 'cephfs.cephfs.data', '.mgr', 'images']
Dec  3 18:22:13 compute-0 ceph-mgr[193091]: [balancer INFO root] prepared 0/10 changes
Dec  3 18:22:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:22:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:22:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:22:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:22:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:22:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:22:13 compute-0 python3.9[338254]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:22:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 18:22:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:22:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 18:22:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:22:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:22:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:22:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:22:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:22:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:22:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:22:14 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v682: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:22:14 compute-0 python3.9[338407]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:22:14 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:22:15 compute-0 python3.9[338560]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:22:16 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v683: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:22:16 compute-0 python3.9[338713]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:22:17 compute-0 python3.9[338866]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:22:18 compute-0 podman[339019]: 2025-12-03 18:22:18.134160614 +0000 UTC m=+0.073992384 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  3 18:22:18 compute-0 podman[339020]: 2025-12-03 18:22:18.149427247 +0000 UTC m=+0.075756548 container health_status 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 18:22:18 compute-0 python3.9[339026]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
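The Reloading at 18:22:11 picks up the deleted unit files, and the reset-failed commands clear any lingering failed state so the removed services drop out of systemctl --failed. As one sketch, reusing the same eight unit names:

    import subprocess

    UNITS = [f"tripleo_nova_{s}.service" for s in (
        "compute", "migration_target", "api_cron", "api",
        "conductor", "metadata", "scheduler", "vnc_proxy")]

    subprocess.run(["systemctl", "daemon-reload"], check=True)
    for unit in UNITS:
        # tolerate units systemd no longer knows about
        subprocess.run(["systemctl", "reset-failed", unit], check=False)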
Dec  3 18:22:18 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v684: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:22:19 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:22:20 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v685: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:22:20 compute-0 python3.9[339217]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:22:21 compute-0 python3.9[339369]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:22:22 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v686: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:22:23 compute-0 python3.9[339521]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:22:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:22:23.309 286999 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:22:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:22:23.309 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:22:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:22:23.309 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:22:23 compute-0 python3.9[339673]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:22:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 18:22:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:22:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 18:22:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:22:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:22:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:22:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:22:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:22:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:22:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:22:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:22:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:22:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 18:22:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:22:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:22:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:22:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 18:22:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:22:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 18:22:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:22:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:22:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:22:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
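The pg_autoscaler pass above applies one formula per pool: pg target = capacity ratio x bias x the cluster-wide PG budget, then quantizes the result to a power of two (never below the pool's minimum, which is why tiny targets still show 32). The logged numbers are consistent with a budget of 300 PGs, i.e. the default mon_target_pg_per_osd of 100 times the 3 OSDs in this cluster; that budget is an assumption inferred from the arithmetic, not stated in the log. A minimal sketch checking two of the lines:

    # pg_autoscaler arithmetic as observed in the log above.
    # Assumption: PG budget = mon_target_pg_per_osd (100) * 3 OSDs = 300.
    PG_BUDGET = 300

    def pg_target(capacity_ratio: float, bias: float) -> float:
        return capacity_ratio * bias * PG_BUDGET

    # Pool '.mgr': "using 7.185749983720779e-06 of space, bias 1.0"
    print(pg_target(7.185749983720779e-06, 1.0))  # -> 0.0021557249951162337
    # Pool 'cephfs.cephfs.meta': "using 5.087256625643029e-07 ... bias 4.0"
    print(pg_target(5.087256625643029e-07, 4.0))  # -> 0.0006104707950771635

Both values match the "pg target" figures logged for those pools.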
Dec  3 18:22:24 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v687: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:22:24 compute-0 python3.9[339825]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:22:24 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:22:25 compute-0 python3.9[339977]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:22:26 compute-0 podman[340101]: 2025-12-03 18:22:26.159658585 +0000 UTC m=+0.071285329 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec  3 18:22:26 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 18:22:26 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 3300 writes, 14K keys, 3300 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 3300 writes, 3300 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1273 writes, 5781 keys, 1273 commit groups, 1.0 writes per commit group, ingest: 8.44 MB, 0.01 MB/s#012Interval WAL: 1274 writes, 1274 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     87.1      0.18              0.08         7    0.025       0      0       0.0       0.0#012  L6      1/0    6.80 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.7     82.4     68.1      0.60              0.18         6    0.099     24K   3201       0.0       0.0#012 Sum      1/0    6.80 MB   0.0      0.0     0.0      0.0       0.1      0.0       0.0   3.7     63.6     72.4      0.77              0.25        13    0.059     24K   3201       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.8     96.1     97.1      0.36              0.17         8    0.045     17K   2468       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0     82.4     68.1      0.60              0.18         6    0.099     24K   3201       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     90.7      0.17              0.08         6    0.028       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      6.8      0.01              0.00         1    0.008       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.015, interval 0.007#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.05 GB write, 0.05 MB/s write, 0.05 GB read, 0.04 MB/s read, 0.8 seconds#012Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.06 MB/s read, 0.4 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55911062f1f0#2 capacity: 308.00 MB usage: 1.50 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 6.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(100,1.28 MB,0.415312%) FilterBlock(14,74.92 KB,0.0237552%) IndexBlock(14,148.80 KB,0.0471784%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
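The RocksDB stats dump above is a single journal event whose embedded newlines rsyslog escaped as #NNN octal sequences (#012 = newline, #011 = tab, #033 = ESC). A minimal sketch to restore the original multi-line layout, assuming the standard rsyslog escaping and a hypothetical input file named "messages":

    import re

    def unescape_rsyslog(line: str) -> str:
        # Undo rsyslog's control-character escaping: '#NNN' (octal) -> chr(NNN).
        return re.sub(r"#([0-7]{3})", lambda m: chr(int(m.group(1), 8)), line)

    with open("messages") as f:  # hypothetical input file
        for line in f:
            if "DUMPING STATS" in line or "** DB Stats **" in line:
                print(unescape_rsyslog(line.rstrip("\n")))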
Dec  3 18:22:26 compute-0 python3.9[340148]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:22:26 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v688: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:22:27 compute-0 python3.9[340300]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:22:28 compute-0 python3.9[340452]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:22:28 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v689: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:22:28 compute-0 python3.9[340604]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
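Each python3.9[...] "Invoked with" entry above is ansible's per-task module invocation record, with the full argument set flattened onto one line. To audit which directories the run touched and with what ownership and mode, those lines can be parsed mechanically; a rough sketch (the regex, the "messages" file name, and the naive key=value split that ignores nested values are all assumptions):

    import re

    INVOKED = re.compile(
        r"python3\.9\[\d+\]: ansible-(?P<module>[\w.]+) Invoked with (?P<args>.*)"
    )

    def file_tasks(log_path: str):
        # Yield (path, mode, owner) for every ansible.builtin.file invocation.
        with open(log_path) as f:
            for line in f:
                m = INVOKED.search(line)
                if not m or not m.group("module").endswith("builtin.file"):
                    continue
                args = dict(re.findall(r"(\w+)=(\S+)", m.group("args")))
                yield args.get("path"), args.get("mode"), args.get("owner")

    for task in file_tasks("messages"):  # hypothetical input file
        print(task)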
Dec  3 18:22:29 compute-0 podman[158200]: time="2025-12-03T18:22:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:22:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:22:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 38319 "" "Go-http-client/1.1"
Dec  3 18:22:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:22:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7699 "" "Go-http-client/1.1"
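The two GET requests above are the prometheus-podman-exporter polling podman's libpod REST API over the socket it has mounted (unix:///run/podman/podman.sock, per the podman_exporter config logged earlier). The same endpoint can be queried with nothing but the standard library; a sketch, assuming the socket path and the /v4.9.3 API prefix seen in the log:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client.HTTPConnection carried over an AF_UNIX socket."""
        def __init__(self, socket_path: str):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.socket_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c["Names"], c["State"])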
Dec  3 18:22:29 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:22:30 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v690: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:22:31 compute-0 openstack_network_exporter[160319]: ERROR   18:22:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:22:31 compute-0 openstack_network_exporter[160319]: ERROR   18:22:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:22:31 compute-0 openstack_network_exporter[160319]: ERROR   18:22:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:22:31 compute-0 openstack_network_exporter[160319]: ERROR   18:22:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:22:31 compute-0 openstack_network_exporter[160319]: ERROR   18:22:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
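The openstack_network_exporter errors above share one root cause: ovs-appctl-style calls are dispatched through per-daemon control sockets (*.ctl files in the OVS and OVN run directories), and none were found where the exporter looks, so ovsdb-server and ovn-northd cannot be reached and the dpif-netdev/pmd-* probes fail for lack of a datapath. A quick host-side check, assuming the run directories this exporter mounts according to its config logged later in this section:

    import glob

    # Control sockets the appctl-style calls depend on; no *.ctl file means
    # the corresponding daemon cannot be reached.
    for pattern in ("/var/run/openvswitch/*.ctl", "/var/lib/openvswitch/ovn/*.ctl"):
        hits = glob.glob(pattern)
        print(pattern, "->", hits if hits else "no control sockets found")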
Dec  3 18:22:32 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v691: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:22:34 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v692: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:22:34 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:22:36 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v693: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:22:36 compute-0 python3.9[340756]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Dec  3 18:22:36 compute-0 podman[340805]: 2025-12-03 18:22:36.932649269 +0000 UTC m=+0.097216852 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Dec  3 18:22:36 compute-0 podman[340810]: 2025-12-03 18:22:36.949655244 +0000 UTC m=+0.100561314 container health_status f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  3 18:22:36 compute-0 podman[340823]: 2025-12-03 18:22:36.951628151 +0000 UTC m=+0.099688122 container health_status ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, release-0.7.12=, version=9.4, config_id=edpm, io.buildah.version=1.29.0, io.openshift.expose-services=, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, io.openshift.tags=base rhel9, container_name=kepler, architecture=x86_64, distribution-scope=public, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec  3 18:22:36 compute-0 podman[340806]: 2025-12-03 18:22:36.960157669 +0000 UTC m=+0.124720532 container health_status 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, release=1755695350, version=9.6, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec  3 18:22:36 compute-0 podman[340808]: 2025-12-03 18:22:36.981936471 +0000 UTC m=+0.136808798 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  3 18:22:37 compute-0 podman[340807]: 2025-12-03 18:22:37.017243851 +0000 UTC m=+0.176633348 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
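The burst of health_status=healthy events above is podman running each container's configured healthcheck (the 'test' command under 'healthcheck' in the logged config_data) on its timer and emitting the result as an event. The same checks can be triggered on demand; a sketch using container names taken from the log:

    import subprocess

    for name in ("ovn_controller", "ovn_metadata_agent", "node_exporter"):
        # 'podman healthcheck run' exits 0 when the check passes.
        rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
        print(name, "healthy" if rc == 0 else f"unhealthy (rc={rc})")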
Dec  3 18:22:37 compute-0 python3.9[341026]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  3 18:22:38 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v694: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:22:39 compute-0 python3.9[341184]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec  3 18:22:39 compute-0 rsyslogd[188590]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  3 18:22:39 compute-0 rsyslogd[188590]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  3 18:22:39 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:22:40 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v695: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:22:40 compute-0 systemd-logind[784]: New session 57 of user zuul.
Dec  3 18:22:40 compute-0 systemd[1]: Started Session 57 of User zuul.
Dec  3 18:22:40 compute-0 systemd[1]: session-57.scope: Deactivated successfully.
Dec  3 18:22:40 compute-0 systemd-logind[784]: Session 57 logged out. Waiting for processes to exit.
Dec  3 18:22:40 compute-0 systemd-logind[784]: Removed session 57.
Dec  3 18:22:40 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:22:40 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:22:40 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 18:22:40 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:22:40 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 18:22:40 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:22:40 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 7e74a9b4-76f9-42a3-9612-f501bb40095b does not exist
Dec  3 18:22:40 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 3256e640-67ac-42d8-be18-259348dfc93b does not exist
Dec  3 18:22:40 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev b1e4d3d0-40ca-4d9d-b8be-d5e8942046f7 does not exist
Dec  3 18:22:40 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 18:22:40 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 18:22:40 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 18:22:40 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:22:40 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:22:40 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:22:41 compute-0 python3.9[341607]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:22:41 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:22:41 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:22:41 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:22:41 compute-0 podman[341638]: 2025-12-03 18:22:41.80505094 +0000 UTC m=+0.055058514 container create ac756ed1cf36c04dfce997c06abf73d30fbe2b0e3b5c5fd60b350305e5218e17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_jennings, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec  3 18:22:41 compute-0 systemd[1]: Started libpod-conmon-ac756ed1cf36c04dfce997c06abf73d30fbe2b0e3b5c5fd60b350305e5218e17.scope.
Dec  3 18:22:41 compute-0 podman[341638]: 2025-12-03 18:22:41.783593777 +0000 UTC m=+0.033601371 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:22:41 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:22:41 compute-0 podman[341638]: 2025-12-03 18:22:41.90922616 +0000 UTC m=+0.159233824 container init ac756ed1cf36c04dfce997c06abf73d30fbe2b0e3b5c5fd60b350305e5218e17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_jennings, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:22:41 compute-0 podman[341638]: 2025-12-03 18:22:41.920772792 +0000 UTC m=+0.170780366 container start ac756ed1cf36c04dfce997c06abf73d30fbe2b0e3b5c5fd60b350305e5218e17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_jennings, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:22:41 compute-0 podman[341638]: 2025-12-03 18:22:41.925744393 +0000 UTC m=+0.175751957 container attach ac756ed1cf36c04dfce997c06abf73d30fbe2b0e3b5c5fd60b350305e5218e17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 18:22:41 compute-0 elastic_jennings[341677]: 167 167
Dec  3 18:22:41 compute-0 systemd[1]: libpod-ac756ed1cf36c04dfce997c06abf73d30fbe2b0e3b5c5fd60b350305e5218e17.scope: Deactivated successfully.
Dec  3 18:22:41 compute-0 podman[341638]: 2025-12-03 18:22:41.928716366 +0000 UTC m=+0.178723940 container died ac756ed1cf36c04dfce997c06abf73d30fbe2b0e3b5c5fd60b350305e5218e17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_jennings, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:22:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-4324f0e10ac0d05e96cf175e19154a41e7b5c9dd74816f73affe59b537813e6a-merged.mount: Deactivated successfully.
Dec  3 18:22:41 compute-0 podman[341638]: 2025-12-03 18:22:41.986144196 +0000 UTC m=+0.236151770 container remove ac756ed1cf36c04dfce997c06abf73d30fbe2b0e3b5c5fd60b350305e5218e17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec  3 18:22:41 compute-0 systemd[1]: libpod-conmon-ac756ed1cf36c04dfce997c06abf73d30fbe2b0e3b5c5fd60b350305e5218e17.scope: Deactivated successfully.
Dec  3 18:22:42 compute-0 podman[341765]: 2025-12-03 18:22:42.190136961 +0000 UTC m=+0.071981047 container create 052ceafe1b2408868d8ffe19872e8e69e3690aafca307bc53f45665af76ba73d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_driscoll, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec  3 18:22:42 compute-0 systemd[1]: Started libpod-conmon-052ceafe1b2408868d8ffe19872e8e69e3690aafca307bc53f45665af76ba73d.scope.
Dec  3 18:22:42 compute-0 podman[341765]: 2025-12-03 18:22:42.169080627 +0000 UTC m=+0.050924743 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:22:42 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:22:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dec07a0b5b06507b60186bfec9f674f04ab4987cb801ace6b61148c00ecce164/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:22:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dec07a0b5b06507b60186bfec9f674f04ab4987cb801ace6b61148c00ecce164/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:22:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dec07a0b5b06507b60186bfec9f674f04ab4987cb801ace6b61148c00ecce164/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:22:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dec07a0b5b06507b60186bfec9f674f04ab4987cb801ace6b61148c00ecce164/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:22:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dec07a0b5b06507b60186bfec9f674f04ab4987cb801ace6b61148c00ecce164/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
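The kernel's "supports timestamps until 2038 (0x7fffffff)" notices flag XFS filesystems formatted without the bigtime feature, whose inode timestamps stop at the classic signed 32-bit time_t limit; the hex value decodes to the exact cutoff:

    from datetime import datetime, timezone

    # 0x7fffffff = 2147483647 seconds after the Unix epoch.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00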
Dec  3 18:22:42 compute-0 podman[341765]: 2025-12-03 18:22:42.292999008 +0000 UTC m=+0.174843114 container init 052ceafe1b2408868d8ffe19872e8e69e3690aafca307bc53f45665af76ba73d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_driscoll, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:22:42 compute-0 podman[341765]: 2025-12-03 18:22:42.309203944 +0000 UTC m=+0.191048030 container start 052ceafe1b2408868d8ffe19872e8e69e3690aafca307bc53f45665af76ba73d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:22:42 compute-0 podman[341765]: 2025-12-03 18:22:42.316067211 +0000 UTC m=+0.197911307 container attach 052ceafe1b2408868d8ffe19872e8e69e3690aafca307bc53f45665af76ba73d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:22:42 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v696: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:22:42 compute-0 python3.9[341810]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764786161.1238132-1249-32732299088010/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
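The copy task above records the SHA-1 of the content it deployed (checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e). Verifying that the file on disk still matches is a one-liner; a sketch using the path and checksum from the log:

    import hashlib

    path = "/var/lib/openstack/config/nova/config.json"
    digest = hashlib.sha1(open(path, "rb").read()).hexdigest()
    # Matches checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e if undisturbed.
    print(digest)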
Dec  3 18:22:43 compute-0 python3.9[341968]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:22:43 compute-0 nostalgic_driscoll[341814]: --> passed data devices: 0 physical, 3 LVM
Dec  3 18:22:43 compute-0 nostalgic_driscoll[341814]: --> relative data size: 1.0
Dec  3 18:22:43 compute-0 nostalgic_driscoll[341814]: --> All data devices are unavailable
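The randomly named elastic_jennings/nostalgic_driscoll containers are cephadm's short-lived helper runs; the nostalgic_driscoll output ("passed data devices: 0 physical, 3 LVM" then "All data devices are unavailable") is ceph-volume's batch-mode device filtering reporting that every LVM-backed candidate was rejected (already prepared or otherwise unusable), so there is nothing new to deploy. Per-device reasons can be pulled from ceph-volume's inventory; a sketch, with the cephadm-shell wrapping being an assumption about this deployment:

    import json
    import subprocess

    # 'ceph-volume inventory --format json' reports an "available" flag and
    # per-device reject reasons.
    out = subprocess.run(
        ["cephadm", "shell", "--", "ceph-volume", "inventory", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    for dev in json.loads(out):
        print(dev["path"], dev["available"], dev.get("rejected_reasons", []))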
Dec  3 18:22:43 compute-0 systemd[1]: libpod-052ceafe1b2408868d8ffe19872e8e69e3690aafca307bc53f45665af76ba73d.scope: Deactivated successfully.
Dec  3 18:22:43 compute-0 systemd[1]: libpod-052ceafe1b2408868d8ffe19872e8e69e3690aafca307bc53f45665af76ba73d.scope: Consumed 1.082s CPU time.
Dec  3 18:22:43 compute-0 podman[341765]: 2025-12-03 18:22:43.455661039 +0000 UTC m=+1.337505125 container died 052ceafe1b2408868d8ffe19872e8e69e3690aafca307bc53f45665af76ba73d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_driscoll, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  3 18:22:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:22:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:22:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:22:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:22:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:22:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:22:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-dec07a0b5b06507b60186bfec9f674f04ab4987cb801ace6b61148c00ecce164-merged.mount: Deactivated successfully.
Dec  3 18:22:44 compute-0 podman[341765]: 2025-12-03 18:22:44.162542376 +0000 UTC m=+2.044386462 container remove 052ceafe1b2408868d8ffe19872e8e69e3690aafca307bc53f45665af76ba73d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_driscoll, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:22:44 compute-0 systemd[1]: libpod-conmon-052ceafe1b2408868d8ffe19872e8e69e3690aafca307bc53f45665af76ba73d.scope: Deactivated successfully.
Dec  3 18:22:44 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v697: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:22:44 compute-0 python3.9[342104]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:22:44 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:22:44 compute-0 podman[342321]: 2025-12-03 18:22:44.988283682 +0000 UTC m=+0.054034209 container create 366e46a91d999823bc783a3c113496c5c9f482e46907bedc18a0bf55e3ae7d99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:22:45 compute-0 podman[342321]: 2025-12-03 18:22:44.968359146 +0000 UTC m=+0.034109693 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:22:45 compute-0 python3.9[342384]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:22:45 compute-0 systemd[1]: Started libpod-conmon-366e46a91d999823bc783a3c113496c5c9f482e46907bedc18a0bf55e3ae7d99.scope.
Dec  3 18:22:45 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:22:45 compute-0 podman[342321]: 2025-12-03 18:22:45.522841997 +0000 UTC m=+0.588592544 container init 366e46a91d999823bc783a3c113496c5c9f482e46907bedc18a0bf55e3ae7d99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_albattani, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:22:45 compute-0 podman[342321]: 2025-12-03 18:22:45.53158802 +0000 UTC m=+0.597338547 container start 366e46a91d999823bc783a3c113496c5c9f482e46907bedc18a0bf55e3ae7d99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  3 18:22:45 compute-0 heuristic_albattani[342431]: 167 167
Dec  3 18:22:45 compute-0 systemd[1]: libpod-366e46a91d999823bc783a3c113496c5c9f482e46907bedc18a0bf55e3ae7d99.scope: Deactivated successfully.
Dec  3 18:22:45 compute-0 podman[342321]: 2025-12-03 18:22:45.767928893 +0000 UTC m=+0.833679440 container attach 366e46a91d999823bc783a3c113496c5c9f482e46907bedc18a0bf55e3ae7d99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_albattani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:22:45 compute-0 podman[342321]: 2025-12-03 18:22:45.768604279 +0000 UTC m=+0.834354836 container died 366e46a91d999823bc783a3c113496c5c9f482e46907bedc18a0bf55e3ae7d99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:22:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-e28944f02789f80831669085f7b4399d527536441601e58bdc705349d47c82ad-merged.mount: Deactivated successfully.
Dec  3 18:22:45 compute-0 python3.9[342522]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764786164.712383-1249-16376866343465/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:22:45 compute-0 podman[342321]: 2025-12-03 18:22:45.970022011 +0000 UTC m=+1.035772538 container remove 366e46a91d999823bc783a3c113496c5c9f482e46907bedc18a0bf55e3ae7d99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_albattani, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  3 18:22:46 compute-0 systemd[1]: libpod-conmon-366e46a91d999823bc783a3c113496c5c9f482e46907bedc18a0bf55e3ae7d99.scope: Deactivated successfully.
Dec  3 18:22:46 compute-0 podman[342531]: 2025-12-03 18:22:46.130675869 +0000 UTC m=+0.031378966 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:22:46 compute-0 podman[342531]: 2025-12-03 18:22:46.34276865 +0000 UTC m=+0.243471757 container create b9460fecf402ac07a779eae3f0b7ce02c9ed5a2fc9037262881cdef136aeb406 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec  3 18:22:46 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v698: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:22:46 compute-0 systemd[1]: Started libpod-conmon-b9460fecf402ac07a779eae3f0b7ce02c9ed5a2fc9037262881cdef136aeb406.scope.
Dec  3 18:22:46 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:22:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71bffe9013fff417b16967866cb11b0f22414f1b9bce1e55bf2e3ee731cff07c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:22:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71bffe9013fff417b16967866cb11b0f22414f1b9bce1e55bf2e3ee731cff07c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:22:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71bffe9013fff417b16967866cb11b0f22414f1b9bce1e55bf2e3ee731cff07c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:22:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71bffe9013fff417b16967866cb11b0f22414f1b9bce1e55bf2e3ee731cff07c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:22:46 compute-0 podman[342531]: 2025-12-03 18:22:46.514226122 +0000 UTC m=+0.414929199 container init b9460fecf402ac07a779eae3f0b7ce02c9ed5a2fc9037262881cdef136aeb406 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  3 18:22:46 compute-0 podman[342531]: 2025-12-03 18:22:46.524530222 +0000 UTC m=+0.425233299 container start b9460fecf402ac07a779eae3f0b7ce02c9ed5a2fc9037262881cdef136aeb406 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec  3 18:22:46 compute-0 podman[342531]: 2025-12-03 18:22:46.529693839 +0000 UTC m=+0.430396926 container attach b9460fecf402ac07a779eae3f0b7ce02c9ed5a2fc9037262881cdef136aeb406 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:22:47 compute-0 python3.9[342700]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:22:47 compute-0 nice_keldysh[342546]: {
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:    "0": [
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:        {
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:            "devices": [
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:                "/dev/loop3"
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:            ],
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:            "lv_name": "ceph_lv0",
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:            "lv_size": "21470642176",
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=973fbbc8-5aff-4a53-bee8-42e5a6788dd6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:            "lv_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:            "name": "ceph_lv0",
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:            "tags": {
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:                "ceph.block_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:                "ceph.cluster_name": "ceph",
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:                "ceph.crush_device_class": "",
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:                "ceph.encrypted": "0",
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:                "ceph.osd_fsid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:                "ceph.osd_id": "0",
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:                "ceph.type": "block",
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:                "ceph.vdo": "0"
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:            },
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:            "type": "block",
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:            "vg_name": "ceph_vg0"
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:        }
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:    ],
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:    "1": [
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:        {
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:            "devices": [
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:                "/dev/loop4"
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:            ],
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:            "lv_name": "ceph_lv1",
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:            "lv_size": "21470642176",
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1e2b0083-5293-47cb-a3d1-bc27cedc4ede,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:            "lv_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:            "name": "ceph_lv1",
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:            "tags": {
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:                "ceph.block_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:                "ceph.cluster_name": "ceph",
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:                "ceph.crush_device_class": "",
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:                "ceph.encrypted": "0",
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:                "ceph.osd_fsid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:                "ceph.osd_id": "1",
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:                "ceph.type": "block",
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:                "ceph.vdo": "0"
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:            },
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:            "type": "block",
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:            "vg_name": "ceph_vg1"
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:        }
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:    ],
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:    "2": [
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:        {
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:            "devices": [
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:                "/dev/loop5"
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:            ],
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:            "lv_name": "ceph_lv2",
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:            "lv_size": "21470642176",
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2abec9de-afba-437e-9a17-384a1dd8cd50,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:            "lv_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:            "name": "ceph_lv2",
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:            "tags": {
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:                "ceph.block_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:                "ceph.cluster_name": "ceph",
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:                "ceph.crush_device_class": "",
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:                "ceph.encrypted": "0",
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:                "ceph.osd_fsid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:                "ceph.osd_id": "2",
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:                "ceph.type": "block",
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:                "ceph.vdo": "0"
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:            },
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:            "type": "block",
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:            "vg_name": "ceph_vg2"
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:        }
Dec  3 18:22:47 compute-0 nice_keldysh[342546]:    ]
Dec  3 18:22:47 compute-0 nice_keldysh[342546]: }
Dec  3 18:22:47 compute-0 systemd[1]: libpod-b9460fecf402ac07a779eae3f0b7ce02c9ed5a2fc9037262881cdef136aeb406.scope: Deactivated successfully.
Dec  3 18:22:47 compute-0 podman[342708]: 2025-12-03 18:22:47.413319515 +0000 UTC m=+0.041590785 container died b9460fecf402ac07a779eae3f0b7ce02c9ed5a2fc9037262881cdef136aeb406 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3)
Dec  3 18:22:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-71bffe9013fff417b16967866cb11b0f22414f1b9bce1e55bf2e3ee731cff07c-merged.mount: Deactivated successfully.
Dec  3 18:22:47 compute-0 podman[342708]: 2025-12-03 18:22:47.503431922 +0000 UTC m=+0.131703112 container remove b9460fecf402ac07a779eae3f0b7ce02c9ed5a2fc9037262881cdef136aeb406 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_keldysh, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  3 18:22:47 compute-0 systemd[1]: libpod-conmon-b9460fecf402ac07a779eae3f0b7ce02c9ed5a2fc9037262881cdef136aeb406.scope: Deactivated successfully.
Dec  3 18:22:48 compute-0 python3.9[342911]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764786166.6585643-1249-1712554704746/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:22:48 compute-0 podman[343028]: 2025-12-03 18:22:48.333033442 +0000 UTC m=+0.063217092 container create 814c63e990781d67d669a6689e7b0bd009c7f4292018f6a42287d283d097744d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_chaplygin, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:22:48 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v699: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:22:48 compute-0 podman[343028]: 2025-12-03 18:22:48.310127524 +0000 UTC m=+0.040311194 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:22:48 compute-0 systemd[1]: Started libpod-conmon-814c63e990781d67d669a6689e7b0bd009c7f4292018f6a42287d283d097744d.scope.
Dec  3 18:22:48 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:22:48 compute-0 podman[343028]: 2025-12-03 18:22:48.450757453 +0000 UTC m=+0.180941113 container init 814c63e990781d67d669a6689e7b0bd009c7f4292018f6a42287d283d097744d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_chaplygin, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  3 18:22:48 compute-0 podman[343028]: 2025-12-03 18:22:48.460962101 +0000 UTC m=+0.191145741 container start 814c63e990781d67d669a6689e7b0bd009c7f4292018f6a42287d283d097744d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_chaplygin, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:22:48 compute-0 podman[343028]: 2025-12-03 18:22:48.465764189 +0000 UTC m=+0.195947849 container attach 814c63e990781d67d669a6689e7b0bd009c7f4292018f6a42287d283d097744d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_chaplygin, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  3 18:22:48 compute-0 stoic_chaplygin[343096]: 167 167
Dec  3 18:22:48 compute-0 systemd[1]: libpod-814c63e990781d67d669a6689e7b0bd009c7f4292018f6a42287d283d097744d.scope: Deactivated successfully.
Dec  3 18:22:48 compute-0 podman[343028]: 2025-12-03 18:22:48.469988352 +0000 UTC m=+0.200171992 container died 814c63e990781d67d669a6689e7b0bd009c7f4292018f6a42287d283d097744d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec  3 18:22:48 compute-0 podman[343070]: 2025-12-03 18:22:48.49203564 +0000 UTC m=+0.105424522 container health_status 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  3 18:22:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ebf4feb5d19a4c744c54faaf13d8f032d14b8f2cb62c8530598b9781232d3b0-merged.mount: Deactivated successfully.
Dec  3 18:22:48 compute-0 podman[343028]: 2025-12-03 18:22:48.51503236 +0000 UTC m=+0.245216000 container remove 814c63e990781d67d669a6689e7b0bd009c7f4292018f6a42287d283d097744d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_chaplygin, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec  3 18:22:48 compute-0 podman[343067]: 2025-12-03 18:22:48.523838505 +0000 UTC m=+0.138688933 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec  3 18:22:48 compute-0 systemd[1]: libpod-conmon-814c63e990781d67d669a6689e7b0bd009c7f4292018f6a42287d283d097744d.scope: Deactivated successfully.
Dec  3 18:22:48 compute-0 podman[343204]: 2025-12-03 18:22:48.71269982 +0000 UTC m=+0.065494158 container create 0aba344bb33274546ad6cbe85267db7902b99c170d94f73a4cc7f68b1be50755 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_hugle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  3 18:22:48 compute-0 systemd[1]: Started libpod-conmon-0aba344bb33274546ad6cbe85267db7902b99c170d94f73a4cc7f68b1be50755.scope.
Dec  3 18:22:48 compute-0 podman[343204]: 2025-12-03 18:22:48.684609335 +0000 UTC m=+0.037403733 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:22:48 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:22:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/296f8ef6b019bccc66d75aaf6891a9724d8cf623a2578d771e5928cafcff32cd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:22:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/296f8ef6b019bccc66d75aaf6891a9724d8cf623a2578d771e5928cafcff32cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:22:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/296f8ef6b019bccc66d75aaf6891a9724d8cf623a2578d771e5928cafcff32cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:22:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/296f8ef6b019bccc66d75aaf6891a9724d8cf623a2578d771e5928cafcff32cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:22:48 compute-0 python3.9[343205]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:22:48 compute-0 podman[343204]: 2025-12-03 18:22:48.978354628 +0000 UTC m=+0.331148986 container init 0aba344bb33274546ad6cbe85267db7902b99c170d94f73a4cc7f68b1be50755 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_hugle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:22:48 compute-0 podman[343204]: 2025-12-03 18:22:48.99770273 +0000 UTC m=+0.350497078 container start 0aba344bb33274546ad6cbe85267db7902b99c170d94f73a4cc7f68b1be50755 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_hugle, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:22:49 compute-0 podman[343204]: 2025-12-03 18:22:49.002597669 +0000 UTC m=+0.355392047 container attach 0aba344bb33274546ad6cbe85267db7902b99c170d94f73a4cc7f68b1be50755 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef)
Dec  3 18:22:49 compute-0 python3.9[343347]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764786168.234592-1249-216723277449038/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:22:49 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:22:50 compute-0 fervent_hugle[343222]: {
Dec  3 18:22:50 compute-0 fervent_hugle[343222]:    "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
Dec  3 18:22:50 compute-0 fervent_hugle[343222]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:22:50 compute-0 fervent_hugle[343222]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 18:22:50 compute-0 fervent_hugle[343222]:        "osd_id": 1,
Dec  3 18:22:50 compute-0 fervent_hugle[343222]:        "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:22:50 compute-0 fervent_hugle[343222]:        "type": "bluestore"
Dec  3 18:22:50 compute-0 fervent_hugle[343222]:    },
Dec  3 18:22:50 compute-0 fervent_hugle[343222]:    "2abec9de-afba-437e-9a17-384a1dd8cd50": {
Dec  3 18:22:50 compute-0 fervent_hugle[343222]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:22:50 compute-0 fervent_hugle[343222]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 18:22:50 compute-0 fervent_hugle[343222]:        "osd_id": 2,
Dec  3 18:22:50 compute-0 fervent_hugle[343222]:        "osd_uuid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:22:50 compute-0 fervent_hugle[343222]:        "type": "bluestore"
Dec  3 18:22:50 compute-0 fervent_hugle[343222]:    },
Dec  3 18:22:50 compute-0 fervent_hugle[343222]:    "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": {
Dec  3 18:22:50 compute-0 fervent_hugle[343222]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:22:50 compute-0 fervent_hugle[343222]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 18:22:50 compute-0 fervent_hugle[343222]:        "osd_id": 0,
Dec  3 18:22:50 compute-0 fervent_hugle[343222]:        "osd_uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:22:50 compute-0 fervent_hugle[343222]:        "type": "bluestore"
Dec  3 18:22:50 compute-0 fervent_hugle[343222]:    }
Dec  3 18:22:50 compute-0 fervent_hugle[343222]: }
Dec  3 18:22:50 compute-0 podman[343204]: 2025-12-03 18:22:50.092587829 +0000 UTC m=+1.445382167 container died 0aba344bb33274546ad6cbe85267db7902b99c170d94f73a4cc7f68b1be50755 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_hugle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec  3 18:22:50 compute-0 systemd[1]: libpod-0aba344bb33274546ad6cbe85267db7902b99c170d94f73a4cc7f68b1be50755.scope: Deactivated successfully.
Dec  3 18:22:50 compute-0 systemd[1]: libpod-0aba344bb33274546ad6cbe85267db7902b99c170d94f73a4cc7f68b1be50755.scope: Consumed 1.095s CPU time.
Dec  3 18:22:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-296f8ef6b019bccc66d75aaf6891a9724d8cf623a2578d771e5928cafcff32cd-merged.mount: Deactivated successfully.
Dec  3 18:22:50 compute-0 podman[343204]: 2025-12-03 18:22:50.343572849 +0000 UTC m=+1.696367197 container remove 0aba344bb33274546ad6cbe85267db7902b99c170d94f73a4cc7f68b1be50755 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_hugle, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:22:50 compute-0 systemd[1]: libpod-conmon-0aba344bb33274546ad6cbe85267db7902b99c170d94f73a4cc7f68b1be50755.scope: Deactivated successfully.
Dec  3 18:22:50 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v700: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:22:50 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:22:50 compute-0 python3.9[343536]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:22:50 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:22:50 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:22:50 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:22:50 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 68c178cc-d695-493d-8a33-293143ea6bd5 does not exist
Dec  3 18:22:50 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 9a5490c5-9f7e-4076-932c-fcb6de2f5fb5 does not exist
Dec  3 18:22:51 compute-0 python3.9[343707]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764786169.8453875-1249-151272693353335/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:22:51 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:22:51 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:22:52 compute-0 python3.9[343859]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:22:52 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v701: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:22:53 compute-0 python3.9[344011]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:22:53 compute-0 python3.9[344163]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 18:22:54 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v702: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:22:54 compute-0 python3.9[344315]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:22:54 compute-0 auditd[702]: Audit daemon rotating log files
Dec  3 18:22:54 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:22:55 compute-0 python3.9[344438]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1764786174.1720324-1356-239412773875904/.source _original_basename=.ngxy5t0c follow=False checksum=73e0eb6f163e6dfbea64dd92f1b20b4e2c239c92 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Dec  3 18:22:56 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v703: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:22:56 compute-0 podman[344564]: 2025-12-03 18:22:56.687372011 +0000 UTC m=+0.134126542 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec  3 18:22:56 compute-0 python3.9[344602]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 18:22:58 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v704: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:22:58 compute-0 python3.9[344760]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:22:59 compute-0 python3.9[344881]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764786177.928336-1382-104542674872997/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=211ffd0bca4b407eb4de45a749ef70116a7806fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:22:59 compute-0 podman[158200]: time="2025-12-03T18:22:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:22:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:22:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 38319 "" "Go-http-client/1.1"
Dec  3 18:22:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:22:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7700 "" "Go-http-client/1.1"
Dec  3 18:22:59 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:23:00 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v705: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:23:00 compute-0 python3.9[345031]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:23:01 compute-0 openstack_network_exporter[160319]: ERROR   18:23:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:23:01 compute-0 openstack_network_exporter[160319]: ERROR   18:23:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:23:01 compute-0 openstack_network_exporter[160319]: ERROR   18:23:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:23:01 compute-0 openstack_network_exporter[160319]: ERROR   18:23:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:23:01 compute-0 openstack_network_exporter[160319]: ERROR   18:23:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:23:01 compute-0 python3.9[345152]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764786180.1801147-1397-181531613200893/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:23:02 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v706: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:23:02 compute-0 python3.9[345304]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.704 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.705 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.705 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.705 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f3f52673fe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.706 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f562c3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.706 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.706 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.707 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f526739b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.707 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.707 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673a40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.707 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52671a60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.707 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673a70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.707 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.707 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.707 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f562d33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.707 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f526733b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.708 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.708 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.708 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f3f5271c620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.708 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.708 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f3f5271c0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.708 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.709 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f3f5271c140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.709 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.709 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f3f52673980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.709 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.709 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f3f5271c1d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.709 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
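[Editor's note] Each "Executing discovery process ... / Skip pollster ..." pair above follows the same shape: the local_instances discovery returns an empty resource list (no guests on this host yet), the empty result is cached for the cycle, and the pollster is skipped. A rough sketch of that control flow, using hypothetical helper names rather than the manager.py source:

    # Hypothetical names; sketches only the skip-on-empty-discovery flow
    # implied by the log, not ceilometer's actual implementation.
    def sample(pollster_name, resource):
        return {"meter": pollster_name, "resource": resource}

    def poll_one(pollster_name, discover, discovery_cache):
        # Discovery results are cached per cycle under the method name.
        if "local_instances" not in discovery_cache:
            discovery_cache["local_instances"] = discover()
        resources = discovery_cache["local_instances"]
        if not resources:
            print(f"Skip pollster {pollster_name}, no resources found this cycle")
            return []
        return [sample(pollster_name, r) for r in resources]

    cache = {}
    poll_one("memory.usage", lambda: [], cache)  # prints the Skip message
    poll_one("cpu", lambda: [], cache)           # reuses the cached [] result

This also explains the discovery cache [{'local_instances': []}] that appears in the later "Registering pollster" lines: discovery runs once and its (empty) result is reused for the rest of the cycle.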
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.708 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f526734d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.710 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f565c04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.709 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f3f52673a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.712 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.712 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f3f52672390>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.713 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.713 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f3f526739e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.713 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.713 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f3f5271c260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.712 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'disk.device.allocation': [], 'network.incoming.bytes': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.713 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.714 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f3f5271c2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.714 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.713 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'disk.device.allocation': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.714 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f3f52671ca0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.715 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.715 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f3f52673470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.715 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.714 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'disk.device.allocation': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.715 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f3f5271c380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.716 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.716 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f3f526734a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.716 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.716 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f3f52671a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.716 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.716 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f3f52673ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.716 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.717 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f3f52673500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.717 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.717 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f3f52673560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.717 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.716 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f526735f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'disk.device.allocation': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'cpu': [], 'network.incoming.bytes.rate': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.717 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'disk.device.allocation': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'cpu': [], 'network.incoming.bytes.rate': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.718 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f3f526735c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.718 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.718 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f3f52673620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.718 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.718 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f526736b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'disk.device.allocation': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'cpu': [], 'network.incoming.bytes.rate': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.719 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'disk.device.allocation': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'cpu': [], 'network.incoming.bytes.rate': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.719 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f3f52673680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.720 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.720 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f3f526736e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.720 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.719 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'disk.device.allocation': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'cpu': [], 'network.incoming.bytes.rate': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.720 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'disk.device.allocation': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'cpu': [], 'network.incoming.bytes.rate': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.721 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f3f52673f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.721 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.721 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f3f52673740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.721 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.721 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'power.state': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'disk.device.allocation': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'cpu': [], 'network.incoming.bytes.rate': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.722 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f3f52673f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.722 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.722 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.722 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.723 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.723 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.723 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.723 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.724 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.724 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.724 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.724 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.724 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.725 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.725 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.725 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.725 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.725 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.726 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.726 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.726 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.726 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.726 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.727 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.727 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.727 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.727 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:23:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:23:03.728 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
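[Editor's note] The "Finished processing pollster" run above shows every meter completing with no samples, consistent with the empty local_instances discovery: there are simply no guests on compute-0 during this cycle. One way to confirm that from the host, assuming the libvirt-python bindings are installed and the agent's usual qemu:///system URI (an assumption, not something the log states):

    import libvirt  # pip install libvirt-python; talks to the local libvirtd

    conn = libvirt.openReadOnly("qemu:///system")
    domains = conn.listAllDomains()  # empty list on a host with no guests
    print(f"{len(domains)} libvirt domain(s) on this host")
    conn.close()

If this prints 0, the skip messages above are expected behavior rather than a polling fault.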
Dec  3 18:23:03 compute-0 python3.9[345456]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  3 18:23:04 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v707: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:23:04 compute-0 python3[345609]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Dec  3 18:23:04 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:23:06 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v708: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:23:07 compute-0 podman[345653]: 2025-12-03 18:23:07.975288509 +0000 UTC m=+0.126147037 container health_status ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, release=1214.1726694543, vendor=Red Hat, Inc., container_name=kepler, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, managed_by=edpm_ansible, com.redhat.component=ubi9-container, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public)
Dec  3 18:23:07 compute-0 podman[345644]: 2025-12-03 18:23:07.977611315 +0000 UTC m=+0.137788940 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi)
Dec  3 18:23:07 compute-0 podman[345645]: 2025-12-03 18:23:07.987164378 +0000 UTC m=+0.150211863 container health_status 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, release=1755695350, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., version=9.6, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.expose-services=, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec  3 18:23:07 compute-0 podman[345648]: 2025-12-03 18:23:07.995633815 +0000 UTC m=+0.148539853 container health_status f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  3 18:23:07 compute-0 podman[345647]: 2025-12-03 18:23:07.998506265 +0000 UTC m=+0.148913052 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  3 18:23:08 compute-0 podman[345646]: 2025-12-03 18:23:08.033688873 +0000 UTC m=+0.193717525 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
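[Editor's note] The podman lines above are periodic healthcheck events; each embeds the container's edpm_ansible-managed config_data alongside health_status=healthy and health_failing_streak=0. To read the current health state of one of these containers outside the log, podman's JSON inspect output can be parsed. The exact key nesting has varied across podman versions, so this sketch probes defensively:

    import json
    import subprocess

    def health_status(container):
        out = subprocess.run(
            ["podman", "inspect", container],
            check=True, capture_output=True, text=True,
        ).stdout
        state = json.loads(out)[0].get("State", {})
        # Some podman versions report State.Health, others State.Healthcheck.
        health = state.get("Health") or state.get("Healthcheck") or {}
        return health.get("Status", "unknown")

    print(health_status("ceilometer_agent_compute"))  # e.g. "healthy"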
Dec  3 18:23:08 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v709: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:23:09 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:23:10 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v710: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:23:12 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v711: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:23:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_18:23:13
Dec  3 18:23:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 18:23:13 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 18:23:13 compute-0 ceph-mgr[193091]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'backups', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', 'default.rgw.control', 'default.rgw.log', 'default.rgw.meta', 'vms', 'volumes']
Dec  3 18:23:13 compute-0 ceph-mgr[193091]: [balancer INFO root] prepared 0/10 changes
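
The balancer pass above is self-describing: mode upmap with a misplaced-PG budget of 0.05, iterating the listed pools and preparing 0 of an allowed 10 changes because the cluster is already balanced. A rough sketch of that gating logic, with hypothetical names (an illustration only, not the ceph-mgr source):

    # Hypothetical sketch of the balancer gate, not Ceph source code.
    MAX_MISPLACED = 0.05    # "max misplaced 0.050000"
    MAX_CHANGES = 10        # the denominator in "prepared 0/10 changes"

    def run_balancer_pass(misplaced_ratio, propose_upmaps):
        # Don't pile more data movement onto a cluster that is still
        # converging from an earlier plan.
        if misplaced_ratio > MAX_MISPLACED:
            return 0
        return len(propose_upmaps(limit=MAX_CHANGES))

    prepared = run_balancer_pass(0.0, lambda limit: [])  # balanced cluster
    print(f"prepared {prepared}/{MAX_CHANGES} changes")
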
Dec  3 18:23:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:23:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:23:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:23:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:23:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:23:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:23:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 18:23:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:23:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 18:23:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:23:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:23:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:23:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:23:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:23:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:23:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:23:14 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v712: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:23:14 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:23:16 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v713: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:23:18 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v714: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:23:19 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:23:20 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v715: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:23:22 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v716: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:23:22 compute-0 podman[345798]: 2025-12-03 18:23:22.939142508 +0000 UTC m=+4.099436395 container health_status 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  3 18:23:23 compute-0 podman[345797]: 2025-12-03 18:23:23.004582014 +0000 UTC m=+4.155959563 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd)
Dec  3 18:23:23 compute-0 podman[345621]: 2025-12-03 18:23:23.249126987 +0000 UTC m=+18.230314439 image pull 5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec  3 18:23:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:23:23.310 286999 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:23:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:23:23.311 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:23:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:23:23.311 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
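
The three DEBUG lines above are oslo.concurrency's standard acquire/acquired/released triple around a named lock: "waited" is the time spent blocking on acquisition, "held" is the critical-section time, and the "inner" in each message is the wrapper installed by the synchronized decorator. A minimal reproduction, assuming oslo.concurrency is installed:

    import logging
    from oslo_concurrency import lockutils

    logging.basicConfig(level=logging.DEBUG)

    # @lockutils.synchronized wraps the function body in the named lock and
    # emits the Acquiring/acquired/released DEBUG triple seen in the log.
    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        pass  # ProcessMonitor does its liveness checks here

    check_child_processes()
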
Dec  3 18:23:23 compute-0 podman[345860]: 2025-12-03 18:23:23.472071384 +0000 UTC m=+0.039932156 image pull 5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec  3 18:23:23 compute-0 podman[345860]: 2025-12-03 18:23:23.764614807 +0000 UTC m=+0.332475489 container create 5b32db9cdda1f2abcd87e9305dff47e4bcdbf2233ec1e9af06d8e9fbdfb96395 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_managed=true, container_name=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  3 18:23:23 compute-0 python3[345609]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
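
Note that edpm_ansible stores the whole config_data structure as a podman label serialized with Python's repr (single quotes, bare True/False), so it is not valid JSON. A small sketch for reading it back from a created container, assuming the nova_compute_init container above exists:

    import ast
    import subprocess

    out = subprocess.check_output(
        ["podman", "inspect", "nova_compute_init", "--format",
         '{{ index .Config.Labels "config_data" }}'],
        text=True,
    )
    # ast.literal_eval accepts the Python-literal syntax that json.loads rejects.
    config_data = ast.literal_eval(out.strip())
    print(config_data["command"], config_data["volumes"][0])
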
Dec  3 18:23:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 18:23:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:23:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 18:23:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:23:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:23:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:23:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:23:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:23:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:23:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:23:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:23:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:23:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 18:23:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:23:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:23:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:23:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 18:23:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:23:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 18:23:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:23:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:23:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:23:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
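
The pg_autoscaler lines follow one rule per pool: pg target = (share of raw space) x bias x (cluster PG budget), quantized to a power of two subject to a per-pool floor. The numbers are consistent with a budget of 300, i.e. 3 OSDs at the default mon_target_pg_per_osd=100 (an inference about this cluster, not something stated in the log): for '.mgr', 7.185749983720779e-06 x 1.0 x 300 = 0.0021557249951162337, exactly the logged pg target. A worked sketch (the real module also applies shrink hysteresis, elided here):

    import math

    def pg_target(usage_ratio, bias, pg_num_min=1,
                  osd_count=3, target_pg_per_osd=100):
        # Pool's share of raw space, times bias, times the PG budget.
        raw = usage_ratio * bias * osd_count * target_pg_per_osd
        # Quantize to a power of two, never below the pool's floor.
        quantized = 2 ** math.ceil(math.log2(raw)) if raw >= 1 else 1
        return max(pg_num_min, quantized)

    print(pg_target(7.185749983720779e-06, bias=1.0))                 # 1  ('.mgr')
    # 'cephfs.cephfs.meta' lands on 16, consistent with a floor of 16
    # for that pool (an assumption; floors are per-pool pg_num_min).
    print(pg_target(5.087256625643029e-07, bias=4.0, pg_num_min=16))  # 16
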
Dec  3 18:23:24 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v717: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:23:24 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:23:25 compute-0 python3.9[346048]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 18:23:26 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v718: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:23:26 compute-0 python3.9[346202]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Dec  3 18:23:26 compute-0 podman[346203]: 2025-12-03 18:23:26.936352428 +0000 UTC m=+0.103252199 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  3 18:23:27 compute-0 python3.9[346372]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  3 18:23:28 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v719: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:23:29 compute-0 podman[158200]: time="2025-12-03T18:23:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:23:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:23:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 40251 "" "Go-http-client/1.1"
Dec  3 18:23:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:23:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7704 "" "Go-http-client/1.1"
Dec  3 18:23:29 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:23:30 compute-0 python3[346524]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Dec  3 18:23:30 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v720: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:23:30 compute-0 podman[346558]: 2025-12-03 18:23:30.321361711 +0000 UTC m=+0.053245999 image pull 5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec  3 18:23:30 compute-0 podman[346558]: 2025-12-03 18:23:30.731578134 +0000 UTC m=+0.463462392 container create 2f6936e5fece2fc435d87492192851f67236800e7e0a2dedc5ef13591d74c1fc (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, container_name=nova_compute)
Dec  3 18:23:30 compute-0 python3[346524]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Dec  3 18:23:31 compute-0 openstack_network_exporter[160319]: ERROR   18:23:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:23:31 compute-0 openstack_network_exporter[160319]: ERROR   18:23:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:23:31 compute-0 openstack_network_exporter[160319]: ERROR   18:23:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:23:31 compute-0 openstack_network_exporter[160319]: ERROR   18:23:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:23:31 compute-0 openstack_network_exporter[160319]: ERROR   18:23:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
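
These exporter ERRORs are expected on a compute-only node: openstack_network_exporter probes ovsdb-server, ovn-northd and the PMD datapath through their ovs-appctl control sockets, and a node that runs neither ovn-northd nor a userspace (netdev) datapath has no matching *.ctl files to find. A quick existence check for the sockets, using the run directories the exporter container mounts:

    import glob

    # appctl-style control sockets embed the daemon pid,
    # e.g. /run/openvswitch/ovs-vswitchd.1234.ctl
    for pattern in ("/run/openvswitch/*.ctl", "/run/ovn/*.ctl"):
        print(pattern, "->", glob.glob(pattern) or "none found")
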
Dec  3 18:23:31 compute-0 python3.9[346747]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 18:23:32 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v721: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:23:32 compute-0 python3.9[346901]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:23:33 compute-0 python3.9[347052]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764786213.0654655-1489-19227159999742/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:23:34 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v722: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:23:34 compute-0 python3.9[347128]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  3 18:23:34 compute-0 systemd[1]: Reloading.
Dec  3 18:23:34 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 18:23:34 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 18:23:34 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:23:36 compute-0 python3.9[347241]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 18:23:36 compute-0 systemd[1]: Reloading.
Dec  3 18:23:36 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v723: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:23:36 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 18:23:36 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 18:23:36 compute-0 systemd[1]: Starting nova_compute container...
Dec  3 18:23:37 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:23:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed246234d3cdde5c3ef2b0f1b3c16418dbd68912cd2ca2d1227bca42685e5b02/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec  3 18:23:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed246234d3cdde5c3ef2b0f1b3c16418dbd68912cd2ca2d1227bca42685e5b02/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec  3 18:23:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed246234d3cdde5c3ef2b0f1b3c16418dbd68912cd2ca2d1227bca42685e5b02/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec  3 18:23:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed246234d3cdde5c3ef2b0f1b3c16418dbd68912cd2ca2d1227bca42685e5b02/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec  3 18:23:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed246234d3cdde5c3ef2b0f1b3c16418dbd68912cd2ca2d1227bca42685e5b02/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec  3 18:23:37 compute-0 podman[347281]: 2025-12-03 18:23:37.855346324 +0000 UTC m=+1.061387363 container init 2f6936e5fece2fc435d87492192851f67236800e7e0a2dedc5ef13591d74c1fc (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  3 18:23:37 compute-0 podman[347281]: 2025-12-03 18:23:37.876841609 +0000 UTC m=+1.082882548 container start 2f6936e5fece2fc435d87492192851f67236800e7e0a2dedc5ef13591d74c1fc (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, container_name=nova_compute)
Dec  3 18:23:37 compute-0 podman[347281]: nova_compute
Dec  3 18:23:37 compute-0 nova_compute[347294]: + sudo -E kolla_set_configs
Dec  3 18:23:37 compute-0 systemd[1]: Started nova_compute container.
Dec  3 18:23:37 compute-0 nova_compute[347294]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  3 18:23:37 compute-0 nova_compute[347294]: INFO:__main__:Validating config file
Dec  3 18:23:37 compute-0 nova_compute[347294]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  3 18:23:37 compute-0 nova_compute[347294]: INFO:__main__:Copying service configuration files
Dec  3 18:23:37 compute-0 nova_compute[347294]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec  3 18:23:37 compute-0 nova_compute[347294]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec  3 18:23:37 compute-0 nova_compute[347294]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec  3 18:23:37 compute-0 nova_compute[347294]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec  3 18:23:37 compute-0 nova_compute[347294]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec  3 18:23:37 compute-0 nova_compute[347294]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec  3 18:23:37 compute-0 nova_compute[347294]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec  3 18:23:37 compute-0 nova_compute[347294]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  3 18:23:37 compute-0 nova_compute[347294]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  3 18:23:37 compute-0 nova_compute[347294]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec  3 18:23:37 compute-0 nova_compute[347294]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec  3 18:23:37 compute-0 nova_compute[347294]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  3 18:23:37 compute-0 nova_compute[347294]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  3 18:23:37 compute-0 nova_compute[347294]: INFO:__main__:Deleting /etc/ceph
Dec  3 18:23:37 compute-0 nova_compute[347294]: INFO:__main__:Creating directory /etc/ceph
Dec  3 18:23:37 compute-0 nova_compute[347294]: INFO:__main__:Setting permission for /etc/ceph
Dec  3 18:23:37 compute-0 nova_compute[347294]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Dec  3 18:23:37 compute-0 nova_compute[347294]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec  3 18:23:37 compute-0 nova_compute[347294]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Dec  3 18:23:37 compute-0 nova_compute[347294]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec  3 18:23:37 compute-0 nova_compute[347294]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec  3 18:23:38 compute-0 nova_compute[347294]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec  3 18:23:38 compute-0 nova_compute[347294]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec  3 18:23:38 compute-0 nova_compute[347294]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec  3 18:23:38 compute-0 nova_compute[347294]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec  3 18:23:38 compute-0 nova_compute[347294]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec  3 18:23:38 compute-0 nova_compute[347294]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec  3 18:23:38 compute-0 nova_compute[347294]: INFO:__main__:Writing out command to execute
Dec  3 18:23:38 compute-0 nova_compute[347294]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec  3 18:23:38 compute-0 nova_compute[347294]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec  3 18:23:38 compute-0 nova_compute[347294]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec  3 18:23:38 compute-0 nova_compute[347294]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec  3 18:23:38 compute-0 nova_compute[347294]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec  3 18:23:38 compute-0 nova_compute[347294]: ++ cat /run_command
Dec  3 18:23:38 compute-0 nova_compute[347294]: + CMD=nova-compute
Dec  3 18:23:38 compute-0 nova_compute[347294]: + ARGS=
Dec  3 18:23:38 compute-0 nova_compute[347294]: + sudo kolla_copy_cacerts
Dec  3 18:23:38 compute-0 nova_compute[347294]: + [[ ! -n '' ]]
Dec  3 18:23:38 compute-0 nova_compute[347294]: + . kolla_extend_start
Dec  3 18:23:38 compute-0 nova_compute[347294]: Running command: 'nova-compute'
Dec  3 18:23:38 compute-0 nova_compute[347294]: + echo 'Running command: '\''nova-compute'\'''
Dec  3 18:23:38 compute-0 nova_compute[347294]: + umask 0022
Dec  3 18:23:38 compute-0 nova_compute[347294]: + exec nova-compute
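
The block above is the standard kolla container entrypoint: kolla_set_configs loads /var/lib/kolla/config_files/config.json, copies each listed file into place with the configured owner and permissions (strategy COPY_ALWAYS overwrites on every start), the service command is written to /run_command, and kolla_start finally execs it, here nova-compute. A stripped-down sketch of that flow, assuming config.json follows the usual kolla schema of a "command" string plus a "config_files" list of source/dest/owner/perm entries:

    import json
    import shutil
    import subprocess

    with open("/var/lib/kolla/config_files/config.json") as f:
        cfg = json.load(f)

    for entry in cfg.get("config_files", []):
        # COPY_ALWAYS: unconditionally refresh the destination.
        shutil.copy(entry["source"], entry["dest"])
        subprocess.run(["chown", entry.get("owner", "root"), entry["dest"]], check=True)
        subprocess.run(["chmod", entry.get("perm", "0600"), entry["dest"]], check=True)

    # kolla_start reads this back ("++ cat /run_command") and execs it.
    with open("/run_command", "w") as f:
        f.write(cfg["command"])
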
Dec  3 18:23:38 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v724: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:23:38 compute-0 podman[347446]: 2025-12-03 18:23:38.911086508 +0000 UTC m=+0.109653915 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Dec  3 18:23:38 compute-0 podman[347453]: 2025-12-03 18:23:38.924988867 +0000 UTC m=+0.109056270 container health_status f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 18:23:38 compute-0 podman[347434]: 2025-12-03 18:23:38.925287855 +0000 UTC m=+0.117983729 container health_status 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, release=1755695350, vcs-type=git, vendor=Red Hat, Inc., container_name=openstack_network_exporter, distribution-scope=public, build-date=2025-08-20T13:12:41, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container)
Dec  3 18:23:38 compute-0 podman[347441]: 2025-12-03 18:23:38.929895247 +0000 UTC m=+0.131710663 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  3 18:23:38 compute-0 podman[347429]: 2025-12-03 18:23:38.934994922 +0000 UTC m=+0.156517418 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_managed=true)
Dec  3 18:23:38 compute-0 podman[347469]: 2025-12-03 18:23:38.950811697 +0000 UTC m=+0.124617540 container health_status ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, name=ubi9, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, vendor=Red Hat, Inc., managed_by=edpm_ansible, version=9.4, com.redhat.component=ubi9-container)
Dec  3 18:23:39 compute-0 python3.9[347504]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 18:23:39 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:23:39 compute-0 python3.9[347724]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 18:23:40 compute-0 nova_compute[347294]: 2025-12-03 18:23:40.254 347298 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec  3 18:23:40 compute-0 nova_compute[347294]: 2025-12-03 18:23:40.255 347298 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec  3 18:23:40 compute-0 nova_compute[347294]: 2025-12-03 18:23:40.255 347298 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec  3 18:23:40 compute-0 nova_compute[347294]: 2025-12-03 18:23:40.255 347298 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
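
os_vif discovers those VIF plugins through Python entry points; the module paths in the log (vif_plug_linux_bridge, vif_plug_noop, vif_plug_ovs) are separate plugin packages registered under a shared namespace. A minimal enumeration with stevedore, assuming the namespace is 'os_vif' and that os-vif and its plugins are installed:

    from stevedore import extension

    # Each extension wraps an entry point such as
    # ovs = vif_plug_ovs.ovs:OvsPlugin in the assumed 'os_vif' namespace.
    mgr = extension.ExtensionManager(namespace="os_vif", invoke_on_load=False)
    for ext in sorted(mgr, key=lambda e: e.name):
        print(ext.name, "->", ext.plugin)
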
Dec  3 18:23:40 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v725: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:23:40 compute-0 nova_compute[347294]: 2025-12-03 18:23:40.410 347298 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 18:23:40 compute-0 nova_compute[347294]: 2025-12-03 18:23:40.446 347298 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.036s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 18:23:40 compute-0 nova_compute[347294]: 2025-12-03 18:23:40.447 347298 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
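
The failing grep is a capability probe, not an error: the code greps the iscsiadm binary for the node.session.scan option to decide whether manual iSCSI scanning is supported, and exit code 1 simply means the string is absent, hence "Not Retrying". The same probe via oslo.concurrency:

    from oslo_concurrency import processutils

    # grep exits 1 when the pattern is not found; list both codes as
    # acceptable so execute() does not raise ProcessExecutionError.
    out, _err = processutils.execute(
        "grep", "-F", "node.session.scan", "/sbin/iscsiadm",
        check_exit_code=[0, 1],
    )
    supports_manual_scan = bool(out)
    print(supports_manual_scan)
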
Dec  3 18:23:40 compute-0 python3.9[347878]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.105 347298 INFO nova.virt.driver [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.240 347298 INFO nova.compute.provider_config [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.255 347298 DEBUG oslo_concurrency.lockutils [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.256 347298 DEBUG oslo_concurrency.lockutils [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.256 347298 DEBUG oslo_concurrency.lockutils [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.256 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.257 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.257 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.257 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.257 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.257 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.258 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.258 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.258 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.258 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.258 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.258 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.258 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.259 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.259 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.259 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.259 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.259 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.260 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.260 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.261 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.261 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.261 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.261 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.261 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.261 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.262 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.262 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.262 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.262 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.262 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.263 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.263 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.263 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.263 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.263 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.263 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.264 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.264 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.264 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.264 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.264 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.264 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.265 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.265 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.265 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.265 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.265 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.265 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.266 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.266 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.266 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.266 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.266 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.266 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.267 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.267 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.267 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.267 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.267 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.267 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.267 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.268 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.268 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.268 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.268 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.268 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.268 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.269 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.269 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.269 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.270 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.270 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.270 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.270 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.270 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.270 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.271 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.271 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.271 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.271 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.272 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.272 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.272 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.272 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.272 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.272 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.273 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.273 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.273 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.273 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.273 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.274 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.274 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.274 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.274 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.274 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.275 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.275 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.275 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.275 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.275 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.275 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.276 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.276 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.276 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.276 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.276 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.277 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.277 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.277 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.277 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.277 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.278 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.278 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.278 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.278 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.278 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.278 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.279 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.279 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.279 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.279 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.279 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.279 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.279 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.280 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.280 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.280 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.280 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.280 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.280 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.281 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.281 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.281 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.281 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.281 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.282 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.282 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.282 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.282 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.282 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.283 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.283 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.283 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.283 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.283 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.284 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.284 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.284 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.284 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.284 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.285 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.285 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.285 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.285 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.285 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.285 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.286 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.286 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.286 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.286 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.286 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.286 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.287 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.287 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.287 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.287 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.287 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.287 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.288 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.288 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.288 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.288 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.288 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.288 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.289 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.289 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.289 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.289 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.289 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.289 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.290 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.290 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.290 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.290 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.290 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.290 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.291 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.291 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.291 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.291 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.292 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.292 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.292 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.292 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.292 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.292 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.293 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.293 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.293 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.293 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.293 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.294 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.294 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.294 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.294 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.294 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.294 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.294 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.295 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.295 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.295 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.295 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.295 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.296 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.296 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.296 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.296 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.296 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.296 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.297 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.297 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.297 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.297 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.297 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.297 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.297 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.298 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.298 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.298 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.298 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.298 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.298 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.299 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.299 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.299 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.299 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.299 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.299 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.300 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.300 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.300 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.300 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.301 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.301 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.301 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.301 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.301 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.301 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.301 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.302 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.302 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.302 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.302 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.302 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.302 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.303 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.303 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.303 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.303 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.303 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.303 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.304 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.304 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.304 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.304 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.304 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.304 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.305 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.305 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.305 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.305 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.305 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.305 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.306 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.306 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.306 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.306 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.306 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.306 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.307 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.307 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.307 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.307 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.307 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.307 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.307 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.308 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.308 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.308 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.308 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.308 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.308 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.309 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.309 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.309 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.309 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.309 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.309 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.310 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.310 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.310 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.310 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.310 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.310 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.311 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.311 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.311 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.311 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.311 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.311 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.312 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.312 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.312 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.312 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.312 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.312 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.313 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.313 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.313 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.313 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.313 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.313 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.313 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.314 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.314 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.314 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.314 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.314 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.314 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.315 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.315 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.315 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.315 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.315 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.315 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.316 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.316 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.316 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.316 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.316 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.316 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.317 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.317 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.317 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.317 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.317 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.317 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.317 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.318 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.318 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.318 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.318 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.318 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.319 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.319 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.319 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.319 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.319 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.319 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.320 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.320 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.320 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.320 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.321 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.321 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.321 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.321 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.321 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.322 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.322 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.322 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.322 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.322 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.322 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.323 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.323 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.323 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.323 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.323 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.323 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.323 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.324 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.324 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.324 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.325 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.325 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.325 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.325 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.325 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.325 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.326 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.326 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.326 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.326 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.326 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.326 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.327 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.327 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.327 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.327 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.327 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.327 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.327 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.328 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.328 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.328 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.328 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.328 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.328 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.329 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.329 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.329 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.329 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.329 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.329 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.330 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.330 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.330 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.330 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.330 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.330 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.331 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.331 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.331 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.331 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.331 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.331 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.332 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.332 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.332 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.332 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.332 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.332 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.333 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.333 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.333 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.333 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.333 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.333 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.334 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.334 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.334 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.334 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.334 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.334 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.335 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.335 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.335 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.335 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.335 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.335 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.336 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.336 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.336 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.336 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.336 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.336 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.337 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.337 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.337 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.337 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.337 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.337 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.338 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.338 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.338 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.338 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.338 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.338 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.339 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.339 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.339 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.339 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.339 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.339 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.339 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.340 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.340 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.340 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.340 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.340 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.341 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.341 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.341 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.341 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.341 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.341 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.341 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.342 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.342 347298 WARNING oslo_config.cfg [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec  3 18:23:41 compute-0 nova_compute[347294]: live_migration_uri is deprecated for removal in favor of two other options that
Dec  3 18:23:41 compute-0 nova_compute[347294]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Dec  3 18:23:41 compute-0 nova_compute[347294]: and ``live_migration_inbound_addr`` respectively.
Dec  3 18:23:41 compute-0 nova_compute[347294]: ).  Its value may be silently ignored in the future.
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.342 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.342 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.342 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.343 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.343 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.343 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.343 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.343 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.343 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.344 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.344 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.344 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.344 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.344 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.345 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.345 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.345 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.345 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.345 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.rbd_secret_uuid        = c1caf3ba-b2a5-5005-a11e-e955c344dccc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.346 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.346 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.346 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.346 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.346 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.346 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.347 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.347 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.347 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.347 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.347 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.348 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.348 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.348 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.348 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.348 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.349 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.349 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.349 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.349 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.349 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.349 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.349 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.350 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.350 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.350 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.350 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.350 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.350 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.351 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.351 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.351 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.351 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.351 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.351 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.352 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.352 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.352 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.352 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.352 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.353 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.353 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.353 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.353 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.354 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.354 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.354 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.354 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.354 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.354 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.354 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.355 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.355 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.355 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.355 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.355 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.355 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.356 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.356 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.356 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.356 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.356 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.357 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.357 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.357 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.357 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.357 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.358 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.358 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.358 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.358 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.358 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.359 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.359 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.359 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.359 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.360 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.360 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.360 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.360 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.360 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.360 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.361 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.361 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.361 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.361 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.361 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.361 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.362 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.362 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.362 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.362 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.362 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.362 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.363 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.363 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.363 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.363 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.363 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.363 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.364 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.364 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.364 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.364 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.364 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.365 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
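
The "group.option = value" lines above are emitted by oslo.config itself: with debug logging enabled, nova-compute calls ConfigOpts.log_opt_values() at startup (the cfg.py:2609 reference on every line), which logs each registered option and masks options registered with secret=True as '****', as seen for placement.password. A minimal standalone sketch of that mechanism, using a small hypothetical option set rather than nova's real registration code:

    import logging

    from oslo_config import cfg

    CONF = cfg.CONF

    # Hypothetical options for illustration; nova registers a much larger set.
    # secret=True is what makes log_opt_values() print '****' for the value.
    group = cfg.OptGroup('placement')
    CONF.register_group(group)
    CONF.register_opts(
        [
            cfg.StrOpt('username'),
            cfg.StrOpt('password', secret=True),
            cfg.StrOpt('region_name'),
        ],
        group=group,
    )

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF([], project='nova')                 # parse an empty command line
    CONF.log_opt_values(LOG, logging.DEBUG)  # one DEBUG line per option
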
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.365 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.365 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.365 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.365 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.366 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.366 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.366 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.366 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.366 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.366 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.366 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.367 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.367 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.367 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.367 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.368 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.368 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.368 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.368 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.368 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.369 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.369 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.369 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.369 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.369 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.370 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.370 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.370 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.370 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.370 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.370 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.371 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.371 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.371 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.371 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.371 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.371 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.371 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.372 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.372 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.372 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.372 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.372 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.373 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.373 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.373 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.373 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.373 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.373 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.374 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
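
Dumps like the one above are convenient for auditing configuration drift between compute hosts. A sketch of one way to fold the captured lines back into a dict for comparison; the regular expression is an assumption matched to the line shape shown here, not a nova or oslo interface:

    import re

    # Matches "...- - -] group.option = value log_opt_values <file:line>".
    OPT_LINE = re.compile(
        r'\]\s+(?P<opt>[A-Za-z0-9_.]+)\s+=\s+(?P<val>.*?)'
        r'\s+log_opt_values\s+\S+$'
    )

    def parse_dump(lines):
        """Collect {'group.option': 'value'} from log_opt_values output."""
        opts = {}
        for line in lines:
            match = OPT_LINE.search(line)
            if match:
                opts[match.group('opt')] = match.group('val')
        return opts

    # Hypothetical on-disk capture of this journal.
    with open('compute-0-nova-compute.log') as fh:
        opts = parse_dump(fh)

    print(opts.get('filter_scheduler.enabled_filters'))
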
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.374 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.374 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.374 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.374 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.375 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.375 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.375 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.375 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.375 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.376 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.376 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.376 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.376 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.377 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.377 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.377 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.377 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.377 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.377 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.378 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.378 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.378 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.378 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.378 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.379 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.379 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.379 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.379 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.379 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.379 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.380 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.380 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.380 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.380 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.380 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.380 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.381 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.381 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.381 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.381 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.381 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.381 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.382 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.382 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.382 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.382 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.382 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.383 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.383 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.383 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.383 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.383 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.384 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.384 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.384 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.384 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.384 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.384 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.385 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.385 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.385 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.385 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.385 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.386 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.386 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.386 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.386 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.386 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.386 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.387 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.387 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.387 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.387 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.387 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.388 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.388 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.388 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.388 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.388 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.388 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.389 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.389 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.389 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.389 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.389 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.389 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.390 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.390 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.390 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.390 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.390 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.390 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.390 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.391 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.391 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.391 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.391 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.391 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.391 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.391 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.392 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.392 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.392 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.392 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.392 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.392 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.393 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.393 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.393 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.393 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.393 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.393 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.394 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
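
The wsgi.wsgi_log_format value just above is a plain Python %-style template. A quick illustration with made-up request fields, to show what a rendered access-log line would look like (all field values here are hypothetical; only the format string comes from the log):

    fmt = ('%(client_ip)s "%(request_line)s" status: %(status_code)s '
           'len: %(body_length)s time: %(wall_seconds).7f')

    print(fmt % {
        'client_ip': '192.168.122.100',
        'request_line': 'GET /v2.1/servers/detail HTTP/1.1',
        'status_code': 200,
        'body_length': 1365,
        'wall_seconds': 0.0421937,
    })
    # 192.168.122.100 "GET /v2.1/servers/detail HTTP/1.1" status: 200 len: 1365 time: 0.0421937
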
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.394 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.394 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.394 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.394 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.394 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.395 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.395 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.395 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.395 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.395 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.395 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.396 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.396 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.396 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.396 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.396 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.397 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.397 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.397 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.397 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.397 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.398 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.398 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.398 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.398 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.398 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.398 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.399 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.399 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.399 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.399 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.399 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.399 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.400 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.400 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.400 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.400 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.400 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.401 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.401 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.401 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.401 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.401 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.402 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.402 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.402 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.402 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.402 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.402 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.403 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.403 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.403 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.403 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.403 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.404 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.404 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.404 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.404 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.404 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.404 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.405 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.405 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.405 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.405 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.405 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.405 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.406 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.406 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.406 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.406 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.406 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.406 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.407 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.407 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.407 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.407 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.407 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.407 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.407 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.408 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.408 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.408 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.408 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.408 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.408 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.409 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.409 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.409 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.409 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.409 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.410 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.410 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.410 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.410 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.410 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.410 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.411 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.411 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.411 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.411 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.411 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.411 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.411 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.412 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.412 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.412 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.412 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.412 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.412 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.413 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.413 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.413 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.413 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.413 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.413 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.414 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.414 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.414 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.414 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.414 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.414 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.414 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.415 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.415 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.415 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.415 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.415 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.415 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.415 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.416 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.416 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.416 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.416 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.416 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.416 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.417 347298 DEBUG oslo_service.service [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.418 347298 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.433 347298 DEBUG nova.virt.libvirt.host [None req-ee04da14-0dfa-4d43-9c74-19eadaa3a4b0 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.434 347298 DEBUG nova.virt.libvirt.host [None req-ee04da14-0dfa-4d43-9c74-19eadaa3a4b0 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.435 347298 DEBUG nova.virt.libvirt.host [None req-ee04da14-0dfa-4d43-9c74-19eadaa3a4b0 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.435 347298 DEBUG nova.virt.libvirt.host [None req-ee04da14-0dfa-4d43-9c74-19eadaa3a4b0 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.456 347298 DEBUG nova.virt.libvirt.host [None req-ee04da14-0dfa-4d43-9c74-19eadaa3a4b0 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f625ac66e20> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.461 347298 DEBUG nova.virt.libvirt.host [None req-ee04da14-0dfa-4d43-9c74-19eadaa3a4b0 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f625ac66e20> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.463 347298 INFO nova.virt.libvirt.driver [None req-ee04da14-0dfa-4d43-9c74-19eadaa3a4b0 - - - - - -] Connection event '1' reason 'None'#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.482 347298 WARNING nova.virt.libvirt.driver [None req-ee04da14-0dfa-4d43-9c74-19eadaa3a4b0 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Dec  3 18:23:41 compute-0 nova_compute[347294]: 2025-12-03 18:23:41.483 347298 DEBUG nova.virt.libvirt.volume.mount [None req-ee04da14-0dfa-4d43-9c74-19eadaa3a4b0 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Dec  3 18:23:42 compute-0 python3.9[348061]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Dec  3 18:23:42 compute-0 rsyslogd[188590]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  3 18:23:42 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v726: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:23:42 compute-0 nova_compute[347294]: 2025-12-03 18:23:42.527 347298 INFO nova.virt.libvirt.host [None req-ee04da14-0dfa-4d43-9c74-19eadaa3a4b0 - - - - - -] Libvirt host capabilities <capabilities>
Dec  3 18:23:42 compute-0 nova_compute[347294]: 
Dec  3 18:23:42 compute-0 nova_compute[347294]:  <host>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <uuid>3f123a89-727d-4ccf-a960-b8fd98f4d5b8</uuid>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <cpu>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <arch>x86_64</arch>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model>EPYC-Rome-v4</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <vendor>AMD</vendor>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <microcode version='16777317'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <signature family='23' model='49' stepping='0'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <maxphysaddr mode='emulate' bits='40'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature name='x2apic'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature name='tsc-deadline'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature name='osxsave'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature name='hypervisor'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature name='tsc_adjust'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature name='spec-ctrl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature name='stibp'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature name='arch-capabilities'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature name='ssbd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature name='cmp_legacy'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature name='topoext'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature name='virt-ssbd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature name='lbrv'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature name='tsc-scale'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature name='vmcb-clean'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature name='pause-filter'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature name='pfthreshold'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature name='svme-addr-chk'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature name='rdctl-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature name='skip-l1dfl-vmentry'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature name='mds-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature name='pschange-mc-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <pages unit='KiB' size='4'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <pages unit='KiB' size='2048'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <pages unit='KiB' size='1048576'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </cpu>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <power_management>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <suspend_mem/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </power_management>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <iommu support='no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <migration_features>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <live/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <uri_transports>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <uri_transport>tcp</uri_transport>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <uri_transport>rdma</uri_transport>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </uri_transports>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </migration_features>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <topology>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <cells num='1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <cell id='0'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:          <memory unit='KiB'>7864312</memory>
Dec  3 18:23:42 compute-0 nova_compute[347294]:          <pages unit='KiB' size='4'>1966078</pages>
Dec  3 18:23:42 compute-0 nova_compute[347294]:          <pages unit='KiB' size='2048'>0</pages>
Dec  3 18:23:42 compute-0 nova_compute[347294]:          <pages unit='KiB' size='1048576'>0</pages>
Dec  3 18:23:42 compute-0 nova_compute[347294]:          <distances>
Dec  3 18:23:42 compute-0 nova_compute[347294]:            <sibling id='0' value='10'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:          </distances>
Dec  3 18:23:42 compute-0 nova_compute[347294]:          <cpus num='8'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:          </cpus>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        </cell>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </cells>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </topology>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <cache>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </cache>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <secmodel>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model>selinux</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <doi>0</doi>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </secmodel>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <secmodel>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model>dac</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <doi>0</doi>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <baselabel type='kvm'>+107:+107</baselabel>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <baselabel type='qemu'>+107:+107</baselabel>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </secmodel>
Dec  3 18:23:42 compute-0 nova_compute[347294]:  </host>
Dec  3 18:23:42 compute-0 nova_compute[347294]: 
Dec  3 18:23:42 compute-0 nova_compute[347294]:  <guest>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <os_type>hvm</os_type>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <arch name='i686'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <wordsize>32</wordsize>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <domain type='qemu'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <domain type='kvm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </arch>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <features>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <pae/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <nonpae/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <acpi default='on' toggle='yes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <apic default='on' toggle='no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <cpuselection/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <deviceboot/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <disksnapshot default='on' toggle='no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <externalSnapshot/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </features>
Dec  3 18:23:42 compute-0 nova_compute[347294]:  </guest>
Dec  3 18:23:42 compute-0 nova_compute[347294]: 
Dec  3 18:23:42 compute-0 nova_compute[347294]:  <guest>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <os_type>hvm</os_type>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <arch name='x86_64'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <wordsize>64</wordsize>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <domain type='qemu'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <domain type='kvm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </arch>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <features>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <acpi default='on' toggle='yes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <apic default='on' toggle='no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <cpuselection/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <deviceboot/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <disksnapshot default='on' toggle='no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <externalSnapshot/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </features>
Dec  3 18:23:42 compute-0 nova_compute[347294]:  </guest>
Dec  3 18:23:42 compute-0 nova_compute[347294]: 
Dec  3 18:23:42 compute-0 nova_compute[347294]: </capabilities>
Dec  3 18:23:42 compute-0 nova_compute[347294]: 2025-12-03 18:23:42.536 347298 DEBUG nova.virt.libvirt.host [None req-ee04da14-0dfa-4d43-9c74-19eadaa3a4b0 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec  3 18:23:42 compute-0 nova_compute[347294]: 2025-12-03 18:23:42.561 347298 DEBUG nova.virt.libvirt.host [None req-ee04da14-0dfa-4d43-9c74-19eadaa3a4b0 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Dec  3 18:23:42 compute-0 nova_compute[347294]: <domainCapabilities>
Dec  3 18:23:42 compute-0 nova_compute[347294]:  <path>/usr/libexec/qemu-kvm</path>
Dec  3 18:23:42 compute-0 nova_compute[347294]:  <domain>kvm</domain>
Dec  3 18:23:42 compute-0 nova_compute[347294]:  <machine>pc-i440fx-rhel7.6.0</machine>
Dec  3 18:23:42 compute-0 nova_compute[347294]:  <arch>i686</arch>
Dec  3 18:23:42 compute-0 nova_compute[347294]:  <vcpu max='240'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:  <iothreads supported='yes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:  <os supported='yes'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <enum name='firmware'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <loader supported='yes'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='type'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>rom</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>pflash</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='readonly'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>yes</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>no</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='secure'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>no</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </loader>
Dec  3 18:23:42 compute-0 nova_compute[347294]:  </os>
Dec  3 18:23:42 compute-0 nova_compute[347294]:  <cpu>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <mode name='host-passthrough' supported='yes'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='hostPassthroughMigratable'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>on</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>off</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </mode>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <mode name='maximum' supported='yes'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='maximumMigratable'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>on</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>off</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </mode>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <mode name='host-model' supported='yes'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <vendor>AMD</vendor>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='x2apic'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='tsc-deadline'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='hypervisor'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='tsc_adjust'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='spec-ctrl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='stibp'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='ssbd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='cmp_legacy'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='overflow-recov'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='succor'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='ibrs'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='amd-ssbd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='virt-ssbd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='lbrv'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='tsc-scale'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='vmcb-clean'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='flushbyasid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='pause-filter'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='pfthreshold'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='svme-addr-chk'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='disable' name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </mode>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <mode name='custom' supported='yes'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Broadwell'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Broadwell-IBRS'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Broadwell-noTSX'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Broadwell-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Broadwell-v2'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Broadwell-v3'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Broadwell-v4'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Cascadelake-Server'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Cascadelake-Server-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Cascadelake-Server-v2'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Cascadelake-Server-v3'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Cascadelake-Server-v4'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Cascadelake-Server-v5'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Cooperlake'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='taa-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Cooperlake-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='taa-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Cooperlake-v2'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='taa-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Denverton'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='mpx'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Denverton-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='mpx'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Denverton-v2'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Denverton-v3'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Dhyana-v2'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='EPYC-Genoa'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amd-psfd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='auto-ibrs'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bitalg'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512ifma'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='la57'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='no-nested-data-bp'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='null-sel-clr-base'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='stibp-always-on'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='EPYC-Genoa-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amd-psfd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='auto-ibrs'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bitalg'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512ifma'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='la57'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='no-nested-data-bp'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='null-sel-clr-base'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='stibp-always-on'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='EPYC-Milan'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='EPYC-Milan-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='EPYC-Milan-v2'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amd-psfd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='no-nested-data-bp'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='null-sel-clr-base'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='stibp-always-on'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='EPYC-Rome'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='EPYC-Rome-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='EPYC-Rome-v2'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='EPYC-Rome-v3'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='EPYC-v3'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='EPYC-v4'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='GraniteRapids'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-fp16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-int8'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-tile'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx-vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-fp16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bitalg'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512ifma'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='bus-lock-detect'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fbsdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrc'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrs'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fzrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='la57'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='mcdt-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pbrsb-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='prefetchiti'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='psdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='sbdr-ssdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='serialize'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='taa-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='tsx-ldtrk'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xfd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='GraniteRapids-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-fp16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-int8'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-tile'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx-vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-fp16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bitalg'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512ifma'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='bus-lock-detect'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fbsdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrc'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrs'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fzrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='la57'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='mcdt-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pbrsb-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='prefetchiti'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='psdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='sbdr-ssdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='serialize'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='taa-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='tsx-ldtrk'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xfd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='GraniteRapids-v2'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-fp16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-int8'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-tile'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx-vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx10'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx10-128'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx10-256'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx10-512'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-fp16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bitalg'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512ifma'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='bus-lock-detect'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='cldemote'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fbsdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrc'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrs'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fzrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='la57'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='mcdt-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='movdir64b'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='movdiri'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pbrsb-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='prefetchiti'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='psdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='sbdr-ssdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='serialize'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ss'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='taa-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='tsx-ldtrk'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xfd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Haswell'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Haswell-IBRS'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Haswell-noTSX'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Haswell-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Haswell-v2'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Haswell-v3'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Haswell-v4'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Icelake-Server'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bitalg'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='la57'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Icelake-Server-noTSX'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bitalg'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='la57'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Icelake-Server-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bitalg'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='la57'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Icelake-Server-v2'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bitalg'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='la57'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Icelake-Server-v3'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bitalg'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='la57'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='taa-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Icelake-Server-v4'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bitalg'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512ifma'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='la57'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='taa-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Icelake-Server-v5'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bitalg'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512ifma'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='la57'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='taa-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Icelake-Server-v6'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bitalg'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512ifma'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='la57'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='taa-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Icelake-Server-v7'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bitalg'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512ifma'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='la57'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='taa-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='IvyBridge'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='IvyBridge-IBRS'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='IvyBridge-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='IvyBridge-v2'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='KnightsMill'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-4fmaps'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-4vnniw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512er'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512pf'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ss'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='KnightsMill-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-4fmaps'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-4vnniw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512er'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512pf'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ss'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Opteron_G4'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fma4'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xop'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Opteron_G4-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fma4'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xop'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Opteron_G5'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fma4'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='tbm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xop'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Opteron_G5-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fma4'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='tbm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xop'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='SapphireRapids'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-int8'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-tile'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx-vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-fp16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bitalg'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512ifma'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='bus-lock-detect'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrc'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrs'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fzrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='la57'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='serialize'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='taa-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='tsx-ldtrk'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xfd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='SapphireRapids-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-int8'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-tile'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx-vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-fp16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bitalg'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512ifma'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='bus-lock-detect'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrc'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrs'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fzrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='la57'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='serialize'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='taa-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='tsx-ldtrk'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xfd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='SapphireRapids-v2'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-int8'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-tile'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx-vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-fp16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bitalg'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512ifma'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='bus-lock-detect'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fbsdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrc'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrs'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fzrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='la57'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='psdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='sbdr-ssdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='serialize'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='taa-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='tsx-ldtrk'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xfd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='SapphireRapids-v3'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-int8'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-tile'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx-vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-fp16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bitalg'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512ifma'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='bus-lock-detect'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='cldemote'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fbsdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrc'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrs'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fzrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='la57'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='movdir64b'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='movdiri'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='psdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='sbdr-ssdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='serialize'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ss'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='taa-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='tsx-ldtrk'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xfd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='SierraForest'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx-ifma'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx-ne-convert'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx-vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx-vnni-int8'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='bus-lock-detect'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='cmpccxadd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fbsdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrs'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='mcdt-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pbrsb-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='psdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='sbdr-ssdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='serialize'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='SierraForest-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx-ifma'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx-ne-convert'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx-vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx-vnni-int8'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='bus-lock-detect'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='cmpccxadd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fbsdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrs'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='mcdt-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pbrsb-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='psdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='sbdr-ssdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='serialize'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Skylake-Client'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Skylake-Client-IBRS'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Skylake-Client-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Skylake-Client-v2'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Skylake-Client-v3'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Skylake-Client-v4'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Skylake-Server'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Skylake-Server-IBRS'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Skylake-Server-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Skylake-Server-v2'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Skylake-Server-v3'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Skylake-Server-v4'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Skylake-Server-v5'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Snowridge'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='cldemote'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='core-capability'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='movdir64b'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='movdiri'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='mpx'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='split-lock-detect'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Snowridge-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='cldemote'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='core-capability'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='movdir64b'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='movdiri'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='mpx'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='split-lock-detect'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Snowridge-v2'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='cldemote'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='core-capability'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='movdir64b'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='movdiri'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='split-lock-detect'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Snowridge-v3'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='cldemote'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='core-capability'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='movdir64b'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='movdiri'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='split-lock-detect'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Snowridge-v4'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='cldemote'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='movdir64b'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='movdiri'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='athlon'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='3dnow'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='3dnowext'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='athlon-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='3dnow'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='3dnowext'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='core2duo'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ss'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='core2duo-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ss'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='coreduo'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ss'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='coreduo-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ss'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='n270'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ss'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='n270-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ss'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='phenom'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='3dnow'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='3dnowext'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='phenom-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='3dnow'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='3dnowext'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </mode>
Dec  3 18:23:42 compute-0 nova_compute[347294]:  </cpu>
Dec  3 18:23:42 compute-0 nova_compute[347294]:  <memoryBacking supported='yes'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <enum name='sourceType'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <value>file</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <value>anonymous</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <value>memfd</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:  </memoryBacking>
Dec  3 18:23:42 compute-0 nova_compute[347294]:  <devices>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <disk supported='yes'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='diskDevice'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>disk</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>cdrom</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>floppy</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>lun</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='bus'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>ide</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>fdc</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>scsi</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>virtio</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>usb</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>sata</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='model'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>virtio</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>virtio-transitional</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>virtio-non-transitional</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </disk>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <graphics supported='yes'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='type'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>vnc</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>egl-headless</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>dbus</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </graphics>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <video supported='yes'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='modelType'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>vga</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>cirrus</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>virtio</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>none</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>bochs</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>ramfb</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </video>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <hostdev supported='yes'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='mode'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>subsystem</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='startupPolicy'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>default</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>mandatory</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>requisite</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>optional</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='subsysType'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>usb</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>pci</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>scsi</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='capsType'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='pciBackend'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </hostdev>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <rng supported='yes'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='model'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>virtio</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>virtio-transitional</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>virtio-non-transitional</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='backendModel'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>random</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>egd</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>builtin</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </rng>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <filesystem supported='yes'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='driverType'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>path</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>handle</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>virtiofs</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </filesystem>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <tpm supported='yes'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='model'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>tpm-tis</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>tpm-crb</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='backendModel'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>emulator</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>external</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='backendVersion'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>2.0</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </tpm>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <redirdev supported='yes'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='bus'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>usb</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </redirdev>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <channel supported='yes'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='type'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>pty</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>unix</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </channel>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <crypto supported='yes'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='model'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='type'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>qemu</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='backendModel'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>builtin</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </crypto>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <interface supported='yes'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='backendType'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>default</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>passt</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </interface>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <panic supported='yes'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='model'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>isa</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>hyperv</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </panic>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <console supported='yes'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='type'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>null</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>vc</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>pty</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>dev</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>file</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>pipe</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>stdio</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>udp</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>tcp</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>unix</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>qemu-vdagent</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>dbus</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </console>
Dec  3 18:23:42 compute-0 nova_compute[347294]:  </devices>
Dec  3 18:23:42 compute-0 nova_compute[347294]:  <features>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <gic supported='no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <vmcoreinfo supported='yes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <genid supported='yes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <backingStoreInput supported='yes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <backup supported='yes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <async-teardown supported='yes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <ps2 supported='yes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <sev supported='no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <sgx supported='no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <hyperv supported='yes'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='features'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>relaxed</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>vapic</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>spinlocks</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>vpindex</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>runtime</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>synic</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>stimer</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>reset</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>vendor_id</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>frequencies</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>reenlightenment</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>tlbflush</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>ipi</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>avic</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>emsr_bitmap</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>xmm_input</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <defaults>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <spinlocks>4095</spinlocks>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <stimer_direct>on</stimer_direct>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <tlbflush_direct>on</tlbflush_direct>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <tlbflush_extended>on</tlbflush_extended>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </defaults>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </hyperv>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <launchSecurity supported='yes'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='sectype'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>tdx</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </launchSecurity>
Dec  3 18:23:42 compute-0 nova_compute[347294]:  </features>
Dec  3 18:23:42 compute-0 nova_compute[347294]: </domainCapabilities>
Dec  3 18:23:42 compute-0 nova_compute[347294]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec  3 18:23:42 compute-0 nova_compute[347294]: 2025-12-03 18:23:42.568 347298 DEBUG nova.virt.libvirt.host [None req-ee04da14-0dfa-4d43-9c74-19eadaa3a4b0 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Dec  3 18:23:42 compute-0 nova_compute[347294]: <domainCapabilities>
Dec  3 18:23:42 compute-0 nova_compute[347294]:  <path>/usr/libexec/qemu-kvm</path>
Dec  3 18:23:42 compute-0 nova_compute[347294]:  <domain>kvm</domain>
Dec  3 18:23:42 compute-0 nova_compute[347294]:  <machine>pc-q35-rhel9.8.0</machine>
Dec  3 18:23:42 compute-0 nova_compute[347294]:  <arch>i686</arch>
Dec  3 18:23:42 compute-0 nova_compute[347294]:  <vcpu max='4096'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:  <iothreads supported='yes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:  <os supported='yes'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <enum name='firmware'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <loader supported='yes'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='type'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>rom</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>pflash</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='readonly'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>yes</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>no</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='secure'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>no</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </loader>
Dec  3 18:23:42 compute-0 nova_compute[347294]:  </os>
Dec  3 18:23:42 compute-0 nova_compute[347294]:  <cpu>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <mode name='host-passthrough' supported='yes'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='hostPassthroughMigratable'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>on</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>off</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </mode>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <mode name='maximum' supported='yes'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='maximumMigratable'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>on</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>off</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </mode>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <mode name='host-model' supported='yes'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <vendor>AMD</vendor>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='x2apic'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='tsc-deadline'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='hypervisor'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='tsc_adjust'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='spec-ctrl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='stibp'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='ssbd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='cmp_legacy'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='overflow-recov'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='succor'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='ibrs'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='amd-ssbd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='virt-ssbd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='lbrv'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='tsc-scale'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='vmcb-clean'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='flushbyasid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='pause-filter'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='pfthreshold'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='svme-addr-chk'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='disable' name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </mode>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <mode name='custom' supported='yes'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Broadwell'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Broadwell-IBRS'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Broadwell-noTSX'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Broadwell-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Broadwell-v2'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Broadwell-v3'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Broadwell-v4'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Cascadelake-Server'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Cascadelake-Server-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Cascadelake-Server-v2'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Cascadelake-Server-v3'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Cascadelake-Server-v4'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Cascadelake-Server-v5'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Cooperlake'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='taa-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Cooperlake-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='taa-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Cooperlake-v2'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='taa-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Denverton'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='mpx'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Denverton-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='mpx'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Denverton-v2'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Denverton-v3'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Dhyana-v2'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='EPYC-Genoa'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amd-psfd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='auto-ibrs'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bitalg'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512ifma'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='la57'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='no-nested-data-bp'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='null-sel-clr-base'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='stibp-always-on'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='EPYC-Genoa-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amd-psfd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='auto-ibrs'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bitalg'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512ifma'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='la57'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='no-nested-data-bp'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='null-sel-clr-base'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='stibp-always-on'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='EPYC-Milan'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='EPYC-Milan-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='EPYC-Milan-v2'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amd-psfd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='no-nested-data-bp'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='null-sel-clr-base'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='stibp-always-on'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='EPYC-Rome'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='EPYC-Rome-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='EPYC-Rome-v2'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='EPYC-Rome-v3'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='EPYC-v3'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='EPYC-v4'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='GraniteRapids'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-fp16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-int8'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-tile'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx-vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-fp16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bitalg'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512ifma'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='bus-lock-detect'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fbsdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrc'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrs'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fzrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='la57'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='mcdt-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pbrsb-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='prefetchiti'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='psdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='sbdr-ssdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='serialize'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='taa-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='tsx-ldtrk'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xfd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='GraniteRapids-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-fp16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-int8'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-tile'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx-vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-fp16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bitalg'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512ifma'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='bus-lock-detect'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fbsdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrc'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrs'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fzrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='la57'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='mcdt-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pbrsb-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='prefetchiti'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='psdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='sbdr-ssdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='serialize'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='taa-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='tsx-ldtrk'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xfd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='GraniteRapids-v2'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-fp16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-int8'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-tile'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx-vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx10'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx10-128'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx10-256'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx10-512'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-fp16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bitalg'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512ifma'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='bus-lock-detect'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='cldemote'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fbsdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrc'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrs'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fzrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='la57'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='mcdt-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='movdir64b'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='movdiri'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pbrsb-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='prefetchiti'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='psdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='sbdr-ssdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='serialize'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ss'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='taa-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='tsx-ldtrk'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xfd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Haswell'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Haswell-IBRS'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Haswell-noTSX'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Haswell-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Haswell-v2'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Haswell-v3'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Haswell-v4'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Icelake-Server'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bitalg'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='la57'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Icelake-Server-noTSX'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bitalg'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='la57'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Icelake-Server-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bitalg'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='la57'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Icelake-Server-v2'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bitalg'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='la57'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Icelake-Server-v3'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bitalg'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='la57'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='taa-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Icelake-Server-v4'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bitalg'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512ifma'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='la57'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='taa-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Icelake-Server-v5'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bitalg'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512ifma'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='la57'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='taa-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Icelake-Server-v6'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bitalg'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512ifma'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='la57'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='taa-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Icelake-Server-v7'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bitalg'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512ifma'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='la57'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='taa-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='IvyBridge'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='IvyBridge-IBRS'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='IvyBridge-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='IvyBridge-v2'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='KnightsMill'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-4fmaps'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-4vnniw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512er'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512pf'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ss'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='KnightsMill-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-4fmaps'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-4vnniw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512er'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512pf'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ss'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Opteron_G4'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fma4'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xop'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Opteron_G4-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fma4'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xop'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Opteron_G5'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fma4'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='tbm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xop'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Opteron_G5-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fma4'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='tbm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xop'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='SapphireRapids'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-int8'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-tile'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx-vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-fp16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bitalg'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512ifma'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='bus-lock-detect'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrc'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrs'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fzrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='la57'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='serialize'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='taa-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='tsx-ldtrk'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xfd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='SapphireRapids-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-int8'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-tile'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx-vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-fp16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bitalg'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512ifma'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='bus-lock-detect'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrc'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrs'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fzrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='la57'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='serialize'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='taa-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='tsx-ldtrk'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xfd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='SapphireRapids-v2'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-int8'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-tile'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx-vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-fp16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bitalg'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512ifma'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='bus-lock-detect'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fbsdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrc'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrs'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fzrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='la57'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='psdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='sbdr-ssdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='serialize'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='taa-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='tsx-ldtrk'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xfd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='SapphireRapids-v3'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-int8'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-tile'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx-vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-fp16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bitalg'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512ifma'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='bus-lock-detect'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='cldemote'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fbsdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrc'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrs'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fzrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='la57'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='movdir64b'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='movdiri'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='psdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='sbdr-ssdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='serialize'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ss'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='taa-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='tsx-ldtrk'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xfd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='SierraForest'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx-ifma'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx-ne-convert'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx-vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx-vnni-int8'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='bus-lock-detect'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='cmpccxadd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fbsdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrs'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='mcdt-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pbrsb-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='psdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='sbdr-ssdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='serialize'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='SierraForest-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx-ifma'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx-ne-convert'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx-vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx-vnni-int8'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='bus-lock-detect'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='cmpccxadd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fbsdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrs'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='mcdt-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pbrsb-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='psdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='sbdr-ssdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='serialize'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Skylake-Client'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Skylake-Client-IBRS'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Skylake-Client-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Skylake-Client-v2'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Skylake-Client-v3'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Skylake-Client-v4'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Skylake-Server'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Skylake-Server-IBRS'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Skylake-Server-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Skylake-Server-v2'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Skylake-Server-v3'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Skylake-Server-v4'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Skylake-Server-v5'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Snowridge'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='cldemote'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='core-capability'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='movdir64b'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='movdiri'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='mpx'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='split-lock-detect'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Snowridge-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='cldemote'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='core-capability'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='movdir64b'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='movdiri'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='mpx'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='split-lock-detect'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Snowridge-v2'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='cldemote'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='core-capability'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='movdir64b'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='movdiri'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='split-lock-detect'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Snowridge-v3'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='cldemote'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='core-capability'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='movdir64b'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='movdiri'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='split-lock-detect'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Snowridge-v4'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='cldemote'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='movdir64b'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='movdiri'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='athlon'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='3dnow'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='3dnowext'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='athlon-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='3dnow'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='3dnowext'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='core2duo'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ss'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='core2duo-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ss'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='coreduo'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ss'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='coreduo-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ss'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='n270'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ss'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='n270-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ss'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='phenom'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='3dnow'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='3dnowext'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='phenom-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='3dnow'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='3dnowext'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </mode>
Dec  3 18:23:42 compute-0 nova_compute[347294]:  </cpu>
Dec  3 18:23:42 compute-0 nova_compute[347294]:  <memoryBacking supported='yes'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <enum name='sourceType'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <value>file</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <value>anonymous</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <value>memfd</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:  </memoryBacking>
Dec  3 18:23:42 compute-0 nova_compute[347294]:  <devices>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <disk supported='yes'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='diskDevice'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>disk</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>cdrom</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>floppy</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>lun</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='bus'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>fdc</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>scsi</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>virtio</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>usb</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>sata</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='model'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>virtio</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>virtio-transitional</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>virtio-non-transitional</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </disk>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <graphics supported='yes'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='type'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>vnc</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>egl-headless</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>dbus</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </graphics>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <video supported='yes'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='modelType'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>vga</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>cirrus</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>virtio</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>none</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>bochs</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>ramfb</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </video>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <hostdev supported='yes'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='mode'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>subsystem</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='startupPolicy'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>default</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>mandatory</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>requisite</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>optional</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='subsysType'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>usb</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>pci</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>scsi</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='capsType'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='pciBackend'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </hostdev>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <rng supported='yes'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='model'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>virtio</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>virtio-transitional</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>virtio-non-transitional</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='backendModel'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>random</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>egd</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>builtin</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </rng>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <filesystem supported='yes'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='driverType'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>path</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>handle</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>virtiofs</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </filesystem>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <tpm supported='yes'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='model'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>tpm-tis</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>tpm-crb</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='backendModel'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>emulator</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>external</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='backendVersion'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>2.0</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </tpm>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <redirdev supported='yes'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='bus'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>usb</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </redirdev>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <channel supported='yes'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='type'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>pty</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>unix</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </channel>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <crypto supported='yes'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='model'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='type'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>qemu</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='backendModel'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>builtin</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </crypto>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <interface supported='yes'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='backendType'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>default</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>passt</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </interface>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <panic supported='yes'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='model'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>isa</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>hyperv</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </panic>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <console supported='yes'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='type'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>null</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>vc</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>pty</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>dev</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>file</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>pipe</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>stdio</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>udp</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>tcp</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>unix</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>qemu-vdagent</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>dbus</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </console>
Dec  3 18:23:42 compute-0 nova_compute[347294]:  </devices>
Dec  3 18:23:42 compute-0 nova_compute[347294]:  <features>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <gic supported='no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <vmcoreinfo supported='yes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <genid supported='yes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <backingStoreInput supported='yes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <backup supported='yes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <async-teardown supported='yes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <ps2 supported='yes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <sev supported='no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <sgx supported='no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <hyperv supported='yes'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='features'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>relaxed</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>vapic</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>spinlocks</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>vpindex</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>runtime</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>synic</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>stimer</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>reset</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>vendor_id</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>frequencies</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>reenlightenment</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>tlbflush</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>ipi</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>avic</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>emsr_bitmap</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>xmm_input</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <defaults>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <spinlocks>4095</spinlocks>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <stimer_direct>on</stimer_direct>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <tlbflush_direct>on</tlbflush_direct>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <tlbflush_extended>on</tlbflush_extended>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </defaults>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </hyperv>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <launchSecurity supported='yes'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='sectype'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>tdx</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </launchSecurity>
Dec  3 18:23:42 compute-0 nova_compute[347294]:  </features>
Dec  3 18:23:42 compute-0 nova_compute[347294]: </domainCapabilities>
Dec  3 18:23:42 compute-0 nova_compute[347294]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec  3 18:23:42 compute-0 nova_compute[347294]: 2025-12-03 18:23:42.624 347298 DEBUG nova.virt.libvirt.host [None req-ee04da14-0dfa-4d43-9c74-19eadaa3a4b0 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec  3 18:23:42 compute-0 nova_compute[347294]: 2025-12-03 18:23:42.628 347298 DEBUG nova.virt.libvirt.host [None req-ee04da14-0dfa-4d43-9c74-19eadaa3a4b0 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Dec  3 18:23:42 compute-0 nova_compute[347294]: <domainCapabilities>
Dec  3 18:23:42 compute-0 nova_compute[347294]:  <path>/usr/libexec/qemu-kvm</path>
Dec  3 18:23:42 compute-0 nova_compute[347294]:  <domain>kvm</domain>
Dec  3 18:23:42 compute-0 nova_compute[347294]:  <machine>pc-q35-rhel9.8.0</machine>
Dec  3 18:23:42 compute-0 nova_compute[347294]:  <arch>x86_64</arch>
Dec  3 18:23:42 compute-0 nova_compute[347294]:  <vcpu max='4096'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:  <iothreads supported='yes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:  <os supported='yes'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <enum name='firmware'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <value>efi</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <loader supported='yes'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='type'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>rom</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>pflash</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='readonly'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>yes</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>no</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='secure'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>yes</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>no</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </loader>
Dec  3 18:23:42 compute-0 nova_compute[347294]:  </os>
Dec  3 18:23:42 compute-0 nova_compute[347294]:  <cpu>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <mode name='host-passthrough' supported='yes'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='hostPassthroughMigratable'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>on</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>off</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </mode>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <mode name='maximum' supported='yes'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='maximumMigratable'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>on</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>off</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </mode>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <mode name='host-model' supported='yes'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <vendor>AMD</vendor>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='x2apic'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='tsc-deadline'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='hypervisor'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='tsc_adjust'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='spec-ctrl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='stibp'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='ssbd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='cmp_legacy'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='overflow-recov'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='succor'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='ibrs'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='amd-ssbd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='virt-ssbd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='lbrv'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='tsc-scale'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='vmcb-clean'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='flushbyasid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='pause-filter'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='pfthreshold'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='svme-addr-chk'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='disable' name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </mode>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <mode name='custom' supported='yes'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Broadwell'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Broadwell-IBRS'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Broadwell-noTSX'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Broadwell-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Broadwell-v2'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Broadwell-v3'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Broadwell-v4'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Cascadelake-Server'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Cascadelake-Server-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Cascadelake-Server-v2'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Cascadelake-Server-v3'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Cascadelake-Server-v4'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Cascadelake-Server-v5'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Cooperlake'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='taa-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Cooperlake-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='taa-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Cooperlake-v2'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='taa-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Denverton'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='mpx'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Denverton-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='mpx'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Denverton-v2'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Denverton-v3'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Dhyana-v2'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='EPYC-Genoa'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amd-psfd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='auto-ibrs'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bitalg'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512ifma'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='la57'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='no-nested-data-bp'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='null-sel-clr-base'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='stibp-always-on'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='EPYC-Genoa-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amd-psfd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='auto-ibrs'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bitalg'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512ifma'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='la57'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='no-nested-data-bp'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='null-sel-clr-base'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='stibp-always-on'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='EPYC-Milan'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='EPYC-Milan-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='EPYC-Milan-v2'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amd-psfd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='no-nested-data-bp'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='null-sel-clr-base'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='stibp-always-on'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='EPYC-Rome'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='EPYC-Rome-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='EPYC-Rome-v2'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='EPYC-Rome-v3'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='EPYC-v3'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='EPYC-v4'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='GraniteRapids'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-fp16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-int8'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-tile'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx-vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-fp16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bitalg'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512ifma'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='bus-lock-detect'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fbsdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrc'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrs'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fzrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='la57'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='mcdt-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pbrsb-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='prefetchiti'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='psdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='sbdr-ssdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='serialize'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='taa-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='tsx-ldtrk'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xfd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='GraniteRapids-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-fp16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-int8'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-tile'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx-vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-fp16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bitalg'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512ifma'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='bus-lock-detect'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fbsdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrc'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrs'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fzrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='la57'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='mcdt-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pbrsb-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='prefetchiti'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='psdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='sbdr-ssdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='serialize'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='taa-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='tsx-ldtrk'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xfd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='GraniteRapids-v2'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-fp16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-int8'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-tile'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx-vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx10'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx10-128'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx10-256'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx10-512'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-fp16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bitalg'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512ifma'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='bus-lock-detect'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='cldemote'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fbsdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrc'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrs'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fzrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='la57'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='mcdt-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='movdir64b'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='movdiri'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pbrsb-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='prefetchiti'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='psdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='sbdr-ssdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='serialize'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ss'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='taa-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='tsx-ldtrk'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xfd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Haswell'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Haswell-IBRS'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Haswell-noTSX'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Haswell-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Haswell-v2'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Haswell-v3'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Haswell-v4'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Icelake-Server'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bitalg'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='la57'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Icelake-Server-noTSX'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bitalg'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='la57'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Icelake-Server-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bitalg'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='la57'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Icelake-Server-v2'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bitalg'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='la57'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Icelake-Server-v3'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bitalg'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='la57'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='taa-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Icelake-Server-v4'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bitalg'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512ifma'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='la57'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='taa-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Icelake-Server-v5'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bitalg'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512ifma'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='la57'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='taa-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Icelake-Server-v6'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bitalg'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512ifma'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='la57'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='taa-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Icelake-Server-v7'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bitalg'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512ifma'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='la57'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='taa-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='IvyBridge'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='IvyBridge-IBRS'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='IvyBridge-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='IvyBridge-v2'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='KnightsMill'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-4fmaps'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-4vnniw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512er'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512pf'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ss'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='KnightsMill-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-4fmaps'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-4vnniw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512er'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512pf'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ss'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Opteron_G4'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fma4'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xop'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Opteron_G4-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fma4'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xop'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Opteron_G5'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fma4'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='tbm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xop'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Opteron_G5-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fma4'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='tbm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xop'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='SapphireRapids'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-int8'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-tile'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx-vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-fp16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bitalg'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512ifma'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='bus-lock-detect'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrc'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrs'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fzrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='la57'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='serialize'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='taa-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='tsx-ldtrk'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xfd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='SapphireRapids-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-int8'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-tile'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx-vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-fp16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bitalg'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512ifma'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='bus-lock-detect'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrc'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrs'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fzrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='la57'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='serialize'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='taa-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='tsx-ldtrk'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xfd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='SapphireRapids-v2'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-int8'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-tile'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx-vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-fp16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bitalg'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512ifma'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='bus-lock-detect'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fbsdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrc'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrs'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fzrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='la57'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='psdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='sbdr-ssdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='serialize'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='taa-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='tsx-ldtrk'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xfd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='SapphireRapids-v3'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-int8'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-tile'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx-vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-fp16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bitalg'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512ifma'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='bus-lock-detect'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='cldemote'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fbsdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrc'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrs'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fzrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='la57'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='movdir64b'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='movdiri'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='psdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='sbdr-ssdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='serialize'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ss'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='taa-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='tsx-ldtrk'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xfd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='SierraForest'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx-ifma'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx-ne-convert'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx-vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx-vnni-int8'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='bus-lock-detect'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='cmpccxadd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fbsdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrs'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='mcdt-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pbrsb-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='psdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='sbdr-ssdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='serialize'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='SierraForest-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx-ifma'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx-ne-convert'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx-vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx-vnni-int8'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='bus-lock-detect'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='cmpccxadd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fbsdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrs'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='mcdt-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pbrsb-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='psdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='sbdr-ssdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='serialize'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Skylake-Client'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Skylake-Client-IBRS'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Skylake-Client-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Skylake-Client-v2'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Skylake-Client-v3'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Skylake-Client-v4'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Skylake-Server'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Skylake-Server-IBRS'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Skylake-Server-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Skylake-Server-v2'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Skylake-Server-v3'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Skylake-Server-v4'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Skylake-Server-v5'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Snowridge'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='cldemote'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='core-capability'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='movdir64b'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='movdiri'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='mpx'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='split-lock-detect'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Snowridge-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='cldemote'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='core-capability'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='movdir64b'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='movdiri'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='mpx'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='split-lock-detect'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Snowridge-v2'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='cldemote'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='core-capability'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='movdir64b'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='movdiri'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='split-lock-detect'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Snowridge-v3'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='cldemote'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='core-capability'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='movdir64b'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='movdiri'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='split-lock-detect'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Snowridge-v4'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='cldemote'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='movdir64b'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='movdiri'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='athlon'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='3dnow'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='3dnowext'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='athlon-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='3dnow'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='3dnowext'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='core2duo'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ss'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='core2duo-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ss'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='coreduo'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ss'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='coreduo-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ss'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='n270'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ss'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='n270-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ss'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='phenom'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='3dnow'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='3dnowext'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='phenom-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='3dnow'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='3dnowext'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </mode>
Dec  3 18:23:42 compute-0 nova_compute[347294]:  </cpu>
Dec  3 18:23:42 compute-0 nova_compute[347294]:  <memoryBacking supported='yes'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <enum name='sourceType'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <value>file</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <value>anonymous</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <value>memfd</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:  </memoryBacking>
Dec  3 18:23:42 compute-0 nova_compute[347294]:  <devices>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <disk supported='yes'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='diskDevice'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>disk</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>cdrom</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>floppy</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>lun</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='bus'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>fdc</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>scsi</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>virtio</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>usb</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>sata</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='model'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>virtio</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>virtio-transitional</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>virtio-non-transitional</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </disk>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <graphics supported='yes'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='type'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>vnc</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>egl-headless</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>dbus</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </graphics>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <video supported='yes'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='modelType'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>vga</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>cirrus</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>virtio</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>none</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>bochs</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>ramfb</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </video>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <hostdev supported='yes'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='mode'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>subsystem</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='startupPolicy'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>default</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>mandatory</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>requisite</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>optional</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='subsysType'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>usb</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>pci</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>scsi</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='capsType'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='pciBackend'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </hostdev>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <rng supported='yes'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='model'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>virtio</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>virtio-transitional</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>virtio-non-transitional</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='backendModel'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>random</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>egd</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>builtin</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </rng>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <filesystem supported='yes'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='driverType'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>path</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>handle</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>virtiofs</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </filesystem>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <tpm supported='yes'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='model'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>tpm-tis</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>tpm-crb</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='backendModel'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>emulator</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>external</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='backendVersion'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>2.0</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </tpm>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <redirdev supported='yes'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='bus'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>usb</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </redirdev>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <channel supported='yes'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='type'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>pty</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>unix</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </channel>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <crypto supported='yes'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='model'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='type'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>qemu</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='backendModel'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>builtin</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </crypto>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <interface supported='yes'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='backendType'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>default</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>passt</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </interface>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <panic supported='yes'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='model'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>isa</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>hyperv</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </panic>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <console supported='yes'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='type'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>null</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>vc</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>pty</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>dev</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>file</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>pipe</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>stdio</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>udp</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>tcp</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>unix</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>qemu-vdagent</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>dbus</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </console>
Dec  3 18:23:42 compute-0 nova_compute[347294]:  </devices>
Dec  3 18:23:42 compute-0 nova_compute[347294]:  <features>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <gic supported='no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <vmcoreinfo supported='yes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <genid supported='yes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <backingStoreInput supported='yes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <backup supported='yes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <async-teardown supported='yes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <ps2 supported='yes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <sev supported='no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <sgx supported='no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <hyperv supported='yes'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='features'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>relaxed</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>vapic</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>spinlocks</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>vpindex</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>runtime</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>synic</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>stimer</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>reset</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>vendor_id</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>frequencies</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>reenlightenment</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>tlbflush</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>ipi</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>avic</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>emsr_bitmap</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>xmm_input</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <defaults>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <spinlocks>4095</spinlocks>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <stimer_direct>on</stimer_direct>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <tlbflush_direct>on</tlbflush_direct>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <tlbflush_extended>on</tlbflush_extended>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </defaults>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </hyperv>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <launchSecurity supported='yes'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='sectype'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>tdx</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </launchSecurity>
Dec  3 18:23:42 compute-0 nova_compute[347294]:  </features>
Dec  3 18:23:42 compute-0 nova_compute[347294]: </domainCapabilities>
Dec  3 18:23:42 compute-0 nova_compute[347294]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
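The <domainCapabilities> document above is what nova's _get_domain_capabilities() (nova/virt/libvirt/host.py) fetches from libvirtd once per emulator/arch/machine-type combination. A minimal sketch of the same query through the libvirt-python bindings, assuming the bindings are installed and the caller can reach qemu:///system on the compute host:

    import libvirt

    # Connect to the system libvirtd (assumption: run locally on the
    # compute host with sufficient privileges, e.g. root or a member
    # of the libvirt group).
    conn = libvirt.open("qemu:///system")

    # Request the <domainCapabilities> XML for the emulator, arch,
    # machine type and virt type shown in the dump above (<path>,
    # <arch>, <machine>, <domain>). Any argument may be None to let
    # libvirt choose a default.
    caps_xml = conn.getDomainCapabilities(
        "/usr/libexec/qemu-kvm",  # emulatorbin
        "x86_64",                 # arch
        "pc",                     # machine type (alias of pc-i440fx-rhel7.6.0)
        "kvm",                    # virttype
        0,                        # flags (currently unused)
    )
    print(caps_xml)
    conn.close()

The same document can be retrieved from the CLI with: virsh domcapabilities --emulatorbin /usr/libexec/qemu-kvm --arch x86_64 --machine pc --virttype kvm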
Dec  3 18:23:42 compute-0 nova_compute[347294]: 2025-12-03 18:23:42.751 347298 DEBUG nova.virt.libvirt.host [None req-ee04da14-0dfa-4d43-9c74-19eadaa3a4b0 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Dec  3 18:23:42 compute-0 nova_compute[347294]: <domainCapabilities>
Dec  3 18:23:42 compute-0 nova_compute[347294]:  <path>/usr/libexec/qemu-kvm</path>
Dec  3 18:23:42 compute-0 nova_compute[347294]:  <domain>kvm</domain>
Dec  3 18:23:42 compute-0 nova_compute[347294]:  <machine>pc-i440fx-rhel7.6.0</machine>
Dec  3 18:23:42 compute-0 nova_compute[347294]:  <arch>x86_64</arch>
Dec  3 18:23:42 compute-0 nova_compute[347294]:  <vcpu max='240'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:  <iothreads supported='yes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:  <os supported='yes'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <enum name='firmware'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <loader supported='yes'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='type'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>rom</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>pflash</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='readonly'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>yes</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>no</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='secure'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>no</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </loader>
Dec  3 18:23:42 compute-0 nova_compute[347294]:  </os>
Dec  3 18:23:42 compute-0 nova_compute[347294]:  <cpu>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <mode name='host-passthrough' supported='yes'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='hostPassthroughMigratable'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>on</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>off</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </mode>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <mode name='maximum' supported='yes'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <enum name='maximumMigratable'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>on</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <value>off</value>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </mode>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <mode name='host-model' supported='yes'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <vendor>AMD</vendor>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='x2apic'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='tsc-deadline'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='hypervisor'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='tsc_adjust'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='spec-ctrl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='stibp'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='ssbd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='cmp_legacy'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='overflow-recov'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='succor'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='ibrs'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='amd-ssbd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='virt-ssbd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='lbrv'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='tsc-scale'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='vmcb-clean'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='flushbyasid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='pause-filter'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='pfthreshold'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='svme-addr-chk'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <feature policy='disable' name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    </mode>
Dec  3 18:23:42 compute-0 nova_compute[347294]:    <mode name='custom' supported='yes'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Broadwell'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Broadwell-IBRS'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Broadwell-noTSX'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Broadwell-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Broadwell-v2'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Broadwell-v3'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Broadwell-v4'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Cascadelake-Server'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Cascadelake-Server-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Cascadelake-Server-v2'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Cascadelake-Server-v3'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Cascadelake-Server-v4'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Cascadelake-Server-v5'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Cooperlake'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='taa-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Cooperlake-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='taa-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Cooperlake-v2'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='taa-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Denverton'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='mpx'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Denverton-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='mpx'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Denverton-v2'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Denverton-v3'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Dhyana-v2'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='EPYC-Genoa'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amd-psfd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='auto-ibrs'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bitalg'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512ifma'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='la57'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='no-nested-data-bp'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='null-sel-clr-base'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='stibp-always-on'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='EPYC-Genoa-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amd-psfd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='auto-ibrs'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bitalg'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512ifma'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='la57'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='no-nested-data-bp'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='null-sel-clr-base'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='stibp-always-on'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='EPYC-Milan'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='EPYC-Milan-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='EPYC-Milan-v2'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amd-psfd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='no-nested-data-bp'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='null-sel-clr-base'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='stibp-always-on'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='EPYC-Rome'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='EPYC-Rome-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='EPYC-Rome-v2'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='EPYC-Rome-v3'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='EPYC-v3'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='EPYC-v4'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='GraniteRapids'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-fp16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-int8'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-tile'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx-vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-fp16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bitalg'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512ifma'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='bus-lock-detect'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fbsdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrc'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrs'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fzrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='la57'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='mcdt-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pbrsb-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='prefetchiti'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='psdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='sbdr-ssdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='serialize'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='taa-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='tsx-ldtrk'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xfd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='GraniteRapids-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-fp16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-int8'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-tile'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx-vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-fp16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bitalg'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512ifma'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='bus-lock-detect'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fbsdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrc'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrs'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fzrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='la57'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='mcdt-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pbrsb-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='prefetchiti'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='psdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='sbdr-ssdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='serialize'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='taa-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='tsx-ldtrk'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xfd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='GraniteRapids-v2'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-fp16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-int8'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-tile'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx-vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx10'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx10-128'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx10-256'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx10-512'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-fp16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bitalg'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512ifma'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='bus-lock-detect'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='cldemote'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fbsdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrc'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrs'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fzrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='la57'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='mcdt-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='movdir64b'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='movdiri'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pbrsb-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='prefetchiti'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='psdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='sbdr-ssdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='serialize'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ss'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='taa-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='tsx-ldtrk'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xfd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Haswell'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Haswell-IBRS'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Haswell-noTSX'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Haswell-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Haswell-v2'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Haswell-v3'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Haswell-v4'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Icelake-Server'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bitalg'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='la57'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Icelake-Server-noTSX'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bitalg'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='la57'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Icelake-Server-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bitalg'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='la57'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Icelake-Server-v2'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bitalg'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='la57'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Icelake-Server-v3'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bitalg'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='la57'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='taa-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Icelake-Server-v4'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bitalg'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512ifma'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='la57'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='taa-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Icelake-Server-v5'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bitalg'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512ifma'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='la57'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='taa-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Icelake-Server-v6'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bitalg'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512ifma'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='la57'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='taa-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Icelake-Server-v7'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bitalg'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512ifma'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='la57'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='taa-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='IvyBridge'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='IvyBridge-IBRS'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='IvyBridge-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='IvyBridge-v2'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='KnightsMill'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-4fmaps'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-4vnniw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512er'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512pf'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ss'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='KnightsMill-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-4fmaps'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-4vnniw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512er'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512pf'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ss'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Opteron_G4'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fma4'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xop'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Opteron_G4-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fma4'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xop'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Opteron_G5'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fma4'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='tbm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xop'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='Opteron_G5-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fma4'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='tbm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xop'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='SapphireRapids'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-int8'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-tile'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx-vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-fp16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bitalg'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512ifma'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='bus-lock-detect'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrc'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrs'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fzrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='la57'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='serialize'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='taa-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='tsx-ldtrk'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xfd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='SapphireRapids-v1'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-int8'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-tile'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx-vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-fp16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bitalg'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512ifma'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='bus-lock-detect'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrc'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrs'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fzrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='la57'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='serialize'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='taa-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='tsx-ldtrk'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xfd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='SapphireRapids-v2'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-int8'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-tile'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx-vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-fp16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bitalg'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512ifma'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='bus-lock-detect'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fbsdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrc'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fsrs'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='fzrm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='la57'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='psdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='sbdr-ssdp-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='serialize'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='taa-no'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='tsx-ldtrk'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xfd'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  3 18:23:42 compute-0 nova_compute[347294]:      <blockers model='SapphireRapids-v3'>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-int8'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='amx-tile'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx-vnni'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-bf16'/>
Dec  3 18:23:42 compute-0 nova_compute[347294]:        <feature name='avx512-fp16'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='avx512bitalg'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='avx512ifma'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='avx512vbmi'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='avx512vnni'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='bus-lock-detect'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='cldemote'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='fbsdp-no'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='fsrc'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='fsrm'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='fsrs'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='fzrm'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='la57'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='movdir64b'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='movdiri'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='psdp-no'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='sbdr-ssdp-no'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='serialize'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='ss'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='taa-no'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='tsx-ldtrk'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='xfd'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <blockers model='SierraForest'>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='avx-ifma'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='avx-ne-convert'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='avx-vnni'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='avx-vnni-int8'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='bus-lock-detect'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='cmpccxadd'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='fbsdp-no'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='fsrm'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='fsrs'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='mcdt-no'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='pbrsb-no'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='psdp-no'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='sbdr-ssdp-no'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='serialize'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <blockers model='SierraForest-v1'>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='avx-ifma'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='avx-ne-convert'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='avx-vnni'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='avx-vnni-int8'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='bus-lock-detect'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='cmpccxadd'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='fbsdp-no'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='fsrm'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='fsrs'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='ibrs-all'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='mcdt-no'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='pbrsb-no'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='psdp-no'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='sbdr-ssdp-no'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='serialize'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='vaes'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <blockers model='Skylake-Client'>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <blockers model='Skylake-Client-IBRS'>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <blockers model='Skylake-Client-v1'>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <blockers model='Skylake-Client-v2'>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <blockers model='Skylake-Client-v3'>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <blockers model='Skylake-Client-v4'>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <blockers model='Skylake-Server'>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <blockers model='Skylake-Server-IBRS'>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <blockers model='Skylake-Server-v1'>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <blockers model='Skylake-Server-v2'>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='hle'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='rtm'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <blockers model='Skylake-Server-v3'>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <blockers model='Skylake-Server-v4'>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <blockers model='Skylake-Server-v5'>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='avx512bw'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='avx512cd'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='avx512dq'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='avx512f'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='avx512vl'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='invpcid'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='pcid'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='pku'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <blockers model='Snowridge'>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='cldemote'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='core-capability'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='movdir64b'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='movdiri'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='mpx'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='split-lock-detect'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <blockers model='Snowridge-v1'>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='cldemote'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='core-capability'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='movdir64b'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='movdiri'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='mpx'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='split-lock-detect'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <blockers model='Snowridge-v2'>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='cldemote'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='core-capability'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='movdir64b'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='movdiri'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='split-lock-detect'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <blockers model='Snowridge-v3'>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='cldemote'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='core-capability'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='movdir64b'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='movdiri'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='split-lock-detect'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <blockers model='Snowridge-v4'>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='cldemote'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='erms'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='gfni'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='movdir64b'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='movdiri'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='xsaves'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <blockers model='athlon'>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='3dnow'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='3dnowext'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <blockers model='athlon-v1'>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='3dnow'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='3dnowext'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <blockers model='core2duo'>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='ss'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <blockers model='core2duo-v1'>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='ss'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <blockers model='coreduo'>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='ss'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <blockers model='coreduo-v1'>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='ss'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <blockers model='n270'>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='ss'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <blockers model='n270-v1'>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='ss'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <blockers model='phenom'>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='3dnow'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='3dnowext'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <blockers model='phenom-v1'>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='3dnow'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <feature name='3dnowext'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      </blockers>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  3 18:23:43 compute-0 nova_compute[347294]:    </mode>
Dec  3 18:23:43 compute-0 nova_compute[347294]:  </cpu>
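
The run of <model> and <blockers> elements above is libvirt's per-model usability report: for every named CPU model the emulator knows, usable='yes'/'no' says whether the model can be started on this host, and the matching <blockers> element lists exactly which host CPU features are missing. A minimal sketch of reducing that report to a readable summary, assuming the <domainCapabilities> XML above has been saved to a local file (domcapabilities.xml is a hypothetical path):

    # Minimal sketch: summarize CPU model usability from a saved
    # <domainCapabilities> document. The element and attribute names
    # (model, blockers, usable, feature/name) match the log above.
    import xml.etree.ElementTree as ET

    root = ET.parse('domcapabilities.xml').getroot()
    # Blockers are keyed by the model name they apply to.
    blockers = {
        b.get('model'): [f.get('name') for f in b.findall('feature')]
        for b in root.iter('blockers')
    }
    for model in root.iter('model'):
        name = (model.text or '').strip()
        if model.get('usable') == 'yes':
            print(f'{name}: usable')
        elif model.get('usable') == 'no':
            print(f"{name}: blocked by {', '.join(blockers.get(name, []))}")

For the Skylake-Server and Snowridge entries above this prints the missing AVX-512 family, pku, pcid and similar features, which is why those models are reported unusable on this guest while the Westmere variants remain usable='yes'.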
Dec  3 18:23:43 compute-0 nova_compute[347294]:  <memoryBacking supported='yes'>
Dec  3 18:23:43 compute-0 nova_compute[347294]:    <enum name='sourceType'>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <value>file</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <value>anonymous</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <value>memfd</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:    </enum>
Dec  3 18:23:43 compute-0 nova_compute[347294]:  </memoryBacking>
Dec  3 18:23:43 compute-0 nova_compute[347294]:  <devices>
Dec  3 18:23:43 compute-0 nova_compute[347294]:    <disk supported='yes'>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <enum name='diskDevice'>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>disk</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>cdrom</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>floppy</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>lun</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <enum name='bus'>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>ide</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>fdc</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>scsi</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>virtio</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>usb</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>sata</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <enum name='model'>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>virtio</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>virtio-transitional</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>virtio-non-transitional</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:43 compute-0 nova_compute[347294]:    </disk>
Dec  3 18:23:43 compute-0 nova_compute[347294]:    <graphics supported='yes'>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <enum name='type'>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>vnc</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>egl-headless</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>dbus</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:43 compute-0 nova_compute[347294]:    </graphics>
Dec  3 18:23:43 compute-0 nova_compute[347294]:    <video supported='yes'>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <enum name='modelType'>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>vga</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>cirrus</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>virtio</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>none</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>bochs</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>ramfb</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:43 compute-0 nova_compute[347294]:    </video>
Dec  3 18:23:43 compute-0 nova_compute[347294]:    <hostdev supported='yes'>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <enum name='mode'>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>subsystem</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <enum name='startupPolicy'>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>default</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>mandatory</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>requisite</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>optional</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <enum name='subsysType'>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>usb</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>pci</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>scsi</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <enum name='capsType'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <enum name='pciBackend'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:    </hostdev>
Dec  3 18:23:43 compute-0 nova_compute[347294]:    <rng supported='yes'>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <enum name='model'>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>virtio</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>virtio-transitional</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>virtio-non-transitional</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <enum name='backendModel'>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>random</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>egd</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>builtin</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:43 compute-0 nova_compute[347294]:    </rng>
Dec  3 18:23:43 compute-0 nova_compute[347294]:    <filesystem supported='yes'>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <enum name='driverType'>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>path</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>handle</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>virtiofs</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:43 compute-0 nova_compute[347294]:    </filesystem>
Dec  3 18:23:43 compute-0 nova_compute[347294]:    <tpm supported='yes'>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <enum name='model'>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>tpm-tis</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>tpm-crb</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <enum name='backendModel'>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>emulator</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>external</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <enum name='backendVersion'>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>2.0</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:43 compute-0 nova_compute[347294]:    </tpm>
Dec  3 18:23:43 compute-0 nova_compute[347294]:    <redirdev supported='yes'>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <enum name='bus'>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>usb</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:43 compute-0 nova_compute[347294]:    </redirdev>
Dec  3 18:23:43 compute-0 nova_compute[347294]:    <channel supported='yes'>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <enum name='type'>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>pty</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>unix</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:43 compute-0 nova_compute[347294]:    </channel>
Dec  3 18:23:43 compute-0 nova_compute[347294]:    <crypto supported='yes'>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <enum name='model'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <enum name='type'>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>qemu</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <enum name='backendModel'>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>builtin</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:43 compute-0 nova_compute[347294]:    </crypto>
Dec  3 18:23:43 compute-0 nova_compute[347294]:    <interface supported='yes'>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <enum name='backendType'>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>default</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>passt</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:43 compute-0 nova_compute[347294]:    </interface>
Dec  3 18:23:43 compute-0 nova_compute[347294]:    <panic supported='yes'>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <enum name='model'>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>isa</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>hyperv</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:43 compute-0 nova_compute[347294]:    </panic>
Dec  3 18:23:43 compute-0 nova_compute[347294]:    <console supported='yes'>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <enum name='type'>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>null</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>vc</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>pty</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>dev</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>file</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>pipe</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>stdio</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>udp</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>tcp</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>unix</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>qemu-vdagent</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>dbus</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:43 compute-0 nova_compute[347294]:    </console>
Dec  3 18:23:43 compute-0 nova_compute[347294]:  </devices>
Dec  3 18:23:43 compute-0 nova_compute[347294]:  <features>
Dec  3 18:23:43 compute-0 nova_compute[347294]:    <gic supported='no'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:    <vmcoreinfo supported='yes'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:    <genid supported='yes'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:    <backingStoreInput supported='yes'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:    <backup supported='yes'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:    <async-teardown supported='yes'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:    <ps2 supported='yes'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:    <sev supported='no'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:    <sgx supported='no'/>
Dec  3 18:23:43 compute-0 nova_compute[347294]:    <hyperv supported='yes'>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <enum name='features'>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>relaxed</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>vapic</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>spinlocks</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>vpindex</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>runtime</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>synic</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>stimer</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>reset</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>vendor_id</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>frequencies</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>reenlightenment</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>tlbflush</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>ipi</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>avic</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>emsr_bitmap</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>xmm_input</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <defaults>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <spinlocks>4095</spinlocks>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <stimer_direct>on</stimer_direct>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <tlbflush_direct>on</tlbflush_direct>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <tlbflush_extended>on</tlbflush_extended>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      </defaults>
Dec  3 18:23:43 compute-0 nova_compute[347294]:    </hyperv>
Dec  3 18:23:43 compute-0 nova_compute[347294]:    <launchSecurity supported='yes'>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      <enum name='sectype'>
Dec  3 18:23:43 compute-0 nova_compute[347294]:        <value>tdx</value>
Dec  3 18:23:43 compute-0 nova_compute[347294]:      </enum>
Dec  3 18:23:43 compute-0 nova_compute[347294]:    </launchSecurity>
Dec  3 18:23:43 compute-0 nova_compute[347294]:  </features>
Dec  3 18:23:43 compute-0 nova_compute[347294]: </domainCapabilities>
Dec  3 18:23:43 compute-0 nova_compute[347294]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
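
The XML dump above comes from nova's _get_domain_capabilities (host.py:1037), a thin wrapper over the libvirt API; the same document can be fetched with `virsh domcapabilities` or, as a minimal sketch, via the libvirt Python bindings (connection URI and arch/virttype arguments chosen here to match this host):

    # Sketch: fetch the same domainCapabilities XML that nova logged above.
    # Requires libvirt-python and a reachable qemu:///system socket.
    import libvirt

    conn = libvirt.open('qemu:///system')
    # Arguments: emulator binary, arch, machine type, virt type, flags;
    # None picks the driver defaults, mirroring the x86_64 KVM case above.
    print(conn.getDomainCapabilities(None, 'x86_64', None, 'kvm', 0))
    conn.close()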
Dec  3 18:23:43 compute-0 nova_compute[347294]: 2025-12-03 18:23:42.869 347298 DEBUG nova.virt.libvirt.host [None req-ee04da14-0dfa-4d43-9c74-19eadaa3a4b0 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Dec  3 18:23:43 compute-0 nova_compute[347294]: 2025-12-03 18:23:42.870 347298 INFO nova.virt.libvirt.host [None req-ee04da14-0dfa-4d43-9c74-19eadaa3a4b0 - - - - - -] Secure Boot support detected
Dec  3 18:23:43 compute-0 nova_compute[347294]: 2025-12-03 18:23:42.874 347298 INFO nova.virt.libvirt.driver [None req-ee04da14-0dfa-4d43-9c74-19eadaa3a4b0 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Dec  3 18:23:43 compute-0 nova_compute[347294]: 2025-12-03 18:23:42.896 347298 DEBUG nova.virt.libvirt.driver [None req-ee04da14-0dfa-4d43-9c74-19eadaa3a4b0 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Dec  3 18:23:43 compute-0 nova_compute[347294]: 2025-12-03 18:23:43.239 347298 INFO nova.virt.node [None req-ee04da14-0dfa-4d43-9c74-19eadaa3a4b0 - - - - - -] Determined node identity 00cd1895-22aa-49c6-bdb2-0991af662704 from /var/lib/nova/compute_id
Dec  3 18:23:43 compute-0 nova_compute[347294]: 2025-12-03 18:23:43.266 347298 WARNING nova.compute.manager [None req-ee04da14-0dfa-4d43-9c74-19eadaa3a4b0 - - - - - -] Compute nodes ['00cd1895-22aa-49c6-bdb2-0991af662704'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Dec  3 18:23:43 compute-0 python3.9[348244]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  3 18:23:43 compute-0 nova_compute[347294]: 2025-12-03 18:23:43.302 347298 INFO nova.compute.manager [None req-ee04da14-0dfa-4d43-9c74-19eadaa3a4b0 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Dec  3 18:23:43 compute-0 nova_compute[347294]: 2025-12-03 18:23:43.360 347298 WARNING nova.compute.manager [None req-ee04da14-0dfa-4d43-9c74-19eadaa3a4b0 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Dec  3 18:23:43 compute-0 nova_compute[347294]: 2025-12-03 18:23:43.360 347298 DEBUG oslo_concurrency.lockutils [None req-ee04da14-0dfa-4d43-9c74-19eadaa3a4b0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:23:43 compute-0 nova_compute[347294]: 2025-12-03 18:23:43.361 347298 DEBUG oslo_concurrency.lockutils [None req-ee04da14-0dfa-4d43-9c74-19eadaa3a4b0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:23:43 compute-0 nova_compute[347294]: 2025-12-03 18:23:43.361 347298 DEBUG oslo_concurrency.lockutils [None req-ee04da14-0dfa-4d43-9c74-19eadaa3a4b0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:23:43 compute-0 nova_compute[347294]: 2025-12-03 18:23:43.362 347298 DEBUG nova.compute.resource_tracker [None req-ee04da14-0dfa-4d43-9c74-19eadaa3a4b0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  3 18:23:43 compute-0 nova_compute[347294]: 2025-12-03 18:23:43.363 347298 DEBUG oslo_concurrency.processutils [None req-ee04da14-0dfa-4d43-9c74-19eadaa3a4b0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
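
Before reporting resources, the tracker shells out to Ceph (the "Running cmd" line above) to size the RBD-backed disk pool. A rough equivalent of that probe, assuming the same CLI flags as the log and the stats/total_avail_bytes keys that `ceph df --format=json` emits (verify the key names against your Ceph release):

    # Sketch: the capacity probe behind the "Running cmd" line above.
    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    stats = json.loads(out)['stats']
    print('cluster avail GiB:', round(stats['total_avail_bytes'] / 2**30, 1))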
Dec  3 18:23:43 compute-0 systemd[1]: Stopping nova_compute container...
Dec  3 18:23:43 compute-0 nova_compute[347294]: 2025-12-03 18:23:43.734 347298 DEBUG oslo_concurrency.lockutils [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  3 18:23:43 compute-0 nova_compute[347294]: 2025-12-03 18:23:43.735 347298 DEBUG oslo_concurrency.lockutils [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  3 18:23:43 compute-0 nova_compute[347294]: 2025-12-03 18:23:43.735 347298 DEBUG oslo_concurrency.lockutils [None req-39853719-4cca-432a-ac2f-181027573968 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  3 18:23:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:23:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:23:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:23:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:23:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:23:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:23:44 compute-0 virtqemud[138705]: End of file while reading data: Input/output error
Dec  3 18:23:44 compute-0 systemd[1]: libpod-2f6936e5fece2fc435d87492192851f67236800e7e0a2dedc5ef13591d74c1fc.scope: Deactivated successfully.
Dec  3 18:23:44 compute-0 systemd[1]: libpod-2f6936e5fece2fc435d87492192851f67236800e7e0a2dedc5ef13591d74c1fc.scope: Consumed 3.758s CPU time.
Dec  3 18:23:44 compute-0 podman[348248]: 2025-12-03 18:23:44.207981302 +0000 UTC m=+0.818625134 container died 2f6936e5fece2fc435d87492192851f67236800e7e0a2dedc5ef13591d74c1fc (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  3 18:23:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed246234d3cdde5c3ef2b0f1b3c16418dbd68912cd2ca2d1227bca42685e5b02-merged.mount: Deactivated successfully.
Dec  3 18:23:44 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2f6936e5fece2fc435d87492192851f67236800e7e0a2dedc5ef13591d74c1fc-userdata-shm.mount: Deactivated successfully.
Dec  3 18:23:44 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v727: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:23:46 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v728: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:23:46 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:23:46 compute-0 podman[348248]: 2025-12-03 18:23:46.814840729 +0000 UTC m=+3.425484591 container cleanup 2f6936e5fece2fc435d87492192851f67236800e7e0a2dedc5ef13591d74c1fc (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3, container_name=nova_compute)
Dec  3 18:23:46 compute-0 podman[348248]: nova_compute
Dec  3 18:23:46 compute-0 podman[348298]: nova_compute
Dec  3 18:23:46 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Dec  3 18:23:46 compute-0 systemd[1]: Stopped nova_compute container.
Dec  3 18:23:46 compute-0 systemd[1]: edpm_nova_compute.service: Consumed 1.079s CPU time, 18.3M memory peak, read 0B from disk, written 107.5K to disk.
Dec  3 18:23:46 compute-0 systemd[1]: Starting nova_compute container...
Dec  3 18:23:47 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:23:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed246234d3cdde5c3ef2b0f1b3c16418dbd68912cd2ca2d1227bca42685e5b02/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec  3 18:23:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed246234d3cdde5c3ef2b0f1b3c16418dbd68912cd2ca2d1227bca42685e5b02/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec  3 18:23:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed246234d3cdde5c3ef2b0f1b3c16418dbd68912cd2ca2d1227bca42685e5b02/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec  3 18:23:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed246234d3cdde5c3ef2b0f1b3c16418dbd68912cd2ca2d1227bca42685e5b02/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec  3 18:23:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed246234d3cdde5c3ef2b0f1b3c16418dbd68912cd2ca2d1227bca42685e5b02/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec  3 18:23:47 compute-0 podman[348309]: 2025-12-03 18:23:47.084893834 +0000 UTC m=+0.141124042 container init 2f6936e5fece2fc435d87492192851f67236800e7e0a2dedc5ef13591d74c1fc (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20251125)
Dec  3 18:23:47 compute-0 podman[348309]: 2025-12-03 18:23:47.111817101 +0000 UTC m=+0.168047309 container start 2f6936e5fece2fc435d87492192851f67236800e7e0a2dedc5ef13591d74c1fc (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, container_name=nova_compute, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  3 18:23:47 compute-0 podman[348309]: nova_compute
Dec  3 18:23:47 compute-0 systemd[1]: Started nova_compute container.
Dec  3 18:23:47 compute-0 nova_compute[348325]: + sudo -E kolla_set_configs
Dec  3 18:23:47 compute-0 nova_compute[348325]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  3 18:23:47 compute-0 nova_compute[348325]: INFO:__main__:Validating config file
Dec  3 18:23:47 compute-0 nova_compute[348325]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  3 18:23:47 compute-0 nova_compute[348325]: INFO:__main__:Copying service configuration files
Dec  3 18:23:47 compute-0 nova_compute[348325]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec  3 18:23:47 compute-0 nova_compute[348325]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec  3 18:23:47 compute-0 nova_compute[348325]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec  3 18:23:47 compute-0 nova_compute[348325]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Dec  3 18:23:47 compute-0 nova_compute[348325]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec  3 18:23:47 compute-0 nova_compute[348325]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec  3 18:23:47 compute-0 nova_compute[348325]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec  3 18:23:47 compute-0 nova_compute[348325]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec  3 18:23:47 compute-0 nova_compute[348325]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec  3 18:23:47 compute-0 nova_compute[348325]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  3 18:23:47 compute-0 nova_compute[348325]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  3 18:23:47 compute-0 nova_compute[348325]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  3 18:23:47 compute-0 nova_compute[348325]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Dec  3 18:23:47 compute-0 nova_compute[348325]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec  3 18:23:47 compute-0 nova_compute[348325]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec  3 18:23:47 compute-0 nova_compute[348325]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  3 18:23:47 compute-0 nova_compute[348325]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  3 18:23:47 compute-0 nova_compute[348325]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  3 18:23:47 compute-0 nova_compute[348325]: INFO:__main__:Deleting /etc/ceph
Dec  3 18:23:47 compute-0 nova_compute[348325]: INFO:__main__:Creating directory /etc/ceph
Dec  3 18:23:47 compute-0 nova_compute[348325]: INFO:__main__:Setting permission for /etc/ceph
Dec  3 18:23:47 compute-0 nova_compute[348325]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Dec  3 18:23:47 compute-0 nova_compute[348325]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec  3 18:23:47 compute-0 nova_compute[348325]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Dec  3 18:23:47 compute-0 nova_compute[348325]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec  3 18:23:47 compute-0 nova_compute[348325]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Dec  3 18:23:47 compute-0 nova_compute[348325]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec  3 18:23:47 compute-0 nova_compute[348325]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec  3 18:23:47 compute-0 nova_compute[348325]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Dec  3 18:23:47 compute-0 nova_compute[348325]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec  3 18:23:47 compute-0 nova_compute[348325]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec  3 18:23:47 compute-0 nova_compute[348325]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec  3 18:23:47 compute-0 nova_compute[348325]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec  3 18:23:47 compute-0 nova_compute[348325]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec  3 18:23:47 compute-0 nova_compute[348325]: INFO:__main__:Writing out command to execute
Dec  3 18:23:47 compute-0 nova_compute[348325]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec  3 18:23:47 compute-0 nova_compute[348325]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec  3 18:23:47 compute-0 nova_compute[348325]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec  3 18:23:47 compute-0 nova_compute[348325]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec  3 18:23:47 compute-0 nova_compute[348325]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec  3 18:23:47 compute-0 nova_compute[348325]: ++ cat /run_command
Dec  3 18:23:47 compute-0 nova_compute[348325]: + CMD=nova-compute
Dec  3 18:23:47 compute-0 nova_compute[348325]: + ARGS=
Dec  3 18:23:47 compute-0 nova_compute[348325]: + sudo kolla_copy_cacerts
Dec  3 18:23:47 compute-0 nova_compute[348325]: + [[ ! -n '' ]]
Dec  3 18:23:47 compute-0 nova_compute[348325]: + . kolla_extend_start
Dec  3 18:23:47 compute-0 nova_compute[348325]: Running command: 'nova-compute'
Dec  3 18:23:47 compute-0 nova_compute[348325]: + echo 'Running command: '\''nova-compute'\'''
Dec  3 18:23:47 compute-0 nova_compute[348325]: + umask 0022
Dec  3 18:23:47 compute-0 nova_compute[348325]: + exec nova-compute
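
The startup trace above is the standard kolla entrypoint sequence: kolla_set_configs reads /var/lib/kolla/config_files/config.json, re-copies every listed file into place because the strategy is COPY_ALWAYS, fixes ownership and permissions, and writes the service command to /run_command, which the shell wrapper then execs. A condensed sketch of the copy loop, assuming the config_files/source/dest layout kolla uses (the real script also handles directories, globs, permissions, and the deletions visible in the log):

    # Condensed sketch of the COPY_ALWAYS pass logged by kolla_set_configs.
    import json
    import shutil

    with open('/var/lib/kolla/config_files/config.json') as f:
        cfg = json.load(f)

    for entry in cfg.get('config_files', []):
        # Each entry maps a file in the config volume to its runtime path.
        print(f"Copying {entry['source']} to {entry['dest']}")
        shutil.copy(entry['source'], entry['dest'])

Because the strategy re-copies on every container start, the restart above is enough to pick up regenerated config such as the ceph keyring or 02-nova-host-specific.conf without rebuilding the image.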
Dec  3 18:23:48 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v729: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:23:48 compute-0 python3.9[348489]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Dec  3 18:23:48 compute-0 systemd[1]: Started libpod-conmon-5b32db9cdda1f2abcd87e9305dff47e4bcdbf2233ec1e9af06d8e9fbdfb96395.scope.
Dec  3 18:23:48 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:23:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ce4d4c7cd25a0fd8ae17fdaa12afb31d64b3c41869eb3785cb66e3d3c9a2bd8/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Dec  3 18:23:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ce4d4c7cd25a0fd8ae17fdaa12afb31d64b3c41869eb3785cb66e3d3c9a2bd8/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Dec  3 18:23:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ce4d4c7cd25a0fd8ae17fdaa12afb31d64b3c41869eb3785cb66e3d3c9a2bd8/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec  3 18:23:48 compute-0 podman[348512]: 2025-12-03 18:23:48.921872407 +0000 UTC m=+0.288607878 container init 5b32db9cdda1f2abcd87e9305dff47e4bcdbf2233ec1e9af06d8e9fbdfb96395 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=nova_compute_init, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, managed_by=edpm_ansible)
Dec  3 18:23:48 compute-0 podman[348512]: 2025-12-03 18:23:48.939357354 +0000 UTC m=+0.306092795 container start 5b32db9cdda1f2abcd87e9305dff47e4bcdbf2233ec1e9af06d8e9fbdfb96395 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:23:48 compute-0 python3.9[348489]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
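[Editor: the config_data embedded in the podman events above fully determines how nova_compute_init is launched. As a rough illustration only (not the actual edpm_ansible/podman_container code path, which creates and then starts the container separately), the equivalent one-shot run could be issued from Python like this, with every flag value taken from the config_data in the log:]

    # Hypothetical reconstruction of the container launch described by the
    # config_data above; illustrative only, not the edpm_ansible code path.
    import subprocess

    cmd = [
        "podman", "run", "--name", "nova_compute_init",
        "--user", "root",
        "--net", "none",
        "--security-opt", "label=disable",
        "--env", "NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id",
        "--volume", "/dev/log:/dev/log",
        "--volume", "/var/lib/nova:/var/lib/nova:shared",
        "--volume", "/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z",
        "--volume", "/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z",
        "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified",
        "bash", "-c",
        "python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init",
    ]
    subprocess.run(cmd, check=True)  # exits once the ownership fixup completes
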
Dec  3 18:23:49 compute-0 nova_compute_init[348534]: INFO:nova_statedir:Applying nova statedir ownership
Dec  3 18:23:49 compute-0 nova_compute_init[348534]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Dec  3 18:23:49 compute-0 nova_compute_init[348534]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Dec  3 18:23:49 compute-0 nova_compute_init[348534]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Dec  3 18:23:49 compute-0 nova_compute_init[348534]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Dec  3 18:23:49 compute-0 nova_compute_init[348534]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Dec  3 18:23:49 compute-0 nova_compute_init[348534]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Dec  3 18:23:49 compute-0 nova_compute_init[348534]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Dec  3 18:23:49 compute-0 nova_compute_init[348534]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Dec  3 18:23:49 compute-0 nova_compute_init[348534]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Dec  3 18:23:49 compute-0 nova_compute_init[348534]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Dec  3 18:23:49 compute-0 nova_compute_init[348534]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Dec  3 18:23:49 compute-0 nova_compute_init[348534]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Dec  3 18:23:49 compute-0 nova_compute_init[348534]: INFO:nova_statedir:Nova statedir ownership complete
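[Editor: the INFO lines above trace a recursive ownership fixup of the nova state directory. A minimal sketch of that traversal follows, assuming the target uid/gid 42436 seen in the log, the skip path from NOVA_STATEDIR_OWNERSHIP_SKIP, and chcon for the SELinux relabel; the real nova_statedir_ownership.py shipped into the container differs in detail:]

    # Sketch of the traversal the log shows; must run as root, as the
    # container does. Not the actual nova_statedir_ownership.py.
    import os
    import subprocess

    TARGET_UID = TARGET_GID = 42436  # nova user/group seen in the log
    SKIP = {os.environ.get("NOVA_STATEDIR_OWNERSHIP_SKIP",
                           "/var/lib/nova/compute_id")}
    SECONTEXT = "system_u:object_r:container_file_t:s0"

    def fix_ownership(path):
        st = os.lstat(path)
        print(f"Checking uid: {st.st_uid} gid: {st.st_gid} path: {path}")
        if (st.st_uid, st.st_gid) != (TARGET_UID, TARGET_GID):
            print(f"Changing ownership of {path} from {st.st_uid}:{st.st_gid} "
                  f"to {TARGET_UID}:{TARGET_GID}")
            os.lchown(path, TARGET_UID, TARGET_GID)
        # one way to apply the context the log shows being set
        subprocess.run(["chcon", SECONTEXT, path], check=False)

    for dirpath, dirnames, filenames in os.walk("/var/lib/nova"):
        fix_ownership(dirpath)
        for name in filenames:
            path = os.path.join(dirpath, name)
            if path in SKIP:
                continue  # e.g. /var/lib/nova/compute_id
            fix_ownership(path)
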
Dec  3 18:23:49 compute-0 systemd[1]: libpod-5b32db9cdda1f2abcd87e9305dff47e4bcdbf2233ec1e9af06d8e9fbdfb96395.scope: Deactivated successfully.
Dec  3 18:23:49 compute-0 podman[348548]: 2025-12-03 18:23:49.081960972 +0000 UTC m=+0.028740932 container died 5b32db9cdda1f2abcd87e9305dff47e4bcdbf2233ec1e9af06d8e9fbdfb96395 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=nova_compute_init, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 18:23:49 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5b32db9cdda1f2abcd87e9305dff47e4bcdbf2233ec1e9af06d8e9fbdfb96395-userdata-shm.mount: Deactivated successfully.
Dec  3 18:23:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ce4d4c7cd25a0fd8ae17fdaa12afb31d64b3c41869eb3785cb66e3d3c9a2bd8-merged.mount: Deactivated successfully.
Dec  3 18:23:49 compute-0 podman[348548]: 2025-12-03 18:23:49.140362715 +0000 UTC m=+0.087142655 container cleanup 5b32db9cdda1f2abcd87e9305dff47e4bcdbf2233ec1e9af06d8e9fbdfb96395 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=nova_compute_init, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  3 18:23:49 compute-0 systemd[1]: libpod-conmon-5b32db9cdda1f2abcd87e9305dff47e4bcdbf2233ec1e9af06d8e9fbdfb96395.scope: Deactivated successfully.
Dec  3 18:23:49 compute-0 nova_compute[348325]: 2025-12-03 18:23:49.297 348329 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec  3 18:23:49 compute-0 nova_compute[348325]: 2025-12-03 18:23:49.297 348329 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec  3 18:23:49 compute-0 nova_compute[348325]: 2025-12-03 18:23:49.298 348329 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec  3 18:23:49 compute-0 nova_compute[348325]: 2025-12-03 18:23:49.298 348329 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
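[Editor: os_vif reports the three plugin classes it found via Python entry points. Assuming the standard stevedore discovery that os_vif is built on (the "os_vif" namespace name is an assumption here), the lookup amounts to:]

    # Sketch of entry-point discovery; namespace name is an assumption.
    from stevedore import extension

    # Scan the "os_vif" entry-point namespace; on this node it yields
    # linux_bridge, noop and ovs, matching the log above.
    mgr = extension.ExtensionManager(namespace="os_vif", invoke_on_load=False)
    for ext in mgr:
        print(f"Loaded VIF plugin class {ext.plugin!r} with name '{ext.name}'")
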
Dec  3 18:23:49 compute-0 nova_compute[348325]: 2025-12-03 18:23:49.441 348329 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 18:23:49 compute-0 nova_compute[348325]: 2025-12-03 18:23:49.472 348329 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.031s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 18:23:49 compute-0 nova_compute[348325]: 2025-12-03 18:23:49.473 348329 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
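[Editor: the failed grep above is a capability probe, not an error: the literal string node.session.scan is searched for inside the iscsiadm binary, and exit status 1 (as logged) simply means the installed iscsiadm does not advertise manual-scan support. The probe reduces to:]

    # Stand-in for the capability probe the log shows.
    import subprocess

    def iscsiadm_supports_manual_scan(binary="/sbin/iscsiadm"):
        # grep -F exits 0 when the literal string occurs in the binary
        res = subprocess.run(["grep", "-F", "node.session.scan", binary],
                             stdout=subprocess.DEVNULL)
        return res.returncode == 0

    print(iscsiadm_supports_manual_scan())  # False on this host: grep returned 1
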
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.037 348329 INFO nova.virt.driver [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.151 348329 INFO nova.compute.provider_config [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.263 348329 DEBUG oslo_concurrency.lockutils [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.263 348329 DEBUG oslo_concurrency.lockutils [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.263 348329 DEBUG oslo_concurrency.lockutils [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.264 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.265 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.265 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.265 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.265 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.266 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.266 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.267 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.267 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.267 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.267 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.268 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.268 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.268 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.269 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.269 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.269 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.269 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.270 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.270 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.271 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.271 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.272 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.272 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.272 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.273 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.273 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.273 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.274 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.274 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.274 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.275 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.275 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.275 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.276 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.276 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.276 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.277 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.277 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.277 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.278 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.278 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.278 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.279 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.279 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.279 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.279 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.280 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.280 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.280 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.281 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.281 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.281 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.281 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.282 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.282 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.282 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.283 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.283 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.284 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.284 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.284 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.284 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.285 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.285 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.285 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.285 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.286 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.286 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.287 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.287 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.287 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.288 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.288 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.288 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.288 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.289 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.289 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.290 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.290 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.290 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.290 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.291 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.291 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.291 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.292 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.292 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.292 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.293 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.293 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.293 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.294 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.294 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.294 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.294 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.295 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.296 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.296 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.296 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.296 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.296 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.296 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.297 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.297 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.297 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.297 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.297 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.297 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.297 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.298 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.299 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.299 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.299 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.299 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.299 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.300 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.300 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.300 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.300 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.300 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.300 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.300 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.301 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.301 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.301 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.301 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.301 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.301 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.301 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.302 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.302 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.302 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.302 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.302 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.302 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.303 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.303 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.303 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.303 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.303 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.303 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.303 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.304 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.304 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.304 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.304 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.304 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.304 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.304 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.305 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.305 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.305 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.305 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.305 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.305 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.306 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.306 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.306 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.306 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.306 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.306 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.306 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.307 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.307 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.307 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.307 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.307 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.307 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.307 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.307 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.308 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.308 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.308 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.308 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.308 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.308 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.308 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.309 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.309 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.309 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.309 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.309 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.309 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.310 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.310 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.310 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.310 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.310 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.310 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.310 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.311 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.311 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.311 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.311 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.311 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.311 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.311 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.312 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.312 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.312 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.312 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.312 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.312 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.312 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.313 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.313 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.313 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.313 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.313 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.313 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.313 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.314 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.314 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.314 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.314 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.314 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.314 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.314 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.315 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.315 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.315 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.315 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.315 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.315 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.316 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.316 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.316 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.316 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.316 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.316 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.316 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.317 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.317 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.317 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.317 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.317 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.317 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.317 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.318 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.318 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.318 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.318 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.318 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.318 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.318 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.319 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.319 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.319 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.319 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.319 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.319 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.319 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.319 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.320 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.320 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.320 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.320 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.320 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.320 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.321 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.321 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.321 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.321 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.321 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.321 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.321 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.321 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.322 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.322 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.322 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.322 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.322 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.322 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.322 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.323 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.323 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.323 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.323 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.323 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.323 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.323 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.323 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.324 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.324 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.324 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.324 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.324 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.324 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.324 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.325 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.325 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.325 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.325 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.325 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.325 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.325 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.325 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.326 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.326 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.326 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.326 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.326 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.326 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.326 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.327 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.327 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.327 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.327 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.327 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.327 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.327 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.327 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.328 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.328 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.328 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.328 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.328 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.328 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.328 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.329 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.329 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.329 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.329 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.329 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.330 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.330 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.330 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.330 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.330 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.331 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.331 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.331 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.331 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.331 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.332 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.332 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.332 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.332 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.332 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.332 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.333 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.333 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.333 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.333 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.333 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.334 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.334 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.334 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.334 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.334 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.335 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.335 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.335 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.335 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.335 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.336 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.336 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.337 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.337 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.337 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.337 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.338 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
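[editor's note] The wall of DEBUG lines above is oslo.config's standard startup dump: with debug logging enabled, the service calls ConfigOpts.log_opt_values() (the cfg.py:2609 frame cited on every line) and prints one "group.option = value" line per registered option. A minimal sketch of the same mechanism, using a made-up registration that mirrors image_cache.manager_interval rather than nova's actual code:

    # Minimal oslo.config sketch reproducing the dump format above.
    # The option and default mirror image_cache.manager_interval; this is
    # not nova's registration code.
    import logging
    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF = cfg.ConfigOpts()
    CONF.register_opts([cfg.IntOpt('manager_interval', default=2400)],
                       group='image_cache')
    CONF([])  # parse an empty argv so defaults take effect

    # Emits one DEBUG line per option, e.g. "image_cache.manager_interval = 2400"
    CONF.log_opt_values(LOG, logging.DEBUG)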
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.338 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.338 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.338 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.339 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.339 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.339 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.339 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.339 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.340 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.340 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.340 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.340 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.341 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.341 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.341 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.341 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.341 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.342 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.342 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.342 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.342 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.343 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.343 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.343 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.343 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.343 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.344 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.344 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
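[editor's note] key_manager.fixed_key prints as **** because oslo.config masks any option registered with secret=True when dumping values; the same masking appears later for neutron.metadata_proxy_shared_secret. An illustrative registration (not nova's actual code) showing the flag responsible:

    # Why the value prints as ****: options registered with secret=True are
    # masked by ConfigOpts.log_opt_values(). Illustrative option only.
    from oslo_config import cfg

    secret_opt = cfg.StrOpt('fixed_key', secret=True,
                            help='Fixed key; never written to logs in clear text.')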
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.344 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.344 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.345 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.345 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.345 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.346 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.346 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.346 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.346 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.346 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.347 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.347 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.347 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.347 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.348 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.348 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.348 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.348 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.349 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.349 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.349 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.349 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.349 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.350 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.350 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.350 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.350 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.351 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.351 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.351 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.351 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.352 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.352 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.352 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.352 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.353 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.353 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.353 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.353 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.354 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.354 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.354 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
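[editor's note] key_manager.backend = barbican selects the Barbican driver, so of the three key-manager groups dumped above only the barbican.* values are actually consulted; the vault.* group sits at pure defaults and is unused here. As a reading aid, the non-default values can be rendered back into nova.conf-style INI with nothing but the standard library:

    # Render the logged key-manager settings back into nova.conf form.
    # Values are copied from the dump above; this is a reading aid, not a
    # recommended configuration.
    import configparser
    import sys

    ini = configparser.ConfigParser()
    ini['key_manager'] = {'backend': 'barbican'}
    ini['barbican'] = {
        'barbican_endpoint_type': 'internal',
        'number_of_retries': '60',
        'retry_delay': '1',
    }
    ini.write(sys.stdout)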
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.354 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.355 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.355 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.355 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.355 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.355 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.356 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.356 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.356 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.356 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.357 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.357 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.357 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.357 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.358 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.358 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.358 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.358 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.359 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
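[editor's note] The keystone.* block is a standard keystoneauth session/adapter option set: service_type identity, endpoints looked up on the internal and public interfaces, no endpoint or version overrides. Roughly how such options are consumed, sketched with placeholder auth_url and credentials (nova wires this up from the config rather than hard-coding it):

    # Rough keystoneauth1 equivalent of the [keystone] options above.
    # auth_url, username and password are placeholders, not values from
    # this deployment.
    from keystoneauth1 import adapter, session
    from keystoneauth1.identity import v3

    auth = v3.Password(auth_url='http://keystone.example:5000/v3',
                       username='nova', password='secret',
                       project_name='service',
                       user_domain_name='Default',
                       project_domain_name='Default')
    sess = session.Session(auth=auth)
    identity = adapter.Adapter(session=sess,
                               service_type='identity',
                               interface='internal')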
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.359 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.359 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.359 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.359 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.359 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.360 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.360 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.360 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.360 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.360 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.360 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.361 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.361 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.361 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.361 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.361 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.361 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.362 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.362 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.362 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.362 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.362 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.362 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.363 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.363 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.363 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.363 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.363 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.363 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.364 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.364 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.364 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.364 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.364 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.364 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.365 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.365 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.365 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.365 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.365 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.366 348329 WARNING oslo_config.cfg [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec  3 18:23:50 compute-0 nova_compute[348325]: live_migration_uri is deprecated for removal in favor of two other options that
Dec  3 18:23:50 compute-0 nova_compute[348325]: allow changing the live migration scheme and target URI: ``live_migration_scheme``
Dec  3 18:23:50 compute-0 nova_compute[348325]: and ``live_migration_inbound_addr`` respectively.
Dec  3 18:23:50 compute-0 nova_compute[348325]: ).  Its value may be silently ignored in the future.
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.366 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
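[editor's note] The warning refers to the value just above: live_migration_uri = qemu+tls://%s/system is the old-style URI template, with nova substituting the destination host for %s. The replacement pair expresses the same thing piecewise: live_migration_scheme supplies the transport and live_migration_inbound_addr the target address. An illustrative sketch of how the pieces compose (the hostname is made up):

    # Illustrative only: composing the libvirt migration URI the way the
    # replacement options describe it. 'compute-1.internal' is a made-up host.
    def migration_uri(scheme, inbound_addr):
        # scheme=None falls back to plain tcp, matching the option's default
        return 'qemu+%s://%s/system' % (scheme or 'tcp', inbound_addr)

    print(migration_uri('tls', 'compute-1.internal'))
    # -> qemu+tls://compute-1.internal/system, the same URI the deprecated
    #    template above would have produced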
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.366 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.366 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.366 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.367 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.367 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.367 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.367 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.367 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.368 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.368 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.368 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.368 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.368 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.368 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.369 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.369 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.369 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.369 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.rbd_secret_uuid        = c1caf3ba-b2a5-5005-a11e-e955c344dccc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.369 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.369 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.370 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.370 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.370 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.370 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.370 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.370 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.371 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.371 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.371 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.371 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.371 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.372 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.372 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.372 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.372 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.372 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.372 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.372 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.373 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.373 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.373 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.373 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.373 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.373 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.374 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.374 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.374 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.374 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.374 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.375 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.375 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
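[editor's note] Taken together, the storage values in the [libvirt] block say this host boots instances from Ceph: images_type = rbd in pool vms, cephx user openstack, the matching secret UUID registered with libvirt, and /etc/ceph/ceph.conf for cluster access. An illustrative qemu-style RBD disk spec built from those four values (the instance UUID is invented; nova actually emits libvirt XML rather than this command-line form):

    # How the [libvirt] RBD settings above combine into a qemu-style disk
    # spec. Purely illustrative; the instance UUID is made up.
    pool = 'vms'                       # libvirt.images_rbd_pool
    user = 'openstack'                 # libvirt.rbd_user
    conf = '/etc/ceph/ceph.conf'       # libvirt.images_rbd_ceph_conf
    instance_uuid = '11111111-2222-3333-4444-555555555555'

    disk = 'rbd:%s/%s_disk:id=%s:conf=%s' % (pool, instance_uuid, user, conf)
    print(disk)  # rbd:vms/..._disk:id=openstack:conf=/etc/ceph/ceph.conf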
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.375 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.375 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.375 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.375 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.375 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.376 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.376 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.376 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.376 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.376 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.376 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.377 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.377 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.377 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.377 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.378 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.378 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.378 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.378 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.378 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.378 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.378 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.379 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.379 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.379 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.380 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.380 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.380 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
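The neutron.* block above is the Neutron client configuration as oslo.config resolved it for this nova-compute. The likely deployment-specific entries would come from a nova.conf [neutron] section along the lines of the sketch below (a reconstruction from the logged values only; the shared secret stays masked exactly as the log masks it):

    [neutron]
    auth_type = password
    region_name = regionOne
    valid_interfaces = internal
    service_metadata_proxy = true
    metadata_proxy_shared_secret = ****
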
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.380 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.380 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.381 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.381 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.381 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.381 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.381 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.381 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.382 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.382 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.382 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.382 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.382 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.382 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.382 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.383 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.383 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.383 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.383 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.383 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.383 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.384 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.384 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.384 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.384 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.384 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.384 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.384 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.385 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.385 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.385 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.385 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.385 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.385 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.385 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.386 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.386 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.386 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.386 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.386 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.386 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.387 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.387 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.387 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.387 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
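Likewise, the placement.* block above is the resolved Keystone client configuration nova-compute uses to reach the Placement API. Read back as a nova.conf [placement] section, the logged values amount to roughly the following (a reconstruction from the log only; the password stays masked as logged):

    [placement]
    auth_type = password
    auth_url = https://keystone-internal.openstack.svc:5000
    username = nova
    password = ****
    project_name = service
    project_domain_name = Default
    user_domain_name = Default
    region_name = regionOne
    valid_interfaces = internal
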
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.387 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.387 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.387 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.388 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.388 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.388 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.388 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.388 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.388 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.389 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.389 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.389 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.389 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.389 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.389 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.390 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.390 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.390 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.390 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.390 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.390 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.391 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.391 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.391 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.391 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.391 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.391 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.392 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.392 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.392 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.392 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.392 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.392 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.392 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.393 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.393 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.393 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.393 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.393 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.394 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.394 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.394 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.394 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.394 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.394 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.394 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.395 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.395 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.395 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.395 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.395 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.395 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.395 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.396 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.396 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.396 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.396 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.396 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.396 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.397 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.397 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.397 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.397 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.397 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.397 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.398 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.398 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.398 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.398 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.398 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.398 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.398 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.399 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.399 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.399 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.399 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.399 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.399 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.400 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.400 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.400 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.400 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.400 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.400 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.401 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.401 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.401 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.401 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.401 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.401 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.401 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.402 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.402 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.402 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.402 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.402 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.402 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.403 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.403 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.403 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.403 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.403 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.403 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.404 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.404 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.404 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.404 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.404 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.404 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.404 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.405 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.405 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.405 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.405 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.405 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.405 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.406 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.406 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.406 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.406 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.406 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.407 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.407 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.407 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.407 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.407 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.408 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.408 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.408 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.408 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.408 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
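The vnc.* block above describes the console path this host advertises: VNC is enabled, QEMU listens on all addresses (::0), proxies connect back via 192.168.122.100, and clients are handed the noVNC URL shown. As a nova.conf [vnc] section this would read approximately (again a reconstruction from the logged values):

    [vnc]
    enabled = true
    novncproxy_base_url = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html
    server_listen = ::0
    server_proxyclient_address = 192.168.122.100
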
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.408 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.408 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.409 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.409 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.409 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.409 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.409 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.410 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.410 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.410 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.410 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.410 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.410 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.411 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.411 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.411 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.411 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.411 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.411 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.412 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.412 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.412 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.412 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.412 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.413 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.413 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.413 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.413 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.413 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.413 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.414 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.414 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.414 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.414 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.414 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.414 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.415 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.415 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.415 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.415 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.415 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.415 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.416 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.416 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.416 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.416 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.416 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.416 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.417 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.417 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.417 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 18:23:50 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v730: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.417 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.417 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.418 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.418 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.418 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.418 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.418 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.418 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.419 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.419 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.419 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.419 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.419 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.420 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.420 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.420 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.420 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.420 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.421 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.421 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.421 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.421 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.421 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.422 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.422 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.422 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.422 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.422 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.422 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.423 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.423 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.423 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.423 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.423 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.424 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.424 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.424 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.424 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.424 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.424 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.425 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.425 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.425 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.425 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.425 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.425 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.425 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.426 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.426 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.426 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.426 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.426 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.426 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.427 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.427 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.427 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.427 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.427 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.427 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.428 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.428 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.428 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.428 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.428 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.428 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.428 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.429 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.429 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.429 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.429 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.429 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.429 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.429 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.430 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.430 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.430 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.430 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.430 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.430 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.431 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.431 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.431 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.431 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.431 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.431 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.431 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.432 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.432 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.432 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.432 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.432 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.432 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.433 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.433 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.433 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.433 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.433 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.433 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.433 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.434 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.434 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.434 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.434 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.434 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.434 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.434 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.435 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.435 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.435 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.435 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.435 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.435 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.436 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.436 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.436 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.436 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.436 348329 DEBUG oslo_service.service [None req-f31f9ffa-20c1-4349-bdd3-8e5a84877b23 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
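
[Annotation] The block above is oslo.config's standard startup dump: with debug logging enabled, nova-compute logs every effective option (secret values masked as ****) and closes the dump with the row of asterisks, via oslo_config's ConfigOpts.log_opt_values() — the source reference trailing each line. A minimal, self-contained sketch of the same mechanism follows; the group and option names here ("demo", "listen_port", "password") are illustrative, not nova's:

    import logging

    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF = cfg.ConfigOpts()
    CONF.register_group(cfg.OptGroup(name='demo', title='Demo options'))
    CONF.register_opts(
        [
            cfg.PortOpt('listen_port', default=6080),
            cfg.StrOpt('password', secret=True),  # secret options are printed as ****
        ],
        group='demo',
    )

    CONF([])  # parse an (empty) command line and any default config files
    # Emits one DEBUG line per registered option, bracketed by '*' banners,
    # in the same "group.option = value" layout as the dump above.
    CONF.log_opt_values(LOG, logging.DEBUG)
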
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.438 348329 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.458 348329 INFO nova.virt.node [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Determined node identity 00cd1895-22aa-49c6-bdb2-0991af662704 from /var/lib/nova/compute_id
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.459 348329 DEBUG nova.virt.libvirt.host [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.460 348329 DEBUG nova.virt.libvirt.host [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.460 348329 DEBUG nova.virt.libvirt.host [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.460 348329 DEBUG nova.virt.libvirt.host [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.479 348329 DEBUG nova.virt.libvirt.host [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fbb1630ad30> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.483 348329 DEBUG nova.virt.libvirt.host [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fbb1630ad30> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.484 348329 INFO nova.virt.libvirt.driver [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Connection event '1' reason 'None'
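
[Annotation] Once the qemu:///system connection is up, the libvirt driver fetches the host capabilities document and logs it in full; that XML follows below. The same document can be retrieved outside nova with the libvirt Python bindings (or `virsh capabilities`). A small read-only sketch, using the connection URI from the log:

    import libvirt  # python3-libvirt bindings

    conn = libvirt.openReadOnly('qemu:///system')
    try:
        # Returns the <capabilities> document as an XML string,
        # matching the host CPU/topology/secmodel dump logged below.
        print(conn.getCapabilities())
    finally:
        conn.close()
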
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.493 348329 INFO nova.virt.libvirt.host [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Libvirt host capabilities <capabilities>
Dec  3 18:23:50 compute-0 nova_compute[348325]: 
Dec  3 18:23:50 compute-0 nova_compute[348325]:  <host>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <uuid>3f123a89-727d-4ccf-a960-b8fd98f4d5b8</uuid>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <cpu>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <arch>x86_64</arch>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model>EPYC-Rome-v4</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <vendor>AMD</vendor>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <microcode version='16777317'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <signature family='23' model='49' stepping='0'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <maxphysaddr mode='emulate' bits='40'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature name='x2apic'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature name='tsc-deadline'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature name='osxsave'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature name='hypervisor'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature name='tsc_adjust'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature name='spec-ctrl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature name='stibp'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature name='arch-capabilities'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature name='ssbd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature name='cmp_legacy'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature name='topoext'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature name='virt-ssbd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature name='lbrv'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature name='tsc-scale'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature name='vmcb-clean'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature name='pause-filter'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature name='pfthreshold'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature name='svme-addr-chk'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature name='rdctl-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature name='skip-l1dfl-vmentry'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature name='mds-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature name='pschange-mc-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <pages unit='KiB' size='4'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <pages unit='KiB' size='2048'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <pages unit='KiB' size='1048576'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </cpu>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <power_management>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <suspend_mem/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </power_management>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <iommu support='no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <migration_features>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <live/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <uri_transports>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <uri_transport>tcp</uri_transport>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <uri_transport>rdma</uri_transport>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </uri_transports>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </migration_features>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <topology>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <cells num='1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <cell id='0'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:          <memory unit='KiB'>7864312</memory>
Dec  3 18:23:50 compute-0 nova_compute[348325]:          <pages unit='KiB' size='4'>1966078</pages>
Dec  3 18:23:50 compute-0 nova_compute[348325]:          <pages unit='KiB' size='2048'>0</pages>
Dec  3 18:23:50 compute-0 nova_compute[348325]:          <pages unit='KiB' size='1048576'>0</pages>
Dec  3 18:23:50 compute-0 nova_compute[348325]:          <distances>
Dec  3 18:23:50 compute-0 nova_compute[348325]:            <sibling id='0' value='10'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:          </distances>
Dec  3 18:23:50 compute-0 nova_compute[348325]:          <cpus num='8'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:          </cpus>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        </cell>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </cells>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </topology>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <cache>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </cache>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <secmodel>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model>selinux</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <doi>0</doi>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </secmodel>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <secmodel>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model>dac</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <doi>0</doi>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <baselabel type='kvm'>+107:+107</baselabel>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <baselabel type='qemu'>+107:+107</baselabel>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </secmodel>
Dec  3 18:23:50 compute-0 nova_compute[348325]:  </host>
Dec  3 18:23:50 compute-0 nova_compute[348325]: 
Dec  3 18:23:50 compute-0 nova_compute[348325]:  <guest>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <os_type>hvm</os_type>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <arch name='i686'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <wordsize>32</wordsize>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <domain type='qemu'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <domain type='kvm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </arch>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <features>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <pae/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <nonpae/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <acpi default='on' toggle='yes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <apic default='on' toggle='no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <cpuselection/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <deviceboot/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <disksnapshot default='on' toggle='no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <externalSnapshot/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </features>
Dec  3 18:23:50 compute-0 nova_compute[348325]:  </guest>
Dec  3 18:23:50 compute-0 nova_compute[348325]: 
Dec  3 18:23:50 compute-0 nova_compute[348325]:  <guest>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <os_type>hvm</os_type>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <arch name='x86_64'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <wordsize>64</wordsize>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <domain type='qemu'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <domain type='kvm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </arch>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <features>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <acpi default='on' toggle='yes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <apic default='on' toggle='no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <cpuselection/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <deviceboot/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <disksnapshot default='on' toggle='no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <externalSnapshot/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </features>
Dec  3 18:23:50 compute-0 nova_compute[348325]:  </guest>
Dec  3 18:23:50 compute-0 nova_compute[348325]: 
Dec  3 18:23:50 compute-0 nova_compute[348325]: </capabilities>
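Note: the <capabilities> document that ends above is libvirt's host-capabilities XML, which nova-compute reads over its libvirt connection during startup. A minimal sketch of fetching the same XML with the libvirt-python bindings follows; the standalone-script form and the qemu:///system URI are assumptions for illustration, not nova's actual connection handling (which lives in nova.virt.libvirt.host).

    import libvirt

    # Connect to the local QEMU/KVM driver; the URI is an assumption for this sketch.
    conn = libvirt.open('qemu:///system')
    # virConnectGetCapabilities() returns the same <capabilities> XML logged above.
    print(conn.getCapabilities())
    conn.close()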
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.497 348329 DEBUG nova.virt.libvirt.volume.mount [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.503 348329 DEBUG nova.virt.libvirt.host [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
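Note: the debug line above shows nova iterating the machine types per architecture and asking libvirt for per-machine domain capabilities, which produces the <domainCapabilities> dump that follows. A minimal sketch of the underlying libvirt-python call; the arguments mirror this log record (emulator /usr/libexec/qemu-kvm, arch i686, machine pc, virt type kvm) and are passed positionally per virConnectGetDomainCapabilities(emulatorbin, arch, machine, virttype, flags).

    import libvirt

    conn = libvirt.open('qemu:///system')  # URI is an assumption for this sketch
    # Returns a <domainCapabilities> XML document like the one logged below.
    xml = conn.getDomainCapabilities('/usr/libexec/qemu-kvm', 'i686', 'pc', 'kvm', 0)
    print(xml)
    conn.close()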
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.507 348329 DEBUG nova.virt.libvirt.host [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Dec  3 18:23:50 compute-0 nova_compute[348325]: <domainCapabilities>
Dec  3 18:23:50 compute-0 nova_compute[348325]:  <path>/usr/libexec/qemu-kvm</path>
Dec  3 18:23:50 compute-0 nova_compute[348325]:  <domain>kvm</domain>
Dec  3 18:23:50 compute-0 nova_compute[348325]:  <machine>pc-i440fx-rhel7.6.0</machine>
Dec  3 18:23:50 compute-0 nova_compute[348325]:  <arch>i686</arch>
Dec  3 18:23:50 compute-0 nova_compute[348325]:  <vcpu max='240'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:  <iothreads supported='yes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:  <os supported='yes'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <enum name='firmware'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <loader supported='yes'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='type'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>rom</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>pflash</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='readonly'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>yes</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>no</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='secure'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>no</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </loader>
Dec  3 18:23:50 compute-0 nova_compute[348325]:  </os>
Dec  3 18:23:50 compute-0 nova_compute[348325]:  <cpu>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <mode name='host-passthrough' supported='yes'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='hostPassthroughMigratable'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>on</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>off</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </mode>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <mode name='maximum' supported='yes'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='maximumMigratable'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>on</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>off</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </mode>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <mode name='host-model' supported='yes'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <vendor>AMD</vendor>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='x2apic'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='tsc-deadline'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='hypervisor'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='tsc_adjust'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='spec-ctrl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='stibp'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='ssbd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='cmp_legacy'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='overflow-recov'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='succor'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='ibrs'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='amd-ssbd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='virt-ssbd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='lbrv'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='tsc-scale'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='vmcb-clean'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='flushbyasid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='pause-filter'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='pfthreshold'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='svme-addr-chk'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='disable' name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </mode>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <mode name='custom' supported='yes'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Broadwell'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Broadwell-IBRS'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Broadwell-noTSX'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Broadwell-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Broadwell-v2'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Broadwell-v3'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Broadwell-v4'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Cascadelake-Server'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Cascadelake-Server-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Cascadelake-Server-v2'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Cascadelake-Server-v3'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Cascadelake-Server-v4'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Cascadelake-Server-v5'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Cooperlake'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='taa-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Cooperlake-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='taa-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Cooperlake-v2'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='taa-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Denverton'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='mpx'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Denverton-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='mpx'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Denverton-v2'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Denverton-v3'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Dhyana-v2'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='EPYC-Genoa'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amd-psfd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='auto-ibrs'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bitalg'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512ifma'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='la57'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='no-nested-data-bp'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='null-sel-clr-base'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='stibp-always-on'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='EPYC-Genoa-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amd-psfd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='auto-ibrs'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bitalg'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512ifma'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='la57'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='no-nested-data-bp'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='null-sel-clr-base'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='stibp-always-on'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='EPYC-Milan'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='EPYC-Milan-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='EPYC-Milan-v2'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amd-psfd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='no-nested-data-bp'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='null-sel-clr-base'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='stibp-always-on'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='EPYC-Rome'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='EPYC-Rome-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='EPYC-Rome-v2'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='EPYC-Rome-v3'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='EPYC-v3'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='EPYC-v4'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='GraniteRapids'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-fp16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-int8'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-tile'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx-vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-fp16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bitalg'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512ifma'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='bus-lock-detect'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fbsdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrc'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrs'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fzrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='la57'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='mcdt-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pbrsb-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='prefetchiti'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='psdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='sbdr-ssdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='serialize'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='taa-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='tsx-ldtrk'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xfd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='GraniteRapids-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-fp16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-int8'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-tile'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx-vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-fp16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bitalg'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512ifma'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='bus-lock-detect'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fbsdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrc'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrs'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fzrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='la57'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='mcdt-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pbrsb-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='prefetchiti'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='psdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='sbdr-ssdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='serialize'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='taa-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='tsx-ldtrk'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xfd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='GraniteRapids-v2'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-fp16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-int8'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-tile'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx-vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx10'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx10-128'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx10-256'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx10-512'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-fp16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bitalg'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512ifma'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='bus-lock-detect'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='cldemote'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fbsdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrc'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrs'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fzrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='la57'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='mcdt-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='movdir64b'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='movdiri'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pbrsb-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='prefetchiti'/>
Dec  3 18:23:50 compute-0 systemd[1]: session-56.scope: Deactivated successfully.
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='psdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='sbdr-ssdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='serialize'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ss'/>
Dec  3 18:23:50 compute-0 systemd[1]: session-56.scope: Consumed 3min 37.802s CPU time.
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='taa-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='tsx-ldtrk'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xfd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Haswell'>
Dec  3 18:23:50 compute-0 systemd-logind[784]: Session 56 logged out. Waiting for processes to exit.
Dec  3 18:23:50 compute-0 systemd-logind[784]: Removed session 56.
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Haswell-IBRS'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Haswell-noTSX'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Haswell-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Haswell-v2'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Haswell-v3'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Haswell-v4'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Icelake-Server'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bitalg'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='la57'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Icelake-Server-noTSX'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bitalg'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='la57'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Icelake-Server-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bitalg'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='la57'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Icelake-Server-v2'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bitalg'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='la57'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Icelake-Server-v3'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bitalg'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='la57'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='taa-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Icelake-Server-v4'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bitalg'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512ifma'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='la57'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='taa-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Icelake-Server-v5'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bitalg'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512ifma'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='la57'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='taa-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Icelake-Server-v6'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bitalg'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512ifma'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='la57'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='taa-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Icelake-Server-v7'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bitalg'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512ifma'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='la57'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='taa-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='IvyBridge'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='IvyBridge-IBRS'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='IvyBridge-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='IvyBridge-v2'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='KnightsMill'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-4fmaps'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-4vnniw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512er'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512pf'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ss'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='KnightsMill-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-4fmaps'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-4vnniw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512er'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512pf'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ss'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Opteron_G4'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fma4'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xop'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Opteron_G4-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fma4'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xop'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Opteron_G5'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fma4'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='tbm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xop'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Opteron_G5-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fma4'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='tbm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xop'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='SapphireRapids'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-int8'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-tile'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx-vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-fp16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bitalg'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512ifma'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='bus-lock-detect'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrc'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrs'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fzrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='la57'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='serialize'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='taa-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='tsx-ldtrk'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xfd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='SapphireRapids-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-int8'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-tile'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx-vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-fp16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bitalg'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512ifma'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='bus-lock-detect'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrc'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrs'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fzrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='la57'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='serialize'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='taa-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='tsx-ldtrk'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xfd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='SapphireRapids-v2'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-int8'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-tile'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx-vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-fp16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bitalg'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512ifma'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='bus-lock-detect'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fbsdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrc'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrs'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fzrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='la57'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='psdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='sbdr-ssdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='serialize'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='taa-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='tsx-ldtrk'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xfd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='SapphireRapids-v3'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-int8'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-tile'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx-vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-fp16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bitalg'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512ifma'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='bus-lock-detect'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='cldemote'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fbsdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrc'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrs'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fzrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='la57'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='movdir64b'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='movdiri'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='psdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='sbdr-ssdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='serialize'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ss'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='taa-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='tsx-ldtrk'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xfd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='SierraForest'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx-ifma'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx-ne-convert'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx-vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx-vnni-int8'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='bus-lock-detect'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='cmpccxadd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fbsdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrs'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='mcdt-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pbrsb-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='psdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='sbdr-ssdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='serialize'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='SierraForest-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx-ifma'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx-ne-convert'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx-vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx-vnni-int8'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='bus-lock-detect'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='cmpccxadd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fbsdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrs'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='mcdt-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pbrsb-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='psdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='sbdr-ssdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='serialize'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Skylake-Client'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Skylake-Client-IBRS'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Skylake-Client-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Skylake-Client-v2'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Skylake-Client-v3'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Skylake-Client-v4'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Skylake-Server'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Skylake-Server-IBRS'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Skylake-Server-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Skylake-Server-v2'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Skylake-Server-v3'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Skylake-Server-v4'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Skylake-Server-v5'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Snowridge'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='cldemote'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='core-capability'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='movdir64b'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='movdiri'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='mpx'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='split-lock-detect'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Snowridge-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='cldemote'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='core-capability'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='movdir64b'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='movdiri'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='mpx'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='split-lock-detect'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Snowridge-v2'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='cldemote'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='core-capability'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='movdir64b'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='movdiri'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='split-lock-detect'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Snowridge-v3'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='cldemote'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='core-capability'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='movdir64b'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='movdiri'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='split-lock-detect'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Snowridge-v4'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='cldemote'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='movdir64b'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='movdiri'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='athlon'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='3dnow'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='3dnowext'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='athlon-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='3dnow'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='3dnowext'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='core2duo'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ss'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='core2duo-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ss'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='coreduo'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ss'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='coreduo-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ss'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='n270'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ss'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='n270-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ss'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='phenom'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='3dnow'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='3dnowext'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='phenom-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='3dnow'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='3dnowext'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </mode>
Dec  3 18:23:50 compute-0 nova_compute[348325]:  </cpu>
Dec  3 18:23:50 compute-0 nova_compute[348325]:  <memoryBacking supported='yes'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <enum name='sourceType'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <value>file</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <value>anonymous</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <value>memfd</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:  </memoryBacking>
Dec  3 18:23:50 compute-0 nova_compute[348325]:  <devices>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <disk supported='yes'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='diskDevice'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>disk</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>cdrom</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>floppy</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>lun</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='bus'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>ide</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>fdc</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>scsi</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>virtio</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>usb</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>sata</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='model'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>virtio</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>virtio-transitional</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>virtio-non-transitional</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </disk>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <graphics supported='yes'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='type'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>vnc</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>egl-headless</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>dbus</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </graphics>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <video supported='yes'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='modelType'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>vga</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>cirrus</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>virtio</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>none</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>bochs</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>ramfb</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </video>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <hostdev supported='yes'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='mode'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>subsystem</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='startupPolicy'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>default</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>mandatory</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>requisite</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>optional</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='subsysType'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>usb</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>pci</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>scsi</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='capsType'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='pciBackend'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </hostdev>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <rng supported='yes'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='model'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>virtio</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>virtio-transitional</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>virtio-non-transitional</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='backendModel'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>random</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>egd</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>builtin</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </rng>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <filesystem supported='yes'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='driverType'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>path</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>handle</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>virtiofs</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </filesystem>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <tpm supported='yes'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='model'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>tpm-tis</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>tpm-crb</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='backendModel'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>emulator</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>external</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='backendVersion'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>2.0</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </tpm>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <redirdev supported='yes'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='bus'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>usb</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </redirdev>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <channel supported='yes'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='type'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>pty</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>unix</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </channel>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <crypto supported='yes'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='model'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='type'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>qemu</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='backendModel'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>builtin</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </crypto>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <interface supported='yes'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='backendType'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>default</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>passt</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </interface>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <panic supported='yes'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='model'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>isa</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>hyperv</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </panic>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <console supported='yes'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='type'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>null</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>vc</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>pty</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>dev</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>file</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>pipe</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>stdio</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>udp</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>tcp</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>unix</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>qemu-vdagent</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>dbus</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </console>
Dec  3 18:23:50 compute-0 nova_compute[348325]:  </devices>
Dec  3 18:23:50 compute-0 nova_compute[348325]:  <features>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <gic supported='no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <vmcoreinfo supported='yes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <genid supported='yes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <backingStoreInput supported='yes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <backup supported='yes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <async-teardown supported='yes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <ps2 supported='yes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <sev supported='no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <sgx supported='no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <hyperv supported='yes'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='features'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>relaxed</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>vapic</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>spinlocks</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>vpindex</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>runtime</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>synic</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>stimer</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>reset</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>vendor_id</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>frequencies</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>reenlightenment</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>tlbflush</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>ipi</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>avic</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>emsr_bitmap</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>xmm_input</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <defaults>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <spinlocks>4095</spinlocks>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <stimer_direct>on</stimer_direct>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <tlbflush_direct>on</tlbflush_direct>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <tlbflush_extended>on</tlbflush_extended>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </defaults>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </hyperv>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <launchSecurity supported='yes'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='sectype'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>tdx</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </launchSecurity>
Dec  3 18:23:50 compute-0 nova_compute[348325]:  </features>
Dec  3 18:23:50 compute-0 nova_compute[348325]: </domainCapabilities>
Dec  3 18:23:50 compute-0 nova_compute[348325]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
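[editor's note] The XML closed by </domainCapabilities> above is libvirt's domain-capabilities report, which nova-compute logs once per (arch, machine_type) combination it probes; each <model usable='no'> entry is immediately followed by a <blockers> element naming the CPU features the host cannot supply for that model, while usable='yes' models carry no blockers. As a minimal sketch (assuming local access to the system libvirtd; this is not Nova's exact call site, though nova.virt.libvirt.host wraps the same libvirt API), the equivalent document for the i686/q35 combination queried in the next log entry can be fetched with libvirt-python:

    import libvirt

    # Assumed local connection; Nova manages its own connection internally.
    conn = libvirt.open('qemu:///system')
    caps_xml = conn.getDomainCapabilities(
        emulatorbin='/usr/libexec/qemu-kvm',  # <path> value from the dump
        arch='i686',                          # arch from the next log entry
        machine='q35',                        # machine_type from the log line
        virttype='kvm')                       # <domain>kvm</domain>
    print(caps_xml)
    conn.close()

The same report is available from the shell via virsh domcapabilities --emulatorbin /usr/libexec/qemu-kvm --arch i686 --machine q35 --virttype kvm, which is usually the quicker way to spot-check a single host.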
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.532 348329 DEBUG nova.virt.libvirt.host [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Dec  3 18:23:50 compute-0 nova_compute[348325]: <domainCapabilities>
Dec  3 18:23:50 compute-0 nova_compute[348325]:  <path>/usr/libexec/qemu-kvm</path>
Dec  3 18:23:50 compute-0 nova_compute[348325]:  <domain>kvm</domain>
Dec  3 18:23:50 compute-0 nova_compute[348325]:  <machine>pc-q35-rhel9.8.0</machine>
Dec  3 18:23:50 compute-0 nova_compute[348325]:  <arch>i686</arch>
Dec  3 18:23:50 compute-0 nova_compute[348325]:  <vcpu max='4096'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:  <iothreads supported='yes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:  <os supported='yes'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <enum name='firmware'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <loader supported='yes'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='type'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>rom</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>pflash</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='readonly'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>yes</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>no</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='secure'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>no</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </loader>
Dec  3 18:23:50 compute-0 nova_compute[348325]:  </os>
Dec  3 18:23:50 compute-0 nova_compute[348325]:  <cpu>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <mode name='host-passthrough' supported='yes'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='hostPassthroughMigratable'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>on</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>off</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </mode>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <mode name='maximum' supported='yes'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='maximumMigratable'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>on</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>off</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </mode>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <mode name='host-model' supported='yes'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <vendor>AMD</vendor>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='x2apic'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='tsc-deadline'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='hypervisor'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='tsc_adjust'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='spec-ctrl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='stibp'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='ssbd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='cmp_legacy'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='overflow-recov'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='succor'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='ibrs'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='amd-ssbd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='virt-ssbd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='lbrv'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='tsc-scale'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='vmcb-clean'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='flushbyasid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='pause-filter'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='pfthreshold'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='svme-addr-chk'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='disable' name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </mode>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <mode name='custom' supported='yes'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Broadwell'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Broadwell-IBRS'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Broadwell-noTSX'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Broadwell-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Broadwell-v2'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Broadwell-v3'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Broadwell-v4'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Cascadelake-Server'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Cascadelake-Server-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Cascadelake-Server-v2'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Cascadelake-Server-v3'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Cascadelake-Server-v4'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Cascadelake-Server-v5'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Cooperlake'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='taa-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Cooperlake-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='taa-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Cooperlake-v2'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='taa-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Denverton'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='mpx'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Denverton-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='mpx'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Denverton-v2'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Denverton-v3'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Dhyana-v2'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='EPYC-Genoa'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amd-psfd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='auto-ibrs'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bitalg'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512ifma'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='la57'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='no-nested-data-bp'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='null-sel-clr-base'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='stibp-always-on'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='EPYC-Genoa-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amd-psfd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='auto-ibrs'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bitalg'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512ifma'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='la57'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='no-nested-data-bp'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='null-sel-clr-base'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='stibp-always-on'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='EPYC-Milan'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='EPYC-Milan-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='EPYC-Milan-v2'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amd-psfd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='no-nested-data-bp'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='null-sel-clr-base'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='stibp-always-on'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='EPYC-Rome'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='EPYC-Rome-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='EPYC-Rome-v2'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='EPYC-Rome-v3'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='EPYC-v3'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='EPYC-v4'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='GraniteRapids'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-fp16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-int8'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-tile'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx-vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-fp16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bitalg'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512ifma'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='bus-lock-detect'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fbsdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrc'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrs'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fzrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='la57'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='mcdt-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pbrsb-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='prefetchiti'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='psdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='sbdr-ssdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='serialize'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='taa-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='tsx-ldtrk'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xfd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='GraniteRapids-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-fp16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-int8'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-tile'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx-vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-fp16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bitalg'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512ifma'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='bus-lock-detect'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fbsdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrc'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrs'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fzrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='la57'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='mcdt-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pbrsb-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='prefetchiti'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='psdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='sbdr-ssdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='serialize'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='taa-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='tsx-ldtrk'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xfd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='GraniteRapids-v2'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-fp16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-int8'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-tile'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx-vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx10'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx10-128'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx10-256'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx10-512'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-fp16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bitalg'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512ifma'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='bus-lock-detect'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='cldemote'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fbsdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrc'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrs'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fzrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='la57'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='mcdt-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='movdir64b'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='movdiri'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pbrsb-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='prefetchiti'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='psdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='sbdr-ssdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='serialize'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ss'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='taa-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='tsx-ldtrk'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xfd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Haswell'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Haswell-IBRS'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Haswell-noTSX'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Haswell-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Haswell-v2'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Haswell-v3'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Haswell-v4'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Icelake-Server'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bitalg'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='la57'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Icelake-Server-noTSX'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bitalg'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='la57'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Icelake-Server-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bitalg'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='la57'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Icelake-Server-v2'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bitalg'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='la57'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Icelake-Server-v3'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bitalg'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='la57'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='taa-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Icelake-Server-v4'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bitalg'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512ifma'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='la57'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='taa-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Icelake-Server-v5'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bitalg'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512ifma'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='la57'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='taa-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Icelake-Server-v6'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bitalg'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512ifma'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='la57'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='taa-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Icelake-Server-v7'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bitalg'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512ifma'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='la57'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='taa-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='IvyBridge'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='IvyBridge-IBRS'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='IvyBridge-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='IvyBridge-v2'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='KnightsMill'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-4fmaps'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-4vnniw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512er'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512pf'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ss'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='KnightsMill-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-4fmaps'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-4vnniw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512er'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512pf'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ss'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Opteron_G4'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fma4'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xop'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Opteron_G4-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fma4'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xop'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Opteron_G5'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fma4'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='tbm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xop'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Opteron_G5-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fma4'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='tbm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xop'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='SapphireRapids'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-int8'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-tile'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx-vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-fp16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bitalg'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512ifma'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='bus-lock-detect'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrc'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrs'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fzrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='la57'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='serialize'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='taa-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='tsx-ldtrk'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xfd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='SapphireRapids-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-int8'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-tile'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx-vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-fp16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bitalg'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512ifma'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='bus-lock-detect'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrc'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrs'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fzrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='la57'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='serialize'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='taa-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='tsx-ldtrk'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xfd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='SapphireRapids-v2'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-int8'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-tile'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx-vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-fp16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bitalg'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512ifma'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='bus-lock-detect'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fbsdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrc'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrs'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fzrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='la57'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='psdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='sbdr-ssdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='serialize'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='taa-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='tsx-ldtrk'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xfd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='SapphireRapids-v3'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-int8'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-tile'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx-vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-fp16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bitalg'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512ifma'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='bus-lock-detect'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='cldemote'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fbsdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrc'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrs'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fzrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='la57'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='movdir64b'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='movdiri'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='psdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='sbdr-ssdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='serialize'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ss'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='taa-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='tsx-ldtrk'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xfd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='SierraForest'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx-ifma'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx-ne-convert'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx-vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx-vnni-int8'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='bus-lock-detect'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='cmpccxadd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fbsdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrs'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='mcdt-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pbrsb-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='psdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='sbdr-ssdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='serialize'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='SierraForest-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx-ifma'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx-ne-convert'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx-vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx-vnni-int8'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='bus-lock-detect'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='cmpccxadd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fbsdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrs'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='mcdt-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pbrsb-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='psdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='sbdr-ssdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='serialize'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Skylake-Client'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Skylake-Client-IBRS'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Skylake-Client-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Skylake-Client-v2'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Skylake-Client-v3'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Skylake-Client-v4'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Skylake-Server'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Skylake-Server-IBRS'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Skylake-Server-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Skylake-Server-v2'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Skylake-Server-v3'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Skylake-Server-v4'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Skylake-Server-v5'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Snowridge'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='cldemote'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='core-capability'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='movdir64b'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='movdiri'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='mpx'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='split-lock-detect'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Snowridge-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='cldemote'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='core-capability'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='movdir64b'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='movdiri'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='mpx'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='split-lock-detect'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Snowridge-v2'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='cldemote'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='core-capability'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='movdir64b'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='movdiri'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='split-lock-detect'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Snowridge-v3'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='cldemote'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='core-capability'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='movdir64b'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='movdiri'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='split-lock-detect'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Snowridge-v4'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='cldemote'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='movdir64b'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='movdiri'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='athlon'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='3dnow'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='3dnowext'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='athlon-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='3dnow'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='3dnowext'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='core2duo'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ss'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='core2duo-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ss'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='coreduo'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ss'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='coreduo-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ss'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='n270'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ss'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='n270-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ss'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='phenom'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='3dnow'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='3dnowext'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='phenom-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='3dnow'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='3dnowext'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </mode>
Dec  3 18:23:50 compute-0 nova_compute[348325]:  </cpu>
Dec  3 18:23:50 compute-0 nova_compute[348325]:  <memoryBacking supported='yes'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <enum name='sourceType'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <value>file</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <value>anonymous</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <value>memfd</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:  </memoryBacking>
Dec  3 18:23:50 compute-0 nova_compute[348325]:  <devices>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <disk supported='yes'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='diskDevice'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>disk</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>cdrom</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>floppy</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>lun</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='bus'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>fdc</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>scsi</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>virtio</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>usb</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>sata</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='model'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>virtio</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>virtio-transitional</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>virtio-non-transitional</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </disk>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <graphics supported='yes'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='type'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>vnc</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>egl-headless</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>dbus</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </graphics>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <video supported='yes'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='modelType'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>vga</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>cirrus</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>virtio</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>none</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>bochs</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>ramfb</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </video>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <hostdev supported='yes'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='mode'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>subsystem</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='startupPolicy'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>default</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>mandatory</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>requisite</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>optional</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='subsysType'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>usb</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>pci</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>scsi</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='capsType'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='pciBackend'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </hostdev>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <rng supported='yes'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='model'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>virtio</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>virtio-transitional</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>virtio-non-transitional</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='backendModel'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>random</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>egd</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>builtin</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </rng>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <filesystem supported='yes'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='driverType'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>path</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>handle</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>virtiofs</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </filesystem>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <tpm supported='yes'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='model'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>tpm-tis</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>tpm-crb</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='backendModel'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>emulator</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>external</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='backendVersion'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>2.0</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </tpm>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <redirdev supported='yes'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='bus'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>usb</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </redirdev>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <channel supported='yes'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='type'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>pty</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>unix</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </channel>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <crypto supported='yes'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='model'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='type'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>qemu</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='backendModel'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>builtin</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </crypto>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <interface supported='yes'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='backendType'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>default</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>passt</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </interface>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <panic supported='yes'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='model'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>isa</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>hyperv</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </panic>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <console supported='yes'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='type'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>null</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>vc</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>pty</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>dev</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>file</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>pipe</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>stdio</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>udp</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>tcp</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>unix</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>qemu-vdagent</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>dbus</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </console>
Dec  3 18:23:50 compute-0 nova_compute[348325]:  </devices>
Dec  3 18:23:50 compute-0 nova_compute[348325]:  <features>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <gic supported='no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <vmcoreinfo supported='yes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <genid supported='yes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <backingStoreInput supported='yes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <backup supported='yes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <async-teardown supported='yes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <ps2 supported='yes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <sev supported='no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <sgx supported='no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <hyperv supported='yes'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='features'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>relaxed</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>vapic</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>spinlocks</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>vpindex</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>runtime</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>synic</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>stimer</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>reset</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>vendor_id</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>frequencies</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>reenlightenment</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>tlbflush</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>ipi</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>avic</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>emsr_bitmap</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>xmm_input</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <defaults>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <spinlocks>4095</spinlocks>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <stimer_direct>on</stimer_direct>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <tlbflush_direct>on</tlbflush_direct>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <tlbflush_extended>on</tlbflush_extended>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </defaults>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </hyperv>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <launchSecurity supported='yes'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='sectype'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>tdx</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </launchSecurity>
Dec  3 18:23:50 compute-0 nova_compute[348325]:  </features>
Dec  3 18:23:50 compute-0 nova_compute[348325]: </domainCapabilities>
Dec  3 18:23:50 compute-0 nova_compute[348325]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.616 348329 DEBUG nova.virt.libvirt.host [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.623 348329 DEBUG nova.virt.libvirt.host [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Dec  3 18:23:50 compute-0 nova_compute[348325]: <domainCapabilities>
Dec  3 18:23:50 compute-0 nova_compute[348325]:  <path>/usr/libexec/qemu-kvm</path>
Dec  3 18:23:50 compute-0 nova_compute[348325]:  <domain>kvm</domain>
Dec  3 18:23:50 compute-0 nova_compute[348325]:  <machine>pc-i440fx-rhel7.6.0</machine>
Dec  3 18:23:50 compute-0 nova_compute[348325]:  <arch>x86_64</arch>
Dec  3 18:23:50 compute-0 nova_compute[348325]:  <vcpu max='240'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:  <iothreads supported='yes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:  <os supported='yes'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <enum name='firmware'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <loader supported='yes'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='type'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>rom</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>pflash</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='readonly'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>yes</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>no</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='secure'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>no</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </loader>
Dec  3 18:23:50 compute-0 nova_compute[348325]:  </os>
Dec  3 18:23:50 compute-0 nova_compute[348325]:  <cpu>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <mode name='host-passthrough' supported='yes'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='hostPassthroughMigratable'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>on</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>off</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </mode>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <mode name='maximum' supported='yes'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='maximumMigratable'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>on</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>off</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </mode>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <mode name='host-model' supported='yes'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <vendor>AMD</vendor>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='x2apic'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='tsc-deadline'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='hypervisor'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='tsc_adjust'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='spec-ctrl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='stibp'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='ssbd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='cmp_legacy'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='overflow-recov'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='succor'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='ibrs'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='amd-ssbd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='virt-ssbd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='lbrv'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='tsc-scale'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='vmcb-clean'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='flushbyasid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='pause-filter'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='pfthreshold'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='svme-addr-chk'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='disable' name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </mode>
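[Editor's note] The host-model block above is what libvirt has baselined this host to: model EPYC-Rome with a list of require'd features and xsaves disabled; it is the expansion nova gets when cpu_mode is host-model. A short sketch of pulling that baseline out of the document, reusing the parsed `root` from the previous snippet (an assumption, not code from the log):

    # Sketch: extract the host-model baseline (model, vendor, feature
    # policies) from an already-parsed domain-capabilities `root`.
    host_model = root.find("./cpu/mode[@name='host-model']")
    model = host_model.find("model")
    print("baseline:", model.text, "fallback:", model.get("fallback"))
    print("vendor:", host_model.findtext("vendor"))
    for feat in host_model.findall("feature"):
        print(feat.get("policy"), feat.get("name"))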
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <mode name='custom' supported='yes'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Broadwell'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Broadwell-IBRS'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Broadwell-noTSX'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Broadwell-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Broadwell-v2'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Broadwell-v3'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Broadwell-v4'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Cascadelake-Server'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Cascadelake-Server-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Cascadelake-Server-v2'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Cascadelake-Server-v3'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Cascadelake-Server-v4'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Cascadelake-Server-v5'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Cooperlake'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='taa-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Cooperlake-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='taa-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Cooperlake-v2'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='taa-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Denverton'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='mpx'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Denverton-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='mpx'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Denverton-v2'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Denverton-v3'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Dhyana-v2'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='EPYC-Genoa'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amd-psfd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='auto-ibrs'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bitalg'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512ifma'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='la57'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='no-nested-data-bp'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='null-sel-clr-base'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='stibp-always-on'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='EPYC-Genoa-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amd-psfd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='auto-ibrs'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bitalg'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512ifma'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='la57'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='no-nested-data-bp'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='null-sel-clr-base'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='stibp-always-on'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='EPYC-Milan'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='EPYC-Milan-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='EPYC-Milan-v2'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amd-psfd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='no-nested-data-bp'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='null-sel-clr-base'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='stibp-always-on'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='EPYC-Rome'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='EPYC-Rome-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='EPYC-Rome-v2'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='EPYC-Rome-v3'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='EPYC-v3'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='EPYC-v4'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
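[Editor's note] In the custom mode, each named model carries usable='yes'/'no', and unusable ones are followed by a <blockers> element listing the features this host lacks; for example, every EPYC-Rome variant above except EPYC-Rome-v4 is blocked solely by xsaves. A sketch that summarizes usable versus blocked models, again against the parsed `root` assumed earlier:

    # Sketch: report which named CPU models are usable in custom mode on
    # this host, and which missing features block the rest.
    custom = root.find("./cpu/mode[@name='custom']")
    blockers = {b.get("model"): [f.get("name") for f in b.findall("feature")]
                for b in custom.findall("blockers")}
    for m in custom.findall("model"):
        if m.get("usable") == "yes":
            print("usable:", m.text)
        else:
            print("blocked:", m.text, "missing", blockers.get(m.text, []))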
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='GraniteRapids'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-fp16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-int8'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-tile'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx-vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-fp16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bitalg'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512ifma'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='bus-lock-detect'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fbsdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrc'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrs'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fzrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='la57'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='mcdt-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pbrsb-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='prefetchiti'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='psdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='sbdr-ssdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='serialize'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='taa-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='tsx-ldtrk'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xfd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='GraniteRapids-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-fp16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-int8'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-tile'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx-vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-fp16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bitalg'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512ifma'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='bus-lock-detect'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fbsdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrc'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrs'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fzrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='la57'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='mcdt-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pbrsb-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='prefetchiti'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='psdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='sbdr-ssdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='serialize'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='taa-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='tsx-ldtrk'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xfd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='GraniteRapids-v2'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-fp16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-int8'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-tile'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx-vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx10'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx10-128'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx10-256'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx10-512'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-fp16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bitalg'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512ifma'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='bus-lock-detect'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='cldemote'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fbsdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrc'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrs'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fzrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='la57'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='mcdt-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='movdir64b'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='movdiri'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pbrsb-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='prefetchiti'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='psdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='sbdr-ssdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='serialize'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ss'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='taa-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='tsx-ldtrk'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xfd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Haswell'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Haswell-IBRS'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Haswell-noTSX'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Haswell-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Haswell-v2'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Haswell-v3'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Haswell-v4'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Icelake-Server'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bitalg'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='la57'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Icelake-Server-noTSX'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bitalg'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='la57'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Icelake-Server-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bitalg'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='la57'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Icelake-Server-v2'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bitalg'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='la57'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Icelake-Server-v3'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bitalg'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='la57'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='taa-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Icelake-Server-v4'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bitalg'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512ifma'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='la57'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='taa-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Icelake-Server-v5'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bitalg'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512ifma'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='la57'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='taa-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Icelake-Server-v6'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bitalg'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512ifma'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='la57'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='taa-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Icelake-Server-v7'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bitalg'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512ifma'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='la57'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='taa-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='IvyBridge'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='IvyBridge-IBRS'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='IvyBridge-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='IvyBridge-v2'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='KnightsMill'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-4fmaps'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-4vnniw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512er'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512pf'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ss'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='KnightsMill-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-4fmaps'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-4vnniw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512er'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512pf'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ss'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Opteron_G4'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fma4'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xop'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Opteron_G4-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fma4'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xop'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Opteron_G5'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fma4'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='tbm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xop'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Opteron_G5-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fma4'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='tbm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xop'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='SapphireRapids'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-int8'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-tile'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx-vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-fp16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bitalg'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512ifma'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='bus-lock-detect'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrc'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrs'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fzrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='la57'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='serialize'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='taa-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='tsx-ldtrk'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xfd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='SapphireRapids-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-int8'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-tile'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx-vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-fp16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bitalg'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512ifma'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='bus-lock-detect'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrc'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrs'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fzrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='la57'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='serialize'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='taa-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='tsx-ldtrk'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xfd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='SapphireRapids-v2'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-int8'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-tile'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx-vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-fp16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bitalg'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512ifma'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='bus-lock-detect'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fbsdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrc'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrs'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fzrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='la57'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='psdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='sbdr-ssdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='serialize'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='taa-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='tsx-ldtrk'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xfd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='SapphireRapids-v3'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-int8'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-tile'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx-vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-fp16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bitalg'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512ifma'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='bus-lock-detect'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='cldemote'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fbsdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrc'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrs'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fzrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='la57'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='movdir64b'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='movdiri'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='psdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='sbdr-ssdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='serialize'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ss'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='taa-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='tsx-ldtrk'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xfd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='SierraForest'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx-ifma'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx-ne-convert'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx-vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx-vnni-int8'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='bus-lock-detect'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='cmpccxadd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fbsdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrs'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='mcdt-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pbrsb-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='psdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='sbdr-ssdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='serialize'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='SierraForest-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx-ifma'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx-ne-convert'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx-vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx-vnni-int8'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='bus-lock-detect'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='cmpccxadd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fbsdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrs'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='mcdt-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pbrsb-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='psdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='sbdr-ssdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='serialize'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Skylake-Client'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Skylake-Client-IBRS'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Skylake-Client-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Skylake-Client-v2'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Skylake-Client-v3'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Skylake-Client-v4'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Skylake-Server'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Skylake-Server-IBRS'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Skylake-Server-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Skylake-Server-v2'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Skylake-Server-v3'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Skylake-Server-v4'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Skylake-Server-v5'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Snowridge'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='cldemote'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='core-capability'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='movdir64b'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='movdiri'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='mpx'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='split-lock-detect'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Snowridge-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='cldemote'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='core-capability'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='movdir64b'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='movdiri'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='mpx'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='split-lock-detect'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Snowridge-v2'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='cldemote'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='core-capability'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='movdir64b'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='movdiri'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='split-lock-detect'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Snowridge-v3'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='cldemote'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='core-capability'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='movdir64b'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='movdiri'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='split-lock-detect'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Snowridge-v4'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='cldemote'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='movdir64b'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='movdiri'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='athlon'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='3dnow'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='3dnowext'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='athlon-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='3dnow'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='3dnowext'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='core2duo'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ss'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='core2duo-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ss'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='coreduo'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ss'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='coreduo-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ss'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='n270'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ss'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='n270-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ss'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='phenom'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='3dnow'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='3dnowext'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='phenom-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='3dnow'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='3dnowext'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </mode>
Dec  3 18:23:50 compute-0 nova_compute[348325]:  </cpu>
Dec  3 18:23:50 compute-0 nova_compute[348325]:  <memoryBacking supported='yes'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <enum name='sourceType'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <value>file</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <value>anonymous</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <value>memfd</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:  </memoryBacking>
Dec  3 18:23:50 compute-0 nova_compute[348325]:  <devices>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <disk supported='yes'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='diskDevice'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>disk</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>cdrom</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>floppy</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>lun</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='bus'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>ide</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>fdc</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>scsi</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>virtio</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>usb</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>sata</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='model'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>virtio</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>virtio-transitional</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>virtio-non-transitional</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </disk>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <graphics supported='yes'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='type'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>vnc</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>egl-headless</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>dbus</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </graphics>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <video supported='yes'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='modelType'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>vga</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>cirrus</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>virtio</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>none</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>bochs</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>ramfb</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </video>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <hostdev supported='yes'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='mode'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>subsystem</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='startupPolicy'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>default</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>mandatory</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>requisite</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>optional</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='subsysType'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>usb</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>pci</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>scsi</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='capsType'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='pciBackend'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </hostdev>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <rng supported='yes'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='model'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>virtio</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>virtio-transitional</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>virtio-non-transitional</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='backendModel'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>random</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>egd</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>builtin</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </rng>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <filesystem supported='yes'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='driverType'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>path</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>handle</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>virtiofs</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </filesystem>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <tpm supported='yes'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='model'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>tpm-tis</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>tpm-crb</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='backendModel'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>emulator</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>external</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='backendVersion'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>2.0</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </tpm>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <redirdev supported='yes'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='bus'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>usb</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </redirdev>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <channel supported='yes'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='type'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>pty</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>unix</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </channel>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <crypto supported='yes'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='model'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='type'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>qemu</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='backendModel'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>builtin</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </crypto>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <interface supported='yes'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='backendType'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>default</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>passt</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </interface>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <panic supported='yes'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='model'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>isa</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>hyperv</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </panic>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <console supported='yes'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='type'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>null</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>vc</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>pty</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>dev</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>file</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>pipe</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>stdio</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>udp</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>tcp</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>unix</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>qemu-vdagent</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>dbus</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </console>
Dec  3 18:23:50 compute-0 nova_compute[348325]:  </devices>
Dec  3 18:23:50 compute-0 nova_compute[348325]:  <features>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <gic supported='no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <vmcoreinfo supported='yes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <genid supported='yes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <backingStoreInput supported='yes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <backup supported='yes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <async-teardown supported='yes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <ps2 supported='yes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <sev supported='no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <sgx supported='no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <hyperv supported='yes'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='features'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>relaxed</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>vapic</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>spinlocks</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>vpindex</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>runtime</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>synic</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>stimer</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>reset</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>vendor_id</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>frequencies</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>reenlightenment</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>tlbflush</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>ipi</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>avic</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>emsr_bitmap</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>xmm_input</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <defaults>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <spinlocks>4095</spinlocks>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <stimer_direct>on</stimer_direct>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <tlbflush_direct>on</tlbflush_direct>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <tlbflush_extended>on</tlbflush_extended>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </defaults>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </hyperv>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <launchSecurity supported='yes'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='sectype'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>tdx</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </launchSecurity>
Dec  3 18:23:50 compute-0 nova_compute[348325]:  </features>
Dec  3 18:23:50 compute-0 nova_compute[348325]: </domainCapabilities>
Dec  3 18:23:50 compute-0 nova_compute[348325]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec  3 18:23:50 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.746 348329 DEBUG nova.virt.libvirt.host [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Dec  3 18:23:50 compute-0 nova_compute[348325]: <domainCapabilities>
Dec  3 18:23:50 compute-0 nova_compute[348325]:  <path>/usr/libexec/qemu-kvm</path>
Dec  3 18:23:50 compute-0 nova_compute[348325]:  <domain>kvm</domain>
Dec  3 18:23:50 compute-0 nova_compute[348325]:  <machine>pc-q35-rhel9.8.0</machine>
Dec  3 18:23:50 compute-0 nova_compute[348325]:  <arch>x86_64</arch>
Dec  3 18:23:50 compute-0 nova_compute[348325]:  <vcpu max='4096'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:  <iothreads supported='yes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:  <os supported='yes'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <enum name='firmware'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <value>efi</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <loader supported='yes'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='type'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>rom</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>pflash</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='readonly'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>yes</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>no</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='secure'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>yes</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>no</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </loader>
Dec  3 18:23:50 compute-0 nova_compute[348325]:  </os>
Dec  3 18:23:50 compute-0 nova_compute[348325]:  <cpu>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <mode name='host-passthrough' supported='yes'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='hostPassthroughMigratable'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>on</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>off</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </mode>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <mode name='maximum' supported='yes'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <enum name='maximumMigratable'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>on</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <value>off</value>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </mode>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <mode name='host-model' supported='yes'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <vendor>AMD</vendor>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='x2apic'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='tsc-deadline'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='hypervisor'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='tsc_adjust'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='spec-ctrl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='stibp'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='ssbd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='cmp_legacy'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='overflow-recov'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='succor'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='ibrs'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='amd-ssbd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='virt-ssbd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='lbrv'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='tsc-scale'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='vmcb-clean'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='flushbyasid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='pause-filter'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='pfthreshold'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='svme-addr-chk'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <feature policy='disable' name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    </mode>
Dec  3 18:23:50 compute-0 nova_compute[348325]:    <mode name='custom' supported='yes'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Broadwell'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Broadwell-IBRS'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Broadwell-noTSX'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Broadwell-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Broadwell-v2'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Broadwell-v3'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Broadwell-v4'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Cascadelake-Server'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Cascadelake-Server-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Cascadelake-Server-v2'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Cascadelake-Server-v3'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Cascadelake-Server-v4'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Cascadelake-Server-v5'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Cooperlake'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='taa-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Cooperlake-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='taa-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Cooperlake-v2'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='taa-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Denverton'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='mpx'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Denverton-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='mpx'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Denverton-v2'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Denverton-v3'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Dhyana-v2'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='EPYC-Genoa'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amd-psfd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='auto-ibrs'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bitalg'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512ifma'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='la57'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='no-nested-data-bp'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='null-sel-clr-base'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='stibp-always-on'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='EPYC-Genoa-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amd-psfd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='auto-ibrs'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bitalg'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512ifma'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='la57'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='no-nested-data-bp'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='null-sel-clr-base'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='stibp-always-on'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='EPYC-Milan'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='EPYC-Milan-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='EPYC-Milan-v2'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amd-psfd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='no-nested-data-bp'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='null-sel-clr-base'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='stibp-always-on'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='EPYC-Rome'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='EPYC-Rome-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='EPYC-Rome-v2'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='EPYC-Rome-v3'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='EPYC-v3'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='EPYC-v4'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='GraniteRapids'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-fp16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-int8'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-tile'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx-vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-fp16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bitalg'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512ifma'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='bus-lock-detect'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fbsdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrc'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrs'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fzrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='la57'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='mcdt-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pbrsb-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='prefetchiti'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='psdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='sbdr-ssdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='serialize'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='taa-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='tsx-ldtrk'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xfd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='GraniteRapids-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-fp16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-int8'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-tile'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx-vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-fp16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bitalg'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512ifma'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='bus-lock-detect'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fbsdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrc'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrs'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fzrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='la57'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='mcdt-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pbrsb-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='prefetchiti'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='psdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='sbdr-ssdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='serialize'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='taa-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='tsx-ldtrk'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xfd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='GraniteRapids-v2'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-fp16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-int8'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-tile'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx-vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx10'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx10-128'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx10-256'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx10-512'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-fp16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bitalg'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512ifma'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='bus-lock-detect'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='cldemote'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fbsdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrc'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrs'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fzrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='la57'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='mcdt-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='movdir64b'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='movdiri'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pbrsb-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='prefetchiti'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='psdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='sbdr-ssdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='serialize'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ss'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='taa-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='tsx-ldtrk'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xfd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Haswell'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Haswell-IBRS'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Haswell-noTSX'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Haswell-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Haswell-v2'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Haswell-v3'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Haswell-v4'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Icelake-Server'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bitalg'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='la57'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Icelake-Server-noTSX'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bitalg'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='la57'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Icelake-Server-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bitalg'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='la57'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Icelake-Server-v2'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bitalg'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='la57'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Icelake-Server-v3'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bitalg'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='la57'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='taa-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Icelake-Server-v4'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bitalg'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512ifma'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='la57'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='taa-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Icelake-Server-v5'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bitalg'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512ifma'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='la57'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='taa-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Icelake-Server-v6'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bitalg'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512ifma'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='la57'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='taa-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Icelake-Server-v7'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bitalg'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512ifma'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='la57'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='taa-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='IvyBridge'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='IvyBridge-IBRS'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='IvyBridge-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='IvyBridge-v2'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='KnightsMill'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-4fmaps'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-4vnniw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512er'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512pf'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ss'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='KnightsMill-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-4fmaps'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-4vnniw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512er'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512pf'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ss'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Opteron_G4'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fma4'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xop'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Opteron_G4-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fma4'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xop'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Opteron_G5'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fma4'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='tbm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xop'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Opteron_G5-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fma4'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='tbm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xop'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='SapphireRapids'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-int8'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-tile'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx-vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-fp16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bitalg'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512ifma'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='bus-lock-detect'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrc'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrs'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fzrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='la57'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='serialize'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='taa-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='tsx-ldtrk'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xfd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='SapphireRapids-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-int8'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-tile'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx-vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-fp16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bitalg'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512ifma'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='bus-lock-detect'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrc'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrs'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fzrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='la57'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='serialize'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='taa-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='tsx-ldtrk'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xfd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='SapphireRapids-v2'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-int8'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-tile'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx-vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-fp16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bitalg'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512ifma'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='bus-lock-detect'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fbsdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrc'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrs'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fzrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='la57'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='psdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='sbdr-ssdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='serialize'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='taa-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='tsx-ldtrk'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xfd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='SapphireRapids-v3'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-int8'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='amx-tile'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx-vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-bf16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-fp16'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512-vpopcntdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bitalg'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512ifma'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vbmi2'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx512vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='bus-lock-detect'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='cldemote'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fbsdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrc'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrs'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fzrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='la57'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='movdir64b'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='movdiri'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='psdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='sbdr-ssdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='serialize'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ss'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='taa-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='tsx-ldtrk'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xfd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='SierraForest'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx-ifma'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx-ne-convert'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx-vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx-vnni-int8'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='bus-lock-detect'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='cmpccxadd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fbsdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrs'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='mcdt-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pbrsb-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='psdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='sbdr-ssdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='serialize'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='SierraForest-v1'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx-ifma'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx-ne-convert'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx-vnni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='avx-vnni-int8'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='bus-lock-detect'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='cmpccxadd'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fbsdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='fsrs'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='ibrs-all'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='mcdt-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pbrsb-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='psdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='sbdr-ssdp-no'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='serialize'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vaes'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='vpclmulqdq'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Skylake-Client'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      <blockers model='Skylake-Client-IBRS'>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:50 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <blockers model='Skylake-Client-v1'>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <blockers model='Skylake-Client-v2'>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <blockers model='Skylake-Client-v3'>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <blockers model='Skylake-Client-v4'>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <blockers model='Skylake-Server'>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <blockers model='Skylake-Server-IBRS'>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <blockers model='Skylake-Server-v1'>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <blockers model='Skylake-Server-v2'>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='hle'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='rtm'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <blockers model='Skylake-Server-v3'>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <blockers model='Skylake-Server-v4'>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <blockers model='Skylake-Server-v5'>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='avx512bw'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='avx512cd'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='avx512dq'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='avx512f'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='avx512vl'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='invpcid'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='pcid'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='pku'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <blockers model='Snowridge'>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='cldemote'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='core-capability'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='movdir64b'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='movdiri'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='mpx'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='split-lock-detect'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <blockers model='Snowridge-v1'>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='cldemote'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='core-capability'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='movdir64b'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='movdiri'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='mpx'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='split-lock-detect'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <blockers model='Snowridge-v2'>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='cldemote'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='core-capability'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='movdir64b'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='movdiri'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='split-lock-detect'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <blockers model='Snowridge-v3'>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='cldemote'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='core-capability'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='movdir64b'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='movdiri'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='split-lock-detect'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <blockers model='Snowridge-v4'>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='cldemote'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='erms'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='gfni'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='movdir64b'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='movdiri'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='xsaves'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <blockers model='athlon'>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='3dnow'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='3dnowext'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <blockers model='athlon-v1'>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='3dnow'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='3dnowext'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <blockers model='core2duo'>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='ss'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <blockers model='core2duo-v1'>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='ss'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <blockers model='coreduo'>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='ss'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <blockers model='coreduo-v1'>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='ss'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <blockers model='n270'>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='ss'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <blockers model='n270-v1'>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='ss'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <blockers model='phenom'>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='3dnow'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='3dnowext'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <blockers model='phenom-v1'>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='3dnow'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <feature name='3dnowext'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      </blockers>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  3 18:23:51 compute-0 nova_compute[348325]:    </mode>
Dec  3 18:23:51 compute-0 nova_compute[348325]:  </cpu>
Dec  3 18:23:51 compute-0 nova_compute[348325]:  <memoryBacking supported='yes'>
Dec  3 18:23:51 compute-0 nova_compute[348325]:    <enum name='sourceType'>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <value>file</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <value>anonymous</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <value>memfd</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:    </enum>
Dec  3 18:23:51 compute-0 nova_compute[348325]:  </memoryBacking>
Dec  3 18:23:51 compute-0 nova_compute[348325]:  <devices>
Dec  3 18:23:51 compute-0 nova_compute[348325]:    <disk supported='yes'>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <enum name='diskDevice'>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>disk</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>cdrom</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>floppy</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>lun</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <enum name='bus'>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>fdc</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>scsi</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>virtio</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>usb</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>sata</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <enum name='model'>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>virtio</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>virtio-transitional</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>virtio-non-transitional</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:51 compute-0 nova_compute[348325]:    </disk>
Dec  3 18:23:51 compute-0 nova_compute[348325]:    <graphics supported='yes'>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <enum name='type'>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>vnc</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>egl-headless</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>dbus</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:51 compute-0 nova_compute[348325]:    </graphics>
Dec  3 18:23:51 compute-0 nova_compute[348325]:    <video supported='yes'>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <enum name='modelType'>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>vga</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>cirrus</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>virtio</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>none</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>bochs</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>ramfb</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:51 compute-0 nova_compute[348325]:    </video>
Dec  3 18:23:51 compute-0 nova_compute[348325]:    <hostdev supported='yes'>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <enum name='mode'>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>subsystem</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <enum name='startupPolicy'>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>default</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>mandatory</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>requisite</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>optional</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <enum name='subsysType'>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>usb</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>pci</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>scsi</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <enum name='capsType'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <enum name='pciBackend'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:    </hostdev>
Dec  3 18:23:51 compute-0 nova_compute[348325]:    <rng supported='yes'>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <enum name='model'>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>virtio</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>virtio-transitional</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>virtio-non-transitional</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <enum name='backendModel'>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>random</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>egd</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>builtin</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:51 compute-0 nova_compute[348325]:    </rng>
Dec  3 18:23:51 compute-0 nova_compute[348325]:    <filesystem supported='yes'>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <enum name='driverType'>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>path</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>handle</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>virtiofs</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:51 compute-0 nova_compute[348325]:    </filesystem>
Dec  3 18:23:51 compute-0 nova_compute[348325]:    <tpm supported='yes'>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <enum name='model'>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>tpm-tis</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>tpm-crb</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <enum name='backendModel'>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>emulator</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>external</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <enum name='backendVersion'>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>2.0</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:51 compute-0 nova_compute[348325]:    </tpm>
Dec  3 18:23:51 compute-0 nova_compute[348325]:    <redirdev supported='yes'>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <enum name='bus'>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>usb</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:51 compute-0 nova_compute[348325]:    </redirdev>
Dec  3 18:23:51 compute-0 nova_compute[348325]:    <channel supported='yes'>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <enum name='type'>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>pty</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>unix</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:51 compute-0 nova_compute[348325]:    </channel>
Dec  3 18:23:51 compute-0 nova_compute[348325]:    <crypto supported='yes'>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <enum name='model'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <enum name='type'>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>qemu</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <enum name='backendModel'>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>builtin</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:51 compute-0 nova_compute[348325]:    </crypto>
Dec  3 18:23:51 compute-0 nova_compute[348325]:    <interface supported='yes'>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <enum name='backendType'>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>default</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>passt</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:51 compute-0 nova_compute[348325]:    </interface>
Dec  3 18:23:51 compute-0 nova_compute[348325]:    <panic supported='yes'>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <enum name='model'>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>isa</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>hyperv</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:51 compute-0 nova_compute[348325]:    </panic>
Dec  3 18:23:51 compute-0 nova_compute[348325]:    <console supported='yes'>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <enum name='type'>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>null</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>vc</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>pty</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>dev</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>file</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>pipe</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>stdio</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>udp</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>tcp</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>unix</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>qemu-vdagent</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>dbus</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:51 compute-0 nova_compute[348325]:    </console>
Dec  3 18:23:51 compute-0 nova_compute[348325]:  </devices>
Dec  3 18:23:51 compute-0 nova_compute[348325]:  <features>
Dec  3 18:23:51 compute-0 nova_compute[348325]:    <gic supported='no'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:    <vmcoreinfo supported='yes'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:    <genid supported='yes'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:    <backingStoreInput supported='yes'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:    <backup supported='yes'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:    <async-teardown supported='yes'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:    <ps2 supported='yes'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:    <sev supported='no'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:    <sgx supported='no'/>
Dec  3 18:23:51 compute-0 nova_compute[348325]:    <hyperv supported='yes'>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <enum name='features'>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>relaxed</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>vapic</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>spinlocks</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>vpindex</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>runtime</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>synic</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>stimer</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>reset</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>vendor_id</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>frequencies</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>reenlightenment</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>tlbflush</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>ipi</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>avic</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>emsr_bitmap</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>xmm_input</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <defaults>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <spinlocks>4095</spinlocks>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <stimer_direct>on</stimer_direct>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <tlbflush_direct>on</tlbflush_direct>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <tlbflush_extended>on</tlbflush_extended>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      </defaults>
Dec  3 18:23:51 compute-0 nova_compute[348325]:    </hyperv>
Dec  3 18:23:51 compute-0 nova_compute[348325]:    <launchSecurity supported='yes'>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      <enum name='sectype'>
Dec  3 18:23:51 compute-0 nova_compute[348325]:        <value>tdx</value>
Dec  3 18:23:51 compute-0 nova_compute[348325]:      </enum>
Dec  3 18:23:51 compute-0 nova_compute[348325]:    </launchSecurity>
Dec  3 18:23:51 compute-0 nova_compute[348325]:  </features>
Dec  3 18:23:51 compute-0 nova_compute[348325]: </domainCapabilities>
Dec  3 18:23:51 compute-0 nova_compute[348325]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
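
The dump above is libvirt's domainCapabilities XML as fetched by nova's host driver. A minimal sketch of retrieving the same document and listing the CPU models marked usable='no' together with their blocking features, assuming a local qemu:///system connection and the libvirt Python bindings (error handling omitted):

    # Fetch domain capabilities and report unusable CPU models with the
    # features that block them, mirroring the <model>/<blockers> pairs above.
    import xml.etree.ElementTree as ET

    import libvirt

    conn = libvirt.open('qemu:///system')
    caps_xml = conn.getDomainCapabilities(None, 'x86_64', None, 'kvm')
    root = ET.fromstring(caps_xml)

    # <mode name='custom'> holds one <model usable='yes|no'> per CPU model,
    # followed by a <blockers model='...'> element for the unusable ones.
    for mode in root.findall(".//cpu/mode[@name='custom']"):
        for model in mode.findall('model'):
            if model.get('usable') == 'no':
                blockers = mode.find(f"blockers[@model='{model.text}']")
                missing = ([f.get('name') for f in blockers.findall('feature')]
                           if blockers is not None else [])
                print(model.text, 'blocked by:', ', '.join(missing))

    conn.close()

On this host that loop would print, for example, Skylake-Client blocked by erms, hle, invpcid, pcid, rtm, matching the blockers listed in the dump.
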
Dec  3 18:23:51 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.866 348329 DEBUG nova.virt.libvirt.host [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Dec  3 18:23:51 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.867 348329 DEBUG nova.virt.libvirt.host [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Dec  3 18:23:51 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.867 348329 INFO nova.virt.libvirt.host [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Secure Boot support detected#033[00m
Dec  3 18:23:51 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.870 348329 INFO nova.virt.libvirt.driver [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
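
The INFO line above records a configuration decision: with live_migration_permit_post_copy set and the host able to do post-copy, the driver prefers post-copy and leaves auto-converge off. A paraphrase of that decision (illustrative names, not nova's actual code):

    # Post-copy and auto-converge are alternative ways to force a live
    # migration to converge; when post-copy is permitted and supported,
    # it wins and auto-converge is not used.
    def choose_live_migration_aid(permit_post_copy: bool,
                                  host_supports_post_copy: bool,
                                  permit_auto_converge: bool) -> str:
        if permit_post_copy and host_supports_post_copy:
            return 'post-copy'        # what this host selects
        if permit_auto_converge:
            return 'auto-converge'    # throttles the guest CPU instead
        return 'none'                 # plain pre-copy only

    # Matches the log line: permit_post_copy=True and the host supports it.
    assert choose_live_migration_aid(True, True, True) == 'post-copy'
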
Dec  3 18:23:51 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.886 348329 DEBUG nova.virt.libvirt.driver [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
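
The vTPM check succeeds here because the capabilities dump earlier advertises <tpm supported='yes'> with models tpm-tis/tpm-crb and an 'emulator' backend at version 2.0. A sketch of that test against the same XML (this mirrors the idea of the driver's check, not its exact code; caps_xml is assumed to hold the dumped document):

    # Return True when the host can offer an emulated (swtpm-backed) TPM,
    # judged from the <devices><tpm> element of domainCapabilities.
    import xml.etree.ElementTree as ET

    def supports_emulated_tpm(caps_xml: str) -> bool:
        root = ET.fromstring(caps_xml)
        tpm = root.find('.//devices/tpm')
        if tpm is None or tpm.get('supported') != 'yes':
            return False
        backends = [v.text for v in
                    tpm.findall("enum[@name='backendModel']/value")]
        return 'emulator' in backends
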
Dec  3 18:23:51 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.905 348329 INFO nova.virt.node [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Determined node identity 00cd1895-22aa-49c6-bdb2-0991af662704 from /var/lib/nova/compute_id#033[00m
Dec  3 18:23:51 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.943 348329 WARNING nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Compute nodes ['00cd1895-22aa-49c6-bdb2-0991af662704'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Dec  3 18:23:51 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.981 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Dec  3 18:23:51 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.998 348329 WARNING nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Dec  3 18:23:51 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.999 348329 DEBUG oslo_concurrency.lockutils [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:23:51 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.999 348329 DEBUG oslo_concurrency.lockutils [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:23:51 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.999 348329 DEBUG oslo_concurrency.lockutils [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
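
The Acquiring/acquired/released trio above is the standard logging emitted by oslo.concurrency's lock wrapper, including the waited and held durations. A minimal sketch of a method guarded the same way (function name is illustrative):

    # The decorator serializes callers on the named in-process lock and
    # logs the acquire/wait/held lines seen above.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def clean_compute_node_cache():
        # Runs with the "compute_resources" lock held; concurrent callers
        # block until release, and the waited/held times get logged.
        pass
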
Dec  3 18:23:51 compute-0 nova_compute[348325]: 2025-12-03 18:23:50.999 348329 DEBUG nova.compute.resource_tracker [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 18:23:51 compute-0 nova_compute[348325]: 2025-12-03 18:23:51.000 348329 DEBUG oslo_concurrency.processutils [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:23:51 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:23:51 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2764938889' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:23:51 compute-0 nova_compute[348325]: 2025-12-03 18:23:51.459 348329 DEBUG oslo_concurrency.processutils [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
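
The resource tracker shells out to ceph to size the RBD-backed disk pool. A sketch running the exact command from the log and reading the cluster totals; the command, client id, and conf path are taken from the line above, and the 'stats' layout is the standard `ceph df --format=json` output:

    # Run `ceph df` as client.openstack and print total/available capacity.
    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True, capture_output=True, text=True,
    ).stdout
    stats = json.loads(out)['stats']
    print('total GiB:', stats['total_bytes'] / 2**30,
          'avail GiB:', stats['total_avail_bytes'] / 2**30)
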
Dec  3 18:23:51 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Dec  3 18:23:51 compute-0 systemd[1]: Started libvirt nodedev daemon.
Dec  3 18:23:51 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:23:51 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:23:51 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 18:23:51 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:23:51 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:23:51 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 18:23:51 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:23:51 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev bb90280b-453d-439b-9c63-f39dc61e8b18 does not exist
Dec  3 18:23:51 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 99257ae9-fc5a-464d-8ea4-6adde605d37e does not exist
Dec  3 18:23:51 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 8a3cb338-4c07-4ba7-b514-a89e56b06008 does not exist
Dec  3 18:23:51 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 18:23:51 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 18:23:51 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 18:23:51 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:23:51 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:23:51 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:23:51 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:23:51 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:23:51 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
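
The handle_command/dispatch pairs above are mon commands arriving over librados, from both client.openstack and the mgr. A sketch sending the same "df" command from Python with python-rados (the conffile path and client name are assumptions for this host):

    # Issue a monitor command directly; the monitor will log the same
    # handle_command + audit dispatch lines seen above.
    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          name='client.openstack')
    cluster.connect()
    ret, outbuf, outs = cluster.mon_command(
        json.dumps({'prefix': 'df', 'format': 'json'}), b'')
    cluster.shutdown()
    print(ret, json.loads(outbuf)['stats']['total_avail_bytes'])
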
Dec  3 18:23:52 compute-0 nova_compute[348325]: 2025-12-03 18:23:52.056 348329 WARNING nova.virt.libvirt.driver [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 18:23:52 compute-0 nova_compute[348325]: 2025-12-03 18:23:52.057 348329 DEBUG nova.compute.resource_tracker [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4577MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 18:23:52 compute-0 nova_compute[348325]: 2025-12-03 18:23:52.057 348329 DEBUG oslo_concurrency.lockutils [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:23:52 compute-0 nova_compute[348325]: 2025-12-03 18:23:52.057 348329 DEBUG oslo_concurrency.lockutils [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:23:52 compute-0 nova_compute[348325]: 2025-12-03 18:23:52.080 348329 WARNING nova.compute.resource_tracker [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] No compute node record for compute-0.ctlplane.example.com:00cd1895-22aa-49c6-bdb2-0991af662704: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 00cd1895-22aa-49c6-bdb2-0991af662704 could not be found.#033[00m
Dec  3 18:23:52 compute-0 nova_compute[348325]: 2025-12-03 18:23:52.109 348329 INFO nova.compute.resource_tracker [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 00cd1895-22aa-49c6-bdb2-0991af662704#033[00m
Dec  3 18:23:52 compute-0 nova_compute[348325]: 2025-12-03 18:23:52.188 348329 DEBUG nova.compute.resource_tracker [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 18:23:52 compute-0 nova_compute[348325]: 2025-12-03 18:23:52.188 348329 DEBUG nova.compute.resource_tracker [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 18:23:52 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v731: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:23:52 compute-0 podman[348932]: 2025-12-03 18:23:52.616719474 +0000 UTC m=+0.086073939 container create fec86440fb3e67603eb5893aa6c0a565af1454f4fdb6239a788f208602c05af6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec  3 18:23:52 compute-0 podman[348932]: 2025-12-03 18:23:52.585746249 +0000 UTC m=+0.055100794 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:23:52 compute-0 systemd[1]: Started libpod-conmon-fec86440fb3e67603eb5893aa6c0a565af1454f4fdb6239a788f208602c05af6.scope.
Dec  3 18:23:52 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:23:52 compute-0 podman[348932]: 2025-12-03 18:23:52.776258594 +0000 UTC m=+0.245613129 container init fec86440fb3e67603eb5893aa6c0a565af1454f4fdb6239a788f208602c05af6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_shamir, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:23:52 compute-0 podman[348932]: 2025-12-03 18:23:52.792515711 +0000 UTC m=+0.261870206 container start fec86440fb3e67603eb5893aa6c0a565af1454f4fdb6239a788f208602c05af6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_shamir, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec  3 18:23:52 compute-0 podman[348932]: 2025-12-03 18:23:52.799212325 +0000 UTC m=+0.268566840 container attach fec86440fb3e67603eb5893aa6c0a565af1454f4fdb6239a788f208602c05af6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:23:52 compute-0 modest_shamir[348947]: 167 167
Dec  3 18:23:52 compute-0 systemd[1]: libpod-fec86440fb3e67603eb5893aa6c0a565af1454f4fdb6239a788f208602c05af6.scope: Deactivated successfully.
Dec  3 18:23:52 compute-0 podman[348932]: 2025-12-03 18:23:52.81299396 +0000 UTC m=+0.282348465 container died fec86440fb3e67603eb5893aa6c0a565af1454f4fdb6239a788f208602c05af6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec  3 18:23:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-d417a2f18812faf1b4d4a7878e4baa27152ba85fa11dd9cd0572e4b8607dbad6-merged.mount: Deactivated successfully.
Dec  3 18:23:52 compute-0 podman[348932]: 2025-12-03 18:23:52.888927402 +0000 UTC m=+0.358281887 container remove fec86440fb3e67603eb5893aa6c0a565af1454f4fdb6239a788f208602c05af6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_shamir, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:23:52 compute-0 systemd[1]: libpod-conmon-fec86440fb3e67603eb5893aa6c0a565af1454f4fdb6239a788f208602c05af6.scope: Deactivated successfully.
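
The lines above are one complete podman lifecycle for the short-lived ceph helper container modest_shamir: image pull, create, conmon scope start, init, start, attach, exit (died), remove, scope teardown, all inside roughly 0.3 s. A minimal sketch for watching this sequence live, assuming a local podman 4.x CLI (JSON field names may vary by version):

    # Sketch: stream podman lifecycle events as JSON; Status values match the
    # create/init/start/attach/died/remove sequence journald recorded above.
    import json
    import subprocess

    proc = subprocess.Popen(
        ["podman", "events", "--format", "json", "--filter", "type=container"],
        stdout=subprocess.PIPE, text=True,
    )
    for line in proc.stdout:          # one JSON object per event
        ev = json.loads(line)
        print(ev.get("Time"), ev.get("Status"), ev.get("Name"),
              str(ev.get("ID", ""))[:12])
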
Dec  3 18:23:53 compute-0 podman[348972]: 2025-12-03 18:23:53.17221265 +0000 UTC m=+0.080365021 container create 9213f62413afa9600f5b5e3d8ca0f5d00d7a2ee73f3d3aa51e20775d56ebcb73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_liskov, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  3 18:23:53 compute-0 nova_compute[348325]: 2025-12-03 18:23:53.206 348329 INFO nova.scheduler.client.report [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [req-84520fdb-68c5-4957-9547-7da71939855f] Created resource provider record via placement API for resource provider with UUID 00cd1895-22aa-49c6-bdb2-0991af662704 and name compute-0.ctlplane.example.com.
Dec  3 18:23:53 compute-0 podman[348972]: 2025-12-03 18:23:53.138575259 +0000 UTC m=+0.046727620 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:23:53 compute-0 systemd[1]: Started libpod-conmon-9213f62413afa9600f5b5e3d8ca0f5d00d7a2ee73f3d3aa51e20775d56ebcb73.scope.
Dec  3 18:23:53 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:23:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f7ce47bff24dd88028a5e9ec699b8890607959bc46371396fa436259e350d88/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:23:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f7ce47bff24dd88028a5e9ec699b8890607959bc46371396fa436259e350d88/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:23:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f7ce47bff24dd88028a5e9ec699b8890607959bc46371396fa436259e350d88/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:23:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f7ce47bff24dd88028a5e9ec699b8890607959bc46371396fa436259e350d88/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:23:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f7ce47bff24dd88028a5e9ec699b8890607959bc46371396fa436259e350d88/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
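
The xfs messages above are informational: without the bigtime feature, xfs inode timestamps stop at the signed 32-bit epoch limit the kernel prints as 0x7fffffff. A quick check of the exact cutoff:

    # The 0x7fffffff in the messages above is the signed 32-bit time_t
    # ceiling; computing the exact cutoff date:
    from datetime import datetime, timezone

    limit = 0x7FFFFFFF              # 2147483647 seconds after the Unix epoch
    print(datetime.fromtimestamp(limit, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00
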
Dec  3 18:23:53 compute-0 podman[348972]: 2025-12-03 18:23:53.303497741 +0000 UTC m=+0.211650082 container init 9213f62413afa9600f5b5e3d8ca0f5d00d7a2ee73f3d3aa51e20775d56ebcb73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_liskov, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:23:53 compute-0 podman[348972]: 2025-12-03 18:23:53.319178213 +0000 UTC m=+0.227330534 container start 9213f62413afa9600f5b5e3d8ca0f5d00d7a2ee73f3d3aa51e20775d56ebcb73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_liskov, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec  3 18:23:53 compute-0 podman[348972]: 2025-12-03 18:23:53.323503459 +0000 UTC m=+0.231655790 container attach 9213f62413afa9600f5b5e3d8ca0f5d00d7a2ee73f3d3aa51e20775d56ebcb73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_liskov, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:23:53 compute-0 podman[348990]: 2025-12-03 18:23:53.354824922 +0000 UTC m=+0.089775440 container health_status 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 18:23:53 compute-0 podman[348987]: 2025-12-03 18:23:53.358308398 +0000 UTC m=+0.093457300 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
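
Both health_status=healthy events come from edpm-managed healthchecks whose configured test is the /openstack/healthcheck script mounted into each container. A minimal sketch for triggering the same check on demand (container name taken from the log above; run as root, like the exporter):

    # Sketch: run a container's configured healthcheck on demand; exit code
    # 0 means healthy, matching the health_status=healthy events above.
    import subprocess

    rc = subprocess.run(["podman", "healthcheck", "run", "multipathd"]).returncode
    print("healthy" if rc == 0 else f"unhealthy (rc={rc})")
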
Dec  3 18:23:53 compute-0 nova_compute[348325]: 2025-12-03 18:23:53.672 348329 DEBUG oslo_concurrency.processutils [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 18:23:54 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:23:54 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3848104156' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:23:54 compute-0 nova_compute[348325]: 2025-12-03 18:23:54.212 348329 DEBUG oslo_concurrency.processutils [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
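
As the two processutils lines above show, nova-compute probes Ceph pool capacity by shelling out to ceph df. A minimal sketch reproducing the same probe, assuming the client.openstack keyring referenced by /etc/ceph/ceph.conf is readable; the stats keys follow ceph's JSON schema:

    # Sketch: the same capacity probe nova just logged, reproduced directly.
    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        text=True,
    )
    stats = json.loads(out)["stats"]        # cluster-wide totals, in bytes
    print("total:", stats["total_bytes"], "avail:", stats["total_avail_bytes"])
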
Dec  3 18:23:54 compute-0 nova_compute[348325]: 2025-12-03 18:23:54.221 348329 DEBUG nova.virt.libvirt.host [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Dec  3 18:23:54 compute-0 nova_compute[348325]: 2025-12-03 18:23:54.221 348329 INFO nova.virt.libvirt.host [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] kernel doesn't support AMD SEV
Dec  3 18:23:54 compute-0 nova_compute[348325]: 2025-12-03 18:23:54.222 348329 DEBUG nova.compute.provider_tree [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Updating inventory in ProviderTree for provider 00cd1895-22aa-49c6-bdb2-0991af662704 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec  3 18:23:54 compute-0 nova_compute[348325]: 2025-12-03 18:23:54.222 348329 DEBUG nova.virt.libvirt.driver [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec  3 18:23:54 compute-0 nova_compute[348325]: 2025-12-03 18:23:54.274 348329 DEBUG nova.scheduler.client.report [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Updated inventory for provider 00cd1895-22aa-49c6-bdb2-0991af662704 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Dec  3 18:23:54 compute-0 nova_compute[348325]: 2025-12-03 18:23:54.275 348329 DEBUG nova.compute.provider_tree [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Updating resource provider 00cd1895-22aa-49c6-bdb2-0991af662704 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Dec  3 18:23:54 compute-0 nova_compute[348325]: 2025-12-03 18:23:54.275 348329 DEBUG nova.compute.provider_tree [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Updating inventory in ProviderTree for provider 00cd1895-22aa-49c6-bdb2-0991af662704 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec  3 18:23:54 compute-0 nova_compute[348325]: 2025-12-03 18:23:54.353 348329 DEBUG nova.compute.provider_tree [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Updating resource provider 00cd1895-22aa-49c6-bdb2-0991af662704 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
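
The inventory dicts logged above are what placement uses to size this provider: for each resource class the schedulable capacity is (total - reserved) * allocation_ratio. Worked out for the values reported here:

    # How placement sizes this provider from the logged inventory:
    # usable = (total - reserved) * allocation_ratio, per resource class.
    inventory = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        usable = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {usable:g}")
    # MEMORY_MB: 7167, VCPU: 32, DISK_GB: 53.1
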
Dec  3 18:23:54 compute-0 nova_compute[348325]: 2025-12-03 18:23:54.402 348329 DEBUG nova.compute.resource_tracker [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  3 18:23:54 compute-0 nova_compute[348325]: 2025-12-03 18:23:54.402 348329 DEBUG oslo_concurrency.lockutils [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.345s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:23:54 compute-0 nova_compute[348325]: 2025-12-03 18:23:54.402 348329 DEBUG nova.service [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Dec  3 18:23:54 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v732: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:23:54 compute-0 gifted_liskov[348988]: --> passed data devices: 0 physical, 3 LVM
Dec  3 18:23:54 compute-0 gifted_liskov[348988]: --> relative data size: 1.0
Dec  3 18:23:54 compute-0 gifted_liskov[348988]: --> All data devices are unavailable
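
"All data devices are unavailable" is expected here: the three LVM data devices passed to ceph-volume already carry prepared OSDs (the listing printed further below confirms osd.0 through osd.2 are in place), so there is nothing new to deploy. A hedged sketch for checking device availability the same way, run inside the ceph container on this host; field names follow ceph-volume's JSON output:

    # Sketch: ask ceph-volume which devices it considers usable;
    # "unavailable" above means the LVs already carry OSD data.
    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph-volume", "inventory", "--format", "json"], text=True,
    )
    for dev in json.loads(out):
        state = "available" if dev["available"] else dev["rejected_reasons"]
        print(dev["path"], state)
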
Dec  3 18:23:54 compute-0 systemd[1]: libpod-9213f62413afa9600f5b5e3d8ca0f5d00d7a2ee73f3d3aa51e20775d56ebcb73.scope: Deactivated successfully.
Dec  3 18:23:54 compute-0 podman[348972]: 2025-12-03 18:23:54.490979938 +0000 UTC m=+1.399132269 container died 9213f62413afa9600f5b5e3d8ca0f5d00d7a2ee73f3d3aa51e20775d56ebcb73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:23:54 compute-0 systemd[1]: libpod-9213f62413afa9600f5b5e3d8ca0f5d00d7a2ee73f3d3aa51e20775d56ebcb73.scope: Consumed 1.088s CPU time.
Dec  3 18:23:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f7ce47bff24dd88028a5e9ec699b8890607959bc46371396fa436259e350d88-merged.mount: Deactivated successfully.
Dec  3 18:23:54 compute-0 nova_compute[348325]: 2025-12-03 18:23:54.533 348329 DEBUG nova.service [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Dec  3 18:23:54 compute-0 nova_compute[348325]: 2025-12-03 18:23:54.534 348329 DEBUG nova.servicegroup.drivers.db [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Dec  3 18:23:54 compute-0 podman[348972]: 2025-12-03 18:23:54.558289479 +0000 UTC m=+1.466441800 container remove 9213f62413afa9600f5b5e3d8ca0f5d00d7a2ee73f3d3aa51e20775d56ebcb73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_liskov, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:23:54 compute-0 systemd[1]: libpod-conmon-9213f62413afa9600f5b5e3d8ca0f5d00d7a2ee73f3d3aa51e20775d56ebcb73.scope: Deactivated successfully.
Dec  3 18:23:55 compute-0 podman[349231]: 2025-12-03 18:23:55.429012541 +0000 UTC m=+0.044872795 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:23:55 compute-0 podman[349231]: 2025-12-03 18:23:55.939347385 +0000 UTC m=+0.555207559 container create 45072b653c35844b5dd2418f0481925d9dc32383408179540a1bf00d58a00134 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_black, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:23:56 compute-0 systemd-logind[784]: New session 58 of user zuul.
Dec  3 18:23:56 compute-0 systemd[1]: Started Session 58 of User zuul.
Dec  3 18:23:56 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v733: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:23:56 compute-0 systemd[1]: Started libpod-conmon-45072b653c35844b5dd2418f0481925d9dc32383408179540a1bf00d58a00134.scope.
Dec  3 18:23:56 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:23:57 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:23:57 compute-0 podman[349231]: 2025-12-03 18:23:57.080912101 +0000 UTC m=+1.696772315 container init 45072b653c35844b5dd2418f0481925d9dc32383408179540a1bf00d58a00134 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_black, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:23:57 compute-0 podman[349231]: 2025-12-03 18:23:57.096503412 +0000 UTC m=+1.712363586 container start 45072b653c35844b5dd2418f0481925d9dc32383408179540a1bf00d58a00134 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_black, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507)
Dec  3 18:23:57 compute-0 ecstatic_black[349303]: 167 167
Dec  3 18:23:57 compute-0 systemd[1]: libpod-45072b653c35844b5dd2418f0481925d9dc32383408179540a1bf00d58a00134.scope: Deactivated successfully.
Dec  3 18:23:57 compute-0 podman[349231]: 2025-12-03 18:23:57.116993721 +0000 UTC m=+1.732853905 container attach 45072b653c35844b5dd2418f0481925d9dc32383408179540a1bf00d58a00134 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_black, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  3 18:23:57 compute-0 podman[349231]: 2025-12-03 18:23:57.118963489 +0000 UTC m=+1.734823663 container died 45072b653c35844b5dd2418f0481925d9dc32383408179540a1bf00d58a00134 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_black, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec  3 18:23:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-2288e4a75433629c7a3bd21b0336fa37f0b26b74869750446e6d7c7b78564bfa-merged.mount: Deactivated successfully.
Dec  3 18:23:57 compute-0 podman[349231]: 2025-12-03 18:23:57.351328505 +0000 UTC m=+1.967188679 container remove 45072b653c35844b5dd2418f0481925d9dc32383408179540a1bf00d58a00134 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_black, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:23:57 compute-0 systemd[1]: libpod-conmon-45072b653c35844b5dd2418f0481925d9dc32383408179540a1bf00d58a00134.scope: Deactivated successfully.
Dec  3 18:23:57 compute-0 podman[349333]: 2025-12-03 18:23:57.415674425 +0000 UTC m=+0.270685553 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec  3 18:23:57 compute-0 podman[349446]: 2025-12-03 18:23:57.579717154 +0000 UTC m=+0.090211251 container create 0448e314f7d3ef0cbed9a0d5b42eb3ebe1a9a469f578fb1032935e4760958943 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_meninsky, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  3 18:23:57 compute-0 podman[349446]: 2025-12-03 18:23:57.529014758 +0000 UTC m=+0.039508875 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:23:57 compute-0 systemd[1]: Started libpod-conmon-0448e314f7d3ef0cbed9a0d5b42eb3ebe1a9a469f578fb1032935e4760958943.scope.
Dec  3 18:23:57 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:23:57 compute-0 python3.9[349440]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 18:23:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28aea0513a91602f37e9264b9090e5fc06e131f6eef744bd9f58d71d66c34f4f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:23:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28aea0513a91602f37e9264b9090e5fc06e131f6eef744bd9f58d71d66c34f4f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:23:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28aea0513a91602f37e9264b9090e5fc06e131f6eef744bd9f58d71d66c34f4f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:23:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28aea0513a91602f37e9264b9090e5fc06e131f6eef744bd9f58d71d66c34f4f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:23:57 compute-0 rsyslogd[188590]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  3 18:23:57 compute-0 podman[349446]: 2025-12-03 18:23:57.764531671 +0000 UTC m=+0.275025858 container init 0448e314f7d3ef0cbed9a0d5b42eb3ebe1a9a469f578fb1032935e4760958943 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_meninsky, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:23:57 compute-0 podman[349446]: 2025-12-03 18:23:57.79197507 +0000 UTC m=+0.302469207 container start 0448e314f7d3ef0cbed9a0d5b42eb3ebe1a9a469f578fb1032935e4760958943 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_meninsky, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:23:57 compute-0 podman[349446]: 2025-12-03 18:23:57.797491854 +0000 UTC m=+0.307985971 container attach 0448e314f7d3ef0cbed9a0d5b42eb3ebe1a9a469f578fb1032935e4760958943 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_meninsky, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  3 18:23:58 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v734: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]: {
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:    "0": [
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:        {
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:            "devices": [
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:                "/dev/loop3"
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:            ],
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:            "lv_name": "ceph_lv0",
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:            "lv_size": "21470642176",
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=973fbbc8-5aff-4a53-bee8-42e5a6788dd6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:            "lv_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:            "name": "ceph_lv0",
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:            "tags": {
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:                "ceph.block_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:                "ceph.cluster_name": "ceph",
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:                "ceph.crush_device_class": "",
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:                "ceph.encrypted": "0",
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:                "ceph.osd_fsid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:                "ceph.osd_id": "0",
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:                "ceph.type": "block",
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:                "ceph.vdo": "0"
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:            },
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:            "type": "block",
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:            "vg_name": "ceph_vg0"
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:        }
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:    ],
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:    "1": [
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:        {
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:            "devices": [
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:                "/dev/loop4"
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:            ],
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:            "lv_name": "ceph_lv1",
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:            "lv_size": "21470642176",
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1e2b0083-5293-47cb-a3d1-bc27cedc4ede,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:            "lv_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:            "name": "ceph_lv1",
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:            "tags": {
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:                "ceph.block_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:                "ceph.cluster_name": "ceph",
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:                "ceph.crush_device_class": "",
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:                "ceph.encrypted": "0",
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:                "ceph.osd_fsid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:                "ceph.osd_id": "1",
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:                "ceph.type": "block",
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:                "ceph.vdo": "0"
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:            },
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:            "type": "block",
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:            "vg_name": "ceph_vg1"
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:        }
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:    ],
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:    "2": [
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:        {
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:            "devices": [
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:                "/dev/loop5"
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:            ],
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:            "lv_name": "ceph_lv2",
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:            "lv_size": "21470642176",
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2abec9de-afba-437e-9a17-384a1dd8cd50,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:            "lv_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:            "name": "ceph_lv2",
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:            "tags": {
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:                "ceph.block_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:                "ceph.cluster_name": "ceph",
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:                "ceph.crush_device_class": "",
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:                "ceph.encrypted": "0",
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:                "ceph.osd_fsid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:                "ceph.osd_id": "2",
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:                "ceph.type": "block",
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:                "ceph.vdo": "0"
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:            },
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:            "type": "block",
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:            "vg_name": "ceph_vg2"
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:        }
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]:    ]
Dec  3 18:23:58 compute-0 nostalgic_meninsky[349463]: }
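
The JSON block above is a ceph-volume lvm listing: three block-type LVs, one per OSD, each backed by a loop device and tagged with cluster fsid c1caf3ba-b2a5-5005-a11e-e955c344dccc. A short sketch reducing it to an osd-to-device map (raw_json is a trimmed copy of the logged block; 21470642176 bytes is about 20 GiB):

    # Sketch: reduce the ceph-volume lvm listing above to an osd -> device map.
    import json

    raw_json = """
    {"0": [{"lv_path": "/dev/ceph_vg0/ceph_lv0", "lv_size": "21470642176",
            "devices": ["/dev/loop3"]}],
     "1": [{"lv_path": "/dev/ceph_vg1/ceph_lv1", "lv_size": "21470642176",
            "devices": ["/dev/loop4"]}],
     "2": [{"lv_path": "/dev/ceph_vg2/ceph_lv2", "lv_size": "21470642176",
            "devices": ["/dev/loop5"]}]}
    """
    listing = json.loads(raw_json)
    for osd_id, lvs in sorted(listing.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            gib = int(lv["lv_size"]) / 2**30
            print(f"osd.{osd_id}: {lv['lv_path']} on {lv['devices'][0]} "
                  f"({gib:.0f} GiB)")
    # osd.0: /dev/ceph_vg0/ceph_lv0 on /dev/loop3 (20 GiB), and so on.
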
Dec  3 18:23:58 compute-0 systemd[1]: libpod-0448e314f7d3ef0cbed9a0d5b42eb3ebe1a9a469f578fb1032935e4760958943.scope: Deactivated successfully.
Dec  3 18:23:58 compute-0 podman[349446]: 2025-12-03 18:23:58.613212416 +0000 UTC m=+1.123706513 container died 0448e314f7d3ef0cbed9a0d5b42eb3ebe1a9a469f578fb1032935e4760958943 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_meninsky, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  3 18:23:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-28aea0513a91602f37e9264b9090e5fc06e131f6eef744bd9f58d71d66c34f4f-merged.mount: Deactivated successfully.
Dec  3 18:23:59 compute-0 podman[349446]: 2025-12-03 18:23:59.237429757 +0000 UTC m=+1.747923854 container remove 0448e314f7d3ef0cbed9a0d5b42eb3ebe1a9a469f578fb1032935e4760958943 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_meninsky, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec  3 18:23:59 compute-0 systemd[1]: libpod-conmon-0448e314f7d3ef0cbed9a0d5b42eb3ebe1a9a469f578fb1032935e4760958943.scope: Deactivated successfully.
Dec  3 18:23:59 compute-0 python3.9[349661]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  3 18:23:59 compute-0 systemd[1]: Reloading.
Dec  3 18:23:59 compute-0 podman[158200]: time="2025-12-03T18:23:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:23:59 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 18:23:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:23:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42578 "" "Go-http-client/1.1"
Dec  3 18:23:59 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 18:23:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:23:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8125 "" "Go-http-client/1.1"
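
The two GET lines above are the prometheus-podman-exporter polling the libpod REST API over /run/podman/podman.sock. A sketch reproducing the same query with curl's unix-socket support; the endpoint path comes from the log and the field names from podman 4.x ("d" is a dummy hostname curl requires):

    # Sketch: the same libpod REST query the exporter issues.
    import json
    import subprocess

    out = subprocess.check_output(
        ["curl", "-s", "--unix-socket", "/run/podman/podman.sock",
         "http://d/v4.9.3/libpod/containers/json?all=true"],
        text=True,
    )
    for c in json.loads(out):
        print(c["Id"][:12], c["Names"][0], c["State"])
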
Dec  3 18:24:00 compute-0 podman[349816]: 2025-12-03 18:24:00.100729488 +0000 UTC m=+0.055525724 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:24:00 compute-0 podman[349816]: 2025-12-03 18:24:00.419153083 +0000 UTC m=+0.373949339 container create 36c4a690f0e7c785447b66ae551b246cd1fe93b18cccc9e822e107fb14aea0a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_torvalds, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:24:00 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v735: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:24:00 compute-0 systemd[1]: Started libpod-conmon-36c4a690f0e7c785447b66ae551b246cd1fe93b18cccc9e822e107fb14aea0a4.scope.
Dec  3 18:24:00 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:24:00 compute-0 podman[349816]: 2025-12-03 18:24:00.650856213 +0000 UTC m=+0.605652449 container init 36c4a690f0e7c785447b66ae551b246cd1fe93b18cccc9e822e107fb14aea0a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:24:00 compute-0 podman[349816]: 2025-12-03 18:24:00.668190766 +0000 UTC m=+0.622987022 container start 36c4a690f0e7c785447b66ae551b246cd1fe93b18cccc9e822e107fb14aea0a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_torvalds, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:24:00 compute-0 gallant_torvalds[349909]: 167 167
Dec  3 18:24:00 compute-0 systemd[1]: libpod-36c4a690f0e7c785447b66ae551b246cd1fe93b18cccc9e822e107fb14aea0a4.scope: Deactivated successfully.
Dec  3 18:24:00 compute-0 conmon[349909]: conmon 36c4a690f0e7c785447b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-36c4a690f0e7c785447b66ae551b246cd1fe93b18cccc9e822e107fb14aea0a4.scope/container/memory.events
Dec  3 18:24:00 compute-0 podman[349816]: 2025-12-03 18:24:00.746134126 +0000 UTC m=+0.700930362 container attach 36c4a690f0e7c785447b66ae551b246cd1fe93b18cccc9e822e107fb14aea0a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec  3 18:24:00 compute-0 podman[349816]: 2025-12-03 18:24:00.746430974 +0000 UTC m=+0.701227180 container died 36c4a690f0e7c785447b66ae551b246cd1fe93b18cccc9e822e107fb14aea0a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_torvalds, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True)
Dec  3 18:24:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-51911f11712d8efd10bc8e14eda03645e9417ac8645adb9e2891b37c372366ce-merged.mount: Deactivated successfully.
Dec  3 18:24:00 compute-0 podman[349816]: 2025-12-03 18:24:00.793048651 +0000 UTC m=+0.747844867 container remove 36c4a690f0e7c785447b66ae551b246cd1fe93b18cccc9e822e107fb14aea0a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_torvalds, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec  3 18:24:00 compute-0 systemd[1]: libpod-conmon-36c4a690f0e7c785447b66ae551b246cd1fe93b18cccc9e822e107fb14aea0a4.scope: Deactivated successfully.
Dec  3 18:24:01 compute-0 podman[349932]: 2025-12-03 18:24:01.0083057 +0000 UTC m=+0.041931264 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:24:01 compute-0 openstack_network_exporter[160319]: ERROR   18:24:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:24:01 compute-0 openstack_network_exporter[160319]: ERROR   18:24:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:24:01 compute-0 openstack_network_exporter[160319]: ERROR   18:24:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:24:01 compute-0 openstack_network_exporter[160319]: ERROR   18:24:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:24:01 compute-0 openstack_network_exporter[160319]: 
Dec  3 18:24:01 compute-0 openstack_network_exporter[160319]: ERROR   18:24:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:24:01 compute-0 openstack_network_exporter[160319]: 
Dec  3 18:24:01 compute-0 podman[349932]: 2025-12-03 18:24:01.519555047 +0000 UTC m=+0.553180591 container create 38c23294fb4492a0b59ce96e76b81918807b2f81b6793616b311ee4d422f1476 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Dec  3 18:24:01 compute-0 systemd[1]: Started libpod-conmon-38c23294fb4492a0b59ce96e76b81918807b2f81b6793616b311ee4d422f1476.scope.
Dec  3 18:24:01 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:24:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f9070d7c94b8b4336560068df5cdca7cd91aa70ab988c174444ed0d4bbe4a99/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:24:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f9070d7c94b8b4336560068df5cdca7cd91aa70ab988c174444ed0d4bbe4a99/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:24:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f9070d7c94b8b4336560068df5cdca7cd91aa70ab988c174444ed0d4bbe4a99/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:24:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f9070d7c94b8b4336560068df5cdca7cd91aa70ab988c174444ed0d4bbe4a99/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
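The kernel's "supports timestamps until 2038 (0x7fffffff)" warnings above typically indicate XFS volumes formatted without the bigtime feature: inode timestamps are stored as signed 32-bit seconds, and 0x7fffffff seconds after the Unix epoch falls on 2038-01-19 03:14:07 UTC. A minimal Python sketch confirming the cutoff quoted by the kernel:

    from datetime import datetime, timezone

    # 0x7fffffff is the largest signed 32-bit value, i.e. the y2038 limit
    # quoted in the xfs remount messages above.
    limit = 0x7fffffff
    print(datetime.fromtimestamp(limit, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00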
Dec  3 18:24:01 compute-0 podman[349932]: 2025-12-03 18:24:01.934948065 +0000 UTC m=+0.968573609 container init 38c23294fb4492a0b59ce96e76b81918807b2f81b6793616b311ee4d422f1476 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec  3 18:24:01 compute-0 podman[349932]: 2025-12-03 18:24:01.954067552 +0000 UTC m=+0.987693096 container start 38c23294fb4492a0b59ce96e76b81918807b2f81b6793616b311ee4d422f1476 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  3 18:24:01 compute-0 python3.9[350018]: ansible-ansible.builtin.service_facts Invoked
Dec  3 18:24:02 compute-0 network[350042]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  3 18:24:02 compute-0 network[350043]: 'network-scripts' will be removed from distribution in near future.
Dec  3 18:24:02 compute-0 network[350044]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  3 18:24:02 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:24:02 compute-0 podman[349932]: 2025-12-03 18:24:02.207267166 +0000 UTC m=+1.240892730 container attach 38c23294fb4492a0b59ce96e76b81918807b2f81b6793616b311ee4d422f1476 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hofstadter, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:24:02 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v736: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:24:03 compute-0 naughty_hofstadter[350021]: {
Dec  3 18:24:03 compute-0 naughty_hofstadter[350021]:    "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
Dec  3 18:24:03 compute-0 naughty_hofstadter[350021]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:24:03 compute-0 naughty_hofstadter[350021]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 18:24:03 compute-0 naughty_hofstadter[350021]:        "osd_id": 1,
Dec  3 18:24:03 compute-0 naughty_hofstadter[350021]:        "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:24:03 compute-0 naughty_hofstadter[350021]:        "type": "bluestore"
Dec  3 18:24:03 compute-0 naughty_hofstadter[350021]:    },
Dec  3 18:24:03 compute-0 naughty_hofstadter[350021]:    "2abec9de-afba-437e-9a17-384a1dd8cd50": {
Dec  3 18:24:03 compute-0 naughty_hofstadter[350021]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:24:03 compute-0 naughty_hofstadter[350021]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 18:24:03 compute-0 naughty_hofstadter[350021]:        "osd_id": 2,
Dec  3 18:24:03 compute-0 naughty_hofstadter[350021]:        "osd_uuid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:24:03 compute-0 naughty_hofstadter[350021]:        "type": "bluestore"
Dec  3 18:24:03 compute-0 naughty_hofstadter[350021]:    },
Dec  3 18:24:03 compute-0 naughty_hofstadter[350021]:    "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": {
Dec  3 18:24:03 compute-0 naughty_hofstadter[350021]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:24:03 compute-0 naughty_hofstadter[350021]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 18:24:03 compute-0 naughty_hofstadter[350021]:        "osd_id": 0,
Dec  3 18:24:03 compute-0 naughty_hofstadter[350021]:        "osd_uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:24:03 compute-0 naughty_hofstadter[350021]:        "type": "bluestore"
Dec  3 18:24:03 compute-0 naughty_hofstadter[350021]:    }
Dec  3 18:24:03 compute-0 naughty_hofstadter[350021]: }
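The JSON block printed by the naughty_hofstadter container above appears to be a ceph-volume OSD inventory keyed by OSD UUID, each record carrying the cluster ceph_fsid, the backing LVM device, the osd_id, and the objectstore type (bluestore). A minimal sketch, assuming the block has been captured into a string of the same shape (the single entry below is copied from the log; the real block lists OSDs 0-2 on ceph_vg0-2/ceph_lv0-2), that maps osd_id to device:

    import json

    # One entry copied verbatim from the inventory logged above.
    inventory_json = '''
    {
      "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
        "osd_id": 1,
        "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
        "type": "bluestore"
      }
    }
    '''
    # Build osd_id -> device from the UUID-keyed records.
    devices = {rec["osd_id"]: rec["device"] for rec in json.loads(inventory_json).values()}
    print(devices)  # {1: '/dev/mapper/ceph_vg1-ceph_lv1'}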
Dec  3 18:24:03 compute-0 podman[349932]: 2025-12-03 18:24:03.151541501 +0000 UTC m=+2.185167055 container died 38c23294fb4492a0b59ce96e76b81918807b2f81b6793616b311ee4d422f1476 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hofstadter, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  3 18:24:03 compute-0 systemd[1]: libpod-38c23294fb4492a0b59ce96e76b81918807b2f81b6793616b311ee4d422f1476.scope: Deactivated successfully.
Dec  3 18:24:03 compute-0 systemd[1]: libpod-38c23294fb4492a0b59ce96e76b81918807b2f81b6793616b311ee4d422f1476.scope: Consumed 1.192s CPU time.
Dec  3 18:24:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-9f9070d7c94b8b4336560068df5cdca7cd91aa70ab988c174444ed0d4bbe4a99-merged.mount: Deactivated successfully.
Dec  3 18:24:03 compute-0 podman[349932]: 2025-12-03 18:24:03.475694866 +0000 UTC m=+2.509320410 container remove 38c23294fb4492a0b59ce96e76b81918807b2f81b6793616b311ee4d422f1476 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hofstadter, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:24:03 compute-0 systemd[1]: libpod-conmon-38c23294fb4492a0b59ce96e76b81918807b2f81b6793616b311ee4d422f1476.scope: Deactivated successfully.
Dec  3 18:24:03 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:24:03 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:24:03 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:24:03 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:24:03 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev f2cfbb8b-14f7-4a2c-a176-ed607f4fb480 does not exist
Dec  3 18:24:03 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev aa8feb53-ae0b-496b-898b-cc0b90b98d17 does not exist
Dec  3 18:24:04 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v737: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:24:04 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:24:04 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:24:06 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v738: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:24:07 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:24:07 compute-0 python3.9[350409]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 18:24:08 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v739: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:24:08 compute-0 python3.9[350562]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:24:09 compute-0 podman[350686]: 2025-12-03 18:24:09.438187464 +0000 UTC m=+0.108233150 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm)
Dec  3 18:24:09 compute-0 podman[350690]: 2025-12-03 18:24:09.449300996 +0000 UTC m=+0.106601200 container health_status f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  3 18:24:09 compute-0 podman[350702]: 2025-12-03 18:24:09.454828622 +0000 UTC m=+0.108239601 container health_status ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, managed_by=edpm_ansible, architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, config_id=edpm, version=9.4, build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler)
Dec  3 18:24:09 compute-0 podman[350689]: 2025-12-03 18:24:09.456902133 +0000 UTC m=+0.112155356 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm)
Dec  3 18:24:09 compute-0 podman[350687]: 2025-12-03 18:24:09.468762013 +0000 UTC m=+0.138466721 container health_status 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, vcs-type=git, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., architecture=x86_64, container_name=openstack_network_exporter, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec  3 18:24:09 compute-0 podman[350688]: 2025-12-03 18:24:09.488687461 +0000 UTC m=+0.154188666 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 18:24:09 compute-0 python3.9[350808]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:24:10 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v740: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:24:11 compute-0 python3.9[350982]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
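In the ansible-ansible.legacy.command record above, rsyslog has flattened the multi-line shell script in _raw_params onto one line, encoding each newline as its octal escape #012. A minimal sketch, assuming rsyslog's default control-character escaping, that restores the script for reading:

    # rsyslog escapes control characters as #NNN (octal); #012 is LF.
    raw = ("if systemctl is-active certmonger.service; then#012"
           "  systemctl disable --now certmonger.service#012"
           "  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012"
           "fi#012")
    print(raw.replace("#012", "\n"))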
Dec  3 18:24:12 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:24:12 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v741: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:24:13 compute-0 python3.9[351134]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  3 18:24:13 compute-0 nova_compute[348325]: 2025-12-03 18:24:13.535 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:24:13 compute-0 nova_compute[348325]: 2025-12-03 18:24:13.558 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:24:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_18:24:13
Dec  3 18:24:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 18:24:13 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 18:24:13 compute-0 ceph-mgr[193091]: [balancer INFO root] pools ['vms', 'images', '.mgr', 'volumes', 'default.rgw.log', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.control', 'default.rgw.meta', 'backups']
Dec  3 18:24:13 compute-0 ceph-mgr[193091]: [balancer INFO root] prepared 0/10 changes
Dec  3 18:24:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:24:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:24:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:24:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:24:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:24:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:24:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 18:24:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:24:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 18:24:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:24:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:24:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:24:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:24:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:24:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:24:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:24:14 compute-0 python3.9[351286]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  3 18:24:14 compute-0 systemd[1]: Reloading.
Dec  3 18:24:14 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 18:24:14 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v742: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:24:14 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 18:24:15 compute-0 python3.9[351472]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:24:16 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v743: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:24:16 compute-0 python3.9[351625]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:24:17 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:24:17 compute-0 python3.9[351775]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 18:24:18 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v744: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:24:18 compute-0 python3.9[351927]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:24:19 compute-0 python3.9[352003]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:24:19 compute-0 ceph-osd[206694]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 18:24:19 compute-0 ceph-osd[206694]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.2 total, 600.0 interval#012Cumulative writes: 5600 writes, 23K keys, 5600 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 5600 writes, 872 syncs, 6.42 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s#012Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.034       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.034       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.034       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55fc1507add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55fc1507add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Dec  3 18:24:20 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v745: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:24:20 compute-0 python3.9[352156]: ansible-ansible.builtin.group Invoked with name=libvirt state=present force=False system=False local=False non_unique=False gid=None gid_min=None gid_max=None
Dec  3 18:24:21 compute-0 python3.9[352308]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Dec  3 18:24:22 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:24:22 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v746: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:24:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:24:23.310 286999 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:24:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:24:23.312 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:24:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:24:23.312 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:24:23 compute-0 podman[352433]: 2025-12-03 18:24:23.590720102 +0000 UTC m=+0.087504373 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 18:24:23 compute-0 podman[352434]: 2025-12-03 18:24:23.621247729 +0000 UTC m=+0.118072391 container health_status 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  3 18:24:23 compute-0 python3.9[352487]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:24:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 18:24:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:24:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 18:24:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:24:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:24:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:24:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:24:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:24:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:24:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:24:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:24:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:24:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 18:24:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:24:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:24:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:24:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 18:24:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:24:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 18:24:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:24:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:24:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:24:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 18:24:24 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v747: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:24:24 compute-0 python3.9[352575]: ansible-ansible.legacy.file Invoked with mode=0640 dest=/var/lib/openstack/config/telemetry/ceilometer.conf _original_basename=ceilometer.conf recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:24:25 compute-0 python3.9[352725]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:24:26 compute-0 python3.9[352801]: ansible-ansible.legacy.file Invoked with mode=0640 dest=/var/lib/openstack/config/telemetry/polling.yaml _original_basename=polling.yaml recurse=False state=file path=/var/lib/openstack/config/telemetry/polling.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:24:26 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v748: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:24:27 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:24:27 compute-0 python3.9[352951]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:24:27 compute-0 podman[353001]: 2025-12-03 18:24:27.630818645 +0000 UTC m=+0.130253920 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true)
Dec  3 18:24:27 compute-0 python3.9[353043]: ansible-ansible.legacy.file Invoked with mode=0640 dest=/var/lib/openstack/config/telemetry/custom.conf _original_basename=custom.conf recurse=False state=file path=/var/lib/openstack/config/telemetry/custom.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:24:28 compute-0 ceph-osd[207851]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 18:24:28 compute-0 ceph-osd[207851]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.2 total, 600.0 interval#012Cumulative writes: 6776 writes, 27K keys, 6776 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 6776 writes, 1238 syncs, 5.47 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 180 writes, 271 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s#012Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5562f64d34b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000239 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5562f64d34b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000239 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Dec  3 18:24:28 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v749: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:24:28 compute-0 python3.9[353195]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 18:24:29 compute-0 python3.9[353347]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 18:24:29 compute-0 podman[158200]: time="2025-12-03T18:24:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:24:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:24:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42578 "" "Go-http-client/1.1"
Dec  3 18:24:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:24:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8123 "" "Go-http-client/1.1"
Dec  3 18:24:30 compute-0 python3.9[353499]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:24:30 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v750: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:24:31 compute-0 python3.9[353575]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json _original_basename=ceilometer-agent-compute.json.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:24:31 compute-0 openstack_network_exporter[160319]: ERROR   18:24:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:24:31 compute-0 openstack_network_exporter[160319]: ERROR   18:24:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:24:31 compute-0 openstack_network_exporter[160319]: ERROR   18:24:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:24:31 compute-0 openstack_network_exporter[160319]: ERROR   18:24:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:24:31 compute-0 openstack_network_exporter[160319]: 
Dec  3 18:24:31 compute-0 openstack_network_exporter[160319]: ERROR   18:24:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:24:31 compute-0 openstack_network_exporter[160319]: 
Dec  3 18:24:31 compute-0 python3.9[353725]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:24:32 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:24:32 compute-0 python3.9[353801]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:24:32 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v751: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:24:33 compute-0 python3.9[353951]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:24:33 compute-0 python3.9[354027]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json _original_basename=ceilometer_agent_compute.json.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:24:34 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v752: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:24:34 compute-0 ceph-osd[208881]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 18:24:34 compute-0 ceph-osd[208881]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.2 total, 600.0 interval#012Cumulative writes: 5706 writes, 24K keys, 5706 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 5706 writes, 911 syncs, 6.26 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s#012Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.013       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.013       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.013       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55ab999451f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55ab999451f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_sl
Dec  3 18:24:34 compute-0 python3.9[354177]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:24:35 compute-0 python3.9[354253]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:24:36 compute-0 python3.9[354403]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:24:36 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v753: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:24:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:24:37 compute-0 python3.9[354479]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/firewall.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/firewall.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:24:37 compute-0 ceph-mgr[193091]: [devicehealth INFO root] Check health
Dec  3 18:24:38 compute-0 python3.9[354629]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:24:38 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v754: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:24:38 compute-0 python3.9[354705]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/node_exporter.json _original_basename=node_exporter.json.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/node_exporter.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:24:39 compute-0 python3.9[354855]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:24:39 compute-0 podman[354861]: 2025-12-03 18:24:39.870388933 +0000 UTC m=+0.106241401 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec  3 18:24:39 compute-0 podman[354859]: 2025-12-03 18:24:39.88414717 +0000 UTC m=+0.121135346 container health_status 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, maintainer=Red Hat, Inc., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, io.buildah.version=1.33.7, managed_by=edpm_ansible, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, version=9.6, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41)
Dec  3 18:24:39 compute-0 podman[354857]: 2025-12-03 18:24:39.886827785 +0000 UTC m=+0.133810546 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2)
Dec  3 18:24:39 compute-0 podman[354868]: 2025-12-03 18:24:39.893925469 +0000 UTC m=+0.121801623 container health_status f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  3 18:24:39 compute-0 podman[354869]: 2025-12-03 18:24:39.900697845 +0000 UTC m=+0.122826938 container health_status ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, managed_by=edpm_ansible, com.redhat.component=ubi9-container, container_name=kepler, vcs-type=git, distribution-scope=public, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, io.openshift.expose-services=, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, release-0.7.12=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc.)
Dec  3 18:24:39 compute-0 podman[354860]: 2025-12-03 18:24:39.917781473 +0000 UTC m=+0.154423541 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  3 18:24:40 compute-0 python3.9[355050]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/node_exporter.yaml _original_basename=node_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/node_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:24:40 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v755: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:24:41 compute-0 python3.9[355200]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:24:41 compute-0 python3.9[355276]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.json _original_basename=openstack_network_exporter.json.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/openstack_network_exporter.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:24:42 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:24:42 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v756: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:24:42 compute-0 python3.9[355426]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:24:43 compute-0 python3.9[355502]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml _original_basename=openstack_network_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:24:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:24:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:24:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:24:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:24:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:24:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:24:44 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v757: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:24:44 compute-0 python3.9[355652]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:24:45 compute-0 python3.9[355728]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/podman_exporter.json _original_basename=podman_exporter.json.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/podman_exporter.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:24:46 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v758: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:24:46 compute-0 python3.9[355878]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:24:47 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:24:47 compute-0 python3.9[355954]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml _original_basename=podman_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/podman_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:24:48 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v759: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:24:48 compute-0 python3.9[356104]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:24:49 compute-0 python3.9[356180]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/node_exporter.yaml _original_basename=node_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/node_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:24:49 compute-0 nova_compute[348325]: 2025-12-03 18:24:49.490 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:24:49 compute-0 nova_compute[348325]: 2025-12-03 18:24:49.491 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:24:49 compute-0 nova_compute[348325]: 2025-12-03 18:24:49.491 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 18:24:49 compute-0 nova_compute[348325]: 2025-12-03 18:24:49.492 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  3 18:24:49 compute-0 python3.9[356331]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:24:50 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v760: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:24:50 compute-0 python3.9[356407]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml _original_basename=podman_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/podman_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:24:51 compute-0 python3.9[356557]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:24:52 compute-0 python3.9[356633]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:24:52 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:24:52 compute-0 nova_compute[348325]: 2025-12-03 18:24:52.309 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  3 18:24:52 compute-0 nova_compute[348325]: 2025-12-03 18:24:52.311 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:24:52 compute-0 nova_compute[348325]: 2025-12-03 18:24:52.311 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:24:52 compute-0 nova_compute[348325]: 2025-12-03 18:24:52.312 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:24:52 compute-0 nova_compute[348325]: 2025-12-03 18:24:52.313 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:24:52 compute-0 nova_compute[348325]: 2025-12-03 18:24:52.314 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:24:52 compute-0 nova_compute[348325]: 2025-12-03 18:24:52.314 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:24:52 compute-0 nova_compute[348325]: 2025-12-03 18:24:52.315 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 18:24:52 compute-0 nova_compute[348325]: 2025-12-03 18:24:52.316 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:24:52 compute-0 nova_compute[348325]: 2025-12-03 18:24:52.342 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:24:52 compute-0 nova_compute[348325]: 2025-12-03 18:24:52.342 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:24:52 compute-0 nova_compute[348325]: 2025-12-03 18:24:52.342 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:24:52 compute-0 nova_compute[348325]: 2025-12-03 18:24:52.342 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 18:24:52 compute-0 nova_compute[348325]: 2025-12-03 18:24:52.343 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:24:52 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v761: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:24:52 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:24:52 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/22721294' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:24:52 compute-0 nova_compute[348325]: 2025-12-03 18:24:52.817 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:24:52 compute-0 python3.9[356805]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:24:53 compute-0 nova_compute[348325]: 2025-12-03 18:24:53.151 348329 WARNING nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 18:24:53 compute-0 nova_compute[348325]: 2025-12-03 18:24:53.153 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4582MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 18:24:53 compute-0 nova_compute[348325]: 2025-12-03 18:24:53.153 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:24:53 compute-0 nova_compute[348325]: 2025-12-03 18:24:53.154 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:24:53 compute-0 nova_compute[348325]: 2025-12-03 18:24:53.249 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 18:24:53 compute-0 nova_compute[348325]: 2025-12-03 18:24:53.250 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 18:24:53 compute-0 nova_compute[348325]: 2025-12-03 18:24:53.271 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:24:53 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:24:53 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3223107058' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:24:53 compute-0 nova_compute[348325]: 2025-12-03 18:24:53.749 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:24:53 compute-0 nova_compute[348325]: 2025-12-03 18:24:53.764 348329 DEBUG nova.compute.provider_tree [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 18:24:53 compute-0 nova_compute[348325]: 2025-12-03 18:24:53.793 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 18:24:53 compute-0 nova_compute[348325]: 2025-12-03 18:24:53.797 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  3 18:24:53 compute-0 nova_compute[348325]: 2025-12-03 18:24:53.798 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
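[annotation] The acquire/release pair bracketing the update above (held 0.644s) is oslo.concurrency's named-semaphore idiom; a minimal sketch reusing the lock name from the log:

    from oslo_concurrency import lockutils

    # Serializes this function against anything else holding the
    # "compute_resources" lock, as ResourceTracker does above.
    @lockutils.synchronized("compute_resources")
    def update_available_resource():
        print("resource view recomputed under the lock")

    update_available_resource()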
Dec  3 18:24:53 compute-0 python3.9[356979]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:24:53 compute-0 podman[356983]: 2025-12-03 18:24:53.931042452 +0000 UTC m=+0.082264465 container health_status 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 18:24:53 compute-0 podman[356982]: 2025-12-03 18:24:53.959346535 +0000 UTC m=+0.121027354 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd)
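[annotation] The config_data blobs in the two health_status lines above are the declarative container definitions edpm_ansible manages. A hedged sketch of how such a dict maps onto podman run flags; to_podman_args is a hypothetical helper for illustration, not the role's actual code:

    def to_podman_args(name, cfg):
        # Illustrative translation only; the real edpm_ansible role is richer.
        args = ["podman", "run", "--detach", "--name", name]
        if cfg.get("net"):
            args += ["--network", cfg["net"]]
        if cfg.get("privileged"):
            args.append("--privileged")
        if cfg.get("user"):
            args += ["--user", cfg["user"]]
        for port in cfg.get("ports", []):
            args += ["--publish", port]
        for key, val in cfg.get("environment", {}).items():
            args += ["--env", f"{key}={val}"]
        for vol in cfg.get("volumes", []):
            args += ["--volume", vol]
        if cfg.get("restart"):
            args += ["--restart", cfg["restart"]]
        return args + [cfg["image"]] + cfg.get("command", [])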
Dec  3 18:24:54 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v762: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:24:54 compute-0 python3.9[357174]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:24:55 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Dec  3 18:24:55 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:24:55.588064) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 18:24:55 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Dec  3 18:24:55 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764786295588138, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1502, "num_deletes": 251, "total_data_size": 2428098, "memory_usage": 2468928, "flush_reason": "Manual Compaction"}
Dec  3 18:24:55 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Dec  3 18:24:55 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764786295603252, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 2394522, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14816, "largest_seqno": 16317, "table_properties": {"data_size": 2387480, "index_size": 4113, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14103, "raw_average_key_size": 19, "raw_value_size": 2373514, "raw_average_value_size": 3301, "num_data_blocks": 188, "num_entries": 719, "num_filter_entries": 719, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764786130, "oldest_key_time": 1764786130, "file_creation_time": 1764786295, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a1ac3b74-8599-4a51-8b4c-6fd35a134427", "db_session_id": "TYOLZSJOOVNJYKF8Y1CE", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec  3 18:24:55 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 15242 microseconds, and 6118 cpu microseconds.
Dec  3 18:24:55 compute-0 ceph-mon[192802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 18:24:55 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:24:55.603313) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 2394522 bytes OK
Dec  3 18:24:55 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:24:55.603340) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Dec  3 18:24:55 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:24:55.606225) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Dec  3 18:24:55 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:24:55.606250) EVENT_LOG_v1 {"time_micros": 1764786295606242, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 18:24:55 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:24:55.606274) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 18:24:55 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 2421557, prev total WAL file size 2421557, number of live WAL files 2.
Dec  3 18:24:55 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 18:24:55 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:24:55.607817) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
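[annotation] The compaction bounds in the line above are hex-encoded store keys; decoding them shows this manual compaction covers the monitor's paxos entries 1004 to 1256:

    for key in ("7061786F730031303034", "7061786F730031323536"):
        print(bytes.fromhex(key))   # b'paxos\x001004', b'paxos\x001256'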
Dec  3 18:24:55 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 18:24:55 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(2338KB)], [35(6961KB)]
Dec  3 18:24:55 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764786295607932, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 9522836, "oldest_snapshot_seqno": -1}
Dec  3 18:24:55 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 3985 keys, 7731945 bytes, temperature: kUnknown
Dec  3 18:24:55 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764786295693053, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 7731945, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7702944, "index_size": 17947, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9989, "raw_key_size": 97380, "raw_average_key_size": 24, "raw_value_size": 7628380, "raw_average_value_size": 1914, "num_data_blocks": 761, "num_entries": 3985, "num_filter_entries": 3985, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764784942, "oldest_key_time": 0, "file_creation_time": 1764786295, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a1ac3b74-8599-4a51-8b4c-6fd35a134427", "db_session_id": "TYOLZSJOOVNJYKF8Y1CE", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Dec  3 18:24:55 compute-0 ceph-mon[192802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 18:24:55 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:24:55.693367) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 7731945 bytes
Dec  3 18:24:55 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:24:55.696552) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 111.8 rd, 90.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 6.8 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(7.2) write-amplify(3.2) OK, records in: 4499, records dropped: 514 output_compression: NoCompression
Dec  3 18:24:55 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:24:55.696582) EVENT_LOG_v1 {"time_micros": 1764786295696568, "job": 16, "event": "compaction_finished", "compaction_time_micros": 85209, "compaction_time_cpu_micros": 37832, "output_level": 6, "num_output_files": 1, "total_output_size": 7731945, "num_input_records": 4499, "num_output_records": 3985, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
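[annotation] The summary figures in the two lines above can be re-derived from the logged byte counts (L0 input 2394522 B, total input 9522836 B, output 7731945 B, 85209 us):

    l0_in, total_in, out = 2394522, 9522836, 7731945   # from JOB 16 above
    micros = 85209
    print(round(out / l0_in, 1))                # 3.2   write-amplify
    print(round((total_in + out) / l0_in, 1))   # 7.2   read-write-amplify
    print(round(total_in / micros, 1))          # 111.8 MB/s read
    print(round(out / micros, 1))               # 90.7  MB/s write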
Dec  3 18:24:55 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 18:24:55 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764786295697674, "job": 16, "event": "table_file_deletion", "file_number": 37}
Dec  3 18:24:55 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 18:24:55 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764786295700657, "job": 16, "event": "table_file_deletion", "file_number": 35}
Dec  3 18:24:55 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:24:55.607566) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:24:55 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:24:55.700867) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:24:55 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:24:55.700885) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:24:55 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:24:55.700888) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:24:55 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:24:55.700890) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:24:55 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:24:55.700893) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
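[annotation] Every EVENT_LOG_v1 line above carries a JSON payload, so flush and compaction activity can be mined straight from the journal. A small sketch; "mon.log" stands in for a hypothetical capture of the lines above:

    import json
    import re

    EVENT = re.compile(r"EVENT_LOG_v1 (\{.*\})")

    def rocksdb_events(lines):
        # Yield the decoded JSON payload of each EVENT_LOG_v1 line.
        for line in lines:
            match = EVENT.search(line)
            if match:
                yield json.loads(match.group(1))

    with open("mon.log") as fh:   # hypothetical capture of these lines
        for ev in rocksdb_events(fh):
            if ev.get("event") in ("flush_finished", "compaction_finished"):
                print(ev["job"], ev["event"], ev["lsm_state"])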
Dec  3 18:24:56 compute-0 python3.9[357326]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=podman.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 18:24:56 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v763: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:24:57 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:24:57 compute-0 python3.9[357480]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:24:57 compute-0 podman[357506]: 2025-12-03 18:24:57.934196011 +0000 UTC m=+0.093009179 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent)
Dec  3 18:24:58 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v764: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:24:58 compute-0 python3.9[357576]: ansible-ansible.legacy.file Invoked with group=zuul mode=0700 owner=zuul setype=container_file_t dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ _original_basename=healthcheck recurse=False state=file path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:24:59 compute-0 python3.9[357652]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:24:59 compute-0 podman[158200]: time="2025-12-03T18:24:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:24:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:24:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42578 "" "Go-http-client/1.1"
Dec  3 18:24:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:24:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8137 "" "Go-http-client/1.1"
Dec  3 18:25:00 compute-0 python3.9[357730]: ansible-ansible.legacy.file Invoked with group=zuul mode=0700 owner=zuul setype=container_file_t dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ _original_basename=healthcheck.future recurse=False state=file path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:25:00 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v765: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:25:01 compute-0 openstack_network_exporter[160319]: ERROR   18:25:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:25:01 compute-0 openstack_network_exporter[160319]: ERROR   18:25:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:25:01 compute-0 openstack_network_exporter[160319]: ERROR   18:25:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:25:01 compute-0 openstack_network_exporter[160319]: ERROR   18:25:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:25:01 compute-0 openstack_network_exporter[160319]: ERROR   18:25:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:25:01 compute-0 python3.9[357882]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=ceilometer_agent_compute.json debug=False
Dec  3 18:25:02 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:25:02 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v766: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:25:02 compute-0 python3.9[358034]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.705 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.707 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
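[annotation] The warning above is purely about queueing: with one worker thread, pollster tasks run back-to-back rather than concurrently. A toy reproduction of the effect:

    import time
    from concurrent.futures import ThreadPoolExecutor

    def poll(name):
        time.sleep(0.1)   # stand-in for one pollster's work
        return name

    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=1) as pool:   # [1] threads, as above
        list(pool.map(poll, [f"pollster-{i}" for i in range(30)]))
    print(time.monotonic() - start)   # ~3.0s: 30 tasks serialized on 1 worker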
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.707 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.708 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f3f52673fe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.709 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f562c3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.709 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.709 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.709 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f526739b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.709 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.709 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673a40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.710 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52671a60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.710 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673a70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.710 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.710 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.710 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f562d33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.710 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f526733b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.710 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f5271c3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.711 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f526734d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.711 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f565c04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.711 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.711 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.711 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.711 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f526735f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.711 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.711 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f526736b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.711 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.711 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.712 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.712 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3f52673fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3f522769f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
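[annotation] Each stevedore.extension.Extension object registered above comes from an entry-point scan. A hedged sketch of loading the same plugin set; the "ceilometer.poll.compute" namespace is an assumption based on upstream packaging, not shown in this log:

    from stevedore import extension

    manager = extension.ExtensionManager(
        namespace="ceilometer.poll.compute",   # assumed namespace
        invoke_on_load=False,
    )
    for ext in manager:
        print(ext.name)   # e.g. memory.usage, cpu, disk.device.read.bytes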
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.713 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.713 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f3f5271c620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.714 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.714 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f3f5271c0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.714 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.714 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f3f5271c140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.714 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.715 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f3f52673980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.715 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.715 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f3f5271c1d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.715 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.715 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f3f52673a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.716 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.716 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f3f52672390>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.716 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.716 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f3f526739e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.716 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.717 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f3f5271c260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.717 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.717 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f3f5271c2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.717 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.717 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f3f52671ca0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.717 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.718 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f3f52673470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.718 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.718 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f3f5271c380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.718 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.718 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f3f526734a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.718 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.719 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f3f52671a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.719 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.719 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f3f52673ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.719 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.719 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f3f52673500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.719 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.720 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f3f52673560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.720 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.720 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f3f526735c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.720 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.720 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f3f52673620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.720 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.721 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f3f52673680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.721 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.721 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f3f526736e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.721 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.721 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f3f52673f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.722 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.722 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f3f52673740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.722 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.722 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f3f52673f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3f53833da0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.722 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
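[annotation] Every pollster above follows the same discovery-then-skip shape: with no instances on this compute node yet, discovery returns nothing and the cycle is skipped. A minimal sketch of that control flow; run_pollster and the lambdas are illustrative, not ceilometer's API:

    def run_pollster(name, discover):
        resources = discover()
        if not resources:
            print(f"Skip pollster {name}, no resources found this cycle")
            return []
        return [{"pollster": name, "resource": r} for r in resources]

    run_pollster("memory.usage", lambda: [])         # skipped, as logged above
    run_pollster("memory.usage", lambda: ["vm-1"])   # one sample once a VM exists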
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.723 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.723 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.723 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.723 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.723 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.723 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.723 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.724 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.724 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.724 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.724 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.724 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.724 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.724 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.724 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.724 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.724 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.724 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.724 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.725 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.725 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.725 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.725 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.725 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.725 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:25:03 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:03.725 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
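[Annotation] The DEBUG lines above trace one complete ceilometer polling cycle: for each pollster the agent runs the local_instances discovery, logs "Skip pollster ..., no resources found this cycle" when discovery returns nothing, and logs "Finished processing pollster [...]" once a pollster completes. A minimal sketch for tallying those outcomes from a log like this one (the regexes assume exactly the message formats shown above; the log path is a placeholder):

    import re
    from collections import Counter

    SKIP_RE = re.compile(r"Skip pollster ([\w.]+),")
    DONE_RE = re.compile(r"Finished processing pollster \[([\w.]+)\]")

    def tally_pollsters(lines):
        """Count skipped vs. finished pollsters in a ceilometer agent log."""
        counts = Counter()
        for line in lines:
            if m := SKIP_RE.search(line):
                counts[("skipped", m.group(1))] += 1
            elif m := DONE_RE.search(line):
                counts[("finished", m.group(1))] += 1
        return counts

    with open("/var/log/messages") as f:  # placeholder path
        for (state, name), n in sorted(tally_pollsters(f).items()):
            print(f"{state:9s} {name}: {n}")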
Dec  3 18:25:04 compute-0 python3[358191]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=ceilometer_agent_compute.json log_base_path=/var/log/containers/stdouts debug=False
Dec  3 18:25:04 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v767: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:25:04 compute-0 python3[358191]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012     {#012          "Id": "b1b6d71b432c07886b3bae74df4dc9841d1f26407d5f96d6c1e400b0154d9a3d",#012          "Digest": "sha256:1810de77f8d2f3059c7cc377072be9f22a136bfbd0a3ad4f08539090d9469fac",#012          "RepoTags": [#012               "quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested"#012          ],#012          "RepoDigests": [#012               "quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute@sha256:1810de77f8d2f3059c7cc377072be9f22a136bfbd0a3ad4f08539090d9469fac"#012          ],#012          "Parent": "",#012          "Comment": "",#012          "Created": "2025-12-01T05:11:05.921630712Z",#012          "Config": {#012               "User": "root",#012               "Env": [#012                    "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012                    "LANG=en_US.UTF-8",#012                    "TZ=UTC",#012                    "container=oci"#012               ],#012               "Entrypoint": [#012                    "dumb-init",#012                    "--single-child",#012                    "--"#012               ],#012               "Cmd": [#012                    "kolla_start"#012               ],#012               "Labels": {#012                    "io.buildah.version": "1.41.4",#012                    "maintainer": "OpenStack Kubernetes Operator team",#012                    "org.label-schema.build-date": "20251125",#012                    "org.label-schema.license": "GPLv2",#012                    "org.label-schema.name": "CentOS Stream 10 Base Image",#012                    "org.label-schema.schema-version": "1.0",#012                    "org.label-schema.vendor": "CentOS",#012                    "tcib_build_tag": "3a7876c5b6a4ff2e2bc50e11e9db5f42",#012                    "tcib_managed": "true"#012               },#012               "StopSignal": "SIGTERM"#012          },#012          "Version": "",#012          "Author": "",#012          "Architecture": "amd64",#012          "Os": "linux",#012          "Size": 601995467,#012          "VirtualSize": 601995467,#012          "GraphDriver": {#012               "Name": "overlay",#012               "Data": {#012                    "LowerDir": "/var/lib/containers/storage/overlay/586629c35ab12bf3c21aa8405321e52ee8dc3eb91fe319ec2e2bcffcf2f07750/diff:/var/lib/containers/storage/overlay/b726b38a9994fb8597c31b02de6a7067e1e6010e18192135f063d07cbad1efce/diff:/var/lib/containers/storage/overlay/816b6cf07292074c7d459b3269e12ec5823a680369545863b4ff246f9cf897b1/diff:/var/lib/containers/storage/overlay/9cbc2db18be2b6332ac66757d2050c04af51f422021105d6d3edc0bda0b8515c/diff",#012                    "UpperDir": "/var/lib/containers/storage/overlay/d27b7d7dfa077a19fa71a8e66da1979beb59cc810756e543817991e757a42a46/diff",#012                    "WorkDir": "/var/lib/containers/storage/overlay/d27b7d7dfa077a19fa71a8e66da1979beb59cc810756e543817991e757a42a46/work"#012               }#012          },#012          "RootFS": {#012               "Type": "layers",#012               "Layers": [#012                    "sha256:9cbc2db18be2b6332ac66757d2050c04af51f422021105d6d3edc0bda0b8515c",#012                    "sha256:4b40c712f1bd18fdb2c50c6adb38e6952f9d174873260f311696915f181f9947",#012                    "sha256:eaeeda82071109aa7bb6c3500cc7a126797ce0a53bc0f8828831aba88104203b",#012                    "sha256:c58c65fadb00ed08655f756d68fed13f115faec2bc2384f51ce46e18334fe2ae",#012                    "sha256:2f6d51b7d12dca1a77173f044cfb4b6a796a560f1015e515fa8ee8a14f36c103"#012               ]#012          },#012          "Labels": {#012               "io.buildah.version": "1.41.4",#012               "maintainer": "OpenStack Kubernetes Operator team",#012               "org.label-schema.build-date": "20251125",#012               "org.label-schema.license": "GPLv2",#012               "org.label-schema.name": "CentOS Stream 10 Base Image",#012               "org.label-schema.schema-version": "1.0",#012               "org.label-schema.vendor": "CentOS",#012               "tcib_build_tag": "3a7876c5b6a4ff2e2bc50e11e9db5f42",#012               "tcib_managed": "true"#012          },#012          "Annotations": {},#012          "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012          "User": "root",#012          "History": [#012               {#012                    "created": "2025-11-25T03:00:15.634483436Z",#012                    "created_by": "/bin/sh -c #(nop) ADD file:c435edaaf9833341bf9650d5dcfda033191519e1d9c91ecfa082699fd3e149e4 in / ",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-25T03:00:15.634561379Z",#012                    "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\"     org.label-schema.name=\"CentOS Stream 10 Base Image\"     org.label-schema.vendor=\"CentOS\"     org.label-schema.license=\"GPLv2\"     org.label-schema.build-date=\"20251125\"",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-25T03:00:18.392267297Z",#012                    "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012               },#012               {#012                    "created": "2025-12-01T05:03:54.682983025Z",#012                    "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012                    "comment": "FROM quay.io/centos/centos:stream10",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-12-01T05:03:54.683002525Z",#012                    "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-12-01T05:03:54.683016626Z",#012                    "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-12-01T05:03:54.683029656Z",#012                    "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-12-01T05:03:54.683039096Z",#012                    "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-12-01T05:03:54.683051027Z",#012                    "created_by": "/bin/sh -c #(nop) USER root",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-12-01T05:03:55.032223959Z",#012                    "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-12-01T05:03:55.512889527Z",#012                    "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/centos.repo\" ]; then rm -f /etc/yum.repos.d/centos*.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-12-01T05:04:06.648921904Z",#012                    "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && cr
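[Annotation] In the podman inspect record above, the "#012" runs are rsyslog's octal escapes for embedded newlines (012 octal = LF), and the record breaks off mid-word ("&& cr") where the syslog message-size limit truncated it, so the tail of the image history is simply missing. A small sketch to re-expand the escapes when reading such multi-line messages (assumes the default "#" + three-octal-digit escaping; "#011" is a tab):

    import sys

    # Re-expand rsyslog control-character escapes so multi-line messages
    # (e.g. the podman inspect JSON above) become readable again.
    for line in sys.stdin:
        sys.stdout.write(line.replace("#011", "\t").replace("#012", "\n"))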
Dec  3 18:25:04 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:25:04 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:25:04 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:25:04 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:25:05 compute-0 python3.9[358627]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 18:25:05 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:25:05 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:25:05 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 18:25:05 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:25:05 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 18:25:05 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:25:05 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 39923368-ee24-473b-adee-f0dda02b8782 does not exist
Dec  3 18:25:05 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev b2cf6cb5-3bc6-44d8-899a-5613bcfa2efc does not exist
Dec  3 18:25:05 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 80ed1511-e29e-4d50-88d0-2f66abe8e994 does not exist
Dec  3 18:25:05 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 18:25:05 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 18:25:05 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:25:05 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:25:05 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:25:05 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:25:05 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 18:25:05 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:25:05 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:25:05 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:25:06 compute-0 python3.9[358908]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:25:06 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v768: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:25:06 compute-0 podman[358953]: 2025-12-03 18:25:06.600054192 +0000 UTC m=+0.042625814 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:25:07 compute-0 podman[358953]: 2025-12-03 18:25:07.634056805 +0000 UTC m=+1.076628417 container create a39e04cdc77e630f38822957a31a092f9957d80e92baca64acea405862a205d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_davinci, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec  3 18:25:07 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:25:07 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:25:07 compute-0 systemd[1]: Started libpod-conmon-a39e04cdc77e630f38822957a31a092f9957d80e92baca64acea405862a205d8.scope.
Dec  3 18:25:07 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:25:07 compute-0 python3.9[359103]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764786306.5646045-484-203257553919719/source dest=/etc/systemd/system/edpm_ceilometer_agent_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:25:07 compute-0 podman[358953]: 2025-12-03 18:25:07.782092228 +0000 UTC m=+1.224663820 container init a39e04cdc77e630f38822957a31a092f9957d80e92baca64acea405862a205d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_davinci, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Dec  3 18:25:07 compute-0 podman[358953]: 2025-12-03 18:25:07.792163325 +0000 UTC m=+1.234734897 container start a39e04cdc77e630f38822957a31a092f9957d80e92baca64acea405862a205d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_davinci, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:25:07 compute-0 podman[358953]: 2025-12-03 18:25:07.796955022 +0000 UTC m=+1.239526614 container attach a39e04cdc77e630f38822957a31a092f9957d80e92baca64acea405862a205d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_davinci, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:25:07 compute-0 compassionate_davinci[359107]: 167 167
Dec  3 18:25:07 compute-0 systemd[1]: libpod-a39e04cdc77e630f38822957a31a092f9957d80e92baca64acea405862a205d8.scope: Deactivated successfully.
Dec  3 18:25:07 compute-0 podman[358953]: 2025-12-03 18:25:07.79931608 +0000 UTC m=+1.241887662 container died a39e04cdc77e630f38822957a31a092f9957d80e92baca64acea405862a205d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_davinci, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec  3 18:25:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-13d24fee38f91f2b6ba62af2457e72d70990258ca932a33221ffa466b0ca84e5-merged.mount: Deactivated successfully.
Dec  3 18:25:07 compute-0 podman[358953]: 2025-12-03 18:25:07.851388965 +0000 UTC m=+1.293960537 container remove a39e04cdc77e630f38822957a31a092f9957d80e92baca64acea405862a205d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_davinci, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  3 18:25:07 compute-0 systemd[1]: libpod-conmon-a39e04cdc77e630f38822957a31a092f9957d80e92baca64acea405862a205d8.scope: Deactivated successfully.
Dec  3 18:25:08 compute-0 podman[359130]: 2025-12-03 18:25:08.030708365 +0000 UTC m=+0.057712874 container create a8c77c4545eda56a98a81cbf87d1a10cd620c0d626fd8b1cbaefa0fced5bb114 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_albattani, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:25:08 compute-0 podman[359130]: 2025-12-03 18:25:08.00680372 +0000 UTC m=+0.033808259 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:25:08 compute-0 systemd[1]: Started libpod-conmon-a8c77c4545eda56a98a81cbf87d1a10cd620c0d626fd8b1cbaefa0fced5bb114.scope.
Dec  3 18:25:08 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:25:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a8adf7876c39d53dcd3ccf729f0ff8dc6c6b8468595e60ca752b3e820b05258/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:25:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a8adf7876c39d53dcd3ccf729f0ff8dc6c6b8468595e60ca752b3e820b05258/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:25:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a8adf7876c39d53dcd3ccf729f0ff8dc6c6b8468595e60ca752b3e820b05258/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:25:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a8adf7876c39d53dcd3ccf729f0ff8dc6c6b8468595e60ca752b3e820b05258/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:25:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a8adf7876c39d53dcd3ccf729f0ff8dc6c6b8468595e60ca752b3e820b05258/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 18:25:08 compute-0 podman[359130]: 2025-12-03 18:25:08.18495272 +0000 UTC m=+0.211957259 container init a8c77c4545eda56a98a81cbf87d1a10cd620c0d626fd8b1cbaefa0fced5bb114 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_albattani, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  3 18:25:08 compute-0 podman[359130]: 2025-12-03 18:25:08.202272374 +0000 UTC m=+0.229276873 container start a8c77c4545eda56a98a81cbf87d1a10cd620c0d626fd8b1cbaefa0fced5bb114 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_albattani, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec  3 18:25:08 compute-0 podman[359130]: 2025-12-03 18:25:08.208578819 +0000 UTC m=+0.235583318 container attach a8c77c4545eda56a98a81cbf87d1a10cd620c0d626fd8b1cbaefa0fced5bb114 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_albattani, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 18:25:08 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v769: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:25:09 compute-0 python3.9[359233]: ansible-systemd Invoked with state=started name=edpm_ceilometer_agent_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 18:25:09 compute-0 busy_albattani[359146]: --> passed data devices: 0 physical, 3 LVM
Dec  3 18:25:09 compute-0 busy_albattani[359146]: --> relative data size: 1.0
Dec  3 18:25:09 compute-0 busy_albattani[359146]: --> All data devices are unavailable
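[Annotation] The busy_albattani one-shot container is cephadm running ceph-volume to probe for deployable OSD devices: "0 physical, 3 LVM" plus "All data devices are unavailable" means the three candidates are already LVM members, so there is nothing new to provision and the container exits immediately (hence the died/remove events just below). One way to inspect per-device availability and rejection reasons is through the orchestrator, sketched here (the JSON field names follow what "ceph orch device ls --format json" emits in recent releases; treat them as assumptions):

    import json
    import subprocess

    # Ask the Ceph orchestrator for its device inventory (needs admin access).
    raw = subprocess.run(
        ["ceph", "orch", "device", "ls", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout

    for host in json.loads(raw):
        for dev in host.get("devices", []):
            why = ("available" if dev.get("available")
                   else "; ".join(dev.get("rejected_reasons", [])) or "unavailable")
            print(host.get("addr"), dev.get("path"), why)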
Dec  3 18:25:09 compute-0 systemd[1]: libpod-a8c77c4545eda56a98a81cbf87d1a10cd620c0d626fd8b1cbaefa0fced5bb114.scope: Deactivated successfully.
Dec  3 18:25:09 compute-0 podman[359130]: 2025-12-03 18:25:09.334565953 +0000 UTC m=+1.361570442 container died a8c77c4545eda56a98a81cbf87d1a10cd620c0d626fd8b1cbaefa0fced5bb114 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_albattani, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Dec  3 18:25:09 compute-0 systemd[1]: libpod-a8c77c4545eda56a98a81cbf87d1a10cd620c0d626fd8b1cbaefa0fced5bb114.scope: Consumed 1.046s CPU time.
Dec  3 18:25:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a8adf7876c39d53dcd3ccf729f0ff8dc6c6b8468595e60ca752b3e820b05258-merged.mount: Deactivated successfully.
Dec  3 18:25:09 compute-0 podman[359130]: 2025-12-03 18:25:09.413636379 +0000 UTC m=+1.440640868 container remove a8c77c4545eda56a98a81cbf87d1a10cd620c0d626fd8b1cbaefa0fced5bb114 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_albattani, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:25:09 compute-0 systemd[1]: libpod-conmon-a8c77c4545eda56a98a81cbf87d1a10cd620c0d626fd8b1cbaefa0fced5bb114.scope: Deactivated successfully.
Dec  3 18:25:10 compute-0 podman[359501]: 2025-12-03 18:25:10.039171242 +0000 UTC m=+0.107121083 container health_status ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, com.redhat.component=ubi9-container, config_id=edpm, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, name=ubi9, version=9.4, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec  3 18:25:10 compute-0 podman[359488]: 2025-12-03 18:25:10.042426142 +0000 UTC m=+0.128379714 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=1, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm)
Dec  3 18:25:10 compute-0 podman[359494]: 2025-12-03 18:25:10.049763992 +0000 UTC m=+0.122575262 container health_status f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  3 18:25:10 compute-0 podman[359491]: 2025-12-03 18:25:10.054145329 +0000 UTC m=+0.130130437 container health_status 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, name=ubi9-minimal, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.openshift.tags=minimal rhel9, release=1755695350, architecture=x86_64, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7)
Dec  3 18:25:10 compute-0 podman[359489]: 2025-12-03 18:25:10.054379134 +0000 UTC m=+0.134454612 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm)
Dec  3 18:25:10 compute-0 systemd[1]: ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad-5faf5f5157d672c5.service: Main process exited, code=exited, status=1/FAILURE
Dec  3 18:25:10 compute-0 systemd[1]: ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad-5faf5f5157d672c5.service: Failed with result 'exit-code'.
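[Annotation] The failed unit here is one of podman's transient healthcheck services: its name is the full container ID (ac3b4c61... is ceilometer_agent_compute, per the health_status events above) plus a per-run suffix, and status=1/FAILURE means that single healthcheck invocation exited non-zero; given "Stopping ceilometer_agent_compute container..." a few lines below, the check most plausibly raced the container shutdown. A sketch that maps such a unit name back to its container (the unit-name pattern is inferred from this log):

    import re
    import subprocess

    UNIT_RE = re.compile(r"^([0-9a-f]{64})-[0-9a-f]+\.service$")

    def container_for_healthcheck_unit(unit):
        """Resolve a transient podman healthcheck unit to its container name."""
        m = UNIT_RE.match(unit)
        if not m:
            return None
        out = subprocess.run(
            ["podman", "inspect", "--format", "{{.Name}}", m.group(1)],
            capture_output=True, text=True,
        )
        return out.stdout.strip() or None

    print(container_for_healthcheck_unit(
        "ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad"
        "-5faf5f5157d672c5.service"))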
Dec  3 18:25:10 compute-0 podman[359601]: 2025-12-03 18:25:10.155215803 +0000 UTC m=+0.136150904 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  3 18:25:10 compute-0 podman[359674]: 2025-12-03 18:25:10.262694114 +0000 UTC m=+0.053432229 container create 4a916755e3af454bc1f53d9156dbca20d3bbb136d8f3260cb65f0bdea8093fec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Dec  3 18:25:10 compute-0 systemd[1]: Started libpod-conmon-4a916755e3af454bc1f53d9156dbca20d3bbb136d8f3260cb65f0bdea8093fec.scope.
Dec  3 18:25:10 compute-0 podman[359674]: 2025-12-03 18:25:10.240156773 +0000 UTC m=+0.030894918 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:25:10 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:25:10 compute-0 podman[359674]: 2025-12-03 18:25:10.371385725 +0000 UTC m=+0.162123860 container init 4a916755e3af454bc1f53d9156dbca20d3bbb136d8f3260cb65f0bdea8093fec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_morse, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:25:10 compute-0 python3.9[359619]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  3 18:25:10 compute-0 podman[359674]: 2025-12-03 18:25:10.385146712 +0000 UTC m=+0.175884827 container start 4a916755e3af454bc1f53d9156dbca20d3bbb136d8f3260cb65f0bdea8093fec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_morse, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:25:10 compute-0 peaceful_morse[359690]: 167 167
Dec  3 18:25:10 compute-0 systemd[1]: libpod-4a916755e3af454bc1f53d9156dbca20d3bbb136d8f3260cb65f0bdea8093fec.scope: Deactivated successfully.
Dec  3 18:25:10 compute-0 systemd[1]: Stopping ceilometer_agent_compute container...
Dec  3 18:25:10 compute-0 podman[359674]: 2025-12-03 18:25:10.428226096 +0000 UTC m=+0.218964231 container attach 4a916755e3af454bc1f53d9156dbca20d3bbb136d8f3260cb65f0bdea8093fec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_morse, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec  3 18:25:10 compute-0 podman[359674]: 2025-12-03 18:25:10.428672428 +0000 UTC m=+0.219410563 container died 4a916755e3af454bc1f53d9156dbca20d3bbb136d8f3260cb65f0bdea8093fec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_morse, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec  3 18:25:10 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v770: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:25:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-788450e19851c03b7d52ff6649203dcba5c5e366c4c25794a5f6f345a7ed806a-merged.mount: Deactivated successfully.
Dec  3 18:25:10 compute-0 podman[359674]: 2025-12-03 18:25:10.534110309 +0000 UTC m=+0.324848424 container remove 4a916755e3af454bc1f53d9156dbca20d3bbb136d8f3260cb65f0bdea8093fec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_morse, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:25:10 compute-0 systemd[1]: libpod-conmon-4a916755e3af454bc1f53d9156dbca20d3bbb136d8f3260cb65f0bdea8093fec.scope: Deactivated successfully.
Dec  3 18:25:10 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:10.646 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Dec  3 18:25:10 compute-0 podman[359728]: 2025-12-03 18:25:10.741874245 +0000 UTC m=+0.074285190 container create b6316e740311fc43a3e5a3a17681497a7c4f5a00828034c636b515e4ca745313 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_roentgen, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:25:10 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:10.748 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:319
Dec  3 18:25:10 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:10.749 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:323
Dec  3 18:25:10 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:10.749 14 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [14]
Dec  3 18:25:10 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:10.749 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentHeartBeatManager(0) [12]
Dec  3 18:25:10 compute-0 ceilometer_agent_compute[154682]: 2025-12-03 18:25:10.765 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:335
Dec  3 18:25:10 compute-0 virtqemud[138705]: End of file while reading data: Input/output error
Dec  3 18:25:10 compute-0 virtqemud[138705]: End of file while reading data: Input/output error
Dec  3 18:25:10 compute-0 podman[359728]: 2025-12-03 18:25:10.707665258 +0000 UTC m=+0.040076283 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:25:10 compute-0 systemd[1]: libpod-ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad.scope: Deactivated successfully.
Dec  3 18:25:10 compute-0 systemd[1]: libpod-ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad.scope: Consumed 3.557s CPU time.
Dec  3 18:25:11 compute-0 podman[359704]: 2025-12-03 18:25:11.068038339 +0000 UTC m=+0.623437543 container died ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  3 18:25:11 compute-0 systemd[1]: ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad-5faf5f5157d672c5.timer: Deactivated successfully.
Dec  3 18:25:11 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad.
Dec  3 18:25:11 compute-0 systemd[1]: Started libpod-conmon-b6316e740311fc43a3e5a3a17681497a7c4f5a00828034c636b515e4ca745313.scope.
Dec  3 18:25:11 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad-userdata-shm.mount: Deactivated successfully.
Dec  3 18:25:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-38351f0e2a6fe78d67a73cd28e17977909b19ff2624c534b1150afc17b83258f-merged.mount: Deactivated successfully.
Dec  3 18:25:11 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:25:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26d1c17cd3577cb54eebe1099e4b0aee4182f16d1aee1e0383c7d6b417006ed4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:25:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26d1c17cd3577cb54eebe1099e4b0aee4182f16d1aee1e0383c7d6b417006ed4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:25:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26d1c17cd3577cb54eebe1099e4b0aee4182f16d1aee1e0383c7d6b417006ed4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:25:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26d1c17cd3577cb54eebe1099e4b0aee4182f16d1aee1e0383c7d6b417006ed4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:25:11 compute-0 podman[359728]: 2025-12-03 18:25:11.238742368 +0000 UTC m=+0.571153393 container init b6316e740311fc43a3e5a3a17681497a7c4f5a00828034c636b515e4ca745313 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_roentgen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec  3 18:25:11 compute-0 podman[359704]: 2025-12-03 18:25:11.241535757 +0000 UTC m=+0.796934951 container cleanup ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  3 18:25:11 compute-0 podman[359704]: ceilometer_agent_compute
Dec  3 18:25:11 compute-0 podman[359728]: 2025-12-03 18:25:11.248885096 +0000 UTC m=+0.581296071 container start b6316e740311fc43a3e5a3a17681497a7c4f5a00828034c636b515e4ca745313 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_roentgen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:25:11 compute-0 podman[359728]: 2025-12-03 18:25:11.256170255 +0000 UTC m=+0.588581230 container attach b6316e740311fc43a3e5a3a17681497a7c4f5a00828034c636b515e4ca745313 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_roentgen, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:25:11 compute-0 podman[359763]: ceilometer_agent_compute
Dec  3 18:25:11 compute-0 systemd[1]: edpm_ceilometer_agent_compute.service: Deactivated successfully.
Dec  3 18:25:11 compute-0 systemd[1]: Stopped ceilometer_agent_compute container.
Dec  3 18:25:11 compute-0 systemd[1]: Starting ceilometer_agent_compute container...
Dec  3 18:25:11 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:25:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38351f0e2a6fe78d67a73cd28e17977909b19ff2624c534b1150afc17b83258f/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  3 18:25:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38351f0e2a6fe78d67a73cd28e17977909b19ff2624c534b1150afc17b83258f/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Dec  3 18:25:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38351f0e2a6fe78d67a73cd28e17977909b19ff2624c534b1150afc17b83258f/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Dec  3 18:25:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38351f0e2a6fe78d67a73cd28e17977909b19ff2624c534b1150afc17b83258f/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Dec  3 18:25:11 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad.
Dec  3 18:25:11 compute-0 podman[359776]: 2025-12-03 18:25:11.664089831 +0000 UTC m=+0.241730629 container init ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  3 18:25:11 compute-0 ceilometer_agent_compute[359790]: + sudo -E kolla_set_configs
Dec  3 18:25:11 compute-0 podman[359776]: 2025-12-03 18:25:11.699659861 +0000 UTC m=+0.277300609 container start ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  3 18:25:11 compute-0 ceilometer_agent_compute[359790]: sudo: unable to send audit message: Operation not permitted
Dec  3 18:25:11 compute-0 podman[359776]: ceilometer_agent_compute
Dec  3 18:25:11 compute-0 systemd[1]: Started ceilometer_agent_compute container.
Dec  3 18:25:11 compute-0 ceilometer_agent_compute[359790]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  3 18:25:11 compute-0 ceilometer_agent_compute[359790]: INFO:__main__:Validating config file
Dec  3 18:25:11 compute-0 ceilometer_agent_compute[359790]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  3 18:25:11 compute-0 ceilometer_agent_compute[359790]: INFO:__main__:Copying service configuration files
Dec  3 18:25:11 compute-0 ceilometer_agent_compute[359790]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Dec  3 18:25:11 compute-0 ceilometer_agent_compute[359790]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Dec  3 18:25:11 compute-0 ceilometer_agent_compute[359790]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Dec  3 18:25:11 compute-0 ceilometer_agent_compute[359790]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Dec  3 18:25:11 compute-0 ceilometer_agent_compute[359790]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Dec  3 18:25:11 compute-0 ceilometer_agent_compute[359790]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Dec  3 18:25:11 compute-0 ceilometer_agent_compute[359790]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  3 18:25:11 compute-0 ceilometer_agent_compute[359790]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  3 18:25:11 compute-0 ceilometer_agent_compute[359790]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  3 18:25:11 compute-0 ceilometer_agent_compute[359790]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  3 18:25:11 compute-0 ceilometer_agent_compute[359790]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  3 18:25:11 compute-0 ceilometer_agent_compute[359790]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  3 18:25:11 compute-0 ceilometer_agent_compute[359790]: INFO:__main__:Writing out command to execute
Dec  3 18:25:11 compute-0 podman[359797]: 2025-12-03 18:25:11.800012478 +0000 UTC m=+0.085925214 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=1, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  3 18:25:11 compute-0 ceilometer_agent_compute[359790]: ++ cat /run_command
Dec  3 18:25:11 compute-0 systemd[1]: ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad-774d997bb9b7e5b.service: Main process exited, code=exited, status=1/FAILURE
Dec  3 18:25:11 compute-0 systemd[1]: ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad-774d997bb9b7e5b.service: Failed with result 'exit-code'.
Dec  3 18:25:11 compute-0 ceilometer_agent_compute[359790]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec  3 18:25:11 compute-0 ceilometer_agent_compute[359790]: + ARGS=
Dec  3 18:25:11 compute-0 ceilometer_agent_compute[359790]: + sudo kolla_copy_cacerts
Dec  3 18:25:11 compute-0 ceilometer_agent_compute[359790]: sudo: unable to send audit message: Operation not permitted
Dec  3 18:25:11 compute-0 ceilometer_agent_compute[359790]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec  3 18:25:11 compute-0 ceilometer_agent_compute[359790]: + [[ ! -n '' ]]
Dec  3 18:25:11 compute-0 ceilometer_agent_compute[359790]: + . kolla_extend_start
Dec  3 18:25:11 compute-0 ceilometer_agent_compute[359790]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Dec  3 18:25:11 compute-0 ceilometer_agent_compute[359790]: + umask 0022
Dec  3 18:25:11 compute-0 ceilometer_agent_compute[359790]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
Dec  3 18:25:12 compute-0 modest_roentgen[359757]: {
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:    "0": [
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:        {
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:            "devices": [
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:                "/dev/loop3"
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:            ],
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:            "lv_name": "ceph_lv0",
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:            "lv_size": "21470642176",
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=973fbbc8-5aff-4a53-bee8-42e5a6788dd6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:            "lv_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:            "name": "ceph_lv0",
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:            "tags": {
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:                "ceph.block_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:                "ceph.cluster_name": "ceph",
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:                "ceph.crush_device_class": "",
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:                "ceph.encrypted": "0",
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:                "ceph.osd_fsid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:                "ceph.osd_id": "0",
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:                "ceph.type": "block",
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:                "ceph.vdo": "0"
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:            },
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:            "type": "block",
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:            "vg_name": "ceph_vg0"
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:        }
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:    ],
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:    "1": [
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:        {
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:            "devices": [
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:                "/dev/loop4"
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:            ],
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:            "lv_name": "ceph_lv1",
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:            "lv_size": "21470642176",
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1e2b0083-5293-47cb-a3d1-bc27cedc4ede,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:            "lv_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:            "name": "ceph_lv1",
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:            "tags": {
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:                "ceph.block_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:                "ceph.cluster_name": "ceph",
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:                "ceph.crush_device_class": "",
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:                "ceph.encrypted": "0",
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:                "ceph.osd_fsid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:                "ceph.osd_id": "1",
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:                "ceph.type": "block",
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:                "ceph.vdo": "0"
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:            },
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:            "type": "block",
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:            "vg_name": "ceph_vg1"
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:        }
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:    ],
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:    "2": [
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:        {
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:            "devices": [
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:                "/dev/loop5"
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:            ],
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:            "lv_name": "ceph_lv2",
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:            "lv_size": "21470642176",
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2abec9de-afba-437e-9a17-384a1dd8cd50,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:            "lv_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:            "name": "ceph_lv2",
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:            "tags": {
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:                "ceph.block_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:                "ceph.cluster_name": "ceph",
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:                "ceph.crush_device_class": "",
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:                "ceph.encrypted": "0",
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:                "ceph.osd_fsid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:                "ceph.osd_id": "2",
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:                "ceph.type": "block",
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:                "ceph.vdo": "0"
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:            },
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:            "type": "block",
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:            "vg_name": "ceph_vg2"
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:        }
Dec  3 18:25:12 compute-0 modest_roentgen[359757]:    ]
Dec  3 18:25:12 compute-0 modest_roentgen[359757]: }
Dec  3 18:25:12 compute-0 systemd[1]: libpod-b6316e740311fc43a3e5a3a17681497a7c4f5a00828034c636b515e4ca745313.scope: Deactivated successfully.
Dec  3 18:25:12 compute-0 podman[359728]: 2025-12-03 18:25:12.133677227 +0000 UTC m=+1.466088172 container died b6316e740311fc43a3e5a3a17681497a7c4f5a00828034c636b515e4ca745313 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_roentgen, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:25:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-26d1c17cd3577cb54eebe1099e4b0aee4182f16d1aee1e0383c7d6b417006ed4-merged.mount: Deactivated successfully.
Dec  3 18:25:12 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v771: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:25:12 compute-0 podman[359728]: 2025-12-03 18:25:12.480982359 +0000 UTC m=+1.813393294 container remove b6316e740311fc43a3e5a3a17681497a7c4f5a00828034c636b515e4ca745313 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_roentgen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:25:12 compute-0 systemd[1]: libpod-conmon-b6316e740311fc43a3e5a3a17681497a7c4f5a00828034c636b515e4ca745313.scope: Deactivated successfully.
Dec  3 18:25:12 compute-0 python3.9[359986]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/node_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:25:12 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.934 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:45
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.934 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.934 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.934 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.935 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.935 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.935 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.935 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.935 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.935 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.935 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.936 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.936 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.936 2 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.936 2 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.936 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.936 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.936 2 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.937 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.937 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.937 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.937 2 WARNING oslo_config.cfg [-] Deprecated: Option "tenant_name_discovery" from group "DEFAULT" is deprecated. Use option "identity_name_discovery" from group "DEFAULT".
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.937 2 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.937 2 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.938 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.938 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.938 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.938 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.938 2 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.938 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.938 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.938 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.939 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.939 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.939 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.939 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.939 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.939 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.939 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.939 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.939 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.940 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.940 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.940 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.940 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.940 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.940 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.940 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.940 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.940 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.941 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.941 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.941 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.941 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.941 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.941 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.941 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.941 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.941 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.941 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.942 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.942 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.942 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.942 2 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.942 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.942 2 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.942 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.942 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.943 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.943 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.943 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.943 2 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.943 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.943 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.943 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.943 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.944 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.944 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.944 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.944 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.944 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.944 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.944 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.944 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.944 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.945 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.945 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.945 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.945 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.945 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.945 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.945 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.945 2 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.945 2 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.945 2 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.946 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.946 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.946 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.946 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.946 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.946 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.946 2 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.946 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.946 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.947 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.947 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.947 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.947 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.947 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.947 2 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.947 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.947 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.947 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.948 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.948 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.948 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.948 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.948 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.948 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.948 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.948 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.948 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.949 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.949 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.949 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.949 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.949 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.949 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.949 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.949 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.949 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.949 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.949 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.950 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.950 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.950 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.950 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.950 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.950 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.950 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.950 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.950 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.950 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.950 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.950 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.951 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.951 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.951 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.951 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.951 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.951 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.951 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.951 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.973 12 INFO ceilometer.polling.manager [-] Starting heartbeat child service. Listening on /var/lib/ceilometer/ceilometer-compute.socket
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.974 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.974 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.974 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.974 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.975 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.975 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.975 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.975 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.975 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.976 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.976 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.976 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.976 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.977 12 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.977 12 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.977 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.977 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.977 12 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.978 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.978 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.978 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.978 12 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.978 12 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.979 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.979 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.979 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.979 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.979 12 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.980 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.980 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.980 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.980 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.980 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.981 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.981 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.981 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.981 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.981 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.982 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.982 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.982 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.982 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.983 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.983 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.983 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.983 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.983 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.983 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.984 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.984 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.984 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.984 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.984 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.985 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.985 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.985 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.985 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.985 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.986 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.986 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.986 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.986 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.986 12 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.986 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.987 12 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.987 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.987 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.987 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.988 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.988 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.988 12 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.988 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.988 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.989 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.989 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.989 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.989 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.989 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.990 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.990 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.990 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.990 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.990 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.991 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.991 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.991 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.991 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.991 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.992 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.992 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.992 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.992 12 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.992 12 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.993 12 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.993 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.993 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.993 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.993 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.994 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.994 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.994 12 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.994 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.994 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.995 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.995 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.995 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.995 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.995 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.995 12 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.996 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.996 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.996 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.996 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.996 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.997 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.997 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.997 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.997 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.998 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.998 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.998 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.998 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.998 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.999 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.999 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.999 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:12 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.999 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.999 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:12.999 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.000 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.000 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.000 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.000 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.001 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.001 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.001 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.002 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.002 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.002 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.002 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.002 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.002 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.003 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.003 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.003 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.003 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.004 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.004 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.004 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.004 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.005 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.005 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.005 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.005 12 DEBUG cotyledon._service [-] Run service AgentHeartBeatManager(0) [12] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.007 12 DEBUG ceilometer.polling.manager [-] Started heartbeat child process. run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:519
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.010 12 DEBUG ceilometer.polling.manager [-] Started heartbeat update thread _read_queue /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:522
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.011 12 DEBUG ceilometer.polling.manager [-] Started heartbeat reporting thread _report_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:527
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.018 14 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.018 14 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.018 14 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Dec  3 18:25:13 compute-0 python3.9[360165]: ansible-ansible.legacy.file Invoked with group=zuul mode=0700 owner=zuul setype=container_file_t dest=/var/lib/openstack/healthchecks/node_exporter/ _original_basename=healthcheck recurse=False state=file path=/var/lib/openstack/healthchecks/node_exporter/ force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.192 14 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.192 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.192 14 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.192 14 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.192 14 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.193 14 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.193 14 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.193 14 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.193 14 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.193 14 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.193 14 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.193 14 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.193 14 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.194 14 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.194 14 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.194 14 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.194 14 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.194 14 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.194 14 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.194 14 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.195 14 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.195 14 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.195 14 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.195 14 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.195 14 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.195 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.195 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.195 14 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.195 14 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.196 14 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.196 14 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.196 14 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.196 14 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.196 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.196 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.196 14 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.196 14 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.196 14 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.197 14 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.197 14 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.197 14 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.197 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.197 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.197 14 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.197 14 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.197 14 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.197 14 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.198 14 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.198 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.198 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.198 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.198 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.198 14 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.198 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.198 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.199 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.199 14 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.199 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.199 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.199 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.199 14 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.199 14 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.199 14 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.199 14 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.200 14 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.200 14 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.200 14 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.200 14 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.200 14 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.200 14 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.200 14 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.200 14 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.200 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.200 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.200 14 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.201 14 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.201 14 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.201 14 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.201 14 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.201 14 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.201 14 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.201 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.201 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.202 14 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.202 14 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.202 14 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.202 14 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.202 14 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.202 14 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.202 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.202 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.203 14 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.203 14 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.203 14 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.203 14 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.203 14 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.203 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.203 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.203 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.203 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.203 14 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.204 14 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.204 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.204 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.204 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.204 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.204 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.204 14 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.204 14 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.204 14 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.205 14 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.205 14 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.205 14 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.205 14 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.205 14 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.205 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.205 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.205 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.205 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.205 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.206 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.206 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.206 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.206 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.206 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.206 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.206 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.206 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.206 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.206 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.206 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.206 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.207 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.207 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.207 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.207 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.207 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.207 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.207 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.207 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.207 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.207 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.207 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.207 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.208 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.208 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.208 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.208 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.208 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.208 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.208 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.208 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.208 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.208 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.208 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.209 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.209 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.209 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.209 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.209 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.209 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.209 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.209 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.209 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.209 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.210 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.210 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.210 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
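The [service_credentials] block in the dump above is a standard keystoneauth password-auth section: internal Keystone endpoint, user ceilometer, project service, both domains Default, password masked. A sketch of the session those options configure (password value hypothetical; ceilometer actually builds this via keystoneauth1's config loading, not literal kwargs):

from keystoneauth1 import session
from keystoneauth1.identity import v3

auth = v3.Password(
    auth_url='https://keystone-internal.openstack.svc:5000',
    username='ceilometer',
    password='***',                    # masked as **** in the dump above
    project_name='service',
    project_domain_name='Default',
    user_domain_name='Default',
)
# interface=internalURL from the config is applied per request when a
# client asks the session for an endpoint.
sess = session.Session(auth=auth)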
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.210 14 DEBUG cotyledon._service [-] Run service AgentManager(0) [14] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
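wait_forever at cotyledon/_service.py:263 is the generic service loop: cotyledon forks one process per worker, calls the service's run(), and logs "Run service <name>(<worker_id>) [<pid>]". A minimal sketch of that pattern using the documented cotyledon API (the class here is a stand-in, not ceilometer's AgentManager):

import time

import cotyledon

class AgentSketch(cotyledon.Service):
    name = 'agent-sketch'

    def run(self):
        # cotyledon logs "Run service agent-sketch(0) [<pid>]" around here
        # and keeps the worker alive until the manager asks it to stop.
        while True:
            time.sleep(60)

if __name__ == '__main__':
    manager = cotyledon.ServiceManager()
    manager.add(AgentSketch, workers=1)   # worker_id 0, like AgentManager(0)
    manager.run()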
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.214 14 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.12/site-packages/ceilometer/agent.py:64
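The dict logged above is the parsed polling.yaml: one source named "pollsters", polled every 120 seconds, covering explicit meters plus disk.* and network.* wildcards. A sketch of loading an equivalent file, assuming PyYAML and simple fnmatch-style matching for the wildcard entries (ceilometer's own matcher is glob-like but not literally fnmatch):

import fnmatch

import yaml

POLLING_YAML = """
sources:
  - name: pollsters
    interval: 120
    meters:
      - power.state
      - cpu
      - memory.usage
      - disk.*
      - network.*
"""

config = yaml.safe_load(POLLING_YAML)
for source in config['sources']:
    # a meter belongs to this source if any listed pattern matches it
    matched = any(fnmatch.fnmatch('network.outgoing.bytes', pattern)
                  for pattern in source['meters'])
    print(source['name'], source['interval'], matched)   # pollsters 120 True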
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.242 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them. Therefore, the polling cycle may take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.242 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
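With polling.threads_to_process_pollsters = 1, the two lines above mean the couple dozen compute pollsters registered below all share a single worker thread, so each polling cycle runs them strictly in sequence. A sketch of why that serializes the cycle (timings illustrative):

import concurrent.futures
import time

def poll(name):
    time.sleep(0.1)            # stand-in for one pollster's libvirt calls
    return name

start = time.monotonic()
with concurrent.futures.ThreadPoolExecutor(max_workers=1) as executor:
    futures = [executor.submit(poll, 'pollster-%d' % i) for i in range(10)]
    for future in concurrent.futures.as_completed(futures):
        future.result()
# ~1.0s with one worker vs ~0.1s with max_workers=10: submissions queue
# behind the single thread, which is exactly what the warning describes.
print('cycle took %.2fs' % (time.monotonic() - start))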
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.243 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8ef7cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.243 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7eff8d7fffe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.244 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.244 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8ef7cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.244 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff9026f920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8ef7cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.244 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8ef7cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.244 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8ef7cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.245 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffa10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8ef7cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.245 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8daba2d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8ef7cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.245 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8ef7cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.245 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff90799b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8ef7cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.245 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8ef7cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.245 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8f46ebd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8ef7cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.245 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8ef7cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.245 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8ef7cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.246 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8ef7cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.246 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8ef7cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.246 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8ef7cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.246 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8ef7cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.246 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8ef7cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.246 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8ef7cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.246 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8ef7cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.246 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8ef7cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.246 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8ef7cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.248 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7fff50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8ef7cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.249 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8ef7cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.249 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7fffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8ef7cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.249 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8ef7c7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8ef7cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
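Each "Registering pollster [<stevedore.extension.Extension ...>]" line above wraps one plugin loaded through stevedore from Python entry points; the compute pollsters are published under the ceilometer.poll.compute namespace (namespace name taken from upstream ceilometer packaging; treat it as an assumption here). A minimal sketch of that loading step:

from stevedore import extension

# Load every entry point in the namespace and instantiate it, producing
# the Extension wrappers seen in the registration lines above.
manager = extension.ExtensionManager(
    namespace='ceilometer.poll.compute',
    invoke_on_load=True,
)
for ext in manager:
    # ext.name is the meter name (e.g. network.outgoing.bytes);
    # ext.obj is the instantiated pollster object.
    print(ext.name, type(ext.obj).__name__)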
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.250 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.250 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7eff8d8a80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.250 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.250 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7eff8d8a8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.250 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.250 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7eff8d8a8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.250 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.250 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7eff8d8a81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.251 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.251 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7eff8d7ff9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.251 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.251 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7eff8d7fe840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.251 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.251 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7eff8d8a82c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.251 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.251 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7eff8d7ff9b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.251 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.251 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7eff8d8a8350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.252 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.252 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7eff8f682330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.252 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.252 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7eff8d7ff4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.252 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.252 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7eff8d930c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.252 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.252 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7eff8d7ff4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.252 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.252 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7eff8d7ff530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.252 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.252 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7eff8d7ff590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.253 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.253 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7eff8d7ff5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.253 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.253 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7eff8d8a8620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.253 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.253 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7eff8d7ff650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.253 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.253 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7eff8d7ff6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.253 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.253 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7eff8d7ffa40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.253 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.253 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7eff8d7ff710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.254 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.254 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7eff8d7fff20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.254 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.254 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7eff8d7ff770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.254 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.254 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7eff8d7fff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.254 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.254 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7eff8d7fdac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.254 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.254 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.254 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.255 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.255 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.255 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.255 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.255 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.255 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.255 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.255 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.255 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.255 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.255 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.255 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.255 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.256 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.256 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.256 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.256 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.256 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.256 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.256 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.256 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.256 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.256 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:25:13.256 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
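Taken together, the cycle above is uniform: every "Executing discovery process ... [local_instances]" line is answered by a "Skip pollster ..., no resources found this cycle" line, and each pollster is then reported finished. In other words, instance discovery on this compute host returned nothing, so no samples were produced. A hedged sketch of that per-pollster control flow (the function names are illustrative, not ceilometer's API):

    # Hedged sketch of the skip-then-finish flow visible above: run the
    # discovery method first, and when it yields no resources, skip the
    # pollster for this cycle instead of collecting samples.
    def run_pollster(name, discover, poll):
        resources = discover()  # e.g. the local_instances discovery
        if not resources:
            print(f"Skip pollster {name}, no resources found this cycle")
            samples = []
        else:
            samples = [poll(r) for r in resources]
        print(f"Finished processing pollster [{name}].")
        return samples

    # With no VMs on the host, every compute pollster takes the skip branch:
    run_pollster("memory.usage", discover=lambda: [], poll=lambda r: r)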
Dec  3 18:25:13 compute-0 podman[360229]: 2025-12-03 18:25:13.246177113 +0000 UTC m=+0.061315212 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:25:13 compute-0 podman[360229]: 2025-12-03 18:25:13.448709961 +0000 UTC m=+0.263848060 container create b008456ff47806d45d088e0615869aca4098b149c02c53efb71305159775071f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_cray, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec  3 18:25:13 compute-0 systemd[1]: Started libpod-conmon-b008456ff47806d45d088e0615869aca4098b149c02c53efb71305159775071f.scope.
Dec  3 18:25:13 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:25:13 compute-0 podman[360229]: 2025-12-03 18:25:13.587372836 +0000 UTC m=+0.402510925 container init b008456ff47806d45d088e0615869aca4098b149c02c53efb71305159775071f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_cray, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec  3 18:25:13 compute-0 podman[360229]: 2025-12-03 18:25:13.598328994 +0000 UTC m=+0.413467083 container start b008456ff47806d45d088e0615869aca4098b149c02c53efb71305159775071f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_cray, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:25:13 compute-0 relaxed_cray[360272]: 167 167
Dec  3 18:25:13 compute-0 systemd[1]: libpod-b008456ff47806d45d088e0615869aca4098b149c02c53efb71305159775071f.scope: Deactivated successfully.
Dec  3 18:25:13 compute-0 conmon[360272]: conmon b008456ff47806d45d08 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b008456ff47806d45d088e0615869aca4098b149c02c53efb71305159775071f.scope/container/memory.events
Dec  3 18:25:13 compute-0 podman[360229]: 2025-12-03 18:25:13.612716416 +0000 UTC m=+0.427854575 container attach b008456ff47806d45d088e0615869aca4098b149c02c53efb71305159775071f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_cray, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:25:13 compute-0 podman[360229]: 2025-12-03 18:25:13.615070393 +0000 UTC m=+0.430208472 container died b008456ff47806d45d088e0615869aca4098b149c02c53efb71305159775071f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_cray, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec  3 18:25:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f06753820bdbd02ad283e4d3e1c8a51b13e8273d80b02a197120b958b5c3c97-merged.mount: Deactivated successfully.
Dec  3 18:25:13 compute-0 podman[360229]: 2025-12-03 18:25:13.685796095 +0000 UTC m=+0.500934164 container remove b008456ff47806d45d088e0615869aca4098b149c02c53efb71305159775071f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_cray, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec  3 18:25:13 compute-0 systemd[1]: libpod-conmon-b008456ff47806d45d088e0615869aca4098b149c02c53efb71305159775071f.scope: Deactivated successfully.
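The relaxed_cray burst above (image pull, create, start, attach, died, remove, all inside half a second) is the signature of a one-shot helper container, and its only output was "167 167", the uid/gid pair used by the ceph user. A hedged reproduction of the pattern with a throwaway podman run (the image digest is copied from the log; using id to produce "167 167" is an assumption about what the helper actually ran):

    # Hedged sketch: run a short-lived container the way the
    # create/start/died/remove sequence above does. The digest comes from
    # the log; printing the ceph user's uid/gid is only a guess at what
    # emitted "167 167".
    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    proc = subprocess.run(
        ["podman", "run", "--rm", IMAGE, "sh", "-c",
         "printf '%s %s\\n' \"$(id -u ceph)\" \"$(id -g ceph)\""],
        capture_output=True, text=True)
    print(proc.stdout.strip())  # expected: 167 167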
Dec  3 18:25:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_18:25:13
Dec  3 18:25:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 18:25:13 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 18:25:13 compute-0 ceph-mgr[193091]: [balancer INFO root] pools ['default.rgw.meta', 'backups', 'volumes', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.control', 'vms', 'images', '.rgw.root', 'cephfs.cephfs.data']
Dec  3 18:25:13 compute-0 ceph-mgr[193091]: [balancer INFO root] prepared 0/10 changes
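"prepared 0/10 changes" means the upmap optimizer walked the listed pools and proposed no PG remappings this round, i.e. the cluster is already balanced within the configured 5% misplaced ceiling. The same state can be queried on demand; a small sketch (ceph balancer status is a real mgr command, but its exact JSON fields vary by release, so the output is printed rather than parsed against an assumed schema):

    # Hedged sketch: cross-check the balancer log lines from the CLI.
    import subprocess

    status = subprocess.run(["ceph", "balancer", "status"],
                            capture_output=True, text=True, check=True)
    print(status.stdout)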
Dec  3 18:25:13 compute-0 podman[360340]: 2025-12-03 18:25:13.922245275 +0000 UTC m=+0.087697799 container create dd711253f10532e1d751ce58eb34a4a81c2dadc0e2569d3a2af0cfb4fe8343be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507)
Dec  3 18:25:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:25:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:25:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:25:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:25:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:25:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:25:13 compute-0 podman[360340]: 2025-12-03 18:25:13.878567006 +0000 UTC m=+0.044019560 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:25:14 compute-0 systemd[1]: Started libpod-conmon-dd711253f10532e1d751ce58eb34a4a81c2dadc0e2569d3a2af0cfb4fe8343be.scope.
Dec  3 18:25:14 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:25:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/def235eadebe70f06dd53fccb9cc3861cafcb9926dcc0cbe3a9093737dcb0f14/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:25:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/def235eadebe70f06dd53fccb9cc3861cafcb9926dcc0cbe3a9093737dcb0f14/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:25:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/def235eadebe70f06dd53fccb9cc3861cafcb9926dcc0cbe3a9093737dcb0f14/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:25:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/def235eadebe70f06dd53fccb9cc3861cafcb9926dcc0cbe3a9093737dcb0f14/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
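The four xfs remount messages are the kernel noting that these filesystems carry 32-bit second counters in their inode timestamps, which run out at the quoted hex limit. Decoding 0x7fffffff shows where the "until 2038" wording comes from:

    # 0x7fffffff is the largest 32-bit signed Unix timestamp; converting it
    # gives the cutoff the xfs messages above warn about.
    from datetime import datetime, timezone

    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00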
Dec  3 18:25:14 compute-0 podman[360340]: 2025-12-03 18:25:14.095728242 +0000 UTC m=+0.261180796 container init dd711253f10532e1d751ce58eb34a4a81c2dadc0e2569d3a2af0cfb4fe8343be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_visvesvaraya, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:25:14 compute-0 podman[360340]: 2025-12-03 18:25:14.103798369 +0000 UTC m=+0.269250903 container start dd711253f10532e1d751ce58eb34a4a81c2dadc0e2569d3a2af0cfb4fe8343be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_visvesvaraya, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:25:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 18:25:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:25:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 18:25:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:25:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:25:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:25:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:25:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:25:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:25:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:25:14 compute-0 podman[360340]: 2025-12-03 18:25:14.142412445 +0000 UTC m=+0.307864999 container attach dd711253f10532e1d751ce58eb34a4a81c2dadc0e2569d3a2af0cfb4fe8343be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:25:14 compute-0 python3.9[360425]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=node_exporter.json debug=False
Dec  3 18:25:14 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v772: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:25:15 compute-0 funny_visvesvaraya[360405]: {
Dec  3 18:25:15 compute-0 funny_visvesvaraya[360405]:    "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
Dec  3 18:25:15 compute-0 funny_visvesvaraya[360405]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:25:15 compute-0 funny_visvesvaraya[360405]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 18:25:15 compute-0 funny_visvesvaraya[360405]:        "osd_id": 1,
Dec  3 18:25:15 compute-0 funny_visvesvaraya[360405]:        "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:25:15 compute-0 funny_visvesvaraya[360405]:        "type": "bluestore"
Dec  3 18:25:15 compute-0 funny_visvesvaraya[360405]:    },
Dec  3 18:25:15 compute-0 funny_visvesvaraya[360405]:    "2abec9de-afba-437e-9a17-384a1dd8cd50": {
Dec  3 18:25:15 compute-0 funny_visvesvaraya[360405]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:25:15 compute-0 funny_visvesvaraya[360405]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 18:25:15 compute-0 funny_visvesvaraya[360405]:        "osd_id": 2,
Dec  3 18:25:15 compute-0 funny_visvesvaraya[360405]:        "osd_uuid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:25:15 compute-0 funny_visvesvaraya[360405]:        "type": "bluestore"
Dec  3 18:25:15 compute-0 funny_visvesvaraya[360405]:    },
Dec  3 18:25:15 compute-0 funny_visvesvaraya[360405]:    "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": {
Dec  3 18:25:15 compute-0 funny_visvesvaraya[360405]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:25:15 compute-0 funny_visvesvaraya[360405]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 18:25:15 compute-0 funny_visvesvaraya[360405]:        "osd_id": 0,
Dec  3 18:25:15 compute-0 funny_visvesvaraya[360405]:        "osd_uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:25:15 compute-0 funny_visvesvaraya[360405]:        "type": "bluestore"
Dec  3 18:25:15 compute-0 funny_visvesvaraya[360405]:    }
Dec  3 18:25:15 compute-0 funny_visvesvaraya[360405]: }
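funny_visvesvaraya printed a JSON map keyed by OSD UUID, one bluestore LV per OSD; the shape matches ceph-volume raw list output (an inference from the field names — the log does not show the command line). Reducing it to one line per OSD is a plain parse:

    # Hedged sketch: parse the OSD inventory JSON printed above. `raw` is
    # abbreviated to a single entry copied from the log.
    import json

    raw = """{
      "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
        "osd_id": 1,
        "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
        "type": "bluestore"
      }
    }"""

    for osd_uuid, osd in json.loads(raw).items():
        print(f"osd.{osd['osd_id']}: {osd['device']} ({osd['type']})")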
Dec  3 18:25:15 compute-0 python3.9[360593]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  3 18:25:15 compute-0 systemd[1]: libpod-dd711253f10532e1d751ce58eb34a4a81c2dadc0e2569d3a2af0cfb4fe8343be.scope: Deactivated successfully.
Dec  3 18:25:15 compute-0 systemd[1]: libpod-dd711253f10532e1d751ce58eb34a4a81c2dadc0e2569d3a2af0cfb4fe8343be.scope: Consumed 1.150s CPU time.
Dec  3 18:25:15 compute-0 podman[360340]: 2025-12-03 18:25:15.264644508 +0000 UTC m=+1.430097052 container died dd711253f10532e1d751ce58eb34a4a81c2dadc0e2569d3a2af0cfb4fe8343be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  3 18:25:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-def235eadebe70f06dd53fccb9cc3861cafcb9926dcc0cbe3a9093737dcb0f14-merged.mount: Deactivated successfully.
Dec  3 18:25:15 compute-0 podman[360340]: 2025-12-03 18:25:15.760943347 +0000 UTC m=+1.926395871 container remove dd711253f10532e1d751ce58eb34a4a81c2dadc0e2569d3a2af0cfb4fe8343be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_visvesvaraya, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:25:15 compute-0 systemd[1]: libpod-conmon-dd711253f10532e1d751ce58eb34a4a81c2dadc0e2569d3a2af0cfb4fe8343be.scope: Deactivated successfully.
Dec  3 18:25:15 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:25:15 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:25:15 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:25:15 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:25:15 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 82864377-a1f0-447d-a3d7-8cfcbdefa310 does not exist
Dec  3 18:25:15 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev efaa8c93-5f68-4728-8552-1303205f458c does not exist
Dec  3 18:25:15 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:25:15 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:25:16 compute-0 python3[360822]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=node_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Dec  3 18:25:16 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v773: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:25:16 compute-0 python3[360822]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012     {#012          "Id": "0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83",#012          "Digest": "sha256:fa8e5700b7762fffe0674e944762f44bb787a7e44d97569fe55348260453bf80",#012          "RepoTags": [#012               "quay.io/prometheus/node-exporter:v1.5.0"#012          ],#012          "RepoDigests": [#012               "quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c",#012               "quay.io/prometheus/node-exporter@sha256:fa8e5700b7762fffe0674e944762f44bb787a7e44d97569fe55348260453bf80"#012          ],#012          "Parent": "",#012          "Comment": "",#012          "Created": "2022-11-29T19:06:14.987394068Z",#012          "Config": {#012               "User": "nobody",#012               "ExposedPorts": {#012                    "9100/tcp": {}#012               },#012               "Env": [#012                    "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"#012               ],#012               "Entrypoint": [#012                    "/bin/node_exporter"#012               ],#012               "Labels": {#012                    "maintainer": "The Prometheus Authors <prometheus-developers@googlegroups.com>"#012               }#012          },#012          "Version": "19.03.8",#012          "Author": "",#012          "Architecture": "amd64",#012          "Os": "linux",#012          "Size": 23851788,#012          "VirtualSize": 23851788,#012          "GraphDriver": {#012               "Name": "overlay",#012               "Data": {#012                    "LowerDir": "/var/lib/containers/storage/overlay/a1185e7325783fe8cba63270bc6e59299386d7c73e4bc34c560a1fbc9e6d7e2c/diff:/var/lib/containers/storage/overlay/0438ade5aeea533b00cd75095bec75fbc2b307bace4c89bb39b75d428637bcd8/diff",#012                    "UpperDir": "/var/lib/containers/storage/overlay/2cd9444c84550fbd551e3826a8110fcc009757858b99e84f1119041f2325189b/diff",#012                    "WorkDir": "/var/lib/containers/storage/overlay/2cd9444c84550fbd551e3826a8110fcc009757858b99e84f1119041f2325189b/work"#012               }#012          },#012          "RootFS": {#012               "Type": "layers",#012               "Layers": [#012                    "sha256:0438ade5aeea533b00cd75095bec75fbc2b307bace4c89bb39b75d428637bcd8",#012                    "sha256:9f2d25037e3e722ca7f4ca9c7a885f19a2ce11140592ee0acb323dec3b26640d",#012                    "sha256:76857a93cd03e12817c36c667cc3263d58886232cad116327e55d79036e5977d"#012               ]#012          },#012          "Labels": {#012               "maintainer": "The Prometheus Authors <prometheus-developers@googlegroups.com>"#012          },#012          "Annotations": {},#012          "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012          "User": "nobody",#012          "History": [#012               {#012                    "created": "2022-10-26T06:30:33.700079457Z",#012                    "created_by": "/bin/sh -c #(nop) ADD file:5e991de3200129dc05c3130f7a64bebb5704486b4f773bfcaa6b13165d6c2416 in / "#012               },#012               {#012                    "created": "2022-10-26T06:30:33.794221299Z",#012                    "created_by": "/bin/sh -c #(nop)  CMD [\"sh\"]",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2022-11-15T10:54:54.845364304Z",#012                    "created_by": "/bin/sh -c #(nop)  MAINTAINER The Prometheus Authors <prometheus-developers@googlegroups.com>",#012                    "author": "The Prometheus Authors <prometheus-developers@googlegroups.com>",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2022-11-15T10:54:55.54866664Z",#012                    "created_by": "/bin/sh -c #(nop) COPY dir:02c961e21531be78a67ed9bad67d03391cfedcead8b0a35cfb9171346636f11a in / ",#012                    "author": "The Prometheus Authors <prometheus-developers@googlegroups.com>"#012               },#012               {#012                    "created": "2022-11-29T19:06:13.622645057Z",#012                    "created_by": "/bin/sh -c #(nop)  LABEL maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2022-11-29T19:06:13.810765105Z",#012                    "created_by": "/bin/sh -c #(nop)  ARG ARCH=amd64",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2022-11-29T19:06:13.990897895Z",#012                    "created_by": "/bin/sh -c #(nop)  ARG OS=linux",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2022-11-29T19:06:14.358293759Z",#012                    "created_by": "/bin/sh -c #(nop) COPY file:3ef20dd145817033186947b860c3b6f7bb06d4c435257258c0e5df15f6e51eb7 in /bin/node_exporter "#012               },#012               {#012                    "created": "2022-11-29T19:06:14.630644274Z",#012                    "created_by": "/bin/sh -c #(nop)  EXPOSE 9100",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2022-11-29T19:06:14.79596292Z",#012                    "created_by": "/bin/sh -c #(nop)  USER nobody",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2022-11-29T19:06:14.987394068Z",#012                    "created_by": "/bin/sh -c #(nop)  ENTRYPOINT [\"/bin/node_exporter\"]",#012                    "empty_layer": true#012               }#012          ],#012          "NamesHistory": [#012               "quay.io/prometheus/node-exporter:v1.5.0"#012          ]#012     }#012]#012: quay.io/prometheus/node-exporter:v1.5.0
Dec  3 18:25:16 compute-0 systemd[1]: libpod-f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58.scope: Deactivated successfully.
Dec  3 18:25:16 compute-0 systemd[1]: libpod-f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58.scope: Consumed 5.010s CPU time.
Dec  3 18:25:16 compute-0 podman[360870]: 2025-12-03 18:25:16.748217455 +0000 UTC m=+0.055956441 container died f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 18:25:16 compute-0 systemd[1]: f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58-1ed3e4383e75385e.timer: Deactivated successfully.
Dec  3 18:25:16 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58.
Dec  3 18:25:16 compute-0 systemd[1]: f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58-1ed3e4383e75385e.service: Failed to open /run/systemd/transient/f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58-1ed3e4383e75385e.service: No such file or directory
Dec  3 18:25:16 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58-userdata-shm.mount: Deactivated successfully.
Dec  3 18:25:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-f05fe21bac130252b072c55040b283de57713836ce9b724e01eec43e52bbf128-merged.mount: Deactivated successfully.
Dec  3 18:25:16 compute-0 podman[360870]: 2025-12-03 18:25:16.809842324 +0000 UTC m=+0.117581290 container cleanup f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  3 18:25:16 compute-0 python3[360822]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman stop node_exporter
Dec  3 18:25:16 compute-0 systemd[1]: edpm_node_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Dec  3 18:25:16 compute-0 systemd[1]: f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58-1ed3e4383e75385e.timer: Failed to open /run/systemd/transient/f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58-1ed3e4383e75385e.timer: No such file or directory
Dec  3 18:25:16 compute-0 systemd[1]: f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58-1ed3e4383e75385e.service: Failed to open /run/systemd/transient/f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58-1ed3e4383e75385e.service: No such file or directory
Dec  3 18:25:16 compute-0 podman[360898]: 2025-12-03 18:25:16.91017695 +0000 UTC m=+0.073861119 container remove f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 18:25:16 compute-0 podman[360899]: Error: no container with ID f117b58969a20e4e7e0cc29a1a5a2fb708d40040632716b7b7e61374c3df8a58 found in database: no such container
Dec  3 18:25:16 compute-0 python3[360822]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force node_exporter
Dec  3 18:25:16 compute-0 systemd[1]: edpm_node_exporter.service: Control process exited, code=exited, status=125/n/a
Dec  3 18:25:16 compute-0 systemd[1]: edpm_node_exporter.service: Failed with result 'exit-code'.
Dec  3 18:25:17 compute-0 podman[360922]: 2025-12-03 18:25:17.010848855 +0000 UTC m=+0.072373063 container create c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  3 18:25:17 compute-0 podman[360922]: 2025-12-03 18:25:16.968332203 +0000 UTC m=+0.029856401 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Dec  3 18:25:17 compute-0 python3[360822]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name node_exporter --conmon-pidfile /run/node_exporter.pid --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck node_exporter --label config_id=edpm --label container_name=node_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9100:9100 --user root --volume /var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z --volume /var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw --volume /var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z quay.io/prometheus/node-exporter:v1.5.0 --web.config.file=/etc/node_exporter/node_exporter.yaml --web.disable-exporter-metrics --collector.systemd --collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service --no-collector.dmi --no-collector.entropy --no-collector.thermal_zone --no-collector.time --no-collector.timex --no-collector.uname --no-collector.stat --no-collector.hwmon --no-collector.os --no-collector.selinux --no-collector.textfile --no-collector.powersupplyclass --no-collector.pressure --no-collector.rapl
Dec  3 18:25:17 compute-0 systemd[1]: edpm_node_exporter.service: Scheduled restart job, restart counter is at 1.
Dec  3 18:25:17 compute-0 systemd[1]: Stopped node_exporter container.
Dec  3 18:25:17 compute-0 systemd[1]: Starting node_exporter container...
Dec  3 18:25:17 compute-0 systemd[1]: Started libpod-conmon-c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9.scope.
Dec  3 18:25:17 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:25:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b1e34e1b868879575e6aeb083082ec0191d0c1d4c41da58bc9937babc797e5e/merged/etc/node_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec  3 18:25:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b1e34e1b868879575e6aeb083082ec0191d0c1d4c41da58bc9937babc797e5e/merged/etc/node_exporter/node_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  3 18:25:17 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9.
Dec  3 18:25:17 compute-0 podman[360935]: 2025-12-03 18:25:17.288711947 +0000 UTC m=+0.240618022 container init c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  3 18:25:17 compute-0 node_exporter[360959]: ts=2025-12-03T18:25:17.321Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Dec  3 18:25:17 compute-0 node_exporter[360959]: ts=2025-12-03T18:25:17.321Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Dec  3 18:25:17 compute-0 node_exporter[360959]: ts=2025-12-03T18:25:17.321Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Dec  3 18:25:17 compute-0 node_exporter[360959]: ts=2025-12-03T18:25:17.323Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Dec  3 18:25:17 compute-0 node_exporter[360959]: ts=2025-12-03T18:25:17.323Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Dec  3 18:25:17 compute-0 node_exporter[360959]: ts=2025-12-03T18:25:17.323Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Dec  3 18:25:17 compute-0 node_exporter[360959]: ts=2025-12-03T18:25:17.323Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Dec  3 18:25:17 compute-0 node_exporter[360959]: ts=2025-12-03T18:25:17.324Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Dec  3 18:25:17 compute-0 node_exporter[360959]: ts=2025-12-03T18:25:17.324Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
Dec  3 18:25:17 compute-0 node_exporter[360959]: ts=2025-12-03T18:25:17.324Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Dec  3 18:25:17 compute-0 node_exporter[360959]: ts=2025-12-03T18:25:17.324Z caller=node_exporter.go:117 level=info collector=arp
Dec  3 18:25:17 compute-0 node_exporter[360959]: ts=2025-12-03T18:25:17.324Z caller=node_exporter.go:117 level=info collector=bcache
Dec  3 18:25:17 compute-0 node_exporter[360959]: ts=2025-12-03T18:25:17.324Z caller=node_exporter.go:117 level=info collector=bonding
Dec  3 18:25:17 compute-0 node_exporter[360959]: ts=2025-12-03T18:25:17.324Z caller=node_exporter.go:117 level=info collector=btrfs
Dec  3 18:25:17 compute-0 node_exporter[360959]: ts=2025-12-03T18:25:17.324Z caller=node_exporter.go:117 level=info collector=conntrack
Dec  3 18:25:17 compute-0 node_exporter[360959]: ts=2025-12-03T18:25:17.324Z caller=node_exporter.go:117 level=info collector=cpu
Dec  3 18:25:17 compute-0 node_exporter[360959]: ts=2025-12-03T18:25:17.324Z caller=node_exporter.go:117 level=info collector=cpufreq
Dec  3 18:25:17 compute-0 node_exporter[360959]: ts=2025-12-03T18:25:17.324Z caller=node_exporter.go:117 level=info collector=diskstats
Dec  3 18:25:17 compute-0 node_exporter[360959]: ts=2025-12-03T18:25:17.324Z caller=node_exporter.go:117 level=info collector=edac
Dec  3 18:25:17 compute-0 node_exporter[360959]: ts=2025-12-03T18:25:17.325Z caller=node_exporter.go:117 level=info collector=fibrechannel
Dec  3 18:25:17 compute-0 node_exporter[360959]: ts=2025-12-03T18:25:17.325Z caller=node_exporter.go:117 level=info collector=filefd
Dec  3 18:25:17 compute-0 node_exporter[360959]: ts=2025-12-03T18:25:17.325Z caller=node_exporter.go:117 level=info collector=filesystem
Dec  3 18:25:17 compute-0 node_exporter[360959]: ts=2025-12-03T18:25:17.325Z caller=node_exporter.go:117 level=info collector=infiniband
Dec  3 18:25:17 compute-0 node_exporter[360959]: ts=2025-12-03T18:25:17.325Z caller=node_exporter.go:117 level=info collector=ipvs
Dec  3 18:25:17 compute-0 node_exporter[360959]: ts=2025-12-03T18:25:17.325Z caller=node_exporter.go:117 level=info collector=loadavg
Dec  3 18:25:17 compute-0 node_exporter[360959]: ts=2025-12-03T18:25:17.325Z caller=node_exporter.go:117 level=info collector=mdadm
Dec  3 18:25:17 compute-0 node_exporter[360959]: ts=2025-12-03T18:25:17.325Z caller=node_exporter.go:117 level=info collector=meminfo
Dec  3 18:25:17 compute-0 node_exporter[360959]: ts=2025-12-03T18:25:17.325Z caller=node_exporter.go:117 level=info collector=netclass
Dec  3 18:25:17 compute-0 node_exporter[360959]: ts=2025-12-03T18:25:17.325Z caller=node_exporter.go:117 level=info collector=netdev
Dec  3 18:25:17 compute-0 node_exporter[360959]: ts=2025-12-03T18:25:17.325Z caller=node_exporter.go:117 level=info collector=netstat
Dec  3 18:25:17 compute-0 node_exporter[360959]: ts=2025-12-03T18:25:17.325Z caller=node_exporter.go:117 level=info collector=nfs
Dec  3 18:25:17 compute-0 node_exporter[360959]: ts=2025-12-03T18:25:17.325Z caller=node_exporter.go:117 level=info collector=nfsd
Dec  3 18:25:17 compute-0 node_exporter[360959]: ts=2025-12-03T18:25:17.325Z caller=node_exporter.go:117 level=info collector=nvme
Dec  3 18:25:17 compute-0 node_exporter[360959]: ts=2025-12-03T18:25:17.325Z caller=node_exporter.go:117 level=info collector=schedstat
Dec  3 18:25:17 compute-0 node_exporter[360959]: ts=2025-12-03T18:25:17.325Z caller=node_exporter.go:117 level=info collector=sockstat
Dec  3 18:25:17 compute-0 node_exporter[360959]: ts=2025-12-03T18:25:17.325Z caller=node_exporter.go:117 level=info collector=softnet
Dec  3 18:25:17 compute-0 node_exporter[360959]: ts=2025-12-03T18:25:17.325Z caller=node_exporter.go:117 level=info collector=systemd
Dec  3 18:25:17 compute-0 node_exporter[360959]: ts=2025-12-03T18:25:17.325Z caller=node_exporter.go:117 level=info collector=tapestats
Dec  3 18:25:17 compute-0 node_exporter[360959]: ts=2025-12-03T18:25:17.325Z caller=node_exporter.go:117 level=info collector=udp_queues
Dec  3 18:25:17 compute-0 node_exporter[360959]: ts=2025-12-03T18:25:17.325Z caller=node_exporter.go:117 level=info collector=vmstat
Dec  3 18:25:17 compute-0 node_exporter[360959]: ts=2025-12-03T18:25:17.325Z caller=node_exporter.go:117 level=info collector=xfs
Dec  3 18:25:17 compute-0 node_exporter[360959]: ts=2025-12-03T18:25:17.325Z caller=node_exporter.go:117 level=info collector=zfs
Dec  3 18:25:17 compute-0 node_exporter[360959]: ts=2025-12-03T18:25:17.326Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Dec  3 18:25:17 compute-0 node_exporter[360959]: ts=2025-12-03T18:25:17.326Z caller=tls_config.go:268 level=info msg="TLS is enabled." http2=true address=[::]:9100
Dec  3 18:25:17 compute-0 podman[360935]: 2025-12-03 18:25:17.331781721 +0000 UTC m=+0.283687776 container start c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  3 18:25:17 compute-0 python3[360822]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman start node_exporter
Dec  3 18:25:17 compute-0 podman[360941]: node_exporter
Dec  3 18:25:17 compute-0 systemd[1]: Started node_exporter container.
Dec  3 18:25:17 compute-0 podman[360969]: 2025-12-03 18:25:17.464228434 +0000 UTC m=+0.112012173 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 18:25:17 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:25:18 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v774: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:25:18 compute-0 python3.9[361170]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 18:25:19 compute-0 python3.9[361324]: ansible-file Invoked with path=/etc/systemd/system/edpm_node_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:25:20 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v775: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:25:21 compute-0 python3.9[361476]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764786320.3878036-562-124492238496211/source dest=/etc/systemd/system/edpm_node_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:25:21 compute-0 python3.9[361552]: ansible-systemd Invoked with state=started name=edpm_node_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 18:25:22 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v776: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:25:22 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:25:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:25:23.312 286999 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:25:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:25:23.313 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:25:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:25:23.313 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:25:23 compute-0 python3.9[361706]: ansible-ansible.builtin.systemd Invoked with name=edpm_node_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  3 18:25:23 compute-0 systemd[1]: Stopping node_exporter container...
Dec  3 18:25:23 compute-0 systemd[1]: libpod-c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9.scope: Deactivated successfully.
Dec  3 18:25:23 compute-0 podman[361710]: 2025-12-03 18:25:23.586028108 +0000 UTC m=+0.062607144 container died c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 18:25:23 compute-0 systemd[1]: c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9-e11f1c07c1b34ad.timer: Deactivated successfully.
Dec  3 18:25:23 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9.
Dec  3 18:25:23 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9-userdata-shm.mount: Deactivated successfully.
Dec  3 18:25:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b1e34e1b868879575e6aeb083082ec0191d0c1d4c41da58bc9937babc797e5e-merged.mount: Deactivated successfully.
Dec  3 18:25:23 compute-0 podman[361710]: 2025-12-03 18:25:23.656606875 +0000 UTC m=+0.133185911 container cleanup c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  3 18:25:23 compute-0 podman[361710]: node_exporter
Dec  3 18:25:23 compute-0 systemd[1]: edpm_node_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Dec  3 18:25:23 compute-0 systemd[1]: libpod-conmon-c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9.scope: Deactivated successfully.
Dec  3 18:25:23 compute-0 podman[361738]: node_exporter
Dec  3 18:25:23 compute-0 systemd[1]: edpm_node_exporter.service: Failed with result 'exit-code'.
Dec  3 18:25:23 compute-0 systemd[1]: Stopped node_exporter container.
Dec  3 18:25:23 compute-0 systemd[1]: Starting node_exporter container...
Dec  3 18:25:23 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:25:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b1e34e1b868879575e6aeb083082ec0191d0c1d4c41da58bc9937babc797e5e/merged/etc/node_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec  3 18:25:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b1e34e1b868879575e6aeb083082ec0191d0c1d4c41da58bc9937babc797e5e/merged/etc/node_exporter/node_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  3 18:25:23 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9.
Dec  3 18:25:23 compute-0 podman[361751]: 2025-12-03 18:25:23.912244384 +0000 UTC m=+0.141322021 container init c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 18:25:23 compute-0 node_exporter[361766]: ts=2025-12-03T18:25:23.928Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Dec  3 18:25:23 compute-0 node_exporter[361766]: ts=2025-12-03T18:25:23.928Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Dec  3 18:25:23 compute-0 node_exporter[361766]: ts=2025-12-03T18:25:23.929Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Dec  3 18:25:23 compute-0 node_exporter[361766]: ts=2025-12-03T18:25:23.929Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Dec  3 18:25:23 compute-0 node_exporter[361766]: ts=2025-12-03T18:25:23.929Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
Dec  3 18:25:23 compute-0 node_exporter[361766]: ts=2025-12-03T18:25:23.929Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Dec  3 18:25:23 compute-0 node_exporter[361766]: ts=2025-12-03T18:25:23.930Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Dec  3 18:25:23 compute-0 node_exporter[361766]: ts=2025-12-03T18:25:23.930Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Dec  3 18:25:23 compute-0 node_exporter[361766]: ts=2025-12-03T18:25:23.930Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Dec  3 18:25:23 compute-0 node_exporter[361766]: ts=2025-12-03T18:25:23.930Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Dec  3 18:25:23 compute-0 node_exporter[361766]: ts=2025-12-03T18:25:23.930Z caller=node_exporter.go:117 level=info collector=arp
Dec  3 18:25:23 compute-0 node_exporter[361766]: ts=2025-12-03T18:25:23.930Z caller=node_exporter.go:117 level=info collector=bcache
Dec  3 18:25:23 compute-0 node_exporter[361766]: ts=2025-12-03T18:25:23.930Z caller=node_exporter.go:117 level=info collector=bonding
Dec  3 18:25:23 compute-0 node_exporter[361766]: ts=2025-12-03T18:25:23.931Z caller=node_exporter.go:117 level=info collector=btrfs
Dec  3 18:25:23 compute-0 node_exporter[361766]: ts=2025-12-03T18:25:23.931Z caller=node_exporter.go:117 level=info collector=conntrack
Dec  3 18:25:23 compute-0 node_exporter[361766]: ts=2025-12-03T18:25:23.931Z caller=node_exporter.go:117 level=info collector=cpu
Dec  3 18:25:23 compute-0 node_exporter[361766]: ts=2025-12-03T18:25:23.931Z caller=node_exporter.go:117 level=info collector=cpufreq
Dec  3 18:25:23 compute-0 node_exporter[361766]: ts=2025-12-03T18:25:23.931Z caller=node_exporter.go:117 level=info collector=diskstats
Dec  3 18:25:23 compute-0 node_exporter[361766]: ts=2025-12-03T18:25:23.931Z caller=node_exporter.go:117 level=info collector=edac
Dec  3 18:25:23 compute-0 node_exporter[361766]: ts=2025-12-03T18:25:23.931Z caller=node_exporter.go:117 level=info collector=fibrechannel
Dec  3 18:25:23 compute-0 node_exporter[361766]: ts=2025-12-03T18:25:23.931Z caller=node_exporter.go:117 level=info collector=filefd
Dec  3 18:25:23 compute-0 node_exporter[361766]: ts=2025-12-03T18:25:23.931Z caller=node_exporter.go:117 level=info collector=filesystem
Dec  3 18:25:23 compute-0 node_exporter[361766]: ts=2025-12-03T18:25:23.931Z caller=node_exporter.go:117 level=info collector=infiniband
Dec  3 18:25:23 compute-0 node_exporter[361766]: ts=2025-12-03T18:25:23.931Z caller=node_exporter.go:117 level=info collector=ipvs
Dec  3 18:25:23 compute-0 node_exporter[361766]: ts=2025-12-03T18:25:23.931Z caller=node_exporter.go:117 level=info collector=loadavg
Dec  3 18:25:23 compute-0 node_exporter[361766]: ts=2025-12-03T18:25:23.931Z caller=node_exporter.go:117 level=info collector=mdadm
Dec  3 18:25:23 compute-0 node_exporter[361766]: ts=2025-12-03T18:25:23.931Z caller=node_exporter.go:117 level=info collector=meminfo
Dec  3 18:25:23 compute-0 node_exporter[361766]: ts=2025-12-03T18:25:23.931Z caller=node_exporter.go:117 level=info collector=netclass
Dec  3 18:25:23 compute-0 node_exporter[361766]: ts=2025-12-03T18:25:23.931Z caller=node_exporter.go:117 level=info collector=netdev
Dec  3 18:25:23 compute-0 node_exporter[361766]: ts=2025-12-03T18:25:23.931Z caller=node_exporter.go:117 level=info collector=netstat
Dec  3 18:25:23 compute-0 node_exporter[361766]: ts=2025-12-03T18:25:23.931Z caller=node_exporter.go:117 level=info collector=nfs
Dec  3 18:25:23 compute-0 node_exporter[361766]: ts=2025-12-03T18:25:23.931Z caller=node_exporter.go:117 level=info collector=nfsd
Dec  3 18:25:23 compute-0 node_exporter[361766]: ts=2025-12-03T18:25:23.931Z caller=node_exporter.go:117 level=info collector=nvme
Dec  3 18:25:23 compute-0 node_exporter[361766]: ts=2025-12-03T18:25:23.931Z caller=node_exporter.go:117 level=info collector=schedstat
Dec  3 18:25:23 compute-0 node_exporter[361766]: ts=2025-12-03T18:25:23.931Z caller=node_exporter.go:117 level=info collector=sockstat
Dec  3 18:25:23 compute-0 node_exporter[361766]: ts=2025-12-03T18:25:23.931Z caller=node_exporter.go:117 level=info collector=softnet
Dec  3 18:25:23 compute-0 node_exporter[361766]: ts=2025-12-03T18:25:23.931Z caller=node_exporter.go:117 level=info collector=systemd
Dec  3 18:25:23 compute-0 node_exporter[361766]: ts=2025-12-03T18:25:23.931Z caller=node_exporter.go:117 level=info collector=tapestats
Dec  3 18:25:23 compute-0 node_exporter[361766]: ts=2025-12-03T18:25:23.931Z caller=node_exporter.go:117 level=info collector=udp_queues
Dec  3 18:25:23 compute-0 node_exporter[361766]: ts=2025-12-03T18:25:23.931Z caller=node_exporter.go:117 level=info collector=vmstat
Dec  3 18:25:23 compute-0 node_exporter[361766]: ts=2025-12-03T18:25:23.931Z caller=node_exporter.go:117 level=info collector=xfs
Dec  3 18:25:23 compute-0 node_exporter[361766]: ts=2025-12-03T18:25:23.931Z caller=node_exporter.go:117 level=info collector=zfs
Dec  3 18:25:23 compute-0 node_exporter[361766]: ts=2025-12-03T18:25:23.931Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Dec  3 18:25:23 compute-0 node_exporter[361766]: ts=2025-12-03T18:25:23.932Z caller=tls_config.go:268 level=info msg="TLS is enabled." http2=true address=[::]:9100
Dec  3 18:25:23 compute-0 podman[361751]: 2025-12-03 18:25:23.942816282 +0000 UTC m=+0.171893939 container start c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  3 18:25:23 compute-0 podman[361751]: node_exporter
Dec  3 18:25:23 compute-0 systemd[1]: Started node_exporter container.
Dec  3 18:25:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 18:25:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:25:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 18:25:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:25:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:25:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:25:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:25:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:25:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:25:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:25:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:25:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:25:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 18:25:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:25:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:25:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:25:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 18:25:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:25:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 18:25:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:25:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:25:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:25:23 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 18:25:24 compute-0 podman[361776]: 2025-12-03 18:25:24.047292209 +0000 UTC m=+0.084726304 container health_status 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  3 18:25:24 compute-0 podman[361775]: 2025-12-03 18:25:24.049377501 +0000 UTC m=+0.095639863 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  3 18:25:24 compute-0 podman[361819]: 2025-12-03 18:25:24.149367968 +0000 UTC m=+0.103427093 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd)
Dec  3 18:25:24 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v777: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:25:24 compute-0 python3.9[361990]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/podman_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:25:25 compute-0 python3.9[362068]: ansible-ansible.legacy.file Invoked with group=zuul mode=0700 owner=zuul setype=container_file_t dest=/var/lib/openstack/healthchecks/podman_exporter/ _original_basename=healthcheck recurse=False state=file path=/var/lib/openstack/healthchecks/podman_exporter/ force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:25:26 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v778: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:25:26 compute-0 python3.9[362220]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=podman_exporter.json debug=False
Dec  3 18:25:27 compute-0 python3.9[362372]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  3 18:25:27 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:25:28 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v779: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:25:28 compute-0 podman[362496]: 2025-12-03 18:25:28.593555334 +0000 UTC m=+0.103682079 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  3 18:25:28 compute-0 python3[362542]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=podman_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Dec  3 18:25:29 compute-0 python3[362542]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012     {#012          "Id": "e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815",#012          "Digest": "sha256:7b7f37816f4a78244e32f90a517fdec0c458a6d3cd132212bb6bc16a9dc4fade",#012          "RepoTags": [#012               "quay.io/navidys/prometheus-podman-exporter:v1.10.1"#012          ],#012          "RepoDigests": [#012               "quay.io/navidys/prometheus-podman-exporter@sha256:7b7f37816f4a78244e32f90a517fdec0c458a6d3cd132212bb6bc16a9dc4fade",#012               "quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd"#012          ],#012          "Parent": "",#012          "Comment": "",#012          "Created": "2024-03-17T01:45:00.251170784Z",#012          "Config": {#012               "User": "nobody",#012               "ExposedPorts": {#012                    "9882/tcp": {}#012               },#012               "Env": [#012                    "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"#012               ],#012               "Entrypoint": [#012                    "/bin/podman_exporter"#012               ],#012               "Labels": {#012                    "maintainer": "Navid Yaghoobi <navidys@fedoraproject.org>"#012               }#012          },#012          "Version": "",#012          "Author": "The Prometheus Authors <prometheus-developers@googlegroups.com>",#012          "Architecture": "amd64",#012          "Os": "linux",#012          "Size": 33863535,#012          "VirtualSize": 33863535,#012          "GraphDriver": {#012               "Name": "overlay",#012               "Data": {#012                    "LowerDir": "/var/lib/containers/storage/overlay/b4f761d90eeb5a4c1ea51e856783cf8398e02a6caf306b90498250a43e5bbae1/diff:/var/lib/containers/storage/overlay/1e604deea57dbda554a168861cff1238f93b8c6c69c863c43aed37d9d99c5fed/diff",#012                    "UpperDir": "/var/lib/containers/storage/overlay/e1fac4507a16e359f79966290a44e975bb0ed717e8b6cc0e34b61e8c96e0a1a3/diff",#012                    "WorkDir": "/var/lib/containers/storage/overlay/e1fac4507a16e359f79966290a44e975bb0ed717e8b6cc0e34b61e8c96e0a1a3/work"#012               }#012          },#012          "RootFS": {#012               "Type": "layers",#012               "Layers": [#012                    "sha256:1e604deea57dbda554a168861cff1238f93b8c6c69c863c43aed37d9d99c5fed",#012                    "sha256:6b83872188a9e8912bee1d43add5e9bc518601b02a14a364c0da43b0d59acf33",#012                    "sha256:7a73cdcd46b4e3c3a632bae42ad152935f204b50dd02f0a46070f81446516318"#012               ]#012          },#012          "Labels": {#012               "maintainer": "Navid Yaghoobi <navidys@fedoraproject.org>"#012          },#012          "Annotations": {},#012          "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012          "User": "nobody",#012          "History": [#012               {#012                    "created": "2023-12-05T20:23:06.467739954Z",#012                    "created_by": "/bin/sh -c #(nop) ADD file:ee9bb8755ccbdd689b434d9b4ac7518e972699604ecda33e4ddc2a15d2831443 in / "#012               },#012               {#012                    "created": "2023-12-05T20:23:06.550971969Z",#012                    "created_by": "/bin/sh -c #(nop)  CMD [\"sh\"]",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2023-12-15T10:54:58.99835989Z",#012                    "created_by": "MAINTAINER The Prometheus Authors <prometheus-developers@googlegroups.com>",#012                    "comment": "buildkit.dockerfile.v0",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2023-12-15T10:54:58.99835989Z",#012                    "created_by": "COPY /rootfs / # buildkit",#012                    "comment": "buildkit.dockerfile.v0"#012               },#012               {#012                    "created": "2024-03-17T01:45:00.251170784Z",#012                    "created_by": "LABEL maintainer=Navid Yaghoobi <navidys@fedoraproject.org>",#012                    "comment": "buildkit.dockerfile.v0",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2024-03-17T01:45:00.251170784Z",#012                    "created_by": "ARG TARGETPLATFORM",#012                    "comment": "buildkit.dockerfile.v0",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2024-03-17T01:45:00.251170784Z",#012                    "created_by": "ARG TARGETOS",#012                    "comment": "buildkit.dockerfile.v0",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2024-03-17T01:45:00.251170784Z",#012                    "created_by": "ARG TARGETARCH",#012                    "comment": "buildkit.dockerfile.v0",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2024-03-17T01:45:00.251170784Z",#012                    "created_by": "COPY ./bin/remote/prometheus-podman-exporter-amd64 /bin/podman_exporter # buildkit",#012                    "comment": "buildkit.dockerfile.v0"#012               },#012               {#012                    "created": "2024-03-17T01:45:00.251170784Z",#012                    "created_by": "EXPOSE map[9882/tcp:{}]",#012                    "comment": "buildkit.dockerfile.v0",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2024-03-17T01:45:00.251170784Z",#012                    "created_by": "USER nobody",#012                    "comment": "buildkit.dockerfile.v0",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2024-03-17T01:45:00.251170784Z",#012                    "created_by": "ENTRYPOINT [\"/bin/podman_exporter\"]",#012                    "comment": "buildkit.dockerfile.v0",#012                    "empty_layer": true#012               }#012          ],#012          "NamesHistory": [#012               "quay.io/navidys/prometheus-podman-exporter:v1.10.1"#012          ]#012     }#012]#012: quay.io/navidys/prometheus-podman-exporter:v1.10.1
Dec  3 18:25:29 compute-0 podman[158200]: @ - - [03/Dec/2025:17:57:25 +0000] "GET /v4.9.3/libpod/events?filters=%7B%7D&since=&stream=true&until= HTTP/1.1" 200 3024199 "" "Go-http-client/1.1"
Dec  3 18:25:29 compute-0 systemd[1]: libpod-6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658.scope: Deactivated successfully.
Dec  3 18:25:29 compute-0 systemd[1]: libpod-6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658.scope: Consumed 3.468s CPU time.
Dec  3 18:25:29 compute-0 podman[362586]: 2025-12-03 18:25:29.365654706 +0000 UTC m=+0.069486393 container died 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 18:25:29 compute-0 systemd[1]: 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658-4239d19ae82dfad8.timer: Deactivated successfully.
Dec  3 18:25:29 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658.
Dec  3 18:25:29 compute-0 systemd[1]: 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658-4239d19ae82dfad8.service: Failed to open /run/systemd/transient/6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658-4239d19ae82dfad8.service: No such file or directory
Dec  3 18:25:29 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658-userdata-shm.mount: Deactivated successfully.
Dec  3 18:25:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-131c842645800969eb0853465c1a78b211e06c4c36238e3b540cce4379df0509-merged.mount: Deactivated successfully.
Dec  3 18:25:29 compute-0 podman[362586]: 2025-12-03 18:25:29.442206259 +0000 UTC m=+0.146037916 container cleanup 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  3 18:25:29 compute-0 python3[362542]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman stop podman_exporter
Dec  3 18:25:29 compute-0 systemd[1]: edpm_podman_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Dec  3 18:25:29 compute-0 systemd[1]: 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658-4239d19ae82dfad8.timer: Failed to open /run/systemd/transient/6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658-4239d19ae82dfad8.timer: No such file or directory
Dec  3 18:25:29 compute-0 systemd[1]: 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658-4239d19ae82dfad8.service: Failed to open /run/systemd/transient/6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658-4239d19ae82dfad8.service: No such file or directory
Dec  3 18:25:29 compute-0 podman[362614]: 2025-12-03 18:25:29.56884753 +0000 UTC m=+0.096328490 container remove 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 18:25:29 compute-0 podman[362615]: Error: no container with ID 6e1c01fe8e4aba399d56d7e2514598cf742378e709ab7dbfa3e7503a56b26658 found in database: no such container
Dec  3 18:25:29 compute-0 python3[362542]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force podman_exporter
Dec  3 18:25:29 compute-0 systemd[1]: edpm_podman_exporter.service: Control process exited, code=exited, status=125/n/a
Dec  3 18:25:29 compute-0 systemd[1]: edpm_podman_exporter.service: Failed with result 'exit-code'.
Dec  3 18:25:29 compute-0 podman[362638]: 2025-12-03 18:25:29.712046245 +0000 UTC m=+0.098995825 container create dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  3 18:25:29 compute-0 podman[362638]: 2025-12-03 18:25:29.663888716 +0000 UTC m=+0.050838356 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Dec  3 18:25:29 compute-0 python3[362542]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name podman_exporter --conmon-pidfile /run/podman_exporter.pid --env OS_ENDPOINT_TYPE=internal --env CONTAINER_HOST=unix:///run/podman/podman.sock --healthcheck-command /openstack/healthcheck podman_exporter --label config_id=edpm --label container_name=podman_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9882:9882 --user root --volume /var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z --volume /run/podman/podman.sock:/run/podman/podman.sock:rw,z --volume /var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z quay.io/navidys/prometheus-podman-exporter:v1.10.1 --web.config.file=/etc/podman_exporter/podman_exporter.yaml
Dec  3 18:25:29 compute-0 systemd[1]: edpm_podman_exporter.service: Scheduled restart job, restart counter is at 1.
Dec  3 18:25:29 compute-0 systemd[1]: Stopped podman_exporter container.
Dec  3 18:25:29 compute-0 systemd[1]: Starting podman_exporter container...
Dec  3 18:25:29 compute-0 systemd[1]: Started libpod-conmon-dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29.scope.
Dec  3 18:25:29 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:25:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36d3fec2834b329168b24cd28024aa134f5a313a548547c7d622b7c01838fe35/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec  3 18:25:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36d3fec2834b329168b24cd28024aa134f5a313a548547c7d622b7c01838fe35/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  3 18:25:29 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29.
Dec  3 18:25:29 compute-0 podman[362650]: 2025-12-03 18:25:29.982972128 +0000 UTC m=+0.232261727 container init dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 18:25:30 compute-0 podman_exporter[362674]: ts=2025-12-03T18:25:30.026Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Dec  3 18:25:30 compute-0 podman_exporter[362674]: ts=2025-12-03T18:25:30.026Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Dec  3 18:25:30 compute-0 podman_exporter[362674]: ts=2025-12-03T18:25:30.026Z caller=handler.go:94 level=info msg="enabled collectors"
Dec  3 18:25:30 compute-0 podman_exporter[362674]: ts=2025-12-03T18:25:30.026Z caller=handler.go:105 level=info collector=container
Dec  3 18:25:30 compute-0 podman[362650]: 2025-12-03 18:25:30.030356777 +0000 UTC m=+0.279646306 container start dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  3 18:25:30 compute-0 podman[158200]: @ - - [03/Dec/2025:18:25:30 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Dec  3 18:25:30 compute-0 podman[158200]: time="2025-12-03T18:25:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:25:30 compute-0 python3[362542]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman start podman_exporter
Dec  3 18:25:30 compute-0 podman[362660]: podman_exporter
Dec  3 18:25:30 compute-0 systemd[1]: Started podman_exporter container.
Dec  3 18:25:30 compute-0 podman[158200]: @ - - [03/Dec/2025:18:25:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 43261 "" "Go-http-client/1.1"
Dec  3 18:25:30 compute-0 podman_exporter[362674]: ts=2025-12-03T18:25:30.147Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Dec  3 18:25:30 compute-0 podman_exporter[362674]: ts=2025-12-03T18:25:30.149Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Dec  3 18:25:30 compute-0 podman_exporter[362674]: ts=2025-12-03T18:25:30.149Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
Dec  3 18:25:30 compute-0 podman[362686]: 2025-12-03 18:25:30.152846296 +0000 UTC m=+0.103481814 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=starting, health_failing_streak=1, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 18:25:30 compute-0 systemd[1]: dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29-22a3df70db6549b3.service: Main process exited, code=exited, status=1/FAILURE
Dec  3 18:25:30 compute-0 systemd[1]: dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29-22a3df70db6549b3.service: Failed with result 'exit-code'.
Dec  3 18:25:30 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v780: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:25:31 compute-0 python3.9[362884]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 18:25:31 compute-0 openstack_network_exporter[160319]: ERROR   18:25:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:25:31 compute-0 openstack_network_exporter[160319]: ERROR   18:25:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:25:31 compute-0 openstack_network_exporter[160319]: ERROR   18:25:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:25:31 compute-0 openstack_network_exporter[160319]: ERROR   18:25:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:25:31 compute-0 openstack_network_exporter[160319]: 
Dec  3 18:25:31 compute-0 openstack_network_exporter[160319]: ERROR   18:25:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:25:31 compute-0 openstack_network_exporter[160319]: 
Dec  3 18:25:32 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v781: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:25:32 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:25:32 compute-0 python3.9[363038]: ansible-file Invoked with path=/etc/systemd/system/edpm_podman_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:25:33 compute-0 python3.9[363189]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764786333.009285-640-10066168852807/source dest=/etc/systemd/system/edpm_podman_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:25:34 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v782: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:25:35 compute-0 python3.9[363265]: ansible-systemd Invoked with state=started name=edpm_podman_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 18:25:36 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v783: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:25:36 compute-0 python3.9[363419]: ansible-ansible.builtin.systemd Invoked with name=edpm_podman_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  3 18:25:36 compute-0 systemd[1]: Stopping podman_exporter container...
Dec  3 18:25:36 compute-0 podman[158200]: @ - - [03/Dec/2025:18:25:30 +0000] "GET /v4.9.3/libpod/events?filters=%7B%7D&since=&stream=true&until= HTTP/1.1" 200 1449 "" "Go-http-client/1.1"
Dec  3 18:25:36 compute-0 systemd[1]: libpod-dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29.scope: Deactivated successfully.
Dec  3 18:25:36 compute-0 podman[363423]: 2025-12-03 18:25:36.940576372 +0000 UTC m=+0.089621765 container died dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  3 18:25:36 compute-0 systemd[1]: dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29-22a3df70db6549b3.timer: Deactivated successfully.
Dec  3 18:25:36 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29.
Dec  3 18:25:37 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29-userdata-shm.mount: Deactivated successfully.
Dec  3 18:25:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-36d3fec2834b329168b24cd28024aa134f5a313a548547c7d622b7c01838fe35-merged.mount: Deactivated successfully.
Dec  3 18:25:37 compute-0 podman[363423]: 2025-12-03 18:25:37.02588372 +0000 UTC m=+0.174929123 container cleanup dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  3 18:25:37 compute-0 podman[363423]: podman_exporter
Dec  3 18:25:37 compute-0 systemd[1]: edpm_podman_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Dec  3 18:25:37 compute-0 systemd[1]: libpod-conmon-dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29.scope: Deactivated successfully.
Dec  3 18:25:37 compute-0 podman[363450]: podman_exporter
Dec  3 18:25:37 compute-0 systemd[1]: edpm_podman_exporter.service: Failed with result 'exit-code'.
Dec  3 18:25:37 compute-0 systemd[1]: Stopped podman_exporter container.
Dec  3 18:25:37 compute-0 systemd[1]: Starting podman_exporter container...
Dec  3 18:25:37 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:25:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36d3fec2834b329168b24cd28024aa134f5a313a548547c7d622b7c01838fe35/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec  3 18:25:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36d3fec2834b329168b24cd28024aa134f5a313a548547c7d622b7c01838fe35/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  3 18:25:37 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29.
Dec  3 18:25:37 compute-0 podman[363461]: 2025-12-03 18:25:37.360721767 +0000 UTC m=+0.187778058 container init dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 18:25:37 compute-0 podman_exporter[363475]: ts=2025-12-03T18:25:37.392Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Dec  3 18:25:37 compute-0 podman_exporter[363475]: ts=2025-12-03T18:25:37.392Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Dec  3 18:25:37 compute-0 podman_exporter[363475]: ts=2025-12-03T18:25:37.393Z caller=handler.go:94 level=info msg="enabled collectors"
Dec  3 18:25:37 compute-0 podman_exporter[363475]: ts=2025-12-03T18:25:37.393Z caller=handler.go:105 level=info collector=container
Dec  3 18:25:37 compute-0 podman[363461]: 2025-12-03 18:25:37.39557859 +0000 UTC m=+0.222634881 container start dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  3 18:25:37 compute-0 podman[158200]: @ - - [03/Dec/2025:18:25:37 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Dec  3 18:25:37 compute-0 podman[158200]: time="2025-12-03T18:25:37Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:25:37 compute-0 podman[363461]: podman_exporter
Dec  3 18:25:37 compute-0 systemd[1]: Started podman_exporter container.
Dec  3 18:25:37 compute-0 podman[158200]: @ - - [03/Dec/2025:18:25:37 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 43259 "" "Go-http-client/1.1"
Dec  3 18:25:37 compute-0 podman_exporter[363475]: ts=2025-12-03T18:25:37.467Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Dec  3 18:25:37 compute-0 podman_exporter[363475]: ts=2025-12-03T18:25:37.467Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Dec  3 18:25:37 compute-0 podman_exporter[363475]: ts=2025-12-03T18:25:37.468Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
Dec  3 18:25:37 compute-0 podman[363485]: 2025-12-03 18:25:37.538914329 +0000 UTC m=+0.118527793 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 18:25:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:25:38 compute-0 python3.9[363662]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/openstack_network_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 18:25:38 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v784: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:25:39 compute-0 python3.9[363740]: ansible-ansible.legacy.file Invoked with group=zuul mode=0700 owner=zuul setype=container_file_t dest=/var/lib/openstack/healthchecks/openstack_network_exporter/ _original_basename=healthcheck recurse=False state=file path=/var/lib/openstack/healthchecks/openstack_network_exporter/ force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 18:25:40 compute-0 python3.9[363892]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=openstack_network_exporter.json debug=False
Dec  3 18:25:40 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v785: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:25:40 compute-0 podman[363993]: 2025-12-03 18:25:40.939331422 +0000 UTC m=+0.098064221 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3)
Dec  3 18:25:40 compute-0 podman[364002]: 2025-12-03 18:25:40.962013347 +0000 UTC m=+0.109953703 container health_status ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, name=ubi9, distribution-scope=public, build-date=2024-09-18T21:23:30, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, release-0.7.12=, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.component=ubi9-container)
Dec  3 18:25:40 compute-0 podman[363999]: 2025-12-03 18:25:40.963920944 +0000 UTC m=+0.126226931 container health_status 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, vcs-type=git, io.buildah.version=1.33.7, version=9.6, distribution-scope=public, name=ubi9-minimal, architecture=x86_64, vendor=Red Hat, Inc., release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers)
Dec  3 18:25:40 compute-0 podman[364000]: 2025-12-03 18:25:40.983236197 +0000 UTC m=+0.125400131 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec  3 18:25:41 compute-0 python3.9[364119]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  3 18:25:42 compute-0 podman[364243]: 2025-12-03 18:25:42.266655646 +0000 UTC m=+0.081852656 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=2, health_log=, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4)
Dec  3 18:25:42 compute-0 systemd[1]: ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad-774d997bb9b7e5b.service: Main process exited, code=exited, status=1/FAILURE
Dec  3 18:25:42 compute-0 systemd[1]: ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad-774d997bb9b7e5b.service: Failed with result 'exit-code'.
Dec  3 18:25:42 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v786: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:25:42 compute-0 python3[364288]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=openstack_network_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Dec  3 18:25:42 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:25:42 compute-0 python3[364288]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012     {#012          "Id": "186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1",#012          "Digest": "sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7",#012          "RepoTags": [#012               "quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified"#012          ],#012          "RepoDigests": [#012               "quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7"#012          ],#012          "Parent": "",#012          "Comment": "",#012          "Created": "2025-08-26T15:52:54.446618393Z",#012          "Config": {#012               "ExposedPorts": {#012                    "1981/tcp": {}#012               },#012               "Env": [#012                    "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012                    "container=oci"#012               ],#012               "Cmd": [#012                    "/app/openstack-network-exporter"#012               ],#012               "WorkingDir": "/",#012               "Labels": {#012                    "architecture": "x86_64",#012                    "build-date": "2025-08-20T13:12:41",#012                    "com.redhat.component": "ubi9-minimal-container",#012                    "com.redhat.license_terms": "https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI",#012                    "description": "The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.",#012                    "distribution-scope": "public",#012                    "io.buildah.version": "1.33.7",#012                    "io.k8s.description": "The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.",#012                    "io.k8s.display-name": "Red Hat Universal Base Image 9 Minimal",#012                    "io.openshift.expose-services": "",#012                    "io.openshift.tags": "minimal rhel9",#012                    "maintainer": "Red Hat, Inc.",#012                    "name": "ubi9-minimal",#012                    "release": "1755695350",#012                    "summary": "Provides the latest release of the minimal Red Hat Universal Base Image 9.",#012                    "url": "https://catalog.redhat.com/en/search?searchType=containers",#012                    "vcs-ref": "f4b088292653bbf5ca8188a5e59ffd06a8671d4b",#012                    "vcs-type": "git",#012                    "vendor": "Red Hat, Inc.",#012                    "version": "9.6"#012               }#012          },#012          "Version": "",#012          "Author": "Red Hat",#012          "Architecture": "amd64",#012          "Os": "linux",#012          "Size": 142088877,#012          "VirtualSize": 142088877,#012          "GraphDriver": {#012               "Name": "overlay",#012               "Data": {#012                    "LowerDir": "/var/lib/containers/storage/overlay/157961e3a1fe369d02893b19044a0e08e15689974ef810b235cb5ec194c7142c/diff:/var/lib/containers/storage/overlay/778d8c610941586099cac6c507cad2d1156b71b2bb54c42cebedf8808c68edb9/diff",#012                    "UpperDir": "/var/lib/containers/storage/overlay/cd505d6f54e550fae708d1680b6b8d44753cf72fac8d36345974b92245bc660c/diff",#012                    "WorkDir": "/var/lib/containers/storage/overlay/cd505d6f54e550fae708d1680b6b8d44753cf72fac8d36345974b92245bc660c/work"#012               }#012          },#012          "RootFS": {#012               "Type": "layers",#012               "Layers": [#012                    "sha256:778d8c610941586099cac6c507cad2d1156b71b2bb54c42cebedf8808c68edb9",#012                    "sha256:60984b2898b5b4ad1680d36433001b7e2bebb1073775d06b4c2ff80f985caccb",#012                    "sha256:866ed9f0f685cc1d741f560227443a94926fc22494aa7808be751e7247cda421"#012               ]#012          },#012          "Labels": {#012               "architecture": "x86_64",#012               "build-date": "2025-08-20T13:12:41",#012               "com.redhat.component": "ubi9-minimal-container",#012               "com.redhat.license_terms": "https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI",#012               "description": "The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.",#012               "distribution-scope": "public",#012               "io.buildah.version": "1.33.7",#012               "io.k8s.description": "The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.",#012               "io.k8s.display-name": "Red Hat Universal Base Image 9 Minimal",#012               "io.openshift.expose-services": "",#012               "io.openshift.tags": "minimal rhel9",#012               "maintainer": "Red Hat, Inc.",#012               "name": "ubi9-minimal",#012               "release": "1755695350",#012               "summary": "Provides the latest release of the minimal Red Hat Universal Base Image 9.",#012               "url": "https://catalog.redhat.com/en/search?searchType=containers",#012               "vcs-ref": "f4b088292653bbf5ca8188a5e59ffd06a8671d4b",#012               "vcs-type": "git",#012               "vendor": "Red Hat, Inc.",#012               "version": "9.6"#012          },#012          "Annotations": {},#012          "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012          "User": "",#012          "History": [#012               {#012                    "created": "2025-08-20T13:14:24.836114247Z",#012                    "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"Red Hat, Inc.\"",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-08-20T13:14:24.907067406Z",#012                    "created_by": "/bin/sh -c #(nop) LABEL vendor=\"Red Hat, Inc.\"",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-08-20T13:14:24.953912498Z",#012                    "created_by": "/bin/sh -c #(nop) LABEL url=\"https://catalog.redhat.com/en/search?searchType=containers\"",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-08-20T13:14:24.99202543Z",#012                    "created_by": "/bin/sh -c #(nop) LABEL com.redhat.component=\"ubi9-minimal-container\"       name=\"ubi9-minimal\"       version=\"9.6\"       distribution-scope=\"public\"",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-08-20T13:14:25.033232759Z",#012                    "created_by": "/bin/sh -c #(nop) LABEL com.redhat.license_terms=\"https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI\"",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-08-20T13:14:25.116880439Z",#012                    "created_by": "/bin/sh -c #(nop) LABEL summary=\"Provides the latest release of the minimal Red Hat Universal Base Image 9.\"",#012                    "empty_layer": true#012               },#012               {#012      
Dec  3 18:25:42 compute-0 systemd[1]: libpod-9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5.scope: Deactivated successfully.
Dec  3 18:25:42 compute-0 systemd[1]: libpod-9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5.scope: Consumed 4.059s CPU time.
Dec  3 18:25:42 compute-0 podman[364336]: 2025-12-03 18:25:42.988219559 +0000 UTC m=+0.068737413 container died 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., distribution-scope=public, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., name=ubi9-minimal, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, release=1755695350, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, io.openshift.expose-services=)
Dec  3 18:25:43 compute-0 systemd[1]: 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5-65ac39abd31a63be.timer: Deactivated successfully.
Dec  3 18:25:43 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5.
Dec  3 18:25:43 compute-0 systemd[1]: 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5-65ac39abd31a63be.service: Failed to open /run/systemd/transient/9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5-65ac39abd31a63be.service: No such file or directory
Dec  3 18:25:43 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5-userdata-shm.mount: Deactivated successfully.
Dec  3 18:25:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-143da1461bff3dbbf0dde280109043d83cac235f94f368f80044c865e8914a7e-merged.mount: Deactivated successfully.
Dec  3 18:25:43 compute-0 podman[364336]: 2025-12-03 18:25:43.049409307 +0000 UTC m=+0.129927171 container cleanup 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, build-date=2025-08-20T13:12:41, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-type=git, name=ubi9-minimal, version=9.6, container_name=openstack_network_exporter, io.openshift.expose-services=, architecture=x86_64, config_id=edpm, io.buildah.version=1.33.7)
Dec  3 18:25:43 compute-0 python3[364288]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman stop openstack_network_exporter
Dec  3 18:25:43 compute-0 systemd[1]: edpm_openstack_network_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Dec  3 18:25:43 compute-0 systemd[1]: 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5-65ac39abd31a63be.timer: Failed to open /run/systemd/transient/9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5-65ac39abd31a63be.timer: No such file or directory
Dec  3 18:25:43 compute-0 systemd[1]: 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5-65ac39abd31a63be.service: Failed to open /run/systemd/transient/9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5-65ac39abd31a63be.service: No such file or directory
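
For each healthchecked container, podman asks systemd to run the check from a transient timer/service pair named after the container ID, with unit files under /run/systemd/transient. The "Failed to open ... No such file or directory" lines above appear to be a benign teardown race: the container and its transient units are already gone by the time systemd is asked about them again. A minimal sketch for spotting any such healthcheck timers still registered (the 64-hex-character prefix test is an assumption about podman's naming):

    import subprocess

    # List all timers systemd knows about, one per line, no tree/legend.
    out = subprocess.run(
        ["systemctl", "list-units", "--type=timer", "--all", "--no-legend", "--plain"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        if not line.strip():
            continue
        unit = line.split()[0]
        name = unit.removesuffix(".timer")
        # Podman healthcheck timers look like "<64-hex container id>-<rand>.timer".
        if len(name.split("-")[0]) == 64:
            print(unit)
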
Dec  3 18:25:43 compute-0 podman[364363]: 2025-12-03 18:25:43.164007363 +0000 UTC m=+0.078335569 container remove 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vcs-type=git, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, version=9.6, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec  3 18:25:43 compute-0 podman[364364]: Error: no container with ID 9189ea3bdee215942bfd52eb5f3a7c24ac2b0e9e213eac6b3294313c61e1eef5 found in database: no such container
Dec  3 18:25:43 compute-0 python3[364288]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force openstack_network_exporter
Dec  3 18:25:43 compute-0 systemd[1]: edpm_openstack_network_exporter.service: Control process exited, code=exited, status=125/n/a
Dec  3 18:25:43 compute-0 systemd[1]: edpm_openstack_network_exporter.service: Failed with result 'exit-code'.
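
Two different exit statuses show up in this failure: the main process left status=2 (an exit status from the container side), while the control process exited 125, the Podman/Docker CLI convention for "the tool itself failed" — consistent with the "no container with ID ... found in database" error at the `podman rm --force` step above. A small decoder, assuming the standard CLI convention:

    # Rough decoder for exit statuses systemd reports for podman-run services.
    # 125/126/127 follow the Docker/Podman CLI convention; anything else is
    # normally the exit status of the process inside the container.
    PODMAN_EXIT = {
        125: "podman itself failed (bad flags, missing container, ...)",
        126: "contained command found but could not be invoked",
        127: "contained command not found",
    }

    def describe(status: int) -> str:
        return PODMAN_EXIT.get(status, f"container process exited with {status}")

    print(describe(125))  # the control process above
    print(describe(2))    # the main process above
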
Dec  3 18:25:43 compute-0 podman[364385]: 2025-12-03 18:25:43.268931942 +0000 UTC m=+0.081588059 container create d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, container_name=openstack_network_exporter, distribution-scope=public, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, architecture=x86_64, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, release=1755695350, io.openshift.tags=minimal rhel9, config_id=edpm, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container)
Dec  3 18:25:43 compute-0 podman[364385]: 2025-12-03 18:25:43.219250075 +0000 UTC m=+0.031906252 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Dec  3 18:25:43 compute-0 python3[364288]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name openstack_network_exporter --conmon-pidfile /run/openstack_network_exporter.pid --env OS_ENDPOINT_TYPE=internal --env OPENSTACK_NETWORK_EXPORTER_YAML=/etc/openstack_network_exporter/openstack_network_exporter.yaml --healthcheck-command /openstack/healthcheck openstack-netwo --label config_id=edpm --label container_name=openstack_network_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9105:9105 --volume /var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z --volume /var/run/openvswitch:/run/openvswitch:rw,z --volume /var/lib/openvswitch/ovn:/run/ovn:rw,z --volume /proc:/host/proc:ro --volume /var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
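
The `podman create` line above is the flattened form of the config_data dict that edpm_ansible embeds in the container labels. A minimal sketch of how such a dict maps onto the argv (names and paths taken from the log; the mapping itself is an illustration, not the module's actual code, and only a subset of keys is shown):

    import shlex

    config = {
        "image": "quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified",
        "net": "host",
        "privileged": True,
        "ports": ["9105:9105"],
        "environment": {"OS_ENDPOINT_TYPE": "internal"},
        "volumes": ["/proc:/host/proc:ro"],
    }

    argv = ["podman", "create", "--name", "openstack_network_exporter"]
    for key, val in config["environment"].items():
        argv += ["--env", f"{key}={val}"]
    argv += ["--network", config["net"], f"--privileged={config['privileged']}"]
    for port in config["ports"]:
        argv += ["--publish", port]
    for vol in config["volumes"]:
        argv += ["--volume", vol]
    argv.append(config["image"])
    print(shlex.join(argv))  # matches the shape of the debug line above
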
Dec  3 18:25:43 compute-0 systemd[1]: edpm_openstack_network_exporter.service: Scheduled restart job, restart counter is at 1.
Dec  3 18:25:43 compute-0 systemd[1]: Stopped openstack_network_exporter container.
Dec  3 18:25:43 compute-0 systemd[1]: Starting openstack_network_exporter container...
Dec  3 18:25:43 compute-0 systemd[1]: Started libpod-conmon-d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c.scope.
Dec  3 18:25:43 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:25:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c8f0c06388d7561de605e366b2513db0ffb8da721223aa67c5a133b28368f99/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec  3 18:25:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c8f0c06388d7561de605e366b2513db0ffb8da721223aa67c5a133b28368f99/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  3 18:25:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c8f0c06388d7561de605e366b2513db0ffb8da721223aa67c5a133b28368f99/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
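
The three kernel lines above mean the backing XFS filesystem was created without the "bigtime" feature, so inode timestamps cap at 2038-01-19 (0x7fffffff); each bind-mount into the container re-triggers the warning. A quick check, assuming /var/lib/containers is the XFS mount point (adjust to the actual mount):

    import subprocess

    # xfs_info prints the filesystem geometry, including "bigtime=0|1".
    info = subprocess.run(
        ["xfs_info", "/var/lib/containers"],
        capture_output=True, text=True, check=True,
    ).stdout
    print("64-bit timestamps" if "bigtime=1" in info
          else "timestamps capped at 2038")
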
Dec  3 18:25:43 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c.
Dec  3 18:25:43 compute-0 podman[364397]: 2025-12-03 18:25:43.550882344 +0000 UTC m=+0.252219886 container init d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.33.7, architecture=x86_64, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, container_name=openstack_network_exporter, name=ubi9-minimal, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public)
Dec  3 18:25:43 compute-0 openstack_network_exporter[364422]: INFO    18:25:43 main.go:48: registering *bridge.Collector
Dec  3 18:25:43 compute-0 openstack_network_exporter[364422]: INFO    18:25:43 main.go:48: registering *coverage.Collector
Dec  3 18:25:43 compute-0 openstack_network_exporter[364422]: INFO    18:25:43 main.go:48: registering *datapath.Collector
Dec  3 18:25:43 compute-0 openstack_network_exporter[364422]: INFO    18:25:43 main.go:48: registering *iface.Collector
Dec  3 18:25:43 compute-0 openstack_network_exporter[364422]: INFO    18:25:43 main.go:48: registering *memory.Collector
Dec  3 18:25:43 compute-0 openstack_network_exporter[364422]: INFO    18:25:43 main.go:48: registering *ovnnorthd.Collector
Dec  3 18:25:43 compute-0 openstack_network_exporter[364422]: INFO    18:25:43 main.go:48: registering *ovn.Collector
Dec  3 18:25:43 compute-0 openstack_network_exporter[364422]: INFO    18:25:43 main.go:48: registering *ovsdbserver.Collector
Dec  3 18:25:43 compute-0 openstack_network_exporter[364422]: INFO    18:25:43 main.go:48: registering *pmd_perf.Collector
Dec  3 18:25:43 compute-0 openstack_network_exporter[364422]: INFO    18:25:43 main.go:48: registering *pmd_rxq.Collector
Dec  3 18:25:43 compute-0 openstack_network_exporter[364422]: INFO    18:25:43 main.go:48: registering *vswitch.Collector
Dec  3 18:25:43 compute-0 openstack_network_exporter[364422]: NOTICE  18:25:43 main.go:76: listening on https://:9105/metrics
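
Since the container runs with host networking, the TLS metrics endpoint is reachable from the host itself. A probe sketch, assuming the CA lives as ca.crt under the host-side cert directory mounted in the config_data above (the filename and the use of localhost are assumptions; hostname checking is disabled because the cert is unlikely to carry "localhost"):

    import ssl
    import urllib.request

    ctx = ssl.create_default_context(
        cafile="/var/lib/openstack/certs/telemetry/default/ca.crt")
    ctx.check_hostname = False  # verify the chain, not the name

    with urllib.request.urlopen("https://localhost:9105/metrics", context=ctx) as resp:
        body = resp.read().decode()
    # Print the first few metric descriptions as a smoke test.
    print([l for l in body.splitlines() if l.startswith("# HELP")][:5])
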
Dec  3 18:25:43 compute-0 podman[364397]: 2025-12-03 18:25:43.597676109 +0000 UTC m=+0.299013621 container start d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.7, architecture=x86_64, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, config_id=edpm, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec  3 18:25:43 compute-0 podman[364403]: openstack_network_exporter
Dec  3 18:25:43 compute-0 python3[364288]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman start openstack_network_exporter
Dec  3 18:25:43 compute-0 systemd[1]: Started openstack_network_exporter container.
Dec  3 18:25:43 compute-0 podman[364432]: 2025-12-03 18:25:43.714816027 +0000 UTC m=+0.107904923 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, release=1755695350, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, vcs-type=git)
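
The health_status=healthy event above is produced by the transient timer firing `podman healthcheck run`. The same check can be fired by hand; exit status 0 means healthy, 1 means unhealthy:

    import subprocess

    res = subprocess.run(
        ["podman", "healthcheck", "run", "openstack_network_exporter"])
    print("healthy" if res.returncode == 0
          else f"unhealthy (rc={res.returncode})")
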
Dec  3 18:25:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:25:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:25:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:25:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:25:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:25:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:25:44 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v787: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:25:44 compute-0 python3.9[364627]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 18:25:46 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v788: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:25:46 compute-0 python3.9[364781]: ansible-file Invoked with path=/etc/systemd/system/edpm_openstack_network_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:25:47 compute-0 python3.9[364932]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764786346.7715924-718-143110954845223/source dest=/etc/systemd/system/edpm_openstack_network_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:25:47 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:25:48 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v789: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:25:49 compute-0 python3.9[365008]: ansible-systemd Invoked with state=started name=edpm_openstack_network_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 18:25:50 compute-0 python3.9[365163]: ansible-ansible.builtin.systemd Invoked with name=edpm_openstack_network_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  3 18:25:50 compute-0 systemd[1]: Stopping openstack_network_exporter container...
Dec  3 18:25:50 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v790: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:25:51 compute-0 systemd[1]: libpod-d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c.scope: Deactivated successfully.
Dec  3 18:25:51 compute-0 podman[365168]: 2025-12-03 18:25:51.253036514 +0000 UTC m=+0.942223577 container died d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.openshift.expose-services=, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, config_id=edpm, distribution-scope=public, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vcs-type=git, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, name=ubi9-minimal, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41)
Dec  3 18:25:51 compute-0 systemd[1]: d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c-3c09ee92232e90ad.timer: Deactivated successfully.
Dec  3 18:25:51 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c.
Dec  3 18:25:51 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c-userdata-shm.mount: Deactivated successfully.
Dec  3 18:25:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c8f0c06388d7561de605e366b2513db0ffb8da721223aa67c5a133b28368f99-merged.mount: Deactivated successfully.
Dec  3 18:25:51 compute-0 podman[365168]: 2025-12-03 18:25:51.37787552 +0000 UTC m=+1.067062543 container cleanup d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1755695350, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, version=9.6, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc.)
Dec  3 18:25:51 compute-0 podman[365168]: openstack_network_exporter
Dec  3 18:25:51 compute-0 systemd[1]: edpm_openstack_network_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Dec  3 18:25:51 compute-0 systemd[1]: libpod-conmon-d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c.scope: Deactivated successfully.
Dec  3 18:25:51 compute-0 podman[365194]: openstack_network_exporter
Dec  3 18:25:51 compute-0 systemd[1]: edpm_openstack_network_exporter.service: Failed with result 'exit-code'.
Dec  3 18:25:51 compute-0 systemd[1]: Stopped openstack_network_exporter container.
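
This is the second stop/fail/restart cycle for the unit ("restart counter is at 1" was logged on the first). The bookkeeping systemd is applying here is visible through standard unit properties:

    import subprocess

    # NRestarts, Restart= policy and the last Result for the unit.
    out = subprocess.run(
        ["systemctl", "show", "edpm_openstack_network_exporter.service",
         "-p", "NRestarts", "-p", "Restart", "-p", "Result"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(out.strip())
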
Dec  3 18:25:51 compute-0 systemd[1]: Starting openstack_network_exporter container...
Dec  3 18:25:51 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:25:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c8f0c06388d7561de605e366b2513db0ffb8da721223aa67c5a133b28368f99/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec  3 18:25:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c8f0c06388d7561de605e366b2513db0ffb8da721223aa67c5a133b28368f99/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  3 18:25:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c8f0c06388d7561de605e366b2513db0ffb8da721223aa67c5a133b28368f99/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec  3 18:25:51 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c.
Dec  3 18:25:51 compute-0 podman[365207]: 2025-12-03 18:25:51.745734786 +0000 UTC m=+0.238422498 container init d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.buildah.version=1.33.7, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vendor=Red Hat, Inc.)
Dec  3 18:25:51 compute-0 openstack_network_exporter[365222]: INFO    18:25:51 main.go:48: registering *bridge.Collector
Dec  3 18:25:51 compute-0 openstack_network_exporter[365222]: INFO    18:25:51 main.go:48: registering *coverage.Collector
Dec  3 18:25:51 compute-0 openstack_network_exporter[365222]: INFO    18:25:51 main.go:48: registering *datapath.Collector
Dec  3 18:25:51 compute-0 openstack_network_exporter[365222]: INFO    18:25:51 main.go:48: registering *iface.Collector
Dec  3 18:25:51 compute-0 openstack_network_exporter[365222]: INFO    18:25:51 main.go:48: registering *memory.Collector
Dec  3 18:25:51 compute-0 openstack_network_exporter[365222]: INFO    18:25:51 main.go:48: registering *ovnnorthd.Collector
Dec  3 18:25:51 compute-0 openstack_network_exporter[365222]: INFO    18:25:51 main.go:48: registering *ovn.Collector
Dec  3 18:25:51 compute-0 openstack_network_exporter[365222]: INFO    18:25:51 main.go:48: registering *ovsdbserver.Collector
Dec  3 18:25:51 compute-0 openstack_network_exporter[365222]: INFO    18:25:51 main.go:48: registering *pmd_perf.Collector
Dec  3 18:25:51 compute-0 openstack_network_exporter[365222]: INFO    18:25:51 main.go:48: registering *pmd_rxq.Collector
Dec  3 18:25:51 compute-0 openstack_network_exporter[365222]: INFO    18:25:51 main.go:48: registering *vswitch.Collector
Dec  3 18:25:51 compute-0 openstack_network_exporter[365222]: NOTICE  18:25:51 main.go:76: listening on https://:9105/metrics
Dec  3 18:25:51 compute-0 podman[365207]: 2025-12-03 18:25:51.798280601 +0000 UTC m=+0.290968293 container start d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.openshift.expose-services=, vcs-type=git, release=1755695350, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc.)
Dec  3 18:25:51 compute-0 podman[365207]: openstack_network_exporter
Dec  3 18:25:51 compute-0 systemd[1]: Started openstack_network_exporter container.
Dec  3 18:25:51 compute-0 podman[365232]: 2025-12-03 18:25:51.940714709 +0000 UTC m=+0.123704359 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, version=9.6, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, release=1755695350, config_id=edpm, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., vcs-type=git, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc.)
Dec  3 18:25:52 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v791: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:25:52 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:25:52 compute-0 python3.9[365402]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
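
The find task above enumerates per-container healthcheck directories, non-recursively. A one-shot equivalent:

    import os

    # Top-level directories under the healthchecks root, as the ansible
    # find invocation above would return them.
    root = "/var/lib/openstack/healthchecks/"
    print(sorted(e.name for e in os.scandir(root) if e.is_dir()))
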
Dec  3 18:25:53 compute-0 nova_compute[348325]: 2025-12-03 18:25:53.786 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:25:53 compute-0 nova_compute[348325]: 2025-12-03 18:25:53.787 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:25:53 compute-0 nova_compute[348325]: 2025-12-03 18:25:53.810 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:25:53 compute-0 nova_compute[348325]: 2025-12-03 18:25:53.811 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  3 18:25:53 compute-0 nova_compute[348325]: 2025-12-03 18:25:53.811 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  3 18:25:53 compute-0 nova_compute[348325]: 2025-12-03 18:25:53.823 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  3 18:25:53 compute-0 nova_compute[348325]: 2025-12-03 18:25:53.824 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:25:53 compute-0 nova_compute[348325]: 2025-12-03 18:25:53.825 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:25:53 compute-0 nova_compute[348325]: 2025-12-03 18:25:53.825 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:25:53 compute-0 nova_compute[348325]: 2025-12-03 18:25:53.826 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:25:53 compute-0 nova_compute[348325]: 2025-12-03 18:25:53.826 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:25:53 compute-0 nova_compute[348325]: 2025-12-03 18:25:53.826 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:25:53 compute-0 nova_compute[348325]: 2025-12-03 18:25:53.827 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  3 18:25:53 compute-0 nova_compute[348325]: 2025-12-03 18:25:53.828 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:25:53 compute-0 nova_compute[348325]: 2025-12-03 18:25:53.857 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:25:53 compute-0 nova_compute[348325]: 2025-12-03 18:25:53.858 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:25:53 compute-0 nova_compute[348325]: 2025-12-03 18:25:53.858 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:25:53 compute-0 nova_compute[348325]: 2025-12-03 18:25:53.858 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  3 18:25:53 compute-0 nova_compute[348325]: 2025-12-03 18:25:53.858 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
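
The resource tracker shells out to `ceph df` exactly as logged above; the ceph-mon audit lines that follow are the server side of the same request. A minimal sketch running the same command and reading the cluster totals from the JSON (field names per standard `ceph df --format=json` output):

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout
    stats = json.loads(out)["stats"]
    print(f"{stats['total_avail_bytes'] / 2**30:.1f} GiB free of "
          f"{stats['total_bytes'] / 2**30:.1f} GiB")
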
Dec  3 18:25:54 compute-0 python3.9[365557]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Dec  3 18:25:54 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:25:54 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3623774291' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:25:54 compute-0 nova_compute[348325]: 2025-12-03 18:25:54.357 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 18:25:54 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v792: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:25:54 compute-0 nova_compute[348325]: 2025-12-03 18:25:54.863 348329 WARNING nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  3 18:25:54 compute-0 nova_compute[348325]: 2025-12-03 18:25:54.865 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4562MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  3 18:25:54 compute-0 nova_compute[348325]: 2025-12-03 18:25:54.865 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:25:54 compute-0 nova_compute[348325]: 2025-12-03 18:25:54.866 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:25:54 compute-0 podman[365659]: 2025-12-03 18:25:54.945720062 +0000 UTC m=+0.110363553 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 18:25:54 compute-0 nova_compute[348325]: 2025-12-03 18:25:54.952 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  3 18:25:54 compute-0 nova_compute[348325]: 2025-12-03 18:25:54.952 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  3 18:25:54 compute-0 podman[365655]: 2025-12-03 18:25:54.966946202 +0000 UTC m=+0.122082210 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd)
Dec  3 18:25:54 compute-0 nova_compute[348325]: 2025-12-03 18:25:54.969 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 18:25:55 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:25:55 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/622171759' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:25:55 compute-0 nova_compute[348325]: 2025-12-03 18:25:55.420 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 18:25:55 compute-0 nova_compute[348325]: 2025-12-03 18:25:55.430 348329 DEBUG nova.compute.provider_tree [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  3 18:25:55 compute-0 nova_compute[348325]: 2025-12-03 18:25:55.448 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  3 18:25:55 compute-0 nova_compute[348325]: 2025-12-03 18:25:55.450 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  3 18:25:55 compute-0 nova_compute[348325]: 2025-12-03 18:25:55.450 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.584s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
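The inventory nova reports to placement above implies the schedulable capacity directly: placement admits allocations up to (total - reserved) * allocation_ratio per resource class. Worked out for the values logged:

    # Values copied from the inventory data logged above.
    inventory = {
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'DISK_GB':   {'total': 59,   'reserved': 0,   'allocation_ratio': 0.9},
    }

    # Placement's capacity rule: (total - reserved) * allocation_ratio.
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(f'{rc}: {cap:g} schedulable')
    # -> MEMORY_MB: 7167, VCPU: 32, DISK_GB: 53.1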
Dec  3 18:25:55 compute-0 python3.9[365807]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  3 18:25:55 compute-0 systemd[1]: Started libpod-conmon-9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753.scope.
Dec  3 18:25:55 compute-0 podman[365810]: 2025-12-03 18:25:55.735527857 +0000 UTC m=+0.112497975 container exec 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec  3 18:25:55 compute-0 podman[365810]: 2025-12-03 18:25:55.768476954 +0000 UTC m=+0.145447052 container exec_died 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  3 18:25:55 compute-0 systemd[1]: libpod-conmon-9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753.scope: Deactivated successfully.
Dec  3 18:25:56 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v793: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:25:57 compute-0 python3.9[365993]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  3 18:25:57 compute-0 systemd[1]: Started libpod-conmon-9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753.scope.
Dec  3 18:25:57 compute-0 podman[365994]: 2025-12-03 18:25:57.173113239 +0000 UTC m=+0.130333351 container exec 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:25:57 compute-0 podman[365994]: 2025-12-03 18:25:57.206021226 +0000 UTC m=+0.163241358 container exec_died 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125)
Dec  3 18:25:57 compute-0 systemd[1]: libpod-conmon-9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753.scope: Deactivated successfully.
Dec  3 18:25:57 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:25:58 compute-0 python3.9[366176]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
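The sequence above is a recurring edpm_ansible pattern: probe the container's runtime uid and gid with podman_container_exec (id -u, then id -g), then chown the healthcheck mount on the host to match (0:0 for ovn_controller here, 42405:42405 for ceilometer_agent_compute below). A sketch of the same steps, assuming a podman binary on PATH:

    import os
    import subprocess

    def container_uid_gid(name):
        # The same probes the podman_container_exec tasks run above.
        uid = subprocess.run(['podman', 'exec', name, 'id', '-u'],
                             check=True, capture_output=True, text=True).stdout
        gid = subprocess.run(['podman', 'exec', name, 'id', '-g'],
                             check=True, capture_output=True, text=True).stdout
        return int(uid), int(gid)

    uid, gid = container_uid_gid('ovn_controller')
    path = '/var/lib/openstack/healthchecks/ovn_controller'
    # Equivalent of the ansible.builtin.file task: owner/group, mode=0700, recurse.
    for root, dirs, files in os.walk(path):
        for p in [root] + [os.path.join(root, f) for f in files]:
            os.chown(p, uid, gid)
            os.chmod(p, 0o700)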
Dec  3 18:25:58 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v794: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:25:58 compute-0 podman[366202]: 2025-12-03 18:25:58.977883211 +0000 UTC m=+0.129072591 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec  3 18:25:59 compute-0 python3.9[366348]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Dec  3 18:25:59 compute-0 podman[158200]: time="2025-12-03T18:25:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:25:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:25:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec  3 18:25:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:25:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8090 "" "Go-http-client/1.1"
Dec  3 18:26:00 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v795: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:26:01 compute-0 python3.9[366514]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  3 18:26:01 compute-0 openstack_network_exporter[365222]: ERROR   18:26:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:26:01 compute-0 openstack_network_exporter[365222]: ERROR   18:26:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:26:01 compute-0 openstack_network_exporter[365222]: ERROR   18:26:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:26:01 compute-0 openstack_network_exporter[365222]: ERROR   18:26:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:26:01 compute-0 openstack_network_exporter[365222]: ERROR   18:26:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
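The exporter errors above mean no ovn-northd or ovsdb-server control socket was found, which is expected on a node that runs only ovn-controller (ovn-northd lives on the control plane). A quick existence check; the socket paths are assumptions based on the volume mappings logged for this host and may differ elsewhere:

    import glob

    # Assumed control-socket locations (the ovn run dir is mapped from
    # /var/lib/openvswitch/ovn on this host); adjust per deployment.
    patterns = {
        'ovn-northd':   '/var/lib/openvswitch/ovn/ovn-northd.*.ctl',
        'ovsdb-server': '/var/run/openvswitch/ovsdb-server.*.ctl',
    }
    for daemon, pattern in patterns.items():
        hits = glob.glob(pattern)
        print(daemon, hits if hits else 'no control socket found')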
Dec  3 18:26:01 compute-0 systemd[1]: Started libpod-conmon-ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad.scope.
Dec  3 18:26:01 compute-0 podman[366515]: 2025-12-03 18:26:01.499924881 +0000 UTC m=+0.277280138 container exec ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, org.label-schema.vendor=CentOS)
Dec  3 18:26:01 compute-0 podman[366515]: 2025-12-03 18:26:01.534016937 +0000 UTC m=+0.311372114 container exec_died ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, config_id=edpm, org.label-schema.schema-version=1.0)
Dec  3 18:26:01 compute-0 systemd[1]: libpod-conmon-ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad.scope: Deactivated successfully.
Dec  3 18:26:02 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v796: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:26:02 compute-0 python3.9[366702]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  3 18:26:02 compute-0 systemd[1]: Started libpod-conmon-ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad.scope.
Dec  3 18:26:02 compute-0 podman[366704]: 2025-12-03 18:26:02.651503083 +0000 UTC m=+0.103339161 container exec ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_id=edpm, org.label-schema.build-date=20251125, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  3 18:26:02 compute-0 podman[366704]: 2025-12-03 18:26:02.684811168 +0000 UTC m=+0.136647266 container exec_died ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  3 18:26:02 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:26:02 compute-0 systemd[1]: libpod-conmon-ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad.scope: Deactivated successfully.
Dec  3 18:26:03 compute-0 python3.9[366883]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:26:04 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v797: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:26:04 compute-0 python3.9[367035]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Dec  3 18:26:05 compute-0 python3.9[367200]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  3 18:26:05 compute-0 systemd[1]: Started libpod-conmon-c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9.scope.
Dec  3 18:26:05 compute-0 podman[367201]: 2025-12-03 18:26:05.873489588 +0000 UTC m=+0.118674886 container exec c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  3 18:26:05 compute-0 podman[367201]: 2025-12-03 18:26:05.90663503 +0000 UTC m=+0.151820308 container exec_died c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  3 18:26:05 compute-0 systemd[1]: libpod-conmon-c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9.scope: Deactivated successfully.
Dec  3 18:26:06 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v798: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:26:06 compute-0 python3.9[367381]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  3 18:26:06 compute-0 systemd[1]: Started libpod-conmon-c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9.scope.
Dec  3 18:26:06 compute-0 podman[367382]: 2025-12-03 18:26:06.984961168 +0000 UTC m=+0.116816532 container exec c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  3 18:26:07 compute-0 podman[367382]: 2025-12-03 18:26:07.017700479 +0000 UTC m=+0.149555823 container exec_died c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 18:26:07 compute-0 systemd[1]: libpod-conmon-c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9.scope: Deactivated successfully.
Dec  3 18:26:07 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:26:07 compute-0 podman[367535]: 2025-12-03 18:26:07.895007306 +0000 UTC m=+0.088414585 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  3 18:26:08 compute-0 python3.9[367586]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:26:08 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v799: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:26:09 compute-0 python3.9[367738]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Dec  3 18:26:10 compute-0 python3.9[367901]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  3 18:26:10 compute-0 systemd[1]: Started libpod-conmon-dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29.scope.
Dec  3 18:26:10 compute-0 podman[367902]: 2025-12-03 18:26:10.410794793 +0000 UTC m=+0.099284352 container exec dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 18:26:10 compute-0 podman[367902]: 2025-12-03 18:26:10.442513259 +0000 UTC m=+0.131002818 container exec_died dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 18:26:10 compute-0 systemd[1]: libpod-conmon-dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29.scope: Deactivated successfully.
Dec  3 18:26:10 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v800: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 30 op/s
Dec  3 18:26:11 compute-0 podman[368058]: 2025-12-03 18:26:11.92984345 +0000 UTC m=+0.091639595 container health_status ffbd969f0751bc755a1dad4a32222854c61f778a5a375acedf022743237e3c6c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, io.openshift.tags=base rhel9, vcs-type=git, io.openshift.expose-services=, com.redhat.component=ubi9-container, config_id=edpm, vendor=Red Hat, Inc., release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, distribution-scope=public, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Dec  3 18:26:11 compute-0 podman[368056]: 2025-12-03 18:26:11.948386233 +0000 UTC m=+0.114194687 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 18:26:11 compute-0 podman[368057]: 2025-12-03 18:26:11.964823646 +0000 UTC m=+0.125106094 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 18:26:12 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v801: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 0 B/s wr, 49 op/s
Dec  3 18:26:12 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:26:12 compute-0 python3.9[368141]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  3 18:26:12 compute-0 systemd[1]: Started libpod-conmon-dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29.scope.
Dec  3 18:26:12 compute-0 podman[368145]: 2025-12-03 18:26:12.898703978 +0000 UTC m=+0.095297684 container exec dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  3 18:26:12 compute-0 podman[368151]: 2025-12-03 18:26:12.916075783 +0000 UTC m=+0.087374480 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, io.buildah.version=1.41.4, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  3 18:26:12 compute-0 podman[368145]: 2025-12-03 18:26:12.931330336 +0000 UTC m=+0.127924042 container exec_died dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 18:26:12 compute-0 systemd[1]: libpod-conmon-dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29.scope: Deactivated successfully.
Dec  3 18:26:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_18:26:13
Dec  3 18:26:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 18:26:13 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 18:26:13 compute-0 ceph-mgr[193091]: [balancer INFO root] pools ['backups', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.data', 'vms', 'volumes', 'images', '.rgw.root', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta']
Dec  3 18:26:13 compute-0 ceph-mgr[193091]: [balancer INFO root] prepared 0/10 changes
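The balancer lines above show the upmap optimizer running against a max misplaced ratio of 0.05 and preparing 0/10 changes, i.e. the pools are already balanced. The gating logic reduces to a ratio check; a sketch with hypothetical object counts:

    # Hypothetical counts; the 0.05 threshold is the max_misplaced logged above.
    max_misplaced = 0.05
    misplaced_objects, total_objects = 120, 10_000

    ratio = misplaced_objects / total_objects
    verdict = 'may optimize' if ratio <= max_misplaced else 'backs off'
    print(f'balancer {verdict} (misplaced ratio {ratio:.3f}, max {max_misplaced})')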
Dec  3 18:26:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:26:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:26:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:26:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:26:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:26:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:26:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 18:26:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:26:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 18:26:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:26:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:26:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:26:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:26:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:26:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:26:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:26:14 compute-0 python3.9[368347]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:26:14 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v802: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  3 18:26:15 compute-0 python3.9[368499]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Dec  3 18:26:16 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v803: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  3 18:26:16 compute-0 python3.9[368737]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  3 18:26:16 compute-0 systemd[1]: Started libpod-conmon-d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c.scope.
Dec  3 18:26:16 compute-0 podman[368764]: 2025-12-03 18:26:16.674695067 +0000 UTC m=+0.146144480 container exec d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, version=9.6, architecture=x86_64, distribution-scope=public, vcs-type=git, config_id=edpm, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, name=ubi9-minimal, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec  3 18:26:16 compute-0 podman[368764]: 2025-12-03 18:26:16.711073988 +0000 UTC m=+0.182523381 container exec_died d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, maintainer=Red Hat, Inc., name=ubi9-minimal, version=9.6, architecture=x86_64, com.redhat.component=ubi9-minimal-container, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.openshift.expose-services=, release=1755695350, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, vendor=Red Hat, Inc., io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible)
Dec  3 18:26:16 compute-0 systemd[1]: libpod-conmon-d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c.scope: Deactivated successfully.
Dec  3 18:26:17 compute-0 python3.9[368960]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  3 18:26:17 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:26:17 compute-0 systemd[1]: Started libpod-conmon-d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c.scope.
Dec  3 18:26:17 compute-0 podman[368965]: 2025-12-03 18:26:17.752620928 +0000 UTC m=+0.107077102 container exec d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, vendor=Red Hat, Inc., version=9.6, config_id=edpm, distribution-scope=public, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, name=ubi9-minimal)
Dec  3 18:26:17 compute-0 podman[368965]: 2025-12-03 18:26:17.785126914 +0000 UTC m=+0.139583078 container exec_died d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, architecture=x86_64, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, vendor=Red Hat, Inc., version=9.6, io.buildah.version=1.33.7, name=ubi9-minimal, release=1755695350, managed_by=edpm_ansible, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec  3 18:26:17 compute-0 systemd[1]: libpod-conmon-d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c.scope: Deactivated successfully.
Dec  3 18:26:17 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Dec  3 18:26:17 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  3 18:26:17 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:26:17 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:26:17 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 18:26:17 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:26:17 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 18:26:17 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:26:17 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 446b72e0-909a-4162-801a-7d50fd6ed9a8 does not exist
Dec  3 18:26:17 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 48496e79-a384-4519-95d3-ec16ee582e85 does not exist
Dec  3 18:26:17 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 4fe896a6-8027-474b-a38a-1d5a5513a601 does not exist
Dec  3 18:26:17 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 18:26:17 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 18:26:17 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 18:26:17 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:26:17 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:26:17 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:26:18 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v804: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  3 18:26:18 compute-0 podman[369297]: 2025-12-03 18:26:18.557589591 +0000 UTC m=+0.065976835 container create 119b1a7ba3a4ed101f64c070e2338d1d79b46423aeba8dfbdd454677704123cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mestorf, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Dec  3 18:26:18 compute-0 systemd[1]: Started libpod-conmon-119b1a7ba3a4ed101f64c070e2338d1d79b46423aeba8dfbdd454677704123cf.scope.
Dec  3 18:26:18 compute-0 podman[369297]: 2025-12-03 18:26:18.529368772 +0000 UTC m=+0.037756066 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:26:18 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:26:18 compute-0 podman[369297]: 2025-12-03 18:26:18.653884796 +0000 UTC m=+0.162272050 container init 119b1a7ba3a4ed101f64c070e2338d1d79b46423aeba8dfbdd454677704123cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:26:18 compute-0 podman[369297]: 2025-12-03 18:26:18.674510567 +0000 UTC m=+0.182897801 container start 119b1a7ba3a4ed101f64c070e2338d1d79b46423aeba8dfbdd454677704123cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507)
Dec  3 18:26:18 compute-0 podman[369297]: 2025-12-03 18:26:18.679224623 +0000 UTC m=+0.187611857 container attach 119b1a7ba3a4ed101f64c070e2338d1d79b46423aeba8dfbdd454677704123cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mestorf, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:26:18 compute-0 cool_mestorf[369313]: 167 167
Dec  3 18:26:18 compute-0 systemd[1]: libpod-119b1a7ba3a4ed101f64c070e2338d1d79b46423aeba8dfbdd454677704123cf.scope: Deactivated successfully.
Dec  3 18:26:18 compute-0 podman[369297]: 2025-12-03 18:26:18.683061218 +0000 UTC m=+0.191448492 container died 119b1a7ba3a4ed101f64c070e2338d1d79b46423aeba8dfbdd454677704123cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:26:18 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  3 18:26:18 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:26:18 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:26:18 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:26:18 compute-0 python3.9[369296]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 18:26:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-0035a719275bfc599dc558798dbbc41f3b5261029df2b35ab62eeff2273a86f5-merged.mount: Deactivated successfully.
Dec  3 18:26:18 compute-0 podman[369297]: 2025-12-03 18:26:18.751698758 +0000 UTC m=+0.260086002 container remove 119b1a7ba3a4ed101f64c070e2338d1d79b46423aeba8dfbdd454677704123cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mestorf, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:26:18 compute-0 systemd[1]: libpod-conmon-119b1a7ba3a4ed101f64c070e2338d1d79b46423aeba8dfbdd454677704123cf.scope: Deactivated successfully.
Dec  3 18:26:18 compute-0 podman[369359]: 2025-12-03 18:26:18.961693208 +0000 UTC m=+0.069411930 container create 3b21e0bccfee2c99ce5559552bcd4f9a04f891b9ad62ded19edfb078fce56bde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_gauss, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:26:19 compute-0 systemd[1]: Started libpod-conmon-3b21e0bccfee2c99ce5559552bcd4f9a04f891b9ad62ded19edfb078fce56bde.scope.
Dec  3 18:26:19 compute-0 podman[369359]: 2025-12-03 18:26:18.931346886 +0000 UTC m=+0.039065548 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:26:19 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:26:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e0ab9b6dd151d75dbc7037cfc1725b2723e729c078caf6dc4b1e13ead0c2c55/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:26:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e0ab9b6dd151d75dbc7037cfc1725b2723e729c078caf6dc4b1e13ead0c2c55/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:26:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e0ab9b6dd151d75dbc7037cfc1725b2723e729c078caf6dc4b1e13ead0c2c55/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:26:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e0ab9b6dd151d75dbc7037cfc1725b2723e729c078caf6dc4b1e13ead0c2c55/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:31:02 compute-0 rsyslogd[188590]: imjournal: 3794 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Dec  3 18:31:02 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v946: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:31:03 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:31:03 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:31:03 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 18:31:03 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:31:03 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:31:03 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 18:31:03 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:31:03 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 4fdfba6b-8ce1-4627-8eb6-043c68fc8e59 does not exist
Dec  3 18:31:03 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 9d1f79f7-e095-4445-9abe-7e9eedf8bc2a does not exist
Dec  3 18:31:03 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 577149ca-fba9-48c3-a27d-5c528efed40c does not exist
Dec  3 18:31:03 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 18:31:03 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 18:31:03 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 18:31:03 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:31:03 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:31:03 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:31:03 compute-0 podman[400630]: 2025-12-03 18:31:03.503077472 +0000 UTC m=+0.119946864 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 18:31:03 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:31:03 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:31:03 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:31:04 compute-0 podman[400762]: 2025-12-03 18:31:04.254127145 +0000 UTC m=+0.087908805 container create 7f70183a35d15db74ee83bcc9dcfe85439af2a0b337a0e53176c18c61d802722 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_johnson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True)
Dec  3 18:31:04 compute-0 podman[400762]: 2025-12-03 18:31:04.233814069 +0000 UTC m=+0.067595769 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:31:04 compute-0 systemd[1]: Started libpod-conmon-7f70183a35d15db74ee83bcc9dcfe85439af2a0b337a0e53176c18c61d802722.scope.
Dec  3 18:31:04 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:31:04 compute-0 podman[400762]: 2025-12-03 18:31:04.392685553 +0000 UTC m=+0.226467233 container init 7f70183a35d15db74ee83bcc9dcfe85439af2a0b337a0e53176c18c61d802722 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:31:04 compute-0 podman[400762]: 2025-12-03 18:31:04.40160918 +0000 UTC m=+0.235390840 container start 7f70183a35d15db74ee83bcc9dcfe85439af2a0b337a0e53176c18c61d802722 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_johnson, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec  3 18:31:04 compute-0 podman[400762]: 2025-12-03 18:31:04.405891304 +0000 UTC m=+0.239672964 container attach 7f70183a35d15db74ee83bcc9dcfe85439af2a0b337a0e53176c18c61d802722 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_johnson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec  3 18:31:04 compute-0 clever_johnson[400778]: 167 167
Dec  3 18:31:04 compute-0 podman[400762]: 2025-12-03 18:31:04.413864329 +0000 UTC m=+0.247645989 container died 7f70183a35d15db74ee83bcc9dcfe85439af2a0b337a0e53176c18c61d802722 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_johnson, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec  3 18:31:04 compute-0 systemd[1]: libpod-7f70183a35d15db74ee83bcc9dcfe85439af2a0b337a0e53176c18c61d802722.scope: Deactivated successfully.
Dec  3 18:31:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc86012ed293218dc6cd4be7bf99894aff51d44b14f1d4dbeeddbf8bd9681c2e-merged.mount: Deactivated successfully.
Dec  3 18:31:04 compute-0 podman[400762]: 2025-12-03 18:31:04.493754207 +0000 UTC m=+0.327535907 container remove 7f70183a35d15db74ee83bcc9dcfe85439af2a0b337a0e53176c18c61d802722 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_johnson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:31:04 compute-0 systemd[1]: libpod-conmon-7f70183a35d15db74ee83bcc9dcfe85439af2a0b337a0e53176c18c61d802722.scope: Deactivated successfully.
Dec  3 18:31:04 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v947: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:31:04 compute-0 podman[400801]: 2025-12-03 18:31:04.73260034 +0000 UTC m=+0.068724427 container create 976901d46958cb6bc49356dee3c8df437635912e2ccf6538a2e8b706773ea305 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_chatelet, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:31:04 compute-0 systemd[1]: Started libpod-conmon-976901d46958cb6bc49356dee3c8df437635912e2ccf6538a2e8b706773ea305.scope.
Dec  3 18:31:04 compute-0 podman[400801]: 2025-12-03 18:31:04.703978752 +0000 UTC m=+0.040102869 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:31:04 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:31:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c13f87c4328a10e3e02d26d8981ef7d558ef046e3a3e2f8b0a2298ae87bbc01/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:31:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c13f87c4328a10e3e02d26d8981ef7d558ef046e3a3e2f8b0a2298ae87bbc01/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:31:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c13f87c4328a10e3e02d26d8981ef7d558ef046e3a3e2f8b0a2298ae87bbc01/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:31:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c13f87c4328a10e3e02d26d8981ef7d558ef046e3a3e2f8b0a2298ae87bbc01/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:31:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c13f87c4328a10e3e02d26d8981ef7d558ef046e3a3e2f8b0a2298ae87bbc01/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 18:31:04 compute-0 podman[400801]: 2025-12-03 18:31:04.854632325 +0000 UTC m=+0.190756462 container init 976901d46958cb6bc49356dee3c8df437635912e2ccf6538a2e8b706773ea305 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507)
Dec  3 18:31:04 compute-0 podman[400801]: 2025-12-03 18:31:04.874166991 +0000 UTC m=+0.210291048 container start 976901d46958cb6bc49356dee3c8df437635912e2ccf6538a2e8b706773ea305 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_chatelet, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Dec  3 18:31:04 compute-0 podman[400801]: 2025-12-03 18:31:04.882235748 +0000 UTC m=+0.218359895 container attach 976901d46958cb6bc49356dee3c8df437635912e2ccf6538a2e8b706773ea305 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_chatelet, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  3 18:31:06 compute-0 hopeful_chatelet[400817]: --> passed data devices: 0 physical, 3 LVM
Dec  3 18:31:06 compute-0 hopeful_chatelet[400817]: --> relative data size: 1.0
Dec  3 18:31:06 compute-0 hopeful_chatelet[400817]: --> All data devices are unavailable
Dec  3 18:31:06 compute-0 systemd[1]: libpod-976901d46958cb6bc49356dee3c8df437635912e2ccf6538a2e8b706773ea305.scope: Deactivated successfully.
Dec  3 18:31:06 compute-0 systemd[1]: libpod-976901d46958cb6bc49356dee3c8df437635912e2ccf6538a2e8b706773ea305.scope: Consumed 1.324s CPU time.
Dec  3 18:31:06 compute-0 podman[400801]: 2025-12-03 18:31:06.257095959 +0000 UTC m=+1.593220046 container died 976901d46958cb6bc49356dee3c8df437635912e2ccf6538a2e8b706773ea305 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_chatelet, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  3 18:31:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c13f87c4328a10e3e02d26d8981ef7d558ef046e3a3e2f8b0a2298ae87bbc01-merged.mount: Deactivated successfully.
Dec  3 18:31:06 compute-0 podman[400801]: 2025-12-03 18:31:06.349248756 +0000 UTC m=+1.685372813 container remove 976901d46958cb6bc49356dee3c8df437635912e2ccf6538a2e8b706773ea305 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:31:06 compute-0 systemd[1]: libpod-conmon-976901d46958cb6bc49356dee3c8df437635912e2ccf6538a2e8b706773ea305.scope: Deactivated successfully.
Dec  3 18:31:06 compute-0 podman[400850]: 2025-12-03 18:31:06.422216355 +0000 UTC m=+0.122457416 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3)
Dec  3 18:31:06 compute-0 podman[400847]: 2025-12-03 18:31:06.428780215 +0000 UTC m=+0.134544691 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, maintainer=Red Hat, Inc., release=1214.1726694543, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, release-0.7.12=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, vcs-type=git, vendor=Red Hat, Inc., config_id=edpm)
Dec  3 18:31:06 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v948: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:31:07 compute-0 podman[401037]: 2025-12-03 18:31:07.34815165 +0000 UTC m=+0.082273456 container create ccfab274b6e8676c2d842380bbc4e2e9eee9561c54f184244d9d9c2ea77c315e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:31:07 compute-0 systemd[1]: Started libpod-conmon-ccfab274b6e8676c2d842380bbc4e2e9eee9561c54f184244d9d9c2ea77c315e.scope.
Dec  3 18:31:07 compute-0 podman[401037]: 2025-12-03 18:31:07.317352009 +0000 UTC m=+0.051473865 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:31:07 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:31:07 compute-0 podman[401037]: 2025-12-03 18:31:07.458379448 +0000 UTC m=+0.192501294 container init ccfab274b6e8676c2d842380bbc4e2e9eee9561c54f184244d9d9c2ea77c315e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wilbur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:31:07 compute-0 podman[401037]: 2025-12-03 18:31:07.466838194 +0000 UTC m=+0.200959970 container start ccfab274b6e8676c2d842380bbc4e2e9eee9561c54f184244d9d9c2ea77c315e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wilbur, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec  3 18:31:07 compute-0 podman[401037]: 2025-12-03 18:31:07.471857207 +0000 UTC m=+0.205979003 container attach ccfab274b6e8676c2d842380bbc4e2e9eee9561c54f184244d9d9c2ea77c315e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wilbur, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:31:07 compute-0 amazing_wilbur[401052]: 167 167
Dec  3 18:31:07 compute-0 systemd[1]: libpod-ccfab274b6e8676c2d842380bbc4e2e9eee9561c54f184244d9d9c2ea77c315e.scope: Deactivated successfully.
Dec  3 18:31:07 compute-0 podman[401037]: 2025-12-03 18:31:07.475556006 +0000 UTC m=+0.209677812 container died ccfab274b6e8676c2d842380bbc4e2e9eee9561c54f184244d9d9c2ea77c315e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wilbur, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  3 18:31:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-11b0cfb42e3a4ca0634988a51c5e8fa241d09e27a8069005e209fe5fbb4b5bf0-merged.mount: Deactivated successfully.
Dec  3 18:31:07 compute-0 podman[401037]: 2025-12-03 18:31:07.544303263 +0000 UTC m=+0.278425069 container remove ccfab274b6e8676c2d842380bbc4e2e9eee9561c54f184244d9d9c2ea77c315e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wilbur, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec  3 18:31:07 compute-0 systemd[1]: libpod-conmon-ccfab274b6e8676c2d842380bbc4e2e9eee9561c54f184244d9d9c2ea77c315e.scope: Deactivated successfully.
Dec  3 18:31:07 compute-0 podman[401075]: 2025-12-03 18:31:07.821434069 +0000 UTC m=+0.092264669 container create ccd28b345481c5d6f079963a5acc4937a2467ec5822befb546fb2eb6273f4b04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_booth, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:31:07 compute-0 podman[401075]: 2025-12-03 18:31:07.785928584 +0000 UTC m=+0.056759204 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:31:07 compute-0 systemd[1]: Started libpod-conmon-ccd28b345481c5d6f079963a5acc4937a2467ec5822befb546fb2eb6273f4b04.scope.
Dec  3 18:31:07 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:31:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1840693db477c2878fef9499caac6d69444ce4b1cc304a249cae38ade34e386/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:31:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1840693db477c2878fef9499caac6d69444ce4b1cc304a249cae38ade34e386/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:31:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1840693db477c2878fef9499caac6d69444ce4b1cc304a249cae38ade34e386/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:31:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1840693db477c2878fef9499caac6d69444ce4b1cc304a249cae38ade34e386/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:31:08 compute-0 podman[401075]: 2025-12-03 18:31:08.044996471 +0000 UTC m=+0.315827081 container init ccd28b345481c5d6f079963a5acc4937a2467ec5822befb546fb2eb6273f4b04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_booth, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec  3 18:31:08 compute-0 podman[401075]: 2025-12-03 18:31:08.063165163 +0000 UTC m=+0.333995763 container start ccd28b345481c5d6f079963a5acc4937a2467ec5822befb546fb2eb6273f4b04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec  3 18:31:08 compute-0 podman[401075]: 2025-12-03 18:31:08.069043777 +0000 UTC m=+0.339874347 container attach ccd28b345481c5d6f079963a5acc4937a2467ec5822befb546fb2eb6273f4b04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_booth, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:31:08 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
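The allocator figures in the _set_new_cache_sizes line above are round MiB values carved out of the roughly 973 MiB cache_size; a quick arithmetic check (all numbers copied from that line, nothing else from the monitor):

    # Values from the mon.compute-0 _set_new_cache_sizes line above.
    cache_size = 1020054731            # bytes
    inc_alloc = full_alloc = 348127232
    kv_alloc = 322961408

    MiB = 1024 * 1024
    print(inc_alloc / MiB)             # 332.0
    print(kv_alloc / MiB)              # 308.0
    print(round(cache_size / MiB, 1))  # 972.8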
Dec  3 18:31:08 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v949: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:31:08 compute-0 tender_booth[401091]: {
Dec  3 18:31:08 compute-0 tender_booth[401091]:    "0": [
Dec  3 18:31:08 compute-0 tender_booth[401091]:        {
Dec  3 18:31:08 compute-0 tender_booth[401091]:            "devices": [
Dec  3 18:31:08 compute-0 tender_booth[401091]:                "/dev/loop3"
Dec  3 18:31:08 compute-0 tender_booth[401091]:            ],
Dec  3 18:31:08 compute-0 tender_booth[401091]:            "lv_name": "ceph_lv0",
Dec  3 18:31:08 compute-0 tender_booth[401091]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:31:08 compute-0 tender_booth[401091]:            "lv_size": "21470642176",
Dec  3 18:31:08 compute-0 tender_booth[401091]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=973fbbc8-5aff-4a53-bee8-42e5a6788dd6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:31:08 compute-0 tender_booth[401091]:            "lv_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:31:08 compute-0 tender_booth[401091]:            "name": "ceph_lv0",
Dec  3 18:31:08 compute-0 tender_booth[401091]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:31:08 compute-0 tender_booth[401091]:            "tags": {
Dec  3 18:31:08 compute-0 tender_booth[401091]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:31:08 compute-0 tender_booth[401091]:                "ceph.block_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:31:08 compute-0 tender_booth[401091]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:31:08 compute-0 tender_booth[401091]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:31:08 compute-0 tender_booth[401091]:                "ceph.cluster_name": "ceph",
Dec  3 18:31:08 compute-0 tender_booth[401091]:                "ceph.crush_device_class": "",
Dec  3 18:31:08 compute-0 tender_booth[401091]:                "ceph.encrypted": "0",
Dec  3 18:31:08 compute-0 tender_booth[401091]:                "ceph.osd_fsid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:31:08 compute-0 tender_booth[401091]:                "ceph.osd_id": "0",
Dec  3 18:31:08 compute-0 tender_booth[401091]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:31:08 compute-0 tender_booth[401091]:                "ceph.type": "block",
Dec  3 18:31:08 compute-0 tender_booth[401091]:                "ceph.vdo": "0"
Dec  3 18:31:08 compute-0 tender_booth[401091]:            },
Dec  3 18:31:08 compute-0 tender_booth[401091]:            "type": "block",
Dec  3 18:31:08 compute-0 tender_booth[401091]:            "vg_name": "ceph_vg0"
Dec  3 18:31:08 compute-0 tender_booth[401091]:        }
Dec  3 18:31:08 compute-0 tender_booth[401091]:    ],
Dec  3 18:31:08 compute-0 tender_booth[401091]:    "1": [
Dec  3 18:31:08 compute-0 tender_booth[401091]:        {
Dec  3 18:31:08 compute-0 tender_booth[401091]:            "devices": [
Dec  3 18:31:08 compute-0 tender_booth[401091]:                "/dev/loop4"
Dec  3 18:31:08 compute-0 tender_booth[401091]:            ],
Dec  3 18:31:08 compute-0 tender_booth[401091]:            "lv_name": "ceph_lv1",
Dec  3 18:31:08 compute-0 tender_booth[401091]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:31:08 compute-0 tender_booth[401091]:            "lv_size": "21470642176",
Dec  3 18:31:08 compute-0 tender_booth[401091]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1e2b0083-5293-47cb-a3d1-bc27cedc4ede,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:31:08 compute-0 tender_booth[401091]:            "lv_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:31:08 compute-0 tender_booth[401091]:            "name": "ceph_lv1",
Dec  3 18:31:08 compute-0 tender_booth[401091]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:31:08 compute-0 tender_booth[401091]:            "tags": {
Dec  3 18:31:08 compute-0 tender_booth[401091]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:31:08 compute-0 tender_booth[401091]:                "ceph.block_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:31:08 compute-0 tender_booth[401091]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:31:08 compute-0 tender_booth[401091]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:31:08 compute-0 tender_booth[401091]:                "ceph.cluster_name": "ceph",
Dec  3 18:31:08 compute-0 tender_booth[401091]:                "ceph.crush_device_class": "",
Dec  3 18:31:08 compute-0 tender_booth[401091]:                "ceph.encrypted": "0",
Dec  3 18:31:08 compute-0 tender_booth[401091]:                "ceph.osd_fsid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:31:08 compute-0 tender_booth[401091]:                "ceph.osd_id": "1",
Dec  3 18:31:08 compute-0 tender_booth[401091]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:31:08 compute-0 tender_booth[401091]:                "ceph.type": "block",
Dec  3 18:31:08 compute-0 tender_booth[401091]:                "ceph.vdo": "0"
Dec  3 18:31:08 compute-0 tender_booth[401091]:            },
Dec  3 18:31:08 compute-0 tender_booth[401091]:            "type": "block",
Dec  3 18:31:08 compute-0 tender_booth[401091]:            "vg_name": "ceph_vg1"
Dec  3 18:31:08 compute-0 tender_booth[401091]:        }
Dec  3 18:31:08 compute-0 tender_booth[401091]:    ],
Dec  3 18:31:08 compute-0 tender_booth[401091]:    "2": [
Dec  3 18:31:08 compute-0 tender_booth[401091]:        {
Dec  3 18:31:08 compute-0 tender_booth[401091]:            "devices": [
Dec  3 18:31:08 compute-0 tender_booth[401091]:                "/dev/loop5"
Dec  3 18:31:08 compute-0 tender_booth[401091]:            ],
Dec  3 18:31:08 compute-0 tender_booth[401091]:            "lv_name": "ceph_lv2",
Dec  3 18:31:08 compute-0 tender_booth[401091]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:31:08 compute-0 tender_booth[401091]:            "lv_size": "21470642176",
Dec  3 18:31:08 compute-0 tender_booth[401091]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2abec9de-afba-437e-9a17-384a1dd8cd50,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:31:08 compute-0 tender_booth[401091]:            "lv_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:31:08 compute-0 tender_booth[401091]:            "name": "ceph_lv2",
Dec  3 18:31:08 compute-0 tender_booth[401091]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:31:08 compute-0 tender_booth[401091]:            "tags": {
Dec  3 18:31:08 compute-0 tender_booth[401091]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:31:08 compute-0 tender_booth[401091]:                "ceph.block_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:31:08 compute-0 tender_booth[401091]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:31:08 compute-0 tender_booth[401091]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:31:08 compute-0 tender_booth[401091]:                "ceph.cluster_name": "ceph",
Dec  3 18:31:08 compute-0 tender_booth[401091]:                "ceph.crush_device_class": "",
Dec  3 18:31:08 compute-0 tender_booth[401091]:                "ceph.encrypted": "0",
Dec  3 18:31:08 compute-0 tender_booth[401091]:                "ceph.osd_fsid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:31:08 compute-0 tender_booth[401091]:                "ceph.osd_id": "2",
Dec  3 18:31:08 compute-0 tender_booth[401091]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:31:08 compute-0 tender_booth[401091]:                "ceph.type": "block",
Dec  3 18:31:08 compute-0 tender_booth[401091]:                "ceph.vdo": "0"
Dec  3 18:31:08 compute-0 tender_booth[401091]:            },
Dec  3 18:31:08 compute-0 tender_booth[401091]:            "type": "block",
Dec  3 18:31:08 compute-0 tender_booth[401091]:            "vg_name": "ceph_vg2"
Dec  3 18:31:08 compute-0 tender_booth[401091]:        }
Dec  3 18:31:08 compute-0 tender_booth[401091]:    ]
Dec  3 18:31:08 compute-0 tender_booth[401091]: }
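The JSON block that tender_booth just printed has the shape of ceph-volume lvm list --format json output, a map from OSD id to a list of LV records (an assumption; the log does not show the command cephadm ran inside the container). A minimal sketch for extracting the OSD-to-device mapping, assuming the blob has been saved to lvm_list.json (a hypothetical filename):

    import json

    # Parse output shaped like the tender_booth JSON above:
    # {"0": [ {lv record}, ... ], "1": [...], "2": [...]}
    with open("lvm_list.json") as f:
        lvm = json.load(f)

    for osd_id, lvs in sorted(lvm.items()):
        for lv in lvs:
            print(osd_id, lv["lv_path"],
                  lv["tags"]["ceph.osd_fsid"], lv["devices"])
    # 0 /dev/ceph_vg0/ceph_lv0 973fbbc8-5aff-4a53-bee8-42e5a6788dd6 ['/dev/loop3']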
Dec  3 18:31:08 compute-0 systemd[1]: libpod-ccd28b345481c5d6f079963a5acc4937a2467ec5822befb546fb2eb6273f4b04.scope: Deactivated successfully.
Dec  3 18:31:08 compute-0 podman[401075]: 2025-12-03 18:31:08.874514816 +0000 UTC m=+1.145345406 container died ccd28b345481c5d6f079963a5acc4937a2467ec5822befb546fb2eb6273f4b04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_booth, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:31:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-a1840693db477c2878fef9499caac6d69444ce4b1cc304a249cae38ade34e386-merged.mount: Deactivated successfully.
Dec  3 18:31:08 compute-0 podman[401075]: 2025-12-03 18:31:08.961896916 +0000 UTC m=+1.232727486 container remove ccd28b345481c5d6f079963a5acc4937a2467ec5822befb546fb2eb6273f4b04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_booth, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:31:08 compute-0 systemd[1]: libpod-conmon-ccd28b345481c5d6f079963a5acc4937a2467ec5822befb546fb2eb6273f4b04.scope: Deactivated successfully.
Dec  3 18:31:10 compute-0 podman[401251]: 2025-12-03 18:31:10.057902468 +0000 UTC m=+0.070164411 container create bb74ee12b2648523b2d1facc10f82d0fceb06818f1fa068cd2c51d8f40f31ef5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_curran, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec  3 18:31:10 compute-0 systemd[1]: Started libpod-conmon-bb74ee12b2648523b2d1facc10f82d0fceb06818f1fa068cd2c51d8f40f31ef5.scope.
Dec  3 18:31:10 compute-0 podman[401251]: 2025-12-03 18:31:10.029047924 +0000 UTC m=+0.041309927 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:31:10 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:31:10 compute-0 podman[401251]: 2025-12-03 18:31:10.188619305 +0000 UTC m=+0.200881218 container init bb74ee12b2648523b2d1facc10f82d0fceb06818f1fa068cd2c51d8f40f31ef5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_curran, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Dec  3 18:31:10 compute-0 podman[401251]: 2025-12-03 18:31:10.199277525 +0000 UTC m=+0.211539458 container start bb74ee12b2648523b2d1facc10f82d0fceb06818f1fa068cd2c51d8f40f31ef5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_curran, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec  3 18:31:10 compute-0 unruffled_curran[401267]: 167 167
Dec  3 18:31:10 compute-0 systemd[1]: libpod-bb74ee12b2648523b2d1facc10f82d0fceb06818f1fa068cd2c51d8f40f31ef5.scope: Deactivated successfully.
Dec  3 18:31:10 compute-0 conmon[401267]: conmon bb74ee12b2648523b2d1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bb74ee12b2648523b2d1facc10f82d0fceb06818f1fa068cd2c51d8f40f31ef5.scope/container/memory.events
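The conmon <nwarn> above is benign at container exit: conmon reads the cgroup's memory.events file, and by the time it looks, systemd has already torn the scope down and the file is gone. A minimal sketch of reading that file defensively (the path is copied from the warning; the parsing is illustrative, not conmon's code):

    from pathlib import Path

    p = Path("/sys/fs/cgroup/machine.slice/"
             "libpod-bb74ee12b2648523b2d1facc10f82d0fceb06818f1fa068cd2c51d8f40f31ef5.scope/"
             "container/memory.events")
    try:
        # cgroup v2 memory.events holds "key value" pairs, e.g. "oom_kill 0".
        events = dict(line.split() for line in p.read_text().splitlines())
        print(events.get("oom_kill", "0"))
    except FileNotFoundError:
        pass  # scope already removed by systemd; nothing left to report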
Dec  3 18:31:10 compute-0 podman[401251]: 2025-12-03 18:31:10.214553258 +0000 UTC m=+0.226815331 container attach bb74ee12b2648523b2d1facc10f82d0fceb06818f1fa068cd2c51d8f40f31ef5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_curran, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:31:10 compute-0 podman[401251]: 2025-12-03 18:31:10.215189513 +0000 UTC m=+0.227451466 container died bb74ee12b2648523b2d1facc10f82d0fceb06818f1fa068cd2c51d8f40f31ef5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_curran, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  3 18:31:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b2430b52d28feed060f9fb5b2e50f43b3bdb1f42e7e403e6bd346e9362977c8-merged.mount: Deactivated successfully.
Dec  3 18:31:10 compute-0 podman[401251]: 2025-12-03 18:31:10.286748207 +0000 UTC m=+0.299010120 container remove bb74ee12b2648523b2d1facc10f82d0fceb06818f1fa068cd2c51d8f40f31ef5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_curran, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:31:10 compute-0 systemd[1]: libpod-conmon-bb74ee12b2648523b2d1facc10f82d0fceb06818f1fa068cd2c51d8f40f31ef5.scope: Deactivated successfully.
Dec  3 18:31:10 compute-0 podman[401289]: 2025-12-03 18:31:10.530131301 +0000 UTC m=+0.079037887 container create aad04c2790de34eb2bfd051e02c1d787bcfcf850e1f915802202aa10690086b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_curran, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Dec  3 18:31:10 compute-0 podman[401289]: 2025-12-03 18:31:10.495017695 +0000 UTC m=+0.043924291 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:31:10 compute-0 systemd[1]: Started libpod-conmon-aad04c2790de34eb2bfd051e02c1d787bcfcf850e1f915802202aa10690086b4.scope.
Dec  3 18:31:10 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:31:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f62a2e7080e089e69390e0d5c44d5e151c86dc5067e962e07728ecb77da02856/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:31:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f62a2e7080e089e69390e0d5c44d5e151c86dc5067e962e07728ecb77da02856/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:31:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f62a2e7080e089e69390e0d5c44d5e151c86dc5067e962e07728ecb77da02856/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:31:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f62a2e7080e089e69390e0d5c44d5e151c86dc5067e962e07728ecb77da02856/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:31:10 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v950: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:31:10 compute-0 podman[401289]: 2025-12-03 18:31:10.693814202 +0000 UTC m=+0.242720838 container init aad04c2790de34eb2bfd051e02c1d787bcfcf850e1f915802202aa10690086b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:31:10 compute-0 podman[401289]: 2025-12-03 18:31:10.72692247 +0000 UTC m=+0.275829046 container start aad04c2790de34eb2bfd051e02c1d787bcfcf850e1f915802202aa10690086b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_curran, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  3 18:31:10 compute-0 podman[401289]: 2025-12-03 18:31:10.733339026 +0000 UTC m=+0.282245622 container attach aad04c2790de34eb2bfd051e02c1d787bcfcf850e1f915802202aa10690086b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:31:11 compute-0 vigilant_curran[401305]: {
Dec  3 18:31:11 compute-0 vigilant_curran[401305]:    "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
Dec  3 18:31:11 compute-0 vigilant_curran[401305]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:31:11 compute-0 vigilant_curran[401305]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 18:31:11 compute-0 vigilant_curran[401305]:        "osd_id": 1,
Dec  3 18:31:11 compute-0 vigilant_curran[401305]:        "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:31:11 compute-0 vigilant_curran[401305]:        "type": "bluestore"
Dec  3 18:31:11 compute-0 vigilant_curran[401305]:    },
Dec  3 18:31:11 compute-0 vigilant_curran[401305]:    "2abec9de-afba-437e-9a17-384a1dd8cd50": {
Dec  3 18:31:11 compute-0 vigilant_curran[401305]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:31:11 compute-0 vigilant_curran[401305]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 18:31:11 compute-0 vigilant_curran[401305]:        "osd_id": 2,
Dec  3 18:31:11 compute-0 vigilant_curran[401305]:        "osd_uuid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:31:11 compute-0 vigilant_curran[401305]:        "type": "bluestore"
Dec  3 18:31:11 compute-0 vigilant_curran[401305]:    },
Dec  3 18:31:11 compute-0 vigilant_curran[401305]:    "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": {
Dec  3 18:31:11 compute-0 vigilant_curran[401305]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:31:11 compute-0 vigilant_curran[401305]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 18:31:11 compute-0 vigilant_curran[401305]:        "osd_id": 0,
Dec  3 18:31:11 compute-0 vigilant_curran[401305]:        "osd_uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:31:11 compute-0 vigilant_curran[401305]:        "type": "bluestore"
Dec  3 18:31:11 compute-0 vigilant_curran[401305]:    }
Dec  3 18:31:11 compute-0 vigilant_curran[401305]: }
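This second blob, keyed by OSD uuid with bluestore records, has the shape of ceph-volume raw list --format json (again an assumption; the invocation is not in the log). Its osd_uuid values match the ceph.osd_fsid tags in the earlier tender_booth listing, so the two views can be joined; a minimal sketch, assuming both blobs were saved to the hypothetical files lvm_list.json and raw_list.json:

    import json

    with open("lvm_list.json") as f:   # tender_booth output above
        lvm = json.load(f)
    with open("raw_list.json") as f:   # vigilant_curran output above
        raw = json.load(f)

    # Index the LV records by osd_fsid so the bluestore view can be joined.
    by_fsid = {lv["tags"]["ceph.osd_fsid"]: lv
               for lvs in lvm.values() for lv in lvs}

    for info in sorted(raw.values(), key=lambda i: i["osd_id"]):
        lv = by_fsid[info["osd_uuid"]]
        print(info["osd_id"], info["device"], info["type"], lv["vg_name"])
    # 0 /dev/mapper/ceph_vg0-ceph_lv0 bluestore ceph_vg0
    # 1 /dev/mapper/ceph_vg1-ceph_lv1 bluestore ceph_vg1
    # 2 /dev/mapper/ceph_vg2-ceph_lv2 bluestore ceph_vg2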
Dec  3 18:31:11 compute-0 systemd[1]: libpod-aad04c2790de34eb2bfd051e02c1d787bcfcf850e1f915802202aa10690086b4.scope: Deactivated successfully.
Dec  3 18:31:11 compute-0 systemd[1]: libpod-aad04c2790de34eb2bfd051e02c1d787bcfcf850e1f915802202aa10690086b4.scope: Consumed 1.216s CPU time.
Dec  3 18:31:11 compute-0 podman[401289]: 2025-12-03 18:31:11.933069927 +0000 UTC m=+1.481976493 container died aad04c2790de34eb2bfd051e02c1d787bcfcf850e1f915802202aa10690086b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_curran, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:31:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-f62a2e7080e089e69390e0d5c44d5e151c86dc5067e962e07728ecb77da02856-merged.mount: Deactivated successfully.
Dec  3 18:31:11 compute-0 podman[401333]: 2025-12-03 18:31:11.977844139 +0000 UTC m=+0.136874249 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
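The health_status event above embeds the exporter's container definition as a Python literal (config_data={...} with single quotes and True), so it can be recovered with ast.literal_eval rather than a JSON parser. A minimal sketch, assuming the journal line has been saved to health_event.txt (a hypothetical filename; the regex is illustrative):

    import ast
    import re

    line = open("health_event.txt").read()  # the podman[401333] line above

    # config_data is a Python-literal dict, so json.loads would reject it,
    # but ast.literal_eval parses it safely once cut out of the line.
    m = re.search(r"config_data=(\{.*?\}), config_id=", line)
    cfg = ast.literal_eval(m.group(1))

    print(cfg["image"])                # quay.io/navidys/prometheus-podman-exporter:v1.10.1
    print(cfg["ports"])                # ['9882:9882']
    print(cfg["healthcheck"]["test"])  # /openstack/healthcheck podman_exporter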
Dec  3 18:31:12 compute-0 podman[401289]: 2025-12-03 18:31:12.025878239 +0000 UTC m=+1.574784815 container remove aad04c2790de34eb2bfd051e02c1d787bcfcf850e1f915802202aa10690086b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:31:12 compute-0 systemd[1]: libpod-conmon-aad04c2790de34eb2bfd051e02c1d787bcfcf850e1f915802202aa10690086b4.scope: Deactivated successfully.
Dec  3 18:31:12 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:31:12 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:31:12 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:31:12 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:31:12 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev c9ebc92a-86e4-4f2b-a10a-c80f1aff2780 does not exist
Dec  3 18:31:12 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 9d430c53-eb0a-4ea4-8731-d27be97a7d92 does not exist
Dec  3 18:31:12 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v951: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:31:13 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:31:13 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:31:13 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.244 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them. Therefore, one can expect the polling process to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.245 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
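The pair of lines above says there are more pollsters than worker threads, so this source is polled with a single thread and the pollsters run one after another. A minimal illustration of that effect with concurrent.futures (illustrative only, not ceilometer's code):

    from concurrent.futures import ThreadPoolExecutor
    import time

    def poll(name):
        time.sleep(0.1)            # stand-in for one pollster's work
        return name

    # One worker, five tasks: submissions queue up and run serially,
    # so wall time grows with the number of pollsters.
    with ThreadPoolExecutor(max_workers=1) as ex:
        start = time.monotonic()
        for f in [ex.submit(poll, f"pollster-{i}") for i in range(5)]:
            f.result()
        print(f"{time.monotonic() - start:.1f}s")   # ~0.5s, not ~0.1s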
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.245 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c375070>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.246 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7eff8d7fffe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.246 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c375070>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.247 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff9026f920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c375070>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.248 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c375070>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.248 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c375070>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.248 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffa10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c375070>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.248 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8daba2d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c375070>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.248 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c375070>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.249 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff90799b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c375070>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.249 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c375070>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.249 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8f46ebd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c375070>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.249 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c375070>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.249 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c375070>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.250 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c375070>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.250 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c375070>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.251 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c375070>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.251 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c375070>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.251 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c375070>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.251 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c375070>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.252 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c375070>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.252 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c375070>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.252 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c375070>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.252 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7fff50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c375070>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.253 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c375070>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.253 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7fffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c375070>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.253 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8ef7c7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c375070>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.250 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.254 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7eff8d8a80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.254 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.254 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7eff8d8a8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.254 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.255 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7eff8d8a8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.255 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.255 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7eff8d8a81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.255 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.256 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7eff8d7ff9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.256 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.256 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7eff8d7fe840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.256 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.257 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7eff8d8a82c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.257 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.257 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7eff8d7ff9b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.257 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.257 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7eff8d8a8350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.258 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.258 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7eff8f682330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.258 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.258 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7eff8d7ff4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.258 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.259 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7eff8d930c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.259 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.259 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7eff8d7ff4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.259 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.259 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7eff8d7ff530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.260 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.260 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7eff8d7ff590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.260 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.260 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7eff8d7ff5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.260 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.261 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7eff8d8a8620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.261 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.261 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7eff8d7ff650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.261 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.261 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7eff8d7ff6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.261 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.262 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7eff8d7ffa40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.262 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.262 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7eff8d7ff710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.262 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.262 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7eff8d7fff20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.263 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.263 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7eff8d7ff770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.263 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.263 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7eff8d7fff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.263 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.264 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7eff8d7fdac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.264 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.264 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.265 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.265 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.265 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.265 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.265 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.265 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.266 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.266 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.266 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.266 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.266 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.266 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.267 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.267 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.267 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.267 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.267 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.267 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.268 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.268 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.268 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.268 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.268 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.268 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:31:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:31:13.269 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:31:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_18:31:13
Dec  3 18:31:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 18:31:13 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 18:31:13 compute-0 ceph-mgr[193091]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.meta', 'backups', 'volumes', 'default.rgw.log', '.mgr', 'images', 'cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta', 'vms']
Dec  3 18:31:13 compute-0 ceph-mgr[193091]: [balancer INFO root] prepared 0/10 changes
Dec  3 18:31:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:31:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:31:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:31:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:31:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:31:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:31:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 18:31:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:31:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 18:31:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:31:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:31:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:31:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:31:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:31:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:31:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:31:14 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v952: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:31:16 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v953: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:31:17 compute-0 podman[401424]: 2025-12-03 18:31:17.906379035 +0000 UTC m=+0.076827634 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec  3 18:31:17 compute-0 podman[401423]: 2025-12-03 18:31:17.994504304 +0000 UTC m=+0.163680662 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 18:31:18 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:31:18 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v954: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:31:20 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v955: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:31:22 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v956: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:31:23 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:31:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:31:23.320 286999 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:31:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:31:23.320 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:31:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:31:23.320 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:31:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 18:31:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:31:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 18:31:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:31:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:31:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:31:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:31:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:31:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:31:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:31:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:31:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:31:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 18:31:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:31:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:31:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:31:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 18:31:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:31:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 18:31:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:31:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:31:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:31:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 18:31:24 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v957: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:31:26 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Dec  3 18:31:26 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3915721019' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Dec  3 18:31:26 compute-0 ceph-mgr[193091]: log_channel(audit) log [DBG] : from='client.14365 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec  3 18:31:26 compute-0 ceph-mgr[193091]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec  3 18:31:26 compute-0 ceph-mgr[193091]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec  3 18:31:26 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v958: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:31:28 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:31:28 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v959: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:31:29 compute-0 podman[158200]: time="2025-12-03T18:31:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:31:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:31:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42578 "" "Go-http-client/1.1"
Dec  3 18:31:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:31:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8113 "" "Go-http-client/1.1"
Dec  3 18:31:30 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v960: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:31:30 compute-0 podman[401471]: 2025-12-03 18:31:30.91031127 +0000 UTC m=+0.078804372 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  3 18:31:30 compute-0 podman[401473]: 2025-12-03 18:31:30.931695121 +0000 UTC m=+0.092883795 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, vendor=Red Hat, Inc., version=9.6, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, distribution-scope=public, architecture=x86_64, com.redhat.component=ubi9-minimal-container)
Dec  3 18:31:30 compute-0 podman[401472]: 2025-12-03 18:31:30.954034126 +0000 UTC m=+0.113941489 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 18:31:31 compute-0 openstack_network_exporter[365222]: ERROR   18:31:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:31:31 compute-0 openstack_network_exporter[365222]: ERROR   18:31:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:31:31 compute-0 openstack_network_exporter[365222]: ERROR   18:31:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:31:31 compute-0 openstack_network_exporter[365222]: ERROR   18:31:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:31:31 compute-0 openstack_network_exporter[365222]: ERROR   18:31:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:31:32 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v961: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:31:33 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:31:33 compute-0 podman[401535]: 2025-12-03 18:31:33.940561231 +0000 UTC m=+0.098412770 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  3 18:31:34 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v962: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:31:36 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v963: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:31:36 compute-0 podman[401553]: 2025-12-03 18:31:36.914795126 +0000 UTC m=+0.075254075 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm)
Dec  3 18:31:36 compute-0 podman[401552]: 2025-12-03 18:31:36.923494209 +0000 UTC m=+0.084440560 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, release-0.7.12=, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, release=1214.1726694543, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, managed_by=edpm_ansible, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, version=9.4)
Dec  3 18:31:38 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:31:38 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v964: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:31:40 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v965: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:31:41 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Dec  3 18:31:41 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/930240334' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Dec  3 18:31:41 compute-0 ceph-mgr[193091]: log_channel(audit) log [DBG] : from='client.14371 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec  3 18:31:41 compute-0 ceph-mgr[193091]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec  3 18:31:41 compute-0 ceph-mgr[193091]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec  3 18:31:42 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v966: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:31:42 compute-0 podman[401589]: 2025-12-03 18:31:42.94666837 +0000 UTC m=+0.112817971 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  3 18:31:43 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:31:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:31:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:31:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:31:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:31:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:31:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:31:44 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v967: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:31:46 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v968: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:31:48 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:31:48 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v969: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:31:48 compute-0 podman[401613]: 2025-12-03 18:31:48.923874191 +0000 UTC m=+0.085993667 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  3 18:31:48 compute-0 podman[401612]: 2025-12-03 18:31:48.981853845 +0000 UTC m=+0.141263315 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  3 18:31:50 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v970: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:31:52 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v971: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:31:53 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:31:54 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v972: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:31:56 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v973: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:31:58 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:31:58 compute-0 nova_compute[348325]: 2025-12-03 18:31:58.353 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:31:58 compute-0 nova_compute[348325]: 2025-12-03 18:31:58.354 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:31:58 compute-0 nova_compute[348325]: 2025-12-03 18:31:58.355 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  3 18:31:58 compute-0 nova_compute[348325]: 2025-12-03 18:31:58.355 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  3 18:31:58 compute-0 nova_compute[348325]: 2025-12-03 18:31:58.472 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  3 18:31:58 compute-0 nova_compute[348325]: 2025-12-03 18:31:58.472 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:31:58 compute-0 nova_compute[348325]: 2025-12-03 18:31:58.473 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:31:58 compute-0 nova_compute[348325]: 2025-12-03 18:31:58.474 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:31:58 compute-0 nova_compute[348325]: 2025-12-03 18:31:58.474 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:31:58 compute-0 nova_compute[348325]: 2025-12-03 18:31:58.474 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:31:58 compute-0 nova_compute[348325]: 2025-12-03 18:31:58.475 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:31:58 compute-0 nova_compute[348325]: 2025-12-03 18:31:58.475 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  3 18:31:58 compute-0 nova_compute[348325]: 2025-12-03 18:31:58.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:31:58 compute-0 nova_compute[348325]: 2025-12-03 18:31:58.519 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:31:58 compute-0 nova_compute[348325]: 2025-12-03 18:31:58.520 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:31:58 compute-0 nova_compute[348325]: 2025-12-03 18:31:58.520 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:31:58 compute-0 nova_compute[348325]: 2025-12-03 18:31:58.520 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  3 18:31:58 compute-0 nova_compute[348325]: 2025-12-03 18:31:58.521 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 18:31:58 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v974: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:31:59 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:31:59 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3489515276' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:31:59 compute-0 nova_compute[348325]: 2025-12-03 18:31:59.050 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 18:31:59 compute-0 nova_compute[348325]: 2025-12-03 18:31:59.557 348329 WARNING nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  3 18:31:59 compute-0 nova_compute[348325]: 2025-12-03 18:31:59.559 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4603MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  3 18:31:59 compute-0 nova_compute[348325]: 2025-12-03 18:31:59.559 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:31:59 compute-0 nova_compute[348325]: 2025-12-03 18:31:59.560 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:31:59 compute-0 nova_compute[348325]: 2025-12-03 18:31:59.637 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  3 18:31:59 compute-0 nova_compute[348325]: 2025-12-03 18:31:59.638 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  3 18:31:59 compute-0 nova_compute[348325]: 2025-12-03 18:31:59.660 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 18:31:59 compute-0 podman[158200]: time="2025-12-03T18:31:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:31:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:31:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42578 "" "Go-http-client/1.1"
Dec  3 18:31:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:31:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8114 "" "Go-http-client/1.1"
Dec  3 18:32:00 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:32:00 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4228238383' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:32:00 compute-0 nova_compute[348325]: 2025-12-03 18:32:00.152 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 18:32:00 compute-0 nova_compute[348325]: 2025-12-03 18:32:00.165 348329 DEBUG nova.compute.provider_tree [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  3 18:32:00 compute-0 nova_compute[348325]: 2025-12-03 18:32:00.180 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  3 18:32:00 compute-0 nova_compute[348325]: 2025-12-03 18:32:00.182 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  3 18:32:00 compute-0 nova_compute[348325]: 2025-12-03 18:32:00.183 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.623s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:32:00 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v975: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:32:01 compute-0 nova_compute[348325]: 2025-12-03 18:32:01.174 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:32:01 compute-0 openstack_network_exporter[365222]: ERROR   18:32:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:32:01 compute-0 openstack_network_exporter[365222]: ERROR   18:32:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:32:01 compute-0 openstack_network_exporter[365222]: ERROR   18:32:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:32:01 compute-0 openstack_network_exporter[365222]: ERROR   18:32:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:32:01 compute-0 openstack_network_exporter[365222]: ERROR   18:32:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:32:01 compute-0 podman[401702]: 2025-12-03 18:32:01.954608789 +0000 UTC m=+0.100931852 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd)
Dec  3 18:32:01 compute-0 podman[401704]: 2025-12-03 18:32:01.961026996 +0000 UTC m=+0.109323447 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, container_name=openstack_network_exporter, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, vcs-type=git, architecture=x86_64, build-date=2025-08-20T13:12:41, release=1755695350, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  3 18:32:01 compute-0 podman[401703]: 2025-12-03 18:32:01.963848065 +0000 UTC m=+0.104883349 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  3 18:32:02 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v976: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:32:03 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:32:04 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v977: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:32:04 compute-0 podman[401762]: 2025-12-03 18:32:04.952021119 +0000 UTC m=+0.117599708 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 18:32:06 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v978: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:32:07 compute-0 podman[401779]: 2025-12-03 18:32:07.957783134 +0000 UTC m=+0.122113119 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, com.redhat.component=ubi9-container, distribution-scope=public, release=1214.1726694543, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, config_id=edpm, build-date=2024-09-18T21:23:30, name=ubi9, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, architecture=x86_64, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec  3 18:32:07 compute-0 podman[401780]: 2025-12-03 18:32:07.965229875 +0000 UTC m=+0.115556119 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  3 18:32:08 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:32:08 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v979: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:32:10 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v980: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:32:12 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v981: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:32:13 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:32:13 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:32:13 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:32:13 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 18:32:13 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:32:13 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 18:32:13 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:32:13 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev ff6e61c3-3780-4ab6-a1e6-8349ac85a588 does not exist
Dec  3 18:32:13 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev e3846f9c-f94e-4cb2-a0ef-0c4d67fbbdb2 does not exist
Dec  3 18:32:13 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 036e0d32-fdcd-419b-b4c5-d0511f66b7ab does not exist
Dec  3 18:32:13 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 18:32:13 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 18:32:13 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 18:32:13 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:32:13 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:32:13 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:32:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_18:32:13
Dec  3 18:32:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 18:32:13 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 18:32:13 compute-0 ceph-mgr[193091]: [balancer INFO root] pools ['default.rgw.log', '.mgr', 'images', '.rgw.root', 'cephfs.cephfs.data', 'backups', 'cephfs.cephfs.meta', 'default.rgw.meta', 'volumes', 'vms', 'default.rgw.control']
Dec  3 18:32:13 compute-0 ceph-mgr[193091]: [balancer INFO root] prepared 0/10 changes
Dec  3 18:32:13 compute-0 podman[401967]: 2025-12-03 18:32:13.921316601 +0000 UTC m=+0.139587124 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 18:32:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:32:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:32:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:32:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:32:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:32:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:32:14 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:32:14 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:32:14 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:32:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 18:32:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:32:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 18:32:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:32:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:32:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:32:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:32:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:32:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:32:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:32:14 compute-0 podman[402100]: 2025-12-03 18:32:14.771717955 +0000 UTC m=+0.079773826 container create 048a05d2fa4df060cb1f8b0f16ad222fe29c501633ab1cfbe755db88a4621f3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_edison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  3 18:32:14 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v982: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:32:14 compute-0 podman[402100]: 2025-12-03 18:32:14.746346117 +0000 UTC m=+0.054401998 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:32:14 compute-0 systemd[1]: Started libpod-conmon-048a05d2fa4df060cb1f8b0f16ad222fe29c501633ab1cfbe755db88a4621f3f.scope.
Dec  3 18:32:14 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:32:14 compute-0 podman[402100]: 2025-12-03 18:32:14.920733019 +0000 UTC m=+0.228788880 container init 048a05d2fa4df060cb1f8b0f16ad222fe29c501633ab1cfbe755db88a4621f3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:32:14 compute-0 podman[402100]: 2025-12-03 18:32:14.939902446 +0000 UTC m=+0.247958327 container start 048a05d2fa4df060cb1f8b0f16ad222fe29c501633ab1cfbe755db88a4621f3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_edison, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  3 18:32:14 compute-0 podman[402100]: 2025-12-03 18:32:14.947281826 +0000 UTC m=+0.255337697 container attach 048a05d2fa4df060cb1f8b0f16ad222fe29c501633ab1cfbe755db88a4621f3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_edison, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:32:14 compute-0 trusting_edison[402115]: 167 167
Dec  3 18:32:14 compute-0 systemd[1]: libpod-048a05d2fa4df060cb1f8b0f16ad222fe29c501633ab1cfbe755db88a4621f3f.scope: Deactivated successfully.
Dec  3 18:32:14 compute-0 podman[402100]: 2025-12-03 18:32:14.954171474 +0000 UTC m=+0.262227365 container died 048a05d2fa4df060cb1f8b0f16ad222fe29c501633ab1cfbe755db88a4621f3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_edison, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  3 18:32:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-27e8ee3971672a63b018061f40c7a1f5da780b1022b2e74dafd199b4c58157fa-merged.mount: Deactivated successfully.
Dec  3 18:32:15 compute-0 podman[402100]: 2025-12-03 18:32:15.034253187 +0000 UTC m=+0.342309068 container remove 048a05d2fa4df060cb1f8b0f16ad222fe29c501633ab1cfbe755db88a4621f3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_edison, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec  3 18:32:15 compute-0 systemd[1]: libpod-conmon-048a05d2fa4df060cb1f8b0f16ad222fe29c501633ab1cfbe755db88a4621f3f.scope: Deactivated successfully.
Dec  3 18:32:15 compute-0 podman[402137]: 2025-12-03 18:32:15.291094748 +0000 UTC m=+0.092687490 container create 512e471e76b82aa1a7d32b333451e7794ff6139661fe04b285dd1afee27e0d0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_lamarr, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  3 18:32:15 compute-0 systemd[1]: Started libpod-conmon-512e471e76b82aa1a7d32b333451e7794ff6139661fe04b285dd1afee27e0d0a.scope.
Dec  3 18:32:15 compute-0 podman[402137]: 2025-12-03 18:32:15.266616132 +0000 UTC m=+0.068208894 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:32:15 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:32:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48ff1e0fa0956e146fe3d1229e2edf725cbe6e58076d1523c33f1b68afa57d96/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:32:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48ff1e0fa0956e146fe3d1229e2edf725cbe6e58076d1523c33f1b68afa57d96/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:32:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48ff1e0fa0956e146fe3d1229e2edf725cbe6e58076d1523c33f1b68afa57d96/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:32:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48ff1e0fa0956e146fe3d1229e2edf725cbe6e58076d1523c33f1b68afa57d96/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:32:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48ff1e0fa0956e146fe3d1229e2edf725cbe6e58076d1523c33f1b68afa57d96/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 18:32:15 compute-0 podman[402137]: 2025-12-03 18:32:15.44784471 +0000 UTC m=+0.249437472 container init 512e471e76b82aa1a7d32b333451e7794ff6139661fe04b285dd1afee27e0d0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_lamarr, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:32:15 compute-0 podman[402137]: 2025-12-03 18:32:15.458432429 +0000 UTC m=+0.260025131 container start 512e471e76b82aa1a7d32b333451e7794ff6139661fe04b285dd1afee27e0d0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_lamarr, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:32:15 compute-0 podman[402137]: 2025-12-03 18:32:15.46423221 +0000 UTC m=+0.265824952 container attach 512e471e76b82aa1a7d32b333451e7794ff6139661fe04b285dd1afee27e0d0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:32:16 compute-0 boring_lamarr[402153]: --> passed data devices: 0 physical, 3 LVM
Dec  3 18:32:16 compute-0 boring_lamarr[402153]: --> relative data size: 1.0
Dec  3 18:32:16 compute-0 boring_lamarr[402153]: --> All data devices are unavailable
Dec  3 18:32:16 compute-0 systemd[1]: libpod-512e471e76b82aa1a7d32b333451e7794ff6139661fe04b285dd1afee27e0d0a.scope: Deactivated successfully.
Dec  3 18:32:16 compute-0 systemd[1]: libpod-512e471e76b82aa1a7d32b333451e7794ff6139661fe04b285dd1afee27e0d0a.scope: Consumed 1.130s CPU time.
Dec  3 18:32:16 compute-0 podman[402137]: 2025-12-03 18:32:16.639668948 +0000 UTC m=+1.441261690 container died 512e471e76b82aa1a7d32b333451e7794ff6139661fe04b285dd1afee27e0d0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Dec  3 18:32:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-48ff1e0fa0956e146fe3d1229e2edf725cbe6e58076d1523c33f1b68afa57d96-merged.mount: Deactivated successfully.
Dec  3 18:32:16 compute-0 podman[402137]: 2025-12-03 18:32:16.743456839 +0000 UTC m=+1.545049551 container remove 512e471e76b82aa1a7d32b333451e7794ff6139661fe04b285dd1afee27e0d0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_lamarr, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec  3 18:32:16 compute-0 systemd[1]: libpod-conmon-512e471e76b82aa1a7d32b333451e7794ff6139661fe04b285dd1afee27e0d0a.scope: Deactivated successfully.
Dec  3 18:32:16 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v983: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:32:17 compute-0 podman[402333]: 2025-12-03 18:32:17.956731261 +0000 UTC m=+0.089978065 container create 066c25411688d4cc0c143ae505d44f21f3b75058d5d62bbcf85eecc76bfae639 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  3 18:32:18 compute-0 podman[402333]: 2025-12-03 18:32:17.923703395 +0000 UTC m=+0.056950279 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:32:18 compute-0 systemd[1]: Started libpod-conmon-066c25411688d4cc0c143ae505d44f21f3b75058d5d62bbcf85eecc76bfae639.scope.
Dec  3 18:32:18 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:32:18 compute-0 podman[402333]: 2025-12-03 18:32:18.115186764 +0000 UTC m=+0.248433608 container init 066c25411688d4cc0c143ae505d44f21f3b75058d5d62bbcf85eecc76bfae639 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_kilby, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:32:18 compute-0 podman[402333]: 2025-12-03 18:32:18.140353037 +0000 UTC m=+0.273599861 container start 066c25411688d4cc0c143ae505d44f21f3b75058d5d62bbcf85eecc76bfae639 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec  3 18:32:18 compute-0 nervous_kilby[402349]: 167 167
Dec  3 18:32:18 compute-0 systemd[1]: libpod-066c25411688d4cc0c143ae505d44f21f3b75058d5d62bbcf85eecc76bfae639.scope: Deactivated successfully.
Dec  3 18:32:18 compute-0 podman[402333]: 2025-12-03 18:32:18.147878112 +0000 UTC m=+0.281124956 container attach 066c25411688d4cc0c143ae505d44f21f3b75058d5d62bbcf85eecc76bfae639 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_kilby, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec  3 18:32:18 compute-0 podman[402333]: 2025-12-03 18:32:18.149671505 +0000 UTC m=+0.282918329 container died 066c25411688d4cc0c143ae505d44f21f3b75058d5d62bbcf85eecc76bfae639 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_kilby, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec  3 18:32:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f61089449be14431adba76bb3621a2ecebfec6f7b49307e25cf1bb5b98dda09-merged.mount: Deactivated successfully.
Dec  3 18:32:18 compute-0 podman[402333]: 2025-12-03 18:32:18.228392544 +0000 UTC m=+0.361639338 container remove 066c25411688d4cc0c143ae505d44f21f3b75058d5d62bbcf85eecc76bfae639 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_kilby, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:32:18 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:32:18 compute-0 systemd[1]: libpod-conmon-066c25411688d4cc0c143ae505d44f21f3b75058d5d62bbcf85eecc76bfae639.scope: Deactivated successfully.
Dec  3 18:32:18 compute-0 podman[402372]: 2025-12-03 18:32:18.483937745 +0000 UTC m=+0.079432008 container create b40400cc556418a6605bd90d2b171ec66aa77ce6c74662705dc24ba3fa95d843 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:32:18 compute-0 podman[402372]: 2025-12-03 18:32:18.449760421 +0000 UTC m=+0.045254734 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:32:18 compute-0 systemd[1]: Started libpod-conmon-b40400cc556418a6605bd90d2b171ec66aa77ce6c74662705dc24ba3fa95d843.scope.
Dec  3 18:32:18 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:32:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8ad715e0394b1d41892b7345ef0b5e8c20ddacd26f1095c4399c92a555d8cc0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:32:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8ad715e0394b1d41892b7345ef0b5e8c20ddacd26f1095c4399c92a555d8cc0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:32:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8ad715e0394b1d41892b7345ef0b5e8c20ddacd26f1095c4399c92a555d8cc0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:32:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8ad715e0394b1d41892b7345ef0b5e8c20ddacd26f1095c4399c92a555d8cc0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:32:18 compute-0 podman[402372]: 2025-12-03 18:32:18.643973777 +0000 UTC m=+0.239468070 container init b40400cc556418a6605bd90d2b171ec66aa77ce6c74662705dc24ba3fa95d843 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_jepsen, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  3 18:32:18 compute-0 podman[402372]: 2025-12-03 18:32:18.65723899 +0000 UTC m=+0.252733223 container start b40400cc556418a6605bd90d2b171ec66aa77ce6c74662705dc24ba3fa95d843 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_jepsen, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec  3 18:32:18 compute-0 podman[402372]: 2025-12-03 18:32:18.666003364 +0000 UTC m=+0.261497597 container attach b40400cc556418a6605bd90d2b171ec66aa77ce6c74662705dc24ba3fa95d843 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  3 18:32:18 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v984: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]: {
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:    "0": [
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:        {
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:            "devices": [
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:                "/dev/loop3"
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:            ],
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:            "lv_name": "ceph_lv0",
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:            "lv_size": "21470642176",
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=973fbbc8-5aff-4a53-bee8-42e5a6788dd6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:            "lv_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:            "name": "ceph_lv0",
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:            "tags": {
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:                "ceph.block_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:                "ceph.cluster_name": "ceph",
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:                "ceph.crush_device_class": "",
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:                "ceph.encrypted": "0",
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:                "ceph.osd_fsid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:                "ceph.osd_id": "0",
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:                "ceph.type": "block",
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:                "ceph.vdo": "0"
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:            },
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:            "type": "block",
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:            "vg_name": "ceph_vg0"
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:        }
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:    ],
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:    "1": [
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:        {
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:            "devices": [
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:                "/dev/loop4"
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:            ],
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:            "lv_name": "ceph_lv1",
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:            "lv_size": "21470642176",
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1e2b0083-5293-47cb-a3d1-bc27cedc4ede,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:            "lv_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:            "name": "ceph_lv1",
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:            "tags": {
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:                "ceph.block_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:                "ceph.cluster_name": "ceph",
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:                "ceph.crush_device_class": "",
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:                "ceph.encrypted": "0",
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:                "ceph.osd_fsid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:                "ceph.osd_id": "1",
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:                "ceph.type": "block",
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:                "ceph.vdo": "0"
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:            },
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:            "type": "block",
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:            "vg_name": "ceph_vg1"
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:        }
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:    ],
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:    "2": [
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:        {
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:            "devices": [
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:                "/dev/loop5"
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:            ],
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:            "lv_name": "ceph_lv2",
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:            "lv_size": "21470642176",
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2abec9de-afba-437e-9a17-384a1dd8cd50,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:            "lv_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:            "name": "ceph_lv2",
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:            "tags": {
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:                "ceph.block_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:                "ceph.cluster_name": "ceph",
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:                "ceph.crush_device_class": "",
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:                "ceph.encrypted": "0",
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:                "ceph.osd_fsid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:                "ceph.osd_id": "2",
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:                "ceph.type": "block",
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:                "ceph.vdo": "0"
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:            },
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:            "type": "block",
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:            "vg_name": "ceph_vg2"
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:        }
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]:    ]
Dec  3 18:32:19 compute-0 naughty_jepsen[402387]: }
Dec  3 18:32:19 compute-0 systemd[1]: libpod-b40400cc556418a6605bd90d2b171ec66aa77ce6c74662705dc24ba3fa95d843.scope: Deactivated successfully.
Dec  3 18:32:19 compute-0 podman[402372]: 2025-12-03 18:32:19.495133509 +0000 UTC m=+1.090627742 container died b40400cc556418a6605bd90d2b171ec66aa77ce6c74662705dc24ba3fa95d843 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_jepsen, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:32:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-a8ad715e0394b1d41892b7345ef0b5e8c20ddacd26f1095c4399c92a555d8cc0-merged.mount: Deactivated successfully.
Dec  3 18:32:19 compute-0 podman[402372]: 2025-12-03 18:32:19.615354551 +0000 UTC m=+1.210848824 container remove b40400cc556418a6605bd90d2b171ec66aa77ce6c74662705dc24ba3fa95d843 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_jepsen, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec  3 18:32:19 compute-0 systemd[1]: libpod-conmon-b40400cc556418a6605bd90d2b171ec66aa77ce6c74662705dc24ba3fa95d843.scope: Deactivated successfully.
Dec  3 18:32:19 compute-0 podman[402406]: 2025-12-03 18:32:19.694692754 +0000 UTC m=+0.143597021 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  3 18:32:19 compute-0 podman[402399]: 2025-12-03 18:32:19.743951625 +0000 UTC m=+0.194242966 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible)
Dec  3 18:32:20 compute-0 podman[402586]: 2025-12-03 18:32:20.645011865 +0000 UTC m=+0.081728554 container create 20594acc40926ad8bb39b0d36e151962cf0d18b8393d49c06ae77c7db0f03348 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_keldysh, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 18:32:20 compute-0 podman[402586]: 2025-12-03 18:32:20.612226016 +0000 UTC m=+0.048942675 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:32:20 compute-0 systemd[1]: Started libpod-conmon-20594acc40926ad8bb39b0d36e151962cf0d18b8393d49c06ae77c7db0f03348.scope.
Dec  3 18:32:20 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:32:20 compute-0 podman[402586]: 2025-12-03 18:32:20.792231164 +0000 UTC m=+0.228947893 container init 20594acc40926ad8bb39b0d36e151962cf0d18b8393d49c06ae77c7db0f03348 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_keldysh, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  3 18:32:20 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v985: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:32:20 compute-0 podman[402586]: 2025-12-03 18:32:20.80968065 +0000 UTC m=+0.246397319 container start 20594acc40926ad8bb39b0d36e151962cf0d18b8393d49c06ae77c7db0f03348 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_keldysh, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:32:20 compute-0 podman[402586]: 2025-12-03 18:32:20.815358018 +0000 UTC m=+0.252074687 container attach 20594acc40926ad8bb39b0d36e151962cf0d18b8393d49c06ae77c7db0f03348 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:32:20 compute-0 naughty_keldysh[402602]: 167 167
Dec  3 18:32:20 compute-0 systemd[1]: libpod-20594acc40926ad8bb39b0d36e151962cf0d18b8393d49c06ae77c7db0f03348.scope: Deactivated successfully.
Dec  3 18:32:20 compute-0 podman[402586]: 2025-12-03 18:32:20.820415521 +0000 UTC m=+0.257132220 container died 20594acc40926ad8bb39b0d36e151962cf0d18b8393d49c06ae77c7db0f03348 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_keldysh, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec  3 18:32:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-eefa7d5946bcc10428ff29bd44f892eb7f2fb455dbf897bc4e13ae6d328ed392-merged.mount: Deactivated successfully.
Dec  3 18:32:20 compute-0 podman[402586]: 2025-12-03 18:32:20.89461559 +0000 UTC m=+0.331332239 container remove 20594acc40926ad8bb39b0d36e151962cf0d18b8393d49c06ae77c7db0f03348 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_keldysh, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:32:20 compute-0 systemd[1]: libpod-conmon-20594acc40926ad8bb39b0d36e151962cf0d18b8393d49c06ae77c7db0f03348.scope: Deactivated successfully.
Dec  3 18:32:21 compute-0 podman[402625]: 2025-12-03 18:32:21.186660481 +0000 UTC m=+0.082140394 container create 1c1401e3cb02607d07a771f890d921a21b2807cba91928554fd117f8b0e20395 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_napier, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:32:21 compute-0 systemd[1]: Started libpod-conmon-1c1401e3cb02607d07a771f890d921a21b2807cba91928554fd117f8b0e20395.scope.
Dec  3 18:32:21 compute-0 podman[402625]: 2025-12-03 18:32:21.162292916 +0000 UTC m=+0.057772919 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:32:21 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:32:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/658c3365f7fd7263fe0556e9acca3e109c87b025933f5f973dace811b2912d53/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:32:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/658c3365f7fd7263fe0556e9acca3e109c87b025933f5f973dace811b2912d53/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:32:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/658c3365f7fd7263fe0556e9acca3e109c87b025933f5f973dace811b2912d53/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:32:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/658c3365f7fd7263fe0556e9acca3e109c87b025933f5f973dace811b2912d53/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:32:21 compute-0 podman[402625]: 2025-12-03 18:32:21.330715493 +0000 UTC m=+0.226195426 container init 1c1401e3cb02607d07a771f890d921a21b2807cba91928554fd117f8b0e20395 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec  3 18:32:21 compute-0 podman[402625]: 2025-12-03 18:32:21.348933887 +0000 UTC m=+0.244413790 container start 1c1401e3cb02607d07a771f890d921a21b2807cba91928554fd117f8b0e20395 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_napier, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:32:21 compute-0 podman[402625]: 2025-12-03 18:32:21.35358656 +0000 UTC m=+0.249066473 container attach 1c1401e3cb02607d07a771f890d921a21b2807cba91928554fd117f8b0e20395 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_napier, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  3 18:32:22 compute-0 admiring_napier[402641]: {
Dec  3 18:32:22 compute-0 admiring_napier[402641]:    "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
Dec  3 18:32:22 compute-0 admiring_napier[402641]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:32:22 compute-0 admiring_napier[402641]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 18:32:22 compute-0 admiring_napier[402641]:        "osd_id": 1,
Dec  3 18:32:22 compute-0 admiring_napier[402641]:        "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:32:22 compute-0 admiring_napier[402641]:        "type": "bluestore"
Dec  3 18:32:22 compute-0 admiring_napier[402641]:    },
Dec  3 18:32:22 compute-0 admiring_napier[402641]:    "2abec9de-afba-437e-9a17-384a1dd8cd50": {
Dec  3 18:32:22 compute-0 admiring_napier[402641]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:32:22 compute-0 admiring_napier[402641]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 18:32:22 compute-0 admiring_napier[402641]:        "osd_id": 2,
Dec  3 18:32:22 compute-0 admiring_napier[402641]:        "osd_uuid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:32:22 compute-0 admiring_napier[402641]:        "type": "bluestore"
Dec  3 18:32:22 compute-0 admiring_napier[402641]:    },
Dec  3 18:32:22 compute-0 admiring_napier[402641]:    "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": {
Dec  3 18:32:22 compute-0 admiring_napier[402641]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:32:22 compute-0 admiring_napier[402641]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 18:32:22 compute-0 admiring_napier[402641]:        "osd_id": 0,
Dec  3 18:32:22 compute-0 admiring_napier[402641]:        "osd_uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:32:22 compute-0 admiring_napier[402641]:        "type": "bluestore"
Dec  3 18:32:22 compute-0 admiring_napier[402641]:    }
Dec  3 18:32:22 compute-0 admiring_napier[402641]: }
Dec  3 18:32:22 compute-0 systemd[1]: libpod-1c1401e3cb02607d07a771f890d921a21b2807cba91928554fd117f8b0e20395.scope: Deactivated successfully.
Dec  3 18:32:22 compute-0 systemd[1]: libpod-1c1401e3cb02607d07a771f890d921a21b2807cba91928554fd117f8b0e20395.scope: Consumed 1.157s CPU time.
Dec  3 18:32:22 compute-0 podman[402625]: 2025-12-03 18:32:22.501860167 +0000 UTC m=+1.397340080 container died 1c1401e3cb02607d07a771f890d921a21b2807cba91928554fd117f8b0e20395 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_napier, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  3 18:32:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-658c3365f7fd7263fe0556e9acca3e109c87b025933f5f973dace811b2912d53-merged.mount: Deactivated successfully.
Dec  3 18:32:22 compute-0 podman[402625]: 2025-12-03 18:32:22.573710109 +0000 UTC m=+1.469190022 container remove 1c1401e3cb02607d07a771f890d921a21b2807cba91928554fd117f8b0e20395 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_napier, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:32:22 compute-0 systemd[1]: libpod-conmon-1c1401e3cb02607d07a771f890d921a21b2807cba91928554fd117f8b0e20395.scope: Deactivated successfully.
Dec  3 18:32:22 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:32:22 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:32:22 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:32:22 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:32:22 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 3c1da285-2e63-4b65-a400-9514cd1b9cd2 does not exist
Dec  3 18:32:22 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev d4a48889-c137-4336-8764-838f0af9577f does not exist
Dec  3 18:32:22 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v986: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:32:23 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:32:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:32:23.321 286999 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:32:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:32:23.323 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:32:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:32:23.323 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:32:23 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:32:23 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:32:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 18:32:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:32:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 18:32:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:32:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:32:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:32:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:32:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:32:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:32:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:32:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:32:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:32:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 18:32:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:32:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:32:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:32:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 18:32:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:32:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 18:32:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:32:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:32:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:32:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 18:32:24 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v987: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:32:26 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 18:32:26 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.0 total, 600.0 interval#012Cumulative writes: 4606 writes, 20K keys, 4606 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s#012Cumulative WAL: 4606 writes, 4606 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1306 writes, 5673 keys, 1306 commit groups, 1.0 writes per commit group, ingest: 8.54 MB, 0.01 MB/s#012Interval WAL: 1306 writes, 1306 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     96.6      0.23              0.10        11    0.021       0      0       0.0       0.0#012  L6      1/0    6.83 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.2     93.9     77.1      0.90              0.28        10    0.090     43K   5265       0.0       0.0#012 Sum      1/0    6.83 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.2     75.0     81.0      1.13              0.38        21    0.054     43K   5265       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.3     99.3     99.4      0.36              0.12         8    0.045     18K   2064       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     93.9     77.1      0.90              0.28        10    0.090     43K   5265       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     99.7      0.22              0.10        10    0.022       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      6.8      0.01              0.00         1    0.008       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1800.0 total, 600.0 interval#012Flush(GB): cumulative 0.022, interval 0.007#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.09 GB write, 0.05 MB/s write, 0.08 GB read, 0.05 MB/s read, 1.1 seconds#012Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.06 MB/s read, 0.4 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55911062f1f0#2 capacity: 308.00 MB usage: 6.35 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 7.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(411,5.99 MB,1.9459%) FilterBlock(22,127.80 KB,0.04052%) IndexBlock(22,242.20 KB,0.0767943%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Dec  3 18:32:26 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v988: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:32:28 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:32:28 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v989: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:32:29 compute-0 podman[158200]: time="2025-12-03T18:32:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:32:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:32:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42578 "" "Go-http-client/1.1"
Dec  3 18:32:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:32:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8119 "" "Go-http-client/1.1"
Dec  3 18:32:30 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v990: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:32:31 compute-0 openstack_network_exporter[365222]: ERROR   18:32:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:32:31 compute-0 openstack_network_exporter[365222]: ERROR   18:32:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:32:31 compute-0 openstack_network_exporter[365222]: ERROR   18:32:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:32:31 compute-0 openstack_network_exporter[365222]: ERROR   18:32:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:32:31 compute-0 openstack_network_exporter[365222]: ERROR   18:32:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:32:32 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v991: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:32:32 compute-0 podman[402738]: 2025-12-03 18:32:32.968165922 +0000 UTC m=+0.125121462 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, com.redhat.component=ubi9-minimal-container, config_id=edpm, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, architecture=x86_64, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1755695350, io.buildah.version=1.33.7)
Dec  3 18:32:32 compute-0 podman[402736]: 2025-12-03 18:32:32.976236859 +0000 UTC m=+0.132485291 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec  3 18:32:32 compute-0 podman[402737]: 2025-12-03 18:32:32.980517903 +0000 UTC m=+0.138088327 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 18:32:33 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:32:33 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Dec  3 18:32:33 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:32:33.951530) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 18:32:33 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Dec  3 18:32:33 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764786753951606, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1390, "num_deletes": 251, "total_data_size": 2163387, "memory_usage": 2200544, "flush_reason": "Manual Compaction"}
Dec  3 18:32:33 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Dec  3 18:32:33 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764786753969842, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 2131788, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19523, "largest_seqno": 20912, "table_properties": {"data_size": 2125292, "index_size": 3695, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13556, "raw_average_key_size": 19, "raw_value_size": 2112235, "raw_average_value_size": 3083, "num_data_blocks": 169, "num_entries": 685, "num_filter_entries": 685, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764786609, "oldest_key_time": 1764786609, "file_creation_time": 1764786753, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a1ac3b74-8599-4a51-8b4c-6fd35a134427", "db_session_id": "TYOLZSJOOVNJYKF8Y1CE", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Dec  3 18:32:33 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 18419 microseconds, and 10365 cpu microseconds.
Dec  3 18:32:33 compute-0 ceph-mon[192802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 18:32:33 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:32:33.969938) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 2131788 bytes OK
Dec  3 18:32:33 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:32:33.969975) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Dec  3 18:32:33 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:32:33.973367) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Dec  3 18:32:33 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:32:33.973388) EVENT_LOG_v1 {"time_micros": 1764786753973381, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 18:32:33 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:32:33.973412) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 18:32:33 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 2157222, prev total WAL file size 2157222, number of live WAL files 2.
Dec  3 18:32:33 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 18:32:33 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:32:33.975094) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Dec  3 18:32:33 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 18:32:33 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(2081KB)], [47(6992KB)]
Dec  3 18:32:33 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764786753975147, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 9291782, "oldest_snapshot_seqno": -1}
Dec  3 18:32:34 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4295 keys, 7553115 bytes, temperature: kUnknown
Dec  3 18:32:34 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764786754058086, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 7553115, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7523374, "index_size": 17919, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10757, "raw_key_size": 106221, "raw_average_key_size": 24, "raw_value_size": 7444473, "raw_average_value_size": 1733, "num_data_blocks": 753, "num_entries": 4295, "num_filter_entries": 4295, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764784942, "oldest_key_time": 0, "file_creation_time": 1764786753, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a1ac3b74-8599-4a51-8b4c-6fd35a134427", "db_session_id": "TYOLZSJOOVNJYKF8Y1CE", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Dec  3 18:32:34 compute-0 ceph-mon[192802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 18:32:34 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:32:34.059285) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 7553115 bytes
Dec  3 18:32:34 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:32:34.062995) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 110.8 rd, 90.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 6.8 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(7.9) write-amplify(3.5) OK, records in: 4809, records dropped: 514 output_compression: NoCompression
Dec  3 18:32:34 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:32:34.063034) EVENT_LOG_v1 {"time_micros": 1764786754063016, "job": 24, "event": "compaction_finished", "compaction_time_micros": 83882, "compaction_time_cpu_micros": 35488, "output_level": 6, "num_output_files": 1, "total_output_size": 7553115, "num_input_records": 4809, "num_output_records": 4295, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 18:32:34 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 18:32:34 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764786754064149, "job": 24, "event": "table_file_deletion", "file_number": 49}
Dec  3 18:32:34 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 18:32:34 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764786754067107, "job": 24, "event": "table_file_deletion", "file_number": 47}
Dec  3 18:32:34 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:32:33.974789) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:32:34 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:32:34.067346) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:32:34 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:32:34.067354) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:32:34 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:32:34.067359) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:32:34 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:32:34.067363) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:32:34 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:32:34.067368) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:32:34 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v992: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:32:35 compute-0 podman[402796]: 2025-12-03 18:32:35.977073733 +0000 UTC m=+0.132471870 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Dec  3 18:32:36 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v993: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:32:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 18:32:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/9027289' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 18:32:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 18:32:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/9027289' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 18:32:38 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:32:38 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v994: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:32:39 compute-0 podman[402814]: 2025-12-03 18:32:39.002303832 +0000 UTC m=+0.155607515 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, release-0.7.12=, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.openshift.expose-services=, managed_by=edpm_ansible, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, container_name=kepler)
Dec  3 18:32:39 compute-0 podman[402815]: 2025-12-03 18:32:39.003093172 +0000 UTC m=+0.148842081 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm)
Dec  3 18:32:40 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v995: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:32:42 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v996: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:32:43 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:32:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:32:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:32:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:32:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:32:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:32:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:32:44 compute-0 podman[402855]: 2025-12-03 18:32:44.801874932 +0000 UTC m=+0.107026040 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  3 18:32:44 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v997: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:32:46 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v998: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:32:46 compute-0 systemd-logind[784]: New session 61 of user zuul.
Dec  3 18:32:46 compute-0 systemd[1]: Started Session 61 of User zuul.
Dec  3 18:32:48 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:32:48 compute-0 python3[403057]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 18:32:48 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v999: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:32:50 compute-0 podman[403166]: 2025-12-03 18:32:50.028429763 +0000 UTC m=+0.167661289 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  3 18:32:50 compute-0 podman[403165]: 2025-12-03 18:32:50.035979217 +0000 UTC m=+0.184920510 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20251125)
Dec  3 18:32:50 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1000: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:32:50 compute-0 python3[403336]: ansible-ansible.legacy.command Invoked with _raw_params=tstamp=$(date -d '30 minute ago' "+%Y-%m-%d %H:%M:%S")#012journalctl -t "ceilometer_agent_compute" --no-pager -S "${tstamp}"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:32:52 compute-0 python3[403489]: ansible-ansible.legacy.command Invoked with _raw_params=tstamp=$(date -d '30 minute ago' "+%Y-%m-%d %H:%M:%S")#012journalctl -t "nova_compute" --no-pager -S "${tstamp}"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:32:52 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1001: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:32:53 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:32:54 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1002: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:32:55 compute-0 python3[403640]: ansible-ansible.builtin.stat Invoked with path=/etc/rsyslog.d/10-telemetry.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  3 18:32:56 compute-0 nova_compute[348325]: 2025-12-03 18:32:56.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:32:56 compute-0 nova_compute[348325]: 2025-12-03 18:32:56.489 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:32:56 compute-0 python3[403793]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 18:32:56 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1003: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:32:57 compute-0 nova_compute[348325]: 2025-12-03 18:32:57.480 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:32:57 compute-0 nova_compute[348325]: 2025-12-03 18:32:57.485 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:32:57 compute-0 nova_compute[348325]: 2025-12-03 18:32:57.485 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:32:58 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:32:58 compute-0 nova_compute[348325]: 2025-12-03 18:32:58.485 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:32:58 compute-0 nova_compute[348325]: 2025-12-03 18:32:58.486 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 18:32:58 compute-0 nova_compute[348325]: 2025-12-03 18:32:58.486 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  3 18:32:58 compute-0 nova_compute[348325]: 2025-12-03 18:32:58.512 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  3 18:32:58 compute-0 nova_compute[348325]: 2025-12-03 18:32:58.513 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:32:58 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1004: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:32:59 compute-0 nova_compute[348325]: 2025-12-03 18:32:59.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:32:59 compute-0 nova_compute[348325]: 2025-12-03 18:32:59.487 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 18:32:59 compute-0 python3[404028]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep ceilometer_agent_compute#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:32:59 compute-0 podman[158200]: time="2025-12-03T18:32:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:32:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:32:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42578 "" "Go-http-client/1.1"
Dec  3 18:32:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:32:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8125 "" "Go-http-client/1.1"
Dec  3 18:33:00 compute-0 nova_compute[348325]: 2025-12-03 18:33:00.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:33:00 compute-0 nova_compute[348325]: 2025-12-03 18:33:00.518 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:33:00 compute-0 nova_compute[348325]: 2025-12-03 18:33:00.519 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:33:00 compute-0 nova_compute[348325]: 2025-12-03 18:33:00.519 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:33:00 compute-0 nova_compute[348325]: 2025-12-03 18:33:00.520 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 18:33:00 compute-0 nova_compute[348325]: 2025-12-03 18:33:00.521 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:33:00 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1005: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:33:00 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:33:00 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2609603213' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:33:00 compute-0 nova_compute[348325]: 2025-12-03 18:33:00.974 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:33:01 compute-0 python3[404215]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep node_exporter#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:33:01 compute-0 openstack_network_exporter[365222]: ERROR   18:33:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:33:01 compute-0 openstack_network_exporter[365222]: ERROR   18:33:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:33:01 compute-0 openstack_network_exporter[365222]: ERROR   18:33:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:33:01 compute-0 openstack_network_exporter[365222]: ERROR   18:33:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:33:01 compute-0 openstack_network_exporter[365222]: ERROR   18:33:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:33:01 compute-0 nova_compute[348325]: 2025-12-03 18:33:01.449 348329 WARNING nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 18:33:01 compute-0 nova_compute[348325]: 2025-12-03 18:33:01.451 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4563MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 18:33:01 compute-0 nova_compute[348325]: 2025-12-03 18:33:01.451 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:33:01 compute-0 nova_compute[348325]: 2025-12-03 18:33:01.452 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:33:01 compute-0 nova_compute[348325]: 2025-12-03 18:33:01.530 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 18:33:01 compute-0 nova_compute[348325]: 2025-12-03 18:33:01.531 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 18:33:01 compute-0 nova_compute[348325]: 2025-12-03 18:33:01.557 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:33:02 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:33:02 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/233361828' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:33:02 compute-0 nova_compute[348325]: 2025-12-03 18:33:02.071 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:33:02 compute-0 nova_compute[348325]: 2025-12-03 18:33:02.084 348329 DEBUG nova.compute.provider_tree [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 18:33:02 compute-0 nova_compute[348325]: 2025-12-03 18:33:02.107 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 18:33:02 compute-0 nova_compute[348325]: 2025-12-03 18:33:02.109 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 18:33:02 compute-0 nova_compute[348325]: 2025-12-03 18:33:02.110 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.658s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:33:02 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1006: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:33:03 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:33:03 compute-0 podman[404280]: 2025-12-03 18:33:03.952158399 +0000 UTC m=+0.104301795 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  3 18:33:03 compute-0 podman[404281]: 2025-12-03 18:33:03.976948692 +0000 UTC m=+0.120407226 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., distribution-scope=public, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, managed_by=edpm_ansible, release=1755695350, version=9.6)
Dec  3 18:33:04 compute-0 podman[404279]: 2025-12-03 18:33:04.004013283 +0000 UTC m=+0.153964445 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 18:33:04 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1007: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:33:06 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1008: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:33:06 compute-0 podman[404339]: 2025-12-03 18:33:06.921553488 +0000 UTC m=+0.083346823 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Dec  3 18:33:08 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:33:08 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1009: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:33:09 compute-0 podman[404357]: 2025-12-03 18:33:09.941706403 +0000 UTC m=+0.097813555 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.29.0, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, version=9.4, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64)
Dec  3 18:33:09 compute-0 podman[404358]: 2025-12-03 18:33:09.961040985 +0000 UTC m=+0.116642735 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm)
Dec  3 18:33:10 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1010: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:33:12 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1011: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.245 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.246 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.246 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.247 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7eff8d7fffe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.248 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.248 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff9026f920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.249 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.249 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.249 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffa10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.250 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8daba2d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.250 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.250 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff90799b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.250 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.251 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8f46ebd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.251 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.251 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.251 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.252 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.253 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7eff8d8a80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.253 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.253 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7eff8d8a8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.253 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.253 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7eff8d8a8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.252 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.253 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.254 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7eff8d8a81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.254 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.254 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7eff8d7ff9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.254 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.255 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7eff8d7fe840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.255 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.255 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7eff8d8a82c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.255 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.255 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7eff8d7ff9b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.255 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.255 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7eff8d8a8350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.255 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.256 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7eff8f682330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.256 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.254 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets.drop': [], 'memory.usage': [], 'network.outgoing.packets.error': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.256 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7eff8d7ff4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.257 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.257 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7eff8d930c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.257 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.257 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7eff8d7ff4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.258 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.258 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7eff8d7ff530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.258 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.258 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7eff8d7ff590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.258 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.257 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets.drop': [], 'memory.usage': [], 'network.outgoing.packets.error': [], 'disk.device.allocation': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.259 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets.drop': [], 'memory.usage': [], 'network.outgoing.packets.error': [], 'disk.device.allocation': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.259 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7eff8d7ff5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.260 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.260 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7eff8d8a8620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.261 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.260 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets.drop': [], 'memory.usage': [], 'network.outgoing.packets.error': [], 'disk.device.allocation': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.261 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets.drop': [], 'memory.usage': [], 'network.outgoing.packets.error': [], 'disk.device.allocation': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.262 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets.drop': [], 'memory.usage': [], 'network.outgoing.packets.error': [], 'disk.device.allocation': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.262 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets.drop': [], 'memory.usage': [], 'network.outgoing.packets.error': [], 'disk.device.allocation': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.263 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7fff50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets.drop': [], 'memory.usage': [], 'network.outgoing.packets.error': [], 'disk.device.allocation': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.263 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets.drop': [], 'memory.usage': [], 'network.outgoing.packets.error': [], 'disk.device.allocation': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.264 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7fffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets.drop': [], 'memory.usage': [], 'network.outgoing.packets.error': [], 'disk.device.allocation': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.262 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7eff8d7ff650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.265 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.265 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7eff8d7ff6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.265 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.266 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7eff8d7ffa40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.266 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.264 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8ef7c7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets.drop': [], 'memory.usage': [], 'network.outgoing.packets.error': [], 'disk.device.allocation': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.266 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7eff8d7ff710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.267 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.267 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7eff8d7fff20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.268 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.268 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7eff8d7ff770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.268 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.268 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7eff8d7fff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.268 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.269 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7eff8d7fdac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.269 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.270 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.270 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.270 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.271 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:33:13 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.271 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.271 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.271 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.272 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.272 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.272 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.272 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.272 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.273 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.273 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.273 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.273 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.273 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.273 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.274 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.274 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.274 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.274 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.274 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.274 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.275 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:33:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:33:13.275 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:33:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_18:33:13
Dec  3 18:33:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 18:33:13 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 18:33:13 compute-0 ceph-mgr[193091]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', '.rgw.root', 'backups', 'vms', '.mgr', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.meta', 'default.rgw.log', 'volumes']
Dec  3 18:33:13 compute-0 ceph-mgr[193091]: [balancer INFO root] prepared 0/10 changes
Dec  3 18:33:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:33:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:33:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:33:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:33:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:33:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:33:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 18:33:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 18:33:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:33:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:33:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:33:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:33:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:33:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:33:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:33:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:33:14 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1012: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:33:15 compute-0 podman[404394]: 2025-12-03 18:33:15.93138373 +0000 UTC m=+0.092708301 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 18:33:16 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1013: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:33:18 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:33:18 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1014: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:33:20 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1015: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:33:20 compute-0 podman[404419]: 2025-12-03 18:33:20.969700651 +0000 UTC m=+0.130982045 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4)
Dec  3 18:33:20 compute-0 podman[404418]: 2025-12-03 18:33:20.97950819 +0000 UTC m=+0.146419202 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Dec  3 18:33:22 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1016: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:33:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:33:23.324 286999 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:33:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:33:23.324 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:33:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:33:23.324 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:33:23 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:33:24 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:33:24 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:33:24 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 18:33:24 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:33:24 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 18:33:24 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:33:24 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev f71cd8c6-0c93-41e2-9a6a-7a0bffae9fa0 does not exist
Dec  3 18:33:24 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 59a2009e-18eb-4be8-b489-e511ffc3bb2a does not exist
Dec  3 18:33:24 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 9a4d63ad-6d98-483d-bfa3-b3b65e416ad8 does not exist
Dec  3 18:33:24 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 18:33:24 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 18:33:24 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 18:33:24 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:33:24 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:33:24 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:33:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 18:33:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:33:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 18:33:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:33:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:33:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:33:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:33:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:33:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:33:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:33:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:33:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:33:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 18:33:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:33:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:33:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:33:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 18:33:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:33:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 18:33:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:33:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:33:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:33:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 18:33:24 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1017: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:33:24 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:33:24 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:33:24 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:33:25 compute-0 podman[404732]: 2025-12-03 18:33:25.31688863 +0000 UTC m=+0.103989947 container create e25b0bf9e3aa9aef2d8e11a586497ad7ec1f81fdd933ae21e3b4d77ba457d871 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:33:25 compute-0 podman[404732]: 2025-12-03 18:33:25.268272185 +0000 UTC m=+0.055373572 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:33:25 compute-0 systemd[1]: Started libpod-conmon-e25b0bf9e3aa9aef2d8e11a586497ad7ec1f81fdd933ae21e3b4d77ba457d871.scope.
Dec  3 18:33:25 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:33:25 compute-0 podman[404732]: 2025-12-03 18:33:25.453363217 +0000 UTC m=+0.240464504 container init e25b0bf9e3aa9aef2d8e11a586497ad7ec1f81fdd933ae21e3b4d77ba457d871 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_edison, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec  3 18:33:25 compute-0 podman[404732]: 2025-12-03 18:33:25.466329353 +0000 UTC m=+0.253430630 container start e25b0bf9e3aa9aef2d8e11a586497ad7ec1f81fdd933ae21e3b4d77ba457d871 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_edison, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:33:25 compute-0 elated_edison[404748]: 167 167
Dec  3 18:33:25 compute-0 systemd[1]: libpod-e25b0bf9e3aa9aef2d8e11a586497ad7ec1f81fdd933ae21e3b4d77ba457d871.scope: Deactivated successfully.
Dec  3 18:33:25 compute-0 podman[404732]: 2025-12-03 18:33:25.476927912 +0000 UTC m=+0.264029189 container attach e25b0bf9e3aa9aef2d8e11a586497ad7ec1f81fdd933ae21e3b4d77ba457d871 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_edison, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:33:25 compute-0 podman[404732]: 2025-12-03 18:33:25.478139152 +0000 UTC m=+0.265240469 container died e25b0bf9e3aa9aef2d8e11a586497ad7ec1f81fdd933ae21e3b4d77ba457d871 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2)
Dec  3 18:33:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8718efff53b21695cf40200ecfec909e2e18797aafe17ce885613b0d138f739-merged.mount: Deactivated successfully.
Dec  3 18:33:25 compute-0 podman[404732]: 2025-12-03 18:33:25.577257978 +0000 UTC m=+0.364359295 container remove e25b0bf9e3aa9aef2d8e11a586497ad7ec1f81fdd933ae21e3b4d77ba457d871 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_edison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:33:25 compute-0 systemd[1]: libpod-conmon-e25b0bf9e3aa9aef2d8e11a586497ad7ec1f81fdd933ae21e3b4d77ba457d871.scope: Deactivated successfully.
Dec  3 18:33:25 compute-0 podman[404772]: 2025-12-03 18:33:25.85090789 +0000 UTC m=+0.082568994 container create c465a7da96eee0e9ac78521f66b596f2f83bf97d5b59fadc5d3d5a83575abb8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_margulis, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  3 18:33:25 compute-0 podman[404772]: 2025-12-03 18:33:25.826617668 +0000 UTC m=+0.058278812 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:33:25 compute-0 systemd[1]: Started libpod-conmon-c465a7da96eee0e9ac78521f66b596f2f83bf97d5b59fadc5d3d5a83575abb8c.scope.
Dec  3 18:33:25 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:33:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb3661a6ebff68127556fc482fc0e0a1d17866e30dd41c3b96bff8d85f2abcd8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:33:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb3661a6ebff68127556fc482fc0e0a1d17866e30dd41c3b96bff8d85f2abcd8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:33:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb3661a6ebff68127556fc482fc0e0a1d17866e30dd41c3b96bff8d85f2abcd8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:33:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb3661a6ebff68127556fc482fc0e0a1d17866e30dd41c3b96bff8d85f2abcd8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:33:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb3661a6ebff68127556fc482fc0e0a1d17866e30dd41c3b96bff8d85f2abcd8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 18:33:26 compute-0 podman[404772]: 2025-12-03 18:33:26.028012648 +0000 UTC m=+0.259673772 container init c465a7da96eee0e9ac78521f66b596f2f83bf97d5b59fadc5d3d5a83575abb8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_margulis, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Dec  3 18:33:26 compute-0 podman[404772]: 2025-12-03 18:33:26.055390165 +0000 UTC m=+0.287051269 container start c465a7da96eee0e9ac78521f66b596f2f83bf97d5b59fadc5d3d5a83575abb8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_margulis, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  3 18:33:26 compute-0 podman[404772]: 2025-12-03 18:33:26.061015622 +0000 UTC m=+0.292676736 container attach c465a7da96eee0e9ac78521f66b596f2f83bf97d5b59fadc5d3d5a83575abb8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_margulis, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:33:26 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1018: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:33:27 compute-0 interesting_margulis[404788]: --> passed data devices: 0 physical, 3 LVM
Dec  3 18:33:27 compute-0 interesting_margulis[404788]: --> relative data size: 1.0
Dec  3 18:33:27 compute-0 interesting_margulis[404788]: --> All data devices are unavailable
Dec  3 18:33:27 compute-0 systemd[1]: libpod-c465a7da96eee0e9ac78521f66b596f2f83bf97d5b59fadc5d3d5a83575abb8c.scope: Deactivated successfully.
Dec  3 18:33:27 compute-0 systemd[1]: libpod-c465a7da96eee0e9ac78521f66b596f2f83bf97d5b59fadc5d3d5a83575abb8c.scope: Consumed 1.244s CPU time.
Dec  3 18:33:27 compute-0 podman[404772]: 2025-12-03 18:33:27.348141004 +0000 UTC m=+1.579802158 container died c465a7da96eee0e9ac78521f66b596f2f83bf97d5b59fadc5d3d5a83575abb8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec  3 18:33:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb3661a6ebff68127556fc482fc0e0a1d17866e30dd41c3b96bff8d85f2abcd8-merged.mount: Deactivated successfully.
Dec  3 18:33:27 compute-0 podman[404772]: 2025-12-03 18:33:27.452880108 +0000 UTC m=+1.684541202 container remove c465a7da96eee0e9ac78521f66b596f2f83bf97d5b59fadc5d3d5a83575abb8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec  3 18:33:27 compute-0 systemd[1]: libpod-conmon-c465a7da96eee0e9ac78521f66b596f2f83bf97d5b59fadc5d3d5a83575abb8c.scope: Deactivated successfully.
Dec  3 18:33:28 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:33:28 compute-0 podman[404967]: 2025-12-03 18:33:28.430964885 +0000 UTC m=+0.071481203 container create dec59704da7745163478b759ac95e1dd861da0baf38edb9adaa82f5397fe80ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  3 18:33:28 compute-0 systemd[1]: Started libpod-conmon-dec59704da7745163478b759ac95e1dd861da0baf38edb9adaa82f5397fe80ab.scope.
Dec  3 18:33:28 compute-0 podman[404967]: 2025-12-03 18:33:28.402577653 +0000 UTC m=+0.043094001 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:33:28 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:33:28 compute-0 podman[404967]: 2025-12-03 18:33:28.542767691 +0000 UTC m=+0.183283979 container init dec59704da7745163478b759ac95e1dd861da0baf38edb9adaa82f5397fe80ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_elgamal, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  3 18:33:28 compute-0 podman[404967]: 2025-12-03 18:33:28.553224166 +0000 UTC m=+0.193740454 container start dec59704da7745163478b759ac95e1dd861da0baf38edb9adaa82f5397fe80ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  3 18:33:28 compute-0 podman[404967]: 2025-12-03 18:33:28.558240559 +0000 UTC m=+0.198756887 container attach dec59704da7745163478b759ac95e1dd861da0baf38edb9adaa82f5397fe80ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_elgamal, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:33:28 compute-0 romantic_elgamal[404983]: 167 167
Dec  3 18:33:28 compute-0 systemd[1]: libpod-dec59704da7745163478b759ac95e1dd861da0baf38edb9adaa82f5397fe80ab.scope: Deactivated successfully.
Dec  3 18:33:28 compute-0 podman[404967]: 2025-12-03 18:33:28.563955028 +0000 UTC m=+0.204471346 container died dec59704da7745163478b759ac95e1dd861da0baf38edb9adaa82f5397fe80ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_elgamal, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:33:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-45afb3be39f6ade21db34c570d2f29d4f329c3d5ff39f553fdf9c665e07a1842-merged.mount: Deactivated successfully.
Dec  3 18:33:28 compute-0 podman[404967]: 2025-12-03 18:33:28.642623346 +0000 UTC m=+0.283139674 container remove dec59704da7745163478b759ac95e1dd861da0baf38edb9adaa82f5397fe80ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_elgamal, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:33:28 compute-0 systemd[1]: libpod-conmon-dec59704da7745163478b759ac95e1dd861da0baf38edb9adaa82f5397fe80ab.scope: Deactivated successfully.
Dec  3 18:33:28 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1019: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:33:28 compute-0 podman[405007]: 2025-12-03 18:33:28.930762941 +0000 UTC m=+0.097729334 container create aa5c7e6292a8d7f34437d91a592d83a1d88097c086bf4636f6c1eb1f03f9ef25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:33:28 compute-0 podman[405007]: 2025-12-03 18:33:28.900347249 +0000 UTC m=+0.067313652 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:33:29 compute-0 systemd[1]: Started libpod-conmon-aa5c7e6292a8d7f34437d91a592d83a1d88097c086bf4636f6c1eb1f03f9ef25.scope.
Dec  3 18:33:29 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:33:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7699e42d1b44061a9ed18f4fb2ca51a326a976c0cd788e5993aa8cb07dcfb28/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:33:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7699e42d1b44061a9ed18f4fb2ca51a326a976c0cd788e5993aa8cb07dcfb28/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:33:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7699e42d1b44061a9ed18f4fb2ca51a326a976c0cd788e5993aa8cb07dcfb28/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:33:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7699e42d1b44061a9ed18f4fb2ca51a326a976c0cd788e5993aa8cb07dcfb28/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:33:29 compute-0 podman[405007]: 2025-12-03 18:33:29.069650407 +0000 UTC m=+0.236616780 container init aa5c7e6292a8d7f34437d91a592d83a1d88097c086bf4636f6c1eb1f03f9ef25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_meninsky, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  3 18:33:29 compute-0 podman[405007]: 2025-12-03 18:33:29.101065383 +0000 UTC m=+0.268031786 container start aa5c7e6292a8d7f34437d91a592d83a1d88097c086bf4636f6c1eb1f03f9ef25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_meninsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 18:33:29 compute-0 podman[405007]: 2025-12-03 18:33:29.108146815 +0000 UTC m=+0.275113228 container attach aa5c7e6292a8d7f34437d91a592d83a1d88097c086bf4636f6c1eb1f03f9ef25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_meninsky, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:33:29 compute-0 podman[158200]: time="2025-12-03T18:33:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:33:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:33:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 44149 "" "Go-http-client/1.1"
Dec  3 18:33:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:33:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8540 "" "Go-http-client/1.1"
Dec  3 18:33:29 compute-0 festive_meninsky[405023]: {
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:    "0": [
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:        {
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:            "devices": [
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:                "/dev/loop3"
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:            ],
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:            "lv_name": "ceph_lv0",
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:            "lv_size": "21470642176",
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=973fbbc8-5aff-4a53-bee8-42e5a6788dd6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:            "lv_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:            "name": "ceph_lv0",
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:            "tags": {
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:                "ceph.block_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:                "ceph.cluster_name": "ceph",
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:                "ceph.crush_device_class": "",
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:                "ceph.encrypted": "0",
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:                "ceph.osd_fsid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:                "ceph.osd_id": "0",
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:                "ceph.type": "block",
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:                "ceph.vdo": "0"
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:            },
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:            "type": "block",
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:            "vg_name": "ceph_vg0"
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:        }
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:    ],
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:    "1": [
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:        {
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:            "devices": [
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:                "/dev/loop4"
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:            ],
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:            "lv_name": "ceph_lv1",
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:            "lv_size": "21470642176",
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1e2b0083-5293-47cb-a3d1-bc27cedc4ede,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:            "lv_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:            "name": "ceph_lv1",
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:            "tags": {
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:                "ceph.block_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:                "ceph.cluster_name": "ceph",
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:                "ceph.crush_device_class": "",
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:                "ceph.encrypted": "0",
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:                "ceph.osd_fsid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:                "ceph.osd_id": "1",
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:                "ceph.type": "block",
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:                "ceph.vdo": "0"
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:            },
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:            "type": "block",
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:            "vg_name": "ceph_vg1"
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:        }
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:    ],
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:    "2": [
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:        {
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:            "devices": [
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:                "/dev/loop5"
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:            ],
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:            "lv_name": "ceph_lv2",
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:            "lv_size": "21470642176",
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2abec9de-afba-437e-9a17-384a1dd8cd50,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:            "lv_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:            "name": "ceph_lv2",
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:            "tags": {
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:                "ceph.block_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:                "ceph.cluster_name": "ceph",
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:                "ceph.crush_device_class": "",
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:                "ceph.encrypted": "0",
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:                "ceph.osd_fsid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:                "ceph.osd_id": "2",
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:                "ceph.type": "block",
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:                "ceph.vdo": "0"
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:            },
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:            "type": "block",
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:            "vg_name": "ceph_vg2"
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:        }
Dec  3 18:33:29 compute-0 festive_meninsky[405023]:    ]
Dec  3 18:33:29 compute-0 festive_meninsky[405023]: }
Dec  3 18:33:29 compute-0 systemd[1]: libpod-aa5c7e6292a8d7f34437d91a592d83a1d88097c086bf4636f6c1eb1f03f9ef25.scope: Deactivated successfully.
Dec  3 18:33:30 compute-0 podman[405032]: 2025-12-03 18:33:30.061044349 +0000 UTC m=+0.061332877 container died aa5c7e6292a8d7f34437d91a592d83a1d88097c086bf4636f6c1eb1f03f9ef25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:33:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-d7699e42d1b44061a9ed18f4fb2ca51a326a976c0cd788e5993aa8cb07dcfb28-merged.mount: Deactivated successfully.
Dec  3 18:33:30 compute-0 podman[405032]: 2025-12-03 18:33:30.145124518 +0000 UTC m=+0.145413006 container remove aa5c7e6292a8d7f34437d91a592d83a1d88097c086bf4636f6c1eb1f03f9ef25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:33:30 compute-0 systemd[1]: libpod-conmon-aa5c7e6292a8d7f34437d91a592d83a1d88097c086bf4636f6c1eb1f03f9ef25.scope: Deactivated successfully.
Dec  3 18:33:30 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1020: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:33:31 compute-0 openstack_network_exporter[365222]: ERROR   18:33:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:33:31 compute-0 openstack_network_exporter[365222]: ERROR   18:33:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:33:31 compute-0 openstack_network_exporter[365222]: ERROR   18:33:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:33:31 compute-0 openstack_network_exporter[365222]: ERROR   18:33:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:33:31 compute-0 openstack_network_exporter[365222]: ERROR   18:33:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:33:31 compute-0 podman[405182]: 2025-12-03 18:33:31.427386302 +0000 UTC m=+0.106030816 container create a225691d5c3bde72c459b2f53935ede8b1c990d6b5179a77bfc07f62f1072a53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_newton, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  3 18:33:31 compute-0 podman[405182]: 2025-12-03 18:33:31.368814464 +0000 UTC m=+0.047459018 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:33:31 compute-0 systemd[1]: Started libpod-conmon-a225691d5c3bde72c459b2f53935ede8b1c990d6b5179a77bfc07f62f1072a53.scope.
Dec  3 18:33:31 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:33:31 compute-0 podman[405182]: 2025-12-03 18:33:31.564375382 +0000 UTC m=+0.243019946 container init a225691d5c3bde72c459b2f53935ede8b1c990d6b5179a77bfc07f62f1072a53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_newton, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:33:31 compute-0 podman[405182]: 2025-12-03 18:33:31.583831987 +0000 UTC m=+0.262476501 container start a225691d5c3bde72c459b2f53935ede8b1c990d6b5179a77bfc07f62f1072a53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec  3 18:33:31 compute-0 podman[405182]: 2025-12-03 18:33:31.590605671 +0000 UTC m=+0.269250255 container attach a225691d5c3bde72c459b2f53935ede8b1c990d6b5179a77bfc07f62f1072a53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_newton, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:33:31 compute-0 gracious_newton[405197]: 167 167
Dec  3 18:33:31 compute-0 systemd[1]: libpod-a225691d5c3bde72c459b2f53935ede8b1c990d6b5179a77bfc07f62f1072a53.scope: Deactivated successfully.
Dec  3 18:33:31 compute-0 podman[405182]: 2025-12-03 18:33:31.597174151 +0000 UTC m=+0.275818645 container died a225691d5c3bde72c459b2f53935ede8b1c990d6b5179a77bfc07f62f1072a53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_newton, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:33:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-d17323a513802470aefb413805a0fd490f05516cc25ce47facf72ed750c4c12c-merged.mount: Deactivated successfully.
Dec  3 18:33:31 compute-0 podman[405182]: 2025-12-03 18:33:31.677472809 +0000 UTC m=+0.356117293 container remove a225691d5c3bde72c459b2f53935ede8b1c990d6b5179a77bfc07f62f1072a53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_newton, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  3 18:33:31 compute-0 systemd[1]: libpod-conmon-a225691d5c3bde72c459b2f53935ede8b1c990d6b5179a77bfc07f62f1072a53.scope: Deactivated successfully.
Dec  3 18:33:31 compute-0 podman[405220]: 2025-12-03 18:33:31.89366987 +0000 UTC m=+0.070616823 container create c5ebf8c46d9aa21773e9568fdf2addba7d3c6437824941238a2c0bcd845c78a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_rosalind, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:33:31 compute-0 podman[405220]: 2025-12-03 18:33:31.857612391 +0000 UTC m=+0.034559394 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:33:31 compute-0 systemd[1]: Started libpod-conmon-c5ebf8c46d9aa21773e9568fdf2addba7d3c6437824941238a2c0bcd845c78a2.scope.
Dec  3 18:33:32 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:33:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ae317422f3ab78a2e51ace55bbfc2fe03e120ed82606ff04a7ce83db621f2d2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:33:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ae317422f3ab78a2e51ace55bbfc2fe03e120ed82606ff04a7ce83db621f2d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:33:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ae317422f3ab78a2e51ace55bbfc2fe03e120ed82606ff04a7ce83db621f2d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:33:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ae317422f3ab78a2e51ace55bbfc2fe03e120ed82606ff04a7ce83db621f2d2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:33:32 compute-0 podman[405220]: 2025-12-03 18:33:32.067422807 +0000 UTC m=+0.244369730 container init c5ebf8c46d9aa21773e9568fdf2addba7d3c6437824941238a2c0bcd845c78a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_rosalind, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:33:32 compute-0 podman[405220]: 2025-12-03 18:33:32.085797855 +0000 UTC m=+0.262744768 container start c5ebf8c46d9aa21773e9568fdf2addba7d3c6437824941238a2c0bcd845c78a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_rosalind, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:33:32 compute-0 podman[405220]: 2025-12-03 18:33:32.090671483 +0000 UTC m=+0.267618396 container attach c5ebf8c46d9aa21773e9568fdf2addba7d3c6437824941238a2c0bcd845c78a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  3 18:33:32 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1021: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:33:33 compute-0 musing_rosalind[405236]: {
Dec  3 18:33:33 compute-0 musing_rosalind[405236]:    "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
Dec  3 18:33:33 compute-0 musing_rosalind[405236]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:33:33 compute-0 musing_rosalind[405236]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 18:33:33 compute-0 musing_rosalind[405236]:        "osd_id": 1,
Dec  3 18:33:33 compute-0 musing_rosalind[405236]:        "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:33:33 compute-0 musing_rosalind[405236]:        "type": "bluestore"
Dec  3 18:33:33 compute-0 musing_rosalind[405236]:    },
Dec  3 18:33:33 compute-0 musing_rosalind[405236]:    "2abec9de-afba-437e-9a17-384a1dd8cd50": {
Dec  3 18:33:33 compute-0 musing_rosalind[405236]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:33:33 compute-0 musing_rosalind[405236]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 18:33:33 compute-0 musing_rosalind[405236]:        "osd_id": 2,
Dec  3 18:33:33 compute-0 musing_rosalind[405236]:        "osd_uuid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:33:33 compute-0 musing_rosalind[405236]:        "type": "bluestore"
Dec  3 18:33:33 compute-0 musing_rosalind[405236]:    },
Dec  3 18:33:33 compute-0 musing_rosalind[405236]:    "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": {
Dec  3 18:33:33 compute-0 musing_rosalind[405236]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:33:33 compute-0 musing_rosalind[405236]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 18:33:33 compute-0 musing_rosalind[405236]:        "osd_id": 0,
Dec  3 18:33:33 compute-0 musing_rosalind[405236]:        "osd_uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:33:33 compute-0 musing_rosalind[405236]:        "type": "bluestore"
Dec  3 18:33:33 compute-0 musing_rosalind[405236]:    }
Dec  3 18:33:33 compute-0 musing_rosalind[405236]: }
Dec  3 18:33:33 compute-0 systemd[1]: libpod-c5ebf8c46d9aa21773e9568fdf2addba7d3c6437824941238a2c0bcd845c78a2.scope: Deactivated successfully.
Dec  3 18:33:33 compute-0 systemd[1]: libpod-c5ebf8c46d9aa21773e9568fdf2addba7d3c6437824941238a2c0bcd845c78a2.scope: Consumed 1.244s CPU time.
Dec  3 18:33:33 compute-0 podman[405220]: 2025-12-03 18:33:33.32710082 +0000 UTC m=+1.504047763 container died c5ebf8c46d9aa21773e9568fdf2addba7d3c6437824941238a2c0bcd845c78a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:33:33 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:33:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ae317422f3ab78a2e51ace55bbfc2fe03e120ed82606ff04a7ce83db621f2d2-merged.mount: Deactivated successfully.
Dec  3 18:33:33 compute-0 podman[405220]: 2025-12-03 18:33:33.412119452 +0000 UTC m=+1.589066355 container remove c5ebf8c46d9aa21773e9568fdf2addba7d3c6437824941238a2c0bcd845c78a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec  3 18:33:33 compute-0 systemd[1]: libpod-conmon-c5ebf8c46d9aa21773e9568fdf2addba7d3c6437824941238a2c0bcd845c78a2.scope: Deactivated successfully.
Dec  3 18:33:33 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:33:33 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:33:33 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:33:33 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:33:33 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 4ae8dae4-4e24-49fd-b06b-cd9d8124f528 does not exist
Dec  3 18:33:33 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 35d554ce-7c28-4ec6-9e9d-9521b42ca59e does not exist
Dec  3 18:33:34 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:33:34 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:33:34 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1022: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:33:34 compute-0 podman[405333]: 2025-12-03 18:33:34.99109415 +0000 UTC m=+0.144122355 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd)
Dec  3 18:33:34 compute-0 podman[405335]: 2025-12-03 18:33:34.99029512 +0000 UTC m=+0.131896347 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, container_name=openstack_network_exporter, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc., release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, managed_by=edpm_ansible, architecture=x86_64, com.redhat.component=ubi9-minimal-container)
Dec  3 18:33:34 compute-0 podman[405334]: 2025-12-03 18:33:34.996785428 +0000 UTC m=+0.147899887 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  3 18:33:36 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1023: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:33:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 18:33:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/558798413' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 18:33:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 18:33:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/558798413' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 18:33:37 compute-0 podman[405391]: 2025-12-03 18:33:37.972693246 +0000 UTC m=+0.129467768 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  3 18:33:38 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:33:38 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1024: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:33:40 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1025: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:33:40 compute-0 podman[405409]: 2025-12-03 18:33:40.942702628 +0000 UTC m=+0.105740840 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, container_name=kepler, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, vcs-type=git, io.openshift.expose-services=, managed_by=edpm_ansible, release-0.7.12=, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, maintainer=Red Hat, Inc., version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, distribution-scope=public, release=1214.1726694543, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  3 18:33:40 compute-0 podman[405410]: 2025-12-03 18:33:40.942274457 +0000 UTC m=+0.106257381 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi)
Dec  3 18:33:42 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1026: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:33:43 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:33:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:33:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:33:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:33:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:33:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:33:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:33:44 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1027: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:33:46 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1028: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:33:46 compute-0 podman[405448]: 2025-12-03 18:33:46.956037781 +0000 UTC m=+0.114419791 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 18:33:48 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:33:48 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1029: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:33:49 compute-0 nova_compute[348325]: 2025-12-03 18:33:49.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:33:49 compute-0 nova_compute[348325]: 2025-12-03 18:33:49.488 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec  3 18:33:49 compute-0 nova_compute[348325]: 2025-12-03 18:33:49.507 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec  3 18:33:50 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1030: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:33:51 compute-0 nova_compute[348325]: 2025-12-03 18:33:51.488 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:33:51 compute-0 podman[405473]: 2025-12-03 18:33:51.95082296 +0000 UTC m=+0.107916412 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4)
Dec  3 18:33:52 compute-0 podman[405472]: 2025-12-03 18:33:52.030149334 +0000 UTC m=+0.191248164 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  3 18:33:52 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1031: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:33:53 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:33:54 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1032: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:33:56 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1033: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:33:57 compute-0 nova_compute[348325]: 2025-12-03 18:33:57.504 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:33:57 compute-0 nova_compute[348325]: 2025-12-03 18:33:57.505 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:33:57 compute-0 nova_compute[348325]: 2025-12-03 18:33:57.505 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:33:58 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:33:58 compute-0 nova_compute[348325]: 2025-12-03 18:33:58.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:33:58 compute-0 nova_compute[348325]: 2025-12-03 18:33:58.488 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  3 18:33:58 compute-0 nova_compute[348325]: 2025-12-03 18:33:58.488 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  3 18:33:58 compute-0 nova_compute[348325]: 2025-12-03 18:33:58.518 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  3 18:33:58 compute-0 nova_compute[348325]: 2025-12-03 18:33:58.519 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:33:58 compute-0 nova_compute[348325]: 2025-12-03 18:33:58.519 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:33:58 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1034: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:33:59 compute-0 nova_compute[348325]: 2025-12-03 18:33:59.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:33:59 compute-0 nova_compute[348325]: 2025-12-03 18:33:59.488 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:33:59 compute-0 nova_compute[348325]: 2025-12-03 18:33:59.488 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec  3 18:33:59 compute-0 podman[158200]: time="2025-12-03T18:33:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:33:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:33:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42578 "" "Go-http-client/1.1"
Dec  3 18:33:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:33:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8119 "" "Go-http-client/1.1"
Dec  3 18:34:00 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1035: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:34:01 compute-0 systemd[1]: session-61.scope: Deactivated successfully.
Dec  3 18:34:01 compute-0 systemd[1]: session-61.scope: Consumed 12.162s CPU time.
Dec  3 18:34:01 compute-0 systemd-logind[784]: Session 61 logged out. Waiting for processes to exit.
Dec  3 18:34:01 compute-0 systemd-logind[784]: Removed session 61.
Dec  3 18:34:01 compute-0 openstack_network_exporter[365222]: ERROR   18:34:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:34:01 compute-0 openstack_network_exporter[365222]: ERROR   18:34:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:34:01 compute-0 openstack_network_exporter[365222]: ERROR   18:34:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:34:01 compute-0 openstack_network_exporter[365222]: ERROR   18:34:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:34:01 compute-0 openstack_network_exporter[365222]: ERROR   18:34:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:34:01 compute-0 nova_compute[348325]: 2025-12-03 18:34:01.513 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:34:01 compute-0 nova_compute[348325]: 2025-12-03 18:34:01.514 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  3 18:34:02 compute-0 nova_compute[348325]: 2025-12-03 18:34:02.478 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:34:02 compute-0 nova_compute[348325]: 2025-12-03 18:34:02.502 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:34:02 compute-0 nova_compute[348325]: 2025-12-03 18:34:02.528 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:34:02 compute-0 nova_compute[348325]: 2025-12-03 18:34:02.528 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:34:02 compute-0 nova_compute[348325]: 2025-12-03 18:34:02.528 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:34:02 compute-0 nova_compute[348325]: 2025-12-03 18:34:02.529 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  3 18:34:02 compute-0 nova_compute[348325]: 2025-12-03 18:34:02.529 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 18:34:02 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1036: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:34:02 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:34:03 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2087966751' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:34:03 compute-0 nova_compute[348325]: 2025-12-03 18:34:03.027 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 18:34:03 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:34:03 compute-0 nova_compute[348325]: 2025-12-03 18:34:03.513 348329 WARNING nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  3 18:34:03 compute-0 nova_compute[348325]: 2025-12-03 18:34:03.515 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4574MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  3 18:34:03 compute-0 nova_compute[348325]: 2025-12-03 18:34:03.515 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:34:03 compute-0 nova_compute[348325]: 2025-12-03 18:34:03.515 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:34:03 compute-0 nova_compute[348325]: 2025-12-03 18:34:03.871 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  3 18:34:03 compute-0 nova_compute[348325]: 2025-12-03 18:34:03.872 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  3 18:34:03 compute-0 nova_compute[348325]: 2025-12-03 18:34:03.940 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Refreshing inventories for resource provider 00cd1895-22aa-49c6-bdb2-0991af662704 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec  3 18:34:04 compute-0 nova_compute[348325]: 2025-12-03 18:34:04.003 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Updating ProviderTree inventory for provider 00cd1895-22aa-49c6-bdb2-0991af662704 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec  3 18:34:04 compute-0 nova_compute[348325]: 2025-12-03 18:34:04.003 348329 DEBUG nova.compute.provider_tree [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Updating inventory in ProviderTree for provider 00cd1895-22aa-49c6-bdb2-0991af662704 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec  3 18:34:04 compute-0 nova_compute[348325]: 2025-12-03 18:34:04.019 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Refreshing aggregate associations for resource provider 00cd1895-22aa-49c6-bdb2-0991af662704, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec  3 18:34:04 compute-0 nova_compute[348325]: 2025-12-03 18:34:04.041 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Refreshing trait associations for resource provider 00cd1895-22aa-49c6-bdb2-0991af662704, traits: COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_FMA3,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AESNI,HW_CPU_X86_AMD_SVM,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SVM,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_ABM,HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_BMI,HW_CPU_X86_SHA,COMPUTE_NODE,HW_CPU_X86_SSE42,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE4A,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,HW_CPU_X86_AVX2,COMPUTE_ACCELERATORS,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ARI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec  3 18:34:04 compute-0 nova_compute[348325]: 2025-12-03 18:34:04.058 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 18:34:04 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:34:04 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1660634059' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:34:04 compute-0 nova_compute[348325]: 2025-12-03 18:34:04.602 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 18:34:04 compute-0 nova_compute[348325]: 2025-12-03 18:34:04.611 348329 DEBUG nova.compute.provider_tree [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  3 18:34:04 compute-0 nova_compute[348325]: 2025-12-03 18:34:04.634 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  3 18:34:04 compute-0 nova_compute[348325]: 2025-12-03 18:34:04.635 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  3 18:34:04 compute-0 nova_compute[348325]: 2025-12-03 18:34:04.636 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.121s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:34:04 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1037: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:34:05 compute-0 podman[405563]: 2025-12-03 18:34:05.93122614 +0000 UTC m=+0.091053241 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  3 18:34:05 compute-0 podman[405562]: 2025-12-03 18:34:05.947666161 +0000 UTC m=+0.112063133 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:34:05 compute-0 podman[405564]: 2025-12-03 18:34:05.977202721 +0000 UTC m=+0.129977060 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, distribution-scope=public, io.buildah.version=1.33.7, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., managed_by=edpm_ansible, container_name=openstack_network_exporter, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, name=ubi9-minimal, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec  3 18:34:06 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1038: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:34:08 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:34:08 compute-0 podman[405628]: 2025-12-03 18:34:08.620883217 +0000 UTC m=+0.078635098 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  3 18:34:08 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1039: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:34:10 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1040: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:34:11 compute-0 podman[405651]: 2025-12-03 18:34:11.957165641 +0000 UTC m=+0.114677327 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  3 18:34:11 compute-0 podman[405650]: 2025-12-03 18:34:11.968490867 +0000 UTC m=+0.122069717 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, name=ubi9, config_id=edpm, release-0.7.12=, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, version=9.4, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler)
Dec  3 18:34:12 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1041: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:34:13 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:34:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_18:34:13
Dec  3 18:34:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 18:34:13 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 18:34:13 compute-0 ceph-mgr[193091]: [balancer INFO root] pools ['default.rgw.log', '.mgr', 'backups', 'volumes', 'default.rgw.control', 'vms', 'cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.meta', 'images']
Dec  3 18:34:13 compute-0 ceph-mgr[193091]: [balancer INFO root] prepared 0/10 changes
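This is one pass of the ceph-mgr balancer: mode upmap, a 5% ceiling on misplaced PGs, ten candidate pools, and zero changes prepared because the 321 PGs are already balanced. A sketch of checking the same state from a script by shelling out to the standard CLI (cluster credentials are whatever the environment already provides):

    # Sketch: query the balancer state the mgr module logs about above.
    import json, subprocess

    status = json.loads(
        subprocess.check_output(["ceph", "balancer", "status", "--format", "json"])
    )
    print(status["mode"], status["active"])   # expect: upmap True on this cluster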
Dec  3 18:34:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:34:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:34:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:34:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:34:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:34:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:34:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 18:34:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:34:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 18:34:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:34:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:34:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:34:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:34:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:34:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:34:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
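The rbd_support module's MirrorSnapshotScheduleHandler and TrashPurgeScheduleHandler are reloading their per-pool schedules here; each pool with the rbd application (vms, volumes, backups, images) is scanned, and with start_after empty no schedules are defined. The same information is visible from the CLI; a sketch for one pool:

    # Sketch: list the (currently empty) trash purge schedule for the 'vms' pool,
    # mirroring what TrashPurgeScheduleHandler reloads above.
    import subprocess

    out = subprocess.check_output(
        ["rbd", "trash", "purge", "schedule", "list", "--pool", "vms"]
    )
    print(out.decode() or "<no schedules>")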
Dec  3 18:34:14 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1042: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:34:15 compute-0 nova_compute[348325]: 2025-12-03 18:34:15.673 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
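_sync_power_states is one of nova ComputeManager's oslo.service periodic tasks; run_periodic_tasks (cited in the source path above) walks every method decorated with @periodic_task.periodic_task and invokes it when its spacing elapses. A minimal sketch of that pattern, with an illustrative class and spacing rather than nova's actual values:

    # Sketch: how oslo.service periodic tasks such as _sync_power_states are declared.
    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(cfg.CONF)

        @periodic_task.periodic_task(spacing=600)   # spacing value is an assumption
        def _sync_power_states(self, context):
            # nova's real task reconciles hypervisor power state with the DB
            pass

    Manager().run_periodic_tasks(context=None)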
Dec  3 18:34:16 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1043: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:34:17 compute-0 podman[405688]: 2025-12-03 18:34:17.920686179 +0000 UTC m=+0.094736370 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 18:34:18 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:34:18 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1044: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:34:19 compute-0 ceph-osd[206694]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 18:34:19 compute-0 ceph-osd[206694]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 1800.2 total, 600.0 interval
    Cumulative writes: 5812 writes, 24K keys, 5812 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
    Cumulative WAL: 5812 writes, 978 syncs, 5.94 writes per sync, written: 0.02 GB, 0.01 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.10 MB, 0.00 MB/s
    Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
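rsyslog escapes embedded control characters octally (#012 is a newline, #033 an ESC), which is why ceph-osd's multi-line RocksDB stats dump arrives as one flattened line; the dump above is shown with the escapes expanded. A small sketch that reverses the escaping when post-processing such logs:

    # Sketch: undo rsyslog's octal control-character escaping (#012 -> '\n', #033 -> ESC).
    import re

    def unescape_syslog(line: str) -> str:
        return re.sub(r"#(\d{3})", lambda m: chr(int(m.group(1), 8)), line)

    print(unescape_syslog("** DB Stats **#012Uptime(secs): 1800.2 total, 600.0 interval"))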
Dec  3 18:34:20 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1045: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:34:22 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1046: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:34:22 compute-0 podman[405712]: 2025-12-03 18:34:22.973126436 +0000 UTC m=+0.129604630 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  3 18:34:23 compute-0 podman[405711]: 2025-12-03 18:34:23.064553386 +0000 UTC m=+0.231195638 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2)
Dec  3 18:34:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:34:23.326 286999 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:34:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:34:23.329 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:34:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:34:23.329 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
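The metadata agent's ProcessMonitor serializes its periodic child-process check behind an oslo.concurrency lock; the three DEBUG lines show the acquire (after a 0.002 s wait) and the release 0.000 s later. The underlying pattern, in a minimal sketch (the body is illustrative):

    # Sketch: the lockutils pattern behind the acquire/release lines above.
    from oslo_concurrency import lockutils

    with lockutils.lock("_check_child_processes"):
        # check that managed child processes (e.g. the haproxy wrappers) are
        # alive and respawn any that died -- illustrative body
        pass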
Dec  3 18:34:23 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:34:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 18:34:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:34:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 18:34:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:34:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:34:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:34:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:34:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:34:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:34:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:34:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:34:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:34:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 18:34:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:34:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:34:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:34:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 18:34:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:34:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 18:34:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:34:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:34:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:34:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
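Each pg_autoscaler line computes a raw PG target as usage-fraction x bias x PG budget, then quantizes to a power of two. The budget here works out to 300, consistent with the default mon_target_pg_per_osd of 100 across this host's three OSDs (an inference from the numbers, not a logged value); both examples above with nonzero usage reproduce exactly:

    # Sketch: reproduce the autoscaler's raw pg targets from the numbers above.
    # budget=300 is an inference: mon_target_pg_per_osd (100) * 3 OSDs.
    def pg_target(usage_fraction: float, bias: float, budget: float = 300.0) -> float:
        return usage_fraction * bias * budget

    print(pg_target(7.185749983720779e-06, 1.0))  # 0.0021557... ('.mgr', quantized to 1)
    print(pg_target(5.087256625643029e-07, 4.0))  # 0.0006104... ('cephfs.cephfs.meta')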
Dec  3 18:34:24 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1047: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:34:26 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1048: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:34:28 compute-0 ceph-osd[207851]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 18:34:28 compute-0 ceph-osd[207851]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 1800.2 total, 600.0 interval
    Cumulative writes: 6956 writes, 27K keys, 6956 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
    Cumulative WAL: 6956 writes, 1328 syncs, 5.24 writes per sync, written: 0.02 GB, 0.01 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
    Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  3 18:34:28 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:34:28 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1049: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:34:29 compute-0 podman[158200]: time="2025-12-03T18:34:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:34:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:34:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42578 "" "Go-http-client/1.1"
Dec  3 18:34:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:34:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8129 "" "Go-http-client/1.1"
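These two requests are the podman system service answering the podman_exporter over the libpod REST API on the podman socket (the bare `@` is the unix-socket peer in the access-log format). The same listing can be fetched by hand against the socket path the exporter mounts; a sketch shelling out to curl:

    # Sketch: the containers/json call the exporter makes, via the podman socket.
    import json, subprocess

    raw = subprocess.check_output([
        "curl", "-s", "--unix-socket", "/run/podman/podman.sock",
        "http://d/v4.9.3/libpod/containers/json?all=true",
    ])
    print(len(json.loads(raw)), "containers")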
Dec  3 18:34:30 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1050: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:34:31 compute-0 openstack_network_exporter[365222]: ERROR   18:34:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:34:31 compute-0 openstack_network_exporter[365222]: ERROR   18:34:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:34:31 compute-0 openstack_network_exporter[365222]: ERROR   18:34:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:34:31 compute-0 openstack_network_exporter[365222]: ERROR   18:34:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:34:31 compute-0 openstack_network_exporter[365222]: ERROR   18:34:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
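openstack_network_exporter talks to ovsdb-server, ovn-northd, and ovs-vswitchd the way ovs-appctl does, through per-daemon control sockets (<daemon>.<pid>.ctl) in the run directories it has mounted. On a compute node ovn-northd does not run at all, and with no userspace (netdev) datapath the pmd-perf-show/pmd-rxq-show calls have no datapath to report on, so these errors recur on every scrape and are expected here. A quick check for the sockets it is probing, using the host paths from the exporter's volume list:

    # Sketch: look for the ovs-appctl control sockets the exporter failed to find.
    from glob import glob

    for pattern in ("/var/run/openvswitch/*.ctl", "/var/lib/openvswitch/ovn/*.ctl"):
        print(pattern, "->", glob(pattern) or "none")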
Dec  3 18:34:32 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1051: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:34:33 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:34:34 compute-0 ceph-osd[208881]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 18:34:34 compute-0 ceph-osd[208881]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 1800.2 total, 600.0 interval
    Cumulative writes: 5886 writes, 24K keys, 5886 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
    Cumulative WAL: 5886 writes, 1001 syncs, 5.88 writes per sync, written: 0.02 GB, 0.01 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
    Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  3 18:34:34 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1052: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:34:35 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:34:35 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:34:35 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:34:35 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:34:35 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:34:35 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:34:35 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 18:34:35 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:34:35 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 18:34:35 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:34:35 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 16101ae9-5fb3-46ef-8c73-a91d1af94a44 does not exist
Dec  3 18:34:35 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev ae00dc39-aca2-44ac-8a3d-c7e0faccc883 does not exist
Dec  3 18:34:35 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev f2c1bf79-444e-48c9-84b7-25b05ad8c777 does not exist
Dec  3 18:34:35 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 18:34:35 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 18:34:35 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 18:34:35 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:34:35 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:34:35 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:34:36 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:34:36 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:34:36 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:34:36 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:34:36 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:34:36 compute-0 podman[406078]: 2025-12-03 18:34:36.18180801 +0000 UTC m=+0.103560936 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, container_name=openstack_network_exporter, distribution-scope=public, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., release=1755695350, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec  3 18:34:36 compute-0 podman[406077]: 2025-12-03 18:34:36.188154395 +0000 UTC m=+0.110618468 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 18:34:36 compute-0 podman[406076]: 2025-12-03 18:34:36.190485212 +0000 UTC m=+0.129589710 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:34:36 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1053: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:34:37 compute-0 podman[406199]: 2025-12-03 18:34:37.036920439 +0000 UTC m=+0.078935055 container create e967e0923b64d3a7113c6350d848ec43a1072945d5bc0d34e542295b969bb1ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_bassi, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:34:37 compute-0 podman[406199]: 2025-12-03 18:34:37.004773436 +0000 UTC m=+0.046788132 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:34:37 compute-0 systemd[1]: Started libpod-conmon-e967e0923b64d3a7113c6350d848ec43a1072945d5bc0d34e542295b969bb1ba.scope.
Dec  3 18:34:37 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:34:37 compute-0 podman[406199]: 2025-12-03 18:34:37.179852604 +0000 UTC m=+0.221867240 container init e967e0923b64d3a7113c6350d848ec43a1072945d5bc0d34e542295b969bb1ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_bassi, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec  3 18:34:37 compute-0 podman[406199]: 2025-12-03 18:34:37.193930737 +0000 UTC m=+0.235945353 container start e967e0923b64d3a7113c6350d848ec43a1072945d5bc0d34e542295b969bb1ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec  3 18:34:37 compute-0 podman[406199]: 2025-12-03 18:34:37.198291904 +0000 UTC m=+0.240306570 container attach e967e0923b64d3a7113c6350d848ec43a1072945d5bc0d34e542295b969bb1ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_bassi, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Dec  3 18:34:37 compute-0 loving_bassi[406215]: 167 167
Dec  3 18:34:37 compute-0 systemd[1]: libpod-e967e0923b64d3a7113c6350d848ec43a1072945d5bc0d34e542295b969bb1ba.scope: Deactivated successfully.
Dec  3 18:34:37 compute-0 conmon[406215]: conmon e967e0923b64d3a7113c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e967e0923b64d3a7113c6350d848ec43a1072945d5bc0d34e542295b969bb1ba.scope/container/memory.events
Dec  3 18:34:37 compute-0 podman[406199]: 2025-12-03 18:34:37.206655838 +0000 UTC m=+0.248670524 container died e967e0923b64d3a7113c6350d848ec43a1072945d5bc0d34e542295b969bb1ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_bassi, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec  3 18:34:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-1911c8685d482bc7ba44c79f6ef4c5a1f5bb52c79ac018348f39e47883b4413c-merged.mount: Deactivated successfully.
Dec  3 18:34:37 compute-0 podman[406199]: 2025-12-03 18:34:37.267217074 +0000 UTC m=+0.309231690 container remove e967e0923b64d3a7113c6350d848ec43a1072945d5bc0d34e542295b969bb1ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec  3 18:34:37 compute-0 systemd[1]: libpod-conmon-e967e0923b64d3a7113c6350d848ec43a1072945d5bc0d34e542295b969bb1ba.scope: Deactivated successfully.
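The loving_bassi create/init/start/attach/died/remove burst (and magical_chatelet and practical_mayer below) is cephadm probing the host through throwaway containers on the ceph image: the `167 167` printed above matches the ceph uid/gid pair on Red Hat-family images, and magical_chatelet's output below is a ceph-volume device scan. A sketch of the probe pattern, assuming the usual stat-based uid/gid check rather than reproducing cephadm's exact invocation:

    # Sketch: a throwaway uid/gid probe like the one logged above (the exact
    # command is an assumption; cephadm's real call is not reproduced here).
    import subprocess

    out = subprocess.check_output([
        "podman", "run", "--rm", "--entrypoint", "stat",
        "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0",
        "-c", "%u %g", "/var/lib/ceph",
    ])
    print(out.decode().strip())   # "167 167" on this image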
Dec  3 18:34:37 compute-0 ceph-mgr[193091]: [devicehealth INFO root] Check health
Dec  3 18:34:37 compute-0 podman[406238]: 2025-12-03 18:34:37.509328707 +0000 UTC m=+0.093100600 container create 10618a4fa24c7caf3c042076fac39fde5da0233cd42bce04915635e2933cab88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_chatelet, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:34:37 compute-0 podman[406238]: 2025-12-03 18:34:37.472606292 +0000 UTC m=+0.056378275 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:34:37 compute-0 systemd[1]: Started libpod-conmon-10618a4fa24c7caf3c042076fac39fde5da0233cd42bce04915635e2933cab88.scope.
Dec  3 18:34:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 18:34:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/398695489' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 18:34:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 18:34:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/398695489' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 18:34:37 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:34:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddcbd31e2d6f7b27077181cd0e1d8cf12bc4627f904f4667425e3e8e8e227354/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:34:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddcbd31e2d6f7b27077181cd0e1d8cf12bc4627f904f4667425e3e8e8e227354/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:34:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddcbd31e2d6f7b27077181cd0e1d8cf12bc4627f904f4667425e3e8e8e227354/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:34:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddcbd31e2d6f7b27077181cd0e1d8cf12bc4627f904f4667425e3e8e8e227354/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:34:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddcbd31e2d6f7b27077181cd0e1d8cf12bc4627f904f4667425e3e8e8e227354/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 18:34:37 compute-0 podman[406238]: 2025-12-03 18:34:37.663085776 +0000 UTC m=+0.246857759 container init 10618a4fa24c7caf3c042076fac39fde5da0233cd42bce04915635e2933cab88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  3 18:34:37 compute-0 podman[406238]: 2025-12-03 18:34:37.693218221 +0000 UTC m=+0.276990144 container start 10618a4fa24c7caf3c042076fac39fde5da0233cd42bce04915635e2933cab88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:34:37 compute-0 podman[406238]: 2025-12-03 18:34:37.702116868 +0000 UTC m=+0.285888851 container attach 10618a4fa24c7caf3c042076fac39fde5da0233cd42bce04915635e2933cab88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_chatelet, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:34:38 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:34:38 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1054: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:34:38 compute-0 podman[406276]: 2025-12-03 18:34:38.977612686 +0000 UTC m=+0.130589385 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  3 18:34:39 compute-0 magical_chatelet[406254]: --> passed data devices: 0 physical, 3 LVM
Dec  3 18:34:39 compute-0 magical_chatelet[406254]: --> relative data size: 1.0
Dec  3 18:34:39 compute-0 magical_chatelet[406254]: --> All data devices are unavailable
Dec  3 18:34:39 compute-0 systemd[1]: libpod-10618a4fa24c7caf3c042076fac39fde5da0233cd42bce04915635e2933cab88.scope: Deactivated successfully.
Dec  3 18:34:39 compute-0 systemd[1]: libpod-10618a4fa24c7caf3c042076fac39fde5da0233cd42bce04915635e2933cab88.scope: Consumed 1.288s CPU time.
Dec  3 18:34:39 compute-0 podman[406238]: 2025-12-03 18:34:39.05158838 +0000 UTC m=+1.635360383 container died 10618a4fa24c7caf3c042076fac39fde5da0233cd42bce04915635e2933cab88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:34:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-ddcbd31e2d6f7b27077181cd0e1d8cf12bc4627f904f4667425e3e8e8e227354-merged.mount: Deactivated successfully.
Dec  3 18:34:39 compute-0 podman[406238]: 2025-12-03 18:34:39.150722057 +0000 UTC m=+1.734493970 container remove 10618a4fa24c7caf3c042076fac39fde5da0233cd42bce04915635e2933cab88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_chatelet, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:34:39 compute-0 systemd[1]: libpod-conmon-10618a4fa24c7caf3c042076fac39fde5da0233cd42bce04915635e2933cab88.scope: Deactivated successfully.
Dec  3 18:34:40 compute-0 podman[406451]: 2025-12-03 18:34:40.298720717 +0000 UTC m=+0.104626562 container create ae29146a937a1ba9a0ae61e955c7d985300b0024084219d407b457b46b99bcca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:34:40 compute-0 podman[406451]: 2025-12-03 18:34:40.255199776 +0000 UTC m=+0.061105691 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:34:40 compute-0 systemd[1]: Started libpod-conmon-ae29146a937a1ba9a0ae61e955c7d985300b0024084219d407b457b46b99bcca.scope.
Dec  3 18:34:40 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:34:40 compute-0 podman[406451]: 2025-12-03 18:34:40.461020954 +0000 UTC m=+0.266926809 container init ae29146a937a1ba9a0ae61e955c7d985300b0024084219d407b457b46b99bcca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_mayer, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:34:40 compute-0 podman[406451]: 2025-12-03 18:34:40.477219968 +0000 UTC m=+0.283125813 container start ae29146a937a1ba9a0ae61e955c7d985300b0024084219d407b457b46b99bcca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_mayer, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  3 18:34:40 compute-0 podman[406451]: 2025-12-03 18:34:40.483990504 +0000 UTC m=+0.289896409 container attach ae29146a937a1ba9a0ae61e955c7d985300b0024084219d407b457b46b99bcca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_mayer, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:34:40 compute-0 practical_mayer[406467]: 167 167
Dec  3 18:34:40 compute-0 systemd[1]: libpod-ae29146a937a1ba9a0ae61e955c7d985300b0024084219d407b457b46b99bcca.scope: Deactivated successfully.
Dec  3 18:34:40 compute-0 podman[406451]: 2025-12-03 18:34:40.489368415 +0000 UTC m=+0.295274260 container died ae29146a937a1ba9a0ae61e955c7d985300b0024084219d407b457b46b99bcca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_mayer, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  3 18:34:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-8601be476bb7b4207c8f70d453e9752bd040d7cdbb5e99644a5d41422468940e-merged.mount: Deactivated successfully.
Dec  3 18:34:40 compute-0 podman[406451]: 2025-12-03 18:34:40.567617783 +0000 UTC m=+0.373523598 container remove ae29146a937a1ba9a0ae61e955c7d985300b0024084219d407b457b46b99bcca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_mayer, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  3 18:34:40 compute-0 systemd[1]: libpod-conmon-ae29146a937a1ba9a0ae61e955c7d985300b0024084219d407b457b46b99bcca.scope: Deactivated successfully.
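The practical_mayer run above shows the short-lived helper pattern used throughout this section: a container is created, initialized, started, attached, exits within milliseconds, and is removed, and the payload is the single line it printed ("167 167", most likely the uid/gid of the ceph user probed inside the image, though the log does not say so). Below is a minimal sketch for reconstructing these lifecycles from journal lines like the ones above; the regex is fitted to the event format visible here, not a podman-defined schema:

    import re
    from collections import defaultdict

    # Fits podman journal events of the form seen above:
    #   "... podman[PID]: <ts> container <event> <64-hex id> (image=..., name=..., ...)"
    EVENT_RE = re.compile(
        r"container (?P<event>create|init|start|attach|died|remove) "
        r"(?P<cid>[0-9a-f]{64}) \(image=(?P<image>[^,]+), name=(?P<name>[^,)]+)"
    )

    def lifecycles(journal_lines):
        """Group podman container events by container ID, in log order."""
        runs = defaultdict(list)
        for line in journal_lines:
            m = EVENT_RE.search(line)
            if m:
                runs[m.group("cid")].append((m.group("event"), m.group("name")))
        return runs

Fed the lines above, the entry for ae29146a937a... comes out as create, init, start, attach, died, remove under the name practical_mayer.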
Dec  3 18:34:40 compute-0 podman[406488]: 2025-12-03 18:34:40.838012135 +0000 UTC m=+0.075813559 container create 05f50282fa146b235f5efabb5a92c27ced1acfa4e88c6eb9a8c4ff17c71364d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_bose, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec  3 18:34:40 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1055: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:34:40 compute-0 podman[406488]: 2025-12-03 18:34:40.817503755 +0000 UTC m=+0.055305209 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:34:40 compute-0 systemd[1]: Started libpod-conmon-05f50282fa146b235f5efabb5a92c27ced1acfa4e88c6eb9a8c4ff17c71364d7.scope.
Dec  3 18:34:40 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:34:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8783dc35096780de2265ec01944c2bbf851318120ce8b51aee24743e0cb9e7b8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:34:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8783dc35096780de2265ec01944c2bbf851318120ce8b51aee24743e0cb9e7b8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:34:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8783dc35096780de2265ec01944c2bbf851318120ce8b51aee24743e0cb9e7b8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:34:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8783dc35096780de2265ec01944c2bbf851318120ce8b51aee24743e0cb9e7b8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:34:41 compute-0 podman[406488]: 2025-12-03 18:34:41.022022121 +0000 UTC m=+0.259823575 container init 05f50282fa146b235f5efabb5a92c27ced1acfa4e88c6eb9a8c4ff17c71364d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_bose, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:34:41 compute-0 podman[406488]: 2025-12-03 18:34:41.07076517 +0000 UTC m=+0.308566604 container start 05f50282fa146b235f5efabb5a92c27ced1acfa4e88c6eb9a8c4ff17c71364d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:34:41 compute-0 podman[406488]: 2025-12-03 18:34:41.078215701 +0000 UTC m=+0.316017165 container attach 05f50282fa146b235f5efabb5a92c27ced1acfa4e88c6eb9a8c4ff17c71364d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]: {
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:    "0": [
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:        {
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:            "devices": [
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:                "/dev/loop3"
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:            ],
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:            "lv_name": "ceph_lv0",
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:            "lv_size": "21470642176",
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=973fbbc8-5aff-4a53-bee8-42e5a6788dd6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:            "lv_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:            "name": "ceph_lv0",
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:            "tags": {
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:                "ceph.block_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:                "ceph.cluster_name": "ceph",
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:                "ceph.crush_device_class": "",
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:                "ceph.encrypted": "0",
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:                "ceph.osd_fsid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:                "ceph.osd_id": "0",
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:                "ceph.type": "block",
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:                "ceph.vdo": "0"
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:            },
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:            "type": "block",
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:            "vg_name": "ceph_vg0"
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:        }
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:    ],
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:    "1": [
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:        {
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:            "devices": [
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:                "/dev/loop4"
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:            ],
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:            "lv_name": "ceph_lv1",
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:            "lv_size": "21470642176",
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1e2b0083-5293-47cb-a3d1-bc27cedc4ede,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:            "lv_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:            "name": "ceph_lv1",
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:            "tags": {
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:                "ceph.block_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:                "ceph.cluster_name": "ceph",
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:                "ceph.crush_device_class": "",
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:                "ceph.encrypted": "0",
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:                "ceph.osd_fsid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:                "ceph.osd_id": "1",
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:                "ceph.type": "block",
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:                "ceph.vdo": "0"
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:            },
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:            "type": "block",
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:            "vg_name": "ceph_vg1"
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:        }
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:    ],
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:    "2": [
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:        {
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:            "devices": [
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:                "/dev/loop5"
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:            ],
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:            "lv_name": "ceph_lv2",
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:            "lv_size": "21470642176",
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2abec9de-afba-437e-9a17-384a1dd8cd50,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:            "lv_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:            "name": "ceph_lv2",
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:            "tags": {
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:                "ceph.block_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:                "ceph.cluster_name": "ceph",
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:                "ceph.crush_device_class": "",
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:                "ceph.encrypted": "0",
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:                "ceph.osd_fsid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:                "ceph.osd_id": "2",
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:                "ceph.type": "block",
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:                "ceph.vdo": "0"
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:            },
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:            "type": "block",
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:            "vg_name": "ceph_vg2"
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:        }
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]:    ]
Dec  3 18:34:41 compute-0 flamboyant_bose[406504]: }
Dec  3 18:34:41 compute-0 podman[406488]: 2025-12-03 18:34:41.92675841 +0000 UTC m=+1.164559844 container died 05f50282fa146b235f5efabb5a92c27ced1acfa4e88c6eb9a8c4ff17c71364d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_bose, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec  3 18:34:41 compute-0 systemd[1]: libpod-05f50282fa146b235f5efabb5a92c27ced1acfa4e88c6eb9a8c4ff17c71364d7.scope: Deactivated successfully.
Dec  3 18:34:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-8783dc35096780de2265ec01944c2bbf851318120ce8b51aee24743e0cb9e7b8-merged.mount: Deactivated successfully.
Dec  3 18:34:41 compute-0 podman[406488]: 2025-12-03 18:34:41.994314507 +0000 UTC m=+1.232115931 container remove 05f50282fa146b235f5efabb5a92c27ced1acfa4e88c6eb9a8c4ff17c71364d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  3 18:34:42 compute-0 systemd[1]: libpod-conmon-05f50282fa146b235f5efabb5a92c27ced1acfa4e88c6eb9a8c4ff17c71364d7.scope: Deactivated successfully.
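The JSON block printed by flamboyant_bose has the shape of `ceph-volume lvm list --format json` output (top-level keys are OSD ids, each holding a list of logical volumes with their ceph.* tags); that reading is inferred from the fields, not stated in the log. Assuming that format, a small helper to turn the payload (with the syslog prefixes stripped) into an osd_id-to-device map:

    import json

    def osd_devices(lvm_list_json: str) -> dict:
        """Map OSD id -> backing LV and devices from `ceph-volume lvm list
        --format json`-style output (format assumed from the log above)."""
        result = {}
        for osd_id, lvs in json.loads(lvm_list_json).items():
            for lv in lvs:
                result[int(osd_id)] = {
                    "lv_path": lv["lv_path"],
                    "devices": lv["devices"],
                    "osd_fsid": lv.get("tags", {}).get("ceph.osd_fsid"),
                }
        return result

Against the payload above this yields OSD 0 on /dev/ceph_vg0/ceph_lv0 (/dev/loop3), OSD 1 on /dev/ceph_vg1/ceph_lv1 (/dev/loop4), and OSD 2 on /dev/ceph_vg2/ceph_lv2 (/dev/loop5).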
Dec  3 18:34:42 compute-0 podman[406523]: 2025-12-03 18:34:42.124332028 +0000 UTC m=+0.089691909 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, release=1214.1726694543, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, maintainer=Red Hat, Inc., config_id=edpm, vcs-type=git, architecture=x86_64, managed_by=edpm_ansible, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, build-date=2024-09-18T21:23:30, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, com.redhat.component=ubi9-container)
Dec  3 18:34:42 compute-0 podman[406524]: 2025-12-03 18:34:42.146833946 +0000 UTC m=+0.112486084 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2)
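The two health_status events above carry each container's whole definition in a config_data label. The value is a Python-literal dict (single quotes, bare True), so json.loads rejects it; a balanced-brace scan plus ast.literal_eval recovers it. The extractor below is a sketch fitted to these lines, not a podman API:

    import ast

    def extract_config(line: str) -> dict:
        """Pull the config_data={...} label out of a podman health_status
        journal line; the value is a Python repr, hence ast.literal_eval."""
        start = line.index("config_data=") + len("config_data=")
        depth, i = 0, start
        while True:  # scan forward to the brace matching the opening '{'
            if line[i] == "{":
                depth += 1
            elif line[i] == "}":
                depth -= 1
                if depth == 0:
                    break
            i += 1
        return ast.literal_eval(line[start:i + 1])

For the kepler line above, extract_config(line)["healthcheck"]["test"] gives '/openstack/healthcheck kepler'.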
Dec  3 18:34:42 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1056: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:34:42 compute-0 podman[406702]: 2025-12-03 18:34:42.956314622 +0000 UTC m=+0.087477054 container create e65665b7505e4d9a061a8ab80f360836b32ee37216d796477fae9095cc887485 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_northcutt, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec  3 18:34:43 compute-0 podman[406702]: 2025-12-03 18:34:42.929039107 +0000 UTC m=+0.060201549 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:34:43 compute-0 systemd[1]: Started libpod-conmon-e65665b7505e4d9a061a8ab80f360836b32ee37216d796477fae9095cc887485.scope.
Dec  3 18:34:43 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:34:43 compute-0 podman[406702]: 2025-12-03 18:34:43.095092096 +0000 UTC m=+0.226254568 container init e65665b7505e4d9a061a8ab80f360836b32ee37216d796477fae9095cc887485 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec  3 18:34:43 compute-0 podman[406702]: 2025-12-03 18:34:43.117227636 +0000 UTC m=+0.248390068 container start e65665b7505e4d9a061a8ab80f360836b32ee37216d796477fae9095cc887485 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:34:43 compute-0 podman[406702]: 2025-12-03 18:34:43.123866917 +0000 UTC m=+0.255029349 container attach e65665b7505e4d9a061a8ab80f360836b32ee37216d796477fae9095cc887485 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_northcutt, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:34:43 compute-0 zealous_northcutt[406718]: 167 167
Dec  3 18:34:43 compute-0 systemd[1]: libpod-e65665b7505e4d9a061a8ab80f360836b32ee37216d796477fae9095cc887485.scope: Deactivated successfully.
Dec  3 18:34:43 compute-0 podman[406702]: 2025-12-03 18:34:43.128064249 +0000 UTC m=+0.259226681 container died e65665b7505e4d9a061a8ab80f360836b32ee37216d796477fae9095cc887485 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:34:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c38dc8f4c5509923cdb6550add2e7aa9524e2012b2d0bfad8d5e985efae4bf6-merged.mount: Deactivated successfully.
Dec  3 18:34:43 compute-0 podman[406702]: 2025-12-03 18:34:43.212270523 +0000 UTC m=+0.343432945 container remove e65665b7505e4d9a061a8ab80f360836b32ee37216d796477fae9095cc887485 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_northcutt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:34:43 compute-0 systemd[1]: libpod-conmon-e65665b7505e4d9a061a8ab80f360836b32ee37216d796477fae9095cc887485.scope: Deactivated successfully.
Dec  3 18:34:43 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:34:43 compute-0 podman[406741]: 2025-12-03 18:34:43.576150364 +0000 UTC m=+0.097961840 container create 30c4feb125c76fb16cd4db7e7a0b3efe021c7c66bb5007d509f2444196c8a428 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_maxwell, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  3 18:34:43 compute-0 podman[406741]: 2025-12-03 18:34:43.539822358 +0000 UTC m=+0.061633884 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:34:43 compute-0 systemd[1]: Started libpod-conmon-30c4feb125c76fb16cd4db7e7a0b3efe021c7c66bb5007d509f2444196c8a428.scope.
Dec  3 18:34:43 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:34:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00d32582e31c3e24b135dfcfe0e0ecc38af5a7a188a0542fdf2c894a1d79a92a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:34:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00d32582e31c3e24b135dfcfe0e0ecc38af5a7a188a0542fdf2c894a1d79a92a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:34:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00d32582e31c3e24b135dfcfe0e0ecc38af5a7a188a0542fdf2c894a1d79a92a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:34:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00d32582e31c3e24b135dfcfe0e0ecc38af5a7a188a0542fdf2c894a1d79a92a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
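The xfs warnings above fire when a filesystem with 32-bit inode timestamps is (re)mounted: 0x7fffffff is the largest signed 32-bit epoch second, i.e. the y2038 limit the kernel is quoting. A quick check of what that instant actually is:

    from datetime import datetime, timezone

    # 0x7fffffff = 2147483647 = largest signed 32-bit epoch second
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00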
Dec  3 18:34:43 compute-0 podman[406741]: 2025-12-03 18:34:43.776192301 +0000 UTC m=+0.298003827 container init 30c4feb125c76fb16cd4db7e7a0b3efe021c7c66bb5007d509f2444196c8a428 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_maxwell, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Dec  3 18:34:43 compute-0 podman[406741]: 2025-12-03 18:34:43.793681998 +0000 UTC m=+0.315493464 container start 30c4feb125c76fb16cd4db7e7a0b3efe021c7c66bb5007d509f2444196c8a428 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_maxwell, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Dec  3 18:34:43 compute-0 podman[406741]: 2025-12-03 18:34:43.800679599 +0000 UTC m=+0.322491055 container attach 30c4feb125c76fb16cd4db7e7a0b3efe021c7c66bb5007d509f2444196c8a428 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_maxwell, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:34:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:34:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:34:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:34:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:34:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:34:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:34:44 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1057: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:34:44 compute-0 competent_maxwell[406756]: {
Dec  3 18:34:44 compute-0 competent_maxwell[406756]:    "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
Dec  3 18:34:44 compute-0 competent_maxwell[406756]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:34:44 compute-0 competent_maxwell[406756]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 18:34:44 compute-0 competent_maxwell[406756]:        "osd_id": 1,
Dec  3 18:34:44 compute-0 competent_maxwell[406756]:        "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:34:44 compute-0 competent_maxwell[406756]:        "type": "bluestore"
Dec  3 18:34:44 compute-0 competent_maxwell[406756]:    },
Dec  3 18:34:44 compute-0 competent_maxwell[406756]:    "2abec9de-afba-437e-9a17-384a1dd8cd50": {
Dec  3 18:34:44 compute-0 competent_maxwell[406756]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:34:44 compute-0 competent_maxwell[406756]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 18:34:44 compute-0 competent_maxwell[406756]:        "osd_id": 2,
Dec  3 18:34:44 compute-0 competent_maxwell[406756]:        "osd_uuid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:34:44 compute-0 competent_maxwell[406756]:        "type": "bluestore"
Dec  3 18:34:44 compute-0 competent_maxwell[406756]:    },
Dec  3 18:34:44 compute-0 competent_maxwell[406756]:    "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": {
Dec  3 18:34:44 compute-0 competent_maxwell[406756]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:34:44 compute-0 competent_maxwell[406756]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 18:34:44 compute-0 competent_maxwell[406756]:        "osd_id": 0,
Dec  3 18:34:44 compute-0 competent_maxwell[406756]:        "osd_uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:34:44 compute-0 competent_maxwell[406756]:        "type": "bluestore"
Dec  3 18:34:44 compute-0 competent_maxwell[406756]:    }
Dec  3 18:34:44 compute-0 competent_maxwell[406756]: }
Dec  3 18:34:44 compute-0 systemd[1]: libpod-30c4feb125c76fb16cd4db7e7a0b3efe021c7c66bb5007d509f2444196c8a428.scope: Deactivated successfully.
Dec  3 18:34:44 compute-0 podman[406741]: 2025-12-03 18:34:44.988237773 +0000 UTC m=+1.510049249 container died 30c4feb125c76fb16cd4db7e7a0b3efe021c7c66bb5007d509f2444196c8a428 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_maxwell, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True)
Dec  3 18:34:44 compute-0 systemd[1]: libpod-30c4feb125c76fb16cd4db7e7a0b3efe021c7c66bb5007d509f2444196c8a428.scope: Consumed 1.198s CPU time.
Dec  3 18:34:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-00d32582e31c3e24b135dfcfe0e0ecc38af5a7a188a0542fdf2c894a1d79a92a-merged.mount: Deactivated successfully.
Dec  3 18:34:45 compute-0 podman[406741]: 2025-12-03 18:34:45.091246735 +0000 UTC m=+1.613058181 container remove 30c4feb125c76fb16cd4db7e7a0b3efe021c7c66bb5007d509f2444196c8a428 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:34:45 compute-0 systemd[1]: libpod-conmon-30c4feb125c76fb16cd4db7e7a0b3efe021c7c66bb5007d509f2444196c8a428.scope: Deactivated successfully.
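competent_maxwell's JSON looks like `ceph-volume raw list` output: one entry per osd_uuid carrying the cluster fsid, bluestore device and osd id (again inferred from the fields, not stated in the log). Assuming that shape, a short consistency check that the reported OSDs all belong to this cluster:

    import json

    EXPECTED_FSID = "c1caf3ba-b2a5-5005-a11e-e955c344dccc"  # from the lv_tags above

    def check_raw_list(raw_json: str, fsid: str = EXPECTED_FSID) -> None:
        """Validate `ceph-volume raw list`-style output (format assumed)."""
        entries = json.loads(raw_json)
        osd_ids = [e["osd_id"] for e in entries.values()]
        assert len(osd_ids) == len(set(osd_ids)), "duplicate osd_id"
        for uuid, e in entries.items():
            assert e["osd_uuid"] == uuid, "key/osd_uuid mismatch"
            assert e["ceph_fsid"] == fsid, f"OSD {e['osd_id']} has foreign fsid"
            assert e["type"] == "bluestore"

On the payload above this passes for OSDs 0, 1 and 2, matching the lvm listing printed earlier by flamboyant_bose.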
Dec  3 18:34:45 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:34:45 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:34:45 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:34:45 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:34:45 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev f5a8f8f7-d26e-4c3b-b48d-9d3dfb6252d9 does not exist
Dec  3 18:34:45 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 9a4317f6-ccb2-4498-b908-fe6e3979c7fb does not exist
Dec  3 18:34:46 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:34:46 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:34:46 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1058: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:34:48 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:34:48 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1059: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:34:49 compute-0 podman[406849]: 2025-12-03 18:34:49.000585739 +0000 UTC m=+0.158376302 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 18:34:50 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1060: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:34:52 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1061: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:34:53 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:34:53 compute-0 podman[406872]: 2025-12-03 18:34:53.983828337 +0000 UTC m=+0.137303273 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Dec  3 18:34:54 compute-0 podman[406871]: 2025-12-03 18:34:54.01226291 +0000 UTC m=+0.182679689 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec  3 18:34:54 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1062: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:34:56 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1063: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:34:58 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:34:58 compute-0 nova_compute[348325]: 2025-12-03 18:34:58.647 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:34:58 compute-0 nova_compute[348325]: 2025-12-03 18:34:58.648 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:34:58 compute-0 nova_compute[348325]: 2025-12-03 18:34:58.648 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:34:58 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1064: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:34:59 compute-0 nova_compute[348325]: 2025-12-03 18:34:59.478 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:34:59 compute-0 nova_compute[348325]: 2025-12-03 18:34:59.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:34:59 compute-0 nova_compute[348325]: 2025-12-03 18:34:59.486 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  3 18:34:59 compute-0 nova_compute[348325]: 2025-12-03 18:34:59.486 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  3 18:34:59 compute-0 nova_compute[348325]: 2025-12-03 18:34:59.515 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  3 18:34:59 compute-0 podman[158200]: time="2025-12-03T18:34:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:34:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:34:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42578 "" "Go-http-client/1.1"
Dec  3 18:34:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:34:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8127 "" "Go-http-client/1.1"
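The two GET lines are the libpod REST API being scraped through the podman socket; given the podman_exporter config shown earlier (CONTAINER_HOST=unix:///run/podman/podman.sock) and the Go-http-client user agent, the exporter is the likely caller. A stdlib-only sketch issuing the same query; the socket path and API version are copied from this log and may differ on another host:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client over an AF_UNIX socket instead of TCP."""
        def __init__(self, path: str):
            super().__init__("localhost")
            self._path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")  # needs access to the socket
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print([c.get("Names") for c in containers])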
Dec  3 18:35:00 compute-0 nova_compute[348325]: 2025-12-03 18:35:00.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:35:00 compute-0 nova_compute[348325]: 2025-12-03 18:35:00.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:35:00 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1065: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:35:01 compute-0 openstack_network_exporter[365222]: ERROR   18:35:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:35:01 compute-0 openstack_network_exporter[365222]: ERROR   18:35:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:35:01 compute-0 openstack_network_exporter[365222]: ERROR   18:35:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:35:01 compute-0 openstack_network_exporter[365222]: ERROR   18:35:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:35:01 compute-0 openstack_network_exporter[365222]: ERROR   18:35:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:35:02 compute-0 nova_compute[348325]: 2025-12-03 18:35:02.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:35:02 compute-0 nova_compute[348325]: 2025-12-03 18:35:02.486 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  3 18:35:02 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1066: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:35:03 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:35:04 compute-0 nova_compute[348325]: 2025-12-03 18:35:04.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:35:04 compute-0 nova_compute[348325]: 2025-12-03 18:35:04.515 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:35:04 compute-0 nova_compute[348325]: 2025-12-03 18:35:04.515 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:35:04 compute-0 nova_compute[348325]: 2025-12-03 18:35:04.516 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:35:04 compute-0 nova_compute[348325]: 2025-12-03 18:35:04.516 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  3 18:35:04 compute-0 nova_compute[348325]: 2025-12-03 18:35:04.517 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 18:35:04 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1067: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:35:04 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:35:04 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2584446522' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:35:04 compute-0 nova_compute[348325]: 2025-12-03 18:35:04.988 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 18:35:05 compute-0 nova_compute[348325]: 2025-12-03 18:35:05.377 348329 WARNING nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  3 18:35:05 compute-0 nova_compute[348325]: 2025-12-03 18:35:05.378 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4570MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  3 18:35:05 compute-0 nova_compute[348325]: 2025-12-03 18:35:05.378 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:35:05 compute-0 nova_compute[348325]: 2025-12-03 18:35:05.379 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:35:05 compute-0 nova_compute[348325]: 2025-12-03 18:35:05.445 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  3 18:35:05 compute-0 nova_compute[348325]: 2025-12-03 18:35:05.445 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  3 18:35:05 compute-0 nova_compute[348325]: 2025-12-03 18:35:05.462 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 18:35:05 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:35:05 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2344992517' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:35:05 compute-0 nova_compute[348325]: 2025-12-03 18:35:05.998 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 18:35:06 compute-0 nova_compute[348325]: 2025-12-03 18:35:06.006 348329 DEBUG nova.compute.provider_tree [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  3 18:35:06 compute-0 nova_compute[348325]: 2025-12-03 18:35:06.024 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  3 18:35:06 compute-0 nova_compute[348325]: 2025-12-03 18:35:06.026 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  3 18:35:06 compute-0 nova_compute[348325]: 2025-12-03 18:35:06.027 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:35:06 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1068: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:35:06 compute-0 podman[406959]: 2025-12-03 18:35:06.940799835 +0000 UTC m=+0.098143997 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  3 18:35:06 compute-0 podman[406960]: 2025-12-03 18:35:06.957911922 +0000 UTC m=+0.116817622 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  3 18:35:07 compute-0 podman[406961]: 2025-12-03 18:35:07.008175649 +0000 UTC m=+0.155192339 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, managed_by=edpm_ansible, architecture=x86_64, name=ubi9-minimal, vendor=Red Hat, Inc., release=1755695350, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, version=9.6, config_id=edpm)
Dec  3 18:35:08 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Dec  3 18:35:08 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Dec  3 18:35:08 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Dec  3 18:35:08 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:35:08 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1070: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:35:09 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Dec  3 18:35:09 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Dec  3 18:35:09 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Dec  3 18:35:09 compute-0 podman[407017]: 2025-12-03 18:35:09.915290356 +0000 UTC m=+0.082281759 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:35:10 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Dec  3 18:35:10 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Dec  3 18:35:10 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Dec  3 18:35:10 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1073: 321 pgs: 321 active+clean; 8.4 MiB data, 148 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 1.3 MiB/s wr, 1 op/s
Dec  3 18:35:12 compute-0 podman[407035]: 2025-12-03 18:35:12.901038133 +0000 UTC m=+0.071811134 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, vcs-type=git, com.redhat.component=ubi9-container, container_name=kepler, io.buildah.version=1.29.0, io.openshift.expose-services=, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=)
Dec  3 18:35:12 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1074: 321 pgs: 321 active+clean; 16 MiB data, 156 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 2.6 MiB/s wr, 18 op/s
Dec  3 18:35:12 compute-0 podman[407036]: 2025-12-03 18:35:12.922742212 +0000 UTC m=+0.087354242 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_managed=true, container_name=ceilometer_agent_ipmi)
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.245 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.246 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.246 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.247 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7eff8d7fffe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.248 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.249 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff9026f920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.250 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.251 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.251 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffa10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.252 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8daba2d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.252 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.253 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff90799b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.254 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.250 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.255 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7eff8d8a80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.255 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.255 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7eff8d8a8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.255 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.256 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7eff8d8a8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.256 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.256 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7eff8d8a81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.256 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.256 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7eff8d7ff9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.257 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.257 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7eff8d7fe840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.257 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.257 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7eff8d8a82c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.257 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.258 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7eff8d7ff9b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.258 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.258 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7eff8d8a8350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.260 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.254 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8f46ebd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets.drop': [], 'memory.usage': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.261 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets.drop': [], 'memory.usage': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.261 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets.drop': [], 'memory.usage': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.261 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets.drop': [], 'memory.usage': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.262 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets.drop': [], 'memory.usage': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.262 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets.drop': [], 'memory.usage': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.262 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets.drop': [], 'memory.usage': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.262 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets.drop': [], 'memory.usage': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.262 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets.drop': [], 'memory.usage': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.262 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets.drop': [], 'memory.usage': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.263 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets.drop': [], 'memory.usage': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.263 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets.drop': [], 'memory.usage': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.261 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7eff8f682330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.263 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7fff50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets.drop': [], 'memory.usage': [], 'network.outgoing.packets.error': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.263 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets.drop': [], 'memory.usage': [], 'network.outgoing.packets.error': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.264 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7fffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets.drop': [], 'memory.usage': [], 'network.outgoing.packets.error': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.264 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8ef7c7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets.drop': [], 'memory.usage': [], 'network.outgoing.packets.error': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.263 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.265 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7eff8d7ff4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.265 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.265 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7eff8d930c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.265 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.265 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7eff8d7ff4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.266 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.266 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7eff8d7ff530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.266 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.266 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7eff8d7ff590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.266 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.266 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7eff8d7ff5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.267 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.267 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7eff8d8a8620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.267 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.267 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7eff8d7ff650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.267 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.267 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7eff8d7ff6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.268 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.268 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7eff8d7ffa40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.268 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.268 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7eff8d7ff710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.268 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.268 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7eff8d7fff20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.269 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.269 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7eff8d7ff770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.269 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.269 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7eff8d7fff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.269 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.269 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7eff8d7fdac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.270 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.270 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.270 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.270 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.270 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.270 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.270 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.270 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.270 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.270 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.271 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.271 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.271 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.271 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.271 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.271 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.271 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.271 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.271 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.271 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.271 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.271 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.271 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.271 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.272 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.272 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:35:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:35:13.272 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
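The cycle above is the agent's steady state on an idle compute node: each pollster runs discovery through the local_instances method, and because discovery returns no instances the pollster is skipped and then marked finished. A minimal sketch of that skip-on-empty-discovery loop, using illustrative names rather than ceilometer's real classes:

```python
# Sketch of the skip-on-empty-discovery pattern visible in the log above.
# Pollster and discover_local_instances are illustrative, not ceilometer's API.
import logging

logging.basicConfig(level=logging.DEBUG, format="%(message)s")
LOG = logging.getLogger("polling.sketch")

def discover_local_instances():
    # This host runs no instances, so discovery yields nothing,
    # which is exactly the "Skip pollster" case logged above.
    return []

class Pollster:
    def __init__(self, name):
        self.name = name

    def get_samples(self, resources):
        # A real pollster would emit one sample per discovered resource.
        yield from ()

def run_cycle(pollsters):
    resources = discover_local_instances()
    for pollster in pollsters:
        if not resources:
            LOG.debug("Skip pollster %s, no resources found this cycle",
                      pollster.name)
            continue
        for sample in pollster.get_samples(resources):
            LOG.debug("Sample from %s: %r", pollster.name, sample)
        LOG.debug("Finished processing pollster [%s].", pollster.name)

run_cycle([Pollster("cpu"), Pollster("memory.usage")])
```

Against an empty discovery result this prints one "Skip pollster" line per pollster, matching the shape of the cycle above.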
Dec  3 18:35:13 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:35:13 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Dec  3 18:35:13 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Dec  3 18:35:13 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Dec  3 18:35:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_18:35:13
Dec  3 18:35:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 18:35:13 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 18:35:13 compute-0 ceph-mgr[193091]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.log', '.rgw.root', 'backups', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'images', 'volumes', 'default.rgw.meta', 'vms']
Dec  3 18:35:13 compute-0 ceph-mgr[193091]: [balancer INFO root] prepared 0/10 changes
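Above, the mgr balancer ran one upmap pass (max misplaced 0.05) across all eleven pools and prepared 0/10 changes, meaning the cluster is already balanced. Assuming the ceph CLI and an admin keyring are available on the host, the same state can be read back; the keys shown are the usual balancer status fields:

```python
# Sketch: read back the balancer state the log lines above report.
# Assumes the ceph CLI and an admin keyring are present on this host.
import json
import subprocess

def ceph_json(*args):
    out = subprocess.run(
        ["ceph", *args, "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    return json.loads(out)

status = ceph_json("balancer", "status")
print(status.get("active"), status.get("mode"))  # e.g. True upmap
```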
Dec  3 18:35:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:35:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:35:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:35:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:35:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:35:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:35:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 18:35:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:35:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 18:35:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:35:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:35:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:35:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:35:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:35:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:35:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:35:14 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1076: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 2.6 MiB/s wr, 18 op/s
Dec  3 18:35:16 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1077: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 8.9 KiB/s rd, 2.0 MiB/s wr, 14 op/s
Dec  3 18:35:18 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:35:18 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1078: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 7.8 KiB/s rd, 876 KiB/s wr, 11 op/s
Dec  3 18:35:19 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:35:19.602 286999 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5a:63:53', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '8e:79:bd:f4:48:1d'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  3 18:35:19 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:35:19.603 286999 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec  3 18:35:19 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:35:19.608 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1ac9fd0d-196b-4ea8-9a9a-8aa831092805, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  3 18:35:20 compute-0 podman[407073]: 2025-12-03 18:35:20.007078067 +0000 UTC m=+0.164772663 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 18:35:20 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1079: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 6.9 KiB/s rd, 774 KiB/s wr, 10 op/s
Dec  3 18:35:22 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1080: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 102 B/s wr, 0 op/s
Dec  3 18:35:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:35:23.328 286999 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:35:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:35:23.329 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:35:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:35:23.329 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:35:23 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:35:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 18:35:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:35:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 18:35:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:35:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:35:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:35:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:35:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:35:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:35:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:35:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec  3 18:35:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:35:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 18:35:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:35:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:35:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:35:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 18:35:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:35:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 18:35:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:35:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:35:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:35:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
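The per-pool autoscaler lines above follow a simple shape: pg target is the pool's share of raw capacity times its bias times the overall PG budget (mon_target_pg_per_osd, default 100, times the 3 up OSDs), then quantized to a power of two and clamped by per-pool floors. For '.mgr': 7.185749983720779e-06 * 1.0 * 300 = 0.0021557249951162337, exactly as logged. A worked check under those assumptions:

```python
# Worked check of the pg_autoscaler lines above, assuming the default
# mon_target_pg_per_osd = 100 and the 3 up OSDs from the osdmap.
TARGET_PG_PER_OSD = 100
OSD_COUNT = 3

def pg_target(capacity_ratio, bias):
    return capacity_ratio * bias * TARGET_PG_PER_OSD * OSD_COUNT

def quantize_pow2(target, minimum=1):
    # The real autoscaler rounds to a power of two and applies per-pool
    # minimums; this reproduces only that rounding step.
    n = max(int(round(target)), minimum)
    p = 1
    while p < n:
        p *= 2
    return p

print(pg_target(7.185749983720779e-06, 1.0))  # ~0.0021557, as logged for '.mgr'
print(pg_target(5.087256625643029e-07, 4.0))  # ~0.0006105, as logged for cephfs meta
print(quantize_pow2(0.076, minimum=32))       # 'images' stays at its 32 PG floor
```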
Dec  3 18:35:24 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1081: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 88 B/s wr, 0 op/s
Dec  3 18:35:24 compute-0 podman[407097]: 2025-12-03 18:35:24.958877562 +0000 UTC m=+0.109360362 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm)
Dec  3 18:35:24 compute-0 podman[407096]: 2025-12-03 18:35:24.982648812 +0000 UTC m=+0.151541221 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Dec  3 18:35:26 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1082: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:35:28 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:35:28 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1083: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:35:29 compute-0 podman[158200]: time="2025-12-03T18:35:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:35:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:35:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42578 "" "Go-http-client/1.1"
Dec  3 18:35:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:35:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8131 "" "Go-http-client/1.1"
Dec  3 18:35:30 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1084: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:35:31 compute-0 openstack_network_exporter[365222]: ERROR   18:35:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:35:31 compute-0 openstack_network_exporter[365222]: ERROR   18:35:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:35:31 compute-0 openstack_network_exporter[365222]: ERROR   18:35:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:35:31 compute-0 openstack_network_exporter[365222]: ERROR   18:35:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:35:31 compute-0 openstack_network_exporter[365222]: ERROR   18:35:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
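These exporter errors are expected on a compute node: ovn-northd does not run here, and no userspace (netdev/DPDK) datapath is configured, so the control sockets the exporter probes are absent. A quick check for those sockets, assuming the default rundirs (adjust the patterns if the deployment relocates them):

```python
# Sketch: look for the control sockets the exporter probes for.
# /var/run/openvswitch and /var/run/ovn are typical default rundirs.
import glob

for pattern in ("/var/run/openvswitch/*.ctl", "/var/run/ovn/*.ctl"):
    hits = glob.glob(pattern)
    print(pattern, "->", hits or "no control socket files found")
```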
Dec  3 18:35:32 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1085: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:35:33 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:35:34 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1086: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:35:36 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1087: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:35:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 18:35:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3173843790' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 18:35:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 18:35:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3173843790' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
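The audit entries above show client.openstack (192.168.122.10) polling cluster capacity and the 'volumes' pool quota in JSON, the usual Cinder capacity check. The same two queries can be issued directly; this assumes the ceph CLI and a usable keyring on the host:

```python
# Sketch of the two queries the audit log shows client.openstack issuing.
import json
import subprocess

def ceph_json(*args):
    out = subprocess.run(
        ["ceph", *args, "-f", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    return json.loads(out)

df = ceph_json("df")
quota = ceph_json("osd", "pool", "get-quota", "volumes")
print(df["stats"]["total_bytes"], quota)
```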
Dec  3 18:35:37 compute-0 podman[407140]: 2025-12-03 18:35:37.960222153 +0000 UTC m=+0.113723777 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 18:35:37 compute-0 podman[407141]: 2025-12-03 18:35:37.995754631 +0000 UTC m=+0.130120937 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, version=9.6, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, vcs-type=git, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, architecture=x86_64, io.buildah.version=1.33.7, distribution-scope=public)
Dec  3 18:35:38 compute-0 podman[407139]: 2025-12-03 18:35:38.000540947 +0000 UTC m=+0.152159725 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  3 18:35:38 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:35:38 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1088: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:35:40 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1089: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:35:40 compute-0 podman[407205]: 2025-12-03 18:35:40.945198072 +0000 UTC m=+0.108298414 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Dec  3 18:35:42 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1090: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:35:43 compute-0 podman[407225]: 2025-12-03 18:35:43.08817574 +0000 UTC m=+0.133310505 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, release=1214.1726694543, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, name=ubi9, build-date=2024-09-18T21:23:30)
Dec  3 18:35:43 compute-0 podman[407245]: 2025-12-03 18:35:43.221257738 +0000 UTC m=+0.099607662 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 18:35:43 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:35:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:35:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:35:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:35:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:35:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:35:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:35:44 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1091: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:35:46 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:35:46 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:35:46 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:35:46 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:35:46 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1092: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:35:47 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:35:47 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:35:48 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:35:48 compute-0 podman[407653]: 2025-12-03 18:35:48.572306786 +0000 UTC m=+0.068501233 container create 328f90b615c0442ed3b8174a3ffd046ecbc426d042c8ad74a8fae22e29ea01c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_gates, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:35:48 compute-0 podman[407653]: 2025-12-03 18:35:48.534056002 +0000 UTC m=+0.030250499 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:35:48 compute-0 systemd[1]: Started libpod-conmon-328f90b615c0442ed3b8174a3ffd046ecbc426d042c8ad74a8fae22e29ea01c5.scope.
Dec  3 18:35:48 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:35:48 compute-0 podman[407653]: 2025-12-03 18:35:48.689891566 +0000 UTC m=+0.186086073 container init 328f90b615c0442ed3b8174a3ffd046ecbc426d042c8ad74a8fae22e29ea01c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_gates, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  3 18:35:48 compute-0 podman[407653]: 2025-12-03 18:35:48.706885581 +0000 UTC m=+0.203079998 container start 328f90b615c0442ed3b8174a3ffd046ecbc426d042c8ad74a8fae22e29ea01c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_gates, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True)
Dec  3 18:35:48 compute-0 podman[407653]: 2025-12-03 18:35:48.711565885 +0000 UTC m=+0.207760342 container attach 328f90b615c0442ed3b8174a3ffd046ecbc426d042c8ad74a8fae22e29ea01c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_gates, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  3 18:35:48 compute-0 jolly_gates[407668]: 167 167
Dec  3 18:35:48 compute-0 systemd[1]: libpod-328f90b615c0442ed3b8174a3ffd046ecbc426d042c8ad74a8fae22e29ea01c5.scope: Deactivated successfully.
Dec  3 18:35:48 compute-0 conmon[407668]: conmon 328f90b615c0442ed3b8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-328f90b615c0442ed3b8174a3ffd046ecbc426d042c8ad74a8fae22e29ea01c5.scope/container/memory.events
Dec  3 18:35:48 compute-0 podman[407653]: 2025-12-03 18:35:48.718499714 +0000 UTC m=+0.214694171 container died 328f90b615c0442ed3b8174a3ffd046ecbc426d042c8ad74a8fae22e29ea01c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_gates, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  3 18:35:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-680bc89bdea4d5e7d670babfb29aaf334b74af7d2e5f1868cf2db0f3a83a23d9-merged.mount: Deactivated successfully.
Dec  3 18:35:48 compute-0 podman[407653]: 2025-12-03 18:35:48.770299918 +0000 UTC m=+0.266494335 container remove 328f90b615c0442ed3b8174a3ffd046ecbc426d042c8ad74a8fae22e29ea01c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_gates, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:35:48 compute-0 systemd[1]: libpod-conmon-328f90b615c0442ed3b8174a3ffd046ecbc426d042c8ad74a8fae22e29ea01c5.scope: Deactivated successfully.
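
[annotation] The jolly_gates container above is one of the short-lived helpers cephadm launches on this host: podman records the full create -> init -> start -> attach -> died -> remove cycle in well under a second, and the conmon "Failed to open cgroups file" warning is a side effect of the scope being torn down before conmon can read memory.events. A minimal sketch for watching these one-shot helpers live, assuming only standard podman CLI behavior (the image filter value is the digest from the log):

    import subprocess

    # Stream podman lifecycle events (create/init/start/attach/died/remove)
    # as JSON lines; the image filter narrows output to the Ceph helpers.
    proc = subprocess.Popen(
        ["podman", "events", "--format", "json",
         "--filter",
         "image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"],
        stdout=subprocess.PIPE,
        text=True,
    )
    for line in proc.stdout:
        print(line.rstrip())
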
Dec  3 18:35:48 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1093: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:35:48 compute-0 podman[407692]: 2025-12-03 18:35:48.993613988 +0000 UTC m=+0.075077052 container create c7eed632a87ec1f60be0f61117dc8a8f48a473d667d85ae629448032f5d6cf71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  3 18:35:49 compute-0 podman[407692]: 2025-12-03 18:35:48.970580117 +0000 UTC m=+0.052043181 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:35:49 compute-0 systemd[1]: Started libpod-conmon-c7eed632a87ec1f60be0f61117dc8a8f48a473d667d85ae629448032f5d6cf71.scope.
Dec  3 18:35:49 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:35:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eb39ac1c36a6412a9ce07f58b0da7992e96c246b882abdc4a4ad62dcbdc7513/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:35:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eb39ac1c36a6412a9ce07f58b0da7992e96c246b882abdc4a4ad62dcbdc7513/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:35:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eb39ac1c36a6412a9ce07f58b0da7992e96c246b882abdc4a4ad62dcbdc7513/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:35:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eb39ac1c36a6412a9ce07f58b0da7992e96c246b882abdc4a4ad62dcbdc7513/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:35:49 compute-0 podman[407692]: 2025-12-03 18:35:49.130603362 +0000 UTC m=+0.212066476 container init c7eed632a87ec1f60be0f61117dc8a8f48a473d667d85ae629448032f5d6cf71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_spence, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:35:49 compute-0 podman[407692]: 2025-12-03 18:35:49.155595162 +0000 UTC m=+0.237058226 container start c7eed632a87ec1f60be0f61117dc8a8f48a473d667d85ae629448032f5d6cf71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_spence, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec  3 18:35:49 compute-0 podman[407692]: 2025-12-03 18:35:49.161118637 +0000 UTC m=+0.242581751 container attach c7eed632a87ec1f60be0f61117dc8a8f48a473d667d85ae629448032f5d6cf71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_spence, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:35:50 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1094: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:35:50 compute-0 podman[408325]: 2025-12-03 18:35:50.957968764 +0000 UTC m=+0.123475814 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
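
[annotation] The health_status event above is podman evaluating the healthcheck embedded in config_data (test: /openstack/healthcheck podman_exporter); health_status=healthy with health_failing_streak=0 means the check is passing and has no recent failures. The same check can be re-run by hand; a sketch, assuming the container name shown in the log:

    import subprocess

    def container_is_healthy(name: str) -> bool:
        # `podman healthcheck run` executes the container's configured test
        # and exits 0 on success, non-zero on failure.
        result = subprocess.run(["podman", "healthcheck", "run", name])
        return result.returncode == 0

    print("healthy" if container_is_healthy("podman_exporter") else "unhealthy")
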
Dec  3 18:35:51 compute-0 kind_spence[407709]: [
Dec  3 18:35:51 compute-0 kind_spence[407709]:    {
Dec  3 18:35:51 compute-0 kind_spence[407709]:        "available": false,
Dec  3 18:35:51 compute-0 kind_spence[407709]:        "ceph_device": false,
Dec  3 18:35:51 compute-0 kind_spence[407709]:        "device_id": "QEMU_DVD-ROM_QM00001",
Dec  3 18:35:51 compute-0 kind_spence[407709]:        "lsm_data": {},
Dec  3 18:35:51 compute-0 kind_spence[407709]:        "lvs": [],
Dec  3 18:35:51 compute-0 kind_spence[407709]:        "path": "/dev/sr0",
Dec  3 18:35:51 compute-0 kind_spence[407709]:        "rejected_reasons": [
Dec  3 18:35:51 compute-0 kind_spence[407709]:            "Has a FileSystem",
Dec  3 18:35:51 compute-0 kind_spence[407709]:            "Insufficient space (<5GB)"
Dec  3 18:35:51 compute-0 kind_spence[407709]:        ],
Dec  3 18:35:51 compute-0 kind_spence[407709]:        "sys_api": {
Dec  3 18:35:51 compute-0 kind_spence[407709]:            "actuators": null,
Dec  3 18:35:51 compute-0 kind_spence[407709]:            "device_nodes": "sr0",
Dec  3 18:35:51 compute-0 kind_spence[407709]:            "devname": "sr0",
Dec  3 18:35:51 compute-0 kind_spence[407709]:            "human_readable_size": "482.00 KB",
Dec  3 18:35:51 compute-0 kind_spence[407709]:            "id_bus": "ata",
Dec  3 18:35:51 compute-0 kind_spence[407709]:            "model": "QEMU DVD-ROM",
Dec  3 18:35:51 compute-0 kind_spence[407709]:            "nr_requests": "2",
Dec  3 18:35:51 compute-0 kind_spence[407709]:            "parent": "/dev/sr0",
Dec  3 18:35:51 compute-0 kind_spence[407709]:            "partitions": {},
Dec  3 18:35:51 compute-0 kind_spence[407709]:            "path": "/dev/sr0",
Dec  3 18:35:51 compute-0 kind_spence[407709]:            "removable": "1",
Dec  3 18:35:51 compute-0 kind_spence[407709]:            "rev": "2.5+",
Dec  3 18:35:51 compute-0 kind_spence[407709]:            "ro": "0",
Dec  3 18:35:51 compute-0 kind_spence[407709]:            "rotational": "1",
Dec  3 18:35:51 compute-0 kind_spence[407709]:            "sas_address": "",
Dec  3 18:35:51 compute-0 kind_spence[407709]:            "sas_device_handle": "",
Dec  3 18:35:51 compute-0 kind_spence[407709]:            "scheduler_mode": "mq-deadline",
Dec  3 18:35:51 compute-0 kind_spence[407709]:            "sectors": 0,
Dec  3 18:35:51 compute-0 kind_spence[407709]:            "sectorsize": "2048",
Dec  3 18:35:51 compute-0 kind_spence[407709]:            "size": 493568.0,
Dec  3 18:35:51 compute-0 kind_spence[407709]:            "support_discard": "2048",
Dec  3 18:35:51 compute-0 kind_spence[407709]:            "type": "disk",
Dec  3 18:35:51 compute-0 kind_spence[407709]:            "vendor": "QEMU"
Dec  3 18:35:51 compute-0 kind_spence[407709]:        }
Dec  3 18:35:51 compute-0 kind_spence[407709]:    }
Dec  3 18:35:51 compute-0 kind_spence[407709]: ]
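
[annotation] The JSON kind_spence printed is a device inventory in the shape produced by `ceph-volume inventory --format json`, which cephadm collects to decide where OSDs can go: /dev/sr0 (the QEMU DVD-ROM) is unusable because it holds a filesystem and is under 5 GB. A small reader for that structure, a sketch grounded in the fields shown above:

    import json

    def summarize_inventory(raw: str) -> None:
        # Each entry carries "available", "path", "rejected_reasons" and a
        # "sys_api" block with size/model details, as in the output above.
        for dev in json.loads(raw):
            if dev["available"]:
                size = dev["sys_api"].get("human_readable_size", "?")
                print(f'{dev["path"]}: available ({size})')
            else:
                reasons = ", ".join(dev["rejected_reasons"]) or "no reason given"
                print(f'{dev["path"]}: rejected ({reasons})')

    # For the record above this prints:
    #   /dev/sr0: rejected (Has a FileSystem, Insufficient space (<5GB))
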
Dec  3 18:35:51 compute-0 systemd[1]: libpod-c7eed632a87ec1f60be0f61117dc8a8f48a473d667d85ae629448032f5d6cf71.scope: Deactivated successfully.
Dec  3 18:35:51 compute-0 systemd[1]: libpod-c7eed632a87ec1f60be0f61117dc8a8f48a473d667d85ae629448032f5d6cf71.scope: Consumed 2.770s CPU time.
Dec  3 18:35:51 compute-0 conmon[407709]: conmon c7eed632a87ec1f60be0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c7eed632a87ec1f60be0f61117dc8a8f48a473d667d85ae629448032f5d6cf71.scope/container/memory.events
Dec  3 18:35:51 compute-0 podman[407692]: 2025-12-03 18:35:51.769425401 +0000 UTC m=+2.850888535 container died c7eed632a87ec1f60be0f61117dc8a8f48a473d667d85ae629448032f5d6cf71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_spence, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:35:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-5eb39ac1c36a6412a9ce07f58b0da7992e96c246b882abdc4a4ad62dcbdc7513-merged.mount: Deactivated successfully.
Dec  3 18:35:51 compute-0 podman[407692]: 2025-12-03 18:35:51.872520348 +0000 UTC m=+2.953983422 container remove c7eed632a87ec1f60be0f61117dc8a8f48a473d667d85ae629448032f5d6cf71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_spence, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  3 18:35:51 compute-0 systemd[1]: libpod-conmon-c7eed632a87ec1f60be0f61117dc8a8f48a473d667d85ae629448032f5d6cf71.scope: Deactivated successfully.
Dec  3 18:35:51 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:35:51 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:35:51 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:35:51 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:35:51 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:35:51 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:35:51 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 18:35:51 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:35:51 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 18:35:51 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:35:51 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 7862eb9c-8f2e-4fa7-baba-ba8131d1e74d does not exist
Dec  3 18:35:51 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 0afd009e-29f2-43a0-9e3e-ea1947476f4d does not exist
Dec  3 18:35:51 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev dfc115ec-fa57-4cee-8bd9-ac168632ec53 does not exist
Dec  3 18:35:51 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 18:35:51 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 18:35:52 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:35:52 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:35:52 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:35:52 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:35:52 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 18:35:52 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:35:52 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:35:52 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
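
[annotation] The burst of mon_command audit lines above is the cephadm mgr module (entity mgr.compute-0.etccde) persisting host and device state under config-key and fetching the credentials and minimal ceph.conf it ships into its helper containers. The equivalent standalone CLI calls, as a sketch (these are standard Ceph commands, but running them needs an admin keyring on the host):

    import subprocess

    for cmd in (
        ["ceph", "config", "generate-minimal-conf"],     # minimal ceph.conf, as audited above
        ["ceph", "auth", "get", "client.bootstrap-osd"], # bootstrap-osd keyring, as audited above
    ):
        out = subprocess.run(cmd, capture_output=True, text=True, check=True)
        print(out.stdout)
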
Dec  3 18:35:52 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1095: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:35:53 compute-0 podman[410295]: 2025-12-03 18:35:53.027075278 +0000 UTC m=+0.069226481 container create 2ae8642f49c23182d2ab44c766a806caabf6a9994a4e6697f93238d6eb006218 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_bhaskara, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  3 18:35:53 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:35:53 compute-0 podman[410295]: 2025-12-03 18:35:52.993611301 +0000 UTC m=+0.035762554 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:35:53 compute-0 systemd[1]: Started libpod-conmon-2ae8642f49c23182d2ab44c766a806caabf6a9994a4e6697f93238d6eb006218.scope.
Dec  3 18:35:53 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:35:53 compute-0 podman[410295]: 2025-12-03 18:35:53.160903115 +0000 UTC m=+0.203054358 container init 2ae8642f49c23182d2ab44c766a806caabf6a9994a4e6697f93238d6eb006218 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_bhaskara, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec  3 18:35:53 compute-0 podman[410295]: 2025-12-03 18:35:53.173146074 +0000 UTC m=+0.215297237 container start 2ae8642f49c23182d2ab44c766a806caabf6a9994a4e6697f93238d6eb006218 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_bhaskara, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  3 18:35:53 compute-0 podman[410295]: 2025-12-03 18:35:53.178649807 +0000 UTC m=+0.220801120 container attach 2ae8642f49c23182d2ab44c766a806caabf6a9994a4e6697f93238d6eb006218 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  3 18:35:53 compute-0 mystifying_bhaskara[410310]: 167 167
Dec  3 18:35:53 compute-0 systemd[1]: libpod-2ae8642f49c23182d2ab44c766a806caabf6a9994a4e6697f93238d6eb006218.scope: Deactivated successfully.
Dec  3 18:35:53 compute-0 podman[410295]: 2025-12-03 18:35:53.182773538 +0000 UTC m=+0.224924771 container died 2ae8642f49c23182d2ab44c766a806caabf6a9994a4e6697f93238d6eb006218 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_bhaskara, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:35:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-f99cc21dec6907243dbcd6c330f70cb3dc3b87d67b0771bec1ed212ba180fb69-merged.mount: Deactivated successfully.
Dec  3 18:35:53 compute-0 podman[410295]: 2025-12-03 18:35:53.24143061 +0000 UTC m=+0.283581763 container remove 2ae8642f49c23182d2ab44c766a806caabf6a9994a4e6697f93238d6eb006218 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_bhaskara, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:35:53 compute-0 systemd[1]: libpod-conmon-2ae8642f49c23182d2ab44c766a806caabf6a9994a4e6697f93238d6eb006218.scope: Deactivated successfully.
Dec  3 18:35:53 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:35:53 compute-0 podman[410333]: 2025-12-03 18:35:53.482537995 +0000 UTC m=+0.070295777 container create eb05820374fa596a2b07e406fbdb8d5cd4190d473e3b31c50bf7a15f40a05153 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_carver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:35:53 compute-0 podman[410333]: 2025-12-03 18:35:53.453256231 +0000 UTC m=+0.041014103 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:35:53 compute-0 systemd[1]: Started libpod-conmon-eb05820374fa596a2b07e406fbdb8d5cd4190d473e3b31c50bf7a15f40a05153.scope.
Dec  3 18:35:53 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:35:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/518816077fc2318085c4f4626c394c372e518fdc7ed5ffed25498e2c90fac579/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:35:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/518816077fc2318085c4f4626c394c372e518fdc7ed5ffed25498e2c90fac579/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:35:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/518816077fc2318085c4f4626c394c372e518fdc7ed5ffed25498e2c90fac579/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:35:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/518816077fc2318085c4f4626c394c372e518fdc7ed5ffed25498e2c90fac579/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:35:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/518816077fc2318085c4f4626c394c372e518fdc7ed5ffed25498e2c90fac579/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 18:35:53 compute-0 podman[410333]: 2025-12-03 18:35:53.616542345 +0000 UTC m=+0.204300147 container init eb05820374fa596a2b07e406fbdb8d5cd4190d473e3b31c50bf7a15f40a05153 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_carver, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:35:53 compute-0 podman[410333]: 2025-12-03 18:35:53.625852233 +0000 UTC m=+0.213610015 container start eb05820374fa596a2b07e406fbdb8d5cd4190d473e3b31c50bf7a15f40a05153 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_carver, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:35:53 compute-0 podman[410333]: 2025-12-03 18:35:53.630112927 +0000 UTC m=+0.217870799 container attach eb05820374fa596a2b07e406fbdb8d5cd4190d473e3b31c50bf7a15f40a05153 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_carver, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:35:54 compute-0 condescending_carver[410349]: --> passed data devices: 0 physical, 3 LVM
Dec  3 18:35:54 compute-0 condescending_carver[410349]: --> relative data size: 1.0
Dec  3 18:35:54 compute-0 condescending_carver[410349]: --> All data devices are unavailable
Dec  3 18:35:54 compute-0 systemd[1]: libpod-eb05820374fa596a2b07e406fbdb8d5cd4190d473e3b31c50bf7a15f40a05153.scope: Deactivated successfully.
Dec  3 18:35:54 compute-0 systemd[1]: libpod-eb05820374fa596a2b07e406fbdb8d5cd4190d473e3b31c50bf7a15f40a05153.scope: Consumed 1.168s CPU time.
Dec  3 18:35:54 compute-0 podman[410333]: 2025-12-03 18:35:54.873416714 +0000 UTC m=+1.461174536 container died eb05820374fa596a2b07e406fbdb8d5cd4190d473e3b31c50bf7a15f40a05153 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_carver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:35:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-518816077fc2318085c4f4626c394c372e518fdc7ed5ffed25498e2c90fac579-merged.mount: Deactivated successfully.
Dec  3 18:35:54 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1096: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:35:54 compute-0 podman[410333]: 2025-12-03 18:35:54.968839953 +0000 UTC m=+1.556597775 container remove eb05820374fa596a2b07e406fbdb8d5cd4190d473e3b31c50bf7a15f40a05153 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_carver, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:35:54 compute-0 systemd[1]: libpod-conmon-eb05820374fa596a2b07e406fbdb8d5cd4190d473e3b31c50bf7a15f40a05153.scope: Deactivated successfully.
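
[annotation] condescending_carver is the dry-run step: its output ("passed data devices: 0 physical, 3 LVM ... All data devices are unavailable") matches the report mode of `ceph-volume lvm batch`, which cephadm runs before creating OSDs. Here the three logical volumes are most likely rejected because they already carry OSDs 0-2 (see the ceph.osd_id tags in the lvm list output further down), so there is nothing new to create. A sketch of reproducing that report, with the LV paths taken from the listing below (command attribution is an inference, not stated in the log):

    import subprocess

    # --report prints what `lvm batch` would do without touching the devices.
    # LV paths are the ones tagged for OSDs 0-2 in the output below.
    lvs = ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]
    result = subprocess.run(
        ["ceph-volume", "lvm", "batch", "--report", "--format", "json", *lvs],
        capture_output=True,
        text=True,
    )
    print(result.stdout or result.stderr)
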
Dec  3 18:35:55 compute-0 podman[410391]: 2025-12-03 18:35:55.118490816 +0000 UTC m=+0.090279045 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  3 18:35:55 compute-0 podman[410392]: 2025-12-03 18:35:55.174419541 +0000 UTC m=+0.145310438 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  3 18:35:55 compute-0 podman[410576]: 2025-12-03 18:35:55.893504352 +0000 UTC m=+0.073137416 container create 10cb9a5191a9ad4a221e3d8320bf69baf84b7ee0d75993d3d45ff163334ba05c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_bohr, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:35:55 compute-0 systemd[1]: Started libpod-conmon-10cb9a5191a9ad4a221e3d8320bf69baf84b7ee0d75993d3d45ff163334ba05c.scope.
Dec  3 18:35:55 compute-0 podman[410576]: 2025-12-03 18:35:55.870117682 +0000 UTC m=+0.049750786 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:35:55 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:35:56 compute-0 podman[410576]: 2025-12-03 18:35:56.009562785 +0000 UTC m=+0.189195889 container init 10cb9a5191a9ad4a221e3d8320bf69baf84b7ee0d75993d3d45ff163334ba05c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_bohr, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec  3 18:35:56 compute-0 podman[410576]: 2025-12-03 18:35:56.02942262 +0000 UTC m=+0.209055684 container start 10cb9a5191a9ad4a221e3d8320bf69baf84b7ee0d75993d3d45ff163334ba05c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:35:56 compute-0 podman[410576]: 2025-12-03 18:35:56.034644867 +0000 UTC m=+0.214277981 container attach 10cb9a5191a9ad4a221e3d8320bf69baf84b7ee0d75993d3d45ff163334ba05c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_bohr, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:35:56 compute-0 nice_bohr[410592]: 167 167
Dec  3 18:35:56 compute-0 systemd[1]: libpod-10cb9a5191a9ad4a221e3d8320bf69baf84b7ee0d75993d3d45ff163334ba05c.scope: Deactivated successfully.
Dec  3 18:35:56 compute-0 podman[410576]: 2025-12-03 18:35:56.039583468 +0000 UTC m=+0.219216582 container died 10cb9a5191a9ad4a221e3d8320bf69baf84b7ee0d75993d3d45ff163334ba05c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_bohr, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:35:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-3da67dd51c911ea3323930af1529d0cf6cccc30687ca261d2bc73200fe31e3c4-merged.mount: Deactivated successfully.
Dec  3 18:35:56 compute-0 podman[410576]: 2025-12-03 18:35:56.095492133 +0000 UTC m=+0.275125197 container remove 10cb9a5191a9ad4a221e3d8320bf69baf84b7ee0d75993d3d45ff163334ba05c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_bohr, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:35:56 compute-0 systemd[1]: libpod-conmon-10cb9a5191a9ad4a221e3d8320bf69baf84b7ee0d75993d3d45ff163334ba05c.scope: Deactivated successfully.
Dec  3 18:35:56 compute-0 podman[410614]: 2025-12-03 18:35:56.329983836 +0000 UTC m=+0.064830823 container create f7f78f14c67b33186886d0120facaa41ccebf683afdde28f56f53706cf8fa561 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bhabha, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:35:56 compute-0 systemd[1]: Started libpod-conmon-f7f78f14c67b33186886d0120facaa41ccebf683afdde28f56f53706cf8fa561.scope.
Dec  3 18:35:56 compute-0 podman[410614]: 2025-12-03 18:35:56.310050269 +0000 UTC m=+0.044897276 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:35:56 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:35:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/888f49df345d56ce837dcbd5a1de39bfb38f0a9e059a3f5859e727148aa6786e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:35:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/888f49df345d56ce837dcbd5a1de39bfb38f0a9e059a3f5859e727148aa6786e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:35:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/888f49df345d56ce837dcbd5a1de39bfb38f0a9e059a3f5859e727148aa6786e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:35:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/888f49df345d56ce837dcbd5a1de39bfb38f0a9e059a3f5859e727148aa6786e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:35:56 compute-0 podman[410614]: 2025-12-03 18:35:56.451715997 +0000 UTC m=+0.186562984 container init f7f78f14c67b33186886d0120facaa41ccebf683afdde28f56f53706cf8fa561 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:35:56 compute-0 podman[410614]: 2025-12-03 18:35:56.468149829 +0000 UTC m=+0.202996856 container start f7f78f14c67b33186886d0120facaa41ccebf683afdde28f56f53706cf8fa561 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bhabha, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:35:56 compute-0 podman[410614]: 2025-12-03 18:35:56.480730826 +0000 UTC m=+0.215577833 container attach f7f78f14c67b33186886d0120facaa41ccebf683afdde28f56f53706cf8fa561 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bhabha, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:35:56 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1097: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:35:57 compute-0 musing_bhabha[410630]: {
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:    "0": [
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:        {
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:            "devices": [
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:                "/dev/loop3"
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:            ],
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:            "lv_name": "ceph_lv0",
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:            "lv_size": "21470642176",
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=973fbbc8-5aff-4a53-bee8-42e5a6788dd6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:            "lv_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:            "name": "ceph_lv0",
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:            "tags": {
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:                "ceph.block_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:                "ceph.cluster_name": "ceph",
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:                "ceph.crush_device_class": "",
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:                "ceph.encrypted": "0",
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:                "ceph.osd_fsid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:                "ceph.osd_id": "0",
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:                "ceph.type": "block",
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:                "ceph.vdo": "0"
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:            },
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:            "type": "block",
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:            "vg_name": "ceph_vg0"
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:        }
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:    ],
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:    "1": [
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:        {
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:            "devices": [
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:                "/dev/loop4"
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:            ],
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:            "lv_name": "ceph_lv1",
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:            "lv_size": "21470642176",
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1e2b0083-5293-47cb-a3d1-bc27cedc4ede,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:            "lv_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:            "name": "ceph_lv1",
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:            "tags": {
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:                "ceph.block_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:                "ceph.cluster_name": "ceph",
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:                "ceph.crush_device_class": "",
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:                "ceph.encrypted": "0",
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:                "ceph.osd_fsid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:                "ceph.osd_id": "1",
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:                "ceph.type": "block",
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:                "ceph.vdo": "0"
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:            },
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:            "type": "block",
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:            "vg_name": "ceph_vg1"
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:        }
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:    ],
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:    "2": [
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:        {
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:            "devices": [
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:                "/dev/loop5"
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:            ],
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:            "lv_name": "ceph_lv2",
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:            "lv_size": "21470642176",
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2abec9de-afba-437e-9a17-384a1dd8cd50,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:            "lv_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:            "name": "ceph_lv2",
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:            "tags": {
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:                "ceph.block_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:                "ceph.cluster_name": "ceph",
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:                "ceph.crush_device_class": "",
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:                "ceph.encrypted": "0",
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:                "ceph.osd_fsid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:                "ceph.osd_id": "2",
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:                "ceph.type": "block",
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:                "ceph.vdo": "0"
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:            },
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:            "type": "block",
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:            "vg_name": "ceph_vg2"
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:        }
Dec  3 18:35:57 compute-0 musing_bhabha[410630]:    ]
Dec  3 18:35:57 compute-0 musing_bhabha[410630]: }
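The JSON block emitted by musing_bhabha above has the shape of `ceph-volume lvm list --format json` output: a map from OSD id to the logical volumes backing it, with the ceph.* tags duplicated in both flattened-string (lv_tags) and dict (tags) form. A minimal sketch for summarizing such a capture, assuming the JSON has been saved to a local file (the filename is hypothetical):

    import json

    # Load a saved copy of the `ceph-volume lvm list --format json` output
    # shown above; "ceph_volume_lvm_list.json" is a hypothetical capture file.
    with open("ceph_volume_lvm_list.json") as f:
        lvm_list = json.load(f)

    # Each top-level key is an OSD id; each value is a list of LV records.
    for osd_id, lvs in sorted(lvm_list.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv.get("tags", {})
            print(f"osd.{osd_id}: lv_path={lv['lv_path']} "
                  f"osd_fsid={tags.get('ceph.osd_fsid')} "
                  f"devices={','.join(lv.get('devices', []))}")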
Dec  3 18:35:57 compute-0 systemd[1]: libpod-f7f78f14c67b33186886d0120facaa41ccebf683afdde28f56f53706cf8fa561.scope: Deactivated successfully.
Dec  3 18:35:57 compute-0 podman[410614]: 2025-12-03 18:35:57.320213975 +0000 UTC m=+1.055061002 container died f7f78f14c67b33186886d0120facaa41ccebf683afdde28f56f53706cf8fa561 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bhabha, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec  3 18:35:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-888f49df345d56ce837dcbd5a1de39bfb38f0a9e059a3f5859e727148aa6786e-merged.mount: Deactivated successfully.
Dec  3 18:35:57 compute-0 podman[410614]: 2025-12-03 18:35:57.425015023 +0000 UTC m=+1.159862050 container remove f7f78f14c67b33186886d0120facaa41ccebf683afdde28f56f53706cf8fa561 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec  3 18:35:57 compute-0 systemd[1]: libpod-conmon-f7f78f14c67b33186886d0120facaa41ccebf683afdde28f56f53706cf8fa561.scope: Deactivated successfully.
Dec  3 18:35:58 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:35:58 compute-0 podman[410791]: 2025-12-03 18:35:58.497231615 +0000 UTC m=+0.068994165 container create 258485b034ba5f5d5790680cc4347a62e79e6f686e7d16cb196c823ddcd51896 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_tharp, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:35:58 compute-0 systemd[1]: Started libpod-conmon-258485b034ba5f5d5790680cc4347a62e79e6f686e7d16cb196c823ddcd51896.scope.
Dec  3 18:35:58 compute-0 podman[410791]: 2025-12-03 18:35:58.474808347 +0000 UTC m=+0.046570877 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:35:58 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:35:58 compute-0 podman[410791]: 2025-12-03 18:35:58.628811356 +0000 UTC m=+0.200573916 container init 258485b034ba5f5d5790680cc4347a62e79e6f686e7d16cb196c823ddcd51896 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_tharp, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:35:58 compute-0 podman[410791]: 2025-12-03 18:35:58.646626091 +0000 UTC m=+0.218388651 container start 258485b034ba5f5d5790680cc4347a62e79e6f686e7d16cb196c823ddcd51896 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_tharp, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  3 18:35:58 compute-0 podman[410791]: 2025-12-03 18:35:58.652977176 +0000 UTC m=+0.224739736 container attach 258485b034ba5f5d5790680cc4347a62e79e6f686e7d16cb196c823ddcd51896 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_tharp, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 18:35:58 compute-0 competent_tharp[410807]: 167 167
Dec  3 18:35:58 compute-0 systemd[1]: libpod-258485b034ba5f5d5790680cc4347a62e79e6f686e7d16cb196c823ddcd51896.scope: Deactivated successfully.
Dec  3 18:35:58 compute-0 conmon[410807]: conmon 258485b034ba5f5d5790 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-258485b034ba5f5d5790680cc4347a62e79e6f686e7d16cb196c823ddcd51896.scope/container/memory.events
Dec  3 18:35:58 compute-0 podman[410791]: 2025-12-03 18:35:58.665322237 +0000 UTC m=+0.237084757 container died 258485b034ba5f5d5790680cc4347a62e79e6f686e7d16cb196c823ddcd51896 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_tharp, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec  3 18:35:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-b547d74e571e551d7ede282a38b147428103f11cf26b06bfc3f132dafbb26beb-merged.mount: Deactivated successfully.
Dec  3 18:35:58 compute-0 podman[410791]: 2025-12-03 18:35:58.721977891 +0000 UTC m=+0.293740411 container remove 258485b034ba5f5d5790680cc4347a62e79e6f686e7d16cb196c823ddcd51896 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_tharp, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:35:58 compute-0 systemd[1]: libpod-conmon-258485b034ba5f5d5790680cc4347a62e79e6f686e7d16cb196c823ddcd51896.scope: Deactivated successfully.
Dec  3 18:35:58 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1098: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:35:58 compute-0 podman[410832]: 2025-12-03 18:35:58.983039602 +0000 UTC m=+0.082786631 container create c7dd558dd1a9af32d4329db0c87553af83a82acdcb8ea380c66a211b22810500 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_cannon, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec  3 18:35:59 compute-0 podman[410832]: 2025-12-03 18:35:58.949141995 +0000 UTC m=+0.048889104 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:35:59 compute-0 systemd[1]: Started libpod-conmon-c7dd558dd1a9af32d4329db0c87553af83a82acdcb8ea380c66a211b22810500.scope.
Dec  3 18:35:59 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:35:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ab5d626975f5ddac45b22985d92841e01c2b616f92154e0ba462c04b8ebe726/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:35:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ab5d626975f5ddac45b22985d92841e01c2b616f92154e0ba462c04b8ebe726/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:35:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ab5d626975f5ddac45b22985d92841e01c2b616f92154e0ba462c04b8ebe726/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:35:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ab5d626975f5ddac45b22985d92841e01c2b616f92154e0ba462c04b8ebe726/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:35:59 compute-0 podman[410832]: 2025-12-03 18:35:59.170694252 +0000 UTC m=+0.270441321 container init c7dd558dd1a9af32d4329db0c87553af83a82acdcb8ea380c66a211b22810500 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec  3 18:35:59 compute-0 podman[410832]: 2025-12-03 18:35:59.204777374 +0000 UTC m=+0.304524393 container start c7dd558dd1a9af32d4329db0c87553af83a82acdcb8ea380c66a211b22810500 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:35:59 compute-0 podman[410832]: 2025-12-03 18:35:59.212742729 +0000 UTC m=+0.312489838 container attach c7dd558dd1a9af32d4329db0c87553af83a82acdcb8ea380c66a211b22810500 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  3 18:35:59 compute-0 podman[158200]: time="2025-12-03T18:35:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:35:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:35:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 44149 "" "Go-http-client/1.1"
Dec  3 18:35:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:35:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8543 "" "Go-http-client/1.1"
Dec  3 18:36:00 compute-0 wonderful_cannon[410845]: {
Dec  3 18:36:00 compute-0 wonderful_cannon[410845]:    "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
Dec  3 18:36:00 compute-0 wonderful_cannon[410845]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:36:00 compute-0 wonderful_cannon[410845]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 18:36:00 compute-0 wonderful_cannon[410845]:        "osd_id": 1,
Dec  3 18:36:00 compute-0 wonderful_cannon[410845]:        "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:36:00 compute-0 wonderful_cannon[410845]:        "type": "bluestore"
Dec  3 18:36:00 compute-0 wonderful_cannon[410845]:    },
Dec  3 18:36:00 compute-0 wonderful_cannon[410845]:    "2abec9de-afba-437e-9a17-384a1dd8cd50": {
Dec  3 18:36:00 compute-0 wonderful_cannon[410845]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:36:00 compute-0 wonderful_cannon[410845]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 18:36:00 compute-0 wonderful_cannon[410845]:        "osd_id": 2,
Dec  3 18:36:00 compute-0 wonderful_cannon[410845]:        "osd_uuid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:36:00 compute-0 wonderful_cannon[410845]:        "type": "bluestore"
Dec  3 18:36:00 compute-0 wonderful_cannon[410845]:    },
Dec  3 18:36:00 compute-0 wonderful_cannon[410845]:    "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": {
Dec  3 18:36:00 compute-0 wonderful_cannon[410845]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:36:00 compute-0 wonderful_cannon[410845]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 18:36:00 compute-0 wonderful_cannon[410845]:        "osd_id": 0,
Dec  3 18:36:00 compute-0 wonderful_cannon[410845]:        "osd_uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:36:00 compute-0 wonderful_cannon[410845]:        "type": "bluestore"
Dec  3 18:36:00 compute-0 wonderful_cannon[410845]:    }
Dec  3 18:36:00 compute-0 wonderful_cannon[410845]: }
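The wonderful_cannon output above is keyed by OSD UUID rather than OSD id and matches the shape of `ceph-volume raw list --format json`. A sketch that inverts it into an osd_id-to-device map, again assuming a saved copy (hypothetical filename):

    import json

    # "ceph_volume_raw_list.json" is a hypothetical capture of the block above.
    with open("ceph_volume_raw_list.json") as f:
        raw_list = json.load(f)

    # Invert the uuid-keyed records into an osd_id -> device mapping.
    by_osd_id = {
        rec["osd_id"]: rec["device"]
        for rec in raw_list.values()
        if rec.get("type") == "bluestore"
    }
    for osd_id in sorted(by_osd_id):
        print(f"osd.{osd_id} -> {by_osd_id[osd_id]}")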
Dec  3 18:36:00 compute-0 systemd[1]: libpod-c7dd558dd1a9af32d4329db0c87553af83a82acdcb8ea380c66a211b22810500.scope: Deactivated successfully.
Dec  3 18:36:00 compute-0 podman[410832]: 2025-12-03 18:36:00.510660589 +0000 UTC m=+1.610407628 container died c7dd558dd1a9af32d4329db0c87553af83a82acdcb8ea380c66a211b22810500 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Dec  3 18:36:00 compute-0 systemd[1]: libpod-c7dd558dd1a9af32d4329db0c87553af83a82acdcb8ea380c66a211b22810500.scope: Consumed 1.285s CPU time.
Dec  3 18:36:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ab5d626975f5ddac45b22985d92841e01c2b616f92154e0ba462c04b8ebe726-merged.mount: Deactivated successfully.
Dec  3 18:36:00 compute-0 podman[410832]: 2025-12-03 18:36:00.600729847 +0000 UTC m=+1.700476886 container remove c7dd558dd1a9af32d4329db0c87553af83a82acdcb8ea380c66a211b22810500 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_cannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:36:00 compute-0 systemd[1]: libpod-conmon-c7dd558dd1a9af32d4329db0c87553af83a82acdcb8ea380c66a211b22810500.scope: Deactivated successfully.
Dec  3 18:36:00 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:36:00 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:36:00 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:36:00 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:36:00 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 1c14587f-3bb5-41fc-867c-3f5ad2b7930f does not exist
Dec  3 18:36:00 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 644901a8-77cd-4d6b-b1e2-f66354daa3ec does not exist
Dec  3 18:36:00 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1099: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:36:01 compute-0 nova_compute[348325]: 2025-12-03 18:36:01.027 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:36:01 compute-0 nova_compute[348325]: 2025-12-03 18:36:01.029 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 18:36:01 compute-0 nova_compute[348325]: 2025-12-03 18:36:01.029 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  3 18:36:01 compute-0 nova_compute[348325]: 2025-12-03 18:36:01.049 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  3 18:36:01 compute-0 nova_compute[348325]: 2025-12-03 18:36:01.049 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:36:01 compute-0 nova_compute[348325]: 2025-12-03 18:36:01.050 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:36:01 compute-0 nova_compute[348325]: 2025-12-03 18:36:01.050 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:36:01 compute-0 nova_compute[348325]: 2025-12-03 18:36:01.050 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:36:01 compute-0 openstack_network_exporter[365222]: ERROR   18:36:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:36:01 compute-0 openstack_network_exporter[365222]: ERROR   18:36:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:36:01 compute-0 openstack_network_exporter[365222]: ERROR   18:36:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:36:01 compute-0 openstack_network_exporter[365222]: ERROR   18:36:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath

Dec  3 18:36:01 compute-0 openstack_network_exporter[365222]: ERROR   18:36:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:36:01 compute-0 nova_compute[348325]: 2025-12-03 18:36:01.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:36:01 compute-0 nova_compute[348325]: 2025-12-03 18:36:01.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:36:01 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:36:01 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:36:02 compute-0 nova_compute[348325]: 2025-12-03 18:36:02.477 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:36:02 compute-0 nova_compute[348325]: 2025-12-03 18:36:02.503 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:36:02 compute-0 nova_compute[348325]: 2025-12-03 18:36:02.504 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 18:36:02 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1100: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:36:03 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:36:03 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:36:03.931 286999 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5a:63:53', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '8e:79:bd:f4:48:1d'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 18:36:03 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:36:03.932 286999 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  3 18:36:04 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1101: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:36:05 compute-0 nova_compute[348325]: 2025-12-03 18:36:05.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:36:05 compute-0 nova_compute[348325]: 2025-12-03 18:36:05.641 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:36:05 compute-0 nova_compute[348325]: 2025-12-03 18:36:05.642 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:36:05 compute-0 nova_compute[348325]: 2025-12-03 18:36:05.642 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:36:05 compute-0 nova_compute[348325]: 2025-12-03 18:36:05.642 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 18:36:05 compute-0 nova_compute[348325]: 2025-12-03 18:36:05.643 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:36:06 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:36:06 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1616214841' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:36:06 compute-0 nova_compute[348325]: 2025-12-03 18:36:06.130 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
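The `ceph df --format=json` subprocess above is how nova's RBD image backend sizes the hypervisor's disk pool during the resource audit. A sketch of the same call and the cluster-wide totals it returns; the field names follow the ceph df JSON schema, while nova's actual free-space calculation (per-pool and replication-aware) is more involved than shown here:

    import json
    import subprocess

    # Same command the resource tracker runs in the lines above.
    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]

    # Cluster-wide raw totals, converted to GiB.
    print("total GiB:", stats["total_bytes"] / 2**30)
    print("avail GiB:", stats["total_avail_bytes"] / 2**30)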
Dec  3 18:36:06 compute-0 nova_compute[348325]: 2025-12-03 18:36:06.578 348329 WARNING nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 18:36:06 compute-0 nova_compute[348325]: 2025-12-03 18:36:06.579 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4550MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 18:36:06 compute-0 nova_compute[348325]: 2025-12-03 18:36:06.580 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:36:06 compute-0 nova_compute[348325]: 2025-12-03 18:36:06.580 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:36:06 compute-0 nova_compute[348325]: 2025-12-03 18:36:06.700 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 18:36:06 compute-0 nova_compute[348325]: 2025-12-03 18:36:06.702 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 18:36:06 compute-0 nova_compute[348325]: 2025-12-03 18:36:06.745 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:36:06 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:36:06.935 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1ac9fd0d-196b-4ea8-9a9a-8aa831092805, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:36:06 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1102: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:36:07 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:36:07 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3762155786' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:36:07 compute-0 nova_compute[348325]: 2025-12-03 18:36:07.277 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:36:07 compute-0 nova_compute[348325]: 2025-12-03 18:36:07.292 348329 DEBUG nova.compute.provider_tree [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 18:36:07 compute-0 nova_compute[348325]: 2025-12-03 18:36:07.326 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 18:36:07 compute-0 nova_compute[348325]: 2025-12-03 18:36:07.328 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 18:36:07 compute-0 nova_compute[348325]: 2025-12-03 18:36:07.328 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.748s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
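The inventory dict logged at 18:36:07 is what the resource tracker reports to placement, where schedulable capacity per resource class works out to (total - reserved) * allocation_ratio. A sketch of that arithmetic using the values from this log:

    # Values copied from the set_inventory_for_provider line above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: schedulable capacity {capacity}")
        # -> VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 53.1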
Dec  3 18:36:08 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:36:08 compute-0 podman[410983]: 2025-12-03 18:36:08.956503727 +0000 UTC m=+0.115817368 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 18:36:08 compute-0 podman[410984]: 2025-12-03 18:36:08.958381853 +0000 UTC m=+0.106504820 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 18:36:08 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1103: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s
Dec  3 18:36:09 compute-0 podman[410985]: 2025-12-03 18:36:09.005766719 +0000 UTC m=+0.140067290 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, release=1755695350, io.openshift.tags=minimal rhel9, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.buildah.version=1.33.7, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, version=9.6, maintainer=Red Hat, Inc., io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container)
Dec  3 18:36:10 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1104: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 4.2 KiB/s rd, 0 B/s wr, 6 op/s
Dec  3 18:36:11 compute-0 podman[411046]: 2025-12-03 18:36:11.968190686 +0000 UTC m=+0.121204318 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Dec  3 18:36:12 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1105: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 0 B/s wr, 41 op/s
Dec  3 18:36:13 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:36:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_18:36:13
Dec  3 18:36:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 18:36:13 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 18:36:13 compute-0 ceph-mgr[193091]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.meta', 'images', 'backups', 'vms', 'default.rgw.log', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.data', 'volumes', '.mgr']
Dec  3 18:36:13 compute-0 ceph-mgr[193091]: [balancer INFO root] prepared 0/10 changes
Dec  3 18:36:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:36:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:36:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:36:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:36:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:36:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:36:13 compute-0 podman[411064]: 2025-12-03 18:36:13.983021415 +0000 UTC m=+0.132033823 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, distribution-scope=public, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, release=1214.1726694543, name=ubi9, com.redhat.component=ubi9-container, io.openshift.expose-services=, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9)
Dec  3 18:36:14 compute-0 podman[411065]: 2025-12-03 18:36:14.004037158 +0000 UTC m=+0.142726975 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  3 18:36:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 18:36:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:36:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 18:36:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:36:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:36:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:36:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:36:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:36:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:36:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:36:14 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1106: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 58 op/s
Dec  3 18:36:16 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Dec  3 18:36:16 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:36:16.135135) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 18:36:16 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Dec  3 18:36:16 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764786976135252, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2068, "num_deletes": 251, "total_data_size": 3479055, "memory_usage": 3539760, "flush_reason": "Manual Compaction"}
Dec  3 18:36:16 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Dec  3 18:36:16 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764786976164689, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 3390885, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20913, "largest_seqno": 22980, "table_properties": {"data_size": 3381544, "index_size": 5899, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18781, "raw_average_key_size": 20, "raw_value_size": 3362793, "raw_average_value_size": 3585, "num_data_blocks": 267, "num_entries": 938, "num_filter_entries": 938, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764786754, "oldest_key_time": 1764786754, "file_creation_time": 1764786976, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a1ac3b74-8599-4a51-8b4c-6fd35a134427", "db_session_id": "TYOLZSJOOVNJYKF8Y1CE", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Dec  3 18:36:16 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 29606 microseconds, and 16923 cpu microseconds.
Dec  3 18:36:16 compute-0 ceph-mon[192802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 18:36:16 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:36:16.164758) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 3390885 bytes OK
Dec  3 18:36:16 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:36:16.164782) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Dec  3 18:36:16 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:36:16.167369) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Dec  3 18:36:16 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:36:16.167413) EVENT_LOG_v1 {"time_micros": 1764786976167403, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 18:36:16 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:36:16.167437) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 18:36:16 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 3470400, prev total WAL file size 3470400, number of live WAL files 2.
Dec  3 18:36:16 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 18:36:16 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:36:16.169084) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Dec  3 18:36:16 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 18:36:16 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(3311KB)], [50(7376KB)]
Dec  3 18:36:16 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764786976169239, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 10944000, "oldest_snapshot_seqno": -1}
Dec  3 18:36:16 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 4715 keys, 9189643 bytes, temperature: kUnknown
Dec  3 18:36:16 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764786976236662, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 9189643, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9155546, "index_size": 21187, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11845, "raw_key_size": 115467, "raw_average_key_size": 24, "raw_value_size": 9067678, "raw_average_value_size": 1923, "num_data_blocks": 893, "num_entries": 4715, "num_filter_entries": 4715, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764784942, "oldest_key_time": 0, "file_creation_time": 1764786976, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a1ac3b74-8599-4a51-8b4c-6fd35a134427", "db_session_id": "TYOLZSJOOVNJYKF8Y1CE", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Dec  3 18:36:16 compute-0 ceph-mon[192802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 18:36:16 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:36:16.236944) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 9189643 bytes
Dec  3 18:36:16 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:36:16.239505) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 162.1 rd, 136.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.2 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(5.9) write-amplify(2.7) OK, records in: 5233, records dropped: 518 output_compression: NoCompression
Dec  3 18:36:16 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:36:16.239526) EVENT_LOG_v1 {"time_micros": 1764786976239516, "job": 26, "event": "compaction_finished", "compaction_time_micros": 67503, "compaction_time_cpu_micros": 26478, "output_level": 6, "num_output_files": 1, "total_output_size": 9189643, "num_input_records": 5233, "num_output_records": 4715, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 18:36:16 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 18:36:16 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764786976240553, "job": 26, "event": "table_file_deletion", "file_number": 52}
Dec  3 18:36:16 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 18:36:16 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764786976242319, "job": 26, "event": "table_file_deletion", "file_number": 50}
Dec  3 18:36:16 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:36:16.168701) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:36:16 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:36:16.242500) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:36:16 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:36:16.242504) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:36:16 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:36:16.242506) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:36:16 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:36:16.242507) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:36:16 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:36:16.242509) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
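[annotation] The five "Manual compaction starting" lines are the monitor queueing further range compactions; the interesting numbers sit in the JOB 26 events above them, and they are self-consistent. A quick check using only values present in the log (Level-0 file 52 = 3390885 B, input_data_size = 10944000 B, output file 53 = 9189643 B, compaction_time_micros = 67503):

    # Re-deriving the JOB 26 compaction summary from the EVENT_LOG lines.
    l0_in  = 3390885            # bytes, Level-0 input (file 52)
    l6_in  = 10944000 - l0_in   # bytes, Level-6 input (file 50)
    out    = 9189643            # bytes, output table (file 53)
    micros = 67503              # compaction_time_micros

    print(out / l0_in)                    # ~2.71  -> logged write-amplify(2.7)
    print((l0_in + l6_in + out) / l0_in)  # ~5.94  -> logged read-write-amplify(5.9)
    # bytes per microsecond == decimal MB/s:
    print((l0_in + l6_in) / micros)       # ~162.1 -> logged MB/sec rd
    print(out / micros)                   # ~136.1 -> logged MB/sec wr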
Dec  3 18:36:16 compute-0 nova_compute[348325]: 2025-12-03 18:36:16.340 348329 DEBUG oslo_concurrency.lockutils [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Acquiring lock "1ca1fbdb-089c-4544-821e-0542089b8424" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:36:16 compute-0 nova_compute[348325]: 2025-12-03 18:36:16.341 348329 DEBUG oslo_concurrency.lockutils [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "1ca1fbdb-089c-4544-821e-0542089b8424" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:36:16 compute-0 nova_compute[348325]: 2025-12-03 18:36:16.365 348329 DEBUG nova.compute.manager [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  3 18:36:16 compute-0 nova_compute[348325]: 2025-12-03 18:36:16.486 348329 DEBUG oslo_concurrency.lockutils [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:36:16 compute-0 nova_compute[348325]: 2025-12-03 18:36:16.488 348329 DEBUG oslo_concurrency.lockutils [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:36:16 compute-0 nova_compute[348325]: 2025-12-03 18:36:16.503 348329 DEBUG nova.virt.hardware [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  3 18:36:16 compute-0 nova_compute[348325]: 2025-12-03 18:36:16.504 348329 INFO nova.compute.claims [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Claim successful on node compute-0.ctlplane.example.com#033[00m
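[annotation] The Acquiring/acquired pairs here are oslo.concurrency's lockutils at work: the whole build is serialized on the instance UUID, and the resource claim on a coarser "compute_resources" lock. A minimal sketch of the same pattern, assuming oslo.concurrency is installed (illustrative only, not nova's code):

    from oslo_concurrency import lockutils

    # Serialize all build work for one instance on a UUID-named lock,
    # as _locked_do_build_and_run_instance does in the log above.
    @lockutils.synchronized("1ca1fbdb-089c-4544-821e-0542089b8424")
    def build_and_run_instance():
        # claim resources, allocate networks, spawn the guest...
        return "built"

    print(build_and_run_instance())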
Dec  3 18:36:16 compute-0 nova_compute[348325]: 2025-12-03 18:36:16.640 348329 DEBUG oslo_concurrency.processutils [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:36:16 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1107: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  3 18:36:17 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:36:17 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2733901625' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:36:17 compute-0 nova_compute[348325]: 2025-12-03 18:36:17.179 348329 DEBUG oslo_concurrency.processutils [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
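[annotation] To size the RBD backend, nova shells out to ceph df as client.openstack; the mon's audit line above shows the same command being dispatched. Reproducing the call and reading the answer, with the JSON field names (stats, pools, bytes_used) assumed from ceph's json output rather than this log:

    import json
    import subprocess

    cmd = ["ceph", "df", "--format=json",
           "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    df = json.loads(subprocess.check_output(cmd))

    print("avail bytes:", df["stats"]["total_avail_bytes"])
    for pool in df["pools"]:
        print(pool["name"], pool["stats"]["bytes_used"])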
Dec  3 18:36:17 compute-0 nova_compute[348325]: 2025-12-03 18:36:17.195 348329 DEBUG nova.compute.provider_tree [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 18:36:17 compute-0 nova_compute[348325]: 2025-12-03 18:36:17.220 348329 DEBUG nova.scheduler.client.report [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
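[annotation] The inventory dict in this line fixes the capacity placement will schedule against: effective capacity per resource class is (total - reserved) * allocation_ratio. Worked out with the logged values:

    inv = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, v in inv.items():
        print(rc, (v["total"] - v["reserved"]) * v["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 53.1

So this 8-vCPU host advertises 32 schedulable vCPUs, 7167 MB of RAM and about 53 GB of disk.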
Dec  3 18:36:17 compute-0 nova_compute[348325]: 2025-12-03 18:36:17.245 348329 DEBUG oslo_concurrency.lockutils [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.758s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:36:17 compute-0 nova_compute[348325]: 2025-12-03 18:36:17.248 348329 DEBUG nova.compute.manager [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  3 18:36:17 compute-0 nova_compute[348325]: 2025-12-03 18:36:17.311 348329 DEBUG nova.compute.manager [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  3 18:36:17 compute-0 nova_compute[348325]: 2025-12-03 18:36:17.312 348329 DEBUG nova.network.neutron [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  3 18:36:17 compute-0 nova_compute[348325]: 2025-12-03 18:36:17.359 348329 INFO nova.virt.libvirt.driver [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  3 18:36:17 compute-0 nova_compute[348325]: 2025-12-03 18:36:17.428 348329 DEBUG nova.compute.manager [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  3 18:36:17 compute-0 nova_compute[348325]: 2025-12-03 18:36:17.767 348329 DEBUG nova.compute.manager [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  3 18:36:17 compute-0 nova_compute[348325]: 2025-12-03 18:36:17.770 348329 DEBUG nova.virt.libvirt.driver [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  3 18:36:17 compute-0 nova_compute[348325]: 2025-12-03 18:36:17.771 348329 INFO nova.virt.libvirt.driver [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Creating image(s)#033[00m
Dec  3 18:36:17 compute-0 nova_compute[348325]: 2025-12-03 18:36:17.840 348329 DEBUG nova.storage.rbd_utils [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] rbd image 1ca1fbdb-089c-4544-821e-0542089b8424_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 18:36:17 compute-0 nova_compute[348325]: 2025-12-03 18:36:17.897 348329 DEBUG nova.storage.rbd_utils [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] rbd image 1ca1fbdb-089c-4544-821e-0542089b8424_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 18:36:17 compute-0 nova_compute[348325]: 2025-12-03 18:36:17.952 348329 DEBUG nova.storage.rbd_utils [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] rbd image 1ca1fbdb-089c-4544-821e-0542089b8424_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 18:36:17 compute-0 nova_compute[348325]: 2025-12-03 18:36:17.964 348329 DEBUG oslo_concurrency.lockutils [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Acquiring lock "2a1fd6462a2f789b92c02c5037b663e095546067" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:36:17 compute-0 nova_compute[348325]: 2025-12-03 18:36:17.966 348329 DEBUG oslo_concurrency.lockutils [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "2a1fd6462a2f789b92c02c5037b663e095546067" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
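[annotation] The 40-hex-character lock name is consistent with nova's libvirt image cache keying base files by the SHA-1 hex digest of the Glance image UUID (the image being fetched here is e68cd467-b4e6-45e0-8e55-984fda402294, per the lines that follow). A check sketch, assuming that convention holds on this release:

    import hashlib

    image_id = "e68cd467-b4e6-45e0-8e55-984fda402294"
    cache_key = hashlib.sha1(image_id.encode("utf-8")).hexdigest()
    # If the SHA-1 assumption holds, this prints the lock/cache name
    # seen in the log: 2a1fd6462a2f789b92c02c5037b663e095546067
    print(cache_key)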
Dec  3 18:36:18 compute-0 nova_compute[348325]: 2025-12-03 18:36:18.226 348329 DEBUG nova.virt.libvirt.imagebackend [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Image locations are: [{'url': 'rbd://c1caf3ba-b2a5-5005-a11e-e955c344dccc/images/e68cd467-b4e6-45e0-8e55-984fda402294/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://c1caf3ba-b2a5-5005-a11e-e955c344dccc/images/e68cd467-b4e6-45e0-8e55-984fda402294/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Dec  3 18:36:18 compute-0 nova_compute[348325]: 2025-12-03 18:36:18.251 348329 WARNING oslo_policy.policy [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Dec  3 18:36:18 compute-0 nova_compute[348325]: 2025-12-03 18:36:18.252 348329 WARNING oslo_policy.policy [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Dec  3 18:36:18 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:36:18 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1108: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  3 18:36:19 compute-0 nova_compute[348325]: 2025-12-03 18:36:19.663 348329 DEBUG oslo_concurrency.processutils [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2a1fd6462a2f789b92c02c5037b663e095546067.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:36:19 compute-0 nova_compute[348325]: 2025-12-03 18:36:19.762 348329 DEBUG oslo_concurrency.processutils [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2a1fd6462a2f789b92c02c5037b663e095546067.part --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:36:19 compute-0 nova_compute[348325]: 2025-12-03 18:36:19.765 348329 DEBUG nova.virt.images [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] e68cd467-b4e6-45e0-8e55-984fda402294 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Dec  3 18:36:19 compute-0 nova_compute[348325]: 2025-12-03 18:36:19.767 348329 DEBUG nova.privsep.utils [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Dec  3 18:36:19 compute-0 nova_compute[348325]: 2025-12-03 18:36:19.767 348329 DEBUG oslo_concurrency.processutils [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/2a1fd6462a2f789b92c02c5037b663e095546067.part /var/lib/nova/instances/_base/2a1fd6462a2f789b92c02c5037b663e095546067.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:36:20 compute-0 nova_compute[348325]: 2025-12-03 18:36:20.028 348329 DEBUG oslo_concurrency.processutils [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/2a1fd6462a2f789b92c02c5037b663e095546067.part /var/lib/nova/instances/_base/2a1fd6462a2f789b92c02c5037b663e095546067.converted" returned: 0 in 0.261s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:36:20 compute-0 nova_compute[348325]: 2025-12-03 18:36:20.035 348329 DEBUG oslo_concurrency.processutils [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2a1fd6462a2f789b92c02c5037b663e095546067.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:36:20 compute-0 nova_compute[348325]: 2025-12-03 18:36:20.123 348329 DEBUG nova.network.neutron [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Successfully created port: 3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  3 18:36:20 compute-0 nova_compute[348325]: 2025-12-03 18:36:20.136 348329 DEBUG oslo_concurrency.processutils [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2a1fd6462a2f789b92c02c5037b663e095546067.converted --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:36:20 compute-0 nova_compute[348325]: 2025-12-03 18:36:20.137 348329 DEBUG oslo_concurrency.lockutils [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "2a1fd6462a2f789b92c02c5037b663e095546067" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.172s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
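[annotation] Under the cache lock, the fetch path is: qemu-img info on the downloaded .part file (which reports qcow2), qemu-img convert to raw with host caching bypassed (-t none), then info again on the result. A condensed sketch of that sequence; nova additionally wraps the info calls in oslo_concurrency.prlimit (the --as/--cpu limits above), dropped here for brevity:

    import json
    import subprocess

    base = "/var/lib/nova/instances/_base/2a1fd6462a2f789b92c02c5037b663e095546067"

    def img_info(path):
        out = subprocess.check_output(
            ["qemu-img", "info", path, "--force-share", "--output=json"])
        return json.loads(out)

    # 1. The downloaded image came back qcow2 (see the log line above).
    if img_info(base + ".part")["format"] == "qcow2":
        # 2. Convert to raw, bypassing the page cache as nova does.
        subprocess.check_call(["qemu-img", "convert", "-t", "none",
                               "-O", "raw", "-f", "qcow2",
                               base + ".part", base + ".converted"])
        # 3. Re-inspect the converted file before it is used.
        assert img_info(base + ".converted")["format"] == "raw"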
Dec  3 18:36:20 compute-0 nova_compute[348325]: 2025-12-03 18:36:20.203 348329 DEBUG nova.storage.rbd_utils [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] rbd image 1ca1fbdb-089c-4544-821e-0542089b8424_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 18:36:20 compute-0 nova_compute[348325]: 2025-12-03 18:36:20.216 348329 DEBUG oslo_concurrency.processutils [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/2a1fd6462a2f789b92c02c5037b663e095546067 1ca1fbdb-089c-4544-821e-0542089b8424_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:36:20 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1109: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 717 KiB/s rd, 0 B/s wr, 59 op/s
Dec  3 18:36:21 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Dec  3 18:36:21 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Dec  3 18:36:21 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Dec  3 18:36:21 compute-0 podman[411223]: 2025-12-03 18:36:21.954304381 +0000 UTC m=+0.100755600 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  3 18:36:22 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Dec  3 18:36:22 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Dec  3 18:36:22 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Dec  3 18:36:22 compute-0 nova_compute[348325]: 2025-12-03 18:36:22.221 348329 DEBUG nova.network.neutron [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Successfully updated port: 3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  3 18:36:22 compute-0 nova_compute[348325]: 2025-12-03 18:36:22.242 348329 DEBUG oslo_concurrency.lockutils [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Acquiring lock "refresh_cache-1ca1fbdb-089c-4544-821e-0542089b8424" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 18:36:22 compute-0 nova_compute[348325]: 2025-12-03 18:36:22.242 348329 DEBUG oslo_concurrency.lockutils [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Acquired lock "refresh_cache-1ca1fbdb-089c-4544-821e-0542089b8424" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 18:36:22 compute-0 nova_compute[348325]: 2025-12-03 18:36:22.243 348329 DEBUG nova.network.neutron [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  3 18:36:22 compute-0 nova_compute[348325]: 2025-12-03 18:36:22.483 348329 DEBUG nova.network.neutron [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  3 18:36:22 compute-0 nova_compute[348325]: 2025-12-03 18:36:22.567 348329 DEBUG oslo_concurrency.processutils [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/2a1fd6462a2f789b92c02c5037b663e095546067 1ca1fbdb-089c-4544-821e-0542089b8424_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.351s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:36:22 compute-0 nova_compute[348325]: 2025-12-03 18:36:22.692 348329 DEBUG nova.storage.rbd_utils [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] resizing rbd image 1ca1fbdb-089c-4544-821e-0542089b8424_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
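[annotation] With the base image converted to raw, it is imported into the vms pool and then grown to the flavor's 1 GiB root disk (1073741824 bytes). The log shows the import running through the rbd CLI while the resize happens in-process via nova's rbd_utils; a sketch of the CLI equivalent of both steps (recent rbd releases accept unit suffixes on --size):

    import subprocess

    base  = "/var/lib/nova/instances/_base/2a1fd6462a2f789b92c02c5037b663e095546067"
    image = "1ca1fbdb-089c-4544-821e-0542089b8424_disk"
    auth  = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

    # Import the raw base file as a format-2 RBD image (as logged).
    subprocess.check_call(["rbd", "import", "--pool", "vms", base,
                           image, "--image-format=2", *auth])
    # Grow it to the 1 GiB root disk requested by the flavor.
    subprocess.check_call(["rbd", "resize", "vms/" + image,
                           "--size", "1G", *auth])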
Dec  3 18:36:22 compute-0 nova_compute[348325]: 2025-12-03 18:36:22.860 348329 DEBUG nova.objects.instance [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lazy-loading 'migration_context' on Instance uuid 1ca1fbdb-089c-4544-821e-0542089b8424 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 18:36:22 compute-0 nova_compute[348325]: 2025-12-03 18:36:22.909 348329 DEBUG nova.storage.rbd_utils [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] rbd image 1ca1fbdb-089c-4544-821e-0542089b8424_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 18:36:22 compute-0 nova_compute[348325]: 2025-12-03 18:36:22.948 348329 DEBUG nova.storage.rbd_utils [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] rbd image 1ca1fbdb-089c-4544-821e-0542089b8424_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 18:36:22 compute-0 nova_compute[348325]: 2025-12-03 18:36:22.959 348329 DEBUG oslo_concurrency.lockutils [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:36:22 compute-0 nova_compute[348325]: 2025-12-03 18:36:22.961 348329 DEBUG oslo_concurrency.lockutils [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:36:22 compute-0 nova_compute[348325]: 2025-12-03 18:36:22.962 348329 DEBUG oslo_concurrency.processutils [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:36:22 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1112: 321 pgs: 321 active+clean; 18 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 132 KiB/s wr, 13 op/s
Dec  3 18:36:22 compute-0 nova_compute[348325]: 2025-12-03 18:36:22.993 348329 DEBUG nova.compute.manager [req-c6882603-10ec-4fc3-9c2c-dc9fec6b402e req-f3a6b67e-9ae9-497f-b825-8eefa06eb837 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Received event network-changed-3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 18:36:22 compute-0 nova_compute[348325]: 2025-12-03 18:36:22.994 348329 DEBUG nova.compute.manager [req-c6882603-10ec-4fc3-9c2c-dc9fec6b402e req-f3a6b67e-9ae9-497f-b825-8eefa06eb837 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Refreshing instance network info cache due to event network-changed-3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  3 18:36:22 compute-0 nova_compute[348325]: 2025-12-03 18:36:22.995 348329 DEBUG oslo_concurrency.lockutils [req-c6882603-10ec-4fc3-9c2c-dc9fec6b402e req-f3a6b67e-9ae9-497f-b825-8eefa06eb837 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "refresh_cache-1ca1fbdb-089c-4544-821e-0542089b8424" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 18:36:23 compute-0 nova_compute[348325]: 2025-12-03 18:36:23.001 348329 DEBUG oslo_concurrency.processutils [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G" returned: 0 in 0.039s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:36:23 compute-0 nova_compute[348325]: 2025-12-03 18:36:23.002 348329 DEBUG oslo_concurrency.processutils [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Running cmd (subprocess): mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:36:23 compute-0 nova_compute[348325]: 2025-12-03 18:36:23.051 348329 DEBUG oslo_concurrency.processutils [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CMD "mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:36:23 compute-0 nova_compute[348325]: 2025-12-03 18:36:23.052 348329 DEBUG oslo_concurrency.lockutils [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.091s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
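[annotation] The ephemeral disk is materialised once in the image cache as a raw file (the ephemeral_1_0706d66 name appears to encode the 1 GiB size plus a short hash) and formatted VFAT with the label the guest will see, before being imported into RBD like the root disk. The two commands from the log, as a sketch:

    import subprocess

    eph = "/var/lib/nova/instances/_base/ephemeral_1_0706d66"

    # 1 GiB raw backing file for the flavor's ephemeral disk.
    subprocess.check_call(["env", "LC_ALL=C", "LANG=C",
                           "qemu-img", "create", "-f", "raw", eph, "1G"])
    # Format it VFAT, labelled ephemeral0 as in the log.
    subprocess.check_call(["mkfs", "-t", "vfat", "-n", "ephemeral0", eph])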
Dec  3 18:36:23 compute-0 nova_compute[348325]: 2025-12-03 18:36:23.089 348329 DEBUG nova.storage.rbd_utils [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] rbd image 1ca1fbdb-089c-4544-821e-0542089b8424_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 18:36:23 compute-0 nova_compute[348325]: 2025-12-03 18:36:23.098 348329 DEBUG oslo_concurrency.processutils [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 1ca1fbdb-089c-4544-821e-0542089b8424_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:36:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:36:23.331 286999 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:36:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:36:23.333 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:36:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:36:23.333 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:36:23 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:36:23 compute-0 nova_compute[348325]: 2025-12-03 18:36:23.782 348329 DEBUG nova.network.neutron [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Updating instance_info_cache with network_info: [{"id": "3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b", "address": "fa:16:3e:ea:1b:25", "network": {"id": "85c8d446-ad7f-4d1b-a311-89b0b07e8aad", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.128", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2770200bdb2436c90142fa2e5ddcd47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d8505a1-5c", "ovs_interfaceid": "3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 18:36:24 compute-0 nova_compute[348325]: 2025-12-03 18:36:24.024 348329 DEBUG oslo_concurrency.lockutils [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Releasing lock "refresh_cache-1ca1fbdb-089c-4544-821e-0542089b8424" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 18:36:24 compute-0 nova_compute[348325]: 2025-12-03 18:36:24.025 348329 DEBUG nova.compute.manager [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Instance network_info: |[{"id": "3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b", "address": "fa:16:3e:ea:1b:25", "network": {"id": "85c8d446-ad7f-4d1b-a311-89b0b07e8aad", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.128", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2770200bdb2436c90142fa2e5ddcd47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d8505a1-5c", "ovs_interfaceid": "3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  3 18:36:24 compute-0 nova_compute[348325]: 2025-12-03 18:36:24.026 348329 DEBUG oslo_concurrency.lockutils [req-c6882603-10ec-4fc3-9c2c-dc9fec6b402e req-f3a6b67e-9ae9-497f-b825-8eefa06eb837 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquired lock "refresh_cache-1ca1fbdb-089c-4544-821e-0542089b8424" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 18:36:24 compute-0 nova_compute[348325]: 2025-12-03 18:36:24.027 348329 DEBUG nova.network.neutron [req-c6882603-10ec-4fc3-9c2c-dc9fec6b402e req-f3a6b67e-9ae9-497f-b825-8eefa06eb837 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Refreshing network info cache for port 3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
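[annotation] The instance_info_cache payload logged above is plain JSON, so the details that matter when debugging the port (device name, MAC, fixed IPs, MTU) are easy to pull out. A sketch over a trimmed copy of the logged entry:

    import json

    network_info = json.loads("""[{
      "id": "3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b",
      "address": "fa:16:3e:ea:1b:25",
      "devname": "tap3d8505a1-5c",
      "network": {"id": "85c8d446-ad7f-4d1b-a311-89b0b07e8aad",
                  "bridge": "br-int", "label": "private",
                  "subnets": [{"cidr": "192.168.0.0/24",
                               "ips": [{"address": "192.168.0.128"}]}],
                  "meta": {"mtu": 1442, "tunneled": true}},
      "vnic_type": "normal", "active": false
    }]""")

    for vif in network_info:
        ips = [ip["address"]
               for subnet in vif["network"]["subnets"]
               for ip in subnet["ips"]]
        print(vif["devname"], vif["address"], ips,
              "mtu", vif["network"]["meta"]["mtu"])
    # tap3d8505a1-5c fa:16:3e:ea:1b:25 ['192.168.0.128'] mtu 1442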
Dec  3 18:36:24 compute-0 nova_compute[348325]: 2025-12-03 18:36:24.080 348329 DEBUG oslo_concurrency.processutils [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 1ca1fbdb-089c-4544-821e-0542089b8424_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.982s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:36:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 18:36:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:36:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 18:36:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:36:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.6787946864621998e-05 of space, bias 1.0, pg target 0.0050363840593865995 quantized to 32 (current 32)
Dec  3 18:36:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:36:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:36:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:36:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:36:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:36:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec  3 18:36:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:36:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 18:36:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:36:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:36:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:36:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 18:36:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:36:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 18:36:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:36:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:36:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:36:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
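[annotation] All of these autoscaler lines follow one formula, and the logged numbers reproduce it exactly if one assumes the default mon_target_pg_per_osd of 100 and the 3 OSDs reported in the osdmap lines earlier: pg target = usage_ratio * bias * (100 * 3), afterwards quantized to a power of two and bounded (hence 32, or 16 for the downscaling cephfs metadata pool). Checking three pools against the log:

    # Assumes mon_target_pg_per_osd = 100 (Ceph's default) and 3 OSDs,
    # per the "3 total, 3 up, 3 in" osdmap lines above.
    TARGET_PGS = 100 * 3

    pools = {
        # name: (usage ratio from the log, bias from the log)
        "images":             (0.00025334537995702286, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07,  4.0),
        "default.rgw.meta":   (1.2718141564107572e-07, 4.0),
    }
    for name, (ratio, bias) in pools.items():
        print(name, ratio * bias * TARGET_PGS)
    # images             ~0.0760036139871  (logged pg target 0.07600361...)
    # cephfs.cephfs.meta ~0.0006104707951  (logged 0.00061047...)
    # default.rgw.meta   ~0.0001526176988  (logged 0.00015261...)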
Dec  3 18:36:24 compute-0 nova_compute[348325]: 2025-12-03 18:36:24.273 348329 DEBUG nova.virt.libvirt.driver [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  3 18:36:24 compute-0 nova_compute[348325]: 2025-12-03 18:36:24.274 348329 DEBUG nova.virt.libvirt.driver [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Ensure instance console log exists: /var/lib/nova/instances/1ca1fbdb-089c-4544-821e-0542089b8424/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  3 18:36:24 compute-0 nova_compute[348325]: 2025-12-03 18:36:24.275 348329 DEBUG oslo_concurrency.lockutils [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:36:24 compute-0 nova_compute[348325]: 2025-12-03 18:36:24.275 348329 DEBUG oslo_concurrency.lockutils [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:36:24 compute-0 nova_compute[348325]: 2025-12-03 18:36:24.276 348329 DEBUG oslo_concurrency.lockutils [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:36:24 compute-0 nova_compute[348325]: 2025-12-03 18:36:24.278 348329 DEBUG nova.virt.libvirt.driver [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Start _get_guest_xml network_info=[{"id": "3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b", "address": "fa:16:3e:ea:1b:25", "network": {"id": "85c8d446-ad7f-4d1b-a311-89b0b07e8aad", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.128", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2770200bdb2436c90142fa2e5ddcd47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d8505a1-5c", "ovs_interfaceid": "3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-03T18:35:07Z,direct_url=<?>,disk_format='qcow2',id=e68cd467-b4e6-45e0-8e55-984fda402294,min_disk=0,min_ram=0,name='cirros',owner='d2770200bdb2436c90142fa2e5ddcd47',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-03T18:35:10Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'encryption_format': None, 'guest_format': None, 'disk_bus': 'virtio', 'size': 0, 'boot_index': 0, 'encryption_options': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'image_id': 'e68cd467-b4e6-45e0-8e55-984fda402294'}], 'ephemerals': [{'encryption_secret_uuid': None, 'encrypted': False, 'encryption_format': None, 'guest_format': None, 'disk_bus': 'virtio', 'size': 1, 'encryption_options': None, 'device_type': 'disk', 'device_name': '/dev/vdb'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  3 18:36:24 compute-0 nova_compute[348325]: 2025-12-03 18:36:24.287 348329 WARNING nova.virt.libvirt.driver [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 18:36:24 compute-0 nova_compute[348325]: 2025-12-03 18:36:24.295 348329 DEBUG nova.virt.libvirt.host [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  3 18:36:24 compute-0 nova_compute[348325]: 2025-12-03 18:36:24.296 348329 DEBUG nova.virt.libvirt.host [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  3 18:36:24 compute-0 nova_compute[348325]: 2025-12-03 18:36:24.301 348329 DEBUG nova.virt.libvirt.host [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  3 18:36:24 compute-0 nova_compute[348325]: 2025-12-03 18:36:24.302 348329 DEBUG nova.virt.libvirt.host [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
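
[annotation] The four probe lines above are nova checking for a CPU controller, first via cgroups v1 (missing here) and then via the cgroups v2 unified hierarchy (found). On a v2 host the check reduces to reading the controller list; a stdlib-only sketch, with the standard unified-hierarchy mount point assumed rather than taken from this log:

    # Sketch: detect the cgroup v2 CPU controller on a unified-hierarchy
    # host. Pure stdlib; simplified relative to nova's probing.
    from pathlib import Path

    def has_cgroupsv2_cpu_controller(root='/sys/fs/cgroup'):
        controllers = Path(root, 'cgroup.controllers')
        if not controllers.exists():      # not a cgroup v2 mount
            return False
        return 'cpu' in controllers.read_text().split()

    print(has_cgroupsv2_cpu_controller())
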
Dec  3 18:36:24 compute-0 nova_compute[348325]: 2025-12-03 18:36:24.302 348329 DEBUG nova.virt.libvirt.driver [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  3 18:36:24 compute-0 nova_compute[348325]: 2025-12-03 18:36:24.302 348329 DEBUG nova.virt.hardware [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-03T18:35:14Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='6cb250a4-d28c-4125-888b-653b31e29275',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-03T18:35:07Z,direct_url=<?>,disk_format='qcow2',id=e68cd467-b4e6-45e0-8e55-984fda402294,min_disk=0,min_ram=0,name='cirros',owner='d2770200bdb2436c90142fa2e5ddcd47',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-03T18:35:10Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  3 18:36:24 compute-0 nova_compute[348325]: 2025-12-03 18:36:24.303 348329 DEBUG nova.virt.hardware [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  3 18:36:24 compute-0 nova_compute[348325]: 2025-12-03 18:36:24.303 348329 DEBUG nova.virt.hardware [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  3 18:36:24 compute-0 nova_compute[348325]: 2025-12-03 18:36:24.304 348329 DEBUG nova.virt.hardware [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  3 18:36:24 compute-0 nova_compute[348325]: 2025-12-03 18:36:24.304 348329 DEBUG nova.virt.hardware [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  3 18:36:24 compute-0 nova_compute[348325]: 2025-12-03 18:36:24.304 348329 DEBUG nova.virt.hardware [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  3 18:36:24 compute-0 nova_compute[348325]: 2025-12-03 18:36:24.304 348329 DEBUG nova.virt.hardware [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  3 18:36:24 compute-0 nova_compute[348325]: 2025-12-03 18:36:24.305 348329 DEBUG nova.virt.hardware [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  3 18:36:24 compute-0 nova_compute[348325]: 2025-12-03 18:36:24.305 348329 DEBUG nova.virt.hardware [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  3 18:36:24 compute-0 nova_compute[348325]: 2025-12-03 18:36:24.305 348329 DEBUG nova.virt.hardware [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  3 18:36:24 compute-0 nova_compute[348325]: 2025-12-03 18:36:24.306 348329 DEBUG nova.virt.hardware [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
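
[annotation] The sequence above (limits 65536:65536:65536, one vCPU, a single 1:1:1 candidate) is nova enumerating every sockets/cores/threads split that exactly covers the flavor's vCPU count. A simplified re-implementation of that enumeration, not nova's exact code:

    # Sketch: enumerate (sockets, cores, threads) topologies for a vCPU
    # count, capped per dimension. Simplified from
    # nova.virt.hardware._get_possible_cpu_topologies.
    from collections import namedtuple

    VirtCPUTopology = namedtuple('VirtCPUTopology', 'sockets cores threads')

    def possible_topologies(vcpus, maxsockets=65536, maxcores=65536,
                            maxthreads=65536):
        for s in range(1, min(vcpus, maxsockets) + 1):
            for c in range(1, min(vcpus, maxcores) + 1):
                for t in range(1, min(vcpus, maxthreads) + 1):
                    if s * c * t == vcpus:
                        yield VirtCPUTopology(s, c, t)

    # matches the log: one topology for one vCPU
    print(list(possible_topologies(1)))
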
Dec  3 18:36:24 compute-0 nova_compute[348325]: 2025-12-03 18:36:24.323 348329 DEBUG nova.privsep.utils [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Dec  3 18:36:24 compute-0 nova_compute[348325]: 2025-12-03 18:36:24.325 348329 DEBUG oslo_concurrency.processutils [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:36:24 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 18:36:24 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3439311275' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 18:36:24 compute-0 nova_compute[348325]: 2025-12-03 18:36:24.810 348329 DEBUG oslo_concurrency.processutils [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
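
[annotation] Each `ceph mon dump --format=json` round trip above is nova's RBD backend refreshing the monitor address list that later lands in the `<host>` elements of the guest's RBD disks. A stripped-down version of that exchange; it assumes the `ceph` CLI with the same `--id openstack` credentials, and the parsing keys follow the standard mon dump JSON layout:

    # Sketch: fetch the monitor map the way nova's rbd_utils shells out
    # for it, then pull host:port pairs for the libvirt <host> elements.
    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'mon', 'dump', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    monmap = json.loads(out)
    for mon in monmap['mons']:
        # 'addr' looks like '192.168.122.100:6789/0'; strip the nonce
        host, port = mon['addr'].split('/')[0].rsplit(':', 1)
        print(host, port)
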
Dec  3 18:36:24 compute-0 nova_compute[348325]: 2025-12-03 18:36:24.813 348329 DEBUG oslo_concurrency.processutils [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:36:24 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1113: 321 pgs: 321 active+clean; 31 MiB data, 168 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 666 KiB/s wr, 32 op/s
Dec  3 18:36:25 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 18:36:25 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3090278220' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 18:36:25 compute-0 nova_compute[348325]: 2025-12-03 18:36:25.329 348329 DEBUG oslo_concurrency.processutils [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:36:25 compute-0 nova_compute[348325]: 2025-12-03 18:36:25.370 348329 DEBUG nova.storage.rbd_utils [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] rbd image 1ca1fbdb-089c-4544-821e-0542089b8424_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
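
[annotation] The "does not exist" debug line is informational: nova probes for a leftover `_disk.config` image before it builds a fresh config drive. The equivalent probe through the python-rbd bindings might look like the following; it assumes python3-rados/python3-rbd and the same client.openstack credentials:

    # Sketch: check whether an RBD image exists in the 'vms' pool,
    # mirroring the probe logged by nova.storage.rbd_utils.
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          rados_id='openstack')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')
        try:
            name = '1ca1fbdb-089c-4544-821e-0542089b8424_disk.config'
            try:
                with rbd.Image(ioctx, name, read_only=True):
                    print('image exists')
            except rbd.ImageNotFound:
                print('image does not exist')
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()
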
Dec  3 18:36:25 compute-0 nova_compute[348325]: 2025-12-03 18:36:25.380 348329 DEBUG oslo_concurrency.processutils [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:36:25 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 18:36:25 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3054798407' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 18:36:25 compute-0 nova_compute[348325]: 2025-12-03 18:36:25.852 348329 DEBUG oslo_concurrency.processutils [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:36:25 compute-0 nova_compute[348325]: 2025-12-03 18:36:25.854 348329 DEBUG nova.virt.libvirt.vif [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T18:36:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='e68cd467-b4e6-45e0-8e55-984fda402294',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d2770200bdb2436c90142fa2e5ddcd47',ramdisk_id='',reservation_id='r-aytiw8mr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader',image_base_image_ref='e68cd467-b4e6-45e0-8e55-984fda402294',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T18:36:17Z,user_data=None,user_id='56338958b09445f5af9aa9e4601a1a8a',uuid=1ca1fbdb-089c-4544-821e-0542089b8424,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b", "address": "fa:16:3e:ea:1b:25", "network": {"id": "85c8d446-ad7f-4d1b-a311-89b0b07e8aad", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.128", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2770200bdb2436c90142fa2e5ddcd47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d8505a1-5c", "ovs_interfaceid": "3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  3 18:36:25 compute-0 nova_compute[348325]: 2025-12-03 18:36:25.854 348329 DEBUG nova.network.os_vif_util [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Converting VIF {"id": "3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b", "address": "fa:16:3e:ea:1b:25", "network": {"id": "85c8d446-ad7f-4d1b-a311-89b0b07e8aad", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.128", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2770200bdb2436c90142fa2e5ddcd47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d8505a1-5c", "ovs_interfaceid": "3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 18:36:25 compute-0 nova_compute[348325]: 2025-12-03 18:36:25.855 348329 DEBUG nova.network.os_vif_util [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ea:1b:25,bridge_name='br-int',has_traffic_filtering=True,id=3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b,network=Network(85c8d446-ad7f-4d1b-a311-89b0b07e8aad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d8505a1-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 18:36:25 compute-0 nova_compute[348325]: 2025-12-03 18:36:25.857 348329 DEBUG nova.objects.instance [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1ca1fbdb-089c-4544-821e-0542089b8424 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 18:36:25 compute-0 nova_compute[348325]: 2025-12-03 18:36:25.904 348329 DEBUG nova.virt.libvirt.driver [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] End _get_guest_xml xml=<domain type="kvm">
Dec  3 18:36:25 compute-0 nova_compute[348325]:  <uuid>1ca1fbdb-089c-4544-821e-0542089b8424</uuid>
Dec  3 18:36:25 compute-0 nova_compute[348325]:  <name>instance-00000001</name>
Dec  3 18:36:25 compute-0 nova_compute[348325]:  <memory>524288</memory>
Dec  3 18:36:25 compute-0 nova_compute[348325]:  <vcpu>1</vcpu>
Dec  3 18:36:25 compute-0 nova_compute[348325]:  <metadata>
Dec  3 18:36:25 compute-0 nova_compute[348325]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  3 18:36:25 compute-0 nova_compute[348325]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  3 18:36:25 compute-0 nova_compute[348325]:      <nova:name>test_0</nova:name>
Dec  3 18:36:25 compute-0 nova_compute[348325]:      <nova:creationTime>2025-12-03 18:36:24</nova:creationTime>
Dec  3 18:36:25 compute-0 nova_compute[348325]:      <nova:flavor name="m1.small">
Dec  3 18:36:25 compute-0 nova_compute[348325]:        <nova:memory>512</nova:memory>
Dec  3 18:36:25 compute-0 nova_compute[348325]:        <nova:disk>1</nova:disk>
Dec  3 18:36:25 compute-0 nova_compute[348325]:        <nova:swap>0</nova:swap>
Dec  3 18:36:25 compute-0 nova_compute[348325]:        <nova:ephemeral>1</nova:ephemeral>
Dec  3 18:36:25 compute-0 nova_compute[348325]:        <nova:vcpus>1</nova:vcpus>
Dec  3 18:36:25 compute-0 nova_compute[348325]:      </nova:flavor>
Dec  3 18:36:25 compute-0 nova_compute[348325]:      <nova:owner>
Dec  3 18:36:25 compute-0 nova_compute[348325]:        <nova:user uuid="56338958b09445f5af9aa9e4601a1a8a">admin</nova:user>
Dec  3 18:36:25 compute-0 nova_compute[348325]:        <nova:project uuid="d2770200bdb2436c90142fa2e5ddcd47">admin</nova:project>
Dec  3 18:36:25 compute-0 nova_compute[348325]:      </nova:owner>
Dec  3 18:36:25 compute-0 nova_compute[348325]:      <nova:root type="image" uuid="e68cd467-b4e6-45e0-8e55-984fda402294"/>
Dec  3 18:36:25 compute-0 nova_compute[348325]:      <nova:ports>
Dec  3 18:36:25 compute-0 nova_compute[348325]:        <nova:port uuid="3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b">
Dec  3 18:36:25 compute-0 nova_compute[348325]:          <nova:ip type="fixed" address="192.168.0.128" ipVersion="4"/>
Dec  3 18:36:25 compute-0 nova_compute[348325]:        </nova:port>
Dec  3 18:36:25 compute-0 nova_compute[348325]:      </nova:ports>
Dec  3 18:36:25 compute-0 nova_compute[348325]:    </nova:instance>
Dec  3 18:36:25 compute-0 nova_compute[348325]:  </metadata>
Dec  3 18:36:25 compute-0 nova_compute[348325]:  <sysinfo type="smbios">
Dec  3 18:36:25 compute-0 nova_compute[348325]:    <system>
Dec  3 18:36:25 compute-0 nova_compute[348325]:      <entry name="manufacturer">RDO</entry>
Dec  3 18:36:25 compute-0 nova_compute[348325]:      <entry name="product">OpenStack Compute</entry>
Dec  3 18:36:25 compute-0 nova_compute[348325]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  3 18:36:25 compute-0 nova_compute[348325]:      <entry name="serial">1ca1fbdb-089c-4544-821e-0542089b8424</entry>
Dec  3 18:36:25 compute-0 nova_compute[348325]:      <entry name="uuid">1ca1fbdb-089c-4544-821e-0542089b8424</entry>
Dec  3 18:36:25 compute-0 nova_compute[348325]:      <entry name="family">Virtual Machine</entry>
Dec  3 18:36:25 compute-0 nova_compute[348325]:    </system>
Dec  3 18:36:25 compute-0 nova_compute[348325]:  </sysinfo>
Dec  3 18:36:25 compute-0 nova_compute[348325]:  <os>
Dec  3 18:36:25 compute-0 nova_compute[348325]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  3 18:36:25 compute-0 nova_compute[348325]:    <boot dev="hd"/>
Dec  3 18:36:25 compute-0 nova_compute[348325]:    <smbios mode="sysinfo"/>
Dec  3 18:36:25 compute-0 nova_compute[348325]:  </os>
Dec  3 18:36:25 compute-0 nova_compute[348325]:  <features>
Dec  3 18:36:25 compute-0 nova_compute[348325]:    <acpi/>
Dec  3 18:36:25 compute-0 nova_compute[348325]:    <apic/>
Dec  3 18:36:25 compute-0 nova_compute[348325]:    <vmcoreinfo/>
Dec  3 18:36:25 compute-0 nova_compute[348325]:  </features>
Dec  3 18:36:25 compute-0 nova_compute[348325]:  <clock offset="utc">
Dec  3 18:36:25 compute-0 nova_compute[348325]:    <timer name="pit" tickpolicy="delay"/>
Dec  3 18:36:25 compute-0 nova_compute[348325]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  3 18:36:25 compute-0 nova_compute[348325]:    <timer name="hpet" present="no"/>
Dec  3 18:36:25 compute-0 nova_compute[348325]:  </clock>
Dec  3 18:36:25 compute-0 nova_compute[348325]:  <cpu mode="host-model" match="exact">
Dec  3 18:36:25 compute-0 nova_compute[348325]:    <topology sockets="1" cores="1" threads="1"/>
Dec  3 18:36:25 compute-0 nova_compute[348325]:  </cpu>
Dec  3 18:36:25 compute-0 nova_compute[348325]:  <devices>
Dec  3 18:36:25 compute-0 nova_compute[348325]:    <disk type="network" device="disk">
Dec  3 18:36:25 compute-0 nova_compute[348325]:      <driver type="raw" cache="none"/>
Dec  3 18:36:25 compute-0 nova_compute[348325]:      <source protocol="rbd" name="vms/1ca1fbdb-089c-4544-821e-0542089b8424_disk">
Dec  3 18:36:25 compute-0 nova_compute[348325]:        <host name="192.168.122.100" port="6789"/>
Dec  3 18:36:25 compute-0 nova_compute[348325]:      </source>
Dec  3 18:36:25 compute-0 nova_compute[348325]:      <auth username="openstack">
Dec  3 18:36:25 compute-0 nova_compute[348325]:        <secret type="ceph" uuid="c1caf3ba-b2a5-5005-a11e-e955c344dccc"/>
Dec  3 18:36:25 compute-0 nova_compute[348325]:      </auth>
Dec  3 18:36:25 compute-0 nova_compute[348325]:      <target dev="vda" bus="virtio"/>
Dec  3 18:36:25 compute-0 nova_compute[348325]:    </disk>
Dec  3 18:36:25 compute-0 nova_compute[348325]:    <disk type="network" device="disk">
Dec  3 18:36:25 compute-0 nova_compute[348325]:      <driver type="raw" cache="none"/>
Dec  3 18:36:25 compute-0 nova_compute[348325]:      <source protocol="rbd" name="vms/1ca1fbdb-089c-4544-821e-0542089b8424_disk.eph0">
Dec  3 18:36:25 compute-0 nova_compute[348325]:        <host name="192.168.122.100" port="6789"/>
Dec  3 18:36:25 compute-0 nova_compute[348325]:      </source>
Dec  3 18:36:25 compute-0 nova_compute[348325]:      <auth username="openstack">
Dec  3 18:36:25 compute-0 nova_compute[348325]:        <secret type="ceph" uuid="c1caf3ba-b2a5-5005-a11e-e955c344dccc"/>
Dec  3 18:36:25 compute-0 nova_compute[348325]:      </auth>
Dec  3 18:36:25 compute-0 nova_compute[348325]:      <target dev="vdb" bus="virtio"/>
Dec  3 18:36:25 compute-0 nova_compute[348325]:    </disk>
Dec  3 18:36:25 compute-0 nova_compute[348325]:    <disk type="network" device="cdrom">
Dec  3 18:36:25 compute-0 nova_compute[348325]:      <driver type="raw" cache="none"/>
Dec  3 18:36:25 compute-0 nova_compute[348325]:      <source protocol="rbd" name="vms/1ca1fbdb-089c-4544-821e-0542089b8424_disk.config">
Dec  3 18:36:25 compute-0 nova_compute[348325]:        <host name="192.168.122.100" port="6789"/>
Dec  3 18:36:25 compute-0 nova_compute[348325]:      </source>
Dec  3 18:36:25 compute-0 nova_compute[348325]:      <auth username="openstack">
Dec  3 18:36:25 compute-0 nova_compute[348325]:        <secret type="ceph" uuid="c1caf3ba-b2a5-5005-a11e-e955c344dccc"/>
Dec  3 18:36:25 compute-0 nova_compute[348325]:      </auth>
Dec  3 18:36:25 compute-0 nova_compute[348325]:      <target dev="sda" bus="sata"/>
Dec  3 18:36:25 compute-0 nova_compute[348325]:    </disk>
Dec  3 18:36:25 compute-0 nova_compute[348325]:    <interface type="ethernet">
Dec  3 18:36:25 compute-0 nova_compute[348325]:      <mac address="fa:16:3e:ea:1b:25"/>
Dec  3 18:36:25 compute-0 nova_compute[348325]:      <model type="virtio"/>
Dec  3 18:36:25 compute-0 nova_compute[348325]:      <driver name="vhost" rx_queue_size="512"/>
Dec  3 18:36:25 compute-0 nova_compute[348325]:      <mtu size="1442"/>
Dec  3 18:36:25 compute-0 nova_compute[348325]:      <target dev="tap3d8505a1-5c"/>
Dec  3 18:36:25 compute-0 nova_compute[348325]:    </interface>
Dec  3 18:36:25 compute-0 nova_compute[348325]:    <serial type="pty">
Dec  3 18:36:25 compute-0 nova_compute[348325]:      <log file="/var/lib/nova/instances/1ca1fbdb-089c-4544-821e-0542089b8424/console.log" append="off"/>
Dec  3 18:36:25 compute-0 nova_compute[348325]:    </serial>
Dec  3 18:36:25 compute-0 nova_compute[348325]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  3 18:36:25 compute-0 nova_compute[348325]:    <video>
Dec  3 18:36:25 compute-0 nova_compute[348325]:      <model type="virtio"/>
Dec  3 18:36:25 compute-0 nova_compute[348325]:    </video>
Dec  3 18:36:25 compute-0 nova_compute[348325]:    <input type="tablet" bus="usb"/>
Dec  3 18:36:25 compute-0 nova_compute[348325]:    <rng model="virtio">
Dec  3 18:36:25 compute-0 nova_compute[348325]:      <backend model="random">/dev/urandom</backend>
Dec  3 18:36:25 compute-0 nova_compute[348325]:    </rng>
Dec  3 18:36:25 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root"/>
Dec  3 18:36:25 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:36:25 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:36:25 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:36:25 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:36:25 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:36:25 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:36:25 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:36:25 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:36:25 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:36:25 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:36:25 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:36:25 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:36:25 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:36:25 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:36:25 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:36:25 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:36:25 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:36:25 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:36:25 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:36:25 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:36:25 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:36:25 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:36:25 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:36:25 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:36:25 compute-0 nova_compute[348325]:    <controller type="usb" index="0"/>
Dec  3 18:36:25 compute-0 nova_compute[348325]:    <memballoon model="virtio">
Dec  3 18:36:25 compute-0 nova_compute[348325]:      <stats period="10"/>
Dec  3 18:36:25 compute-0 nova_compute[348325]:    </memballoon>
Dec  3 18:36:25 compute-0 nova_compute[348325]:  </devices>
Dec  3 18:36:25 compute-0 nova_compute[348325]: </domain>
Dec  3 18:36:25 compute-0 nova_compute[348325]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
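
[annotation] The domain XML above is the finished product of _get_guest_xml: RBD-backed disks carrying the mon host and cephx secret, the tap interface, and the q35 machine type are all inline. A quick stdlib check that pulls the devices back out of such a dump, useful when eyeballing logs like this ('domain.xml' is a hypothetical copy of the document logged above):

    # Sketch: extract disk targets/sources and interface MACs from a
    # dumped libvirt domain XML (stdlib only).
    import xml.etree.ElementTree as ET

    dom = ET.parse('domain.xml').getroot()
    for disk in dom.findall('./devices/disk'):
        target = disk.find('target').get('dev')
        source = disk.find('source')
        name = source.get('name') or source.get('file')
        print('%s (%s) <- %s' % (target, disk.get('device'), name))
    for iface in dom.findall('./devices/interface'):
        print('nic', iface.find('mac').get('address'),
              '->', iface.find('target').get('dev'))
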
Dec  3 18:36:25 compute-0 nova_compute[348325]: 2025-12-03 18:36:25.905 348329 DEBUG nova.compute.manager [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Preparing to wait for external event network-vif-plugged-3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  3 18:36:25 compute-0 nova_compute[348325]: 2025-12-03 18:36:25.906 348329 DEBUG oslo_concurrency.lockutils [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Acquiring lock "1ca1fbdb-089c-4544-821e-0542089b8424-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:36:25 compute-0 nova_compute[348325]: 2025-12-03 18:36:25.906 348329 DEBUG oslo_concurrency.lockutils [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "1ca1fbdb-089c-4544-821e-0542089b8424-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:36:25 compute-0 nova_compute[348325]: 2025-12-03 18:36:25.906 348329 DEBUG oslo_concurrency.lockutils [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "1ca1fbdb-089c-4544-821e-0542089b8424-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
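
[annotation] The lock churn above is nova registering a waiter for `network-vif-plugged` before it plugs the VIF and launches the guest, so the Neutron notification cannot race past it. The shape of that prepare-then-wait pattern, reduced to stdlib threading with hypothetical names, not nova's InstanceEvents:

    # Sketch: register an event before triggering the action that fires
    # it, then block with a timeout. Simplified shape of
    # nova.compute.manager.InstanceEvents.prepare_for_instance_event.
    import threading

    _events = {}
    _lock = threading.Lock()

    def prepare_for_event(key):
        with _lock:                    # mirrors the "-events" lock above
            return _events.setdefault(key, threading.Event())

    def deliver_event(key):
        with _lock:
            ev = _events.pop(key, None)
        if ev:
            ev.set()

    waiter = prepare_for_event('network-vif-plugged-3d8505a1')
    # ... plug the VIF / start the guest here ...
    deliver_event('network-vif-plugged-3d8505a1')   # normally from Neutron
    assert waiter.wait(timeout=300)
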
Dec  3 18:36:25 compute-0 nova_compute[348325]: 2025-12-03 18:36:25.907 348329 DEBUG nova.virt.libvirt.vif [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T18:36:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='e68cd467-b4e6-45e0-8e55-984fda402294',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d2770200bdb2436c90142fa2e5ddcd47',ramdisk_id='',reservation_id='r-aytiw8mr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader',image_base_image_ref='e68cd467-b4e6-45e0-8e55-984fda402294',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T18:36:17Z,user_data=None,user_id='56338958b09445f5af9aa9e4601a1a8a',uuid=1ca1fbdb-089c-4544-821e-0542089b8424,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b", "address": "fa:16:3e:ea:1b:25", "network": {"id": "85c8d446-ad7f-4d1b-a311-89b0b07e8aad", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.128", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2770200bdb2436c90142fa2e5ddcd47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d8505a1-5c", "ovs_interfaceid": "3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  3 18:36:25 compute-0 nova_compute[348325]: 2025-12-03 18:36:25.908 348329 DEBUG nova.network.os_vif_util [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Converting VIF {"id": "3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b", "address": "fa:16:3e:ea:1b:25", "network": {"id": "85c8d446-ad7f-4d1b-a311-89b0b07e8aad", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.128", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2770200bdb2436c90142fa2e5ddcd47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d8505a1-5c", "ovs_interfaceid": "3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 18:36:25 compute-0 nova_compute[348325]: 2025-12-03 18:36:25.911 348329 DEBUG nova.network.os_vif_util [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ea:1b:25,bridge_name='br-int',has_traffic_filtering=True,id=3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b,network=Network(85c8d446-ad7f-4d1b-a311-89b0b07e8aad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d8505a1-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 18:36:25 compute-0 nova_compute[348325]: 2025-12-03 18:36:25.912 348329 DEBUG os_vif [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ea:1b:25,bridge_name='br-int',has_traffic_filtering=True,id=3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b,network=Network(85c8d446-ad7f-4d1b-a311-89b0b07e8aad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d8505a1-5c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  3 18:36:25 compute-0 podman[411537]: 2025-12-03 18:36:25.921677899 +0000 UTC m=+0.082695579 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  3 18:36:25 compute-0 nova_compute[348325]: 2025-12-03 18:36:25.956 348329 DEBUG ovsdbapp.backend.ovs_idl [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  3 18:36:25 compute-0 nova_compute[348325]: 2025-12-03 18:36:25.957 348329 DEBUG ovsdbapp.backend.ovs_idl [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  3 18:36:25 compute-0 nova_compute[348325]: 2025-12-03 18:36:25.957 348329 DEBUG ovsdbapp.backend.ovs_idl [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  3 18:36:25 compute-0 nova_compute[348325]: 2025-12-03 18:36:25.958 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec  3 18:36:25 compute-0 nova_compute[348325]: 2025-12-03 18:36:25.960 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [POLLOUT] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:36:25 compute-0 nova_compute[348325]: 2025-12-03 18:36:25.960 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec  3 18:36:25 compute-0 nova_compute[348325]: 2025-12-03 18:36:25.961 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:36:25 compute-0 podman[411536]: 2025-12-03 18:36:25.962053604 +0000 UTC m=+0.128571619 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec  3 18:36:25 compute-0 nova_compute[348325]: 2025-12-03 18:36:25.963 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:36:25 compute-0 nova_compute[348325]: 2025-12-03 18:36:25.966 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:36:25 compute-0 nova_compute[348325]: 2025-12-03 18:36:25.975 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:36:25 compute-0 nova_compute[348325]: 2025-12-03 18:36:25.975 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:36:25 compute-0 nova_compute[348325]: 2025-12-03 18:36:25.975 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 18:36:25 compute-0 nova_compute[348325]: 2025-12-03 18:36:25.976 348329 INFO oslo.privsep.daemon [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpjs5kby17/privsep.sock']#033[00m
Dec  3 18:36:26 compute-0 nova_compute[348325]: 2025-12-03 18:36:26.771 348329 INFO oslo.privsep.daemon [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Dec  3 18:36:26 compute-0 nova_compute[348325]: 2025-12-03 18:36:26.596 411586 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Dec  3 18:36:26 compute-0 nova_compute[348325]: 2025-12-03 18:36:26.600 411586 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Dec  3 18:36:26 compute-0 nova_compute[348325]: 2025-12-03 18:36:26.602 411586 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m
Dec  3 18:36:26 compute-0 nova_compute[348325]: 2025-12-03 18:36:26.602 411586 INFO oslo.privsep.daemon [-] privsep daemon running as pid 411586#033[00m
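
[annotation] The rootwrap invocation above spawns an oslo.privsep helper that keeps only CAP_DAC_OVERRIDE and CAP_NET_ADMIN, exactly as the daemon reports. Declaring such a context in application code looks roughly like this; the context and function names are illustrative, not vif_plug_ovs's own module:

    # Sketch: an oslo.privsep context limited to the two capabilities the
    # daemon above reports. Names are illustrative.
    from oslo_privsep import capabilities, priv_context

    vif_plug = priv_context.PrivContext(
        'vif_plug_demo',
        cfg_section='vif_plug_demo_privileged',
        capabilities=[capabilities.CAP_DAC_OVERRIDE,
                      capabilities.CAP_NET_ADMIN],
    )

    @vif_plug.entrypoint
    def set_device_state(dev, state):
        # runs inside the privileged daemon, not the caller's process
        ...
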
Dec  3 18:36:26 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1114: 321 pgs: 321 active+clean; 47 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 62 op/s
Dec  3 18:36:27 compute-0 nova_compute[348325]: 2025-12-03 18:36:27.100 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:36:27 compute-0 nova_compute[348325]: 2025-12-03 18:36:27.101 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3d8505a1-5c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:36:27 compute-0 nova_compute[348325]: 2025-12-03 18:36:27.102 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3d8505a1-5c, col_values=(('external_ids', {'iface-id': '3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ea:1b:25', 'vm-uuid': '1ca1fbdb-089c-4544-821e-0542089b8424'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:36:27 compute-0 nova_compute[348325]: 2025-12-03 18:36:27.104 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:36:27 compute-0 NetworkManager[49087]: <info>  [1764786987.1059] manager: (tap3d8505a1-5c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Dec  3 18:36:27 compute-0 nova_compute[348325]: 2025-12-03 18:36:27.107 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  3 18:36:27 compute-0 nova_compute[348325]: 2025-12-03 18:36:27.119 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:36:27 compute-0 nova_compute[348325]: 2025-12-03 18:36:27.120 348329 INFO os_vif [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ea:1b:25,bridge_name='br-int',has_traffic_filtering=True,id=3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b,network=Network(85c8d446-ad7f-4d1b-a311-89b0b07e8aad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d8505a1-5c')#033[00m
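
[annotation] The two ovsdbapp transactions above (AddPortCommand, then DbSetCommand on the Interface row) are what "Successfully plugged vif" summarizes: the tap device is attached to br-int and tagged with the iface-id/attached-mac external_ids that OVN matches against the logical port. The same pair of operations expressed through the ovs-vsctl CLI from Python; equivalent effect, not os-vif's code path:

    # Sketch: reproduce the logged OVSDB transaction with ovs-vsctl.
    import subprocess

    port, bridge = 'tap3d8505a1-5c', 'br-int'
    external_ids = {
        'iface-id': '3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b',
        'iface-status': 'active',
        'attached-mac': 'fa:16:3e:ea:1b:25',
        'vm-uuid': '1ca1fbdb-089c-4544-821e-0542089b8424',
    }
    cmd = ['ovs-vsctl', '--may-exist', 'add-port', bridge, port,
           '--', 'set', 'Interface', port]
    cmd += ['external_ids:%s=%s' % kv for kv in external_ids.items()]
    subprocess.check_call(cmd)
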
Dec  3 18:36:27 compute-0 nova_compute[348325]: 2025-12-03 18:36:27.327 348329 DEBUG nova.virt.libvirt.driver [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  3 18:36:27 compute-0 nova_compute[348325]: 2025-12-03 18:36:27.328 348329 DEBUG nova.virt.libvirt.driver [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  3 18:36:27 compute-0 nova_compute[348325]: 2025-12-03 18:36:27.329 348329 DEBUG nova.virt.libvirt.driver [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  3 18:36:27 compute-0 nova_compute[348325]: 2025-12-03 18:36:27.330 348329 DEBUG nova.virt.libvirt.driver [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] No VIF found with MAC fa:16:3e:ea:1b:25, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  3 18:36:27 compute-0 nova_compute[348325]: 2025-12-03 18:36:27.331 348329 INFO nova.virt.libvirt.driver [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Using config drive#033[00m
Dec  3 18:36:27 compute-0 nova_compute[348325]: 2025-12-03 18:36:27.387 348329 DEBUG nova.storage.rbd_utils [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] rbd image 1ca1fbdb-089c-4544-821e-0542089b8424_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 18:36:27 compute-0 nova_compute[348325]: 2025-12-03 18:36:27.406 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:36:28 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:36:28 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Dec  3 18:36:28 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Dec  3 18:36:28 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Dec  3 18:36:28 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1116: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec  3 18:36:29 compute-0 nova_compute[348325]: 2025-12-03 18:36:29.230 348329 DEBUG nova.network.neutron [req-c6882603-10ec-4fc3-9c2c-dc9fec6b402e req-f3a6b67e-9ae9-497f-b825-8eefa06eb837 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Updated VIF entry in instance network info cache for port 3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  3 18:36:29 compute-0 nova_compute[348325]: 2025-12-03 18:36:29.230 348329 DEBUG nova.network.neutron [req-c6882603-10ec-4fc3-9c2c-dc9fec6b402e req-f3a6b67e-9ae9-497f-b825-8eefa06eb837 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Updating instance_info_cache with network_info: [{"id": "3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b", "address": "fa:16:3e:ea:1b:25", "network": {"id": "85c8d446-ad7f-4d1b-a311-89b0b07e8aad", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.128", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2770200bdb2436c90142fa2e5ddcd47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d8505a1-5c", "ovs_interfaceid": "3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 18:36:29 compute-0 nova_compute[348325]: 2025-12-03 18:36:29.251 348329 DEBUG oslo_concurrency.lockutils [req-c6882603-10ec-4fc3-9c2c-dc9fec6b402e req-f3a6b67e-9ae9-497f-b825-8eefa06eb837 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Releasing lock "refresh_cache-1ca1fbdb-089c-4544-821e-0542089b8424" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 18:36:29 compute-0 nova_compute[348325]: 2025-12-03 18:36:29.569 348329 INFO nova.virt.libvirt.driver [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Creating config drive at /var/lib/nova/instances/1ca1fbdb-089c-4544-821e-0542089b8424/disk.config#033[00m
Dec  3 18:36:29 compute-0 nova_compute[348325]: 2025-12-03 18:36:29.579 348329 DEBUG oslo_concurrency.processutils [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1ca1fbdb-089c-4544-821e-0542089b8424/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa_hc7xsx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:36:29 compute-0 nova_compute[348325]: 2025-12-03 18:36:29.713 348329 DEBUG oslo_concurrency.processutils [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1ca1fbdb-089c-4544-821e-0542089b8424/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa_hc7xsx" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:36:29 compute-0 podman[158200]: time="2025-12-03T18:36:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:36:29 compute-0 nova_compute[348325]: 2025-12-03 18:36:29.754 348329 DEBUG nova.storage.rbd_utils [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] rbd image 1ca1fbdb-089c-4544-821e-0542089b8424_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 18:36:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:36:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42578 "" "Go-http-client/1.1"
Dec  3 18:36:29 compute-0 nova_compute[348325]: 2025-12-03 18:36:29.767 348329 DEBUG oslo_concurrency.processutils [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1ca1fbdb-089c-4544-821e-0542089b8424/disk.config 1ca1fbdb-089c-4544-821e-0542089b8424_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:36:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:36:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8141 "" "Go-http-client/1.1"
Dec  3 18:36:30 compute-0 nova_compute[348325]: 2025-12-03 18:36:30.260 348329 DEBUG oslo_concurrency.processutils [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1ca1fbdb-089c-4544-821e-0542089b8424/disk.config 1ca1fbdb-089c-4544-821e-0542089b8424_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 18:36:30 compute-0 nova_compute[348325]: 2025-12-03 18:36:30.261 348329 INFO nova.virt.libvirt.driver [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Deleting local config drive /var/lib/nova/instances/1ca1fbdb-089c-4544-821e-0542089b8424/disk.config because it was imported into RBD.
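rbd_utils first finds the image missing, then imports the local config drive and deletes it. A standalone sketch of that check-then-import flow using the rbd CLI with the same flags as the logged command; the existence probe via rbd info is an assumption standing in for nova's in-process librbd check:

    import os
    import subprocess

    pool, conf, user = "vms", "/etc/ceph/ceph.conf", "openstack"
    local = "/var/lib/nova/instances/1ca1fbdb-089c-4544-821e-0542089b8424/disk.config"
    image = "1ca1fbdb-089c-4544-821e-0542089b8424_disk.config"

    # "rbd info" exits non-zero when the image is absent, mirroring the
    # "does not exist" DEBUG line above.
    exists = subprocess.run(
        ["rbd", "info", "--pool", pool, image, "--id", user, "--conf", conf],
        capture_output=True).returncode == 0
    if not exists:
        subprocess.run(
            ["rbd", "import", "--pool", pool, local, image,
             "--image-format=2", "--id", user, "--conf", conf], check=True)
        os.unlink(local)   # matches "Deleting local config drive ..." above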
Dec  3 18:36:30 compute-0 systemd[1]: Starting libvirt secret daemon...
Dec  3 18:36:30 compute-0 systemd[1]: Started libvirt secret daemon.
Dec  3 18:36:30 compute-0 kernel: tun: Universal TUN/TAP device driver, 1.6
Dec  3 18:36:30 compute-0 kernel: tap3d8505a1-5c: entered promiscuous mode
Dec  3 18:36:30 compute-0 NetworkManager[49087]: <info>  [1764786990.4683] manager: (tap3d8505a1-5c): new Tun device (/org/freedesktop/NetworkManager/Devices/22)
Dec  3 18:36:30 compute-0 nova_compute[348325]: 2025-12-03 18:36:30.472 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:36:30 compute-0 ovn_controller[89305]: 2025-12-03T18:36:30Z|00027|binding|INFO|Claiming lport 3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b for this chassis.
Dec  3 18:36:30 compute-0 ovn_controller[89305]: 2025-12-03T18:36:30Z|00028|binding|INFO|3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b: Claiming fa:16:3e:ea:1b:25 192.168.0.128
Dec  3 18:36:30 compute-0 nova_compute[348325]: 2025-12-03 18:36:30.481 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:36:30 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:36:30.502 286999 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ea:1b:25 192.168.0.128'], port_security=['fa:16:3e:ea:1b:25 192.168.0.128'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.128/24', 'neutron:device_id': '1ca1fbdb-089c-4544-821e-0542089b8424', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-85c8d446-ad7f-4d1b-a311-89b0b07e8aad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd2770200bdb2436c90142fa2e5ddcd47', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8e48052e-a2fd-4fc1-8ebd-22e3b6e0bd66', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=12999ead-9a54-49b3-a532-a5f8bdddaf16, chassis=[<ovs.db.idl.Row object at 0x7f81e3e96760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f81e3e96760>], logical_port=3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  3 18:36:30 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:36:30.505 286999 INFO neutron.agent.ovn.metadata.agent [-] Port 3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b in datapath 85c8d446-ad7f-4d1b-a311-89b0b07e8aad bound to our chassis
Dec  3 18:36:30 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:36:30.512 286999 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 85c8d446-ad7f-4d1b-a311-89b0b07e8aad
Dec  3 18:36:30 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:36:30.515 286999 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpivijuk72/privsep.sock']
Dec  3 18:36:30 compute-0 systemd-udevd[411683]: Network interface NamePolicy= disabled on kernel command line.
Dec  3 18:36:30 compute-0 NetworkManager[49087]: <info>  [1764786990.5529] device (tap3d8505a1-5c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  3 18:36:30 compute-0 NetworkManager[49087]: <info>  [1764786990.5543] device (tap3d8505a1-5c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  3 18:36:30 compute-0 systemd-machined[138702]: New machine qemu-1-instance-00000001.
Dec  3 18:36:30 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Dec  3 18:36:30 compute-0 nova_compute[348325]: 2025-12-03 18:36:30.591 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:36:30 compute-0 ovn_controller[89305]: 2025-12-03T18:36:30Z|00029|binding|INFO|Setting lport 3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b ovn-installed in OVS
Dec  3 18:36:30 compute-0 ovn_controller[89305]: 2025-12-03T18:36:30Z|00030|binding|INFO|Setting lport 3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b up in Southbound
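To confirm from the chassis what ovn-controller just logged, the Southbound Port_Binding row can be read back. A sketch using ovn-sbctl's generic find command, assuming ovn-sbctl on this host can reach the southbound database:

    import subprocess

    port = "3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b"
    # Generic OVSDB "find" against the Southbound Port_Binding table; once the
    # claim above commits, chassis is set and up flips to true.
    out = subprocess.run(
        ["ovn-sbctl", "--columns=logical_port,chassis,up",
         "find", "Port_Binding", f"logical_port={port}"],
        capture_output=True, text=True, check=True)
    print(out.stdout)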
Dec  3 18:36:30 compute-0 nova_compute[348325]: 2025-12-03 18:36:30.601 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:36:30 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1117: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 1.9 MiB/s wr, 50 op/s
Dec  3 18:36:31 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:36:31.352 286999 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec  3 18:36:31 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:36:31.354 286999 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpivijuk72/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec  3 18:36:31 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:36:31.179 411759 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec  3 18:36:31 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:36:31.185 411759 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec  3 18:36:31 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:36:31.188 411759 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Dec  3 18:36:31 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:36:31.188 411759 INFO oslo.privsep.daemon [-] privsep daemon running as pid 411759
Dec  3 18:36:31 compute-0 nova_compute[348325]: 2025-12-03 18:36:31.355 348329 DEBUG nova.virt.driver [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Emitting event <LifecycleEvent: 1764786991.3547325, 1ca1fbdb-089c-4544-821e-0542089b8424 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  3 18:36:31 compute-0 nova_compute[348325]: 2025-12-03 18:36:31.356 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] VM Started (Lifecycle Event)
Dec  3 18:36:31 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:36:31.358 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[637df21e-8fcc-4189-8149-fd5d48c410db]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  3 18:36:31 compute-0 openstack_network_exporter[365222]: ERROR   18:36:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:36:31 compute-0 openstack_network_exporter[365222]: ERROR   18:36:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:36:31 compute-0 openstack_network_exporter[365222]: ERROR   18:36:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:36:31 compute-0 openstack_network_exporter[365222]: ERROR   18:36:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:36:31 compute-0 openstack_network_exporter[365222]: ERROR   18:36:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:36:31 compute-0 nova_compute[348325]: 2025-12-03 18:36:31.442 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  3 18:36:31 compute-0 nova_compute[348325]: 2025-12-03 18:36:31.450 348329 DEBUG nova.virt.driver [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Emitting event <LifecycleEvent: 1764786991.3548853, 1ca1fbdb-089c-4544-821e-0542089b8424 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  3 18:36:31 compute-0 nova_compute[348325]: 2025-12-03 18:36:31.450 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] VM Paused (Lifecycle Event)
Dec  3 18:36:31 compute-0 nova_compute[348325]: 2025-12-03 18:36:31.477 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  3 18:36:31 compute-0 nova_compute[348325]: 2025-12-03 18:36:31.484 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  3 18:36:31 compute-0 nova_compute[348325]: 2025-12-03 18:36:31.510 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] During sync_power_state the instance has a pending task (spawning). Skip.
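The numeric states in the sync line above decode via nova.compute.power_state: the database still says NOSTATE while libvirt reports PAUSED, and the pending spawn task makes the manager skip the sync:

    # Values as defined in nova.compute.power_state; the line above therefore
    # reads: DB says NOSTATE(0), hypervisor says PAUSED(3).
    POWER_STATES = {
        0: "NOSTATE",
        1: "RUNNING",
        3: "PAUSED",
        4: "SHUTDOWN",
        6: "CRASHED",
        7: "SUSPENDED",
    }
    print(POWER_STATES[0], "->", POWER_STATES[3])   # building VM, paused by libvirt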
Dec  3 18:36:31 compute-0 nova_compute[348325]: 2025-12-03 18:36:31.931 348329 DEBUG nova.compute.manager [req-677de5f8-f7aa-4f0d-89b6-139276a04f32 req-8a5fe65a-e1bc-449b-96b1-9e15a979fe3d 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Received event network-vif-plugged-3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  3 18:36:31 compute-0 nova_compute[348325]: 2025-12-03 18:36:31.932 348329 DEBUG oslo_concurrency.lockutils [req-677de5f8-f7aa-4f0d-89b6-139276a04f32 req-8a5fe65a-e1bc-449b-96b1-9e15a979fe3d 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "1ca1fbdb-089c-4544-821e-0542089b8424-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:36:31 compute-0 nova_compute[348325]: 2025-12-03 18:36:31.933 348329 DEBUG oslo_concurrency.lockutils [req-677de5f8-f7aa-4f0d-89b6-139276a04f32 req-8a5fe65a-e1bc-449b-96b1-9e15a979fe3d 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "1ca1fbdb-089c-4544-821e-0542089b8424-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:36:31 compute-0 nova_compute[348325]: 2025-12-03 18:36:31.934 348329 DEBUG oslo_concurrency.lockutils [req-677de5f8-f7aa-4f0d-89b6-139276a04f32 req-8a5fe65a-e1bc-449b-96b1-9e15a979fe3d 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "1ca1fbdb-089c-4544-821e-0542089b8424-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:36:31 compute-0 nova_compute[348325]: 2025-12-03 18:36:31.934 348329 DEBUG nova.compute.manager [req-677de5f8-f7aa-4f0d-89b6-139276a04f32 req-8a5fe65a-e1bc-449b-96b1-9e15a979fe3d 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Processing event network-vif-plugged-3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec  3 18:36:31 compute-0 nova_compute[348325]: 2025-12-03 18:36:31.935 348329 DEBUG nova.compute.manager [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec  3 18:36:31 compute-0 nova_compute[348325]: 2025-12-03 18:36:31.942 348329 DEBUG nova.virt.libvirt.driver [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec  3 18:36:31 compute-0 nova_compute[348325]: 2025-12-03 18:36:31.943 348329 DEBUG nova.virt.driver [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Emitting event <LifecycleEvent: 1764786991.9422643, 1ca1fbdb-089c-4544-821e-0542089b8424 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  3 18:36:31 compute-0 nova_compute[348325]: 2025-12-03 18:36:31.943 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] VM Resumed (Lifecycle Event)
Dec  3 18:36:31 compute-0 nova_compute[348325]: 2025-12-03 18:36:31.951 348329 INFO nova.virt.libvirt.driver [-] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Instance spawned successfully.
Dec  3 18:36:31 compute-0 nova_compute[348325]: 2025-12-03 18:36:31.952 348329 DEBUG nova.virt.libvirt.driver [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec  3 18:36:31 compute-0 nova_compute[348325]: 2025-12-03 18:36:31.975 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  3 18:36:31 compute-0 nova_compute[348325]: 2025-12-03 18:36:31.989 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  3 18:36:31 compute-0 nova_compute[348325]: 2025-12-03 18:36:31.998 348329 DEBUG nova.virt.libvirt.driver [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  3 18:36:31 compute-0 nova_compute[348325]: 2025-12-03 18:36:31.998 348329 DEBUG nova.virt.libvirt.driver [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  3 18:36:32 compute-0 nova_compute[348325]: 2025-12-03 18:36:31.999 348329 DEBUG nova.virt.libvirt.driver [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  3 18:36:32 compute-0 nova_compute[348325]: 2025-12-03 18:36:32.000 348329 DEBUG nova.virt.libvirt.driver [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  3 18:36:32 compute-0 nova_compute[348325]: 2025-12-03 18:36:32.000 348329 DEBUG nova.virt.libvirt.driver [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  3 18:36:32 compute-0 nova_compute[348325]: 2025-12-03 18:36:32.001 348329 DEBUG nova.virt.libvirt.driver [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  3 18:36:32 compute-0 nova_compute[348325]: 2025-12-03 18:36:32.009 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] During sync_power_state the instance has a pending task (spawning). Skip.
Dec  3 18:36:32 compute-0 nova_compute[348325]: 2025-12-03 18:36:32.076 348329 INFO nova.compute.manager [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Took 14.31 seconds to spawn the instance on the hypervisor.
Dec  3 18:36:32 compute-0 nova_compute[348325]: 2025-12-03 18:36:32.077 348329 DEBUG nova.compute.manager [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  3 18:36:32 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:36:32.089 411759 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:36:32 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:36:32.089 411759 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:36:32 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:36:32.089 411759 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:36:32 compute-0 nova_compute[348325]: 2025-12-03 18:36:32.105 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:36:32 compute-0 nova_compute[348325]: 2025-12-03 18:36:32.157 348329 INFO nova.compute.manager [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Took 15.72 seconds to build instance.
Dec  3 18:36:32 compute-0 nova_compute[348325]: 2025-12-03 18:36:32.181 348329 DEBUG oslo_concurrency.lockutils [None req-f2ae3f18-e23a-4527-80cc-6b1d59eec94f 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "1ca1fbdb-089c-4544-821e-0542089b8424" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.840s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:36:32 compute-0 nova_compute[348325]: 2025-12-03 18:36:32.339 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:36:32 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec  3 18:36:32 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec  3 18:36:32 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:36:32.824 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[fc8800c4-2588-46af-94e0-3e50dedc5096]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  3 18:36:32 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:36:32.827 286999 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap85c8d446-a1 in ovnmeta-85c8d446-ad7f-4d1b-a311-89b0b07e8aad namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec  3 18:36:32 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:36:32.831 411759 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap85c8d446-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec  3 18:36:32 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:36:32.831 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[ad6b9ec1-a9b6-40fc-902a-688947cb0194]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  3 18:36:32 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:36:32.837 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[4943e672-8c38-4039-9a99-544be7c4f629]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  3 18:36:32 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:36:32.886 287110 DEBUG oslo.privsep.daemon [-] privsep: reply[3c2167bf-5fa7-454f-b2ad-1b3ee2396762]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  3 18:36:32 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:36:32.926 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[51fdaaaa-ec89-431e-85a2-2e094af4ab77]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
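The agent is building the metadata namespace here: a veth pair with one end moved into ovnmeta-<network>, plus the sysctl echoed in the reply above. The agent itself does this through pyroute2 under privsep, not by shelling out; the iproute2 equivalent, as a sketch with the names taken from the log:

    import subprocess

    ns = "ovnmeta-85c8d446-ad7f-4d1b-a311-89b0b07e8aad"
    veth_host, veth_ns = "tap85c8d446-a0", "tap85c8d446-a1"

    def sh(*cmd):
        subprocess.run(cmd, check=True)

    sh("ip", "netns", "add", ns)
    sh("ip", "link", "add", veth_host, "type", "veth", "peer", "name", veth_ns)
    sh("ip", "link", "set", veth_ns, "netns", ns)
    # Matches the sysctl output in the privsep reply above.
    sh("ip", "netns", "exec", ns, "sysctl", "net.ipv4.conf.all.promote_secondaries=1")
    sh("ip", "link", "set", veth_host, "up")
    sh("ip", "netns", "exec", ns, "ip", "link", "set", veth_ns, "up")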
Dec  3 18:36:32 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:36:32.930 286999 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpol3gxfpw/privsep.sock']
Dec  3 18:36:32 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1118: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 1.5 MiB/s wr, 48 op/s
Dec  3 18:36:33 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:36:33 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:36:33.696 286999 INFO oslo_service.service [-] Child 411793 exited with status 0
Dec  3 18:36:33 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:36:33.698 286999 WARNING oslo_service.service [-] pid 411793 not in child list
Dec  3 18:36:33 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:36:33.701 286999 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec  3 18:36:33 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:36:33.702 286999 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpol3gxfpw/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec  3 18:36:33 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:36:33.555 411797 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec  3 18:36:33 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:36:33.560 411797 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec  3 18:36:33 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:36:33.562 411797 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Dec  3 18:36:33 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:36:33.562 411797 INFO oslo.privsep.daemon [-] privsep daemon running as pid 411797
Dec  3 18:36:33 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:36:33.707 411797 DEBUG oslo.privsep.daemon [-] privsep: reply[b99793ab-a26b-438d-a4ee-1c8c7a726b37]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  3 18:36:34 compute-0 nova_compute[348325]: 2025-12-03 18:36:34.028 348329 DEBUG nova.compute.manager [req-c51b4e11-bc9e-4ec3-a9be-4b63759567b9 req-52611f2f-ebab-4306-9a77-0db7d7f73f5c 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Received event network-vif-plugged-3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  3 18:36:34 compute-0 nova_compute[348325]: 2025-12-03 18:36:34.029 348329 DEBUG oslo_concurrency.lockutils [req-c51b4e11-bc9e-4ec3-a9be-4b63759567b9 req-52611f2f-ebab-4306-9a77-0db7d7f73f5c 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "1ca1fbdb-089c-4544-821e-0542089b8424-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:36:34 compute-0 nova_compute[348325]: 2025-12-03 18:36:34.029 348329 DEBUG oslo_concurrency.lockutils [req-c51b4e11-bc9e-4ec3-a9be-4b63759567b9 req-52611f2f-ebab-4306-9a77-0db7d7f73f5c 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "1ca1fbdb-089c-4544-821e-0542089b8424-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:36:34 compute-0 nova_compute[348325]: 2025-12-03 18:36:34.029 348329 DEBUG oslo_concurrency.lockutils [req-c51b4e11-bc9e-4ec3-a9be-4b63759567b9 req-52611f2f-ebab-4306-9a77-0db7d7f73f5c 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "1ca1fbdb-089c-4544-821e-0542089b8424-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:36:34 compute-0 nova_compute[348325]: 2025-12-03 18:36:34.030 348329 DEBUG nova.compute.manager [req-c51b4e11-bc9e-4ec3-a9be-4b63759567b9 req-52611f2f-ebab-4306-9a77-0db7d7f73f5c 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] No waiting events found dispatching network-vif-plugged-3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  3 18:36:34 compute-0 nova_compute[348325]: 2025-12-03 18:36:34.030 348329 WARNING nova.compute.manager [req-c51b4e11-bc9e-4ec3-a9be-4b63759567b9 req-52611f2f-ebab-4306-9a77-0db7d7f73f5c 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Received unexpected event network-vif-plugged-3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b for instance with vm_state active and task_state None.
Dec  3 18:36:34 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:36:34.328 411797 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:36:34 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:36:34.329 411797 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:36:34 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:36:34.329 411797 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:36:34 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1119: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail; 456 KiB/s rd, 1.1 MiB/s wr, 55 op/s
Dec  3 18:36:34 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:36:34.978 411797 DEBUG oslo.privsep.daemon [-] privsep: reply[8fd5b53c-1abe-427b-b0a8-83c6207b158c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  3 18:36:35 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:36:35.013 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[ea78df64-37a0-408f-8cfe-a5a821f9321d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  3 18:36:35 compute-0 NetworkManager[49087]: <info>  [1764786995.0151] manager: (tap85c8d446-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/23)
Dec  3 18:36:35 compute-0 systemd-udevd[411809]: Network interface NamePolicy= disabled on kernel command line.
Dec  3 18:36:35 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:36:35.055 411797 DEBUG oslo.privsep.daemon [-] privsep: reply[083ad54a-c583-439c-ad32-6bd0068062b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  3 18:36:35 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:36:35.062 411797 DEBUG oslo.privsep.daemon [-] privsep: reply[a3745da9-fa38-4dbc-979f-ab598daae666]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  3 18:36:35 compute-0 NetworkManager[49087]: <info>  [1764786995.0971] device (tap85c8d446-a0): carrier: link connected
Dec  3 18:36:35 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:36:35.103 411797 DEBUG oslo.privsep.daemon [-] privsep: reply[373677ef-0862-4242-aa89-00c6745283a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  3 18:36:35 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:36:35.129 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[72880583-643b-450c-bc24-c204b50705f4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap85c8d446-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:c1:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 527503, 'reachable_time': 33358, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 411827, 'error': None, 'target': 'ovnmeta-85c8d446-ad7f-4d1b-a311-89b0b07e8aad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  3 18:36:35 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:36:35.153 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[7fe30dd5-2ad6-4507-a39d-a1fab51637be]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2b:c177'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 527503, 'tstamp': 527503}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 411828, 'error': None, 'target': 'ovnmeta-85c8d446-ad7f-4d1b-a311-89b0b07e8aad', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  3 18:36:35 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:36:35.177 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[cde8979d-a659-4b80-ba6a-5476c566e1de]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap85c8d446-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:c1:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 527503, 'reachable_time': 33358, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 411829, 'error': None, 'target': 'ovnmeta-85c8d446-ad7f-4d1b-a311-89b0b07e8aad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  3 18:36:35 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:36:35.217 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[371bc7e5-c486-4783-a3ca-e9e9946862e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  3 18:36:35 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:36:35.308 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[8364cd43-e71b-4a61-bf81-0abfcd6bb66c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  3 18:36:35 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:36:35.311 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85c8d446-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  3 18:36:35 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:36:35.312 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec  3 18:36:35 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:36:35.313 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap85c8d446-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  3 18:36:35 compute-0 nova_compute[348325]: 2025-12-03 18:36:35.316 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:36:35 compute-0 kernel: tap85c8d446-a0: entered promiscuous mode
Dec  3 18:36:35 compute-0 NetworkManager[49087]: <info>  [1764786995.3183] manager: (tap85c8d446-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/24)
Dec  3 18:36:35 compute-0 nova_compute[348325]: 2025-12-03 18:36:35.323 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:36:35 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:36:35.329 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap85c8d446-a0, col_values=(('external_ids', {'iface-id': '4db8340d-afa3-4a82-bd51-bca0a752f53f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  3 18:36:35 compute-0 nova_compute[348325]: 2025-12-03 18:36:35.331 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
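The three ovsdbapp transactions above (DelPortCommand, AddPortCommand, DbSetCommand) map onto plain ovs-vsctl operations. A sketch of the same sequence; the bridges and iface-id value are the ones logged:

    import subprocess

    port, iface_id = "tap85c8d446-a0", "4db8340d-afa3-4a82-bd51-bca0a752f53f"
    # if_exists=True / may_exist=True in the transactions correspond to the
    # --if-exists / --may-exist flags.
    subprocess.run(["ovs-vsctl", "--if-exists", "del-port", "br-ex", port], check=True)
    subprocess.run(["ovs-vsctl", "--may-exist", "add-port", "br-int", port], check=True)
    subprocess.run(["ovs-vsctl", "set", "Interface", port,
                    f"external_ids:iface-id={iface_id}"], check=True)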
Dec  3 18:36:35 compute-0 ovn_controller[89305]: 2025-12-03T18:36:35Z|00031|binding|INFO|Releasing lport 4db8340d-afa3-4a82-bd51-bca0a752f53f from this chassis (sb_readonly=0)
Dec  3 18:36:35 compute-0 nova_compute[348325]: 2025-12-03 18:36:35.359 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:36:35 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:36:35.360 286999 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/85c8d446-ad7f-4d1b-a311-89b0b07e8aad.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/85c8d446-ad7f-4d1b-a311-89b0b07e8aad.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec  3 18:36:35 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:36:35.361 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[9c4971af-cdd5-4c9e-a0ba-7189aeb0109c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  3 18:36:35 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:36:35.363 286999 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  3 18:36:35 compute-0 ovn_metadata_agent[286994]: global
Dec  3 18:36:35 compute-0 ovn_metadata_agent[286994]:    log         /dev/log local0 debug
Dec  3 18:36:35 compute-0 ovn_metadata_agent[286994]:    log-tag     haproxy-metadata-proxy-85c8d446-ad7f-4d1b-a311-89b0b07e8aad
Dec  3 18:36:35 compute-0 ovn_metadata_agent[286994]:    user        root
Dec  3 18:36:35 compute-0 ovn_metadata_agent[286994]:    group       root
Dec  3 18:36:35 compute-0 ovn_metadata_agent[286994]:    maxconn     1024
Dec  3 18:36:35 compute-0 ovn_metadata_agent[286994]:    pidfile     /var/lib/neutron/external/pids/85c8d446-ad7f-4d1b-a311-89b0b07e8aad.pid.haproxy
Dec  3 18:36:35 compute-0 ovn_metadata_agent[286994]:    daemon
Dec  3 18:36:35 compute-0 ovn_metadata_agent[286994]: 
Dec  3 18:36:35 compute-0 ovn_metadata_agent[286994]: defaults
Dec  3 18:36:35 compute-0 ovn_metadata_agent[286994]:    log global
Dec  3 18:36:35 compute-0 ovn_metadata_agent[286994]:    mode http
Dec  3 18:36:35 compute-0 ovn_metadata_agent[286994]:    option httplog
Dec  3 18:36:35 compute-0 ovn_metadata_agent[286994]:    option dontlognull
Dec  3 18:36:35 compute-0 ovn_metadata_agent[286994]:    option http-server-close
Dec  3 18:36:35 compute-0 ovn_metadata_agent[286994]:    option forwardfor
Dec  3 18:36:35 compute-0 ovn_metadata_agent[286994]:    retries                 3
Dec  3 18:36:35 compute-0 ovn_metadata_agent[286994]:    timeout http-request    30s
Dec  3 18:36:35 compute-0 ovn_metadata_agent[286994]:    timeout connect         30s
Dec  3 18:36:35 compute-0 ovn_metadata_agent[286994]:    timeout client          32s
Dec  3 18:36:35 compute-0 ovn_metadata_agent[286994]:    timeout server          32s
Dec  3 18:36:35 compute-0 ovn_metadata_agent[286994]:    timeout http-keep-alive 30s
Dec  3 18:36:35 compute-0 ovn_metadata_agent[286994]: 
Dec  3 18:36:35 compute-0 ovn_metadata_agent[286994]: 
Dec  3 18:36:35 compute-0 ovn_metadata_agent[286994]: listen listener
Dec  3 18:36:35 compute-0 ovn_metadata_agent[286994]:    bind 169.254.169.254:80
Dec  3 18:36:35 compute-0 ovn_metadata_agent[286994]:    server metadata /var/lib/neutron/metadata_proxy
Dec  3 18:36:35 compute-0 ovn_metadata_agent[286994]:    http-request add-header X-OVN-Network-ID 85c8d446-ad7f-4d1b-a311-89b0b07e8aad
Dec  3 18:36:35 compute-0 ovn_metadata_agent[286994]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec  3 18:36:35 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:36:35.363 286999 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-85c8d446-ad7f-4d1b-a311-89b0b07e8aad', 'env', 'PROCESS_TAG=haproxy-85c8d446-ad7f-4d1b-a311-89b0b07e8aad', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/85c8d446-ad7f-4d1b-a311-89b0b07e8aad.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
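Per the generated config above, haproxy binds 169.254.169.254:80 inside the ovnmeta- namespace, stamps X-OVN-Network-ID, and relays to the /var/lib/neutron/metadata_proxy backend. A sketch of the request as a guest on this network would issue it; the /openstack/latest/meta_data.json path is the standard metadata endpoint, not something shown in this log, and the call only works from inside such a guest:

    import urllib.request

    # Link-local metadata address served by the haproxy configured above.
    url = "http://169.254.169.254/openstack/latest/meta_data.json"
    print(urllib.request.urlopen(url, timeout=5).read()[:200])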
Dec  3 18:36:35 compute-0 podman[411861]: 2025-12-03 18:36:35.891000909 +0000 UTC m=+0.089089215 container create 40df282ea2ef783ada208d3e16810b2eaf1c5942a628833f687be999a1612533 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85c8d446-ad7f-4d1b-a311-89b0b07e8aad, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  3 18:36:35 compute-0 systemd[1]: Started libpod-conmon-40df282ea2ef783ada208d3e16810b2eaf1c5942a628833f687be999a1612533.scope.
Dec  3 18:36:35 compute-0 podman[411861]: 2025-12-03 18:36:35.855204396 +0000 UTC m=+0.053292662 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  3 18:36:35 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:36:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0f361f35212d4696e56f07bf75702a4c2a7b158dc73201a706cc1cf8997c680/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  3 18:36:36 compute-0 podman[411861]: 2025-12-03 18:36:36.033376655 +0000 UTC m=+0.231464971 container init 40df282ea2ef783ada208d3e16810b2eaf1c5942a628833f687be999a1612533 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85c8d446-ad7f-4d1b-a311-89b0b07e8aad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  3 18:36:36 compute-0 podman[411861]: 2025-12-03 18:36:36.051171279 +0000 UTC m=+0.249259535 container start 40df282ea2ef783ada208d3e16810b2eaf1c5942a628833f687be999a1612533 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85c8d446-ad7f-4d1b-a311-89b0b07e8aad, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:36:36 compute-0 neutron-haproxy-ovnmeta-85c8d446-ad7f-4d1b-a311-89b0b07e8aad[411876]: [NOTICE]   (411880) : New worker (411882) forked
Dec  3 18:36:36 compute-0 neutron-haproxy-ovnmeta-85c8d446-ad7f-4d1b-a311-89b0b07e8aad[411876]: [NOTICE]   (411880) : Loading success.
Dec  3 18:36:36 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1120: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 17 KiB/s wr, 69 op/s
Dec  3 18:36:37 compute-0 nova_compute[348325]: 2025-12-03 18:36:37.341 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:36:37 compute-0 nova_compute[348325]: 2025-12-03 18:36:37.344 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:36:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 18:36:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/259762796' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 18:36:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 18:36:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/259762796' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 18:36:38 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:36:38 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1121: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 14 KiB/s wr, 64 op/s
Dec  3 18:36:39 compute-0 podman[411892]: 2025-12-03 18:36:39.921225051 +0000 UTC m=+0.084606037 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 18:36:39 compute-0 podman[411891]: 2025-12-03 18:36:39.927740959 +0000 UTC m=+0.096401824 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, container_name=multipathd)
Dec  3 18:36:39 compute-0 podman[411893]: 2025-12-03 18:36:39.944629821 +0000 UTC m=+0.104092321 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, config_id=edpm, io.buildah.version=1.33.7, version=9.6, com.redhat.component=ubi9-minimal-container, vcs-type=git, maintainer=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, release=1755695350, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc.)
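Each podman health_status event above is podman executing the healthcheck command embedded in the container's config_data and recording the result. A short sketch of reading the same state by hand; the container names are the ones in the events:

    import json
    import subprocess

    for name in ("node_exporter", "multipathd", "openstack_network_exporter"):
        out = subprocess.check_output(
            ["podman", "inspect", "--format", "{{json .State.Health}}", name])
        health = json.loads(out)
        print(name, health["Status"], "failing streak:", health["FailingStreak"])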
Dec  3 18:36:40 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1122: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 12 KiB/s wr, 60 op/s
Dec  3 18:36:42 compute-0 nova_compute[348325]: 2025-12-03 18:36:42.345 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:36:42 compute-0 nova_compute[348325]: 2025-12-03 18:36:42.348 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:36:42 compute-0 podman[411950]: 2025-12-03 18:36:42.905804878 +0000 UTC m=+0.067847216 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 18:36:42 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1123: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 12 KiB/s wr, 60 op/s
Dec  3 18:36:43 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:36:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:36:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:36:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:36:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:36:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:36:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:36:44 compute-0 ovn_controller[89305]: 2025-12-03T18:36:44Z|00032|binding|INFO|Releasing lport 4db8340d-afa3-4a82-bd51-bca0a752f53f from this chassis (sb_readonly=0)
Dec  3 18:36:44 compute-0 NetworkManager[49087]: <info>  [1764787004.1091] manager: (patch-provnet-54c4f4a3-bc4d-431f-a4cd-85ec1868fe98-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/25)
Dec  3 18:36:44 compute-0 NetworkManager[49087]: <info>  [1764787004.1094] device (patch-provnet-54c4f4a3-bc4d-431f-a4cd-85ec1868fe98-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  3 18:36:44 compute-0 NetworkManager[49087]: <info>  [1764787004.1104] manager: (patch-br-int-to-provnet-54c4f4a3-bc4d-431f-a4cd-85ec1868fe98): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/26)
Dec  3 18:36:44 compute-0 NetworkManager[49087]: <info>  [1764787004.1106] device (patch-br-int-to-provnet-54c4f4a3-bc4d-431f-a4cd-85ec1868fe98)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  3 18:36:44 compute-0 NetworkManager[49087]: <info>  [1764787004.1112] manager: (patch-provnet-54c4f4a3-bc4d-431f-a4cd-85ec1868fe98-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/27)
Dec  3 18:36:44 compute-0 NetworkManager[49087]: <info>  [1764787004.1117] manager: (patch-br-int-to-provnet-54c4f4a3-bc4d-431f-a4cd-85ec1868fe98): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/28)
Dec  3 18:36:44 compute-0 NetworkManager[49087]: <info>  [1764787004.1120] device (patch-provnet-54c4f4a3-bc4d-431f-a4cd-85ec1868fe98-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec  3 18:36:44 compute-0 NetworkManager[49087]: <info>  [1764787004.1122] device (patch-br-int-to-provnet-54c4f4a3-bc4d-431f-a4cd-85ec1868fe98)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
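NetworkManager is only observing here: ovn-controller created the patch ports that stitch br-int to the provider bridge, and NetworkManager registers device objects for them as they appear. A sketch of listing those patch interfaces directly from OVS; it assumes ovs-vsctl and access to the local OVSDB socket:

    import subprocess

    # List every patch interface with its peer option, which should include
    # the patch-provnet-...-to-br-int pair named above.
    print(subprocess.check_output(
        ["ovs-vsctl", "--columns=name,type,options",
         "find", "Interface", "type=patch"]).decode())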
Dec  3 18:36:44 compute-0 nova_compute[348325]: 2025-12-03 18:36:44.111 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:36:44 compute-0 ovn_controller[89305]: 2025-12-03T18:36:44Z|00033|binding|INFO|Releasing lport 4db8340d-afa3-4a82-bd51-bca0a752f53f from this chassis (sb_readonly=0)
Dec  3 18:36:44 compute-0 nova_compute[348325]: 2025-12-03 18:36:44.144 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:36:44 compute-0 nova_compute[348325]: 2025-12-03 18:36:44.147 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:36:44 compute-0 nova_compute[348325]: 2025-12-03 18:36:44.657 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:36:44 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:36:44.658 286999 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5a:63:53', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '8e:79:bd:f4:48:1d'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 18:36:44 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:36:44.660 286999 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
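This pair of lines is the metadata agent reacting to ovn-northd bumping nb_cfg from 3 to 4 in SB_Global, then waiting a randomized delay before acknowledging (the Chassis_Private write lands at 18:36:52 below). A sketch of such an event class modeled on ovsdbapp's RowEvent API; the agent.update_chassis method is hypothetical and the delay bound is inferred from the 8-second message:

    import random
    import threading

    from ovsdbapp.backend.ovs_idl import event as row_event

    class SbGlobalUpdateEvent(row_event.RowEvent):
        """React to SB_Global updates (nb_cfg bumps) with a jittered ack."""

        def __init__(self, agent):
            # events=('update',), table='SB_Global', conditions=None,
            # exactly as printed in the "Matched UPDATE" line.
            super().__init__((self.ROW_UPDATE,), 'SB_Global', None)
            self.agent = agent

        def run(self, event, row, old):
            # Jitter the write so all chassis do not hit the SB DB at once.
            delay = random.randint(0, 10)
            print(f"Delaying updating chassis table for {delay} seconds")
            threading.Timer(delay, self.agent.update_chassis,
                            args=(row.nb_cfg,)).start()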
Dec  3 18:36:44 compute-0 nova_compute[348325]: 2025-12-03 18:36:44.780 348329 DEBUG nova.compute.manager [req-091e0528-116e-4dfb-b464-124a5ae054a6 req-37cc7f13-0b0b-4633-9630-bd039b4668e9 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Received event network-changed-3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 18:36:44 compute-0 nova_compute[348325]: 2025-12-03 18:36:44.780 348329 DEBUG nova.compute.manager [req-091e0528-116e-4dfb-b464-124a5ae054a6 req-37cc7f13-0b0b-4633-9630-bd039b4668e9 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Refreshing instance network info cache due to event network-changed-3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  3 18:36:44 compute-0 nova_compute[348325]: 2025-12-03 18:36:44.780 348329 DEBUG oslo_concurrency.lockutils [req-091e0528-116e-4dfb-b464-124a5ae054a6 req-37cc7f13-0b0b-4633-9630-bd039b4668e9 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "refresh_cache-1ca1fbdb-089c-4544-821e-0542089b8424" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 18:36:44 compute-0 nova_compute[348325]: 2025-12-03 18:36:44.780 348329 DEBUG oslo_concurrency.lockutils [req-091e0528-116e-4dfb-b464-124a5ae054a6 req-37cc7f13-0b0b-4633-9630-bd039b4668e9 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquired lock "refresh_cache-1ca1fbdb-089c-4544-821e-0542089b8424" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 18:36:44 compute-0 nova_compute[348325]: 2025-12-03 18:36:44.781 348329 DEBUG nova.network.neutron [req-091e0528-116e-4dfb-b464-124a5ae054a6 req-37cc7f13-0b0b-4633-9630-bd039b4668e9 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Refreshing network info cache for port 3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  3 18:36:44 compute-0 podman[411969]: 2025-12-03 18:36:44.81999499 +0000 UTC m=+0.110144089 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, container_name=kepler, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.openshift.expose-services=, version=9.4)
Dec  3 18:36:44 compute-0 podman[411970]: 2025-12-03 18:36:44.83392843 +0000 UTC m=+0.115280014 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible)
Dec  3 18:36:44 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1124: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 12 KiB/s wr, 56 op/s
Dec  3 18:36:46 compute-0 nova_compute[348325]: 2025-12-03 18:36:46.490 348329 DEBUG nova.network.neutron [req-091e0528-116e-4dfb-b464-124a5ae054a6 req-37cc7f13-0b0b-4633-9630-bd039b4668e9 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Updated VIF entry in instance network info cache for port 3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  3 18:36:46 compute-0 nova_compute[348325]: 2025-12-03 18:36:46.499 348329 DEBUG nova.network.neutron [req-091e0528-116e-4dfb-b464-124a5ae054a6 req-37cc7f13-0b0b-4633-9630-bd039b4668e9 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Updating instance_info_cache with network_info: [{"id": "3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b", "address": "fa:16:3e:ea:1b:25", "network": {"id": "85c8d446-ad7f-4d1b-a311-89b0b07e8aad", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.128", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2770200bdb2436c90142fa2e5ddcd47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d8505a1-5c", "ovs_interfaceid": "3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 18:36:46 compute-0 nova_compute[348325]: 2025-12-03 18:36:46.572 348329 DEBUG oslo_concurrency.lockutils [req-091e0528-116e-4dfb-b464-124a5ae054a6 req-37cc7f13-0b0b-4633-9630-bd039b4668e9 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Releasing lock "refresh_cache-1ca1fbdb-089c-4544-821e-0542089b8424" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
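The Acquiring/Acquired/Releasing triple around this cache refresh is oslo.concurrency's named-lock helper serializing work per instance: any other event for the same instance blocks until the holder releases. A minimal sketch with the lock name from the log:

    from oslo_concurrency import lockutils

    instance_uuid = "1ca1fbdb-089c-4544-821e-0542089b8424"

    with lockutils.lock(f"refresh_cache-{instance_uuid}"):
        # Refresh the instance's network info cache; concurrent
        # network-changed events for this instance wait here.
        pass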
Dec  3 18:36:46 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1125: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 36 op/s
Dec  3 18:36:47 compute-0 nova_compute[348325]: 2025-12-03 18:36:47.349 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:36:47 compute-0 nova_compute[348325]: 2025-12-03 18:36:47.352 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  3 18:36:48 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:36:48 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1126: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail; 142 KiB/s rd, 4 op/s
Dec  3 18:36:50 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1127: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail; 110 KiB/s rd, 3 op/s
Dec  3 18:36:52 compute-0 nova_compute[348325]: 2025-12-03 18:36:52.352 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:36:52 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:36:52.662 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1ac9fd0d-196b-4ea8-9a9a-8aa831092805, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:36:52 compute-0 podman[412006]: 2025-12-03 18:36:52.949934218 +0000 UTC m=+0.104984874 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 18:36:52 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1128: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:36:53 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:36:54 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1129: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:36:56 compute-0 podman[412031]: 2025-12-03 18:36:56.969888968 +0000 UTC m=+0.120108493 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Dec  3 18:36:56 compute-0 podman[412030]: 2025-12-03 18:36:56.983763786 +0000 UTC m=+0.145818890 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec  3 18:36:56 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1130: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:36:57 compute-0 nova_compute[348325]: 2025-12-03 18:36:57.355 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  3 18:36:57 compute-0 nova_compute[348325]: 2025-12-03 18:36:57.357 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  3 18:36:57 compute-0 nova_compute[348325]: 2025-12-03 18:36:57.357 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Dec  3 18:36:57 compute-0 nova_compute[348325]: 2025-12-03 18:36:57.357 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec  3 18:36:57 compute-0 nova_compute[348325]: 2025-12-03 18:36:57.360 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:36:57 compute-0 nova_compute[348325]: 2025-12-03 18:36:57.361 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec  3 18:36:57 compute-0 nova_compute[348325]: 2025-12-03 18:36:57.364 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
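These lines trace one inactivity-probe cycle in the OVS reconnect state machine: after roughly 5000 ms with no traffic the client sends an echo probe and enters IDLE, and the reply (the next POLLIN) moves it back to ACTIVE. A toy model of that logic, not the real ovs/reconnect.py implementation:

    import time

    PROBE_INTERVAL = 5.0  # seconds, matching the "idle 5004 ms" message

    class Reconnect:
        def __init__(self):
            self.state = "ACTIVE"
            self.last_activity = time.monotonic()

        def tick(self, received_data: bool):
            if received_data:
                self.last_activity = time.monotonic()
                if self.state == "IDLE":
                    self.state = "ACTIVE"   # "entering ACTIVE"
            elif time.monotonic() - self.last_activity >= PROBE_INTERVAL:
                if self.state == "ACTIVE":
                    self.send_probe()       # "sending inactivity probe"
                    self.state = "IDLE"     # "entering IDLE"

        def send_probe(self):
            pass  # in the real client this is a JSON-RPC echo request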
Dec  3 18:36:58 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:36:58 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1131: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:36:59 compute-0 podman[158200]: time="2025-12-03T18:36:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:36:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:36:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 18:36:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:36:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8630 "" "Go-http-client/1.1"
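The two HTTP access lines are a client (the prometheus-podman-exporter configured earlier with CONTAINER_HOST=unix:///run/podman/podman.sock) hitting podman's libpod REST API over the unix socket. A sketch of the same containers/json query using only the standard library; it assumes permission to read the socket:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that talks to a unix socket instead of TCP."""

        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self.unix_path)
            self.sock = s

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")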
Dec  3 18:37:00 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1132: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:37:01 compute-0 nova_compute[348325]: 2025-12-03 18:37:01.329 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:37:01 compute-0 nova_compute[348325]: 2025-12-03 18:37:01.329 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 18:37:01 compute-0 nova_compute[348325]: 2025-12-03 18:37:01.330 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  3 18:37:01 compute-0 openstack_network_exporter[365222]: ERROR   18:37:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:37:01 compute-0 openstack_network_exporter[365222]: ERROR   18:37:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:37:01 compute-0 openstack_network_exporter[365222]: ERROR   18:37:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:37:01 compute-0 openstack_network_exporter[365222]: ERROR   18:37:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:37:01 compute-0 openstack_network_exporter[365222]: ERROR   18:37:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:37:02 compute-0 nova_compute[348325]: 2025-12-03 18:37:02.242 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "refresh_cache-1ca1fbdb-089c-4544-821e-0542089b8424" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 18:37:02 compute-0 nova_compute[348325]: 2025-12-03 18:37:02.243 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquired lock "refresh_cache-1ca1fbdb-089c-4544-821e-0542089b8424" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 18:37:02 compute-0 nova_compute[348325]: 2025-12-03 18:37:02.243 348329 DEBUG nova.network.neutron [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  3 18:37:02 compute-0 nova_compute[348325]: 2025-12-03 18:37:02.243 348329 DEBUG nova.objects.instance [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lazy-loading 'info_cache' on Instance uuid 1ca1fbdb-089c-4544-821e-0542089b8424 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 18:37:02 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Dec  3 18:37:02 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  3 18:37:02 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:37:02 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:37:02 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 18:37:02 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:37:02 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 18:37:02 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:37:02 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev a34cccba-3746-4d38-8e38-b64854474d5b does not exist
Dec  3 18:37:02 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev f3c3fafb-46c6-44c7-8521-1826fadebb27 does not exist
Dec  3 18:37:02 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev dcb67f48-d85d-4dfb-99ba-4e6e154eb903 does not exist
Dec  3 18:37:02 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 18:37:02 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 18:37:02 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 18:37:02 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:37:02 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:37:02 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:37:02 compute-0 nova_compute[348325]: 2025-12-03 18:37:02.362 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:37:02 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1133: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:37:03 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  3 18:37:03 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:37:03 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:37:03 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:37:03 compute-0 podman[412341]: 2025-12-03 18:37:03.20148594 +0000 UTC m=+0.066059903 container create c6845237ad6983be4c988c2a0621d524abe78d78db21afb7c3d4853445098a01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_mccarthy, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:37:03 compute-0 systemd[1]: Started libpod-conmon-c6845237ad6983be4c988c2a0621d524abe78d78db21afb7c3d4853445098a01.scope.
Dec  3 18:37:03 compute-0 podman[412341]: 2025-12-03 18:37:03.176865449 +0000 UTC m=+0.041439442 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:37:03 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:37:03 compute-0 podman[412341]: 2025-12-03 18:37:03.326120672 +0000 UTC m=+0.190694655 container init c6845237ad6983be4c988c2a0621d524abe78d78db21afb7c3d4853445098a01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec  3 18:37:03 compute-0 podman[412341]: 2025-12-03 18:37:03.336562137 +0000 UTC m=+0.201136100 container start c6845237ad6983be4c988c2a0621d524abe78d78db21afb7c3d4853445098a01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_mccarthy, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:37:03 compute-0 podman[412341]: 2025-12-03 18:37:03.342232855 +0000 UTC m=+0.206806818 container attach c6845237ad6983be4c988c2a0621d524abe78d78db21afb7c3d4853445098a01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_mccarthy, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:37:03 compute-0 gifted_mccarthy[412357]: 167 167
Dec  3 18:37:03 compute-0 systemd[1]: libpod-c6845237ad6983be4c988c2a0621d524abe78d78db21afb7c3d4853445098a01.scope: Deactivated successfully.
Dec  3 18:37:03 compute-0 podman[412341]: 2025-12-03 18:37:03.348791425 +0000 UTC m=+0.213365428 container died c6845237ad6983be4c988c2a0621d524abe78d78db21afb7c3d4853445098a01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:37:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-a0642fbbf4c96bf1c0f848ae190fb8c78f94318616c6c017d7d7367639d5f2c2-merged.mount: Deactivated successfully.
Dec  3 18:37:03 compute-0 podman[412341]: 2025-12-03 18:37:03.423609691 +0000 UTC m=+0.288183664 container remove c6845237ad6983be4c988c2a0621d524abe78d78db21afb7c3d4853445098a01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_mccarthy, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec  3 18:37:03 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:37:03 compute-0 systemd[1]: libpod-conmon-c6845237ad6983be4c988c2a0621d524abe78d78db21afb7c3d4853445098a01.scope: Deactivated successfully.
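The create, init, start, attach, died, remove burst above is the full lifecycle of one short-lived helper container (auto-named gifted_mccarthy) that cephadm launches from the pinned ceph image; its only output was "167 167", the ceph uid/gid pair. A sketch of an equivalent one-shot run; the stat command is an assumption about what produced that output:

    import subprocess

    image = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # One-shot container: created, started, then removed on exit (--rm),
    # matching the podman event sequence in the log.
    out = subprocess.check_output(
        ["podman", "run", "--rm", image, "stat", "-c", "%u %g", "/var/lib/ceph"])
    print(out.decode().strip())   # e.g. "167 167"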
Dec  3 18:37:03 compute-0 podman[412380]: 2025-12-03 18:37:03.688573499 +0000 UTC m=+0.072756687 container create a97135b603701f083b69dfbc443136fbbe17fcff6e79b2fbc34c5f807becb5ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  3 18:37:03 compute-0 podman[412380]: 2025-12-03 18:37:03.664590464 +0000 UTC m=+0.048773672 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:37:03 compute-0 systemd[1]: Started libpod-conmon-a97135b603701f083b69dfbc443136fbbe17fcff6e79b2fbc34c5f807becb5ed.scope.
Dec  3 18:37:03 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:37:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d671712864854097ecfd2b163d682fa45fea53eec56ded393df8c82b0bbe2015/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:37:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d671712864854097ecfd2b163d682fa45fea53eec56ded393df8c82b0bbe2015/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:37:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d671712864854097ecfd2b163d682fa45fea53eec56ded393df8c82b0bbe2015/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:37:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d671712864854097ecfd2b163d682fa45fea53eec56ded393df8c82b0bbe2015/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:37:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d671712864854097ecfd2b163d682fa45fea53eec56ded393df8c82b0bbe2015/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
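The kernel is noting that these xfs bind mounts (made without the bigtime feature) can only represent timestamps up to 0x7fffffff seconds after the epoch, i.e. the 32-bit time_t ceiling. A quick check of the date that corresponds to:

    from datetime import datetime, timezone

    print(datetime.fromtimestamp(0x7fffffff, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00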
Dec  3 18:37:03 compute-0 podman[412380]: 2025-12-03 18:37:03.813608051 +0000 UTC m=+0.197791269 container init a97135b603701f083b69dfbc443136fbbe17fcff6e79b2fbc34c5f807becb5ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_merkle, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec  3 18:37:03 compute-0 podman[412380]: 2025-12-03 18:37:03.830693138 +0000 UTC m=+0.214876326 container start a97135b603701f083b69dfbc443136fbbe17fcff6e79b2fbc34c5f807becb5ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:37:03 compute-0 podman[412380]: 2025-12-03 18:37:03.834914931 +0000 UTC m=+0.219098119 container attach a97135b603701f083b69dfbc443136fbbe17fcff6e79b2fbc34c5f807becb5ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_merkle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  3 18:37:04 compute-0 nova_compute[348325]: 2025-12-03 18:37:04.289 348329 DEBUG nova.network.neutron [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Updating instance_info_cache with network_info: [{"id": "3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b", "address": "fa:16:3e:ea:1b:25", "network": {"id": "85c8d446-ad7f-4d1b-a311-89b0b07e8aad", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.128", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2770200bdb2436c90142fa2e5ddcd47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d8505a1-5c", "ovs_interfaceid": "3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 18:37:04 compute-0 nova_compute[348325]: 2025-12-03 18:37:04.318 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Releasing lock "refresh_cache-1ca1fbdb-089c-4544-821e-0542089b8424" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 18:37:04 compute-0 nova_compute[348325]: 2025-12-03 18:37:04.319 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  3 18:37:04 compute-0 nova_compute[348325]: 2025-12-03 18:37:04.319 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:37:04 compute-0 nova_compute[348325]: 2025-12-03 18:37:04.319 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:37:04 compute-0 nova_compute[348325]: 2025-12-03 18:37:04.319 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:37:04 compute-0 nova_compute[348325]: 2025-12-03 18:37:04.320 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:37:04 compute-0 nova_compute[348325]: 2025-12-03 18:37:04.320 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:37:04 compute-0 nova_compute[348325]: 2025-12-03 18:37:04.320 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:37:04 compute-0 nova_compute[348325]: 2025-12-03 18:37:04.320 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 18:37:04 compute-0 nova_compute[348325]: 2025-12-03 18:37:04.469 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
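The run of "Running periodic task ComputeManager._*" lines is oslo.service walking the compute manager's decorated periodic methods on its timer; _reclaim_queued_deletes short-circuits because reclaim_instance_interval is unset. A minimal sketch of how such tasks are declared; the spacing value is illustrative:

    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _heal_instance_info_cache(self, context):
            # refresh one instance's network info cache per pass
            pass

        @periodic_task.periodic_task(spacing=60)
        def _reclaim_queued_deletes(self, context):
            # no-op when CONF.reclaim_instance_interval <= 0
            pass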
Dec  3 18:37:04 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1134: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:37:05 compute-0 nifty_merkle[412396]: --> passed data devices: 0 physical, 3 LVM
Dec  3 18:37:05 compute-0 nifty_merkle[412396]: --> relative data size: 1.0
Dec  3 18:37:05 compute-0 nifty_merkle[412396]: --> All data devices are unavailable
Dec  3 18:37:05 compute-0 systemd[1]: libpod-a97135b603701f083b69dfbc443136fbbe17fcff6e79b2fbc34c5f807becb5ed.scope: Deactivated successfully.
Dec  3 18:37:05 compute-0 systemd[1]: libpod-a97135b603701f083b69dfbc443136fbbe17fcff6e79b2fbc34c5f807becb5ed.scope: Consumed 1.304s CPU time.
Dec  3 18:37:05 compute-0 podman[412380]: 2025-12-03 18:37:05.223941024 +0000 UTC m=+1.608124222 container died a97135b603701f083b69dfbc443136fbbe17fcff6e79b2fbc34c5f807becb5ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_merkle, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:37:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-d671712864854097ecfd2b163d682fa45fea53eec56ded393df8c82b0bbe2015-merged.mount: Deactivated successfully.
Dec  3 18:37:05 compute-0 podman[412380]: 2025-12-03 18:37:05.309832841 +0000 UTC m=+1.694016029 container remove a97135b603701f083b69dfbc443136fbbe17fcff6e79b2fbc34c5f807becb5ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Dec  3 18:37:05 compute-0 systemd[1]: libpod-conmon-a97135b603701f083b69dfbc443136fbbe17fcff6e79b2fbc34c5f807becb5ed.scope: Deactivated successfully.
Dec  3 18:37:06 compute-0 podman[412573]: 2025-12-03 18:37:06.384280035 +0000 UTC m=+0.081913710 container create 1c0648f0295dbb9958f98928d12367ea39fac3d491c0274fef9c4ce8de0a371c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_austin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:37:06 compute-0 podman[412573]: 2025-12-03 18:37:06.355801451 +0000 UTC m=+0.053435196 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:37:06 compute-0 systemd[1]: Started libpod-conmon-1c0648f0295dbb9958f98928d12367ea39fac3d491c0274fef9c4ce8de0a371c.scope.
Dec  3 18:37:06 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:37:06 compute-0 podman[412573]: 2025-12-03 18:37:06.516194435 +0000 UTC m=+0.213828130 container init 1c0648f0295dbb9958f98928d12367ea39fac3d491c0274fef9c4ce8de0a371c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_austin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  3 18:37:06 compute-0 podman[412573]: 2025-12-03 18:37:06.52664799 +0000 UTC m=+0.224281665 container start 1c0648f0295dbb9958f98928d12367ea39fac3d491c0274fef9c4ce8de0a371c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_austin, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:37:06 compute-0 podman[412573]: 2025-12-03 18:37:06.532948784 +0000 UTC m=+0.230582479 container attach 1c0648f0295dbb9958f98928d12367ea39fac3d491c0274fef9c4ce8de0a371c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_austin, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec  3 18:37:06 compute-0 flamboyant_austin[412586]: 167 167
Dec  3 18:37:06 compute-0 systemd[1]: libpod-1c0648f0295dbb9958f98928d12367ea39fac3d491c0274fef9c4ce8de0a371c.scope: Deactivated successfully.
Dec  3 18:37:06 compute-0 podman[412591]: 2025-12-03 18:37:06.608345384 +0000 UTC m=+0.046599478 container died 1c0648f0295dbb9958f98928d12367ea39fac3d491c0274fef9c4ce8de0a371c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec  3 18:37:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-f5d023ae3d813050b33163aa645bd52464987551e63937689df42e57b0196099-merged.mount: Deactivated successfully.
Dec  3 18:37:06 compute-0 podman[412591]: 2025-12-03 18:37:06.673816422 +0000 UTC m=+0.112070486 container remove 1c0648f0295dbb9958f98928d12367ea39fac3d491c0274fef9c4ce8de0a371c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_austin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  3 18:37:06 compute-0 systemd[1]: libpod-conmon-1c0648f0295dbb9958f98928d12367ea39fac3d491c0274fef9c4ce8de0a371c.scope: Deactivated successfully.
Dec  3 18:37:06 compute-0 podman[412612]: 2025-12-03 18:37:06.888649954 +0000 UTC m=+0.044481597 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:37:06 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1135: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:37:07 compute-0 podman[412612]: 2025-12-03 18:37:07.044680213 +0000 UTC m=+0.200511776 container create 975110ebf27a8e4b173c66ac2b3bac23b0b0842c9eb8e64c24541dc833aa4416 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  3 18:37:07 compute-0 systemd[1]: Started libpod-conmon-975110ebf27a8e4b173c66ac2b3bac23b0b0842c9eb8e64c24541dc833aa4416.scope.
Dec  3 18:37:07 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:37:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82b5536c5f8a10c249c152d1be9ea9c5f698768081da7dcdd404632cd5ee5204/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:37:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82b5536c5f8a10c249c152d1be9ea9c5f698768081da7dcdd404632cd5ee5204/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:37:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82b5536c5f8a10c249c152d1be9ea9c5f698768081da7dcdd404632cd5ee5204/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:37:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82b5536c5f8a10c249c152d1be9ea9c5f698768081da7dcdd404632cd5ee5204/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:37:07 compute-0 podman[412612]: 2025-12-03 18:37:07.197823891 +0000 UTC m=+0.353655464 container init 975110ebf27a8e4b173c66ac2b3bac23b0b0842c9eb8e64c24541dc833aa4416 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_hopper, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:37:07 compute-0 podman[412612]: 2025-12-03 18:37:07.220784571 +0000 UTC m=+0.376616134 container start 975110ebf27a8e4b173c66ac2b3bac23b0b0842c9eb8e64c24541dc833aa4416 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_hopper, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Dec  3 18:37:07 compute-0 podman[412612]: 2025-12-03 18:37:07.225910886 +0000 UTC m=+0.381742459 container attach 975110ebf27a8e4b173c66ac2b3bac23b0b0842c9eb8e64c24541dc833aa4416 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec  3 18:37:07 compute-0 nova_compute[348325]: 2025-12-03 18:37:07.365 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:37:07 compute-0 nova_compute[348325]: 2025-12-03 18:37:07.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:37:07 compute-0 nova_compute[348325]: 2025-12-03 18:37:07.560 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:37:07 compute-0 nova_compute[348325]: 2025-12-03 18:37:07.560 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:37:07 compute-0 nova_compute[348325]: 2025-12-03 18:37:07.560 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:37:07 compute-0 nova_compute[348325]: 2025-12-03 18:37:07.561 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  3 18:37:07 compute-0 nova_compute[348325]: 2025-12-03 18:37:07.562 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 18:37:08 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:37:08 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2258288041' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:37:08 compute-0 nice_hopper[412628]: {
Dec  3 18:37:08 compute-0 nice_hopper[412628]:    "0": [
Dec  3 18:37:08 compute-0 nice_hopper[412628]:        {
Dec  3 18:37:08 compute-0 nice_hopper[412628]:            "devices": [
Dec  3 18:37:08 compute-0 nice_hopper[412628]:                "/dev/loop3"
Dec  3 18:37:08 compute-0 nice_hopper[412628]:            ],
Dec  3 18:37:08 compute-0 nice_hopper[412628]:            "lv_name": "ceph_lv0",
Dec  3 18:37:08 compute-0 nice_hopper[412628]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:37:08 compute-0 nice_hopper[412628]:            "lv_size": "21470642176",
Dec  3 18:37:08 compute-0 nice_hopper[412628]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=973fbbc8-5aff-4a53-bee8-42e5a6788dd6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:37:08 compute-0 nice_hopper[412628]:            "lv_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:37:08 compute-0 nice_hopper[412628]:            "name": "ceph_lv0",
Dec  3 18:37:08 compute-0 nice_hopper[412628]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:37:08 compute-0 nice_hopper[412628]:            "tags": {
Dec  3 18:37:08 compute-0 nice_hopper[412628]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:37:08 compute-0 nice_hopper[412628]:                "ceph.block_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:37:08 compute-0 nice_hopper[412628]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:37:08 compute-0 nice_hopper[412628]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:37:08 compute-0 nice_hopper[412628]:                "ceph.cluster_name": "ceph",
Dec  3 18:37:08 compute-0 nice_hopper[412628]:                "ceph.crush_device_class": "",
Dec  3 18:37:08 compute-0 nice_hopper[412628]:                "ceph.encrypted": "0",
Dec  3 18:37:08 compute-0 nice_hopper[412628]:                "ceph.osd_fsid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:37:08 compute-0 nice_hopper[412628]:                "ceph.osd_id": "0",
Dec  3 18:37:08 compute-0 nice_hopper[412628]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:37:08 compute-0 nice_hopper[412628]:                "ceph.type": "block",
Dec  3 18:37:08 compute-0 nice_hopper[412628]:                "ceph.vdo": "0"
Dec  3 18:37:08 compute-0 nice_hopper[412628]:            },
Dec  3 18:37:08 compute-0 nice_hopper[412628]:            "type": "block",
Dec  3 18:37:08 compute-0 nice_hopper[412628]:            "vg_name": "ceph_vg0"
Dec  3 18:37:08 compute-0 nice_hopper[412628]:        }
Dec  3 18:37:08 compute-0 nice_hopper[412628]:    ],
Dec  3 18:37:08 compute-0 nice_hopper[412628]:    "1": [
Dec  3 18:37:08 compute-0 nice_hopper[412628]:        {
Dec  3 18:37:08 compute-0 nice_hopper[412628]:            "devices": [
Dec  3 18:37:08 compute-0 nice_hopper[412628]:                "/dev/loop4"
Dec  3 18:37:08 compute-0 nice_hopper[412628]:            ],
Dec  3 18:37:08 compute-0 nice_hopper[412628]:            "lv_name": "ceph_lv1",
Dec  3 18:37:08 compute-0 nice_hopper[412628]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:37:08 compute-0 nice_hopper[412628]:            "lv_size": "21470642176",
Dec  3 18:37:08 compute-0 nice_hopper[412628]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1e2b0083-5293-47cb-a3d1-bc27cedc4ede,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:37:08 compute-0 nice_hopper[412628]:            "lv_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:37:08 compute-0 nice_hopper[412628]:            "name": "ceph_lv1",
Dec  3 18:37:08 compute-0 nice_hopper[412628]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:37:08 compute-0 nice_hopper[412628]:            "tags": {
Dec  3 18:37:08 compute-0 nice_hopper[412628]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:37:08 compute-0 nice_hopper[412628]:                "ceph.block_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:37:08 compute-0 nice_hopper[412628]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:37:08 compute-0 nice_hopper[412628]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:37:08 compute-0 nice_hopper[412628]:                "ceph.cluster_name": "ceph",
Dec  3 18:37:08 compute-0 nice_hopper[412628]:                "ceph.crush_device_class": "",
Dec  3 18:37:08 compute-0 nice_hopper[412628]:                "ceph.encrypted": "0",
Dec  3 18:37:08 compute-0 nice_hopper[412628]:                "ceph.osd_fsid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:37:08 compute-0 nice_hopper[412628]:                "ceph.osd_id": "1",
Dec  3 18:37:08 compute-0 nice_hopper[412628]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:37:08 compute-0 nice_hopper[412628]:                "ceph.type": "block",
Dec  3 18:37:08 compute-0 nice_hopper[412628]:                "ceph.vdo": "0"
Dec  3 18:37:08 compute-0 nice_hopper[412628]:            },
Dec  3 18:37:08 compute-0 nice_hopper[412628]:            "type": "block",
Dec  3 18:37:08 compute-0 nice_hopper[412628]:            "vg_name": "ceph_vg1"
Dec  3 18:37:08 compute-0 nice_hopper[412628]:        }
Dec  3 18:37:08 compute-0 nova_compute[348325]: 2025-12-03 18:37:08.138 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 18:37:08 compute-0 nice_hopper[412628]:    ],
Dec  3 18:37:08 compute-0 nice_hopper[412628]:    "2": [
Dec  3 18:37:08 compute-0 nice_hopper[412628]:        {
Dec  3 18:37:08 compute-0 nice_hopper[412628]:            "devices": [
Dec  3 18:37:08 compute-0 nice_hopper[412628]:                "/dev/loop5"
Dec  3 18:37:08 compute-0 nice_hopper[412628]:            ],
Dec  3 18:37:08 compute-0 nice_hopper[412628]:            "lv_name": "ceph_lv2",
Dec  3 18:37:08 compute-0 nice_hopper[412628]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:37:08 compute-0 nice_hopper[412628]:            "lv_size": "21470642176",
Dec  3 18:37:08 compute-0 nice_hopper[412628]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2abec9de-afba-437e-9a17-384a1dd8cd50,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:37:08 compute-0 nice_hopper[412628]:            "lv_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:37:08 compute-0 nice_hopper[412628]:            "name": "ceph_lv2",
Dec  3 18:37:08 compute-0 nice_hopper[412628]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:37:08 compute-0 nice_hopper[412628]:            "tags": {
Dec  3 18:37:08 compute-0 nice_hopper[412628]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:37:08 compute-0 nice_hopper[412628]:                "ceph.block_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:37:08 compute-0 nice_hopper[412628]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:37:08 compute-0 nice_hopper[412628]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:37:08 compute-0 nice_hopper[412628]:                "ceph.cluster_name": "ceph",
Dec  3 18:37:08 compute-0 nice_hopper[412628]:                "ceph.crush_device_class": "",
Dec  3 18:37:08 compute-0 nice_hopper[412628]:                "ceph.encrypted": "0",
Dec  3 18:37:08 compute-0 nice_hopper[412628]:                "ceph.osd_fsid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:37:08 compute-0 nice_hopper[412628]:                "ceph.osd_id": "2",
Dec  3 18:37:08 compute-0 nice_hopper[412628]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:37:08 compute-0 nice_hopper[412628]:                "ceph.type": "block",
Dec  3 18:37:08 compute-0 nice_hopper[412628]:                "ceph.vdo": "0"
Dec  3 18:37:08 compute-0 nice_hopper[412628]:            },
Dec  3 18:37:08 compute-0 nice_hopper[412628]:            "type": "block",
Dec  3 18:37:08 compute-0 nice_hopper[412628]:            "vg_name": "ceph_vg2"
Dec  3 18:37:08 compute-0 nice_hopper[412628]:        }
Dec  3 18:37:08 compute-0 nice_hopper[412628]:    ]
Dec  3 18:37:08 compute-0 nice_hopper[412628]: }
Dec  3 18:37:08 compute-0 systemd[1]: libpod-975110ebf27a8e4b173c66ac2b3bac23b0b0842c9eb8e64c24541dc833aa4416.scope: Deactivated successfully.
Dec  3 18:37:08 compute-0 conmon[412628]: conmon 975110ebf27a8e4b173c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-975110ebf27a8e4b173c66ac2b3bac23b0b0842c9eb8e64c24541dc833aa4416.scope/container/memory.events
Dec  3 18:37:08 compute-0 podman[412612]: 2025-12-03 18:37:08.180054776 +0000 UTC m=+1.335886329 container died 975110ebf27a8e4b173c66ac2b3bac23b0b0842c9eb8e64c24541dc833aa4416 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_hopper, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec  3 18:37:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-82b5536c5f8a10c249c152d1be9ea9c5f698768081da7dcdd404632cd5ee5204-merged.mount: Deactivated successfully.
Dec  3 18:37:08 compute-0 podman[412612]: 2025-12-03 18:37:08.250597137 +0000 UTC m=+1.406428690 container remove 975110ebf27a8e4b173c66ac2b3bac23b0b0842c9eb8e64c24541dc833aa4416 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_hopper, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  3 18:37:08 compute-0 systemd[1]: libpod-conmon-975110ebf27a8e4b173c66ac2b3bac23b0b0842c9eb8e64c24541dc833aa4416.scope: Deactivated successfully.
Dec  3 18:37:08 compute-0 nova_compute[348325]: 2025-12-03 18:37:08.272 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:37:08 compute-0 nova_compute[348325]: 2025-12-03 18:37:08.272 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:37:08 compute-0 nova_compute[348325]: 2025-12-03 18:37:08.272 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:37:08 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:37:08 compute-0 nova_compute[348325]: 2025-12-03 18:37:08.669 348329 WARNING nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  3 18:37:08 compute-0 nova_compute[348325]: 2025-12-03 18:37:08.670 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4042MB free_disk=59.97224044799805GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  3 18:37:08 compute-0 nova_compute[348325]: 2025-12-03 18:37:08.671 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:37:08 compute-0 nova_compute[348325]: 2025-12-03 18:37:08.671 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:37:08 compute-0 nova_compute[348325]: 2025-12-03 18:37:08.767 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance 1ca1fbdb-089c-4544-821e-0542089b8424 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  3 18:37:08 compute-0 nova_compute[348325]: 2025-12-03 18:37:08.768 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  3 18:37:08 compute-0 nova_compute[348325]: 2025-12-03 18:37:08.769 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  3 18:37:08 compute-0 ovn_controller[89305]: 2025-12-03T18:37:08Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ea:1b:25 192.168.0.128
Dec  3 18:37:08 compute-0 ovn_controller[89305]: 2025-12-03T18:37:08Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ea:1b:25 192.168.0.128
Dec  3 18:37:08 compute-0 nova_compute[348325]: 2025-12-03 18:37:08.799 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 18:37:08 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1136: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s rd, 2 op/s
Dec  3 18:37:09 compute-0 podman[412828]: 2025-12-03 18:37:09.197291414 +0000 UTC m=+0.065724135 container create f5dcb68c2d51fc2a843338e17ce49e9da9af93aaf7c5b067caba0e3e6644c071 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_swanson, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec  3 18:37:09 compute-0 systemd[1]: Started libpod-conmon-f5dcb68c2d51fc2a843338e17ce49e9da9af93aaf7c5b067caba0e3e6644c071.scope.
Dec  3 18:37:09 compute-0 podman[412828]: 2025-12-03 18:37:09.173004802 +0000 UTC m=+0.041437553 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:37:09 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:37:09 compute-0 podman[412828]: 2025-12-03 18:37:09.310357085 +0000 UTC m=+0.178789806 container init f5dcb68c2d51fc2a843338e17ce49e9da9af93aaf7c5b067caba0e3e6644c071 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_swanson, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:37:09 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:37:09 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2701981996' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:37:09 compute-0 podman[412828]: 2025-12-03 18:37:09.321437485 +0000 UTC m=+0.189870186 container start f5dcb68c2d51fc2a843338e17ce49e9da9af93aaf7c5b067caba0e3e6644c071 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_swanson, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:37:09 compute-0 podman[412828]: 2025-12-03 18:37:09.325915394 +0000 UTC m=+0.194348125 container attach f5dcb68c2d51fc2a843338e17ce49e9da9af93aaf7c5b067caba0e3e6644c071 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_swanson, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec  3 18:37:09 compute-0 tender_swanson[412844]: 167 167
Dec  3 18:37:09 compute-0 systemd[1]: libpod-f5dcb68c2d51fc2a843338e17ce49e9da9af93aaf7c5b067caba0e3e6644c071.scope: Deactivated successfully.
Dec  3 18:37:09 compute-0 conmon[412844]: conmon f5dcb68c2d51fc2a8433 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f5dcb68c2d51fc2a843338e17ce49e9da9af93aaf7c5b067caba0e3e6644c071.scope/container/memory.events
Dec  3 18:37:09 compute-0 podman[412828]: 2025-12-03 18:37:09.331745206 +0000 UTC m=+0.200177937 container died f5dcb68c2d51fc2a843338e17ce49e9da9af93aaf7c5b067caba0e3e6644c071 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_swanson, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec  3 18:37:09 compute-0 nova_compute[348325]: 2025-12-03 18:37:09.350 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 18:37:09 compute-0 nova_compute[348325]: 2025-12-03 18:37:09.360 348329 DEBUG nova.compute.provider_tree [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Updating inventory in ProviderTree for provider 00cd1895-22aa-49c6-bdb2-0991af662704 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec  3 18:37:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-b11bb4311d3ccee2c57ce31766bc58185c19f33c8a2e084037b0b583ea1633a0-merged.mount: Deactivated successfully.
Dec  3 18:37:09 compute-0 podman[412828]: 2025-12-03 18:37:09.385075728 +0000 UTC m=+0.253508439 container remove f5dcb68c2d51fc2a843338e17ce49e9da9af93aaf7c5b067caba0e3e6644c071 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 18:37:09 compute-0 systemd[1]: libpod-conmon-f5dcb68c2d51fc2a843338e17ce49e9da9af93aaf7c5b067caba0e3e6644c071.scope: Deactivated successfully.
Dec  3 18:37:09 compute-0 nova_compute[348325]: 2025-12-03 18:37:09.415 348329 ERROR nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [req-9c127459-9e9e-4106-b353-142c9b0f97b5] Failed to update inventory to [{'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID 00cd1895-22aa-49c6-bdb2-0991af662704.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-9c127459-9e9e-4106-b353-142c9b0f97b5"}]}
Dec  3 18:37:09 compute-0 nova_compute[348325]: 2025-12-03 18:37:09.440 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Refreshing inventories for resource provider 00cd1895-22aa-49c6-bdb2-0991af662704 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec  3 18:37:09 compute-0 nova_compute[348325]: 2025-12-03 18:37:09.477 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Updating ProviderTree inventory for provider 00cd1895-22aa-49c6-bdb2-0991af662704 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec  3 18:37:09 compute-0 nova_compute[348325]: 2025-12-03 18:37:09.478 348329 DEBUG nova.compute.provider_tree [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Updating inventory in ProviderTree for provider 00cd1895-22aa-49c6-bdb2-0991af662704 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec  3 18:37:09 compute-0 nova_compute[348325]: 2025-12-03 18:37:09.506 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Refreshing aggregate associations for resource provider 00cd1895-22aa-49c6-bdb2-0991af662704, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec  3 18:37:09 compute-0 nova_compute[348325]: 2025-12-03 18:37:09.536 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Refreshing trait associations for resource provider 00cd1895-22aa-49c6-bdb2-0991af662704, traits: COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_FMA3,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AESNI,HW_CPU_X86_AMD_SVM,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SVM,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_ABM,HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_BMI,HW_CPU_X86_SHA,COMPUTE_NODE,HW_CPU_X86_SSE42,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE4A,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,HW_CPU_X86_AVX2,COMPUTE_ACCELERATORS,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ARI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec  3 18:37:09 compute-0 nova_compute[348325]: 2025-12-03 18:37:09.576 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 18:37:09 compute-0 podman[412869]: 2025-12-03 18:37:09.662807557 +0000 UTC m=+0.091116926 container create 3704ecc5de7c23c507d3afb91fd98002c7bbf90c05e7a4a361b5ec05c083ee6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec  3 18:37:09 compute-0 podman[412869]: 2025-12-03 18:37:09.630780555 +0000 UTC m=+0.059090014 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:37:09 compute-0 ceph-osd[207851]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Dec  3 18:37:09 compute-0 systemd[1]: Started libpod-conmon-3704ecc5de7c23c507d3afb91fd98002c7bbf90c05e7a4a361b5ec05c083ee6d.scope.
Dec  3 18:37:09 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:37:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3223e48a47f8ed70210b9cce4b18bbaae9d5e73b8ea11b012b04960cd1cffd79/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:37:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3223e48a47f8ed70210b9cce4b18bbaae9d5e73b8ea11b012b04960cd1cffd79/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:37:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3223e48a47f8ed70210b9cce4b18bbaae9d5e73b8ea11b012b04960cd1cffd79/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:37:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3223e48a47f8ed70210b9cce4b18bbaae9d5e73b8ea11b012b04960cd1cffd79/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:37:09 compute-0 podman[412869]: 2025-12-03 18:37:09.822861294 +0000 UTC m=+0.251170663 container init 3704ecc5de7c23c507d3afb91fd98002c7bbf90c05e7a4a361b5ec05c083ee6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec  3 18:37:09 compute-0 podman[412869]: 2025-12-03 18:37:09.838826113 +0000 UTC m=+0.267135482 container start 3704ecc5de7c23c507d3afb91fd98002c7bbf90c05e7a4a361b5ec05c083ee6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  3 18:37:09 compute-0 podman[412869]: 2025-12-03 18:37:09.843508148 +0000 UTC m=+0.271817607 container attach 3704ecc5de7c23c507d3afb91fd98002c7bbf90c05e7a4a361b5ec05c083ee6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_dewdney, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:37:10 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:37:10 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1720277002' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:37:10 compute-0 nova_compute[348325]: 2025-12-03 18:37:10.106 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 18:37:10 compute-0 nova_compute[348325]: 2025-12-03 18:37:10.119 348329 DEBUG nova.compute.provider_tree [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Updating inventory in ProviderTree for provider 00cd1895-22aa-49c6-bdb2-0991af662704 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec  3 18:37:10 compute-0 nova_compute[348325]: 2025-12-03 18:37:10.163 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Updated inventory for provider 00cd1895-22aa-49c6-bdb2-0991af662704 with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Dec  3 18:37:10 compute-0 nova_compute[348325]: 2025-12-03 18:37:10.163 348329 DEBUG nova.compute.provider_tree [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Updating resource provider 00cd1895-22aa-49c6-bdb2-0991af662704 generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Dec  3 18:37:10 compute-0 nova_compute[348325]: 2025-12-03 18:37:10.164 348329 DEBUG nova.compute.provider_tree [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Updating inventory in ProviderTree for provider 00cd1895-22aa-49c6-bdb2-0991af662704 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec  3 18:37:10 compute-0 nova_compute[348325]: 2025-12-03 18:37:10.190 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  3 18:37:10 compute-0 nova_compute[348325]: 2025-12-03 18:37:10.191 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.520s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
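
The seven nova_compute lines above are a single resource-tracker pass: Nova probes Ceph capacity with ceph df, builds a MEMORY_MB/VCPU/DISK_GB inventory, pushes it to Placement, and bumps the provider generation from 3 to 4. A minimal Python sketch (not Nova's actual code) of how those inventory fields translate into schedulable capacity, assuming Placement's documented formula (total - reserved) * allocation_ratio:

    # Sketch: schedulable capacity implied by the inventory logged above.
    # Placement computes capacity as (total - reserved) * allocation_ratio.
    inventory = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity:g} schedulable")

That yields 7167 MB of RAM, 32 VCPUs, and 52.2 GB of disk for this host.
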
Dec  3 18:37:10 compute-0 podman[412931]: 2025-12-03 18:37:10.930860558 +0000 UTC m=+0.092843027 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  3 18:37:10 compute-0 podman[412932]: 2025-12-03 18:37:10.95636732 +0000 UTC m=+0.115691934 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
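
The node_exporter config above enables the systemd collector but restricts it with --collector.systemd.unit-include. A quick way to check which units that regex admits (a Python stand-in for node_exporter's anchored Go matching; the sample unit names are arbitrary):

    import re

    # The unit-include regex from the node_exporter command line above;
    # node_exporter anchors include patterns, so fullmatch approximates it.
    pattern = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")
    for unit in ["openvswitch.service", "virtqemud.service", "sshd.service"]:
        print(unit, bool(pattern.fullmatch(unit)))   # True, True, False
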
Dec  3 18:37:10 compute-0 elegant_dewdney[412904]: {
Dec  3 18:37:10 compute-0 elegant_dewdney[412904]:    "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
Dec  3 18:37:10 compute-0 elegant_dewdney[412904]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:37:10 compute-0 elegant_dewdney[412904]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 18:37:10 compute-0 elegant_dewdney[412904]:        "osd_id": 1,
Dec  3 18:37:10 compute-0 elegant_dewdney[412904]:        "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:37:10 compute-0 elegant_dewdney[412904]:        "type": "bluestore"
Dec  3 18:37:10 compute-0 elegant_dewdney[412904]:    },
Dec  3 18:37:10 compute-0 elegant_dewdney[412904]:    "2abec9de-afba-437e-9a17-384a1dd8cd50": {
Dec  3 18:37:10 compute-0 elegant_dewdney[412904]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:37:10 compute-0 elegant_dewdney[412904]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 18:37:10 compute-0 elegant_dewdney[412904]:        "osd_id": 2,
Dec  3 18:37:10 compute-0 elegant_dewdney[412904]:        "osd_uuid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:37:10 compute-0 elegant_dewdney[412904]:        "type": "bluestore"
Dec  3 18:37:10 compute-0 elegant_dewdney[412904]:    },
Dec  3 18:37:10 compute-0 elegant_dewdney[412904]:    "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": {
Dec  3 18:37:10 compute-0 elegant_dewdney[412904]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:37:10 compute-0 elegant_dewdney[412904]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 18:37:10 compute-0 elegant_dewdney[412904]:        "osd_id": 0,
Dec  3 18:37:10 compute-0 elegant_dewdney[412904]:        "osd_uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:37:10 compute-0 elegant_dewdney[412904]:        "type": "bluestore"
Dec  3 18:37:10 compute-0 elegant_dewdney[412904]:    }
Dec  3 18:37:10 compute-0 elegant_dewdney[412904]: }
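
The JSON block above comes from the short-lived elegant_dewdney ceph container (started at 18:37:09, removed at 18:37:11) and has the shape of ceph-volume raw list output: a map from OSD UUID to its backing device. A sketch of consuming it, assuming the block has been captured to a file named osd_list.json (hypothetical name):

    import json

    # Hypothetical capture of the JSON block above.
    with open("osd_list.json") as f:
        osds = json.load(f)

    # One line per OSD, ordered by osd_id: 0 -> ceph_vg0-ceph_lv0, etc.
    for meta in sorted(osds.values(), key=lambda m: m["osd_id"]):
        print(f"osd.{meta['osd_id']}  {meta['device']}  type={meta['type']}")
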
Dec  3 18:37:10 compute-0 podman[412933]: 2025-12-03 18:37:10.969649385 +0000 UTC m=+0.122229864 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.buildah.version=1.33.7, architecture=x86_64, config_id=edpm, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, vcs-type=git)
Dec  3 18:37:10 compute-0 systemd[1]: libpod-3704ecc5de7c23c507d3afb91fd98002c7bbf90c05e7a4a361b5ec05c083ee6d.scope: Deactivated successfully.
Dec  3 18:37:10 compute-0 podman[412869]: 2025-12-03 18:37:10.990707169 +0000 UTC m=+1.419016568 container died 3704ecc5de7c23c507d3afb91fd98002c7bbf90c05e7a4a361b5ec05c083ee6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec  3 18:37:10 compute-0 systemd[1]: libpod-3704ecc5de7c23c507d3afb91fd98002c7bbf90c05e7a4a361b5ec05c083ee6d.scope: Consumed 1.142s CPU time.
Dec  3 18:37:10 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1137: 321 pgs: 321 active+clean; 53 MiB data, 184 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 270 KiB/s wr, 20 op/s
Dec  3 18:37:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-3223e48a47f8ed70210b9cce4b18bbaae9d5e73b8ea11b012b04960cd1cffd79-merged.mount: Deactivated successfully.
Dec  3 18:37:11 compute-0 podman[412869]: 2025-12-03 18:37:11.074791501 +0000 UTC m=+1.503100870 container remove 3704ecc5de7c23c507d3afb91fd98002c7bbf90c05e7a4a361b5ec05c083ee6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_dewdney, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec  3 18:37:11 compute-0 systemd[1]: libpod-conmon-3704ecc5de7c23c507d3afb91fd98002c7bbf90c05e7a4a361b5ec05c083ee6d.scope: Deactivated successfully.
Dec  3 18:37:11 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:37:11 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:37:11 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:37:11 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:37:11 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 3b2ce170-cfc6-409a-b2a4-8a0fba1b4cf2 does not exist
Dec  3 18:37:11 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 6a8f9ead-8b71-475d-bbd8-33976571a090 does not exist
Dec  3 18:37:12 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:37:12 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:37:12 compute-0 nova_compute[348325]: 2025-12-03 18:37:12.367 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:37:12 compute-0 nova_compute[348325]: 2025-12-03 18:37:12.370 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:37:12 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1138: 321 pgs: 321 active+clean; 74 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 149 KiB/s rd, 1.5 MiB/s wr, 52 op/s
Dec  3 18:37:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:13.246 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 18:37:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:13.247 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  3 18:37:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:13.247 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:37:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:13.248 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7eff8d7fffe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:37:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:13.249 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:37:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:13.251 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff9026f920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:37:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:13.251 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:37:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:13.251 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:37:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:13.252 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffa10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:37:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:13.252 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8daba2d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:37:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:13.252 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:37:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:13.253 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff90799b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:37:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:13.253 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:37:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:13.253 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8f46ebd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:37:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:13.253 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:37:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:13.253 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:37:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:13.254 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:37:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:13.254 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:37:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:13.255 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:37:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:13.256 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:37:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:13.256 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:37:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:13.256 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:37:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:13.257 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:37:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:13.257 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:37:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:13.257 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:37:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:13.258 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7fff50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:37:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:13.258 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:37:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:13.258 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7fffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:37:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:13.258 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8ef7c7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:37:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:13.256 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 1ca1fbdb-089c-4544-821e-0542089b8424 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec  3 18:37:13 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:37:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:13.637 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/1ca1fbdb-089c-4544-821e-0542089b8424 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}381125532ab0338283f553a8d9011c877e61445a70740cb69aa0e3ed00495f3c" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec  3 18:37:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_18:37:13
Dec  3 18:37:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 18:37:13 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 18:37:13 compute-0 ceph-mgr[193091]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.control', 'volumes', 'images', 'backups', 'vms', '.rgw.root', '.mgr', 'default.rgw.log', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Dec  3 18:37:13 compute-0 ceph-mgr[193091]: [balancer INFO root] prepared 0/10 changes
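
These balancer lines record one optimization round: mode upmap, a 5% misplaced-ratio ceiling, eleven candidate pools, and "prepared 0/10 changes", i.e. no remappings were needed within the round's change budget. To confirm the same state interactively, a sketch shelling out to the real ceph balancer status command (assumes the ceph CLI and a usable keyring on the host):

    import json
    import subprocess

    # '-f json' is the standard ceph CLI formatting flag.
    raw = subprocess.check_output(["ceph", "balancer", "status", "-f", "json"])
    status = json.loads(raw)
    print(status.get("active"), status.get("mode"))   # expect: True upmap
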
Dec  3 18:37:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:37:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:37:13 compute-0 podman[413065]: 2025-12-03 18:37:13.945955908 +0000 UTC m=+0.104021250 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  3 18:37:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:37:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:37:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:37:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.107 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1850 Content-Type: application/json Date: Wed, 03 Dec 2025 18:37:13 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-ccb3322a-8411-4c3a-9e37-1b94bbfe76f7 x-openstack-request-id: req-ccb3322a-8411-4c3a-9e37-1b94bbfe76f7 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.108 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "1ca1fbdb-089c-4544-821e-0542089b8424", "name": "test_0", "status": "ACTIVE", "tenant_id": "d2770200bdb2436c90142fa2e5ddcd47", "user_id": "56338958b09445f5af9aa9e4601a1a8a", "metadata": {}, "hostId": "233c08f520fd9700ef62a871bc5d558f2659759d89ea6c0726998878", "image": {"id": "e68cd467-b4e6-45e0-8e55-984fda402294", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/e68cd467-b4e6-45e0-8e55-984fda402294"}]}, "flavor": {"id": "6cb250a4-d28c-4125-888b-653b31e29275", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/6cb250a4-d28c-4125-888b-653b31e29275"}]}, "created": "2025-12-03T18:36:14Z", "updated": "2025-12-03T18:36:32Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.128", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:ea:1b:25"}, {"version": 4, "addr": "192.168.122.225", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:ea:1b:25"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/1ca1fbdb-089c-4544-821e-0542089b8424"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/1ca1fbdb-089c-4544-821e-0542089b8424"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-03T18:36:32.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.108 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/1ca1fbdb-089c-4544-821e-0542089b8424 used request id req-ccb3322a-8411-4c3a-9e37-1b94bbfe76f7 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec  3 18:37:14 compute-0 ovn_controller[89305]: 2025-12-03T18:37:14Z|00034|memory_trim|INFO|Detected inactivity (last active 30006 ms ago): trimming memory
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.111 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '1ca1fbdb-089c-4544-821e-0542089b8424', 'name': 'test_0', 'flavor': {'id': '6cb250a4-d28c-4125-888b-653b31e29275', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e68cd467-b4e6-45e0-8e55-984fda402294'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'd2770200bdb2436c90142fa2e5ddcd47', 'user_id': '56338958b09445f5af9aa9e4601a1a8a', 'hostId': '233c08f520fd9700ef62a871bc5d558f2659759d89ea6c0726998878', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
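
The discovery line above closes the round-trip that began with the GET to nova-internal at 18:37:13: the full Nova server document in RESP BODY is reduced to the compact instance record that discovery caches. A sketch of that reduction, assuming the RESP BODY JSON was saved verbatim to server.json (hypothetical name):

    import json

    # Hypothetical capture of the RESP BODY logged above.
    server = json.load(open("server.json"))["server"]

    # Roughly the fields the discovery record above carries.
    record = {
        "id": server["id"],                      # 1ca1fbdb-089c-...
        "name": server["name"],                  # test_0
        "status": server["status"].lower(),      # active
        "host": server["OS-EXT-SRV-ATTR:host"],  # compute-0.ctlplane.example.com
        "tenant_id": server["tenant_id"],
        "user_id": server["user_id"],
    }
    print(record)
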
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.111 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.111 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.112 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.113 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.115 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-03T18:37:14.112269) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.122 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 1ca1fbdb-089c-4544-821e-0542089b8424 / tap3d8505a1-5c inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.122 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.124 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.124 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7eff8d8a80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.125 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.125 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.125 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.125 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.126 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.outgoing.bytes volume: 1512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.126 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-03T18:37:14.125816) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.127 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.127 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7eff8d8a8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.127 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.127 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff9026f920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.127 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff9026f920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.128 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.128 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.outgoing.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.128 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-03T18:37:14.128003) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.129 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.129 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7eff8d8a8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.129 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.129 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.129 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.130 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.130 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.131 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-03T18:37:14.130033) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.131 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.132 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7eff8d8a81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.132 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.132 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.132 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.132 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.133 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.134 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-03T18:37:14.132709) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.133 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: test_0>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: test_0>]
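
The lone ERROR in this window is expected behavior rather than a failure: the line just above it shows the libvirt inspector provides no data for *.rate meters, so the pollster raises PollsterPermanentError and the manager stops offering those resources to it instead of retrying every cycle. A sketch of that pattern against ceilometer's plugin_base (the exception and module names match the log; the pollster class itself is illustrative and its base-class signature is an assumption):

    from ceilometer.polling import plugin_base

    class NoRateDataPollster(plugin_base.PollsterBase):
        """Illustrative pollster whose backend can never supply samples."""

        @property
        def default_discovery(self):
            return "local_instances"

        def get_samples(self, manager, cache, resources):
            # PollsterPermanentError tells the polling manager to exclude
            # these resources from future polling cycles for this pollster.
            raise plugin_base.PollsterPermanentError(resources)
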
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.136 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7eff8d7ff9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.136 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.137 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ffa10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.137 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ffa10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.137 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.137 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.incoming.bytes volume: 1884 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.138 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.138 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7eff8d7fe840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.139 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.139 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-03T18:37:14.137359) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.139 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8daba2d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.139 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8daba2d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.140 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.141 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-03T18:37:14.140112) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.173 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.174 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.175 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.176 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.176 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7eff8d8a82c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.177 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.177 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a82f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.177 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a82f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.177 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.178 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.178 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-03T18:37:14.177593) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.178 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.179 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7eff8d7ff9b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.179 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.179 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff90799b20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.179 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff90799b20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.180 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.181 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-03T18:37:14.179992) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.229 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/memory.usage volume: 33.31640625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.230 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
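The fractional memory.usage volume is a plain KiB-to-MiB conversion; assuming the agent reads a KiB counter from libvirt's memoryStats (such as the guest RSS), the logged 33.31640625 corresponds to 34116 KiB:

    # Assumption: memory.usage derives from a KiB-valued counter, e.g.
    # libvirt memoryStats 'rss'; the division reproduces the logged volume.
    rss_kib = 34116
    print(rss_kib / 1024)  # 33.31640625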
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.230 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7eff8d8a8350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.230 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.231 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.231 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.231 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.231 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.232 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-03T18:37:14.231432) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.232 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.232 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7eff8f682330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.233 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.233 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8f46ebd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.233 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8f46ebd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.233 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.234 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.234 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.235 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.235 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.236 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7eff8d7ff4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.236 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.236 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-03T18:37:14.233892) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.236 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.237 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.237 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.237 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-03T18:37:14.237165) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:37:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 18:37:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:37:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 18:37:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:37:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:37:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:37:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:37:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:37:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:37:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
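Interleaved with the polling run, the ceph-mgr rbd_support module reloads its schedules: MirrorSnapshotScheduleHandler and TrashPurgeScheduleHandler each walk the same four pools (vms, volumes, backups, images), which is why every load_schedules line appears twice. A hedged way to see what those handlers just loaded, assuming a host with the rbd CLI and cluster credentials (pool names taken from the log):

    import subprocess

    for pool in ("vms", "volumes", "backups", "images"):
        for sub in (["mirror", "snapshot", "schedule", "ls"],
                    ["trash", "purge", "schedule", "ls"]):
            out = subprocess.run(["rbd", *sub, "--pool", pool],
                                 capture_output=True, text=True)
            print(pool, " ".join(sub), "->", out.stdout.strip() or "(none)")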
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.351 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.352 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.352 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.353 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.354 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7eff8d930c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.355 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.355 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ffce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.355 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ffce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.355 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.355 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.356 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-03T18:37:14.355414) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.356 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: test_0>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: test_0>]
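The ERROR line is the permanent-failure path: the libvirt inspector provides no data for IncomingBytesRatePollster, so the pollster raises PollsterPermanentError and the manager stops polling that meter for the listed resources instead of retrying every interval. A simplified sketch of that pattern (illustrative names; the real logic lives in ceilometer's polling/manager.py):

    class PollsterPermanentError(Exception):
        # Carries the resources this pollster can never serve.
        def __init__(self, resources):
            super().__init__(resources)
            self.resources = resources

    def poll_once(name, get_samples, resources, blacklist):
        todo = [r for r in resources if (name, r) not in blacklist]
        try:
            return list(get_samples(todo))
        except PollsterPermanentError as exc:
            # Mirrors "Prevent pollster <name> from polling [...] anymore!"
            blacklist.update((name, r) for r in exc.resources)
            return []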
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.356 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7eff8d7ff4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.356 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.356 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.356 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.356 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.356 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.latency volume: 1682579508 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.357 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.latency volume: 260360075 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.357 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.latency volume: 147233249 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.358 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.358 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7eff8d7ff530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.358 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.358 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.359 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.359 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-03T18:37:14.356804) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.359 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.359 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.359 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-03T18:37:14.359122) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.359 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.360 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.360 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.360 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7eff8d7ff590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.360 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.360 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.360 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.361 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.361 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.361 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-03T18:37:14.360984) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.361 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.361 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.362 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.362 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7eff8d7ff5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.362 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.362 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.362 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.363 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.363 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.bytes volume: 41590784 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.363 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.363 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.364 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.364 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7eff8d8a8620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.364 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.364 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.364 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.365 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.365 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-03T18:37:14.362955) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.365 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.365 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-03T18:37:14.365100) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.365 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
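The power.state sample of 1 lines up with libvirt's domain-state numbering, where VIR_DOMAIN_RUNNING == 1. Assuming libvirt-python and a local qemu:///system socket, the same value can be read back directly (instance UUID taken from the log lines above):

    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByUUIDString("1ca1fbdb-089c-4544-821e-0542089b8424")
    state, _reason = dom.state()   # 1 == libvirt.VIR_DOMAIN_RUNNING
    print(state)                   # matches the power.state sample volume
    conn.close()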
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.365 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7eff8d7ff650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.366 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.366 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.366 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.366 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.366 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.latency volume: 5974847482 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.366 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.latency volume: 23959545 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.367 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.367 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-03T18:37:14.366329) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.367 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.367 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7eff8d7ff6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.368 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.368 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.368 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.368 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.368 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.requests volume: 217 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.369 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-03T18:37:14.368524) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.369 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.369 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.370 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.370 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7eff8d7ffa40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.370 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.370 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ffef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.370 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ffef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.370 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.370 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.371 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-03T18:37:14.370559) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.371 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.371 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7eff8d7ff710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.371 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.371 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.371 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.372 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.372 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.372 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7eff8d7fff20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.373 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.373 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7fff50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.373 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7fff50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.373 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.373 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.incoming.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.374 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-03T18:37:14.372058) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.374 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.374 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-03T18:37:14.373564) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.374 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7eff8d7ff770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.374 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.374 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff7a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.374 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff7a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.375 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.375 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.375 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7eff8d7fff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.375 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.376 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7fffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.376 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7fffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.376 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-03T18:37:14.374980) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.376 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.376 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.377 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-03T18:37:14.376576) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.377 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.377 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7eff8d7fdac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.377 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.377 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8ef7c7d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.377 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8ef7c7d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.377 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.378 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/cpu volume: 34810000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.378 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.383 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.383 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.384 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.396 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-03T18:37:14.377895) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.384 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.396 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.396 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.396 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.396 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.396 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.397 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.397 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.397 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.397 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.397 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.397 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.397 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.397 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.397 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.398 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.398 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.398 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.398 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.398 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.398 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.398 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:37:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:37:14.398 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
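
The run of "Finished processing pollster" lines above is one polling cycle of the ceilometer compute agent; each bracketed name is a meter enabled in the agent's polling configuration. A minimal sketch of how such a meter list is typically declared and read back, in Python (the polling.yaml fragment is hypothetical, reconstructed from the pollster names above; the real file on this node is not shown in the log, and PyYAML is assumed to be installed):

    import yaml  # PyYAML, assumed available

    # Hypothetical polling.yaml fragment matching the pollster names above.
    POLLING_YAML = """
    sources:
      - name: pollsters
        interval: 30
        meters:
          - cpu
          - power.state
          - disk.root.size
          - disk.ephemeral.size
          - disk.device.*
          - network.incoming.*
    """

    for source in yaml.safe_load(POLLING_YAML)["sources"]:
        print(source["name"], source["interval"], source["meters"])
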
Dec  3 18:37:14 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1139: 321 pgs: 321 active+clean; 77 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 157 KiB/s rd, 1.5 MiB/s wr, 56 op/s
Dec  3 18:37:15 compute-0 podman[413087]: 2025-12-03 18:37:15.935969242 +0000 UTC m=+0.087750074 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  3 18:37:15 compute-0 podman[413086]: 2025-12-03 18:37:15.947939453 +0000 UTC m=+0.105978497 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, config_id=edpm, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, vendor=Red Hat, Inc., architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release-0.7.12=, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30)
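
The two podman health_status events above are podman's scheduled healthchecks firing for the ceilometer_agent_ipmi and kepler containers (both report healthy with a failing streak of 0). The same check can be triggered by hand; a sketch using Python's subprocess (container name taken from the event above, podman assumed on PATH):

    import subprocess

    # "podman healthcheck run NAME" executes the container's configured
    # healthcheck once and exits 0 when it passes.
    result = subprocess.run(
        ["podman", "healthcheck", "run", "kepler"],
        capture_output=True, text=True,
    )
    print("healthy" if result.returncode == 0 else
          f"unhealthy: {result.stdout or result.stderr}")
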
Dec  3 18:37:16 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1140: 321 pgs: 321 active+clean; 77 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 157 KiB/s rd, 1.5 MiB/s wr, 56 op/s
Dec  3 18:37:17 compute-0 nova_compute[348325]: 2025-12-03 18:37:17.371 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:37:18 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:37:19 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1141: 321 pgs: 321 active+clean; 78 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 157 KiB/s rd, 1.5 MiB/s wr, 56 op/s
Dec  3 18:37:21 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1142: 321 pgs: 321 active+clean; 78 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 148 KiB/s rd, 1.5 MiB/s wr, 54 op/s
Dec  3 18:37:22 compute-0 nova_compute[348325]: 2025-12-03 18:37:22.374 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:37:23 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1143: 321 pgs: 321 active+clean; 78 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 87 KiB/s rd, 1.2 MiB/s wr, 35 op/s
Dec  3 18:37:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:37:23.333 286999 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:37:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:37:23.334 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:37:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:37:23.335 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:37:23 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:37:23 compute-0 podman[413126]: 2025-12-03 18:37:23.922702225 +0000 UTC m=+0.086458002 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 18:37:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 18:37:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:37:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 18:37:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:37:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0005514586182197044 of space, bias 1.0, pg target 0.1654375854659113 quantized to 32 (current 32)
Dec  3 18:37:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:37:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:37:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:37:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:37:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:37:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec  3 18:37:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:37:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 18:37:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:37:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:37:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:37:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 18:37:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:37:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 18:37:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:37:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:37:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:37:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
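
The pg_autoscaler lines above all apply the same proportion: a pool's PG target is its fraction of raw capacity (the cluster reports 64411926528 bytes, about 60 GiB) times the pool's bias, times a cluster-wide PG budget, and the result is then quantized (floored here at the pools' 32 and 16 PG levels). The logged numbers imply a budget of 300 PGs; reading that as mon_target_pg_per_osd = 100 across 3 OSDs is an assumption, but the product of 300 is what the output shows. A check in Python against two of the logged values:

    # Reproduce the pg_autoscaler arithmetic from the log lines above.
    # pg_budget=300 is implied by the logged inputs and outputs.
    def pg_target(usage_fraction, bias, pg_budget=300):
        return usage_fraction * bias * pg_budget

    # Pool 'vms': logged target 0.1654375854659113 (quantized to 32).
    assert abs(pg_target(0.0005514586182197044, 1.0)
               - 0.1654375854659113) < 1e-12
    # Pool 'cephfs.cephfs.meta': bias 4.0, logged target 0.0006104707950771635.
    assert abs(pg_target(5.087256625643029e-07, 4.0)
               - 0.0006104707950771635) < 1e-12
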
Dec  3 18:37:25 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1144: 321 pgs: 321 active+clean; 78 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 7.3 KiB/s rd, 25 KiB/s wr, 4 op/s
Dec  3 18:37:27 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1145: 321 pgs: 321 active+clean; 78 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s wr, 0 op/s
Dec  3 18:37:27 compute-0 nova_compute[348325]: 2025-12-03 18:37:27.376 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:37:27 compute-0 podman[413155]: 2025-12-03 18:37:27.98401508 +0000 UTC m=+0.134226517 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=edpm)
Dec  3 18:37:28 compute-0 podman[413154]: 2025-12-03 18:37:28.03812823 +0000 UTC m=+0.194502527 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec  3 18:37:28 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:37:29 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1146: 321 pgs: 321 active+clean; 78 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s wr, 0 op/s
Dec  3 18:37:29 compute-0 podman[158200]: time="2025-12-03T18:37:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:37:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:37:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 18:37:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:37:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8623 "" "Go-http-client/1.1"
Dec  3 18:37:31 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1147: 321 pgs: 321 active+clean; 78 MiB data, 216 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:37:31 compute-0 openstack_network_exporter[365222]: ERROR   18:37:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:37:31 compute-0 openstack_network_exporter[365222]: ERROR   18:37:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:37:31 compute-0 openstack_network_exporter[365222]: ERROR   18:37:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:37:31 compute-0 openstack_network_exporter[365222]: ERROR   18:37:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:37:31 compute-0 openstack_network_exporter[365222]: ERROR   18:37:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:37:32 compute-0 nova_compute[348325]: 2025-12-03 18:37:32.379 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  3 18:37:32 compute-0 nova_compute[348325]: 2025-12-03 18:37:32.380 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:37:32 compute-0 nova_compute[348325]: 2025-12-03 18:37:32.381 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec  3 18:37:32 compute-0 nova_compute[348325]: 2025-12-03 18:37:32.381 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  3 18:37:32 compute-0 nova_compute[348325]: 2025-12-03 18:37:32.382 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  3 18:37:32 compute-0 nova_compute[348325]: 2025-12-03 18:37:32.384 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
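
The six lines above are the OVSDB client's keepalive in action: after roughly 5 s with no traffic on the tcp:127.0.0.1:6640 session, the python-ovs reconnect FSM sends an inactivity probe (a JSON-RPC "echo" request, per RFC 7047 §4.1.11) and moves IDLE back to ACTIVE when the reply arrives. A bare-bones probe against the same endpoint (sketch only; assumes an OVSDB server is listening there and that the reply fits in a single recv):

    import json
    import socket

    # OVSDB inactivity probes are JSON-RPC "echo" requests; a live server
    # echoes the params back under the same id.
    probe = {"method": "echo", "params": [], "id": "echo"}

    with socket.create_connection(("127.0.0.1", 6640), timeout=5) as sock:
        sock.sendall(json.dumps(probe).encode())
        reply = json.loads(sock.recv(4096))  # simplified framing
        assert reply.get("id") == "echo" and reply.get("error") is None
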
Dec  3 18:37:33 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1148: 321 pgs: 321 active+clean; 78 MiB data, 216 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:37:33 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:37:35 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1149: 321 pgs: 321 active+clean; 78 MiB data, 216 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:37:37 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1150: 321 pgs: 321 active+clean; 78 MiB data, 216 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:37:37 compute-0 nova_compute[348325]: 2025-12-03 18:37:37.384 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:37:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 18:37:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1818230788' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 18:37:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 18:37:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1818230788' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
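
The audit lines above show client.openstack dispatching "df" and "osd pool get-quota" monitor commands (a capacity poll against the volumes pool). The same calls can be issued from Python through librados; a sketch assuming the python3-rados bindings are installed and /etc/ceph/ceph.conf plus the client.openstack keyring are readable:

    import json
    import rados  # python3-rados, assumed installed

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
    cluster.connect()
    try:
        # Same commands the monitor logs as handle_command above.
        for cmd in ({"prefix": "df", "format": "json"},
                    {"prefix": "osd pool get-quota", "pool": "volumes",
                     "format": "json"}):
            ret, out, err = cluster.mon_command(json.dumps(cmd), b"")
            print(cmd["prefix"], ret, json.loads(out or b"{}"))
    finally:
        cluster.shutdown()
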
Dec  3 18:37:38 compute-0 nova_compute[348325]: 2025-12-03 18:37:38.422 348329 DEBUG oslo_concurrency.lockutils [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Acquiring lock "df72d527-943e-4e8c-b62a-63afa5f18261" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:37:38 compute-0 nova_compute[348325]: 2025-12-03 18:37:38.423 348329 DEBUG oslo_concurrency.lockutils [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "df72d527-943e-4e8c-b62a-63afa5f18261" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:37:38 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:37:38 compute-0 nova_compute[348325]: 2025-12-03 18:37:38.451 348329 DEBUG nova.compute.manager [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec  3 18:37:38 compute-0 nova_compute[348325]: 2025-12-03 18:37:38.571 348329 DEBUG oslo_concurrency.lockutils [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:37:38 compute-0 nova_compute[348325]: 2025-12-03 18:37:38.572 348329 DEBUG oslo_concurrency.lockutils [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:37:38 compute-0 nova_compute[348325]: 2025-12-03 18:37:38.587 348329 DEBUG nova.virt.hardware [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec  3 18:37:38 compute-0 nova_compute[348325]: 2025-12-03 18:37:38.588 348329 INFO nova.compute.claims [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Claim successful on node compute-0.ctlplane.example.com
Dec  3 18:37:39 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1151: 321 pgs: 321 active+clean; 78 MiB data, 216 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:37:39 compute-0 nova_compute[348325]: 2025-12-03 18:37:39.178 348329 DEBUG oslo_concurrency.processutils [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 18:37:39 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:37:39 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3903227394' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:37:39 compute-0 nova_compute[348325]: 2025-12-03 18:37:39.662 348329 DEBUG oslo_concurrency.processutils [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 18:37:39 compute-0 nova_compute[348325]: 2025-12-03 18:37:39.676 348329 DEBUG nova.compute.provider_tree [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  3 18:37:39 compute-0 nova_compute[348325]: 2025-12-03 18:37:39.698 348329 DEBUG nova.scheduler.client.report [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
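
The inventory dict above is what the resource tracker reports to placement; the schedulable amount of each resource class is (total - reserved) * allocation_ratio. Working the logged numbers as a quick check:

    # Effective capacity implied by the inventory in the log line above:
    # capacity = (total - reserved) * allocation_ratio
    inventory = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {cap:g}")
    # -> MEMORY_MB: 7167, VCPU: 32, DISK_GB: 52.2
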
Dec  3 18:37:39 compute-0 nova_compute[348325]: 2025-12-03 18:37:39.723 348329 DEBUG oslo_concurrency.lockutils [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.151s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:37:39 compute-0 nova_compute[348325]: 2025-12-03 18:37:39.724 348329 DEBUG nova.compute.manager [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec  3 18:37:39 compute-0 nova_compute[348325]: 2025-12-03 18:37:39.769 348329 DEBUG nova.compute.manager [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec  3 18:37:39 compute-0 nova_compute[348325]: 2025-12-03 18:37:39.770 348329 DEBUG nova.network.neutron [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec  3 18:37:39 compute-0 nova_compute[348325]: 2025-12-03 18:37:39.793 348329 INFO nova.virt.libvirt.driver [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec  3 18:37:39 compute-0 nova_compute[348325]: 2025-12-03 18:37:39.823 348329 DEBUG nova.compute.manager [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec  3 18:37:39 compute-0 nova_compute[348325]: 2025-12-03 18:37:39.948 348329 DEBUG nova.compute.manager [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec  3 18:37:39 compute-0 nova_compute[348325]: 2025-12-03 18:37:39.950 348329 DEBUG nova.virt.libvirt.driver [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec  3 18:37:39 compute-0 nova_compute[348325]: 2025-12-03 18:37:39.952 348329 INFO nova.virt.libvirt.driver [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Creating image(s)
Dec  3 18:37:40 compute-0 nova_compute[348325]: 2025-12-03 18:37:40.018 348329 DEBUG nova.storage.rbd_utils [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] rbd image df72d527-943e-4e8c-b62a-63afa5f18261_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  3 18:37:40 compute-0 rsyslogd[188590]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  3 18:37:40 compute-0 nova_compute[348325]: 2025-12-03 18:37:40.089 348329 DEBUG nova.storage.rbd_utils [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] rbd image df72d527-943e-4e8c-b62a-63afa5f18261_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  3 18:37:40 compute-0 nova_compute[348325]: 2025-12-03 18:37:40.138 348329 DEBUG nova.storage.rbd_utils [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] rbd image df72d527-943e-4e8c-b62a-63afa5f18261_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  3 18:37:40 compute-0 nova_compute[348325]: 2025-12-03 18:37:40.148 348329 DEBUG oslo_concurrency.processutils [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2a1fd6462a2f789b92c02c5037b663e095546067 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 18:37:40 compute-0 nova_compute[348325]: 2025-12-03 18:37:40.265 348329 DEBUG oslo_concurrency.processutils [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2a1fd6462a2f789b92c02c5037b663e095546067 --force-share --output=json" returned: 0 in 0.118s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
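
Nova runs qemu-img info under oslo.concurrency's prlimit wrapper, capping address space at 1 GiB and CPU time at 30 s so a malformed image cannot wedge the compute service. An equivalent direct call through processutils (a sketch assuming oslo.concurrency's ProcessLimits/prlimit support; the path is copied from the log line above):

    from oslo_concurrency import processutils

    # 1 GiB address-space cap and 30 s CPU cap, as in the logged command.
    limits = processutils.ProcessLimits(address_space=1073741824, cpu_time=30)
    out, err = processutils.execute(
        "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info",
        "/var/lib/nova/instances/_base/2a1fd6462a2f789b92c02c5037b663e095546067",
        "--force-share", "--output=json",
        prlimit=limits,
    )
    print(out)
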
Dec  3 18:37:40 compute-0 nova_compute[348325]: 2025-12-03 18:37:40.267 348329 DEBUG oslo_concurrency.lockutils [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Acquiring lock "2a1fd6462a2f789b92c02c5037b663e095546067" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:37:40 compute-0 nova_compute[348325]: 2025-12-03 18:37:40.268 348329 DEBUG oslo_concurrency.lockutils [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "2a1fd6462a2f789b92c02c5037b663e095546067" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:37:40 compute-0 nova_compute[348325]: 2025-12-03 18:37:40.268 348329 DEBUG oslo_concurrency.lockutils [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "2a1fd6462a2f789b92c02c5037b663e095546067" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:37:40 compute-0 nova_compute[348325]: 2025-12-03 18:37:40.320 348329 DEBUG nova.storage.rbd_utils [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] rbd image df72d527-943e-4e8c-b62a-63afa5f18261_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  3 18:37:40 compute-0 nova_compute[348325]: 2025-12-03 18:37:40.331 348329 DEBUG oslo_concurrency.processutils [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/2a1fd6462a2f789b92c02c5037b663e095546067 df72d527-943e-4e8c-b62a-63afa5f18261_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 18:37:40 compute-0 nova_compute[348325]: 2025-12-03 18:37:40.806 348329 DEBUG oslo_concurrency.processutils [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/2a1fd6462a2f789b92c02c5037b663e095546067 df72d527-943e-4e8c-b62a-63afa5f18261_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 18:37:40 compute-0 nova_compute[348325]: 2025-12-03 18:37:40.936 348329 DEBUG nova.storage.rbd_utils [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] resizing rbd image df72d527-943e-4e8c-b62a-63afa5f18261_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
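
Root-disk provisioning here is two steps: import the cached base image into the vms pool, then grow the RBD image to the flavor's 1073741824-byte (1 GiB) root disk. Nova does the resize through librbd, but a CLI equivalent of both steps, reproduced from the commands logged above, looks like this (sketch; requires the client.openstack keyring and the paths from the log):

    import subprocess

    BASE = "/var/lib/nova/instances/_base/2a1fd6462a2f789b92c02c5037b663e095546067"
    IMAGE = "df72d527-943e-4e8c-b62a-63afa5f18261_disk"
    AUTH = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

    # Step 1: import the base image into the 'vms' pool (format 2), verbatim
    # from the logged command.
    subprocess.run(["rbd", "import", "--pool", "vms", BASE, IMAGE,
                    "--image-format=2", *AUTH], check=True)
    # Step 2: grow it to 1 GiB (CLI equivalent of the logged librbd resize).
    subprocess.run(["rbd", "resize", "--pool", "vms", "--size", "1G", IMAGE,
                    *AUTH], check=True)
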
Dec  3 18:37:41 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1152: 321 pgs: 321 active+clean; 78 MiB data, 216 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:37:41 compute-0 nova_compute[348325]: 2025-12-03 18:37:41.163 348329 DEBUG nova.objects.instance [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lazy-loading 'migration_context' on Instance uuid df72d527-943e-4e8c-b62a-63afa5f18261 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  3 18:37:41 compute-0 nova_compute[348325]: 2025-12-03 18:37:41.288 348329 DEBUG nova.storage.rbd_utils [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] rbd image df72d527-943e-4e8c-b62a-63afa5f18261_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  3 18:37:41 compute-0 nova_compute[348325]: 2025-12-03 18:37:41.346 348329 DEBUG nova.storage.rbd_utils [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] rbd image df72d527-943e-4e8c-b62a-63afa5f18261_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  3 18:37:41 compute-0 nova_compute[348325]: 2025-12-03 18:37:41.357 348329 DEBUG oslo_concurrency.processutils [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 18:37:41 compute-0 nova_compute[348325]: 2025-12-03 18:37:41.453 348329 DEBUG oslo_concurrency.processutils [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 18:37:41 compute-0 nova_compute[348325]: 2025-12-03 18:37:41.454 348329 DEBUG oslo_concurrency.lockutils [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:37:41 compute-0 nova_compute[348325]: 2025-12-03 18:37:41.455 348329 DEBUG oslo_concurrency.lockutils [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:37:41 compute-0 nova_compute[348325]: 2025-12-03 18:37:41.456 348329 DEBUG oslo_concurrency.lockutils [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:37:41 compute-0 nova_compute[348325]: 2025-12-03 18:37:41.497 348329 DEBUG nova.storage.rbd_utils [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] rbd image df72d527-943e-4e8c-b62a-63afa5f18261_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  3 18:37:41 compute-0 nova_compute[348325]: 2025-12-03 18:37:41.507 348329 DEBUG oslo_concurrency.processutils [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 df72d527-943e-4e8c-b62a-63afa5f18261_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 18:37:41 compute-0 podman[413466]: 2025-12-03 18:37:41.950982333 +0000 UTC m=+0.109406112 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  3 18:37:41 compute-0 podman[413468]: 2025-12-03 18:37:41.966268706 +0000 UTC m=+0.115898760 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, architecture=x86_64, config_id=edpm, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., release=1755695350, distribution-scope=public)
Dec  3 18:37:41 compute-0 podman[413467]: 2025-12-03 18:37:41.974381494 +0000 UTC m=+0.128964839 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  3 18:37:42 compute-0 nova_compute[348325]: 2025-12-03 18:37:42.388 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:37:42 compute-0 nova_compute[348325]: 2025-12-03 18:37:42.445 348329 DEBUG oslo_concurrency.processutils [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 df72d527-943e-4e8c-b62a-63afa5f18261_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.938s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 18:37:42 compute-0 nova_compute[348325]: 2025-12-03 18:37:42.676 348329 DEBUG nova.virt.libvirt.driver [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec  3 18:37:42 compute-0 nova_compute[348325]: 2025-12-03 18:37:42.678 348329 DEBUG nova.virt.libvirt.driver [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Ensure instance console log exists: /var/lib/nova/instances/df72d527-943e-4e8c-b62a-63afa5f18261/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec  3 18:37:42 compute-0 nova_compute[348325]: 2025-12-03 18:37:42.679 348329 DEBUG oslo_concurrency.lockutils [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:37:42 compute-0 nova_compute[348325]: 2025-12-03 18:37:42.680 348329 DEBUG oslo_concurrency.lockutils [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:37:42 compute-0 nova_compute[348325]: 2025-12-03 18:37:42.680 348329 DEBUG oslo_concurrency.lockutils [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
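
Every Acquiring/acquired/released triple in this trace (compute_resources, vgpu_resources, the image-cache hashes) is oslo.concurrency's lockutils logging at DEBUG; each message names the lock, the qualified function holding it, and the wait or hold time. The pattern in code (sketch, assuming oslo.concurrency is installed and debug logging is configured):

    from oslo_concurrency import lockutils

    # Produces the same Acquiring/acquired/released DEBUG triple seen above
    # when oslo.log debug output is enabled.
    @lockutils.synchronized("vgpu_resources")
    def allocate_mdevs():
        return []  # placeholder body

    allocate_mdevs()
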
Dec  3 18:37:43 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1153: 321 pgs: 321 active+clean; 100 MiB data, 225 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 800 KiB/s wr, 14 op/s
Dec  3 18:37:43 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:37:43 compute-0 nova_compute[348325]: 2025-12-03 18:37:43.585 348329 DEBUG nova.network.neutron [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Successfully updated port: 03bf6208-f40b-4534-a297-122588172fa5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec  3 18:37:43 compute-0 nova_compute[348325]: 2025-12-03 18:37:43.603 348329 DEBUG oslo_concurrency.lockutils [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Acquiring lock "refresh_cache-df72d527-943e-4e8c-b62a-63afa5f18261" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  3 18:37:43 compute-0 nova_compute[348325]: 2025-12-03 18:37:43.604 348329 DEBUG oslo_concurrency.lockutils [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Acquired lock "refresh_cache-df72d527-943e-4e8c-b62a-63afa5f18261" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  3 18:37:43 compute-0 nova_compute[348325]: 2025-12-03 18:37:43.604 348329 DEBUG nova.network.neutron [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec  3 18:37:43 compute-0 nova_compute[348325]: 2025-12-03 18:37:43.681 348329 DEBUG nova.compute.manager [req-0d02b174-866f-4be5-b204-6c9e93adc3e5 req-f679b7d6-80a5-42d8-9dfd-5bea8ec61d96 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Received event network-changed-03bf6208-f40b-4534-a297-122588172fa5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  3 18:37:43 compute-0 nova_compute[348325]: 2025-12-03 18:37:43.683 348329 DEBUG nova.compute.manager [req-0d02b174-866f-4be5-b204-6c9e93adc3e5 req-f679b7d6-80a5-42d8-9dfd-5bea8ec61d96 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Refreshing instance network info cache due to event network-changed-03bf6208-f40b-4534-a297-122588172fa5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec  3 18:37:43 compute-0 nova_compute[348325]: 2025-12-03 18:37:43.683 348329 DEBUG oslo_concurrency.lockutils [req-0d02b174-866f-4be5-b204-6c9e93adc3e5 req-f679b7d6-80a5-42d8-9dfd-5bea8ec61d96 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "refresh_cache-df72d527-943e-4e8c-b62a-63afa5f18261" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  3 18:37:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:37:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:37:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:37:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:37:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:37:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:37:43 compute-0 nova_compute[348325]: 2025-12-03 18:37:43.997 348329 DEBUG nova.network.neutron [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec  3 18:37:44 compute-0 podman[413581]: 2025-12-03 18:37:44.829055511 +0000 UTC m=+0.118793791 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec  3 18:37:45 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1154: 321 pgs: 321 active+clean; 108 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.4 MiB/s wr, 29 op/s
Dec  3 18:37:45 compute-0 nova_compute[348325]: 2025-12-03 18:37:45.340 348329 DEBUG nova.network.neutron [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Updating instance_info_cache with network_info: [{"id": "03bf6208-f40b-4534-a297-122588172fa5", "address": "fa:16:3e:41:ba:29", "network": {"id": "85c8d446-ad7f-4d1b-a311-89b0b07e8aad", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.170", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2770200bdb2436c90142fa2e5ddcd47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03bf6208-f4", "ovs_interfaceid": "03bf6208-f40b-4534-a297-122588172fa5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 18:37:45 compute-0 nova_compute[348325]: 2025-12-03 18:37:45.990 348329 DEBUG oslo_concurrency.lockutils [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Releasing lock "refresh_cache-df72d527-943e-4e8c-b62a-63afa5f18261" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 18:37:45 compute-0 nova_compute[348325]: 2025-12-03 18:37:45.990 348329 DEBUG nova.compute.manager [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Instance network_info: |[{"id": "03bf6208-f40b-4534-a297-122588172fa5", "address": "fa:16:3e:41:ba:29", "network": {"id": "85c8d446-ad7f-4d1b-a311-89b0b07e8aad", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.170", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2770200bdb2436c90142fa2e5ddcd47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03bf6208-f4", "ovs_interfaceid": "03bf6208-f40b-4534-a297-122588172fa5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
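The network_info blob recorded by _allocate_network_async above is plain JSON, so the fixed and floating addresses can be pulled out with the standard library alone. A minimal sketch; the network_info literal is abridged from the log entry above, and this is just the json module, not a Nova API:

    import json

    # Abridged from the network_info logged by _allocate_network_async above.
    network_info = json.loads('''[{"id": "03bf6208-f40b-4534-a297-122588172fa5",
        "address": "fa:16:3e:41:ba:29",
        "network": {"subnets": [{"cidr": "192.168.0.0/24",
            "ips": [{"address": "192.168.0.170", "type": "fixed",
                     "floating_ips": [{"address": "192.168.122.213"}]}]}]}}]''')

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                # Each fixed IP may carry zero or more floating IPs.
                floats = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["id"], ip["address"], floats)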
Dec  3 18:37:45 compute-0 nova_compute[348325]: 2025-12-03 18:37:45.992 348329 DEBUG oslo_concurrency.lockutils [req-0d02b174-866f-4be5-b204-6c9e93adc3e5 req-f679b7d6-80a5-42d8-9dfd-5bea8ec61d96 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquired lock "refresh_cache-df72d527-943e-4e8c-b62a-63afa5f18261" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 18:37:45 compute-0 nova_compute[348325]: 2025-12-03 18:37:45.992 348329 DEBUG nova.network.neutron [req-0d02b174-866f-4be5-b204-6c9e93adc3e5 req-f679b7d6-80a5-42d8-9dfd-5bea8ec61d96 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Refreshing network info cache for port 03bf6208-f40b-4534-a297-122588172fa5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  3 18:37:45 compute-0 nova_compute[348325]: 2025-12-03 18:37:45.998 348329 DEBUG nova.virt.libvirt.driver [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Start _get_guest_xml network_info=[{"id": "03bf6208-f40b-4534-a297-122588172fa5", "address": "fa:16:3e:41:ba:29", "network": {"id": "85c8d446-ad7f-4d1b-a311-89b0b07e8aad", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.170", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2770200bdb2436c90142fa2e5ddcd47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03bf6208-f4", "ovs_interfaceid": "03bf6208-f40b-4534-a297-122588172fa5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-03T18:35:07Z,direct_url=<?>,disk_format='qcow2',id=e68cd467-b4e6-45e0-8e55-984fda402294,min_disk=0,min_ram=0,name='cirros',owner='d2770200bdb2436c90142fa2e5ddcd47',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-03T18:35:10Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'encryption_format': None, 'guest_format': None, 'disk_bus': 'virtio', 'size': 0, 'boot_index': 0, 'encryption_options': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'image_id': 'e68cd467-b4e6-45e0-8e55-984fda402294'}], 'ephemerals': [{'encryption_secret_uuid': None, 'encrypted': False, 'encryption_format': None, 'guest_format': None, 'disk_bus': 'virtio', 'size': 1, 'encryption_options': None, 'device_type': 'disk', 'device_name': '/dev/vdb'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  3 18:37:46 compute-0 nova_compute[348325]: 2025-12-03 18:37:46.010 348329 WARNING nova.virt.libvirt.driver [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 18:37:46 compute-0 nova_compute[348325]: 2025-12-03 18:37:46.022 348329 DEBUG nova.virt.libvirt.host [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  3 18:37:46 compute-0 nova_compute[348325]: 2025-12-03 18:37:46.022 348329 DEBUG nova.virt.libvirt.host [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  3 18:37:46 compute-0 nova_compute[348325]: 2025-12-03 18:37:46.028 348329 DEBUG nova.virt.libvirt.host [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  3 18:37:46 compute-0 nova_compute[348325]: 2025-12-03 18:37:46.029 348329 DEBUG nova.virt.libvirt.host [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
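The two probes above try cgroup v1 first and then succeed on cgroup v2, which matches a host running the unified hierarchy. On such a host the controllers available at the root are listed in a single file, so the v2 check reduces to roughly the following sketch (the path is the standard unified-hierarchy mount point, not taken from the Nova source):

    from pathlib import Path

    def has_cgroupsv2_cpu_controller(root: str = "/sys/fs/cgroup") -> bool:
        # On a cgroup v2 (unified) hierarchy, cgroup.controllers lists the
        # controllers available at the root, e.g. "cpuset cpu io memory pids".
        controllers = Path(root, "cgroup.controllers")
        try:
            return "cpu" in controllers.read_text().split()
        except FileNotFoundError:
            return False  # no unified hierarchy mounted here

    print(has_cgroupsv2_cpu_controller())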
Dec  3 18:37:46 compute-0 nova_compute[348325]: 2025-12-03 18:37:46.030 348329 DEBUG nova.virt.libvirt.driver [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  3 18:37:46 compute-0 nova_compute[348325]: 2025-12-03 18:37:46.030 348329 DEBUG nova.virt.hardware [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-03T18:35:14Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='6cb250a4-d28c-4125-888b-653b31e29275',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-03T18:35:07Z,direct_url=<?>,disk_format='qcow2',id=e68cd467-b4e6-45e0-8e55-984fda402294,min_disk=0,min_ram=0,name='cirros',owner='d2770200bdb2436c90142fa2e5ddcd47',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-03T18:35:10Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  3 18:37:46 compute-0 nova_compute[348325]: 2025-12-03 18:37:46.031 348329 DEBUG nova.virt.hardware [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  3 18:37:46 compute-0 nova_compute[348325]: 2025-12-03 18:37:46.031 348329 DEBUG nova.virt.hardware [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  3 18:37:46 compute-0 nova_compute[348325]: 2025-12-03 18:37:46.032 348329 DEBUG nova.virt.hardware [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  3 18:37:46 compute-0 nova_compute[348325]: 2025-12-03 18:37:46.032 348329 DEBUG nova.virt.hardware [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  3 18:37:46 compute-0 nova_compute[348325]: 2025-12-03 18:37:46.033 348329 DEBUG nova.virt.hardware [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  3 18:37:46 compute-0 nova_compute[348325]: 2025-12-03 18:37:46.033 348329 DEBUG nova.virt.hardware [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  3 18:37:46 compute-0 nova_compute[348325]: 2025-12-03 18:37:46.034 348329 DEBUG nova.virt.hardware [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  3 18:37:46 compute-0 nova_compute[348325]: 2025-12-03 18:37:46.034 348329 DEBUG nova.virt.hardware [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  3 18:37:46 compute-0 nova_compute[348325]: 2025-12-03 18:37:46.035 348329 DEBUG nova.virt.hardware [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  3 18:37:46 compute-0 nova_compute[348325]: 2025-12-03 18:37:46.035 348329 DEBUG nova.virt.hardware [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
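With every flavor and image constraint at 0:0:0, the only topology whose sockets x cores x threads product equals the single vCPU is 1:1:1, which is exactly what gets sorted to the front above. A simplified re-statement of that enumeration, assuming only the product rule and the 65536 limits visible in the log (the real logic in nova.virt.hardware also applies preferences and ordering):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        # Yield (sockets, cores, threads) triples whose product is the vCPU count.
        for sockets in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % sockets:
                continue
            per_socket = vcpus // sockets
            for cores in range(1, min(per_socket, max_cores) + 1):
                if per_socket % cores:
                    continue
                threads = per_socket // cores
                if threads <= max_threads:
                    yield (sockets, cores, threads)

    print(list(possible_topologies(1)))  # [(1, 1, 1)], matching the log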
Dec  3 18:37:46 compute-0 nova_compute[348325]: 2025-12-03 18:37:46.040 348329 DEBUG oslo_concurrency.processutils [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:37:46 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 18:37:46 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/326855210' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 18:37:46 compute-0 nova_compute[348325]: 2025-12-03 18:37:46.584 348329 DEBUG oslo_concurrency.processutils [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
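The monitor discovery above is an ordinary subprocess call and can be reproduced outside Nova with the same client id and conf file the log shows. A sketch, assuming the ceph CLI, /etc/ceph/ceph.conf and a readable client.openstack keyring are present on the host; the JSON field names are hedged with .get since the mon dump format varies between Ceph releases:

    import json
    import subprocess

    # The exact command oslo.concurrency logs above.
    out = subprocess.run(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True).stdout

    mon_map = json.loads(out)
    for mon in mon_map.get("mons", []):
        print(mon.get("name"), mon.get("public_addr"))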
Dec  3 18:37:46 compute-0 nova_compute[348325]: 2025-12-03 18:37:46.585 348329 DEBUG oslo_concurrency.processutils [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:37:46 compute-0 podman[413642]: 2025-12-03 18:37:46.927540051 +0000 UTC m=+0.085172880 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  3 18:37:46 compute-0 podman[413641]: 2025-12-03 18:37:46.941219164 +0000 UTC m=+0.109897323 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, distribution-scope=public, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-type=git, maintainer=Red Hat, Inc., name=ubi9, vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4)
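The podman records above are the periodic health checks for the ovn_metadata_agent, ceilometer_agent_ipmi and kepler containers, each wired to a /openstack/healthcheck script in its config_data. The same probe can be triggered by hand; a sketch shelling out to podman healthcheck run, which executes the container's configured test and exits 0 when it passes (container names copied from the log):

    import subprocess

    for name in ("ovn_metadata_agent", "ceilometer_agent_ipmi", "kepler"):
        # Runs the container's configured healthcheck test and reports
        # the outcome through the exit code.
        result = subprocess.run(["podman", "healthcheck", "run", name])
        print(name, "healthy" if result.returncode == 0 else "unhealthy")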
Dec  3 18:37:47 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1155: 321 pgs: 321 active+clean; 110 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.4 MiB/s wr, 38 op/s
Dec  3 18:37:47 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 18:37:47 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3386100080' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 18:37:47 compute-0 nova_compute[348325]: 2025-12-03 18:37:47.089 348329 DEBUG oslo_concurrency.processutils [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:37:47 compute-0 nova_compute[348325]: 2025-12-03 18:37:47.150 348329 DEBUG nova.storage.rbd_utils [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] rbd image df72d527-943e-4e8c-b62a-63afa5f18261_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
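The rbd_utils message above is Nova probing the vms pool for the config-drive image before it creates one. The same existence test can be written against the python-rbd bindings; a sketch assuming the pool name, client id and conf file shown in this log:

    import rados
    import rbd

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx("vms")
        try:
            names = rbd.RBD().list(ioctx)  # every image name in the pool
            print("df72d527-943e-4e8c-b62a-63afa5f18261_disk.config" in names)
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()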
Dec  3 18:37:47 compute-0 nova_compute[348325]: 2025-12-03 18:37:47.160 348329 DEBUG oslo_concurrency.processutils [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:37:47 compute-0 nova_compute[348325]: 2025-12-03 18:37:47.319 348329 DEBUG nova.network.neutron [req-0d02b174-866f-4be5-b204-6c9e93adc3e5 req-f679b7d6-80a5-42d8-9dfd-5bea8ec61d96 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Updated VIF entry in instance network info cache for port 03bf6208-f40b-4534-a297-122588172fa5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  3 18:37:47 compute-0 nova_compute[348325]: 2025-12-03 18:37:47.320 348329 DEBUG nova.network.neutron [req-0d02b174-866f-4be5-b204-6c9e93adc3e5 req-f679b7d6-80a5-42d8-9dfd-5bea8ec61d96 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Updating instance_info_cache with network_info: [{"id": "03bf6208-f40b-4534-a297-122588172fa5", "address": "fa:16:3e:41:ba:29", "network": {"id": "85c8d446-ad7f-4d1b-a311-89b0b07e8aad", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.170", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2770200bdb2436c90142fa2e5ddcd47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03bf6208-f4", "ovs_interfaceid": "03bf6208-f40b-4534-a297-122588172fa5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 18:37:47 compute-0 nova_compute[348325]: 2025-12-03 18:37:47.380 348329 DEBUG oslo_concurrency.lockutils [req-0d02b174-866f-4be5-b204-6c9e93adc3e5 req-f679b7d6-80a5-42d8-9dfd-5bea8ec61d96 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Releasing lock "refresh_cache-df72d527-943e-4e8c-b62a-63afa5f18261" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 18:37:47 compute-0 nova_compute[348325]: 2025-12-03 18:37:47.391 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:37:47 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 18:37:47 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2637480685' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 18:37:47 compute-0 nova_compute[348325]: 2025-12-03 18:37:47.681 348329 DEBUG oslo_concurrency.processutils [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:37:47 compute-0 nova_compute[348325]: 2025-12-03 18:37:47.683 348329 DEBUG nova.virt.libvirt.vif [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T18:37:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-66btob3-hjy2dfx75wfw-5fmurbrh4hte-vnf-qa644it4tdj5',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-66btob3-hjy2dfx75wfw-5fmurbrh4hte-vnf-qa644it4tdj5',id=2,image_ref='e68cd467-b4e6-45e0-8e55-984fda402294',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='b322e118-e1cc-40be-8d8c-553648144092'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d2770200bdb2436c90142fa2e5ddcd47',ramdisk_id='',reservation_id='r-ben8kdr5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='e68cd467-b4e6-45e0-8e55-984fda402294',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T18:37:39Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0zNzg1MjI3NzQxNzE2OTU2ODQ3PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTM3ODUyMjc3NDE3MTY5NTY4NDc9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09Mzc4NTIyNzc0MTcxNjk1Njg0Nz09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTM3ODUyMjc3NDE3MTY5NTY4NDc9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0zNzg1MjI3NzQxNzE2OTU2ODQ3PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0zNzg1MjI3NzQxNzE2OTU2ODQ3PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjI
Dec  3 18:37:47 compute-0 nova_compute[348325]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09Mzc4NTIyNzc0MTcxNjk1Njg0Nz09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTM3ODUyMjc3NDE3MTY5NTY4NDc9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0zNzg1MjI3NzQxNzE2OTU2ODQ3PT0tLQo=',user_id='56338958b09445f5af9aa9e4601a1a8a',uuid=df72d527-943e-4e8c-b62a-63afa5f18261,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "03bf6208-f40b-4534-a297-122588172fa5", "address": "fa:16:3e:41:ba:29", "network": {"id": "85c8d446-ad7f-4d1b-a311-89b0b07e8aad", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.170", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2770200bdb2436c90142fa2e5ddcd47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03bf6208-f4", "ovs_interfaceid": "03bf6208-f40b-4534-a297-122588172fa5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  3 18:37:47 compute-0 nova_compute[348325]: 2025-12-03 18:37:47.683 348329 DEBUG nova.network.os_vif_util [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Converting VIF {"id": "03bf6208-f40b-4534-a297-122588172fa5", "address": "fa:16:3e:41:ba:29", "network": {"id": "85c8d446-ad7f-4d1b-a311-89b0b07e8aad", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.170", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2770200bdb2436c90142fa2e5ddcd47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03bf6208-f4", "ovs_interfaceid": "03bf6208-f40b-4534-a297-122588172fa5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 18:37:47 compute-0 nova_compute[348325]: 2025-12-03 18:37:47.684 348329 DEBUG nova.network.os_vif_util [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:41:ba:29,bridge_name='br-int',has_traffic_filtering=True,id=03bf6208-f40b-4534-a297-122588172fa5,network=Network(85c8d446-ad7f-4d1b-a311-89b0b07e8aad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap03bf6208-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 18:37:47 compute-0 nova_compute[348325]: 2025-12-03 18:37:47.687 348329 DEBUG nova.objects.instance [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lazy-loading 'pci_devices' on Instance uuid df72d527-943e-4e8c-b62a-63afa5f18261 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 18:37:47 compute-0 nova_compute[348325]: 2025-12-03 18:37:47.711 348329 DEBUG nova.virt.libvirt.driver [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] End _get_guest_xml xml=<domain type="kvm">
Dec  3 18:37:47 compute-0 nova_compute[348325]:  <uuid>df72d527-943e-4e8c-b62a-63afa5f18261</uuid>
Dec  3 18:37:47 compute-0 nova_compute[348325]:  <name>instance-00000002</name>
Dec  3 18:37:47 compute-0 nova_compute[348325]:  <memory>524288</memory>
Dec  3 18:37:47 compute-0 nova_compute[348325]:  <vcpu>1</vcpu>
Dec  3 18:37:47 compute-0 nova_compute[348325]:  <metadata>
Dec  3 18:37:47 compute-0 nova_compute[348325]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  3 18:37:47 compute-0 nova_compute[348325]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  3 18:37:47 compute-0 nova_compute[348325]:      <nova:name>vn-66btob3-hjy2dfx75wfw-5fmurbrh4hte-vnf-qa644it4tdj5</nova:name>
Dec  3 18:37:47 compute-0 nova_compute[348325]:      <nova:creationTime>2025-12-03 18:37:46</nova:creationTime>
Dec  3 18:37:47 compute-0 nova_compute[348325]:      <nova:flavor name="m1.small">
Dec  3 18:37:47 compute-0 nova_compute[348325]:        <nova:memory>512</nova:memory>
Dec  3 18:37:47 compute-0 nova_compute[348325]:        <nova:disk>1</nova:disk>
Dec  3 18:37:47 compute-0 nova_compute[348325]:        <nova:swap>0</nova:swap>
Dec  3 18:37:47 compute-0 nova_compute[348325]:        <nova:ephemeral>1</nova:ephemeral>
Dec  3 18:37:47 compute-0 nova_compute[348325]:        <nova:vcpus>1</nova:vcpus>
Dec  3 18:37:47 compute-0 nova_compute[348325]:      </nova:flavor>
Dec  3 18:37:47 compute-0 nova_compute[348325]:      <nova:owner>
Dec  3 18:37:47 compute-0 nova_compute[348325]:        <nova:user uuid="56338958b09445f5af9aa9e4601a1a8a">admin</nova:user>
Dec  3 18:37:47 compute-0 nova_compute[348325]:        <nova:project uuid="d2770200bdb2436c90142fa2e5ddcd47">admin</nova:project>
Dec  3 18:37:47 compute-0 nova_compute[348325]:      </nova:owner>
Dec  3 18:37:47 compute-0 nova_compute[348325]:      <nova:root type="image" uuid="e68cd467-b4e6-45e0-8e55-984fda402294"/>
Dec  3 18:37:47 compute-0 nova_compute[348325]:      <nova:ports>
Dec  3 18:37:47 compute-0 nova_compute[348325]:        <nova:port uuid="03bf6208-f40b-4534-a297-122588172fa5">
Dec  3 18:37:47 compute-0 nova_compute[348325]:          <nova:ip type="fixed" address="192.168.0.170" ipVersion="4"/>
Dec  3 18:37:47 compute-0 nova_compute[348325]:        </nova:port>
Dec  3 18:37:47 compute-0 nova_compute[348325]:      </nova:ports>
Dec  3 18:37:47 compute-0 nova_compute[348325]:    </nova:instance>
Dec  3 18:37:47 compute-0 nova_compute[348325]:  </metadata>
Dec  3 18:37:47 compute-0 nova_compute[348325]:  <sysinfo type="smbios">
Dec  3 18:37:47 compute-0 nova_compute[348325]:    <system>
Dec  3 18:37:47 compute-0 nova_compute[348325]:      <entry name="manufacturer">RDO</entry>
Dec  3 18:37:47 compute-0 nova_compute[348325]:      <entry name="product">OpenStack Compute</entry>
Dec  3 18:37:47 compute-0 nova_compute[348325]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  3 18:37:47 compute-0 nova_compute[348325]:      <entry name="serial">df72d527-943e-4e8c-b62a-63afa5f18261</entry>
Dec  3 18:37:47 compute-0 nova_compute[348325]:      <entry name="uuid">df72d527-943e-4e8c-b62a-63afa5f18261</entry>
Dec  3 18:37:47 compute-0 nova_compute[348325]:      <entry name="family">Virtual Machine</entry>
Dec  3 18:37:47 compute-0 nova_compute[348325]:    </system>
Dec  3 18:37:47 compute-0 nova_compute[348325]:  </sysinfo>
Dec  3 18:37:47 compute-0 nova_compute[348325]:  <os>
Dec  3 18:37:47 compute-0 nova_compute[348325]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  3 18:37:47 compute-0 nova_compute[348325]:    <boot dev="hd"/>
Dec  3 18:37:47 compute-0 nova_compute[348325]:    <smbios mode="sysinfo"/>
Dec  3 18:37:47 compute-0 nova_compute[348325]:  </os>
Dec  3 18:37:47 compute-0 nova_compute[348325]:  <features>
Dec  3 18:37:47 compute-0 nova_compute[348325]:    <acpi/>
Dec  3 18:37:47 compute-0 nova_compute[348325]:    <apic/>
Dec  3 18:37:47 compute-0 nova_compute[348325]:    <vmcoreinfo/>
Dec  3 18:37:47 compute-0 nova_compute[348325]:  </features>
Dec  3 18:37:47 compute-0 nova_compute[348325]:  <clock offset="utc">
Dec  3 18:37:47 compute-0 nova_compute[348325]:    <timer name="pit" tickpolicy="delay"/>
Dec  3 18:37:47 compute-0 nova_compute[348325]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  3 18:37:47 compute-0 nova_compute[348325]:    <timer name="hpet" present="no"/>
Dec  3 18:37:47 compute-0 nova_compute[348325]:  </clock>
Dec  3 18:37:47 compute-0 nova_compute[348325]:  <cpu mode="host-model" match="exact">
Dec  3 18:37:47 compute-0 nova_compute[348325]:    <topology sockets="1" cores="1" threads="1"/>
Dec  3 18:37:47 compute-0 nova_compute[348325]:  </cpu>
Dec  3 18:37:47 compute-0 nova_compute[348325]:  <devices>
Dec  3 18:37:47 compute-0 nova_compute[348325]:    <disk type="network" device="disk">
Dec  3 18:37:47 compute-0 nova_compute[348325]:      <driver type="raw" cache="none"/>
Dec  3 18:37:47 compute-0 nova_compute[348325]:      <source protocol="rbd" name="vms/df72d527-943e-4e8c-b62a-63afa5f18261_disk">
Dec  3 18:37:47 compute-0 nova_compute[348325]:        <host name="192.168.122.100" port="6789"/>
Dec  3 18:37:47 compute-0 nova_compute[348325]:      </source>
Dec  3 18:37:47 compute-0 nova_compute[348325]:      <auth username="openstack">
Dec  3 18:37:47 compute-0 nova_compute[348325]:        <secret type="ceph" uuid="c1caf3ba-b2a5-5005-a11e-e955c344dccc"/>
Dec  3 18:37:47 compute-0 nova_compute[348325]:      </auth>
Dec  3 18:37:47 compute-0 nova_compute[348325]:      <target dev="vda" bus="virtio"/>
Dec  3 18:37:47 compute-0 nova_compute[348325]:    </disk>
Dec  3 18:37:47 compute-0 nova_compute[348325]:    <disk type="network" device="disk">
Dec  3 18:37:47 compute-0 nova_compute[348325]:      <driver type="raw" cache="none"/>
Dec  3 18:37:47 compute-0 nova_compute[348325]:      <source protocol="rbd" name="vms/df72d527-943e-4e8c-b62a-63afa5f18261_disk.eph0">
Dec  3 18:37:47 compute-0 nova_compute[348325]:        <host name="192.168.122.100" port="6789"/>
Dec  3 18:37:47 compute-0 nova_compute[348325]:      </source>
Dec  3 18:37:47 compute-0 nova_compute[348325]:      <auth username="openstack">
Dec  3 18:37:47 compute-0 nova_compute[348325]:        <secret type="ceph" uuid="c1caf3ba-b2a5-5005-a11e-e955c344dccc"/>
Dec  3 18:37:47 compute-0 nova_compute[348325]:      </auth>
Dec  3 18:37:47 compute-0 nova_compute[348325]:      <target dev="vdb" bus="virtio"/>
Dec  3 18:37:47 compute-0 nova_compute[348325]:    </disk>
Dec  3 18:37:47 compute-0 nova_compute[348325]:    <disk type="network" device="cdrom">
Dec  3 18:37:47 compute-0 nova_compute[348325]:      <driver type="raw" cache="none"/>
Dec  3 18:37:47 compute-0 nova_compute[348325]:      <source protocol="rbd" name="vms/df72d527-943e-4e8c-b62a-63afa5f18261_disk.config">
Dec  3 18:37:47 compute-0 nova_compute[348325]:        <host name="192.168.122.100" port="6789"/>
Dec  3 18:37:47 compute-0 nova_compute[348325]:      </source>
Dec  3 18:37:47 compute-0 nova_compute[348325]:      <auth username="openstack">
Dec  3 18:37:47 compute-0 nova_compute[348325]:        <secret type="ceph" uuid="c1caf3ba-b2a5-5005-a11e-e955c344dccc"/>
Dec  3 18:37:47 compute-0 nova_compute[348325]:      </auth>
Dec  3 18:37:47 compute-0 nova_compute[348325]:      <target dev="sda" bus="sata"/>
Dec  3 18:37:47 compute-0 nova_compute[348325]:    </disk>
Dec  3 18:37:47 compute-0 nova_compute[348325]:    <interface type="ethernet">
Dec  3 18:37:47 compute-0 nova_compute[348325]:      <mac address="fa:16:3e:41:ba:29"/>
Dec  3 18:37:47 compute-0 nova_compute[348325]:      <model type="virtio"/>
Dec  3 18:37:47 compute-0 nova_compute[348325]:      <driver name="vhost" rx_queue_size="512"/>
Dec  3 18:37:47 compute-0 nova_compute[348325]:      <mtu size="1442"/>
Dec  3 18:37:47 compute-0 nova_compute[348325]:      <target dev="tap03bf6208-f4"/>
Dec  3 18:37:47 compute-0 nova_compute[348325]:    </interface>
Dec  3 18:37:47 compute-0 nova_compute[348325]:    <serial type="pty">
Dec  3 18:37:47 compute-0 nova_compute[348325]:      <log file="/var/lib/nova/instances/df72d527-943e-4e8c-b62a-63afa5f18261/console.log" append="off"/>
Dec  3 18:37:47 compute-0 nova_compute[348325]:    </serial>
Dec  3 18:37:47 compute-0 nova_compute[348325]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  3 18:37:47 compute-0 nova_compute[348325]:    <video>
Dec  3 18:37:47 compute-0 nova_compute[348325]:      <model type="virtio"/>
Dec  3 18:37:47 compute-0 nova_compute[348325]:    </video>
Dec  3 18:37:47 compute-0 nova_compute[348325]:    <input type="tablet" bus="usb"/>
Dec  3 18:37:47 compute-0 nova_compute[348325]:    <rng model="virtio">
Dec  3 18:37:47 compute-0 nova_compute[348325]:      <backend model="random">/dev/urandom</backend>
Dec  3 18:37:47 compute-0 nova_compute[348325]:    </rng>
Dec  3 18:37:47 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root"/>
Dec  3 18:37:47 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:37:47 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:37:47 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:37:47 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:37:47 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:37:47 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:37:47 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:37:47 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:37:47 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:37:47 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:37:47 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:37:47 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:37:47 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:37:47 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:37:47 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:37:47 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:37:47 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:37:47 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:37:47 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:37:47 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:37:47 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:37:47 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:37:47 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:37:47 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:37:47 compute-0 nova_compute[348325]:    <controller type="usb" index="0"/>
Dec  3 18:37:47 compute-0 nova_compute[348325]:    <memballoon model="virtio">
Dec  3 18:37:47 compute-0 nova_compute[348325]:      <stats period="10"/>
Dec  3 18:37:47 compute-0 nova_compute[348325]:    </memballoon>
Dec  3 18:37:47 compute-0 nova_compute[348325]:  </devices>
Dec  3 18:37:47 compute-0 nova_compute[348325]: </domain>
Dec  3 18:37:47 compute-0 nova_compute[348325]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
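The XML above is what the driver hands to libvirt next. The hand-off itself is small; a sketch with the libvirt-python bindings, using a minimal domain document distilled from the one logged above and a demo name so it cannot collide with the real instance (Nova's actual call path adds flags, Ceph secrets and error handling, and this sketch undefines the domain again instead of booting it):

    import libvirt

    # Minimal subset of the guest XML logged above, renamed for the demo.
    guest_xml = """<domain type='kvm'>
      <name>xml-handoff-demo</name>
      <memory>524288</memory>
      <vcpu>1</vcpu>
      <os><type arch='x86_64' machine='q35'>hvm</type></os>
    </domain>"""

    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.defineXML(guest_xml)  # persist the definition
        print("defined:", dom.name())
        dom.undefine()                   # drop the demo definition again
    finally:
        conn.close()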
Dec  3 18:37:47 compute-0 nova_compute[348325]: 2025-12-03 18:37:47.713 348329 DEBUG nova.compute.manager [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Preparing to wait for external event network-vif-plugged-03bf6208-f40b-4534-a297-122588172fa5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  3 18:37:47 compute-0 nova_compute[348325]: 2025-12-03 18:37:47.713 348329 DEBUG oslo_concurrency.lockutils [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Acquiring lock "df72d527-943e-4e8c-b62a-63afa5f18261-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:37:47 compute-0 nova_compute[348325]: 2025-12-03 18:37:47.714 348329 DEBUG oslo_concurrency.lockutils [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "df72d527-943e-4e8c-b62a-63afa5f18261-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:37:47 compute-0 nova_compute[348325]: 2025-12-03 18:37:47.715 348329 DEBUG oslo_concurrency.lockutils [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "df72d527-943e-4e8c-b62a-63afa5f18261-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
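The Acquiring/acquired/released triple above is oslo.concurrency's standard logging around a named lock that serializes event registration per instance. The primitive itself is a context manager; a sketch with the lock name copied from the log (lockutils.lock is the public oslo.concurrency API; the body here is a placeholder, not Nova's):

    from oslo_concurrency import lockutils

    # Same named, in-process lock the log shows around _create_or_get_event.
    with lockutils.lock("df72d527-943e-4e8c-b62a-63afa5f18261-events"):
        # Critical section: inspect or mutate this instance's pending
        # external events without racing other green threads.
        pass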
Dec  3 18:37:47 compute-0 nova_compute[348325]: 2025-12-03 18:37:47.716 348329 DEBUG nova.virt.libvirt.vif [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T18:37:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-66btob3-hjy2dfx75wfw-5fmurbrh4hte-vnf-qa644it4tdj5',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-66btob3-hjy2dfx75wfw-5fmurbrh4hte-vnf-qa644it4tdj5',id=2,image_ref='e68cd467-b4e6-45e0-8e55-984fda402294',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='b322e118-e1cc-40be-8d8c-553648144092'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d2770200bdb2436c90142fa2e5ddcd47',ramdisk_id='',reservation_id='r-ben8kdr5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='e68cd467-b4e6-45e0-8e55-984fda402294',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T18:37:39Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0zNzg1MjI3NzQxNzE2OTU2ODQ3PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTM3ODUyMjc3NDE3MTY5NTY4NDc9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09Mzc4NTIyNzc0MTcxNjk1Njg0Nz09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTM3ODUyMjc3NDE3MTY5NTY4NDc9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0zNzg1MjI3NzQxNzE2OTU2ODQ3PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0zNzg1MjI3NzQxNzE2OTU2ODQ3PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJp
nV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJ
Dec  3 18:37:47 compute-0 nova_compute[348325]: wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09Mzc4NTIyNzc0MTcxNjk1Njg0Nz09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTM3ODUyMjc3NDE3MTY5NTY4NDc9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0zNzg1MjI3NzQxNzE2OTU2ODQ3PT0tLQo=',user_id='56338958b09445f5af9aa9e4601a1a8a',uuid=df72d527-943e-4e8c-b62a-63afa5f18261,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "03bf6208-f40b-4534-a297-122588172fa5", "address": "fa:16:3e:41:ba:29", "network": {"id": "85c8d446-ad7f-4d1b-a311-89b0b07e8aad", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.170", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2770200bdb2436c90142fa2e5ddcd47", "mtu": 1442, "physical_network": null, "tunneled": true}}, 
"type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03bf6208-f4", "ovs_interfaceid": "03bf6208-f40b-4534-a297-122588172fa5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  3 18:37:47 compute-0 nova_compute[348325]: 2025-12-03 18:37:47.718 348329 DEBUG nova.network.os_vif_util [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Converting VIF {"id": "03bf6208-f40b-4534-a297-122588172fa5", "address": "fa:16:3e:41:ba:29", "network": {"id": "85c8d446-ad7f-4d1b-a311-89b0b07e8aad", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.170", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2770200bdb2436c90142fa2e5ddcd47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03bf6208-f4", "ovs_interfaceid": "03bf6208-f40b-4534-a297-122588172fa5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 18:37:47 compute-0 nova_compute[348325]: 2025-12-03 18:37:47.720 348329 DEBUG nova.network.os_vif_util [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:41:ba:29,bridge_name='br-int',has_traffic_filtering=True,id=03bf6208-f40b-4534-a297-122588172fa5,network=Network(85c8d446-ad7f-4d1b-a311-89b0b07e8aad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap03bf6208-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 18:37:47 compute-0 nova_compute[348325]: 2025-12-03 18:37:47.721 348329 DEBUG os_vif [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:ba:29,bridge_name='br-int',has_traffic_filtering=True,id=03bf6208-f40b-4534-a297-122588172fa5,network=Network(85c8d446-ad7f-4d1b-a311-89b0b07e8aad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap03bf6208-f4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  3 18:37:47 compute-0 nova_compute[348325]: 2025-12-03 18:37:47.723 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:37:47 compute-0 nova_compute[348325]: 2025-12-03 18:37:47.724 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:37:47 compute-0 nova_compute[348325]: 2025-12-03 18:37:47.725 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 18:37:47 compute-0 nova_compute[348325]: 2025-12-03 18:37:47.736 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:37:47 compute-0 nova_compute[348325]: 2025-12-03 18:37:47.737 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap03bf6208-f4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:37:47 compute-0 nova_compute[348325]: 2025-12-03 18:37:47.738 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap03bf6208-f4, col_values=(('external_ids', {'iface-id': '03bf6208-f40b-4534-a297-122588172fa5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:41:ba:29', 'vm-uuid': 'df72d527-943e-4e8c-b62a-63afa5f18261'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:37:47 compute-0 nova_compute[348325]: 2025-12-03 18:37:47.741 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
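
The AddPortCommand/DbSetCommand pair is ovsdbapp's standard idempotent transaction: attach the tap device to br-int (may_exist=True is why the earlier AddBridgeCommand ended in "Transaction caused no change") and stamp the Interface row's external_ids so ovn-controller can match iface-id against the logical port it should claim. A minimal sketch of the same transaction issued directly with ovsdbapp, assuming the local OVSDB socket path (the ids and MAC are copied from the log):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Socket path is an assumption; deployments may expose a TCP endpoint.
    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # One transaction, two commands, mirroring txn n=1 idx=0 and idx=1 above.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tap03bf6208-f4', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap03bf6208-f4',
            ('external_ids', {
                'iface-id': '03bf6208-f40b-4534-a297-122588172fa5',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:41:ba:29',
                'vm-uuid': 'df72d527-943e-4e8c-b62a-63afa5f18261'})))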
Dec  3 18:37:47 compute-0 NetworkManager[49087]: <info>  [1764787067.7436] manager: (tap03bf6208-f4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Dec  3 18:37:47 compute-0 nova_compute[348325]: 2025-12-03 18:37:47.746 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  3 18:37:47 compute-0 nova_compute[348325]: 2025-12-03 18:37:47.756 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:37:47 compute-0 nova_compute[348325]: 2025-12-03 18:37:47.757 348329 INFO os_vif [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:ba:29,bridge_name='br-int',has_traffic_filtering=True,id=03bf6208-f40b-4534-a297-122588172fa5,network=Network(85c8d446-ad7f-4d1b-a311-89b0b07e8aad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap03bf6208-f4')#033[00m
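
Everything between "Plugging vif" and "Successfully plugged vif" is the os-vif ovs plugin driving the OVSDB transactions shown above; Nova only hands it the converted VIFOpenVSwitch object. A sketch of that call in isolation, with field values taken from the log; the bare-bones object construction is illustrative, since Nova's own conversion in os_vif_util populates more fields:

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # registers the installed plugins, 'ovs' included

    port_profile = vif.VIFPortProfileOpenVSwitch(
        interface_id='03bf6208-f40b-4534-a297-122588172fa5')
    my_vif = vif.VIFOpenVSwitch(
        id='03bf6208-f40b-4534-a297-122588172fa5',
        address='fa:16:3e:41:ba:29',
        bridge_name='br-int',
        vif_name='tap03bf6208-f4',
        port_profile=port_profile,
        network=network.Network(
            id='85c8d446-ad7f-4d1b-a311-89b0b07e8aad', bridge='br-int'))
    inst = instance_info.InstanceInfo(
        uuid='df72d527-943e-4e8c-b62a-63afa5f18261',
        name='instance-00000002')

    os_vif.plug(my_vif, inst)  # idempotent, like the plug() in the log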
Dec  3 18:37:47 compute-0 nova_compute[348325]: 2025-12-03 18:37:47.941 348329 DEBUG nova.virt.libvirt.driver [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  3 18:37:47 compute-0 nova_compute[348325]: 2025-12-03 18:37:47.942 348329 DEBUG nova.virt.libvirt.driver [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  3 18:37:47 compute-0 nova_compute[348325]: 2025-12-03 18:37:47.943 348329 DEBUG nova.virt.libvirt.driver [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  3 18:37:47 compute-0 nova_compute[348325]: 2025-12-03 18:37:47.943 348329 DEBUG nova.virt.libvirt.driver [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] No VIF found with MAC fa:16:3e:41:ba:29, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  3 18:37:47 compute-0 rsyslogd[188590]: message too long (8192) with configured size 8096, begin of message is: 2025-12-03 18:37:47.683 348329 DEBUG nova.virt.libvirt.vif [None req-9df6a4ec-52 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec  3 18:37:47 compute-0 nova_compute[348325]: 2025-12-03 18:37:47.944 348329 INFO nova.virt.libvirt.driver [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Using config drive#033[00m
Dec  3 18:37:47 compute-0 rsyslogd[188590]: message too long (8192) with configured size 8096, begin of message is: 2025-12-03 18:37:47.716 348329 DEBUG nova.virt.libvirt.vif [None req-9df6a4ec-52 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
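
Both rsyslogd complaints are the same failure mode: the vif-plug DEBUG lines (with the inline user_data above) exceed rsyslog's configured 8096-byte message ceiling, so their tails are cut, which is why the base64 blob in this capture does not decode cleanly. Error 2445 points at maxMessageSize; a plausible remedy, assuming rsyslog 8.x RainerScript configuration, is raising the ceiling before any input is defined:

    # /etc/rsyslog.conf -- must precede module()/input() statements.
    # Legacy equivalent: $MaxMessageSize 64k
    global(maxMessageSize="64k")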
Dec  3 18:37:47 compute-0 nova_compute[348325]: 2025-12-03 18:37:47.998 348329 DEBUG nova.storage.rbd_utils [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] rbd image df72d527-943e-4e8c-b62a-63afa5f18261_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 18:37:48 compute-0 nova_compute[348325]: 2025-12-03 18:37:48.449 348329 INFO nova.virt.libvirt.driver [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Creating config drive at /var/lib/nova/instances/df72d527-943e-4e8c-b62a-63afa5f18261/disk.config#033[00m
Dec  3 18:37:48 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:37:48 compute-0 nova_compute[348325]: 2025-12-03 18:37:48.462 348329 DEBUG oslo_concurrency.processutils [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/df72d527-943e-4e8c-b62a-63afa5f18261/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz_njrm3c execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:37:48 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Dec  3 18:37:48 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:37:48.463550) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 18:37:48 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Dec  3 18:37:48 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764787068463650, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 1281, "num_deletes": 508, "total_data_size": 1428637, "memory_usage": 1454048, "flush_reason": "Manual Compaction"}
Dec  3 18:37:48 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Dec  3 18:37:48 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764787068478150, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 1111672, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22981, "largest_seqno": 24261, "table_properties": {"data_size": 1106561, "index_size": 2059, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 15032, "raw_average_key_size": 19, "raw_value_size": 1093812, "raw_average_value_size": 1382, "num_data_blocks": 93, "num_entries": 791, "num_filter_entries": 791, "num_deletions": 508, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764786977, "oldest_key_time": 1764786977, "file_creation_time": 1764787068, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a1ac3b74-8599-4a51-8b4c-6fd35a134427", "db_session_id": "TYOLZSJOOVNJYKF8Y1CE", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Dec  3 18:37:48 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 14709 microseconds, and 8069 cpu microseconds.
Dec  3 18:37:48 compute-0 ceph-mon[192802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 18:37:48 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:37:48.478257) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 1111672 bytes OK
Dec  3 18:37:48 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:37:48.478285) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Dec  3 18:37:48 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:37:48.481746) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Dec  3 18:37:48 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:37:48.481771) EVENT_LOG_v1 {"time_micros": 1764787068481764, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 18:37:48 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:37:48.481797) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 18:37:48 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1421742, prev total WAL file size 1421742, number of live WAL files 2.
Dec  3 18:37:48 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 18:37:48 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:37:48.483161) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353032' seq:72057594037927935, type:22 .. '6C6F676D00373535' seq:0, type:0; will stop at (end)
Dec  3 18:37:48 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 18:37:48 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(1085KB)], [53(8974KB)]
Dec  3 18:37:48 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764787068483344, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 10301315, "oldest_snapshot_seqno": -1}
Dec  3 18:37:48 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 4496 keys, 7207685 bytes, temperature: kUnknown
Dec  3 18:37:48 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764787068567691, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 7207685, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7177682, "index_size": 17669, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11269, "raw_key_size": 112591, "raw_average_key_size": 25, "raw_value_size": 7096253, "raw_average_value_size": 1578, "num_data_blocks": 737, "num_entries": 4496, "num_filter_entries": 4496, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764784942, "oldest_key_time": 0, "file_creation_time": 1764787068, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a1ac3b74-8599-4a51-8b4c-6fd35a134427", "db_session_id": "TYOLZSJOOVNJYKF8Y1CE", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Dec  3 18:37:48 compute-0 ceph-mon[192802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 18:37:48 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:37:48.568054) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 7207685 bytes
Dec  3 18:37:48 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:37:48.570791) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 122.0 rd, 85.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 8.8 +0.0 blob) out(6.9 +0.0 blob), read-write-amplify(15.8) write-amplify(6.5) OK, records in: 5506, records dropped: 1010 output_compression: NoCompression
Dec  3 18:37:48 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:37:48.570877) EVENT_LOG_v1 {"time_micros": 1764787068570808, "job": 28, "event": "compaction_finished", "compaction_time_micros": 84455, "compaction_time_cpu_micros": 51491, "output_level": 6, "num_output_files": 1, "total_output_size": 7207685, "num_input_records": 5506, "num_output_records": 4496, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 18:37:48 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 18:37:48 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764787068571579, "job": 28, "event": "table_file_deletion", "file_number": 55}
Dec  3 18:37:48 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 18:37:48 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764787068575238, "job": 28, "event": "table_file_deletion", "file_number": 53}
Dec  3 18:37:48 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:37:48.482811) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:37:48 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:37:48.575535) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:37:48 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:37:48.575543) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:37:48 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:37:48.575546) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:37:48 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:37:48.575549) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:37:48 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:37:48.575552) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:37:48 compute-0 nova_compute[348325]: 2025-12-03 18:37:48.619 348329 DEBUG oslo_concurrency.processutils [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/df72d527-943e-4e8c-b62a-63afa5f18261/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz_njrm3c" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:37:48 compute-0 nova_compute[348325]: 2025-12-03 18:37:48.683 348329 DEBUG nova.storage.rbd_utils [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] rbd image df72d527-943e-4e8c-b62a-63afa5f18261_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 18:37:48 compute-0 nova_compute[348325]: 2025-12-03 18:37:48.694 348329 DEBUG oslo_concurrency.processutils [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/df72d527-943e-4e8c-b62a-63afa5f18261/disk.config df72d527-943e-4e8c-b62a-63afa5f18261_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:37:49 compute-0 nova_compute[348325]: 2025-12-03 18:37:49.004 348329 DEBUG oslo_concurrency.processutils [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/df72d527-943e-4e8c-b62a-63afa5f18261/disk.config df72d527-943e-4e8c-b62a-63afa5f18261_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.311s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:37:49 compute-0 nova_compute[348325]: 2025-12-03 18:37:49.006 348329 INFO nova.virt.libvirt.driver [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Deleting local config drive /var/lib/nova/instances/df72d527-943e-4e8c-b62a-63afa5f18261/disk.config because it was imported into RBD.#033[00m
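
Lines 18:37:48.462 through 18:37:49.006 are Nova's config-drive-on-RBD sequence: build the ISO9660 config drive locally with mkisofs (volume label config-2), rbd import it into the vms pool, then drop the local copy. Both external commands run through oslo.concurrency's processutils, which is what produces the paired "Running cmd"/"CMD ... returned: 0" lines. A compressed sketch of the same three steps, with paths and pool names copied from the log and error handling elided:

    import os

    from oslo_concurrency import processutils

    inst = 'df72d527-943e-4e8c-b62a-63afa5f18261'
    iso = f'/var/lib/nova/instances/{inst}/disk.config'

    # 1. Build the config drive (Joliet + Rock Ridge, label config-2).
    #    /tmp/tmpz_njrm3c is the staged metadata tree from the log.
    processutils.execute(
        '/usr/bin/mkisofs', '-o', iso, '-ldots', '-allow-lowercase',
        '-allow-multidot', '-l', '-quiet', '-J', '-r', '-V', 'config-2',
        '/tmp/tmpz_njrm3c')

    # 2. Import into Ceph so the config drive lives beside the other disks.
    processutils.execute(
        'rbd', 'import', '--pool', 'vms', iso, f'{inst}_disk.config',
        '--image-format=2', '--id', 'openstack',
        '--conf', '/etc/ceph/ceph.conf')

    # 3. The local file is now redundant, exactly as the INFO line says.
    os.unlink(iso)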
Dec  3 18:37:49 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1156: 321 pgs: 321 active+clean; 110 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.4 MiB/s wr, 38 op/s
Dec  3 18:37:49 compute-0 kernel: tap03bf6208-f4: entered promiscuous mode
Dec  3 18:37:49 compute-0 NetworkManager[49087]: <info>  [1764787069.1000] manager: (tap03bf6208-f4): new Tun device (/org/freedesktop/NetworkManager/Devices/30)
Dec  3 18:37:49 compute-0 ovn_controller[89305]: 2025-12-03T18:37:49Z|00035|binding|INFO|Claiming lport 03bf6208-f40b-4534-a297-122588172fa5 for this chassis.
Dec  3 18:37:49 compute-0 ovn_controller[89305]: 2025-12-03T18:37:49Z|00036|binding|INFO|03bf6208-f40b-4534-a297-122588172fa5: Claiming fa:16:3e:41:ba:29 192.168.0.170
Dec  3 18:37:49 compute-0 nova_compute[348325]: 2025-12-03 18:37:49.102 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:37:49 compute-0 ovn_controller[89305]: 2025-12-03T18:37:49Z|00037|binding|INFO|Setting lport 03bf6208-f40b-4534-a297-122588172fa5 ovn-installed in OVS
Dec  3 18:37:49 compute-0 nova_compute[348325]: 2025-12-03 18:37:49.132 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:37:49 compute-0 systemd-udevd[413795]: Network interface NamePolicy= disabled on kernel command line.
Dec  3 18:37:49 compute-0 systemd-machined[138702]: New machine qemu-2-instance-00000002.
Dec  3 18:37:49 compute-0 ovn_controller[89305]: 2025-12-03T18:37:49Z|00038|binding|INFO|Setting lport 03bf6208-f40b-4534-a297-122588172fa5 up in Southbound
Dec  3 18:37:49 compute-0 NetworkManager[49087]: <info>  [1764787069.1650] device (tap03bf6208-f4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  3 18:37:49 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:37:49.163 286999 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:41:ba:29 192.168.0.170'], port_security=['fa:16:3e:41:ba:29 192.168.0.170'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-4jfpq66btob3-hjy2dfx75wfw-5fmurbrh4hte-port-kiigdzr3s4cr', 'neutron:cidrs': '192.168.0.170/24', 'neutron:device_id': 'df72d527-943e-4e8c-b62a-63afa5f18261', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-85c8d446-ad7f-4d1b-a311-89b0b07e8aad', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-4jfpq66btob3-hjy2dfx75wfw-5fmurbrh4hte-port-kiigdzr3s4cr', 'neutron:project_id': 'd2770200bdb2436c90142fa2e5ddcd47', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8e48052e-a2fd-4fc1-8ebd-22e3b6e0bd66', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.213'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=12999ead-9a54-49b3-a532-a5f8bdddaf16, chassis=[<ovs.db.idl.Row object at 0x7f81e3e96760>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f81e3e96760>], logical_port=03bf6208-f40b-4534-a297-122588172fa5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 18:37:49 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:37:49.165 286999 INFO neutron.agent.ovn.metadata.agent [-] Port 03bf6208-f40b-4534-a297-122588172fa5 in datapath 85c8d446-ad7f-4d1b-a311-89b0b07e8aad bound to our chassis#033[00m
Dec  3 18:37:49 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:37:49.166 286999 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 85c8d446-ad7f-4d1b-a311-89b0b07e8aad#033[00m
Dec  3 18:37:49 compute-0 NetworkManager[49087]: <info>  [1764787069.1697] device (tap03bf6208-f4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  3 18:37:49 compute-0 systemd[1]: Started Virtual Machine qemu-2-instance-00000002.
Dec  3 18:37:49 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:37:49.196 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[337e0409-da01-4ba1-8783-220cc881ad6a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:37:49 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:37:49.243 411797 DEBUG oslo.privsep.daemon [-] privsep: reply[68370686-f3bd-43e0-a994-faf537ba4feb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:37:49 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:37:49.249 411797 DEBUG oslo.privsep.daemon [-] privsep: reply[390f616d-84cf-4187-a493-8fdce043a04f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:37:49 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:37:49.292 411797 DEBUG oslo.privsep.daemon [-] privsep: reply[a9a68cb8-9983-4c9d-8c3f-1f62564585d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:37:49 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:37:49.324 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[dca984ce-eb19-4fa8-af4f-eb95e757a116]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap85c8d446-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:c1:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 7, 'tx_packets': 5, 'rx_bytes': 574, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 7, 'tx_packets': 5, 'rx_bytes': 574, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 527503, 'reachable_time': 33358, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 413810, 'error': None, 'target': 'ovnmeta-85c8d446-ad7f-4d1b-a311-89b0b07e8aad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:37:49 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:37:49.354 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[e02e495e-9518-4048-904c-384b127ad9c3]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap85c8d446-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 527519, 'tstamp': 527519}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 413811, 'error': None, 'target': 'ovnmeta-85c8d446-ad7f-4d1b-a311-89b0b07e8aad', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap85c8d446-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 527523, 'tstamp': 527523}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 413811, 'error': None, 'target': 'ovnmeta-85c8d446-ad7f-4d1b-a311-89b0b07e8aad', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
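
The two privsep replies above are netlink dumps (RTM_NEWLINK, then RTM_NEWADDR) taken inside the metadata namespace ovnmeta-85c8d446-...: the agent is confirming that tap85c8d446-a1 carries the subnet address 192.168.0.2/24 plus the link-local metadata address 169.254.169.254/32. The same check can be reproduced by hand with pyroute2 (the library behind these dumps); a sketch assuming the namespace still exists and the caller has root:

    from pyroute2 import NetNS

    ns_name = 'ovnmeta-85c8d446-ad7f-4d1b-a311-89b0b07e8aad'
    with NetNS(ns_name) as ns:
        # Dump every address, mirroring the RTM_NEWADDR reply above.
        for msg in ns.get_addr():
            label = msg.get_attr('IFA_LABEL')
            addr = msg.get_attr('IFA_ADDRESS')
            print(f"{label}: {addr}/{msg['prefixlen']}")
    # Expected, per the log: tap85c8d446-a1: 192.168.0.2/24
    #                        tap85c8d446-a1: 169.254.169.254/32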
Dec  3 18:37:49 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:37:49.358 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85c8d446-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:37:49 compute-0 nova_compute[348325]: 2025-12-03 18:37:49.362 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:37:49 compute-0 nova_compute[348325]: 2025-12-03 18:37:49.363 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:37:49 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:37:49.365 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap85c8d446-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:37:49 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:37:49.365 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 18:37:49 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:37:49.367 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap85c8d446-a0, col_values=(('external_ids', {'iface-id': '4db8340d-afa3-4a82-bd51-bca0a752f53f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:37:49 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:37:49.368 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 18:37:49 compute-0 nova_compute[348325]: 2025-12-03 18:37:49.626 348329 DEBUG nova.compute.manager [req-89cdb63b-bfc3-4ba7-8853-0c098ae17fe1 req-15deb8d4-02b3-4860-96f0-c8602ada6955 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Received event network-vif-plugged-03bf6208-f40b-4534-a297-122588172fa5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 18:37:49 compute-0 nova_compute[348325]: 2025-12-03 18:37:49.627 348329 DEBUG oslo_concurrency.lockutils [req-89cdb63b-bfc3-4ba7-8853-0c098ae17fe1 req-15deb8d4-02b3-4860-96f0-c8602ada6955 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "df72d527-943e-4e8c-b62a-63afa5f18261-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:37:49 compute-0 nova_compute[348325]: 2025-12-03 18:37:49.628 348329 DEBUG oslo_concurrency.lockutils [req-89cdb63b-bfc3-4ba7-8853-0c098ae17fe1 req-15deb8d4-02b3-4860-96f0-c8602ada6955 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "df72d527-943e-4e8c-b62a-63afa5f18261-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:37:49 compute-0 nova_compute[348325]: 2025-12-03 18:37:49.629 348329 DEBUG oslo_concurrency.lockutils [req-89cdb63b-bfc3-4ba7-8853-0c098ae17fe1 req-15deb8d4-02b3-4860-96f0-c8602ada6955 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "df72d527-943e-4e8c-b62a-63afa5f18261-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:37:49 compute-0 nova_compute[348325]: 2025-12-03 18:37:49.629 348329 DEBUG nova.compute.manager [req-89cdb63b-bfc3-4ba7-8853-0c098ae17fe1 req-15deb8d4-02b3-4860-96f0-c8602ada6955 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Processing event network-vif-plugged-03bf6208-f40b-4534-a297-122588172fa5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  3 18:37:50 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:37:50.056 286999 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5a:63:53', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '8e:79:bd:f4:48:1d'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 18:37:50 compute-0 nova_compute[348325]: 2025-12-03 18:37:50.057 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:37:50 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:37:50.059 286999 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  3 18:37:50 compute-0 nova_compute[348325]: 2025-12-03 18:37:50.474 348329 DEBUG nova.virt.driver [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Emitting event <LifecycleEvent: 1764787070.473195, df72d527-943e-4e8c-b62a-63afa5f18261 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 18:37:50 compute-0 nova_compute[348325]: 2025-12-03 18:37:50.474 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] VM Started (Lifecycle Event)#033[00m
Dec  3 18:37:50 compute-0 nova_compute[348325]: 2025-12-03 18:37:50.479 348329 DEBUG nova.compute.manager [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
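
Lines 18:37:49.626 through 18:37:50.479 are the two halves of Nova's vif-plug handshake: Neutron posts network-vif-plugged-03bf6208... as an external event, and the spawning thread, which registered interest before plugging, pops it, hence "Instance event wait completed in 0 seconds". A toy, self-contained model of that register-first/deliver-later pattern (the real one is nova.compute.manager.InstanceEvents plus the virtapi wait context manager):

    import threading

    class InstanceEvents:
        """Toy stand-in for nova's InstanceEvents registry."""

        def __init__(self):
            self._events = {}
            self._lock = threading.Lock()

        def prepare(self, name):
            # The spawning thread registers interest *before* plugging the
            # VIF, so a fast notification cannot be lost.
            with self._lock:
                return self._events.setdefault(name, threading.Event())

        def pop(self, name):
            # Called by the external_instance_event handler on delivery.
            with self._lock:
                ev = self._events.pop(name, None)
            if ev:
                ev.set()

    events = InstanceEvents()
    waiter = events.prepare('network-vif-plugged-03bf6208')

    # ... VIF gets plugged, OVN sets the port up, Neutron notifies Nova ...
    events.pop('network-vif-plugged-03bf6208')

    waiter.wait(timeout=300)  # returns at once: "completed in 0 seconds"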
Dec  3 18:37:50 compute-0 nova_compute[348325]: 2025-12-03 18:37:50.485 348329 DEBUG nova.virt.libvirt.driver [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  3 18:37:50 compute-0 nova_compute[348325]: 2025-12-03 18:37:50.494 348329 INFO nova.virt.libvirt.driver [-] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Instance spawned successfully.#033[00m
Dec  3 18:37:50 compute-0 nova_compute[348325]: 2025-12-03 18:37:50.494 348329 DEBUG nova.virt.libvirt.driver [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  3 18:37:50 compute-0 nova_compute[348325]: 2025-12-03 18:37:50.582 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 18:37:50 compute-0 nova_compute[348325]: 2025-12-03 18:37:50.592 348329 DEBUG nova.virt.libvirt.driver [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 18:37:50 compute-0 nova_compute[348325]: 2025-12-03 18:37:50.593 348329 DEBUG nova.virt.libvirt.driver [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 18:37:50 compute-0 nova_compute[348325]: 2025-12-03 18:37:50.594 348329 DEBUG nova.virt.libvirt.driver [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 18:37:50 compute-0 nova_compute[348325]: 2025-12-03 18:37:50.595 348329 DEBUG nova.virt.libvirt.driver [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  3 18:37:50 compute-0 nova_compute[348325]: 2025-12-03 18:37:50.596 348329 DEBUG nova.virt.libvirt.driver [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  3 18:37:50 compute-0 nova_compute[348325]: 2025-12-03 18:37:50.598 348329 DEBUG nova.virt.libvirt.driver [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  3 18:37:50 compute-0 nova_compute[348325]: 2025-12-03 18:37:50.611 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  3 18:37:50 compute-0 nova_compute[348325]: 2025-12-03 18:37:50.832 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] During sync_power_state the instance has a pending task (spawning). Skip.
Dec  3 18:37:50 compute-0 nova_compute[348325]: 2025-12-03 18:37:50.832 348329 DEBUG nova.virt.driver [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Emitting event <LifecycleEvent: 1764787070.4734144, df72d527-943e-4e8c-b62a-63afa5f18261 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  3 18:37:50 compute-0 nova_compute[348325]: 2025-12-03 18:37:50.833 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] VM Paused (Lifecycle Event)
Dec  3 18:37:50 compute-0 nova_compute[348325]: 2025-12-03 18:37:50.857 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  3 18:37:50 compute-0 nova_compute[348325]: 2025-12-03 18:37:50.866 348329 DEBUG nova.virt.driver [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Emitting event <LifecycleEvent: 1764787070.4844432, df72d527-943e-4e8c-b62a-63afa5f18261 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  3 18:37:50 compute-0 nova_compute[348325]: 2025-12-03 18:37:50.866 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] VM Resumed (Lifecycle Event)
Dec  3 18:37:50 compute-0 nova_compute[348325]: 2025-12-03 18:37:50.873 348329 INFO nova.compute.manager [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Took 10.92 seconds to spawn the instance on the hypervisor.
Dec  3 18:37:50 compute-0 nova_compute[348325]: 2025-12-03 18:37:50.874 348329 DEBUG nova.compute.manager [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  3 18:37:50 compute-0 nova_compute[348325]: 2025-12-03 18:37:50.885 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  3 18:37:50 compute-0 nova_compute[348325]: 2025-12-03 18:37:50.893 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  3 18:37:50 compute-0 nova_compute[348325]: 2025-12-03 18:37:50.929 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] During sync_power_state the instance has a pending task (spawning). Skip.
Dec  3 18:37:51 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1157: 321 pgs: 321 active+clean; 110 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.4 MiB/s wr, 40 op/s
Dec  3 18:37:51 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:37:51.062 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1ac9fd0d-196b-4ea8-9a9a-8aa831092805, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  3 18:37:51 compute-0 nova_compute[348325]: 2025-12-03 18:37:51.154 348329 INFO nova.compute.manager [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Took 12.65 seconds to build instance.
Dec  3 18:37:51 compute-0 nova_compute[348325]: 2025-12-03 18:37:51.215 348329 DEBUG oslo_concurrency.lockutils [None req-9df6a4ec-529b-4102-83b5-f396186befd2 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "df72d527-943e-4e8c-b62a-63afa5f18261" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.792s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:37:51 compute-0 nova_compute[348325]: 2025-12-03 18:37:51.886 348329 DEBUG nova.compute.manager [req-d68fa329-68fc-4cc1-aec1-7624ebda82e2 req-59b5f215-7f44-4f55-8289-3d9e42c20545 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Received event network-vif-plugged-03bf6208-f40b-4534-a297-122588172fa5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  3 18:37:51 compute-0 nova_compute[348325]: 2025-12-03 18:37:51.886 348329 DEBUG oslo_concurrency.lockutils [req-d68fa329-68fc-4cc1-aec1-7624ebda82e2 req-59b5f215-7f44-4f55-8289-3d9e42c20545 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "df72d527-943e-4e8c-b62a-63afa5f18261-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:37:51 compute-0 nova_compute[348325]: 2025-12-03 18:37:51.887 348329 DEBUG oslo_concurrency.lockutils [req-d68fa329-68fc-4cc1-aec1-7624ebda82e2 req-59b5f215-7f44-4f55-8289-3d9e42c20545 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "df72d527-943e-4e8c-b62a-63afa5f18261-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:37:51 compute-0 nova_compute[348325]: 2025-12-03 18:37:51.887 348329 DEBUG oslo_concurrency.lockutils [req-d68fa329-68fc-4cc1-aec1-7624ebda82e2 req-59b5f215-7f44-4f55-8289-3d9e42c20545 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "df72d527-943e-4e8c-b62a-63afa5f18261-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:37:51 compute-0 nova_compute[348325]: 2025-12-03 18:37:51.887 348329 DEBUG nova.compute.manager [req-d68fa329-68fc-4cc1-aec1-7624ebda82e2 req-59b5f215-7f44-4f55-8289-3d9e42c20545 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] No waiting events found dispatching network-vif-plugged-03bf6208-f40b-4534-a297-122588172fa5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  3 18:37:51 compute-0 nova_compute[348325]: 2025-12-03 18:37:51.887 348329 WARNING nova.compute.manager [req-d68fa329-68fc-4cc1-aec1-7624ebda82e2 req-59b5f215-7f44-4f55-8289-3d9e42c20545 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Received unexpected event network-vif-plugged-03bf6208-f40b-4534-a297-122588172fa5 for instance with vm_state active and task_state None.
Dec  3 18:37:52 compute-0 nova_compute[348325]: 2025-12-03 18:37:52.394 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:37:52 compute-0 nova_compute[348325]: 2025-12-03 18:37:52.742 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:37:53 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1158: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 211 KiB/s rd, 1.4 MiB/s wr, 55 op/s
Dec  3 18:37:53 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:37:54 compute-0 podman[413873]: 2025-12-03 18:37:54.989170648 +0000 UTC m=+0.135319493 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 18:37:55 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1159: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 901 KiB/s rd, 617 KiB/s wr, 64 op/s
Dec  3 18:37:57 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1160: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 22 KiB/s wr, 58 op/s
Dec  3 18:37:57 compute-0 nova_compute[348325]: 2025-12-03 18:37:57.398 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:37:57 compute-0 nova_compute[348325]: 2025-12-03 18:37:57.747 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:37:58 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:37:58 compute-0 podman[413897]: 2025-12-03 18:37:58.92394647 +0000 UTC m=+0.086758679 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Dec  3 18:37:59 compute-0 podman[413896]: 2025-12-03 18:37:59.015803211 +0000 UTC m=+0.176958870 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 18:37:59 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1161: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 20 KiB/s wr, 60 op/s
Dec  3 18:37:59 compute-0 podman[158200]: time="2025-12-03T18:37:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:37:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:37:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 18:37:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:37:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8629 "" "Go-http-client/1.1"
Dec  3 18:38:01 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1162: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 20 KiB/s wr, 60 op/s
Dec  3 18:38:01 compute-0 openstack_network_exporter[365222]: ERROR   18:38:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:38:01 compute-0 openstack_network_exporter[365222]: ERROR   18:38:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:38:01 compute-0 openstack_network_exporter[365222]: ERROR   18:38:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:38:01 compute-0 openstack_network_exporter[365222]: ERROR   18:38:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:38:01 compute-0 openstack_network_exporter[365222]: ERROR   18:38:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:38:02 compute-0 nova_compute[348325]: 2025-12-03 18:38:02.400 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:38:02 compute-0 nova_compute[348325]: 2025-12-03 18:38:02.752 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:38:03 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1163: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 20 KiB/s wr, 58 op/s
Dec  3 18:38:03 compute-0 nova_compute[348325]: 2025-12-03 18:38:03.193 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:38:03 compute-0 nova_compute[348325]: 2025-12-03 18:38:03.194 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  3 18:38:03 compute-0 nova_compute[348325]: 2025-12-03 18:38:03.194 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  3 18:38:03 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:38:03 compute-0 nova_compute[348325]: 2025-12-03 18:38:03.976 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "refresh_cache-1ca1fbdb-089c-4544-821e-0542089b8424" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  3 18:38:03 compute-0 nova_compute[348325]: 2025-12-03 18:38:03.977 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquired lock "refresh_cache-1ca1fbdb-089c-4544-821e-0542089b8424" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  3 18:38:03 compute-0 nova_compute[348325]: 2025-12-03 18:38:03.977 348329 DEBUG nova.network.neutron [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec  3 18:38:03 compute-0 nova_compute[348325]: 2025-12-03 18:38:03.978 348329 DEBUG nova.objects.instance [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lazy-loading 'info_cache' on Instance uuid 1ca1fbdb-089c-4544-821e-0542089b8424 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  3 18:38:05 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1164: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 43 op/s
Dec  3 18:38:05 compute-0 nova_compute[348325]: 2025-12-03 18:38:05.258 348329 DEBUG nova.network.neutron [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Updating instance_info_cache with network_info: [{"id": "3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b", "address": "fa:16:3e:ea:1b:25", "network": {"id": "85c8d446-ad7f-4d1b-a311-89b0b07e8aad", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.128", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2770200bdb2436c90142fa2e5ddcd47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d8505a1-5c", "ovs_interfaceid": "3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  3 18:38:05 compute-0 nova_compute[348325]: 2025-12-03 18:38:05.276 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Releasing lock "refresh_cache-1ca1fbdb-089c-4544-821e-0542089b8424" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  3 18:38:05 compute-0 nova_compute[348325]: 2025-12-03 18:38:05.276 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec  3 18:38:05 compute-0 nova_compute[348325]: 2025-12-03 18:38:05.277 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:38:05 compute-0 nova_compute[348325]: 2025-12-03 18:38:05.277 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:38:05 compute-0 nova_compute[348325]: 2025-12-03 18:38:05.278 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:38:05 compute-0 nova_compute[348325]: 2025-12-03 18:38:05.278 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:38:05 compute-0 nova_compute[348325]: 2025-12-03 18:38:05.279 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:38:05 compute-0 nova_compute[348325]: 2025-12-03 18:38:05.279 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:38:05 compute-0 nova_compute[348325]: 2025-12-03 18:38:05.280 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  3 18:38:05 compute-0 nova_compute[348325]: 2025-12-03 18:38:05.564 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:38:06 compute-0 nova_compute[348325]: 2025-12-03 18:38:06.477 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:38:07 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1165: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 633 KiB/s rd, 20 op/s
Dec  3 18:38:07 compute-0 nova_compute[348325]: 2025-12-03 18:38:07.406 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:38:07 compute-0 nova_compute[348325]: 2025-12-03 18:38:07.755 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:38:08 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:38:09 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1166: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 337 KiB/s rd, 10 op/s
Dec  3 18:38:09 compute-0 nova_compute[348325]: 2025-12-03 18:38:09.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:38:09 compute-0 nova_compute[348325]: 2025-12-03 18:38:09.544 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:38:09 compute-0 nova_compute[348325]: 2025-12-03 18:38:09.545 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:38:09 compute-0 nova_compute[348325]: 2025-12-03 18:38:09.546 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:38:09 compute-0 nova_compute[348325]: 2025-12-03 18:38:09.546 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  3 18:38:09 compute-0 nova_compute[348325]: 2025-12-03 18:38:09.546 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 18:38:10 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:38:10 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/218003812' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:38:10 compute-0 nova_compute[348325]: 2025-12-03 18:38:10.115 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 18:38:10 compute-0 nova_compute[348325]: 2025-12-03 18:38:10.246 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:38:10 compute-0 nova_compute[348325]: 2025-12-03 18:38:10.247 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:38:10 compute-0 nova_compute[348325]: 2025-12-03 18:38:10.248 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:38:10 compute-0 nova_compute[348325]: 2025-12-03 18:38:10.257 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:38:10 compute-0 nova_compute[348325]: 2025-12-03 18:38:10.258 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:38:10 compute-0 nova_compute[348325]: 2025-12-03 18:38:10.259 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:38:10 compute-0 nova_compute[348325]: 2025-12-03 18:38:10.732 348329 WARNING nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  3 18:38:10 compute-0 nova_compute[348325]: 2025-12-03 18:38:10.734 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3882MB free_disk=59.93907165527344GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  3 18:38:10 compute-0 nova_compute[348325]: 2025-12-03 18:38:10.734 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:38:10 compute-0 nova_compute[348325]: 2025-12-03 18:38:10.735 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:38:10 compute-0 nova_compute[348325]: 2025-12-03 18:38:10.810 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance 1ca1fbdb-089c-4544-821e-0542089b8424 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  3 18:38:10 compute-0 nova_compute[348325]: 2025-12-03 18:38:10.810 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance df72d527-943e-4e8c-b62a-63afa5f18261 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  3 18:38:10 compute-0 nova_compute[348325]: 2025-12-03 18:38:10.811 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  3 18:38:10 compute-0 nova_compute[348325]: 2025-12-03 18:38:10.811 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  3 18:38:10 compute-0 nova_compute[348325]: 2025-12-03 18:38:10.870 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 18:38:11 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1167: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:38:11 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:38:11 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3080882187' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:38:11 compute-0 nova_compute[348325]: 2025-12-03 18:38:11.349 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 18:38:11 compute-0 nova_compute[348325]: 2025-12-03 18:38:11.359 348329 DEBUG nova.compute.provider_tree [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  3 18:38:11 compute-0 nova_compute[348325]: 2025-12-03 18:38:11.391 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  3 18:38:11 compute-0 nova_compute[348325]: 2025-12-03 18:38:11.446 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  3 18:38:11 compute-0 nova_compute[348325]: 2025-12-03 18:38:11.447 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.712s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:38:12 compute-0 nova_compute[348325]: 2025-12-03 18:38:12.413 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:38:12 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:38:12 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:38:12 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 18:38:12 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:38:12 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 18:38:12 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:38:12 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 2bec3a80-7a5d-4140-ac4d-40f341b785a0 does not exist
Dec  3 18:38:12 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 6d070206-f96b-4177-9074-209fc5ee68cc does not exist
Dec  3 18:38:12 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 3bcb1eef-748c-479c-9a98-4ae8ff8b8ee7 does not exist
Dec  3 18:38:12 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 18:38:12 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 18:38:12 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 18:38:12 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:38:12 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:38:12 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:38:12 compute-0 nova_compute[348325]: 2025-12-03 18:38:12.758 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:38:12 compute-0 podman[414141]: 2025-12-03 18:38:12.844371773 +0000 UTC m=+0.094081569 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 18:38:12 compute-0 podman[414142]: 2025-12-03 18:38:12.868170104 +0000 UTC m=+0.107108776 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, release=1755695350, io.openshift.tags=minimal rhel9, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.openshift.expose-services=)
Dec  3 18:38:12 compute-0 podman[414140]: 2025-12-03 18:38:12.868300927 +0000 UTC m=+0.124202843 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  3 18:38:13 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1168: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:38:13 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:38:13 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:38:13 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:38:13 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:38:13 compute-0 podman[414314]: 2025-12-03 18:38:13.480404027 +0000 UTC m=+0.060586520 container create c6e7a1e31e525bf1d0388d5b69a679151821ad6b37bcc4eaf2a527bc0313a5f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_ganguly, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  3 18:38:13 compute-0 systemd[1]: Started libpod-conmon-c6e7a1e31e525bf1d0388d5b69a679151821ad6b37bcc4eaf2a527bc0313a5f5.scope.
Dec  3 18:38:13 compute-0 podman[414314]: 2025-12-03 18:38:13.457530849 +0000 UTC m=+0.037713362 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:38:13 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:38:13 compute-0 podman[414314]: 2025-12-03 18:38:13.597521546 +0000 UTC m=+0.177704059 container init c6e7a1e31e525bf1d0388d5b69a679151821ad6b37bcc4eaf2a527bc0313a5f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_ganguly, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec  3 18:38:13 compute-0 podman[414314]: 2025-12-03 18:38:13.609570509 +0000 UTC m=+0.189753002 container start c6e7a1e31e525bf1d0388d5b69a679151821ad6b37bcc4eaf2a527bc0313a5f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  3 18:38:13 compute-0 stupefied_ganguly[414330]: 167 167
Dec  3 18:38:13 compute-0 systemd[1]: libpod-c6e7a1e31e525bf1d0388d5b69a679151821ad6b37bcc4eaf2a527bc0313a5f5.scope: Deactivated successfully.
Dec  3 18:38:13 compute-0 podman[414314]: 2025-12-03 18:38:13.619990864 +0000 UTC m=+0.200173357 container attach c6e7a1e31e525bf1d0388d5b69a679151821ad6b37bcc4eaf2a527bc0313a5f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:38:13 compute-0 podman[414314]: 2025-12-03 18:38:13.621165912 +0000 UTC m=+0.201348395 container died c6e7a1e31e525bf1d0388d5b69a679151821ad6b37bcc4eaf2a527bc0313a5f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_ganguly, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:38:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-82d215bf470422c66c625110e782b3cdf77d021316db1aa591ea331fc5fcfee3-merged.mount: Deactivated successfully.
Dec  3 18:38:13 compute-0 podman[414314]: 2025-12-03 18:38:13.679425965 +0000 UTC m=+0.259608458 container remove c6e7a1e31e525bf1d0388d5b69a679151821ad6b37bcc4eaf2a527bc0313a5f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  3 18:38:13 compute-0 systemd[1]: libpod-conmon-c6e7a1e31e525bf1d0388d5b69a679151821ad6b37bcc4eaf2a527bc0313a5f5.scope: Deactivated successfully.
Dec  3 18:38:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_18:38:13
Dec  3 18:38:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 18:38:13 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 18:38:13 compute-0 ceph-mgr[193091]: [balancer INFO root] pools ['default.rgw.log', 'backups', '.mgr', 'volumes', 'vms', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control', 'images', 'cephfs.cephfs.meta', '.rgw.root']
Dec  3 18:38:13 compute-0 ceph-mgr[193091]: [balancer INFO root] prepared 0/10 changes
Dec  3 18:38:13 compute-0 podman[414353]: 2025-12-03 18:38:13.927779296 +0000 UTC m=+0.074746475 container create eb90743932efedae23efa4eab55033beb54d5f9c362d5cf8321e5a7a2cd90e5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_banzai, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:38:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:38:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:38:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:38:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:38:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:38:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:38:13 compute-0 podman[414353]: 2025-12-03 18:38:13.901925045 +0000 UTC m=+0.048892244 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:38:14 compute-0 systemd[1]: Started libpod-conmon-eb90743932efedae23efa4eab55033beb54d5f9c362d5cf8321e5a7a2cd90e5b.scope.
Dec  3 18:38:14 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:38:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22389f913aa2cfa11853354676df117d3c41d142e643fd874938d062bf8b07d9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:38:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22389f913aa2cfa11853354676df117d3c41d142e643fd874938d062bf8b07d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:38:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22389f913aa2cfa11853354676df117d3c41d142e643fd874938d062bf8b07d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:38:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22389f913aa2cfa11853354676df117d3c41d142e643fd874938d062bf8b07d9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:38:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22389f913aa2cfa11853354676df117d3c41d142e643fd874938d062bf8b07d9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
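The xfs messages above are informational: these overlay paths sit on a filesystem created without the xfs bigtime feature, so inode timestamps are 32-bit and 0x7fffffff is their ceiling. A quick decode of that limit:

    # Decode the 0x7fffffff limit from the kernel messages above (UTC).
    from datetime import datetime, timezone

    print(datetime.fromtimestamp(0x7fffffff, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00, the y2038 boundary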
Dec  3 18:38:14 compute-0 podman[414353]: 2025-12-03 18:38:14.087984747 +0000 UTC m=+0.234952026 container init eb90743932efedae23efa4eab55033beb54d5f9c362d5cf8321e5a7a2cd90e5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec  3 18:38:14 compute-0 podman[414353]: 2025-12-03 18:38:14.098418541 +0000 UTC m=+0.245385710 container start eb90743932efedae23efa4eab55033beb54d5f9c362d5cf8321e5a7a2cd90e5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_banzai, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  3 18:38:14 compute-0 podman[414353]: 2025-12-03 18:38:14.102934902 +0000 UTC m=+0.249902111 container attach eb90743932efedae23efa4eab55033beb54d5f9c362d5cf8321e5a7a2cd90e5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_banzai, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:38:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 18:38:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 18:38:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:38:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:38:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:38:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:38:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:38:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:38:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:38:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:38:15 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1169: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:38:15 compute-0 ecstatic_banzai[414369]: --> passed data devices: 0 physical, 3 LVM
Dec  3 18:38:15 compute-0 ecstatic_banzai[414369]: --> relative data size: 1.0
Dec  3 18:38:15 compute-0 ecstatic_banzai[414369]: --> All data devices are unavailable
Dec  3 18:38:15 compute-0 systemd[1]: libpod-eb90743932efedae23efa4eab55033beb54d5f9c362d5cf8321e5a7a2cd90e5b.scope: Deactivated successfully.
Dec  3 18:38:15 compute-0 podman[414353]: 2025-12-03 18:38:15.374195301 +0000 UTC m=+1.521162470 container died eb90743932efedae23efa4eab55033beb54d5f9c362d5cf8321e5a7a2cd90e5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_banzai, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Dec  3 18:38:15 compute-0 systemd[1]: libpod-eb90743932efedae23efa4eab55033beb54d5f9c362d5cf8321e5a7a2cd90e5b.scope: Consumed 1.196s CPU time.
Dec  3 18:38:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-22389f913aa2cfa11853354676df117d3c41d142e643fd874938d062bf8b07d9-merged.mount: Deactivated successfully.
Dec  3 18:38:15 compute-0 podman[414353]: 2025-12-03 18:38:15.46099282 +0000 UTC m=+1.607959989 container remove eb90743932efedae23efa4eab55033beb54d5f9c362d5cf8321e5a7a2cd90e5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  3 18:38:15 compute-0 systemd[1]: libpod-conmon-eb90743932efedae23efa4eab55033beb54d5f9c362d5cf8321e5a7a2cd90e5b.scope: Deactivated successfully.
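ecstatic_banzai is one of cephadm's short-lived ceph-volume probes: "passed data devices: 0 physical, 3 LVM" followed by "All data devices are unavailable" means the three LVs are already consumed by running OSDs, so a fresh batch deployment would change nothing. The arrow-prefixed output matches a ceph-volume lvm batch --report run; a sketch of that kind of probe, pinned to the image digest from the log (the exact flags cephadm passes are an assumption):

    # One-shot, report-only probe in a disposable container, cephadm-style.
    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # --rm produces the create/init/start/died/remove lifecycle in the log.
    subprocess.run(
        ["podman", "run", "--rm", "--privileged", "-v", "/dev:/dev", IMAGE,
         "ceph-volume", "lvm", "batch", "--report",
         "/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1",
         "/dev/ceph_vg2/ceph_lv2"])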
Dec  3 18:38:15 compute-0 podman[414399]: 2025-12-03 18:38:15.55278722 +0000 UTC m=+0.141398862 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
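The health_status events for ovn_metadata_agent (and for kepler and ceilometer_agent_ipmi below) are periodic podman health checks; the embedded config_data shows each check is just the /openstack/healthcheck script mounted into the container. The same probe can be fired by hand, using the container name from the log:

    # Run the container's configured healthcheck once, as the timer does.
    import subprocess

    rc = subprocess.run(["podman", "healthcheck", "run",
                         "ovn_metadata_agent"]).returncode
    print("healthy" if rc == 0 else "unhealthy")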
Dec  3 18:38:16 compute-0 podman[414567]: 2025-12-03 18:38:16.456253642 +0000 UTC m=+0.065723576 container create 3ea175f47be3aa7cba5c931bf99e415ce20af2baac4f6713a9fa103e2d473291 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mclaren, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 18:38:16 compute-0 systemd[1]: Started libpod-conmon-3ea175f47be3aa7cba5c931bf99e415ce20af2baac4f6713a9fa103e2d473291.scope.
Dec  3 18:38:16 compute-0 podman[414567]: 2025-12-03 18:38:16.431317044 +0000 UTC m=+0.040786948 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:38:16 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:38:16 compute-0 podman[414567]: 2025-12-03 18:38:16.568679386 +0000 UTC m=+0.178149310 container init 3ea175f47be3aa7cba5c931bf99e415ce20af2baac4f6713a9fa103e2d473291 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mclaren, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  3 18:38:16 compute-0 podman[414567]: 2025-12-03 18:38:16.578181748 +0000 UTC m=+0.187651652 container start 3ea175f47be3aa7cba5c931bf99e415ce20af2baac4f6713a9fa103e2d473291 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mclaren, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:38:16 compute-0 magical_mclaren[414583]: 167 167
Dec  3 18:38:16 compute-0 systemd[1]: libpod-3ea175f47be3aa7cba5c931bf99e415ce20af2baac4f6713a9fa103e2d473291.scope: Deactivated successfully.
Dec  3 18:38:16 compute-0 podman[414567]: 2025-12-03 18:38:16.630678369 +0000 UTC m=+0.240148303 container attach 3ea175f47be3aa7cba5c931bf99e415ce20af2baac4f6713a9fa103e2d473291 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mclaren, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec  3 18:38:16 compute-0 podman[414567]: 2025-12-03 18:38:16.633017587 +0000 UTC m=+0.242487541 container died 3ea175f47be3aa7cba5c931bf99e415ce20af2baac4f6713a9fa103e2d473291 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mclaren, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:38:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-e7a532559ef0756828244852cd34c5128bc69f152a25fa66edb5efe10940a3fd-merged.mount: Deactivated successfully.
Dec  3 18:38:16 compute-0 podman[414567]: 2025-12-03 18:38:16.71754906 +0000 UTC m=+0.327018964 container remove 3ea175f47be3aa7cba5c931bf99e415ce20af2baac4f6713a9fa103e2d473291 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec  3 18:38:16 compute-0 systemd[1]: libpod-conmon-3ea175f47be3aa7cba5c931bf99e415ce20af2baac4f6713a9fa103e2d473291.scope: Deactivated successfully.
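magical_mclaren printed only "167 167": the uid and gid of the ceph user inside the Ceph container image, which cephadm looks up so that files it writes on the host get the right ownership. A hedged reconstruction of that probe (the log shows only the output, so the command is a guess at what cephadm runs):

    # Read the ceph uid/gid out of the image by statting a ceph-owned path
    # in a throwaway container (the path and approach are assumptions).
    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    out = subprocess.check_output(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"])
    uid, gid = map(int, out.split())
    print(uid, gid)  # expect: 167 167, matching the log line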
Dec  3 18:38:16 compute-0 podman[414606]: 2025-12-03 18:38:16.964026316 +0000 UTC m=+0.074919940 container create 5456708ea8a313d664e9a99f0c93600f707bf39fd7f782b06dbb272bc8b181d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec  3 18:38:17 compute-0 podman[414606]: 2025-12-03 18:38:16.938426671 +0000 UTC m=+0.049320325 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:38:17 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1170: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:38:17 compute-0 systemd[1]: Started libpod-conmon-5456708ea8a313d664e9a99f0c93600f707bf39fd7f782b06dbb272bc8b181d2.scope.
Dec  3 18:38:17 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:38:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c46b8ffffd1fae00934b84c611eec8f639ac94298b49e86f464c78a2facb177/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:38:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c46b8ffffd1fae00934b84c611eec8f639ac94298b49e86f464c78a2facb177/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:38:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c46b8ffffd1fae00934b84c611eec8f639ac94298b49e86f464c78a2facb177/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:38:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c46b8ffffd1fae00934b84c611eec8f639ac94298b49e86f464c78a2facb177/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:38:17 compute-0 podman[414606]: 2025-12-03 18:38:17.144209444 +0000 UTC m=+0.255103078 container init 5456708ea8a313d664e9a99f0c93600f707bf39fd7f782b06dbb272bc8b181d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wilson, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Dec  3 18:38:17 compute-0 podman[414620]: 2025-12-03 18:38:17.145524306 +0000 UTC m=+0.125265609 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, build-date=2024-09-18T21:23:30, container_name=kepler, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, io.openshift.expose-services=, release-0.7.12=, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_id=edpm, name=ubi9, release=1214.1726694543, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  3 18:38:17 compute-0 podman[414606]: 2025-12-03 18:38:17.157013906 +0000 UTC m=+0.267907530 container start 5456708ea8a313d664e9a99f0c93600f707bf39fd7f782b06dbb272bc8b181d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wilson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:38:17 compute-0 podman[414606]: 2025-12-03 18:38:17.161860654 +0000 UTC m=+0.272754298 container attach 5456708ea8a313d664e9a99f0c93600f707bf39fd7f782b06dbb272bc8b181d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wilson, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:38:17 compute-0 podman[414621]: 2025-12-03 18:38:17.172041284 +0000 UTC m=+0.142624533 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec  3 18:38:17 compute-0 nova_compute[348325]: 2025-12-03 18:38:17.415 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:38:17 compute-0 nova_compute[348325]: 2025-12-03 18:38:17.761 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]: {
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:    "0": [
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:        {
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:            "devices": [
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:                "/dev/loop3"
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:            ],
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:            "lv_name": "ceph_lv0",
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:            "lv_size": "21470642176",
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=973fbbc8-5aff-4a53-bee8-42e5a6788dd6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:            "lv_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:            "name": "ceph_lv0",
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:            "tags": {
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:                "ceph.block_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:                "ceph.cluster_name": "ceph",
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:                "ceph.crush_device_class": "",
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:                "ceph.encrypted": "0",
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:                "ceph.osd_fsid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:                "ceph.osd_id": "0",
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:                "ceph.type": "block",
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:                "ceph.vdo": "0"
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:            },
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:            "type": "block",
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:            "vg_name": "ceph_vg0"
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:        }
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:    ],
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:    "1": [
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:        {
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:            "devices": [
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:                "/dev/loop4"
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:            ],
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:            "lv_name": "ceph_lv1",
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:            "lv_size": "21470642176",
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1e2b0083-5293-47cb-a3d1-bc27cedc4ede,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:            "lv_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:            "name": "ceph_lv1",
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:            "tags": {
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:                "ceph.block_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:                "ceph.cluster_name": "ceph",
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:                "ceph.crush_device_class": "",
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:                "ceph.encrypted": "0",
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:                "ceph.osd_fsid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:                "ceph.osd_id": "1",
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:                "ceph.type": "block",
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:                "ceph.vdo": "0"
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:            },
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:            "type": "block",
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:            "vg_name": "ceph_vg1"
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:        }
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:    ],
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:    "2": [
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:        {
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:            "devices": [
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:                "/dev/loop5"
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:            ],
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:            "lv_name": "ceph_lv2",
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:            "lv_size": "21470642176",
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2abec9de-afba-437e-9a17-384a1dd8cd50,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:            "lv_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:            "name": "ceph_lv2",
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:            "tags": {
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:                "ceph.block_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:                "ceph.cluster_name": "ceph",
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:                "ceph.crush_device_class": "",
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:                "ceph.encrypted": "0",
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:                "ceph.osd_fsid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:                "ceph.osd_id": "2",
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:                "ceph.type": "block",
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:                "ceph.vdo": "0"
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:            },
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:            "type": "block",
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:            "vg_name": "ceph_vg2"
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:        }
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]:    ]
Dec  3 18:38:18 compute-0 pedantic_wilson[414643]: }
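pedantic_wilson dumped ceph-volume lvm list-style JSON: OSDs 0, 1 and 2 each map to one bluestore block LV (ceph_lv0..2 on /dev/loop3..5), all tagged with the same cluster_fsid. A minimal sketch of flattening that JSON into an osd_id -> device table; raw_json stands in for the block captured above:

    # Flatten the ceph-volume lvm list JSON logged above.
    import json

    raw_json = "..."  # paste the {"0": [...], "1": [...], "2": [...]} block
    for osd_id, lvs in sorted(json.loads(raw_json).items()):
        for lv in lvs:
            print(osd_id, lv["lv_path"], lv["tags"]["ceph.osd_fsid"])
    # -> 0 /dev/ceph_vg0/ceph_lv0 973fbbc8-5aff-4a53-bee8-42e5a6788dd6 ...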
Dec  3 18:38:18 compute-0 systemd[1]: libpod-5456708ea8a313d664e9a99f0c93600f707bf39fd7f782b06dbb272bc8b181d2.scope: Deactivated successfully.
Dec  3 18:38:18 compute-0 podman[414606]: 2025-12-03 18:38:18.081720287 +0000 UTC m=+1.192613911 container died 5456708ea8a313d664e9a99f0c93600f707bf39fd7f782b06dbb272bc8b181d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wilson, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec  3 18:38:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c46b8ffffd1fae00934b84c611eec8f639ac94298b49e86f464c78a2facb177-merged.mount: Deactivated successfully.
Dec  3 18:38:18 compute-0 podman[414606]: 2025-12-03 18:38:18.159107806 +0000 UTC m=+1.270001420 container remove 5456708ea8a313d664e9a99f0c93600f707bf39fd7f782b06dbb272bc8b181d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  3 18:38:18 compute-0 systemd[1]: libpod-conmon-5456708ea8a313d664e9a99f0c93600f707bf39fd7f782b06dbb272bc8b181d2.scope: Deactivated successfully.
Dec  3 18:38:18 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
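The _set_new_cache_sizes line is the mon's memory autotuner splitting its cache budget between incremental osdmaps, full osdmaps and the rocksdb cache; the three allocations should sum to roughly the reported cache_size. A quick check of the logged numbers:

    # Sanity-check the mon cache split from the line above.
    inc_alloc, full_alloc, kv_alloc = 348127232, 348127232, 322961408
    print(inc_alloc + full_alloc + kv_alloc)  # 1019215872, just under the
    # reported cache_size of 1020054731 bytes (~0.95 GiB)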
Dec  3 18:38:19 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1171: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:38:19 compute-0 podman[414825]: 2025-12-03 18:38:19.152635226 +0000 UTC m=+0.070392069 container create fef4da402c7b2bfee6b6fc065ba9e96415f90ccdf557900817b1c2dce0815351 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_bouman, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  3 18:38:19 compute-0 systemd[1]: Started libpod-conmon-fef4da402c7b2bfee6b6fc065ba9e96415f90ccdf557900817b1c2dce0815351.scope.
Dec  3 18:38:19 compute-0 podman[414825]: 2025-12-03 18:38:19.124082339 +0000 UTC m=+0.041839172 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:38:19 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:38:19 compute-0 podman[414825]: 2025-12-03 18:38:19.269362025 +0000 UTC m=+0.187118898 container init fef4da402c7b2bfee6b6fc065ba9e96415f90ccdf557900817b1c2dce0815351 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_bouman, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  3 18:38:19 compute-0 podman[414825]: 2025-12-03 18:38:19.281391919 +0000 UTC m=+0.199148752 container start fef4da402c7b2bfee6b6fc065ba9e96415f90ccdf557900817b1c2dce0815351 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 18:38:19 compute-0 podman[414825]: 2025-12-03 18:38:19.28843292 +0000 UTC m=+0.206189803 container attach fef4da402c7b2bfee6b6fc065ba9e96415f90ccdf557900817b1c2dce0815351 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_bouman, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  3 18:38:19 compute-0 ecstatic_bouman[414841]: 167 167
Dec  3 18:38:19 compute-0 systemd[1]: libpod-fef4da402c7b2bfee6b6fc065ba9e96415f90ccdf557900817b1c2dce0815351.scope: Deactivated successfully.
Dec  3 18:38:19 compute-0 podman[414825]: 2025-12-03 18:38:19.300237079 +0000 UTC m=+0.217993912 container died fef4da402c7b2bfee6b6fc065ba9e96415f90ccdf557900817b1c2dce0815351 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_bouman, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec  3 18:38:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-db063ddab6b90e6d255e2388ce85d42d0fd2be2392d84480af57e86b4aeef499-merged.mount: Deactivated successfully.
Dec  3 18:38:19 compute-0 ovn_controller[89305]: 2025-12-03T18:38:19Z|00039|memory_trim|INFO|Detected inactivity (last active 30016 ms ago): trimming memory
Dec  3 18:38:19 compute-0 podman[414825]: 2025-12-03 18:38:19.363028031 +0000 UTC m=+0.280784864 container remove fef4da402c7b2bfee6b6fc065ba9e96415f90ccdf557900817b1c2dce0815351 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_bouman, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec  3 18:38:19 compute-0 systemd[1]: libpod-conmon-fef4da402c7b2bfee6b6fc065ba9e96415f90ccdf557900817b1c2dce0815351.scope: Deactivated successfully.
Dec  3 18:38:19 compute-0 podman[414866]: 2025-12-03 18:38:19.596623533 +0000 UTC m=+0.079592154 container create a5d85da91b9c6b0edbb377143913ab934eea41a038eb98d15897dd6d566789e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_haslett, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec  3 18:38:19 compute-0 podman[414866]: 2025-12-03 18:38:19.572549505 +0000 UTC m=+0.055518146 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:38:19 compute-0 systemd[1]: Started libpod-conmon-a5d85da91b9c6b0edbb377143913ab934eea41a038eb98d15897dd6d566789e3.scope.
Dec  3 18:38:19 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:38:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/def18bffe8ed1a1bf8dad885aae822276f570013b91a18fe10b65a2a4bbd427e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:38:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/def18bffe8ed1a1bf8dad885aae822276f570013b91a18fe10b65a2a4bbd427e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:38:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/def18bffe8ed1a1bf8dad885aae822276f570013b91a18fe10b65a2a4bbd427e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:38:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/def18bffe8ed1a1bf8dad885aae822276f570013b91a18fe10b65a2a4bbd427e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:38:19 compute-0 podman[414866]: 2025-12-03 18:38:19.756322941 +0000 UTC m=+0.239291582 container init a5d85da91b9c6b0edbb377143913ab934eea41a038eb98d15897dd6d566789e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec  3 18:38:19 compute-0 podman[414866]: 2025-12-03 18:38:19.766006527 +0000 UTC m=+0.248975158 container start a5d85da91b9c6b0edbb377143913ab934eea41a038eb98d15897dd6d566789e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:38:19 compute-0 podman[414866]: 2025-12-03 18:38:19.772971377 +0000 UTC m=+0.255940018 container attach a5d85da91b9c6b0edbb377143913ab934eea41a038eb98d15897dd6d566789e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_haslett, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec  3 18:38:20 compute-0 gifted_haslett[414881]: {
Dec  3 18:38:20 compute-0 gifted_haslett[414881]:    "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
Dec  3 18:38:20 compute-0 gifted_haslett[414881]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:38:20 compute-0 gifted_haslett[414881]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 18:38:20 compute-0 gifted_haslett[414881]:        "osd_id": 1,
Dec  3 18:38:20 compute-0 gifted_haslett[414881]:        "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:38:20 compute-0 gifted_haslett[414881]:        "type": "bluestore"
Dec  3 18:38:20 compute-0 gifted_haslett[414881]:    },
Dec  3 18:38:20 compute-0 gifted_haslett[414881]:    "2abec9de-afba-437e-9a17-384a1dd8cd50": {
Dec  3 18:38:20 compute-0 gifted_haslett[414881]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:38:20 compute-0 gifted_haslett[414881]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 18:38:20 compute-0 gifted_haslett[414881]:        "osd_id": 2,
Dec  3 18:38:20 compute-0 gifted_haslett[414881]:        "osd_uuid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:38:20 compute-0 gifted_haslett[414881]:        "type": "bluestore"
Dec  3 18:38:20 compute-0 gifted_haslett[414881]:    },
Dec  3 18:38:20 compute-0 gifted_haslett[414881]:    "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": {
Dec  3 18:38:20 compute-0 gifted_haslett[414881]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:38:20 compute-0 gifted_haslett[414881]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 18:38:20 compute-0 gifted_haslett[414881]:        "osd_id": 0,
Dec  3 18:38:20 compute-0 gifted_haslett[414881]:        "osd_uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:38:20 compute-0 gifted_haslett[414881]:        "type": "bluestore"
Dec  3 18:38:20 compute-0 gifted_haslett[414881]:    }
Dec  3 18:38:20 compute-0 gifted_haslett[414881]: }
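gifted_haslett reports the same three OSDs keyed by osd_uuid, with the activated device-mapper paths; the shape matches ceph-volume raw list output (an inference, since the command line itself is not logged). Parsing it the same way:

    # Flatten the osd_uuid-keyed JSON logged above.
    import json

    raw_json = "..."  # paste the block keyed by osd_uuid
    for osd_uuid, info in sorted(json.loads(raw_json).items()):
        print(info["osd_id"], info["device"], info["type"])
    # -> 1 /dev/mapper/ceph_vg1-ceph_lv1 bluestore, and so on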
Dec  3 18:38:20 compute-0 systemd[1]: libpod-a5d85da91b9c6b0edbb377143913ab934eea41a038eb98d15897dd6d566789e3.scope: Deactivated successfully.
Dec  3 18:38:20 compute-0 podman[414866]: 2025-12-03 18:38:20.955755017 +0000 UTC m=+1.438723638 container died a5d85da91b9c6b0edbb377143913ab934eea41a038eb98d15897dd6d566789e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  3 18:38:20 compute-0 systemd[1]: libpod-a5d85da91b9c6b0edbb377143913ab934eea41a038eb98d15897dd6d566789e3.scope: Consumed 1.183s CPU time.
Dec  3 18:38:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-def18bffe8ed1a1bf8dad885aae822276f570013b91a18fe10b65a2a4bbd427e-merged.mount: Deactivated successfully.
Dec  3 18:38:21 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1172: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:38:21 compute-0 podman[414866]: 2025-12-03 18:38:21.050581652 +0000 UTC m=+1.533550273 container remove a5d85da91b9c6b0edbb377143913ab934eea41a038eb98d15897dd6d566789e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_haslett, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec  3 18:38:21 compute-0 systemd[1]: libpod-conmon-a5d85da91b9c6b0edbb377143913ab934eea41a038eb98d15897dd6d566789e3.scope: Deactivated successfully.
Dec  3 18:38:21 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:38:21 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:38:21 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:38:21 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:38:21 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 7159db5a-6436-4cd9-8b7d-67240c6933b2 does not exist
Dec  3 18:38:21 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 2eeee447-5680-437e-81ff-979982a917fe does not exist
Dec  3 18:38:22 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:38:22 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
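The two config-key set commands are cephadm persisting the device inventory it just gathered for this host (mgr/cephadm/host.compute-0.devices.0 and mgr/cephadm/host.compute-0); the progress-module warnings about unknown events are typically benign, as those events were already pruned. The stored inventory can be read back directly, assuming the admin CLI:

    # Read back the inventory cephadm just stored (key name from the log).
    import subprocess

    blob = subprocess.check_output(
        ["ceph", "config-key", "get", "mgr/cephadm/host.compute-0.devices.0"])
    print(blob[:200])  # start of the JSON device inventory for compute-0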
Dec  3 18:38:22 compute-0 nova_compute[348325]: 2025-12-03 18:38:22.417 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:38:22 compute-0 nova_compute[348325]: 2025-12-03 18:38:22.766 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:38:23 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1173: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:38:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:38:23.334 286999 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:38:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:38:23.334 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:38:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:38:23.335 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
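[Note] The Acquiring/acquired/released triple above is oslo.concurrency's standard wait/hold accounting, emitted at DEBUG whenever a method guarded by lockutils takes its lock. A minimal sketch of the pattern; the decorated function body here is illustrative, not the agent's real code:

    from oslo_concurrency import lockutils

    # Illustrative stand-in for the guarded method in the entries above;
    # lockutils logs the Acquiring/acquired/released lines automatically.
    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        pass  # inspect monitored child processes, respawn any that died

    check_child_processes()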
Dec  3 18:38:23 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:38:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 18:38:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:38:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 18:38:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:38:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008203201308849384 of space, bias 1.0, pg target 0.24609603926548154 quantized to 32 (current 32)
Dec  3 18:38:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:38:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:38:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:38:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:38:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:38:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec  3 18:38:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:38:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 18:38:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:38:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:38:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:38:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 18:38:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:38:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 18:38:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:38:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:38:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:38:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
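[Note] Each "pg target" above is the product of three numbers visible in the log: the pool's share of raw capacity, its bias, and a cluster-wide PG budget. The budget here works out to 300, consistent with 3 OSDs at the default mon_target_pg_per_osd of 100 (an assumption about this cluster); the target is then quantized to a power of two, and pg_num is only changed when the result diverges from the current value by more than the autoscaler's threshold, which is why every pool stays put. A sketch reproducing the logged figures:

    # Reproduce the 'pg target' values logged above. PG_BUDGET = 300 is an
    # assumption: 3 OSDs x the default mon_target_pg_per_osd (100).
    PG_BUDGET = 3 * 100

    def pg_target(capacity_share, bias):
        return capacity_share * bias * PG_BUDGET

    print(pg_target(7.185749983720779e-06, 1.0))  # .mgr               -> 0.0021557249951162337
    print(pg_target(0.0008203201308849384, 1.0))  # vms                -> 0.2460960392654815...
    print(pg_target(5.087256625643029e-07, 4.0))  # cephfs.cephfs.meta -> 0.0006104707950771635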
Dec  3 18:38:25 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1174: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:38:25 compute-0 podman[414976]: 2025-12-03 18:38:25.951015333 +0000 UTC m=+0.107259209 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 18:38:26 compute-0 ovn_controller[89305]: 2025-12-03T18:38:26Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:41:ba:29 192.168.0.170
Dec  3 18:38:26 compute-0 ovn_controller[89305]: 2025-12-03T18:38:26Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:41:ba:29 192.168.0.170
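[Note] ovn-controller answers DHCP natively from its pinctrl thread, so the OFFER/ACK for fa:16:3e:41:ba:29 is served on this hypervisor rather than by a dnsmasq instance. A hypothetical parser that pairs offers with acks by MAC from lines in exactly this format:

    import re

    # Hypothetical parser for the pinctrl DHCP lines above: report leases
    # whose DHCPOFFER was confirmed by a DHCPACK for the same MAC and IP.
    LINE = re.compile(r"\|INFO\|(DHCPOFFER|DHCPACK) (?P<mac>[0-9a-f:]{17}) (?P<ip>\S+)")

    def confirmed_leases(lines):
        offered = {}
        for line in lines:
            m = LINE.search(line)
            if not m:
                continue
            if m.group(1) == "DHCPOFFER":
                offered[m.group("mac")] = m.group("ip")
            elif offered.get(m.group("mac")) == m.group("ip"):
                yield m.group("mac"), m.group("ip")

    print(list(confirmed_leases([
        "...|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:41:ba:29 192.168.0.170",
        "...|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:41:ba:29 192.168.0.170",
    ])))  # [('fa:16:3e:41:ba:29', '192.168.0.170')]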
Dec  3 18:38:27 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1175: 321 pgs: 321 active+clean; 111 MiB data, 233 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 109 KiB/s wr, 9 op/s
Dec  3 18:38:27 compute-0 nova_compute[348325]: 2025-12-03 18:38:27.420 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:38:27 compute-0 nova_compute[348325]: 2025-12-03 18:38:27.771 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:38:27 compute-0 ceph-osd[206694]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Dec  3 18:38:28 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:38:29 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1176: 321 pgs: 321 active+clean; 122 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 1.2 MiB/s wr, 18 op/s
Dec  3 18:38:29 compute-0 podman[158200]: time="2025-12-03T18:38:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:38:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:38:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 18:38:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:38:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8634 "" "Go-http-client/1.1"
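[Note] The two GET lines are the podman_exporter scraping the libpod REST API through the podman socket; the URL path and API version are taken from the log, and the socket path matches the CONTAINER_HOST value in the exporter's config_data. A stdlib-only sketch of the same call:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection over a Unix socket (path taken from the exporter
        config logged above: /run/podman/podman.sock)."""
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    # Same request as the first GET line above:
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    print(len(json.loads(conn.getresponse().read())), "containers")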
Dec  3 18:38:29 compute-0 podman[414999]: 2025-12-03 18:38:29.982582827 +0000 UTC m=+0.135395326 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec  3 18:38:30 compute-0 podman[414998]: 2025-12-03 18:38:30.031603473 +0000 UTC m=+0.184488954 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 18:38:31 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1177: 321 pgs: 321 active+clean; 131 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 97 KiB/s rd, 1.5 MiB/s wr, 41 op/s
Dec  3 18:38:31 compute-0 openstack_network_exporter[365222]: ERROR   18:38:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:38:31 compute-0 openstack_network_exporter[365222]: ERROR   18:38:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:38:31 compute-0 openstack_network_exporter[365222]: ERROR   18:38:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:38:31 compute-0 openstack_network_exporter[365222]: ERROR   18:38:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:38:31 compute-0 openstack_network_exporter[365222]: ERROR   18:38:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
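[Note] These exporter errors recur on every scrape: ovn-northd and the OVN databases only run on controller nodes, so their control sockets never exist on a compute host, and no userspace (dpif-netdev) datapath is configured here. A sketch of the same probe; the glob patterns are assumptions based on the usual OVS/OVN runtime directories:

    import glob

    # Control sockets the exporter looks for; paths assumed from typical
    # OVS/OVN layouts, which is why they are absent on a compute-only node.
    for pattern in ("/run/ovn/ovn-northd.*.ctl",
                    "/run/openvswitch/ovsdb-server.*.ctl",
                    "/run/openvswitch/ovs-vswitchd.*.ctl"):
        print(pattern, "->", glob.glob(pattern) or "no control socket files found")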
Dec  3 18:38:32 compute-0 nova_compute[348325]: 2025-12-03 18:38:32.423 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:38:32 compute-0 nova_compute[348325]: 2025-12-03 18:38:32.774 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:38:33 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1178: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Dec  3 18:38:33 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:38:35 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1179: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 58 op/s
Dec  3 18:38:37 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1180: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 58 op/s
Dec  3 18:38:37 compute-0 nova_compute[348325]: 2025-12-03 18:38:37.428 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:38:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 18:38:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2627754684' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 18:38:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 18:38:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2627754684' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 18:38:37 compute-0 nova_compute[348325]: 2025-12-03 18:38:37.777 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:38:38 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:38:39 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1181: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 124 KiB/s rd, 1.4 MiB/s wr, 48 op/s
Dec  3 18:38:41 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1182: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 115 KiB/s rd, 327 KiB/s wr, 39 op/s
Dec  3 18:38:42 compute-0 nova_compute[348325]: 2025-12-03 18:38:42.431 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:38:42 compute-0 nova_compute[348325]: 2025-12-03 18:38:42.781 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:38:43 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1183: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 20 KiB/s wr, 16 op/s
Dec  3 18:38:43 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:38:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:38:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:38:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:38:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:38:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:38:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:38:43 compute-0 podman[415044]: 2025-12-03 18:38:43.981410154 +0000 UTC m=+0.119438756 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 18:38:43 compute-0 podman[415045]: 2025-12-03 18:38:43.989877961 +0000 UTC m=+0.132243929 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, config_id=edpm, managed_by=edpm_ansible, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., version=9.6, release=1755695350, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec  3 18:38:44 compute-0 podman[415043]: 2025-12-03 18:38:44.009203423 +0000 UTC m=+0.152863312 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd)
Dec  3 18:38:45 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1184: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s wr, 0 op/s
Dec  3 18:38:45 compute-0 podman[415104]: 2025-12-03 18:38:45.962354812 +0000 UTC m=+0.119153879 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  3 18:38:47 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1185: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:38:47 compute-0 nova_compute[348325]: 2025-12-03 18:38:47.437 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:38:47 compute-0 nova_compute[348325]: 2025-12-03 18:38:47.784 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:38:47 compute-0 podman[415123]: 2025-12-03 18:38:47.953395828 +0000 UTC m=+0.111038971 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, container_name=kepler, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., version=9.4, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, config_id=edpm, maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  3 18:38:47 compute-0 podman[415124]: 2025-12-03 18:38:47.964702574 +0000 UTC m=+0.105853534 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  3 18:38:48 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:38:49 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1186: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:38:51 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1187: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:38:52 compute-0 nova_compute[348325]: 2025-12-03 18:38:52.439 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:38:52 compute-0 nova_compute[348325]: 2025-12-03 18:38:52.788 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:38:53 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1188: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:38:53 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:38:55 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1189: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:38:56 compute-0 nova_compute[348325]: 2025-12-03 18:38:56.489 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:38:56 compute-0 podman[415163]: 2025-12-03 18:38:56.966152128 +0000 UTC m=+0.115418017 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 18:38:57 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1190: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:38:57 compute-0 nova_compute[348325]: 2025-12-03 18:38:57.444 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:38:57 compute-0 nova_compute[348325]: 2025-12-03 18:38:57.530 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:38:57 compute-0 nova_compute[348325]: 2025-12-03 18:38:57.532 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  3 18:38:57 compute-0 nova_compute[348325]: 2025-12-03 18:38:57.707 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  3 18:38:57 compute-0 nova_compute[348325]: 2025-12-03 18:38:57.793 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:38:58 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:38:59 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1191: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:38:59 compute-0 podman[158200]: time="2025-12-03T18:38:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:38:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:38:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 18:38:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:38:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8632 "" "Go-http-client/1.1"
Dec  3 18:39:01 compute-0 podman[415188]: 2025-12-03 18:39:01.029903605 +0000 UTC m=+0.164164087 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute)
Dec  3 18:39:01 compute-0 podman[415187]: 2025-12-03 18:39:01.04726519 +0000 UTC m=+0.192657074 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 18:39:01 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1192: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 5.7 KiB/s wr, 0 op/s
Dec  3 18:39:01 compute-0 openstack_network_exporter[365222]: ERROR   18:39:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:39:01 compute-0 openstack_network_exporter[365222]: ERROR   18:39:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:39:01 compute-0 openstack_network_exporter[365222]: ERROR   18:39:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:39:01 compute-0 openstack_network_exporter[365222]: ERROR   18:39:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:39:01 compute-0 openstack_network_exporter[365222]: ERROR   18:39:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:39:01 compute-0 nova_compute[348325]: 2025-12-03 18:39:01.488 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:39:01 compute-0 nova_compute[348325]: 2025-12-03 18:39:01.488 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:39:01 compute-0 nova_compute[348325]: 2025-12-03 18:39:01.489 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  3 18:39:02 compute-0 nova_compute[348325]: 2025-12-03 18:39:02.448 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:39:02 compute-0 nova_compute[348325]: 2025-12-03 18:39:02.645 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:39:02 compute-0 nova_compute[348325]: 2025-12-03 18:39:02.646 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 18:39:02 compute-0 nova_compute[348325]: 2025-12-03 18:39:02.806 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:39:03 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1193: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 6.7 KiB/s wr, 1 op/s
Dec  3 18:39:03 compute-0 nova_compute[348325]: 2025-12-03 18:39:03.321 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "refresh_cache-df72d527-943e-4e8c-b62a-63afa5f18261" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 18:39:03 compute-0 nova_compute[348325]: 2025-12-03 18:39:03.322 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquired lock "refresh_cache-df72d527-943e-4e8c-b62a-63afa5f18261" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 18:39:03 compute-0 nova_compute[348325]: 2025-12-03 18:39:03.322 348329 DEBUG nova.network.neutron [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  3 18:39:03 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:39:05 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1194: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 6.7 KiB/s wr, 1 op/s
Dec  3 18:39:05 compute-0 nova_compute[348325]: 2025-12-03 18:39:05.186 348329 DEBUG nova.network.neutron [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Updating instance_info_cache with network_info: [{"id": "03bf6208-f40b-4534-a297-122588172fa5", "address": "fa:16:3e:41:ba:29", "network": {"id": "85c8d446-ad7f-4d1b-a311-89b0b07e8aad", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.170", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2770200bdb2436c90142fa2e5ddcd47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03bf6208-f4", "ovs_interfaceid": "03bf6208-f40b-4534-a297-122588172fa5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 18:39:05 compute-0 nova_compute[348325]: 2025-12-03 18:39:05.521 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Releasing lock "refresh_cache-df72d527-943e-4e8c-b62a-63afa5f18261" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 18:39:05 compute-0 nova_compute[348325]: 2025-12-03 18:39:05.523 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
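[Note] The heal task above rewrites a full network_info blob for instance df72d527-943e-4e8c-b62a-63afa5f18261. A sketch that walks a payload shaped like the logged one (trimmed here to the fields the traversal needs) and lists the fixed/floating address pairs:

    # Trimmed copy of the network_info payload logged above; values are taken
    # verbatim from that entry.
    network_info = [{
        "address": "fa:16:3e:41:ba:29",
        "network": {"subnets": [{
            "cidr": "192.168.0.0/24",
            "ips": [{"address": "192.168.0.170",
                     "floating_ips": [{"address": "192.168.122.213"}]}],
        }]},
    }]

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floats = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["address"], ip["address"], "->", floats)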
Dec  3 18:39:05 compute-0 nova_compute[348325]: 2025-12-03 18:39:05.524 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:39:05 compute-0 nova_compute[348325]: 2025-12-03 18:39:05.525 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:39:05 compute-0 nova_compute[348325]: 2025-12-03 18:39:05.526 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:39:05 compute-0 nova_compute[348325]: 2025-12-03 18:39:05.527 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:39:05 compute-0 nova_compute[348325]: 2025-12-03 18:39:05.528 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:39:05 compute-0 nova_compute[348325]: 2025-12-03 18:39:05.529 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 18:39:06 compute-0 nova_compute[348325]: 2025-12-03 18:39:06.363 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
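[Note] The burst of "Running periodic task ComputeManager._*" entries is oslo.service iterating over every method registered through its periodic_task decorator. A minimal sketch of that registration pattern; the task name mirrors the log but the body is illustrative:

    from oslo_config import cfg
    from oslo_service import periodic_task

    class DemoManager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(cfg.CONF)

        @periodic_task.periodic_task(spacing=60)
        def _poll_rescued_instances(self, context):
            pass  # each registered task is logged as 'Running periodic task ...'

    DemoManager().run_periodic_tasks(context=None)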
Dec  3 18:39:07 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1195: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 6.7 KiB/s wr, 1 op/s
Dec  3 18:39:07 compute-0 nova_compute[348325]: 2025-12-03 18:39:07.450 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:39:07 compute-0 nova_compute[348325]: 2025-12-03 18:39:07.810 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:39:08 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:39:09 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1196: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 6.8 KiB/s wr, 1 op/s
Dec  3 18:39:11 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1197: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 6.8 KiB/s wr, 1 op/s
Dec  3 18:39:11 compute-0 nova_compute[348325]: 2025-12-03 18:39:11.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:39:11 compute-0 nova_compute[348325]: 2025-12-03 18:39:11.516 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:39:11 compute-0 nova_compute[348325]: 2025-12-03 18:39:11.518 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:39:11 compute-0 nova_compute[348325]: 2025-12-03 18:39:11.519 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:39:11 compute-0 nova_compute[348325]: 2025-12-03 18:39:11.520 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 18:39:11 compute-0 nova_compute[348325]: 2025-12-03 18:39:11.521 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:39:11 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:39:11 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3388631309' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:39:11 compute-0 nova_compute[348325]: 2025-12-03 18:39:11.991 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
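[Note] Nova's resource audit learns RBD pool capacity by shelling out to ceph df, as logged above; the mon-side audit entries at 18:39:11 are the same call arriving as client.openstack. A minimal reproduction, assuming the same conf and keyring paths exist on this host:

    import json
    import subprocess

    # Exact command line from the entry above; assumes a reachable cluster,
    # /etc/ceph/ceph.conf and a client.openstack keyring.
    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"])
    for pool in json.loads(out)["pools"]:
        print(pool["name"], pool["stats"]["bytes_used"])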
Dec  3 18:39:12 compute-0 nova_compute[348325]: 2025-12-03 18:39:12.101 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:39:12 compute-0 nova_compute[348325]: 2025-12-03 18:39:12.102 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:39:12 compute-0 nova_compute[348325]: 2025-12-03 18:39:12.102 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:39:12 compute-0 nova_compute[348325]: 2025-12-03 18:39:12.108 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:39:12 compute-0 nova_compute[348325]: 2025-12-03 18:39:12.109 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:39:12 compute-0 nova_compute[348325]: 2025-12-03 18:39:12.109 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:39:12 compute-0 nova_compute[348325]: 2025-12-03 18:39:12.454 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:39:12 compute-0 nova_compute[348325]: 2025-12-03 18:39:12.572 348329 WARNING nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  3 18:39:12 compute-0 nova_compute[348325]: 2025-12-03 18:39:12.574 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3763MB free_disk=59.92203140258789GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
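[editor's note] The resource view above embeds the host's PCI device inventory as a JSON array inside the log message. One way to examine it offline is to cut the array back out of the line; the regex below assumes this exact "pci_devices=[...] _report_hypervisor_resource_view" framing, and the log path is a placeholder.

    # Hedged sketch: extract and summarize the pci_devices JSON array from
    # a "Hypervisor/Node resource view" line in a saved log file.
    import json
    import re

    def pci_devices(logline):
        m = re.search(r"pci_devices=(\[.*?\]) _report_hypervisor", logline)
        return json.loads(m.group(1)) if m else []

    with open("compute-0.log") as f:          # path is a placeholder
        for line in f:
            for dev in pci_devices(line):
                print(dev["address"], dev["vendor_id"], dev["product_id"], dev["dev_type"])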
Dec  3 18:39:12 compute-0 nova_compute[348325]: 2025-12-03 18:39:12.575 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:39:12 compute-0 nova_compute[348325]: 2025-12-03 18:39:12.575 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
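[editor's note] The Acquiring/acquired pair above (and the matching "released" line later in this cycle) is oslo.concurrency's named-lock machinery serializing resource-tracker updates. The equivalent pattern in application code is the lockutils.synchronized decorator; the decorator is oslo.concurrency's public API, while the function body here is only illustrative.

    # Hedged sketch of the locking pattern behind the log lines above:
    # all callers serialize on the shared "compute_resources" lock.
    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def update_available_resource():
        ...  # mutate the shared resource view while holding the lock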
Dec  3 18:39:12 compute-0 nova_compute[348325]: 2025-12-03 18:39:12.790 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance 1ca1fbdb-089c-4544-821e-0542089b8424 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  3 18:39:12 compute-0 nova_compute[348325]: 2025-12-03 18:39:12.792 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance df72d527-943e-4e8c-b62a-63afa5f18261 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  3 18:39:12 compute-0 nova_compute[348325]: 2025-12-03 18:39:12.793 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  3 18:39:12 compute-0 nova_compute[348325]: 2025-12-03 18:39:12.794 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  3 18:39:12 compute-0 nova_compute[348325]: 2025-12-03 18:39:12.813 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:39:12 compute-0 nova_compute[348325]: 2025-12-03 18:39:12.956 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 18:39:13 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1198: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 1.2 KiB/s wr, 0 op/s
Dec  3 18:39:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:13.247 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 18:39:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:13.248 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
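[editor's note] The two manager lines above say the polling task has more pollsters than worker threads and will run them on a single thread, so a cycle can overrun its interval. Structurally this is the standard concurrent.futures pattern; the pollster names and the poll function below are placeholders, not ceilometer's real objects.

    # Hedged sketch: N polling tasks on a 1-worker pool, as described in
    # the DEBUG lines above; tasks serialize, so the cycle takes the sum
    # of their individual runtimes.
    from concurrent.futures import ThreadPoolExecutor

    def poll(name):                          # placeholder pollster
        return name + ": polled"

    pollsters = ["network.incoming.bytes", "network.outgoing.bytes", "memory.usage"]
    with ThreadPoolExecutor(max_workers=1) as pool:
        for result in pool.map(poll, pollsters):
            print(result)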
Dec  3 18:39:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:13.249 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:39:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:13.250 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7eff8d7fffe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:39:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:13.251 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:39:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:13.251 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff9026f920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:39:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:13.251 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:39:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:13.252 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:39:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:13.252 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffa10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:39:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:13.252 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8daba2d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:39:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:13.252 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:39:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:13.252 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff90799b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:39:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:13.252 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:39:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:13.253 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8f46ebd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:39:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:13.253 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:39:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:13.254 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:39:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:13.254 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:39:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:13.254 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:39:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:13.254 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:39:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:13.254 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:39:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:13.255 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:39:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:13.255 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:39:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:13.255 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:39:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:13.255 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:39:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:13.256 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:39:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:13.256 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7fff50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:39:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:13.256 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:39:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:13.256 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7fffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:39:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:13.256 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8ef7c7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:39:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:13.259 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '1ca1fbdb-089c-4544-821e-0542089b8424', 'name': 'test_0', 'flavor': {'id': '6cb250a4-d28c-4125-888b-653b31e29275', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e68cd467-b4e6-45e0-8e55-984fda402294'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'd2770200bdb2436c90142fa2e5ddcd47', 'user_id': '56338958b09445f5af9aa9e4601a1a8a', 'hostId': '233c08f520fd9700ef62a871bc5d558f2659759d89ea6c0726998878', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  3 18:39:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:13.261 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance df72d527-943e-4e8c-b62a-63afa5f18261 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec  3 18:39:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:13.263 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/df72d527-943e-4e8c-b62a-63afa5f18261 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}381125532ab0338283f553a8d9011c877e61445a70740cb69aa0e3ed00495f3c" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec  3 18:39:13 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:39:13 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3991618518' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:39:13 compute-0 nova_compute[348325]: 2025-12-03 18:39:13.433 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 18:39:13 compute-0 nova_compute[348325]: 2025-12-03 18:39:13.442 348329 DEBUG nova.compute.provider_tree [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  3 18:39:13 compute-0 nova_compute[348325]: 2025-12-03 18:39:13.456 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
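[editor's note] Placement derives schedulable capacity from an inventory record as (total - reserved) * allocation_ratio. A quick worked check against the numbers in the line above:

    # Worked example from the inventory data in the log line above.
    inventory = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)   # MEMORY_MB 7167.0, VCPU 32.0, DISK_GB 52.2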
Dec  3 18:39:13 compute-0 nova_compute[348325]: 2025-12-03 18:39:13.458 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  3 18:39:13 compute-0 nova_compute[348325]: 2025-12-03 18:39:13.459 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.883s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:39:13 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:39:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_18:39:13
Dec  3 18:39:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 18:39:13 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 18:39:13 compute-0 ceph-mgr[193091]: [balancer INFO root] pools ['.mgr', 'backups', 'default.rgw.log', 'cephfs.cephfs.meta', 'vms', '.rgw.root', 'default.rgw.control', 'images', 'cephfs.cephfs.data', 'default.rgw.meta', 'volumes']
Dec  3 18:39:13 compute-0 ceph-mgr[193091]: [balancer INFO root] prepared 0/10 changes
Dec  3 18:39:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:39:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:39:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:39:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:39:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:39:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.146 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1960 Content-Type: application/json Date: Wed, 03 Dec 2025 18:39:13 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-42072f64-07f1-40eb-8cc4-1d0e5be5192d x-openstack-request-id: req-42072f64-07f1-40eb-8cc4-1d0e5be5192d _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.146 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "df72d527-943e-4e8c-b62a-63afa5f18261", "name": "vn-66btob3-hjy2dfx75wfw-5fmurbrh4hte-vnf-qa644it4tdj5", "status": "ACTIVE", "tenant_id": "d2770200bdb2436c90142fa2e5ddcd47", "user_id": "56338958b09445f5af9aa9e4601a1a8a", "metadata": {"metering.server_group": "b322e118-e1cc-40be-8d8c-553648144092"}, "hostId": "233c08f520fd9700ef62a871bc5d558f2659759d89ea6c0726998878", "image": {"id": "e68cd467-b4e6-45e0-8e55-984fda402294", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/e68cd467-b4e6-45e0-8e55-984fda402294"}]}, "flavor": {"id": "6cb250a4-d28c-4125-888b-653b31e29275", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/6cb250a4-d28c-4125-888b-653b31e29275"}]}, "created": "2025-12-03T18:37:35Z", "updated": "2025-12-03T18:37:50Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.170", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:41:ba:29"}, {"version": 4, "addr": "192.168.122.213", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:41:ba:29"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/df72d527-943e-4e8c-b62a-63afa5f18261"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/df72d527-943e-4e8c-b62a-63afa5f18261"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-03T18:37:50.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000002", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.146 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/df72d527-943e-4e8c-b62a-63afa5f18261 used request id req-42072f64-07f1-40eb-8cc4-1d0e5be5192d request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
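[editor's note] The REQ/RESP exchange above is a token-authenticated GET against the compute API with a pinned microversion header. Roughly equivalent standalone code with keystoneauth1 is sketched below; the auth URL and credentials are placeholders, and only the nova URL and the microversion header mirror the logged request.

    # Hedged sketch: replay the GET that ceilometer's novaclient logs
    # above. Auth values are placeholders for this deployment's own.
    from keystoneauth1 import session
    from keystoneauth1.identity import v3

    auth = v3.Password(
        auth_url="https://keystone-internal.openstack.svc:5000/v3",  # placeholder
        username="ceilometer", password="secret",                    # placeholders
        project_name="service",
        user_domain_name="Default", project_domain_name="Default",
    )
    sess = session.Session(auth=auth)
    resp = sess.get(
        "https://nova-internal.openstack.svc:8774/v2.1/servers/"
        "df72d527-943e-4e8c-b62a-63afa5f18261",
        headers={"X-OpenStack-Nova-API-Version": "2.1"},
    )
    server = resp.json()["server"]
    print(server["name"], server["status"], server["OS-EXT-STS:vm_state"])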
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.148 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'df72d527-943e-4e8c-b62a-63afa5f18261', 'name': 'vn-66btob3-hjy2dfx75wfw-5fmurbrh4hte-vnf-qa644it4tdj5', 'flavor': {'id': '6cb250a4-d28c-4125-888b-653b31e29275', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e68cd467-b4e6-45e0-8e55-984fda402294'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'd2770200bdb2436c90142fa2e5ddcd47', 'user_id': '56338958b09445f5af9aa9e4601a1a8a', 'hostId': '233c08f520fd9700ef62a871bc5d558f2659759d89ea6c0726998878', 'status': 'active', 'metadata': {'metering.server_group': 'b322e118-e1cc-40be-8d8c-553648144092'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.149 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.149 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.149 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.149 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.151 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-03T18:39:14.149650) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.156 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.163 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for df72d527-943e-4e8c-b62a-63afa5f18261 / tap03bf6208-f4 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.163 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.165 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.165 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7eff8d8a80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.166 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.166 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.167 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.167 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.168 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.outgoing.bytes volume: 2174 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.168 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-03T18:39:14.167834) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.169 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/network.outgoing.bytes volume: 4512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.170 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.170 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7eff8d8a8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.171 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.172 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff9026f920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.172 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff9026f920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.173 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.173 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-03T18:39:14.172999) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.173 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.174 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/network.outgoing.packets volume: 37 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.176 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.177 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7eff8d8a8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.178 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.179 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.179 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.180 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.181 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-03T18:39:14.180437) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.181 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.outgoing.bytes.delta volume: 662 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.182 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.183 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.183 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7eff8d8a81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.184 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.185 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.186 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.186 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.187 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.188 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-03T18:39:14.186800) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.188 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-66btob3-hjy2dfx75wfw-5fmurbrh4hte-vnf-qa644it4tdj5>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-66btob3-hjy2dfx75wfw-5fmurbrh4hte-vnf-qa644it4tdj5>]
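[editor's note] The ERROR above is ceilometer's permanent-failure path: the libvirt inspector cannot supply precomputed *.rate meters (see the "LibvirtInspector does not provide data" line just before it), so the pollster raises PollsterPermanentError and the manager stops polling that resource from this source for the agent's lifetime. A minimal sketch of that blacklisting pattern follows; the manager-side bookkeeping is paraphrased, not ceilometer's exact code.

    # Hedged sketch: once a resource raises PollsterPermanentError, it is
    # excluded from every later polling cycle for this source.
    class PollsterPermanentError(Exception):
        def __init__(self, resources):
            self.resources = resources

    blacklist = set()

    def poll_cycle(resources, get_samples):
        for res in resources:
            if res in blacklist:
                continue
            try:
                get_samples(res)
            except PollsterPermanentError as err:
                blacklist.update(err.resources)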
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.190 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7eff8d7ff9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.191 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.191 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ffa10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.192 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ffa10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.193 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.193 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.incoming.bytes volume: 2010 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.194 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-03T18:39:14.193142) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.194 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/network.incoming.bytes volume: 4807 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.196 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.197 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7eff8d7fe840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.198 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.198 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8daba2d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.199 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8daba2d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.199 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.200 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-03T18:39:14.199852) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.241 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.242 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.243 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.277 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.279 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.280 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.281 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
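[editor's note] The repeated 1073741824-byte capacities above are exactly 1 GiB (2**30), matching the m1.small flavor's disk=1 and ephemeral=1 reported in the discovery output earlier. The much smaller third device per instance (485376 and 583680 bytes) is plausibly the config drive, since the server record shows "config_drive": "True"; that attribution is an inference, not stated in the log.

    # Worked check: per-device capacities against the m1.small flavor.
    GiB = 2 ** 30
    for capacity in (1073741824, 1073741824, 485376):
        print(capacity, "bytes =", capacity / GiB, "GiB")
    # -> 1.0 GiB root disk, 1.0 GiB ephemeral, ~0.00045 GiB third device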
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.282 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7eff8d8a82c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.283 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.283 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a82f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.284 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a82f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.285 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.285 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.286 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-03T18:39:14.285082) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.287 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.288 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.289 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7eff8d7ff9b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.290 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.290 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff90799b20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.291 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff90799b20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.292 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.292 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-03T18:39:14.291990) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:39:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 18:39:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:39:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 18:39:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:39:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:39:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:39:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:39:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:39:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:39:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.329 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/memory.usage volume: 48.94921875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.370 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/memory.usage volume: 49.09765625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.372 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
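[editor's note] ceilometer reports memory.usage in MB, so the two samples above say each instance is using roughly 49 MB of its 512 MB flavor allocation, i.e. just under 10%:

    # Worked check: memory.usage samples against flavor ram=512 MB.
    for used in (48.94921875, 49.09765625):
        print(round(100 * used / 512, 1), "% of instance RAM")
    # -> 9.6 % for both instances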
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.373 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7eff8d8a8350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.374 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.374 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.375 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.376 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.377 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.377 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-03T18:39:14.376237) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.379 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.380 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.380 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7eff8f682330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.381 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.381 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8f46ebd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.381 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8f46ebd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.382 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.382 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.382 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-03T18:39:14.382190) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.383 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.383 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.384 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.384 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.385 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.386 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.386 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7eff8d7ff4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.387 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.387 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.387 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.388 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.388 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-03T18:39:14.388168) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.468 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.469 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.469 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.572 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.572 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.573 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.574 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.574 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7eff8d930c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.575 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.575 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ffce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.575 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ffce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.576 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.576 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.576 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-03T18:39:14.576057) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.577 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-66btob3-hjy2dfx75wfw-5fmurbrh4hte-vnf-qa644it4tdj5>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-66btob3-hjy2dfx75wfw-5fmurbrh4hte-vnf-qa644it4tdj5>]
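The ERROR above is the manager's permanent-error path: the preceding DEBUG line notes that LibvirtInspector does not provide data for IncomingBytesRatePollster, so the pollster raises PollsterPermanentError and the manager stops polling that instance for network.incoming.bytes.rate on this source rather than retrying every interval. A simplified sketch of that handling, assuming the fail_res_list attribute and the helper names for illustration:

    from ceilometer.polling.plugin_base import PollsterPermanentError

    def poll_once(pollster, manager, resources, blacklist):
        # Skip anything this pollster has already permanently failed on.
        resources = [r for r in resources if r not in blacklist]
        try:
            return list(pollster.obj.get_samples(manager, {}, resources))
        except PollsterPermanentError as err:
            # "Prevent pollster ... from polling [...] on source ... anymore!"
            blacklist.extend(err.fail_res_list)
            return []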
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.577 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7eff8d7ff4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.578 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.578 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.578 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.579 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.579 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.latency volume: 1682579508 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.579 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-03T18:39:14.579245) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.580 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.latency volume: 260360075 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.580 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.latency volume: 147233249 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.581 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.read.latency volume: 1698039964 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.581 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.read.latency volume: 224294548 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.582 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.read.latency volume: 159520694 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.583 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.583 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7eff8d7ff530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.583 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.584 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.584 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.584 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.585 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.585 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-03T18:39:14.584697) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.586 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.586 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.587 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.587 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.588 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.588 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.589 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7eff8d7ff590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.589 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.589 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.589 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.590 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.590 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.590 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-03T18:39:14.590200) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.591 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.591 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.592 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.592 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.593 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.593 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.594 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7eff8d7ff5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.594 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.594 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.595 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.595 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.595 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.595 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-03T18:39:14.595303) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.596 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.596 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.597 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.write.bytes volume: 41811968 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.597 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.598 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.598 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.599 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7eff8d8a8620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.599 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.599 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.600 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.600 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.601 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.601 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-03T18:39:14.600500) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.601 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.602 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.602 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7eff8d7ff650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.602 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.603 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.603 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.603 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.604 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.latency volume: 6303799002 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.604 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-03T18:39:14.603924) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.604 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.latency volume: 23959545 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.605 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.605 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.write.latency volume: 9934915795 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.606 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.write.latency volume: 29522381 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.606 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.607 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.607 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7eff8d7ff6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.608 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.608 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.608 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.609 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.609 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.609 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-03T18:39:14.609037) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.610 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.610 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.610 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.write.requests volume: 235 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.611 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.611 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.612 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.612 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7eff8d7ffa40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.613 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.613 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ffef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.613 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ffef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.614 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.614 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.incoming.bytes.delta volume: 126 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.614 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.615 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.615 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7eff8d7ff710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.615 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-03T18:39:14.613977) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.616 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.616 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.616 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.617 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.617 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-03T18:39:14.617043) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.617 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.618 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7eff8d7fff20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.618 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.619 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7fff50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.619 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7fff50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.619 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.620 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.incoming.packets volume: 18 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.620 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-03T18:39:14.619699) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.620 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/network.incoming.packets volume: 30 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.621 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.621 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7eff8d7ff770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.621 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.622 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff7a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.622 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff7a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.622 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.623 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-03T18:39:14.622878) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.624 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.624 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7eff8d7fff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.624 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.625 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7fffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.625 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7fffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.625 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.626 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.626 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-03T18:39:14.625724) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.626 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.627 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.627 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7eff8d7fdac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.627 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.628 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8ef7c7d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.628 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8ef7c7d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.629 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.629 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/cpu volume: 36670000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.629 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-03T18:39:14.628980) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.629 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/cpu volume: 46350000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.630 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.631 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.632 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.632 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.632 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.633 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.633 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.633 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.633 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.633 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.634 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.634 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.634 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.634 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.634 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.635 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.635 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.635 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.635 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.636 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.636 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.636 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.636 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.636 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.637 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.637 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:39:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:39:14.637 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
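
The cpu samples earlier in this polling cycle (volumes 36670000000 and 46350000000) are cumulative guest CPU time in nanoseconds, one counter per instance. A minimal sketch of how such a cumulative counter is turned into an average utilisation figure between two successive polls; the delta, interval, and vCPU count below are hypothetical, chosen only to illustrate the arithmetic:

    NS_PER_S = 1_000_000_000

    def cpu_util_percent(prev_ns: int, curr_ns: int, interval_s: float, vcpus: int) -> float:
        """Average CPU utilisation over the polling interval, in percent."""
        used_s = (curr_ns - prev_ns) / NS_PER_S       # CPU-seconds consumed
        return 100.0 * used_s / (interval_s * vcpus)  # normalised per vCPU

    # Counter grows by 12e9 ns over a 120 s interval on 2 vCPUs -> 5.0 %
    print(cpu_util_percent(36_670_000_000, 48_670_000_000, 120, 2))
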
Dec  3 18:39:14 compute-0 podman[415280]: 2025-12-03 18:39:14.797812618 +0000 UTC m=+0.107036487 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  3 18:39:14 compute-0 podman[415281]: 2025-12-03 18:39:14.80362198 +0000 UTC m=+0.108488782 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vcs-type=git, distribution-scope=public, release=1755695350, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, container_name=openstack_network_exporter, io.openshift.expose-services=, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6)
Dec  3 18:39:14 compute-0 podman[415279]: 2025-12-03 18:39:14.828277842 +0000 UTC m=+0.137750247 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd)
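
Each health_status event above embeds the container's full edpm_ansible deployment spec in its config_data label, printed as a Python dict literal (single quotes), so it is not directly JSON-parseable. A minimal sketch, assuming podman is on PATH and the node_exporter container from the first event exists, of pulling that label back out with podman inspect and decoding it with ast.literal_eval:

    import ast
    import json
    import subprocess

    # .Config.Labels is standard podman inspect output; render it as JSON.
    out = subprocess.run(
        ["podman", "inspect", "-f", "{{json .Config.Labels}}", "node_exporter"],
        capture_output=True, text=True, check=True,
    ).stdout
    labels = json.loads(out)

    # config_data is a Python dict repr, not JSON, hence literal_eval.
    config = ast.literal_eval(labels["config_data"])
    print(config["image"], config["healthcheck"]["test"])
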
Dec  3 18:39:15 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1199: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 2.7 KiB/s wr, 0 op/s
Dec  3 18:39:16 compute-0 podman[415338]: 2025-12-03 18:39:16.976267322 +0000 UTC m=+0.145921477 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec  3 18:39:17 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1200: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 2.7 KiB/s wr, 0 op/s
Dec  3 18:39:17 compute-0 nova_compute[348325]: 2025-12-03 18:39:17.459 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:39:17 compute-0 nova_compute[348325]: 2025-12-03 18:39:17.819 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:39:18 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:39:18 compute-0 podman[415358]: 2025-12-03 18:39:18.950269329 +0000 UTC m=+0.099945502 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:39:18 compute-0 podman[415357]: 2025-12-03 18:39:18.962266953 +0000 UTC m=+0.116358244 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, config_id=edpm, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, vcs-type=git, io.openshift.expose-services=, com.redhat.component=ubi9-container, release-0.7.12=, architecture=x86_64, name=ubi9, managed_by=edpm_ansible)
Dec  3 18:39:19 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1201: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 2.7 KiB/s wr, 0 op/s
Dec  3 18:39:21 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1202: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s wr, 0 op/s
Dec  3 18:39:22 compute-0 nova_compute[348325]: 2025-12-03 18:39:22.459 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:39:22 compute-0 podman[415564]: 2025-12-03 18:39:22.556859175 +0000 UTC m=+0.123865917 container exec c4418ca0ee5df95c133db330bc8714b98e7c86be83b29540d0d4d94c3c723743 (image=quay.io/ceph/ceph:v18, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mon-compute-0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Dec  3 18:39:22 compute-0 podman[415564]: 2025-12-03 18:39:22.673505046 +0000 UTC m=+0.240511768 container exec_died c4418ca0ee5df95c133db330bc8714b98e7c86be83b29540d0d4d94c3c723743 (image=quay.io/ceph/ceph:v18, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mon-compute-0, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:39:22 compute-0 nova_compute[348325]: 2025-12-03 18:39:22.824 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:39:23 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1203: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s wr, 0 op/s
Dec  3 18:39:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:39:23.335 286999 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:39:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:39:23.337 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:39:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:39:23.338 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
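
The three lockutils lines above are one guarded call: the lock is requested, acquired after waiting 0.002s, and released after being held 0.001s. A minimal sketch, assuming oslo.concurrency is installed, of the decorator pattern that produces this acquire/wait/held DEBUG logging (the function body here is a placeholder):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        # Runs with the in-process lock held; oslo.concurrency emits the
        # Acquiring/acquired/released lines seen in the journal when DEBUG
        # logging is enabled.
        pass

    check_child_processes()
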
Dec  3 18:39:23 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:39:23 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:39:23 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:39:23 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:39:23 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:39:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 18:39:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:39:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 18:39:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:39:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011048885483818454 of space, bias 1.0, pg target 0.33146656451455364 quantized to 32 (current 32)
Dec  3 18:39:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:39:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:39:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:39:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:39:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:39:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec  3 18:39:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:39:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 18:39:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:39:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:39:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:39:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 18:39:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:39:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 18:39:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:39:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:39:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:39:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
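
Each pg_autoscaler pass above computes, per pool, pg target = (fraction of raw space used) x bias x (mon_target_pg_per_osd x number of OSDs). With the default of 100 PGs per OSD and the three OSDs behind this 60 GiB cluster, that formula reproduces the logged figures before they are quantized to a power of two (the real module also applies clamps and pool minimums, omitted here):

    TARGET_PG_PER_OSD = 100   # mon_target_pg_per_osd default
    NUM_OSDS = 3              # three OSDs back the 60 GiB cluster

    def pg_target(space_used_fraction: float, bias: float) -> float:
        return space_used_fraction * bias * TARGET_PG_PER_OSD * NUM_OSDS

    print(pg_target(0.0011048885483818454, 1.0))  # 'vms' pool -> ~0.3314665645
    print(pg_target(5.087256625643029e-07, 4.0))  # 'cephfs.cephfs.meta' -> ~0.0006104708
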
Dec  3 18:39:24 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:39:24 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:39:24 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:39:24 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:39:24 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 18:39:24 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:39:24 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 18:39:24 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:39:24 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 3e4d1b17-c770-442a-834d-694948b4166e does not exist
Dec  3 18:39:24 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 0f9aa10f-109f-476e-aa69-741cc03b6f75 does not exist
Dec  3 18:39:24 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev f6e485fb-afe7-41b9-a14b-565f3e614709 does not exist
Dec  3 18:39:24 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 18:39:24 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 18:39:24 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 18:39:24 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:39:24 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:39:24 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:39:25 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1204: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s wr, 0 op/s
Dec  3 18:39:25 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:39:25 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:39:25 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:39:25 compute-0 podman[415990]: 2025-12-03 18:39:25.693362705 +0000 UTC m=+0.115703857 container create 58d2fb4013c5d2b58820a296c55e957c061807af3a4f7ab7ab6506df1022d414 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_davinci, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Dec  3 18:39:25 compute-0 podman[415990]: 2025-12-03 18:39:25.63586009 +0000 UTC m=+0.058201252 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:39:25 compute-0 systemd[1]: Started libpod-conmon-58d2fb4013c5d2b58820a296c55e957c061807af3a4f7ab7ab6506df1022d414.scope.
Dec  3 18:39:25 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:39:25 compute-0 podman[415990]: 2025-12-03 18:39:25.821700421 +0000 UTC m=+0.244041583 container init 58d2fb4013c5d2b58820a296c55e957c061807af3a4f7ab7ab6506df1022d414 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec  3 18:39:25 compute-0 podman[415990]: 2025-12-03 18:39:25.839811764 +0000 UTC m=+0.262152886 container start 58d2fb4013c5d2b58820a296c55e957c061807af3a4f7ab7ab6506df1022d414 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_davinci, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec  3 18:39:25 compute-0 podman[415990]: 2025-12-03 18:39:25.845911812 +0000 UTC m=+0.268252934 container attach 58d2fb4013c5d2b58820a296c55e957c061807af3a4f7ab7ab6506df1022d414 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:39:25 compute-0 trusting_davinci[416005]: 167 167
Dec  3 18:39:25 compute-0 systemd[1]: libpod-58d2fb4013c5d2b58820a296c55e957c061807af3a4f7ab7ab6506df1022d414.scope: Deactivated successfully.
Dec  3 18:39:25 compute-0 podman[415990]: 2025-12-03 18:39:25.849820238 +0000 UTC m=+0.272161380 container died 58d2fb4013c5d2b58820a296c55e957c061807af3a4f7ab7ab6506df1022d414 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_davinci, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  3 18:39:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-05fc62b792fa8f083f7eeba4e0c201fdbda5b618f4948c71c10529693b2c2413-merged.mount: Deactivated successfully.
Dec  3 18:39:25 compute-0 podman[415990]: 2025-12-03 18:39:25.916322893 +0000 UTC m=+0.338664015 container remove 58d2fb4013c5d2b58820a296c55e957c061807af3a4f7ab7ab6506df1022d414 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_davinci, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:39:25 compute-0 systemd[1]: libpod-conmon-58d2fb4013c5d2b58820a296c55e957c061807af3a4f7ab7ab6506df1022d414.scope: Deactivated successfully.
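
The trusting_davinci block above is one complete short-lived cephadm helper container: image pull, create, init, start, attach, output, died, remove, all within a second. A minimal sketch, assuming podman is running locally, that follows podman events as JSON and groups status transitions per container, which makes such lifecycles easier to read than interleaved journal lines:

    import json
    import subprocess
    from collections import defaultdict

    # Stream container events; --format json emits one JSON object per line.
    proc = subprocess.Popen(
        ["podman", "events", "--filter", "type=container", "--format", "json"],
        stdout=subprocess.PIPE, text=True,
    )

    history = defaultdict(list)
    for line in proc.stdout:
        event = json.loads(line)
        name = event.get("Name", "?")
        history[name].append(event.get("Status", "?"))
        # e.g. trusting_davinci: create -> init -> start -> attach -> died -> remove
        print(f"{name}: {' -> '.join(history[name])}")
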
Dec  3 18:39:26 compute-0 podman[416030]: 2025-12-03 18:39:26.197973924 +0000 UTC m=+0.073694882 container create 77c2820a98d15982e93473d65677e2b22163974a7c35fca8625b75c5eb2c3484 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_jepsen, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:39:26 compute-0 podman[416030]: 2025-12-03 18:39:26.161329739 +0000 UTC m=+0.037050677 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:39:26 compute-0 systemd[1]: Started libpod-conmon-77c2820a98d15982e93473d65677e2b22163974a7c35fca8625b75c5eb2c3484.scope.
Dec  3 18:39:26 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:39:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d343abd74968884fdda9df3d01a01ed25491cad0b4319351c2b16d63384b9965/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:39:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d343abd74968884fdda9df3d01a01ed25491cad0b4319351c2b16d63384b9965/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:39:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d343abd74968884fdda9df3d01a01ed25491cad0b4319351c2b16d63384b9965/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:39:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d343abd74968884fdda9df3d01a01ed25491cad0b4319351c2b16d63384b9965/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:39:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d343abd74968884fdda9df3d01a01ed25491cad0b4319351c2b16d63384b9965/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 18:39:26 compute-0 podman[416030]: 2025-12-03 18:39:26.316348136 +0000 UTC m=+0.192069064 container init 77c2820a98d15982e93473d65677e2b22163974a7c35fca8625b75c5eb2c3484 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_jepsen, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  3 18:39:26 compute-0 podman[416030]: 2025-12-03 18:39:26.328571725 +0000 UTC m=+0.204292643 container start 77c2820a98d15982e93473d65677e2b22163974a7c35fca8625b75c5eb2c3484 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_jepsen, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  3 18:39:26 compute-0 podman[416030]: 2025-12-03 18:39:26.334626332 +0000 UTC m=+0.210347281 container attach 77c2820a98d15982e93473d65677e2b22163974a7c35fca8625b75c5eb2c3484 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_jepsen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec  3 18:39:27 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1205: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:39:27 compute-0 nova_compute[348325]: 2025-12-03 18:39:27.463 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:39:27 compute-0 interesting_jepsen[416046]: --> passed data devices: 0 physical, 3 LVM
Dec  3 18:39:27 compute-0 interesting_jepsen[416046]: --> relative data size: 1.0
Dec  3 18:39:27 compute-0 interesting_jepsen[416046]: --> All data devices are unavailable
Dec  3 18:39:27 compute-0 systemd[1]: libpod-77c2820a98d15982e93473d65677e2b22163974a7c35fca8625b75c5eb2c3484.scope: Deactivated successfully.
Dec  3 18:39:27 compute-0 systemd[1]: libpod-77c2820a98d15982e93473d65677e2b22163974a7c35fca8625b75c5eb2c3484.scope: Consumed 1.109s CPU time.
Dec  3 18:39:27 compute-0 podman[416030]: 2025-12-03 18:39:27.53706708 +0000 UTC m=+1.412788008 container died 77c2820a98d15982e93473d65677e2b22163974a7c35fca8625b75c5eb2c3484 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Dec  3 18:39:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-d343abd74968884fdda9df3d01a01ed25491cad0b4319351c2b16d63384b9965-merged.mount: Deactivated successfully.
Dec  3 18:39:27 compute-0 podman[416030]: 2025-12-03 18:39:27.609507481 +0000 UTC m=+1.485228399 container remove 77c2820a98d15982e93473d65677e2b22163974a7c35fca8625b75c5eb2c3484 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_jepsen, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:39:27 compute-0 systemd[1]: libpod-conmon-77c2820a98d15982e93473d65677e2b22163974a7c35fca8625b75c5eb2c3484.scope: Deactivated successfully.
Dec  3 18:39:27 compute-0 podman[416075]: 2025-12-03 18:39:27.672029928 +0000 UTC m=+0.110303886 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  3 18:39:27 compute-0 nova_compute[348325]: 2025-12-03 18:39:27.831 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:39:28 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:39:28 compute-0 podman[416247]: 2025-12-03 18:39:28.546311888 +0000 UTC m=+0.071330893 container create a3c6929d6550c76bac7356ff6b48f687c2beb7505d0e99172406056f43ba8804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_robinson, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  3 18:39:28 compute-0 systemd[1]: Started libpod-conmon-a3c6929d6550c76bac7356ff6b48f687c2beb7505d0e99172406056f43ba8804.scope.
Dec  3 18:39:28 compute-0 podman[416247]: 2025-12-03 18:39:28.514617324 +0000 UTC m=+0.039636379 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:39:28 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:39:28 compute-0 podman[416247]: 2025-12-03 18:39:28.654735687 +0000 UTC m=+0.179754692 container init a3c6929d6550c76bac7356ff6b48f687c2beb7505d0e99172406056f43ba8804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:39:28 compute-0 podman[416247]: 2025-12-03 18:39:28.672711046 +0000 UTC m=+0.197730011 container start a3c6929d6550c76bac7356ff6b48f687c2beb7505d0e99172406056f43ba8804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_robinson, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec  3 18:39:28 compute-0 podman[416247]: 2025-12-03 18:39:28.676430057 +0000 UTC m=+0.201449062 container attach a3c6929d6550c76bac7356ff6b48f687c2beb7505d0e99172406056f43ba8804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_robinson, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:39:28 compute-0 quizzical_robinson[416260]: 167 167
Dec  3 18:39:28 compute-0 systemd[1]: libpod-a3c6929d6550c76bac7356ff6b48f687c2beb7505d0e99172406056f43ba8804.scope: Deactivated successfully.
Dec  3 18:39:28 compute-0 podman[416265]: 2025-12-03 18:39:28.756977976 +0000 UTC m=+0.048815994 container died a3c6929d6550c76bac7356ff6b48f687c2beb7505d0e99172406056f43ba8804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_robinson, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec  3 18:39:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f7ca18867b2973354402c3bb11a0b1e61fc51bf72d3231272d0af31809a9bc7-merged.mount: Deactivated successfully.
Dec  3 18:39:28 compute-0 podman[416265]: 2025-12-03 18:39:28.836804186 +0000 UTC m=+0.128642144 container remove a3c6929d6550c76bac7356ff6b48f687c2beb7505d0e99172406056f43ba8804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_robinson, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec  3 18:39:28 compute-0 systemd[1]: libpod-conmon-a3c6929d6550c76bac7356ff6b48f687c2beb7505d0e99172406056f43ba8804.scope: Deactivated successfully.
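
Both one-shot containers above print "167 167" before exiting: 167 is the uid and gid of the ceph user inside these images, and cephadm probes it by running stat on /var/lib/ceph inside the image so it can chown host-side files to match. A plausible reconstruction of that probe, assuming the quay.io/ceph/ceph:v18 image used above:

    import subprocess

    # Ask the image which uid/gid owns /var/lib/ceph (the ceph user);
    # --entrypoint overrides whatever entrypoint the image defines.
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat",
         "quay.io/ceph/ceph:v18", "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    ).stdout
    uid, gid = out.split()
    print(uid, gid)  # expected: 167 167
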
Dec  3 18:39:29 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1206: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:39:29 compute-0 podman[416286]: 2025-12-03 18:39:29.112120912 +0000 UTC m=+0.068079374 container create e57cf803111917bb6e113f4eedf2f41bf4d6c1284808b001b07b5c4284acd056 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_rosalind, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  3 18:39:29 compute-0 systemd[1]: Started libpod-conmon-e57cf803111917bb6e113f4eedf2f41bf4d6c1284808b001b07b5c4284acd056.scope.
Dec  3 18:39:29 compute-0 podman[416286]: 2025-12-03 18:39:29.087940221 +0000 UTC m=+0.043898693 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:39:29 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:39:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcbc06f0430d9007ca90403782e2cb29d2d18b3bc4089943b29a8df0e644ac4e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:39:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcbc06f0430d9007ca90403782e2cb29d2d18b3bc4089943b29a8df0e644ac4e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:39:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcbc06f0430d9007ca90403782e2cb29d2d18b3bc4089943b29a8df0e644ac4e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:39:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcbc06f0430d9007ca90403782e2cb29d2d18b3bc4089943b29a8df0e644ac4e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:39:29 compute-0 podman[416286]: 2025-12-03 18:39:29.24139329 +0000 UTC m=+0.197351772 container init e57cf803111917bb6e113f4eedf2f41bf4d6c1284808b001b07b5c4284acd056 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_rosalind, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef)
Dec  3 18:39:29 compute-0 podman[416286]: 2025-12-03 18:39:29.25281934 +0000 UTC m=+0.208777812 container start e57cf803111917bb6e113f4eedf2f41bf4d6c1284808b001b07b5c4284acd056 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_rosalind, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:39:29 compute-0 podman[416286]: 2025-12-03 18:39:29.257650798 +0000 UTC m=+0.213609280 container attach e57cf803111917bb6e113f4eedf2f41bf4d6c1284808b001b07b5c4284acd056 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_rosalind, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  3 18:39:29 compute-0 podman[158200]: time="2025-12-03T18:39:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:39:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:39:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45382 "" "Go-http-client/1.1"
Dec  3 18:39:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:39:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9042 "" "Go-http-client/1.1"
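[editor's note] The two GET lines above are hits on podman's libpod REST API, served by the long-running podman[158200] service over its Unix socket (the podman_exporter container later in this log is configured with CONTAINER_HOST=unix:///run/podman/podman.sock, presumably the same endpoint). A minimal sketch of issuing the same containers/json query from Python over that socket — the socket path and API version are copied from the log; everything else is illustrative:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that talks to a Unix socket instead of TCP."""
        def __init__(self, socket_path):
            super().__init__("localhost")  # host is ignored for a Unix socket
            self.socket_path = socket_path
        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.socket_path)
            self.sock = sock

    # Default rootful socket path; an assumption based on the exporter config.
    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")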
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]: {
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:    "0": [
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:        {
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:            "devices": [
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:                "/dev/loop3"
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:            ],
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:            "lv_name": "ceph_lv0",
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:            "lv_size": "21470642176",
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=973fbbc8-5aff-4a53-bee8-42e5a6788dd6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:            "lv_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:            "name": "ceph_lv0",
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:            "tags": {
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:                "ceph.block_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:                "ceph.cluster_name": "ceph",
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:                "ceph.crush_device_class": "",
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:                "ceph.encrypted": "0",
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:                "ceph.osd_fsid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:                "ceph.osd_id": "0",
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:                "ceph.type": "block",
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:                "ceph.vdo": "0"
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:            },
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:            "type": "block",
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:            "vg_name": "ceph_vg0"
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:        }
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:    ],
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:    "1": [
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:        {
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:            "devices": [
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:                "/dev/loop4"
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:            ],
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:            "lv_name": "ceph_lv1",
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:            "lv_size": "21470642176",
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1e2b0083-5293-47cb-a3d1-bc27cedc4ede,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:            "lv_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:            "name": "ceph_lv1",
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:            "tags": {
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:                "ceph.block_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:                "ceph.cluster_name": "ceph",
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:                "ceph.crush_device_class": "",
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:                "ceph.encrypted": "0",
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:                "ceph.osd_fsid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:                "ceph.osd_id": "1",
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:                "ceph.type": "block",
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:                "ceph.vdo": "0"
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:            },
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:            "type": "block",
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:            "vg_name": "ceph_vg1"
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:        }
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:    ],
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:    "2": [
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:        {
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:            "devices": [
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:                "/dev/loop5"
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:            ],
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:            "lv_name": "ceph_lv2",
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:            "lv_size": "21470642176",
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2abec9de-afba-437e-9a17-384a1dd8cd50,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:            "lv_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:            "name": "ceph_lv2",
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:            "tags": {
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:                "ceph.block_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:                "ceph.cluster_name": "ceph",
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:                "ceph.crush_device_class": "",
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:                "ceph.encrypted": "0",
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:                "ceph.osd_fsid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:                "ceph.osd_id": "2",
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:                "ceph.type": "block",
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:                "ceph.vdo": "0"
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:            },
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:            "type": "block",
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:            "vg_name": "ceph_vg2"
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:        }
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]:    ]
Dec  3 18:39:30 compute-0 thirsty_rosalind[416304]: }
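[editor's note] The JSON printed by the thirsty_rosalind container above has the shape of `ceph-volume lvm list --format json` output (a reasonable reading, since cephadm runs ceph-volume in short-lived containers like these): the top-level keys are OSD ids, and each entry describes the LVM logical volume backing that OSD, with the LVM tags given both as the raw `lv_tags` string and as the parsed `tags` object. A minimal sketch of pulling the OSD-to-device mapping out of a captured copy (the file name is hypothetical; field names are taken from the listing above):

    import json

    # Hypothetical capture of the container stdout shown above, e.g.:
    #   podman logs thirsty_rosalind > lvm_list.json
    with open("lvm_list.json") as f:
        inventory = json.load(f)

    # Top-level keys are OSD ids ("0", "1", "2"); each value is the list
    # of logical volumes backing that OSD.
    for osd_id, lvs in sorted(inventory.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
                  f"(osd_fsid={tags['ceph.osd_fsid']}, type={lv['type']})")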
Dec  3 18:39:30 compute-0 systemd[1]: libpod-e57cf803111917bb6e113f4eedf2f41bf4d6c1284808b001b07b5c4284acd056.scope: Deactivated successfully.
Dec  3 18:39:30 compute-0 podman[416286]: 2025-12-03 18:39:30.146416592 +0000 UTC m=+1.102375054 container died e57cf803111917bb6e113f4eedf2f41bf4d6c1284808b001b07b5c4284acd056 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_rosalind, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:39:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-bcbc06f0430d9007ca90403782e2cb29d2d18b3bc4089943b29a8df0e644ac4e-merged.mount: Deactivated successfully.
Dec  3 18:39:30 compute-0 podman[416286]: 2025-12-03 18:39:30.234648208 +0000 UTC m=+1.190606670 container remove e57cf803111917bb6e113f4eedf2f41bf4d6c1284808b001b07b5c4284acd056 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_rosalind, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:39:30 compute-0 systemd[1]: libpod-conmon-e57cf803111917bb6e113f4eedf2f41bf4d6c1284808b001b07b5c4284acd056.scope: Deactivated successfully.
Dec  3 18:39:31 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1207: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:39:31 compute-0 podman[416461]: 2025-12-03 18:39:31.21792538 +0000 UTC m=+0.068125234 container create 24913589380bc466bc8a6539092473c3fc000a672669006daee84c70ad4beedf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:39:31 compute-0 podman[416461]: 2025-12-03 18:39:31.190821279 +0000 UTC m=+0.041021133 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:39:31 compute-0 systemd[1]: Started libpod-conmon-24913589380bc466bc8a6539092473c3fc000a672669006daee84c70ad4beedf.scope.
Dec  3 18:39:31 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:39:31 compute-0 podman[416461]: 2025-12-03 18:39:31.362928244 +0000 UTC m=+0.213128128 container init 24913589380bc466bc8a6539092473c3fc000a672669006daee84c70ad4beedf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_hamilton, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:39:31 compute-0 podman[416461]: 2025-12-03 18:39:31.376546796 +0000 UTC m=+0.226746650 container start 24913589380bc466bc8a6539092473c3fc000a672669006daee84c70ad4beedf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_hamilton, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:39:31 compute-0 podman[416461]: 2025-12-03 18:39:31.382587464 +0000 UTC m=+0.232787348 container attach 24913589380bc466bc8a6539092473c3fc000a672669006daee84c70ad4beedf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec  3 18:39:31 compute-0 systemd[1]: libpod-24913589380bc466bc8a6539092473c3fc000a672669006daee84c70ad4beedf.scope: Deactivated successfully.
Dec  3 18:39:31 compute-0 xenodochial_hamilton[416480]: 167 167
Dec  3 18:39:31 compute-0 conmon[416480]: conmon 24913589380bc466bc8a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-24913589380bc466bc8a6539092473c3fc000a672669006daee84c70ad4beedf.scope/container/memory.events
Dec  3 18:39:31 compute-0 podman[416461]: 2025-12-03 18:39:31.386821467 +0000 UTC m=+0.237021351 container died 24913589380bc466bc8a6539092473c3fc000a672669006daee84c70ad4beedf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_hamilton, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:39:31 compute-0 podman[416479]: 2025-12-03 18:39:31.40371869 +0000 UTC m=+0.111634549 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute)
Dec  3 18:39:31 compute-0 openstack_network_exporter[365222]: ERROR   18:39:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:39:31 compute-0 openstack_network_exporter[365222]: ERROR   18:39:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:39:31 compute-0 openstack_network_exporter[365222]: ERROR   18:39:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:39:31 compute-0 openstack_network_exporter[365222]: ERROR   18:39:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:39:31 compute-0 openstack_network_exporter[365222]: ERROR   18:39:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:39:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-ffa7d204df57e36bf606847d21a65c9d7db7199fac2bc4b3de5fe625ad002b33-merged.mount: Deactivated successfully.
Dec  3 18:39:31 compute-0 podman[416475]: 2025-12-03 18:39:31.451416155 +0000 UTC m=+0.154782913 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:39:31 compute-0 podman[416461]: 2025-12-03 18:39:31.460695762 +0000 UTC m=+0.310895616 container remove 24913589380bc466bc8a6539092473c3fc000a672669006daee84c70ad4beedf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec  3 18:39:31 compute-0 systemd[1]: libpod-conmon-24913589380bc466bc8a6539092473c3fc000a672669006daee84c70ad4beedf.scope: Deactivated successfully.
Dec  3 18:39:31 compute-0 podman[416543]: 2025-12-03 18:39:31.703205787 +0000 UTC m=+0.093056415 container create 319b287e3a512788b815e6e5022cd89e1a431b8ba626245a0c8549c2a5c30775 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_bardeen, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:39:31 compute-0 podman[416543]: 2025-12-03 18:39:31.665245 +0000 UTC m=+0.055095698 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:39:31 compute-0 systemd[1]: Started libpod-conmon-319b287e3a512788b815e6e5022cd89e1a431b8ba626245a0c8549c2a5c30775.scope.
Dec  3 18:39:31 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:39:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/298faeb7143f9706d040f291e024c56dd31f4e19c72838ea480890800c50bea8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:39:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/298faeb7143f9706d040f291e024c56dd31f4e19c72838ea480890800c50bea8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:39:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/298faeb7143f9706d040f291e024c56dd31f4e19c72838ea480890800c50bea8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:39:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/298faeb7143f9706d040f291e024c56dd31f4e19c72838ea480890800c50bea8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:39:31 compute-0 podman[416543]: 2025-12-03 18:39:31.876563073 +0000 UTC m=+0.266413761 container init 319b287e3a512788b815e6e5022cd89e1a431b8ba626245a0c8549c2a5c30775 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_bardeen, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:39:31 compute-0 podman[416543]: 2025-12-03 18:39:31.905386976 +0000 UTC m=+0.295237584 container start 319b287e3a512788b815e6e5022cd89e1a431b8ba626245a0c8549c2a5c30775 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_bardeen, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  3 18:39:31 compute-0 podman[416543]: 2025-12-03 18:39:31.911162707 +0000 UTC m=+0.301013345 container attach 319b287e3a512788b815e6e5022cd89e1a431b8ba626245a0c8549c2a5c30775 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec  3 18:39:32 compute-0 nova_compute[348325]: 2025-12-03 18:39:32.464 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:39:32 compute-0 nova_compute[348325]: 2025-12-03 18:39:32.834 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:39:32 compute-0 pedantic_bardeen[416560]: {
Dec  3 18:39:32 compute-0 pedantic_bardeen[416560]:    "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
Dec  3 18:39:32 compute-0 pedantic_bardeen[416560]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:39:32 compute-0 pedantic_bardeen[416560]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 18:39:32 compute-0 pedantic_bardeen[416560]:        "osd_id": 1,
Dec  3 18:39:32 compute-0 pedantic_bardeen[416560]:        "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:39:32 compute-0 pedantic_bardeen[416560]:        "type": "bluestore"
Dec  3 18:39:32 compute-0 pedantic_bardeen[416560]:    },
Dec  3 18:39:32 compute-0 pedantic_bardeen[416560]:    "2abec9de-afba-437e-9a17-384a1dd8cd50": {
Dec  3 18:39:32 compute-0 pedantic_bardeen[416560]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:39:32 compute-0 pedantic_bardeen[416560]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 18:39:32 compute-0 pedantic_bardeen[416560]:        "osd_id": 2,
Dec  3 18:39:32 compute-0 pedantic_bardeen[416560]:        "osd_uuid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:39:32 compute-0 pedantic_bardeen[416560]:        "type": "bluestore"
Dec  3 18:39:32 compute-0 pedantic_bardeen[416560]:    },
Dec  3 18:39:32 compute-0 pedantic_bardeen[416560]:    "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": {
Dec  3 18:39:32 compute-0 pedantic_bardeen[416560]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:39:32 compute-0 pedantic_bardeen[416560]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 18:39:32 compute-0 pedantic_bardeen[416560]:        "osd_id": 0,
Dec  3 18:39:32 compute-0 pedantic_bardeen[416560]:        "osd_uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:39:32 compute-0 pedantic_bardeen[416560]:        "type": "bluestore"
Dec  3 18:39:32 compute-0 pedantic_bardeen[416560]:    }
Dec  3 18:39:32 compute-0 pedantic_bardeen[416560]: }
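[editor's note] The pedantic_bardeen container then prints a second inventory, keyed by OSD fsid rather than OSD id, each entry naming the device-mapper path and the object-store type (bluestore). Joining it to the earlier listing on `ceph.osd_fsid` shows that both views describe the same three OSDs; a minimal sketch under the same captured-file assumption (both file names hypothetical):

    import json

    with open("lvm_list.json") as f:   # the id-keyed listing above
        by_osd_id = json.load(f)
    with open("osd_scan.json") as f:   # the fsid-keyed listing from this block
        by_fsid = json.load(f)

    # Invert the first view to osd_fsid -> osd_id, then check both agree.
    fsid_to_id = {lv["tags"]["ceph.osd_fsid"]: int(osd_id)
                  for osd_id, lvs in by_osd_id.items()
                  for lv in lvs}
    for fsid, entry in by_fsid.items():
        assert fsid_to_id[fsid] == entry["osd_id"], fsid
        print(f"osd.{entry['osd_id']} ({entry['type']}) -> {entry['device']}")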
Dec  3 18:39:33 compute-0 systemd[1]: libpod-319b287e3a512788b815e6e5022cd89e1a431b8ba626245a0c8549c2a5c30775.scope: Deactivated successfully.
Dec  3 18:39:33 compute-0 systemd[1]: libpod-319b287e3a512788b815e6e5022cd89e1a431b8ba626245a0c8549c2a5c30775.scope: Consumed 1.098s CPU time.
Dec  3 18:39:33 compute-0 podman[416543]: 2025-12-03 18:39:33.009735318 +0000 UTC m=+1.399585916 container died 319b287e3a512788b815e6e5022cd89e1a431b8ba626245a0c8549c2a5c30775 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_bardeen, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:39:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-298faeb7143f9706d040f291e024c56dd31f4e19c72838ea480890800c50bea8-merged.mount: Deactivated successfully.
Dec  3 18:39:33 compute-0 podman[416543]: 2025-12-03 18:39:33.078921518 +0000 UTC m=+1.468772106 container remove 319b287e3a512788b815e6e5022cd89e1a431b8ba626245a0c8549c2a5c30775 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_bardeen, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec  3 18:39:33 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1208: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:39:33 compute-0 systemd[1]: libpod-conmon-319b287e3a512788b815e6e5022cd89e1a431b8ba626245a0c8549c2a5c30775.scope: Deactivated successfully.
Dec  3 18:39:33 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:39:33 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:39:33 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:39:33 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:39:33 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 91185f02-a380-4c35-937c-e8d244c2d769 does not exist
Dec  3 18:39:33 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 4ab54373-05f7-4b34-8e2a-ccb00d28adc0 does not exist
Dec  3 18:39:33 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:39:34 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:39:34 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:39:35 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1209: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:39:37 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1210: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:39:37 compute-0 nova_compute[348325]: 2025-12-03 18:39:37.467 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:39:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 18:39:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3288562281' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 18:39:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 18:39:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3288562281' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 18:39:37 compute-0 nova_compute[348325]: 2025-12-03 18:39:37.836 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:39:38 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:39:39 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1211: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:39:41 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1212: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:39:42 compute-0 nova_compute[348325]: 2025-12-03 18:39:42.468 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:39:42 compute-0 nova_compute[348325]: 2025-12-03 18:39:42.839 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:39:43 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1213: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:39:43 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:39:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:39:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:39:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:39:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:39:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:39:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:39:45 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1214: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:39:45 compute-0 podman[416657]: 2025-12-03 18:39:45.948957896 +0000 UTC m=+0.099926952 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, version=9.6, config_id=edpm, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec  3 18:39:45 compute-0 podman[416655]: 2025-12-03 18:39:45.953838286 +0000 UTC m=+0.102182318 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible)
Dec  3 18:39:45 compute-0 podman[416656]: 2025-12-03 18:39:45.978246572 +0000 UTC m=+0.129689119 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  3 18:39:47 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1215: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:39:47 compute-0 nova_compute[348325]: 2025-12-03 18:39:47.472 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:39:47 compute-0 nova_compute[348325]: 2025-12-03 18:39:47.840 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:39:47 compute-0 podman[416716]: 2025-12-03 18:39:47.926978063 +0000 UTC m=+0.077626027 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  3 18:39:48 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:39:49 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1216: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:39:49 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec  3 18:39:49 compute-0 podman[416737]: 2025-12-03 18:39:49.227887316 +0000 UTC m=+0.075305941 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible)
Dec  3 18:39:49 compute-0 podman[416736]: 2025-12-03 18:39:49.233979785 +0000 UTC m=+0.087175951 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.buildah.version=1.29.0, release-0.7.12=, io.openshift.expose-services=, release=1214.1726694543, architecture=x86_64, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., distribution-scope=public, io.openshift.tags=base rhel9, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., name=ubi9)
Dec  3 18:39:50 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec  3 18:39:51 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1217: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Dec  3 18:39:52 compute-0 nova_compute[348325]: 2025-12-03 18:39:52.475 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:39:52 compute-0 nova_compute[348325]: 2025-12-03 18:39:52.843 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:39:53 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1218: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Dec  3 18:39:53 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:39:55 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1219: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Dec  3 18:39:57 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1220: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Dec  3 18:39:57 compute-0 nova_compute[348325]: 2025-12-03 18:39:57.477 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:39:57 compute-0 nova_compute[348325]: 2025-12-03 18:39:57.846 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:39:57 compute-0 podman[416775]: 2025-12-03 18:39:57.898521046 +0000 UTC m=+0.064130568 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 18:39:58 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:39:59 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1221: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Dec  3 18:39:59 compute-0 podman[158200]: time="2025-12-03T18:39:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:39:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:39:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 18:39:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:39:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8631 "" "Go-http-client/1.1"
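[Editor's note] The two GET requests above are the podman system service (pid 158200) answering prometheus-podman-exporter over the unix socket it is configured with earlier in this log (CONTAINER_HOST=unix:///run/podman/podman.sock). A minimal sketch of issuing the same list-containers query with only the Python standard library; the UnixHTTPConnection helper is hypothetical glue, the endpoint path is copied from the access-log line above, and the Names/State fields are standard libpod list-container output, so verify against your podman version.

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client over a unix-domain socket instead of TCP (sketch)."""
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    # Same endpoint the exporter hits in the access-log line above.
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c.get("Names"), c.get("State"))
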
Dec  3 18:40:01 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1222: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Dec  3 18:40:01 compute-0 openstack_network_exporter[365222]: ERROR   18:40:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:40:01 compute-0 openstack_network_exporter[365222]: ERROR   18:40:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:40:01 compute-0 openstack_network_exporter[365222]: ERROR   18:40:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:40:01 compute-0 openstack_network_exporter[365222]: ERROR   18:40:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:40:01 compute-0 openstack_network_exporter[365222]: ERROR   18:40:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
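[Editor's note] The ERROR lines above are openstack-network-exporter probing OVS/OVN daemons through their appctl control sockets; ovn-northd does not run on a compute node, so no socket file exists and the lookup fails, and the dpif-netdev calls fail likewise because this host has no userspace datapath. A rough sketch of the daemon.<pid>.ctl lookup convention those messages refer to, assuming the stock OVS run directory; the exporter's actual search path may differ.

    import glob
    import os

    RUN_DIR = "/var/run/openvswitch"  # assumption: default OVS run directory

    def find_ctl(daemon):
        """Return a control socket path like ovs-vswitchd.<pid>.ctl, or None."""
        matches = glob.glob(os.path.join(RUN_DIR, daemon + ".*.ctl"))
        return matches[0] if matches else None

    for daemon in ("ovsdb-server", "ovs-vswitchd", "ovn-northd"):
        sock = find_ctl(daemon)
        print(daemon, "->", sock or "no control socket files found")
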
Dec  3 18:40:01 compute-0 podman[416798]: 2025-12-03 18:40:01.927896031 +0000 UTC m=+0.092734568 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec  3 18:40:01 compute-0 podman[416797]: 2025-12-03 18:40:01.960262611 +0000 UTC m=+0.125160479 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible)
Dec  3 18:40:02 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Dec  3 18:40:02 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:40:02.320528) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 18:40:02 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Dec  3 18:40:02 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764787202320654, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 1290, "num_deletes": 251, "total_data_size": 2040293, "memory_usage": 2070256, "flush_reason": "Manual Compaction"}
Dec  3 18:40:02 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Dec  3 18:40:02 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764787202335157, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 1999574, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24262, "largest_seqno": 25551, "table_properties": {"data_size": 1993349, "index_size": 3492, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 12811, "raw_average_key_size": 19, "raw_value_size": 1981026, "raw_average_value_size": 3066, "num_data_blocks": 157, "num_entries": 646, "num_filter_entries": 646, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764787069, "oldest_key_time": 1764787069, "file_creation_time": 1764787202, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a1ac3b74-8599-4a51-8b4c-6fd35a134427", "db_session_id": "TYOLZSJOOVNJYKF8Y1CE", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Dec  3 18:40:02 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 14700 microseconds, and 6922 cpu microseconds.
Dec  3 18:40:02 compute-0 ceph-mon[192802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 18:40:02 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:40:02.335238) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 1999574 bytes OK
Dec  3 18:40:02 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:40:02.335257) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Dec  3 18:40:02 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:40:02.337044) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Dec  3 18:40:02 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:40:02.337059) EVENT_LOG_v1 {"time_micros": 1764787202337054, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 18:40:02 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:40:02.337077) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 18:40:02 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 2034492, prev total WAL file size 2034492, number of live WAL files 2.
Dec  3 18:40:02 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 18:40:02 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:40:02.338054) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Dec  3 18:40:02 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 18:40:02 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(1952KB)], [56(7038KB)]
Dec  3 18:40:02 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764787202338110, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 9207259, "oldest_snapshot_seqno": -1}
Dec  3 18:40:02 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 4628 keys, 7469240 bytes, temperature: kUnknown
Dec  3 18:40:02 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764787202384030, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 7469240, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7438090, "index_size": 18496, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11589, "raw_key_size": 115975, "raw_average_key_size": 25, "raw_value_size": 7353992, "raw_average_value_size": 1589, "num_data_blocks": 767, "num_entries": 4628, "num_filter_entries": 4628, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764784942, "oldest_key_time": 0, "file_creation_time": 1764787202, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a1ac3b74-8599-4a51-8b4c-6fd35a134427", "db_session_id": "TYOLZSJOOVNJYKF8Y1CE", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Dec  3 18:40:02 compute-0 ceph-mon[192802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 18:40:02 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:40:02.384228) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 7469240 bytes
Dec  3 18:40:02 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:40:02.385880) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 200.2 rd, 162.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 6.9 +0.0 blob) out(7.1 +0.0 blob), read-write-amplify(8.3) write-amplify(3.7) OK, records in: 5142, records dropped: 514 output_compression: NoCompression
Dec  3 18:40:02 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:40:02.385894) EVENT_LOG_v1 {"time_micros": 1764787202385887, "job": 30, "event": "compaction_finished", "compaction_time_micros": 45984, "compaction_time_cpu_micros": 20218, "output_level": 6, "num_output_files": 1, "total_output_size": 7469240, "num_input_records": 5142, "num_output_records": 4628, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 18:40:02 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 18:40:02 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764787202386276, "job": 30, "event": "table_file_deletion", "file_number": 58}
Dec  3 18:40:02 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 18:40:02 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764787202387408, "job": 30, "event": "table_file_deletion", "file_number": 56}
Dec  3 18:40:02 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:40:02.337834) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:40:02 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:40:02.387632) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:40:02 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:40:02.387638) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:40:02 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:40:02.387640) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:40:02 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:40:02.387641) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:40:02 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:40:02.387643) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
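[Editor's note] The JOB 30 summary above prints write-amplify(3.7) and read-write-amplify(8.3); both follow directly from the in/out sizes on that same line. A quick check with the MB figures as logged (the log rounds to one decimal, so the recomputed values can differ in the last digit):

    # Sizes from the compaction summary: in(1.9, 6.9) MB, out(7.1) MB.
    l0_in_mb, l6_in_mb, out_mb = 1.9, 6.9, 7.1

    write_amplify = out_mb / l0_in_mb                        # bytes written per new L0 byte
    read_write_amplify = (l0_in_mb + l6_in_mb + out_mb) / l0_in_mb

    print(f"write-amplify ~ {write_amplify:.1f}")            # ~3.7, as logged
    print(f"read-write-amplify ~ {read_write_amplify:.1f}")  # ~8.4 vs 8.3 logged (rounding)
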
Dec  3 18:40:02 compute-0 nova_compute[348325]: 2025-12-03 18:40:02.479 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:40:02 compute-0 nova_compute[348325]: 2025-12-03 18:40:02.848 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:40:03 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1223: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:40:03 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:40:05 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1224: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:40:05 compute-0 nova_compute[348325]: 2025-12-03 18:40:05.459 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:40:05 compute-0 nova_compute[348325]: 2025-12-03 18:40:05.459 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:40:05 compute-0 nova_compute[348325]: 2025-12-03 18:40:05.460 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  3 18:40:05 compute-0 nova_compute[348325]: 2025-12-03 18:40:05.460 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  3 18:40:06 compute-0 nova_compute[348325]: 2025-12-03 18:40:06.006 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "refresh_cache-1ca1fbdb-089c-4544-821e-0542089b8424" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  3 18:40:06 compute-0 nova_compute[348325]: 2025-12-03 18:40:06.007 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquired lock "refresh_cache-1ca1fbdb-089c-4544-821e-0542089b8424" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  3 18:40:06 compute-0 nova_compute[348325]: 2025-12-03 18:40:06.008 348329 DEBUG nova.network.neutron [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec  3 18:40:06 compute-0 nova_compute[348325]: 2025-12-03 18:40:06.008 348329 DEBUG nova.objects.instance [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lazy-loading 'info_cache' on Instance uuid 1ca1fbdb-089c-4544-821e-0542089b8424 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  3 18:40:07 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1225: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:40:07 compute-0 nova_compute[348325]: 2025-12-03 18:40:07.481 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:40:07 compute-0 nova_compute[348325]: 2025-12-03 18:40:07.852 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:40:08 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:40:09 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1226: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:40:09 compute-0 nova_compute[348325]: 2025-12-03 18:40:09.373 348329 DEBUG nova.network.neutron [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Updating instance_info_cache with network_info: [{"id": "3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b", "address": "fa:16:3e:ea:1b:25", "network": {"id": "85c8d446-ad7f-4d1b-a311-89b0b07e8aad", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.128", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2770200bdb2436c90142fa2e5ddcd47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d8505a1-5c", "ovs_interfaceid": "3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  3 18:40:09 compute-0 nova_compute[348325]: 2025-12-03 18:40:09.387 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Releasing lock "refresh_cache-1ca1fbdb-089c-4544-821e-0542089b8424" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  3 18:40:09 compute-0 nova_compute[348325]: 2025-12-03 18:40:09.387 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
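[Editor's note] The network_info payload nova just wrote to the cache (three lines up) is a list of VIF dicts. The excerpt below trims that exact entry to its addressing fields and walks it to pair the fixed address with its floating IP; it illustrates the logged structure only and is not nova code.

    # Trimmed from the instance_info_cache entry in the log above.
    vif = {
        "address": "fa:16:3e:ea:1b:25",
        "network": {"subnets": [{
            "cidr": "192.168.0.0/24",
            "ips": [{
                "address": "192.168.0.128",
                "type": "fixed",
                "floating_ips": [{"address": "192.168.122.225", "type": "floating"}],
            }],
        }]},
    }

    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            floats = [f["address"] for f in ip.get("floating_ips", [])]
            print(ip["address"], "->", floats)  # 192.168.0.128 -> ['192.168.122.225']
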
Dec  3 18:40:09 compute-0 nova_compute[348325]: 2025-12-03 18:40:09.388 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:40:09 compute-0 nova_compute[348325]: 2025-12-03 18:40:09.388 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:40:09 compute-0 nova_compute[348325]: 2025-12-03 18:40:09.388 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:40:09 compute-0 nova_compute[348325]: 2025-12-03 18:40:09.389 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:40:09 compute-0 nova_compute[348325]: 2025-12-03 18:40:09.389 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:40:09 compute-0 nova_compute[348325]: 2025-12-03 18:40:09.389 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:40:09 compute-0 nova_compute[348325]: 2025-12-03 18:40:09.390 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  3 18:40:11 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1227: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:40:11 compute-0 nova_compute[348325]: 2025-12-03 18:40:11.408 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:40:12 compute-0 nova_compute[348325]: 2025-12-03 18:40:12.485 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:40:12 compute-0 nova_compute[348325]: 2025-12-03 18:40:12.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:40:12 compute-0 nova_compute[348325]: 2025-12-03 18:40:12.536 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:40:12 compute-0 nova_compute[348325]: 2025-12-03 18:40:12.537 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:40:12 compute-0 nova_compute[348325]: 2025-12-03 18:40:12.537 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:40:12 compute-0 nova_compute[348325]: 2025-12-03 18:40:12.537 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  3 18:40:12 compute-0 nova_compute[348325]: 2025-12-03 18:40:12.538 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 18:40:12 compute-0 nova_compute[348325]: 2025-12-03 18:40:12.855 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:40:12 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:40:12 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2212401157' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:40:12 compute-0 nova_compute[348325]: 2025-12-03 18:40:12.994 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
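[Editor's note] update_available_resource shells out to the exact ceph command logged above to size the RBD-backed storage, and the ceph-mon audit lines show the df request arriving as client.openstack. A sketch that re-runs it and reads the cluster totals; the stats keys are standard ceph df --format=json output, but confirm them against your Ceph release.

    import json
    import subprocess

    # Same command nova's resource tracker runs in the log above.
    cmd = ["ceph", "df", "--format=json", "--id", "openstack",
           "--conf", "/etc/ceph/ceph.conf"]
    stats = json.loads(subprocess.check_output(cmd))["stats"]

    gib = 1024 ** 3
    print(f"total {stats['total_bytes'] / gib:.0f} GiB, "
          f"avail {stats['total_avail_bytes'] / gib:.0f} GiB")
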
Dec  3 18:40:13 compute-0 nova_compute[348325]: 2025-12-03 18:40:13.088 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:40:13 compute-0 nova_compute[348325]: 2025-12-03 18:40:13.089 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:40:13 compute-0 nova_compute[348325]: 2025-12-03 18:40:13.089 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:40:13 compute-0 nova_compute[348325]: 2025-12-03 18:40:13.095 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:40:13 compute-0 nova_compute[348325]: 2025-12-03 18:40:13.095 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:40:13 compute-0 nova_compute[348325]: 2025-12-03 18:40:13.096 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:40:13 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1228: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:40:13 compute-0 nova_compute[348325]: 2025-12-03 18:40:13.429 348329 WARNING nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  3 18:40:13 compute-0 nova_compute[348325]: 2025-12-03 18:40:13.430 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3776MB free_disk=59.922000885009766GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  3 18:40:13 compute-0 nova_compute[348325]: 2025-12-03 18:40:13.430 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:40:13 compute-0 nova_compute[348325]: 2025-12-03 18:40:13.431 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:40:13 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:40:13 compute-0 nova_compute[348325]: 2025-12-03 18:40:13.514 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance 1ca1fbdb-089c-4544-821e-0542089b8424 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  3 18:40:13 compute-0 nova_compute[348325]: 2025-12-03 18:40:13.514 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance df72d527-943e-4e8c-b62a-63afa5f18261 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  3 18:40:13 compute-0 nova_compute[348325]: 2025-12-03 18:40:13.515 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  3 18:40:13 compute-0 nova_compute[348325]: 2025-12-03 18:40:13.515 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  3 18:40:13 compute-0 nova_compute[348325]: 2025-12-03 18:40:13.568 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 18:40:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_18:40:13
Dec  3 18:40:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 18:40:13 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 18:40:13 compute-0 ceph-mgr[193091]: [balancer INFO root] pools ['default.rgw.meta', 'images', '.mgr', 'default.rgw.log', 'default.rgw.control', '.rgw.root', 'backups', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes']
Dec  3 18:40:13 compute-0 ceph-mgr[193091]: [balancer INFO root] prepared 0/10 changes
Dec  3 18:40:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:40:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:40:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:40:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:40:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:40:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:40:14 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:40:14 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2393579942' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:40:14 compute-0 nova_compute[348325]: 2025-12-03 18:40:14.048 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 18:40:14 compute-0 nova_compute[348325]: 2025-12-03 18:40:14.057 348329 DEBUG nova.compute.provider_tree [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  3 18:40:14 compute-0 nova_compute[348325]: 2025-12-03 18:40:14.074 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  3 18:40:14 compute-0 nova_compute[348325]: 2025-12-03 18:40:14.076 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  3 18:40:14 compute-0 nova_compute[348325]: 2025-12-03 18:40:14.076 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.646s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
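[Editor's note] Placement derives schedulable capacity from the inventory nova just reported as (total - reserved) * allocation_ratio per resource class, which is how 8 physical vCPUs become 32 schedulable ones. Quick arithmetic with the logged values:

    # Inventory data copied from the scheduler report client line above.
    inventory = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, f"{capacity:g}")  # MEMORY_MB 7167, VCPU 32, DISK_GB 52.2
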
Dec  3 18:40:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 18:40:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:40:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 18:40:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:40:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:40:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:40:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:40:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:40:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:40:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:40:15 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1229: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:40:16 compute-0 podman[416883]: 2025-12-03 18:40:16.937986754 +0000 UTC m=+0.092070841 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, name=ubi9-minimal, vcs-type=git, build-date=2025-08-20T13:12:41, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=Red Hat, Inc.)
Dec  3 18:40:16 compute-0 podman[416882]: 2025-12-03 18:40:16.9619714 +0000 UTC m=+0.108617384 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  3 18:40:16 compute-0 podman[416881]: 2025-12-03 18:40:16.97918396 +0000 UTC m=+0.130612862 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd)
Dec  3 18:40:17 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1230: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:40:17 compute-0 nova_compute[348325]: 2025-12-03 18:40:17.487 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:40:17 compute-0 nova_compute[348325]: 2025-12-03 18:40:17.859 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:40:18 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:40:18 compute-0 podman[416941]: 2025-12-03 18:40:18.956727776 +0000 UTC m=+0.122594287 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec  3 18:40:19 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1231: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:40:19 compute-0 podman[416960]: 2025-12-03 18:40:19.90077785 +0000 UTC m=+0.066896915 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, io.openshift.expose-services=, managed_by=edpm_ansible, config_id=edpm, version=9.4, distribution-scope=public, release-0.7.12=, architecture=x86_64, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.openshift.tags=base rhel9)
Dec  3 18:40:19 compute-0 podman[416961]: 2025-12-03 18:40:19.93759652 +0000 UTC m=+0.098318423 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
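The three health_status records above are podman healthcheck events, and the config_data label embedded in each one is the Python-dict-shaped container definition that edpm_ansible used to launch the container. A minimal sketch of how such a dict could map onto podman flags follows; the flag mapping and the helper name podman_argv are assumptions for illustration, not the real edpm_ansible role:

    # Sketch only: translate one 'config_data' dict (as logged above) into
    # a plausible `podman run` argv. The mapping is assumed, not taken from
    # the actual edpm_ansible implementation.
    def podman_argv(name, cfg):
        argv = ["podman", "run", "--detach", "--name", name]
        if cfg.get("privileged"):
            argv.append("--privileged")
        if cfg.get("net") == "host":
            argv.append("--net=host")
        if cfg.get("pid") == "host":
            argv.append("--pid=host")
        if cfg.get("user"):
            argv += ["--user", cfg["user"]]
        if cfg.get("restart"):
            argv += ["--restart", cfg["restart"]]
        for key, val in cfg.get("environment", {}).items():
            argv += ["--env", f"{key}={val}"]
        for vol in cfg.get("volumes", []):
            argv += ["--volume", vol]
        hc = cfg.get("healthcheck")
        if hc:
            argv += ["--health-cmd", hc["test"]]
        argv.append(cfg["image"])
        return argv

The healthcheck 'test' command is what podman executes on a timer; a zero exit status is what produces health_status=healthy with health_failing_streak=0 in the events above.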
Dec  3 18:40:21 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1232: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:40:22 compute-0 nova_compute[348325]: 2025-12-03 18:40:22.489 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:40:22 compute-0 nova_compute[348325]: 2025-12-03 18:40:22.862 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:40:23 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1233: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:40:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:40:23.336 286999 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:40:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:40:23.337 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:40:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:40:23.338 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:40:23 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:40:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 18:40:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:40:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 18:40:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:40:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011048885483818454 of space, bias 1.0, pg target 0.33146656451455364 quantized to 32 (current 32)
Dec  3 18:40:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:40:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:40:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:40:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:40:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:40:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec  3 18:40:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:40:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 18:40:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:40:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:40:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:40:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 18:40:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:40:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 18:40:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:40:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:40:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:40:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
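Each _maybe_adjust pass above prints, per pool, its share of raw capacity, its bias, and a raw "pg target" that is then quantized. The raw figures are reproducible from the logged ratios: with the 3 OSDs on this node and the default mon_target_pg_per_osd of 100, target = ratio x bias x (100 x 3); e.g. 0.0011048885... x 1.0 x 300 = 0.33146656..., the 'vms' value above. A small sketch of that arithmetic (the real module additionally applies pg_num_min and bounded halving steps, which is why the near-zero targets stay at 32):

    import math

    def raw_pg_target(usage_ratio, bias, num_osds, target_pg_per_osd=100):
        # Reproduces the "pg target" figures printed by pg_autoscaler above.
        return usage_ratio * bias * target_pg_per_osd * num_osds

    def quantize(raw):
        # Nearest power of two with a floor of 1. The real module also
        # honors pg_num_min and limits how fast pg_num may shrink, so
        # tiny pools above remain "quantized to 32" rather than 1.
        return 1 if raw <= 1 else 2 ** round(math.log2(raw))

    # Pool 'vms' from the log: ratio 0.0011048885..., bias 1.0, 3 OSDs.
    print(raw_pg_target(0.0011048885483818454, 1.0, 3))  # ~0.3314665645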
Dec  3 18:40:25 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1234: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:40:27 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1235: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:40:27 compute-0 nova_compute[348325]: 2025-12-03 18:40:27.492 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:40:27 compute-0 nova_compute[348325]: 2025-12-03 18:40:27.864 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:40:28 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:40:28 compute-0 podman[416995]: 2025-12-03 18:40:28.942762252 +0000 UTC m=+0.096204621 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  3 18:40:29 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1236: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:40:29 compute-0 podman[158200]: time="2025-12-03T18:40:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:40:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:40:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 18:40:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:40:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8634 "" "Go-http-client/1.1"
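The two GET requests above are the libpod REST API being queried over the podman socket; the podman_exporter container created at 18:40:28 mounts /run/podman/podman.sock and points CONTAINER_HOST at it, which matches these calls. The same listing can be reproduced over the unix socket with a few lines of Python, with the socket path and API version taken verbatim from the log (root access to the socket assumed):

    import http.client, json, socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client over an AF_UNIX socket (podman's libpod API)."""
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path
        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self._path)
            self.sock = s

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c.get("Names"), c.get("State"))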
Dec  3 18:40:31 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1237: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:40:31 compute-0 openstack_network_exporter[365222]: ERROR   18:40:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:40:31 compute-0 openstack_network_exporter[365222]: ERROR   18:40:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:40:31 compute-0 openstack_network_exporter[365222]: ERROR   18:40:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:40:31 compute-0 openstack_network_exporter[365222]: ERROR   18:40:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:40:31 compute-0 openstack_network_exporter[365222]: ERROR   18:40:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
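These exporter errors recur throughout the log and are benign on this node: openstack_network_exporter probes the daemons through their appctl control sockets, ovn-northd does not run on a compute node at all, and the dpif-netdev/* commands only apply to a userspace (DPDK) datapath. A quick check of which control sockets actually exist, assuming the usual default socket locations (containerized deployments may place them elsewhere):

    import glob

    # Usual control-socket locations; adjust for containerized layouts.
    for pattern in ("/var/run/openvswitch/ovsdb-server.*.ctl",
                    "/var/run/openvswitch/ovs-vswitchd.*.ctl",
                    "/var/run/ovn/ovn-northd.*.ctl"):
        print(pattern, "->", glob.glob(pattern) or "missing")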
Dec  3 18:40:32 compute-0 nova_compute[348325]: 2025-12-03 18:40:32.494 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:40:32 compute-0 nova_compute[348325]: 2025-12-03 18:40:32.867 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:40:32 compute-0 podman[417019]: 2025-12-03 18:40:32.920468645 +0000 UTC m=+0.082366373 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Dec  3 18:40:32 compute-0 podman[417018]: 2025-12-03 18:40:32.964970813 +0000 UTC m=+0.134120968 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  3 18:40:33 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1238: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:40:33 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:40:34 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:40:34 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:40:34 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 18:40:34 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:40:34 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 18:40:34 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:40:34 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev fb407a96-844b-4531-8bf8-a1860942cc32 does not exist
Dec  3 18:40:34 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 9cfcde82-e5ea-4536-a6c4-8d917b444151 does not exist
Dec  3 18:40:34 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 37991a6d-21b8-4d86-8f40-a01bdb9a53e7 does not exist
Dec  3 18:40:34 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 18:40:34 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 18:40:34 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 18:40:34 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:40:34 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:40:34 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:40:35 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1239: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:40:35 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:40:35 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:40:35 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
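The handle_command/audit pairs above are the cephadm mgr module (entity mgr.compute-0.etccde) driving the monitor: it regenerates the minimal ceph.conf, fetches the client.admin and client.bootstrap-osd keyrings, checks the OSD tree for destroyed OSDs, and persists its removal queue under the mgr/cephadm/osd_remove_queue config-key. The same mon commands can be issued by hand with the ceph CLI, assuming an admin keyring on the host:

    import json, subprocess

    def ceph(*args):
        # Thin wrapper over the ceph CLI; assumes /etc/ceph/ceph.conf and
        # an admin keyring are readable by the caller.
        return subprocess.run(("ceph",) + args, capture_output=True,
                              text=True, check=True).stdout

    print(ceph("config", "generate-minimal-conf"))
    destroyed = json.loads(ceph("osd", "tree", "destroyed", "--format", "json"))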
Dec  3 18:40:35 compute-0 podman[417336]: 2025-12-03 18:40:35.375555857 +0000 UTC m=+0.047006289 container create 60705f192b38ed49a2b9614ae546fec4bfc9170b517e44fb5e0da12cf5676607 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_murdock, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec  3 18:40:35 compute-0 systemd[1]: Started libpod-conmon-60705f192b38ed49a2b9614ae546fec4bfc9170b517e44fb5e0da12cf5676607.scope.
Dec  3 18:40:35 compute-0 podman[417336]: 2025-12-03 18:40:35.35806538 +0000 UTC m=+0.029515832 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:40:35 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:40:35 compute-0 podman[417336]: 2025-12-03 18:40:35.480989693 +0000 UTC m=+0.152440135 container init 60705f192b38ed49a2b9614ae546fec4bfc9170b517e44fb5e0da12cf5676607 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_murdock, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Dec  3 18:40:35 compute-0 podman[417336]: 2025-12-03 18:40:35.489006159 +0000 UTC m=+0.160456591 container start 60705f192b38ed49a2b9614ae546fec4bfc9170b517e44fb5e0da12cf5676607 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_murdock, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:40:35 compute-0 podman[417336]: 2025-12-03 18:40:35.493479919 +0000 UTC m=+0.164930371 container attach 60705f192b38ed49a2b9614ae546fec4bfc9170b517e44fb5e0da12cf5676607 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_murdock, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  3 18:40:35 compute-0 epic_murdock[417352]: 167 167
Dec  3 18:40:35 compute-0 systemd[1]: libpod-60705f192b38ed49a2b9614ae546fec4bfc9170b517e44fb5e0da12cf5676607.scope: Deactivated successfully.
Dec  3 18:40:35 compute-0 conmon[417352]: conmon 60705f192b38ed49a2b9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-60705f192b38ed49a2b9614ae546fec4bfc9170b517e44fb5e0da12cf5676607.scope/container/memory.events
Dec  3 18:40:35 compute-0 podman[417336]: 2025-12-03 18:40:35.496947083 +0000 UTC m=+0.168397515 container died 60705f192b38ed49a2b9614ae546fec4bfc9170b517e44fb5e0da12cf5676607 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  3 18:40:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a5e8da02851e196eb600540a65ffbf9874e6729b6d5a90f4659541a00921283-merged.mount: Deactivated successfully.
Dec  3 18:40:35 compute-0 podman[417336]: 2025-12-03 18:40:35.549415865 +0000 UTC m=+0.220866297 container remove 60705f192b38ed49a2b9614ae546fec4bfc9170b517e44fb5e0da12cf5676607 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec  3 18:40:35 compute-0 systemd[1]: libpod-conmon-60705f192b38ed49a2b9614ae546fec4bfc9170b517e44fb5e0da12cf5676607.scope: Deactivated successfully.
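The short-lived epic_murdock container (create, start, attach, died, remove within about 0.2 s) is cephadm probing the ceph image, and its only output, "167 167", is the uid/gid of the ceph user baked into the image. A sketch of the same probe follows; that cephadm stats /var/lib/ceph for this is an assumption, but the expected output matches the log:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # Ask the image which uid/gid owns /var/lib/ceph (expected: "167 167").
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True)
    print(out.stdout.strip())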
Dec  3 18:40:36 compute-0 podman[417375]: 2025-12-03 18:40:35.748818937 +0000 UTC m=+0.050728040 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:40:36 compute-0 podman[417375]: 2025-12-03 18:40:36.018906116 +0000 UTC m=+0.320815239 container create 7a0ad40d0b5e47c22d67384c8a95a89cdbad08fb6257dda56645fdab9928fb8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_kirch, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:40:36 compute-0 systemd[1]: Started libpod-conmon-7a0ad40d0b5e47c22d67384c8a95a89cdbad08fb6257dda56645fdab9928fb8c.scope.
Dec  3 18:40:36 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:40:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/936a95e84412947bdaeb742e557d08678db4f46b617fdb81199ad7e5125c594c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:40:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/936a95e84412947bdaeb742e557d08678db4f46b617fdb81199ad7e5125c594c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:40:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/936a95e84412947bdaeb742e557d08678db4f46b617fdb81199ad7e5125c594c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:40:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/936a95e84412947bdaeb742e557d08678db4f46b617fdb81199ad7e5125c594c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:40:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/936a95e84412947bdaeb742e557d08678db4f46b617fdb81199ad7e5125c594c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 18:40:36 compute-0 podman[417375]: 2025-12-03 18:40:36.17054488 +0000 UTC m=+0.472453983 container init 7a0ad40d0b5e47c22d67384c8a95a89cdbad08fb6257dda56645fdab9928fb8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Dec  3 18:40:36 compute-0 podman[417375]: 2025-12-03 18:40:36.180562705 +0000 UTC m=+0.482471788 container start 7a0ad40d0b5e47c22d67384c8a95a89cdbad08fb6257dda56645fdab9928fb8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_kirch, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:40:36 compute-0 podman[417375]: 2025-12-03 18:40:36.184592963 +0000 UTC m=+0.486502076 container attach 7a0ad40d0b5e47c22d67384c8a95a89cdbad08fb6257dda56645fdab9928fb8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_kirch, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  3 18:40:37 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1240: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:40:37 compute-0 inspiring_kirch[417390]: --> passed data devices: 0 physical, 3 LVM
Dec  3 18:40:37 compute-0 inspiring_kirch[417390]: --> relative data size: 1.0
Dec  3 18:40:37 compute-0 inspiring_kirch[417390]: --> All data devices are unavailable
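inspiring_kirch is another cephadm helper, and its output has the shape of a ceph-volume "lvm batch" report run against the three pre-built logical volumes: "All data devices are unavailable" means every candidate LV is already consumed by an existing OSD, so there is nothing new to deploy, which is expected on a node whose OSDs were created earlier. A hand-run equivalent, with the LV paths taken from the inventory printed further below and the exact ceph-volume flags being an assumption:

    import subprocess

    lvs = ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1",
           "/dev/ceph_vg2/ceph_lv2"]
    # --report only prints what would be done; nothing is created.
    subprocess.run(["ceph-volume", "lvm", "batch", "--report", *lvs],
                   check=True)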
Dec  3 18:40:37 compute-0 systemd[1]: libpod-7a0ad40d0b5e47c22d67384c8a95a89cdbad08fb6257dda56645fdab9928fb8c.scope: Deactivated successfully.
Dec  3 18:40:37 compute-0 systemd[1]: libpod-7a0ad40d0b5e47c22d67384c8a95a89cdbad08fb6257dda56645fdab9928fb8c.scope: Consumed 1.114s CPU time.
Dec  3 18:40:37 compute-0 podman[417419]: 2025-12-03 18:40:37.398389258 +0000 UTC m=+0.036135804 container died 7a0ad40d0b5e47c22d67384c8a95a89cdbad08fb6257dda56645fdab9928fb8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_kirch, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  3 18:40:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-936a95e84412947bdaeb742e557d08678db4f46b617fdb81199ad7e5125c594c-merged.mount: Deactivated successfully.
Dec  3 18:40:37 compute-0 nova_compute[348325]: 2025-12-03 18:40:37.497 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:40:37 compute-0 podman[417419]: 2025-12-03 18:40:37.54826917 +0000 UTC m=+0.186015636 container remove 7a0ad40d0b5e47c22d67384c8a95a89cdbad08fb6257dda56645fdab9928fb8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_kirch, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec  3 18:40:37 compute-0 systemd[1]: libpod-conmon-7a0ad40d0b5e47c22d67384c8a95a89cdbad08fb6257dda56645fdab9928fb8c.scope: Deactivated successfully.
Dec  3 18:40:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 18:40:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2737782911' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 18:40:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 18:40:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2737782911' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 18:40:37 compute-0 nova_compute[348325]: 2025-12-03 18:40:37.869 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:40:38 compute-0 podman[417574]: 2025-12-03 18:40:38.329886616 +0000 UTC m=+0.087552410 container create 9b04a1ea62c45df34ce9073acdb11914f8235437480a9bf7dbede0a0b1c0e628 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  3 18:40:38 compute-0 podman[417574]: 2025-12-03 18:40:38.290757161 +0000 UTC m=+0.048422975 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:40:38 compute-0 systemd[1]: Started libpod-conmon-9b04a1ea62c45df34ce9073acdb11914f8235437480a9bf7dbede0a0b1c0e628.scope.
Dec  3 18:40:38 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:40:38 compute-0 podman[417574]: 2025-12-03 18:40:38.48886957 +0000 UTC m=+0.246535384 container init 9b04a1ea62c45df34ce9073acdb11914f8235437480a9bf7dbede0a0b1c0e628 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:40:38 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:40:38 compute-0 podman[417574]: 2025-12-03 18:40:38.49702225 +0000 UTC m=+0.254688044 container start 9b04a1ea62c45df34ce9073acdb11914f8235437480a9bf7dbede0a0b1c0e628 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:40:38 compute-0 loving_matsumoto[417589]: 167 167
Dec  3 18:40:38 compute-0 systemd[1]: libpod-9b04a1ea62c45df34ce9073acdb11914f8235437480a9bf7dbede0a0b1c0e628.scope: Deactivated successfully.
Dec  3 18:40:38 compute-0 podman[417574]: 2025-12-03 18:40:38.522273177 +0000 UTC m=+0.279939051 container attach 9b04a1ea62c45df34ce9073acdb11914f8235437480a9bf7dbede0a0b1c0e628 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_matsumoto, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:40:38 compute-0 podman[417574]: 2025-12-03 18:40:38.522988594 +0000 UTC m=+0.280654388 container died 9b04a1ea62c45df34ce9073acdb11914f8235437480a9bf7dbede0a0b1c0e628 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_matsumoto, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  3 18:40:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3bbf44913f5cf144d729340002038e294a6e792f6a5b394eabce96d58b4b851-merged.mount: Deactivated successfully.
Dec  3 18:40:38 compute-0 podman[417574]: 2025-12-03 18:40:38.670332784 +0000 UTC m=+0.427998578 container remove 9b04a1ea62c45df34ce9073acdb11914f8235437480a9bf7dbede0a0b1c0e628 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:40:38 compute-0 systemd[1]: libpod-conmon-9b04a1ea62c45df34ce9073acdb11914f8235437480a9bf7dbede0a0b1c0e628.scope: Deactivated successfully.
Dec  3 18:40:38 compute-0 podman[417614]: 2025-12-03 18:40:38.920174138 +0000 UTC m=+0.057824823 container create 3370b8ab4fc79a3f7ef2b109f2fe23f27ccb07f0f6690ce68097064020f72280 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_gould, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:40:38 compute-0 podman[417614]: 2025-12-03 18:40:38.894108931 +0000 UTC m=+0.031759656 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:40:39 compute-0 systemd[1]: Started libpod-conmon-3370b8ab4fc79a3f7ef2b109f2fe23f27ccb07f0f6690ce68097064020f72280.scope.
Dec  3 18:40:39 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:40:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd2feb4a6b042411a299f3ef22d855274a1ec9e595d995a26d6c1ee8fe496703/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:40:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd2feb4a6b042411a299f3ef22d855274a1ec9e595d995a26d6c1ee8fe496703/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:40:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd2feb4a6b042411a299f3ef22d855274a1ec9e595d995a26d6c1ee8fe496703/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:40:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd2feb4a6b042411a299f3ef22d855274a1ec9e595d995a26d6c1ee8fe496703/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:40:39 compute-0 podman[417614]: 2025-12-03 18:40:39.079219334 +0000 UTC m=+0.216870019 container init 3370b8ab4fc79a3f7ef2b109f2fe23f27ccb07f0f6690ce68097064020f72280 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_gould, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:40:39 compute-0 podman[417614]: 2025-12-03 18:40:39.09337075 +0000 UTC m=+0.231021425 container start 3370b8ab4fc79a3f7ef2b109f2fe23f27ccb07f0f6690ce68097064020f72280 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_gould, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  3 18:40:39 compute-0 podman[417614]: 2025-12-03 18:40:39.114252 +0000 UTC m=+0.251902675 container attach 3370b8ab4fc79a3f7ef2b109f2fe23f27ccb07f0f6690ce68097064020f72280 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_gould, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:40:39 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1241: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:40:39 compute-0 distracted_gould[417630]: {
Dec  3 18:40:39 compute-0 distracted_gould[417630]:    "0": [
Dec  3 18:40:39 compute-0 distracted_gould[417630]:        {
Dec  3 18:40:39 compute-0 distracted_gould[417630]:            "devices": [
Dec  3 18:40:39 compute-0 distracted_gould[417630]:                "/dev/loop3"
Dec  3 18:40:39 compute-0 distracted_gould[417630]:            ],
Dec  3 18:40:39 compute-0 distracted_gould[417630]:            "lv_name": "ceph_lv0",
Dec  3 18:40:39 compute-0 distracted_gould[417630]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:40:39 compute-0 distracted_gould[417630]:            "lv_size": "21470642176",
Dec  3 18:40:39 compute-0 distracted_gould[417630]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=973fbbc8-5aff-4a53-bee8-42e5a6788dd6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:40:39 compute-0 distracted_gould[417630]:            "lv_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:40:39 compute-0 distracted_gould[417630]:            "name": "ceph_lv0",
Dec  3 18:40:39 compute-0 distracted_gould[417630]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:40:39 compute-0 distracted_gould[417630]:            "tags": {
Dec  3 18:40:39 compute-0 distracted_gould[417630]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:40:39 compute-0 distracted_gould[417630]:                "ceph.block_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:40:39 compute-0 distracted_gould[417630]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:40:39 compute-0 distracted_gould[417630]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:40:39 compute-0 distracted_gould[417630]:                "ceph.cluster_name": "ceph",
Dec  3 18:40:39 compute-0 distracted_gould[417630]:                "ceph.crush_device_class": "",
Dec  3 18:40:39 compute-0 distracted_gould[417630]:                "ceph.encrypted": "0",
Dec  3 18:40:39 compute-0 distracted_gould[417630]:                "ceph.osd_fsid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:40:39 compute-0 distracted_gould[417630]:                "ceph.osd_id": "0",
Dec  3 18:40:39 compute-0 distracted_gould[417630]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:40:39 compute-0 distracted_gould[417630]:                "ceph.type": "block",
Dec  3 18:40:39 compute-0 distracted_gould[417630]:                "ceph.vdo": "0"
Dec  3 18:40:39 compute-0 distracted_gould[417630]:            },
Dec  3 18:40:39 compute-0 distracted_gould[417630]:            "type": "block",
Dec  3 18:40:39 compute-0 distracted_gould[417630]:            "vg_name": "ceph_vg0"
Dec  3 18:40:39 compute-0 distracted_gould[417630]:        }
Dec  3 18:40:39 compute-0 distracted_gould[417630]:    ],
Dec  3 18:40:39 compute-0 distracted_gould[417630]:    "1": [
Dec  3 18:40:39 compute-0 distracted_gould[417630]:        {
Dec  3 18:40:39 compute-0 distracted_gould[417630]:            "devices": [
Dec  3 18:40:39 compute-0 distracted_gould[417630]:                "/dev/loop4"
Dec  3 18:40:39 compute-0 distracted_gould[417630]:            ],
Dec  3 18:40:39 compute-0 distracted_gould[417630]:            "lv_name": "ceph_lv1",
Dec  3 18:40:39 compute-0 distracted_gould[417630]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:40:39 compute-0 distracted_gould[417630]:            "lv_size": "21470642176",
Dec  3 18:40:39 compute-0 distracted_gould[417630]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1e2b0083-5293-47cb-a3d1-bc27cedc4ede,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:40:39 compute-0 distracted_gould[417630]:            "lv_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:40:39 compute-0 distracted_gould[417630]:            "name": "ceph_lv1",
Dec  3 18:40:39 compute-0 distracted_gould[417630]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:40:39 compute-0 distracted_gould[417630]:            "tags": {
Dec  3 18:40:39 compute-0 distracted_gould[417630]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:40:39 compute-0 distracted_gould[417630]:                "ceph.block_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:40:39 compute-0 distracted_gould[417630]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:40:39 compute-0 distracted_gould[417630]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:40:39 compute-0 distracted_gould[417630]:                "ceph.cluster_name": "ceph",
Dec  3 18:40:39 compute-0 distracted_gould[417630]:                "ceph.crush_device_class": "",
Dec  3 18:40:39 compute-0 distracted_gould[417630]:                "ceph.encrypted": "0",
Dec  3 18:40:39 compute-0 distracted_gould[417630]:                "ceph.osd_fsid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:40:39 compute-0 distracted_gould[417630]:                "ceph.osd_id": "1",
Dec  3 18:40:39 compute-0 distracted_gould[417630]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:40:39 compute-0 distracted_gould[417630]:                "ceph.type": "block",
Dec  3 18:40:39 compute-0 distracted_gould[417630]:                "ceph.vdo": "0"
Dec  3 18:40:39 compute-0 distracted_gould[417630]:            },
Dec  3 18:40:39 compute-0 distracted_gould[417630]:            "type": "block",
Dec  3 18:40:39 compute-0 distracted_gould[417630]:            "vg_name": "ceph_vg1"
Dec  3 18:40:39 compute-0 distracted_gould[417630]:        }
Dec  3 18:40:39 compute-0 distracted_gould[417630]:    ],
Dec  3 18:40:39 compute-0 distracted_gould[417630]:    "2": [
Dec  3 18:40:39 compute-0 distracted_gould[417630]:        {
Dec  3 18:40:39 compute-0 distracted_gould[417630]:            "devices": [
Dec  3 18:40:39 compute-0 distracted_gould[417630]:                "/dev/loop5"
Dec  3 18:40:39 compute-0 distracted_gould[417630]:            ],
Dec  3 18:40:39 compute-0 distracted_gould[417630]:            "lv_name": "ceph_lv2",
Dec  3 18:40:39 compute-0 distracted_gould[417630]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:40:39 compute-0 distracted_gould[417630]:            "lv_size": "21470642176",
Dec  3 18:40:39 compute-0 distracted_gould[417630]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2abec9de-afba-437e-9a17-384a1dd8cd50,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:40:39 compute-0 distracted_gould[417630]:            "lv_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:40:39 compute-0 distracted_gould[417630]:            "name": "ceph_lv2",
Dec  3 18:40:39 compute-0 distracted_gould[417630]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:40:39 compute-0 distracted_gould[417630]:            "tags": {
Dec  3 18:40:39 compute-0 distracted_gould[417630]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:40:39 compute-0 distracted_gould[417630]:                "ceph.block_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:40:39 compute-0 distracted_gould[417630]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:40:39 compute-0 distracted_gould[417630]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:40:39 compute-0 distracted_gould[417630]:                "ceph.cluster_name": "ceph",
Dec  3 18:40:39 compute-0 distracted_gould[417630]:                "ceph.crush_device_class": "",
Dec  3 18:40:39 compute-0 distracted_gould[417630]:                "ceph.encrypted": "0",
Dec  3 18:40:39 compute-0 distracted_gould[417630]:                "ceph.osd_fsid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:40:39 compute-0 distracted_gould[417630]:                "ceph.osd_id": "2",
Dec  3 18:40:39 compute-0 distracted_gould[417630]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:40:39 compute-0 distracted_gould[417630]:                "ceph.type": "block",
Dec  3 18:40:39 compute-0 distracted_gould[417630]:                "ceph.vdo": "0"
Dec  3 18:40:39 compute-0 distracted_gould[417630]:            },
Dec  3 18:40:39 compute-0 distracted_gould[417630]:            "type": "block",
Dec  3 18:40:39 compute-0 distracted_gould[417630]:            "vg_name": "ceph_vg2"
Dec  3 18:40:39 compute-0 distracted_gould[417630]:        }
Dec  3 18:40:39 compute-0 distracted_gould[417630]:    ]
Dec  3 18:40:39 compute-0 distracted_gould[417630]: }
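The JSON emitted by the `distracted_gould` container above is keyed by OSD id and carries the `ceph.*` LVM tags that ceph-volume uses to rediscover OSD data devices; the format matches `ceph-volume lvm list --format json`, though the exact command line is not shown in the log. A minimal Python sketch for summarizing such a dump, assuming it has been saved to a hypothetical file named lvm_list.json:

    import json

    # lvm_list.json is a hypothetical capture of the JSON block logged above:
    # a dict keyed by OSD id, each value a list of LV records with ceph.* tags.
    with open("lvm_list.json") as f:
        osds = json.load(f)

    for osd_id, lvs in sorted(osds.items()):
        for lv in lvs:
            tags = lv.get("tags", {})
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"(vg={lv['vg_name']}, osd_fsid={tags.get('ceph.osd_fsid')}, "
                  f"encrypted={tags.get('ceph.encrypted')})")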
Dec  3 18:40:39 compute-0 systemd[1]: libpod-3370b8ab4fc79a3f7ef2b109f2fe23f27ccb07f0f6690ce68097064020f72280.scope: Deactivated successfully.
Dec  3 18:40:39 compute-0 podman[417614]: 2025-12-03 18:40:39.886966379 +0000 UTC m=+1.024617054 container died 3370b8ab4fc79a3f7ef2b109f2fe23f27ccb07f0f6690ce68097064020f72280 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_gould, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:40:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd2feb4a6b042411a299f3ef22d855274a1ec9e595d995a26d6c1ee8fe496703-merged.mount: Deactivated successfully.
Dec  3 18:40:40 compute-0 podman[417614]: 2025-12-03 18:40:40.009699677 +0000 UTC m=+1.147350362 container remove 3370b8ab4fc79a3f7ef2b109f2fe23f27ccb07f0f6690ce68097064020f72280 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_gould, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:40:40 compute-0 systemd[1]: libpod-conmon-3370b8ab4fc79a3f7ef2b109f2fe23f27ccb07f0f6690ce68097064020f72280.scope: Deactivated successfully.
Dec  3 18:40:40 compute-0 podman[417789]: 2025-12-03 18:40:40.773593221 +0000 UTC m=+0.055713373 container create 3de1eecebb594d15d784aad3f2819bec1eb993e8a3c4f3788c1d395c09440831 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_gates, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:40:40 compute-0 systemd[1]: Started libpod-conmon-3de1eecebb594d15d784aad3f2819bec1eb993e8a3c4f3788c1d395c09440831.scope.
Dec  3 18:40:40 compute-0 podman[417789]: 2025-12-03 18:40:40.748925798 +0000 UTC m=+0.031045970 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:40:40 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:40:40 compute-0 podman[417789]: 2025-12-03 18:40:40.870803635 +0000 UTC m=+0.152923817 container init 3de1eecebb594d15d784aad3f2819bec1eb993e8a3c4f3788c1d395c09440831 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_gates, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec  3 18:40:40 compute-0 podman[417789]: 2025-12-03 18:40:40.883834734 +0000 UTC m=+0.165954886 container start 3de1eecebb594d15d784aad3f2819bec1eb993e8a3c4f3788c1d395c09440831 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_gates, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:40:40 compute-0 systemd[1]: libpod-3de1eecebb594d15d784aad3f2819bec1eb993e8a3c4f3788c1d395c09440831.scope: Deactivated successfully.
Dec  3 18:40:40 compute-0 blissful_gates[417805]: 167 167
Dec  3 18:40:40 compute-0 podman[417789]: 2025-12-03 18:40:40.889105932 +0000 UTC m=+0.171226114 container attach 3de1eecebb594d15d784aad3f2819bec1eb993e8a3c4f3788c1d395c09440831 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_gates, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  3 18:40:40 compute-0 conmon[417805]: conmon 3de1eecebb594d15d784 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3de1eecebb594d15d784aad3f2819bec1eb993e8a3c4f3788c1d395c09440831.scope/container/memory.events
Dec  3 18:40:40 compute-0 podman[417789]: 2025-12-03 18:40:40.891177703 +0000 UTC m=+0.173297855 container died 3de1eecebb594d15d784aad3f2819bec1eb993e8a3c4f3788c1d395c09440831 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_gates, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:40:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-693b3014eb4874ab487577329e141078a4412ec7ad940b60cc58759e5d3e4455-merged.mount: Deactivated successfully.
Dec  3 18:40:41 compute-0 podman[417789]: 2025-12-03 18:40:41.057814755 +0000 UTC m=+0.339934907 container remove 3de1eecebb594d15d784aad3f2819bec1eb993e8a3c4f3788c1d395c09440831 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_gates, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:40:41 compute-0 systemd[1]: libpod-conmon-3de1eecebb594d15d784aad3f2819bec1eb993e8a3c4f3788c1d395c09440831.scope: Deactivated successfully.
Dec  3 18:40:41 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1242: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:40:41 compute-0 podman[417828]: 2025-12-03 18:40:41.322927842 +0000 UTC m=+0.083671966 container create b895b6f720a99b08e553130255e3fc204f1f5575bad9e52a52d93fa8bcdfc65e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_sanderson, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  3 18:40:41 compute-0 podman[417828]: 2025-12-03 18:40:41.282051943 +0000 UTC m=+0.042796087 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:40:41 compute-0 systemd[1]: Started libpod-conmon-b895b6f720a99b08e553130255e3fc204f1f5575bad9e52a52d93fa8bcdfc65e.scope.
Dec  3 18:40:41 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:40:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae79b82590c691cf6bcd367d1813167b31b1699a6945ec13e0ff41102753e7da/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:40:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae79b82590c691cf6bcd367d1813167b31b1699a6945ec13e0ff41102753e7da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:40:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae79b82590c691cf6bcd367d1813167b31b1699a6945ec13e0ff41102753e7da/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:40:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae79b82590c691cf6bcd367d1813167b31b1699a6945ec13e0ff41102753e7da/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:40:41 compute-0 podman[417828]: 2025-12-03 18:40:41.55207108 +0000 UTC m=+0.312815224 container init b895b6f720a99b08e553130255e3fc204f1f5575bad9e52a52d93fa8bcdfc65e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:40:41 compute-0 podman[417828]: 2025-12-03 18:40:41.565708294 +0000 UTC m=+0.326452448 container start b895b6f720a99b08e553130255e3fc204f1f5575bad9e52a52d93fa8bcdfc65e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:40:41 compute-0 podman[417828]: 2025-12-03 18:40:41.572070469 +0000 UTC m=+0.332814633 container attach b895b6f720a99b08e553130255e3fc204f1f5575bad9e52a52d93fa8bcdfc65e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  3 18:40:42 compute-0 nova_compute[348325]: 2025-12-03 18:40:42.499 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:40:42 compute-0 nifty_sanderson[417845]: {
Dec  3 18:40:42 compute-0 nifty_sanderson[417845]:    "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
Dec  3 18:40:42 compute-0 nifty_sanderson[417845]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:40:42 compute-0 nifty_sanderson[417845]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 18:40:42 compute-0 nifty_sanderson[417845]:        "osd_id": 1,
Dec  3 18:40:42 compute-0 nifty_sanderson[417845]:        "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:40:42 compute-0 nifty_sanderson[417845]:        "type": "bluestore"
Dec  3 18:40:42 compute-0 nifty_sanderson[417845]:    },
Dec  3 18:40:42 compute-0 nifty_sanderson[417845]:    "2abec9de-afba-437e-9a17-384a1dd8cd50": {
Dec  3 18:40:42 compute-0 nifty_sanderson[417845]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:40:42 compute-0 nifty_sanderson[417845]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 18:40:42 compute-0 nifty_sanderson[417845]:        "osd_id": 2,
Dec  3 18:40:42 compute-0 nifty_sanderson[417845]:        "osd_uuid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:40:42 compute-0 nifty_sanderson[417845]:        "type": "bluestore"
Dec  3 18:40:42 compute-0 nifty_sanderson[417845]:    },
Dec  3 18:40:42 compute-0 nifty_sanderson[417845]:    "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": {
Dec  3 18:40:42 compute-0 nifty_sanderson[417845]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:40:42 compute-0 nifty_sanderson[417845]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 18:40:42 compute-0 nifty_sanderson[417845]:        "osd_id": 0,
Dec  3 18:40:42 compute-0 nifty_sanderson[417845]:        "osd_uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:40:42 compute-0 nifty_sanderson[417845]:        "type": "bluestore"
Dec  3 18:40:42 compute-0 nifty_sanderson[417845]:    }
Dec  3 18:40:42 compute-0 nifty_sanderson[417845]: }
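The second listing, from the `nifty_sanderson` container, is keyed by OSD uuid instead and reports the device-mapper path and `bluestore` type for each OSD (the shape resembles `ceph-volume raw list` output, though again the command itself is not logged). The uuids here line up with the `ceph.osd_fsid` tags in the LVM listing above; a minimal cross-check sketch, assuming both dumps were saved to the hypothetical files named below:

    import json

    # Hypothetical captures of the two JSON blocks logged above.
    with open("lvm_list.json") as f:
        lvm = json.load(f)
    with open("raw_list.json") as f:
        raw = json.load(f)

    # Map ceph.osd_fsid tag -> OSD id as reported by the LVM listing.
    lvm_fsids = {lv["tags"]["ceph.osd_fsid"]: osd_id
                 for osd_id, lvs in lvm.items() for lv in lvs}

    for uuid, info in raw.items():
        match = lvm_fsids.get(uuid, "<not in LVM listing>")
        print(f"{info['device']} -> osd.{info['osd_id']} (lvm match: osd.{match})")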
Dec  3 18:40:42 compute-0 systemd[1]: libpod-b895b6f720a99b08e553130255e3fc204f1f5575bad9e52a52d93fa8bcdfc65e.scope: Deactivated successfully.
Dec  3 18:40:42 compute-0 systemd[1]: libpod-b895b6f720a99b08e553130255e3fc204f1f5575bad9e52a52d93fa8bcdfc65e.scope: Consumed 1.033s CPU time.
Dec  3 18:40:42 compute-0 podman[417878]: 2025-12-03 18:40:42.662231233 +0000 UTC m=+0.031079581 container died b895b6f720a99b08e553130255e3fc204f1f5575bad9e52a52d93fa8bcdfc65e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:40:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae79b82590c691cf6bcd367d1813167b31b1699a6945ec13e0ff41102753e7da-merged.mount: Deactivated successfully.
Dec  3 18:40:42 compute-0 podman[417878]: 2025-12-03 18:40:42.832829632 +0000 UTC m=+0.201677960 container remove b895b6f720a99b08e553130255e3fc204f1f5575bad9e52a52d93fa8bcdfc65e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_sanderson, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:40:42 compute-0 systemd[1]: libpod-conmon-b895b6f720a99b08e553130255e3fc204f1f5575bad9e52a52d93fa8bcdfc65e.scope: Deactivated successfully.
Dec  3 18:40:42 compute-0 nova_compute[348325]: 2025-12-03 18:40:42.871 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:40:42 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:40:43 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:40:43 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:40:43 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1243: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:40:43 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:40:43 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 03b2fe9d-00ae-46ef-9956-440b693e9cd5 does not exist
Dec  3 18:40:43 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev c55d289d-4d83-4787-9229-275a72555229 does not exist
Dec  3 18:40:43 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:40:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:40:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:40:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:40:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:40:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:40:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:40:44 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:40:44 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:40:45 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1244: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:40:47 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1245: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:40:47 compute-0 nova_compute[348325]: 2025-12-03 18:40:47.503 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:40:47 compute-0 nova_compute[348325]: 2025-12-03 18:40:47.874 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:40:47 compute-0 podman[417943]: 2025-12-03 18:40:47.916790832 +0000 UTC m=+0.080014287 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  3 18:40:47 compute-0 podman[417944]: 2025-12-03 18:40:47.933707765 +0000 UTC m=+0.087616652 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, name=ubi9-minimal, vendor=Red Hat, Inc., version=9.6, architecture=x86_64, io.buildah.version=1.33.7, config_id=edpm, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, vcs-type=git)
Dec  3 18:40:47 compute-0 podman[417942]: 2025-12-03 18:40:47.952999366 +0000 UTC m=+0.114504628 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  3 18:40:48 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:40:49 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1246: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:40:49 compute-0 podman[418002]: 2025-12-03 18:40:49.932632722 +0000 UTC m=+0.098911207 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  3 18:40:50 compute-0 podman[418020]: 2025-12-03 18:40:50.087933816 +0000 UTC m=+0.102999167 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm)
Dec  3 18:40:50 compute-0 podman[418019]: 2025-12-03 18:40:50.098122875 +0000 UTC m=+0.104830531 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, version=9.4, com.redhat.component=ubi9-container, config_id=edpm, distribution-scope=public, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, vendor=Red Hat, Inc., release-0.7.12=, architecture=x86_64, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, io.openshift.expose-services=, maintainer=Red Hat, Inc., name=ubi9, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler)
Dec  3 18:40:51 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1247: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:40:52 compute-0 nova_compute[348325]: 2025-12-03 18:40:52.507 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:40:52 compute-0 nova_compute[348325]: 2025-12-03 18:40:52.877 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:40:53 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1248: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:40:53 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:40:55 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1249: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:40:57 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1250: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:40:57 compute-0 nova_compute[348325]: 2025-12-03 18:40:57.510 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:40:57 compute-0 nova_compute[348325]: 2025-12-03 18:40:57.880 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:40:58 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:40:59 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1251: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:40:59 compute-0 podman[158200]: time="2025-12-03T18:40:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:40:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:40:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 18:40:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:40:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8633 "" "Go-http-client/1.1"
Dec  3 18:40:59 compute-0 podman[418055]: 2025-12-03 18:40:59.92154424 +0000 UTC m=+0.088779081 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 18:41:01 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1252: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:41:01 compute-0 openstack_network_exporter[365222]: ERROR   18:41:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:41:01 compute-0 openstack_network_exporter[365222]: ERROR   18:41:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:41:01 compute-0 openstack_network_exporter[365222]: ERROR   18:41:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:41:01 compute-0 openstack_network_exporter[365222]: ERROR   18:41:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:41:01 compute-0 openstack_network_exporter[365222]: 
Dec  3 18:41:01 compute-0 openstack_network_exporter[365222]: ERROR   18:41:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:41:01 compute-0 openstack_network_exporter[365222]: 
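The openstack_network_exporter errors above report that no control socket files were found for ovsdb-server or ovn-northd; the ovn-northd failures are consistent with this being a compute node, where ovn-controller runs locally (see its health check below) while ovn-northd lives on the control plane. A minimal sketch that lists whatever OVS control sockets are actually present, assuming the default /var/run/openvswitch runtime directory (the exporter's own search path is not shown in the log):

    import glob

    # ovs-vswitchd and ovsdb-server create <name>.<pid>.ctl sockets in the
    # runtime dir; an empty result matches the "no control socket files
    # found" errors logged above.
    sockets = sorted(glob.glob("/var/run/openvswitch/*.ctl"))
    for path in sockets or ["<no .ctl sockets found>"]:
        print(path)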
Dec  3 18:41:02 compute-0 nova_compute[348325]: 2025-12-03 18:41:02.510 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:41:02 compute-0 nova_compute[348325]: 2025-12-03 18:41:02.882 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:41:03 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1253: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:41:03 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:41:03 compute-0 podman[418079]: 2025-12-03 18:41:03.937506527 +0000 UTC m=+0.095911315 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  3 18:41:03 compute-0 podman[418078]: 2025-12-03 18:41:03.95156972 +0000 UTC m=+0.118142068 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS)
Dec  3 18:41:05 compute-0 nova_compute[348325]: 2025-12-03 18:41:05.076 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:41:05 compute-0 nova_compute[348325]: 2025-12-03 18:41:05.076 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:41:05 compute-0 nova_compute[348325]: 2025-12-03 18:41:05.077 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:41:05 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1254: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:41:05 compute-0 nova_compute[348325]: 2025-12-03 18:41:05.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:41:05 compute-0 nova_compute[348325]: 2025-12-03 18:41:05.486 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 18:41:06 compute-0 nova_compute[348325]: 2025-12-03 18:41:06.070 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "refresh_cache-df72d527-943e-4e8c-b62a-63afa5f18261" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 18:41:06 compute-0 nova_compute[348325]: 2025-12-03 18:41:06.078 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquired lock "refresh_cache-df72d527-943e-4e8c-b62a-63afa5f18261" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 18:41:06 compute-0 nova_compute[348325]: 2025-12-03 18:41:06.081 348329 DEBUG nova.network.neutron [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  3 18:41:07 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1255: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:41:07 compute-0 nova_compute[348325]: 2025-12-03 18:41:07.515 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:41:07 compute-0 nova_compute[348325]: 2025-12-03 18:41:07.857 348329 DEBUG nova.network.neutron [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Updating instance_info_cache with network_info: [{"id": "03bf6208-f40b-4534-a297-122588172fa5", "address": "fa:16:3e:41:ba:29", "network": {"id": "85c8d446-ad7f-4d1b-a311-89b0b07e8aad", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.170", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2770200bdb2436c90142fa2e5ddcd47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03bf6208-f4", "ovs_interfaceid": "03bf6208-f40b-4534-a297-122588172fa5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 18:41:07 compute-0 nova_compute[348325]: 2025-12-03 18:41:07.881 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Releasing lock "refresh_cache-df72d527-943e-4e8c-b62a-63afa5f18261" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 18:41:07 compute-0 nova_compute[348325]: 2025-12-03 18:41:07.881 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  3 18:41:07 compute-0 nova_compute[348325]: 2025-12-03 18:41:07.882 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:41:07 compute-0 nova_compute[348325]: 2025-12-03 18:41:07.883 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:41:07 compute-0 nova_compute[348325]: 2025-12-03 18:41:07.883 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:41:07 compute-0 nova_compute[348325]: 2025-12-03 18:41:07.884 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:41:07 compute-0 nova_compute[348325]: 2025-12-03 18:41:07.884 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 18:41:07 compute-0 nova_compute[348325]: 2025-12-03 18:41:07.885 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:41:08 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:41:09 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1256: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:41:11 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1257: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:41:12 compute-0 nova_compute[348325]: 2025-12-03 18:41:12.517 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:41:12 compute-0 nova_compute[348325]: 2025-12-03 18:41:12.887 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:41:13 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1258: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.248 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.249 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.249 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.250 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7eff8d7fffe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.250 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.251 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff9026f920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.252 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.252 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.252 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffa10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.253 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8daba2d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.253 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.253 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff90799b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.254 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.254 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8f46ebd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.255 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.255 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.256 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.256 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.257 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.257 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.257 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.257 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.257 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.257 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.257 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.257 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7fff50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.257 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.257 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7fffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.257 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8ef7c7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
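[Editor's note] The long run of registration messages above repeats the same pattern per extension: each stevedore pollster from [pollsters] is handed to one shared ThreadPoolExecutor together with shared cache, history, and discovery-cache dicts. A sketch of that submission step, with a hypothetical helper standing in for the real register_pollster_execution:

    # Sketch of the per-extension registration logged above: submit each
    # pollster to the shared executor along with the shared mutable caches.
    # Hypothetical helper; the entry point and argument names are
    # assumptions, not ceilometer's actual signature.
    def register_pollster_execution(executor, extension, cache,
                                    pollster_history, discovery_cache):
        return executor.submit(
            extension.obj.get_samples,        # stevedore Extension wraps the pollster in .obj
            cache=cache,                      # per-cycle sample cache (starts as {})
            history=pollster_history,         # previous cumulative readings
            discovery_cache=discovery_cache,  # resources discovered this cycle
        )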
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.259 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '1ca1fbdb-089c-4544-821e-0542089b8424', 'name': 'test_0', 'flavor': {'id': '6cb250a4-d28c-4125-888b-653b31e29275', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e68cd467-b4e6-45e0-8e55-984fda402294'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'd2770200bdb2436c90142fa2e5ddcd47', 'user_id': '56338958b09445f5af9aa9e4601a1a8a', 'hostId': '233c08f520fd9700ef62a871bc5d558f2659759d89ea6c0726998878', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.264 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'df72d527-943e-4e8c-b62a-63afa5f18261', 'name': 'vn-66btob3-hjy2dfx75wfw-5fmurbrh4hte-vnf-qa644it4tdj5', 'flavor': {'id': '6cb250a4-d28c-4125-888b-653b31e29275', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e68cd467-b4e6-45e0-8e55-984fda402294'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'd2770200bdb2436c90142fa2e5ddcd47', 'user_id': '56338958b09445f5af9aa9e4601a1a8a', 'hostId': '233c08f520fd9700ef62a871bc5d558f2659759d89ea6c0726998878', 'status': 'active', 'metadata': {'metering.server_group': 'b322e118-e1cc-40be-8d8c-553648144092'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
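[Editor's note] Each discovery record above is a plain dict describing one libvirt domain. The fields that matter downstream are the instance UUID (used as the sample resource_id), the libvirt domain name, and the flavor sizes that bound several meters. A sketch using the first record's values (structure copied from the log line, trimmed for brevity):

    # Pulling out the fields later pollsters key off from one discovered
    # instance record.
    instance = {
        'id': '1ca1fbdb-089c-4544-821e-0542089b8424',
        'name': 'test_0',
        'flavor': {'name': 'm1.small', 'vcpus': 1, 'ram': 512,
                   'disk': 1, 'ephemeral': 1, 'swap': 0},
        'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001',
        'status': 'active',
    }

    resource_id = instance['id']                              # sample resource_id
    libvirt_name = instance['OS-EXT-SRV-ATTR:instance_name']  # domain lookup key
    ram_mib = instance['flavor']['ram']                       # 512 MiB, bounds memory.usage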
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.264 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.265 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.265 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8050>] is not configured in a source for polling that requires coordination. The current hashrings are: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.265 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.266 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-03T18:41:13.265254) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
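[Editor's note] When a polling source does configure coordination, membership in a hash ring decides which agent polls which resource; here the coordination group name is [None], so the check short-circuits and everything is polled locally. A generic sketch of the partitioning idea (plain hashing, not the coordination library ceilometer uses for real hashrings):

    # Hash-ring style partitioning: an agent only polls the resources that
    # hash to it. With no members configured (group name [None]), every
    # resource stays with the local agent.
    import hashlib

    def owned_by(resource_id, members, me):
        if not members:          # no hashring -> poll everything locally
            return True
        digest = int(hashlib.md5(resource_id.encode()).hexdigest(), 16)
        return members[digest % len(members)] == me

    owned_by('1ca1fbdb-089c-4544-821e-0542089b8424', [], 'compute-0')  # True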
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.273 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.278 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.279 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
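[Editor's note] The volume lines above come from _stats_to_sample, which turns one per-instance counter reading into one sample. A stripped-down sketch of that shape (the dataclass is illustrative, not ceilometer's Sample type):

    # One cumulative counter per instance vNIC becomes one sample.
    from dataclasses import dataclass

    @dataclass
    class Sample:
        name: str         # e.g. network.incoming.packets.error
        volume: int       # counter value read from libvirt
        unit: str
        resource_id: str  # instance UUID

    def stats_to_sample(instance_id, meter, value, unit='packet'):
        return Sample(name=meter, volume=value, unit=unit,
                      resource_id=instance_id)

    s = stats_to_sample('1ca1fbdb-089c-4544-821e-0542089b8424',
                        'network.incoming.packets.error', 0)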
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.279 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7eff8d8a80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.279 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.279 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.279 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a80e0>] is not configured in a source for polling that requires coordination. The current hashrings are: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.279 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.279 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.outgoing.bytes volume: 2244 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.279 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/network.outgoing.bytes volume: 4652 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.280 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.280 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7eff8d8a8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.280 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.280 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-03T18:41:13.279595) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.280 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff9026f920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.280 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff9026f920>] is not configured in a source for polling that requires coordination. The current hashrings are: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.280 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.280 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.280 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/network.outgoing.packets volume: 39 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.281 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-03T18:41:13.280660) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.281 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.281 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7eff8d8a8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.281 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.281 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.281 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8170>] is not configured in a source for polling that requires coordination. The current hashrings are: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.281 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.281 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.282 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/network.outgoing.bytes.delta volume: 140 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.282 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
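[Editor's note] The .delta meter is derived rather than read: the previous cycle's cumulative reading, kept in the pollster history, is subtracted from the current one. Working backwards from the log, the first instance's cumulative outgoing bytes of 2244 with a delta of 70 implies a prior reading of 2174 (an inference; the prior value itself is not logged). A sketch, assuming a simple dict as the history cache:

    # Subtract the previous cycle's cumulative reading from the current one.
    history = {}  # (resource_id, meter) -> last cumulative value

    def delta(resource_id, meter, current):
        key = (resource_id, meter)
        previous = history.get(key, current)  # first cycle yields 0
        history[key] = current
        return current - previous

    delta('1ca1fbdb', 'network.outgoing.bytes', 2174)  # first cycle -> 0
    delta('1ca1fbdb', 'network.outgoing.bytes', 2244)  # -> 70, as logged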
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.282 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7eff8d8a81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.282 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.282 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7eff8d7ff9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.282 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-03T18:41:13.281731) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.282 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.282 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ffa10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.282 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ffa10>] is not configured in a source for polling that requires coordination. The current hashrings are: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.283 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.283 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.incoming.bytes volume: 2010 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.283 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/network.incoming.bytes volume: 4807 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.283 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-03T18:41:13.282973) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.283 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.283 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7eff8d7fe840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.283 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.283 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8daba2d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.283 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8daba2d0>] is not configured in a source for polling that requires coordination. The current hashrings are: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.284 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.284 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-03T18:41:13.284030) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.306 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.307 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.307 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.331 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.332 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.332 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.333 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
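[Editor's note] The capacity figures above are easy to cross-check against the flavor from the discovery records: 1073741824 bytes is exactly 1 GiB, matching m1.small's 1 GiB root and 1 GiB ephemeral disks, while the small third device (485376 and 583680 bytes) is presumably the config drive (an assumption; the log does not name the devices):

    # Worked check of the capacity figures: the two large devices are the
    # flavor's 1 GiB root and 1 GiB ephemeral disks; the third, small device
    # is presumably the config drive (assumption, not logged).
    GiB = 1 << 30
    assert GiB == 1073741824
    for vol in (1073741824, 1073741824, 485376):
        print(vol / GiB)   # 1.0, 1.0, ~0.00045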
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.333 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7eff8d8a82c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.333 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.333 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a82f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.333 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a82f0>] is not configured in a source for polling that requires coordination. The current hashrings are: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.333 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.334 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.334 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.334 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-03T18:41:13.333909) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.334 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.335 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7eff8d7ff9b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.335 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.335 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff90799b20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.335 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff90799b20>] is not configured in a source for polling that requires coordination. The current hashrings are: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.335 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.336 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-03T18:41:13.335580) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.358 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/memory.usage volume: 48.94921875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.393 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/memory.usage volume: 49.08984375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.394 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
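[Editor's note] The memory.usage values are consistent with libvirt reporting in KiB and ceilometer converting to MiB: 50124 KiB / 1024 = 48.94921875 MiB, roughly 9.6% of the flavor's 512 MiB (the KiB figure is back-derived from the logged value):

    # Worked check of the memory.usage sample for the first instance.
    used_kib = 50124
    used_mib = used_kib / 1024
    print(used_mib)                # 48.94921875, as logged
    print(used_mib / 512 * 100)    # ~9.56% of m1.small's RAM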
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.394 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7eff8d8a8350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.394 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.394 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.394 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8380>] is not configured in a source for polling that requires coordination. The current hashrings are: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.394 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.394 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.395 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.395 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-03T18:41:13.394819) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.396 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.396 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7eff8f682330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.397 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.397 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8f46ebd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.397 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8f46ebd0>] is not configured in a source for polling that requires coordination. The current hashrings are: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.398 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.398 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-03T18:41:13.398167) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.398 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.399 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.400 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.400 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.401 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.401 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.402 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.402 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7eff8d7ff4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.403 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.403 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.403 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff470>] is not configured in a source for polling that requires coordination. The current hashrings are: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.403 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.404 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-03T18:41:13.403667) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:41:13 compute-0 nova_compute[348325]: 2025-12-03 18:41:13.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:41:13 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.502 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.503 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.503 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 nova_compute[348325]: 2025-12-03 18:41:13.523 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:41:13 compute-0 nova_compute[348325]: 2025-12-03 18:41:13.524 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:41:13 compute-0 nova_compute[348325]: 2025-12-03 18:41:13.525 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:41:13 compute-0 nova_compute[348325]: 2025-12-03 18:41:13.525 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  3 18:41:13 compute-0 nova_compute[348325]: 2025-12-03 18:41:13.526 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
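[Editor's note] The resource-tracker audit above serializes on the compute_resources lock and then shells out to ceph to size the RBD-backed disk pool. A sketch of the equivalent probe with plain subprocess (the log shows nova doing this via oslo_concurrency.processutils; the total_avail_bytes field is an assumption about the ceph df JSON layout):

    # Run `ceph df` as the openstack client and read the pool stats as JSON.
    import json
    import subprocess

    def ceph_df(conf='/etc/ceph/ceph.conf', client='openstack'):
        out = subprocess.check_output(
            ['ceph', 'df', '--format=json', '--id', client, '--conf', conf])
        return json.loads(out)

    # stats = ceph_df()
    # stats['stats']['total_avail_bytes']  # free bytes cluster-wide (assumed key)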
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.602 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.603 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.603 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.605 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.605 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7eff8d930c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.606 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.606 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7eff8d7ff4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.606 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.606 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.606 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff500>] is not configured in a source for polling that requires coordination. The current hashrings are: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.607 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.607 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.latency volume: 1682579508 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.608 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.latency volume: 260360075 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.609 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.latency volume: 147233249 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.610 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.read.latency volume: 1698039964 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.611 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-03T18:41:13.607028) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.611 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.read.latency volume: 224294548 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.611 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.read.latency volume: 159520694 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.612 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
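[Editor's note] disk.device.read.latency is a cumulative counter in nanoseconds (total time spent servicing reads), so the first value above corresponds to roughly 1.68 seconds of accumulated read time for that device:

    # Convert the first logged latency counter from nanoseconds to seconds.
    ns = 1_682_579_508
    print(ns / 1e9)   # ~1.683 s of cumulative read time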
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.612 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7eff8d7ff530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.613 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.613 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.613 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff560>] is not configured in a source for polling that requires coordination. The current hashrings are: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.613 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.613 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.614 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-03T18:41:13.613642) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.614 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.615 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.615 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.616 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.616 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.618 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.618 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7eff8d7ff590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.618 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.618 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.619 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.619 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.619 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.620 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.620 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.621 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.621 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.622 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.623 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
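[Unit note (ours): disk.device.usage reports plain bytes, so the first two devices of each instance are exactly 1 GiB and the third is a much smaller device. A quick check of the arithmetic:

print(1073741824 / 2**30)  # -> 1.0 GiB for the two 1 GiB devices
print(485376 / 2**10)      # -> 474.0 KiB for the small third device
]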
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.623 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7eff8d7ff5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.623 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.624 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-03T18:41:13.619277) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.624 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.624 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.625 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-03T18:41:13.624824) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.624 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.625 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.625 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.626 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.626 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.write.bytes volume: 41836544 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.627 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.627 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.628 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.628 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7eff8d8a8620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.628 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.629 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.629 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.629 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.629 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.630 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-03T18:41:13.629392) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.630 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.631 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
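[Both instances report power.state volume 1, i.e. running: the sample carries the domain power-state code, and 1 means "running" in both libvirt's and Nova's numbering. Restated from nova.compute.power_state so the snippet stands alone:

POWER_STATES = {0: 'NOSTATE', 1: 'RUNNING', 3: 'PAUSED',
                4: 'SHUTDOWN', 6: 'CRASHED', 7: 'SUSPENDED'}
print(POWER_STATES[1])  # -> RUNNING, matching both samples above
]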
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.631 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7eff8d7ff650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.631 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.631 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.631 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.632 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.632 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.latency volume: 6303799002 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.632 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-03T18:41:13.632129) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.633 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.latency volume: 23959545 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.633 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.633 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.write.latency volume: 9999121595 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.633 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.write.latency volume: 29522381 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.634 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.634 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.635 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7eff8d7ff6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.635 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.635 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.635 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.635 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.636 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.636 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-03T18:41:13.635420) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.636 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.636 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.637 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.write.requests volume: 239 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.637 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.637 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.638 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.638 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7eff8d7ffa40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.638 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.638 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ffef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.639 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ffef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.639 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.639 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.639 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.640 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
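[A ".delta" meter is derived from its cumulative counterpart: the agent keeps the previous reading per instance/device and reports the difference, so the zeros above mean no incoming traffic since the last poll. A sketch of the idea (our illustration, not ceilometer internals; the key and numbers are made up):

previous = {}

def delta(key, cumulative):
    # Return the growth since the last reading for this (instance, device).
    prev = previous.get(key)
    previous[key] = cumulative
    return None if prev is None else max(cumulative - prev, 0)

print(delta(('1ca1fbdb', 'tap0'), 1000))  # None: first poll, no delta yet
print(delta(('1ca1fbdb', 'tap0'), 1000))  # 0: no traffic since last poll
]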
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.640 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7eff8d7ff710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.640 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-03T18:41:13.639216) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.641 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.641 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.641 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.641 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.642 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.642 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7eff8d7fff20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.642 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.642 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7fff50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.642 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7fff50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.642 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.643 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.incoming.packets volume: 18 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.643 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-03T18:41:13.641347) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.643 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-03T18:41:13.642934) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.643 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/network.incoming.packets volume: 30 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.644 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.644 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7eff8d7ff770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.644 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.644 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff7a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.644 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff7a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.644 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.645 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-03T18:41:13.644897) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.645 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.645 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7eff8d7fff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.646 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.646 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7fffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.646 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7fffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.646 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.646 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.646 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-03T18:41:13.646361) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.647 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.647 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.647 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7eff8d7fdac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.647 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.647 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8ef7c7d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.648 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8ef7c7d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.648 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.648 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/cpu volume: 38180000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.648 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-03T18:41:13.648117) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.648 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/cpu volume: 164440000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.649 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
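[The cpu meter is cumulative guest CPU time in nanoseconds (164440000000 ns is about 164.4 s for df72d527). Utilization over an interval is therefore the counter delta divided by wall-clock time times vCPU count; a worked example with illustrative numbers (ours):

def cpu_util_pct(cpu_ns_prev, cpu_ns_now, seconds, vcpus):
    # Percentage of available CPU consumed between two polls.
    return 100.0 * (cpu_ns_now - cpu_ns_prev) / (seconds * 1e9 * vcpus)

# e.g. if the counter grew by 3e9 ns over a 300 s poll on 1 vCPU:
print(cpu_util_pct(164_440_000_000, 167_440_000_000, 300, 1))  # -> 1.0
]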
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.649 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.650 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.650 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.650 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.650 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.651 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.651 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.651 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.651 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.651 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.652 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.652 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.652 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.653 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.653 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.653 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.653 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.653 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.654 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.654 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.654 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.654 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.654 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.655 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.655 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:41:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:41:13.655 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
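[The run of "Finished processing pollster [...]" lines closes the polling task started earlier. A quick consistency check (ours) that every pollster that started also finished in a captured log chunk:

import re

def unfinished(text):
    # Compare "Polling pollster X" starts against
    # "Finished processing pollster [X]" completions.
    started = set(re.findall(r'Polling pollster (\S+) in the context', text))
    done = set(re.findall(r'Finished processing pollster \[([^\]]+)\]', text))
    return started - done
]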
Dec  3 18:41:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_18:41:13
Dec  3 18:41:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 18:41:13 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 18:41:13 compute-0 ceph-mgr[193091]: [balancer INFO root] pools ['images', 'default.rgw.log', 'vms', '.rgw.root', 'volumes', '.mgr', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.meta', 'backups', 'cephfs.cephfs.meta']
Dec  3 18:41:13 compute-0 ceph-mgr[193091]: [balancer INFO root] prepared 0/10 changes
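[The mgr balancer, in upmap mode with max misplaced 0.05, evaluated all eleven pools and prepared 0 of at most 10 candidate changes, i.e. the PGs are already balanced. The same state can be queried directly; "ceph balancer status" is a real mgr command, though the JSON field names vary by release (sketch is ours):

import json, subprocess

out = subprocess.run(['ceph', 'balancer', 'status', '--format', 'json'],
                     capture_output=True, text=True, check=True).stdout
print(json.loads(out))
]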
Dec  3 18:41:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:41:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:41:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:41:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:41:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:41:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:41:14 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:41:14 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1399308568' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:41:14 compute-0 nova_compute[348325]: 2025-12-03 18:41:14.036 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
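[Nova's libvirt driver sizes its RBD storage by shelling out to ceph df as client.openstack, which is exactly what the mon audit lines above record. A sketch (ours) of the same call, reading a pool's available bytes from the JSON; the 'vms' pool name comes from the mgr log below, and the stats layout matches current Ceph releases:

import json, subprocess

raw = subprocess.run(
    ['ceph', 'df', '--format=json', '--id', 'openstack',
     '--conf', '/etc/ceph/ceph.conf'],
    capture_output=True, text=True, check=True).stdout
for pool in json.loads(raw)['pools']:
    if pool['name'] == 'vms':
        print(pool['stats']['max_avail'])
]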
Dec  3 18:41:14 compute-0 nova_compute[348325]: 2025-12-03 18:41:14.116 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:41:14 compute-0 nova_compute[348325]: 2025-12-03 18:41:14.116 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:41:14 compute-0 nova_compute[348325]: 2025-12-03 18:41:14.117 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:41:14 compute-0 nova_compute[348325]: 2025-12-03 18:41:14.125 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:41:14 compute-0 nova_compute[348325]: 2025-12-03 18:41:14.125 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:41:14 compute-0 nova_compute[348325]: 2025-12-03 18:41:14.126 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:41:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 18:41:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:41:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 18:41:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:41:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:41:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:41:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:41:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:41:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:41:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
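[The rbd_support mgr module periodically reloads its trash-purge and mirror-snapshot schedules for each pool, hence the doubled load_schedules lines (one per handler). Both schedule lists can be inspected with the rbd CLI; a hedged example (ours):

import subprocess

for pool in ('vms', 'volumes', 'backups', 'images'):
    subprocess.run(['rbd', 'trash', 'purge', 'schedule', 'ls', '--pool', pool])
    subprocess.run(['rbd', 'mirror', 'snapshot', 'schedule', 'ls', '--pool', pool])
]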
Dec  3 18:41:14 compute-0 nova_compute[348325]: 2025-12-03 18:41:14.531 348329 WARNING nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  3 18:41:14 compute-0 nova_compute[348325]: 2025-12-03 18:41:14.533 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3790MB free_disk=59.922000885009766GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  3 18:41:14 compute-0 nova_compute[348325]: 2025-12-03 18:41:14.533 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:41:14 compute-0 nova_compute[348325]: 2025-12-03 18:41:14.534 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:41:14 compute-0 nova_compute[348325]: 2025-12-03 18:41:14.628 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance 1ca1fbdb-089c-4544-821e-0542089b8424 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  3 18:41:14 compute-0 nova_compute[348325]: 2025-12-03 18:41:14.629 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance df72d527-943e-4e8c-b62a-63afa5f18261 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  3 18:41:14 compute-0 nova_compute[348325]: 2025-12-03 18:41:14.629 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  3 18:41:14 compute-0 nova_compute[348325]: 2025-12-03 18:41:14.630 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
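[The "Final resource view" numbers follow directly from the two instances' placement allocations plus the 512 MB the inventory below reserves for the host: used_ram = 512 + 2 x 512 = 1536 MB, used_disk = 2 x 2 = 4 GB, used_vcpus = 2. Restated (ours):

reserved_ram = 512  # MB, from the inventory line further down
instances = [{'MEMORY_MB': 512, 'DISK_GB': 2, 'VCPU': 1}] * 2
print(reserved_ram + sum(i['MEMORY_MB'] for i in instances))  # -> 1536
print(sum(i['DISK_GB'] for i in instances))                   # -> 4
print(sum(i['VCPU'] for i in instances))                      # -> 2
]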
Dec  3 18:41:14 compute-0 nova_compute[348325]: 2025-12-03 18:41:14.695 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 18:41:15 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:41:15 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2451123891' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:41:15 compute-0 nova_compute[348325]: 2025-12-03 18:41:15.141 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 18:41:15 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1259: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:41:15 compute-0 nova_compute[348325]: 2025-12-03 18:41:15.148 348329 DEBUG nova.compute.provider_tree [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  3 18:41:15 compute-0 nova_compute[348325]: 2025-12-03 18:41:15.314 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
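[Placement turns this inventory into schedulable capacity with the standard formula (total - reserved) * allocation_ratio, so this host can place up to 7167 MB of RAM, 32 VCPU and 52.2 GB of disk. Worked out (ours):

inventory = {'MEMORY_MB': (7679, 512, 1.0),
             'VCPU': (8, 0, 4.0),
             'DISK_GB': (59, 1, 0.9)}
for rc, (total, reserved, ratio) in inventory.items():
    print(rc, (total - reserved) * ratio)  # -> 7167.0, 32.0, 52.2
]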
Dec  3 18:41:15 compute-0 nova_compute[348325]: 2025-12-03 18:41:15.315 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  3 18:41:15 compute-0 nova_compute[348325]: 2025-12-03 18:41:15.316 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.782s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
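[The compute_resources acquire/release pair brackets the whole resource-tracker update (held 0.782 s here). Nova does this with oslo.concurrency's synchronized decorator; a minimal sketch of the pattern (the function body is ours):

from oslo_concurrency import lockutils

@lockutils.synchronized('compute_resources', fair=True)
def update_available_resource():
    # Everything between the "acquired" and "released" log lines
    # runs with this lock held.
    ...
]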
Dec  3 18:41:17 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1260: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:41:17 compute-0 nova_compute[348325]: 2025-12-03 18:41:17.519 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:41:17 compute-0 nova_compute[348325]: 2025-12-03 18:41:17.890 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:41:18 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:41:18 compute-0 podman[418166]: 2025-12-03 18:41:18.917489065 +0000 UTC m=+0.073263292 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, build-date=2025-08-20T13:12:41, version=9.6, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, name=ubi9-minimal, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, distribution-scope=public, io.openshift.expose-services=, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  3 18:41:18 compute-0 podman[418164]: 2025-12-03 18:41:18.924212718 +0000 UTC m=+0.090131822 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible)
Dec  3 18:41:18 compute-0 podman[418165]: 2025-12-03 18:41:18.926905064 +0000 UTC m=+0.085849258 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
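
Each health_status entry above is podman running the container's configured healthcheck on its timer: the 'test' command is executed inside the container, with the host directory named in 'mount' bound at /openstack. The same check can be triggered once by hand; a small sketch:

    import subprocess

    # `podman healthcheck run NAME` runs the configured check once and
    # exits 0 for healthy, non-zero otherwise.
    name = "openstack_network_exporter"  # any container_name from the log
    rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
    print(f"{name}: {'healthy' if rc == 0 else 'unhealthy'}")
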
Dec  3 18:41:19 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1261: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:41:20 compute-0 podman[418226]: 2025-12-03 18:41:20.914078575 +0000 UTC m=+0.082210190 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, architecture=x86_64, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, maintainer=Red Hat, Inc., release=1214.1726694543, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, release-0.7.12=, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm)
Dec  3 18:41:20 compute-0 podman[418227]: 2025-12-03 18:41:20.925175186 +0000 UTC m=+0.087298755 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi)
Dec  3 18:41:20 compute-0 podman[418228]: 2025-12-03 18:41:20.936325348 +0000 UTC m=+0.091424935 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 18:41:21 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1262: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:41:22 compute-0 nova_compute[348325]: 2025-12-03 18:41:22.520 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:41:22 compute-0 nova_compute[348325]: 2025-12-03 18:41:22.892 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:41:23 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1263: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:41:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:41:23.337 286999 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:41:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:41:23.338 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:41:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:41:23.339 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
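
The Acquiring/acquired/released triple is oslo.concurrency's standard lock tracing; neutron serializes its child-process monitor behind a named lock so overlapping timer runs cannot race each other. A minimal sketch of the pattern (the function body here is a placeholder, not neutron's code):

    from oslo_concurrency import lockutils

    # lockutils.synchronized serializes callers on a named lock and emits the
    # same "Acquiring"/"acquired"/"released" DEBUG lines seen in the log.
    @lockutils.synchronized("_check_child_processes")
    def _check_child_processes():
        # ... inspect monitored child processes, respawn any that died ...
        pass
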
Dec  3 18:41:23 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:41:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 18:41:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:41:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 18:41:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:41:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011048885483818454 of space, bias 1.0, pg target 0.33146656451455364 quantized to 32 (current 32)
Dec  3 18:41:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:41:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:41:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:41:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:41:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:41:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec  3 18:41:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:41:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 18:41:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:41:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:41:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:41:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 18:41:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:41:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 18:41:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:41:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:41:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:41:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
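
Each effective_target_ratio/pool pair above is the pg_autoscaler recomputing a usage-driven PG target: the pool's share of raw space, times its bias, times a cluster-wide PG budget. The logged targets are reproduced exactly by the sketch below, assuming the default mon_target_pg_per_osd of 100 and the 3 OSDs behind the 60 GiB of raw space shown in the pgmap lines (both assumptions, not read from the log):

    # Hypothetical re-derivation of the autoscaler's "pg target" arithmetic.
    PG_BUDGET = 100 * 3  # mon_target_pg_per_osd * OSD count (assumed)

    def raw_pg_target(usage_fraction, bias):
        return usage_fraction * bias * PG_BUDGET

    print(raw_pg_target(7.185749983720779e-06, 1.0))   # .mgr -> 0.0021557...
    print(raw_pg_target(0.0011048885483818454, 1.0))   # vms  -> 0.3314665...
    print(raw_pg_target(5.087256625643029e-07, 4.0))   # meta -> 0.0006104...

The "quantized to" figure then rounds toward a power of two and folds in per-pool minimums and a do-not-flap threshold, which is why sub-1 raw targets still land on 1, 16 or 32 in the log.
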
Dec  3 18:41:25 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1264: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:41:27 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1265: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:41:27 compute-0 nova_compute[348325]: 2025-12-03 18:41:27.523 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:41:27 compute-0 nova_compute[348325]: 2025-12-03 18:41:27.895 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:41:28 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:41:29 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1266: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:41:29 compute-0 podman[158200]: time="2025-12-03T18:41:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:41:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:41:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 18:41:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:41:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8641 "" "Go-http-client/1.1"
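
The two GET lines are hits on podman's libpod REST API over the service socket (the podman_exporter container below is pointed at unix:///run/podman/podman.sock). A stdlib-only sketch of the same containers/json call:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over a unix socket; enough to talk to the libpod API."""
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self.unix_path)
            self.sock = s

    # Endpoint path copied from the access-log line above.
    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")
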
Dec  3 18:41:30 compute-0 podman[418283]: 2025-12-03 18:41:30.91849662 +0000 UTC m=+0.081795809 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 18:41:31 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1267: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:41:31 compute-0 openstack_network_exporter[365222]: ERROR   18:41:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:41:31 compute-0 openstack_network_exporter[365222]: ERROR   18:41:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:41:31 compute-0 openstack_network_exporter[365222]: ERROR   18:41:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:41:31 compute-0 openstack_network_exporter[365222]: ERROR   18:41:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:41:31 compute-0 openstack_network_exporter[365222]: ERROR   18:41:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:41:32 compute-0 nova_compute[348325]: 2025-12-03 18:41:32.525 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:41:32 compute-0 nova_compute[348325]: 2025-12-03 18:41:32.897 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:41:33 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1268: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:41:33 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:41:34 compute-0 podman[418309]: 2025-12-03 18:41:34.897657098 +0000 UTC m=+0.070847402 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125)
Dec  3 18:41:34 compute-0 podman[418308]: 2025-12-03 18:41:34.932356116 +0000 UTC m=+0.106166605 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  3 18:41:35 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1269: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:41:37 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1270: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:41:37 compute-0 nova_compute[348325]: 2025-12-03 18:41:37.527 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:41:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 18:41:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4229259109' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 18:41:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 18:41:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4229259109' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
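
This time the df and "osd pool get-quota" commands arrive from 192.168.122.10 as librados mon_commands rather than CLI subprocesses, with each command encoded as a JSON document, exactly as the mon prints it. A sketch of issuing the same pair through the python rados binding, assuming the client.openstack keyring is readable:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                          name="client.openstack")
    cluster.connect()
    # The same payloads the mon logs above as mon_command({...}).
    for cmd in ({"prefix": "df", "format": "json"},
                {"prefix": "osd pool get-quota", "pool": "volumes",
                 "format": "json"}):
        ret, outbuf, errs = cluster.mon_command(json.dumps(cmd), b"")
        print(cmd["prefix"], "->", ret, outbuf[:80])
    cluster.shutdown()
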
Dec  3 18:41:37 compute-0 nova_compute[348325]: 2025-12-03 18:41:37.898 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:41:38 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:41:39 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1271: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:41:41 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1272: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:41:42 compute-0 nova_compute[348325]: 2025-12-03 18:41:42.531 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:41:42 compute-0 nova_compute[348325]: 2025-12-03 18:41:42.901 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:41:43 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1273: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:41:43 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:41:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:41:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:41:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:41:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:41:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:41:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:41:44 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:41:44 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:41:44 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 18:41:44 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:41:44 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 18:41:44 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:41:44 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 66e19bd2-ec79-4ee1-b261-8ed0f0a7c98e does not exist
Dec  3 18:41:44 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev f474d56f-d28e-4dda-96cd-00bd517c9aa7 does not exist
Dec  3 18:41:44 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 1c0ea946-4dba-42af-bfb9-8384b457095f does not exist
Dec  3 18:41:44 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 18:41:44 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 18:41:44 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 18:41:44 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:41:44 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:41:44 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:41:45 compute-0 podman[418617]: 2025-12-03 18:41:45.126056527 +0000 UTC m=+0.064240861 container create 37b4b0367d6579af85f4a533b4d0870ae5f0a982e259880f6193252c0dcc85d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_rubin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:41:45 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1274: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:41:45 compute-0 systemd[1]: Started libpod-conmon-37b4b0367d6579af85f4a533b4d0870ae5f0a982e259880f6193252c0dcc85d9.scope.
Dec  3 18:41:45 compute-0 podman[418617]: 2025-12-03 18:41:45.105581597 +0000 UTC m=+0.043765951 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:41:45 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:41:45 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:41:45 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:41:45 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:41:45 compute-0 podman[418617]: 2025-12-03 18:41:45.263527675 +0000 UTC m=+0.201712029 container init 37b4b0367d6579af85f4a533b4d0870ae5f0a982e259880f6193252c0dcc85d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_rubin, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:41:45 compute-0 podman[418617]: 2025-12-03 18:41:45.273990681 +0000 UTC m=+0.212175015 container start 37b4b0367d6579af85f4a533b4d0870ae5f0a982e259880f6193252c0dcc85d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_rubin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:41:45 compute-0 podman[418617]: 2025-12-03 18:41:45.278927921 +0000 UTC m=+0.217112275 container attach 37b4b0367d6579af85f4a533b4d0870ae5f0a982e259880f6193252c0dcc85d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  3 18:41:45 compute-0 romantic_rubin[418633]: 167 167
Dec  3 18:41:45 compute-0 systemd[1]: libpod-37b4b0367d6579af85f4a533b4d0870ae5f0a982e259880f6193252c0dcc85d9.scope: Deactivated successfully.
Dec  3 18:41:45 compute-0 conmon[418633]: conmon 37b4b0367d6579af85f4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-37b4b0367d6579af85f4a533b4d0870ae5f0a982e259880f6193252c0dcc85d9.scope/container/memory.events
Dec  3 18:41:45 compute-0 podman[418617]: 2025-12-03 18:41:45.288400983 +0000 UTC m=+0.226585317 container died 37b4b0367d6579af85f4a533b4d0870ae5f0a982e259880f6193252c0dcc85d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_rubin, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default)
Dec  3 18:41:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-75d86b86a77ce2090a7a2e7222c42ad010335b32d2f792c0bec87623d23eebde-merged.mount: Deactivated successfully.
Dec  3 18:41:45 compute-0 podman[418617]: 2025-12-03 18:41:45.337280537 +0000 UTC m=+0.275464871 container remove 37b4b0367d6579af85f4a533b4d0870ae5f0a982e259880f6193252c0dcc85d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_rubin, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec  3 18:41:45 compute-0 systemd[1]: libpod-conmon-37b4b0367d6579af85f4a533b4d0870ae5f0a982e259880f6193252c0dcc85d9.scope: Deactivated successfully.
Dec  3 18:41:45 compute-0 podman[418656]: 2025-12-03 18:41:45.635676808 +0000 UTC m=+0.097611387 container create 8da7a4bc1843361bc8e33a90a91f53998e0298bd53d723e6716a271e3a36fd3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_cohen, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  3 18:41:45 compute-0 podman[418656]: 2025-12-03 18:41:45.603043781 +0000 UTC m=+0.064978450 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:41:45 compute-0 systemd[1]: Started libpod-conmon-8da7a4bc1843361bc8e33a90a91f53998e0298bd53d723e6716a271e3a36fd3f.scope.
Dec  3 18:41:45 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:41:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a819ba3e2bac39f90c6c526ac06d39fc3da57c779f16b1b7245ff76fb1caa7a9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:41:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a819ba3e2bac39f90c6c526ac06d39fc3da57c779f16b1b7245ff76fb1caa7a9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:41:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a819ba3e2bac39f90c6c526ac06d39fc3da57c779f16b1b7245ff76fb1caa7a9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:41:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a819ba3e2bac39f90c6c526ac06d39fc3da57c779f16b1b7245ff76fb1caa7a9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:41:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a819ba3e2bac39f90c6c526ac06d39fc3da57c779f16b1b7245ff76fb1caa7a9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 18:41:45 compute-0 podman[418656]: 2025-12-03 18:41:45.850918196 +0000 UTC m=+0.312852765 container init 8da7a4bc1843361bc8e33a90a91f53998e0298bd53d723e6716a271e3a36fd3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_cohen, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec  3 18:41:45 compute-0 podman[418656]: 2025-12-03 18:41:45.863048613 +0000 UTC m=+0.324983222 container start 8da7a4bc1843361bc8e33a90a91f53998e0298bd53d723e6716a271e3a36fd3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_cohen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Dec  3 18:41:45 compute-0 podman[418656]: 2025-12-03 18:41:45.871057888 +0000 UTC m=+0.332992477 container attach 8da7a4bc1843361bc8e33a90a91f53998e0298bd53d723e6716a271e3a36fd3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_cohen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec  3 18:41:47 compute-0 happy_cohen[418672]: --> passed data devices: 0 physical, 3 LVM
Dec  3 18:41:47 compute-0 happy_cohen[418672]: --> relative data size: 1.0
Dec  3 18:41:47 compute-0 happy_cohen[418672]: --> All data devices are unavailable
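
The throwaway romantic_rubin/happy_cohen/adoring_poitras containers are cephadm's periodic device scans: each runs a short ceph-volume pass inside the pinned ceph image and exits, and happy_cohen's output above reports that none of the 3 LVM-backed data devices can be consumed for new OSDs. A hand-run equivalent might look like the following sketch (the exact command cephadm invokes is not shown in the log; ceph-volume inventory is used here for illustration, with the image digest taken from the log):

    import json
    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # List host disks via ceph-volume in a throwaway container; --privileged
    # and the /dev bind are needed for it to see block devices.
    out = subprocess.run(
        ["podman", "run", "--rm", "--privileged", "-v", "/dev:/dev",
         IMAGE, "ceph-volume", "inventory", "--format", "json"],
        capture_output=True, text=True, check=True)
    for dev in json.loads(out.stdout):
        print(dev["path"], "available:", dev["available"])
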
Dec  3 18:41:47 compute-0 systemd[1]: libpod-8da7a4bc1843361bc8e33a90a91f53998e0298bd53d723e6716a271e3a36fd3f.scope: Deactivated successfully.
Dec  3 18:41:47 compute-0 systemd[1]: libpod-8da7a4bc1843361bc8e33a90a91f53998e0298bd53d723e6716a271e3a36fd3f.scope: Consumed 1.161s CPU time.
Dec  3 18:41:47 compute-0 podman[418656]: 2025-12-03 18:41:47.096022436 +0000 UTC m=+1.557957015 container died 8da7a4bc1843361bc8e33a90a91f53998e0298bd53d723e6716a271e3a36fd3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_cohen, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:41:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-a819ba3e2bac39f90c6c526ac06d39fc3da57c779f16b1b7245ff76fb1caa7a9-merged.mount: Deactivated successfully.
Dec  3 18:41:47 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1275: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:41:47 compute-0 podman[418656]: 2025-12-03 18:41:47.179834554 +0000 UTC m=+1.641769113 container remove 8da7a4bc1843361bc8e33a90a91f53998e0298bd53d723e6716a271e3a36fd3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_cohen, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec  3 18:41:47 compute-0 systemd[1]: libpod-conmon-8da7a4bc1843361bc8e33a90a91f53998e0298bd53d723e6716a271e3a36fd3f.scope: Deactivated successfully.
Dec  3 18:41:47 compute-0 nova_compute[348325]: 2025-12-03 18:41:47.533 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:41:47 compute-0 nova_compute[348325]: 2025-12-03 18:41:47.904 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:41:47 compute-0 podman[418851]: 2025-12-03 18:41:47.969759393 +0000 UTC m=+0.100673820 container create 674834e9e7d79b01082faf879a3edaaed3402135d9eb1524288f1457868c4d37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_poitras, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  3 18:41:47 compute-0 podman[418851]: 2025-12-03 18:41:47.896722839 +0000 UTC m=+0.027637286 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:41:48 compute-0 systemd[1]: Started libpod-conmon-674834e9e7d79b01082faf879a3edaaed3402135d9eb1524288f1457868c4d37.scope.
Dec  3 18:41:48 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:41:48 compute-0 podman[418851]: 2025-12-03 18:41:48.087952222 +0000 UTC m=+0.218866669 container init 674834e9e7d79b01082faf879a3edaaed3402135d9eb1524288f1457868c4d37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_poitras, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec  3 18:41:48 compute-0 podman[418851]: 2025-12-03 18:41:48.097671299 +0000 UTC m=+0.228585736 container start 674834e9e7d79b01082faf879a3edaaed3402135d9eb1524288f1457868c4d37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_poitras, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507)
Dec  3 18:41:48 compute-0 podman[418851]: 2025-12-03 18:41:48.102579189 +0000 UTC m=+0.233493646 container attach 674834e9e7d79b01082faf879a3edaaed3402135d9eb1524288f1457868c4d37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_poitras, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:41:48 compute-0 adoring_poitras[418868]: 167 167
Dec  3 18:41:48 compute-0 systemd[1]: libpod-674834e9e7d79b01082faf879a3edaaed3402135d9eb1524288f1457868c4d37.scope: Deactivated successfully.
Dec  3 18:41:48 compute-0 conmon[418868]: conmon 674834e9e7d79b01082f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-674834e9e7d79b01082faf879a3edaaed3402135d9eb1524288f1457868c4d37.scope/container/memory.events
Dec  3 18:41:48 compute-0 podman[418851]: 2025-12-03 18:41:48.111372723 +0000 UTC m=+0.242287150 container died 674834e9e7d79b01082faf879a3edaaed3402135d9eb1524288f1457868c4d37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  3 18:41:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-f79572407c6b03ecb38de273d401dd7a5ed1b085791880a132ca9b8271e4556c-merged.mount: Deactivated successfully.
Dec  3 18:41:48 compute-0 podman[418851]: 2025-12-03 18:41:48.178958695 +0000 UTC m=+0.309873132 container remove 674834e9e7d79b01082faf879a3edaaed3402135d9eb1524288f1457868c4d37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_poitras, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Dec  3 18:41:48 compute-0 systemd[1]: libpod-conmon-674834e9e7d79b01082faf879a3edaaed3402135d9eb1524288f1457868c4d37.scope: Deactivated successfully.
Dec  3 18:41:48 compute-0 podman[418892]: 2025-12-03 18:41:48.367821979 +0000 UTC m=+0.039648140 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:41:48 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:41:48 compute-0 podman[418892]: 2025-12-03 18:41:48.58193013 +0000 UTC m=+0.253756301 container create d2de13a0289aed1019ded7d62a0cd5906c1d7843869dbd09a18437caf0c379ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:41:48 compute-0 systemd[1]: Started libpod-conmon-d2de13a0289aed1019ded7d62a0cd5906c1d7843869dbd09a18437caf0c379ac.scope.
Dec  3 18:41:48 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:41:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99ef8c18e67c18c03ad1f8994b1e3c62bcffffef6bda6d5d987ac56fcc477309/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:41:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99ef8c18e67c18c03ad1f8994b1e3c62bcffffef6bda6d5d987ac56fcc477309/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:41:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99ef8c18e67c18c03ad1f8994b1e3c62bcffffef6bda6d5d987ac56fcc477309/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:41:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99ef8c18e67c18c03ad1f8994b1e3c62bcffffef6bda6d5d987ac56fcc477309/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
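The xfs notices above are informational: the paths bind-mounted into the container use the older 32-bit on-disk timestamp format, which runs out at 0x7fffffff seconds after the epoch. A quick check of the date that limit decodes to (plain Python, independent of this host):

```python
# 0x7fffffff is the largest signed 32-bit time_t, the limit the kernel
# messages above refer to.
import datetime

print(datetime.datetime.fromtimestamp(0x7fffffff, tz=datetime.timezone.utc))
# -> 2038-01-19 03:14:07+00:00
```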
Dec  3 18:41:48 compute-0 podman[418892]: 2025-12-03 18:41:48.741416136 +0000 UTC m=+0.413242307 container init d2de13a0289aed1019ded7d62a0cd5906c1d7843869dbd09a18437caf0c379ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_pascal, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  3 18:41:48 compute-0 podman[418892]: 2025-12-03 18:41:48.758591207 +0000 UTC m=+0.430417348 container start d2de13a0289aed1019ded7d62a0cd5906c1d7843869dbd09a18437caf0c379ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_pascal, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  3 18:41:48 compute-0 podman[418892]: 2025-12-03 18:41:48.765231128 +0000 UTC m=+0.437057269 container attach d2de13a0289aed1019ded7d62a0cd5906c1d7843869dbd09a18437caf0c379ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_pascal, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  3 18:41:49 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1276: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:41:49 compute-0 happy_pascal[418908]: {
Dec  3 18:41:49 compute-0 happy_pascal[418908]:    "0": [
Dec  3 18:41:49 compute-0 happy_pascal[418908]:        {
Dec  3 18:41:49 compute-0 happy_pascal[418908]:            "devices": [
Dec  3 18:41:49 compute-0 happy_pascal[418908]:                "/dev/loop3"
Dec  3 18:41:49 compute-0 happy_pascal[418908]:            ],
Dec  3 18:41:49 compute-0 happy_pascal[418908]:            "lv_name": "ceph_lv0",
Dec  3 18:41:49 compute-0 happy_pascal[418908]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:41:49 compute-0 happy_pascal[418908]:            "lv_size": "21470642176",
Dec  3 18:41:49 compute-0 happy_pascal[418908]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=973fbbc8-5aff-4a53-bee8-42e5a6788dd6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:41:49 compute-0 happy_pascal[418908]:            "lv_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:41:49 compute-0 happy_pascal[418908]:            "name": "ceph_lv0",
Dec  3 18:41:49 compute-0 happy_pascal[418908]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:41:49 compute-0 happy_pascal[418908]:            "tags": {
Dec  3 18:41:49 compute-0 happy_pascal[418908]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:41:49 compute-0 happy_pascal[418908]:                "ceph.block_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:41:49 compute-0 happy_pascal[418908]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:41:49 compute-0 happy_pascal[418908]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:41:49 compute-0 happy_pascal[418908]:                "ceph.cluster_name": "ceph",
Dec  3 18:41:49 compute-0 happy_pascal[418908]:                "ceph.crush_device_class": "",
Dec  3 18:41:49 compute-0 happy_pascal[418908]:                "ceph.encrypted": "0",
Dec  3 18:41:49 compute-0 happy_pascal[418908]:                "ceph.osd_fsid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:41:49 compute-0 happy_pascal[418908]:                "ceph.osd_id": "0",
Dec  3 18:41:49 compute-0 happy_pascal[418908]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:41:49 compute-0 happy_pascal[418908]:                "ceph.type": "block",
Dec  3 18:41:49 compute-0 happy_pascal[418908]:                "ceph.vdo": "0"
Dec  3 18:41:49 compute-0 happy_pascal[418908]:            },
Dec  3 18:41:49 compute-0 happy_pascal[418908]:            "type": "block",
Dec  3 18:41:49 compute-0 happy_pascal[418908]:            "vg_name": "ceph_vg0"
Dec  3 18:41:49 compute-0 happy_pascal[418908]:        }
Dec  3 18:41:49 compute-0 happy_pascal[418908]:    ],
Dec  3 18:41:49 compute-0 happy_pascal[418908]:    "1": [
Dec  3 18:41:49 compute-0 happy_pascal[418908]:        {
Dec  3 18:41:49 compute-0 happy_pascal[418908]:            "devices": [
Dec  3 18:41:49 compute-0 happy_pascal[418908]:                "/dev/loop4"
Dec  3 18:41:49 compute-0 happy_pascal[418908]:            ],
Dec  3 18:41:49 compute-0 happy_pascal[418908]:            "lv_name": "ceph_lv1",
Dec  3 18:41:49 compute-0 happy_pascal[418908]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:41:49 compute-0 happy_pascal[418908]:            "lv_size": "21470642176",
Dec  3 18:41:49 compute-0 happy_pascal[418908]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1e2b0083-5293-47cb-a3d1-bc27cedc4ede,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:41:49 compute-0 happy_pascal[418908]:            "lv_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:41:49 compute-0 happy_pascal[418908]:            "name": "ceph_lv1",
Dec  3 18:41:49 compute-0 happy_pascal[418908]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:41:49 compute-0 happy_pascal[418908]:            "tags": {
Dec  3 18:41:49 compute-0 happy_pascal[418908]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:41:49 compute-0 happy_pascal[418908]:                "ceph.block_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:41:49 compute-0 happy_pascal[418908]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:41:49 compute-0 happy_pascal[418908]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:41:49 compute-0 happy_pascal[418908]:                "ceph.cluster_name": "ceph",
Dec  3 18:41:49 compute-0 happy_pascal[418908]:                "ceph.crush_device_class": "",
Dec  3 18:41:49 compute-0 happy_pascal[418908]:                "ceph.encrypted": "0",
Dec  3 18:41:49 compute-0 happy_pascal[418908]:                "ceph.osd_fsid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:41:49 compute-0 happy_pascal[418908]:                "ceph.osd_id": "1",
Dec  3 18:41:49 compute-0 happy_pascal[418908]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:41:49 compute-0 happy_pascal[418908]:                "ceph.type": "block",
Dec  3 18:41:49 compute-0 happy_pascal[418908]:                "ceph.vdo": "0"
Dec  3 18:41:49 compute-0 happy_pascal[418908]:            },
Dec  3 18:41:49 compute-0 happy_pascal[418908]:            "type": "block",
Dec  3 18:41:49 compute-0 happy_pascal[418908]:            "vg_name": "ceph_vg1"
Dec  3 18:41:49 compute-0 happy_pascal[418908]:        }
Dec  3 18:41:49 compute-0 happy_pascal[418908]:    ],
Dec  3 18:41:49 compute-0 happy_pascal[418908]:    "2": [
Dec  3 18:41:49 compute-0 happy_pascal[418908]:        {
Dec  3 18:41:49 compute-0 happy_pascal[418908]:            "devices": [
Dec  3 18:41:49 compute-0 happy_pascal[418908]:                "/dev/loop5"
Dec  3 18:41:49 compute-0 happy_pascal[418908]:            ],
Dec  3 18:41:49 compute-0 happy_pascal[418908]:            "lv_name": "ceph_lv2",
Dec  3 18:41:49 compute-0 happy_pascal[418908]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:41:49 compute-0 happy_pascal[418908]:            "lv_size": "21470642176",
Dec  3 18:41:49 compute-0 happy_pascal[418908]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2abec9de-afba-437e-9a17-384a1dd8cd50,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:41:49 compute-0 happy_pascal[418908]:            "lv_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:41:49 compute-0 happy_pascal[418908]:            "name": "ceph_lv2",
Dec  3 18:41:49 compute-0 happy_pascal[418908]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:41:49 compute-0 happy_pascal[418908]:            "tags": {
Dec  3 18:41:49 compute-0 happy_pascal[418908]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:41:49 compute-0 happy_pascal[418908]:                "ceph.block_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:41:49 compute-0 happy_pascal[418908]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:41:49 compute-0 happy_pascal[418908]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:41:49 compute-0 happy_pascal[418908]:                "ceph.cluster_name": "ceph",
Dec  3 18:41:49 compute-0 happy_pascal[418908]:                "ceph.crush_device_class": "",
Dec  3 18:41:49 compute-0 happy_pascal[418908]:                "ceph.encrypted": "0",
Dec  3 18:41:49 compute-0 happy_pascal[418908]:                "ceph.osd_fsid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:41:49 compute-0 happy_pascal[418908]:                "ceph.osd_id": "2",
Dec  3 18:41:49 compute-0 happy_pascal[418908]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:41:49 compute-0 happy_pascal[418908]:                "ceph.type": "block",
Dec  3 18:41:49 compute-0 happy_pascal[418908]:                "ceph.vdo": "0"
Dec  3 18:41:49 compute-0 happy_pascal[418908]:            },
Dec  3 18:41:49 compute-0 happy_pascal[418908]:            "type": "block",
Dec  3 18:41:49 compute-0 happy_pascal[418908]:            "vg_name": "ceph_vg2"
Dec  3 18:41:49 compute-0 happy_pascal[418908]:        }
Dec  3 18:41:49 compute-0 happy_pascal[418908]:    ]
Dec  3 18:41:49 compute-0 happy_pascal[418908]: }
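The happy_pascal output above is a ceph-volume inventory of this host's OSD logical volumes, keyed by OSD id; the shape matches `ceph-volume lvm list --format json` (an assumption about the exact command). A minimal sketch that reduces the same report to one line per OSD, run with enough privilege to reach LVM:

```python
# Sketch: summarize `ceph-volume lvm list --format json`, whose shape
# matches the JSON logged above (osd id -> list of LV records).
import json
import subprocess

report = json.loads(subprocess.run(
    ["ceph-volume", "lvm", "list", "--format", "json"],
    capture_output=True, text=True, check=True).stdout)

for osd_id, lvs in sorted(report.items()):
    for lv in lvs:
        size_gib = int(lv["lv_size"]) / 2**30
        print(f"osd.{osd_id}: {lv['lv_path']} on {lv['devices'][0]} "
              f"({size_gib:.0f} GiB, fsid {lv['tags']['ceph.osd_fsid']})")
# Three ~20 GiB LVs here (21470642176 bytes each), which accounts for the
# "60 GiB / 60 GiB avail" in the ceph-mgr pgmap lines.
```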
Dec  3 18:41:49 compute-0 systemd[1]: libpod-d2de13a0289aed1019ded7d62a0cd5906c1d7843869dbd09a18437caf0c379ac.scope: Deactivated successfully.
Dec  3 18:41:49 compute-0 podman[418892]: 2025-12-03 18:41:49.603251583 +0000 UTC m=+1.275077714 container died d2de13a0289aed1019ded7d62a0cd5906c1d7843869dbd09a18437caf0c379ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_pascal, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:41:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-99ef8c18e67c18c03ad1f8994b1e3c62bcffffef6bda6d5d987ac56fcc477309-merged.mount: Deactivated successfully.
Dec  3 18:41:49 compute-0 podman[418892]: 2025-12-03 18:41:49.679011164 +0000 UTC m=+1.350837295 container remove d2de13a0289aed1019ded7d62a0cd5906c1d7843869dbd09a18437caf0c379ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_pascal, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 18:41:49 compute-0 systemd[1]: libpod-conmon-d2de13a0289aed1019ded7d62a0cd5906c1d7843869dbd09a18437caf0c379ac.scope: Deactivated successfully.
Dec  3 18:41:49 compute-0 podman[418926]: 2025-12-03 18:41:49.738698512 +0000 UTC m=+0.099293147 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  3 18:41:49 compute-0 podman[418927]: 2025-12-03 18:41:49.761228553 +0000 UTC m=+0.117497402 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, distribution-scope=public, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, io.openshift.expose-services=, release=1755695350, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, config_id=edpm, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  3 18:41:49 compute-0 podman[418919]: 2025-12-03 18:41:49.763946839 +0000 UTC m=+0.123995241 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
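The three health_status events above come from the edpm_ansible-managed healthchecks: podman periodically re-runs each container's configured test and journals the result with its config_data. A sketch of reading the same state back afterwards (hypothetical helper, not part of edpm_ansible; the inspect key layout varies across podman versions, so both spellings are tried):

```python
# Sketch: query the health state that the health_status events above report.
import json
import subprocess

def health(name: str) -> str:
    data = json.loads(subprocess.run(
        ["podman", "inspect", name],
        capture_output=True, text=True, check=True).stdout)[0]
    state = data.get("State", {})
    h = state.get("Health") or state.get("Healthcheck") or {}  # version-dependent key
    return h.get("Status", "unknown")

for name in ("node_exporter", "openstack_network_exporter", "multipathd"):
    print(name, health(name))
```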
Dec  3 18:41:50 compute-0 podman[419124]: 2025-12-03 18:41:50.412866613 +0000 UTC m=+0.041538496 container create 5382beef71bf68f6c79fbbf51900dc72cb81820cecef210f01d8b78776b4d990 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:41:50 compute-0 podman[419124]: 2025-12-03 18:41:50.395306994 +0000 UTC m=+0.023978897 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:41:50 compute-0 systemd[1]: Started libpod-conmon-5382beef71bf68f6c79fbbf51900dc72cb81820cecef210f01d8b78776b4d990.scope.
Dec  3 18:41:50 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:41:50 compute-0 podman[419124]: 2025-12-03 18:41:50.626208825 +0000 UTC m=+0.254880798 container init 5382beef71bf68f6c79fbbf51900dc72cb81820cecef210f01d8b78776b4d990 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_banach, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:41:50 compute-0 podman[419124]: 2025-12-03 18:41:50.642307989 +0000 UTC m=+0.270979872 container start 5382beef71bf68f6c79fbbf51900dc72cb81820cecef210f01d8b78776b4d990 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec  3 18:41:50 compute-0 podman[419124]: 2025-12-03 18:41:50.646955832 +0000 UTC m=+0.275627805 container attach 5382beef71bf68f6c79fbbf51900dc72cb81820cecef210f01d8b78776b4d990 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_banach, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:41:50 compute-0 vigorous_banach[419140]: 167 167
Dec  3 18:41:50 compute-0 systemd[1]: libpod-5382beef71bf68f6c79fbbf51900dc72cb81820cecef210f01d8b78776b4d990.scope: Deactivated successfully.
Dec  3 18:41:50 compute-0 podman[419124]: 2025-12-03 18:41:50.658313379 +0000 UTC m=+0.286985292 container died 5382beef71bf68f6c79fbbf51900dc72cb81820cecef210f01d8b78776b4d990 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_banach, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  3 18:41:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-4628af12c4d6e7c526d5551cd5052dd5281ef6972a4a1d20e0295df0df1bdc15-merged.mount: Deactivated successfully.
Dec  3 18:41:50 compute-0 podman[419124]: 2025-12-03 18:41:50.773294149 +0000 UTC m=+0.401966042 container remove 5382beef71bf68f6c79fbbf51900dc72cb81820cecef210f01d8b78776b4d990 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:41:50 compute-0 systemd[1]: libpod-conmon-5382beef71bf68f6c79fbbf51900dc72cb81820cecef210f01d8b78776b4d990.scope: Deactivated successfully.
Dec  3 18:41:50 compute-0 podman[419163]: 2025-12-03 18:41:50.985258948 +0000 UTC m=+0.057937796 container create f5fbd7499d95cd35a1257d2709f75d41105e644eeafd691b5d0de6f701a0d139 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_agnesi, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  3 18:41:51 compute-0 systemd[1]: Started libpod-conmon-f5fbd7499d95cd35a1257d2709f75d41105e644eeafd691b5d0de6f701a0d139.scope.
Dec  3 18:41:51 compute-0 podman[419163]: 2025-12-03 18:41:50.957345966 +0000 UTC m=+0.030024864 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:41:51 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:41:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02b38a0c72a91f9a768ecbf61eab26dd57a5faf9b933bd1743e7016e2df4bd08/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:41:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02b38a0c72a91f9a768ecbf61eab26dd57a5faf9b933bd1743e7016e2df4bd08/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:41:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02b38a0c72a91f9a768ecbf61eab26dd57a5faf9b933bd1743e7016e2df4bd08/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:41:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02b38a0c72a91f9a768ecbf61eab26dd57a5faf9b933bd1743e7016e2df4bd08/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:41:51 compute-0 podman[419163]: 2025-12-03 18:41:51.103514276 +0000 UTC m=+0.176193154 container init f5fbd7499d95cd35a1257d2709f75d41105e644eeafd691b5d0de6f701a0d139 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec  3 18:41:51 compute-0 podman[419163]: 2025-12-03 18:41:51.120280886 +0000 UTC m=+0.192959734 container start f5fbd7499d95cd35a1257d2709f75d41105e644eeafd691b5d0de6f701a0d139 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec  3 18:41:51 compute-0 podman[419163]: 2025-12-03 18:41:51.125462843 +0000 UTC m=+0.198141691 container attach f5fbd7499d95cd35a1257d2709f75d41105e644eeafd691b5d0de6f701a0d139 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_agnesi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Dec  3 18:41:51 compute-0 podman[419176]: 2025-12-03 18:41:51.128503947 +0000 UTC m=+0.100350532 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, build-date=2024-09-18T21:23:30, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, maintainer=Red Hat, Inc., version=9.4, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, architecture=x86_64, io.openshift.expose-services=, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec  3 18:41:51 compute-0 podman[419178]: 2025-12-03 18:41:51.135650342 +0000 UTC m=+0.107387544 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi)
Dec  3 18:41:51 compute-0 podman[419180]: 2025-12-03 18:41:51.14087618 +0000 UTC m=+0.101026550 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:41:51 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1277: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:41:52 compute-0 bold_agnesi[419204]: {
Dec  3 18:41:52 compute-0 bold_agnesi[419204]:    "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
Dec  3 18:41:52 compute-0 bold_agnesi[419204]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:41:52 compute-0 bold_agnesi[419204]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 18:41:52 compute-0 bold_agnesi[419204]:        "osd_id": 1,
Dec  3 18:41:52 compute-0 bold_agnesi[419204]:        "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:41:52 compute-0 bold_agnesi[419204]:        "type": "bluestore"
Dec  3 18:41:52 compute-0 bold_agnesi[419204]:    },
Dec  3 18:41:52 compute-0 bold_agnesi[419204]:    "2abec9de-afba-437e-9a17-384a1dd8cd50": {
Dec  3 18:41:52 compute-0 bold_agnesi[419204]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:41:52 compute-0 bold_agnesi[419204]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 18:41:52 compute-0 bold_agnesi[419204]:        "osd_id": 2,
Dec  3 18:41:52 compute-0 bold_agnesi[419204]:        "osd_uuid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:41:52 compute-0 bold_agnesi[419204]:        "type": "bluestore"
Dec  3 18:41:52 compute-0 bold_agnesi[419204]:    },
Dec  3 18:41:52 compute-0 bold_agnesi[419204]:    "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": {
Dec  3 18:41:52 compute-0 bold_agnesi[419204]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:41:52 compute-0 bold_agnesi[419204]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 18:41:52 compute-0 bold_agnesi[419204]:        "osd_id": 0,
Dec  3 18:41:52 compute-0 bold_agnesi[419204]:        "osd_uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:41:52 compute-0 bold_agnesi[419204]:        "type": "bluestore"
Dec  3 18:41:52 compute-0 bold_agnesi[419204]:    }
Dec  3 18:41:52 compute-0 bold_agnesi[419204]: }
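The bold_agnesi output above maps each OSD UUID to its bluestore device; the shape matches `ceph-volume raw list`, which prints JSON by default (again an assumption about the exact command). A minimal sketch inverting it to one line per OSD id:

```python
# Sketch: summarize `ceph-volume raw list`, whose JSON (keyed by OSD UUID)
# matches the output logged above.
import json
import subprocess

raw = json.loads(subprocess.run(
    ["ceph-volume", "raw", "list"],
    capture_output=True, text=True, check=True).stdout)

for entry in sorted(raw.values(), key=lambda e: e["osd_id"]):
    print(f"osd.{entry['osd_id']}: {entry['device']} "
          f"({entry['type']}, cluster {entry['ceph_fsid']})")
```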
Dec  3 18:41:52 compute-0 systemd[1]: libpod-f5fbd7499d95cd35a1257d2709f75d41105e644eeafd691b5d0de6f701a0d139.scope: Deactivated successfully.
Dec  3 18:41:52 compute-0 systemd[1]: libpod-f5fbd7499d95cd35a1257d2709f75d41105e644eeafd691b5d0de6f701a0d139.scope: Consumed 1.124s CPU time.
Dec  3 18:41:52 compute-0 conmon[419204]: conmon f5fbd7499d95cd35a125 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f5fbd7499d95cd35a1257d2709f75d41105e644eeafd691b5d0de6f701a0d139.scope/container/memory.events
Dec  3 18:41:52 compute-0 podman[419163]: 2025-12-03 18:41:52.26441411 +0000 UTC m=+1.337092958 container died f5fbd7499d95cd35a1257d2709f75d41105e644eeafd691b5d0de6f701a0d139 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec  3 18:41:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-02b38a0c72a91f9a768ecbf61eab26dd57a5faf9b933bd1743e7016e2df4bd08-merged.mount: Deactivated successfully.
Dec  3 18:41:52 compute-0 podman[419163]: 2025-12-03 18:41:52.358682863 +0000 UTC m=+1.431361711 container remove f5fbd7499d95cd35a1257d2709f75d41105e644eeafd691b5d0de6f701a0d139 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_agnesi, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:41:52 compute-0 systemd[1]: libpod-conmon-f5fbd7499d95cd35a1257d2709f75d41105e644eeafd691b5d0de6f701a0d139.scope: Deactivated successfully.
Dec  3 18:41:52 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:41:52 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:41:52 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:41:52 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:41:52 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 9890e785-cf88-48a3-a819-a3facd421366 does not exist
Dec  3 18:41:52 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 3ada7c9e-a0bc-4e54-bc1c-b151721f3dc3 does not exist
Dec  3 18:41:52 compute-0 nova_compute[348325]: 2025-12-03 18:41:52.536 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:41:52 compute-0 nova_compute[348325]: 2025-12-03 18:41:52.906 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:41:53 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1278: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:41:53 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:41:53 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:41:53 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:41:55 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1279: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:41:57 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1280: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:41:57 compute-0 nova_compute[348325]: 2025-12-03 18:41:57.540 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:41:57 compute-0 nova_compute[348325]: 2025-12-03 18:41:57.909 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:41:58 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:41:59 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1281: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:41:59 compute-0 podman[158200]: time="2025-12-03T18:41:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:41:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:41:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 18:41:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:41:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8637 "" "Go-http-client/1.1"
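The two GET lines above are the libpod REST API being polled over the podman socket; the Go-http-client user agent is consistent with prometheus-podman-exporter, whose config_data below mounts /run/podman/podman.sock. A minimal sketch of the same containers/json query from Python over the unix socket (the socket path is an assumption):

```python
# Sketch: issue the GET /v4.9.3/libpod/containers/json call seen in the log,
# over the podman system socket.
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    def __init__(self, path: str):
        super().__init__("localhost")  # host is ignored for unix sockets
        self._path = path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self._path)
        self.sock = sock

conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
resp = conn.getresponse()
containers = json.loads(resp.read())
print(resp.status, len(containers), "containers")
```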
Dec  3 18:42:01 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1282: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:42:01 compute-0 openstack_network_exporter[365222]: ERROR   18:42:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:42:01 compute-0 openstack_network_exporter[365222]: ERROR   18:42:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:42:01 compute-0 openstack_network_exporter[365222]: ERROR   18:42:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:42:01 compute-0 openstack_network_exporter[365222]: ERROR   18:42:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:42:01 compute-0 openstack_network_exporter[365222]: ERROR   18:42:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:42:01 compute-0 podman[419327]: 2025-12-03 18:42:01.969743668 +0000 UTC m=+0.134403394 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 18:42:02 compute-0 nova_compute[348325]: 2025-12-03 18:42:02.542 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:42:02 compute-0 nova_compute[348325]: 2025-12-03 18:42:02.911 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:42:03 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1283: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:42:03 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:42:05 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1284: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:42:05 compute-0 podman[419351]: 2025-12-03 18:42:05.935610572 +0000 UTC m=+0.097949594 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, tcib_managed=true)
Dec  3 18:42:05 compute-0 podman[419350]: 2025-12-03 18:42:05.980870858 +0000 UTC m=+0.135161314 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  3 18:42:06 compute-0 nova_compute[348325]: 2025-12-03 18:42:06.307 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:42:06 compute-0 nova_compute[348325]: 2025-12-03 18:42:06.308 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:42:06 compute-0 nova_compute[348325]: 2025-12-03 18:42:06.308 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:42:06 compute-0 nova_compute[348325]: 2025-12-03 18:42:06.308 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:42:06 compute-0 nova_compute[348325]: 2025-12-03 18:42:06.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:42:06 compute-0 nova_compute[348325]: 2025-12-03 18:42:06.486 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  3 18:42:06 compute-0 nova_compute[348325]: 2025-12-03 18:42:06.486 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  3 18:42:07 compute-0 nova_compute[348325]: 2025-12-03 18:42:07.022 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "refresh_cache-1ca1fbdb-089c-4544-821e-0542089b8424" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  3 18:42:07 compute-0 nova_compute[348325]: 2025-12-03 18:42:07.023 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquired lock "refresh_cache-1ca1fbdb-089c-4544-821e-0542089b8424" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  3 18:42:07 compute-0 nova_compute[348325]: 2025-12-03 18:42:07.024 348329 DEBUG nova.network.neutron [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec  3 18:42:07 compute-0 nova_compute[348325]: 2025-12-03 18:42:07.025 348329 DEBUG nova.objects.instance [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lazy-loading 'info_cache' on Instance uuid 1ca1fbdb-089c-4544-821e-0542089b8424 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  3 18:42:07 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1285: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:42:07 compute-0 nova_compute[348325]: 2025-12-03 18:42:07.547 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:42:07 compute-0 nova_compute[348325]: 2025-12-03 18:42:07.913 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:42:08 compute-0 nova_compute[348325]: 2025-12-03 18:42:08.542 348329 DEBUG nova.network.neutron [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Updating instance_info_cache with network_info: [{"id": "3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b", "address": "fa:16:3e:ea:1b:25", "network": {"id": "85c8d446-ad7f-4d1b-a311-89b0b07e8aad", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.128", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2770200bdb2436c90142fa2e5ddcd47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d8505a1-5c", "ovs_interfaceid": "3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  3 18:42:08 compute-0 nova_compute[348325]: 2025-12-03 18:42:08.556 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Releasing lock "refresh_cache-1ca1fbdb-089c-4544-821e-0542089b8424" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  3 18:42:08 compute-0 nova_compute[348325]: 2025-12-03 18:42:08.557 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec  3 18:42:08 compute-0 nova_compute[348325]: 2025-12-03 18:42:08.558 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:42:08 compute-0 nova_compute[348325]: 2025-12-03 18:42:08.558 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:42:08 compute-0 nova_compute[348325]: 2025-12-03 18:42:08.558 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:42:08 compute-0 nova_compute[348325]: 2025-12-03 18:42:08.559 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  3 18:42:08 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:42:09 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1286: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:42:11 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1287: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:42:12 compute-0 nova_compute[348325]: 2025-12-03 18:42:12.551 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:42:12 compute-0 nova_compute[348325]: 2025-12-03 18:42:12.917 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:42:13 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1288: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:42:13 compute-0 nova_compute[348325]: 2025-12-03 18:42:13.550 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:42:13 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:42:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_18:42:13
Dec  3 18:42:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 18:42:13 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 18:42:13 compute-0 ceph-mgr[193091]: [balancer INFO root] pools ['.mgr', 'volumes', 'default.rgw.control', 'cephfs.cephfs.data', 'vms', 'cephfs.cephfs.meta', 'images', '.rgw.root', 'default.rgw.log', 'default.rgw.meta', 'backups']
Dec  3 18:42:13 compute-0 ceph-mgr[193091]: [balancer INFO root] prepared 0/10 changes
Dec  3 18:42:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:42:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:42:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:42:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:42:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:42:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:42:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 18:42:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:42:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 18:42:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:42:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:42:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:42:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:42:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:42:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:42:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:42:14 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:42:14.738 286999 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5a:63:53', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '8e:79:bd:f4:48:1d'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  3 18:42:14 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:42:14.742 286999 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec  3 18:42:14 compute-0 nova_compute[348325]: 2025-12-03 18:42:14.747 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:42:15 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1289: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:42:15 compute-0 nova_compute[348325]: 2025-12-03 18:42:15.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:42:15 compute-0 nova_compute[348325]: 2025-12-03 18:42:15.679 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:42:15 compute-0 nova_compute[348325]: 2025-12-03 18:42:15.680 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:42:15 compute-0 nova_compute[348325]: 2025-12-03 18:42:15.680 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:42:15 compute-0 nova_compute[348325]: 2025-12-03 18:42:15.680 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  3 18:42:15 compute-0 nova_compute[348325]: 2025-12-03 18:42:15.681 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 18:42:16 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:42:16 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/780970623' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:42:16 compute-0 nova_compute[348325]: 2025-12-03 18:42:16.162 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 18:42:16 compute-0 nova_compute[348325]: 2025-12-03 18:42:16.343 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:42:16 compute-0 nova_compute[348325]: 2025-12-03 18:42:16.344 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:42:16 compute-0 nova_compute[348325]: 2025-12-03 18:42:16.344 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:42:16 compute-0 nova_compute[348325]: 2025-12-03 18:42:16.350 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:42:16 compute-0 nova_compute[348325]: 2025-12-03 18:42:16.350 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:42:16 compute-0 nova_compute[348325]: 2025-12-03 18:42:16.350 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:42:16 compute-0 nova_compute[348325]: 2025-12-03 18:42:16.695 348329 WARNING nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  3 18:42:16 compute-0 nova_compute[348325]: 2025-12-03 18:42:16.696 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3787MB free_disk=59.922000885009766GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  3 18:42:16 compute-0 nova_compute[348325]: 2025-12-03 18:42:16.696 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:42:16 compute-0 nova_compute[348325]: 2025-12-03 18:42:16.696 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:42:16 compute-0 nova_compute[348325]: 2025-12-03 18:42:16.798 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance 1ca1fbdb-089c-4544-821e-0542089b8424 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  3 18:42:16 compute-0 nova_compute[348325]: 2025-12-03 18:42:16.799 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance df72d527-943e-4e8c-b62a-63afa5f18261 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  3 18:42:16 compute-0 nova_compute[348325]: 2025-12-03 18:42:16.799 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  3 18:42:16 compute-0 nova_compute[348325]: 2025-12-03 18:42:16.800 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  3 18:42:16 compute-0 nova_compute[348325]: 2025-12-03 18:42:16.823 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Refreshing inventories for resource provider 00cd1895-22aa-49c6-bdb2-0991af662704 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec  3 18:42:16 compute-0 nova_compute[348325]: 2025-12-03 18:42:16.847 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Updating ProviderTree inventory for provider 00cd1895-22aa-49c6-bdb2-0991af662704 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec  3 18:42:16 compute-0 nova_compute[348325]: 2025-12-03 18:42:16.847 348329 DEBUG nova.compute.provider_tree [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Updating inventory in ProviderTree for provider 00cd1895-22aa-49c6-bdb2-0991af662704 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec  3 18:42:16 compute-0 nova_compute[348325]: 2025-12-03 18:42:16.870 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Refreshing aggregate associations for resource provider 00cd1895-22aa-49c6-bdb2-0991af662704, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec  3 18:42:16 compute-0 nova_compute[348325]: 2025-12-03 18:42:16.894 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Refreshing trait associations for resource provider 00cd1895-22aa-49c6-bdb2-0991af662704, traits: COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_FMA3,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AESNI,HW_CPU_X86_AMD_SVM,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SVM,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_ABM,HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_BMI,HW_CPU_X86_SHA,COMPUTE_NODE,HW_CPU_X86_SSE42,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE4A,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,HW_CPU_X86_AVX2,COMPUTE_ACCELERATORS,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ARI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec  3 18:42:16 compute-0 nova_compute[348325]: 2025-12-03 18:42:16.955 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 18:42:17 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1290: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:42:17 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:42:17 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/462253077' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:42:17 compute-0 nova_compute[348325]: 2025-12-03 18:42:17.444 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 18:42:17 compute-0 nova_compute[348325]: 2025-12-03 18:42:17.457 348329 DEBUG nova.compute.provider_tree [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  3 18:42:17 compute-0 nova_compute[348325]: 2025-12-03 18:42:17.476 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  3 18:42:17 compute-0 nova_compute[348325]: 2025-12-03 18:42:17.481 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  3 18:42:17 compute-0 nova_compute[348325]: 2025-12-03 18:42:17.482 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.786s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:42:17 compute-0 nova_compute[348325]: 2025-12-03 18:42:17.552 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:42:17 compute-0 nova_compute[348325]: 2025-12-03 18:42:17.920 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:42:18 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:42:19 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1291: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:42:19 compute-0 podman[419440]: 2025-12-03 18:42:19.902345005 +0000 UTC m=+0.068353912 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  3 18:42:19 compute-0 podman[419439]: 2025-12-03 18:42:19.911042687 +0000 UTC m=+0.079727249 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=multipathd)
Dec  3 18:42:19 compute-0 podman[419441]: 2025-12-03 18:42:19.939989074 +0000 UTC m=+0.101375428 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=openstack_network_exporter, io.buildah.version=1.33.7, name=ubi9-minimal, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, architecture=x86_64, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., version=9.6, release=1755695350, vcs-type=git, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  3 18:42:20 compute-0 nova_compute[348325]: 2025-12-03 18:42:20.781 348329 DEBUG oslo_concurrency.lockutils [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Acquiring lock "de3992c5-c1ad-4da3-9276-954d6365c3c9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:42:20 compute-0 nova_compute[348325]: 2025-12-03 18:42:20.783 348329 DEBUG oslo_concurrency.lockutils [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "de3992c5-c1ad-4da3-9276-954d6365c3c9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:42:20 compute-0 nova_compute[348325]: 2025-12-03 18:42:20.806 348329 DEBUG nova.compute.manager [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec  3 18:42:20 compute-0 nova_compute[348325]: 2025-12-03 18:42:20.884 348329 DEBUG oslo_concurrency.lockutils [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:42:20 compute-0 nova_compute[348325]: 2025-12-03 18:42:20.885 348329 DEBUG oslo_concurrency.lockutils [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:42:20 compute-0 nova_compute[348325]: 2025-12-03 18:42:20.894 348329 DEBUG nova.virt.hardware [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec  3 18:42:20 compute-0 nova_compute[348325]: 2025-12-03 18:42:20.895 348329 INFO nova.compute.claims [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] Claim successful on node compute-0.ctlplane.example.com
Dec  3 18:42:21 compute-0 nova_compute[348325]: 2025-12-03 18:42:21.067 348329 DEBUG oslo_concurrency.processutils [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 18:42:21 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1292: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:42:21 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:42:21 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2476119646' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:42:21 compute-0 nova_compute[348325]: 2025-12-03 18:42:21.515 348329 DEBUG oslo_concurrency.processutils [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 18:42:21 compute-0 nova_compute[348325]: 2025-12-03 18:42:21.525 348329 DEBUG nova.compute.provider_tree [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  3 18:42:21 compute-0 nova_compute[348325]: 2025-12-03 18:42:21.546 348329 DEBUG nova.scheduler.client.report [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  3 18:42:21 compute-0 nova_compute[348325]: 2025-12-03 18:42:21.574 348329 DEBUG oslo_concurrency.lockutils [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.689s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:42:21 compute-0 nova_compute[348325]: 2025-12-03 18:42:21.575 348329 DEBUG nova.compute.manager [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec  3 18:42:21 compute-0 nova_compute[348325]: 2025-12-03 18:42:21.629 348329 DEBUG nova.compute.manager [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec  3 18:42:21 compute-0 nova_compute[348325]: 2025-12-03 18:42:21.630 348329 DEBUG nova.network.neutron [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec  3 18:42:21 compute-0 nova_compute[348325]: 2025-12-03 18:42:21.653 348329 INFO nova.virt.libvirt.driver [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec  3 18:42:21 compute-0 nova_compute[348325]: 2025-12-03 18:42:21.691 348329 DEBUG nova.compute.manager [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec  3 18:42:21 compute-0 nova_compute[348325]: 2025-12-03 18:42:21.818 348329 DEBUG nova.compute.manager [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec  3 18:42:21 compute-0 nova_compute[348325]: 2025-12-03 18:42:21.820 348329 DEBUG nova.virt.libvirt.driver [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec  3 18:42:21 compute-0 nova_compute[348325]: 2025-12-03 18:42:21.821 348329 INFO nova.virt.libvirt.driver [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] Creating image(s)
Dec  3 18:42:21 compute-0 nova_compute[348325]: 2025-12-03 18:42:21.856 348329 DEBUG nova.storage.rbd_utils [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] rbd image de3992c5-c1ad-4da3-9276-954d6365c3c9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  3 18:42:21 compute-0 nova_compute[348325]: 2025-12-03 18:42:21.906 348329 DEBUG nova.storage.rbd_utils [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] rbd image de3992c5-c1ad-4da3-9276-954d6365c3c9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  3 18:42:21 compute-0 podman[419522]: 2025-12-03 18:42:21.920182024 +0000 UTC m=+0.093835444 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, version=9.4, architecture=x86_64, container_name=kepler, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., release-0.7.12=, vcs-type=git, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1214.1726694543)
Dec  3 18:42:21 compute-0 podman[419523]: 2025-12-03 18:42:21.927139174 +0000 UTC m=+0.091533787 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  3 18:42:21 compute-0 podman[419524]: 2025-12-03 18:42:21.939772393 +0000 UTC m=+0.102179118 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent)
Dec  3 18:42:21 compute-0 nova_compute[348325]: 2025-12-03 18:42:21.948 348329 DEBUG nova.storage.rbd_utils [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] rbd image de3992c5-c1ad-4da3-9276-954d6365c3c9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  3 18:42:21 compute-0 nova_compute[348325]: 2025-12-03 18:42:21.954 348329 DEBUG oslo_concurrency.processutils [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2a1fd6462a2f789b92c02c5037b663e095546067 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 18:42:22 compute-0 nova_compute[348325]: 2025-12-03 18:42:22.011 348329 DEBUG oslo_concurrency.processutils [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2a1fd6462a2f789b92c02c5037b663e095546067 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 18:42:22 compute-0 nova_compute[348325]: 2025-12-03 18:42:22.013 348329 DEBUG oslo_concurrency.lockutils [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Acquiring lock "2a1fd6462a2f789b92c02c5037b663e095546067" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:42:22 compute-0 nova_compute[348325]: 2025-12-03 18:42:22.014 348329 DEBUG oslo_concurrency.lockutils [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "2a1fd6462a2f789b92c02c5037b663e095546067" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:42:22 compute-0 nova_compute[348325]: 2025-12-03 18:42:22.014 348329 DEBUG oslo_concurrency.lockutils [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "2a1fd6462a2f789b92c02c5037b663e095546067" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:42:22 compute-0 nova_compute[348325]: 2025-12-03 18:42:22.043 348329 DEBUG nova.storage.rbd_utils [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] rbd image de3992c5-c1ad-4da3-9276-954d6365c3c9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  3 18:42:22 compute-0 nova_compute[348325]: 2025-12-03 18:42:22.051 348329 DEBUG oslo_concurrency.processutils [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/2a1fd6462a2f789b92c02c5037b663e095546067 de3992c5-c1ad-4da3-9276-954d6365c3c9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 18:42:22 compute-0 ceph-osd[208881]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Dec  3 18:42:22 compute-0 nova_compute[348325]: 2025-12-03 18:42:22.378 348329 DEBUG oslo_concurrency.processutils [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/2a1fd6462a2f789b92c02c5037b663e095546067 de3992c5-c1ad-4da3-9276-954d6365c3c9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.328s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:42:22 compute-0 nova_compute[348325]: 2025-12-03 18:42:22.507 348329 DEBUG nova.storage.rbd_utils [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] resizing rbd image de3992c5-c1ad-4da3-9276-954d6365c3c9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
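Taken together, the import and resize records show how an RBD-backed root disk is materialized: the flat base file is pushed into the vms pool with `rbd import --image-format=2`, then grown from the ~16 MB cirros image to the flavor's 1 GiB root size. A rough sketch of the resize step via the librbd Python bindings, assuming /etc/ceph/ceph.conf and the client.openstack keyring are readable (the import itself is easiest left to the rbd CLI, as logged above):

    import rados
    import rbd

    with rados.Rados(conffile='/etc/ceph/ceph.conf',
                     name='client.openstack') as cluster:
        with cluster.open_ioctx('vms') as ioctx:
            # Grow the freshly imported image to the flavor's root size,
            # matching the "resizing ... to 1073741824" record above.
            with rbd.Image(ioctx,
                           'de3992c5-c1ad-4da3-9276-954d6365c3c9_disk') as image:
                image.resize(1024 ** 3)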
Dec  3 18:42:22 compute-0 nova_compute[348325]: 2025-12-03 18:42:22.559 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:42:22 compute-0 nova_compute[348325]: 2025-12-03 18:42:22.661 348329 DEBUG nova.objects.instance [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lazy-loading 'migration_context' on Instance uuid de3992c5-c1ad-4da3-9276-954d6365c3c9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 18:42:22 compute-0 nova_compute[348325]: 2025-12-03 18:42:22.704 348329 DEBUG nova.storage.rbd_utils [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] rbd image de3992c5-c1ad-4da3-9276-954d6365c3c9_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 18:42:22 compute-0 nova_compute[348325]: 2025-12-03 18:42:22.736 348329 DEBUG nova.storage.rbd_utils [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] rbd image de3992c5-c1ad-4da3-9276-954d6365c3c9_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 18:42:22 compute-0 nova_compute[348325]: 2025-12-03 18:42:22.743 348329 DEBUG oslo_concurrency.processutils [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:42:22 compute-0 nova_compute[348325]: 2025-12-03 18:42:22.814 348329 DEBUG oslo_concurrency.processutils [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:42:22 compute-0 nova_compute[348325]: 2025-12-03 18:42:22.815 348329 DEBUG oslo_concurrency.lockutils [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:42:22 compute-0 nova_compute[348325]: 2025-12-03 18:42:22.816 348329 DEBUG oslo_concurrency.lockutils [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:42:22 compute-0 nova_compute[348325]: 2025-12-03 18:42:22.816 348329 DEBUG oslo_concurrency.lockutils [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:42:22 compute-0 nova_compute[348325]: 2025-12-03 18:42:22.841 348329 DEBUG nova.storage.rbd_utils [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] rbd image de3992c5-c1ad-4da3-9276-954d6365c3c9_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 18:42:22 compute-0 nova_compute[348325]: 2025-12-03 18:42:22.848 348329 DEBUG oslo_concurrency.processutils [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 de3992c5-c1ad-4da3-9276-954d6365c3c9_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:42:22 compute-0 nova_compute[348325]: 2025-12-03 18:42:22.922 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:42:23 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1293: 321 pgs: 321 active+clean; 159 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 5.7 KiB/s rd, 751 KiB/s wr, 10 op/s
Dec  3 18:42:23 compute-0 nova_compute[348325]: 2025-12-03 18:42:23.259 348329 DEBUG oslo_concurrency.processutils [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 de3992c5-c1ad-4da3-9276-954d6365c3c9_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:42:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:42:23.338 286999 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:42:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:42:23.339 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:42:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:42:23.340 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:42:23 compute-0 nova_compute[348325]: 2025-12-03 18:42:23.376 348329 DEBUG nova.network.neutron [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] Successfully updated port: d2dfa631-e553-46bc-bc20-3f0bdd977328 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  3 18:42:23 compute-0 nova_compute[348325]: 2025-12-03 18:42:23.384 348329 DEBUG nova.virt.libvirt.driver [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  3 18:42:23 compute-0 nova_compute[348325]: 2025-12-03 18:42:23.385 348329 DEBUG nova.virt.libvirt.driver [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] Ensure instance console log exists: /var/lib/nova/instances/de3992c5-c1ad-4da3-9276-954d6365c3c9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  3 18:42:23 compute-0 nova_compute[348325]: 2025-12-03 18:42:23.385 348329 DEBUG oslo_concurrency.lockutils [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:42:23 compute-0 nova_compute[348325]: 2025-12-03 18:42:23.386 348329 DEBUG oslo_concurrency.lockutils [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:42:23 compute-0 nova_compute[348325]: 2025-12-03 18:42:23.386 348329 DEBUG oslo_concurrency.lockutils [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:42:23 compute-0 nova_compute[348325]: 2025-12-03 18:42:23.399 348329 DEBUG oslo_concurrency.lockutils [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Acquiring lock "refresh_cache-de3992c5-c1ad-4da3-9276-954d6365c3c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 18:42:23 compute-0 nova_compute[348325]: 2025-12-03 18:42:23.399 348329 DEBUG oslo_concurrency.lockutils [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Acquired lock "refresh_cache-de3992c5-c1ad-4da3-9276-954d6365c3c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 18:42:23 compute-0 nova_compute[348325]: 2025-12-03 18:42:23.400 348329 DEBUG nova.network.neutron [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  3 18:42:23 compute-0 nova_compute[348325]: 2025-12-03 18:42:23.473 348329 DEBUG nova.compute.manager [req-7ea3a9e9-d233-4961-8b2f-1951e17c29ae req-07d191c3-090b-4dbd-92ed-4a9d54ee0536 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] Received event network-changed-d2dfa631-e553-46bc-bc20-3f0bdd977328 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 18:42:23 compute-0 nova_compute[348325]: 2025-12-03 18:42:23.474 348329 DEBUG nova.compute.manager [req-7ea3a9e9-d233-4961-8b2f-1951e17c29ae req-07d191c3-090b-4dbd-92ed-4a9d54ee0536 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] Refreshing instance network info cache due to event network-changed-d2dfa631-e553-46bc-bc20-3f0bdd977328. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  3 18:42:23 compute-0 nova_compute[348325]: 2025-12-03 18:42:23.481 348329 DEBUG oslo_concurrency.lockutils [req-7ea3a9e9-d233-4961-8b2f-1951e17c29ae req-07d191c3-090b-4dbd-92ed-4a9d54ee0536 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "refresh_cache-de3992c5-c1ad-4da3-9276-954d6365c3c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 18:42:23 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:42:24 compute-0 nova_compute[348325]: 2025-12-03 18:42:24.030 348329 DEBUG nova.network.neutron [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  3 18:42:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 18:42:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:42:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 18:42:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:42:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0012478404595624144 of space, bias 1.0, pg target 0.37435213786872434 quantized to 32 (current 32)
Dec  3 18:42:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:42:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:42:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:42:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:42:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:42:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec  3 18:42:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:42:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 18:42:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:42:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:42:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:42:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 18:42:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:42:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 18:42:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:42:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:42:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:42:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
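Each pg_autoscaler line above is the product capacity_ratio x bias x total target PGs for the root, printed before quantization to a power of two and before the keep-or-change decision against the current pg_num. The figures are consistent with a 300-PG budget, e.g. mon_target_pg_per_osd=100 with three OSDs and replica size 1; those settings are an assumption, they do not appear in the log. Checking two of the pools:

    # Assumed budget: 100 target PGs per OSD * 3 OSDs / size 1 = 300.
    target_pgs = 100 * 3 / 1

    for pool, ratio, bias in [
            ('vms', 0.0012478404595624144, 1.0),
            ('cephfs.cephfs.meta', 5.087256625643029e-07, 4.0)]:
        # Prints ~0.374352... and ~0.000610470..., matching the
        # "pg target" values the autoscaler logged above.
        print(pool, ratio * bias * target_pgs)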
Dec  3 18:42:24 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:42:24.746 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1ac9fd0d-196b-4ea8-9a9a-8aa831092805, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:42:24 compute-0 nova_compute[348325]: 2025-12-03 18:42:24.944 348329 DEBUG nova.network.neutron [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] Updating instance_info_cache with network_info: [{"id": "d2dfa631-e553-46bc-bc20-3f0bdd977328", "address": "fa:16:3e:e6:73:73", "network": {"id": "85c8d446-ad7f-4d1b-a311-89b0b07e8aad", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.212", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2770200bdb2436c90142fa2e5ddcd47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2dfa631-e5", "ovs_interfaceid": "d2dfa631-e553-46bc-bc20-3f0bdd977328", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 18:42:25 compute-0 nova_compute[348325]: 2025-12-03 18:42:25.115 348329 DEBUG oslo_concurrency.lockutils [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Releasing lock "refresh_cache-de3992c5-c1ad-4da3-9276-954d6365c3c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 18:42:25 compute-0 nova_compute[348325]: 2025-12-03 18:42:25.116 348329 DEBUG nova.compute.manager [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] Instance network_info: |[{"id": "d2dfa631-e553-46bc-bc20-3f0bdd977328", "address": "fa:16:3e:e6:73:73", "network": {"id": "85c8d446-ad7f-4d1b-a311-89b0b07e8aad", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.212", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2770200bdb2436c90142fa2e5ddcd47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2dfa631-e5", "ovs_interfaceid": "d2dfa631-e553-46bc-bc20-3f0bdd977328", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  3 18:42:25 compute-0 nova_compute[348325]: 2025-12-03 18:42:25.117 348329 DEBUG oslo_concurrency.lockutils [req-7ea3a9e9-d233-4961-8b2f-1951e17c29ae req-07d191c3-090b-4dbd-92ed-4a9d54ee0536 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquired lock "refresh_cache-de3992c5-c1ad-4da3-9276-954d6365c3c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 18:42:25 compute-0 nova_compute[348325]: 2025-12-03 18:42:25.118 348329 DEBUG nova.network.neutron [req-7ea3a9e9-d233-4961-8b2f-1951e17c29ae req-07d191c3-090b-4dbd-92ed-4a9d54ee0536 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] Refreshing network info cache for port d2dfa631-e553-46bc-bc20-3f0bdd977328 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  3 18:42:25 compute-0 nova_compute[348325]: 2025-12-03 18:42:25.122 348329 DEBUG nova.virt.libvirt.driver [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] Start _get_guest_xml network_info=[{"id": "d2dfa631-e553-46bc-bc20-3f0bdd977328", "address": "fa:16:3e:e6:73:73", "network": {"id": "85c8d446-ad7f-4d1b-a311-89b0b07e8aad", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.212", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2770200bdb2436c90142fa2e5ddcd47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2dfa631-e5", "ovs_interfaceid": "d2dfa631-e553-46bc-bc20-3f0bdd977328", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-03T18:35:07Z,direct_url=<?>,disk_format='qcow2',id=e68cd467-b4e6-45e0-8e55-984fda402294,min_disk=0,min_ram=0,name='cirros',owner='d2770200bdb2436c90142fa2e5ddcd47',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-03T18:35:10Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'encryption_format': None, 'guest_format': None, 'disk_bus': 'virtio', 'size': 0, 'boot_index': 0, 'encryption_options': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'image_id': 'e68cd467-b4e6-45e0-8e55-984fda402294'}], 'ephemerals': [{'encryption_secret_uuid': None, 'encrypted': False, 'encryption_format': None, 'guest_format': None, 'disk_bus': 'virtio', 'size': 1, 'encryption_options': None, 'device_type': 'disk', 'device_name': '/dev/vdb'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  3 18:42:25 compute-0 nova_compute[348325]: 2025-12-03 18:42:25.136 348329 WARNING nova.virt.libvirt.driver [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 18:42:25 compute-0 nova_compute[348325]: 2025-12-03 18:42:25.149 348329 DEBUG nova.virt.libvirt.host [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  3 18:42:25 compute-0 nova_compute[348325]: 2025-12-03 18:42:25.151 348329 DEBUG nova.virt.libvirt.host [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  3 18:42:25 compute-0 nova_compute[348325]: 2025-12-03 18:42:25.157 348329 DEBUG nova.virt.libvirt.host [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  3 18:42:25 compute-0 nova_compute[348325]: 2025-12-03 18:42:25.158 348329 DEBUG nova.virt.libvirt.host [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
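The pair of probes above shows this host runs cgroups v2: the v1 cpu controller is missing and the v2 one is found. On a unified-hierarchy host the second check reduces to reading the root controllers file; a minimal stand-alone version (helper name hypothetical):

    def has_cgroupsv2_cpu_controller(path='/sys/fs/cgroup/cgroup.controllers'):
        # The unified hierarchy lists its available controllers, e.g.
        # "cpuset cpu io memory pids", on a single space-separated line.
        try:
            with open(path) as f:
                return 'cpu' in f.read().split()
        except FileNotFoundError:
            return False  # not booted with cgroups v2

    print(has_cgroupsv2_cpu_controller())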
Dec  3 18:42:25 compute-0 nova_compute[348325]: 2025-12-03 18:42:25.159 348329 DEBUG nova.virt.libvirt.driver [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  3 18:42:25 compute-0 nova_compute[348325]: 2025-12-03 18:42:25.160 348329 DEBUG nova.virt.hardware [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-03T18:35:14Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='6cb250a4-d28c-4125-888b-653b31e29275',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-03T18:35:07Z,direct_url=<?>,disk_format='qcow2',id=e68cd467-b4e6-45e0-8e55-984fda402294,min_disk=0,min_ram=0,name='cirros',owner='d2770200bdb2436c90142fa2e5ddcd47',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-03T18:35:10Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  3 18:42:25 compute-0 nova_compute[348325]: 2025-12-03 18:42:25.161 348329 DEBUG nova.virt.hardware [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  3 18:42:25 compute-0 nova_compute[348325]: 2025-12-03 18:42:25.162 348329 DEBUG nova.virt.hardware [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  3 18:42:25 compute-0 nova_compute[348325]: 2025-12-03 18:42:25.162 348329 DEBUG nova.virt.hardware [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  3 18:42:25 compute-0 nova_compute[348325]: 2025-12-03 18:42:25.163 348329 DEBUG nova.virt.hardware [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  3 18:42:25 compute-0 nova_compute[348325]: 2025-12-03 18:42:25.164 348329 DEBUG nova.virt.hardware [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  3 18:42:25 compute-0 nova_compute[348325]: 2025-12-03 18:42:25.164 348329 DEBUG nova.virt.hardware [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  3 18:42:25 compute-0 nova_compute[348325]: 2025-12-03 18:42:25.165 348329 DEBUG nova.virt.hardware [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  3 18:42:25 compute-0 nova_compute[348325]: 2025-12-03 18:42:25.166 348329 DEBUG nova.virt.hardware [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  3 18:42:25 compute-0 nova_compute[348325]: 2025-12-03 18:42:25.166 348329 DEBUG nova.virt.hardware [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  3 18:42:25 compute-0 nova_compute[348325]: 2025-12-03 18:42:25.167 348329 DEBUG nova.virt.hardware [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
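With no topology constraints from flavor or image (all limits default to 65536), the "possible topologies" step amounts to enumerating sockets x cores x threads factorizations of the vCPU count and sorting them by preference; for a 1-vCPU guest the only candidate is 1:1:1, as logged. A toy version of the enumeration (not Nova's exact code):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Yield every (sockets, cores, threads) triple whose product
        # equals the vCPU count and respects the per-axis limits.
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)]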
Dec  3 18:42:25 compute-0 nova_compute[348325]: 2025-12-03 18:42:25.171 348329 DEBUG oslo_concurrency.processutils [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:42:25 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1294: 321 pgs: 321 active+clean; 159 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 751 KiB/s wr, 23 op/s
Dec  3 18:42:25 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 18:42:25 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3372647332' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 18:42:25 compute-0 nova_compute[348325]: 2025-12-03 18:42:25.660 348329 DEBUG oslo_concurrency.processutils [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:42:25 compute-0 nova_compute[348325]: 2025-12-03 18:42:25.662 348329 DEBUG oslo_concurrency.processutils [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:42:26 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 18:42:26 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1580948232' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 18:42:26 compute-0 nova_compute[348325]: 2025-12-03 18:42:26.110 348329 DEBUG oslo_concurrency.processutils [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
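The repeated `ceph mon dump --format=json` calls are how Nova learns the monitor addresses it will embed in the guest's RBD disk definitions (apparently one probe per RBD-backed disk being defined). A hedged sketch of the same query, parsed for the fields of interest, assuming the client.openstack keyring works:

    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'mon', 'dump', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    # Each entry in "mons" carries the name and public address that end
    # up as <host> elements in the libvirt disk XML.
    for mon in json.loads(out).get('mons', []):
        print(mon.get('name'), mon.get('public_addr'))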
Dec  3 18:42:26 compute-0 nova_compute[348325]: 2025-12-03 18:42:26.155 348329 DEBUG nova.storage.rbd_utils [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] rbd image de3992c5-c1ad-4da3-9276-954d6365c3c9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 18:42:26 compute-0 nova_compute[348325]: 2025-12-03 18:42:26.165 348329 DEBUG oslo_concurrency.processutils [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:42:26 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 18:42:26 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.0 total, 600.0 interval#012Cumulative writes: 5945 writes, 26K keys, 5945 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s#012Cumulative WAL: 5945 writes, 5945 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1339 writes, 6061 keys, 1339 commit groups, 1.0 writes per commit group, ingest: 8.74 MB, 0.01 MB/s#012Interval WAL: 1339 writes, 1339 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     99.1      0.31              0.14        15    0.020       0      0       0.0       0.0#012  L6      1/0    7.12 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.3    103.5     84.0      1.19              0.41        14    0.085     63K   7821       0.0       0.0#012 Sum      1/0    7.12 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.3     82.3     87.1      1.49              0.55        29    0.051     63K   7821       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.6    105.5    106.3      0.36              0.18         8    0.045     20K   2556       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    103.5     84.0      1.19              0.41        14    0.085     63K   7821       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    101.4      0.30              0.14        14    0.021       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      6.8      0.01              0.00         1    0.008       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 2400.0 total, 600.0 interval#012Flush(GB): cumulative 0.030, interval 0.008#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.13 GB write, 0.05 MB/s write, 0.12 GB read, 0.05 MB/s read, 1.5 seconds#012Interval compaction: 0.04 GB write, 0.06 MB/s write, 0.04 GB read, 0.06 MB/s read, 0.4 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55911062f1f0#2 capacity: 308.00 MB usage: 12.97 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000104 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(845,12.45 MB,4.04312%) FilterBlock(30,184.05 KB,0.058355%) IndexBlock(30,343.08 KB,0.108778%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Dec  3 18:42:26 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 18:42:26 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/936830622' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 18:42:26 compute-0 nova_compute[348325]: 2025-12-03 18:42:26.639 348329 DEBUG oslo_concurrency.processutils [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:42:26 compute-0 nova_compute[348325]: 2025-12-03 18:42:26.643 348329 DEBUG nova.virt.libvirt.vif [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T18:42:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-66btob3-t73jgstwyk5c-ol75pntdsuyz-vnf-noho2adux65j',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-66btob3-t73jgstwyk5c-ol75pntdsuyz-vnf-noho2adux65j',id=3,image_ref='e68cd467-b4e6-45e0-8e55-984fda402294',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='b322e118-e1cc-40be-8d8c-553648144092'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d2770200bdb2436c90142fa2e5ddcd47',ramdisk_id='',reservation_id='r-lwasdd16',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader',image_base_image_ref='e68cd467-b4e6-45e0-8e55-984fda402294',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T18:42:21Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT00NzQxODkyODA1MjIxNzExOTYxPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTQ3NDE4OTI4MDUyMjE3MTE5NjE9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NDc0MTg5MjgwNTIyMTcxMTk2MT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTQ3NDE4OTI4MDUyMjE3MTE5NjE9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT00NzQxODkyODA1MjIxNzExOTYxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT00NzQxODkyODA1MjIxNzExOTYxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjI
Dec  3 18:42:26 compute-0 nova_compute[348325]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NDc0MTg5MjgwNTIyMTcxMTk2MT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTQ3NDE4OTI4MDUyMjE3MTE5NjE9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT00NzQxODkyODA1MjIxNzExOTYxPT0tLQo=',user_id='56338958b09445f5af9aa9e4601a1a8a',uuid=de3992c5-c1ad-4da3-9276-954d6365c3c9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d2dfa631-e553-46bc-bc20-3f0bdd977328", "address": "fa:16:3e:e6:73:73", "network": {"id": "85c8d446-ad7f-4d1b-a311-89b0b07e8aad", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.212", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2770200bdb2436c90142fa2e5ddcd47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2dfa631-e5", "ovs_interfaceid": "d2dfa631-e553-46bc-bc20-3f0bdd977328", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  3 18:42:26 compute-0 nova_compute[348325]: 2025-12-03 18:42:26.645 348329 DEBUG nova.network.os_vif_util [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Converting VIF {"id": "d2dfa631-e553-46bc-bc20-3f0bdd977328", "address": "fa:16:3e:e6:73:73", "network": {"id": "85c8d446-ad7f-4d1b-a311-89b0b07e8aad", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.212", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2770200bdb2436c90142fa2e5ddcd47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2dfa631-e5", "ovs_interfaceid": "d2dfa631-e553-46bc-bc20-3f0bdd977328", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 18:42:26 compute-0 nova_compute[348325]: 2025-12-03 18:42:26.648 348329 DEBUG nova.network.os_vif_util [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e6:73:73,bridge_name='br-int',has_traffic_filtering=True,id=d2dfa631-e553-46bc-bc20-3f0bdd977328,network=Network(85c8d446-ad7f-4d1b-a311-89b0b07e8aad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd2dfa631-e5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 18:42:26 compute-0 nova_compute[348325]: 2025-12-03 18:42:26.650 348329 DEBUG nova.objects.instance [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lazy-loading 'pci_devices' on Instance uuid de3992c5-c1ad-4da3-9276-954d6365c3c9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 18:42:26 compute-0 nova_compute[348325]: 2025-12-03 18:42:26.669 348329 DEBUG nova.virt.libvirt.driver [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] End _get_guest_xml xml=<domain type="kvm">
Dec  3 18:42:26 compute-0 nova_compute[348325]:  <uuid>de3992c5-c1ad-4da3-9276-954d6365c3c9</uuid>
Dec  3 18:42:26 compute-0 nova_compute[348325]:  <name>instance-00000003</name>
Dec  3 18:42:26 compute-0 nova_compute[348325]:  <memory>524288</memory>
Dec  3 18:42:26 compute-0 nova_compute[348325]:  <vcpu>1</vcpu>
Dec  3 18:42:26 compute-0 nova_compute[348325]:  <metadata>
Dec  3 18:42:26 compute-0 nova_compute[348325]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  3 18:42:26 compute-0 nova_compute[348325]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  3 18:42:26 compute-0 nova_compute[348325]:      <nova:name>vn-66btob3-t73jgstwyk5c-ol75pntdsuyz-vnf-noho2adux65j</nova:name>
Dec  3 18:42:26 compute-0 nova_compute[348325]:      <nova:creationTime>2025-12-03 18:42:25</nova:creationTime>
Dec  3 18:42:26 compute-0 nova_compute[348325]:      <nova:flavor name="m1.small">
Dec  3 18:42:26 compute-0 nova_compute[348325]:        <nova:memory>512</nova:memory>
Dec  3 18:42:26 compute-0 nova_compute[348325]:        <nova:disk>1</nova:disk>
Dec  3 18:42:26 compute-0 nova_compute[348325]:        <nova:swap>0</nova:swap>
Dec  3 18:42:26 compute-0 nova_compute[348325]:        <nova:ephemeral>1</nova:ephemeral>
Dec  3 18:42:26 compute-0 nova_compute[348325]:        <nova:vcpus>1</nova:vcpus>
Dec  3 18:42:26 compute-0 nova_compute[348325]:      </nova:flavor>
Dec  3 18:42:26 compute-0 nova_compute[348325]:      <nova:owner>
Dec  3 18:42:26 compute-0 nova_compute[348325]:        <nova:user uuid="56338958b09445f5af9aa9e4601a1a8a">admin</nova:user>
Dec  3 18:42:26 compute-0 nova_compute[348325]:        <nova:project uuid="d2770200bdb2436c90142fa2e5ddcd47">admin</nova:project>
Dec  3 18:42:26 compute-0 nova_compute[348325]:      </nova:owner>
Dec  3 18:42:26 compute-0 nova_compute[348325]:      <nova:root type="image" uuid="e68cd467-b4e6-45e0-8e55-984fda402294"/>
Dec  3 18:42:26 compute-0 nova_compute[348325]:      <nova:ports>
Dec  3 18:42:26 compute-0 nova_compute[348325]:        <nova:port uuid="d2dfa631-e553-46bc-bc20-3f0bdd977328">
Dec  3 18:42:26 compute-0 nova_compute[348325]:          <nova:ip type="fixed" address="192.168.0.212" ipVersion="4"/>
Dec  3 18:42:26 compute-0 nova_compute[348325]:        </nova:port>
Dec  3 18:42:26 compute-0 nova_compute[348325]:      </nova:ports>
Dec  3 18:42:26 compute-0 nova_compute[348325]:    </nova:instance>
Dec  3 18:42:26 compute-0 nova_compute[348325]:  </metadata>
Dec  3 18:42:26 compute-0 nova_compute[348325]:  <sysinfo type="smbios">
Dec  3 18:42:26 compute-0 nova_compute[348325]:    <system>
Dec  3 18:42:26 compute-0 rsyslogd[188590]: message too long (8192) with configured size 8096, begin of message is: 2025-12-03 18:42:26.643 348329 DEBUG nova.virt.libvirt.vif [None req-b739f9f3-9f [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec  3 18:42:26 compute-0 nova_compute[348325]:      <entry name="manufacturer">RDO</entry>
Dec  3 18:42:26 compute-0 nova_compute[348325]:      <entry name="product">OpenStack Compute</entry>
Dec  3 18:42:26 compute-0 nova_compute[348325]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  3 18:42:26 compute-0 nova_compute[348325]:      <entry name="serial">de3992c5-c1ad-4da3-9276-954d6365c3c9</entry>
Dec  3 18:42:26 compute-0 nova_compute[348325]:      <entry name="uuid">de3992c5-c1ad-4da3-9276-954d6365c3c9</entry>
Dec  3 18:42:26 compute-0 nova_compute[348325]:      <entry name="family">Virtual Machine</entry>
Dec  3 18:42:26 compute-0 nova_compute[348325]:    </system>
Dec  3 18:42:26 compute-0 nova_compute[348325]:  </sysinfo>
Dec  3 18:42:26 compute-0 nova_compute[348325]:  <os>
Dec  3 18:42:26 compute-0 nova_compute[348325]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  3 18:42:26 compute-0 nova_compute[348325]:    <boot dev="hd"/>
Dec  3 18:42:26 compute-0 nova_compute[348325]:    <smbios mode="sysinfo"/>
Dec  3 18:42:26 compute-0 nova_compute[348325]:  </os>
Dec  3 18:42:26 compute-0 nova_compute[348325]:  <features>
Dec  3 18:42:26 compute-0 nova_compute[348325]:    <acpi/>
Dec  3 18:42:26 compute-0 nova_compute[348325]:    <apic/>
Dec  3 18:42:26 compute-0 nova_compute[348325]:    <vmcoreinfo/>
Dec  3 18:42:26 compute-0 nova_compute[348325]:  </features>
Dec  3 18:42:26 compute-0 nova_compute[348325]:  <clock offset="utc">
Dec  3 18:42:26 compute-0 nova_compute[348325]:    <timer name="pit" tickpolicy="delay"/>
Dec  3 18:42:26 compute-0 nova_compute[348325]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  3 18:42:26 compute-0 nova_compute[348325]:    <timer name="hpet" present="no"/>
Dec  3 18:42:26 compute-0 nova_compute[348325]:  </clock>
Dec  3 18:42:26 compute-0 nova_compute[348325]:  <cpu mode="host-model" match="exact">
Dec  3 18:42:26 compute-0 nova_compute[348325]:    <topology sockets="1" cores="1" threads="1"/>
Dec  3 18:42:26 compute-0 nova_compute[348325]:  </cpu>
Dec  3 18:42:26 compute-0 nova_compute[348325]:  <devices>
Dec  3 18:42:26 compute-0 nova_compute[348325]:    <disk type="network" device="disk">
Dec  3 18:42:26 compute-0 nova_compute[348325]:      <driver type="raw" cache="none"/>
Dec  3 18:42:26 compute-0 nova_compute[348325]:      <source protocol="rbd" name="vms/de3992c5-c1ad-4da3-9276-954d6365c3c9_disk">
Dec  3 18:42:26 compute-0 nova_compute[348325]:        <host name="192.168.122.100" port="6789"/>
Dec  3 18:42:26 compute-0 nova_compute[348325]:      </source>
Dec  3 18:42:26 compute-0 nova_compute[348325]:      <auth username="openstack">
Dec  3 18:42:26 compute-0 nova_compute[348325]:        <secret type="ceph" uuid="c1caf3ba-b2a5-5005-a11e-e955c344dccc"/>
Dec  3 18:42:26 compute-0 nova_compute[348325]:      </auth>
Dec  3 18:42:26 compute-0 nova_compute[348325]:      <target dev="vda" bus="virtio"/>
Dec  3 18:42:26 compute-0 nova_compute[348325]:    </disk>
Dec  3 18:42:26 compute-0 nova_compute[348325]:    <disk type="network" device="disk">
Dec  3 18:42:26 compute-0 nova_compute[348325]:      <driver type="raw" cache="none"/>
Dec  3 18:42:26 compute-0 nova_compute[348325]:      <source protocol="rbd" name="vms/de3992c5-c1ad-4da3-9276-954d6365c3c9_disk.eph0">
Dec  3 18:42:26 compute-0 nova_compute[348325]:        <host name="192.168.122.100" port="6789"/>
Dec  3 18:42:26 compute-0 nova_compute[348325]:      </source>
Dec  3 18:42:26 compute-0 nova_compute[348325]:      <auth username="openstack">
Dec  3 18:42:26 compute-0 nova_compute[348325]:        <secret type="ceph" uuid="c1caf3ba-b2a5-5005-a11e-e955c344dccc"/>
Dec  3 18:42:26 compute-0 nova_compute[348325]:      </auth>
Dec  3 18:42:26 compute-0 nova_compute[348325]:      <target dev="vdb" bus="virtio"/>
Dec  3 18:42:26 compute-0 nova_compute[348325]:    </disk>
Dec  3 18:42:26 compute-0 nova_compute[348325]:    <disk type="network" device="cdrom">
Dec  3 18:42:26 compute-0 nova_compute[348325]:      <driver type="raw" cache="none"/>
Dec  3 18:42:26 compute-0 nova_compute[348325]:      <source protocol="rbd" name="vms/de3992c5-c1ad-4da3-9276-954d6365c3c9_disk.config">
Dec  3 18:42:26 compute-0 nova_compute[348325]:        <host name="192.168.122.100" port="6789"/>
Dec  3 18:42:26 compute-0 nova_compute[348325]:      </source>
Dec  3 18:42:26 compute-0 nova_compute[348325]:      <auth username="openstack">
Dec  3 18:42:26 compute-0 nova_compute[348325]:        <secret type="ceph" uuid="c1caf3ba-b2a5-5005-a11e-e955c344dccc"/>
Dec  3 18:42:26 compute-0 nova_compute[348325]:      </auth>
Dec  3 18:42:26 compute-0 nova_compute[348325]:      <target dev="sda" bus="sata"/>
Dec  3 18:42:26 compute-0 nova_compute[348325]:    </disk>
Dec  3 18:42:26 compute-0 nova_compute[348325]:    <interface type="ethernet">
Dec  3 18:42:26 compute-0 nova_compute[348325]:      <mac address="fa:16:3e:e6:73:73"/>
Dec  3 18:42:26 compute-0 nova_compute[348325]:      <model type="virtio"/>
Dec  3 18:42:26 compute-0 nova_compute[348325]:      <driver name="vhost" rx_queue_size="512"/>
Dec  3 18:42:26 compute-0 nova_compute[348325]:      <mtu size="1442"/>
Dec  3 18:42:26 compute-0 nova_compute[348325]:      <target dev="tapd2dfa631-e5"/>
Dec  3 18:42:26 compute-0 nova_compute[348325]:    </interface>
Dec  3 18:42:26 compute-0 nova_compute[348325]:    <serial type="pty">
Dec  3 18:42:26 compute-0 nova_compute[348325]:      <log file="/var/lib/nova/instances/de3992c5-c1ad-4da3-9276-954d6365c3c9/console.log" append="off"/>
Dec  3 18:42:26 compute-0 nova_compute[348325]:    </serial>
Dec  3 18:42:26 compute-0 nova_compute[348325]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  3 18:42:26 compute-0 nova_compute[348325]:    <video>
Dec  3 18:42:26 compute-0 nova_compute[348325]:      <model type="virtio"/>
Dec  3 18:42:26 compute-0 nova_compute[348325]:    </video>
Dec  3 18:42:26 compute-0 nova_compute[348325]:    <input type="tablet" bus="usb"/>
Dec  3 18:42:26 compute-0 nova_compute[348325]:    <rng model="virtio">
Dec  3 18:42:26 compute-0 nova_compute[348325]:      <backend model="random">/dev/urandom</backend>
Dec  3 18:42:26 compute-0 nova_compute[348325]:    </rng>
Dec  3 18:42:26 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root"/>
Dec  3 18:42:26 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:42:26 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:42:26 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:42:26 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:42:26 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:42:26 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:42:26 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:42:26 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:42:26 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:42:26 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:42:26 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:42:26 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:42:26 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:42:26 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:42:26 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:42:26 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:42:26 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:42:26 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:42:26 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:42:26 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:42:26 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:42:26 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:42:26 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:42:26 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:42:26 compute-0 nova_compute[348325]:    <controller type="usb" index="0"/>
Dec  3 18:42:26 compute-0 nova_compute[348325]:    <memballoon model="virtio">
Dec  3 18:42:26 compute-0 nova_compute[348325]:      <stats period="10"/>
Dec  3 18:42:26 compute-0 nova_compute[348325]:    </memballoon>
Dec  3 18:42:26 compute-0 nova_compute[348325]:  </devices>
Dec  3 18:42:26 compute-0 nova_compute[348325]: </domain>
Dec  3 18:42:26 compute-0 nova_compute[348325]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
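
The domain XML logged above wires up three RBD-backed disks (root, the eph0 ephemeral, and the config-drive CD-ROM) against mon 192.168.122.100:6789, plus one virtio NIC targeting tapd2dfa631-e5. A quick structural check with only the standard library, assuming the XML has been saved to a hypothetical /tmp/instance-00000003.xml:

# Structural check of the domain XML above; the file path is a
# hypothetical save location, not something Nova writes itself.
import xml.etree.ElementTree as ET

root = ET.parse("/tmp/instance-00000003.xml").getroot()
for disk in root.iter("disk"):
    src, tgt = disk.find("source"), disk.find("target")
    # e.g.: disk rbd vms/de3992c5-..._disk -> vda (virtio)
    print(disk.get("device"), src.get("protocol"), src.get("name"),
          "->", tgt.get("dev"), f"({tgt.get('bus')})")
for iface in root.iter("interface"):
    print("nic", iface.find("mac").get("address"),
          "->", iface.find("target").get("dev"))
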
Dec  3 18:42:26 compute-0 nova_compute[348325]: 2025-12-03 18:42:26.687 348329 DEBUG nova.compute.manager [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] Preparing to wait for external event network-vif-plugged-d2dfa631-e553-46bc-bc20-3f0bdd977328 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  3 18:42:26 compute-0 nova_compute[348325]: 2025-12-03 18:42:26.688 348329 DEBUG oslo_concurrency.lockutils [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Acquiring lock "de3992c5-c1ad-4da3-9276-954d6365c3c9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:42:26 compute-0 nova_compute[348325]: 2025-12-03 18:42:26.688 348329 DEBUG oslo_concurrency.lockutils [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "de3992c5-c1ad-4da3-9276-954d6365c3c9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:42:26 compute-0 nova_compute[348325]: 2025-12-03 18:42:26.688 348329 DEBUG oslo_concurrency.lockutils [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "de3992c5-c1ad-4da3-9276-954d6365c3c9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:42:26 compute-0 nova_compute[348325]: 2025-12-03 18:42:26.689 348329 DEBUG nova.virt.libvirt.vif [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T18:42:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-66btob3-t73jgstwyk5c-ol75pntdsuyz-vnf-noho2adux65j',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-66btob3-t73jgstwyk5c-ol75pntdsuyz-vnf-noho2adux65j',id=3,image_ref='e68cd467-b4e6-45e0-8e55-984fda402294',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='b322e118-e1cc-40be-8d8c-553648144092'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d2770200bdb2436c90142fa2e5ddcd47',ramdisk_id='',reservation_id='r-lwasdd16',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader',image_base_image_ref='e68cd467-b4e6-45e0-8e55-984fda402294',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T18:42:21Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT00NzQxODkyODA1MjIxNzExOTYxPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTQ3NDE4OTI4MDUyMjE3MTE5NjE9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NDc0MTg5MjgwNTIyMTcxMTk2MT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm
50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTQ3NDE4OTI4MDUyMjE3MTE5NjE9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT00NzQxODkyODA1MjIxNzExOTYxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT00NzQxODkyODA1MjIxNzExOTYxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpY
nV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJ
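
The Instance dump above (the record beginning 2025-12-03 18:42:26.689) is cut off mid-base64: rsyslogd's warning at 18:42:26 shows its configured 8096-byte record limit truncating these oversized oslo.log lines, so the tail of user_data is simply gone from this capture. If complete records matter, the limit can be raised in rsyslog; a sketch:

# /etc/rsyslog.conf -- sketch. $MaxMessageSize must appear before any
# input module is loaded; the default is 8096 bytes.
$MaxMessageSize 64k
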
Dec  3 18:42:26 compute-0 nova_compute[348325]: 2025-12-03 18:42:26.690 348329 DEBUG nova.network.os_vif_util [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Converting VIF {"id": "d2dfa631-e553-46bc-bc20-3f0bdd977328", "address": "fa:16:3e:e6:73:73", "network": {"id": "85c8d446-ad7f-4d1b-a311-89b0b07e8aad", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.212", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2770200bdb2436c90142fa2e5ddcd47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2dfa631-e5", "ovs_interfaceid": "d2dfa631-e553-46bc-bc20-3f0bdd977328", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 18:42:26 compute-0 nova_compute[348325]: 2025-12-03 18:42:26.691 348329 DEBUG nova.network.os_vif_util [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e6:73:73,bridge_name='br-int',has_traffic_filtering=True,id=d2dfa631-e553-46bc-bc20-3f0bdd977328,network=Network(85c8d446-ad7f-4d1b-a311-89b0b07e8aad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd2dfa631-e5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 18:42:26 compute-0 nova_compute[348325]: 2025-12-03 18:42:26.691 348329 DEBUG os_vif [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e6:73:73,bridge_name='br-int',has_traffic_filtering=True,id=d2dfa631-e553-46bc-bc20-3f0bdd977328,network=Network(85c8d446-ad7f-4d1b-a311-89b0b07e8aad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd2dfa631-e5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  3 18:42:26 compute-0 nova_compute[348325]: 2025-12-03 18:42:26.692 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:42:26 compute-0 nova_compute[348325]: 2025-12-03 18:42:26.693 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:42:26 compute-0 nova_compute[348325]: 2025-12-03 18:42:26.694 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 18:42:26 compute-0 nova_compute[348325]: 2025-12-03 18:42:26.700 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:42:26 compute-0 nova_compute[348325]: 2025-12-03 18:42:26.700 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd2dfa631-e5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:42:26 compute-0 nova_compute[348325]: 2025-12-03 18:42:26.700 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd2dfa631-e5, col_values=(('external_ids', {'iface-id': 'd2dfa631-e553-46bc-bc20-3f0bdd977328', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e6:73:73', 'vm-uuid': 'de3992c5-c1ad-4da3-9276-954d6365c3c9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:42:26 compute-0 nova_compute[348325]: 2025-12-03 18:42:26.702 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:42:26 compute-0 NetworkManager[49087]: <info>  [1764787346.7033] manager: (tapd2dfa631-e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Dec  3 18:42:26 compute-0 nova_compute[348325]: 2025-12-03 18:42:26.703 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  3 18:42:26 compute-0 nova_compute[348325]: 2025-12-03 18:42:26.714 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:42:26 compute-0 nova_compute[348325]: 2025-12-03 18:42:26.715 348329 INFO os_vif [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e6:73:73,bridge_name='br-int',has_traffic_filtering=True,id=d2dfa631-e553-46bc-bc20-3f0bdd977328,network=Network(85c8d446-ad7f-4d1b-a311-89b0b07e8aad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd2dfa631-e5')#033[00m
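
The plug itself is the two ovsdbapp commands at 18:42:26.700: add tapd2dfa631-e5 to br-int, then set the Interface external_ids that ovn-controller matches against its Port_Binding table (iface-id is the Neutron port UUID). Expressed as a single ovs-vsctl transaction — a sketch of the equivalent manual operation, not the code path Nova runs (os-vif talks OVSDB directly via ovsdbapp rather than shelling out):

# Equivalent ovs-vsctl transaction for the AddPortCommand +
# DbSetCommand pair above (manual verification/repro sketch).
import subprocess

port = "tapd2dfa631-e5"
subprocess.run(
    ["ovs-vsctl", "--may-exist", "add-port", "br-int", port, "--",
     "set", "Interface", port,
     "external_ids:iface-id=d2dfa631-e553-46bc-bc20-3f0bdd977328",
     "external_ids:iface-status=active",
     'external_ids:attached-mac="fa:16:3e:e6:73:73"',
     "external_ids:vm-uuid=de3992c5-c1ad-4da3-9276-954d6365c3c9"],
    check=True)
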
Dec  3 18:42:26 compute-0 nova_compute[348325]: 2025-12-03 18:42:26.778 348329 DEBUG nova.virt.libvirt.driver [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  3 18:42:26 compute-0 nova_compute[348325]: 2025-12-03 18:42:26.778 348329 DEBUG nova.virt.libvirt.driver [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  3 18:42:26 compute-0 nova_compute[348325]: 2025-12-03 18:42:26.778 348329 DEBUG nova.virt.libvirt.driver [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  3 18:42:26 compute-0 nova_compute[348325]: 2025-12-03 18:42:26.778 348329 DEBUG nova.virt.libvirt.driver [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] No VIF found with MAC fa:16:3e:e6:73:73, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  3 18:42:26 compute-0 nova_compute[348325]: 2025-12-03 18:42:26.779 348329 INFO nova.virt.libvirt.driver [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] Using config drive#033[00m
Dec  3 18:42:26 compute-0 nova_compute[348325]: 2025-12-03 18:42:26.807 348329 DEBUG nova.storage.rbd_utils [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] rbd image de3992c5-c1ad-4da3-9276-954d6365c3c9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 18:42:27 compute-0 nova_compute[348325]: 2025-12-03 18:42:27.074 348329 DEBUG nova.network.neutron [req-7ea3a9e9-d233-4961-8b2f-1951e17c29ae req-07d191c3-090b-4dbd-92ed-4a9d54ee0536 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] Updated VIF entry in instance network info cache for port d2dfa631-e553-46bc-bc20-3f0bdd977328. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  3 18:42:27 compute-0 nova_compute[348325]: 2025-12-03 18:42:27.075 348329 DEBUG nova.network.neutron [req-7ea3a9e9-d233-4961-8b2f-1951e17c29ae req-07d191c3-090b-4dbd-92ed-4a9d54ee0536 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] Updating instance_info_cache with network_info: [{"id": "d2dfa631-e553-46bc-bc20-3f0bdd977328", "address": "fa:16:3e:e6:73:73", "network": {"id": "85c8d446-ad7f-4d1b-a311-89b0b07e8aad", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.212", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2770200bdb2436c90142fa2e5ddcd47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2dfa631-e5", "ovs_interfaceid": "d2dfa631-e553-46bc-bc20-3f0bdd977328", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 18:42:27 compute-0 nova_compute[348325]: 2025-12-03 18:42:27.092 348329 DEBUG oslo_concurrency.lockutils [req-7ea3a9e9-d233-4961-8b2f-1951e17c29ae req-07d191c3-090b-4dbd-92ed-4a9d54ee0536 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Releasing lock "refresh_cache-de3992c5-c1ad-4da3-9276-954d6365c3c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 18:42:27 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1295: 321 pgs: 321 active+clean; 172 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.4 MiB/s wr, 26 op/s
Dec  3 18:42:27 compute-0 nova_compute[348325]: 2025-12-03 18:42:27.181 348329 INFO nova.virt.libvirt.driver [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] Creating config drive at /var/lib/nova/instances/de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.config#033[00m
Dec  3 18:42:27 compute-0 nova_compute[348325]: 2025-12-03 18:42:27.190 348329 DEBUG oslo_concurrency.processutils [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppkwdnfil execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:42:27 compute-0 nova_compute[348325]: 2025-12-03 18:42:27.340 348329 DEBUG oslo_concurrency.processutils [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppkwdnfil" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:42:27 compute-0 nova_compute[348325]: 2025-12-03 18:42:27.393 348329 DEBUG nova.storage.rbd_utils [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] rbd image de3992c5-c1ad-4da3-9276-954d6365c3c9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 18:42:27 compute-0 nova_compute[348325]: 2025-12-03 18:42:27.404 348329 DEBUG oslo_concurrency.processutils [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.config de3992c5-c1ad-4da3-9276-954d6365c3c9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:42:27 compute-0 nova_compute[348325]: 2025-12-03 18:42:27.554 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:42:27 compute-0 nova_compute[348325]: 2025-12-03 18:42:27.626 348329 DEBUG oslo_concurrency.processutils [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.config de3992c5-c1ad-4da3-9276-954d6365c3c9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.222s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:42:27 compute-0 nova_compute[348325]: 2025-12-03 18:42:27.627 348329 INFO nova.virt.libvirt.driver [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] Deleting local config drive /var/lib/nova/instances/de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.config because it was imported into RBD.#033[00m
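
Config-drive flow in the records above: mkisofs builds an ISO9660 volume labelled config-2 from a temporary staging directory, rbd import pushes it into the vms pool as <uuid>_disk.config, and the local copy is deleted. The same three steps, with paths and IDs copied from the log:

# Config-drive steps as logged above; /tmp/tmppkwdnfil was Nova's
# ephemeral staging directory and will not exist on replay.
import os
import subprocess

inst = "de3992c5-c1ad-4da3-9276-954d6365c3c9"
iso = f"/var/lib/nova/instances/{inst}/disk.config"
subprocess.run(
    ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
     "-allow-multidot", "-l", "-quiet", "-J", "-r", "-V", "config-2",
     "/tmp/tmppkwdnfil"],
    check=True)
subprocess.run(
    ["rbd", "import", "--pool", "vms", iso, f"{inst}_disk.config",
     "--image-format=2", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"],
    check=True)
os.unlink(iso)  # Nova removes the local copy once imported into RBD
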
Dec  3 18:42:27 compute-0 systemd[1]: Starting libvirt secret daemon...
Dec  3 18:42:27 compute-0 systemd[1]: Started libvirt secret daemon.
Dec  3 18:42:27 compute-0 NetworkManager[49087]: <info>  [1764787347.7503] manager: (tapd2dfa631-e5): new Tun device (/org/freedesktop/NetworkManager/Devices/32)
Dec  3 18:42:27 compute-0 kernel: tapd2dfa631-e5: entered promiscuous mode
Dec  3 18:42:27 compute-0 ovn_controller[89305]: 2025-12-03T18:42:27Z|00040|binding|INFO|Claiming lport d2dfa631-e553-46bc-bc20-3f0bdd977328 for this chassis.
Dec  3 18:42:27 compute-0 ovn_controller[89305]: 2025-12-03T18:42:27Z|00041|binding|INFO|d2dfa631-e553-46bc-bc20-3f0bdd977328: Claiming fa:16:3e:e6:73:73 192.168.0.212
Dec  3 18:42:27 compute-0 nova_compute[348325]: 2025-12-03 18:42:27.755 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:42:27 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:42:27.766 286999 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:73:73 192.168.0.212'], port_security=['fa:16:3e:e6:73:73 192.168.0.212'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-4jfpq66btob3-t73jgstwyk5c-ol75pntdsuyz-port-s2gk2jkwdast', 'neutron:cidrs': '192.168.0.212/24', 'neutron:device_id': 'de3992c5-c1ad-4da3-9276-954d6365c3c9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-85c8d446-ad7f-4d1b-a311-89b0b07e8aad', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-4jfpq66btob3-t73jgstwyk5c-ol75pntdsuyz-port-s2gk2jkwdast', 'neutron:project_id': 'd2770200bdb2436c90142fa2e5ddcd47', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8e48052e-a2fd-4fc1-8ebd-22e3b6e0bd66', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.241'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=12999ead-9a54-49b3-a532-a5f8bdddaf16, chassis=[<ovs.db.idl.Row object at 0x7f81e3e96760>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f81e3e96760>], logical_port=d2dfa631-e553-46bc-bc20-3f0bdd977328) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 18:42:27 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:42:27.767 286999 INFO neutron.agent.ovn.metadata.agent [-] Port d2dfa631-e553-46bc-bc20-3f0bdd977328 in datapath 85c8d446-ad7f-4d1b-a311-89b0b07e8aad bound to our chassis#033[00m
Dec  3 18:42:27 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:42:27.769 286999 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 85c8d446-ad7f-4d1b-a311-89b0b07e8aad#033[00m
Dec  3 18:42:27 compute-0 ovn_controller[89305]: 2025-12-03T18:42:27Z|00042|binding|INFO|Setting lport d2dfa631-e553-46bc-bc20-3f0bdd977328 ovn-installed in OVS
Dec  3 18:42:27 compute-0 ovn_controller[89305]: 2025-12-03T18:42:27Z|00043|binding|INFO|Setting lport d2dfa631-e553-46bc-bc20-3f0bdd977328 up in Southbound
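
ovn-controller claimed the logical port for this chassis (the Port_Binding's requested-chassis option names compute-0.ctlplane.example.com) and flipped it up in the southbound DB. A sketch for verifying that state, assuming ovn-sbctl is installed where the SB database is reachable:

# Confirm the binding in the OVN southbound DB (verification sketch).
import subprocess

subprocess.run(
    ["ovn-sbctl", "--columns=logical_port,chassis,up",
     "find", "Port_Binding",
     "logical_port=d2dfa631-e553-46bc-bc20-3f0bdd977328"],
    check=True)
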
Dec  3 18:42:27 compute-0 nova_compute[348325]: 2025-12-03 18:42:27.785 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:42:27 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:42:27.799 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[957111a9-9c16-48fd-88a7-001cd42680b8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:42:27 compute-0 systemd-machined[138702]: New machine qemu-3-instance-00000003.
Dec  3 18:42:27 compute-0 systemd[1]: Started Virtual Machine qemu-3-instance-00000003.
Dec  3 18:42:27 compute-0 systemd-udevd[420048]: Network interface NamePolicy= disabled on kernel command line.
Dec  3 18:42:27 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:42:27.834 411797 DEBUG oslo.privsep.daemon [-] privsep: reply[fe533112-4027-4bac-9559-fa043bdb3560]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:42:27 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:42:27.841 411797 DEBUG oslo.privsep.daemon [-] privsep: reply[58300d7f-3714-41aa-af43-e05c993c7d3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:42:27 compute-0 NetworkManager[49087]: <info>  [1764787347.8471] device (tapd2dfa631-e5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  3 18:42:27 compute-0 NetworkManager[49087]: <info>  [1764787347.8522] device (tapd2dfa631-e5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  3 18:42:27 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:42:27.881 411797 DEBUG oslo.privsep.daemon [-] privsep: reply[3315ba74-17c0-4c8e-863a-afdfe054f442]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:42:27 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:42:27.900 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[f3175e54-6e3b-4d58-8f03-9b8fc62e9aa7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap85c8d446-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:c1:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 7, 'tx_packets': 7, 'rx_bytes': 574, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 7, 'tx_packets': 7, 'rx_bytes': 574, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 527503, 'reachable_time': 30342, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 420058, 'error': None, 'target': 'ovnmeta-85c8d446-ad7f-4d1b-a311-89b0b07e8aad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:42:27 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:42:27.917 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[db111e0f-a157-456e-bdc1-c5caa20e9a42]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap85c8d446-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 527519, 'tstamp': 527519}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 420059, 'error': None, 'target': 'ovnmeta-85c8d446-ad7f-4d1b-a311-89b0b07e8aad', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap85c8d446-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 527523, 'tstamp': 527523}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 420059, 'error': None, 'target': 'ovnmeta-85c8d446-ad7f-4d1b-a311-89b0b07e8aad', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:42:27 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:42:27.919 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85c8d446-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:42:27 compute-0 nova_compute[348325]: 2025-12-03 18:42:27.921 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:42:27 compute-0 nova_compute[348325]: 2025-12-03 18:42:27.923 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:42:27 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:42:27.924 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap85c8d446-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:42:27 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:42:27.924 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 18:42:27 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:42:27.925 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap85c8d446-a0, col_values=(('external_ids', {'iface-id': '4db8340d-afa3-4a82-bd51-bca0a752f53f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:42:27 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:42:27.926 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
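
In parallel, the OVN metadata agent provisioned network 85c8d446: the privsep replies above show tap85c8d446-a1 inside the ovnmeta-<network-uuid> namespace carrying 192.168.0.2/24 plus the well-known metadata address 169.254.169.254/32. A sketch for inspecting that namespace (root required on the compute node):

# Inspect the metadata namespace provisioned above.
# Namespace name = "ovnmeta-" + Neutron network UUID.
import subprocess

ns = "ovnmeta-85c8d446-ad7f-4d1b-a311-89b0b07e8aad"
subprocess.run(
    ["ip", "netns", "exec", ns, "ip", "addr", "show", "tap85c8d446-a1"],
    check=True)
# Expect 192.168.0.2/24 and 169.254.169.254/32, matching the
# RTM_NEWADDR privsep replies logged above.
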
Dec  3 18:42:28 compute-0 nova_compute[348325]: 2025-12-03 18:42:28.297 348329 DEBUG nova.compute.manager [req-16246384-0832-4b0f-951f-a001d6c94123 req-bc70dfd3-4ac1-4bca-a4ef-21525f6b03d7 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] Received event network-vif-plugged-d2dfa631-e553-46bc-bc20-3f0bdd977328 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 18:42:28 compute-0 nova_compute[348325]: 2025-12-03 18:42:28.298 348329 DEBUG oslo_concurrency.lockutils [req-16246384-0832-4b0f-951f-a001d6c94123 req-bc70dfd3-4ac1-4bca-a4ef-21525f6b03d7 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "de3992c5-c1ad-4da3-9276-954d6365c3c9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:42:28 compute-0 nova_compute[348325]: 2025-12-03 18:42:28.298 348329 DEBUG oslo_concurrency.lockutils [req-16246384-0832-4b0f-951f-a001d6c94123 req-bc70dfd3-4ac1-4bca-a4ef-21525f6b03d7 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "de3992c5-c1ad-4da3-9276-954d6365c3c9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:42:28 compute-0 nova_compute[348325]: 2025-12-03 18:42:28.299 348329 DEBUG oslo_concurrency.lockutils [req-16246384-0832-4b0f-951f-a001d6c94123 req-bc70dfd3-4ac1-4bca-a4ef-21525f6b03d7 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "de3992c5-c1ad-4da3-9276-954d6365c3c9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:42:28 compute-0 nova_compute[348325]: 2025-12-03 18:42:28.299 348329 DEBUG nova.compute.manager [req-16246384-0832-4b0f-951f-a001d6c94123 req-bc70dfd3-4ac1-4bca-a4ef-21525f6b03d7 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] Processing event network-vif-plugged-d2dfa631-e553-46bc-bc20-3f0bdd977328 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  3 18:42:28 compute-0 nova_compute[348325]: 2025-12-03 18:42:28.489 348329 DEBUG nova.compute.manager [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
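
Note the ordering: prepare_for_instance_event registered the network-vif-plugged waiter at 18:42:26.687, before the VIF was plugged, so Neutron's notification at 18:42:28.297 could not be missed; the wait then completed in 0 seconds. The same prepare-then-wait pattern reduced to a minimal threading sketch (Nova's real implementation is eventlet-based and keyed per instance; this is an illustration only):

# Prepare-then-wait, reduced to plain threading for illustration.
import threading

events: dict[str, threading.Event] = {}

def prepare_for_instance_event(name: str) -> threading.Event:
    # Register BEFORE plugging, so a fast network-vif-plugged
    # notification can never be lost.
    return events.setdefault(name, threading.Event())

def external_instance_event(name: str) -> None:
    events.setdefault(name, threading.Event()).set()

ev = prepare_for_instance_event("network-vif-plugged-d2dfa631")
# ... plug the VIF, define and start the guest ...
external_instance_event("network-vif-plugged-d2dfa631")  # Neutron's callback
assert ev.wait(timeout=300)  # Nova's vif_plugging_timeout defaults to 300 s
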
Dec  3 18:42:28 compute-0 nova_compute[348325]: 2025-12-03 18:42:28.490 348329 DEBUG nova.virt.driver [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Emitting event <LifecycleEvent: 1764787348.488724, de3992c5-c1ad-4da3-9276-954d6365c3c9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 18:42:28 compute-0 nova_compute[348325]: 2025-12-03 18:42:28.490 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] VM Started (Lifecycle Event)#033[00m
Dec  3 18:42:28 compute-0 nova_compute[348325]: 2025-12-03 18:42:28.502 348329 DEBUG nova.virt.libvirt.driver [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  3 18:42:28 compute-0 nova_compute[348325]: 2025-12-03 18:42:28.506 348329 INFO nova.virt.libvirt.driver [-] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] Instance spawned successfully.#033[00m
Dec  3 18:42:28 compute-0 nova_compute[348325]: 2025-12-03 18:42:28.507 348329 DEBUG nova.virt.libvirt.driver [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  3 18:42:28 compute-0 nova_compute[348325]: 2025-12-03 18:42:28.535 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 18:42:28 compute-0 nova_compute[348325]: 2025-12-03 18:42:28.540 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
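
The numeric states in that record come from nova.compute.power_state: the DB still holds 0 (NOSTATE, nothing recorded yet) while libvirt now reports 1 (RUNNING). For reference:

# Numeric values from nova.compute.power_state.
POWER_STATES = {
    0: "NOSTATE",    # DB power_state: nothing recorded yet
    1: "RUNNING",    # what libvirt reports once the guest starts
    3: "PAUSED",
    4: "SHUTDOWN",
    6: "CRASHED",
    7: "SUSPENDED",
}
print(POWER_STATES[0], "->", POWER_STATES[1])  # NOSTATE -> RUNNING
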
Dec  3 18:42:28 compute-0 nova_compute[348325]: 2025-12-03 18:42:28.551 348329 DEBUG nova.virt.libvirt.driver [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 18:42:28 compute-0 nova_compute[348325]: 2025-12-03 18:42:28.552 348329 DEBUG nova.virt.libvirt.driver [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 18:42:28 compute-0 nova_compute[348325]: 2025-12-03 18:42:28.552 348329 DEBUG nova.virt.libvirt.driver [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 18:42:28 compute-0 nova_compute[348325]: 2025-12-03 18:42:28.552 348329 DEBUG nova.virt.libvirt.driver [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 18:42:28 compute-0 nova_compute[348325]: 2025-12-03 18:42:28.553 348329 DEBUG nova.virt.libvirt.driver [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 18:42:28 compute-0 nova_compute[348325]: 2025-12-03 18:42:28.553 348329 DEBUG nova.virt.libvirt.driver [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
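The six "Found default" records above show nova persisting the driver's current defaults for image properties this instance never set. A minimal sketch of that defaulting step, using only the values visible in the log; the helper name and metadata layout are our assumptions, not nova's actual code:

    # Defaults copied from the "Found default for ..." records above.
    DEFAULTS = {
        'hw_cdrom_bus': 'sata',
        'hw_disk_bus': 'virtio',
        'hw_input_bus': 'usb',
        'hw_pointer_model': 'usbtablet',
        'hw_video_model': 'virtio',
        'hw_vif_model': 'virtio',
    }

    def register_undefined_details(system_metadata):
        # Record a default only for properties the image left unset, so the
        # instance keeps the same buses/models across later operations.
        for prop, default in DEFAULTS.items():
            system_metadata.setdefault('image_' + prop, default)
        return system_metadata

    # An image that pinned hw_disk_bus keeps its value; the rest are filled in.
    print(register_undefined_details({'image_hw_disk_bus': 'scsi'}))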
Dec  3 18:42:28 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:42:28 compute-0 nova_compute[348325]: 2025-12-03 18:42:28.586 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] During sync_power_state the instance has a pending task (spawning). Skip.
Dec  3 18:42:28 compute-0 nova_compute[348325]: 2025-12-03 18:42:28.586 348329 DEBUG nova.virt.driver [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Emitting event <LifecycleEvent: 1764787348.48893, de3992c5-c1ad-4da3-9276-954d6365c3c9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  3 18:42:28 compute-0 nova_compute[348325]: 2025-12-03 18:42:28.586 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] VM Paused (Lifecycle Event)
Dec  3 18:42:28 compute-0 nova_compute[348325]: 2025-12-03 18:42:28.611 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  3 18:42:28 compute-0 nova_compute[348325]: 2025-12-03 18:42:28.615 348329 DEBUG nova.virt.driver [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Emitting event <LifecycleEvent: 1764787348.5018935, de3992c5-c1ad-4da3-9276-954d6365c3c9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  3 18:42:28 compute-0 nova_compute[348325]: 2025-12-03 18:42:28.615 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] VM Resumed (Lifecycle Event)
Dec  3 18:42:28 compute-0 nova_compute[348325]: 2025-12-03 18:42:28.629 348329 INFO nova.compute.manager [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] Took 6.81 seconds to spawn the instance on the hypervisor.
Dec  3 18:42:28 compute-0 nova_compute[348325]: 2025-12-03 18:42:28.629 348329 DEBUG nova.compute.manager [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  3 18:42:28 compute-0 nova_compute[348325]: 2025-12-03 18:42:28.638 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  3 18:42:28 compute-0 nova_compute[348325]: 2025-12-03 18:42:28.644 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  3 18:42:28 compute-0 nova_compute[348325]: 2025-12-03 18:42:28.683 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] During sync_power_state the instance has a pending task (spawning). Skip.
Dec  3 18:42:28 compute-0 nova_compute[348325]: 2025-12-03 18:42:28.701 348329 INFO nova.compute.manager [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] Took 7.84 seconds to build instance.
Dec  3 18:42:28 compute-0 nova_compute[348325]: 2025-12-03 18:42:28.720 348329 DEBUG oslo_concurrency.lockutils [None req-b739f9f3-9f88-42ea-abe2-183a1d4970fc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "de3992c5-c1ad-4da3-9276-954d6365c3c9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.937s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
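Taken together, the records above trace one spawn end to end: Started/Paused/Resumed lifecycle events interleaved with power-state syncs that are skipped while task_state is still "spawning", closing with the per-instance build lock released after 7.937s. A small sketch for pulling such a timeline out of the journal; the regex is an assumption matching the oslo.log format seen in these lines:

    import re

    # Timestamp, pid, level, logger, then "[instance: <uuid>] <message>".
    LINE = re.compile(
        r'(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) '
        r'(?P<pid>\d+) (?P<level>DEBUG|INFO|WARNING|ERROR) '
        r'(?P<logger>\S+) .*?\[instance: (?P<uuid>[0-9a-f-]{36})\] (?P<msg>.*)')

    def timeline(lines, uuid):
        """Yield (timestamp, level, message) for one instance UUID."""
        for line in lines:
            m = LINE.search(line)
            if m and m.group('uuid') == uuid:
                yield m.group('ts'), m.group('level'), m.group('msg')

Feeding it the journal lines above with uuid 'de3992c5-c1ad-4da3-9276-954d6365c3c9' reproduces the Started -> Paused -> Resumed -> "Took 7.84 seconds to build instance." sequence in order.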
Dec  3 18:42:29 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1296: 321 pgs: 321 active+clean; 172 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.4 MiB/s wr, 37 op/s
Dec  3 18:42:29 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec  3 18:42:29 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec  3 18:42:29 compute-0 podman[158200]: time="2025-12-03T18:42:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:42:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:42:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 18:42:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:42:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8632 "" "Go-http-client/1.1"
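The two GET records are libpod REST calls arriving over the podman API socket (the same unix:///run/podman/podman.sock that the podman_exporter container mounts, per its config below). A sketch issuing the first query from Python; the socket path and API version come from the log, the helper class is ours:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """Just enough HTTP-over-unix-socket for the libpod REST API."""
        def __init__(self, socket_path):
            super().__init__('localhost')
            self._socket_path = socket_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._socket_path)

    conn = UnixHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    containers = json.loads(conn.getresponse().read())
    print(len(containers), 'containers')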
Dec  3 18:42:31 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1297: 321 pgs: 321 active+clean; 172 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.4 MiB/s wr, 41 op/s
Dec  3 18:42:31 compute-0 openstack_network_exporter[365222]: ERROR   18:42:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:42:31 compute-0 openstack_network_exporter[365222]: ERROR   18:42:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:42:31 compute-0 openstack_network_exporter[365222]: ERROR   18:42:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:42:31 compute-0 openstack_network_exporter[365222]: ERROR   18:42:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:42:31 compute-0 openstack_network_exporter[365222]: ERROR   18:42:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:42:31 compute-0 nova_compute[348325]: 2025-12-03 18:42:31.614 348329 DEBUG nova.compute.manager [req-2557065c-efaf-45a5-bacb-309812f51b0c req-6e139417-ced4-44ea-9ac2-856aed02443c 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] Received event network-vif-plugged-d2dfa631-e553-46bc-bc20-3f0bdd977328 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  3 18:42:31 compute-0 nova_compute[348325]: 2025-12-03 18:42:31.615 348329 DEBUG oslo_concurrency.lockutils [req-2557065c-efaf-45a5-bacb-309812f51b0c req-6e139417-ced4-44ea-9ac2-856aed02443c 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "de3992c5-c1ad-4da3-9276-954d6365c3c9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:42:31 compute-0 nova_compute[348325]: 2025-12-03 18:42:31.615 348329 DEBUG oslo_concurrency.lockutils [req-2557065c-efaf-45a5-bacb-309812f51b0c req-6e139417-ced4-44ea-9ac2-856aed02443c 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "de3992c5-c1ad-4da3-9276-954d6365c3c9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:42:31 compute-0 nova_compute[348325]: 2025-12-03 18:42:31.615 348329 DEBUG oslo_concurrency.lockutils [req-2557065c-efaf-45a5-bacb-309812f51b0c req-6e139417-ced4-44ea-9ac2-856aed02443c 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "de3992c5-c1ad-4da3-9276-954d6365c3c9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:42:31 compute-0 nova_compute[348325]: 2025-12-03 18:42:31.615 348329 DEBUG nova.compute.manager [req-2557065c-efaf-45a5-bacb-309812f51b0c req-6e139417-ced4-44ea-9ac2-856aed02443c 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] No waiting events found dispatching network-vif-plugged-d2dfa631-e553-46bc-bc20-3f0bdd977328 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  3 18:42:31 compute-0 nova_compute[348325]: 2025-12-03 18:42:31.616 348329 WARNING nova.compute.manager [req-2557065c-efaf-45a5-bacb-309812f51b0c req-6e139417-ced4-44ea-9ac2-856aed02443c 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] Received unexpected event network-vif-plugged-d2dfa631-e553-46bc-bc20-3f0bdd977328 for instance with vm_state active and task_state None.
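The acquire/release pair around pop_instance_event shows oslo.concurrency serializing access to a per-instance event list keyed "<uuid>-events"; since nothing was waiting (the instance is already active), the network-vif-plugged event is logged as unexpected and dropped. The locking pattern, sketched with our simplified data structure rather than nova's real one:

    from oslo_concurrency import lockutils

    _waiting_events = {}  # instance uuid -> {event name: waiter}

    def pop_instance_event(instance_uuid, event_name):
        # Same shape as the log: take the "<uuid>-events" lock, pop the
        # waiter if one exists, otherwise the event is unexpected.
        with lockutils.lock(instance_uuid + '-events'):
            waiter = _waiting_events.get(instance_uuid, {}).pop(event_name, None)
        if waiter is None:
            print('No waiting events found dispatching', event_name)
        return waiter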
Dec  3 18:42:31 compute-0 nova_compute[348325]: 2025-12-03 18:42:31.705 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:42:32 compute-0 nova_compute[348325]: 2025-12-03 18:42:32.557 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:42:32 compute-0 podman[420140]: 2025-12-03 18:42:32.961719909 +0000 UTC m=+0.113580256 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  3 18:42:33 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1298: 321 pgs: 321 active+clean; 172 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.4 MiB/s wr, 97 op/s
Dec  3 18:42:33 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:42:35 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1299: 321 pgs: 321 active+clean; 172 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 659 KiB/s wr, 86 op/s
Dec  3 18:42:36 compute-0 nova_compute[348325]: 2025-12-03 18:42:36.709 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:42:36 compute-0 podman[420165]: 2025-12-03 18:42:36.937618457 +0000 UTC m=+0.108629315 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec  3 18:42:36 compute-0 podman[420164]: 2025-12-03 18:42:36.975840611 +0000 UTC m=+0.149909644 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Dec  3 18:42:37 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1300: 321 pgs: 321 active+clean; 172 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 659 KiB/s wr, 73 op/s
Dec  3 18:42:37 compute-0 nova_compute[348325]: 2025-12-03 18:42:37.559 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:42:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 18:42:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/486607520' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 18:42:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 18:42:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/486607520' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
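Both audited monitor commands come from client.openstack polling capacity on the "volumes" pool (the df / get-quota pair a Cinder RBD backend typically issues). A sketch of sending the same commands through librados' mon_command(); the conffile path is an assumption:

    import json
    import rados  # python3-rados, shipped with Ceph

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.openstack')
    cluster.connect()
    for cmd in ({"prefix": "df", "format": "json"},
                {"prefix": "osd pool get-quota", "pool": "volumes", "format": "json"}):
        # mon_command takes the JSON command string plus an input buffer and
        # returns (return code, output buffer, status string).
        ret, out, errs = cluster.mon_command(json.dumps(cmd), b'')
        print(cmd["prefix"], '->', ret, len(out), 'bytes')
    cluster.shutdown()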
Dec  3 18:42:38 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:42:39 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1301: 321 pgs: 321 active+clean; 172 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 21 KiB/s wr, 70 op/s
Dec  3 18:42:41 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1302: 321 pgs: 321 active+clean; 172 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 20 KiB/s wr, 60 op/s
Dec  3 18:42:41 compute-0 nova_compute[348325]: 2025-12-03 18:42:41.715 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:42:42 compute-0 nova_compute[348325]: 2025-12-03 18:42:42.562 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:42:43 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1303: 321 pgs: 321 active+clean; 172 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 20 KiB/s wr, 55 op/s
Dec  3 18:42:43 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:42:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:42:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:42:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:42:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:42:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:42:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:42:45 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1304: 321 pgs: 321 active+clean; 172 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:42:46 compute-0 nova_compute[348325]: 2025-12-03 18:42:46.721 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:42:47 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1305: 321 pgs: 321 active+clean; 172 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:42:47 compute-0 nova_compute[348325]: 2025-12-03 18:42:47.564 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:42:48 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:42:49 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1306: 321 pgs: 321 active+clean; 172 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:42:50 compute-0 podman[420208]: 2025-12-03 18:42:50.940523543 +0000 UTC m=+0.096853347 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, release=1755695350, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, name=ubi9-minimal, build-date=2025-08-20T13:12:41, architecture=x86_64, config_id=edpm, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec  3 18:42:50 compute-0 podman[420207]: 2025-12-03 18:42:50.948864717 +0000 UTC m=+0.105483429 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 18:42:50 compute-0 podman[420206]: 2025-12-03 18:42:50.964018067 +0000 UTC m=+0.124794031 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 18:42:51 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1307: 321 pgs: 321 active+clean; 172 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:42:51 compute-0 nova_compute[348325]: 2025-12-03 18:42:51.727 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:42:52 compute-0 nova_compute[348325]: 2025-12-03 18:42:52.565 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:42:52 compute-0 podman[420289]: 2025-12-03 18:42:52.903736657 +0000 UTC m=+0.113185906 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, architecture=x86_64, maintainer=Red Hat, Inc., name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, vcs-type=git, release=1214.1726694543, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler)
Dec  3 18:42:52 compute-0 podman[420291]: 2025-12-03 18:42:52.918858747 +0000 UTC m=+0.101963082 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 18:42:52 compute-0 podman[420290]: 2025-12-03 18:42:52.935602406 +0000 UTC m=+0.126587344 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm)
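The health_status records in this stretch are podman healthcheck events: each container's configured test (e.g. '/openstack/healthcheck podman_exporter') ran and reported healthy with a failing streak of 0. A sketch that follows the same events live; the filter and JSON field names are assumptions for podman 4.x and may need adjusting per release:

    import json
    import subprocess

    # Stream healthcheck results as they happen, matching the
    # "container health_status" records in this log.
    proc = subprocess.Popen(
        ['podman', 'events', '--format', 'json',
         '--filter', 'event=health_status'],
        stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        ev = json.loads(line)
        # Field names vary between podman releases; adjust as needed.
        print(ev.get('Name'), ev.get('HealthStatus', ev.get('Status')))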
Dec  3 18:42:53 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1308: 321 pgs: 321 active+clean; 172 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:42:53 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:42:53 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:42:53 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:42:53 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 18:42:53 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:42:53 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 18:42:53 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:42:53 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 49ae7704-1213-4880-a783-c537a4dc3b9f does not exist
Dec  3 18:42:53 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 7c926884-5768-45a9-bdec-a8cc95baa184 does not exist
Dec  3 18:42:53 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 6f139beb-2514-4636-baaa-ce13a1c1e2fb does not exist
Dec  3 18:42:53 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 18:42:53 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 18:42:53 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 18:42:53 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:42:53 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:42:53 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:42:54 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:42:54 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:42:54 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:42:54 compute-0 podman[420585]: 2025-12-03 18:42:54.565155019 +0000 UTC m=+0.075485985 container create da61e0aa0b0ea090d213cf9223075d9738130bd799c892a4f873d02604746838 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:42:54 compute-0 podman[420585]: 2025-12-03 18:42:54.53821432 +0000 UTC m=+0.048545316 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:42:54 compute-0 systemd[1]: Started libpod-conmon-da61e0aa0b0ea090d213cf9223075d9738130bd799c892a4f873d02604746838.scope.
Dec  3 18:42:54 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:42:54 compute-0 podman[420585]: 2025-12-03 18:42:54.698725692 +0000 UTC m=+0.209056678 container init da61e0aa0b0ea090d213cf9223075d9738130bd799c892a4f873d02604746838 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:42:54 compute-0 podman[420585]: 2025-12-03 18:42:54.710591112 +0000 UTC m=+0.220922078 container start da61e0aa0b0ea090d213cf9223075d9738130bd799c892a4f873d02604746838 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  3 18:42:54 compute-0 podman[420585]: 2025-12-03 18:42:54.715639215 +0000 UTC m=+0.225970181 container attach da61e0aa0b0ea090d213cf9223075d9738130bd799c892a4f873d02604746838 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:42:54 compute-0 epic_boyd[420600]: 167 167
Dec  3 18:42:54 compute-0 systemd[1]: libpod-da61e0aa0b0ea090d213cf9223075d9738130bd799c892a4f873d02604746838.scope: Deactivated successfully.
Dec  3 18:42:54 compute-0 podman[420585]: 2025-12-03 18:42:54.720076394 +0000 UTC m=+0.230407350 container died da61e0aa0b0ea090d213cf9223075d9738130bd799c892a4f873d02604746838 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_boyd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:42:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c4ee584164dad53ac8df7d92ca04aca933868795e954cc3a7e6b6843ddfbb0c-merged.mount: Deactivated successfully.
Dec  3 18:42:54 compute-0 podman[420585]: 2025-12-03 18:42:54.776303987 +0000 UTC m=+0.286634943 container remove da61e0aa0b0ea090d213cf9223075d9738130bd799c892a4f873d02604746838 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_boyd, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:42:54 compute-0 systemd[1]: libpod-conmon-da61e0aa0b0ea090d213cf9223075d9738130bd799c892a4f873d02604746838.scope: Deactivated successfully.
Dec  3 18:42:54 compute-0 podman[420623]: 2025-12-03 18:42:54.97860403 +0000 UTC m=+0.054576074 container create 6bad34dd63ce2ff93279066d9d35f4c9eaf53a45cd7729d48c36e6c5d6de295c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  3 18:42:55 compute-0 systemd[1]: Started libpod-conmon-6bad34dd63ce2ff93279066d9d35f4c9eaf53a45cd7729d48c36e6c5d6de295c.scope.
Dec  3 18:42:55 compute-0 podman[420623]: 2025-12-03 18:42:54.956110521 +0000 UTC m=+0.032082585 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:42:55 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:42:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1be734d56dd3bd5d40f0b5b5a244fdf9f9c5edaf92cb8ed22cac26419ba98d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:42:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1be734d56dd3bd5d40f0b5b5a244fdf9f9c5edaf92cb8ed22cac26419ba98d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:42:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1be734d56dd3bd5d40f0b5b5a244fdf9f9c5edaf92cb8ed22cac26419ba98d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:42:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1be734d56dd3bd5d40f0b5b5a244fdf9f9c5edaf92cb8ed22cac26419ba98d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:42:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1be734d56dd3bd5d40f0b5b5a244fdf9f9c5edaf92cb8ed22cac26419ba98d7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 18:42:55 compute-0 podman[420623]: 2025-12-03 18:42:55.102255941 +0000 UTC m=+0.178228005 container init 6bad34dd63ce2ff93279066d9d35f4c9eaf53a45cd7729d48c36e6c5d6de295c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:42:55 compute-0 podman[420623]: 2025-12-03 18:42:55.116853728 +0000 UTC m=+0.192825772 container start 6bad34dd63ce2ff93279066d9d35f4c9eaf53a45cd7729d48c36e6c5d6de295c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_kare, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:42:55 compute-0 podman[420623]: 2025-12-03 18:42:55.120778263 +0000 UTC m=+0.196750327 container attach 6bad34dd63ce2ff93279066d9d35f4c9eaf53a45cd7729d48c36e6c5d6de295c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_kare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec  3 18:42:55 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1309: 321 pgs: 321 active+clean; 172 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:42:56 compute-0 fervent_kare[420639]: --> passed data devices: 0 physical, 3 LVM
Dec  3 18:42:56 compute-0 fervent_kare[420639]: --> relative data size: 1.0
Dec  3 18:42:56 compute-0 fervent_kare[420639]: --> All data devices are unavailable
Dec  3 18:42:56 compute-0 systemd[1]: libpod-6bad34dd63ce2ff93279066d9d35f4c9eaf53a45cd7729d48c36e6c5d6de295c.scope: Deactivated successfully.
Dec  3 18:42:56 compute-0 systemd[1]: libpod-6bad34dd63ce2ff93279066d9d35f4c9eaf53a45cd7729d48c36e6c5d6de295c.scope: Consumed 1.062s CPU time.
Dec  3 18:42:56 compute-0 podman[420668]: 2025-12-03 18:42:56.310237064 +0000 UTC m=+0.038591794 container died 6bad34dd63ce2ff93279066d9d35f4c9eaf53a45cd7729d48c36e6c5d6de295c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_kare, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec  3 18:42:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-f1be734d56dd3bd5d40f0b5b5a244fdf9f9c5edaf92cb8ed22cac26419ba98d7-merged.mount: Deactivated successfully.
Dec  3 18:42:56 compute-0 podman[420668]: 2025-12-03 18:42:56.389104101 +0000 UTC m=+0.117458801 container remove 6bad34dd63ce2ff93279066d9d35f4c9eaf53a45cd7729d48c36e6c5d6de295c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_kare, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:42:56 compute-0 systemd[1]: libpod-conmon-6bad34dd63ce2ff93279066d9d35f4c9eaf53a45cd7729d48c36e6c5d6de295c.scope: Deactivated successfully.
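The create/init/start/attach/died/remove sequences above are cephadm running short-lived ceph containers (with random names like epic_boyd and fervent_kare) to execute ceph-volume and capture its output; the "passed data devices: 0 physical, 3 LVM ... All data devices are unavailable" report means its device scan found nothing usable for new OSDs. The run-once pattern, sketched with the image digest from the log; the ceph-volume arguments are illustrative, not cephadm's exact invocation:

    import subprocess

    IMAGE = ('quay.io/ceph/ceph@sha256:'
             '1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0')

    # One-shot container: podman creates, starts, attaches, and (via --rm)
    # removes it on exit, producing exactly the create/start/attach/died/
    # remove trail visible in the journal.
    result = subprocess.run(
        ['podman', 'run', '--rm', '--privileged', '--net=host',
         '-v', '/etc/ceph:/etc/ceph:z', '-v', '/dev:/dev',
         IMAGE, 'ceph-volume', 'inventory', '--format', 'json'],
        capture_output=True, text=True)
    print(result.returncode, result.stdout[:200])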
Dec  3 18:42:56 compute-0 nova_compute[348325]: 2025-12-03 18:42:56.731 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:42:57 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1310: 321 pgs: 321 active+clean; 172 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:42:57 compute-0 podman[420820]: 2025-12-03 18:42:57.316396157 +0000 UTC m=+0.076308146 container create 26ce5e7bac722864c38132da12ad9bfe50de12e9a6d0e634b7878cb412cdc738 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:42:57 compute-0 podman[420820]: 2025-12-03 18:42:57.292283377 +0000 UTC m=+0.052195386 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:42:57 compute-0 systemd[1]: Started libpod-conmon-26ce5e7bac722864c38132da12ad9bfe50de12e9a6d0e634b7878cb412cdc738.scope.
Dec  3 18:42:57 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:42:57 compute-0 podman[420820]: 2025-12-03 18:42:57.451737593 +0000 UTC m=+0.211649592 container init 26ce5e7bac722864c38132da12ad9bfe50de12e9a6d0e634b7878cb412cdc738 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec  3 18:42:57 compute-0 podman[420820]: 2025-12-03 18:42:57.463946052 +0000 UTC m=+0.223858041 container start 26ce5e7bac722864c38132da12ad9bfe50de12e9a6d0e634b7878cb412cdc738 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Dec  3 18:42:57 compute-0 podman[420820]: 2025-12-03 18:42:57.46920253 +0000 UTC m=+0.229114529 container attach 26ce5e7bac722864c38132da12ad9bfe50de12e9a6d0e634b7878cb412cdc738 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:42:57 compute-0 sleepy_banach[420836]: 167 167
Dec  3 18:42:57 compute-0 systemd[1]: libpod-26ce5e7bac722864c38132da12ad9bfe50de12e9a6d0e634b7878cb412cdc738.scope: Deactivated successfully.
Dec  3 18:42:57 compute-0 podman[420820]: 2025-12-03 18:42:57.474652734 +0000 UTC m=+0.234564723 container died 26ce5e7bac722864c38132da12ad9bfe50de12e9a6d0e634b7878cb412cdc738 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_banach, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  3 18:42:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba28f140c8b2856fdb6faf6af33973d3a00f830840dff90bb913121969875e90-merged.mount: Deactivated successfully.
Dec  3 18:42:57 compute-0 podman[420820]: 2025-12-03 18:42:57.540101642 +0000 UTC m=+0.300013641 container remove 26ce5e7bac722864c38132da12ad9bfe50de12e9a6d0e634b7878cb412cdc738 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_banach, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:42:57 compute-0 systemd[1]: libpod-conmon-26ce5e7bac722864c38132da12ad9bfe50de12e9a6d0e634b7878cb412cdc738.scope: Deactivated successfully.
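The sleepy_banach burst above (create, init, start, attach, "167 167" on stdout, died, remove, all within roughly 250 ms) is the signature of a cephadm-style throwaway probe container. 167 is the uid/gid of the ceph user inside the ceph image, so the "167 167" output is plausibly a uid/gid query; a minimal sketch of such a probe, assuming podman on PATH and reusing the image digest from the log (the stat invocation is an assumption, not cephadm's literal command line):

    import subprocess

    # Image reference taken from the log lines above.
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # Hypothetical re-creation of the short-lived probe: run `stat` inside
    # the ceph image and print "uid gid" for /var/lib/ceph.
    result = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout.strip())  # expected: 167 167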
Dec  3 18:42:57 compute-0 nova_compute[348325]: 2025-12-03 18:42:57.567 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:42:57 compute-0 podman[420859]: 2025-12-03 18:42:57.76060108 +0000 UTC m=+0.061389631 container create 3ef3cbb40bada0db4a55412424f976146247609befdbbb5f5f1f7e336b359976 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dubinsky, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  3 18:42:57 compute-0 systemd[1]: Started libpod-conmon-3ef3cbb40bada0db4a55412424f976146247609befdbbb5f5f1f7e336b359976.scope.
Dec  3 18:42:57 compute-0 ovn_controller[89305]: 2025-12-03T18:42:57Z|00044|memory_trim|INFO|Detected inactivity (last active 30017 ms ago): trimming memory
Dec  3 18:42:57 compute-0 podman[420859]: 2025-12-03 18:42:57.734606584 +0000 UTC m=+0.035395155 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:42:57 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:42:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c7d365ac8af0a6e0a1f0944ffffa7e4670836c556a764b4f5a941a056d7b58a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:42:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c7d365ac8af0a6e0a1f0944ffffa7e4670836c556a764b4f5a941a056d7b58a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:42:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c7d365ac8af0a6e0a1f0944ffffa7e4670836c556a764b4f5a941a056d7b58a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:42:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c7d365ac8af0a6e0a1f0944ffffa7e4670836c556a764b4f5a941a056d7b58a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
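The four "supports timestamps until 2038" notices are the kernel flagging that these overlay mounts sit on an XFS filesystem created without the bigtime feature; harmless today, but a y2038 limit. A quick way to check a filesystem, as a sketch (parsing xfs_info output like this is illustrative, not a stable interface, and the mount point is an example):

    import re
    import subprocess

    # xfs_info prints a "bigtime=0|1" flag for the filesystem backing the
    # given mount point (here the container-storage path from the log).
    info = subprocess.run(["xfs_info", "/var/lib/containers"],
                          capture_output=True, text=True, check=True).stdout
    m = re.search(r"bigtime=(\d)", info)
    print("bigtime enabled" if m and m.group(1) == "1"
          else "bigtime disabled: timestamps capped at 2038")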
Dec  3 18:42:57 compute-0 podman[420859]: 2025-12-03 18:42:57.883147273 +0000 UTC m=+0.183935874 container init 3ef3cbb40bada0db4a55412424f976146247609befdbbb5f5f1f7e336b359976 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dubinsky, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:42:57 compute-0 podman[420859]: 2025-12-03 18:42:57.896903339 +0000 UTC m=+0.197691890 container start 3ef3cbb40bada0db4a55412424f976146247609befdbbb5f5f1f7e336b359976 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Dec  3 18:42:57 compute-0 podman[420859]: 2025-12-03 18:42:57.901228395 +0000 UTC m=+0.202016976 container attach 3ef3cbb40bada0db4a55412424f976146247609befdbbb5f5f1f7e336b359976 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:42:58 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]: {
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:    "0": [
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:        {
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:            "devices": [
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:                "/dev/loop3"
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:            ],
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:            "lv_name": "ceph_lv0",
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:            "lv_size": "21470642176",
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=973fbbc8-5aff-4a53-bee8-42e5a6788dd6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:            "lv_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:            "name": "ceph_lv0",
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:            "tags": {
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:                "ceph.block_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:                "ceph.cluster_name": "ceph",
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:                "ceph.crush_device_class": "",
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:                "ceph.encrypted": "0",
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:                "ceph.osd_fsid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:                "ceph.osd_id": "0",
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:                "ceph.type": "block",
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:                "ceph.vdo": "0"
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:            },
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:            "type": "block",
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:            "vg_name": "ceph_vg0"
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:        }
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:    ],
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:    "1": [
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:        {
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:            "devices": [
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:                "/dev/loop4"
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:            ],
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:            "lv_name": "ceph_lv1",
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:            "lv_size": "21470642176",
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1e2b0083-5293-47cb-a3d1-bc27cedc4ede,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:            "lv_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:            "name": "ceph_lv1",
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:            "tags": {
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:                "ceph.block_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:                "ceph.cluster_name": "ceph",
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:                "ceph.crush_device_class": "",
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:                "ceph.encrypted": "0",
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:                "ceph.osd_fsid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:                "ceph.osd_id": "1",
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:                "ceph.type": "block",
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:                "ceph.vdo": "0"
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:            },
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:            "type": "block",
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:            "vg_name": "ceph_vg1"
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:        }
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:    ],
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:    "2": [
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:        {
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:            "devices": [
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:                "/dev/loop5"
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:            ],
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:            "lv_name": "ceph_lv2",
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:            "lv_size": "21470642176",
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2abec9de-afba-437e-9a17-384a1dd8cd50,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:            "lv_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:            "name": "ceph_lv2",
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:            "tags": {
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:                "ceph.block_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:                "ceph.cluster_name": "ceph",
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:                "ceph.crush_device_class": "",
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:                "ceph.encrypted": "0",
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:                "ceph.osd_fsid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:                "ceph.osd_id": "2",
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:                "ceph.type": "block",
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:                "ceph.vdo": "0"
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:            },
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:            "type": "block",
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:            "vg_name": "ceph_vg2"
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:        }
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]:    ]
Dec  3 18:42:58 compute-0 jolly_dubinsky[420875]: }
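The JSON that jolly_dubinsky printed has the shape of `ceph-volume lvm list --format json` output: top-level keys are OSD ids, each mapping to a list of LV records. Assuming a capture of that block in a file, a sketch that reduces it to an osd_id -> block-device map:

    import json

    # lvm_list.json is a hypothetical capture of the JSON block logged above.
    with open("lvm_list.json") as f:
        listing = json.load(f)

    osd_to_device = {
        int(osd_id): lv["lv_path"]
        for osd_id, lvs in listing.items()
        for lv in lvs
        if lv.get("type") == "block"
    }
    print(osd_to_device)
    # {0: '/dev/ceph_vg0/ceph_lv0', 1: '/dev/ceph_vg1/ceph_lv1',
    #  2: '/dev/ceph_vg2/ceph_lv2'}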
Dec  3 18:42:58 compute-0 systemd[1]: libpod-3ef3cbb40bada0db4a55412424f976146247609befdbbb5f5f1f7e336b359976.scope: Deactivated successfully.
Dec  3 18:42:58 compute-0 conmon[420875]: conmon 3ef3cbb40bada0db4a55 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3ef3cbb40bada0db4a55412424f976146247609befdbbb5f5f1f7e336b359976.scope/container/memory.events
Dec  3 18:42:58 compute-0 podman[420884]: 2025-12-03 18:42:58.807374493 +0000 UTC m=+0.054710817 container died 3ef3cbb40bada0db4a55412424f976146247609befdbbb5f5f1f7e336b359976 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dubinsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:42:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c7d365ac8af0a6e0a1f0944ffffa7e4670836c556a764b4f5a941a056d7b58a-merged.mount: Deactivated successfully.
Dec  3 18:42:58 compute-0 podman[420884]: 2025-12-03 18:42:58.88992599 +0000 UTC m=+0.137262304 container remove 3ef3cbb40bada0db4a55412424f976146247609befdbbb5f5f1f7e336b359976 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dubinsky, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  3 18:42:58 compute-0 systemd[1]: libpod-conmon-3ef3cbb40bada0db4a55412424f976146247609befdbbb5f5f1f7e336b359976.scope: Deactivated successfully.
Dec  3 18:42:59 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1311: 321 pgs: 321 active+clean; 172 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:42:59 compute-0 podman[158200]: time="2025-12-03T18:42:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:42:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:42:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 18:42:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:42:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8632 "" "Go-http-client/1.1"
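The two GET requests above are the prometheus-podman-exporter polling podman's REST API over its unix socket. An equivalent query using only the standard library, assuming the socket path given in the podman_exporter config that appears later in this log (/run/podman/podman.sock):

    import http.client
    import json
    import socket

    # HTTPConnection variant that dials a unix socket instead of TCP.
    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c["Id"][:12], c.get("Names"), c.get("State"))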
Dec  3 18:42:59 compute-0 podman[421037]: 2025-12-03 18:42:59.852544149 +0000 UTC m=+0.065959272 container create 5e93dc770d031dc8c51f9599cde721c7612521034e2b6dbf682577dfea719dd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_blackwell, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:42:59 compute-0 systemd[1]: Started libpod-conmon-5e93dc770d031dc8c51f9599cde721c7612521034e2b6dbf682577dfea719dd7.scope.
Dec  3 18:42:59 compute-0 podman[421037]: 2025-12-03 18:42:59.820832874 +0000 UTC m=+0.034248017 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:42:59 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:42:59 compute-0 podman[421037]: 2025-12-03 18:42:59.954743426 +0000 UTC m=+0.168158569 container init 5e93dc770d031dc8c51f9599cde721c7612521034e2b6dbf682577dfea719dd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_blackwell, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:42:59 compute-0 podman[421037]: 2025-12-03 18:42:59.965756616 +0000 UTC m=+0.179171739 container start 5e93dc770d031dc8c51f9599cde721c7612521034e2b6dbf682577dfea719dd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_blackwell, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec  3 18:42:59 compute-0 podman[421037]: 2025-12-03 18:42:59.969373333 +0000 UTC m=+0.182788456 container attach 5e93dc770d031dc8c51f9599cde721c7612521034e2b6dbf682577dfea719dd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_blackwell, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  3 18:42:59 compute-0 beautiful_blackwell[421052]: 167 167
Dec  3 18:42:59 compute-0 systemd[1]: libpod-5e93dc770d031dc8c51f9599cde721c7612521034e2b6dbf682577dfea719dd7.scope: Deactivated successfully.
Dec  3 18:42:59 compute-0 podman[421037]: 2025-12-03 18:42:59.975034862 +0000 UTC m=+0.188449995 container died 5e93dc770d031dc8c51f9599cde721c7612521034e2b6dbf682577dfea719dd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_blackwell, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec  3 18:43:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-902f256a613a3fb407c21d1ef5b4ab165bcd3e16327241c35c5d159e81cee096-merged.mount: Deactivated successfully.
Dec  3 18:43:00 compute-0 podman[421037]: 2025-12-03 18:43:00.047909252 +0000 UTC m=+0.261324365 container remove 5e93dc770d031dc8c51f9599cde721c7612521034e2b6dbf682577dfea719dd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_blackwell, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:43:00 compute-0 systemd[1]: libpod-conmon-5e93dc770d031dc8c51f9599cde721c7612521034e2b6dbf682577dfea719dd7.scope: Deactivated successfully.
Dec  3 18:43:00 compute-0 podman[421076]: 2025-12-03 18:43:00.279909301 +0000 UTC m=+0.062480658 container create 271180ce22b681de403add7e17704a229951c43d705cf31533d3df987283f381 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_ritchie, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:43:00 compute-0 podman[421076]: 2025-12-03 18:43:00.25818003 +0000 UTC m=+0.040751417 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:43:00 compute-0 systemd[1]: Started libpod-conmon-271180ce22b681de403add7e17704a229951c43d705cf31533d3df987283f381.scope.
Dec  3 18:43:00 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:43:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb7f88cdc9945ea1b5463a268a21af2954d6fa267253d51ceaf7a08920141bab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:43:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb7f88cdc9945ea1b5463a268a21af2954d6fa267253d51ceaf7a08920141bab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:43:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb7f88cdc9945ea1b5463a268a21af2954d6fa267253d51ceaf7a08920141bab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:43:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb7f88cdc9945ea1b5463a268a21af2954d6fa267253d51ceaf7a08920141bab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:43:00 compute-0 podman[421076]: 2025-12-03 18:43:00.431688519 +0000 UTC m=+0.214259876 container init 271180ce22b681de403add7e17704a229951c43d705cf31533d3df987283f381 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_ritchie, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Dec  3 18:43:00 compute-0 podman[421076]: 2025-12-03 18:43:00.442439681 +0000 UTC m=+0.225011078 container start 271180ce22b681de403add7e17704a229951c43d705cf31533d3df987283f381 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef)
Dec  3 18:43:00 compute-0 podman[421076]: 2025-12-03 18:43:00.449198827 +0000 UTC m=+0.231770214 container attach 271180ce22b681de403add7e17704a229951c43d705cf31533d3df987283f381 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:43:01 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1312: 321 pgs: 321 active+clean; 172 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:43:01 compute-0 openstack_network_exporter[365222]: ERROR   18:43:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:43:01 compute-0 openstack_network_exporter[365222]: ERROR   18:43:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:43:01 compute-0 openstack_network_exporter[365222]: ERROR   18:43:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:43:01 compute-0 openstack_network_exporter[365222]: ERROR   18:43:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:43:01 compute-0 openstack_network_exporter[365222]: ERROR   18:43:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
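These exporter errors mean openstack_network_exporter is probing for ovs-appctl control sockets it cannot find: ovn-northd does not run on a compute node at all, and the dpif-netdev calls only apply to a userspace (DPDK) datapath, which this host is not using. Listing what control sockets actually exist, as a sketch (the run directories are the conventional ones; adjust to your layout):

    import glob

    # Each OVS/OVN daemon exposes a <name>.<pid>.ctl control socket in its
    # run directory; the exporter fails when the expected file is absent.
    for pattern in ("/var/run/openvswitch/*.ctl", "/var/run/ovn/*.ctl"):
        print(pattern, "->", glob.glob(pattern) or "none found")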
Dec  3 18:43:01 compute-0 nostalgic_ritchie[421092]: {
Dec  3 18:43:01 compute-0 nostalgic_ritchie[421092]:    "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
Dec  3 18:43:01 compute-0 nostalgic_ritchie[421092]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:43:01 compute-0 nostalgic_ritchie[421092]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 18:43:01 compute-0 nostalgic_ritchie[421092]:        "osd_id": 1,
Dec  3 18:43:01 compute-0 nostalgic_ritchie[421092]:        "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:43:01 compute-0 nostalgic_ritchie[421092]:        "type": "bluestore"
Dec  3 18:43:01 compute-0 nostalgic_ritchie[421092]:    },
Dec  3 18:43:01 compute-0 nostalgic_ritchie[421092]:    "2abec9de-afba-437e-9a17-384a1dd8cd50": {
Dec  3 18:43:01 compute-0 nostalgic_ritchie[421092]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:43:01 compute-0 nostalgic_ritchie[421092]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 18:43:01 compute-0 nostalgic_ritchie[421092]:        "osd_id": 2,
Dec  3 18:43:01 compute-0 nostalgic_ritchie[421092]:        "osd_uuid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:43:01 compute-0 nostalgic_ritchie[421092]:        "type": "bluestore"
Dec  3 18:43:01 compute-0 nostalgic_ritchie[421092]:    },
Dec  3 18:43:01 compute-0 nostalgic_ritchie[421092]:    "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": {
Dec  3 18:43:01 compute-0 nostalgic_ritchie[421092]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:43:01 compute-0 nostalgic_ritchie[421092]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 18:43:01 compute-0 nostalgic_ritchie[421092]:        "osd_id": 0,
Dec  3 18:43:01 compute-0 nostalgic_ritchie[421092]:        "osd_uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:43:01 compute-0 nostalgic_ritchie[421092]:        "type": "bluestore"
Dec  3 18:43:01 compute-0 nostalgic_ritchie[421092]:    }
Dec  3 18:43:01 compute-0 nostalgic_ritchie[421092]: }
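nostalgic_ritchie's JSON is keyed by OSD fsid, with ceph_fsid, device, osd_id, and type per entry, which matches the shape of `ceph-volume raw list --format json` (an inference from the output shape, not confirmed by the log). Given captures of both blocks, a sketch that cross-checks the two listings:

    import json

    # Hypothetical captures of the two JSON blocks logged above.
    with open("lvm_list.json") as f:
        lvm = json.load(f)   # keyed by osd_id
    with open("raw_list.json") as f:
        raw = json.load(f)   # keyed by osd_uuid

    for rec in raw.values():
        osd_id = rec["osd_id"]
        tags = lvm[str(osd_id)][0]["tags"]
        assert tags["ceph.osd_fsid"] == rec["osd_uuid"], osd_id
        print(f"osd.{osd_id}: {rec['device']} ({rec['type']}) fsid matches")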
Dec  3 18:43:01 compute-0 systemd[1]: libpod-271180ce22b681de403add7e17704a229951c43d705cf31533d3df987283f381.scope: Deactivated successfully.
Dec  3 18:43:01 compute-0 systemd[1]: libpod-271180ce22b681de403add7e17704a229951c43d705cf31533d3df987283f381.scope: Consumed 1.127s CPU time.
Dec  3 18:43:01 compute-0 podman[421076]: 2025-12-03 18:43:01.602160185 +0000 UTC m=+1.384731552 container died 271180ce22b681de403add7e17704a229951c43d705cf31533d3df987283f381 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_ritchie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Dec  3 18:43:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb7f88cdc9945ea1b5463a268a21af2954d6fa267253d51ceaf7a08920141bab-merged.mount: Deactivated successfully.
Dec  3 18:43:01 compute-0 podman[421076]: 2025-12-03 18:43:01.677009304 +0000 UTC m=+1.459580661 container remove 271180ce22b681de403add7e17704a229951c43d705cf31533d3df987283f381 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_ritchie, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec  3 18:43:01 compute-0 systemd[1]: libpod-conmon-271180ce22b681de403add7e17704a229951c43d705cf31533d3df987283f381.scope: Deactivated successfully.
Dec  3 18:43:01 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:43:01 compute-0 nova_compute[348325]: 2025-12-03 18:43:01.736 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:43:01 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:43:01 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:43:01 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:43:01 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 3e0e3991-436f-48bd-9caf-b3894ce4aa6e does not exist
Dec  3 18:43:01 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 1c89ec4b-d088-4286-80ea-36db1220de07 does not exist
Dec  3 18:43:02 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:43:02 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:43:02 compute-0 nova_compute[348325]: 2025-12-03 18:43:02.570 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:43:03 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1313: 321 pgs: 321 active+clean; 172 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:43:03 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:43:03 compute-0 podman[421186]: 2025-12-03 18:43:03.922944917 +0000 UTC m=+0.088171296 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 18:43:05 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1314: 321 pgs: 321 active+clean; 184 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 697 KiB/s wr, 2 op/s
Dec  3 18:43:05 compute-0 ovn_controller[89305]: 2025-12-03T18:43:05Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e6:73:73 192.168.0.212
Dec  3 18:43:05 compute-0 ovn_controller[89305]: 2025-12-03T18:43:05Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e6:73:73 192.168.0.212
Dec  3 18:43:06 compute-0 nova_compute[348325]: 2025-12-03 18:43:06.476 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:43:06 compute-0 nova_compute[348325]: 2025-12-03 18:43:06.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:43:06 compute-0 nova_compute[348325]: 2025-12-03 18:43:06.486 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  3 18:43:06 compute-0 nova_compute[348325]: 2025-12-03 18:43:06.740 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:43:07 compute-0 nova_compute[348325]: 2025-12-03 18:43:07.077 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "refresh_cache-df72d527-943e-4e8c-b62a-63afa5f18261" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  3 18:43:07 compute-0 nova_compute[348325]: 2025-12-03 18:43:07.077 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquired lock "refresh_cache-df72d527-943e-4e8c-b62a-63afa5f18261" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  3 18:43:07 compute-0 nova_compute[348325]: 2025-12-03 18:43:07.078 348329 DEBUG nova.network.neutron [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec  3 18:43:07 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1315: 321 pgs: 321 active+clean; 188 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 117 KiB/s rd, 760 KiB/s wr, 30 op/s
Dec  3 18:43:07 compute-0 nova_compute[348325]: 2025-12-03 18:43:07.573 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:43:07 compute-0 podman[421209]: 2025-12-03 18:43:07.957048497 +0000 UTC m=+0.107167109 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec  3 18:43:07 compute-0 podman[421208]: 2025-12-03 18:43:07.99032181 +0000 UTC m=+0.151037511 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible)
Dec  3 18:43:08 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:43:08 compute-0 nova_compute[348325]: 2025-12-03 18:43:08.738 348329 DEBUG nova.network.neutron [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Updating instance_info_cache with network_info: [{"id": "03bf6208-f40b-4534-a297-122588172fa5", "address": "fa:16:3e:41:ba:29", "network": {"id": "85c8d446-ad7f-4d1b-a311-89b0b07e8aad", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.170", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2770200bdb2436c90142fa2e5ddcd47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03bf6208-f4", "ovs_interfaceid": "03bf6208-f40b-4534-a297-122588172fa5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  3 18:43:08 compute-0 nova_compute[348325]: 2025-12-03 18:43:08.760 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Releasing lock "refresh_cache-df72d527-943e-4e8c-b62a-63afa5f18261" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  3 18:43:08 compute-0 nova_compute[348325]: 2025-12-03 18:43:08.761 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec  3 18:43:08 compute-0 nova_compute[348325]: 2025-12-03 18:43:08.761 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:43:08 compute-0 nova_compute[348325]: 2025-12-03 18:43:08.762 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:43:08 compute-0 nova_compute[348325]: 2025-12-03 18:43:08.762 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:43:08 compute-0 nova_compute[348325]: 2025-12-03 18:43:08.763 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:43:08 compute-0 nova_compute[348325]: 2025-12-03 18:43:08.763 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:43:08 compute-0 nova_compute[348325]: 2025-12-03 18:43:08.764 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:43:08 compute-0 nova_compute[348325]: 2025-12-03 18:43:08.765 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
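This run of periodic tasks is nova-compute's oslo.service loop; the final line shows _reclaim_queued_deletes exiting early because reclaim_instance_interval is at its default of 0, i.e. deferred (soft) deletion is disabled on this host. The mechanism behind the "Running periodic task" lines, reduced to a sketch (the spacing value is illustrative, not nova's configuration):

    from oslo_service import periodic_task

    class ComputeManagerSketch(periodic_task.PeriodicTasks):

        @periodic_task.periodic_task(spacing=60)
        def _reclaim_queued_deletes(self, context):
            # nova returns immediately when CONF.reclaim_instance_interval
            # <= 0, which is exactly what the "skipping..." line reports.
            pass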
Dec  3 18:43:09 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1316: 321 pgs: 321 active+clean; 196 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 142 KiB/s rd, 1.5 MiB/s wr, 50 op/s
Dec  3 18:43:11 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1317: 321 pgs: 321 active+clean; 199 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 142 KiB/s rd, 1.5 MiB/s wr, 51 op/s
Dec  3 18:43:11 compute-0 nova_compute[348325]: 2025-12-03 18:43:11.744 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:43:12 compute-0 nova_compute[348325]: 2025-12-03 18:43:12.575 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:43:13 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1318: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Dec  3 18:43:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:13.248 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 18:43:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:13.249 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  3 18:43:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:13.249 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:43:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:13.250 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7eff8d7fffe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:43:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:13.250 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:43:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:13.250 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff9026f920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:43:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:13.251 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:43:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:13.251 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:43:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:13.251 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffa10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:43:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:13.251 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8daba2d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:43:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:13.251 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:43:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:13.251 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff90799b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:43:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:13.251 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:43:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:13.251 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8f46ebd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:43:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:13.251 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:43:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:13.252 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:43:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:13.252 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:43:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:13.252 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:43:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:13.252 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:43:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:13.252 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:43:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:13.252 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:43:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:13.252 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:43:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:13.253 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:43:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:13.253 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:43:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:13.253 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:43:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:13.253 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7fff50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:43:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:13.253 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:43:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:13.253 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7fffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:43:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:13.253 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8ef7c7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c352ed0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:43:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:13.255 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '1ca1fbdb-089c-4544-821e-0542089b8424', 'name': 'test_0', 'flavor': {'id': '6cb250a4-d28c-4125-888b-653b31e29275', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e68cd467-b4e6-45e0-8e55-984fda402294'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'd2770200bdb2436c90142fa2e5ddcd47', 'user_id': '56338958b09445f5af9aa9e4601a1a8a', 'hostId': '233c08f520fd9700ef62a871bc5d558f2659759d89ea6c0726998878', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  3 18:43:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:13.258 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'df72d527-943e-4e8c-b62a-63afa5f18261', 'name': 'vn-66btob3-hjy2dfx75wfw-5fmurbrh4hte-vnf-qa644it4tdj5', 'flavor': {'id': '6cb250a4-d28c-4125-888b-653b31e29275', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e68cd467-b4e6-45e0-8e55-984fda402294'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'd2770200bdb2436c90142fa2e5ddcd47', 'user_id': '56338958b09445f5af9aa9e4601a1a8a', 'hostId': '233c08f520fd9700ef62a871bc5d558f2659759d89ea6c0726998878', 'status': 'active', 'metadata': {'metering.server_group': 'b322e118-e1cc-40be-8d8c-553648144092'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  3 18:43:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:13.261 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance de3992c5-c1ad-4da3-9276-954d6365c3c9 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec  3 18:43:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:13.262 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/de3992c5-c1ad-4da3-9276-954d6365c3c9 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}381125532ab0338283f553a8d9011c877e61445a70740cb69aa0e3ed00495f3c" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec  3 18:43:13 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:43:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_18:43:13
Dec  3 18:43:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 18:43:13 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 18:43:13 compute-0 ceph-mgr[193091]: [balancer INFO root] pools ['images', 'default.rgw.log', '.mgr', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.meta', '.rgw.root', 'backups', 'volumes', 'vms']
Dec  3 18:43:13 compute-0 ceph-mgr[193091]: [balancer INFO root] prepared 0/10 changes
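The balancer lines above record one automatic optimization pass: mode upmap, a 5% ceiling on misplaced PGs, the eleven candidate pools, and "prepared 0/10 changes", meaning the pass found nothing to rebalance. A convenience sketch for checking the same state from Python; 'ceph balancer status' is a real CLI command, but running it this way assumes the ceph client and an admin keyring are available on the host:

    import subprocess

    # Prints the balancer module state, including the active flag and the
    # mode ("upmap" in the log above).
    out = subprocess.run(["ceph", "balancer", "status"],
                         check=True, capture_output=True, text=True).stdout
    print(out)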
Dec  3 18:43:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:43:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:43:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:43:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:43:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:43:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.242 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1960 Content-Type: application/json Date: Wed, 03 Dec 2025 18:43:13 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-66d5274e-4210-4fb6-92c7-a2f78082b18e x-openstack-request-id: req-66d5274e-4210-4fb6-92c7-a2f78082b18e _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.242 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "de3992c5-c1ad-4da3-9276-954d6365c3c9", "name": "vn-66btob3-t73jgstwyk5c-ol75pntdsuyz-vnf-noho2adux65j", "status": "ACTIVE", "tenant_id": "d2770200bdb2436c90142fa2e5ddcd47", "user_id": "56338958b09445f5af9aa9e4601a1a8a", "metadata": {"metering.server_group": "b322e118-e1cc-40be-8d8c-553648144092"}, "hostId": "233c08f520fd9700ef62a871bc5d558f2659759d89ea6c0726998878", "image": {"id": "e68cd467-b4e6-45e0-8e55-984fda402294", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/e68cd467-b4e6-45e0-8e55-984fda402294"}]}, "flavor": {"id": "6cb250a4-d28c-4125-888b-653b31e29275", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/6cb250a4-d28c-4125-888b-653b31e29275"}]}, "created": "2025-12-03T18:42:19Z", "updated": "2025-12-03T18:42:28Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.212", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:e6:73:73"}, {"version": 4, "addr": "192.168.122.241", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:e6:73:73"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/de3992c5-c1ad-4da3-9276-954d6365c3c9"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/de3992c5-c1ad-4da3-9276-954d6365c3c9"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-03T18:42:28.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000003", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.242 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/de3992c5-c1ad-4da3-9276-954d6365c3c9 used request id req-66d5274e-4210-4fb6-92c7-a2f78082b18e request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
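The REQ/RESP pair above is ceilometer's embedded novaclient fetching metadata for instance de3992c5... from the Nova API, because the local libvirt discovery had no cached record for it. A hedged keystoneauth1 sketch of the same GET; the auth parameters and the server UUID placeholder are assumptions, only the endpoint and microversion header come from the log:

    from keystoneauth1 import session
    from keystoneauth1.identity import v3

    # Placeholder credentials; substitute real values for your cloud.
    auth = v3.Password(auth_url="https://keystone.example.com:5000/v3",
                       username="ceilometer", password="secret",
                       project_name="service",
                       user_domain_name="Default", project_domain_name="Default")
    sess = session.Session(auth=auth)

    # Same request shape as the logged "REQ: curl -g -i -X GET ..." line.
    resp = sess.get(
        "https://nova-internal.openstack.svc:8774/v2.1/servers/<server-uuid>",
        headers={"X-OpenStack-Nova-API-Version": "2.1"})
    print(resp.json()["server"]["OS-EXT-SRV-ATTR:instance_name"])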
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.244 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'de3992c5-c1ad-4da3-9276-954d6365c3c9', 'name': 'vn-66btob3-t73jgstwyk5c-ol75pntdsuyz-vnf-noho2adux65j', 'flavor': {'id': '6cb250a4-d28c-4125-888b-653b31e29275', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e68cd467-b4e6-45e0-8e55-984fda402294'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'd2770200bdb2436c90142fa2e5ddcd47', 'user_id': '56338958b09445f5af9aa9e4601a1a8a', 'hostId': '233c08f520fd9700ef62a871bc5d558f2659759d89ea6c0726998878', 'status': 'active', 'metadata': {'metering.server_group': 'b322e118-e1cc-40be-8d8c-553648144092'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.245 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.245 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.245 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.245 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.246 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-03T18:43:14.245368) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.251 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.257 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.265 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for de3992c5-c1ad-4da3-9276-954d6365c3c9 / tapd2dfa631-e5 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.265 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.267 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
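The "No delta meter predecessor" line above means this is the first sample the inspector has seen for that vNIC, so there is no earlier cumulative reading to subtract and the delta is reported as 0 (consistent with the .delta samples further down). A minimal sketch of that cache-and-subtract pattern; the cache layout here is an assumption for illustration:

    # Previous cumulative readings, keyed by (instance, device). Empty on
    # the first poll, which is exactly the "no predecessor" case logged.
    _prev = {}

    def delta(key, cumulative):
        before = _prev.get(key)
        _prev[key] = cumulative
        if before is None:
            return 0  # first observation: no predecessor
        return cumulative - before

    k = ("de3992c5", "tapd2dfa631-e5")
    print(delta(k, 1486))   # 0 on the first poll
    print(delta(k, 2100))   # 614 on the next poll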
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.267 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7eff8d8a80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.267 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.267 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.267 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.267 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.267 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.outgoing.bytes volume: 2314 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.268 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/network.outgoing.bytes volume: 4722 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.268 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/network.outgoing.bytes volume: 1666 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.269 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-03T18:43:14.267774) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.269 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.269 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7eff8d8a8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.270 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.270 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff9026f920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.270 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff9026f920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.270 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.270 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.270 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/network.outgoing.packets volume: 40 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.271 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/network.outgoing.packets volume: 13 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.271 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.271 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7eff8d8a8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.272 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.272 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-03T18:43:14.270364) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.272 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.272 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.272 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.273 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.273 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.273 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.274 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.274 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7eff8d8a81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.274 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.274 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.275 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.275 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-03T18:43:14.272915) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.275 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-03T18:43:14.275095) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.275 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.275 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.275 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-66btob3-t73jgstwyk5c-ol75pntdsuyz-vnf-noho2adux65j>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-66btob3-t73jgstwyk5c-ol75pntdsuyz-vnf-noho2adux65j>]
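The ERROR above is expected behavior rather than a crash: the libvirt inspector exposes only cumulative counters, so a *.rate pollster raises PollsterPermanentError and is permanently excluded from this source ("Prevent pollster ... from polling ... anymore!"). A rate can still be derived downstream from two cumulative samples; the numbers below are invented for the worked example:

    # Two cumulative network.outgoing.bytes samples taken 300s apart.
    t0, bytes0 = 0.0, 2314
    t1, bytes1 = 300.0, 17314

    rate_bps = (bytes1 - bytes0) / (t1 - t0)
    print(rate_bps)  # 50.0 bytes/s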
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.276 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7eff8d7ff9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.276 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.276 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ffa10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.276 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ffa10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.276 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.277 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-03T18:43:14.276765) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.277 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.incoming.bytes volume: 2094 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.277 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/network.incoming.bytes volume: 4891 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.277 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/network.incoming.bytes volume: 1486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.278 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.278 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7eff8d7fe840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.278 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.278 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8daba2d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.278 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8daba2d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.278 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.279 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-03T18:43:14.278880) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.319 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.320 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.320 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 18:43:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:43:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 18:43:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:43:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:43:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:43:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:43:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:43:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:43:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.360 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.360 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.361 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.387 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.388 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.388 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.390 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
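Each instance above reports three disk.device.capacity samples: two of 1073741824 bytes, matching the m1.small flavor's 1 GiB root and 1 GiB ephemeral disks seen in the discovery output, plus one small device (485376 or 583680 bytes), plausibly the config drive given "config_drive": "True" in the Nova response; that last attribution is an inference, not something the log states. The arithmetic checks out exactly:

    # 1 GiB in bytes, matching the large capacity samples.
    print(1024 ** 3)        # 1073741824
    # The small third device is a whole number of KiB.
    print(485376 / 1024)    # 474.0 KiB
    print(583680 / 1024)    # 570.0 KiB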
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.390 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7eff8d8a82c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.390 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.390 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a82f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.390 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a82f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.391 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.391 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.392 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.392 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.393 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.393 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7eff8d7ff9b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.393 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.394 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff90799b20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.394 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff90799b20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.394 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.396 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-03T18:43:14.390905) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.397 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-03T18:43:14.394339) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.419 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/memory.usage volume: 48.94921875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.439 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/memory.usage volume: 49.08984375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.461 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/memory.usage volume: 49.734375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.462 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
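memory.usage is reported in MiB, and the fractional values above are consistent with libvirt's KiB memory statistics divided by 1024; all three guests sit near 49-50 MiB of their 512 MiB flavor allocation. A quick check that the logged figures convert back to whole numbers of KiB, as a KiB-to-MiB division would produce:

    # Logged MiB values times 1024 land exactly on integer KiB counts.
    print(48.94921875 * 1024)   # 50124.0 KiB (instance 1ca1fbdb...)
    print(49.08984375 * 1024)   # 50268.0 KiB (instance df72d527...)
    print(49.734375 * 1024)     # 50928.0 KiB (instance de3992c5...)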
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.462 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7eff8d8a8350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.462 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.463 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.463 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.463 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.463 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.463 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-03T18:43:14.463420) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.464 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.464 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.465 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.465 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7eff8f682330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.465 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.465 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8f46ebd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.466 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8f46ebd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.466 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.466 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.467 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.467 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.468 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-03T18:43:14.466172) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.469 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.469 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.469 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.470 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.470 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.471 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.471 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
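For reference, the three "volume:" lines logged per instance above are one sample per block device (two 1 GiB volumes plus one small config-drive-sized device, judging by the byte counts). Below is a minimal sketch of that per-device sample fan-out; DeviceStat, stats_to_samples, and the vda/vdb/sda device names are illustrative stand-ins, not ceilometer's actual classes.

# Minimal sketch of the per-device fan-out traced above; names here are
# illustrative, not ceilometer's real implementation.
from dataclasses import dataclass

@dataclass
class DeviceStat:
    instance_id: str
    device: str
    allocation: int  # bytes allocated on the backing store

def stats_to_samples(stats):
    """Yield one (resource_id, meter, volume) tuple per device, mirroring
    the "<instance>/disk.device.allocation volume: N" debug lines."""
    for s in stats:
        yield ("%s-%s" % (s.instance_id, s.device),
               "disk.device.allocation",
               s.allocation)

# The three devices logged for instance 1ca1fbdb... (device names assumed):
stats = [
    DeviceStat("1ca1fbdb-089c-4544-821e-0542089b8424", "vda", 1073741824),
    DeviceStat("1ca1fbdb-089c-4544-821e-0542089b8424", "vdb", 1073741824),
    DeviceStat("1ca1fbdb-089c-4544-821e-0542089b8424", "sda", 485376),
]
for sample in stats_to_samples(stats):
    print(sample)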
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.472 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7eff8d7ff4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.472 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.472 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.472 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.472 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.473 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-03T18:43:14.472795) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.538 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.539 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.539 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.609 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.610 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.610 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.710 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.711 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.711 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.712 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
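Each cycle above first checks whether the pollster's source requires coordination; with a coordination group name of [None], this agent simply polls every local instance itself. When a group is configured, ceilometer uses the tooz library's hash ring to split resources across agents. The snippet below is a toy stand-alone illustration of that ownership test, not tooz's actual ring algorithm.

# Toy consistent-hash membership test showing the idea behind the
# "do we need coordination" check; ceilometer actually delegates to tooz.
import hashlib

def owner(agents, resource_id):
    """Map resource_id to exactly one agent in the group (toy hash ring)."""
    digest = int(hashlib.md5(resource_id.encode()).hexdigest(), 16)
    return sorted(agents)[digest % len(agents)]

agents = ["compute-0", "compute-1", "compute-2"]
rid = "1ca1fbdb-089c-4544-821e-0542089b8424"
# With a coordination group, compute-0 would poll rid only if it "owns" it:
print(owner(agents, rid))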
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.712 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7eff8d930c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.712 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.713 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ffce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.713 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ffce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.713 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.713 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.713 14 ERROR ceilometer.polling.manager [-] Preventing pollster network.incoming.bytes.rate from any further polling of [<NovaLikeServer: vn-66btob3-t73jgstwyk5c-ol75pntdsuyz-vnf-noho2adux65j>] on source pollsters!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-66btob3-t73jgstwyk5c-ol75pntdsuyz-vnf-noho2adux65j>]
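The ERROR above is the expected outcome for *.rate meters on this inspector: the libvirt inspector only exposes cumulative counters, so the pollster raises PollsterPermanentError and the manager blacklists those instances for this pollster on this source, skipping them in later cycles. A hedged sketch of that behavior follows; the class and method names are simplified, not ceilometer's exact API.

# Simplified sketch of the permanent-blacklisting pattern implied above.
class PollsterPermanentError(Exception):
    def __init__(self, resources):
        super().__init__(resources)
        self.resources = resources

class RatePollster:
    def get_samples(self, resources):
        # A libvirt-backed inspector yields cumulative counters only, so an
        # instantaneous *.rate meter cannot be computed: give up permanently.
        raise PollsterPermanentError(resources)

blacklist = set()
resources = ["vn-66btob3-t73jgstwyk5c-ol75pntdsuyz-vnf-noho2adux65j"]
try:
    RatePollster().get_samples([r for r in resources if r not in blacklist])
except PollsterPermanentError as exc:
    blacklist.update(exc.resources)  # never offered to this pollster again
print(blacklist)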
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.713 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7eff8d7ff4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.714 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.714 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.714 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.714 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.714 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.latency volume: 1682579508 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.714 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.latency volume: 260360075 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.715 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.latency volume: 147233249 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.715 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.read.latency volume: 1698039964 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.715 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.read.latency volume: 224294548 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.716 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.read.latency volume: 159520694 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.716 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.read.latency volume: 1270610173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.716 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.read.latency volume: 182054323 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.717 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.read.latency volume: 131449970 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.717 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
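The disk.device.*.latency volumes are cumulative device time in nanoseconds since the instance started, so a per-operation figure needs the matching request counter (logged a few lines further down). A quick check with the first device of instance 1ca1fbdb..., using only values taken from this log:

# Back-of-the-envelope mean read latency from the cumulative counters above.
read_time_ns = 1_682_579_508   # disk.device.read.latency, first device
read_requests = 840            # disk.device.read.requests, first device
mean_ms = read_time_ns / read_requests / 1e6
print(f"{mean_ms:.2f} ms per read")  # -> 2.00 ms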
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.718 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7eff8d7ff530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.718 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.718 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.718 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.718 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.718 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-03T18:43:14.713213) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.719 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.719 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-03T18:43:14.714349) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.719 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.719 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.720 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-03T18:43:14.718896) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.720 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.720 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.720 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.720 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.721 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.721 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.722 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
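Note the interleaving of two workers through this stretch of the log: worker 14 runs the pollsters and enqueues a heartbeat for each meter it touches, while worker 12 stamps and logs the "Updated heartbeat for ..." lines slightly later. The single-process sketch below shows that producer/consumer pattern; the queue layout is illustrative, not ceilometer's implementation.

# Producer/consumer sketch of the heartbeat bookkeeping seen above.
import queue
import threading
from datetime import datetime, timezone

heartbeats = {}
q = queue.Queue()

def updater():
    """Consume meter names and record when each was last polled."""
    while True:
        meter = q.get()
        if meter is None:  # sentinel: shut down
            break
        heartbeats[meter] = datetime.now(timezone.utc)
        print(f"Updated heartbeat for {meter} ({heartbeats[meter].isoformat()})")

t = threading.Thread(target=updater)
t.start()
for meter in ("disk.device.read.requests", "cpu"):
    q.put(meter)  # the poller only enqueues; the updater stamps the time
q.put(None)
t.join()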
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.722 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7eff8d7ff590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.722 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.722 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.722 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.722 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.723 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.723 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.723 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.724 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.724 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.724 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.725 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.725 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.725 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.726 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
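disk.device.usage repeats the same per-device byte counts as disk.device.allocation above because both ultimately derive from the hypervisor's block-info triple. Assuming the libvirt-python bindings and a reachable local hypervisor, the underlying numbers can be queried directly; the device names below are assumptions.

# Illustrative query of the numbers behind disk.device.allocation/usage.
# domain.blockInfo() returns [capacity, allocation, physical] in bytes.
import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByUUIDString("1ca1fbdb-089c-4544-821e-0542089b8424")
for dev in ("vda", "vdb"):  # device names are assumptions
    capacity, allocation, physical = dom.blockInfo(dev)
    print(dev, capacity, allocation, physical)
conn.close()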
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.726 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7eff8d7ff5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.726 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-03T18:43:14.722925) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.727 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.727 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.727 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.727 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.727 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-03T18:43:14.727312) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.727 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.727 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.728 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.728 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.write.bytes volume: 41836544 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.728 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.729 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.729 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.write.bytes volume: 41689088 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.729 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.730 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.730 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.731 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7eff8d8a8620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.731 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.731 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.731 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.731 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.731 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.732 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.732 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.732 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
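The power.state volume of 1 reported for all three instances corresponds to a running domain. libvirt's virDomainState codes (nova's power_state numbering agrees that 1 == running) are reproduced below for reference.

# Reference mapping for the power.state volumes logged above.
LIBVIRT_DOMAIN_STATE = {
    0: "nostate", 1: "running", 2: "blocked", 3: "paused",
    4: "shutdown", 5: "shutoff", 6: "crashed", 7: "pmsuspended",
}
print(LIBVIRT_DOMAIN_STATE[1])  # -> "running"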
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.733 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7eff8d7ff650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.733 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-03T18:43:14.731782) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.733 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.733 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.733 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.733 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.734 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.latency volume: 6303799002 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.734 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.latency volume: 23959545 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.734 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.735 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.write.latency volume: 9999121595 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.735 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.write.latency volume: 29522381 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.735 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.736 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.write.latency volume: 6408143893 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.736 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.write.latency volume: 21269890 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.736 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.737 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.737 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7eff8d7ff6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.737 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.737 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.738 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.738 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.738 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.738 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.738 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.739 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.write.requests volume: 239 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.739 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.739 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.740 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.write.requests volume: 218 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.740 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.740 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.741 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.741 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7eff8d7ffa40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.741 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.741 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ffef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.741 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ffef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.742 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.742 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-03T18:43:14.733836) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.742 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.742 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-03T18:43:14.738133) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.742 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-03T18:43:14.742077) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.742 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.743 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.743 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
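A *.delta meter is the difference between the current cumulative counter and the value cached at the previous poll, so the 84-byte figures above represent one polling interval's worth of incoming traffic. The sketch below shows that derivation; the cache layout is illustrative.

# Deriving a *.delta meter from a cumulative counter with a per-resource
# cache of the previous reading (illustrative layout).
_previous = {}

def delta(resource_id, cumulative):
    """Return bytes since the last poll (0 on the first observation)."""
    last = _previous.get(resource_id)
    _previous[resource_id] = cumulative
    return 0 if last is None else max(0, cumulative - last)

print(delta("if-1", 1000))  # first poll -> 0
print(delta("if-1", 1084))  # next poll  -> 84, as in the log above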
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.743 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7eff8d7ff710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.743 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.743 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.744 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.744 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.744 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
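Unlike the device meters, disk.ephemeral.size is taken from instance (flavor) metadata rather than hypervisor counters, which is presumably why no per-device _stats_to_sample lines appear between the heartbeat and the "Finished" message. A hedged sketch with a hypothetical instance object; the flavor dict shape is an assumption, not the compute API's exact schema.

# Hypothetical illustration: ephemeral size read from flavor metadata.
def ephemeral_size_gb(instance):
    # flavor dict shape assumed; real servers expose it via the compute API
    return instance["flavor"].get("ephemeral", 0)

instance = {"id": "1ca1fbdb-089c-4544-821e-0542089b8424",
            "flavor": {"ephemeral": 0, "disk": 1}}
print(ephemeral_size_gb(instance))  # 0 GB: nothing notable to sample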
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.745 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7eff8d7fff20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.745 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.745 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7fff50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.745 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7fff50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.745 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-03T18:43:14.744089) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.746 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.746 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.incoming.packets volume: 20 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.746 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/network.incoming.packets volume: 32 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.746 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.747 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.747 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7eff8d7ff770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.747 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.747 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff7a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.747 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff7a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.748 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.748 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.749 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7eff8d7fff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.749 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.749 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7fffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.749 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7fffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.749 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.749 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.750 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.750 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.750 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.750 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7eff8d7fdac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.750 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.750 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8ef7c7d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.750 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8ef7c7d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.751 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.751 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-03T18:43:14.746054) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.751 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/cpu volume: 39640000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.751 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/cpu volume: 284550000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.751 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/cpu volume: 35870000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.751 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
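The cpu volumes are cumulative guest CPU time in nanoseconds (39,640,000,000 ns is roughly 39.6 s of CPU time for instance 1ca1fbdb...). A utilization percentage needs two successive readings; in the sketch below the second reading and the 10 s interval are hypothetical.

# CPU utilization from two cumulative readings of the cpu meter.
def cpu_util(prev_ns, cur_ns, wall_seconds, vcpus):
    """Percentage of vCPU capacity used between two cumulative readings."""
    return (cur_ns - prev_ns) / (wall_seconds * 1e9 * vcpus) * 100.0

prev = 39_640_000_000      # logged value for instance 1ca1fbdb...
cur = prev + 500_000_000   # assumed reading one 10 s interval later
print(f"{cpu_util(prev, cur, 10, 1):.1f}%")  # -> 5.0%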
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.752 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-03T18:43:14.748074) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.752 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-03T18:43:14.749569) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.752 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-03T18:43:14.751075) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.752 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.752 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.753 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.753 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.753 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.753 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.753 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.754 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.754 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.754 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.754 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.754 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.754 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.755 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.755 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.755 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.755 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.755 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.755 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.755 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.756 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.756 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.756 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.756 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.756 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:43:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:43:14.756 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
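
The cpu pollster above reports cumulative guest CPU time in nanoseconds (volume: 39640000000 and so on), not a percentage; rate conversion happens downstream. A minimal Python sketch of that conversion between two polls, with illustrative sample values (the helper and the numbers are assumptions, not ceilometer code):

    # Two cumulative cpu-time readings (ns) taken seconds_between apart.
    # Ceilometer publishes only the raw counter; a pipeline or consumer
    # derives average utilisation like this.
    def cpu_util_percent(ns_prev, ns_now, seconds_between, vcpus):
        return 100.0 * (ns_now - ns_prev) / (seconds_between * 1e9 * vcpus)

    # e.g. instance de3992c5 shows 35.87 s accumulated; if a poll 300 s
    # earlier (1 vCPU) had read 32.87 s, the average load was 1 %:
    print(cpu_util_percent(32_870_000_000, 35_870_000_000, 300, 1))  # 1.0
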
Dec  3 18:43:15 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1319: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 57 op/s
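
Each ceph-mgr pgmap line repeats the same fixed fields: PG states, data size, used/avail, and an optional client-throughput tail. A small sketch that extracts the throughput from the line above; the regex is fitted to the exact format seen in this log, which is not a stable interface:

    import re

    line = ("pgmap v1319: 321 pgs: 321 active+clean; 201 MiB data, "
            "319 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, "
            "1.5 MiB/s wr, 57 op/s")

    # rd is optional: write-only intervals omit it (cf. later pgmap lines).
    m = re.search(r"(?:([\d.]+ (?:[KMG]i)?B/s) rd, )?"
                  r"([\d.]+ (?:[KMG]i)?B/s) wr, (\d+) op/s", line)
    if m:
        print(m.groups())  # ('166 KiB/s', '1.5 MiB/s', '57')
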
Dec  3 18:43:16 compute-0 nova_compute[348325]: 2025-12-03 18:43:16.749 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:43:17 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1320: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 165 KiB/s rd, 822 KiB/s wr, 54 op/s
Dec  3 18:43:17 compute-0 nova_compute[348325]: 2025-12-03 18:43:17.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:43:17 compute-0 nova_compute[348325]: 2025-12-03 18:43:17.522 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:43:17 compute-0 nova_compute[348325]: 2025-12-03 18:43:17.523 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:43:17 compute-0 nova_compute[348325]: 2025-12-03 18:43:17.524 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
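
The acquire/release pair above is oslo.concurrency's in-process named lock; nova reaches it through its own wrapper, but the plain decorator form below produces exactly this kind of DEBUG output. A standalone sketch (the function body is illustrative):

    from oslo_concurrency import lockutils

    # Serialises callers on the named lock "compute_resources", logging
    # "Acquiring lock ..." / "Lock ... acquired/released" as seen above.
    @lockutils.synchronized('compute_resources')
    def clean_compute_node_cache():
        print('cache cleaned while holding the lock')

    clean_compute_node_cache()
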
Dec  3 18:43:17 compute-0 nova_compute[348325]: 2025-12-03 18:43:17.525 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  3 18:43:17 compute-0 nova_compute[348325]: 2025-12-03 18:43:17.526 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 18:43:17 compute-0 nova_compute[348325]: 2025-12-03 18:43:17.579 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:43:17 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:43:17 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2508323907' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:43:18 compute-0 nova_compute[348325]: 2025-12-03 18:43:18.021 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
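
The resource tracker shells out to the ceph CLI rather than using a bindings call, and the mon audit lines above show the same request arriving as a "df" mon_command. A minimal reproduction of that probe, assuming the same keyring and conf path are present on the host:

    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    stats = json.loads(out)['stats']
    # total_bytes / total_avail_bytes are the cluster-wide figures the
    # pgmap lines also report (60 GiB / 60 GiB avail here).
    print(stats['total_avail_bytes'], 'of', stats['total_bytes'], 'bytes free')
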
Dec  3 18:43:18 compute-0 nova_compute[348325]: 2025-12-03 18:43:18.139 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:43:18 compute-0 nova_compute[348325]: 2025-12-03 18:43:18.140 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:43:18 compute-0 nova_compute[348325]: 2025-12-03 18:43:18.140 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:43:18 compute-0 nova_compute[348325]: 2025-12-03 18:43:18.144 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:43:18 compute-0 nova_compute[348325]: 2025-12-03 18:43:18.145 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:43:18 compute-0 nova_compute[348325]: 2025-12-03 18:43:18.145 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:43:18 compute-0 nova_compute[348325]: 2025-12-03 18:43:18.150 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:43:18 compute-0 nova_compute[348325]: 2025-12-03 18:43:18.150 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:43:18 compute-0 nova_compute[348325]: 2025-12-03 18:43:18.151 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
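
The skip is logged once per disk: a network (here rbd-backed) disk in the libvirt XML has no <source file=...> or <source dev=...> path, which is the condition _get_instance_disk_info_from_config is reporting. A self-contained sketch of that check; the XML snippet is illustrative, modelled on the conventional rbd volume naming:

    import xml.etree.ElementTree as ET

    domain_xml = """
    <domain><devices>
      <disk type="network" device="disk">
        <source protocol="rbd" name="vms/de3992c5-c1ad-4da3-9276-954d6365c3c9_disk"/>
        <target dev="vda"/>
      </disk>
    </devices></domain>"""

    for disk in ET.fromstring(domain_xml).iter('disk'):
        src = disk.find('source')
        path = (src.get('file') or src.get('dev')) if src is not None else None
        if path is None:  # rbd/network disk: nothing on the local filesystem
            print('skipping disk', disk.find('target').get('dev'), '- no path')
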
Dec  3 18:43:18 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:43:18 compute-0 nova_compute[348325]: 2025-12-03 18:43:18.605 348329 WARNING nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  3 18:43:18 compute-0 nova_compute[348325]: 2025-12-03 18:43:18.607 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3529MB free_disk=59.88883972167969GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
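
The pci_devices payload in the resource view is plain JSON, so it can be summarised directly. A sketch over an abbreviated two-entry copy of the list above (the full list is in the log line):

    import json
    from collections import Counter

    pci_devices = json.loads('''[
      {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2",
       "product_id": "7020", "vendor_id": "8086", "numa_node": null,
       "label": "label_8086_7020", "dev_type": "type-PCI"},
      {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0",
       "product_id": "1001", "vendor_id": "1af4", "numa_node": null,
       "label": "label_1af4_1001", "dev_type": "type-PCI"}]''')

    # 8086 entries are emulated Intel chipset functions; 1af4 is the
    # Red Hat virtio vendor id, consistent with a KVM guest.
    print(Counter(d['vendor_id'] for d in pci_devices))
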
Dec  3 18:43:18 compute-0 nova_compute[348325]: 2025-12-03 18:43:18.608 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:43:18 compute-0 nova_compute[348325]: 2025-12-03 18:43:18.608 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:43:18 compute-0 nova_compute[348325]: 2025-12-03 18:43:18.737 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance 1ca1fbdb-089c-4544-821e-0542089b8424 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  3 18:43:18 compute-0 nova_compute[348325]: 2025-12-03 18:43:18.738 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance df72d527-943e-4e8c-b62a-63afa5f18261 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  3 18:43:18 compute-0 nova_compute[348325]: 2025-12-03 18:43:18.738 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance de3992c5-c1ad-4da3-9276-954d6365c3c9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  3 18:43:18 compute-0 nova_compute[348325]: 2025-12-03 18:43:18.739 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  3 18:43:18 compute-0 nova_compute[348325]: 2025-12-03 18:43:18.740 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=59GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  3 18:43:18 compute-0 nova_compute[348325]: 2025-12-03 18:43:18.831 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 18:43:19 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1321: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 759 KiB/s wr, 27 op/s
Dec  3 18:43:19 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:43:19 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3611194094' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:43:19 compute-0 nova_compute[348325]: 2025-12-03 18:43:19.261 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 18:43:19 compute-0 nova_compute[348325]: 2025-12-03 18:43:19.273 348329 DEBUG nova.compute.provider_tree [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  3 18:43:19 compute-0 nova_compute[348325]: 2025-12-03 18:43:19.301 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
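
Placement derives usable capacity from this inventory as (total - reserved) × allocation_ratio per resource class, which is why 3 allocated VCPUs out of 8 physical still leaves ample headroom. Worked out with the exact figures from the line above:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
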
Dec  3 18:43:19 compute-0 nova_compute[348325]: 2025-12-03 18:43:19.339 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  3 18:43:19 compute-0 nova_compute[348325]: 2025-12-03 18:43:19.340 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.731s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:43:21 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1322: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 21 KiB/s wr, 6 op/s
Dec  3 18:43:21 compute-0 nova_compute[348325]: 2025-12-03 18:43:21.755 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:43:21 compute-0 podman[421297]: 2025-12-03 18:43:21.963766246 +0000 UTC m=+0.124552534 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 18:43:21 compute-0 podman[421303]: 2025-12-03 18:43:21.97002897 +0000 UTC m=+0.115206816 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, config_id=edpm, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., release=1755695350, vcs-type=git)
Dec  3 18:43:21 compute-0 podman[421298]: 2025-12-03 18:43:21.973093015 +0000 UTC m=+0.113839653 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  3 18:43:22 compute-0 nova_compute[348325]: 2025-12-03 18:43:22.582 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:43:23 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1323: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 10 KiB/s wr, 5 op/s
Dec  3 18:43:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:43:23.339 286999 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:43:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:43:23.340 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:43:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:43:23.340 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:43:23 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:43:23 compute-0 podman[421362]: 2025-12-03 18:43:23.899185032 +0000 UTC m=+0.061390881 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  3 18:43:23 compute-0 podman[421360]: 2025-12-03 18:43:23.912817685 +0000 UTC m=+0.081451621 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, release=1214.1726694543, release-0.7.12=, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, build-date=2024-09-18T21:23:30, container_name=kepler, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc.)
Dec  3 18:43:23 compute-0 podman[421361]: 2025-12-03 18:43:23.930495697 +0000 UTC m=+0.099017840 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.3, container_name=ceilometer_agent_ipmi, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 18:43:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 18:43:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:43:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 18:43:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:43:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0016576825714657811 of space, bias 1.0, pg target 0.49730477143973434 quantized to 32 (current 32)
Dec  3 18:43:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:43:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:43:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:43:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:43:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:43:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec  3 18:43:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:43:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 18:43:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:43:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:43:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:43:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 18:43:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:43:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 18:43:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:43:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:43:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:43:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
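
The autoscaler lines all follow one formula: raw pg target = the pool's share of cluster space × a per-cluster PG budget × the pool's bias, then quantized to a power of two (subject to pool minimums, hence "quantized to 32" for near-zero pools). The budget implied by these numbers is 300; reading that as mon_target_pg_per_osd (100) across 3 OSDs is an inference from the log, not stated in it:

    PG_BUDGET = 300  # inferred from the ratios below; see note above

    def raw_pg_target(usage_ratio, bias=1.0):
        return usage_ratio * PG_BUDGET * bias

    print(raw_pg_target(7.185749983720779e-06))       # 0.002155... ('.mgr')
    print(raw_pg_target(0.0016576825714657811))       # 0.497304... ('vms')
    print(raw_pg_target(5.087256625643029e-07, 4.0))  # 0.000610... (cephfs meta)
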
Dec  3 18:43:25 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1324: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec  3 18:43:26 compute-0 nova_compute[348325]: 2025-12-03 18:43:26.759 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:43:27 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1325: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:43:27 compute-0 nova_compute[348325]: 2025-12-03 18:43:27.584 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:43:28 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:43:29 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1326: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:43:29 compute-0 podman[158200]: time="2025-12-03T18:43:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:43:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:43:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 18:43:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:43:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8642 "" "Go-http-client/1.1"
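
These GETs are the libpod REST API served on the podman socket (the same unix:///run/podman/podman.sock that the podman_exporter config below mounts). A stdlib-only sketch of issuing the first request; root access to the socket is assumed:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        # http.client over an AF_UNIX socket instead of TCP.
        def __init__(self, path):
            super().__init__('localhost')
            self._path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    print(len(json.loads(conn.getresponse().read())), 'containers')
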
Dec  3 18:43:31 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1327: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:43:31 compute-0 openstack_network_exporter[365222]: ERROR   18:43:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:43:31 compute-0 openstack_network_exporter[365222]: ERROR   18:43:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:43:31 compute-0 openstack_network_exporter[365222]: ERROR   18:43:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:43:31 compute-0 openstack_network_exporter[365222]: ERROR   18:43:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:43:31 compute-0 openstack_network_exporter[365222]: 
Dec  3 18:43:31 compute-0 openstack_network_exporter[365222]: ERROR   18:43:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:43:31 compute-0 openstack_network_exporter[365222]: 
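
"No control socket files found" means the exporter globbed for a daemon control socket and found nothing: ovn-northd runs on control-plane nodes, not on a compute host running ovn-controller, so these errors recur on every scrape. A sketch of the probe, using the conventional runtime paths (an assumption; the exporter's own search paths may differ):

    import glob

    for pattern in ('/var/run/ovn/ovn-northd.*.ctl',
                    '/var/run/ovn/ovn-controller.*.ctl'):
        hits = glob.glob(pattern)
        print(pattern, '->', hits or 'no control socket files found')
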
Dec  3 18:43:31 compute-0 nova_compute[348325]: 2025-12-03 18:43:31.762 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:43:32 compute-0 nova_compute[348325]: 2025-12-03 18:43:32.585 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:43:33 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1328: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:43:33 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:43:34 compute-0 podman[421415]: 2025-12-03 18:43:34.913035647 +0000 UTC m=+0.085552345 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  3 18:43:35 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1329: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:43:36 compute-0 nova_compute[348325]: 2025-12-03 18:43:36.765 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:43:37 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1330: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:43:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 18:43:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/414314122' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 18:43:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 18:43:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/414314122' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
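
Alongside df, the client also asks for the volumes pool quota; a zero answer means no quota is set. The equivalent CLI probe, assuming the same credentials as the ceph df calls above:

    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'osd', 'pool', 'get-quota', 'volumes', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    quota = json.loads(out)
    # 0 for either field means the pool is unlimited.
    print(quota['quota_max_bytes'], quota['quota_max_objects'])
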
Dec  3 18:43:37 compute-0 nova_compute[348325]: 2025-12-03 18:43:37.590 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:43:38 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:43:39 compute-0 podman[421439]: 2025-12-03 18:43:39.180929134 +0000 UTC m=+0.339715813 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20251125)
Dec  3 18:43:39 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1331: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 682 B/s wr, 0 op/s
Dec  3 18:43:39 compute-0 podman[421438]: 2025-12-03 18:43:39.249679976 +0000 UTC m=+0.418715656 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2)
Dec  3 18:43:41 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1332: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 767 B/s wr, 0 op/s
Dec  3 18:43:41 compute-0 nova_compute[348325]: 2025-12-03 18:43:41.770 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:43:42 compute-0 nova_compute[348325]: 2025-12-03 18:43:42.594 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:43:43 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1333: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 6.1 KiB/s wr, 0 op/s
Dec  3 18:43:43 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:43:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:43:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:43:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:43:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:43:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:43:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:43:45 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1334: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 6.1 KiB/s wr, 0 op/s
Dec  3 18:43:46 compute-0 nova_compute[348325]: 2025-12-03 18:43:46.775 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:43:47 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1335: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 6.1 KiB/s wr, 0 op/s
Dec  3 18:43:47 compute-0 nova_compute[348325]: 2025-12-03 18:43:47.596 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:43:48 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:43:49 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1336: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 6.1 KiB/s wr, 0 op/s
Dec  3 18:43:51 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1337: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 5.4 KiB/s wr, 0 op/s
Dec  3 18:43:51 compute-0 nova_compute[348325]: 2025-12-03 18:43:51.778 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:43:52 compute-0 nova_compute[348325]: 2025-12-03 18:43:52.599 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:43:52 compute-0 podman[421485]: 2025-12-03 18:43:52.94272991 +0000 UTC m=+0.093861738 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  3 18:43:52 compute-0 podman[421484]: 2025-12-03 18:43:52.96034681 +0000 UTC m=+0.111619142 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  3 18:43:52 compute-0 podman[421486]: 2025-12-03 18:43:52.969466114 +0000 UTC m=+0.118768447 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.openshift.tags=minimal rhel9, release=1755695350, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, distribution-scope=public, io.openshift.expose-services=, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.6)
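The podman health_status events above embed each container's edpm_ansible configuration as a Python dict literal under config_data=. A sketch for pulling that dict out of such a line; the brace-matching scan is my own and assumes no braces inside quoted values, which holds for the lines above:

    import ast

    def extract_config_data(line):
        # Find the balanced {...} after "config_data=" and evaluate it.
        # The values use single quotes and True/False, so json.loads
        # would reject them; ast.literal_eval parses them safely.
        start = line.index("config_data=") + len("config_data=")
        depth = 0
        for i in range(start, len(line)):
            if line[i] == "{":
                depth += 1
            elif line[i] == "}":
                depth -= 1
                if depth == 0:
                    return ast.literal_eval(line[start:i + 1])
        raise ValueError("unbalanced config_data literal")

    # e.g. for the node_exporter line above:
    #   cfg = extract_config_data(log_line)
    #   cfg["image"] -> 'quay.io/prometheus/node-exporter:v1.5.0'
    #   cfg["ports"] -> ['9100:9100']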
Dec  3 18:43:53 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1338: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s wr, 0 op/s
Dec  3 18:43:53 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:43:54 compute-0 podman[421543]: 2025-12-03 18:43:54.930890667 +0000 UTC m=+0.097560679 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, architecture=x86_64, version=9.4, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, vcs-type=git, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, container_name=kepler, distribution-scope=public, com.redhat.component=ubi9-container, config_id=edpm, name=ubi9, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec  3 18:43:54 compute-0 podman[421544]: 2025-12-03 18:43:54.978641635 +0000 UTC m=+0.123081313 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.license=GPLv2)
Dec  3 18:43:55 compute-0 podman[421545]: 2025-12-03 18:43:55.004051756 +0000 UTC m=+0.148126095 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec  3 18:43:55 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1339: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:43:55 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Dec  3 18:43:55 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:43:55.286821) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 18:43:55 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Dec  3 18:43:55 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764787435287067, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2048, "num_deletes": 251, "total_data_size": 3451657, "memory_usage": 3502808, "flush_reason": "Manual Compaction"}
Dec  3 18:43:55 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Dec  3 18:43:55 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764787435316090, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 3396814, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25552, "largest_seqno": 27599, "table_properties": {"data_size": 3387351, "index_size": 6021, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18619, "raw_average_key_size": 20, "raw_value_size": 3368717, "raw_average_value_size": 3633, "num_data_blocks": 267, "num_entries": 927, "num_filter_entries": 927, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764787203, "oldest_key_time": 1764787203, "file_creation_time": 1764787435, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a1ac3b74-8599-4a51-8b4c-6fd35a134427", "db_session_id": "TYOLZSJOOVNJYKF8Y1CE", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Dec  3 18:43:55 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 29327 microseconds, and 15808 cpu microseconds.
Dec  3 18:43:55 compute-0 ceph-mon[192802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 18:43:55 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:43:55.316188) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 3396814 bytes OK
Dec  3 18:43:55 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:43:55.316222) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Dec  3 18:43:55 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:43:55.320024) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Dec  3 18:43:55 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:43:55.320048) EVENT_LOG_v1 {"time_micros": 1764787435320041, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 18:43:55 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:43:55.320080) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 18:43:55 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 3443094, prev total WAL file size 3443094, number of live WAL files 2.
Dec  3 18:43:55 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 18:43:55 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:43:55.322003) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Dec  3 18:43:55 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 18:43:55 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(3317KB)], [59(7294KB)]
Dec  3 18:43:55 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764787435322092, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 10866054, "oldest_snapshot_seqno": -1}
Dec  3 18:43:55 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5041 keys, 9107268 bytes, temperature: kUnknown
Dec  3 18:43:55 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764787435375494, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 9107268, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9071750, "index_size": 21840, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12613, "raw_key_size": 125167, "raw_average_key_size": 24, "raw_value_size": 8978699, "raw_average_value_size": 1781, "num_data_blocks": 906, "num_entries": 5041, "num_filter_entries": 5041, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764784942, "oldest_key_time": 0, "file_creation_time": 1764787435, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a1ac3b74-8599-4a51-8b4c-6fd35a134427", "db_session_id": "TYOLZSJOOVNJYKF8Y1CE", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Dec  3 18:43:55 compute-0 ceph-mon[192802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 18:43:55 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:43:55.376237) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 9107268 bytes
Dec  3 18:43:55 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:43:55.378307) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 203.1 rd, 170.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.1 +0.0 blob) out(8.7 +0.0 blob), read-write-amplify(5.9) write-amplify(2.7) OK, records in: 5555, records dropped: 514 output_compression: NoCompression
Dec  3 18:43:55 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:43:55.378327) EVENT_LOG_v1 {"time_micros": 1764787435378317, "job": 32, "event": "compaction_finished", "compaction_time_micros": 53495, "compaction_time_cpu_micros": 26900, "output_level": 6, "num_output_files": 1, "total_output_size": 9107268, "num_input_records": 5555, "num_output_records": 5041, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 18:43:55 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 18:43:55 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764787435379390, "job": 32, "event": "table_file_deletion", "file_number": 61}
Dec  3 18:43:55 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 18:43:55 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764787435381130, "job": 32, "event": "table_file_deletion", "file_number": 59}
Dec  3 18:43:55 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:43:55.321523) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:43:55 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:43:55.381280) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:43:55 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:43:55.381288) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:43:55 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:43:55.381290) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:43:55 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:43:55.381292) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:43:55 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:43:55.381294) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
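Everything rocksdb prints after the EVENT_LOG_v1 marker (the flush job 31 and manual-compaction job 32 records above) is plain JSON, so the stats can be extracted mechanically. A small sketch assuming only that marker format:

    import json

    MARKER = "EVENT_LOG_v1 "

    def rocksdb_event(line):
        # Return the parsed JSON payload, or None for non-event lines.
        _, sep, payload = line.partition(MARKER)
        return json.loads(payload) if sep else None

    # The compaction_finished record above carries
    # num_input_records=5555 and num_output_records=5041, i.e. 514
    # dropped, matching the "records dropped: 514" summary line.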
Dec  3 18:43:56 compute-0 nova_compute[348325]: 2025-12-03 18:43:56.783 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:43:57 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1340: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:43:57 compute-0 nova_compute[348325]: 2025-12-03 18:43:57.602 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:43:58 compute-0 nova_compute[348325]: 2025-12-03 18:43:58.488 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:43:58 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:43:59 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1341: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:43:59 compute-0 podman[158200]: time="2025-12-03T18:43:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:43:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:43:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 18:43:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:43:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8636 "" "Go-http-client/1.1"
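The two GET requests above are a client polling the libpod REST API on the podman socket (the podman_exporter later in this log runs with CONTAINER_HOST=unix:///run/podman/podman.sock). A minimal sketch issuing the same containers/json query over that socket; HTTP/1.0 is used so the reply comes back unchunked, and the socket path is assumed to be the default rootful one:

    import socket

    SOCK = "/run/podman/podman.sock"
    REQ = (b"GET /v4.9.3/libpod/containers/json?all=true HTTP/1.0\r\n"
           b"Host: localhost\r\n\r\n")

    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(SOCK)
        s.sendall(REQ)
        resp = b""
        while chunk := s.recv(65536):  # server closes when done
            resp += chunk

    head, _, body = resp.partition(b"\r\n\r\n")
    print(head.decode().splitlines()[0])  # e.g. "HTTP/1.0 200 OK"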
Dec  3 18:44:01 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1342: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:44:01 compute-0 openstack_network_exporter[365222]: ERROR   18:44:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:44:01 compute-0 openstack_network_exporter[365222]: ERROR   18:44:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:44:01 compute-0 openstack_network_exporter[365222]: ERROR   18:44:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:44:01 compute-0 openstack_network_exporter[365222]: ERROR   18:44:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:44:01 compute-0 openstack_network_exporter[365222]: ERROR   18:44:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
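The exporter errors above mean no *.ctl control sockets were found where openstack_network_exporter looks for ovn-northd, ovsdb-server, and the userspace datapath; on a compute node ovn-northd is genuinely absent, and per the exporter's config_data earlier it reaches the host sockets through /var/run/openvswitch and /var/lib/openvswitch/ovn. A quick host-side check of those paths (the glob patterns are my assumption about where the daemons drop their sockets):

    import glob

    # ovs/ovn daemons create <daemon>.<pid>.ctl sockets in their run dirs.
    for pattern in ("/var/run/openvswitch/*.ctl",
                    "/var/lib/openvswitch/ovn/*.ctl"):
        print(pattern, "->", glob.glob(pattern) or "none")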
Dec  3 18:44:01 compute-0 nova_compute[348325]: 2025-12-03 18:44:01.786 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:44:02 compute-0 nova_compute[348325]: 2025-12-03 18:44:02.604 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:44:03 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:44:03 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:44:03 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 18:44:03 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:44:03 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 18:44:03 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:44:03 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev f6cd96cf-d43f-4144-ae25-6336f833cb0a does not exist
Dec  3 18:44:03 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 9074d092-af7b-44db-ad52-8d2e98f833da does not exist
Dec  3 18:44:03 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 3f8ec0f7-6b88-4837-a2f2-9e820d01b124 does not exist
Dec  3 18:44:03 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 18:44:03 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 18:44:03 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 18:44:03 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:44:03 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:44:03 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:44:03 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1343: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:44:03 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:44:03 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:44:03 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
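Each handle_command/audit pair above is the cephadm mgr module (mgr.compute-0.etccde) dispatching a JSON-formatted mon_command to the leader mon. A minimal sketch reproducing one of those calls through the python3-rados binding, assuming the usual admin conf and keyring paths on this host:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    # Same command the mgr issues in the audit lines above.
    ret, outbuf, outs = cluster.mon_command(
        json.dumps({"prefix": "config generate-minimal-conf"}), b"")
    print(outbuf.decode())
    cluster.shutdown()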
Dec  3 18:44:03 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:44:03 compute-0 podman[421870]: 2025-12-03 18:44:03.942615077 +0000 UTC m=+0.083760451 container create 77d1613dc68200da314ccfcb09749cfea28ba1aaff14529d839695754c8a37c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_khayyam, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:44:03 compute-0 podman[421870]: 2025-12-03 18:44:03.905353214 +0000 UTC m=+0.046498668 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:44:04 compute-0 systemd[1]: Started libpod-conmon-77d1613dc68200da314ccfcb09749cfea28ba1aaff14529d839695754c8a37c2.scope.
Dec  3 18:44:04 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:44:04 compute-0 podman[421870]: 2025-12-03 18:44:04.064599881 +0000 UTC m=+0.205745275 container init 77d1613dc68200da314ccfcb09749cfea28ba1aaff14529d839695754c8a37c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_khayyam, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:44:04 compute-0 podman[421870]: 2025-12-03 18:44:04.076877512 +0000 UTC m=+0.218022886 container start 77d1613dc68200da314ccfcb09749cfea28ba1aaff14529d839695754c8a37c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_khayyam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:44:04 compute-0 podman[421870]: 2025-12-03 18:44:04.08129242 +0000 UTC m=+0.222437794 container attach 77d1613dc68200da314ccfcb09749cfea28ba1aaff14529d839695754c8a37c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:44:04 compute-0 angry_khayyam[421886]: 167 167
Dec  3 18:44:04 compute-0 systemd[1]: libpod-77d1613dc68200da314ccfcb09749cfea28ba1aaff14529d839695754c8a37c2.scope: Deactivated successfully.
Dec  3 18:44:04 compute-0 conmon[421886]: conmon 77d1613dc68200da314c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-77d1613dc68200da314ccfcb09749cfea28ba1aaff14529d839695754c8a37c2.scope/container/memory.events
Dec  3 18:44:04 compute-0 podman[421870]: 2025-12-03 18:44:04.08824871 +0000 UTC m=+0.229394094 container died 77d1613dc68200da314ccfcb09749cfea28ba1aaff14529d839695754c8a37c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_khayyam, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  3 18:44:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-59acf8f0e80238a20bf748501811b64281338066d12899de2143644e277a9249-merged.mount: Deactivated successfully.
Dec  3 18:44:04 compute-0 podman[421870]: 2025-12-03 18:44:04.144617959 +0000 UTC m=+0.285763333 container remove 77d1613dc68200da314ccfcb09749cfea28ba1aaff14529d839695754c8a37c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_khayyam, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec  3 18:44:04 compute-0 systemd[1]: libpod-conmon-77d1613dc68200da314ccfcb09749cfea28ba1aaff14529d839695754c8a37c2.scope: Deactivated successfully.
Dec  3 18:44:04 compute-0 podman[421909]: 2025-12-03 18:44:04.41479317 +0000 UTC m=+0.072513306 container create 44abab875850778d5f9cf7e37fd82054f674c8903a2bc12d9e4ae2feec1c2b78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Dec  3 18:44:04 compute-0 podman[421909]: 2025-12-03 18:44:04.393032888 +0000 UTC m=+0.050753064 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:44:04 compute-0 systemd[1]: Started libpod-conmon-44abab875850778d5f9cf7e37fd82054f674c8903a2bc12d9e4ae2feec1c2b78.scope.
Dec  3 18:44:04 compute-0 nova_compute[348325]: 2025-12-03 18:44:04.495 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:44:04 compute-0 nova_compute[348325]: 2025-12-03 18:44:04.495 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:44:04 compute-0 nova_compute[348325]: 2025-12-03 18:44:04.495 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec  3 18:44:04 compute-0 nova_compute[348325]: 2025-12-03 18:44:04.516 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec  3 18:44:04 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:44:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/519ba1753032aa0029f915268c17d3b21c9ff5e2a699c145db958422e5ac44b8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:44:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/519ba1753032aa0029f915268c17d3b21c9ff5e2a699c145db958422e5ac44b8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:44:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/519ba1753032aa0029f915268c17d3b21c9ff5e2a699c145db958422e5ac44b8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:44:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/519ba1753032aa0029f915268c17d3b21c9ff5e2a699c145db958422e5ac44b8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:44:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/519ba1753032aa0029f915268c17d3b21c9ff5e2a699c145db958422e5ac44b8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 18:44:04 compute-0 podman[421909]: 2025-12-03 18:44:04.554602471 +0000 UTC m=+0.212322617 container init 44abab875850778d5f9cf7e37fd82054f674c8903a2bc12d9e4ae2feec1c2b78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec  3 18:44:04 compute-0 podman[421909]: 2025-12-03 18:44:04.582115974 +0000 UTC m=+0.239836140 container start 44abab875850778d5f9cf7e37fd82054f674c8903a2bc12d9e4ae2feec1c2b78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_maxwell, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:44:04 compute-0 podman[421909]: 2025-12-03 18:44:04.589122705 +0000 UTC m=+0.246842851 container attach 44abab875850778d5f9cf7e37fd82054f674c8903a2bc12d9e4ae2feec1c2b78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_maxwell, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  3 18:44:05 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1344: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:44:05 compute-0 priceless_maxwell[421925]: --> passed data devices: 0 physical, 3 LVM
Dec  3 18:44:05 compute-0 priceless_maxwell[421925]: --> relative data size: 1.0
Dec  3 18:44:05 compute-0 priceless_maxwell[421925]: --> All data devices are unavailable
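The short-lived quay.io/ceph/ceph containers around this point (angry_khayyam, priceless_maxwell, zealous_edison) are cephadm probing the host with ceph-volume. "passed data devices: 0 physical, 3 LVM" followed by "All data devices are unavailable" means the three LVM devices it was offered were all rejected, most likely because they are already consumed by the existing OSDs, so there is nothing new to deploy. The same per-device availability report can be pulled directly with ceph-volume's inventory subcommand; a sketch, to be run where ceph-volume is installed (typically inside the ceph container):

    import json
    import subprocess

    out = subprocess.run(
        ["ceph-volume", "inventory", "--format", "json"],
        check=True, capture_output=True, text=True).stdout
    for dev in json.loads(out):
        # Each entry says whether the device can take a new OSD and why not.
        print(dev["path"], dev["available"], dev.get("rejected_reasons"))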
Dec  3 18:44:05 compute-0 systemd[1]: libpod-44abab875850778d5f9cf7e37fd82054f674c8903a2bc12d9e4ae2feec1c2b78.scope: Deactivated successfully.
Dec  3 18:44:05 compute-0 systemd[1]: libpod-44abab875850778d5f9cf7e37fd82054f674c8903a2bc12d9e4ae2feec1c2b78.scope: Consumed 1.211s CPU time.
Dec  3 18:44:05 compute-0 podman[421960]: 2025-12-03 18:44:05.93351136 +0000 UTC m=+0.053901530 container died 44abab875850778d5f9cf7e37fd82054f674c8903a2bc12d9e4ae2feec1c2b78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_maxwell, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:44:05 compute-0 podman[421954]: 2025-12-03 18:44:05.95272454 +0000 UTC m=+0.099633979 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  3 18:44:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-519ba1753032aa0029f915268c17d3b21c9ff5e2a699c145db958422e5ac44b8-merged.mount: Deactivated successfully.
Dec  3 18:44:06 compute-0 podman[421960]: 2025-12-03 18:44:06.042852815 +0000 UTC m=+0.163242945 container remove 44abab875850778d5f9cf7e37fd82054f674c8903a2bc12d9e4ae2feec1c2b78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_maxwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:44:06 compute-0 systemd[1]: libpod-conmon-44abab875850778d5f9cf7e37fd82054f674c8903a2bc12d9e4ae2feec1c2b78.scope: Deactivated successfully.
Dec  3 18:44:06 compute-0 nova_compute[348325]: 2025-12-03 18:44:06.508 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:44:06 compute-0 nova_compute[348325]: 2025-12-03 18:44:06.791 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:44:06 compute-0 podman[422130]: 2025-12-03 18:44:06.978360735 +0000 UTC m=+0.058486412 container create 72ac88a546f5e09b2c04b529c1ac37891aa73b6fba6c0eb4978d170782076e33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_edison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  3 18:44:07 compute-0 systemd[1]: Started libpod-conmon-72ac88a546f5e09b2c04b529c1ac37891aa73b6fba6c0eb4978d170782076e33.scope.
Dec  3 18:44:07 compute-0 podman[422130]: 2025-12-03 18:44:06.95651949 +0000 UTC m=+0.036645177 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:44:07 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:44:07 compute-0 podman[422130]: 2025-12-03 18:44:07.093560684 +0000 UTC m=+0.173686371 container init 72ac88a546f5e09b2c04b529c1ac37891aa73b6fba6c0eb4978d170782076e33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec  3 18:44:07 compute-0 podman[422130]: 2025-12-03 18:44:07.10523546 +0000 UTC m=+0.185361127 container start 72ac88a546f5e09b2c04b529c1ac37891aa73b6fba6c0eb4978d170782076e33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_edison, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:44:07 compute-0 podman[422130]: 2025-12-03 18:44:07.11015781 +0000 UTC m=+0.190283507 container attach 72ac88a546f5e09b2c04b529c1ac37891aa73b6fba6c0eb4978d170782076e33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec  3 18:44:07 compute-0 zealous_edison[422146]: 167 167
Dec  3 18:44:07 compute-0 systemd[1]: libpod-72ac88a546f5e09b2c04b529c1ac37891aa73b6fba6c0eb4978d170782076e33.scope: Deactivated successfully.
Dec  3 18:44:07 compute-0 podman[422130]: 2025-12-03 18:44:07.118689269 +0000 UTC m=+0.198814966 container died 72ac88a546f5e09b2c04b529c1ac37891aa73b6fba6c0eb4978d170782076e33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_edison, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  3 18:44:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-da59841f06dc4174a0d3868adc25654c14d2ec9f109ce9a105ca7a244faade0d-merged.mount: Deactivated successfully.
Dec  3 18:44:07 compute-0 podman[422130]: 2025-12-03 18:44:07.174126385 +0000 UTC m=+0.254252052 container remove 72ac88a546f5e09b2c04b529c1ac37891aa73b6fba6c0eb4978d170782076e33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_edison, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec  3 18:44:07 compute-0 systemd[1]: libpod-conmon-72ac88a546f5e09b2c04b529c1ac37891aa73b6fba6c0eb4978d170782076e33.scope: Deactivated successfully.
Dec  3 18:44:07 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1345: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:44:07 compute-0 podman[422170]: 2025-12-03 18:44:07.406005529 +0000 UTC m=+0.058437271 container create 918481f324575ff7fdac51f043844fdc86229c71b3111dc875e434a0d5ee5951 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 18:44:07 compute-0 systemd[1]: Started libpod-conmon-918481f324575ff7fdac51f043844fdc86229c71b3111dc875e434a0d5ee5951.scope.
Dec  3 18:44:07 compute-0 podman[422170]: 2025-12-03 18:44:07.382872053 +0000 UTC m=+0.035303825 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:44:07 compute-0 nova_compute[348325]: 2025-12-03 18:44:07.489 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:44:07 compute-0 nova_compute[348325]: 2025-12-03 18:44:07.491 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:44:07 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:44:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6846537b26a01fdacf2f9277d4bbaa969c7b05375b0f1c1dd884f288135da37/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:44:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6846537b26a01fdacf2f9277d4bbaa969c7b05375b0f1c1dd884f288135da37/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:44:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6846537b26a01fdacf2f9277d4bbaa969c7b05375b0f1c1dd884f288135da37/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:44:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6846537b26a01fdacf2f9277d4bbaa969c7b05375b0f1c1dd884f288135da37/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:44:07 compute-0 podman[422170]: 2025-12-03 18:44:07.549356917 +0000 UTC m=+0.201788679 container init 918481f324575ff7fdac51f043844fdc86229c71b3111dc875e434a0d5ee5951 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec  3 18:44:07 compute-0 podman[422170]: 2025-12-03 18:44:07.568330411 +0000 UTC m=+0.220762153 container start 918481f324575ff7fdac51f043844fdc86229c71b3111dc875e434a0d5ee5951 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_dirac, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  3 18:44:07 compute-0 podman[422170]: 2025-12-03 18:44:07.572284178 +0000 UTC m=+0.224715920 container attach 918481f324575ff7fdac51f043844fdc86229c71b3111dc875e434a0d5ee5951 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_dirac, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec  3 18:44:07 compute-0 nova_compute[348325]: 2025-12-03 18:44:07.606 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:44:08 compute-0 adoring_dirac[422186]: {
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:    "0": [
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:        {
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:            "devices": [
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:                "/dev/loop3"
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:            ],
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:            "lv_name": "ceph_lv0",
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:            "lv_size": "21470642176",
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=973fbbc8-5aff-4a53-bee8-42e5a6788dd6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:            "lv_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:            "name": "ceph_lv0",
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:            "tags": {
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:                "ceph.block_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:                "ceph.cluster_name": "ceph",
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:                "ceph.crush_device_class": "",
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:                "ceph.encrypted": "0",
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:                "ceph.osd_fsid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:                "ceph.osd_id": "0",
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:                "ceph.type": "block",
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:                "ceph.vdo": "0"
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:            },
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:            "type": "block",
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:            "vg_name": "ceph_vg0"
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:        }
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:    ],
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:    "1": [
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:        {
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:            "devices": [
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:                "/dev/loop4"
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:            ],
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:            "lv_name": "ceph_lv1",
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:            "lv_size": "21470642176",
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1e2b0083-5293-47cb-a3d1-bc27cedc4ede,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:            "lv_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:            "name": "ceph_lv1",
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:            "tags": {
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:                "ceph.block_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:                "ceph.cluster_name": "ceph",
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:                "ceph.crush_device_class": "",
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:                "ceph.encrypted": "0",
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:                "ceph.osd_fsid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:                "ceph.osd_id": "1",
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:                "ceph.type": "block",
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:                "ceph.vdo": "0"
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:            },
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:            "type": "block",
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:            "vg_name": "ceph_vg1"
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:        }
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:    ],
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:    "2": [
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:        {
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:            "devices": [
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:                "/dev/loop5"
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:            ],
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:            "lv_name": "ceph_lv2",
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:            "lv_size": "21470642176",
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2abec9de-afba-437e-9a17-384a1dd8cd50,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:            "lv_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:            "name": "ceph_lv2",
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:            "tags": {
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:                "ceph.block_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:                "ceph.cluster_name": "ceph",
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:                "ceph.crush_device_class": "",
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:                "ceph.encrypted": "0",
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:                "ceph.osd_fsid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:                "ceph.osd_id": "2",
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:                "ceph.type": "block",
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:                "ceph.vdo": "0"
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:            },
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:            "type": "block",
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:            "vg_name": "ceph_vg2"
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:        }
Dec  3 18:44:08 compute-0 adoring_dirac[422186]:    ]
Dec  3 18:44:08 compute-0 adoring_dirac[422186]: }
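The JSON block above is the probe container's stdout: per-OSD LVM inventory in the shape of "ceph-volume lvm list --format json" output, keyed by OSD id, with one type=block logical volume per OSD and the cluster/OSD identity carried in lv_tags. A minimal Python sketch of folding such output into an OSD-to-device map (the function name is illustrative, not taken from this log):

    import json

    def osd_block_devices(lvm_list_json):
        """Map OSD id -> lv_path for every LV reported with type 'block'."""
        mapping = {}
        for osd_id, lvs in json.loads(lvm_list_json).items():
            for lv in lvs:
                if lv.get("type") == "block":
                    mapping[int(osd_id)] = lv["lv_path"]
        return mapping

    # For the listing above this yields:
    # {0: '/dev/ceph_vg0/ceph_lv0', 1: '/dev/ceph_vg1/ceph_lv1', 2: '/dev/ceph_vg2/ceph_lv2'}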
Dec  3 18:44:08 compute-0 nova_compute[348325]: 2025-12-03 18:44:08.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:44:08 compute-0 nova_compute[348325]: 2025-12-03 18:44:08.488 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  3 18:44:08 compute-0 nova_compute[348325]: 2025-12-03 18:44:08.488 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  3 18:44:08 compute-0 systemd[1]: libpod-918481f324575ff7fdac51f043844fdc86229c71b3111dc875e434a0d5ee5951.scope: Deactivated successfully.
Dec  3 18:44:08 compute-0 podman[422196]: 2025-12-03 18:44:08.592704117 +0000 UTC m=+0.068091748 container died 918481f324575ff7fdac51f043844fdc86229c71b3111dc875e434a0d5ee5951 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Dec  3 18:44:08 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:44:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6846537b26a01fdacf2f9277d4bbaa969c7b05375b0f1c1dd884f288135da37-merged.mount: Deactivated successfully.
Dec  3 18:44:08 compute-0 podman[422196]: 2025-12-03 18:44:08.68643592 +0000 UTC m=+0.161823501 container remove 918481f324575ff7fdac51f043844fdc86229c71b3111dc875e434a0d5ee5951 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_dirac, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:44:08 compute-0 systemd[1]: libpod-conmon-918481f324575ff7fdac51f043844fdc86229c71b3111dc875e434a0d5ee5951.scope: Deactivated successfully.
Dec  3 18:44:09 compute-0 nova_compute[348325]: 2025-12-03 18:44:09.078 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "refresh_cache-1ca1fbdb-089c-4544-821e-0542089b8424" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  3 18:44:09 compute-0 nova_compute[348325]: 2025-12-03 18:44:09.079 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquired lock "refresh_cache-1ca1fbdb-089c-4544-821e-0542089b8424" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  3 18:44:09 compute-0 nova_compute[348325]: 2025-12-03 18:44:09.079 348329 DEBUG nova.network.neutron [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec  3 18:44:09 compute-0 nova_compute[348325]: 2025-12-03 18:44:09.080 348329 DEBUG nova.objects.instance [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lazy-loading 'info_cache' on Instance uuid 1ca1fbdb-089c-4544-821e-0542089b8424 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  3 18:44:09 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1346: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:44:09 compute-0 podman[422308]: 2025-12-03 18:44:09.356615989 +0000 UTC m=+0.100701595 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Dec  3 18:44:09 compute-0 podman[422329]: 2025-12-03 18:44:09.547209103 +0000 UTC m=+0.158740456 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
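The two health_status events above are periodic podman healthchecks: the test path configured in config_data ('/openstack/healthcheck', bind-mounted into each container) is executed inside the container, and a zero exit status is reported as health_status=healthy with health_failing_streak reset to 0. A minimal sketch of triggering the same evaluation by hand; "podman healthcheck run" is a real subcommand, but the wrapper function is illustrative:

    import subprocess

    def container_healthy(name):
        # podman runs the container's configured healthcheck test and mirrors
        # its exit status; 0 corresponds to health_status=healthy above.
        return subprocess.run(["podman", "healthcheck", "run", name]).returncode == 0

    # e.g. container_healthy("ceilometer_agent_compute"), container_healthy("ovn_controller")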
Dec  3 18:44:09 compute-0 podman[422393]: 2025-12-03 18:44:09.744812258 +0000 UTC m=+0.077837906 container create 66921461778f890bf031d2133c5776cb389daa23761240359cdc9c26f41db597 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_goodall, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Dec  3 18:44:09 compute-0 podman[422393]: 2025-12-03 18:44:09.710247282 +0000 UTC m=+0.043272910 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:44:09 compute-0 systemd[1]: Started libpod-conmon-66921461778f890bf031d2133c5776cb389daa23761240359cdc9c26f41db597.scope.
Dec  3 18:44:09 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:44:09 compute-0 podman[422393]: 2025-12-03 18:44:09.886624537 +0000 UTC m=+0.219650145 container init 66921461778f890bf031d2133c5776cb389daa23761240359cdc9c26f41db597 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_goodall, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec  3 18:44:09 compute-0 podman[422393]: 2025-12-03 18:44:09.900234811 +0000 UTC m=+0.233260429 container start 66921461778f890bf031d2133c5776cb389daa23761240359cdc9c26f41db597 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_goodall, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:44:09 compute-0 podman[422393]: 2025-12-03 18:44:09.905291575 +0000 UTC m=+0.238317203 container attach 66921461778f890bf031d2133c5776cb389daa23761240359cdc9c26f41db597 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:44:09 compute-0 wizardly_goodall[422409]: 167 167
Dec  3 18:44:09 compute-0 systemd[1]: libpod-66921461778f890bf031d2133c5776cb389daa23761240359cdc9c26f41db597.scope: Deactivated successfully.
Dec  3 18:44:09 compute-0 podman[422393]: 2025-12-03 18:44:09.910309327 +0000 UTC m=+0.243334975 container died 66921461778f890bf031d2133c5776cb389daa23761240359cdc9c26f41db597 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_goodall, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  3 18:44:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-e393eefee10f49b8a24c72b364546694c65d7b4bc0dc3bfbd66847d99f4e4603-merged.mount: Deactivated successfully.
Dec  3 18:44:09 compute-0 podman[422393]: 2025-12-03 18:44:09.968633144 +0000 UTC m=+0.301658772 container remove 66921461778f890bf031d2133c5776cb389daa23761240359cdc9c26f41db597 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_goodall, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  3 18:44:10 compute-0 systemd[1]: libpod-conmon-66921461778f890bf031d2133c5776cb389daa23761240359cdc9c26f41db597.scope: Deactivated successfully.
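The wizardly_goodall container above lives for roughly 0.2 s (create, start, attach, died, remove) and prints only "167 167", which matches the uid/gid of the ceph user baked into the image; cephadm probes images with throwaway containers like this before deploying daemons. A rough sketch of the pattern, assuming the podman CLI and a stat-based probe (both assumptions; the actual command line is not in this log):

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    def probe_ceph_uid_gid(image=IMAGE):
        # --rm reproduces the create/start/attach/died/remove lifecycle seen above;
        # stat -c '%u %g' on a ceph-owned path prints a single "uid gid" line.
        out = subprocess.check_output(
            ["podman", "run", "--rm", "--entrypoint", "stat",
             image, "-c", "%u %g", "/var/lib/ceph"], text=True)
        uid, gid = out.split()
        return int(uid), int(gid)  # (167, 167) for this image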
Dec  3 18:44:10 compute-0 podman[422431]: 2025-12-03 18:44:10.214655563 +0000 UTC m=+0.071382097 container create 7d177bfd267508368de9cee92619a3f66a19bf32acde2c6a92977527c9c952cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_grothendieck, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  3 18:44:10 compute-0 systemd[1]: Started libpod-conmon-7d177bfd267508368de9cee92619a3f66a19bf32acde2c6a92977527c9c952cc.scope.
Dec  3 18:44:10 compute-0 podman[422431]: 2025-12-03 18:44:10.19240155 +0000 UTC m=+0.049128104 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:44:10 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:44:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf41039de9cc91e0a54a6fa6ccfde744607ca7b96c000acaa7884f89620c6e0b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:44:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf41039de9cc91e0a54a6fa6ccfde744607ca7b96c000acaa7884f89620c6e0b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:44:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf41039de9cc91e0a54a6fa6ccfde744607ca7b96c000acaa7884f89620c6e0b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:44:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf41039de9cc91e0a54a6fa6ccfde744607ca7b96c000acaa7884f89620c6e0b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:44:10 compute-0 podman[422431]: 2025-12-03 18:44:10.371071311 +0000 UTC m=+0.227797845 container init 7d177bfd267508368de9cee92619a3f66a19bf32acde2c6a92977527c9c952cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_grothendieck, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:44:10 compute-0 podman[422431]: 2025-12-03 18:44:10.385400021 +0000 UTC m=+0.242126535 container start 7d177bfd267508368de9cee92619a3f66a19bf32acde2c6a92977527c9c952cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_grothendieck, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec  3 18:44:10 compute-0 podman[422431]: 2025-12-03 18:44:10.389878192 +0000 UTC m=+0.246604746 container attach 7d177bfd267508368de9cee92619a3f66a19bf32acde2c6a92977527c9c952cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_grothendieck, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  3 18:44:11 compute-0 nova_compute[348325]: 2025-12-03 18:44:11.185 348329 DEBUG nova.network.neutron [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Updating instance_info_cache with network_info: [{"id": "3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b", "address": "fa:16:3e:ea:1b:25", "network": {"id": "85c8d446-ad7f-4d1b-a311-89b0b07e8aad", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.128", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2770200bdb2436c90142fa2e5ddcd47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d8505a1-5c", "ovs_interfaceid": "3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  3 18:44:11 compute-0 nova_compute[348325]: 2025-12-03 18:44:11.216 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Releasing lock "refresh_cache-1ca1fbdb-089c-4544-821e-0542089b8424" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  3 18:44:11 compute-0 nova_compute[348325]: 2025-12-03 18:44:11.216 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec  3 18:44:11 compute-0 nova_compute[348325]: 2025-12-03 18:44:11.217 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:44:11 compute-0 nova_compute[348325]: 2025-12-03 18:44:11.217 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:44:11 compute-0 nova_compute[348325]: 2025-12-03 18:44:11.217 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:44:11 compute-0 nova_compute[348325]: 2025-12-03 18:44:11.217 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  3 18:44:11 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1347: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:44:11 compute-0 compassionate_grothendieck[422447]: {
Dec  3 18:44:11 compute-0 compassionate_grothendieck[422447]:    "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
Dec  3 18:44:11 compute-0 compassionate_grothendieck[422447]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:44:11 compute-0 compassionate_grothendieck[422447]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 18:44:11 compute-0 compassionate_grothendieck[422447]:        "osd_id": 1,
Dec  3 18:44:11 compute-0 compassionate_grothendieck[422447]:        "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:44:11 compute-0 compassionate_grothendieck[422447]:        "type": "bluestore"
Dec  3 18:44:11 compute-0 compassionate_grothendieck[422447]:    },
Dec  3 18:44:11 compute-0 compassionate_grothendieck[422447]:    "2abec9de-afba-437e-9a17-384a1dd8cd50": {
Dec  3 18:44:11 compute-0 compassionate_grothendieck[422447]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:44:11 compute-0 compassionate_grothendieck[422447]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 18:44:11 compute-0 compassionate_grothendieck[422447]:        "osd_id": 2,
Dec  3 18:44:11 compute-0 compassionate_grothendieck[422447]:        "osd_uuid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:44:11 compute-0 compassionate_grothendieck[422447]:        "type": "bluestore"
Dec  3 18:44:11 compute-0 compassionate_grothendieck[422447]:    },
Dec  3 18:44:11 compute-0 compassionate_grothendieck[422447]:    "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": {
Dec  3 18:44:11 compute-0 compassionate_grothendieck[422447]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:44:11 compute-0 compassionate_grothendieck[422447]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 18:44:11 compute-0 compassionate_grothendieck[422447]:        "osd_id": 0,
Dec  3 18:44:11 compute-0 compassionate_grothendieck[422447]:        "osd_uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:44:11 compute-0 compassionate_grothendieck[422447]:        "type": "bluestore"
Dec  3 18:44:11 compute-0 compassionate_grothendieck[422447]:    }
Dec  3 18:44:11 compute-0 compassionate_grothendieck[422447]: }
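This second probe returns records keyed by OSD fsid (ceph_fsid, device, osd_id, type=bluestore), the shape of "ceph-volume raw list"-style output. A small sketch cross-checking it against the earlier LVM listing, where the same identity appears as ceph.osd_fsid in lv_tags (function name illustrative):

    import json

    def osd_uuid_by_id(raw_list_json):
        """Invert {osd_uuid: record} into {osd_id: osd_uuid} for bluestore OSDs."""
        return {rec["osd_id"]: uuid
                for uuid, rec in json.loads(raw_list_json).items()
                if rec.get("type") == "bluestore"}

    # Here: {1: '1e2b0083-...', 2: '2abec9de-...', 0: '973fbbc8-...'}; each value
    # matches ceph.osd_fsid in the lv_tags of the same OSD id in the listing above.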
Dec  3 18:44:11 compute-0 systemd[1]: libpod-7d177bfd267508368de9cee92619a3f66a19bf32acde2c6a92977527c9c952cc.scope: Deactivated successfully.
Dec  3 18:44:11 compute-0 podman[422431]: 2025-12-03 18:44:11.44958848 +0000 UTC m=+1.306315024 container died 7d177bfd267508368de9cee92619a3f66a19bf32acde2c6a92977527c9c952cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef)
Dec  3 18:44:11 compute-0 systemd[1]: libpod-7d177bfd267508368de9cee92619a3f66a19bf32acde2c6a92977527c9c952cc.scope: Consumed 1.051s CPU time.
Dec  3 18:44:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf41039de9cc91e0a54a6fa6ccfde744607ca7b96c000acaa7884f89620c6e0b-merged.mount: Deactivated successfully.
Dec  3 18:44:11 compute-0 podman[422431]: 2025-12-03 18:44:11.533581495 +0000 UTC m=+1.390308019 container remove 7d177bfd267508368de9cee92619a3f66a19bf32acde2c6a92977527c9c952cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec  3 18:44:11 compute-0 systemd[1]: libpod-conmon-7d177bfd267508368de9cee92619a3f66a19bf32acde2c6a92977527c9c952cc.scope: Deactivated successfully.
Dec  3 18:44:11 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:44:11 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:44:11 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:44:11 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:44:11 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 9428c574-a5b6-4fbe-a4c5-ab0b1d5860bf does not exist
Dec  3 18:44:11 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev dbf7a3a2-6a3f-4a87-88bb-4dc7e0b418d2 does not exist
Dec  3 18:44:11 compute-0 nova_compute[348325]: 2025-12-03 18:44:11.794 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:44:12 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:44:12 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:44:12 compute-0 nova_compute[348325]: 2025-12-03 18:44:12.609 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:44:13 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1348: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:44:13 compute-0 nova_compute[348325]: 2025-12-03 18:44:13.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:44:13 compute-0 nova_compute[348325]: 2025-12-03 18:44:13.487 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec  3 18:44:13 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:44:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_18:44:13
Dec  3 18:44:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 18:44:13 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 18:44:13 compute-0 ceph-mgr[193091]: [balancer INFO root] pools ['volumes', 'default.rgw.control', 'default.rgw.log', 'images', 'cephfs.cephfs.data', 'default.rgw.meta', 'vms', 'cephfs.cephfs.meta', '.rgw.root', '.mgr', 'backups']
Dec  3 18:44:13 compute-0 ceph-mgr[193091]: [balancer INFO root] prepared 0/10 changes
Dec  3 18:44:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:44:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:44:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:44:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:44:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:44:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:44:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 18:44:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:44:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 18:44:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:44:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:44:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:44:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:44:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:44:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:44:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:44:14 compute-0 nova_compute[348325]: 2025-12-03 18:44:14.494 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:44:15 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1349: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:44:16 compute-0 nova_compute[348325]: 2025-12-03 18:44:16.799 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:44:17 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1350: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:44:17 compute-0 nova_compute[348325]: 2025-12-03 18:44:17.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:44:17 compute-0 nova_compute[348325]: 2025-12-03 18:44:17.514 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:44:17 compute-0 nova_compute[348325]: 2025-12-03 18:44:17.515 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:44:17 compute-0 nova_compute[348325]: 2025-12-03 18:44:17.516 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:44:17 compute-0 nova_compute[348325]: 2025-12-03 18:44:17.516 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  3 18:44:17 compute-0 nova_compute[348325]: 2025-12-03 18:44:17.517 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 18:44:17 compute-0 nova_compute[348325]: 2025-12-03 18:44:17.612 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:44:18 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:44:18 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3555219739' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:44:18 compute-0 nova_compute[348325]: 2025-12-03 18:44:18.025 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
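The resource-tracker audit sizes its RBD-backed disk pool by shelling out to the exact command logged above, ceph df --format=json, and parsing the result (the free_disk figure reported a few lines below comes from this). A minimal sketch of that call; reading the cluster-wide totals is an assumption here, since Nova itself derives pool-level figures from the same JSON:

    import json
    import subprocess

    def ceph_avail_gib(conf="/etc/ceph/ceph.conf", user="openstack"):
        out = subprocess.check_output(
            ["ceph", "df", "--format=json", "--id", user, "--conf", conf])
        stats = json.loads(out)["stats"]            # cluster-wide totals
        return stats["total_avail_bytes"] / 1024 ** 3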
Dec  3 18:44:18 compute-0 nova_compute[348325]: 2025-12-03 18:44:18.129 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:44:18 compute-0 nova_compute[348325]: 2025-12-03 18:44:18.129 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:44:18 compute-0 nova_compute[348325]: 2025-12-03 18:44:18.129 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:44:18 compute-0 nova_compute[348325]: 2025-12-03 18:44:18.134 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:44:18 compute-0 nova_compute[348325]: 2025-12-03 18:44:18.134 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:44:18 compute-0 nova_compute[348325]: 2025-12-03 18:44:18.134 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:44:18 compute-0 nova_compute[348325]: 2025-12-03 18:44:18.139 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:44:18 compute-0 nova_compute[348325]: 2025-12-03 18:44:18.139 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:44:18 compute-0 nova_compute[348325]: 2025-12-03 18:44:18.140 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:44:18 compute-0 nova_compute[348325]: 2025-12-03 18:44:18.569 348329 WARNING nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  3 18:44:18 compute-0 nova_compute[348325]: 2025-12-03 18:44:18.571 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3501MB free_disk=59.88883972167969GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  3 18:44:18 compute-0 nova_compute[348325]: 2025-12-03 18:44:18.571 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:44:18 compute-0 nova_compute[348325]: 2025-12-03 18:44:18.572 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:44:18 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:44:18 compute-0 nova_compute[348325]: 2025-12-03 18:44:18.835 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance 1ca1fbdb-089c-4544-821e-0542089b8424 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  3 18:44:18 compute-0 nova_compute[348325]: 2025-12-03 18:44:18.836 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance df72d527-943e-4e8c-b62a-63afa5f18261 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  3 18:44:18 compute-0 nova_compute[348325]: 2025-12-03 18:44:18.837 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance de3992c5-c1ad-4da3-9276-954d6365c3c9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  3 18:44:18 compute-0 nova_compute[348325]: 2025-12-03 18:44:18.838 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  3 18:44:18 compute-0 nova_compute[348325]: 2025-12-03 18:44:18.838 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=59GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  3 18:44:19 compute-0 nova_compute[348325]: 2025-12-03 18:44:19.074 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 18:44:19 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1351: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:44:19 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:44:19.288 286999 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5a:63:53', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '8e:79:bd:f4:48:1d'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  3 18:44:19 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:44:19.289 286999 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec  3 18:44:19 compute-0 nova_compute[348325]: 2025-12-03 18:44:19.289 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:44:19 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:44:19 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2133570676' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:44:19 compute-0 nova_compute[348325]: 2025-12-03 18:44:19.522 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 18:44:19 compute-0 nova_compute[348325]: 2025-12-03 18:44:19.531 348329 DEBUG nova.compute.provider_tree [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  3 18:44:19 compute-0 ceph-osd[206694]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 18:44:19 compute-0 ceph-osd[206694]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 2400.2 total, 600.0 interval
    Cumulative writes: 6655 writes, 26K keys, 6655 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
    Cumulative WAL: 6655 writes, 1326 syncs, 5.02 writes per sync, written: 0.02 GB, 0.01 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 843 writes, 2651 keys, 843 commit groups, 1.0 writes per commit group, ingest: 2.74 MB, 0.00 MB/s
    Interval WAL: 843 writes, 348 syncs, 2.42 writes per sync, written: 0.00 GB, 0.00 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  3 18:44:19 compute-0 nova_compute[348325]: 2025-12-03 18:44:19.558 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  3 18:44:19 compute-0 nova_compute[348325]: 2025-12-03 18:44:19.560 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  3 18:44:19 compute-0 nova_compute[348325]: 2025-12-03 18:44:19.561 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.989s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:44:21 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1352: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:44:21 compute-0 nova_compute[348325]: 2025-12-03 18:44:21.802 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:44:22 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:44:22.292 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1ac9fd0d-196b-4ea8-9a9a-8aa831092805, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  3 18:44:22 compute-0 nova_compute[348325]: 2025-12-03 18:44:22.614 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:44:23 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1353: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:44:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:44:23.339 286999 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:44:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:44:23.340 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:44:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:44:23.341 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:44:23 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:44:23 compute-0 podman[422590]: 2025-12-03 18:44:23.961817741 +0000 UTC m=+0.108898685 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  3 18:44:23 compute-0 podman[422591]: 2025-12-03 18:44:23.964253061 +0000 UTC m=+0.100625463 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., config_id=edpm, distribution-scope=public, release=1755695350, vcs-type=git, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., name=ubi9-minimal)
Dec  3 18:44:23 compute-0 podman[422589]: 2025-12-03 18:44:23.976118011 +0000 UTC m=+0.122236982 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  3 18:44:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 18:44:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:44:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 18:44:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:44:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0016576825714657811 of space, bias 1.0, pg target 0.49730477143973434 quantized to 32 (current 32)
Dec  3 18:44:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:44:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:44:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:44:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:44:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:44:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec  3 18:44:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:44:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 18:44:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:44:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:44:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:44:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 18:44:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:44:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 18:44:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:44:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:44:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:44:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 18:44:24 compute-0 nova_compute[348325]: 2025-12-03 18:44:24.612 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:44:24 compute-0 nova_compute[348325]: 2025-12-03 18:44:24.653 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Triggering sync for uuid 1ca1fbdb-089c-4544-821e-0542089b8424 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Dec  3 18:44:24 compute-0 nova_compute[348325]: 2025-12-03 18:44:24.654 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Triggering sync for uuid df72d527-943e-4e8c-b62a-63afa5f18261 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Dec  3 18:44:24 compute-0 nova_compute[348325]: 2025-12-03 18:44:24.654 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Triggering sync for uuid de3992c5-c1ad-4da3-9276-954d6365c3c9 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Dec  3 18:44:24 compute-0 nova_compute[348325]: 2025-12-03 18:44:24.654 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "1ca1fbdb-089c-4544-821e-0542089b8424" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:44:24 compute-0 nova_compute[348325]: 2025-12-03 18:44:24.655 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "1ca1fbdb-089c-4544-821e-0542089b8424" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:44:24 compute-0 nova_compute[348325]: 2025-12-03 18:44:24.656 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "df72d527-943e-4e8c-b62a-63afa5f18261" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:44:24 compute-0 nova_compute[348325]: 2025-12-03 18:44:24.657 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "df72d527-943e-4e8c-b62a-63afa5f18261" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:44:24 compute-0 nova_compute[348325]: 2025-12-03 18:44:24.657 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "de3992c5-c1ad-4da3-9276-954d6365c3c9" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:44:24 compute-0 nova_compute[348325]: 2025-12-03 18:44:24.658 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "de3992c5-c1ad-4da3-9276-954d6365c3c9" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:44:24 compute-0 nova_compute[348325]: 2025-12-03 18:44:24.737 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "1ca1fbdb-089c-4544-821e-0542089b8424" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.082s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:44:24 compute-0 nova_compute[348325]: 2025-12-03 18:44:24.740 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "df72d527-943e-4e8c-b62a-63afa5f18261" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.083s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:44:24 compute-0 nova_compute[348325]: 2025-12-03 18:44:24.753 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "de3992c5-c1ad-4da3-9276-954d6365c3c9" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.096s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:44:25 compute-0 nova_compute[348325]: 2025-12-03 18:44:25.235 348329 DEBUG oslo_concurrency.lockutils [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Acquiring lock "a6019a9c-c065-49d8-bef3-219bd2c79d8c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:44:25 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1354: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:44:25 compute-0 nova_compute[348325]: 2025-12-03 18:44:25.237 348329 DEBUG oslo_concurrency.lockutils [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "a6019a9c-c065-49d8-bef3-219bd2c79d8c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:44:25 compute-0 nova_compute[348325]: 2025-12-03 18:44:25.258 348329 DEBUG nova.compute.manager [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec  3 18:44:25 compute-0 nova_compute[348325]: 2025-12-03 18:44:25.369 348329 DEBUG oslo_concurrency.lockutils [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:44:25 compute-0 nova_compute[348325]: 2025-12-03 18:44:25.371 348329 DEBUG oslo_concurrency.lockutils [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:44:25 compute-0 nova_compute[348325]: 2025-12-03 18:44:25.382 348329 DEBUG nova.virt.hardware [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec  3 18:44:25 compute-0 nova_compute[348325]: 2025-12-03 18:44:25.383 348329 INFO nova.compute.claims [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] Claim successful on node compute-0.ctlplane.example.com
Dec  3 18:44:25 compute-0 nova_compute[348325]: 2025-12-03 18:44:25.582 348329 DEBUG oslo_concurrency.processutils [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 18:44:25 compute-0 podman[422670]: 2025-12-03 18:44:25.941022348 +0000 UTC m=+0.108193158 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, name=ubi9, release-0.7.12=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, managed_by=edpm_ansible, io.buildah.version=1.29.0, version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-type=git)
Dec  3 18:44:25 compute-0 podman[422672]: 2025-12-03 18:44:25.947749693 +0000 UTC m=+0.095250302 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  3 18:44:25 compute-0 podman[422671]: 2025-12-03 18:44:25.951855614 +0000 UTC m=+0.109558292 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=edpm, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Dec  3 18:44:26 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:44:26 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/760647020' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:44:26 compute-0 nova_compute[348325]: 2025-12-03 18:44:26.084 348329 DEBUG oslo_concurrency.processutils [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 18:44:26 compute-0 nova_compute[348325]: 2025-12-03 18:44:26.095 348329 DEBUG nova.compute.provider_tree [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  3 18:44:26 compute-0 nova_compute[348325]: 2025-12-03 18:44:26.130 348329 DEBUG nova.scheduler.client.report [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  3 18:44:26 compute-0 nova_compute[348325]: 2025-12-03 18:44:26.168 348329 DEBUG oslo_concurrency.lockutils [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.798s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:44:26 compute-0 nova_compute[348325]: 2025-12-03 18:44:26.169 348329 DEBUG nova.compute.manager [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec  3 18:44:26 compute-0 nova_compute[348325]: 2025-12-03 18:44:26.224 348329 DEBUG nova.compute.manager [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec  3 18:44:26 compute-0 nova_compute[348325]: 2025-12-03 18:44:26.225 348329 DEBUG nova.network.neutron [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec  3 18:44:26 compute-0 nova_compute[348325]: 2025-12-03 18:44:26.252 348329 INFO nova.virt.libvirt.driver [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec  3 18:44:26 compute-0 nova_compute[348325]: 2025-12-03 18:44:26.301 348329 DEBUG nova.compute.manager [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec  3 18:44:26 compute-0 nova_compute[348325]: 2025-12-03 18:44:26.427 348329 DEBUG nova.compute.manager [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec  3 18:44:26 compute-0 nova_compute[348325]: 2025-12-03 18:44:26.428 348329 DEBUG nova.virt.libvirt.driver [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec  3 18:44:26 compute-0 nova_compute[348325]: 2025-12-03 18:44:26.428 348329 INFO nova.virt.libvirt.driver [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] Creating image(s)
Dec  3 18:44:26 compute-0 nova_compute[348325]: 2025-12-03 18:44:26.454 348329 DEBUG nova.storage.rbd_utils [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] rbd image a6019a9c-c065-49d8-bef3-219bd2c79d8c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  3 18:44:26 compute-0 nova_compute[348325]: 2025-12-03 18:44:26.488 348329 DEBUG nova.storage.rbd_utils [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] rbd image a6019a9c-c065-49d8-bef3-219bd2c79d8c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  3 18:44:26 compute-0 nova_compute[348325]: 2025-12-03 18:44:26.524 348329 DEBUG nova.storage.rbd_utils [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] rbd image a6019a9c-c065-49d8-bef3-219bd2c79d8c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  3 18:44:26 compute-0 nova_compute[348325]: 2025-12-03 18:44:26.530 348329 DEBUG oslo_concurrency.processutils [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2a1fd6462a2f789b92c02c5037b663e095546067 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 18:44:26 compute-0 nova_compute[348325]: 2025-12-03 18:44:26.598 348329 DEBUG oslo_concurrency.processutils [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2a1fd6462a2f789b92c02c5037b663e095546067 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 18:44:26 compute-0 nova_compute[348325]: 2025-12-03 18:44:26.598 348329 DEBUG oslo_concurrency.lockutils [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Acquiring lock "2a1fd6462a2f789b92c02c5037b663e095546067" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:44:26 compute-0 nova_compute[348325]: 2025-12-03 18:44:26.599 348329 DEBUG oslo_concurrency.lockutils [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "2a1fd6462a2f789b92c02c5037b663e095546067" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:44:26 compute-0 nova_compute[348325]: 2025-12-03 18:44:26.599 348329 DEBUG oslo_concurrency.lockutils [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "2a1fd6462a2f789b92c02c5037b663e095546067" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:44:26 compute-0 nova_compute[348325]: 2025-12-03 18:44:26.632 348329 DEBUG nova.storage.rbd_utils [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] rbd image a6019a9c-c065-49d8-bef3-219bd2c79d8c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  3 18:44:26 compute-0 nova_compute[348325]: 2025-12-03 18:44:26.642 348329 DEBUG oslo_concurrency.processutils [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/2a1fd6462a2f789b92c02c5037b663e095546067 a6019a9c-c065-49d8-bef3-219bd2c79d8c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 18:44:26 compute-0 nova_compute[348325]: 2025-12-03 18:44:26.806 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:44:26 compute-0 nova_compute[348325]: 2025-12-03 18:44:26.974 348329 DEBUG oslo_concurrency.processutils [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/2a1fd6462a2f789b92c02c5037b663e095546067 a6019a9c-c065-49d8-bef3-219bd2c79d8c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.332s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 18:44:27 compute-0 nova_compute[348325]: 2025-12-03 18:44:27.083 348329 DEBUG nova.storage.rbd_utils [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] resizing rbd image a6019a9c-c065-49d8-bef3-219bd2c79d8c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec  3 18:44:27 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1355: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:44:27 compute-0 nova_compute[348325]: 2025-12-03 18:44:27.244 348329 DEBUG nova.objects.instance [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lazy-loading 'migration_context' on Instance uuid a6019a9c-c065-49d8-bef3-219bd2c79d8c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  3 18:44:27 compute-0 nova_compute[348325]: 2025-12-03 18:44:27.285 348329 DEBUG nova.storage.rbd_utils [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] rbd image a6019a9c-c065-49d8-bef3-219bd2c79d8c_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  3 18:44:27 compute-0 nova_compute[348325]: 2025-12-03 18:44:27.318 348329 DEBUG nova.storage.rbd_utils [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] rbd image a6019a9c-c065-49d8-bef3-219bd2c79d8c_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  3 18:44:27 compute-0 nova_compute[348325]: 2025-12-03 18:44:27.324 348329 DEBUG oslo_concurrency.processutils [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 18:44:27 compute-0 nova_compute[348325]: 2025-12-03 18:44:27.382 348329 DEBUG oslo_concurrency.processutils [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 18:44:27 compute-0 nova_compute[348325]: 2025-12-03 18:44:27.383 348329 DEBUG oslo_concurrency.lockutils [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:44:27 compute-0 nova_compute[348325]: 2025-12-03 18:44:27.384 348329 DEBUG oslo_concurrency.lockutils [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:44:27 compute-0 nova_compute[348325]: 2025-12-03 18:44:27.384 348329 DEBUG oslo_concurrency.lockutils [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:44:27 compute-0 nova_compute[348325]: 2025-12-03 18:44:27.417 348329 DEBUG nova.storage.rbd_utils [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] rbd image a6019a9c-c065-49d8-bef3-219bd2c79d8c_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  3 18:44:27 compute-0 nova_compute[348325]: 2025-12-03 18:44:27.423 348329 DEBUG oslo_concurrency.processutils [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 a6019a9c-c065-49d8-bef3-219bd2c79d8c_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 18:44:27 compute-0 nova_compute[348325]: 2025-12-03 18:44:27.620 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:44:27 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec  3 18:44:27 compute-0 nova_compute[348325]: 2025-12-03 18:44:27.887 348329 DEBUG oslo_concurrency.processutils [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 a6019a9c-c065-49d8-bef3-219bd2c79d8c_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 18:44:28 compute-0 nova_compute[348325]: 2025-12-03 18:44:28.068 348329 DEBUG nova.virt.libvirt.driver [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec  3 18:44:28 compute-0 nova_compute[348325]: 2025-12-03 18:44:28.069 348329 DEBUG nova.virt.libvirt.driver [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] Ensure instance console log exists: /var/lib/nova/instances/a6019a9c-c065-49d8-bef3-219bd2c79d8c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec  3 18:44:28 compute-0 nova_compute[348325]: 2025-12-03 18:44:28.070 348329 DEBUG oslo_concurrency.lockutils [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:44:28 compute-0 nova_compute[348325]: 2025-12-03 18:44:28.071 348329 DEBUG oslo_concurrency.lockutils [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:44:28 compute-0 nova_compute[348325]: 2025-12-03 18:44:28.072 348329 DEBUG oslo_concurrency.lockutils [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:44:28 compute-0 ceph-osd[207851]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 18:44:28 compute-0 ceph-osd[207851]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 2400.2 total, 600.0 interval
    Cumulative writes: 7743 writes, 30K keys, 7743 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
    Cumulative WAL: 7743 writes, 1643 syncs, 4.71 writes per sync, written: 0.02 GB, 0.01 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 787 writes, 2642 keys, 787 commit groups, 1.0 writes per commit group, ingest: 2.91 MB, 0.00 MB/s
    Interval WAL: 787 writes, 315 syncs, 2.50 writes per sync, written: 0.00 GB, 0.00 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  3 18:44:28 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:44:29 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1356: 321 pgs: 321 active+clean; 207 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 355 KiB/s wr, 15 op/s
Dec  3 18:44:29 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec  3 18:44:29 compute-0 podman[158200]: time="2025-12-03T18:44:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:44:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:44:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 18:44:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:44:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8639 "" "Go-http-client/1.1"
Dec  3 18:44:30 compute-0 nova_compute[348325]: 2025-12-03 18:44:30.040 348329 DEBUG nova.network.neutron [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] Successfully updated port: bdba7a40-8840-4832-a614-279c23eb82ca _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec  3 18:44:30 compute-0 nova_compute[348325]: 2025-12-03 18:44:30.071 348329 DEBUG oslo_concurrency.lockutils [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Acquiring lock "refresh_cache-a6019a9c-c065-49d8-bef3-219bd2c79d8c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  3 18:44:30 compute-0 nova_compute[348325]: 2025-12-03 18:44:30.072 348329 DEBUG oslo_concurrency.lockutils [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Acquired lock "refresh_cache-a6019a9c-c065-49d8-bef3-219bd2c79d8c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  3 18:44:30 compute-0 nova_compute[348325]: 2025-12-03 18:44:30.072 348329 DEBUG nova.network.neutron [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec  3 18:44:30 compute-0 nova_compute[348325]: 2025-12-03 18:44:30.187 348329 DEBUG nova.compute.manager [req-4afa4734-347b-4892-93c0-f1bd88564ac6 req-79cb9552-7eb5-4f5a-9517-9dda6559c10d 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] Received event network-changed-bdba7a40-8840-4832-a614-279c23eb82ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  3 18:44:30 compute-0 nova_compute[348325]: 2025-12-03 18:44:30.188 348329 DEBUG nova.compute.manager [req-4afa4734-347b-4892-93c0-f1bd88564ac6 req-79cb9552-7eb5-4f5a-9517-9dda6559c10d 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] Refreshing instance network info cache due to event network-changed-bdba7a40-8840-4832-a614-279c23eb82ca. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec  3 18:44:30 compute-0 nova_compute[348325]: 2025-12-03 18:44:30.188 348329 DEBUG oslo_concurrency.lockutils [req-4afa4734-347b-4892-93c0-f1bd88564ac6 req-79cb9552-7eb5-4f5a-9517-9dda6559c10d 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "refresh_cache-a6019a9c-c065-49d8-bef3-219bd2c79d8c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  3 18:44:30 compute-0 nova_compute[348325]: 2025-12-03 18:44:30.222 348329 DEBUG nova.network.neutron [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec  3 18:44:31 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1357: 321 pgs: 321 active+clean; 225 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.1 MiB/s wr, 18 op/s
Dec  3 18:44:31 compute-0 openstack_network_exporter[365222]: ERROR   18:44:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:44:31 compute-0 openstack_network_exporter[365222]: ERROR   18:44:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:44:31 compute-0 openstack_network_exporter[365222]: ERROR   18:44:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:44:31 compute-0 openstack_network_exporter[365222]: ERROR   18:44:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:44:31 compute-0 openstack_network_exporter[365222]: ERROR   18:44:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:44:31 compute-0 nova_compute[348325]: 2025-12-03 18:44:31.809 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:44:32 compute-0 nova_compute[348325]: 2025-12-03 18:44:32.105 348329 DEBUG nova.network.neutron [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] Updating instance_info_cache with network_info: [{"id": "bdba7a40-8840-4832-a614-279c23eb82ca", "address": "fa:16:3e:93:41:b2", "network": {"id": "85c8d446-ad7f-4d1b-a311-89b0b07e8aad", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.189", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2770200bdb2436c90142fa2e5ddcd47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdba7a40-88", "ovs_interfaceid": "bdba7a40-8840-4832-a614-279c23eb82ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 18:44:32 compute-0 nova_compute[348325]: 2025-12-03 18:44:32.142 348329 DEBUG oslo_concurrency.lockutils [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Releasing lock "refresh_cache-a6019a9c-c065-49d8-bef3-219bd2c79d8c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 18:44:32 compute-0 nova_compute[348325]: 2025-12-03 18:44:32.143 348329 DEBUG nova.compute.manager [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] Instance network_info: |[{"id": "bdba7a40-8840-4832-a614-279c23eb82ca", "address": "fa:16:3e:93:41:b2", "network": {"id": "85c8d446-ad7f-4d1b-a311-89b0b07e8aad", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.189", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2770200bdb2436c90142fa2e5ddcd47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdba7a40-88", "ovs_interfaceid": "bdba7a40-8840-4832-a614-279c23eb82ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  3 18:44:32 compute-0 nova_compute[348325]: 2025-12-03 18:44:32.143 348329 DEBUG oslo_concurrency.lockutils [req-4afa4734-347b-4892-93c0-f1bd88564ac6 req-79cb9552-7eb5-4f5a-9517-9dda6559c10d 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquired lock "refresh_cache-a6019a9c-c065-49d8-bef3-219bd2c79d8c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 18:44:32 compute-0 nova_compute[348325]: 2025-12-03 18:44:32.144 348329 DEBUG nova.network.neutron [req-4afa4734-347b-4892-93c0-f1bd88564ac6 req-79cb9552-7eb5-4f5a-9517-9dda6559c10d 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] Refreshing network info cache for port bdba7a40-8840-4832-a614-279c23eb82ca _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  3 18:44:32 compute-0 nova_compute[348325]: 2025-12-03 18:44:32.146 348329 DEBUG nova.virt.libvirt.driver [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] Start _get_guest_xml network_info=[{"id": "bdba7a40-8840-4832-a614-279c23eb82ca", "address": "fa:16:3e:93:41:b2", "network": {"id": "85c8d446-ad7f-4d1b-a311-89b0b07e8aad", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.189", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2770200bdb2436c90142fa2e5ddcd47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdba7a40-88", "ovs_interfaceid": "bdba7a40-8840-4832-a614-279c23eb82ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-03T18:35:07Z,direct_url=<?>,disk_format='qcow2',id=e68cd467-b4e6-45e0-8e55-984fda402294,min_disk=0,min_ram=0,name='cirros',owner='d2770200bdb2436c90142fa2e5ddcd47',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-03T18:35:10Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'encryption_format': None, 'guest_format': None, 'disk_bus': 'virtio', 'size': 0, 'boot_index': 0, 'encryption_options': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'image_id': 'e68cd467-b4e6-45e0-8e55-984fda402294'}], 'ephemerals': [{'encryption_secret_uuid': None, 'encrypted': False, 'encryption_format': None, 'guest_format': None, 'disk_bus': 'virtio', 'size': 1, 'encryption_options': None, 'device_type': 'disk', 'device_name': '/dev/vdb'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  3 18:44:32 compute-0 nova_compute[348325]: 2025-12-03 18:44:32.153 348329 WARNING nova.virt.libvirt.driver [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 18:44:32 compute-0 nova_compute[348325]: 2025-12-03 18:44:32.161 348329 DEBUG nova.virt.libvirt.host [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  3 18:44:32 compute-0 nova_compute[348325]: 2025-12-03 18:44:32.162 348329 DEBUG nova.virt.libvirt.host [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  3 18:44:32 compute-0 nova_compute[348325]: 2025-12-03 18:44:32.175 348329 DEBUG nova.virt.libvirt.host [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  3 18:44:32 compute-0 nova_compute[348325]: 2025-12-03 18:44:32.177 348329 DEBUG nova.virt.libvirt.host [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  3 18:44:32 compute-0 nova_compute[348325]: 2025-12-03 18:44:32.178 348329 DEBUG nova.virt.libvirt.driver [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  3 18:44:32 compute-0 nova_compute[348325]: 2025-12-03 18:44:32.179 348329 DEBUG nova.virt.hardware [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-03T18:35:14Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='6cb250a4-d28c-4125-888b-653b31e29275',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-03T18:35:07Z,direct_url=<?>,disk_format='qcow2',id=e68cd467-b4e6-45e0-8e55-984fda402294,min_disk=0,min_ram=0,name='cirros',owner='d2770200bdb2436c90142fa2e5ddcd47',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-03T18:35:10Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  3 18:44:32 compute-0 nova_compute[348325]: 2025-12-03 18:44:32.180 348329 DEBUG nova.virt.hardware [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  3 18:44:32 compute-0 nova_compute[348325]: 2025-12-03 18:44:32.181 348329 DEBUG nova.virt.hardware [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  3 18:44:32 compute-0 nova_compute[348325]: 2025-12-03 18:44:32.182 348329 DEBUG nova.virt.hardware [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  3 18:44:32 compute-0 nova_compute[348325]: 2025-12-03 18:44:32.183 348329 DEBUG nova.virt.hardware [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  3 18:44:32 compute-0 nova_compute[348325]: 2025-12-03 18:44:32.184 348329 DEBUG nova.virt.hardware [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  3 18:44:32 compute-0 nova_compute[348325]: 2025-12-03 18:44:32.185 348329 DEBUG nova.virt.hardware [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  3 18:44:32 compute-0 nova_compute[348325]: 2025-12-03 18:44:32.186 348329 DEBUG nova.virt.hardware [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  3 18:44:32 compute-0 nova_compute[348325]: 2025-12-03 18:44:32.187 348329 DEBUG nova.virt.hardware [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  3 18:44:32 compute-0 nova_compute[348325]: 2025-12-03 18:44:32.188 348329 DEBUG nova.virt.hardware [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  3 18:44:32 compute-0 nova_compute[348325]: 2025-12-03 18:44:32.189 348329 DEBUG nova.virt.hardware [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
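The hardware lines above trace how nova narrows CPU topologies: it collects limits and preferences from flavor and image (all 0:0:0 here, i.e. unconstrained), then enumerates (sockets, cores, threads) triples whose product equals the flavor's vCPU count, capped at 65536 per dimension, which for one vCPU leaves only 1:1:1. An illustrative re-derivation of that enumeration, assuming the behaviour described by the log rather than nova's exact code:

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        """Enumerate (sockets, cores, threads) with s*c*t == vcpus."""
        topos = []
        for s in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % s:
                continue
            rest = vcpus // s
            for c in range(1, min(rest, max_cores) + 1):
                if rest % c:
                    continue
                t = rest // c
                if t <= max_threads:
                    topos.append((s, c, t))
        return topos

    print(possible_topologies(1))  # [(1, 1, 1)], matching the log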
Dec  3 18:44:32 compute-0 nova_compute[348325]: 2025-12-03 18:44:32.194 348329 DEBUG oslo_concurrency.processutils [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:44:32 compute-0 nova_compute[348325]: 2025-12-03 18:44:32.621 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:44:32 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 18:44:32 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1017387331' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 18:44:32 compute-0 nova_compute[348325]: 2025-12-03 18:44:32.663 348329 DEBUG oslo_concurrency.processutils [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:44:32 compute-0 nova_compute[348325]: 2025-12-03 18:44:32.664 348329 DEBUG oslo_concurrency.processutils [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:44:33 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 18:44:33 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3912995006' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 18:44:33 compute-0 nova_compute[348325]: 2025-12-03 18:44:33.129 348329 DEBUG oslo_concurrency.processutils [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:44:33 compute-0 nova_compute[348325]: 2025-12-03 18:44:33.166 348329 DEBUG nova.storage.rbd_utils [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] rbd image a6019a9c-c065-49d8-bef3-219bd2c79d8c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 18:44:33 compute-0 nova_compute[348325]: 2025-12-03 18:44:33.174 348329 DEBUG oslo_concurrency.processutils [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:44:33 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1358: 321 pgs: 321 active+clean; 234 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.4 MiB/s wr, 37 op/s
Dec  3 18:44:33 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:44:33 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 18:44:33 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3558028151' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 18:44:33 compute-0 nova_compute[348325]: 2025-12-03 18:44:33.647 348329 DEBUG oslo_concurrency.processutils [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
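Each "Running cmd (subprocess)" / "CMD ... returned" pair above is oslo.concurrency.processutils shelling out to fetch the monitor map before building the RBD disk definitions. The same call reproduced with the stdlib; the command line is taken verbatim from the log, but running it presupposes a reachable cluster and a valid keyring for client.openstack:

    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'mon', 'dump', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
        check=True, capture_output=True, text=True,
    ).stdout
    mon_map = json.loads(out)
    # The mon map lists each monitor's name and addresses.
    print([m['name'] for m in mon_map.get('mons', [])])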
Dec  3 18:44:33 compute-0 nova_compute[348325]: 2025-12-03 18:44:33.649 348329 DEBUG nova.virt.libvirt.vif [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T18:44:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-66btob3-zeembfmsdvyd-qc6d57h54o3l-vnf-m24sgrg35czm',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-66btob3-zeembfmsdvyd-qc6d57h54o3l-vnf-m24sgrg35czm',id=4,image_ref='e68cd467-b4e6-45e0-8e55-984fda402294',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='b322e118-e1cc-40be-8d8c-553648144092'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d2770200bdb2436c90142fa2e5ddcd47',ramdisk_id='',reservation_id='r-xha30ma8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader',image_base_image_ref='e68cd467-b4e6-45e0-8e55-984fda402294',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T18:44:26Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0zNDk5MjIzMjc0MDkxMDEzMjk4PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTM0OTkyMjMyNzQwOTEwMTMyOTg9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MzQ5OTIyMzI3NDA5MTAxMzI5OD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBo
YXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTM0OTkyMjMyNzQwOTEwMTMyOTg9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0zNDk5MjIzMjc0MDkxMDEzMjk4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0zNDk5MjIzMjc0MDkxMDEzMjk4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5
kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjI
Dec  3 18:44:33 compute-0 nova_compute[348325]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MzQ5OTIyMzI3NDA5MTAxMzI5OD09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTM0OTkyMjMyNzQwOTEwMTMyOTg9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0zNDk5MjIzMjc0MDkxMDEzMjk4PT0tLQo=',user_id='56338958b09445f5af9aa9e4601a1a8a',uuid=a6019a9c-c065-49d8-bef3-219bd2c79d8c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bdba7a40-8840-4832-a614-279c23eb82ca", "address": "fa:16:3e:93:41:b2", "network": {"id": "85c8d446-ad7f-4d1b-a311-89b0b07e8aad", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.189", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2770200bdb2436c90142fa2e5ddcd47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": 
"ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdba7a40-88", "ovs_interfaceid": "bdba7a40-8840-4832-a614-279c23eb82ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  3 18:44:33 compute-0 nova_compute[348325]: 2025-12-03 18:44:33.649 348329 DEBUG nova.network.os_vif_util [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Converting VIF {"id": "bdba7a40-8840-4832-a614-279c23eb82ca", "address": "fa:16:3e:93:41:b2", "network": {"id": "85c8d446-ad7f-4d1b-a311-89b0b07e8aad", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.189", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2770200bdb2436c90142fa2e5ddcd47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdba7a40-88", "ovs_interfaceid": "bdba7a40-8840-4832-a614-279c23eb82ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 18:44:33 compute-0 nova_compute[348325]: 2025-12-03 18:44:33.649 348329 DEBUG nova.network.os_vif_util [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:93:41:b2,bridge_name='br-int',has_traffic_filtering=True,id=bdba7a40-8840-4832-a614-279c23eb82ca,network=Network(85c8d446-ad7f-4d1b-a311-89b0b07e8aad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbdba7a40-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 18:44:33 compute-0 nova_compute[348325]: 2025-12-03 18:44:33.650 348329 DEBUG nova.objects.instance [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lazy-loading 'pci_devices' on Instance uuid a6019a9c-c065-49d8-bef3-219bd2c79d8c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 18:44:33 compute-0 nova_compute[348325]: 2025-12-03 18:44:33.662 348329 DEBUG nova.virt.libvirt.driver [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] End _get_guest_xml xml=<domain type="kvm">
Dec  3 18:44:33 compute-0 nova_compute[348325]:  <uuid>a6019a9c-c065-49d8-bef3-219bd2c79d8c</uuid>
Dec  3 18:44:33 compute-0 nova_compute[348325]:  <name>instance-00000004</name>
Dec  3 18:44:33 compute-0 nova_compute[348325]:  <memory>524288</memory>
Dec  3 18:44:33 compute-0 nova_compute[348325]:  <vcpu>1</vcpu>
Dec  3 18:44:33 compute-0 nova_compute[348325]:  <metadata>
Dec  3 18:44:33 compute-0 nova_compute[348325]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  3 18:44:33 compute-0 nova_compute[348325]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  3 18:44:33 compute-0 nova_compute[348325]:      <nova:name>vn-66btob3-zeembfmsdvyd-qc6d57h54o3l-vnf-m24sgrg35czm</nova:name>
Dec  3 18:44:33 compute-0 nova_compute[348325]:      <nova:creationTime>2025-12-03 18:44:32</nova:creationTime>
Dec  3 18:44:33 compute-0 nova_compute[348325]:      <nova:flavor name="m1.small">
Dec  3 18:44:33 compute-0 nova_compute[348325]:        <nova:memory>512</nova:memory>
Dec  3 18:44:33 compute-0 nova_compute[348325]:        <nova:disk>1</nova:disk>
Dec  3 18:44:33 compute-0 nova_compute[348325]:        <nova:swap>0</nova:swap>
Dec  3 18:44:33 compute-0 nova_compute[348325]:        <nova:ephemeral>1</nova:ephemeral>
Dec  3 18:44:33 compute-0 nova_compute[348325]:        <nova:vcpus>1</nova:vcpus>
Dec  3 18:44:33 compute-0 nova_compute[348325]:      </nova:flavor>
Dec  3 18:44:33 compute-0 nova_compute[348325]:      <nova:owner>
Dec  3 18:44:33 compute-0 nova_compute[348325]:        <nova:user uuid="56338958b09445f5af9aa9e4601a1a8a">admin</nova:user>
Dec  3 18:44:33 compute-0 nova_compute[348325]:        <nova:project uuid="d2770200bdb2436c90142fa2e5ddcd47">admin</nova:project>
Dec  3 18:44:33 compute-0 nova_compute[348325]:      </nova:owner>
Dec  3 18:44:33 compute-0 nova_compute[348325]:      <nova:root type="image" uuid="e68cd467-b4e6-45e0-8e55-984fda402294"/>
Dec  3 18:44:33 compute-0 nova_compute[348325]:      <nova:ports>
Dec  3 18:44:33 compute-0 nova_compute[348325]:        <nova:port uuid="bdba7a40-8840-4832-a614-279c23eb82ca">
Dec  3 18:44:33 compute-0 nova_compute[348325]:          <nova:ip type="fixed" address="192.168.0.189" ipVersion="4"/>
Dec  3 18:44:33 compute-0 nova_compute[348325]:        </nova:port>
Dec  3 18:44:33 compute-0 nova_compute[348325]:      </nova:ports>
Dec  3 18:44:33 compute-0 nova_compute[348325]:    </nova:instance>
Dec  3 18:44:33 compute-0 nova_compute[348325]:  </metadata>
Dec  3 18:44:33 compute-0 nova_compute[348325]:  <sysinfo type="smbios">
Dec  3 18:44:33 compute-0 nova_compute[348325]:    <system>
Dec  3 18:44:33 compute-0 nova_compute[348325]:      <entry name="manufacturer">RDO</entry>
Dec  3 18:44:33 compute-0 nova_compute[348325]:      <entry name="product">OpenStack Compute</entry>
Dec  3 18:44:33 compute-0 nova_compute[348325]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  3 18:44:33 compute-0 nova_compute[348325]:      <entry name="serial">a6019a9c-c065-49d8-bef3-219bd2c79d8c</entry>
Dec  3 18:44:33 compute-0 nova_compute[348325]:      <entry name="uuid">a6019a9c-c065-49d8-bef3-219bd2c79d8c</entry>
Dec  3 18:44:33 compute-0 nova_compute[348325]:      <entry name="family">Virtual Machine</entry>
Dec  3 18:44:33 compute-0 nova_compute[348325]:    </system>
Dec  3 18:44:33 compute-0 nova_compute[348325]:  </sysinfo>
Dec  3 18:44:33 compute-0 nova_compute[348325]:  <os>
Dec  3 18:44:33 compute-0 nova_compute[348325]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  3 18:44:33 compute-0 nova_compute[348325]:    <boot dev="hd"/>
Dec  3 18:44:33 compute-0 nova_compute[348325]:    <smbios mode="sysinfo"/>
Dec  3 18:44:33 compute-0 nova_compute[348325]:  </os>
Dec  3 18:44:33 compute-0 nova_compute[348325]:  <features>
Dec  3 18:44:33 compute-0 nova_compute[348325]:    <acpi/>
Dec  3 18:44:33 compute-0 nova_compute[348325]:    <apic/>
Dec  3 18:44:33 compute-0 nova_compute[348325]:    <vmcoreinfo/>
Dec  3 18:44:33 compute-0 nova_compute[348325]:  </features>
Dec  3 18:44:33 compute-0 nova_compute[348325]:  <clock offset="utc">
Dec  3 18:44:33 compute-0 nova_compute[348325]:    <timer name="pit" tickpolicy="delay"/>
Dec  3 18:44:33 compute-0 nova_compute[348325]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  3 18:44:33 compute-0 nova_compute[348325]:    <timer name="hpet" present="no"/>
Dec  3 18:44:33 compute-0 nova_compute[348325]:  </clock>
Dec  3 18:44:33 compute-0 nova_compute[348325]:  <cpu mode="host-model" match="exact">
Dec  3 18:44:33 compute-0 nova_compute[348325]:    <topology sockets="1" cores="1" threads="1"/>
Dec  3 18:44:33 compute-0 nova_compute[348325]:  </cpu>
Dec  3 18:44:33 compute-0 nova_compute[348325]:  <devices>
Dec  3 18:44:33 compute-0 nova_compute[348325]:    <disk type="network" device="disk">
Dec  3 18:44:33 compute-0 nova_compute[348325]:      <driver type="raw" cache="none"/>
Dec  3 18:44:33 compute-0 nova_compute[348325]:      <source protocol="rbd" name="vms/a6019a9c-c065-49d8-bef3-219bd2c79d8c_disk">
Dec  3 18:44:33 compute-0 nova_compute[348325]:        <host name="192.168.122.100" port="6789"/>
Dec  3 18:44:33 compute-0 nova_compute[348325]:      </source>
Dec  3 18:44:33 compute-0 nova_compute[348325]:      <auth username="openstack">
Dec  3 18:44:33 compute-0 nova_compute[348325]:        <secret type="ceph" uuid="c1caf3ba-b2a5-5005-a11e-e955c344dccc"/>
Dec  3 18:44:33 compute-0 nova_compute[348325]:      </auth>
Dec  3 18:44:33 compute-0 nova_compute[348325]:      <target dev="vda" bus="virtio"/>
Dec  3 18:44:33 compute-0 nova_compute[348325]:    </disk>
Dec  3 18:44:33 compute-0 nova_compute[348325]:    <disk type="network" device="disk">
Dec  3 18:44:33 compute-0 nova_compute[348325]:      <driver type="raw" cache="none"/>
Dec  3 18:44:33 compute-0 nova_compute[348325]:      <source protocol="rbd" name="vms/a6019a9c-c065-49d8-bef3-219bd2c79d8c_disk.eph0">
Dec  3 18:44:33 compute-0 nova_compute[348325]:        <host name="192.168.122.100" port="6789"/>
Dec  3 18:44:33 compute-0 nova_compute[348325]:      </source>
Dec  3 18:44:33 compute-0 nova_compute[348325]:      <auth username="openstack">
Dec  3 18:44:33 compute-0 nova_compute[348325]:        <secret type="ceph" uuid="c1caf3ba-b2a5-5005-a11e-e955c344dccc"/>
Dec  3 18:44:33 compute-0 nova_compute[348325]:      </auth>
Dec  3 18:44:33 compute-0 nova_compute[348325]:      <target dev="vdb" bus="virtio"/>
Dec  3 18:44:33 compute-0 nova_compute[348325]:    </disk>
Dec  3 18:44:33 compute-0 nova_compute[348325]:    <disk type="network" device="cdrom">
Dec  3 18:44:33 compute-0 nova_compute[348325]:      <driver type="raw" cache="none"/>
Dec  3 18:44:33 compute-0 nova_compute[348325]:      <source protocol="rbd" name="vms/a6019a9c-c065-49d8-bef3-219bd2c79d8c_disk.config">
Dec  3 18:44:33 compute-0 nova_compute[348325]:        <host name="192.168.122.100" port="6789"/>
Dec  3 18:44:33 compute-0 nova_compute[348325]:      </source>
Dec  3 18:44:33 compute-0 nova_compute[348325]:      <auth username="openstack">
Dec  3 18:44:33 compute-0 nova_compute[348325]:        <secret type="ceph" uuid="c1caf3ba-b2a5-5005-a11e-e955c344dccc"/>
Dec  3 18:44:33 compute-0 nova_compute[348325]:      </auth>
Dec  3 18:44:33 compute-0 nova_compute[348325]:      <target dev="sda" bus="sata"/>
Dec  3 18:44:33 compute-0 nova_compute[348325]:    </disk>
Dec  3 18:44:33 compute-0 nova_compute[348325]:    <interface type="ethernet">
Dec  3 18:44:33 compute-0 nova_compute[348325]:      <mac address="fa:16:3e:93:41:b2"/>
Dec  3 18:44:33 compute-0 nova_compute[348325]:      <model type="virtio"/>
Dec  3 18:44:33 compute-0 nova_compute[348325]:      <driver name="vhost" rx_queue_size="512"/>
Dec  3 18:44:33 compute-0 nova_compute[348325]:      <mtu size="1442"/>
Dec  3 18:44:33 compute-0 nova_compute[348325]:      <target dev="tapbdba7a40-88"/>
Dec  3 18:44:33 compute-0 nova_compute[348325]:    </interface>
Dec  3 18:44:33 compute-0 nova_compute[348325]:    <serial type="pty">
Dec  3 18:44:33 compute-0 nova_compute[348325]:      <log file="/var/lib/nova/instances/a6019a9c-c065-49d8-bef3-219bd2c79d8c/console.log" append="off"/>
Dec  3 18:44:33 compute-0 nova_compute[348325]:    </serial>
Dec  3 18:44:33 compute-0 nova_compute[348325]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  3 18:44:33 compute-0 nova_compute[348325]:    <video>
Dec  3 18:44:33 compute-0 nova_compute[348325]:      <model type="virtio"/>
Dec  3 18:44:33 compute-0 nova_compute[348325]:    </video>
Dec  3 18:44:33 compute-0 nova_compute[348325]:    <input type="tablet" bus="usb"/>
Dec  3 18:44:33 compute-0 nova_compute[348325]:    <rng model="virtio">
Dec  3 18:44:33 compute-0 nova_compute[348325]:      <backend model="random">/dev/urandom</backend>
Dec  3 18:44:33 compute-0 nova_compute[348325]:    </rng>
Dec  3 18:44:33 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root"/>
Dec  3 18:44:33 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:44:33 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:44:33 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:44:33 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:44:33 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:44:33 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:44:33 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:44:33 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:44:33 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:44:33 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:44:33 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:44:33 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:44:33 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:44:33 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:44:33 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:44:33 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:44:33 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:44:33 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:44:33 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:44:33 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:44:33 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:44:33 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:44:33 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:44:33 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:44:33 compute-0 nova_compute[348325]:    <controller type="usb" index="0"/>
Dec  3 18:44:33 compute-0 nova_compute[348325]:    <memballoon model="virtio">
Dec  3 18:44:33 compute-0 nova_compute[348325]:      <stats period="10"/>
Dec  3 18:44:33 compute-0 nova_compute[348325]:    </memballoon>
Dec  3 18:44:33 compute-0 nova_compute[348325]:  </devices>
Dec  3 18:44:33 compute-0 nova_compute[348325]: </domain>
Dec  3 18:44:33 compute-0 nova_compute[348325]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
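The "End _get_guest_xml" block above is the complete libvirt domain nova hands to KVM: a q35 machine with host-model CPU, three RBD-backed disks sharing one ceph secret, an OVS tap interface with MTU 1442, and VNC graphics. A stdlib sketch of reading a few of those fields back out of such a document; the xml string here is a trimmed stand-in for the full dump:

    import xml.etree.ElementTree as ET

    xml = ('<domain type="kvm">'
           '<uuid>a6019a9c-c065-49d8-bef3-219bd2c79d8c</uuid>'
           '<vcpu>1</vcpu><memory>524288</memory></domain>')
    dom = ET.fromstring(xml)
    print(dom.get('type'))              # kvm
    print(dom.findtext('uuid'))
    print(int(dom.findtext('memory')))  # 524288 KiB == 512 MiB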
Dec  3 18:44:33 compute-0 nova_compute[348325]: 2025-12-03 18:44:33.662 348329 DEBUG nova.compute.manager [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] Preparing to wait for external event network-vif-plugged-bdba7a40-8840-4832-a614-279c23eb82ca prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  3 18:44:33 compute-0 nova_compute[348325]: 2025-12-03 18:44:33.662 348329 DEBUG oslo_concurrency.lockutils [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Acquiring lock "a6019a9c-c065-49d8-bef3-219bd2c79d8c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:44:33 compute-0 nova_compute[348325]: 2025-12-03 18:44:33.662 348329 DEBUG oslo_concurrency.lockutils [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "a6019a9c-c065-49d8-bef3-219bd2c79d8c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:44:33 compute-0 nova_compute[348325]: 2025-12-03 18:44:33.663 348329 DEBUG oslo_concurrency.lockutils [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "a6019a9c-c065-49d8-bef3-219bd2c79d8c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:44:33 compute-0 nova_compute[348325]: 2025-12-03 18:44:33.663 348329 DEBUG nova.virt.libvirt.vif [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T18:44:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-66btob3-zeembfmsdvyd-qc6d57h54o3l-vnf-m24sgrg35czm',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-66btob3-zeembfmsdvyd-qc6d57h54o3l-vnf-m24sgrg35czm',id=4,image_ref='e68cd467-b4e6-45e0-8e55-984fda402294',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='b322e118-e1cc-40be-8d8c-553648144092'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d2770200bdb2436c90142fa2e5ddcd47',ramdisk_id='',reservation_id='r-xha30ma8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader',image_base_image_ref='e68cd467-b4e6-45e0-8e55-984fda402294',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T18:44:26Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0zNDk5MjIzMjc0MDkxMDEzMjk4PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTM0OTkyMjMyNzQwOTEwMTMyOTg9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MzQ5OTIyMzI3NDA5MTAxMzI5OD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm
50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTM0OTkyMjMyNzQwOTEwMTMyOTg9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0zNDk5MjIzMjc0MDkxMDEzMjk4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0zNDk5MjIzMjc0MDkxMDEzMjk4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpY
nV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJ
Dec  3 18:44:33 compute-0 nova_compute[348325]: wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MzQ5OTIyMzI3NDA5MTAxMzI5OD09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTM0OTkyMjMyNzQwOTEwMTMyOTg9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0zNDk5MjIzMjc0MDkxMDEzMjk4PT0tLQo=',user_id='56338958b09445f5af9aa9e4601a1a8a',uuid=a6019a9c-c065-49d8-bef3-219bd2c79d8c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bdba7a40-8840-4832-a614-279c23eb82ca", "address": "fa:16:3e:93:41:b2", "network": {"id": "85c8d446-ad7f-4d1b-a311-89b0b07e8aad", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.189", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2770200bdb2436c90142fa2e5ddcd47", "mtu": 1442, "physical_network": null, "tunneled": true}}, 
"type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdba7a40-88", "ovs_interfaceid": "bdba7a40-8840-4832-a614-279c23eb82ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  3 18:44:33 compute-0 nova_compute[348325]: 2025-12-03 18:44:33.663 348329 DEBUG nova.network.os_vif_util [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Converting VIF {"id": "bdba7a40-8840-4832-a614-279c23eb82ca", "address": "fa:16:3e:93:41:b2", "network": {"id": "85c8d446-ad7f-4d1b-a311-89b0b07e8aad", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.189", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2770200bdb2436c90142fa2e5ddcd47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdba7a40-88", "ovs_interfaceid": "bdba7a40-8840-4832-a614-279c23eb82ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 18:44:33 compute-0 nova_compute[348325]: 2025-12-03 18:44:33.664 348329 DEBUG nova.network.os_vif_util [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:93:41:b2,bridge_name='br-int',has_traffic_filtering=True,id=bdba7a40-8840-4832-a614-279c23eb82ca,network=Network(85c8d446-ad7f-4d1b-a311-89b0b07e8aad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbdba7a40-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 18:44:33 compute-0 nova_compute[348325]: 2025-12-03 18:44:33.664 348329 DEBUG os_vif [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:93:41:b2,bridge_name='br-int',has_traffic_filtering=True,id=bdba7a40-8840-4832-a614-279c23eb82ca,network=Network(85c8d446-ad7f-4d1b-a311-89b0b07e8aad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbdba7a40-88') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  3 18:44:33 compute-0 nova_compute[348325]: 2025-12-03 18:44:33.664 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:44:33 compute-0 nova_compute[348325]: 2025-12-03 18:44:33.665 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:44:33 compute-0 nova_compute[348325]: 2025-12-03 18:44:33.665 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 18:44:33 compute-0 nova_compute[348325]: 2025-12-03 18:44:33.668 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:44:33 compute-0 nova_compute[348325]: 2025-12-03 18:44:33.668 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbdba7a40-88, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:44:33 compute-0 nova_compute[348325]: 2025-12-03 18:44:33.668 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbdba7a40-88, col_values=(('external_ids', {'iface-id': 'bdba7a40-8840-4832-a614-279c23eb82ca', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:93:41:b2', 'vm-uuid': 'a6019a9c-c065-49d8-bef3-219bd2c79d8c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
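The AddPortCommand/DbSetCommand pair above is os-vif driving Open vSwitch through ovsdbapp: create the tap port on br-int idempotently, then stamp the Interface row's external_ids with the Neutron port id and MAC so ovn-controller can bind it. A rough sketch of the same two-command transaction, assuming ovsdbapp is installed and a local OVSDB socket at unix:/run/openvswitch/db.sock; this is illustrative, not the nova/os-vif source:

    # Replay of the logged transaction: add a port to br-int and set its
    # external_ids. The endpoint, timeout and trimmed external_ids are
    # assumptions for the sketch.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tapbdba7a40-88', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapbdba7a40-88',
            ('external_ids',
             {'iface-id': 'bdba7a40-8840-4832-a614-279c23eb82ca',
              'attached-mac': 'fa:16:3e:93:41:b2',
              'iface-status': 'active'})))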
Dec  3 18:44:33 compute-0 NetworkManager[49087]: <info>  [1764787473.6706] manager: (tapbdba7a40-88): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/33)
Dec  3 18:44:33 compute-0 nova_compute[348325]: 2025-12-03 18:44:33.673 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  3 18:44:33 compute-0 nova_compute[348325]: 2025-12-03 18:44:33.678 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:44:33 compute-0 nova_compute[348325]: 2025-12-03 18:44:33.678 348329 INFO os_vif [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:93:41:b2,bridge_name='br-int',has_traffic_filtering=True,id=bdba7a40-8840-4832-a614-279c23eb82ca,network=Network(85c8d446-ad7f-4d1b-a311-89b0b07e8aad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbdba7a40-88')#033[00m
Dec  3 18:44:33 compute-0 nova_compute[348325]: 2025-12-03 18:44:33.737 348329 DEBUG nova.virt.libvirt.driver [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  3 18:44:33 compute-0 nova_compute[348325]: 2025-12-03 18:44:33.737 348329 DEBUG nova.virt.libvirt.driver [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  3 18:44:33 compute-0 nova_compute[348325]: 2025-12-03 18:44:33.737 348329 DEBUG nova.virt.libvirt.driver [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  3 18:44:33 compute-0 nova_compute[348325]: 2025-12-03 18:44:33.737 348329 DEBUG nova.virt.libvirt.driver [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] No VIF found with MAC fa:16:3e:93:41:b2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  3 18:44:33 compute-0 nova_compute[348325]: 2025-12-03 18:44:33.738 348329 INFO nova.virt.libvirt.driver [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] Using config drive#033[00m
Dec  3 18:44:33 compute-0 nova_compute[348325]: 2025-12-03 18:44:33.768 348329 DEBUG nova.storage.rbd_utils [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] rbd image a6019a9c-c065-49d8-bef3-219bd2c79d8c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 18:44:34 compute-0 rsyslogd[188590]: message too long (8192) with configured size 8096, begin of message is: 2025-12-03 18:44:33.649 348329 DEBUG nova.virt.libvirt.vif [None req-d6c07e38-c5 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
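This rsyslogd event explains the wrapped record at the top of the section: the journal handed rsyslog an 8192-byte message but rsyslog's own cap is 8096 bytes, so the tail (part of the base64 user_data dump) was discarded. If whole records matter more than the cap, the limit is a global rsyslog directive that must appear before any input module is loaded; the value below is only an example:

    # /etc/rsyslog.conf (near the top, before module loads)
    $MaxMessageSize 64k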
Dec  3 18:44:34 compute-0 nova_compute[348325]: 2025-12-03 18:44:34.113 348329 DEBUG nova.network.neutron [req-4afa4734-347b-4892-93c0-f1bd88564ac6 req-79cb9552-7eb5-4f5a-9517-9dda6559c10d 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] Updated VIF entry in instance network info cache for port bdba7a40-8840-4832-a614-279c23eb82ca. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  3 18:44:34 compute-0 nova_compute[348325]: 2025-12-03 18:44:34.114 348329 DEBUG nova.network.neutron [req-4afa4734-347b-4892-93c0-f1bd88564ac6 req-79cb9552-7eb5-4f5a-9517-9dda6559c10d 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] Updating instance_info_cache with network_info: [{"id": "bdba7a40-8840-4832-a614-279c23eb82ca", "address": "fa:16:3e:93:41:b2", "network": {"id": "85c8d446-ad7f-4d1b-a311-89b0b07e8aad", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.189", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2770200bdb2436c90142fa2e5ddcd47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdba7a40-88", "ovs_interfaceid": "bdba7a40-8840-4832-a614-279c23eb82ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 18:44:34 compute-0 rsyslogd[188590]: message too long (8192) with configured size 8096, begin of message is: 2025-12-03 18:44:33.663 348329 DEBUG nova.virt.libvirt.vif [None req-d6c07e38-c5 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec  3 18:44:34 compute-0 nova_compute[348325]: 2025-12-03 18:44:34.157 348329 DEBUG oslo_concurrency.lockutils [req-4afa4734-347b-4892-93c0-f1bd88564ac6 req-79cb9552-7eb5-4f5a-9517-9dda6559c10d 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Releasing lock "refresh_cache-a6019a9c-c065-49d8-bef3-219bd2c79d8c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 18:44:34 compute-0 nova_compute[348325]: 2025-12-03 18:44:34.253 348329 INFO nova.virt.libvirt.driver [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] Creating config drive at /var/lib/nova/instances/a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.config#033[00m
Dec  3 18:44:34 compute-0 nova_compute[348325]: 2025-12-03 18:44:34.262 348329 DEBUG oslo_concurrency.processutils [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpebendobi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:44:34 compute-0 nova_compute[348325]: 2025-12-03 18:44:34.390 348329 DEBUG oslo_concurrency.processutils [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpebendobi" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
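The "Using config drive" decision at 18:44:33 leads here: nova stages the config-2 metadata tree in a temp directory (/tmp/tmpebendobi above) and shells out to mkisofs through oslo.concurrency's processutils. A stripped-down sketch of the same call, with flags taken verbatim from the logged command line; staging_dir and the publisher string are placeholders:

    # Build a config-drive ISO the way the logged command does. mkisofs (or
    # the genisoimage alias) must be installed; staging_dir is a hypothetical
    # directory laid out like the config-2 tree nova stages in /tmp.
    import subprocess

    def make_config_drive(staging_dir, out_path):
        cmd = ['/usr/bin/mkisofs', '-o', out_path,
               '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
               '-publisher', 'example-publisher', '-quiet', '-J', '-r',
               '-V', 'config-2',   # the volume label cloud-init probes for
               staging_dir]
        subprocess.run(cmd, check=True)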
Dec  3 18:44:34 compute-0 nova_compute[348325]: 2025-12-03 18:44:34.425 348329 DEBUG nova.storage.rbd_utils [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] rbd image a6019a9c-c065-49d8-bef3-219bd2c79d8c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 18:44:34 compute-0 nova_compute[348325]: 2025-12-03 18:44:34.431 348329 DEBUG oslo_concurrency.processutils [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.config a6019a9c-c065-49d8-bef3-219bd2c79d8c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:44:34 compute-0 ceph-osd[208881]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 18:44:34 compute-0 ceph-osd[208881]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.2 total, 600.0 interval#012Cumulative writes: 6698 writes, 27K keys, 6698 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 6698 writes, 1312 syncs, 5.11 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 812 writes, 2947 keys, 812 commit groups, 1.0 writes per commit group, ingest: 3.15 MB, 0.01 MB/s#012Interval WAL: 812 writes, 311 syncs, 2.61 writes per sync, written: 0.00 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  3 18:44:34 compute-0 ceph-osd[208881]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 2400.2 total, 600.0 interval
    Cumulative writes: 6698 writes, 27K keys, 6698 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
    Cumulative WAL: 6698 writes, 1312 syncs, 5.11 writes per sync, written: 0.02 GB, 0.01 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 812 writes, 2947 keys, 812 commit groups, 1.0 writes per commit group, ingest: 3.15 MB, 0.01 MB/s
    Interval WAL: 812 writes, 311 syncs, 2.61 writes per sync, written: 0.00 GB, 0.01 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  3 18:44:34 compute-0 nova_compute[348325]: 2025-12-03 18:44:34.688 348329 DEBUG oslo_concurrency.processutils [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.config a6019a9c-c065-49d8-bef3-219bd2c79d8c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.256s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:44:34 compute-0 nova_compute[348325]: 2025-12-03 18:44:34.688 348329 INFO nova.virt.libvirt.driver [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] Deleting local config drive /var/lib/nova/instances/a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.config because it was imported into RBD.#033[00m
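Because this host uses the RBD image backend, the freshly built ISO is not kept on local disk: nova runs rbd import into the vms pool and then deletes the local file, as the two records above show. A rough equivalent using the python-rbd bindings instead of the CLI (pool, conf path and client id come from the logged command; the chunk size is an arbitrary choice):

    # Import a local file into an RBD image, approximating
    # 'rbd import --pool vms ... --id openstack'. Recent librbd creates
    # format-2 images by default; the 4 MiB chunk size is an assumption.
    import os
    import rados
    import rbd

    def rbd_import(path, image_name, pool='vms',
                   conf='/etc/ceph/ceph.conf', client_id='openstack'):
        size = os.path.getsize(path)
        with rados.Rados(conffile=conf, rados_id=client_id) as cluster:
            with cluster.open_ioctx(pool) as ioctx:
                rbd.RBD().create(ioctx, image_name, size)
                with rbd.Image(ioctx, image_name) as image, \
                        open(path, 'rb') as f:
                    offset = 0
                    while chunk := f.read(4 * 1024 * 1024):
                        image.write(chunk, offset)
                        offset += len(chunk)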
Dec  3 18:44:34 compute-0 systemd[1]: Starting libvirt secret daemon...
Dec  3 18:44:34 compute-0 systemd[1]: Started libvirt secret daemon.
Dec  3 18:44:34 compute-0 kernel: tapbdba7a40-88: entered promiscuous mode
Dec  3 18:44:34 compute-0 NetworkManager[49087]: <info>  [1764787474.7917] manager: (tapbdba7a40-88): new Tun device (/org/freedesktop/NetworkManager/Devices/34)
Dec  3 18:44:34 compute-0 nova_compute[348325]: 2025-12-03 18:44:34.793 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:44:34 compute-0 ovn_controller[89305]: 2025-12-03T18:44:34Z|00045|binding|INFO|Claiming lport bdba7a40-8840-4832-a614-279c23eb82ca for this chassis.
Dec  3 18:44:34 compute-0 ovn_controller[89305]: 2025-12-03T18:44:34Z|00046|binding|INFO|bdba7a40-8840-4832-a614-279c23eb82ca: Claiming fa:16:3e:93:41:b2 192.168.0.189
Dec  3 18:44:34 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:44:34.804 286999 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:93:41:b2 192.168.0.189'], port_security=['fa:16:3e:93:41:b2 192.168.0.189'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-4jfpq66btob3-zeembfmsdvyd-qc6d57h54o3l-port-r6aaxu66huxr', 'neutron:cidrs': '192.168.0.189/24', 'neutron:device_id': 'a6019a9c-c065-49d8-bef3-219bd2c79d8c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-85c8d446-ad7f-4d1b-a311-89b0b07e8aad', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-4jfpq66btob3-zeembfmsdvyd-qc6d57h54o3l-port-r6aaxu66huxr', 'neutron:project_id': 'd2770200bdb2436c90142fa2e5ddcd47', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8e48052e-a2fd-4fc1-8ebd-22e3b6e0bd66', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.206'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=12999ead-9a54-49b3-a532-a5f8bdddaf16, chassis=[<ovs.db.idl.Row object at 0x7f81e3e96760>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f81e3e96760>], logical_port=bdba7a40-8840-4832-a614-279c23eb82ca) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 18:44:34 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:44:34.805 286999 INFO neutron.agent.ovn.metadata.agent [-] Port bdba7a40-8840-4832-a614-279c23eb82ca in datapath 85c8d446-ad7f-4d1b-a311-89b0b07e8aad bound to our chassis#033[00m
Dec  3 18:44:34 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:44:34.807 286999 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 85c8d446-ad7f-4d1b-a311-89b0b07e8aad#033[00m
Dec  3 18:44:34 compute-0 ovn_controller[89305]: 2025-12-03T18:44:34Z|00047|binding|INFO|Setting lport bdba7a40-8840-4832-a614-279c23eb82ca ovn-installed in OVS
Dec  3 18:44:34 compute-0 ovn_controller[89305]: 2025-12-03T18:44:34Z|00048|binding|INFO|Setting lport bdba7a40-8840-4832-a614-279c23eb82ca up in Southbound
Dec  3 18:44:34 compute-0 nova_compute[348325]: 2025-12-03 18:44:34.818 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:44:34 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:44:34.829 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[b34dc9e2-8961-499b-8428-e6e949232514]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:44:34 compute-0 systemd-machined[138702]: New machine qemu-4-instance-00000004.
Dec  3 18:44:34 compute-0 systemd-udevd[423206]: Network interface NamePolicy= disabled on kernel command line.
Dec  3 18:44:34 compute-0 systemd[1]: Started Virtual Machine qemu-4-instance-00000004.
Dec  3 18:44:34 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:44:34.864 411797 DEBUG oslo.privsep.daemon [-] privsep: reply[e4f01cb3-a883-4264-9730-2a903abe6c1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:44:34 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:44:34.867 411797 DEBUG oslo.privsep.daemon [-] privsep: reply[cadf186a-f487-408c-bf0c-2351f5ada896]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:44:34 compute-0 NetworkManager[49087]: <info>  [1764787474.8689] device (tapbdba7a40-88): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  3 18:44:34 compute-0 NetworkManager[49087]: <info>  [1764787474.8697] device (tapbdba7a40-88): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  3 18:44:34 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:44:34.896 411797 DEBUG oslo.privsep.daemon [-] privsep: reply[7149264a-2618-485e-9c96-8a930662bf4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:44:34 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:44:34.912 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[97fb040c-6040-408f-9f03-69a7835ecd62]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap85c8d446-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:c1:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 7, 'tx_packets': 9, 'rx_bytes': 574, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 7, 'tx_packets': 9, 'rx_bytes': 574, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 527503, 'reachable_time': 41701, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 423214, 'error': None, 'target': 'ovnmeta-85c8d446-ad7f-4d1b-a311-89b0b07e8aad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:44:34 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:44:34.928 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[94e70c9a-ff4e-436c-a21a-50d03a97370d]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap85c8d446-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 527519, 'tstamp': 527519}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 423217, 'error': None, 'target': 'ovnmeta-85c8d446-ad7f-4d1b-a311-89b0b07e8aad', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap85c8d446-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 527523, 'tstamp': 527523}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 423217, 'error': None, 'target': 'ovnmeta-85c8d446-ad7f-4d1b-a311-89b0b07e8aad', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
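The two privsep replies above are pyroute2 netlink dumps taken inside the metadata namespace (note the 'target' field, ovnmeta-85c8d446-ad7f-4d1b-a311-89b0b07e8aad): the agent has put 192.168.0.2/24 and the 169.254.169.254/32 metadata address on tap85c8d446-a1. A small read-only sketch for checking that state by hand, assuming pyroute2 is available and the caller is root on the compute host:

    # List addresses inside the ovnmeta-<network-uuid> namespace taken from
    # the log's 'target' field. Read-only; requires root.
    from pyroute2 import NetNS

    ns = NetNS('ovnmeta-85c8d446-ad7f-4d1b-a311-89b0b07e8aad')
    try:
        for addr in ns.get_addr():
            attrs = dict(addr['attrs'])
            print(attrs.get('IFA_LABEL'),
                  '%s/%s' % (attrs.get('IFA_ADDRESS'), addr['prefixlen']))
    finally:
        ns.close()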
Dec  3 18:44:34 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:44:34.930 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85c8d446-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:44:34 compute-0 nova_compute[348325]: 2025-12-03 18:44:34.931 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:44:34 compute-0 nova_compute[348325]: 2025-12-03 18:44:34.932 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:44:34 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:44:34.933 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap85c8d446-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:44:34 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:44:34.934 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 18:44:34 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:44:34.934 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap85c8d446-a0, col_values=(('external_ids', {'iface-id': '4db8340d-afa3-4a82-bd51-bca0a752f53f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:44:34 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:44:34.934 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 18:44:35 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1359: 321 pgs: 321 active+clean; 234 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.4 MiB/s wr, 38 op/s
Dec  3 18:44:35 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec  3 18:44:35 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec  3 18:44:35 compute-0 nova_compute[348325]: 2025-12-03 18:44:35.444 348329 DEBUG nova.virt.driver [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Emitting event <LifecycleEvent: 1764787475.4437869, a6019a9c-c065-49d8-bef3-219bd2c79d8c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 18:44:35 compute-0 nova_compute[348325]: 2025-12-03 18:44:35.444 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] VM Started (Lifecycle Event)#033[00m
Dec  3 18:44:35 compute-0 nova_compute[348325]: 2025-12-03 18:44:35.467 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 18:44:35 compute-0 nova_compute[348325]: 2025-12-03 18:44:35.472 348329 DEBUG nova.virt.driver [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Emitting event <LifecycleEvent: 1764787475.4438887, a6019a9c-c065-49d8-bef3-219bd2c79d8c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 18:44:35 compute-0 nova_compute[348325]: 2025-12-03 18:44:35.472 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] VM Paused (Lifecycle Event)#033[00m
Dec  3 18:44:35 compute-0 nova_compute[348325]: 2025-12-03 18:44:35.490 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 18:44:35 compute-0 nova_compute[348325]: 2025-12-03 18:44:35.495 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  3 18:44:35 compute-0 nova_compute[348325]: 2025-12-03 18:44:35.516 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
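The integer states in these sync messages come from nova.compute.power_state: the guest briefly reports PAUSED (3) while libvirt finishes spawning, then RUNNING (1) once resumed, while the database still holds NOSTATE (0) until the build completes. For reference:

    # Constants from nova.compute.power_state, for reading the
    # sync_power_state lines above and below.
    NOSTATE = 0
    RUNNING = 1
    PAUSED = 3
    SHUTDOWN = 4
    CRASHED = 6
    SUSPENDED = 7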
Dec  3 18:44:35 compute-0 nova_compute[348325]: 2025-12-03 18:44:35.553 348329 DEBUG nova.compute.manager [req-3248bf1b-1196-45a9-a5a6-c621d6a0f6b1 req-171df6f5-cf2f-472f-a51e-35568edf250b 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] Received event network-vif-plugged-bdba7a40-8840-4832-a614-279c23eb82ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 18:44:35 compute-0 nova_compute[348325]: 2025-12-03 18:44:35.553 348329 DEBUG oslo_concurrency.lockutils [req-3248bf1b-1196-45a9-a5a6-c621d6a0f6b1 req-171df6f5-cf2f-472f-a51e-35568edf250b 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "a6019a9c-c065-49d8-bef3-219bd2c79d8c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:44:35 compute-0 nova_compute[348325]: 2025-12-03 18:44:35.554 348329 DEBUG oslo_concurrency.lockutils [req-3248bf1b-1196-45a9-a5a6-c621d6a0f6b1 req-171df6f5-cf2f-472f-a51e-35568edf250b 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "a6019a9c-c065-49d8-bef3-219bd2c79d8c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:44:35 compute-0 nova_compute[348325]: 2025-12-03 18:44:35.554 348329 DEBUG oslo_concurrency.lockutils [req-3248bf1b-1196-45a9-a5a6-c621d6a0f6b1 req-171df6f5-cf2f-472f-a51e-35568edf250b 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "a6019a9c-c065-49d8-bef3-219bd2c79d8c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:44:35 compute-0 nova_compute[348325]: 2025-12-03 18:44:35.554 348329 DEBUG nova.compute.manager [req-3248bf1b-1196-45a9-a5a6-c621d6a0f6b1 req-171df6f5-cf2f-472f-a51e-35568edf250b 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] Processing event network-vif-plugged-bdba7a40-8840-4832-a614-279c23eb82ca _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  3 18:44:35 compute-0 nova_compute[348325]: 2025-12-03 18:44:35.555 348329 DEBUG nova.compute.manager [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  3 18:44:35 compute-0 nova_compute[348325]: 2025-12-03 18:44:35.559 348329 DEBUG nova.virt.driver [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Emitting event <LifecycleEvent: 1764787475.5595324, a6019a9c-c065-49d8-bef3-219bd2c79d8c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 18:44:35 compute-0 nova_compute[348325]: 2025-12-03 18:44:35.560 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] VM Resumed (Lifecycle Event)#033[00m
Dec  3 18:44:35 compute-0 nova_compute[348325]: 2025-12-03 18:44:35.562 348329 DEBUG nova.virt.libvirt.driver [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  3 18:44:35 compute-0 nova_compute[348325]: 2025-12-03 18:44:35.567 348329 INFO nova.virt.libvirt.driver [-] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] Instance spawned successfully.#033[00m
Dec  3 18:44:35 compute-0 nova_compute[348325]: 2025-12-03 18:44:35.567 348329 DEBUG nova.virt.libvirt.driver [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  3 18:44:35 compute-0 nova_compute[348325]: 2025-12-03 18:44:35.576 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 18:44:35 compute-0 nova_compute[348325]: 2025-12-03 18:44:35.581 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  3 18:44:35 compute-0 nova_compute[348325]: 2025-12-03 18:44:35.591 348329 DEBUG nova.virt.libvirt.driver [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 18:44:35 compute-0 nova_compute[348325]: 2025-12-03 18:44:35.591 348329 DEBUG nova.virt.libvirt.driver [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 18:44:35 compute-0 nova_compute[348325]: 2025-12-03 18:44:35.592 348329 DEBUG nova.virt.libvirt.driver [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 18:44:35 compute-0 nova_compute[348325]: 2025-12-03 18:44:35.593 348329 DEBUG nova.virt.libvirt.driver [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 18:44:35 compute-0 nova_compute[348325]: 2025-12-03 18:44:35.594 348329 DEBUG nova.virt.libvirt.driver [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 18:44:35 compute-0 nova_compute[348325]: 2025-12-03 18:44:35.594 348329 DEBUG nova.virt.libvirt.driver [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 18:44:35 compute-0 nova_compute[348325]: 2025-12-03 18:44:35.603 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  3 18:44:35 compute-0 nova_compute[348325]: 2025-12-03 18:44:35.646 348329 INFO nova.compute.manager [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] Took 9.22 seconds to spawn the instance on the hypervisor.#033[00m
Dec  3 18:44:35 compute-0 nova_compute[348325]: 2025-12-03 18:44:35.647 348329 DEBUG nova.compute.manager [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 18:44:35 compute-0 nova_compute[348325]: 2025-12-03 18:44:35.754 348329 INFO nova.compute.manager [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] Took 10.42 seconds to build instance.#033[00m
Dec  3 18:44:35 compute-0 nova_compute[348325]: 2025-12-03 18:44:35.913 348329 DEBUG oslo_concurrency.lockutils [None req-d6c07e38-c549-43ec-a9c8-f9674faef319 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "a6019a9c-c065-49d8-bef3-219bd2c79d8c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:44:37 compute-0 podman[423299]: 2025-12-03 18:44:37.124607182 +0000 UTC m=+0.118043909 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  3 18:44:37 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1360: 321 pgs: 321 active+clean; 234 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 1.4 MiB/s wr, 44 op/s
Dec  3 18:44:37 compute-0 ceph-mgr[193091]: [devicehealth INFO root] Check health
Dec  3 18:44:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 18:44:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2992190078' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 18:44:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 18:44:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2992190078' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 18:44:37 compute-0 nova_compute[348325]: 2025-12-03 18:44:37.623 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:44:37 compute-0 nova_compute[348325]: 2025-12-03 18:44:37.633 348329 DEBUG nova.compute.manager [req-4a0cc559-8320-4829-a615-1f17448d9ec1 req-0190ab28-5d09-418a-b0d2-494de8b261ec 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] Received event network-vif-plugged-bdba7a40-8840-4832-a614-279c23eb82ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 18:44:37 compute-0 nova_compute[348325]: 2025-12-03 18:44:37.633 348329 DEBUG oslo_concurrency.lockutils [req-4a0cc559-8320-4829-a615-1f17448d9ec1 req-0190ab28-5d09-418a-b0d2-494de8b261ec 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "a6019a9c-c065-49d8-bef3-219bd2c79d8c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:44:37 compute-0 nova_compute[348325]: 2025-12-03 18:44:37.633 348329 DEBUG oslo_concurrency.lockutils [req-4a0cc559-8320-4829-a615-1f17448d9ec1 req-0190ab28-5d09-418a-b0d2-494de8b261ec 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "a6019a9c-c065-49d8-bef3-219bd2c79d8c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:44:37 compute-0 nova_compute[348325]: 2025-12-03 18:44:37.634 348329 DEBUG oslo_concurrency.lockutils [req-4a0cc559-8320-4829-a615-1f17448d9ec1 req-0190ab28-5d09-418a-b0d2-494de8b261ec 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "a6019a9c-c065-49d8-bef3-219bd2c79d8c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:44:37 compute-0 nova_compute[348325]: 2025-12-03 18:44:37.634 348329 DEBUG nova.compute.manager [req-4a0cc559-8320-4829-a615-1f17448d9ec1 req-0190ab28-5d09-418a-b0d2-494de8b261ec 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] No waiting events found dispatching network-vif-plugged-bdba7a40-8840-4832-a614-279c23eb82ca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  3 18:44:37 compute-0 nova_compute[348325]: 2025-12-03 18:44:37.635 348329 WARNING nova.compute.manager [req-4a0cc559-8320-4829-a615-1f17448d9ec1 req-0190ab28-5d09-418a-b0d2-494de8b261ec 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] Received unexpected event network-vif-plugged-bdba7a40-8840-4832-a614-279c23eb82ca for instance with vm_state active and task_state None.#033[00m
Dec  3 18:44:38 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:44:38 compute-0 nova_compute[348325]: 2025-12-03 18:44:38.672 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:44:39 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1361: 321 pgs: 321 active+clean; 234 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 252 KiB/s rd, 1.4 MiB/s wr, 56 op/s
Dec  3 18:44:40 compute-0 podman[423322]: 2025-12-03 18:44:40.012338511 +0000 UTC m=+0.162421177 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4)
Dec  3 18:44:40 compute-0 podman[423321]: 2025-12-03 18:44:40.032153136 +0000 UTC m=+0.191036567 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  3 18:44:41 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1362: 321 pgs: 321 active+clean; 234 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.0 MiB/s wr, 70 op/s
Dec  3 18:44:42 compute-0 nova_compute[348325]: 2025-12-03 18:44:42.627 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
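The recurring `[POLLIN] on fd 25 __log_wakeup` lines are ovsdbapp's IDL event loop waking up when its OVSDB connection becomes readable. A generic stdlib illustration of that wakeup pattern (this is not the ovs/ovsdbapp code, just the same poll mechanism):

```python
# A poller blocks on a file descriptor and wakes on POLLIN when
# data is readable -- the pattern behind the __log_wakeup lines.
import os
import select

r_fd, w_fd = os.pipe()
poller = select.poll()
poller.register(r_fd, select.POLLIN)

os.write(w_fd, b"ping")               # something becomes readable
for fd, events in poller.poll(1000):  # timeout in milliseconds
    if events & select.POLLIN:
        print(f"[POLLIN] on fd {fd}: {os.read(fd, 16)!r}")
```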
Dec  3 18:44:43 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1363: 321 pgs: 321 active+clean; 234 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 303 KiB/s wr, 79 op/s
Dec  3 18:44:43 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:44:43 compute-0 nova_compute[348325]: 2025-12-03 18:44:43.674 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:44:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:44:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:44:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:44:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:44:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:44:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:44:45 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1364: 321 pgs: 321 active+clean; 234 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 21 KiB/s wr, 60 op/s
Dec  3 18:44:47 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1365: 321 pgs: 321 active+clean; 234 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 20 KiB/s wr, 59 op/s
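The ceph-mgr `pgmap` lines repeat on a fixed cadence and carry the cluster's I/O figures. A minimal sketch for extracting those figures; the regex mirrors the observed line format, which is an assumption about this build's output rather than a stable Ceph interface:

```python
# Parse the recurring ceph-mgr pgmap lines seen above.
import re

PGMAP_RE = re.compile(
    r"pgmap v(?P<version>\d+): (?P<pgs>\d+) pgs:.*?"
    r"(?P<data>[\d.]+ \w+) data, (?P<used>[\d.]+ \w+) used"
)

line = ("log_channel(cluster) log [DBG] : pgmap v1362: 321 pgs: "
        "321 active+clean; 234 MiB data, 335 MiB used, "
        "60 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.0 MiB/s wr, 70 op/s")
match = PGMAP_RE.search(line)
if match:
    print(match.group("version"), match.group("pgs"),
          match.group("data"), match.group("used"))
```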
Dec  3 18:44:47 compute-0 nova_compute[348325]: 2025-12-03 18:44:47.630 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:44:48 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:44:48 compute-0 nova_compute[348325]: 2025-12-03 18:44:48.679 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:44:49 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1366: 321 pgs: 321 active+clean; 234 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 0 B/s wr, 53 op/s
Dec  3 18:44:51 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1367: 321 pgs: 321 active+clean; 234 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 0 B/s wr, 41 op/s
Dec  3 18:44:52 compute-0 nova_compute[348325]: 2025-12-03 18:44:52.633 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:44:53 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1368: 321 pgs: 321 active+clean; 234 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 378 KiB/s rd, 12 op/s
Dec  3 18:44:53 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:44:53 compute-0 nova_compute[348325]: 2025-12-03 18:44:53.684 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:44:54 compute-0 podman[423366]: 2025-12-03 18:44:54.954847967 +0000 UTC m=+0.114716148 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 18:44:54 compute-0 podman[423365]: 2025-12-03 18:44:54.959857209 +0000 UTC m=+0.114418341 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec  3 18:44:54 compute-0 podman[423367]: 2025-12-03 18:44:54.96767458 +0000 UTC m=+0.117616368 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., name=ubi9-minimal, maintainer=Red Hat, Inc., distribution-scope=public, architecture=x86_64, config_id=edpm)
Dec  3 18:44:55 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1369: 321 pgs: 321 active+clean; 234 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:44:56 compute-0 podman[423425]: 2025-12-03 18:44:56.925075684 +0000 UTC m=+0.082396217 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  3 18:44:56 compute-0 podman[423426]: 2025-12-03 18:44:56.933625233 +0000 UTC m=+0.082773926 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 18:44:56 compute-0 podman[423424]: 2025-12-03 18:44:56.935155401 +0000 UTC m=+0.100163503 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, release-0.7.12=, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  3 18:44:57 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1370: 321 pgs: 321 active+clean; 234 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:44:57 compute-0 nova_compute[348325]: 2025-12-03 18:44:57.636 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:44:58 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:44:58 compute-0 nova_compute[348325]: 2025-12-03 18:44:58.688 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:44:59 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1371: 321 pgs: 321 active+clean; 234 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:44:59 compute-0 podman[158200]: time="2025-12-03T18:44:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:44:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:44:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 18:44:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:44:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8640 "" "Go-http-client/1.1"
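The two `GET /v4.9.3/libpod/...` lines are libpod REST calls arriving over the podman socket (the podman_exporter container below mounts `/run/podman/podman.sock`). A stdlib-only sketch of the same call; the socket path and API version are taken from these log lines and should be treated as deployment-specific:

```python
# Issue a libpod API request over the podman unix socket using only
# the standard library.
import http.client
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    def __init__(self, socket_path):
        super().__init__("localhost")
        self.socket_path = socket_path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.socket_path)

conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
resp = conn.getresponse()
print(resp.status, len(resp.read()), "bytes")
```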
Dec  3 18:45:01 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1372: 321 pgs: 321 active+clean; 234 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:45:01 compute-0 openstack_network_exporter[365222]: ERROR   18:45:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:45:01 compute-0 openstack_network_exporter[365222]: ERROR   18:45:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:45:01 compute-0 openstack_network_exporter[365222]: ERROR   18:45:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:45:01 compute-0 openstack_network_exporter[365222]: ERROR   18:45:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:45:01 compute-0 openstack_network_exporter[365222]: ERROR   18:45:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
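The openstack_network_exporter errors above all reduce to the same cause: no OVS/OVN control sockets are visible inside its mount namespace. A quick check of the conventional socket locations (these glob patterns are the usual defaults, not values read from this deployment's config):

```python
# Look for the control sockets the exporter failed to find.
import glob

for pattern in ("/var/run/openvswitch/ovs-vswitchd.*.ctl",
                "/var/run/openvswitch/ovsdb-server.*.ctl",
                "/var/run/ovn/ovn-northd.*.ctl"):
    matches = glob.glob(pattern)
    print(pattern, "->", matches or "no control socket found")
```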
Dec  3 18:45:02 compute-0 nova_compute[348325]: 2025-12-03 18:45:02.639 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:45:03 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1373: 321 pgs: 321 active+clean; 234 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:45:03 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:45:03 compute-0 nova_compute[348325]: 2025-12-03 18:45:03.693 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:45:04 compute-0 ovn_controller[89305]: 2025-12-03T18:45:04Z|00049|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Dec  3 18:45:05 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1374: 321 pgs: 321 active+clean; 234 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec  3 18:45:06 compute-0 nova_compute[348325]: 2025-12-03 18:45:06.522 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:45:07 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1375: 321 pgs: 321 active+clean; 234 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec  3 18:45:07 compute-0 nova_compute[348325]: 2025-12-03 18:45:07.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:45:07 compute-0 nova_compute[348325]: 2025-12-03 18:45:07.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:45:07 compute-0 nova_compute[348325]: 2025-12-03 18:45:07.644 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:45:08 compute-0 podman[423481]: 2025-12-03 18:45:08.003202475 +0000 UTC m=+0.160433037 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 18:45:08 compute-0 nova_compute[348325]: 2025-12-03 18:45:08.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:45:08 compute-0 nova_compute[348325]: 2025-12-03 18:45:08.490 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:45:08 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:45:08 compute-0 nova_compute[348325]: 2025-12-03 18:45:08.697 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:45:09 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1376: 321 pgs: 321 active+clean; 234 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec  3 18:45:09 compute-0 nova_compute[348325]: 2025-12-03 18:45:09.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:45:09 compute-0 nova_compute[348325]: 2025-12-03 18:45:09.488 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:45:09 compute-0 nova_compute[348325]: 2025-12-03 18:45:09.488 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 18:45:10 compute-0 nova_compute[348325]: 2025-12-03 18:45:10.489 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:45:10 compute-0 nova_compute[348325]: 2025-12-03 18:45:10.490 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
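The stream of `Running periodic task ComputeManager._*` lines comes from oslo.service's periodic-task machinery, which nova-compute drives on a timer. A minimal sketch of the same pattern with a hypothetical manager (not nova's ComputeManager):

```python
# Declare and run a periodic task the way oslo.service does it.
from oslo_config import cfg
from oslo_service import periodic_task

class DemoManager(periodic_task.PeriodicTasks):
    @periodic_task.periodic_task(spacing=10)  # run every 10 seconds
    def _heal_cache(self, context):
        print("healing cache")

manager = DemoManager(cfg.CONF)
# Normally invoked repeatedly by a looping call in the service.
manager.run_periodic_tasks(context=None)
```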
Dec  3 18:45:10 compute-0 podman[423506]: 2025-12-03 18:45:10.986475041 +0000 UTC m=+0.122334375 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251125)
Dec  3 18:45:11 compute-0 podman[423505]: 2025-12-03 18:45:11.056243377 +0000 UTC m=+0.210639324 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec  3 18:45:11 compute-0 nova_compute[348325]: 2025-12-03 18:45:11.109 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "refresh_cache-df72d527-943e-4e8c-b62a-63afa5f18261" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 18:45:11 compute-0 nova_compute[348325]: 2025-12-03 18:45:11.110 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquired lock "refresh_cache-df72d527-943e-4e8c-b62a-63afa5f18261" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 18:45:11 compute-0 nova_compute[348325]: 2025-12-03 18:45:11.110 348329 DEBUG nova.network.neutron [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  3 18:45:11 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1377: 321 pgs: 321 active+clean; 234 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec  3 18:45:12 compute-0 nova_compute[348325]: 2025-12-03 18:45:12.647 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:45:12 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:45:12 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:45:12 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 18:45:12 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:45:12 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 18:45:12 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:45:12 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev f2f3a7e7-56fa-4794-b66c-6d5a67fdf182 does not exist
Dec  3 18:45:12 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 2dd40937-e0b0-4aa0-a2fe-9913cd93daf8 does not exist
Dec  3 18:45:12 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 8888deb1-a308-46fa-958f-602745b70cf7 does not exist
Dec  3 18:45:12 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 18:45:12 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 18:45:12 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 18:45:12 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:45:12 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:45:12 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
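Each audited `mon_command` above (e.g. `osd tree` with `states: ["destroyed"]`) is the mgr dispatching a monitor command, which the rados Python binding can issue directly. A sketch under the assumption of a standard cephadm layout for the conffile path:

```python
# Reproduce one of the audited mon_command calls from Python.
import json
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
cmd = {"prefix": "osd tree", "states": ["destroyed"], "format": "json"}
ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b"")
print(ret, outs, outbuf[:200])
cluster.shutdown()
```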
Dec  3 18:45:13 compute-0 nova_compute[348325]: 2025-12-03 18:45:13.134 348329 DEBUG nova.network.neutron [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Updating instance_info_cache with network_info: [{"id": "03bf6208-f40b-4534-a297-122588172fa5", "address": "fa:16:3e:41:ba:29", "network": {"id": "85c8d446-ad7f-4d1b-a311-89b0b07e8aad", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.170", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2770200bdb2436c90142fa2e5ddcd47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03bf6208-f4", "ovs_interfaceid": "03bf6208-f40b-4534-a297-122588172fa5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
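The `instance_info_cache` update above carries a nested network_info structure per VIF. A trimmed copy of that structure plus a small helper that lists its fixed and floating addresses; the dict shape mirrors the logged cache entry, not a documented API:

```python
# Extract addresses from a network_info VIF entry like the one logged.
vif = {
    "id": "03bf6208-f40b-4534-a297-122588172fa5",
    "network": {"subnets": [{
        "ips": [{"address": "192.168.0.170", "type": "fixed",
                 "floating_ips": [{"address": "192.168.122.213",
                                   "type": "floating"}]}],
    }]},
}

def addresses(vif):
    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            yield ip["type"], ip["address"]
            for fip in ip.get("floating_ips", []):
                yield fip["type"], fip["address"]

print(list(addresses(vif)))
# [('fixed', '192.168.0.170'), ('floating', '192.168.122.213')]
```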
Dec  3 18:45:13 compute-0 nova_compute[348325]: 2025-12-03 18:45:13.156 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Releasing lock "refresh_cache-df72d527-943e-4e8c-b62a-63afa5f18261" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 18:45:13 compute-0 nova_compute[348325]: 2025-12-03 18:45:13.156 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.249 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the polling process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.250 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.250 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.251 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7eff8d7fffe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.251 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.252 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff9026f920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.252 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.252 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.253 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffa10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.253 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8daba2d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.253 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.253 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff90799b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.253 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.254 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8f46ebd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.254 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.254 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.254 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.254 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.254 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.255 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.255 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.255 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.255 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.255 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.256 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.256 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7fff50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.256 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.256 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7fffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.257 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8ef7c7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
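The registration run above illustrates what the manager warned about at 18:45:13.249: many pollsters queued onto a single-thread executor. A sketch of that dispatch pattern with placeholder pollster names (this mimics the described behavior, it is not the ceilometer code):

```python
# More pollsters than worker threads: submissions queue on a small
# ThreadPoolExecutor and execute serially.
from concurrent.futures import ThreadPoolExecutor

def poll(name):
    return f"polled {name}"

pollsters = [f"pollster-{i}" for i in range(8)]
with ThreadPoolExecutor(max_workers=1) as executor:  # [1] thread, as logged
    futures = [executor.submit(poll, p) for p in pollsters]
    for future in futures:
        print(future.result())
```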
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.260 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '1ca1fbdb-089c-4544-821e-0542089b8424', 'name': 'test_0', 'flavor': {'id': '6cb250a4-d28c-4125-888b-653b31e29275', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e68cd467-b4e6-45e0-8e55-984fda402294'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'd2770200bdb2436c90142fa2e5ddcd47', 'user_id': '56338958b09445f5af9aa9e4601a1a8a', 'hostId': '233c08f520fd9700ef62a871bc5d558f2659759d89ea6c0726998878', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  3 18:45:13 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1378: 321 pgs: 321 active+clean; 234 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 9.7 KiB/s rd, 341 B/s wr, 2 op/s
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.264 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'df72d527-943e-4e8c-b62a-63afa5f18261', 'name': 'vn-66btob3-hjy2dfx75wfw-5fmurbrh4hte-vnf-qa644it4tdj5', 'flavor': {'id': '6cb250a4-d28c-4125-888b-653b31e29275', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e68cd467-b4e6-45e0-8e55-984fda402294'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'd2770200bdb2436c90142fa2e5ddcd47', 'user_id': '56338958b09445f5af9aa9e4601a1a8a', 'hostId': '233c08f520fd9700ef62a871bc5d558f2659759d89ea6c0726998878', 'status': 'active', 'metadata': {'metering.server_group': 'b322e118-e1cc-40be-8d8c-553648144092'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.267 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance a6019a9c-c065-49d8-bef3-219bd2c79d8c from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.270 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/a6019a9c-c065-49d8-bef3-219bd2c79d8c -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}381125532ab0338283f553a8d9011c877e61445a70740cb69aa0e3ed00495f3c" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec  3 18:45:13 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:45:13 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:45:13 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:45:13 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:45:13 compute-0 nova_compute[348325]: 2025-12-03 18:45:13.700 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.805 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1960 Content-Type: application/json Date: Wed, 03 Dec 2025 18:45:13 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-413d3d67-26f9-4b58-b7f6-6258b91c4993 x-openstack-request-id: req-413d3d67-26f9-4b58-b7f6-6258b91c4993 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.806 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "a6019a9c-c065-49d8-bef3-219bd2c79d8c", "name": "vn-66btob3-zeembfmsdvyd-qc6d57h54o3l-vnf-m24sgrg35czm", "status": "ACTIVE", "tenant_id": "d2770200bdb2436c90142fa2e5ddcd47", "user_id": "56338958b09445f5af9aa9e4601a1a8a", "metadata": {"metering.server_group": "b322e118-e1cc-40be-8d8c-553648144092"}, "hostId": "233c08f520fd9700ef62a871bc5d558f2659759d89ea6c0726998878", "image": {"id": "e68cd467-b4e6-45e0-8e55-984fda402294", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/e68cd467-b4e6-45e0-8e55-984fda402294"}]}, "flavor": {"id": "6cb250a4-d28c-4125-888b-653b31e29275", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/6cb250a4-d28c-4125-888b-653b31e29275"}]}, "created": "2025-12-03T18:44:23Z", "updated": "2025-12-03T18:44:35Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.189", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:93:41:b2"}, {"version": 4, "addr": "192.168.122.206", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:93:41:b2"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/a6019a9c-c065-49d8-bef3-219bd2c79d8c"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/a6019a9c-c065-49d8-bef3-219bd2c79d8c"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-03T18:44:35.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000004", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.806 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/a6019a9c-c065-49d8-bef3-219bd2c79d8c used request id req-413d3d67-26f9-4b58-b7f6-6258b91c4993 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.808 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a6019a9c-c065-49d8-bef3-219bd2c79d8c', 'name': 'vn-66btob3-zeembfmsdvyd-qc6d57h54o3l-vnf-m24sgrg35czm', 'flavor': {'id': '6cb250a4-d28c-4125-888b-653b31e29275', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e68cd467-b4e6-45e0-8e55-984fda402294'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'd2770200bdb2436c90142fa2e5ddcd47', 'user_id': '56338958b09445f5af9aa9e4601a1a8a', 'hostId': '233c08f520fd9700ef62a871bc5d558f2659759d89ea6c0726998878', 'status': 'active', 'metadata': {'metering.server_group': 'b322e118-e1cc-40be-8d8c-553648144092'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.812 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'de3992c5-c1ad-4da3-9276-954d6365c3c9', 'name': 'vn-66btob3-t73jgstwyk5c-ol75pntdsuyz-vnf-noho2adux65j', 'flavor': {'id': '6cb250a4-d28c-4125-888b-653b31e29275', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e68cd467-b4e6-45e0-8e55-984fda402294'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'd2770200bdb2436c90142fa2e5ddcd47', 'user_id': '56338958b09445f5af9aa9e4601a1a8a', 'hostId': '233c08f520fd9700ef62a871bc5d558f2659759d89ea6c0726998878', 'status': 'active', 'metadata': {'metering.server_group': 'b322e118-e1cc-40be-8d8c-553648144092'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.813 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.813 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.813 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.813 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.814 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-03T18:45:13.813431) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.818 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.823 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.827 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for a6019a9c-c065-49d8-bef3-219bd2c79d8c / tapbdba7a40-88 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.827 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.830 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.831 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.831 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7eff8d8a80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.831 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.832 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.832 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.832 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.832 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.outgoing.bytes volume: 2314 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.832 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/network.outgoing.bytes volume: 7146 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.832 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.833 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/network.outgoing.bytes volume: 2286 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.833 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.833 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7eff8d8a8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.833 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.834 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff9026f920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.834 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff9026f920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.834 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.834 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.834 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-03T18:45:13.832200) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.834 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/network.outgoing.packets volume: 59 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.834 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-03T18:45:13.834351) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.835 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.835 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.836 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.836 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7eff8d8a8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.836 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.836 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.836 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.836 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.836 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.837 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/network.outgoing.bytes.delta volume: 2424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.837 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.837 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/network.outgoing.bytes.delta volume: 620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.837 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-03T18:45:13.836777) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.838 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.838 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7eff8d8a81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.838 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.838 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.838 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.838 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.838 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.838 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-66btob3-zeembfmsdvyd-qc6d57h54o3l-vnf-m24sgrg35czm>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-66btob3-zeembfmsdvyd-qc6d57h54o3l-vnf-m24sgrg35czm>]
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.839 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7eff8d7ff9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.839 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.839 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ffa10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.839 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ffa10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.839 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.839 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.incoming.bytes volume: 2178 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.840 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/network.incoming.bytes volume: 8280 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.840 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.840 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/network.incoming.bytes volume: 1570 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.839 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-03T18:45:13.838392) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.840 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.841 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7eff8d7fe840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.841 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.841 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8daba2d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.841 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8daba2d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.841 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.842 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-03T18:45:13.839724) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.842 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-03T18:45:13.841364) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.863 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.864 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.864 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:13 compute-0 ovn_controller[89305]: 2025-12-03T18:45:13Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:93:41:b2 192.168.0.189
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.892 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.892 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.893 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:13 compute-0 ovn_controller[89305]: 2025-12-03T18:45:13Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:93:41:b2 192.168.0.189
Dec  3 18:45:13 compute-0 podman[423821]: 2025-12-03 18:45:13.896201056 +0000 UTC m=+0.064817848 container create 0b5d1fa0b4524b36056e9fe1d558df7464164658065e841be2106aea17a53b16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_pare, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.919 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.920 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.920 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_18:45:13
Dec  3 18:45:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 18:45:13 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 18:45:13 compute-0 ceph-mgr[193091]: [balancer INFO root] pools ['default.rgw.log', '.rgw.root', 'default.rgw.control', 'images', 'vms', 'volumes', '.mgr', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'backups', 'default.rgw.meta']
Dec  3 18:45:13 compute-0 ceph-mgr[193091]: [balancer INFO root] prepared 0/10 changes
Dec  3 18:45:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:45:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.956 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.957 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.957 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:13 compute-0 systemd[1]: Started libpod-conmon-0b5d1fa0b4524b36056e9fe1d558df7464164658065e841be2106aea17a53b16.scope.
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.959 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.959 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7eff8d8a82c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.959 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.960 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a82f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.960 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a82f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.961 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.961 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.961 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-03T18:45:13.960862) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.962 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.962 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.963 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.963 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.964 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7eff8d7ff9b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.964 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.964 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff90799b20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.965 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff90799b20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.965 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.965 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-03T18:45:13.965209) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:45:13 compute-0 podman[423821]: 2025-12-03 18:45:13.872607729 +0000 UTC m=+0.041224541 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:45:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:45:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:45:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:45:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:45:13 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:45:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:13.996 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/memory.usage volume: 48.91796875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 podman[423821]: 2025-12-03 18:45:14.019137864 +0000 UTC m=+0.187754706 container init 0b5d1fa0b4524b36056e9fe1d558df7464164658065e841be2106aea17a53b16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_pare, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.020 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/memory.usage volume: 48.90625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 podman[423821]: 2025-12-03 18:45:14.031984199 +0000 UTC m=+0.200600991 container start 0b5d1fa0b4524b36056e9fe1d558df7464164658065e841be2106aea17a53b16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_pare, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec  3 18:45:14 compute-0 podman[423821]: 2025-12-03 18:45:14.037693208 +0000 UTC m=+0.206310040 container attach 0b5d1fa0b4524b36056e9fe1d558df7464164658065e841be2106aea17a53b16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  3 18:45:14 compute-0 practical_pare[423836]: 167 167
Dec  3 18:45:14 compute-0 systemd[1]: libpod-0b5d1fa0b4524b36056e9fe1d558df7464164658065e841be2106aea17a53b16.scope: Deactivated successfully.
Dec  3 18:45:14 compute-0 conmon[423836]: conmon 0b5d1fa0b4524b36056e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0b5d1fa0b4524b36056e9fe1d558df7464164658065e841be2106aea17a53b16.scope/container/memory.events
Dec  3 18:45:14 compute-0 podman[423821]: 2025-12-03 18:45:14.043292375 +0000 UTC m=+0.211909167 container died 0b5d1fa0b4524b36056e9fe1d558df7464164658065e841be2106aea17a53b16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_pare, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.047 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/memory.usage volume: 33.3203125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-94f049020a48ee6b1384cc43b69443a9b2206eb7aef042cb4a54a29d58626aac-merged.mount: Deactivated successfully.
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.081 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/memory.usage volume: 49.109375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.082 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.083 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7eff8d8a8350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.083 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.083 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.084 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.084 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.084 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.085 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-03T18:45:14.084517) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.085 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.085 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.086 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.086 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.087 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7eff8f682330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.087 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.087 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8f46ebd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.087 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8f46ebd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.087 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.088 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-03T18:45:14.087936) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.088 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.088 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.089 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.089 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.089 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.090 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.090 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.091 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.091 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.091 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.092 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.092 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.093 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.093 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7eff8d7ff4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.093 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.094 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.094 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.094 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.094 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-03T18:45:14.094497) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:45:14 compute-0 podman[423821]: 2025-12-03 18:45:14.102028532 +0000 UTC m=+0.270645324 container remove 0b5d1fa0b4524b36056e9fe1d558df7464164658065e841be2106aea17a53b16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_pare, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:45:14 compute-0 systemd[1]: libpod-conmon-0b5d1fa0b4524b36056e9fe1d558df7464164658065e841be2106aea17a53b16.scope: Deactivated successfully.
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.185 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.186 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.186 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.260 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.261 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.261 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.335 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.read.bytes volume: 21199872 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.335 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.read.bytes volume: 2160128 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.336 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.read.bytes volume: 328014 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 18:45:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:45:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 18:45:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:45:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:45:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:45:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:45:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:45:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:45:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
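[editor's note] The ceph-mgr rbd_support lines above show the MirrorSnapshotScheduleHandler and the TrashPurgeScheduleHandler each reloading their schedules across the usual OpenStack RBD pools (vms, volumes, backups, images); the empty start_after= just means each listing starts from the beginning, and the apparent duplicates are the two handlers walking the same pools. The configured schedules can be checked from the CLI; a rough wrapper sketch, assuming a reachable cluster with the rbd client installed (pool names from this log, everything else illustrative):

    # Sketch: list mirror-snapshot and trash-purge schedules for the
    # pools named in the log. Assumes a working ceph.conf/keyring and
    # the `rbd` CLI; output formatting varies across Ceph releases.
    import subprocess

    POOLS = ["vms", "volumes", "backups", "images"]
    COMMANDS = [
        ["rbd", "mirror", "snapshot", "schedule", "ls"],
        ["rbd", "trash", "purge", "schedule", "ls"],
    ]

    for pool in POOLS:
        for base in COMMANDS:
            res = subprocess.run(base + ["--pool", pool],
                                 capture_output=True, text=True)
            print(pool, " ".join(base[1:]), "->", res.stdout.strip() or "(none)")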
Dec  3 18:45:14 compute-0 podman[423858]: 2025-12-03 18:45:14.382882234 +0000 UTC m=+0.097388074 container create 5d5b6a6e2d40a52afb8a9f5cb1a2220f0674608e3b51330ac0cfe619955df7dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.416 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.417 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.417 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.418 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.418 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7eff8d930c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.418 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.418 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ffce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.418 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ffce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.418 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.418 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.418 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-66btob3-zeembfmsdvyd-qc6d57h54o3l-vnf-m24sgrg35czm>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-66btob3-zeembfmsdvyd-qc6d57h54o3l-vnf-m24sgrg35czm>]
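[editor's note] The ERROR above is ceilometer's permanent blacklisting, not a crash: LibvirtInspector cannot supply data for the network.incoming.bytes.rate meter (see the DEBUG line just before it), the pollster raises ceilometer.polling.plugin_base.PollsterPermanentError carrying the failing resources, and the polling manager excludes those instances from that pollster for the rest of the agent's life, so the message fires once per resource rather than every cycle. The contract looks roughly like this sketch (names simplified; not verbatim ceilometer code):

    # Sketch of the blacklisting contract implied by the ERROR above:
    # a pollster that can never serve some resources raises
    # PollsterPermanentError(resources); the manager then drops those
    # resources from future polls. Simplified, not verbatim ceilometer.
    class PollsterPermanentError(Exception):
        def __init__(self, resources):
            super().__init__(resources)
            self.fail_res_list = resources

    def get_samples(inspector, instances):
        if not getattr(inspector, "provides_rate_data", False):
            # The LibvirtInspector case in the log: skip these for good.
            raise PollsterPermanentError(list(instances))
        return []  # a real pollster would yield samples here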
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.419 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7eff8d7ff4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.419 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.419 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-03T18:45:14.418663) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.419 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.419 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.419 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.419 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.latency volume: 1682579508 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.420 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.latency volume: 260360075 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.420 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.latency volume: 147233249 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.420 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.read.latency volume: 1698039964 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.420 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.read.latency volume: 224294548 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.421 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-03T18:45:14.419758) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.421 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.read.latency volume: 159520694 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.421 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.read.latency volume: 1173010084 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.421 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.read.latency volume: 141905508 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.421 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.read.latency volume: 139501476 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.422 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.read.latency volume: 1270610173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.422 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.read.latency volume: 182054323 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.422 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.read.latency volume: 131449970 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.423 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.423 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7eff8d7ff530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.423 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.423 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.423 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.423 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.423 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.423 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.424 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.424 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-03T18:45:14.423531) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.424 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.424 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.424 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.425 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.read.requests volume: 719 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.425 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.read.requests volume: 114 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.425 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.425 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 podman[423858]: 2025-12-03 18:45:14.343939491 +0000 UTC m=+0.058445371 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.425 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.426 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.426 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
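[editor's note] Two of the counter families above combine usefully: disk.device.read.latency is cumulative read time (nanoseconds, as libvirt block stats report it) and disk.device.read.requests is the cumulative read count, so their ratio approximates the mean service time per read. For instance 1ca1fbdb-089c-4544-821e-0542089b8424 the three devices work out to roughly 2.0 ms, 1.5 ms and 1.35 ms per read. A sketch of the arithmetic, assuming the latency totals are nanoseconds and pair with the request counts device by device in log order:

    # Sketch: mean read latency per request from the cumulative
    # counters logged above for instance 1ca1fbdb-... Assumes the
    # latency totals are nanoseconds and pair with the request
    # counts device-by-device in log order.
    read_latency_ns = [1682579508, 260360075, 147233249]
    read_requests   = [840, 173, 109]

    for dev, (ns, n) in enumerate(zip(read_latency_ns, read_requests)):
        print(f"device {dev}: {ns / n / 1e6:.2f} ms per read")
    # device 0: 2.00 ms per read
    # device 1: 1.50 ms per read
    # device 2: 1.35 ms per read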
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.426 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7eff8d7ff590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.426 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.426 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.427 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.427 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.427 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.427 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.427 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.427 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.428 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.428 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.428 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-03T18:45:14.427122) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.428 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.428 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.429 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.429 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.429 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.429 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.430 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.430 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7eff8d7ff5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.430 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.430 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.430 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.430 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.430 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.431 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-03T18:45:14.430746) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.431 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.431 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.431 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.write.bytes volume: 41852928 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.432 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.432 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.432 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.write.bytes volume: 17735680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.432 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.432 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.433 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.write.bytes volume: 41762816 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.433 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.433 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.434 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.434 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7eff8d8a8620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.434 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.434 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.434 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.436 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-03T18:45:14.435148) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.435 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.437 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.437 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.437 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.438 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.438 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
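[editor's note] All four instances report power.state volume: 1. The meter carries the instance power state as an integer, and 1 is RUNNING in nova's power-state encoding (0 NOSTATE, 1 RUNNING, 3 PAUSED, 4 SHUTDOWN, 6 CRASHED, 7 SUSPENDED), so every guest on this host is up. A trivial decode sketch, with the mapping assumed from nova's power_state constants:

    # Sketch: decode the power.state values seen above. The mapping
    # is assumed from nova's power_state constants.
    POWER_STATES = {0: "NOSTATE", 1: "RUNNING", 3: "PAUSED",
                    4: "SHUTDOWN", 6: "CRASHED", 7: "SUSPENDED"}

    for sample in (1, 1, 1, 1):  # the four instances logged above
        print(POWER_STATES.get(sample, "UNKNOWN"))  # -> RUNNING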
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.438 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7eff8d7ff650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.439 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.439 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.439 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.439 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.439 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.latency volume: 6303799002 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.439 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.latency volume: 23959545 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.439 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.440 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.write.latency volume: 10024888984 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.440 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.write.latency volume: 29522381 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.440 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.440 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.write.latency volume: 4739858551 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.441 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.write.latency volume: 24431067 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.442 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.442 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.write.latency volume: 6661882048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.442 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.write.latency volume: 21269890 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.442 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.443 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.443 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7eff8d7ff6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.443 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.443 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.444 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.444 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.444 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.444 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.444 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.444 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.write.requests volume: 242 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.447 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-03T18:45:14.439370) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.447 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.450 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-03T18:45:14.444068) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.450 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.451 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.write.requests volume: 140 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.451 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.451 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.451 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.write.requests volume: 228 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.452 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.452 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.452 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.452 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7eff8d7ffa40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.453 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.453 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ffef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.453 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ffef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.453 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.453 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.453 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/network.incoming.bytes.delta volume: 3389 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.454 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.454 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.454 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-03T18:45:14.453331) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.454 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
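[editor's note] Note the contrast with the cumulative meters: network.incoming.bytes.delta reports only the bytes received since the previous poll (84, 3389, 0 and 84 here), saving consumers from differencing counters themselves. When only cumulative readings are available, the derivation is just successive differences, as in this sketch (the cumulative series is invented for illustration):

    # Sketch: per-interval deltas from a cumulative byte counter,
    # i.e. what the *.delta meters above hand you pre-computed.
    # The series is illustrative, not taken from the log.
    cumulative = [1000, 1084, 4473, 4557]

    deltas = [b - a for a, b in zip(cumulative, cumulative[1:])]
    print(deltas)  # -> [84, 3389, 84]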
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.455 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7eff8d7ff710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.455 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.455 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.455 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.455 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.455 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-03T18:45:14.455518) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.456 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.456 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7eff8d7fff20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.456 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.456 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7fff50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.457 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7fff50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.457 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.457 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-03T18:45:14.457110) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.457 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.incoming.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.457 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/network.incoming.packets volume: 52 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.457 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.458 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.458 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.458 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7eff8d7ff770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.458 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.458 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff7a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.459 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff7a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.459 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.459 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-03T18:45:14.459130) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.460 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.460 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7eff8d7fff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.460 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.460 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7fffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.460 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7fffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.460 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.460 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.461 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.461 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-03T18:45:14.460653) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.461 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.461 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.461 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.462 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7eff8d7fdac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.462 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.462 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8ef7c7d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.462 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8ef7c7d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.462 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.462 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/cpu volume: 41250000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.462 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/cpu volume: 355940000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.463 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-03T18:45:14.462398) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.463 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/cpu volume: 36310000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.463 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/cpu volume: 37630000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.463 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
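
The cpu values logged above are cumulative guest CPU time in nanoseconds (ceilometer's cpu meter), so a single sample says little on its own; utilization comes from differencing two polls. A minimal sketch of that arithmetic, assuming an illustrative poll interval and vCPU count (neither is shown in this log):

    # Sketch: turn two cumulative ceilometer "cpu" samples (nanoseconds of
    # guest CPU time) into a utilization percentage over the poll interval.
    prev_ns = 41_250_000_000   # volume from the poll above
    curr_ns = 41_550_000_000   # volume from a later poll (hypothetical)
    interval_s = 300           # polling interval in seconds (assumed)
    vcpus = 1                  # instance vCPU count (assumed)

    cpu_util_pct = 100.0 * (curr_ns - prev_ns) / 1e9 / (interval_s * vcpus)
    print(f"cpu_util ~= {cpu_util_pct:.2f}%")
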
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.464 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.464 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.464 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.464 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:45:14 compute-0 systemd[1]: Started libpod-conmon-5d5b6a6e2d40a52afb8a9f5cb1a2220f0674608e3b51330ac0cfe619955df7dd.scope.
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.464 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.464 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.465 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.465 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.465 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.465 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.465 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.465 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.465 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.465 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.465 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.465 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.465 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.465 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.465 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.465 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.465 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.465 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.466 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.466 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.466 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:45:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:45:14.466 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
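
Each pollster run in this cycle is bracketed by a "Polling pollster <name>" and a "Finished polling pollster <name>" INFO line from ceilometer.polling.manager. A small sketch, assuming this exact oslo.log line format, that pairs the two messages to report per-pollster wall time (the input file name is illustrative):

    import re
    from datetime import datetime

    # Matches the oslo.log timestamp and the pollster name in the INFO lines above.
    PAT = re.compile(
        r"(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \d+ INFO ceilometer\.polling\.manager "
        r"\[-\] (Polling|Finished polling) pollster (\S+)")

    started = {}
    with open("messages") as fh:  # hypothetical capture of this journal
        for line in fh:
            m = PAT.search(line)
            if not m:
                continue
            ts = datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S.%f")
            if m.group(2) == "Polling":
                started[m.group(3)] = ts
            elif m.group(3) in started:
                delta = ts - started.pop(m.group(3))
                print(f"{m.group(3)}: {delta.total_seconds():.3f}s")
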
Dec  3 18:45:14 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:45:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f94a9a33298c5114c4a481cae280c5c58124fb7df3d02596eb06452f95ca8270/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:45:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f94a9a33298c5114c4a481cae280c5c58124fb7df3d02596eb06452f95ca8270/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:45:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f94a9a33298c5114c4a481cae280c5c58124fb7df3d02596eb06452f95ca8270/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:45:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f94a9a33298c5114c4a481cae280c5c58124fb7df3d02596eb06452f95ca8270/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:45:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f94a9a33298c5114c4a481cae280c5c58124fb7df3d02596eb06452f95ca8270/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 18:45:14 compute-0 podman[423858]: 2025-12-03 18:45:14.52575368 +0000 UTC m=+0.240259540 container init 5d5b6a6e2d40a52afb8a9f5cb1a2220f0674608e3b51330ac0cfe619955df7dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bose, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:45:14 compute-0 podman[423858]: 2025-12-03 18:45:14.540981872 +0000 UTC m=+0.255487712 container start 5d5b6a6e2d40a52afb8a9f5cb1a2220f0674608e3b51330ac0cfe619955df7dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bose, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:45:14 compute-0 podman[423858]: 2025-12-03 18:45:14.545279347 +0000 UTC m=+0.259785197 container attach 5d5b6a6e2d40a52afb8a9f5cb1a2220f0674608e3b51330ac0cfe619955df7dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bose, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  3 18:45:15 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1379: 321 pgs: 321 active+clean; 242 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 659 KiB/s wr, 17 op/s
Dec  3 18:45:15 compute-0 happy_bose[423871]: --> passed data devices: 0 physical, 3 LVM
Dec  3 18:45:15 compute-0 happy_bose[423871]: --> relative data size: 1.0
Dec  3 18:45:15 compute-0 happy_bose[423871]: --> All data devices are unavailable
Dec  3 18:45:15 compute-0 systemd[1]: libpod-5d5b6a6e2d40a52afb8a9f5cb1a2220f0674608e3b51330ac0cfe619955df7dd.scope: Deactivated successfully.
Dec  3 18:45:15 compute-0 systemd[1]: libpod-5d5b6a6e2d40a52afb8a9f5cb1a2220f0674608e3b51330ac0cfe619955df7dd.scope: Consumed 1.205s CPU time.
Dec  3 18:45:15 compute-0 podman[423902]: 2025-12-03 18:45:15.928008841 +0000 UTC m=+0.047919054 container died 5d5b6a6e2d40a52afb8a9f5cb1a2220f0674608e3b51330ac0cfe619955df7dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  3 18:45:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-f94a9a33298c5114c4a481cae280c5c58124fb7df3d02596eb06452f95ca8270-merged.mount: Deactivated successfully.
Dec  3 18:45:16 compute-0 podman[423902]: 2025-12-03 18:45:16.006004729 +0000 UTC m=+0.125914932 container remove 5d5b6a6e2d40a52afb8a9f5cb1a2220f0674608e3b51330ac0cfe619955df7dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bose, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:45:16 compute-0 systemd[1]: libpod-conmon-5d5b6a6e2d40a52afb8a9f5cb1a2220f0674608e3b51330ac0cfe619955df7dd.scope: Deactivated successfully.
Dec  3 18:45:16 compute-0 podman[424055]: 2025-12-03 18:45:16.896272462 +0000 UTC m=+0.053600873 container create 395c21943eb890faa4012c55db5973bd3ce4f33cb428e48f9f1dce38e75d5e99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_turing, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec  3 18:45:16 compute-0 systemd[1]: Started libpod-conmon-395c21943eb890faa4012c55db5973bd3ce4f33cb428e48f9f1dce38e75d5e99.scope.
Dec  3 18:45:16 compute-0 podman[424055]: 2025-12-03 18:45:16.871866895 +0000 UTC m=+0.029195356 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:45:16 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:45:16 compute-0 podman[424055]: 2025-12-03 18:45:16.997821117 +0000 UTC m=+0.155149518 container init 395c21943eb890faa4012c55db5973bd3ce4f33cb428e48f9f1dce38e75d5e99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_turing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:45:17 compute-0 podman[424055]: 2025-12-03 18:45:17.007554065 +0000 UTC m=+0.164882466 container start 395c21943eb890faa4012c55db5973bd3ce4f33cb428e48f9f1dce38e75d5e99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec  3 18:45:17 compute-0 podman[424055]: 2025-12-03 18:45:17.012385263 +0000 UTC m=+0.169713674 container attach 395c21943eb890faa4012c55db5973bd3ce4f33cb428e48f9f1dce38e75d5e99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_turing, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec  3 18:45:17 compute-0 amazing_turing[424071]: 167 167
Dec  3 18:45:17 compute-0 systemd[1]: libpod-395c21943eb890faa4012c55db5973bd3ce4f33cb428e48f9f1dce38e75d5e99.scope: Deactivated successfully.
Dec  3 18:45:17 compute-0 podman[424055]: 2025-12-03 18:45:17.016212747 +0000 UTC m=+0.173541198 container died 395c21943eb890faa4012c55db5973bd3ce4f33cb428e48f9f1dce38e75d5e99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_turing, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:45:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-356a1dca4c6158f9c704ce48bacba4341377029bfff1e4533c8979bd8add9917-merged.mount: Deactivated successfully.
Dec  3 18:45:17 compute-0 podman[424055]: 2025-12-03 18:45:17.06419396 +0000 UTC m=+0.221522361 container remove 395c21943eb890faa4012c55db5973bd3ce4f33cb428e48f9f1dce38e75d5e99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_turing, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:45:17 compute-0 systemd[1]: libpod-conmon-395c21943eb890faa4012c55db5973bd3ce4f33cb428e48f9f1dce38e75d5e99.scope: Deactivated successfully.
Dec  3 18:45:17 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1380: 321 pgs: 321 active+clean; 254 MiB data, 347 MiB used, 60 GiB / 60 GiB avail; 136 KiB/s rd, 1.0 MiB/s wr, 44 op/s
Dec  3 18:45:17 compute-0 podman[424095]: 2025-12-03 18:45:17.28364583 +0000 UTC m=+0.060154233 container create a496779ae3d7015306625148fc2b0466b1d0d13454e0ef55a7c1113fdfe7b981 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:45:17 compute-0 systemd[1]: Started libpod-conmon-a496779ae3d7015306625148fc2b0466b1d0d13454e0ef55a7c1113fdfe7b981.scope.
Dec  3 18:45:17 compute-0 podman[424095]: 2025-12-03 18:45:17.256385954 +0000 UTC m=+0.032894387 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:45:17 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:45:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90a222fdce4745c4f6784f57ef3e2c86346c8a1e168ba6246dde329a5c16de8f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:45:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90a222fdce4745c4f6784f57ef3e2c86346c8a1e168ba6246dde329a5c16de8f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:45:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90a222fdce4745c4f6784f57ef3e2c86346c8a1e168ba6246dde329a5c16de8f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:45:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90a222fdce4745c4f6784f57ef3e2c86346c8a1e168ba6246dde329a5c16de8f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:45:17 compute-0 podman[424095]: 2025-12-03 18:45:17.41851622 +0000 UTC m=+0.195024693 container init a496779ae3d7015306625148fc2b0466b1d0d13454e0ef55a7c1113fdfe7b981 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:45:17 compute-0 podman[424095]: 2025-12-03 18:45:17.438779106 +0000 UTC m=+0.215287499 container start a496779ae3d7015306625148fc2b0466b1d0d13454e0ef55a7c1113fdfe7b981 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_taussig, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:45:17 compute-0 podman[424095]: 2025-12-03 18:45:17.443394219 +0000 UTC m=+0.219902632 container attach a496779ae3d7015306625148fc2b0466b1d0d13454e0ef55a7c1113fdfe7b981 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:45:17 compute-0 nova_compute[348325]: 2025-12-03 18:45:17.651 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]: {
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:    "0": [
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:        {
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:            "devices": [
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:                "/dev/loop3"
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:            ],
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:            "lv_name": "ceph_lv0",
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:            "lv_size": "21470642176",
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=973fbbc8-5aff-4a53-bee8-42e5a6788dd6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:            "lv_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:            "name": "ceph_lv0",
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:            "tags": {
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:                "ceph.block_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:                "ceph.cluster_name": "ceph",
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:                "ceph.crush_device_class": "",
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:                "ceph.encrypted": "0",
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:                "ceph.osd_fsid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:                "ceph.osd_id": "0",
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:                "ceph.type": "block",
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:                "ceph.vdo": "0"
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:            },
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:            "type": "block",
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:            "vg_name": "ceph_vg0"
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:        }
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:    ],
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:    "1": [
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:        {
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:            "devices": [
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:                "/dev/loop4"
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:            ],
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:            "lv_name": "ceph_lv1",
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:            "lv_size": "21470642176",
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1e2b0083-5293-47cb-a3d1-bc27cedc4ede,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:            "lv_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:            "name": "ceph_lv1",
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:            "tags": {
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:                "ceph.block_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:                "ceph.cluster_name": "ceph",
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:                "ceph.crush_device_class": "",
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:                "ceph.encrypted": "0",
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:                "ceph.osd_fsid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:                "ceph.osd_id": "1",
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:                "ceph.type": "block",
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:                "ceph.vdo": "0"
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:            },
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:            "type": "block",
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:            "vg_name": "ceph_vg1"
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:        }
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:    ],
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:    "2": [
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:        {
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:            "devices": [
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:                "/dev/loop5"
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:            ],
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:            "lv_name": "ceph_lv2",
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:            "lv_size": "21470642176",
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2abec9de-afba-437e-9a17-384a1dd8cd50,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:            "lv_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:            "name": "ceph_lv2",
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:            "tags": {
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:                "ceph.block_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:                "ceph.cluster_name": "ceph",
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:                "ceph.crush_device_class": "",
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:                "ceph.encrypted": "0",
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:                "ceph.osd_fsid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:                "ceph.osd_id": "2",
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:                "ceph.type": "block",
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:                "ceph.vdo": "0"
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:            },
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:            "type": "block",
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:            "vg_name": "ceph_vg2"
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:        }
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]:    ]
Dec  3 18:45:18 compute-0 sleepy_taussig[424112]: }
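
The block printed by sleepy_taussig is `ceph-volume lvm list --format json` output: a map from OSD id to the logical volume(s) backing it, with the ceph.* LV tags expanded into the tags object. A short sketch that reduces a captured copy of this JSON to one line per OSD (the capture file name is illustrative; the keys used are the ones visible above):

    import json

    with open("ceph_volume_lvm_list.json") as fh:  # hypothetical capture of the output above
        inventory = json.load(fh)

    # One line per OSD: id, LV path, backing devices, and the OSD fsid tag.
    for osd_id, lvs in sorted(inventory.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
                  f"(osd_fsid={lv['tags']['ceph.osd_fsid']}, type={lv['type']})")
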
Dec  3 18:45:18 compute-0 systemd[1]: libpod-a496779ae3d7015306625148fc2b0466b1d0d13454e0ef55a7c1113fdfe7b981.scope: Deactivated successfully.
Dec  3 18:45:18 compute-0 systemd[1]: Starting dnf makecache...
Dec  3 18:45:18 compute-0 podman[424121]: 2025-12-03 18:45:18.41655289 +0000 UTC m=+0.047930803 container died a496779ae3d7015306625148fc2b0466b1d0d13454e0ef55a7c1113fdfe7b981 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_taussig, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec  3 18:45:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-90a222fdce4745c4f6784f57ef3e2c86346c8a1e168ba6246dde329a5c16de8f-merged.mount: Deactivated successfully.
Dec  3 18:45:18 compute-0 nova_compute[348325]: 2025-12-03 18:45:18.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:45:18 compute-0 nova_compute[348325]: 2025-12-03 18:45:18.523 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:45:18 compute-0 nova_compute[348325]: 2025-12-03 18:45:18.524 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:45:18 compute-0 nova_compute[348325]: 2025-12-03 18:45:18.525 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
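
The acquire/release pair above is oslo.concurrency serializing nova's resource tracker on an in-process "compute_resources" lock (held 0.000s here because the cache clean found nothing to do). A minimal sketch of the same pattern; the function body is illustrative, but lockutils.synchronized is the real oslo.concurrency decorator whose entry and exit these DEBUG lines trace:

    from oslo_concurrency import lockutils

    # Functions decorated with the same lock name are mutually exclusive within
    # the process; entering and leaving emits the Acquiring/acquired/released
    # DEBUG lines seen above.
    @lockutils.synchronized("compute_resources")
    def clean_compute_node_cache():
        pass  # critical section over the resource tracker's state (illustrative)

    clean_compute_node_cache()
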
Dec  3 18:45:18 compute-0 nova_compute[348325]: 2025-12-03 18:45:18.525 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  3 18:45:18 compute-0 nova_compute[348325]: 2025-12-03 18:45:18.526 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 18:45:18 compute-0 podman[424121]: 2025-12-03 18:45:18.528500089 +0000 UTC m=+0.159877972 container remove a496779ae3d7015306625148fc2b0466b1d0d13454e0ef55a7c1113fdfe7b981 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_taussig, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  3 18:45:18 compute-0 systemd[1]: libpod-conmon-a496779ae3d7015306625148fc2b0466b1d0d13454e0ef55a7c1113fdfe7b981.scope: Deactivated successfully.
Dec  3 18:45:18 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:45:18 compute-0 dnf[424122]: Metadata cache refreshed recently.
Dec  3 18:45:18 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Dec  3 18:45:18 compute-0 systemd[1]: Finished dnf makecache.
Dec  3 18:45:18 compute-0 nova_compute[348325]: 2025-12-03 18:45:18.702 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:45:18 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:45:18 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1727412307' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:45:18 compute-0 nova_compute[348325]: 2025-12-03 18:45:18.994 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
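
The 18:45:18.526/18.994 pair shows nova auditing Ceph-backed disk capacity by shelling out to the ceph CLI, with the mon's audit channel logging the dispatched df command in between. A hedged sketch of the same call; the top-level "stats" key names are assumed from current Ceph releases rather than confirmed by this log:

    import json
    import subprocess

    # Same command nova ran above; needs a reachable cluster plus the
    # client.openstack keyring referenced by /etc/ceph/ceph.conf.
    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]  # key names assumed, not shown in the log
    gib = 1024 ** 3
    print(f"total={stats['total_bytes'] / gib:.1f} GiB, "
          f"avail={stats['total_avail_bytes'] / gib:.1f} GiB")
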
Dec  3 18:45:19 compute-0 nova_compute[348325]: 2025-12-03 18:45:19.145 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:45:19 compute-0 nova_compute[348325]: 2025-12-03 18:45:19.146 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:45:19 compute-0 nova_compute[348325]: 2025-12-03 18:45:19.147 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:45:19 compute-0 nova_compute[348325]: 2025-12-03 18:45:19.154 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:45:19 compute-0 nova_compute[348325]: 2025-12-03 18:45:19.154 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:45:19 compute-0 nova_compute[348325]: 2025-12-03 18:45:19.155 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:45:19 compute-0 nova_compute[348325]: 2025-12-03 18:45:19.160 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:45:19 compute-0 nova_compute[348325]: 2025-12-03 18:45:19.160 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:45:19 compute-0 nova_compute[348325]: 2025-12-03 18:45:19.161 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:45:19 compute-0 nova_compute[348325]: 2025-12-03 18:45:19.168 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:45:19 compute-0 nova_compute[348325]: 2025-12-03 18:45:19.169 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:45:19 compute-0 nova_compute[348325]: 2025-12-03 18:45:19.169 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:45:19 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1381: 321 pgs: 321 active+clean; 262 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 164 KiB/s rd, 1.4 MiB/s wr, 53 op/s
Dec  3 18:45:19 compute-0 podman[424299]: 2025-12-03 18:45:19.53023316 +0000 UTC m=+0.071680915 container create 2d7b6aa4e0600639911d16979218d22e196aed9e212a41c27c824653fee1eb66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_cray, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec  3 18:45:19 compute-0 systemd[1]: Started libpod-conmon-2d7b6aa4e0600639911d16979218d22e196aed9e212a41c27c824653fee1eb66.scope.
Dec  3 18:45:19 compute-0 podman[424299]: 2025-12-03 18:45:19.503146867 +0000 UTC m=+0.044594662 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:45:19 compute-0 nova_compute[348325]: 2025-12-03 18:45:19.606 348329 WARNING nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  3 18:45:19 compute-0 nova_compute[348325]: 2025-12-03 18:45:19.608 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3226MB free_disk=59.86080551147461GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  3 18:45:19 compute-0 nova_compute[348325]: 2025-12-03 18:45:19.609 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:45:19 compute-0 nova_compute[348325]: 2025-12-03 18:45:19.609 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:45:19 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:45:19 compute-0 podman[424299]: 2025-12-03 18:45:19.64628199 +0000 UTC m=+0.187729795 container init 2d7b6aa4e0600639911d16979218d22e196aed9e212a41c27c824653fee1eb66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_cray, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:45:19 compute-0 podman[424299]: 2025-12-03 18:45:19.655321261 +0000 UTC m=+0.196769016 container start 2d7b6aa4e0600639911d16979218d22e196aed9e212a41c27c824653fee1eb66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_cray, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  3 18:45:19 compute-0 podman[424299]: 2025-12-03 18:45:19.659731889 +0000 UTC m=+0.201179664 container attach 2d7b6aa4e0600639911d16979218d22e196aed9e212a41c27c824653fee1eb66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_cray, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  3 18:45:19 compute-0 vibrant_cray[424316]: 167 167
Dec  3 18:45:19 compute-0 systemd[1]: libpod-2d7b6aa4e0600639911d16979218d22e196aed9e212a41c27c824653fee1eb66.scope: Deactivated successfully.
Dec  3 18:45:19 compute-0 conmon[424316]: conmon 2d7b6aa4e0600639911d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2d7b6aa4e0600639911d16979218d22e196aed9e212a41c27c824653fee1eb66.scope/container/memory.events
Dec  3 18:45:19 compute-0 podman[424299]: 2025-12-03 18:45:19.670520983 +0000 UTC m=+0.211968748 container died 2d7b6aa4e0600639911d16979218d22e196aed9e212a41c27c824653fee1eb66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec  3 18:45:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-83bb74444511717807a4f907f6e70e4d771fdaaffc84b8626c3b7b5e97a9cf74-merged.mount: Deactivated successfully.
Dec  3 18:45:19 compute-0 podman[424299]: 2025-12-03 18:45:19.720711231 +0000 UTC m=+0.262158986 container remove 2d7b6aa4e0600639911d16979218d22e196aed9e212a41c27c824653fee1eb66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_cray, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:45:19 compute-0 nova_compute[348325]: 2025-12-03 18:45:19.738 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance 1ca1fbdb-089c-4544-821e-0542089b8424 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  3 18:45:19 compute-0 nova_compute[348325]: 2025-12-03 18:45:19.739 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance df72d527-943e-4e8c-b62a-63afa5f18261 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  3 18:45:19 compute-0 nova_compute[348325]: 2025-12-03 18:45:19.740 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance de3992c5-c1ad-4da3-9276-954d6365c3c9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  3 18:45:19 compute-0 nova_compute[348325]: 2025-12-03 18:45:19.740 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance a6019a9c-c065-49d8-bef3-219bd2c79d8c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  3 18:45:19 compute-0 nova_compute[348325]: 2025-12-03 18:45:19.740 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  3 18:45:19 compute-0 nova_compute[348325]: 2025-12-03 18:45:19.740 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=59GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  3 18:45:19 compute-0 systemd[1]: libpod-conmon-2d7b6aa4e0600639911d16979218d22e196aed9e212a41c27c824653fee1eb66.scope: Deactivated successfully.
Dec  3 18:45:19 compute-0 nova_compute[348325]: 2025-12-03 18:45:19.886 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 18:45:19 compute-0 podman[424339]: 2025-12-03 18:45:19.965757727 +0000 UTC m=+0.089890591 container create c6e4008c43e2b8e0a156c908225aed50908f98c912451f7215f3a7bd79ed1cf1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_chebyshev, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:45:20 compute-0 podman[424339]: 2025-12-03 18:45:19.931074508 +0000 UTC m=+0.055207452 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:45:20 compute-0 systemd[1]: Started libpod-conmon-c6e4008c43e2b8e0a156c908225aed50908f98c912451f7215f3a7bd79ed1cf1.scope.
Dec  3 18:45:20 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:45:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d76381b62df3f66fc403c0aca6fca9e1a1a4de2b380fe2efa66fb327c2a077a5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:45:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d76381b62df3f66fc403c0aca6fca9e1a1a4de2b380fe2efa66fb327c2a077a5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:45:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d76381b62df3f66fc403c0aca6fca9e1a1a4de2b380fe2efa66fb327c2a077a5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:45:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d76381b62df3f66fc403c0aca6fca9e1a1a4de2b380fe2efa66fb327c2a077a5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:45:20 compute-0 podman[424339]: 2025-12-03 18:45:20.109115494 +0000 UTC m=+0.233248388 container init c6e4008c43e2b8e0a156c908225aed50908f98c912451f7215f3a7bd79ed1cf1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec  3 18:45:20 compute-0 podman[424339]: 2025-12-03 18:45:20.123825405 +0000 UTC m=+0.247958289 container start c6e4008c43e2b8e0a156c908225aed50908f98c912451f7215f3a7bd79ed1cf1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_chebyshev, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  3 18:45:20 compute-0 podman[424339]: 2025-12-03 18:45:20.128652692 +0000 UTC m=+0.252785576 container attach c6e4008c43e2b8e0a156c908225aed50908f98c912451f7215f3a7bd79ed1cf1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_chebyshev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec  3 18:45:20 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:45:20 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1866476565' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:45:20 compute-0 nova_compute[348325]: 2025-12-03 18:45:20.397 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 18:45:20 compute-0 nova_compute[348325]: 2025-12-03 18:45:20.407 348329 DEBUG nova.compute.provider_tree [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  3 18:45:20 compute-0 nova_compute[348325]: 2025-12-03 18:45:20.429 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  3 18:45:20 compute-0 nova_compute[348325]: 2025-12-03 18:45:20.459 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  3 18:45:20 compute-0 nova_compute[348325]: 2025-12-03 18:45:20.459 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.850s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:45:21 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1382: 321 pgs: 321 active+clean; 263 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Dec  3 18:45:21 compute-0 pensive_chebyshev[424375]: {
Dec  3 18:45:21 compute-0 pensive_chebyshev[424375]:    "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
Dec  3 18:45:21 compute-0 pensive_chebyshev[424375]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:45:21 compute-0 pensive_chebyshev[424375]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 18:45:21 compute-0 pensive_chebyshev[424375]:        "osd_id": 1,
Dec  3 18:45:21 compute-0 pensive_chebyshev[424375]:        "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:45:21 compute-0 pensive_chebyshev[424375]:        "type": "bluestore"
Dec  3 18:45:21 compute-0 pensive_chebyshev[424375]:    },
Dec  3 18:45:21 compute-0 pensive_chebyshev[424375]:    "2abec9de-afba-437e-9a17-384a1dd8cd50": {
Dec  3 18:45:21 compute-0 pensive_chebyshev[424375]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:45:21 compute-0 pensive_chebyshev[424375]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 18:45:21 compute-0 pensive_chebyshev[424375]:        "osd_id": 2,
Dec  3 18:45:21 compute-0 pensive_chebyshev[424375]:        "osd_uuid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:45:21 compute-0 pensive_chebyshev[424375]:        "type": "bluestore"
Dec  3 18:45:21 compute-0 pensive_chebyshev[424375]:    },
Dec  3 18:45:21 compute-0 pensive_chebyshev[424375]:    "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": {
Dec  3 18:45:21 compute-0 pensive_chebyshev[424375]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:45:21 compute-0 pensive_chebyshev[424375]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 18:45:21 compute-0 pensive_chebyshev[424375]:        "osd_id": 0,
Dec  3 18:45:21 compute-0 pensive_chebyshev[424375]:        "osd_uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:45:21 compute-0 pensive_chebyshev[424375]:        "type": "bluestore"
Dec  3 18:45:21 compute-0 pensive_chebyshev[424375]:    }
Dec  3 18:45:21 compute-0 pensive_chebyshev[424375]: }
Dec  3 18:45:21 compute-0 systemd[1]: libpod-c6e4008c43e2b8e0a156c908225aed50908f98c912451f7215f3a7bd79ed1cf1.scope: Deactivated successfully.
Dec  3 18:45:21 compute-0 systemd[1]: libpod-c6e4008c43e2b8e0a156c908225aed50908f98c912451f7215f3a7bd79ed1cf1.scope: Consumed 1.180s CPU time.
Dec  3 18:45:21 compute-0 podman[424339]: 2025-12-03 18:45:21.336970778 +0000 UTC m=+1.461103692 container died c6e4008c43e2b8e0a156c908225aed50908f98c912451f7215f3a7bd79ed1cf1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:45:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-d76381b62df3f66fc403c0aca6fca9e1a1a4de2b380fe2efa66fb327c2a077a5-merged.mount: Deactivated successfully.
Dec  3 18:45:21 compute-0 podman[424339]: 2025-12-03 18:45:21.420554223 +0000 UTC m=+1.544687097 container remove c6e4008c43e2b8e0a156c908225aed50908f98c912451f7215f3a7bd79ed1cf1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_chebyshev, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec  3 18:45:21 compute-0 systemd[1]: libpod-conmon-c6e4008c43e2b8e0a156c908225aed50908f98c912451f7215f3a7bd79ed1cf1.scope: Deactivated successfully.
Dec  3 18:45:21 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:45:21 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:45:21 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:45:21 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:45:21 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev b2e290ba-cea6-4b2d-bf59-f1fb56fddb56 does not exist
Dec  3 18:45:21 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev a9cab4fb-3995-40da-b2d7-278fb0189ab3 does not exist
Dec  3 18:45:22 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:45:22 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:45:22 compute-0 nova_compute[348325]: 2025-12-03 18:45:22.652 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:45:23 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1383: 321 pgs: 321 active+clean; 263 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Dec  3 18:45:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:45:23.340 286999 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:45:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:45:23.341 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:45:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:45:23.342 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:45:23 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:45:23 compute-0 nova_compute[348325]: 2025-12-03 18:45:23.706 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:45:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 18:45:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:45:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 18:45:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:45:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0022106673666731783 of space, bias 1.0, pg target 0.6632002100019535 quantized to 32 (current 32)
Dec  3 18:45:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:45:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:45:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:45:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:45:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:45:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec  3 18:45:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:45:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 18:45:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:45:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:45:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:45:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 18:45:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:45:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 18:45:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:45:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:45:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:45:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 18:45:25 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1384: 321 pgs: 321 active+clean; 263 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 156 KiB/s rd, 1.5 MiB/s wr, 55 op/s
Dec  3 18:45:25 compute-0 podman[424473]: 2025-12-03 18:45:25.906292219 +0000 UTC m=+0.074130774 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd)
Dec  3 18:45:25 compute-0 podman[424474]: 2025-12-03 18:45:25.921264146 +0000 UTC m=+0.082926150 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  3 18:45:25 compute-0 podman[424475]: 2025-12-03 18:45:25.929062017 +0000 UTC m=+0.092354921 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, distribution-scope=public, maintainer=Red Hat, Inc., io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, version=9.6, io.openshift.tags=minimal rhel9, name=ubi9-minimal, release=1755695350, io.buildah.version=1.33.7)
Dec  3 18:45:27 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1385: 321 pgs: 321 active+clean; 263 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 111 KiB/s rd, 861 KiB/s wr, 40 op/s
Dec  3 18:45:27 compute-0 nova_compute[348325]: 2025-12-03 18:45:27.654 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:45:27 compute-0 podman[424534]: 2025-12-03 18:45:27.914270472 +0000 UTC m=+0.074505955 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, io.openshift.expose-services=, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, vendor=Red Hat, Inc., config_id=edpm, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc.)
Dec  3 18:45:27 compute-0 podman[424535]: 2025-12-03 18:45:27.95631224 +0000 UTC m=+0.103324669 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible)
Dec  3 18:45:27 compute-0 podman[424536]: 2025-12-03 18:45:27.961286591 +0000 UTC m=+0.110643388 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  3 18:45:28 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:45:28 compute-0 nova_compute[348325]: 2025-12-03 18:45:28.709 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:45:29 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1386: 321 pgs: 321 active+clean; 263 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 476 KiB/s wr, 13 op/s
Dec  3 18:45:29 compute-0 podman[158200]: time="2025-12-03T18:45:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:45:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:45:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 18:45:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:45:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8642 "" "Go-http-client/1.1"
Dec  3 18:45:31 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1387: 321 pgs: 321 active+clean; 263 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 36 KiB/s wr, 3 op/s
Dec  3 18:45:31 compute-0 openstack_network_exporter[365222]: ERROR   18:45:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:45:31 compute-0 openstack_network_exporter[365222]: ERROR   18:45:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:45:31 compute-0 openstack_network_exporter[365222]: ERROR   18:45:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:45:31 compute-0 openstack_network_exporter[365222]: ERROR   18:45:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:45:31 compute-0 openstack_network_exporter[365222]: ERROR   18:45:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:45:32 compute-0 nova_compute[348325]: 2025-12-03 18:45:32.656 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:45:33 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1388: 321 pgs: 321 active+clean; 263 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s wr, 0 op/s
Dec  3 18:45:33 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:45:33 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Dec  3 18:45:33 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:45:33.628251) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 18:45:33 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Dec  3 18:45:33 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764787533628367, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 1035, "num_deletes": 250, "total_data_size": 1497818, "memory_usage": 1517672, "flush_reason": "Manual Compaction"}
Dec  3 18:45:33 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Dec  3 18:45:33 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764787533638248, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 907727, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 27600, "largest_seqno": 28634, "table_properties": {"data_size": 903744, "index_size": 1635, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 10605, "raw_average_key_size": 20, "raw_value_size": 895085, "raw_average_value_size": 1748, "num_data_blocks": 74, "num_entries": 512, "num_filter_entries": 512, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764787436, "oldest_key_time": 1764787436, "file_creation_time": 1764787533, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a1ac3b74-8599-4a51-8b4c-6fd35a134427", "db_session_id": "TYOLZSJOOVNJYKF8Y1CE", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Dec  3 18:45:33 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 10039 microseconds, and 3600 cpu microseconds.
Dec  3 18:45:33 compute-0 ceph-mon[192802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 18:45:33 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:45:33.638289) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 907727 bytes OK
Dec  3 18:45:33 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:45:33.638305) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Dec  3 18:45:33 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:45:33.640380) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Dec  3 18:45:33 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:45:33.640394) EVENT_LOG_v1 {"time_micros": 1764787533640389, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 18:45:33 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:45:33.640408) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 18:45:33 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 1492939, prev total WAL file size 1492939, number of live WAL files 2.
Dec  3 18:45:33 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 18:45:33 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:45:33.641726) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303033' seq:72057594037927935, type:22 .. '6D6772737461740031323534' seq:0, type:0; will stop at (end)
Dec  3 18:45:33 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 18:45:33 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(886KB)], [62(8893KB)]
Dec  3 18:45:33 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764787533641788, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 10014995, "oldest_snapshot_seqno": -1}
Dec  3 18:45:33 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 5084 keys, 7280721 bytes, temperature: kUnknown
Dec  3 18:45:33 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764787533682415, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 7280721, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7248408, "index_size": 18507, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12741, "raw_key_size": 126239, "raw_average_key_size": 24, "raw_value_size": 7158061, "raw_average_value_size": 1407, "num_data_blocks": 771, "num_entries": 5084, "num_filter_entries": 5084, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764784942, "oldest_key_time": 0, "file_creation_time": 1764787533, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a1ac3b74-8599-4a51-8b4c-6fd35a134427", "db_session_id": "TYOLZSJOOVNJYKF8Y1CE", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Dec  3 18:45:33 compute-0 ceph-mon[192802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 18:45:33 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:45:33.683039) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 7280721 bytes
Dec  3 18:45:33 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:45:33.685522) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 243.9 rd, 177.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 8.7 +0.0 blob) out(6.9 +0.0 blob), read-write-amplify(19.1) write-amplify(8.0) OK, records in: 5553, records dropped: 469 output_compression: NoCompression
Dec  3 18:45:33 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:45:33.685548) EVENT_LOG_v1 {"time_micros": 1764787533685536, "job": 34, "event": "compaction_finished", "compaction_time_micros": 41067, "compaction_time_cpu_micros": 18403, "output_level": 6, "num_output_files": 1, "total_output_size": 7280721, "num_input_records": 5553, "num_output_records": 5084, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 18:45:33 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 18:45:33 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764787533687431, "job": 34, "event": "table_file_deletion", "file_number": 64}
Dec  3 18:45:33 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 18:45:33 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764787533690679, "job": 34, "event": "table_file_deletion", "file_number": 62}
Dec  3 18:45:33 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:45:33.641343) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:45:33 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:45:33.691273) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:45:33 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:45:33.691280) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:45:33 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:45:33.691282) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:45:33 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:45:33.691284) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:45:33 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:45:33.691286) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:45:33 compute-0 nova_compute[348325]: 2025-12-03 18:45:33.712 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:45:35 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1389: 321 pgs: 321 active+clean; 263 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 682 B/s wr, 0 op/s
Dec  3 18:45:37 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1390: 321 pgs: 321 active+clean; 263 MiB data, 353 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:45:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 18:45:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2779361123' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 18:45:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 18:45:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2779361123' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 18:45:37 compute-0 nova_compute[348325]: 2025-12-03 18:45:37.660 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:45:38 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:45:38 compute-0 nova_compute[348325]: 2025-12-03 18:45:38.716 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:45:39 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1391: 321 pgs: 321 active+clean; 263 MiB data, 353 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:45:39 compute-0 podman[424591]: 2025-12-03 18:45:39.469934661 +0000 UTC m=+0.109858560 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 18:45:41 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1392: 321 pgs: 321 active+clean; 263 MiB data, 353 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:45:41 compute-0 podman[424616]: 2025-12-03 18:45:41.977193262 +0000 UTC m=+0.130217878 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Dec  3 18:45:42 compute-0 podman[424615]: 2025-12-03 18:45:42.034207146 +0000 UTC m=+0.193424574 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  3 18:45:42 compute-0 nova_compute[348325]: 2025-12-03 18:45:42.663 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:45:43 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1393: 321 pgs: 321 active+clean; 263 MiB data, 353 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:45:43 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:45:43 compute-0 nova_compute[348325]: 2025-12-03 18:45:43.719 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:45:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:45:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:45:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:45:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:45:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:45:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:45:45 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1394: 321 pgs: 321 active+clean; 263 MiB data, 353 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:45:47 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1395: 321 pgs: 321 active+clean; 263 MiB data, 353 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:45:47 compute-0 nova_compute[348325]: 2025-12-03 18:45:47.666 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:45:48 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:45:48 compute-0 nova_compute[348325]: 2025-12-03 18:45:48.723 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:45:49 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1396: 321 pgs: 321 active+clean; 263 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 85 B/s wr, 0 op/s
Dec  3 18:45:51 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1397: 321 pgs: 321 active+clean; 263 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.4 KiB/s wr, 0 op/s
Dec  3 18:45:52 compute-0 nova_compute[348325]: 2025-12-03 18:45:52.669 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:45:53 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1398: 321 pgs: 321 active+clean; 263 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.4 KiB/s wr, 0 op/s
Dec  3 18:45:53 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:45:53 compute-0 nova_compute[348325]: 2025-12-03 18:45:53.729 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:45:55 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1399: 321 pgs: 321 active+clean; 263 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.4 KiB/s wr, 0 op/s
Dec  3 18:45:56 compute-0 podman[424660]: 2025-12-03 18:45:56.966289142 +0000 UTC m=+0.118025379 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  3 18:45:56 compute-0 podman[424662]: 2025-12-03 18:45:56.971327036 +0000 UTC m=+0.096289648 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, config_id=edpm, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, distribution-scope=public, managed_by=edpm_ansible, version=9.6, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec  3 18:45:56 compute-0 podman[424661]: 2025-12-03 18:45:56.989014528 +0000 UTC m=+0.137069135 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 18:45:57 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1400: 321 pgs: 321 active+clean; 263 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.4 KiB/s wr, 0 op/s
Dec  3 18:45:57 compute-0 nova_compute[348325]: 2025-12-03 18:45:57.670 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:45:58 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:45:58 compute-0 nova_compute[348325]: 2025-12-03 18:45:58.734 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:45:58 compute-0 podman[424724]: 2025-12-03 18:45:58.957341909 +0000 UTC m=+0.096662326 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  3 18:45:58 compute-0 podman[424723]: 2025-12-03 18:45:58.971228199 +0000 UTC m=+0.115905797 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 18:45:59 compute-0 podman[424722]: 2025-12-03 18:45:59.004484613 +0000 UTC m=+0.159200696 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.component=ubi9-container, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, name=ubi9, distribution-scope=public, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4)
Dec  3 18:45:59 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1401: 321 pgs: 321 active+clean; 263 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.4 KiB/s wr, 0 op/s
Dec  3 18:45:59 compute-0 podman[158200]: time="2025-12-03T18:45:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:45:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:45:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 18:45:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:45:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8648 "" "Go-http-client/1.1"
Dec  3 18:46:01 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1402: 321 pgs: 321 active+clean; 263 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s wr, 0 op/s
Dec  3 18:46:01 compute-0 openstack_network_exporter[365222]: ERROR   18:46:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:46:01 compute-0 openstack_network_exporter[365222]: ERROR   18:46:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:46:01 compute-0 openstack_network_exporter[365222]: ERROR   18:46:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:46:01 compute-0 openstack_network_exporter[365222]: ERROR   18:46:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:46:01 compute-0 openstack_network_exporter[365222]: ERROR   18:46:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:46:02 compute-0 nova_compute[348325]: 2025-12-03 18:46:02.672 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:46:03 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1403: 321 pgs: 321 active+clean; 263 MiB data, 353 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:46:03 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:46:03 compute-0 nova_compute[348325]: 2025-12-03 18:46:03.738 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:46:05 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1404: 321 pgs: 321 active+clean; 263 MiB data, 353 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:46:07 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1405: 321 pgs: 321 active+clean; 263 MiB data, 353 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:46:07 compute-0 nova_compute[348325]: 2025-12-03 18:46:07.674 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:46:08 compute-0 nova_compute[348325]: 2025-12-03 18:46:08.450 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:46:08 compute-0 nova_compute[348325]: 2025-12-03 18:46:08.450 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:46:08 compute-0 nova_compute[348325]: 2025-12-03 18:46:08.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:46:08 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:46:08 compute-0 nova_compute[348325]: 2025-12-03 18:46:08.742 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:46:09 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1406: 321 pgs: 321 active+clean; 263 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 B/s wr, 0 op/s
Dec  3 18:46:09 compute-0 nova_compute[348325]: 2025-12-03 18:46:09.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:46:09 compute-0 podman[424779]: 2025-12-03 18:46:09.951075187 +0000 UTC m=+0.100758177 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  3 18:46:10 compute-0 nova_compute[348325]: 2025-12-03 18:46:10.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:46:10 compute-0 nova_compute[348325]: 2025-12-03 18:46:10.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:46:11 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1407: 321 pgs: 321 active+clean; 263 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Dec  3 18:46:11 compute-0 nova_compute[348325]: 2025-12-03 18:46:11.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:46:11 compute-0 nova_compute[348325]: 2025-12-03 18:46:11.487 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 18:46:12 compute-0 nova_compute[348325]: 2025-12-03 18:46:12.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:46:12 compute-0 nova_compute[348325]: 2025-12-03 18:46:12.487 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 18:46:12 compute-0 nova_compute[348325]: 2025-12-03 18:46:12.676 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:46:12 compute-0 podman[424803]: 2025-12-03 18:46:12.972724061 +0000 UTC m=+0.104670233 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Dec  3 18:46:13 compute-0 podman[424802]: 2025-12-03 18:46:13.015126108 +0000 UTC m=+0.155044334 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:46:13 compute-0 nova_compute[348325]: 2025-12-03 18:46:13.173 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "refresh_cache-de3992c5-c1ad-4da3-9276-954d6365c3c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 18:46:13 compute-0 nova_compute[348325]: 2025-12-03 18:46:13.174 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquired lock "refresh_cache-de3992c5-c1ad-4da3-9276-954d6365c3c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 18:46:13 compute-0 nova_compute[348325]: 2025-12-03 18:46:13.175 348329 DEBUG nova.network.neutron [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  3 18:46:13 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1408: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 38 op/s
Dec  3 18:46:13 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:46:13 compute-0 nova_compute[348325]: 2025-12-03 18:46:13.744 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:46:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_18:46:13
Dec  3 18:46:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 18:46:13 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 18:46:13 compute-0 ceph-mgr[193091]: [balancer INFO root] pools ['vms', '.rgw.root', 'images', 'cephfs.cephfs.data', 'default.rgw.meta', 'volumes', '.mgr', 'default.rgw.log', 'backups', 'cephfs.cephfs.meta', 'default.rgw.control']
Dec  3 18:46:13 compute-0 ceph-mgr[193091]: [balancer INFO root] prepared 0/10 changes
Dec  3 18:46:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:46:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:46:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:46:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:46:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:46:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:46:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 18:46:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:46:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 18:46:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:46:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:46:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:46:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:46:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:46:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:46:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:46:14 compute-0 nova_compute[348325]: 2025-12-03 18:46:14.590 348329 DEBUG nova.network.neutron [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] Updating instance_info_cache with network_info: [{"id": "d2dfa631-e553-46bc-bc20-3f0bdd977328", "address": "fa:16:3e:e6:73:73", "network": {"id": "85c8d446-ad7f-4d1b-a311-89b0b07e8aad", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.212", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2770200bdb2436c90142fa2e5ddcd47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2dfa631-e5", "ovs_interfaceid": "d2dfa631-e553-46bc-bc20-3f0bdd977328", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 18:46:14 compute-0 nova_compute[348325]: 2025-12-03 18:46:14.612 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Releasing lock "refresh_cache-de3992c5-c1ad-4da3-9276-954d6365c3c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 18:46:14 compute-0 nova_compute[348325]: 2025-12-03 18:46:14.612 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  3 18:46:15 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1409: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 57 op/s
Dec  3 18:46:17 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1410: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  3 18:46:17 compute-0 nova_compute[348325]: 2025-12-03 18:46:17.603 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:46:17 compute-0 nova_compute[348325]: 2025-12-03 18:46:17.679 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:46:18 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:46:18 compute-0 nova_compute[348325]: 2025-12-03 18:46:18.746 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:46:19 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1411: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  3 18:46:19 compute-0 nova_compute[348325]: 2025-12-03 18:46:19.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:46:19 compute-0 nova_compute[348325]: 2025-12-03 18:46:19.619 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:46:19 compute-0 nova_compute[348325]: 2025-12-03 18:46:19.620 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:46:19 compute-0 nova_compute[348325]: 2025-12-03 18:46:19.621 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:46:19 compute-0 nova_compute[348325]: 2025-12-03 18:46:19.622 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 18:46:19 compute-0 nova_compute[348325]: 2025-12-03 18:46:19.623 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:46:20 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:46:20 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3648365402' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:46:20 compute-0 nova_compute[348325]: 2025-12-03 18:46:20.129 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:46:20 compute-0 nova_compute[348325]: 2025-12-03 18:46:20.377 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 18:46:20 compute-0 nova_compute[348325]: 2025-12-03 18:46:20.377 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 18:46:20 compute-0 nova_compute[348325]: 2025-12-03 18:46:20.377 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 18:46:20 compute-0 nova_compute[348325]: 2025-12-03 18:46:20.387 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 18:46:20 compute-0 nova_compute[348325]: 2025-12-03 18:46:20.387 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 18:46:20 compute-0 nova_compute[348325]: 2025-12-03 18:46:20.388 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 18:46:20 compute-0 nova_compute[348325]: 2025-12-03 18:46:20.396 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 18:46:20 compute-0 nova_compute[348325]: 2025-12-03 18:46:20.397 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 18:46:20 compute-0 nova_compute[348325]: 2025-12-03 18:46:20.397 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 18:46:20 compute-0 nova_compute[348325]: 2025-12-03 18:46:20.405 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 18:46:20 compute-0 nova_compute[348325]: 2025-12-03 18:46:20.405 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 18:46:20 compute-0 nova_compute[348325]: 2025-12-03 18:46:20.406 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 18:46:20 compute-0 nova_compute[348325]: 2025-12-03 18:46:20.935 348329 WARNING nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 18:46:20 compute-0 nova_compute[348325]: 2025-12-03 18:46:20.936 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3229MB free_disk=59.85565948486328GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 18:46:20 compute-0 nova_compute[348325]: 2025-12-03 18:46:20.936 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:46:20 compute-0 nova_compute[348325]: 2025-12-03 18:46:20.936 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:46:21 compute-0 nova_compute[348325]: 2025-12-03 18:46:21.036 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance 1ca1fbdb-089c-4544-821e-0542089b8424 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 18:46:21 compute-0 nova_compute[348325]: 2025-12-03 18:46:21.037 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance df72d527-943e-4e8c-b62a-63afa5f18261 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 18:46:21 compute-0 nova_compute[348325]: 2025-12-03 18:46:21.037 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance de3992c5-c1ad-4da3-9276-954d6365c3c9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 18:46:21 compute-0 nova_compute[348325]: 2025-12-03 18:46:21.037 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance a6019a9c-c065-49d8-bef3-219bd2c79d8c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 18:46:21 compute-0 nova_compute[348325]: 2025-12-03 18:46:21.038 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 18:46:21 compute-0 nova_compute[348325]: 2025-12-03 18:46:21.038 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=59GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 18:46:21 compute-0 nova_compute[348325]: 2025-12-03 18:46:21.145 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:46:21 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1412: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 5.3 KiB/s wr, 59 op/s
Dec  3 18:46:21 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:46:21 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/258469478' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:46:21 compute-0 nova_compute[348325]: 2025-12-03 18:46:21.606 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:46:21 compute-0 nova_compute[348325]: 2025-12-03 18:46:21.615 348329 DEBUG nova.compute.provider_tree [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 18:46:21 compute-0 nova_compute[348325]: 2025-12-03 18:46:21.757 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 18:46:21 compute-0 nova_compute[348325]: 2025-12-03 18:46:21.760 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 18:46:21 compute-0 nova_compute[348325]: 2025-12-03 18:46:21.760 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.824s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:46:22 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:46:22 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:46:22 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:46:22 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:46:22 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:46:22 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:46:22 compute-0 nova_compute[348325]: 2025-12-03 18:46:22.682 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:46:23 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1413: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.3 KiB/s wr, 56 op/s
Dec  3 18:46:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:46:23.341 286999 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:46:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:46:23.342 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:46:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:46:23.344 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:46:23 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:46:23 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:46:23 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 18:46:23 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:46:23 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 18:46:23 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:46:23 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev d59fd17d-eff6-417b-9c8c-275466dd0886 does not exist
Dec  3 18:46:23 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev c9138fe1-6767-492d-98b4-8ebfc98d6778 does not exist
Dec  3 18:46:23 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 8eb65cef-de5a-46c2-9ea2-2768c5d4352f does not exist
Dec  3 18:46:23 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 18:46:23 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 18:46:23 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 18:46:23 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:46:23 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:46:23 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:46:23 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:46:23 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:46:23 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
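The config generate-minimal-conf and auth get dispatches above are the cephadm mgr module refreshing the minimal client config and keyrings it distributes to its managed hosts. The same minimal config can be produced by hand; a sketch, assuming admin credentials are available on the host:

    import subprocess

    # Same mon_command the mgr dispatched above, issued via the CLI.
    minimal_conf = subprocess.run(
        ["ceph", "config", "generate-minimal-conf"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(minimal_conf)  # typically just [global] with fsid and mon_host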
Dec  3 18:46:23 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:46:23 compute-0 nova_compute[348325]: 2025-12-03 18:46:23.749 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:46:24 compute-0 podman[425275]: 2025-12-03 18:46:24.153634084 +0000 UTC m=+0.057555450 container create ba643912ed4213188ce722d69fc1de247688c182463db8db77b96d0eafde9161 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_solomon, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  3 18:46:24 compute-0 systemd[1]: Started libpod-conmon-ba643912ed4213188ce722d69fc1de247688c182463db8db77b96d0eafde9161.scope.
Dec  3 18:46:24 compute-0 podman[425275]: 2025-12-03 18:46:24.134049175 +0000 UTC m=+0.037970571 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:46:24 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:46:24 compute-0 podman[425275]: 2025-12-03 18:46:24.260572391 +0000 UTC m=+0.164493847 container init ba643912ed4213188ce722d69fc1de247688c182463db8db77b96d0eafde9161 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_solomon, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:46:24 compute-0 podman[425275]: 2025-12-03 18:46:24.272233306 +0000 UTC m=+0.176154782 container start ba643912ed4213188ce722d69fc1de247688c182463db8db77b96d0eafde9161 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_solomon, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:46:24 compute-0 podman[425275]: 2025-12-03 18:46:24.278033877 +0000 UTC m=+0.181955283 container attach ba643912ed4213188ce722d69fc1de247688c182463db8db77b96d0eafde9161 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_solomon, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  3 18:46:24 compute-0 gifted_solomon[425292]: 167 167
Dec  3 18:46:24 compute-0 systemd[1]: libpod-ba643912ed4213188ce722d69fc1de247688c182463db8db77b96d0eafde9161.scope: Deactivated successfully.
Dec  3 18:46:24 compute-0 podman[425275]: 2025-12-03 18:46:24.281623166 +0000 UTC m=+0.185544552 container died ba643912ed4213188ce722d69fc1de247688c182463db8db77b96d0eafde9161 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_solomon, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:46:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c0762ac4eaf83ec80fac24bedec8eeb53470ed2d3d09f4dd4e44585f9692b57-merged.mount: Deactivated successfully.
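The throwaway gifted_solomon container above lives for roughly 10 ms and exists only to print "167 167": cephadm probes the uid/gid of the ceph user baked into the image (167 in Ceph's CentOS-based images) so it can chown host paths to match. A rough equivalent of that probe, assuming cephadm's usual stat trick (inferred from cephadm's source, not visible in this log):

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # Assumed probe: stat a ceph-owned directory inside a disposable container.
    uid_gid = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(uid_gid)  # expected "167 167", matching the container output above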
Dec  3 18:46:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 18:46:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:46:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 18:46:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:46:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0022107945480888194 of space, bias 1.0, pg target 0.6632383644266459 quantized to 32 (current 32)
Dec  3 18:46:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:46:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:46:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:46:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:46:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:46:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec  3 18:46:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:46:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 18:46:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:46:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:46:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:46:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 18:46:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:46:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 18:46:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:46:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:46:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:46:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
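Every pg_autoscaler line above fits one relation: pg_target = usage_ratio * bias * PG budget, where the budget is the OSD count times mon_target_pg_per_osd. With the three OSDs listed later in this log and the default mon_target_pg_per_osd = 100 (an inference; neither number is printed here), the budget is 300, which reproduces every logged target before quantization:

    import math

    # (usage_ratio, bias, logged pg target) copied from the log lines above.
    pools = {
        ".mgr":               (7.185749983720779e-06,  1.0, 0.0021557249951162337),
        "vms":                (0.0022107945480888194,  1.0, 0.6632383644266459),
        "images":             (0.00025334537995702286, 1.0, 0.07600361398710685),
        "cephfs.cephfs.meta": (5.087256625643029e-07,  4.0, 0.0006104707950771635),
    }

    PG_BUDGET = 3 * 100  # 3 OSDs x mon_target_pg_per_osd=100 (assumed default)

    for name, (usage, bias, logged) in pools.items():
        target = usage * bias * PG_BUDGET
        assert math.isclose(target, logged, rel_tol=1e-9), name
    print("all logged pg targets reproduced")

The fractional targets are then rounded to a power of two and compared against the current pg_num ("quantized to N (current M)"); the autoscaler leaves pools alone unless the two diverge widely.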
Dec  3 18:46:24 compute-0 podman[425275]: 2025-12-03 18:46:24.351909875 +0000 UTC m=+0.255831281 container remove ba643912ed4213188ce722d69fc1de247688c182463db8db77b96d0eafde9161 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_solomon, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec  3 18:46:24 compute-0 systemd[1]: libpod-conmon-ba643912ed4213188ce722d69fc1de247688c182463db8db77b96d0eafde9161.scope: Deactivated successfully.
Dec  3 18:46:24 compute-0 podman[425315]: 2025-12-03 18:46:24.605998413 +0000 UTC m=+0.065887383 container create 5853e362d4cd862b7654b00892d9707d9cac6bf9a865b3499cd5741c6af5aaae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_raman, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  3 18:46:24 compute-0 podman[425315]: 2025-12-03 18:46:24.575516877 +0000 UTC m=+0.035405857 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:46:24 compute-0 systemd[1]: Started libpod-conmon-5853e362d4cd862b7654b00892d9707d9cac6bf9a865b3499cd5741c6af5aaae.scope.
Dec  3 18:46:24 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:46:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d9b0bf2f2b71fa87d5e9821b70224c2889bf492a635a05377f993fea5e5b2f4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:46:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d9b0bf2f2b71fa87d5e9821b70224c2889bf492a635a05377f993fea5e5b2f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:46:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d9b0bf2f2b71fa87d5e9821b70224c2889bf492a635a05377f993fea5e5b2f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:46:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d9b0bf2f2b71fa87d5e9821b70224c2889bf492a635a05377f993fea5e5b2f4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:46:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d9b0bf2f2b71fa87d5e9821b70224c2889bf492a635a05377f993fea5e5b2f4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 18:46:24 compute-0 podman[425315]: 2025-12-03 18:46:24.782694686 +0000 UTC m=+0.242583716 container init 5853e362d4cd862b7654b00892d9707d9cac6bf9a865b3499cd5741c6af5aaae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_raman, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:46:24 compute-0 podman[425315]: 2025-12-03 18:46:24.795773526 +0000 UTC m=+0.255662496 container start 5853e362d4cd862b7654b00892d9707d9cac6bf9a865b3499cd5741c6af5aaae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_raman, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:46:24 compute-0 podman[425315]: 2025-12-03 18:46:24.801021784 +0000 UTC m=+0.260910754 container attach 5853e362d4cd862b7654b00892d9707d9cac6bf9a865b3499cd5741c6af5aaae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:46:25 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1414: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 5.3 KiB/s wr, 22 op/s
Dec  3 18:46:26 compute-0 lucid_raman[425331]: --> passed data devices: 0 physical, 3 LVM
Dec  3 18:46:26 compute-0 lucid_raman[425331]: --> relative data size: 1.0
Dec  3 18:46:26 compute-0 lucid_raman[425331]: --> All data devices are unavailable
Dec  3 18:46:26 compute-0 systemd[1]: libpod-5853e362d4cd862b7654b00892d9707d9cac6bf9a865b3499cd5741c6af5aaae.scope: Deactivated successfully.
Dec  3 18:46:26 compute-0 podman[425315]: 2025-12-03 18:46:26.035414768 +0000 UTC m=+1.495303718 container died 5853e362d4cd862b7654b00892d9707d9cac6bf9a865b3499cd5741c6af5aaae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_raman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  3 18:46:26 compute-0 systemd[1]: libpod-5853e362d4cd862b7654b00892d9707d9cac6bf9a865b3499cd5741c6af5aaae.scope: Consumed 1.143s CPU time.
Dec  3 18:46:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d9b0bf2f2b71fa87d5e9821b70224c2889bf492a635a05377f993fea5e5b2f4-merged.mount: Deactivated successfully.
Dec  3 18:46:26 compute-0 podman[425315]: 2025-12-03 18:46:26.097102217 +0000 UTC m=+1.556991167 container remove 5853e362d4cd862b7654b00892d9707d9cac6bf9a865b3499cd5741c6af5aaae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_raman, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:46:26 compute-0 systemd[1]: libpod-conmon-5853e362d4cd862b7654b00892d9707d9cac6bf9a865b3499cd5741c6af5aaae.scope: Deactivated successfully.
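The lucid_raman run above is cephadm applying its OSD drive-group spec through ceph-volume: "0 physical, 3 LVM" data devices were passed in, and "All data devices are unavailable" means the three LVs already carry OSDs, so there is nothing new to create. The host-side availability check is ceph-volume inventory; a sketch assuming its JSON fields path, available, and rejected_reasons (present in current releases, but treat as assumptions):

    import json
    import subprocess

    # Ask ceph-volume which block devices could take a new OSD.
    devices = json.loads(subprocess.run(
        ["ceph-volume", "inventory", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout)

    for dev in devices:
        verdict = ("available" if dev["available"]
                   else "unavailable: " + ", ".join(dev["rejected_reasons"]))
        print(dev["path"], "->", verdict)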
Dec  3 18:46:26 compute-0 podman[425513]: 2025-12-03 18:46:26.979391475 +0000 UTC m=+0.063264769 container create 2f7dcf75417701297927ebe1b52e67ca1990704dd4f3bc881540e00d8b0b6c7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_nightingale, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec  3 18:46:27 compute-0 systemd[1]: Started libpod-conmon-2f7dcf75417701297927ebe1b52e67ca1990704dd4f3bc881540e00d8b0b6c7a.scope.
Dec  3 18:46:27 compute-0 podman[425513]: 2025-12-03 18:46:26.952196749 +0000 UTC m=+0.036070063 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:46:27 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:46:27 compute-0 podman[425513]: 2025-12-03 18:46:27.071046007 +0000 UTC m=+0.154919321 container init 2f7dcf75417701297927ebe1b52e67ca1990704dd4f3bc881540e00d8b0b6c7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_nightingale, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec  3 18:46:27 compute-0 podman[425513]: 2025-12-03 18:46:27.079658928 +0000 UTC m=+0.163532222 container start 2f7dcf75417701297927ebe1b52e67ca1990704dd4f3bc881540e00d8b0b6c7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_nightingale, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:46:27 compute-0 podman[425513]: 2025-12-03 18:46:27.083866131 +0000 UTC m=+0.167739425 container attach 2f7dcf75417701297927ebe1b52e67ca1990704dd4f3bc881540e00d8b0b6c7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_nightingale, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:46:27 compute-0 wizardly_nightingale[425531]: 167 167
Dec  3 18:46:27 compute-0 systemd[1]: libpod-2f7dcf75417701297927ebe1b52e67ca1990704dd4f3bc881540e00d8b0b6c7a.scope: Deactivated successfully.
Dec  3 18:46:27 compute-0 podman[425513]: 2025-12-03 18:46:27.090263038 +0000 UTC m=+0.174136352 container died 2f7dcf75417701297927ebe1b52e67ca1990704dd4f3bc881540e00d8b0b6c7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_nightingale, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Dec  3 18:46:27 compute-0 podman[425526]: 2025-12-03 18:46:27.128217266 +0000 UTC m=+0.100778046 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  3 18:46:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-0141815957bfea8894a63b2fe76d0ca315a9302871ba95e3b558347a85199bb0-merged.mount: Deactivated successfully.
Dec  3 18:46:27 compute-0 podman[425529]: 2025-12-03 18:46:27.142314071 +0000 UTC m=+0.113274492 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, managed_by=edpm_ansible, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, release=1755695350, io.openshift.expose-services=, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.)
Dec  3 18:46:27 compute-0 podman[425513]: 2025-12-03 18:46:27.152743387 +0000 UTC m=+0.236616671 container remove 2f7dcf75417701297927ebe1b52e67ca1990704dd4f3bc881540e00d8b0b6c7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_nightingale, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:46:27 compute-0 systemd[1]: libpod-conmon-2f7dcf75417701297927ebe1b52e67ca1990704dd4f3bc881540e00d8b0b6c7a.scope: Deactivated successfully.
Dec  3 18:46:27 compute-0 podman[425530]: 2025-12-03 18:46:27.16310263 +0000 UTC m=+0.127160292 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  3 18:46:27 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1415: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 5.3 KiB/s wr, 2 op/s
Dec  3 18:46:27 compute-0 podman[425614]: 2025-12-03 18:46:27.373933709 +0000 UTC m=+0.063564597 container create c177c8e4fe7fad5a39d177df0f50ca8a4d436162397f460a34b656621f0c16fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  3 18:46:27 compute-0 systemd[1]: Started libpod-conmon-c177c8e4fe7fad5a39d177df0f50ca8a4d436162397f460a34b656621f0c16fa.scope.
Dec  3 18:46:27 compute-0 podman[425614]: 2025-12-03 18:46:27.352794101 +0000 UTC m=+0.042425019 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:46:27 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:46:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b019d27a30eed7259b0ee3ab894d9572d09b6d41d7be06ec8b867ed2a75a7905/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:46:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b019d27a30eed7259b0ee3ab894d9572d09b6d41d7be06ec8b867ed2a75a7905/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:46:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b019d27a30eed7259b0ee3ab894d9572d09b6d41d7be06ec8b867ed2a75a7905/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:46:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b019d27a30eed7259b0ee3ab894d9572d09b6d41d7be06ec8b867ed2a75a7905/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:46:27 compute-0 podman[425614]: 2025-12-03 18:46:27.507793054 +0000 UTC m=+0.197423952 container init c177c8e4fe7fad5a39d177df0f50ca8a4d436162397f460a34b656621f0c16fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_allen, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:46:27 compute-0 podman[425614]: 2025-12-03 18:46:27.524101313 +0000 UTC m=+0.213732201 container start c177c8e4fe7fad5a39d177df0f50ca8a4d436162397f460a34b656621f0c16fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec  3 18:46:27 compute-0 podman[425614]: 2025-12-03 18:46:27.528275445 +0000 UTC m=+0.217906323 container attach c177c8e4fe7fad5a39d177df0f50ca8a4d436162397f460a34b656621f0c16fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_allen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  3 18:46:27 compute-0 nova_compute[348325]: 2025-12-03 18:46:27.685 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:46:28 compute-0 strange_allen[425630]: {
Dec  3 18:46:28 compute-0 strange_allen[425630]:    "0": [
Dec  3 18:46:28 compute-0 strange_allen[425630]:        {
Dec  3 18:46:28 compute-0 strange_allen[425630]:            "devices": [
Dec  3 18:46:28 compute-0 strange_allen[425630]:                "/dev/loop3"
Dec  3 18:46:28 compute-0 strange_allen[425630]:            ],
Dec  3 18:46:28 compute-0 strange_allen[425630]:            "lv_name": "ceph_lv0",
Dec  3 18:46:28 compute-0 strange_allen[425630]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:46:28 compute-0 strange_allen[425630]:            "lv_size": "21470642176",
Dec  3 18:46:28 compute-0 strange_allen[425630]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=973fbbc8-5aff-4a53-bee8-42e5a6788dd6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:46:28 compute-0 strange_allen[425630]:            "lv_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:46:28 compute-0 strange_allen[425630]:            "name": "ceph_lv0",
Dec  3 18:46:28 compute-0 strange_allen[425630]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:46:28 compute-0 strange_allen[425630]:            "tags": {
Dec  3 18:46:28 compute-0 strange_allen[425630]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:46:28 compute-0 strange_allen[425630]:                "ceph.block_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:46:28 compute-0 strange_allen[425630]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:46:28 compute-0 strange_allen[425630]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:46:28 compute-0 strange_allen[425630]:                "ceph.cluster_name": "ceph",
Dec  3 18:46:28 compute-0 strange_allen[425630]:                "ceph.crush_device_class": "",
Dec  3 18:46:28 compute-0 strange_allen[425630]:                "ceph.encrypted": "0",
Dec  3 18:46:28 compute-0 strange_allen[425630]:                "ceph.osd_fsid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:46:28 compute-0 strange_allen[425630]:                "ceph.osd_id": "0",
Dec  3 18:46:28 compute-0 strange_allen[425630]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:46:28 compute-0 strange_allen[425630]:                "ceph.type": "block",
Dec  3 18:46:28 compute-0 strange_allen[425630]:                "ceph.vdo": "0"
Dec  3 18:46:28 compute-0 strange_allen[425630]:            },
Dec  3 18:46:28 compute-0 strange_allen[425630]:            "type": "block",
Dec  3 18:46:28 compute-0 strange_allen[425630]:            "vg_name": "ceph_vg0"
Dec  3 18:46:28 compute-0 strange_allen[425630]:        }
Dec  3 18:46:28 compute-0 strange_allen[425630]:    ],
Dec  3 18:46:28 compute-0 strange_allen[425630]:    "1": [
Dec  3 18:46:28 compute-0 strange_allen[425630]:        {
Dec  3 18:46:28 compute-0 strange_allen[425630]:            "devices": [
Dec  3 18:46:28 compute-0 strange_allen[425630]:                "/dev/loop4"
Dec  3 18:46:28 compute-0 strange_allen[425630]:            ],
Dec  3 18:46:28 compute-0 strange_allen[425630]:            "lv_name": "ceph_lv1",
Dec  3 18:46:28 compute-0 strange_allen[425630]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:46:28 compute-0 strange_allen[425630]:            "lv_size": "21470642176",
Dec  3 18:46:28 compute-0 strange_allen[425630]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1e2b0083-5293-47cb-a3d1-bc27cedc4ede,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:46:28 compute-0 strange_allen[425630]:            "lv_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:46:28 compute-0 strange_allen[425630]:            "name": "ceph_lv1",
Dec  3 18:46:28 compute-0 strange_allen[425630]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:46:28 compute-0 strange_allen[425630]:            "tags": {
Dec  3 18:46:28 compute-0 strange_allen[425630]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:46:28 compute-0 strange_allen[425630]:                "ceph.block_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:46:28 compute-0 strange_allen[425630]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:46:28 compute-0 strange_allen[425630]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:46:28 compute-0 strange_allen[425630]:                "ceph.cluster_name": "ceph",
Dec  3 18:46:28 compute-0 strange_allen[425630]:                "ceph.crush_device_class": "",
Dec  3 18:46:28 compute-0 strange_allen[425630]:                "ceph.encrypted": "0",
Dec  3 18:46:28 compute-0 strange_allen[425630]:                "ceph.osd_fsid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:46:28 compute-0 strange_allen[425630]:                "ceph.osd_id": "1",
Dec  3 18:46:28 compute-0 strange_allen[425630]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:46:28 compute-0 strange_allen[425630]:                "ceph.type": "block",
Dec  3 18:46:28 compute-0 strange_allen[425630]:                "ceph.vdo": "0"
Dec  3 18:46:28 compute-0 strange_allen[425630]:            },
Dec  3 18:46:28 compute-0 strange_allen[425630]:            "type": "block",
Dec  3 18:46:28 compute-0 strange_allen[425630]:            "vg_name": "ceph_vg1"
Dec  3 18:46:28 compute-0 strange_allen[425630]:        }
Dec  3 18:46:28 compute-0 strange_allen[425630]:    ],
Dec  3 18:46:28 compute-0 strange_allen[425630]:    "2": [
Dec  3 18:46:28 compute-0 strange_allen[425630]:        {
Dec  3 18:46:28 compute-0 strange_allen[425630]:            "devices": [
Dec  3 18:46:28 compute-0 strange_allen[425630]:                "/dev/loop5"
Dec  3 18:46:28 compute-0 strange_allen[425630]:            ],
Dec  3 18:46:28 compute-0 strange_allen[425630]:            "lv_name": "ceph_lv2",
Dec  3 18:46:28 compute-0 strange_allen[425630]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:46:28 compute-0 strange_allen[425630]:            "lv_size": "21470642176",
Dec  3 18:46:28 compute-0 strange_allen[425630]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2abec9de-afba-437e-9a17-384a1dd8cd50,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:46:28 compute-0 strange_allen[425630]:            "lv_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:46:28 compute-0 strange_allen[425630]:            "name": "ceph_lv2",
Dec  3 18:46:28 compute-0 strange_allen[425630]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:46:28 compute-0 strange_allen[425630]:            "tags": {
Dec  3 18:46:28 compute-0 strange_allen[425630]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:46:28 compute-0 strange_allen[425630]:                "ceph.block_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:46:28 compute-0 strange_allen[425630]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:46:28 compute-0 strange_allen[425630]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:46:28 compute-0 strange_allen[425630]:                "ceph.cluster_name": "ceph",
Dec  3 18:46:28 compute-0 strange_allen[425630]:                "ceph.crush_device_class": "",
Dec  3 18:46:28 compute-0 strange_allen[425630]:                "ceph.encrypted": "0",
Dec  3 18:46:28 compute-0 strange_allen[425630]:                "ceph.osd_fsid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:46:28 compute-0 strange_allen[425630]:                "ceph.osd_id": "2",
Dec  3 18:46:28 compute-0 strange_allen[425630]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:46:28 compute-0 strange_allen[425630]:                "ceph.type": "block",
Dec  3 18:46:28 compute-0 strange_allen[425630]:                "ceph.vdo": "0"
Dec  3 18:46:28 compute-0 strange_allen[425630]:            },
Dec  3 18:46:28 compute-0 strange_allen[425630]:            "type": "block",
Dec  3 18:46:28 compute-0 strange_allen[425630]:            "vg_name": "ceph_vg2"
Dec  3 18:46:28 compute-0 strange_allen[425630]:        }
Dec  3 18:46:28 compute-0 strange_allen[425630]:    ]
Dec  3 18:46:28 compute-0 strange_allen[425630]: }
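The JSON that strange_allen just printed is a ceph-volume lvm list --format json report (inferred from its shape: OSD ids keyed to LV records). Stripped of the syslog prefixes it maps each OSD to its logical volume and backing device; a small parser, with the hypothetical filename standing in for wherever the de-prefixed JSON is saved:

    import json

    # Hypothetical path holding the JSON block above, syslog prefixes removed.
    with open("ceph_volume_lvm_list.json") as f:
        listing = json.load(f)

    for osd_id, lvs in sorted(listing.items()):
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
                  f"(osd_fsid {lv['tags']['ceph.osd_fsid']})")
    # osd.0: /dev/ceph_vg0/ceph_lv0 on /dev/loop3 (...)
    # osd.1: /dev/ceph_vg1/ceph_lv1 on /dev/loop4 (...)
    # osd.2: /dev/ceph_vg2/ceph_lv2 on /dev/loop5 (...)

At 21470642176 bytes (about 20 GiB) per LV, the three OSDs supply the "60 GiB / 60 GiB avail" reported in the pgmap lines.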
Dec  3 18:46:28 compute-0 systemd[1]: libpod-c177c8e4fe7fad5a39d177df0f50ca8a4d436162397f460a34b656621f0c16fa.scope: Deactivated successfully.
Dec  3 18:46:28 compute-0 podman[425614]: 2025-12-03 18:46:28.429413454 +0000 UTC m=+1.119044362 container died c177c8e4fe7fad5a39d177df0f50ca8a4d436162397f460a34b656621f0c16fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec  3 18:46:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-b019d27a30eed7259b0ee3ab894d9572d09b6d41d7be06ec8b867ed2a75a7905-merged.mount: Deactivated successfully.
Dec  3 18:46:28 compute-0 podman[425614]: 2025-12-03 18:46:28.516139636 +0000 UTC m=+1.205770534 container remove c177c8e4fe7fad5a39d177df0f50ca8a4d436162397f460a34b656621f0c16fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_allen, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:46:28 compute-0 systemd[1]: libpod-conmon-c177c8e4fe7fad5a39d177df0f50ca8a4d436162397f460a34b656621f0c16fa.scope: Deactivated successfully.
Dec  3 18:46:28 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:46:28 compute-0 nova_compute[348325]: 2025-12-03 18:46:28.753 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:46:29 compute-0 podman[425750]: 2025-12-03 18:46:29.088434379 +0000 UTC m=+0.077171659 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec  3 18:46:29 compute-0 podman[425751]: 2025-12-03 18:46:29.13220387 +0000 UTC m=+0.107993683 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  3 18:46:29 compute-0 podman[425784]: 2025-12-03 18:46:29.205368751 +0000 UTC m=+0.082825009 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, distribution-scope=public, io.buildah.version=1.29.0, release-0.7.12=, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, build-date=2024-09-18T21:23:30, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, vendor=Red Hat, Inc., version=9.4, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec  3 18:46:29 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1416: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s wr, 0 op/s
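The ceph-mgr pgmap lines in this journal repeat every couple of seconds with the same layout. A minimal sketch for pulling the PG state and usage figures out of them, assuming only the exact format seen here (the regex is fitted to these lines, not to any stable Ceph interface):

    import re

    # Fitted to the pgmap debug lines in this journal; an assumption, not a
    # documented Ceph format.
    PGMAP_RE = re.compile(
        r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: (?P<states>[^;]+); "
        r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
        r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail"
    )

    def parse_pgmap(line: str):
        m = PGMAP_RE.search(line)
        return m.groupdict() if m else None

    print(parse_pgmap(
        "pgmap v1416: 321 pgs: 321 active+clean; 263 MiB data, "
        "357 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s wr, 0 op/s"
    ))

For the line above this returns ver=1416, pgs=321, states='321 active+clean', plus the data/used/avail sizes as strings.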
Dec  3 18:46:29 compute-0 podman[425843]: 2025-12-03 18:46:29.464579433 +0000 UTC m=+0.066256823 container create aad95fd003c276262fd12f589cacee60f6ddf7ecf51f21097fcdfb6dfabe8bcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_golick, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:46:29 compute-0 systemd[1]: Started libpod-conmon-aad95fd003c276262fd12f589cacee60f6ddf7ecf51f21097fcdfb6dfabe8bcd.scope.
Dec  3 18:46:29 compute-0 podman[425843]: 2025-12-03 18:46:29.438043374 +0000 UTC m=+0.039720804 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:46:29 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:46:29 compute-0 podman[425843]: 2025-12-03 18:46:29.617875373 +0000 UTC m=+0.219552783 container init aad95fd003c276262fd12f589cacee60f6ddf7ecf51f21097fcdfb6dfabe8bcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_golick, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:46:29 compute-0 podman[425843]: 2025-12-03 18:46:29.627671143 +0000 UTC m=+0.229348523 container start aad95fd003c276262fd12f589cacee60f6ddf7ecf51f21097fcdfb6dfabe8bcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_golick, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:46:29 compute-0 podman[425843]: 2025-12-03 18:46:29.631147189 +0000 UTC m=+0.232824599 container attach aad95fd003c276262fd12f589cacee60f6ddf7ecf51f21097fcdfb6dfabe8bcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_golick, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:46:29 compute-0 musing_golick[425858]: 167 167
Dec  3 18:46:29 compute-0 systemd[1]: libpod-aad95fd003c276262fd12f589cacee60f6ddf7ecf51f21097fcdfb6dfabe8bcd.scope: Deactivated successfully.
Dec  3 18:46:29 compute-0 conmon[425858]: conmon aad95fd003c276262fd1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-aad95fd003c276262fd12f589cacee60f6ddf7ecf51f21097fcdfb6dfabe8bcd.scope/container/memory.events
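The conmon warning above is benign here: the musing_golick container exits almost immediately, so its scope's cgroup files are already gone when conmon tries to read memory.events. A sketch of that read for a still-running scope, assuming cgroup v2 and the machine.slice layout shown in the warning (the scope name is illustrative):

    from pathlib import Path

    def memory_events(scope: str) -> dict:
        # e.g. scope = "libpod-<container-id>.scope"; path layout copied from
        # the conmon message above (cgroup v2 under machine.slice).
        path = Path("/sys/fs/cgroup/machine.slice") / scope / "container" / "memory.events"
        events = {}
        for line in path.read_text().splitlines():
            key, value = line.split()
            events[key] = int(value)
        return events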
Dec  3 18:46:29 compute-0 podman[425843]: 2025-12-03 18:46:29.637423072 +0000 UTC m=+0.239100462 container died aad95fd003c276262fd12f589cacee60f6ddf7ecf51f21097fcdfb6dfabe8bcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:46:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d7126e232a4e3c70d31913ad94650990e43d7bb71d160e11f1c46d97713088b-merged.mount: Deactivated successfully.
Dec  3 18:46:29 compute-0 podman[425843]: 2025-12-03 18:46:29.725832965 +0000 UTC m=+0.327510355 container remove aad95fd003c276262fd12f589cacee60f6ddf7ecf51f21097fcdfb6dfabe8bcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec  3 18:46:29 compute-0 podman[158200]: time="2025-12-03T18:46:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:46:29 compute-0 systemd[1]: libpod-conmon-aad95fd003c276262fd12f589cacee60f6ddf7ecf51f21097fcdfb6dfabe8bcd.scope: Deactivated successfully.
Dec  3 18:46:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:46:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 18:46:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:46:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8647 "" "Go-http-client/1.1"
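The two GET lines above are the podman system service answering REST calls on its Unix socket. A standard-library sketch of the same containers/json query; the socket path and /v4.9.3 prefix are taken from this log, while the "Names" field is the usual libpod response shape rather than something this log confirms:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client over an AF_UNIX socket (podman's API endpoint)."""
        def __init__(self, path: str):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.unix_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print([c.get("Names") for c in containers])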
Dec  3 18:46:29 compute-0 podman[425881]: 2025-12-03 18:46:29.982824263 +0000 UTC m=+0.062737766 container create ab541b1f1d52c8fba4eebe9cf6ee9b1e5ad38340b81ead9cde542cbbe1a46a56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:46:30 compute-0 systemd[1]: Started libpod-conmon-ab541b1f1d52c8fba4eebe9cf6ee9b1e5ad38340b81ead9cde542cbbe1a46a56.scope.
Dec  3 18:46:30 compute-0 podman[425881]: 2025-12-03 18:46:29.959731498 +0000 UTC m=+0.039645021 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:46:30 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:46:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6278ce381b30da7d86974e366e8354ff23eb5cd9cd5907fb01b7a69835cc983/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:46:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6278ce381b30da7d86974e366e8354ff23eb5cd9cd5907fb01b7a69835cc983/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:46:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6278ce381b30da7d86974e366e8354ff23eb5cd9cd5907fb01b7a69835cc983/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:46:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6278ce381b30da7d86974e366e8354ff23eb5cd9cd5907fb01b7a69835cc983/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
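The xfs messages above mean these overlay mounts lack the bigtime feature, so their inode timestamps stop at 0x7fffffff seconds after the epoch. Checking what instant that limit is:

    from datetime import datetime, timezone

    # 0x7fffffff is the largest 32-bit signed time_t, the limit the kernel
    # message reports for these mounts.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc).isoformat())
    # -> 2038-01-19T03:14:07+00:00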
Dec  3 18:46:30 compute-0 podman[425881]: 2025-12-03 18:46:30.115309094 +0000 UTC m=+0.195222627 container init ab541b1f1d52c8fba4eebe9cf6ee9b1e5ad38340b81ead9cde542cbbe1a46a56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_darwin, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:46:30 compute-0 podman[425881]: 2025-12-03 18:46:30.135795186 +0000 UTC m=+0.215708689 container start ab541b1f1d52c8fba4eebe9cf6ee9b1e5ad38340b81ead9cde542cbbe1a46a56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec  3 18:46:30 compute-0 podman[425881]: 2025-12-03 18:46:30.140196324 +0000 UTC m=+0.220109837 container attach ab541b1f1d52c8fba4eebe9cf6ee9b1e5ad38340b81ead9cde542cbbe1a46a56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_darwin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:46:31 compute-0 crazy_darwin[425897]: {
Dec  3 18:46:31 compute-0 crazy_darwin[425897]:    "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
Dec  3 18:46:31 compute-0 crazy_darwin[425897]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:46:31 compute-0 crazy_darwin[425897]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 18:46:31 compute-0 crazy_darwin[425897]:        "osd_id": 1,
Dec  3 18:46:31 compute-0 crazy_darwin[425897]:        "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:46:31 compute-0 crazy_darwin[425897]:        "type": "bluestore"
Dec  3 18:46:31 compute-0 crazy_darwin[425897]:    },
Dec  3 18:46:31 compute-0 crazy_darwin[425897]:    "2abec9de-afba-437e-9a17-384a1dd8cd50": {
Dec  3 18:46:31 compute-0 crazy_darwin[425897]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:46:31 compute-0 crazy_darwin[425897]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 18:46:31 compute-0 crazy_darwin[425897]:        "osd_id": 2,
Dec  3 18:46:31 compute-0 crazy_darwin[425897]:        "osd_uuid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:46:31 compute-0 crazy_darwin[425897]:        "type": "bluestore"
Dec  3 18:46:31 compute-0 crazy_darwin[425897]:    },
Dec  3 18:46:31 compute-0 crazy_darwin[425897]:    "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": {
Dec  3 18:46:31 compute-0 crazy_darwin[425897]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:46:31 compute-0 crazy_darwin[425897]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 18:46:31 compute-0 crazy_darwin[425897]:        "osd_id": 0,
Dec  3 18:46:31 compute-0 crazy_darwin[425897]:        "osd_uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:46:31 compute-0 crazy_darwin[425897]:        "type": "bluestore"
Dec  3 18:46:31 compute-0 crazy_darwin[425897]:    }
Dec  3 18:46:31 compute-0 crazy_darwin[425897]: }
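The crazy_darwin container prints a JSON map keyed by osd_uuid, which cephadm then stores via config-key (see the mon_command lines below). A sketch that walks that structure into an osd_id -> device table; the key names are read off this log output, not taken from a documented schema:

    import json

    raw = """{
      "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
        "osd_id": 1,
        "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
        "type": "bluestore"
      }
    }"""  # one entry shown; the log above lists three

    for osd_uuid, osd in json.loads(raw).items():
        print(f"osd.{osd['osd_id']} ({osd['type']}) -> {osd['device']}")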
Dec  3 18:46:31 compute-0 podman[425881]: 2025-12-03 18:46:31.174170664 +0000 UTC m=+1.254084167 container died ab541b1f1d52c8fba4eebe9cf6ee9b1e5ad38340b81ead9cde542cbbe1a46a56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_darwin, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:46:31 compute-0 systemd[1]: libpod-ab541b1f1d52c8fba4eebe9cf6ee9b1e5ad38340b81ead9cde542cbbe1a46a56.scope: Deactivated successfully.
Dec  3 18:46:31 compute-0 systemd[1]: libpod-ab541b1f1d52c8fba4eebe9cf6ee9b1e5ad38340b81ead9cde542cbbe1a46a56.scope: Consumed 1.032s CPU time.
Dec  3 18:46:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-f6278ce381b30da7d86974e366e8354ff23eb5cd9cd5907fb01b7a69835cc983-merged.mount: Deactivated successfully.
Dec  3 18:46:31 compute-0 podman[425881]: 2025-12-03 18:46:31.244061374 +0000 UTC m=+1.323974867 container remove ab541b1f1d52c8fba4eebe9cf6ee9b1e5ad38340b81ead9cde542cbbe1a46a56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:46:31 compute-0 systemd[1]: libpod-conmon-ab541b1f1d52c8fba4eebe9cf6ee9b1e5ad38340b81ead9cde542cbbe1a46a56.scope: Deactivated successfully.
Dec  3 18:46:31 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:46:31 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:46:31 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:46:31 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:46:31 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 22b390a4-4987-44bd-bee0-6519baf3e0bc does not exist
Dec  3 18:46:31 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 3f56a38f-46fb-44f3-b1e7-c251dd51f5d5 does not exist
Dec  3 18:46:31 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1417: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s wr, 0 op/s
Dec  3 18:46:31 compute-0 openstack_network_exporter[365222]: ERROR   18:46:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:46:31 compute-0 openstack_network_exporter[365222]: ERROR   18:46:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:46:31 compute-0 openstack_network_exporter[365222]: ERROR   18:46:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:46:31 compute-0 openstack_network_exporter[365222]: ERROR   18:46:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:46:31 compute-0 openstack_network_exporter[365222]: 
Dec  3 18:46:31 compute-0 openstack_network_exporter[365222]: ERROR   18:46:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:46:31 compute-0 openstack_network_exporter[365222]: 
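These exporter errors recur on a fixed interval (the same block reappears at 18:47:01 below): openstack_network_exporter probes ovs-appctl-style control sockets, and a compute node runs no ovn-northd, so that lookup can never succeed. A sketch of the existence check involved; the glob patterns assume the standard OVS/OVN runtime layout rather than anything this log states:

    import glob

    for pattern in ("/run/openvswitch/ovsdb-server.*.ctl",
                    "/run/ovn/ovn-northd.*.ctl"):
        matches = glob.glob(pattern)
        print(pattern, "->", matches or "no control socket files found")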
Dec  3 18:46:32 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:46:32 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:46:32 compute-0 nova_compute[348325]: 2025-12-03 18:46:32.687 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:46:33 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1418: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:46:33 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:46:33 compute-0 nova_compute[348325]: 2025-12-03 18:46:33.756 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:46:34 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec  3 18:46:35 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1419: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:46:35 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec  3 18:46:37 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1420: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:46:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 18:46:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1814478149' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 18:46:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 18:46:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1814478149' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 18:46:37 compute-0 nova_compute[348325]: 2025-12-03 18:46:37.689 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:46:38 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:46:38 compute-0 nova_compute[348325]: 2025-12-03 18:46:38.759 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:46:39 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1421: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:46:40 compute-0 podman[425995]: 2025-12-03 18:46:40.965113071 +0000 UTC m=+0.112903864 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  3 18:46:41 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1422: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:46:42 compute-0 nova_compute[348325]: 2025-12-03 18:46:42.692 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:46:43 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1423: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:46:43 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:46:43 compute-0 nova_compute[348325]: 2025-12-03 18:46:43.763 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:46:43 compute-0 podman[426020]: 2025-12-03 18:46:43.954369893 +0000 UTC m=+0.096066242 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4)
Dec  3 18:46:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:46:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:46:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:46:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:46:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:46:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:46:43 compute-0 podman[426019]: 2025-12-03 18:46:43.993628173 +0000 UTC m=+0.137863954 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
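Every health_status line above carries the container's config_data, including a two-key healthcheck ('test' plus a 'mount' for /openstack). A sketch of how that pair plausibly maps onto podman flags under the edpm_ansible convention visible here; this is an inference from the log, not the role's actual implementation:

    def healthcheck_args(hc: dict) -> list:
        # 'test' becomes the health command, 'mount' the read-only /openstack
        # volume the command runs from (per the volumes lists in this log).
        return [
            "--health-cmd", hc["test"],
            "--volume", f"{hc['mount']}:/openstack:ro,z",
        ]

    print(healthcheck_args({
        "test": "/openstack/healthcheck",
        "mount": "/var/lib/openstack/healthchecks/ovn_controller",
    }))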
Dec  3 18:46:45 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1424: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:46:47 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1425: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:46:47 compute-0 nova_compute[348325]: 2025-12-03 18:46:47.695 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:46:48 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:46:48 compute-0 nova_compute[348325]: 2025-12-03 18:46:48.767 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:46:49 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1426: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:46:51 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1427: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:46:52 compute-0 nova_compute[348325]: 2025-12-03 18:46:52.697 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:46:53 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1428: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:46:53 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:46:53 compute-0 nova_compute[348325]: 2025-12-03 18:46:53.770 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:46:55 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1429: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:46:57 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1430: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:46:57 compute-0 nova_compute[348325]: 2025-12-03 18:46:57.699 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:46:57 compute-0 podman[426062]: 2025-12-03 18:46:57.940671424 +0000 UTC m=+0.100338937 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  3 18:46:57 compute-0 podman[426064]: 2025-12-03 18:46:57.962259622 +0000 UTC m=+0.116857870 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, release=1755695350, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, com.redhat.component=ubi9-minimal-container, distribution-scope=public, version=9.6, build-date=2025-08-20T13:12:41, config_id=edpm, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-type=git, name=ubi9-minimal)
Dec  3 18:46:57 compute-0 podman[426063]: 2025-12-03 18:46:57.978369566 +0000 UTC m=+0.127624184 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 18:46:58 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:46:58 compute-0 nova_compute[348325]: 2025-12-03 18:46:58.775 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:46:59 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1431: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:46:59 compute-0 podman[158200]: time="2025-12-03T18:46:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:46:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:46:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 18:46:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:46:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8637 "" "Go-http-client/1.1"
Dec  3 18:46:59 compute-0 podman[426127]: 2025-12-03 18:46:59.938391498 +0000 UTC m=+0.088086057 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  3 18:46:59 compute-0 podman[426125]: 2025-12-03 18:46:59.941053072 +0000 UTC m=+0.100654963 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, vcs-type=git, release=1214.1726694543, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, io.openshift.expose-services=, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., architecture=x86_64, container_name=kepler, maintainer=Red Hat, Inc., release-0.7.12=, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4)
Dec  3 18:46:59 compute-0 podman[426126]: 2025-12-03 18:46:59.945634215 +0000 UTC m=+0.098542743 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:47:01 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1432: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:47:01 compute-0 openstack_network_exporter[365222]: ERROR   18:47:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:47:01 compute-0 openstack_network_exporter[365222]: ERROR   18:47:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:47:01 compute-0 openstack_network_exporter[365222]: ERROR   18:47:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:47:01 compute-0 openstack_network_exporter[365222]: ERROR   18:47:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:47:01 compute-0 openstack_network_exporter[365222]: 
Dec  3 18:47:01 compute-0 openstack_network_exporter[365222]: ERROR   18:47:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:47:01 compute-0 openstack_network_exporter[365222]: 
Dec  3 18:47:02 compute-0 nova_compute[348325]: 2025-12-03 18:47:02.701 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:47:03 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1433: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:47:03 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:47:03 compute-0 nova_compute[348325]: 2025-12-03 18:47:03.778 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:47:05 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1434: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:47:07 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1435: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:47:07 compute-0 nova_compute[348325]: 2025-12-03 18:47:07.705 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:47:08 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:47:08 compute-0 nova_compute[348325]: 2025-12-03 18:47:08.752 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:47:08 compute-0 nova_compute[348325]: 2025-12-03 18:47:08.753 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:47:08 compute-0 nova_compute[348325]: 2025-12-03 18:47:08.783 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:47:09 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1436: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:47:10 compute-0 nova_compute[348325]: 2025-12-03 18:47:10.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:47:10 compute-0 nova_compute[348325]: 2025-12-03 18:47:10.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:47:10 compute-0 nova_compute[348325]: 2025-12-03 18:47:10.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
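The burst of "Running periodic task ComputeManager._*" lines comes from oslo.service's periodic task runner, which nova's ComputeManager inherits. A minimal sketch of that pattern, assuming oslo.config and oslo.service are installed; the task body is illustrative:

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(CONF)

        @periodic_task.periodic_task(spacing=60)
        def _check_instance_build_time(self, context):
            # each invocation is logged at DEBUG, as in the journal above
            pass

    Manager().run_periodic_tasks(context=None)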
Dec  3 18:47:11 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1437: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:47:11 compute-0 podman[426181]: 2025-12-03 18:47:11.929205732 +0000 UTC m=+0.080138502 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  3 18:47:12 compute-0 nova_compute[348325]: 2025-12-03 18:47:12.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:47:12 compute-0 nova_compute[348325]: 2025-12-03 18:47:12.487 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  3 18:47:12 compute-0 nova_compute[348325]: 2025-12-03 18:47:12.488 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  3 18:47:12 compute-0 nova_compute[348325]: 2025-12-03 18:47:12.707 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:47:13 compute-0 nova_compute[348325]: 2025-12-03 18:47:13.188 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "refresh_cache-1ca1fbdb-089c-4544-821e-0542089b8424" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  3 18:47:13 compute-0 nova_compute[348325]: 2025-12-03 18:47:13.189 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquired lock "refresh_cache-1ca1fbdb-089c-4544-821e-0542089b8424" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  3 18:47:13 compute-0 nova_compute[348325]: 2025-12-03 18:47:13.190 348329 DEBUG nova.network.neutron [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec  3 18:47:13 compute-0 nova_compute[348325]: 2025-12-03 18:47:13.191 348329 DEBUG nova.objects.instance [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lazy-loading 'info_cache' on Instance uuid 1ca1fbdb-089c-4544-821e-0542089b8424 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
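The Acquiring/Acquired pair above is oslo.concurrency's named-lock helper guarding the info-cache refresh. A sketch of that pattern, with the lock name copied from this log:

    from oslo_concurrency import lockutils

    with lockutils.lock("refresh_cache-1ca1fbdb-089c-4544-821e-0542089b8424"):
        # refresh the instance's network info cache while holding the lock
        pass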
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.249 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them; therefore, the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.250 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
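The two lines above show the agent dispatching pollsters through a ThreadPoolExecutor with a single worker. A simplified stand-in for that dispatch, assuming a hypothetical poll callable; with max_workers=1 the pollsters still run sequentially even though they go through an executor:

# Minimal sketch of executor-based pollster dispatch with one worker.
from concurrent.futures import ThreadPoolExecutor

pollsters = ["network.incoming.packets.error", "network.outgoing.bytes"]

def poll(name: str) -> str:
    return f"polled {name}"

with ThreadPoolExecutor(max_workers=1) as executor:  # [1] thread, as logged
    for result in executor.map(poll, pollsters):
        print(result)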
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.250 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.251 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7eff8d7fffe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.251 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.253 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff9026f920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.253 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.253 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.253 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffa10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.253 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8daba2d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.253 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.254 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff90799b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.254 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.254 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8f46ebd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.254 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.254 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.254 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.254 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.254 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.255 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.255 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.255 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.255 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.256 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.256 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.256 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7fff50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.256 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.256 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7fffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.256 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8ef7c7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.260 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '1ca1fbdb-089c-4544-821e-0542089b8424', 'name': 'test_0', 'flavor': {'id': '6cb250a4-d28c-4125-888b-653b31e29275', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e68cd467-b4e6-45e0-8e55-984fda402294'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'd2770200bdb2436c90142fa2e5ddcd47', 'user_id': '56338958b09445f5af9aa9e4601a1a8a', 'hostId': '233c08f520fd9700ef62a871bc5d558f2659759d89ea6c0726998878', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.264 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'df72d527-943e-4e8c-b62a-63afa5f18261', 'name': 'vn-66btob3-hjy2dfx75wfw-5fmurbrh4hte-vnf-qa644it4tdj5', 'flavor': {'id': '6cb250a4-d28c-4125-888b-653b31e29275', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e68cd467-b4e6-45e0-8e55-984fda402294'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'd2770200bdb2436c90142fa2e5ddcd47', 'user_id': '56338958b09445f5af9aa9e4601a1a8a', 'hostId': '233c08f520fd9700ef62a871bc5d558f2659759d89ea6c0726998878', 'status': 'active', 'metadata': {'metering.server_group': 'b322e118-e1cc-40be-8d8c-553648144092'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.269 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a6019a9c-c065-49d8-bef3-219bd2c79d8c', 'name': 'vn-66btob3-zeembfmsdvyd-qc6d57h54o3l-vnf-m24sgrg35czm', 'flavor': {'id': '6cb250a4-d28c-4125-888b-653b31e29275', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e68cd467-b4e6-45e0-8e55-984fda402294'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'd2770200bdb2436c90142fa2e5ddcd47', 'user_id': '56338958b09445f5af9aa9e4601a1a8a', 'hostId': '233c08f520fd9700ef62a871bc5d558f2659759d89ea6c0726998878', 'status': 'active', 'metadata': {'metering.server_group': 'b322e118-e1cc-40be-8d8c-553648144092'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.273 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'de3992c5-c1ad-4da3-9276-954d6365c3c9', 'name': 'vn-66btob3-t73jgstwyk5c-ol75pntdsuyz-vnf-noho2adux65j', 'flavor': {'id': '6cb250a4-d28c-4125-888b-653b31e29275', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e68cd467-b4e6-45e0-8e55-984fda402294'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'd2770200bdb2436c90142fa2e5ddcd47', 'user_id': '56338958b09445f5af9aa9e4601a1a8a', 'hostId': '233c08f520fd9700ef62a871bc5d558f2659759d89ea6c0726998878', 'status': 'active', 'metadata': {'metering.server_group': 'b322e118-e1cc-40be-8d8c-553648144092'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
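The four discovery lines above return per-instance dicts; three of the four carry a metering.server_group metadata key. A minimal sketch of grouping discovered instances by that key, with the dicts abbreviated copies of the log's instance data:

# Minimal sketch: group instances by the metering.server_group key.
from collections import defaultdict

instances = [
    {"id": "1ca1fbdb-089c-4544-821e-0542089b8424", "metadata": {}},
    {"id": "df72d527-943e-4e8c-b62a-63afa5f18261",
     "metadata": {"metering.server_group": "b322e118-e1cc-40be-8d8c-553648144092"}},
]

groups = defaultdict(list)
for inst in instances:
    group = inst["metadata"].get("metering.server_group", "ungrouped")
    groups[group].append(inst["id"])

print(dict(groups))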
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.274 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.274 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.274 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.275 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.276 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-03T18:47:13.275091) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
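The heartbeat / "Updated heartbeat" pair above suggests a per-pollster timestamp registry. A rough sketch of that idea, assuming a plain dict keyed by meter name; the real _update_status implementation is not shown in the log:

# Minimal sketch (assumption): per-pollster heartbeat timestamps.
from datetime import datetime, timezone

heartbeats: dict[str, datetime] = {}

def update_heartbeat(pollster: str) -> None:
    heartbeats[pollster] = datetime.now(timezone.utc)

update_heartbeat("network.incoming.packets.error")
print(heartbeats)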
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.281 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.286 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.291 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.295 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.296 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
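The _stats_to_sample lines above print one "<instance>/<meter> volume: <counter>" record per interface counter. A simplified stand-in for that conversion, assuming a hypothetical Sample dataclass rather than ceilometer's own sample class:

# Minimal sketch: turn a raw counter into a printable sample record.
from dataclasses import dataclass

@dataclass
class Sample:
    resource_id: str
    name: str
    volume: int

def stats_to_sample(instance_id: str, meter: str, counter: int) -> Sample:
    return Sample(resource_id=instance_id, name=meter, volume=counter)

s = stats_to_sample("1ca1fbdb-089c-4544-821e-0542089b8424",
                    "network.incoming.packets.error", 0)
print(f"{s.resource_id}/{s.name} volume: {s.volume}")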
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.296 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7eff8d8a80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.296 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.296 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.296 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.296 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.297 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-03T18:47:13.296880) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.297 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.outgoing.bytes volume: 2384 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.297 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/network.outgoing.bytes volume: 7216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.297 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/network.outgoing.bytes volume: 2216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.298 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.298 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.298 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7eff8d8a8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.298 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.299 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff9026f920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.299 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff9026f920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.299 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.299 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.299 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/network.outgoing.packets volume: 60 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.300 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-03T18:47:13.299196) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.300 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/network.outgoing.packets volume: 20 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.300 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.300 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.301 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7eff8d8a8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.301 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.301 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.301 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.301 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.301 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.302 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.302 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-03T18:47:13.301603) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.302 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/network.outgoing.bytes.delta volume: 2216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.302 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.303 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
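The .delta meters above are differences between consecutive cumulative readings: for instance 1ca1fbdb-... the cumulative network.outgoing.bytes is 2384 and the delta is 70, implying a previous reading of 2314. A minimal sketch of that cumulative-to-delta conversion; the cache dict is an assumption standing in for ceilometer's per-resource state:

# Minimal sketch: convert cumulative counters into per-cycle deltas.
_previous: dict[str, int] = {}

def to_delta(resource_id: str, cumulative: int) -> int:
    prev = _previous.get(resource_id, 0)
    _previous[resource_id] = cumulative
    return cumulative - prev

print(to_delta("1ca1fbdb", 2314))  # first cycle: full value
print(to_delta("1ca1fbdb", 2384))  # next cycle: 70, as in the log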
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.303 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7eff8d8a81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.303 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.303 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7eff8d7ff9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.303 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.304 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ffa10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.304 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ffa10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.304 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.304 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.incoming.bytes volume: 2178 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.304 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/network.incoming.bytes volume: 8280 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.305 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/network.incoming.bytes volume: 1486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.305 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/network.incoming.bytes volume: 1570 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.305 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.306 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7eff8d7fe840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.306 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-03T18:47:13.304293) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.306 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.306 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8daba2d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.306 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8daba2d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.306 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.307 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-03T18:47:13.306767) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:47:13 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1438: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.335 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.335 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.336 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.365 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.366 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.366 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.397 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.397 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.398 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.430 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.431 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.432 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.433 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
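Each instance above reports three disk.device.capacity samples (1073741824 bytes is 1 GiB, matching the flavor's 1 GB root and ephemeral disks; the small third device is likely a config drive). Per-device figures like these come from libvirt's blockInfo call, which returns capacity, allocation, and physical size. A minimal sketch using libvirt-python, assuming it is installed and that the guest's device names are vda/vdb:

# Minimal sketch (assumptions: libvirt-python available, device names
# vda/vdb; ceilometer's pollster wraps similar calls).
import libvirt

conn = libvirt.openReadOnly("qemu:///system")
dom = conn.lookupByUUIDString("1ca1fbdb-089c-4544-821e-0542089b8424")

for dev in ("vda", "vdb"):  # device names are assumptions
    capacity, allocation, physical = dom.blockInfo(dev)
    print(dev, capacity, allocation)  # 1073741824 == 1 GiB, as in the log
conn.close()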
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.433 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7eff8d8a82c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.433 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.433 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a82f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.434 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a82f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.434 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.434 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.435 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.435 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.436 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.436 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-03T18:47:13.434172) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.437 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.437 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7eff8d7ff9b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.437 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.437 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff90799b20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.437 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff90799b20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.438 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.438 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-03T18:47:13.438113) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.469 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/memory.usage volume: 48.91796875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.506 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/memory.usage volume: 48.90625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.539 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/memory.usage volume: 49.01171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.572 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/memory.usage volume: 49.109375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.573 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
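The memory.usage volumes above (around 49 MiB on the 512 MiB m1.small flavor) are derived from libvirt guest memory statistics. A rough sketch of that derivation, assuming the guest exposes balloon stats; which keys memoryStats() returns varies by guest, so the lookup falls back defensively:

# Minimal sketch (assumptions: balloon stats available; key names vary).
import libvirt

conn = libvirt.openReadOnly("qemu:///system")
dom = conn.lookupByUUIDString("1ca1fbdb-089c-4544-821e-0542089b8424")

stats = dom.memoryStats()  # values are in KiB
total = stats.get("available", stats.get("actual", 0))
unused = stats.get("unused", 0)
usage_mib = (total - unused) / 1024.0
print(f"memory.usage volume: {usage_mib}")  # ~48.9 on the 512 MiB flavor
conn.close()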
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.573 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7eff8d8a8350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.574 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.574 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.574 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.575 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.575 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.575 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.576 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.577 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-03T18:47:13.574918) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.578 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.579 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.579 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7eff8f682330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.580 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.580 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8f46ebd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.580 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8f46ebd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.580 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.581 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.581 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.582 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-03T18:47:13.580713) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.584 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.585 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.586 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.587 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.588 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.588 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.589 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.590 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.590 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.591 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.592 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.593 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7eff8d7ff4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.593 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.593 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.594 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.594 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.595 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-03T18:47:13.594180) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:47:13 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.661 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.662 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.662 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.753 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.754 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.755 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 nova_compute[348325]: 2025-12-03 18:47:13.786 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.853 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.854 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.854 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.923 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.924 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.924 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.925 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
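disk.device.read.bytes is a cumulative counter (lifetime bytes read per device), so a throughput figure has to be derived by differencing two polling cycles. A hedged sketch, assuming a fixed polling interval; the helper name and the example numbers are invented for illustration, this is not a ceilometer transformer:

    # Derive bytes/sec from two successive cumulative samples of the
    # same (instance, device) pair. Editor's sketch only.
    def read_rate(prev_volume, cur_volume, interval_s):
        """Return bytes/sec, or None if the counter reset (e.g. reboot)."""
        if interval_s <= 0 or cur_volume < prev_volume:
            return None
        return (cur_volume - prev_volume) / interval_s

    # Hypothetical numbers: a device going 23308800 -> 23324800 bytes
    # over a 300 s cycle reads at about 53 bytes/s.
    print(read_rate(23308800, 23324800, 300))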
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.925 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7eff8d930c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.926 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.926 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7eff8d7ff4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.926 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.926 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.926 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.926 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.926 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.latency volume: 1682579508 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.927 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.latency volume: 260360075 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.927 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.latency volume: 147233249 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.927 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.read.latency volume: 1698039964 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.928 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.read.latency volume: 224294548 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.928 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.read.latency volume: 159520694 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.928 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.read.latency volume: 1330892351 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.929 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.read.latency volume: 190600353 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.929 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.read.latency volume: 156629474 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.929 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.read.latency volume: 1270610173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.930 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.read.latency volume: 182054323 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.930 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.read.latency volume: 131449970 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.931 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.931 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7eff8d7ff530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.931 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.932 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.932 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.932 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-03T18:47:13.926673) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.932 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-03T18:47:13.932354) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.932 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.932 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.932 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.933 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.933 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.933 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.934 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.934 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.934 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.935 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.935 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.935 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.936 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.936 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.937 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7eff8d7ff590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.937 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.937 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.937 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.937 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.937 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.938 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-03T18:47:13.937618) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.938 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.938 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.938 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.939 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.939 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.940 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.940 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.940 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.941 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.941 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.941 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.942 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.942 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7eff8d7ff5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.943 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.943 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.943 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.943 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.943 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.944 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.944 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.945 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.write.bytes volume: 41852928 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.945 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_18:47:13
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.945 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 18:47:13 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 18:47:13 compute-0 ceph-mgr[193091]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.control', 'backups', 'default.rgw.meta', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'volumes', '.rgw.root', 'images', '.mgr', 'vms']
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.945 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-03T18:47:13.943337) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.946 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceph-mgr[193091]: [balancer INFO root] prepared 0/10 changes
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.946 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.946 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.947 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.write.bytes volume: 41762816 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.947 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.947 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.948 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.948 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7eff8d8a8620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.948 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.948 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.949 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.949 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.949 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.949 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.950 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.950 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-03T18:47:13.949392) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.950 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.951 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
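power.state samples carry the instance power-state enum as the volume; the value 1 reported for all four instances above is "running" (both libvirt's virDomainState and nova.compute.power_state use 1 for RUNNING). A small decoding sketch using nova's mapping; the dict and function name are the editor's:

    # nova.compute.power_state values (0=NOSTATE, 1=RUNNING, 3=PAUSED,
    # 4=SHUTDOWN, 6=CRASHED, 7=SUSPENDED); decoding helper is illustrative.
    POWER_STATE = {
        0: "nostate",
        1: "running",
        3: "paused",
        4: "shutdown",
        6: "crashed",
        7: "suspended",
    }

    def describe_power_state(volume):
        return POWER_STATE.get(volume, "unknown ({})".format(volume))

    print(describe_power_state(1))  # running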
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.951 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7eff8d7ff650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.951 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.951 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.951 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.952 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.952 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-03T18:47:13.952086) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.952 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.latency volume: 6303799002 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.953 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.latency volume: 23959545 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.953 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.953 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.write.latency volume: 10024888984 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.954 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.write.latency volume: 29522381 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.954 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.955 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.write.latency volume: 6164702929 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.955 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.write.latency volume: 24431067 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.955 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.956 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.write.latency volume: 6661882048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.956 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.write.latency volume: 21269890 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:47:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.956 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.958 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.958 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7eff8d7ff6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.958 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.958 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.958 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.959 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.959 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-03T18:47:13.959106) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.959 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.960 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.960 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.960 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.write.requests volume: 242 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.960 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.961 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.961 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.962 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.962 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.962 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.write.requests volume: 228 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.963 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.963 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.964 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.964 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7eff8d7ffa40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.964 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.964 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ffef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.964 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ffef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.965 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.965 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-03T18:47:13.964878) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.965 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.965 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.966 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/network.incoming.bytes.delta volume: 1396 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.966 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.967 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
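Unlike the cumulative disk counters, network.incoming.bytes.delta already reports the per-interval difference, so a volume of 0 means no traffic arrived during the cycle and the 1396 on instance a6019a9c is roughly 1.4 kB in this interval. Totals are therefore a plain sum; a minimal sketch with illustrative names and truncated UUIDs:

    from collections import defaultdict

    # Sum per-interval delta samples per instance (editor's sketch).
    def total_deltas(samples):
        """samples: iterable of (instance_uuid, delta_bytes) tuples."""
        totals = defaultdict(int)
        for uuid, delta in samples:
            totals[uuid] += delta
        return dict(totals)

    print(total_deltas([("a6019a9c", 1396), ("a6019a9c", 0), ("1ca1fbdb", 0)]))
    # {'a6019a9c': 1396, '1ca1fbdb': 0}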
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.967 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7eff8d7ff710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.967 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.968 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.968 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.968 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.969 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-03T18:47:13.968267) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.970 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.970 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7eff8d7fff20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:47:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:47:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.970 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.971 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7fff50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.971 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7fff50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.971 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.972 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-03T18:47:13.971398) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.972 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.incoming.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:47:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.972 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/network.incoming.packets volume: 52 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.973 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.973 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.973 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.974 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7eff8d7ff770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.974 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.974 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff7a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.974 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff7a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.975 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.976 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-03T18:47:13.974956) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.977 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.977 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7eff8d7fff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.978 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.978 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7fffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.978 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7fffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.979 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-03T18:47:13.978415) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.978 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.979 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.980 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.981 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.981 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.982 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.982 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7eff8d7fdac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.982 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.983 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8ef7c7d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.983 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8ef7c7d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.983 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.984 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-03T18:47:13.983658) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.984 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/cpu volume: 42880000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.984 14 DEBUG ceilometer.compute.pollsters [-] df72d527-943e-4e8c-b62a-63afa5f18261/cpu volume: 357550000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.985 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/cpu volume: 38810000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.985 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/cpu volume: 39280000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.985 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
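[editor's note] The cpu volumes above are cumulative guest CPU time in nanoseconds (42880000000 ns is about 42.9 s for instance 1ca1fbdb-089c-4544-821e-0542089b8424), not a utilization figure; a rate only falls out of two successive polls. A minimal sketch of that conversion (hypothetical helper, not ceilometer code):

    # Hypothetical helper: turn two cumulative "cpu" samples
    # (nanoseconds of guest CPU time) into a utilization percentage.
    def cpu_util_percent(prev_ns, curr_ns, interval_s, vcpus=1):
        delta_ns = curr_ns - prev_ns
        return 100.0 * delta_ns / (interval_s * 1e9 * vcpus)

    # Two polls 300 s apart on a 1-vCPU instance:
    print(cpu_util_percent(42_580_000_000, 42_880_000_000, 300))  # -> 0.1 (%)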
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.986 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.986 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.986 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.986 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.986 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.986 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.986 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.988 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.988 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.988 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.988 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.988 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.988 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.988 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:47:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:47:13.988 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
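[editor's note] Every meter in this task runs through the same cycle the DEBUG lines trace: discovery for the pollster, a coordination check against the configured hashrings, a heartbeat update, then one sample per discovered instance. A schematic of that control flow (simplified sketch with a hypothetical hashring object, not the actual AgentManager code):

    import datetime

    # Simplified per-pollster cycle as traced by the log:
    # discovery -> coordination check -> heartbeat -> samples.
    def run_pollster(name, sample_fn, discover, heartbeats, hashring=None):
        resources = discover("local_instances")          # "Executing discovery process ..."
        if hashring is not None and not hashring.owns(name):
            return []                                    # coordinated sources poll only their share
        heartbeats[name] = datetime.datetime.utcnow()    # "Updated heartbeat for <name>"
        return [sample_fn(r) for r in resources]         # "<uuid>/<name> volume: ..."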
Dec  3 18:47:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 18:47:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:47:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 18:47:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:47:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:47:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:47:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:47:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:47:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:47:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
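[editor's note] Both rbd_support handlers reload their per-pool schedule lists here and find them empty (start_after= carries no value). Schedules of either kind are normally registered through the rbd CLI; a sketch under the assumption that the intervals are only examples (pool name taken from the log):

    import subprocess

    # Example only: add an hourly mirror-snapshot schedule and a daily
    # trash-purge schedule on the 'vms' pool, which these handlers would then load.
    for cmd in (
        ["rbd", "mirror", "snapshot", "schedule", "add", "--pool", "vms", "1h"],
        ["rbd", "trash", "purge", "schedule", "add", "--pool", "vms", "1d"],
    ):
        subprocess.run(cmd, check=True)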
Dec  3 18:47:14 compute-0 podman[426205]: 2025-12-03 18:47:14.781561534 +0000 UTC m=+0.089369348 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Dec  3 18:47:14 compute-0 podman[426204]: 2025-12-03 18:47:14.849433605 +0000 UTC m=+0.152486923 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
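[editor's note] The health_status=healthy events above come from podman's periodic healthcheck runs (the 'healthcheck' entry in each config_data). The recorded state can be read back from the container; a small sketch (recent podman exposes the record as .State.Health, older releases as .State.Healthcheck):

    import json, subprocess

    def health(container):
        """Return the health record podman keeps for a container."""
        out = subprocess.run(
            ["podman", "inspect", "--format", "{{json .State.Health}}", container],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)

    print(health("ceilometer_agent_compute")["Status"])  # e.g. "healthy"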
Dec  3 18:47:15 compute-0 nova_compute[348325]: 2025-12-03 18:47:15.206 348329 DEBUG nova.network.neutron [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Updating instance_info_cache with network_info: [{"id": "3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b", "address": "fa:16:3e:ea:1b:25", "network": {"id": "85c8d446-ad7f-4d1b-a311-89b0b07e8aad", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.128", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2770200bdb2436c90142fa2e5ddcd47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d8505a1-5c", "ovs_interfaceid": "3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  3 18:47:15 compute-0 nova_compute[348325]: 2025-12-03 18:47:15.228 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Releasing lock "refresh_cache-1ca1fbdb-089c-4544-821e-0542089b8424" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  3 18:47:15 compute-0 nova_compute[348325]: 2025-12-03 18:47:15.228 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec  3 18:47:15 compute-0 nova_compute[348325]: 2025-12-03 18:47:15.229 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:47:15 compute-0 nova_compute[348325]: 2025-12-03 18:47:15.229 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:47:15 compute-0 nova_compute[348325]: 2025-12-03 18:47:15.230 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
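[editor's note] The info_cache payload above is a JSON list of VIFs, each carrying the port UUID, MAC address and nested subnet/IP data (fixed 192.168.0.128 with floating 192.168.122.225). A short sketch that pulls the addresses out of such a blob (field names exactly as logged):

    import json

    def addresses(network_info_json):
        """Yield (fixed_ip, [floating_ips]) pairs from a network_info blob."""
        for vif in json.loads(network_info_json):
            for subnet in vif["network"]["subnets"]:
                for ip in subnet["ips"]:
                    yield ip["address"], [f["address"] for f in ip.get("floating_ips", [])]

    # For the VIF logged above: ("192.168.0.128", ["192.168.122.225"])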
Dec  3 18:47:15 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1439: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:47:17 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1440: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:47:17 compute-0 nova_compute[348325]: 2025-12-03 18:47:17.711 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:47:18 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:47:18 compute-0 nova_compute[348325]: 2025-12-03 18:47:18.790 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:47:19 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1441: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:47:20 compute-0 nova_compute[348325]: 2025-12-03 18:47:20.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:47:20 compute-0 nova_compute[348325]: 2025-12-03 18:47:20.571 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:47:20 compute-0 nova_compute[348325]: 2025-12-03 18:47:20.572 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:47:20 compute-0 nova_compute[348325]: 2025-12-03 18:47:20.572 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:47:20 compute-0 nova_compute[348325]: 2025-12-03 18:47:20.573 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  3 18:47:20 compute-0 nova_compute[348325]: 2025-12-03 18:47:20.573 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 18:47:21 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:47:21 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4118687050' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:47:21 compute-0 nova_compute[348325]: 2025-12-03 18:47:21.081 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
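[editor's note] The resource audit shells out to ceph df as client.openstack (the dispatch is visible in the mon audit log two lines up) and parses the JSON to size the RBD-backed disk pool. A minimal reproduction of the probe, with the same command line as the log (key names follow ceph's JSON schema):

    import json, subprocess

    raw = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    stats = json.loads(raw)["stats"]
    print(f"{stats['total_avail_bytes'] / 2**30:.1f} GiB free "
          f"of {stats['total_bytes'] / 2**30:.1f} GiB")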
Dec  3 18:47:21 compute-0 nova_compute[348325]: 2025-12-03 18:47:21.216 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:47:21 compute-0 nova_compute[348325]: 2025-12-03 18:47:21.217 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:47:21 compute-0 nova_compute[348325]: 2025-12-03 18:47:21.217 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:47:21 compute-0 nova_compute[348325]: 2025-12-03 18:47:21.222 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:47:21 compute-0 nova_compute[348325]: 2025-12-03 18:47:21.222 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:47:21 compute-0 nova_compute[348325]: 2025-12-03 18:47:21.222 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:47:21 compute-0 nova_compute[348325]: 2025-12-03 18:47:21.227 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:47:21 compute-0 nova_compute[348325]: 2025-12-03 18:47:21.227 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:47:21 compute-0 nova_compute[348325]: 2025-12-03 18:47:21.228 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:47:21 compute-0 nova_compute[348325]: 2025-12-03 18:47:21.231 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:47:21 compute-0 nova_compute[348325]: 2025-12-03 18:47:21.232 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:47:21 compute-0 nova_compute[348325]: 2025-12-03 18:47:21.232 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:47:21 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1442: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:47:21 compute-0 nova_compute[348325]: 2025-12-03 18:47:21.658 348329 WARNING nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  3 18:47:21 compute-0 nova_compute[348325]: 2025-12-03 18:47:21.659 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3225MB free_disk=59.85565948486328GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  3 18:47:21 compute-0 nova_compute[348325]: 2025-12-03 18:47:21.660 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:47:21 compute-0 nova_compute[348325]: 2025-12-03 18:47:21.660 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:47:21 compute-0 nova_compute[348325]: 2025-12-03 18:47:21.787 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance 1ca1fbdb-089c-4544-821e-0542089b8424 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  3 18:47:21 compute-0 nova_compute[348325]: 2025-12-03 18:47:21.788 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance df72d527-943e-4e8c-b62a-63afa5f18261 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  3 18:47:21 compute-0 nova_compute[348325]: 2025-12-03 18:47:21.788 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance de3992c5-c1ad-4da3-9276-954d6365c3c9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  3 18:47:21 compute-0 nova_compute[348325]: 2025-12-03 18:47:21.788 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance a6019a9c-c065-49d8-bef3-219bd2c79d8c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  3 18:47:21 compute-0 nova_compute[348325]: 2025-12-03 18:47:21.789 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  3 18:47:21 compute-0 nova_compute[348325]: 2025-12-03 18:47:21.789 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=59GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  3 18:47:21 compute-0 nova_compute[348325]: 2025-12-03 18:47:21.812 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Refreshing inventories for resource provider 00cd1895-22aa-49c6-bdb2-0991af662704 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec  3 18:47:21 compute-0 nova_compute[348325]: 2025-12-03 18:47:21.844 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Updating ProviderTree inventory for provider 00cd1895-22aa-49c6-bdb2-0991af662704 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec  3 18:47:21 compute-0 nova_compute[348325]: 2025-12-03 18:47:21.845 348329 DEBUG nova.compute.provider_tree [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Updating inventory in ProviderTree for provider 00cd1895-22aa-49c6-bdb2-0991af662704 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
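[editor's note] In the inventory pushed to placement, the schedulable capacity per resource class is (total - reserved) * allocation_ratio: this host therefore advertises (8 - 0) * 4.0 = 32 VCPU, (7679 - 512) * 1.0 = 7167 MB of RAM and (59 - 1) * 0.9 = 52.2 GB of disk. The same arithmetic as a quick check:

    # Effective capacity implied by the inventory logged above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2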
Dec  3 18:47:21 compute-0 nova_compute[348325]: 2025-12-03 18:47:21.865 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Refreshing aggregate associations for resource provider 00cd1895-22aa-49c6-bdb2-0991af662704, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec  3 18:47:21 compute-0 nova_compute[348325]: 2025-12-03 18:47:21.899 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Refreshing trait associations for resource provider 00cd1895-22aa-49c6-bdb2-0991af662704, traits: COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_FMA3,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AESNI,HW_CPU_X86_AMD_SVM,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SVM,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_ABM,HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_BMI,HW_CPU_X86_SHA,COMPUTE_NODE,HW_CPU_X86_SSE42,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE4A,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,HW_CPU_X86_AVX2,COMPUTE_ACCELERATORS,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ARI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec  3 18:47:22 compute-0 nova_compute[348325]: 2025-12-03 18:47:22.012 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 18:47:22 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:47:22 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3257793530' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:47:22 compute-0 nova_compute[348325]: 2025-12-03 18:47:22.527 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 18:47:22 compute-0 nova_compute[348325]: 2025-12-03 18:47:22.536 348329 DEBUG nova.compute.provider_tree [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  3 18:47:22 compute-0 nova_compute[348325]: 2025-12-03 18:47:22.557 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  3 18:47:22 compute-0 nova_compute[348325]: 2025-12-03 18:47:22.558 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  3 18:47:22 compute-0 nova_compute[348325]: 2025-12-03 18:47:22.559 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.899s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:47:22 compute-0 nova_compute[348325]: 2025-12-03 18:47:22.713 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:47:23 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1443: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:47:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:47:23.342 286999 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:47:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:47:23.343 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:47:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:47:23.344 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
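[editor's note] The acquire/release triples logged here (and for "compute_resources" above) are oslo.concurrency's standard lock tracing; callers normally get them from the synchronized decorator rather than driving the lock by hand. A sketch against the real oslo_concurrency API:

    from oslo_concurrency import lockutils

    # Calling this emits the same "Acquiring lock ... / Lock ... acquired /
    # Lock ... released" DEBUG lines seen in the log.
    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        pass  # body runs with the named lock held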
Dec  3 18:47:23 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:47:23 compute-0 nova_compute[348325]: 2025-12-03 18:47:23.796 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:47:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 18:47:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:47:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 18:47:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:47:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0022107945480888194 of space, bias 1.0, pg target 0.6632383644266459 quantized to 32 (current 32)
Dec  3 18:47:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:47:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:47:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:47:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:47:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:47:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec  3 18:47:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:47:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 18:47:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:47:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:47:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:47:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 18:47:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:47:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 18:47:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:47:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:47:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:47:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
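[editor's note] Each raw "pg target" above is reproducible from the logged numbers as usage_ratio * bias * C, where C works out to exactly 300 on every line, consistent with the default mon_target_pg_per_osd=100 times 3 OSDs (that decomposition is an inference, not stated in the log); the target is then quantized to a power of two and only applied when it strays far enough from the current pg_num. Checking three of the lines:

    # Reproduce the autoscaler's raw pg targets from the logged ratios.
    C = 100 * 3  # inferred: every logged target / (ratio * bias) == 300
    for pool, ratio, bias in [
        (".mgr",               7.185749983720779e-06, 1.0),
        ("vms",                0.0022107945480888194, 1.0),
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0),
    ]:
        print(pool, ratio * bias * C)
    # 0.0021557249951..., 0.6632383644..., 0.0006104707950... (as logged)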
Dec  3 18:47:25 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1444: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:47:27 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1445: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:47:27 compute-0 nova_compute[348325]: 2025-12-03 18:47:27.718 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:47:28 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:47:28 compute-0 nova_compute[348325]: 2025-12-03 18:47:28.799 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:47:28 compute-0 podman[426294]: 2025-12-03 18:47:28.937003971 +0000 UTC m=+0.096268096 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  3 18:47:28 compute-0 podman[426295]: 2025-12-03 18:47:28.939389509 +0000 UTC m=+0.091374657 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  3 18:47:28 compute-0 podman[426296]: 2025-12-03 18:47:28.946267177 +0000 UTC m=+0.108643458 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, distribution-scope=public, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., version=9.6, architecture=x86_64, release=1755695350, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=)
Dec  3 18:47:29 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1446: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:47:29 compute-0 podman[158200]: time="2025-12-03T18:47:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:47:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:47:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 18:47:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:47:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8646 "" "Go-http-client/1.1"
Dec  3 18:47:30 compute-0 podman[426353]: 2025-12-03 18:47:30.947356581 +0000 UTC m=+0.090306471 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  3 18:47:30 compute-0 podman[426351]: 2025-12-03 18:47:30.96164566 +0000 UTC m=+0.121814321 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, config_id=edpm, name=ubi9, io.buildah.version=1.29.0, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, version=9.4, io.openshift.tags=base rhel9, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, release-0.7.12=, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  3 18:47:30 compute-0 podman[426352]: 2025-12-03 18:47:30.969820231 +0000 UTC m=+0.125892182 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi)
Dec  3 18:47:31 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1447: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:47:31 compute-0 openstack_network_exporter[365222]: ERROR   18:47:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:47:31 compute-0 openstack_network_exporter[365222]: ERROR   18:47:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:47:31 compute-0 openstack_network_exporter[365222]: ERROR   18:47:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:47:31 compute-0 openstack_network_exporter[365222]: ERROR   18:47:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:47:31 compute-0 openstack_network_exporter[365222]: ERROR   18:47:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:47:32 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Dec  3 18:47:32 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  3 18:47:32 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:47:32 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:47:32 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 18:47:32 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:47:32 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 18:47:32 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:47:32 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 0eb6e26c-81fb-4957-9331-6a0aec7ebb1c does not exist
Dec  3 18:47:32 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev b3c04db7-7534-4315-ae1e-64a0474a1459 does not exist
Dec  3 18:47:32 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev e1f589c8-2a77-4ec9-8933-e7dc9a842028 does not exist
Dec  3 18:47:32 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 18:47:32 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 18:47:32 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 18:47:32 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:47:32 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:47:32 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:47:32 compute-0 nova_compute[348325]: 2025-12-03 18:47:32.717 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:47:33 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1448: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:47:33 compute-0 podman[426672]: 2025-12-03 18:47:33.411530634 +0000 UTC m=+0.084627981 container create 207160be617639b53deeca094c3ff8c33ac68b62005ca11efb8d535ab2a36601 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_rosalind, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:47:33 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  3 18:47:33 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:47:33 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:47:33 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:47:33 compute-0 podman[426672]: 2025-12-03 18:47:33.377943723 +0000 UTC m=+0.051041080 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:47:33 compute-0 systemd[1]: Started libpod-conmon-207160be617639b53deeca094c3ff8c33ac68b62005ca11efb8d535ab2a36601.scope.
Dec  3 18:47:33 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:47:33 compute-0 podman[426672]: 2025-12-03 18:47:33.55315408 +0000 UTC m=+0.226251407 container init 207160be617639b53deeca094c3ff8c33ac68b62005ca11efb8d535ab2a36601 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_rosalind, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:47:33 compute-0 podman[426672]: 2025-12-03 18:47:33.563655986 +0000 UTC m=+0.236753293 container start 207160be617639b53deeca094c3ff8c33ac68b62005ca11efb8d535ab2a36601 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  3 18:47:33 compute-0 podman[426672]: 2025-12-03 18:47:33.567819028 +0000 UTC m=+0.240916365 container attach 207160be617639b53deeca094c3ff8c33ac68b62005ca11efb8d535ab2a36601 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_rosalind, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  3 18:47:33 compute-0 sharp_rosalind[426688]: 167 167
Dec  3 18:47:33 compute-0 systemd[1]: libpod-207160be617639b53deeca094c3ff8c33ac68b62005ca11efb8d535ab2a36601.scope: Deactivated successfully.
Dec  3 18:47:33 compute-0 conmon[426688]: conmon 207160be617639b53dee <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-207160be617639b53deeca094c3ff8c33ac68b62005ca11efb8d535ab2a36601.scope/container/memory.events
Dec  3 18:47:33 compute-0 podman[426672]: 2025-12-03 18:47:33.57483342 +0000 UTC m=+0.247930737 container died 207160be617639b53deeca094c3ff8c33ac68b62005ca11efb8d535ab2a36601 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:47:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-cfb5fc56d66b5ae501f015ce23a5f7c62a54107ae686232a1c8a8dd26eba2f01-merged.mount: Deactivated successfully.
Dec  3 18:47:33 compute-0 podman[426672]: 2025-12-03 18:47:33.62840111 +0000 UTC m=+0.301498417 container remove 207160be617639b53deeca094c3ff8c33ac68b62005ca11efb8d535ab2a36601 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_rosalind, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  3 18:47:33 compute-0 systemd[1]: libpod-conmon-207160be617639b53deeca094c3ff8c33ac68b62005ca11efb8d535ab2a36601.scope: Deactivated successfully.
Dec  3 18:47:33 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:47:33 compute-0 nova_compute[348325]: 2025-12-03 18:47:33.802 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:47:33 compute-0 podman[426710]: 2025-12-03 18:47:33.83024155 +0000 UTC m=+0.053701696 container create 918dc7287d41f2e3d603a5510636b8f88037289acb979746744624ba0891cc1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:47:33 compute-0 systemd[1]: Started libpod-conmon-918dc7287d41f2e3d603a5510636b8f88037289acb979746744624ba0891cc1c.scope.
Dec  3 18:47:33 compute-0 podman[426710]: 2025-12-03 18:47:33.802390678 +0000 UTC m=+0.025850794 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:47:33 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:47:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b75d7dc0480bad037e590288b990c8ecbf1791dd74437022579def4165a9ab7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:47:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b75d7dc0480bad037e590288b990c8ecbf1791dd74437022579def4165a9ab7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:47:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b75d7dc0480bad037e590288b990c8ecbf1791dd74437022579def4165a9ab7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:47:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b75d7dc0480bad037e590288b990c8ecbf1791dd74437022579def4165a9ab7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:47:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b75d7dc0480bad037e590288b990c8ecbf1791dd74437022579def4165a9ab7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 18:47:33 compute-0 podman[426710]: 2025-12-03 18:47:33.961277695 +0000 UTC m=+0.184737851 container init 918dc7287d41f2e3d603a5510636b8f88037289acb979746744624ba0891cc1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:47:33 compute-0 podman[426710]: 2025-12-03 18:47:33.977592715 +0000 UTC m=+0.201052821 container start 918dc7287d41f2e3d603a5510636b8f88037289acb979746744624ba0891cc1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:47:33 compute-0 podman[426710]: 2025-12-03 18:47:33.982849624 +0000 UTC m=+0.206309740 container attach 918dc7287d41f2e3d603a5510636b8f88037289acb979746744624ba0891cc1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bassi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:47:35 compute-0 lucid_bassi[426727]: --> passed data devices: 0 physical, 3 LVM
Dec  3 18:47:35 compute-0 lucid_bassi[426727]: --> relative data size: 1.0
Dec  3 18:47:35 compute-0 lucid_bassi[426727]: --> All data devices are unavailable
Dec  3 18:47:35 compute-0 systemd[1]: libpod-918dc7287d41f2e3d603a5510636b8f88037289acb979746744624ba0891cc1c.scope: Deactivated successfully.
Dec  3 18:47:35 compute-0 systemd[1]: libpod-918dc7287d41f2e3d603a5510636b8f88037289acb979746744624ba0891cc1c.scope: Consumed 1.031s CPU time.
Dec  3 18:47:35 compute-0 conmon[426727]: conmon 918dc7287d41f2e3d603 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-918dc7287d41f2e3d603a5510636b8f88037289acb979746744624ba0891cc1c.scope/container/memory.events
Dec  3 18:47:35 compute-0 podman[426710]: 2025-12-03 18:47:35.09067671 +0000 UTC m=+1.314136816 container died 918dc7287d41f2e3d603a5510636b8f88037289acb979746744624ba0891cc1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bassi, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:47:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b75d7dc0480bad037e590288b990c8ecbf1791dd74437022579def4165a9ab7-merged.mount: Deactivated successfully.
Dec  3 18:47:35 compute-0 podman[426710]: 2025-12-03 18:47:35.157607358 +0000 UTC m=+1.381067464 container remove 918dc7287d41f2e3d603a5510636b8f88037289acb979746744624ba0891cc1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bassi, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec  3 18:47:35 compute-0 systemd[1]: libpod-conmon-918dc7287d41f2e3d603a5510636b8f88037289acb979746744624ba0891cc1c.scope: Deactivated successfully.
Dec  3 18:47:35 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1449: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:47:36 compute-0 podman[426904]: 2025-12-03 18:47:36.004744526 +0000 UTC m=+0.066004847 container create 70c3640550772769f765096e04065f516534b329826b4793c7f9b48e3d642f64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_joliot, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:47:36 compute-0 systemd[1]: Started libpod-conmon-70c3640550772769f765096e04065f516534b329826b4793c7f9b48e3d642f64.scope.
Dec  3 18:47:36 compute-0 podman[426904]: 2025-12-03 18:47:35.983141797 +0000 UTC m=+0.044402158 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:47:36 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:47:36 compute-0 podman[426904]: 2025-12-03 18:47:36.107194243 +0000 UTC m=+0.168454634 container init 70c3640550772769f765096e04065f516534b329826b4793c7f9b48e3d642f64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:47:36 compute-0 podman[426904]: 2025-12-03 18:47:36.115642789 +0000 UTC m=+0.176903130 container start 70c3640550772769f765096e04065f516534b329826b4793c7f9b48e3d642f64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_joliot, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:47:36 compute-0 vigilant_joliot[426920]: 167 167
Dec  3 18:47:36 compute-0 systemd[1]: libpod-70c3640550772769f765096e04065f516534b329826b4793c7f9b48e3d642f64.scope: Deactivated successfully.
Dec  3 18:47:36 compute-0 podman[426904]: 2025-12-03 18:47:36.122508817 +0000 UTC m=+0.183769198 container attach 70c3640550772769f765096e04065f516534b329826b4793c7f9b48e3d642f64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Dec  3 18:47:36 compute-0 podman[426904]: 2025-12-03 18:47:36.123503911 +0000 UTC m=+0.184764262 container died 70c3640550772769f765096e04065f516534b329826b4793c7f9b48e3d642f64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_joliot, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  3 18:47:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9381c6ad7204671ee1a69e2bdb43d144d5f2a0e0ae4568153459fb6e763126c-merged.mount: Deactivated successfully.
Dec  3 18:47:36 compute-0 podman[426904]: 2025-12-03 18:47:36.179252625 +0000 UTC m=+0.240512966 container remove 70c3640550772769f765096e04065f516534b329826b4793c7f9b48e3d642f64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:47:36 compute-0 systemd[1]: libpod-conmon-70c3640550772769f765096e04065f516534b329826b4793c7f9b48e3d642f64.scope: Deactivated successfully.
Dec  3 18:47:36 compute-0 podman[426943]: 2025-12-03 18:47:36.4201619 +0000 UTC m=+0.073454438 container create 53fd1bc1929b63f5e4cc722738ced6dce52572d32e3b6e04d5171ca1723ad6c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_wilson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:47:36 compute-0 podman[426943]: 2025-12-03 18:47:36.379396552 +0000 UTC m=+0.032689160 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:47:36 compute-0 systemd[1]: Started libpod-conmon-53fd1bc1929b63f5e4cc722738ced6dce52572d32e3b6e04d5171ca1723ad6c0.scope.
Dec  3 18:47:36 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:47:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c08bdf304befd1df7538812e5223ca3ed5665cef414cb1bd055268eceec6f0f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:47:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c08bdf304befd1df7538812e5223ca3ed5665cef414cb1bd055268eceec6f0f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:47:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c08bdf304befd1df7538812e5223ca3ed5665cef414cb1bd055268eceec6f0f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:47:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c08bdf304befd1df7538812e5223ca3ed5665cef414cb1bd055268eceec6f0f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:47:36 compute-0 podman[426943]: 2025-12-03 18:47:36.590482628 +0000 UTC m=+0.243775186 container init 53fd1bc1929b63f5e4cc722738ced6dce52572d32e3b6e04d5171ca1723ad6c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_wilson, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec  3 18:47:36 compute-0 podman[426943]: 2025-12-03 18:47:36.601925957 +0000 UTC m=+0.255218465 container start 53fd1bc1929b63f5e4cc722738ced6dce52572d32e3b6e04d5171ca1723ad6c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  3 18:47:36 compute-0 podman[426943]: 2025-12-03 18:47:36.606716065 +0000 UTC m=+0.260008613 container attach 53fd1bc1929b63f5e4cc722738ced6dce52572d32e3b6e04d5171ca1723ad6c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_wilson, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:47:37 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1450: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:47:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 18:47:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3347737869' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 18:47:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 18:47:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3347737869' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 18:47:37 compute-0 nova_compute[348325]: 2025-12-03 18:47:37.918 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]: {
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:    "0": [
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:        {
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:            "devices": [
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:                "/dev/loop3"
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:            ],
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:            "lv_name": "ceph_lv0",
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:            "lv_size": "21470642176",
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=973fbbc8-5aff-4a53-bee8-42e5a6788dd6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:            "lv_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:            "name": "ceph_lv0",
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:            "tags": {
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:                "ceph.block_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:                "ceph.cluster_name": "ceph",
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:                "ceph.crush_device_class": "",
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:                "ceph.encrypted": "0",
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:                "ceph.osd_fsid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:                "ceph.osd_id": "0",
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:                "ceph.type": "block",
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:                "ceph.vdo": "0"
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:            },
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:            "type": "block",
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:            "vg_name": "ceph_vg0"
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:        }
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:    ],
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:    "1": [
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:        {
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:            "devices": [
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:                "/dev/loop4"
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:            ],
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:            "lv_name": "ceph_lv1",
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:            "lv_size": "21470642176",
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1e2b0083-5293-47cb-a3d1-bc27cedc4ede,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:            "lv_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:            "name": "ceph_lv1",
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:            "tags": {
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:                "ceph.block_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:                "ceph.cluster_name": "ceph",
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:                "ceph.crush_device_class": "",
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:                "ceph.encrypted": "0",
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:                "ceph.osd_fsid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:                "ceph.osd_id": "1",
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:                "ceph.type": "block",
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:                "ceph.vdo": "0"
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:            },
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:            "type": "block",
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:            "vg_name": "ceph_vg1"
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:        }
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:    ],
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:    "2": [
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:        {
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:            "devices": [
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:                "/dev/loop5"
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:            ],
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:            "lv_name": "ceph_lv2",
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:            "lv_size": "21470642176",
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2abec9de-afba-437e-9a17-384a1dd8cd50,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:            "lv_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:            "name": "ceph_lv2",
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:            "tags": {
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:                "ceph.block_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:                "ceph.cluster_name": "ceph",
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:                "ceph.crush_device_class": "",
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:                "ceph.encrypted": "0",
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:                "ceph.osd_fsid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:                "ceph.osd_id": "2",
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:                "ceph.type": "block",
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:                "ceph.vdo": "0"
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:            },
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:            "type": "block",
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:            "vg_name": "ceph_vg2"
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:        }
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]:    ]
Dec  3 18:47:38 compute-0 beautiful_wilson[426958]: }
Dec  3 18:47:38 compute-0 systemd[1]: libpod-53fd1bc1929b63f5e4cc722738ced6dce52572d32e3b6e04d5171ca1723ad6c0.scope: Deactivated successfully.
Dec  3 18:47:38 compute-0 conmon[426958]: conmon 53fd1bc1929b63f5e4cc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-53fd1bc1929b63f5e4cc722738ced6dce52572d32e3b6e04d5171ca1723ad6c0.scope/container/memory.events
Dec  3 18:47:38 compute-0 podman[426943]: 2025-12-03 18:47:38.044532305 +0000 UTC m=+1.697824843 container died 53fd1bc1929b63f5e4cc722738ced6dce52572d32e3b6e04d5171ca1723ad6c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  3 18:47:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c08bdf304befd1df7538812e5223ca3ed5665cef414cb1bd055268eceec6f0f-merged.mount: Deactivated successfully.
Dec  3 18:47:38 compute-0 podman[426943]: 2025-12-03 18:47:38.102578536 +0000 UTC m=+1.755871044 container remove 53fd1bc1929b63f5e4cc722738ced6dce52572d32e3b6e04d5171ca1723ad6c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 18:47:38 compute-0 systemd[1]: libpod-conmon-53fd1bc1929b63f5e4cc722738ced6dce52572d32e3b6e04d5171ca1723ad6c0.scope: Deactivated successfully.
Dec  3 18:47:38 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:47:38 compute-0 nova_compute[348325]: 2025-12-03 18:47:38.806 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:47:38 compute-0 podman[427116]: 2025-12-03 18:47:38.834523545 +0000 UTC m=+0.071881330 container create 273995389e577ef3b871658b7d7dac7a8d679e46089646ddd093856e775de5b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_turing, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:47:38 compute-0 podman[427116]: 2025-12-03 18:47:38.80529207 +0000 UTC m=+0.042649915 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:47:38 compute-0 systemd[1]: Started libpod-conmon-273995389e577ef3b871658b7d7dac7a8d679e46089646ddd093856e775de5b8.scope.
Dec  3 18:47:38 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:47:38 compute-0 podman[427116]: 2025-12-03 18:47:38.964124346 +0000 UTC m=+0.201482171 container init 273995389e577ef3b871658b7d7dac7a8d679e46089646ddd093856e775de5b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_turing, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Dec  3 18:47:38 compute-0 podman[427116]: 2025-12-03 18:47:38.975599597 +0000 UTC m=+0.212957392 container start 273995389e577ef3b871658b7d7dac7a8d679e46089646ddd093856e775de5b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_turing, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  3 18:47:38 compute-0 podman[427116]: 2025-12-03 18:47:38.979920524 +0000 UTC m=+0.217278349 container attach 273995389e577ef3b871658b7d7dac7a8d679e46089646ddd093856e775de5b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_turing, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec  3 18:47:38 compute-0 cranky_turing[427132]: 167 167
Dec  3 18:47:38 compute-0 systemd[1]: libpod-273995389e577ef3b871658b7d7dac7a8d679e46089646ddd093856e775de5b8.scope: Deactivated successfully.
Dec  3 18:47:38 compute-0 podman[427116]: 2025-12-03 18:47:38.987892548 +0000 UTC m=+0.225250353 container died 273995389e577ef3b871658b7d7dac7a8d679e46089646ddd093856e775de5b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_turing, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  3 18:47:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec2cc91eff565be434b079a8c4f45f76555650214e928c30d9cb614930c41d64-merged.mount: Deactivated successfully.
Dec  3 18:47:39 compute-0 podman[427116]: 2025-12-03 18:47:39.034982561 +0000 UTC m=+0.272340356 container remove 273995389e577ef3b871658b7d7dac7a8d679e46089646ddd093856e775de5b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_turing, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:47:39 compute-0 systemd[1]: libpod-conmon-273995389e577ef3b871658b7d7dac7a8d679e46089646ddd093856e775de5b8.scope: Deactivated successfully.
Dec  3 18:47:39 compute-0 podman[427155]: 2025-12-03 18:47:39.247777337 +0000 UTC m=+0.055598871 container create c603164285c42a9a427b60582dad506fd1d54a46068db4f5b4a38c7ddd103bc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_morse, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec  3 18:47:39 compute-0 systemd[1]: Started libpod-conmon-c603164285c42a9a427b60582dad506fd1d54a46068db4f5b4a38c7ddd103bc4.scope.
Dec  3 18:47:39 compute-0 podman[427155]: 2025-12-03 18:47:39.223546055 +0000 UTC m=+0.031367609 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:47:39 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:47:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b59d405a443611e76e4c46ca47ff5971f2ac2bd9314fbf688c4f225f5b08637/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:47:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b59d405a443611e76e4c46ca47ff5971f2ac2bd9314fbf688c4f225f5b08637/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:47:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b59d405a443611e76e4c46ca47ff5971f2ac2bd9314fbf688c4f225f5b08637/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:47:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b59d405a443611e76e4c46ca47ff5971f2ac2bd9314fbf688c4f225f5b08637/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:47:39 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1451: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:47:39 compute-0 podman[427155]: 2025-12-03 18:47:39.352991191 +0000 UTC m=+0.160812725 container init c603164285c42a9a427b60582dad506fd1d54a46068db4f5b4a38c7ddd103bc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  3 18:47:39 compute-0 podman[427155]: 2025-12-03 18:47:39.380374142 +0000 UTC m=+0.188195676 container start c603164285c42a9a427b60582dad506fd1d54a46068db4f5b4a38c7ddd103bc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_morse, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec  3 18:47:39 compute-0 podman[427155]: 2025-12-03 18:47:39.386074711 +0000 UTC m=+0.193896245 container attach c603164285c42a9a427b60582dad506fd1d54a46068db4f5b4a38c7ddd103bc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True)
Dec  3 18:47:40 compute-0 sleepy_morse[427170]: {
Dec  3 18:47:40 compute-0 sleepy_morse[427170]:     "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
Dec  3 18:47:40 compute-0 sleepy_morse[427170]:         "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:47:40 compute-0 sleepy_morse[427170]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 18:47:40 compute-0 sleepy_morse[427170]:         "osd_id": 1,
Dec  3 18:47:40 compute-0 sleepy_morse[427170]:         "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:47:40 compute-0 sleepy_morse[427170]:         "type": "bluestore"
Dec  3 18:47:40 compute-0 sleepy_morse[427170]:     },
Dec  3 18:47:40 compute-0 sleepy_morse[427170]:     "2abec9de-afba-437e-9a17-384a1dd8cd50": {
Dec  3 18:47:40 compute-0 sleepy_morse[427170]:         "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:47:40 compute-0 sleepy_morse[427170]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 18:47:40 compute-0 sleepy_morse[427170]:         "osd_id": 2,
Dec  3 18:47:40 compute-0 sleepy_morse[427170]:         "osd_uuid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:47:40 compute-0 sleepy_morse[427170]:         "type": "bluestore"
Dec  3 18:47:40 compute-0 sleepy_morse[427170]:     },
Dec  3 18:47:40 compute-0 sleepy_morse[427170]:     "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": {
Dec  3 18:47:40 compute-0 sleepy_morse[427170]:         "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:47:40 compute-0 sleepy_morse[427170]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 18:47:40 compute-0 sleepy_morse[427170]:         "osd_id": 0,
Dec  3 18:47:40 compute-0 sleepy_morse[427170]:         "osd_uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:47:40 compute-0 sleepy_morse[427170]:         "type": "bluestore"
Dec  3 18:47:40 compute-0 sleepy_morse[427170]:     }
Dec  3 18:47:40 compute-0 sleepy_morse[427170]: }
Dec  3 18:47:40 compute-0 systemd[1]: libpod-c603164285c42a9a427b60582dad506fd1d54a46068db4f5b4a38c7ddd103bc4.scope: Deactivated successfully.
Dec  3 18:47:40 compute-0 systemd[1]: libpod-c603164285c42a9a427b60582dad506fd1d54a46068db4f5b4a38c7ddd103bc4.scope: Consumed 1.144s CPU time.
Dec  3 18:47:40 compute-0 podman[427155]: 2025-12-03 18:47:40.53731366 +0000 UTC m=+1.345135204 container died c603164285c42a9a427b60582dad506fd1d54a46068db4f5b4a38c7ddd103bc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_morse, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:47:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b59d405a443611e76e4c46ca47ff5971f2ac2bd9314fbf688c4f225f5b08637-merged.mount: Deactivated successfully.
Dec  3 18:47:40 compute-0 podman[427155]: 2025-12-03 18:47:40.624299708 +0000 UTC m=+1.432121252 container remove c603164285c42a9a427b60582dad506fd1d54a46068db4f5b4a38c7ddd103bc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_morse, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec  3 18:47:40 compute-0 systemd[1]: libpod-conmon-c603164285c42a9a427b60582dad506fd1d54a46068db4f5b4a38c7ddd103bc4.scope: Deactivated successfully.
Dec  3 18:47:40 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:47:40 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:47:40 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:47:40 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:47:40 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev cc2340c3-b5ec-45f4-a288-901a2cb9d25c does not exist
Dec  3 18:47:40 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev b10ea09c-1721-49c6-91e7-fd2c24d81cf1 does not exist
Dec  3 18:47:41 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1452: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:47:41 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:47:41 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:47:41 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Dec  3 18:47:41 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:47:41.770277) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 18:47:41 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Dec  3 18:47:41 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764787661770407, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 1525, "num_deletes": 507, "total_data_size": 1972618, "memory_usage": 2009920, "flush_reason": "Manual Compaction"}
Dec  3 18:47:41 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Dec  3 18:47:41 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764787661787040, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 1931699, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28635, "largest_seqno": 30159, "table_properties": {"data_size": 1924903, "index_size": 3422, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 16858, "raw_average_key_size": 18, "raw_value_size": 1909382, "raw_average_value_size": 2128, "num_data_blocks": 154, "num_entries": 897, "num_filter_entries": 897, "num_deletions": 507, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764787534, "oldest_key_time": 1764787534, "file_creation_time": 1764787661, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a1ac3b74-8599-4a51-8b4c-6fd35a134427", "db_session_id": "TYOLZSJOOVNJYKF8Y1CE", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Dec  3 18:47:41 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 16839 microseconds, and 9868 cpu microseconds.
Dec  3 18:47:41 compute-0 ceph-mon[192802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 18:47:41 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:47:41.787125) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 1931699 bytes OK
Dec  3 18:47:41 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:47:41.787150) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Dec  3 18:47:41 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:47:41.790161) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Dec  3 18:47:41 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:47:41.790185) EVENT_LOG_v1 {"time_micros": 1764787661790178, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 18:47:41 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:47:41.790209) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 18:47:41 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 1964844, prev total WAL file size 1964844, number of live WAL files 2.
Dec  3 18:47:41 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 18:47:41 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:47:41.792164) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Dec  3 18:47:41 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 18:47:41 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(1886KB)], [65(7110KB)]
Dec  3 18:47:41 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764787661792284, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 9212420, "oldest_snapshot_seqno": -1}
Dec  3 18:47:41 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 4954 keys, 7370469 bytes, temperature: kUnknown
Dec  3 18:47:41 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764787661873575, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 7370469, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7338168, "index_size": 18815, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12421, "raw_key_size": 125388, "raw_average_key_size": 25, "raw_value_size": 7249160, "raw_average_value_size": 1463, "num_data_blocks": 775, "num_entries": 4954, "num_filter_entries": 4954, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764784942, "oldest_key_time": 0, "file_creation_time": 1764787661, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a1ac3b74-8599-4a51-8b4c-6fd35a134427", "db_session_id": "TYOLZSJOOVNJYKF8Y1CE", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Dec  3 18:47:41 compute-0 ceph-mon[192802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 18:47:41 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:47:41.873857) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 7370469 bytes
Dec  3 18:47:41 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:47:41.875798) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 113.2 rd, 90.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 6.9 +0.0 blob) out(7.0 +0.0 blob), read-write-amplify(8.6) write-amplify(3.8) OK, records in: 5981, records dropped: 1027 output_compression: NoCompression
Dec  3 18:47:41 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:47:41.875813) EVENT_LOG_v1 {"time_micros": 1764787661875805, "job": 36, "event": "compaction_finished", "compaction_time_micros": 81373, "compaction_time_cpu_micros": 39876, "output_level": 6, "num_output_files": 1, "total_output_size": 7370469, "num_input_records": 5981, "num_output_records": 4954, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 18:47:41 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 18:47:41 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764787661876290, "job": 36, "event": "table_file_deletion", "file_number": 67}
Dec  3 18:47:41 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 18:47:41 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764787661877656, "job": 36, "event": "table_file_deletion", "file_number": 65}
Dec  3 18:47:41 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:47:41.791892) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:47:41 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:47:41.877831) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:47:41 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:47:41.877835) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:47:41 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:47:41.877837) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:47:41 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:47:41.877838) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:47:41 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:47:41.877839) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:47:42 compute-0 nova_compute[348325]: 2025-12-03 18:47:42.923 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:47:42 compute-0 podman[427267]: 2025-12-03 18:47:42.984621615 +0000 UTC m=+0.143852322 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 18:47:43 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1453: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:47:43 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:47:43 compute-0 nova_compute[348325]: 2025-12-03 18:47:43.810 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:47:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:47:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:47:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:47:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:47:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:47:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:47:44 compute-0 podman[427291]: 2025-12-03 18:47:44.945197119 +0000 UTC m=+0.103962855 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true)
Dec  3 18:47:45 compute-0 podman[427309]: 2025-12-03 18:47:45.1892629 +0000 UTC m=+0.214607732 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible)
Dec  3 18:47:45 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1454: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:47:47 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1455: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:47:47 compute-0 nova_compute[348325]: 2025-12-03 18:47:47.924 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:47:48 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:47:48 compute-0 nova_compute[348325]: 2025-12-03 18:47:48.813 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:47:49 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1456: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:47:51 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1457: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:47:52 compute-0 nova_compute[348325]: 2025-12-03 18:47:52.926 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:47:53 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1458: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:47:53 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:47:53 compute-0 nova_compute[348325]: 2025-12-03 18:47:53.817 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:47:55 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1459: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:47:57 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1460: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:47:57 compute-0 nova_compute[348325]: 2025-12-03 18:47:57.928 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:47:58 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:47:58 compute-0 nova_compute[348325]: 2025-12-03 18:47:58.819 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:47:59 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1461: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:47:59 compute-0 podman[158200]: time="2025-12-03T18:47:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:47:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:47:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 18:47:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:47:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8642 "" "Go-http-client/1.1"
Dec  3 18:47:59 compute-0 podman[427337]: 2025-12-03 18:47:59.911949772 +0000 UTC m=+0.083377102 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 18:47:59 compute-0 podman[427338]: 2025-12-03 18:47:59.920785699 +0000 UTC m=+0.084425829 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, version=9.6, config_id=edpm, distribution-scope=public, vendor=Red Hat, Inc., container_name=openstack_network_exporter, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9)
Dec  3 18:47:59 compute-0 podman[427336]: 2025-12-03 18:47:59.926211582 +0000 UTC m=+0.094831803 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  3 18:48:01 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1462: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:48:01 compute-0 openstack_network_exporter[365222]: ERROR   18:48:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:48:01 compute-0 openstack_network_exporter[365222]: ERROR   18:48:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:48:01 compute-0 openstack_network_exporter[365222]: ERROR   18:48:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:48:01 compute-0 openstack_network_exporter[365222]: ERROR   18:48:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:48:01 compute-0 openstack_network_exporter[365222]: 
Dec  3 18:48:01 compute-0 openstack_network_exporter[365222]: ERROR   18:48:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:48:01 compute-0 openstack_network_exporter[365222]: 
Dec  3 18:48:01 compute-0 podman[427400]: 2025-12-03 18:48:01.928237603 +0000 UTC m=+0.089635285 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec  3 18:48:01 compute-0 podman[427399]: 2025-12-03 18:48:01.928958131 +0000 UTC m=+0.097257602 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, tcib_managed=true)
Dec  3 18:48:01 compute-0 podman[427398]: 2025-12-03 18:48:01.940611396 +0000 UTC m=+0.109201695 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, io.openshift.expose-services=, version=9.4, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Dec  3 18:48:02 compute-0 nova_compute[348325]: 2025-12-03 18:48:02.930 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:48:03 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1463: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:48:03 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:48:03 compute-0 nova_compute[348325]: 2025-12-03 18:48:03.822 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:48:05 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1464: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:48:07 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1465: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:48:07 compute-0 nova_compute[348325]: 2025-12-03 18:48:07.932 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:48:08 compute-0 nova_compute[348325]: 2025-12-03 18:48:08.551 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:48:08 compute-0 nova_compute[348325]: 2025-12-03 18:48:08.551 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:48:08 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:48:08 compute-0 nova_compute[348325]: 2025-12-03 18:48:08.823 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:48:09 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1466: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:48:10 compute-0 nova_compute[348325]: 2025-12-03 18:48:10.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:48:10 compute-0 nova_compute[348325]: 2025-12-03 18:48:10.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:48:11 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1467: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:48:12 compute-0 nova_compute[348325]: 2025-12-03 18:48:12.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:48:12 compute-0 nova_compute[348325]: 2025-12-03 18:48:12.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:48:12 compute-0 nova_compute[348325]: 2025-12-03 18:48:12.936 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:48:13 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1468: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:48:13 compute-0 nova_compute[348325]: 2025-12-03 18:48:13.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:48:13 compute-0 nova_compute[348325]: 2025-12-03 18:48:13.487 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  3 18:48:13 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:48:13 compute-0 nova_compute[348325]: 2025-12-03 18:48:13.826 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:48:13 compute-0 nova_compute[348325]: 2025-12-03 18:48:13.845 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "refresh_cache-df72d527-943e-4e8c-b62a-63afa5f18261" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  3 18:48:13 compute-0 nova_compute[348325]: 2025-12-03 18:48:13.846 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquired lock "refresh_cache-df72d527-943e-4e8c-b62a-63afa5f18261" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  3 18:48:13 compute-0 nova_compute[348325]: 2025-12-03 18:48:13.846 348329 DEBUG nova.network.neutron [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec  3 18:48:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_18:48:13
Dec  3 18:48:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 18:48:13 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 18:48:13 compute-0 ceph-mgr[193091]: [balancer INFO root] pools ['default.rgw.log', 'backups', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.mgr', '.rgw.root', 'default.rgw.meta', 'volumes', 'vms', 'images', 'default.rgw.control']
Dec  3 18:48:13 compute-0 ceph-mgr[193091]: [balancer INFO root] prepared 0/10 changes
Dec  3 18:48:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:48:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:48:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:48:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:48:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:48:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:48:13 compute-0 podman[427455]: 2025-12-03 18:48:13.978383962 +0000 UTC m=+0.128188100 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  3 18:48:14 compute-0 nova_compute[348325]: 2025-12-03 18:48:14.332 348329 DEBUG oslo_concurrency.lockutils [None req-b614503a-617f-477e-82e6-dfcb45a19181 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Acquiring lock "df72d527-943e-4e8c-b62a-63afa5f18261" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:48:14 compute-0 nova_compute[348325]: 2025-12-03 18:48:14.332 348329 DEBUG oslo_concurrency.lockutils [None req-b614503a-617f-477e-82e6-dfcb45a19181 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "df72d527-943e-4e8c-b62a-63afa5f18261" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:48:14 compute-0 nova_compute[348325]: 2025-12-03 18:48:14.333 348329 DEBUG oslo_concurrency.lockutils [None req-b614503a-617f-477e-82e6-dfcb45a19181 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Acquiring lock "df72d527-943e-4e8c-b62a-63afa5f18261-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:48:14 compute-0 nova_compute[348325]: 2025-12-03 18:48:14.333 348329 DEBUG oslo_concurrency.lockutils [None req-b614503a-617f-477e-82e6-dfcb45a19181 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "df72d527-943e-4e8c-b62a-63afa5f18261-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:48:14 compute-0 nova_compute[348325]: 2025-12-03 18:48:14.334 348329 DEBUG oslo_concurrency.lockutils [None req-b614503a-617f-477e-82e6-dfcb45a19181 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "df72d527-943e-4e8c-b62a-63afa5f18261-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:48:14 compute-0 nova_compute[348325]: 2025-12-03 18:48:14.335 348329 INFO nova.compute.manager [None req-b614503a-617f-477e-82e6-dfcb45a19181 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Terminating instance#033[00m
Dec  3 18:48:14 compute-0 nova_compute[348325]: 2025-12-03 18:48:14.336 348329 DEBUG nova.compute.manager [None req-b614503a-617f-477e-82e6-dfcb45a19181 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
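The lock DEBUG lines above are oslo.concurrency at work: terminate_instance runs under a per-instance lock, with a nested short-lived "<uuid>-events" lock used to flush pending external events before teardown. A minimal sketch of that pattern (illustrative stand-in code, not Nova's actual implementation):

    from oslo_concurrency import lockutils

    INSTANCE_UUID = "df72d527-943e-4e8c-b62a-63afa5f18261"

    # In-process lock keyed on the instance UUID; concurrent terminate
    # calls for the same instance serialize here, producing the
    # Acquiring/acquired/released lines seen above.
    with lockutils.lock(INSTANCE_UUID):
        # Nested lock guarding the instance's external-event state
        # (cf. clear_events_for_instance in the log).
        with lockutils.lock(INSTANCE_UUID + "-events"):
            pending_events = {}  # stand-in for InstanceEvents bookkeeping
            pending_events.clear()
        # ... proceed to destroy the guest on the hypervisor ...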
Dec  3 18:48:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 18:48:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:48:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 18:48:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:48:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:48:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:48:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:48:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:48:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:48:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:48:14 compute-0 kernel: tap03bf6208-f4 (unregistering): left promiscuous mode
Dec  3 18:48:14 compute-0 NetworkManager[49087]: <info>  [1764787694.4688] device (tap03bf6208-f4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  3 18:48:14 compute-0 ovn_controller[89305]: 2025-12-03T18:48:14Z|00050|binding|INFO|Releasing lport 03bf6208-f40b-4534-a297-122588172fa5 from this chassis (sb_readonly=0)
Dec  3 18:48:14 compute-0 ovn_controller[89305]: 2025-12-03T18:48:14Z|00051|binding|INFO|Setting lport 03bf6208-f40b-4534-a297-122588172fa5 down in Southbound
Dec  3 18:48:14 compute-0 nova_compute[348325]: 2025-12-03 18:48:14.479 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:48:14 compute-0 ovn_controller[89305]: 2025-12-03T18:48:14Z|00052|binding|INFO|Removing iface tap03bf6208-f4 ovn-installed in OVS
Dec  3 18:48:14 compute-0 nova_compute[348325]: 2025-12-03 18:48:14.485 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:48:14 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:48:14.492 286999 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:41:ba:29 192.168.0.170'], port_security=['fa:16:3e:41:ba:29 192.168.0.170'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-4jfpq66btob3-hjy2dfx75wfw-5fmurbrh4hte-port-kiigdzr3s4cr', 'neutron:cidrs': '192.168.0.170/24', 'neutron:device_id': 'df72d527-943e-4e8c-b62a-63afa5f18261', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-85c8d446-ad7f-4d1b-a311-89b0b07e8aad', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-4jfpq66btob3-hjy2dfx75wfw-5fmurbrh4hte-port-kiigdzr3s4cr', 'neutron:project_id': 'd2770200bdb2436c90142fa2e5ddcd47', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8e48052e-a2fd-4fc1-8ebd-22e3b6e0bd66', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.213', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=12999ead-9a54-49b3-a532-a5f8bdddaf16, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f81e3e96760>], logical_port=03bf6208-f40b-4534-a297-122588172fa5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f81e3e96760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 18:48:14 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:48:14.493 286999 INFO neutron.agent.ovn.metadata.agent [-] Port 03bf6208-f40b-4534-a297-122588172fa5 in datapath 85c8d446-ad7f-4d1b-a311-89b0b07e8aad unbound from our chassis#033[00m
Dec  3 18:48:14 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:48:14.495 286999 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 85c8d446-ad7f-4d1b-a311-89b0b07e8aad#033[00m
Dec  3 18:48:14 compute-0 nova_compute[348325]: 2025-12-03 18:48:14.500 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:48:14 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:48:14.515 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[a6642d20-ebba-4ecf-bd15-56ef4823626b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:48:14 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Deactivated successfully.
Dec  3 18:48:14 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Consumed 7min 6.722s CPU time.
Dec  3 18:48:14 compute-0 systemd-machined[138702]: Machine qemu-2-instance-00000002 terminated.
Dec  3 18:48:14 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:48:14.555 411797 DEBUG oslo.privsep.daemon [-] privsep: reply[f91ea3c3-4ed0-4cbf-8012-6bf04225958d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:48:14 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:48:14.559 411797 DEBUG oslo.privsep.daemon [-] privsep: reply[981855ab-56b6-491a-9274-367e6b722f97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:48:14 compute-0 nova_compute[348325]: 2025-12-03 18:48:14.581 348329 INFO nova.virt.libvirt.driver [-] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Instance destroyed successfully.#033[00m
Dec  3 18:48:14 compute-0 nova_compute[348325]: 2025-12-03 18:48:14.581 348329 DEBUG nova.objects.instance [None req-b614503a-617f-477e-82e6-dfcb45a19181 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lazy-loading 'resources' on Instance uuid df72d527-943e-4e8c-b62a-63afa5f18261 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 18:48:14 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:48:14.586 411797 DEBUG oslo.privsep.daemon [-] privsep: reply[852e623c-d795-46b2-97f7-ded300ecef89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:48:14 compute-0 nova_compute[348325]: 2025-12-03 18:48:14.595 348329 DEBUG nova.virt.libvirt.vif [None req-b614503a-617f-477e-82e6-dfcb45a19181 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-03T18:37:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-66btob3-hjy2dfx75wfw-5fmurbrh4hte-vnf-qa644it4tdj5',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-66btob3-hjy2dfx75wfw-5fmurbrh4hte-vnf-qa644it4tdj5',id=2,image_ref='e68cd467-b4e6-45e0-8e55-984fda402294',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-03T18:37:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='b322e118-e1cc-40be-8d8c-553648144092'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d2770200bdb2436c90142fa2e5ddcd47',ramdisk_id='',reservation_id='r-ben8kdr5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='e68cd467-b4e6-45e0-8e55-984fda402294',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-03T18:37:50Z,user_data='[~8 KiB base64-encoded Heat cloud-init multipart MIME payload elided: cloud-config, boothook.sh, part-handler.py, cfn-userdata, loguserdata.py, cfn-metadata-server (https://heat-cfnapi-internal.openstack.svc:8000/v1/), cfn-boto-cfg; the oversized record was split across messages by rsyslogd, see "message too long" below]',user_id='56338958b09445f5af9aa9e4601a1a8a',uuid=df72d527-943e-4e8c-b62a-63afa5f18261,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "03bf6208-f40b-4534-a297-122588172fa5", "address": "fa:16:3e:41:ba:29", "network": {"id": "85c8d446-ad7f-4d1b-a311-89b0b07e8aad", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.170", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2770200bdb2436c90142fa2e5ddcd47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03bf6208-f4", "ovs_interfaceid": "03bf6208-f40b-4534-a297-122588172fa5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  3 18:48:14 compute-0 nova_compute[348325]: 2025-12-03 18:48:14.596 348329 DEBUG nova.network.os_vif_util [None req-b614503a-617f-477e-82e6-dfcb45a19181 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Converting VIF {"id": "03bf6208-f40b-4534-a297-122588172fa5", "address": "fa:16:3e:41:ba:29", "network": {"id": "85c8d446-ad7f-4d1b-a311-89b0b07e8aad", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.170", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2770200bdb2436c90142fa2e5ddcd47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03bf6208-f4", "ovs_interfaceid": "03bf6208-f40b-4534-a297-122588172fa5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 18:48:14 compute-0 nova_compute[348325]: 2025-12-03 18:48:14.596 348329 DEBUG nova.network.os_vif_util [None req-b614503a-617f-477e-82e6-dfcb45a19181 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:41:ba:29,bridge_name='br-int',has_traffic_filtering=True,id=03bf6208-f40b-4534-a297-122588172fa5,network=Network(85c8d446-ad7f-4d1b-a311-89b0b07e8aad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap03bf6208-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 18:48:14 compute-0 nova_compute[348325]: 2025-12-03 18:48:14.597 348329 DEBUG os_vif [None req-b614503a-617f-477e-82e6-dfcb45a19181 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:41:ba:29,bridge_name='br-int',has_traffic_filtering=True,id=03bf6208-f40b-4534-a297-122588172fa5,network=Network(85c8d446-ad7f-4d1b-a311-89b0b07e8aad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap03bf6208-f4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  3 18:48:14 compute-0 nova_compute[348325]: 2025-12-03 18:48:14.599 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:48:14 compute-0 nova_compute[348325]: 2025-12-03 18:48:14.599 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap03bf6208-f4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:48:14 compute-0 nova_compute[348325]: 2025-12-03 18:48:14.601 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:48:14 compute-0 nova_compute[348325]: 2025-12-03 18:48:14.603 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  3 18:48:14 compute-0 nova_compute[348325]: 2025-12-03 18:48:14.606 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:48:14 compute-0 nova_compute[348325]: 2025-12-03 18:48:14.609 348329 INFO os_vif [None req-b614503a-617f-477e-82e6-dfcb45a19181 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:41:ba:29,bridge_name='br-int',has_traffic_filtering=True,id=03bf6208-f40b-4534-a297-122588172fa5,network=Network(85c8d446-ad7f-4d1b-a311-89b0b07e8aad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap03bf6208-f4')#033[00m
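The unplug sequence above is os-vif's OVS plugin deleting the tap port from br-int through an ovsdbapp transaction (the DelPortCommand at 18:48:14.599). A standalone sketch of the equivalent ovsdbapp call, assuming the default local OVSDB socket; error handling omitted:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    conn = connection.Connection(
        idl=connection.OvsdbIdl.from_server(
            "unix:/run/openvswitch/db.sock", "Open_vSwitch"),
        timeout=10)
    ovs = impl_idl.OvsdbIdl(conn)

    # Equivalent of the logged transaction:
    #   DelPortCommand(port=tap03bf6208-f4, bridge=br-int, if_exists=True)
    ovs.del_port("tap03bf6208-f4", bridge="br-int",
                 if_exists=True).execute(check_error=True)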
Dec  3 18:48:14 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:48:14.609 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[96a2ef9c-ea0c-462a-96b2-a0db0047a366]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap85c8d446-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:c1:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 7, 'tx_packets': 11, 'rx_bytes': 574, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 7, 'tx_packets': 11, 'rx_bytes': 574, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 527503, 'reachable_time': 41701, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 427502, 'error': None, 'target': 'ovnmeta-85c8d446-ad7f-4d1b-a311-89b0b07e8aad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:48:14 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:48:14.633 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[cb6b7ab7-6128-47ad-abb1-3e1e9826659a]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap85c8d446-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 527519, 'tstamp': 527519}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 427503, 'error': None, 'target': 'ovnmeta-85c8d446-ad7f-4d1b-a311-89b0b07e8aad', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap85c8d446-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 527523, 'tstamp': 527523}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 427503, 'error': None, 'target': 'ovnmeta-85c8d446-ad7f-4d1b-a311-89b0b07e8aad', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
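The two privsep replies above are netlink dumps taken inside the ovnmeta-85c8d446... namespace: tap85c8d446-a1 carries both the subnet address 192.168.0.2/24 and the metadata address 169.254.169.254/32. A sketch of the same inspection with pyroute2 (requires root; the namespace name is taken from the log):

    from pyroute2 import NetNS

    with NetNS("ovnmeta-85c8d446-ad7f-4d1b-a311-89b0b07e8aad") as ns:
        for addr in ns.get_addr():
            attrs = dict(addr["attrs"])
            print(attrs.get("IFA_LABEL"), attrs.get("IFA_ADDRESS"),
                  addr["prefixlen"])
    # Expected, per the RTM_NEWADDR records above:
    #   tap85c8d446-a1 192.168.0.2 24
    #   tap85c8d446-a1 169.254.169.254 32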
Dec  3 18:48:14 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:48:14.634 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85c8d446-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:48:14 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:48:14.638 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap85c8d446-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:48:14 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:48:14.638 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 18:48:14 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:48:14.639 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap85c8d446-a0, col_values=(('external_ids', {'iface-id': '4db8340d-afa3-4a82-bd51-bca0a752f53f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:48:14 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:48:14.639 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 18:48:14 compute-0 nova_compute[348325]: 2025-12-03 18:48:14.639 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:48:14 compute-0 ceph-mgr[193091]: client.0 ms_handle_reset on v2:192.168.122.100:6800/817799961
Dec  3 18:48:14 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:48:14.836 286999 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5a:63:53', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '8e:79:bd:f4:48:1d'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 18:48:14 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:48:14.837 286999 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  3 18:48:14 compute-0 rsyslogd[188590]: message too long (8192) with configured size 8096, begin of message is: 2025-12-03 18:48:14.595 348329 DEBUG nova.virt.libvirt.vif [None req-b614503a-61 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
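This warning refers to the nova.virt.libvirt.vif record at 18:48:14.595 above: its embedded base64 user_data pushed the message past rsyslog's configured 8096-byte cap, so the record was split and truncated at capture time. If the full record is needed, the limit can be raised in rsyslog.conf (64k is an illustrative value, not a recommendation from this log):

    # /etc/rsyslog.conf -- must appear before any input module is loaded
    global(maxMessageSize="64k")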
Dec  3 18:48:14 compute-0 nova_compute[348325]: 2025-12-03 18:48:14.838 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:48:14 compute-0 nova_compute[348325]: 2025-12-03 18:48:14.868 348329 DEBUG nova.compute.manager [req-3de57b4b-f829-4ef9-a4fb-f2d20aa10095 req-b44bf8f4-4074-4321-85ef-ccc234814d2d 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Received event network-vif-unplugged-03bf6208-f40b-4534-a297-122588172fa5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 18:48:14 compute-0 nova_compute[348325]: 2025-12-03 18:48:14.869 348329 DEBUG oslo_concurrency.lockutils [req-3de57b4b-f829-4ef9-a4fb-f2d20aa10095 req-b44bf8f4-4074-4321-85ef-ccc234814d2d 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "df72d527-943e-4e8c-b62a-63afa5f18261-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:48:14 compute-0 nova_compute[348325]: 2025-12-03 18:48:14.869 348329 DEBUG oslo_concurrency.lockutils [req-3de57b4b-f829-4ef9-a4fb-f2d20aa10095 req-b44bf8f4-4074-4321-85ef-ccc234814d2d 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "df72d527-943e-4e8c-b62a-63afa5f18261-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:48:14 compute-0 nova_compute[348325]: 2025-12-03 18:48:14.869 348329 DEBUG oslo_concurrency.lockutils [req-3de57b4b-f829-4ef9-a4fb-f2d20aa10095 req-b44bf8f4-4074-4321-85ef-ccc234814d2d 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "df72d527-943e-4e8c-b62a-63afa5f18261-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:48:14 compute-0 nova_compute[348325]: 2025-12-03 18:48:14.869 348329 DEBUG nova.compute.manager [req-3de57b4b-f829-4ef9-a4fb-f2d20aa10095 req-b44bf8f4-4074-4321-85ef-ccc234814d2d 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] No waiting events found dispatching network-vif-unplugged-03bf6208-f40b-4534-a297-122588172fa5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  3 18:48:14 compute-0 nova_compute[348325]: 2025-12-03 18:48:14.870 348329 DEBUG nova.compute.manager [req-3de57b4b-f829-4ef9-a4fb-f2d20aa10095 req-b44bf8f4-4074-4321-85ef-ccc234814d2d 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Received event network-vif-unplugged-03bf6208-f40b-4534-a297-122588172fa5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
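The network-vif-unplugged lines show Neutron notifying Nova through the os-server-external-events API; with no waiter registered and the instance already in task_state deleting, the event is simply recorded. For reference, a payload of the shape Neutron's Nova notifier POSTs to /v2.1/os-server-external-events (endpoint and auth handling omitted):

    # Illustrative event body matching the dispatch logged above.
    event_body = {
        "events": [{
            "server_uuid": "df72d527-943e-4e8c-b62a-63afa5f18261",
            "name": "network-vif-unplugged",
            "tag": "03bf6208-f40b-4534-a297-122588172fa5",
            "status": "completed",
        }]
    }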
Dec  3 18:48:15 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1469: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 B/s wr, 1 op/s
Dec  3 18:48:15 compute-0 podman[427524]: 2025-12-03 18:48:15.922066435 +0000 UTC m=+0.081037625 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Dec  3 18:48:15 compute-0 podman[427523]: 2025-12-03 18:48:15.972194392 +0000 UTC m=+0.133041908 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:48:16 compute-0 nova_compute[348325]: 2025-12-03 18:48:15.999 348329 INFO nova.virt.libvirt.driver [None req-b614503a-617f-477e-82e6-dfcb45a19181 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Deleting instance files /var/lib/nova/instances/df72d527-943e-4e8c-b62a-63afa5f18261_del#033[00m
Dec  3 18:48:16 compute-0 nova_compute[348325]: 2025-12-03 18:48:16.000 348329 INFO nova.virt.libvirt.driver [None req-b614503a-617f-477e-82e6-dfcb45a19181 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Deletion of /var/lib/nova/instances/df72d527-943e-4e8c-b62a-63afa5f18261_del complete#033[00m
Dec  3 18:48:16 compute-0 nova_compute[348325]: 2025-12-03 18:48:16.100 348329 DEBUG nova.virt.libvirt.host [None req-b614503a-617f-477e-82e6-dfcb45a19181 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754#033[00m
Dec  3 18:48:16 compute-0 nova_compute[348325]: 2025-12-03 18:48:16.100 348329 INFO nova.virt.libvirt.host [None req-b614503a-617f-477e-82e6-dfcb45a19181 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] UEFI support detected#033[00m
Dec  3 18:48:16 compute-0 nova_compute[348325]: 2025-12-03 18:48:16.102 348329 INFO nova.compute.manager [None req-b614503a-617f-477e-82e6-dfcb45a19181 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Took 1.77 seconds to destroy the instance on the hypervisor.#033[00m
Dec  3 18:48:16 compute-0 nova_compute[348325]: 2025-12-03 18:48:16.103 348329 DEBUG oslo.service.loopingcall [None req-b614503a-617f-477e-82e6-dfcb45a19181 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  3 18:48:16 compute-0 nova_compute[348325]: 2025-12-03 18:48:16.103 348329 DEBUG nova.compute.manager [-] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  3 18:48:16 compute-0 nova_compute[348325]: 2025-12-03 18:48:16.104 348329 DEBUG nova.network.neutron [-] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  3 18:48:16 compute-0 nova_compute[348325]: 2025-12-03 18:48:16.628 348329 DEBUG nova.network.neutron [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Updating instance_info_cache with network_info: [{"id": "03bf6208-f40b-4534-a297-122588172fa5", "address": "fa:16:3e:41:ba:29", "network": {"id": "85c8d446-ad7f-4d1b-a311-89b0b07e8aad", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.170", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2770200bdb2436c90142fa2e5ddcd47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03bf6208-f4", "ovs_interfaceid": "03bf6208-f40b-4534-a297-122588172fa5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 18:48:16 compute-0 nova_compute[348325]: 2025-12-03 18:48:16.652 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Releasing lock "refresh_cache-df72d527-943e-4e8c-b62a-63afa5f18261" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 18:48:16 compute-0 nova_compute[348325]: 2025-12-03 18:48:16.652 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  3 18:48:16 compute-0 nova_compute[348325]: 2025-12-03 18:48:16.652 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:48:16 compute-0 nova_compute[348325]: 2025-12-03 18:48:16.652 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
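"CONF.reclaim_instance_interval <= 0" means deferred (soft) delete is disabled on this node, so the periodic reclaim task is a no-op and deletes take effect immediately, as seen above. Enabling a grace period instead is a one-line nova.conf change (3600 is an example value):

    [DEFAULT]
    # >0 turns deletes into soft-deletes reclaimed after this many seconds
    reclaim_instance_interval = 3600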
Dec  3 18:48:16 compute-0 nova_compute[348325]: 2025-12-03 18:48:16.981 348329 DEBUG nova.compute.manager [req-92618825-71fb-439a-84dc-ee6d5ee9d7b9 req-a939f5ba-03d3-4d97-a65c-c8375780be9e 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Received event network-vif-plugged-03bf6208-f40b-4534-a297-122588172fa5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 18:48:16 compute-0 nova_compute[348325]: 2025-12-03 18:48:16.982 348329 DEBUG oslo_concurrency.lockutils [req-92618825-71fb-439a-84dc-ee6d5ee9d7b9 req-a939f5ba-03d3-4d97-a65c-c8375780be9e 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "df72d527-943e-4e8c-b62a-63afa5f18261-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:48:16 compute-0 nova_compute[348325]: 2025-12-03 18:48:16.982 348329 DEBUG oslo_concurrency.lockutils [req-92618825-71fb-439a-84dc-ee6d5ee9d7b9 req-a939f5ba-03d3-4d97-a65c-c8375780be9e 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "df72d527-943e-4e8c-b62a-63afa5f18261-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:48:16 compute-0 nova_compute[348325]: 2025-12-03 18:48:16.982 348329 DEBUG oslo_concurrency.lockutils [req-92618825-71fb-439a-84dc-ee6d5ee9d7b9 req-a939f5ba-03d3-4d97-a65c-c8375780be9e 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "df72d527-943e-4e8c-b62a-63afa5f18261-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:48:16 compute-0 nova_compute[348325]: 2025-12-03 18:48:16.982 348329 DEBUG nova.compute.manager [req-92618825-71fb-439a-84dc-ee6d5ee9d7b9 req-a939f5ba-03d3-4d97-a65c-c8375780be9e 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] No waiting events found dispatching network-vif-plugged-03bf6208-f40b-4534-a297-122588172fa5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  3 18:48:16 compute-0 nova_compute[348325]: 2025-12-03 18:48:16.982 348329 WARNING nova.compute.manager [req-92618825-71fb-439a-84dc-ee6d5ee9d7b9 req-a939f5ba-03d3-4d97-a65c-c8375780be9e 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Received unexpected event network-vif-plugged-03bf6208-f40b-4534-a297-122588172fa5 for instance with vm_state active and task_state deleting.#033[00m
Dec  3 18:48:16 compute-0 nova_compute[348325]: 2025-12-03 18:48:16.982 348329 DEBUG nova.compute.manager [req-92618825-71fb-439a-84dc-ee6d5ee9d7b9 req-a939f5ba-03d3-4d97-a65c-c8375780be9e 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Received event network-changed-03bf6208-f40b-4534-a297-122588172fa5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 18:48:16 compute-0 nova_compute[348325]: 2025-12-03 18:48:16.983 348329 DEBUG nova.compute.manager [req-92618825-71fb-439a-84dc-ee6d5ee9d7b9 req-a939f5ba-03d3-4d97-a65c-c8375780be9e 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Refreshing instance network info cache due to event network-changed-03bf6208-f40b-4534-a297-122588172fa5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  3 18:48:16 compute-0 nova_compute[348325]: 2025-12-03 18:48:16.983 348329 DEBUG oslo_concurrency.lockutils [req-92618825-71fb-439a-84dc-ee6d5ee9d7b9 req-a939f5ba-03d3-4d97-a65c-c8375780be9e 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "refresh_cache-df72d527-943e-4e8c-b62a-63afa5f18261" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 18:48:16 compute-0 nova_compute[348325]: 2025-12-03 18:48:16.983 348329 DEBUG oslo_concurrency.lockutils [req-92618825-71fb-439a-84dc-ee6d5ee9d7b9 req-a939f5ba-03d3-4d97-a65c-c8375780be9e 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquired lock "refresh_cache-df72d527-943e-4e8c-b62a-63afa5f18261" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 18:48:16 compute-0 nova_compute[348325]: 2025-12-03 18:48:16.983 348329 DEBUG nova.network.neutron [req-92618825-71fb-439a-84dc-ee6d5ee9d7b9 req-a939f5ba-03d3-4d97-a65c-c8375780be9e 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Refreshing network info cache for port 03bf6208-f40b-4534-a297-122588172fa5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  3 18:48:17 compute-0 nova_compute[348325]: 2025-12-03 18:48:17.205 348329 INFO nova.network.neutron [req-92618825-71fb-439a-84dc-ee6d5ee9d7b9 req-a939f5ba-03d3-4d97-a65c-c8375780be9e 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Port 03bf6208-f40b-4534-a297-122588172fa5 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Dec  3 18:48:17 compute-0 nova_compute[348325]: 2025-12-03 18:48:17.205 348329 DEBUG nova.network.neutron [req-92618825-71fb-439a-84dc-ee6d5ee9d7b9 req-a939f5ba-03d3-4d97-a65c-c8375780be9e 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 18:48:17 compute-0 nova_compute[348325]: 2025-12-03 18:48:17.240 348329 DEBUG oslo_concurrency.lockutils [req-92618825-71fb-439a-84dc-ee6d5ee9d7b9 req-a939f5ba-03d3-4d97-a65c-c8375780be9e 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Releasing lock "refresh_cache-df72d527-943e-4e8c-b62a-63afa5f18261" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 18:48:17 compute-0 nova_compute[348325]: 2025-12-03 18:48:17.325 348329 DEBUG nova.network.neutron [-] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 18:48:17 compute-0 nova_compute[348325]: 2025-12-03 18:48:17.341 348329 INFO nova.compute.manager [-] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Took 1.24 seconds to deallocate network for instance.#033[00m
Dec  3 18:48:17 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1470: 321 pgs: 321 active+clean; 245 MiB data, 351 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 767 B/s wr, 19 op/s
Dec  3 18:48:17 compute-0 nova_compute[348325]: 2025-12-03 18:48:17.389 348329 DEBUG oslo_concurrency.lockutils [None req-b614503a-617f-477e-82e6-dfcb45a19181 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:48:17 compute-0 nova_compute[348325]: 2025-12-03 18:48:17.390 348329 DEBUG oslo_concurrency.lockutils [None req-b614503a-617f-477e-82e6-dfcb45a19181 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:48:17 compute-0 nova_compute[348325]: 2025-12-03 18:48:17.520 348329 DEBUG oslo_concurrency.processutils [None req-b614503a-617f-477e-82e6-dfcb45a19181 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:48:17 compute-0 nova_compute[348325]: 2025-12-03 18:48:17.938 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:48:18 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:48:18 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3157097760' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:48:18 compute-0 nova_compute[348325]: 2025-12-03 18:48:18.306 348329 DEBUG oslo_concurrency.processutils [None req-b614503a-617f-477e-82e6-dfcb45a19181 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.786s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
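Nova's RBD-backed disk inventory comes from shelling out to ceph df, exactly as logged above (a 0.786s round trip here). A standalone equivalent of that probe; the command line is copied from the log, and the key layout follows the stats/pools structure of `ceph df --format=json`:

    import json
    import subprocess

    out = subprocess.check_output([
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])
    stats = json.loads(out)["stats"]
    # Cluster-wide totals used to size the DISK_GB inventory reported
    # to placement in the next log lines.
    print("total bytes:", stats["total_bytes"],
          "avail bytes:", stats["total_avail_bytes"])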
Dec  3 18:48:18 compute-0 nova_compute[348325]: 2025-12-03 18:48:18.318 348329 DEBUG nova.compute.provider_tree [None req-b614503a-617f-477e-82e6-dfcb45a19181 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  3 18:48:18 compute-0 nova_compute[348325]: 2025-12-03 18:48:18.338 348329 DEBUG nova.scheduler.client.report [None req-b614503a-617f-477e-82e6-dfcb45a19181 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
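Placement turns each inventory entry into schedulable capacity as (total - reserved) × allocation_ratio, so the dict above advertises 32 VCPUs from 8 cores but only 52.2 GB of the 59 GB disk. A worked check using just the logged numbers:

    # Effective placement capacity per resource class, from the inventory above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2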
Dec  3 18:48:18 compute-0 nova_compute[348325]: 2025-12-03 18:48:18.368 348329 DEBUG oslo_concurrency.lockutils [None req-b614503a-617f-477e-82e6-dfcb45a19181 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.977s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:48:18 compute-0 nova_compute[348325]: 2025-12-03 18:48:18.392 348329 INFO nova.scheduler.client.report [None req-b614503a-617f-477e-82e6-dfcb45a19181 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Deleted allocations for instance df72d527-943e-4e8c-b62a-63afa5f18261
Dec  3 18:48:18 compute-0 nova_compute[348325]: 2025-12-03 18:48:18.485 348329 DEBUG oslo_concurrency.lockutils [None req-b614503a-617f-477e-82e6-dfcb45a19181 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "df72d527-943e-4e8c-b62a-63afa5f18261" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.152s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:48:18 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:48:18 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:48:18.844 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1ac9fd0d-196b-4ea8-9a9a-8aa831092805, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  3 18:48:19 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1471: 321 pgs: 321 active+clean; 218 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 767 B/s wr, 29 op/s
Dec  3 18:48:19 compute-0 nova_compute[348325]: 2025-12-03 18:48:19.602 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:48:20 compute-0 nova_compute[348325]: 2025-12-03 18:48:20.643 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:48:21 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1472: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec  3 18:48:21 compute-0 nova_compute[348325]: 2025-12-03 18:48:21.488 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
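Both periodic-task lines here are produced by oslo.service's PeriodicTasks machinery: methods decorated with @periodic_task are discovered on the manager class and invoked on their configured spacing, each run emitting a "Running periodic task ..." DEBUG line. A minimal sketch of the pattern; the 60 s spacing is illustrative, not read from this deployment's config:

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(cfg.CONF)

        @periodic_task.periodic_task(spacing=60)  # seconds; illustrative value
        def update_available_resource(self, context):
            pass                                  # the audit would go here

    Manager().run_periodic_tasks(context=None)    # logs 'Running periodic task ...'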
Dec  3 18:48:21 compute-0 nova_compute[348325]: 2025-12-03 18:48:21.528 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:48:21 compute-0 nova_compute[348325]: 2025-12-03 18:48:21.529 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:48:21 compute-0 nova_compute[348325]: 2025-12-03 18:48:21.529 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:48:21 compute-0 nova_compute[348325]: 2025-12-03 18:48:21.530 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  3 18:48:21 compute-0 nova_compute[348325]: 2025-12-03 18:48:21.530 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 18:48:21 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:48:21 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3307720647' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:48:22 compute-0 nova_compute[348325]: 2025-12-03 18:48:22.004 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 18:48:22 compute-0 nova_compute[348325]: 2025-12-03 18:48:22.228 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:48:22 compute-0 nova_compute[348325]: 2025-12-03 18:48:22.229 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:48:22 compute-0 nova_compute[348325]: 2025-12-03 18:48:22.229 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:48:22 compute-0 nova_compute[348325]: 2025-12-03 18:48:22.234 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:48:22 compute-0 nova_compute[348325]: 2025-12-03 18:48:22.234 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:48:22 compute-0 nova_compute[348325]: 2025-12-03 18:48:22.235 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:48:22 compute-0 nova_compute[348325]: 2025-12-03 18:48:22.240 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:48:22 compute-0 nova_compute[348325]: 2025-12-03 18:48:22.241 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:48:22 compute-0 nova_compute[348325]: 2025-12-03 18:48:22.241 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:48:22 compute-0 nova_compute[348325]: 2025-12-03 18:48:22.605 348329 WARNING nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  3 18:48:22 compute-0 nova_compute[348325]: 2025-12-03 18:48:22.606 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3431MB free_disk=59.88887023925781GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
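The hypervisor view line embeds the whole PCI inventory as a JSON list, which is hard to read raw; parsing it back shows five Intel (8086) chipset functions and six virtio (1af4) devices. A sketch over two excerpted entries (the same code applies to the full eleven-element list above):

    import collections, json

    # Two entries excerpted from the pci_devices list in the log line above.
    blob = ('[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", '
            '"product_id": "7020", "vendor_id": "8086", "numa_node": null, '
            '"label": "label_8086_7020", "dev_type": "type-PCI"}, '
            '{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", '
            '"product_id": "1050", "vendor_id": "1af4", "numa_node": null, '
            '"label": "label_1af4_1050", "dev_type": "type-PCI"}]')
    devices = json.loads(blob)
    print(collections.Counter(d['vendor_id'] for d in devices))
    # on the full list: Counter({'1af4': 6, '8086': 5})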
Dec  3 18:48:22 compute-0 nova_compute[348325]: 2025-12-03 18:48:22.606 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:48:22 compute-0 nova_compute[348325]: 2025-12-03 18:48:22.607 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:48:22 compute-0 nova_compute[348325]: 2025-12-03 18:48:22.714 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance 1ca1fbdb-089c-4544-821e-0542089b8424 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  3 18:48:22 compute-0 nova_compute[348325]: 2025-12-03 18:48:22.715 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance de3992c5-c1ad-4da3-9276-954d6365c3c9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  3 18:48:22 compute-0 nova_compute[348325]: 2025-12-03 18:48:22.715 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance a6019a9c-c065-49d8-bef3-219bd2c79d8c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  3 18:48:22 compute-0 nova_compute[348325]: 2025-12-03 18:48:22.715 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  3 18:48:22 compute-0 nova_compute[348325]: 2025-12-03 18:48:22.716 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=59GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
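The final view follows directly from the three placement allocations listed just above plus the 512 MB reserved host memory from the inventory; nova counts reserved memory as used. A worked check:

    # Reconstruct 'used_ram=2048MB used_disk=6GB used_vcpus=3' from the
    # three allocations above; reserved host memory counts as used.
    allocations = [{'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}] * 3
    reserved_mb = 512
    print(sum(a['MEMORY_MB'] for a in allocations) + reserved_mb,  # 2048
          sum(a['DISK_GB'] for a in allocations),                  # 6
          sum(a['VCPU'] for a in allocations))                     # 3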
Dec  3 18:48:22 compute-0 nova_compute[348325]: 2025-12-03 18:48:22.799 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 18:48:22 compute-0 nova_compute[348325]: 2025-12-03 18:48:22.942 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:48:23 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:48:23 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1415676332' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:48:23 compute-0 nova_compute[348325]: 2025-12-03 18:48:23.271 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 18:48:23 compute-0 nova_compute[348325]: 2025-12-03 18:48:23.279 348329 DEBUG nova.compute.provider_tree [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  3 18:48:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:48:23.343 286999 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:48:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:48:23.344 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:48:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:48:23.344 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:48:23 compute-0 nova_compute[348325]: 2025-12-03 18:48:23.345 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  3 18:48:23 compute-0 nova_compute[348325]: 2025-12-03 18:48:23.348 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  3 18:48:23 compute-0 nova_compute[348325]: 2025-12-03 18:48:23.348 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.742s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:48:23 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1473: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec  3 18:48:23 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:48:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 18:48:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:48:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 18:48:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:48:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0016571738458032168 of space, bias 1.0, pg target 0.49715215374096505 quantized to 32 (current 32)
Dec  3 18:48:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:48:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:48:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:48:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:48:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:48:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec  3 18:48:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:48:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 18:48:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:48:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:48:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:48:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 18:48:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:48:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 18:48:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:48:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:48:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:48:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
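Each pg_autoscaler pair above computes a raw PG target as capacity_ratio × bias × (target PGs per OSD × OSD count); a ×300 multiplier (the default mon_target_pg_per_osd=100 across the three OSDs behind this 60 GiB cluster) reproduces every logged figure, including the ×4 bias on the metadata pools. The raw value is then quantized to a power of two and clamped by pg_num_min and a change threshold before pg_num is actually adjusted. A check of the arithmetic; the 300 multiplier is inferred from the log, not read from config:

    # Reproduce the pg_autoscaler 'pg target' values from the lines above,
    # assuming mon_target_pg_per_osd=100 and 3 OSDs (a x300 multiplier).
    pools = {
        '.mgr':               (7.185749983720779e-06, 1.0),
        'vms':                (0.0016571738458032168, 1.0),
        'images':             (0.00025334537995702286, 1.0),
        'cephfs.cephfs.meta': (5.087256625643029e-07, 4.0),
    }
    for name, (capacity_ratio, bias) in pools.items():
        print(name, capacity_ratio * bias * 100 * 3)
    # .mgr 0.0021557..., vms 0.49715..., images 0.07600..., meta 0.00061...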
Dec  3 18:48:24 compute-0 nova_compute[348325]: 2025-12-03 18:48:24.606 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:48:25 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1474: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec  3 18:48:27 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1475: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 1.7 KiB/s wr, 38 op/s
Dec  3 18:48:27 compute-0 nova_compute[348325]: 2025-12-03 18:48:27.944 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:48:28 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:48:29 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1476: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1023 B/s wr, 20 op/s
Dec  3 18:48:29 compute-0 nova_compute[348325]: 2025-12-03 18:48:29.580 348329 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764787694.577782, df72d527-943e-4e8c-b62a-63afa5f18261 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  3 18:48:29 compute-0 nova_compute[348325]: 2025-12-03 18:48:29.581 348329 INFO nova.compute.manager [-] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] VM Stopped (Lifecycle Event)
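The LifecycleEvent carries libvirt's own event timestamp, which decodes to 18:48:14 UTC, roughly fifteen seconds before this log line; that gap is consistent with the libvirt driver's delayed-event handling, which holds Stopped events briefly so transient stops (e.g. during a reboot) can be collapsed. Decoding the timestamp from the line above:

    from datetime import datetime, timezone

    ts = 1764787694.577782                       # from the LifecycleEvent line
    print(datetime.fromtimestamp(ts, tz=timezone.utc))
    # 2025-12-03 18:48:14.577782+00:00, about 15 s before the 18:48:29 entry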
Dec  3 18:48:29 compute-0 nova_compute[348325]: 2025-12-03 18:48:29.609 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:48:29 compute-0 nova_compute[348325]: 2025-12-03 18:48:29.619 348329 DEBUG nova.compute.manager [None req-b5b24f61-c9c2-4e4d-97dd-96884389f865 - - - - - -] [instance: df72d527-943e-4e8c-b62a-63afa5f18261] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  3 18:48:29 compute-0 podman[158200]: time="2025-12-03T18:48:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:48:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:48:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 18:48:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:48:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8643 "" "Go-http-client/1.1"
Dec  3 18:48:30 compute-0 podman[427639]: 2025-12-03 18:48:30.919940964 +0000 UTC m=+0.085309490 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  3 18:48:30 compute-0 podman[427638]: 2025-12-03 18:48:30.920828845 +0000 UTC m=+0.080200764 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 18:48:30 compute-0 podman[427640]: 2025-12-03 18:48:30.937575095 +0000 UTC m=+0.099440476 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, vcs-type=git, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_id=edpm, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9-minimal)
Dec  3 18:48:31 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1477: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 5.6 KiB/s rd, 1023 B/s wr, 10 op/s
Dec  3 18:48:31 compute-0 openstack_network_exporter[365222]: ERROR   18:48:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:48:31 compute-0 openstack_network_exporter[365222]: ERROR   18:48:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:48:31 compute-0 openstack_network_exporter[365222]: ERROR   18:48:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:48:31 compute-0 openstack_network_exporter[365222]: ERROR   18:48:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:48:31 compute-0 openstack_network_exporter[365222]: ERROR   18:48:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
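These ERROR lines are expected on a node like this one: the exporter drives ovs-appctl-style commands through daemon control sockets, and neither ovn-northd nor a userspace (dpif-netdev) datapath runs here. A sketch of the discovery step the messages imply, assuming the conventional <name>.<pid>.ctl layout under the OVS/OVN rundirs (the glob patterns are an assumption, not taken from the exporter's source):

    import glob

    for name, pattern in {
        'ovsdb-server': '/run/openvswitch/ovsdb-server.*.ctl',
        'ovn-northd':   '/run/ovn/ovn-northd.*.ctl',
    }.items():
        hits = glob.glob(pattern)
        print(name, hits or 'no control socket files found')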
Dec  3 18:48:32 compute-0 podman[427701]: 2025-12-03 18:48:32.917697201 +0000 UTC m=+0.077589531 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  3 18:48:32 compute-0 podman[427700]: 2025-12-03 18:48:32.918161902 +0000 UTC m=+0.084559351 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., release-0.7.12=, managed_by=edpm_ansible, architecture=x86_64, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, maintainer=Red Hat, Inc., release=1214.1726694543, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, config_id=edpm, name=ubi9, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec  3 18:48:32 compute-0 podman[427702]: 2025-12-03 18:48:32.923149584 +0000 UTC m=+0.078219856 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
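The three health_status events above come from podman's per-container healthcheck timers: the 'healthcheck' test listed in each config_data blob is run inside the container and the verdict is journaled along with a failing-streak counter. The same check can be driven by hand; a sketch using one container name from the log:

    import json, subprocess

    # Trigger the configured healthcheck once, then read the recorded state.
    subprocess.run(['podman', 'healthcheck', 'run', 'ceilometer_agent_ipmi'],
                   check=True)
    state = subprocess.run(
        ['podman', 'inspect', '--format', '{{json .State.Health}}',
         'ceilometer_agent_ipmi'],
        check=True, capture_output=True, text=True).stdout
    print(json.loads(state)['Status'])   # 'healthy' on success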
Dec  3 18:48:32 compute-0 nova_compute[348325]: 2025-12-03 18:48:32.945 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:48:33 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1478: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:48:33 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:48:34 compute-0 nova_compute[348325]: 2025-12-03 18:48:34.612 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:48:35 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1479: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:48:37 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1480: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:48:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 18:48:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/226462012' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 18:48:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 18:48:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/226462012' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 18:48:37 compute-0 nova_compute[348325]: 2025-12-03 18:48:37.949 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:48:38 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:48:39 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1481: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:48:39 compute-0 nova_compute[348325]: 2025-12-03 18:48:39.615 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:48:41 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1482: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:48:41 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:48:41 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:48:41 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 18:48:41 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:48:41 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 18:48:42 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:48:42 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev aac17ddf-50e4-4f38-a4ef-de4015d7f5f8 does not exist
Dec  3 18:48:42 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev d70906d8-413a-4f83-9e0e-4539aa1824fa does not exist
Dec  3 18:48:42 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev b051db37-c85f-49d4-8b24-9a3f5bd0468a does not exist
Dec  3 18:48:42 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 18:48:42 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 18:48:42 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 18:48:42 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:48:42 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:48:42 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:48:42 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:48:42 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:48:42 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:48:42 compute-0 nova_compute[348325]: 2025-12-03 18:48:42.951 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:48:43 compute-0 podman[428025]: 2025-12-03 18:48:43.127065767 +0000 UTC m=+0.051870941 container create 70035d59f6ee27fbdcf161894f63c98fb9b5169aa5dcfda08c50d6d5ba95ebea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_solomon, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:48:43 compute-0 podman[428025]: 2025-12-03 18:48:43.105006798 +0000 UTC m=+0.029811992 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:48:43 compute-0 systemd[1]: Started libpod-conmon-70035d59f6ee27fbdcf161894f63c98fb9b5169aa5dcfda08c50d6d5ba95ebea.scope.
Dec  3 18:48:43 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:48:43 compute-0 podman[428025]: 2025-12-03 18:48:43.299881188 +0000 UTC m=+0.224686402 container init 70035d59f6ee27fbdcf161894f63c98fb9b5169aa5dcfda08c50d6d5ba95ebea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_solomon, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec  3 18:48:43 compute-0 podman[428025]: 2025-12-03 18:48:43.30852762 +0000 UTC m=+0.233332794 container start 70035d59f6ee27fbdcf161894f63c98fb9b5169aa5dcfda08c50d6d5ba95ebea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_solomon, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True)
Dec  3 18:48:43 compute-0 podman[428025]: 2025-12-03 18:48:43.312537118 +0000 UTC m=+0.237342332 container attach 70035d59f6ee27fbdcf161894f63c98fb9b5169aa5dcfda08c50d6d5ba95ebea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:48:43 compute-0 compassionate_solomon[428041]: 167 167
Dec  3 18:48:43 compute-0 systemd[1]: libpod-70035d59f6ee27fbdcf161894f63c98fb9b5169aa5dcfda08c50d6d5ba95ebea.scope: Deactivated successfully.
Dec  3 18:48:43 compute-0 podman[428025]: 2025-12-03 18:48:43.31714257 +0000 UTC m=+0.241947744 container died 70035d59f6ee27fbdcf161894f63c98fb9b5169aa5dcfda08c50d6d5ba95ebea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_solomon, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:48:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-157c66865a2ec9024cc20ed8f3b06934c7d58f2029b1c7ee041f6caef3155c7c-merged.mount: Deactivated successfully.
Dec  3 18:48:43 compute-0 podman[428025]: 2025-12-03 18:48:43.372105506 +0000 UTC m=+0.296910680 container remove 70035d59f6ee27fbdcf161894f63c98fb9b5169aa5dcfda08c50d6d5ba95ebea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:48:43 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1483: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:48:43 compute-0 systemd[1]: libpod-conmon-70035d59f6ee27fbdcf161894f63c98fb9b5169aa5dcfda08c50d6d5ba95ebea.scope: Deactivated successfully.
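The create → init → start → attach → died → remove burst above, with the container printing "167 167", is cephadm probing the Ceph image via a throwaway, auto-named container, evidently to learn the ceph uid/gid pair. Roughly the same by hand; only the image digest comes from the log, and the stat target is an assumption about what cephadm checks:

    import subprocess

    IMAGE = ('quay.io/ceph/ceph@sha256:'
             '1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0')
    # --rm reproduces the immediate 'container remove' event from the log
    out = subprocess.run(
        ['podman', 'run', '--rm', '--entrypoint', 'stat', IMAGE,
         '-c', '%u %g', '/var/lib/ceph'],
        check=True, capture_output=True, text=True).stdout
    print(out.strip())   # expected '167 167', the ceph uid/gid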
Dec  3 18:48:43 compute-0 podman[428063]: 2025-12-03 18:48:43.59450112 +0000 UTC m=+0.047683388 container create e044f81d5f60417814b3b52d0887e6ed7a0689c71026dcfe1a1205ad60736e06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_greider, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:48:43 compute-0 systemd[1]: Started libpod-conmon-e044f81d5f60417814b3b52d0887e6ed7a0689c71026dcfe1a1205ad60736e06.scope.
Dec  3 18:48:43 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:48:43 compute-0 podman[428063]: 2025-12-03 18:48:43.573638239 +0000 UTC m=+0.026820537 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:48:43 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:48:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cfb6db62d1f33470892b70d76c2c4df903c76cde60dfa1f54874d96dafc2ea9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:48:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cfb6db62d1f33470892b70d76c2c4df903c76cde60dfa1f54874d96dafc2ea9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:48:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cfb6db62d1f33470892b70d76c2c4df903c76cde60dfa1f54874d96dafc2ea9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:48:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cfb6db62d1f33470892b70d76c2c4df903c76cde60dfa1f54874d96dafc2ea9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:48:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cfb6db62d1f33470892b70d76c2c4df903c76cde60dfa1f54874d96dafc2ea9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 18:48:43 compute-0 podman[428063]: 2025-12-03 18:48:43.691964597 +0000 UTC m=+0.145146905 container init e044f81d5f60417814b3b52d0887e6ed7a0689c71026dcfe1a1205ad60736e06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Dec  3 18:48:43 compute-0 podman[428063]: 2025-12-03 18:48:43.705669622 +0000 UTC m=+0.158851890 container start e044f81d5f60417814b3b52d0887e6ed7a0689c71026dcfe1a1205ad60736e06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec  3 18:48:43 compute-0 podman[428063]: 2025-12-03 18:48:43.709790932 +0000 UTC m=+0.162973190 container attach e044f81d5f60417814b3b52d0887e6ed7a0689c71026dcfe1a1205ad60736e06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_greider, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:48:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:48:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:48:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:48:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:48:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:48:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:48:44 compute-0 nova_compute[348325]: 2025-12-03 18:48:44.618 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:48:44 compute-0 podman[428101]: 2025-12-03 18:48:44.74599923 +0000 UTC m=+0.066691004 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 18:48:44 compute-0 eloquent_greider[428079]: --> passed data devices: 0 physical, 3 LVM
Dec  3 18:48:44 compute-0 eloquent_greider[428079]: --> relative data size: 1.0
Dec  3 18:48:44 compute-0 eloquent_greider[428079]: --> All data devices are unavailable
Dec  3 18:48:44 compute-0 systemd[1]: libpod-e044f81d5f60417814b3b52d0887e6ed7a0689c71026dcfe1a1205ad60736e06.scope: Deactivated successfully.
Dec  3 18:48:44 compute-0 podman[428063]: 2025-12-03 18:48:44.818039664 +0000 UTC m=+1.271221932 container died e044f81d5f60417814b3b52d0887e6ed7a0689c71026dcfe1a1205ad60736e06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:48:44 compute-0 systemd[1]: libpod-e044f81d5f60417814b3b52d0887e6ed7a0689c71026dcfe1a1205ad60736e06.scope: Consumed 1.030s CPU time.
Dec  3 18:48:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-4cfb6db62d1f33470892b70d76c2c4df903c76cde60dfa1f54874d96dafc2ea9-merged.mount: Deactivated successfully.
Dec  3 18:48:44 compute-0 podman[428063]: 2025-12-03 18:48:44.886642883 +0000 UTC m=+1.339825151 container remove e044f81d5f60417814b3b52d0887e6ed7a0689c71026dcfe1a1205ad60736e06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_greider, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:48:44 compute-0 systemd[1]: libpod-conmon-e044f81d5f60417814b3b52d0887e6ed7a0689c71026dcfe1a1205ad60736e06.scope: Deactivated successfully.
Dec  3 18:48:45 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1484: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:48:45 compute-0 podman[428281]: 2025-12-03 18:48:45.624734162 +0000 UTC m=+0.048562140 container create 39fd61b73071aa99bdeca36e8c111a0ce1a480f80fd52e5579b67897806ea427 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_black, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True)
Dec  3 18:48:45 compute-0 systemd[1]: Started libpod-conmon-39fd61b73071aa99bdeca36e8c111a0ce1a480f80fd52e5579b67897806ea427.scope.
Dec  3 18:48:45 compute-0 podman[428281]: 2025-12-03 18:48:45.604825695 +0000 UTC m=+0.028653693 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:48:45 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:48:45 compute-0 podman[428281]: 2025-12-03 18:48:45.731852525 +0000 UTC m=+0.155680593 container init 39fd61b73071aa99bdeca36e8c111a0ce1a480f80fd52e5579b67897806ea427 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_black, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec  3 18:48:45 compute-0 podman[428281]: 2025-12-03 18:48:45.743838538 +0000 UTC m=+0.167666516 container start 39fd61b73071aa99bdeca36e8c111a0ce1a480f80fd52e5579b67897806ea427 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  3 18:48:45 compute-0 podman[428281]: 2025-12-03 18:48:45.749008755 +0000 UTC m=+0.172836773 container attach 39fd61b73071aa99bdeca36e8c111a0ce1a480f80fd52e5579b67897806ea427 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_black, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:48:45 compute-0 happy_black[428297]: 167 167
Dec  3 18:48:45 compute-0 systemd[1]: libpod-39fd61b73071aa99bdeca36e8c111a0ce1a480f80fd52e5579b67897806ea427.scope: Deactivated successfully.
Dec  3 18:48:45 compute-0 podman[428281]: 2025-12-03 18:48:45.752966412 +0000 UTC m=+0.176794400 container died 39fd61b73071aa99bdeca36e8c111a0ce1a480f80fd52e5579b67897806ea427 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  3 18:48:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-38eb8dc3342791012cf4cead8fda9f49ccba3567d8ca7168ab19b4e7a1687828-merged.mount: Deactivated successfully.
Dec  3 18:48:45 compute-0 podman[428281]: 2025-12-03 18:48:45.800246199 +0000 UTC m=+0.224074177 container remove 39fd61b73071aa99bdeca36e8c111a0ce1a480f80fd52e5579b67897806ea427 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_black, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec  3 18:48:45 compute-0 systemd[1]: libpod-conmon-39fd61b73071aa99bdeca36e8c111a0ce1a480f80fd52e5579b67897806ea427.scope: Deactivated successfully.
Dec  3 18:48:45 compute-0 podman[428319]: 2025-12-03 18:48:45.997652432 +0000 UTC m=+0.053927022 container create ab31402137903b980b65528e47170deb009c13e695efcb5f1bb84e1630c617f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec  3 18:48:46 compute-0 systemd[1]: Started libpod-conmon-ab31402137903b980b65528e47170deb009c13e695efcb5f1bb84e1630c617f7.scope.
Dec  3 18:48:46 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:48:46 compute-0 podman[428319]: 2025-12-03 18:48:45.975196851 +0000 UTC m=+0.031471461 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:48:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de4882c80d13f3e3324637e1bad10acb81d9965522397fa67804f1a8b1ccc1bf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:48:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de4882c80d13f3e3324637e1bad10acb81d9965522397fa67804f1a8b1ccc1bf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:48:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de4882c80d13f3e3324637e1bad10acb81d9965522397fa67804f1a8b1ccc1bf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:48:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de4882c80d13f3e3324637e1bad10acb81d9965522397fa67804f1a8b1ccc1bf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:48:46 compute-0 podman[428319]: 2025-12-03 18:48:46.090605007 +0000 UTC m=+0.146879597 container init ab31402137903b980b65528e47170deb009c13e695efcb5f1bb84e1630c617f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_euler, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:48:46 compute-0 podman[428319]: 2025-12-03 18:48:46.106309362 +0000 UTC m=+0.162583952 container start ab31402137903b980b65528e47170deb009c13e695efcb5f1bb84e1630c617f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_euler, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:48:46 compute-0 podman[428319]: 2025-12-03 18:48:46.110563956 +0000 UTC m=+0.166838546 container attach ab31402137903b980b65528e47170deb009c13e695efcb5f1bb84e1630c617f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_euler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  3 18:48:46 compute-0 podman[428336]: 2025-12-03 18:48:46.170526694 +0000 UTC m=+0.121405514 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  3 18:48:46 compute-0 podman[428333]: 2025-12-03 18:48:46.183567973 +0000 UTC m=+0.133867118 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 18:48:46 compute-0 fervent_euler[428337]: {
Dec  3 18:48:46 compute-0 fervent_euler[428337]:    "0": [
Dec  3 18:48:46 compute-0 fervent_euler[428337]:        {
Dec  3 18:48:46 compute-0 fervent_euler[428337]:            "devices": [
Dec  3 18:48:46 compute-0 fervent_euler[428337]:                "/dev/loop3"
Dec  3 18:48:46 compute-0 fervent_euler[428337]:            ],
Dec  3 18:48:46 compute-0 fervent_euler[428337]:            "lv_name": "ceph_lv0",
Dec  3 18:48:46 compute-0 fervent_euler[428337]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:48:46 compute-0 fervent_euler[428337]:            "lv_size": "21470642176",
Dec  3 18:48:46 compute-0 fervent_euler[428337]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=973fbbc8-5aff-4a53-bee8-42e5a6788dd6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:48:46 compute-0 fervent_euler[428337]:            "lv_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:48:46 compute-0 fervent_euler[428337]:            "name": "ceph_lv0",
Dec  3 18:48:46 compute-0 fervent_euler[428337]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:48:46 compute-0 fervent_euler[428337]:            "tags": {
Dec  3 18:48:46 compute-0 fervent_euler[428337]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:48:46 compute-0 fervent_euler[428337]:                "ceph.block_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:48:46 compute-0 fervent_euler[428337]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:48:46 compute-0 fervent_euler[428337]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:48:46 compute-0 fervent_euler[428337]:                "ceph.cluster_name": "ceph",
Dec  3 18:48:46 compute-0 fervent_euler[428337]:                "ceph.crush_device_class": "",
Dec  3 18:48:46 compute-0 fervent_euler[428337]:                "ceph.encrypted": "0",
Dec  3 18:48:46 compute-0 fervent_euler[428337]:                "ceph.osd_fsid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:48:46 compute-0 fervent_euler[428337]:                "ceph.osd_id": "0",
Dec  3 18:48:46 compute-0 fervent_euler[428337]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:48:46 compute-0 fervent_euler[428337]:                "ceph.type": "block",
Dec  3 18:48:46 compute-0 fervent_euler[428337]:                "ceph.vdo": "0"
Dec  3 18:48:46 compute-0 fervent_euler[428337]:            },
Dec  3 18:48:46 compute-0 fervent_euler[428337]:            "type": "block",
Dec  3 18:48:46 compute-0 fervent_euler[428337]:            "vg_name": "ceph_vg0"
Dec  3 18:48:46 compute-0 fervent_euler[428337]:        }
Dec  3 18:48:46 compute-0 fervent_euler[428337]:    ],
Dec  3 18:48:46 compute-0 fervent_euler[428337]:    "1": [
Dec  3 18:48:46 compute-0 fervent_euler[428337]:        {
Dec  3 18:48:46 compute-0 fervent_euler[428337]:            "devices": [
Dec  3 18:48:46 compute-0 fervent_euler[428337]:                "/dev/loop4"
Dec  3 18:48:46 compute-0 fervent_euler[428337]:            ],
Dec  3 18:48:46 compute-0 fervent_euler[428337]:            "lv_name": "ceph_lv1",
Dec  3 18:48:46 compute-0 fervent_euler[428337]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:48:46 compute-0 fervent_euler[428337]:            "lv_size": "21470642176",
Dec  3 18:48:46 compute-0 fervent_euler[428337]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1e2b0083-5293-47cb-a3d1-bc27cedc4ede,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:48:46 compute-0 fervent_euler[428337]:            "lv_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:48:46 compute-0 fervent_euler[428337]:            "name": "ceph_lv1",
Dec  3 18:48:46 compute-0 fervent_euler[428337]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:48:46 compute-0 fervent_euler[428337]:            "tags": {
Dec  3 18:48:46 compute-0 fervent_euler[428337]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:48:46 compute-0 fervent_euler[428337]:                "ceph.block_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:48:46 compute-0 fervent_euler[428337]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:48:46 compute-0 fervent_euler[428337]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:48:46 compute-0 fervent_euler[428337]:                "ceph.cluster_name": "ceph",
Dec  3 18:48:46 compute-0 fervent_euler[428337]:                "ceph.crush_device_class": "",
Dec  3 18:48:46 compute-0 fervent_euler[428337]:                "ceph.encrypted": "0",
Dec  3 18:48:46 compute-0 fervent_euler[428337]:                "ceph.osd_fsid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:48:46 compute-0 fervent_euler[428337]:                "ceph.osd_id": "1",
Dec  3 18:48:46 compute-0 fervent_euler[428337]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:48:46 compute-0 fervent_euler[428337]:                "ceph.type": "block",
Dec  3 18:48:46 compute-0 fervent_euler[428337]:                "ceph.vdo": "0"
Dec  3 18:48:46 compute-0 fervent_euler[428337]:            },
Dec  3 18:48:46 compute-0 fervent_euler[428337]:            "type": "block",
Dec  3 18:48:46 compute-0 fervent_euler[428337]:            "vg_name": "ceph_vg1"
Dec  3 18:48:46 compute-0 fervent_euler[428337]:        }
Dec  3 18:48:46 compute-0 fervent_euler[428337]:    ],
Dec  3 18:48:46 compute-0 fervent_euler[428337]:    "2": [
Dec  3 18:48:46 compute-0 fervent_euler[428337]:        {
Dec  3 18:48:46 compute-0 fervent_euler[428337]:            "devices": [
Dec  3 18:48:46 compute-0 fervent_euler[428337]:                "/dev/loop5"
Dec  3 18:48:46 compute-0 fervent_euler[428337]:            ],
Dec  3 18:48:46 compute-0 fervent_euler[428337]:            "lv_name": "ceph_lv2",
Dec  3 18:48:46 compute-0 fervent_euler[428337]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:48:46 compute-0 fervent_euler[428337]:            "lv_size": "21470642176",
Dec  3 18:48:46 compute-0 fervent_euler[428337]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2abec9de-afba-437e-9a17-384a1dd8cd50,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:48:46 compute-0 fervent_euler[428337]:            "lv_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:48:46 compute-0 fervent_euler[428337]:            "name": "ceph_lv2",
Dec  3 18:48:46 compute-0 fervent_euler[428337]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:48:46 compute-0 fervent_euler[428337]:            "tags": {
Dec  3 18:48:46 compute-0 fervent_euler[428337]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:48:46 compute-0 fervent_euler[428337]:                "ceph.block_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:48:46 compute-0 fervent_euler[428337]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:48:46 compute-0 fervent_euler[428337]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:48:46 compute-0 fervent_euler[428337]:                "ceph.cluster_name": "ceph",
Dec  3 18:48:46 compute-0 fervent_euler[428337]:                "ceph.crush_device_class": "",
Dec  3 18:48:46 compute-0 fervent_euler[428337]:                "ceph.encrypted": "0",
Dec  3 18:48:46 compute-0 fervent_euler[428337]:                "ceph.osd_fsid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:48:46 compute-0 fervent_euler[428337]:                "ceph.osd_id": "2",
Dec  3 18:48:46 compute-0 fervent_euler[428337]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:48:46 compute-0 fervent_euler[428337]:                "ceph.type": "block",
Dec  3 18:48:46 compute-0 fervent_euler[428337]:                "ceph.vdo": "0"
Dec  3 18:48:46 compute-0 fervent_euler[428337]:            },
Dec  3 18:48:46 compute-0 fervent_euler[428337]:            "type": "block",
Dec  3 18:48:46 compute-0 fervent_euler[428337]:            "vg_name": "ceph_vg2"
Dec  3 18:48:46 compute-0 fervent_euler[428337]:        }
Dec  3 18:48:46 compute-0 fervent_euler[428337]:    ]
Dec  3 18:48:46 compute-0 fervent_euler[428337]: }
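The JSON payload printed by the fervent_euler container above has the shape of `ceph-volume lvm list --format json` output: top-level keys are OSD ids, each mapping to a list of logical volumes carrying `ceph.*` tags. A minimal sketch for mapping OSD ids to their backing devices, assuming the payload has been captured to a file (the file name `lvm_list.json` is hypothetical):

```python
import json

# Parse a ceph-volume "lvm list" style payload (as printed above):
# {"0": [ {lv fields...} ], "1": [...], "2": [...]}
with open("lvm_list.json") as fh:  # hypothetical capture of the payload
    listing = json.load(fh)

for osd_id, lvs in sorted(listing.items(), key=lambda kv: int(kv[0])):
    for lv in lvs:
        tags = lv["tags"]
        print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
              f"(fsid {tags['ceph.osd_fsid']}, type {lv['type']})")
```

Against the payload above this prints one line per OSD, e.g. osd.0 on /dev/loop3 backed by /dev/ceph_vg0/ceph_lv0.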
Dec  3 18:48:46 compute-0 systemd[1]: libpod-ab31402137903b980b65528e47170deb009c13e695efcb5f1bb84e1630c617f7.scope: Deactivated successfully.
Dec  3 18:48:46 compute-0 podman[428319]: 2025-12-03 18:48:46.919413887 +0000 UTC m=+0.975688497 container died ab31402137903b980b65528e47170deb009c13e695efcb5f1bb84e1630c617f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_euler, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  3 18:48:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-de4882c80d13f3e3324637e1bad10acb81d9965522397fa67804f1a8b1ccc1bf-merged.mount: Deactivated successfully.
Dec  3 18:48:46 compute-0 podman[428319]: 2025-12-03 18:48:46.990314882 +0000 UTC m=+1.046589472 container remove ab31402137903b980b65528e47170deb009c13e695efcb5f1bb84e1630c617f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:48:47 compute-0 systemd[1]: libpod-conmon-ab31402137903b980b65528e47170deb009c13e695efcb5f1bb84e1630c617f7.scope: Deactivated successfully.
Dec  3 18:48:47 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1485: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:48:47 compute-0 nova_compute[348325]: 2025-12-03 18:48:47.953 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:48:48 compute-0 podman[428534]: 2025-12-03 18:48:48.042687166 +0000 UTC m=+0.073700566 container create 2e6769085bf3a3ace3386169e31100e3f6e3a78cafd234791c8b7f70a072a9d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_wright, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:48:48 compute-0 systemd[1]: Started libpod-conmon-2e6769085bf3a3ace3386169e31100e3f6e3a78cafd234791c8b7f70a072a9d3.scope.
Dec  3 18:48:48 compute-0 podman[428534]: 2025-12-03 18:48:48.014552127 +0000 UTC m=+0.045565577 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:48:48 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:48:48 compute-0 podman[428534]: 2025-12-03 18:48:48.157251671 +0000 UTC m=+0.188265151 container init 2e6769085bf3a3ace3386169e31100e3f6e3a78cafd234791c8b7f70a072a9d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  3 18:48:48 compute-0 podman[428534]: 2025-12-03 18:48:48.171382406 +0000 UTC m=+0.202395836 container start 2e6769085bf3a3ace3386169e31100e3f6e3a78cafd234791c8b7f70a072a9d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_wright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 18:48:48 compute-0 podman[428534]: 2025-12-03 18:48:48.178789807 +0000 UTC m=+0.209803207 container attach 2e6769085bf3a3ace3386169e31100e3f6e3a78cafd234791c8b7f70a072a9d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_wright, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:48:48 compute-0 hungry_wright[428550]: 167 167
Dec  3 18:48:48 compute-0 systemd[1]: libpod-2e6769085bf3a3ace3386169e31100e3f6e3a78cafd234791c8b7f70a072a9d3.scope: Deactivated successfully.
Dec  3 18:48:48 compute-0 podman[428534]: 2025-12-03 18:48:48.182174171 +0000 UTC m=+0.213187601 container died 2e6769085bf3a3ace3386169e31100e3f6e3a78cafd234791c8b7f70a072a9d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_wright, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:48:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-b68a050707e2d28c25846f78db43b18f04b0a5d4e16ec7c263ebf52be113d4e9-merged.mount: Deactivated successfully.
Dec  3 18:48:48 compute-0 podman[428534]: 2025-12-03 18:48:48.256124211 +0000 UTC m=+0.287137611 container remove 2e6769085bf3a3ace3386169e31100e3f6e3a78cafd234791c8b7f70a072a9d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec  3 18:48:48 compute-0 systemd[1]: libpod-conmon-2e6769085bf3a3ace3386169e31100e3f6e3a78cafd234791c8b7f70a072a9d3.scope: Deactivated successfully.
Dec  3 18:48:48 compute-0 podman[428576]: 2025-12-03 18:48:48.510601881 +0000 UTC m=+0.079472326 container create 901492ac305babf934ddf4de7294803ef84392fcab1624a1465d24d5d4db9772 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_ellis, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:48:48 compute-0 podman[428576]: 2025-12-03 18:48:48.473206486 +0000 UTC m=+0.042076971 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:48:48 compute-0 systemd[1]: Started libpod-conmon-901492ac305babf934ddf4de7294803ef84392fcab1624a1465d24d5d4db9772.scope.
Dec  3 18:48:48 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:48:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54a5874fce1c013bdd7c89d128339d6d261b5b7216d3fd3667afce6e6b522a5b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:48:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54a5874fce1c013bdd7c89d128339d6d261b5b7216d3fd3667afce6e6b522a5b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:48:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54a5874fce1c013bdd7c89d128339d6d261b5b7216d3fd3667afce6e6b522a5b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:48:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54a5874fce1c013bdd7c89d128339d6d261b5b7216d3fd3667afce6e6b522a5b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:48:48 compute-0 podman[428576]: 2025-12-03 18:48:48.647649176 +0000 UTC m=+0.216519671 container init 901492ac305babf934ddf4de7294803ef84392fcab1624a1465d24d5d4db9772 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_ellis, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:48:48 compute-0 podman[428576]: 2025-12-03 18:48:48.661872494 +0000 UTC m=+0.230742939 container start 901492ac305babf934ddf4de7294803ef84392fcab1624a1465d24d5d4db9772 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_ellis, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  3 18:48:48 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:48:48 compute-0 podman[428576]: 2025-12-03 18:48:48.666036436 +0000 UTC m=+0.234906931 container attach 901492ac305babf934ddf4de7294803ef84392fcab1624a1465d24d5d4db9772 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_ellis, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec  3 18:48:48 compute-0 ovn_controller[89305]: 2025-12-03T18:48:48Z|00053|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Dec  3 18:48:49 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1486: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:48:49 compute-0 nova_compute[348325]: 2025-12-03 18:48:49.623 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:48:49 compute-0 suspicious_ellis[428591]: {
Dec  3 18:48:49 compute-0 suspicious_ellis[428591]:    "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
Dec  3 18:48:49 compute-0 suspicious_ellis[428591]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:48:49 compute-0 suspicious_ellis[428591]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 18:48:49 compute-0 suspicious_ellis[428591]:        "osd_id": 1,
Dec  3 18:48:49 compute-0 suspicious_ellis[428591]:        "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:48:49 compute-0 suspicious_ellis[428591]:        "type": "bluestore"
Dec  3 18:48:49 compute-0 suspicious_ellis[428591]:    },
Dec  3 18:48:49 compute-0 suspicious_ellis[428591]:    "2abec9de-afba-437e-9a17-384a1dd8cd50": {
Dec  3 18:48:49 compute-0 suspicious_ellis[428591]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:48:49 compute-0 suspicious_ellis[428591]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 18:48:49 compute-0 suspicious_ellis[428591]:        "osd_id": 2,
Dec  3 18:48:49 compute-0 suspicious_ellis[428591]:        "osd_uuid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:48:49 compute-0 suspicious_ellis[428591]:        "type": "bluestore"
Dec  3 18:48:49 compute-0 suspicious_ellis[428591]:    },
Dec  3 18:48:49 compute-0 suspicious_ellis[428591]:    "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": {
Dec  3 18:48:49 compute-0 suspicious_ellis[428591]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:48:49 compute-0 suspicious_ellis[428591]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 18:48:49 compute-0 suspicious_ellis[428591]:        "osd_id": 0,
Dec  3 18:48:49 compute-0 suspicious_ellis[428591]:        "osd_uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:48:49 compute-0 suspicious_ellis[428591]:        "type": "bluestore"
Dec  3 18:48:49 compute-0 suspicious_ellis[428591]:    }
Dec  3 18:48:49 compute-0 suspicious_ellis[428591]: }
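This second payload, keyed by OSD fsid with `ceph_fsid`, `device`, `osd_id` and `type: bluestore` fields, looks like `ceph-volume raw list` output. A sketch that cross-references it with the earlier per-OSD-id listing (both file names are assumptions, as before):

```python
import json

# Cross-check the two ceph-volume payloads above: the lvm listing is
# keyed by osd_id, this raw/bluestore listing by osd fsid.
with open("lvm_list.json") as fh:   # hypothetical captures
    by_osd_id = json.load(fh)
with open("raw_list.json") as fh:
    by_fsid = json.load(fh)

for fsid, osd in by_fsid.items():
    osd_id = str(osd["osd_id"])
    lv = by_osd_id[osd_id][0]
    assert lv["tags"]["ceph.osd_fsid"] == fsid, f"fsid mismatch for osd.{osd_id}"
    print(f"osd.{osd_id}: {osd['device']} ({osd['type']}) consistent")
```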
Dec  3 18:48:49 compute-0 systemd[1]: libpod-901492ac305babf934ddf4de7294803ef84392fcab1624a1465d24d5d4db9772.scope: Deactivated successfully.
Dec  3 18:48:49 compute-0 systemd[1]: libpod-901492ac305babf934ddf4de7294803ef84392fcab1624a1465d24d5d4db9772.scope: Consumed 1.097s CPU time.
Dec  3 18:48:49 compute-0 podman[428576]: 2025-12-03 18:48:49.772432032 +0000 UTC m=+1.341302487 container died 901492ac305babf934ddf4de7294803ef84392fcab1624a1465d24d5d4db9772 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_ellis, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  3 18:48:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-54a5874fce1c013bdd7c89d128339d6d261b5b7216d3fd3667afce6e6b522a5b-merged.mount: Deactivated successfully.
Dec  3 18:48:49 compute-0 podman[428576]: 2025-12-03 18:48:49.842008825 +0000 UTC m=+1.410879270 container remove 901492ac305babf934ddf4de7294803ef84392fcab1624a1465d24d5d4db9772 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_ellis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec  3 18:48:49 compute-0 systemd[1]: libpod-conmon-901492ac305babf934ddf4de7294803ef84392fcab1624a1465d24d5d4db9772.scope: Deactivated successfully.
Dec  3 18:48:49 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:48:49 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:48:49 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:48:49 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:48:49 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev e36f8436-02b5-4b88-bb06-aa11c341b79c does not exist
Dec  3 18:48:49 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 9f3583a5-1803-4503-bc4c-6f3cf5add4c4 does not exist
Dec  3 18:48:50 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:48:50 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:48:51 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1487: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:48:52 compute-0 nova_compute[348325]: 2025-12-03 18:48:52.955 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:48:53 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1488: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:48:53 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:48:54 compute-0 nova_compute[348325]: 2025-12-03 18:48:54.625 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:48:55 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1489: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:48:57 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1490: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:48:57 compute-0 nova_compute[348325]: 2025-12-03 18:48:57.958 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:48:58 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:48:59 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1491: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:48:59 compute-0 nova_compute[348325]: 2025-12-03 18:48:59.629 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:48:59 compute-0 podman[158200]: time="2025-12-03T18:48:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:48:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:48:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 18:48:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:48:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8646 "" "Go-http-client/1.1"
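The two GET lines above are the libpod REST API being polled over the podman service socket (the `podman_exporter` config earlier sets `CONTAINER_HOST=unix:///run/podman/podman.sock`). A sketch of the same query from Python, assuming that socket path and root access; the endpoint string is copied from the log:

```python
import http.client
import json
import socket

# Minimal HTTP-over-unix-socket client for the libpod REST API; the
# standard library has no native unix-socket HTTP support, so we
# override connect() on http.client.HTTPConnection.
class UnixHTTPConnection(http.client.HTTPConnection):
    def __init__(self, path):
        super().__init__("localhost")
        self.unix_path = path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.unix_path)
        self.sock = sock

conn = UnixHTTPConnection("/run/podman/podman.sock")  # path from the exporter config above
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true&external=false")
containers = json.loads(conn.getresponse().read())
print(len(containers), "containers")
```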
Dec  3 18:49:01 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1492: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:49:01 compute-0 openstack_network_exporter[365222]: ERROR   18:49:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:49:01 compute-0 openstack_network_exporter[365222]: ERROR   18:49:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:49:01 compute-0 openstack_network_exporter[365222]: ERROR   18:49:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:49:01 compute-0 openstack_network_exporter[365222]: ERROR   18:49:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:49:01 compute-0 openstack_network_exporter[365222]: ERROR   18:49:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:49:01 compute-0 podman[428689]: 2025-12-03 18:49:01.918088225 +0000 UTC m=+0.082068680 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 18:49:01 compute-0 podman[428688]: 2025-12-03 18:49:01.932313543 +0000 UTC m=+0.096076382 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Dec  3 18:49:01 compute-0 podman[428690]: 2025-12-03 18:49:01.959597852 +0000 UTC m=+0.117034147 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, version=9.6, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, architecture=x86_64, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, name=ubi9-minimal, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
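Each podman health_status event above embeds the container's config_data as a Python-style dict. A sketch of pulling the healthcheck command and bind mounts out of such a blob (excerpted and trimmed from the node_exporter event; ast.literal_eval handles the single-quoted syntax):

# Parse a config_data excerpt and list its healthcheck and mounts.
import ast

config_data = ast.literal_eval(
    "{'image': 'quay.io/prometheus/node-exporter:v1.5.0', "
    "'healthcheck': {'test': '/openstack/healthcheck node_exporter', "
    "'mount': '/var/lib/openstack/healthchecks/node_exporter'}, "
    "'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw']}"
)

print(config_data["healthcheck"]["test"])
for vol in config_data["volumes"]:
    src, dst, *mode = vol.split(":")
    print(f"{src} -> {dst} ({mode[0] if mode else 'rw'})")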
Dec  3 18:49:02 compute-0 nova_compute[348325]: 2025-12-03 18:49:02.963 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:49:03 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1493: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:49:03 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
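The mon's _set_new_cache_sizes line reports raw byte counts from the cache autotuner; converted to MiB they are easier to compare:

# Convert the autotuner's byte counts (values copied from the line above).
for name, val in {
    "cache_size": 1020054731,
    "inc_alloc": 348127232,
    "full_alloc": 348127232,
    "kv_alloc": 322961408,
}.items():
    print(f"{name}: {val / 2**20:.1f} MiB")  # ~972.8, 332.0, 332.0, 308.0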
Dec  3 18:49:03 compute-0 podman[428747]: 2025-12-03 18:49:03.951810524 +0000 UTC m=+0.110472076 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, container_name=kepler, io.buildah.version=1.29.0, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, architecture=x86_64, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, vcs-type=git, version=9.4)
Dec  3 18:49:03 compute-0 podman[428748]: 2025-12-03 18:49:03.953806013 +0000 UTC m=+0.103476495 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  3 18:49:03 compute-0 podman[428749]: 2025-12-03 18:49:03.957376921 +0000 UTC m=+0.093082331 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 18:49:04 compute-0 nova_compute[348325]: 2025-12-03 18:49:04.634 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:49:05 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1494: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:49:05 compute-0 nova_compute[348325]: 2025-12-03 18:49:05.488 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:49:05 compute-0 nova_compute[348325]: 2025-12-03 18:49:05.489 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec  3 18:49:05 compute-0 nova_compute[348325]: 2025-12-03 18:49:05.538 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
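The Running periodic task entries come from oslo.service's periodic-task machinery, which nova's ComputeManager inherits. A simplified sketch of that pattern; the decorator arguments here are illustrative, not nova's defaults, and real tasks take a request context:

# Simplified oslo.service periodic-task sketch.
from oslo_config import cfg
from oslo_service import periodic_task

CONF = cfg.CONF

class Manager(periodic_task.PeriodicTasks):
    def __init__(self):
        super().__init__(CONF)

    @periodic_task.periodic_task(spacing=60, run_immediately=True)
    def _run_pending_deletes(self, context):
        print("Cleaning up deleted instances")

mgr = Manager()
# The service calls this on a timer; each due task logs "Running periodic
# task ..." before it executes.
mgr.run_periodic_tasks(context=None)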
Dec  3 18:49:07 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1495: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:49:07 compute-0 nova_compute[348325]: 2025-12-03 18:49:07.529 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:49:07 compute-0 nova_compute[348325]: 2025-12-03 18:49:07.966 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:49:08 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:49:09 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1496: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:49:09 compute-0 nova_compute[348325]: 2025-12-03 18:49:09.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:49:09 compute-0 nova_compute[348325]: 2025-12-03 18:49:09.639 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:49:10 compute-0 nova_compute[348325]: 2025-12-03 18:49:10.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:49:11 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1497: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:49:11 compute-0 nova_compute[348325]: 2025-12-03 18:49:11.500 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:49:11 compute-0 nova_compute[348325]: 2025-12-03 18:49:11.501 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:49:12 compute-0 nova_compute[348325]: 2025-12-03 18:49:12.485 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:49:12 compute-0 nova_compute[348325]: 2025-12-03 18:49:12.968 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.250 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them. Therefore, one can expect the polling process to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.251 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.251 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c390e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.253 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7eff8d7fffe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.253 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c390e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.254 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff9026f920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c390e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.254 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c390e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.254 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c390e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.254 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffa10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c390e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.254 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8daba2d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c390e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.255 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c390e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.255 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff90799b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c390e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.256 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c390e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.256 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8f46ebd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c390e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.256 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c390e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.257 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c390e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.257 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c390e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.257 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c390e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.257 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c390e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.258 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c390e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.258 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c390e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.258 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c390e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.259 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c390e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.259 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c390e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.259 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c390e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.260 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7fff50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c390e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.260 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c390e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.261 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7fffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c390e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.261 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8ef7c7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c390e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
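The registration lines above show each pollster (a stevedore Extension) being handed to a concurrent.futures ThreadPoolExecutor sized by the worker-thread setting. A sketch of that dispatch shape, with a plain callable standing in for a loaded extension since real ceilometer entry points need the package installed:

# Dispatch shape of the registration lines above.
from concurrent.futures import ThreadPoolExecutor

def poll(name):
    # Real pollsters call get_samples() against the discovered resources.
    return f"polled {name}"

with ThreadPoolExecutor(max_workers=1) as executor:  # "[1] threads" above
    futures = [executor.submit(poll, n)
               for n in ("network.incoming.packets.error",
                         "network.outgoing.bytes")]
    for f in futures:
        print(f.result())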
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.265 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '1ca1fbdb-089c-4544-821e-0542089b8424', 'name': 'test_0', 'flavor': {'id': '6cb250a4-d28c-4125-888b-653b31e29275', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e68cd467-b4e6-45e0-8e55-984fda402294'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'd2770200bdb2436c90142fa2e5ddcd47', 'user_id': '56338958b09445f5af9aa9e4601a1a8a', 'hostId': '233c08f520fd9700ef62a871bc5d558f2659759d89ea6c0726998878', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.271 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a6019a9c-c065-49d8-bef3-219bd2c79d8c', 'name': 'vn-66btob3-zeembfmsdvyd-qc6d57h54o3l-vnf-m24sgrg35czm', 'flavor': {'id': '6cb250a4-d28c-4125-888b-653b31e29275', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e68cd467-b4e6-45e0-8e55-984fda402294'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'd2770200bdb2436c90142fa2e5ddcd47', 'user_id': '56338958b09445f5af9aa9e4601a1a8a', 'hostId': '233c08f520fd9700ef62a871bc5d558f2659759d89ea6c0726998878', 'status': 'active', 'metadata': {'metering.server_group': 'b322e118-e1cc-40be-8d8c-553648144092'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.277 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'de3992c5-c1ad-4da3-9276-954d6365c3c9', 'name': 'vn-66btob3-t73jgstwyk5c-ol75pntdsuyz-vnf-noho2adux65j', 'flavor': {'id': '6cb250a4-d28c-4125-888b-653b31e29275', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e68cd467-b4e6-45e0-8e55-984fda402294'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'd2770200bdb2436c90142fa2e5ddcd47', 'user_id': '56338958b09445f5af9aa9e4601a1a8a', 'hostId': '233c08f520fd9700ef62a871bc5d558f2659759d89ea6c0726998878', 'status': 'active', 'metadata': {'metering.server_group': 'b322e118-e1cc-40be-8d8c-553648144092'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
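Discovery returns one dict per libvirt instance, as dumped above. A sketch of a downstream consumer grouping those instances by the metering.server_group metadata key (fields trimmed to what the grouping needs):

# Group the discovered instances by metering.server_group.
from collections import defaultdict

instances = [
    {"id": "1ca1fbdb-089c-4544-821e-0542089b8424", "metadata": {}},
    {"id": "a6019a9c-c065-49d8-bef3-219bd2c79d8c",
     "metadata": {"metering.server_group": "b322e118-e1cc-40be-8d8c-553648144092"}},
    {"id": "de3992c5-c1ad-4da3-9276-954d6365c3c9",
     "metadata": {"metering.server_group": "b322e118-e1cc-40be-8d8c-553648144092"}},
]

groups = defaultdict(list)
for inst in instances:
    groups[inst["metadata"].get("metering.server_group", "ungrouped")].append(inst["id"])
print(dict(groups))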
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.278 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.278 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.278 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.279 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.280 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-03T18:49:13.278921) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.285 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.291 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.296 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.297 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.297 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7eff8d8a80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.297 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.298 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.298 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.298 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.298 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.outgoing.bytes volume: 2384 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.298 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-03T18:49:13.298430) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.299 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/network.outgoing.bytes volume: 2286 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.299 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.300 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.300 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7eff8d8a8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.300 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.300 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff9026f920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.301 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff9026f920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.301 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.301 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.301 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-03T18:49:13.301301) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.302 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.302 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.302 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.303 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7eff8d8a8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.303 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.303 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.304 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.304 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.304 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.304 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-03T18:49:13.304191) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.304 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.305 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.305 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
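network.outgoing.bytes is a cumulative counter, while the .delta samples above are the change between two successive polls. A worked sketch of that derivation for instance a6019a9c; the 120-second polling interval is an assumption, not something this log states:

# Relate the cumulative counter to its .delta sample.
curr = 2286           # network.outgoing.bytes sample above
delta = 70            # network.outgoing.bytes.delta sample above
prev = curr - delta   # counter value at the previous poll
interval = 120.0      # assumed polling period in seconds, not from this log

print(prev, delta, f"{delta / interval:.3f} B/s")  # what a .rate meter derives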
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.306 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7eff8d8a81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.306 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.306 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7eff8d7ff9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.306 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.307 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ffa10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.307 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ffa10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.307 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.307 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.incoming.bytes volume: 2262 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.308 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-03T18:49:13.307626) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.308 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/network.incoming.bytes volume: 1570 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.308 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/network.incoming.bytes volume: 1654 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.309 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.309 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7eff8d7fe840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.309 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.309 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8daba2d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.310 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8daba2d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.310 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.310 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-03T18:49:13.310316) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.330 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.330 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.330 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.354 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.355 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.355 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.377 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.378 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.378 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.379 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
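Each instance reports three disk devices above: two 1073741824-byte volumes, which are exactly the flavor's 1 GB root and 1 GB ephemeral disks, plus a small third device (presumably the config drive; the log does not say). Converting the raw byte volumes:

# Convert the per-device capacities reported above.
for vol in (1073741824, 1073741824, 485376):
    print(f"{vol} B = {vol / 2**30:.6f} GiB")  # 1.0, 1.0, ~0.000452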
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.379 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7eff8d8a82c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.379 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.379 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a82f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.380 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a82f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.380 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.380 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.380 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-03T18:49:13.380170) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.381 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.381 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.381 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
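
The DEBUG lines of the form "<uuid>/<meter> volume: <n> _stats_to_sample ..." carry one sample per resource per poll. Below is a minimal, runnable sketch for pulling (instance, meter, volume) triples out of such lines; the regex is inferred from the log layout above, not taken from any ceilometer interface:

import re

# Pattern inferred from the _stats_to_sample DEBUG lines in this log;
# it is an assumption about the layout, not a ceilometer API.
SAMPLE_RE = re.compile(
    r"(?P<uuid>[0-9a-f]{8}(?:-[0-9a-f]{4}){3}-[0-9a-f]{12})"
    r"/(?P<meter>[\w.]+) volume: (?P<volume>[\d.]+)"
)

line = ("2025-12-03 18:49:13.380 14 DEBUG ceilometer.compute.pollsters [-] "
        "1ca1fbdb-089c-4544-821e-0542089b8424/network.outgoing.packets.drop "
        "volume: 0 _stats_to_sample ...")
m = SAMPLE_RE.search(line)
if m:
    print(m.group("uuid"), m.group("meter"), float(m.group("volume")))
# -> 1ca1fbdb-089c-4544-821e-0542089b8424 network.outgoing.packets.drop 0.0
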
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.382 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7eff8d7ff9b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.382 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.382 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff90799b20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.382 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff90799b20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.382 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.383 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-03T18:49:13.382743) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:49:13 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1498: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.400 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/memory.usage volume: 48.91796875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.417 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/memory.usage volume: 49.01171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.434 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/memory.usage volume: 48.95703125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.434 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
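
The three memory.usage samples above are denominated in MB (the unit ceilometer documents for this meter). The sketch below just restates them in bytes, treating 1 MB as 1024*1024 bytes to match libvirt's KiB-based accounting — an assumption about the intended binary unit:

# memory.usage values copied from the samples above, in MB.
samples = {
    "1ca1fbdb-089c-4544-821e-0542089b8424": 48.91796875,
    "a6019a9c-c065-49d8-bef3-219bd2c79d8c": 49.01171875,
    "de3992c5-c1ad-4da3-9276-954d6365c3c9": 48.95703125,
}
for uuid, mb in samples.items():
    # 1 MB treated as 2**20 bytes here (binary units, matching libvirt).
    print(f"{uuid[:8]}  {mb:7.2f} MB  = {mb * 2**20:>12,.0f} bytes")
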
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.434 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7eff8d8a8350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.434 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.434 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.434 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.435 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.435 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-03T18:49:13.435042) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.435 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.435 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.435 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.436 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.436 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7eff8f682330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.436 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.436 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8f46ebd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.436 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8f46ebd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.436 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.436 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.436 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.437 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.437 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.437 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.437 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.437 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.438 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-03T18:49:13.436686) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.438 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.438 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.438 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
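
Each instance above emits three disk.device.allocation samples: two 1 GiB block devices plus one much smaller device (485376 or 583680 bytes — plausibly a config drive, though these lines do not name devices). Grouping the logged volumes by instance makes that shape explicit:

from collections import defaultdict

# (uuid, volume-in-bytes) pairs copied from the allocation samples above.
rows = [
    ("1ca1fbdb-089c-4544-821e-0542089b8424", 1073741824),
    ("1ca1fbdb-089c-4544-821e-0542089b8424", 1073741824),
    ("1ca1fbdb-089c-4544-821e-0542089b8424", 485376),
    ("a6019a9c-c065-49d8-bef3-219bd2c79d8c", 1073741824),
    ("a6019a9c-c065-49d8-bef3-219bd2c79d8c", 1073741824),
    ("a6019a9c-c065-49d8-bef3-219bd2c79d8c", 583680),
    ("de3992c5-c1ad-4da3-9276-954d6365c3c9", 1073741824),
    ("de3992c5-c1ad-4da3-9276-954d6365c3c9", 1073741824),
    ("de3992c5-c1ad-4da3-9276-954d6365c3c9", 583680),
]
per_instance = defaultdict(list)
for uuid, volume in rows:
    per_instance[uuid].append(volume)
for uuid, vols in sorted(per_instance.items()):
    print(uuid[:8], [f"{v / 2**30:.4f} GiB" for v in vols])
# Each instance: two 1.0000 GiB devices plus one ~0.0005 GiB device.
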
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.439 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7eff8d7ff4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.439 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.439 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.439 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.439 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.439 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-03T18:49:13.439324) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:49:13 compute-0 nova_compute[348325]: 2025-12-03 18:49:13.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:49:13 compute-0 nova_compute[348325]: 2025-12-03 18:49:13.488 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.501 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.502 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.502 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.575 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.576 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.576 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.628 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.628 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.628 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.629 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.629 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7eff8d930c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.629 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.629 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7eff8d7ff4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.629 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.629 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.629 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.630 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.630 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.latency volume: 1682579508 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.630 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.latency volume: 260360075 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.630 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.latency volume: 147233249 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.630 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.read.latency volume: 1330892351 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.630 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.read.latency volume: 190600353 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.631 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.read.latency volume: 156629474 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.631 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.read.latency volume: 1270610173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.631 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.read.latency volume: 182054323 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.631 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-03T18:49:13.629973) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.631 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.read.latency volume: 131449970 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.632 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.632 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7eff8d7ff530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.632 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.632 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.632 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.632 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.633 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.633 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-03T18:49:13.632819) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.633 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.633 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.633 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.633 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.634 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.634 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.634 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.634 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.635 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
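
disk.device.read.latency and disk.device.read.requests are cumulative counters (nanoseconds and operations, matching libvirt's block-stats semantics), so their ratio gives the mean latency per read. Pairing the samples by position assumes both pollsters walk the devices in the same order, which the interleaved logs above suggest but do not guarantee:

# Cumulative values for instance 1ca1fbdb..., copied from the samples above.
latency_ns = [1682579508, 260360075, 147233249]   # disk.device.read.latency
requests   = [840, 173, 109]                      # disk.device.read.requests
for lat, req in zip(latency_ns, requests):
    print(f"{req:4d} reads, mean {lat / req / 1e6:.2f} ms per read")
# -> 840 reads at ~2.00 ms, 173 at ~1.50 ms, 109 at ~1.35 ms
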
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.635 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7eff8d7ff590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.635 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.635 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.635 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.635 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.635 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.636 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-03T18:49:13.635684) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.635 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.636 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.637 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.637 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.637 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.637 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.637 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.638 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.638 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.638 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7eff8d7ff5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.638 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.638 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.638 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.638 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.638 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.639 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.639 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.639 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.639 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.640 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-03T18:49:13.638895) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.640 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.640 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.write.bytes volume: 41762816 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.640 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.640 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.641 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.641 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7eff8d8a8620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.641 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.641 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.641 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.641 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.641 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.641 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.641 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-03T18:49:13.641381) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.642 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.642 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
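
All three instances report power.state volume: 1. Reading these integers as nova's power_state constants is an assumption here (the meter originates in the compute driver), under which 1 means RUNNING:

# nova.compute.power_state constants (assumed mapping for this meter).
POWER_STATE = {
    0: "NOSTATE",
    1: "RUNNING",
    3: "PAUSED",
    4: "SHUTDOWN",
    6: "CRASHED",
    7: "SUSPENDED",
}
print(POWER_STATE.get(1, "UNKNOWN"))  # -> RUNNING, as sampled above
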
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.642 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7eff8d7ff650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.642 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.642 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.642 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.642 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.643 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.latency volume: 6303799002 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.643 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-03T18:49:13.642900) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.643 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.latency volume: 23959545 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.643 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.643 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.write.latency volume: 6164702929 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.643 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.write.latency volume: 24431067 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.644 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.644 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.write.latency volume: 6661882048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.644 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.write.latency volume: 21269890 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.644 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.645 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.645 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7eff8d7ff6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.645 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.645 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.645 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.645 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.645 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.645 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.645 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.646 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.646 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.646 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.646 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.write.requests volume: 228 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.646 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-03T18:49:13.645409) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.647 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.647 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.647 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.647 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7eff8d7ffa40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.647 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.647 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ffef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.648 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ffef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.648 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.648 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.648 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-03T18:49:13.648085) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.648 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.648 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.648 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
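
Unlike the cumulative disk counters, network.incoming.bytes.delta reports the change since the previous poll (84 bytes per instance this cycle). A generic guarded subtraction — not ceilometer code — shows the usual convention for deriving such a delta from two cumulative readings:

def bytes_delta(curr: int, prev: int) -> int:
    """Change since the previous poll. A current value below the previous
    one indicates a counter reset (e.g. instance reboot); returning the
    raw current value is one common convention for that case."""
    return curr - prev if curr >= prev else curr

print(bytes_delta(1084, 1000))  # -> 84, matching the per-instance deltas above
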
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.649 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7eff8d7ff710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.649 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.649 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.649 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.649 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.649 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.650 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7eff8d7fff20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.650 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.650 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-03T18:49:13.649349) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.650 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7fff50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.650 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7fff50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.650 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.650 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.incoming.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.650 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.650 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/network.incoming.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.651 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.651 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7eff8d7ff770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.651 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.651 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff7a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.651 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff7a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.651 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.652 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.652 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7eff8d7fff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.652 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.652 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7fffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.652 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7fffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.652 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.652 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-03T18:49:13.650325) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.652 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.652 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-03T18:49:13.651664) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.653 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-03T18:49:13.652744) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.653 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.653 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.653 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.653 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7eff8d7fdac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.653 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.654 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8ef7c7d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.654 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8ef7c7d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.654 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.654 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/cpu volume: 44340000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.654 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/cpu volume: 40320000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.654 14 DEBUG ceilometer.compute.pollsters [-] de3992c5-c1ad-4da3-9276-954d6365c3c9/cpu volume: 40740000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.654 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
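The cpu samples above are cumulative guest CPU time in nanoseconds (44340000000 ns is roughly 44.3 s), so a utilisation figure only falls out of two successive polls. Worked example, with a hypothetical second sample and an assumed 10 s interval:

    # ns0/ns1: cumulative "cpu" volumes from two consecutive polls
    # (the second value here is illustrative, not from the log).
    ns0, ns1 = 44_340_000_000, 44_840_000_000
    interval_s, vcpus = 10.0, 1
    util_pct = (ns1 - ns0) / (interval_s * 1e9 * vcpus) * 100
    print(f"{util_pct:.1f}% CPU")   # 5.0% CPU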
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.655 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.655 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.655 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.655 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.656 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.656 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.656 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-03T18:49:13.654135) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.656 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.656 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.656 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.656 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.656 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.656 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.656 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.656 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.657 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.657 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.657 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.657 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.657 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.657 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.657 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.657 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.657 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.657 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.657 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:49:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:49:13.657 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:49:13 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:49:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_18:49:13
Dec  3 18:49:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 18:49:13 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 18:49:13 compute-0 ceph-mgr[193091]: [balancer INFO root] pools ['images', '.rgw.root', 'vms', 'cephfs.cephfs.data', '.mgr', 'backups', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.meta', 'volumes']
Dec  3 18:49:13 compute-0 ceph-mgr[193091]: [balancer INFO root] prepared 0/10 changes
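"prepared 0/10 changes" means the upmap balancer walked the listed pools and found nothing to move within its per-round budget (ten optimizations here) and the 0.05 max-misplaced ceiling. In outline only, not the real ceph-mgr module code:

    def balance_round(pools, calc_upmap_change, max_optimizations=10):
        changes = []
        for pool in pools:
            change = calc_upmap_change(pool)  # None when already balanced
            if change is not None:
                changes.append(change)
            if len(changes) >= max_optimizations:
                break
        print('prepared %d/%d changes' % (len(changes), max_optimizations))
        return changes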
Dec  3 18:49:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:49:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:49:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:49:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:49:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:49:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:49:14 compute-0 nova_compute[348325]: 2025-12-03 18:49:14.137 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "refresh_cache-de3992c5-c1ad-4da3-9276-954d6365c3c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 18:49:14 compute-0 nova_compute[348325]: 2025-12-03 18:49:14.137 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquired lock "refresh_cache-de3992c5-c1ad-4da3-9276-954d6365c3c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 18:49:14 compute-0 nova_compute[348325]: 2025-12-03 18:49:14.137 348329 DEBUG nova.network.neutron [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  3 18:49:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 18:49:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:49:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 18:49:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:49:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:49:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:49:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:49:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:49:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:49:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:49:14 compute-0 nova_compute[348325]: 2025-12-03 18:49:14.644 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:49:14 compute-0 podman[428802]: 2025-12-03 18:49:14.89677992 +0000 UTC m=+0.071070311 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 18:49:15 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1499: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:49:15 compute-0 nova_compute[348325]: 2025-12-03 18:49:15.822 348329 DEBUG nova.network.neutron [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] Updating instance_info_cache with network_info: [{"id": "d2dfa631-e553-46bc-bc20-3f0bdd977328", "address": "fa:16:3e:e6:73:73", "network": {"id": "85c8d446-ad7f-4d1b-a311-89b0b07e8aad", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.212", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2770200bdb2436c90142fa2e5ddcd47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2dfa631-e5", "ovs_interfaceid": "d2dfa631-e553-46bc-bc20-3f0bdd977328", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 18:49:16 compute-0 nova_compute[348325]: 2025-12-03 18:49:16.385 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Releasing lock "refresh_cache-de3992c5-c1ad-4da3-9276-954d6365c3c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 18:49:16 compute-0 nova_compute[348325]: 2025-12-03 18:49:16.385 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
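The Acquiring/Acquired/Releasing triplet around this refresh is oslo.concurrency's named-lock pattern, serialising work on one instance's network cache. Simplified sketch (not nova's actual code; fetch_nw_info stands in for the neutron query):

    from oslo_concurrency import lockutils

    def refresh_network_cache(instance_uuid, fetch_nw_info, cache):
        with lockutils.lock('refresh_cache-%s' % instance_uuid):
            nw_info = fetch_nw_info(instance_uuid)   # "Forcefully refreshing ..."
            cache[instance_uuid] = nw_info           # "Updating instance_info_cache ..."
        return nw_info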
Dec  3 18:49:16 compute-0 nova_compute[348325]: 2025-12-03 18:49:16.386 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:49:16 compute-0 nova_compute[348325]: 2025-12-03 18:49:16.387 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:49:16 compute-0 nova_compute[348325]: 2025-12-03 18:49:16.387 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 18:49:16 compute-0 podman[428828]: 2025-12-03 18:49:16.944315556 +0000 UTC m=+0.086781195 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.4, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125)
Dec  3 18:49:16 compute-0 podman[428827]: 2025-12-03 18:49:16.973337597 +0000 UTC m=+0.131374767 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_managed=true)
Dec  3 18:49:17 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1500: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:49:17 compute-0 nova_compute[348325]: 2025-12-03 18:49:17.970 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:49:18 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:49:19 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1501: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:49:19 compute-0 nova_compute[348325]: 2025-12-03 18:49:19.647 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:49:21 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1502: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:49:21 compute-0 nova_compute[348325]: 2025-12-03 18:49:21.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:49:21 compute-0 nova_compute[348325]: 2025-12-03 18:49:21.578 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:49:21 compute-0 nova_compute[348325]: 2025-12-03 18:49:21.578 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:49:21 compute-0 nova_compute[348325]: 2025-12-03 18:49:21.579 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:49:21 compute-0 nova_compute[348325]: 2025-12-03 18:49:21.579 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 18:49:21 compute-0 nova_compute[348325]: 2025-12-03 18:49:21.579 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:49:22 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:49:22 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3131213948' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:49:22 compute-0 nova_compute[348325]: 2025-12-03 18:49:22.073 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
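With RBD-backed storage the resource tracker sizes its disk pool by shelling out to ceph df, which is the 0.493 s subprocess above. A sketch of the same probe (field names follow ceph df --format=json output; error handling omitted):

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)['stats']
    print(stats['total_avail_bytes'] / 2**30, 'GiB available')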
Dec  3 18:49:22 compute-0 nova_compute[348325]: 2025-12-03 18:49:22.744 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 18:49:22 compute-0 nova_compute[348325]: 2025-12-03 18:49:22.744 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 18:49:22 compute-0 nova_compute[348325]: 2025-12-03 18:49:22.745 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 18:49:22 compute-0 nova_compute[348325]: 2025-12-03 18:49:22.751 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 18:49:22 compute-0 nova_compute[348325]: 2025-12-03 18:49:22.751 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 18:49:22 compute-0 nova_compute[348325]: 2025-12-03 18:49:22.752 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 18:49:22 compute-0 nova_compute[348325]: 2025-12-03 18:49:22.759 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 18:49:22 compute-0 nova_compute[348325]: 2025-12-03 18:49:22.759 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 18:49:22 compute-0 nova_compute[348325]: 2025-12-03 18:49:22.759 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 18:49:22 compute-0 nova_compute[348325]: 2025-12-03 18:49:22.972 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:49:23 compute-0 nova_compute[348325]: 2025-12-03 18:49:23.224 348329 WARNING nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 18:49:23 compute-0 nova_compute[348325]: 2025-12-03 18:49:23.226 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3413MB free_disk=59.88887023925781GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 18:49:23 compute-0 nova_compute[348325]: 2025-12-03 18:49:23.226 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:49:23 compute-0 nova_compute[348325]: 2025-12-03 18:49:23.226 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:49:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:49:23.345 286999 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:49:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:49:23.346 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:49:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:49:23.346 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:49:23 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1503: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:49:23 compute-0 nova_compute[348325]: 2025-12-03 18:49:23.463 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance 1ca1fbdb-089c-4544-821e-0542089b8424 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 18:49:23 compute-0 nova_compute[348325]: 2025-12-03 18:49:23.464 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance de3992c5-c1ad-4da3-9276-954d6365c3c9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 18:49:23 compute-0 nova_compute[348325]: 2025-12-03 18:49:23.464 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance a6019a9c-c065-49d8-bef3-219bd2c79d8c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 18:49:23 compute-0 nova_compute[348325]: 2025-12-03 18:49:23.464 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 18:49:23 compute-0 nova_compute[348325]: 2025-12-03 18:49:23.464 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=59GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
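The final view is internally consistent: the three placement allocations logged just above, plus the 512 MB host reservation visible in the inventory a few lines below, account for every "used" figure.

    allocs = [{'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}] * 3
    used_ram   = sum(a['MEMORY_MB'] for a in allocs) + 512  # 2048 MB
    used_disk  = sum(a['DISK_GB'] for a in allocs)          # 6 GB
    used_vcpus = sum(a['VCPU'] for a in allocs)             # 3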
Dec  3 18:49:23 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:49:23 compute-0 nova_compute[348325]: 2025-12-03 18:49:23.690 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:49:24 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:49:24 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3887057965' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:49:24 compute-0 nova_compute[348325]: 2025-12-03 18:49:24.121 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:49:24 compute-0 nova_compute[348325]: 2025-12-03 18:49:24.131 348329 DEBUG nova.compute.provider_tree [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 18:49:24 compute-0 nova_compute[348325]: 2025-12-03 18:49:24.152 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
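Placement derives schedulable capacity per resource class as (total - reserved) * allocation_ratio, so this inventory advertises more vCPUs and slightly less disk than physically present:

    inv = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, d in inv.items():
        print(rc, (d['total'] - d['reserved']) * d['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2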
Dec  3 18:49:24 compute-0 nova_compute[348325]: 2025-12-03 18:49:24.154 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 18:49:24 compute-0 nova_compute[348325]: 2025-12-03 18:49:24.155 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.928s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:49:24 compute-0 nova_compute[348325]: 2025-12-03 18:49:24.155 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:49:24 compute-0 nova_compute[348325]: 2025-12-03 18:49:24.156 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  3 18:49:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 18:49:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:49:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 18:49:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:49:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0016571738458032168 of space, bias 1.0, pg target 0.49715215374096505 quantized to 32 (current 32)
Dec  3 18:49:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:49:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:49:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:49:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:49:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:49:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec  3 18:49:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:49:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 18:49:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:49:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:49:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:49:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 18:49:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:49:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 18:49:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:49:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:49:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:49:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
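The raw "pg target" figures above are reproducible as usage_ratio * bias * 300, where 300 is an inference from the logged values (the default mon_target_pg_per_osd of 100 times what appears to be 3 OSDs); the raw target is then quantized to a power of two, subject to pool minimums. Spot-check:

    pools = {
        '.mgr':               (7.185749983720779e-06, 1.0),
        'vms':                (0.0016571738458032168, 1.0),
        'cephfs.cephfs.meta': (5.087256625643029e-07, 4.0),
    }
    for name, (usage, bias) in pools.items():
        print(name, usage * bias * 300)  # matches the logged "pg target"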
Dec  3 18:49:24 compute-0 nova_compute[348325]: 2025-12-03 18:49:24.653 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:49:25 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1504: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:49:27 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1505: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:49:27 compute-0 nova_compute[348325]: 2025-12-03 18:49:27.977 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:49:28 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:49:29 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1506: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:49:29 compute-0 nova_compute[348325]: 2025-12-03 18:49:29.658 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:49:29 compute-0 podman[158200]: time="2025-12-03T18:49:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:49:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:49:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 18:49:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:49:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8652 "" "Go-http-client/1.1"
Dec  3 18:49:31 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1507: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:49:31 compute-0 openstack_network_exporter[365222]: ERROR   18:49:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:49:31 compute-0 openstack_network_exporter[365222]: ERROR   18:49:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:49:31 compute-0 openstack_network_exporter[365222]: ERROR   18:49:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:49:31 compute-0 openstack_network_exporter[365222]: ERROR   18:49:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:49:31 compute-0 openstack_network_exporter[365222]: ERROR   18:49:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
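These errors repeat on every scrape: the exporter probes for ovn-northd and ovsdb-server control sockets, which exist only on nodes actually running those daemons, so on a compute node (which runs ovn-controller, not ovn-northd) the lookups are expected to fail. The probe amounts to a glob for *.ctl files; the run directories below are the conventional ones and are an assumption here:

    import glob

    def find_ctl(daemon, rundirs=('/run/ovn', '/run/openvswitch')):
        for d in rundirs:
            hits = glob.glob(f'{d}/{daemon}.*.ctl')
            if hits:
                return hits[0]
        return None  # -> "no control socket files found for <daemon>"

    print(find_ctl('ovn-northd'))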
Dec  3 18:49:32 compute-0 podman[428916]: 2025-12-03 18:49:32.941258337 +0000 UTC m=+0.104329664 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  3 18:49:32 compute-0 podman[428917]: 2025-12-03 18:49:32.94501523 +0000 UTC m=+0.102955972 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 18:49:32 compute-0 podman[428918]: 2025-12-03 18:49:32.965270595 +0000 UTC m=+0.105521004 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., version=9.6, config_id=edpm, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41)
Dec  3 18:49:32 compute-0 nova_compute[348325]: 2025-12-03 18:49:32.979 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:49:33 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1508: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:49:33 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:49:34 compute-0 nova_compute[348325]: 2025-12-03 18:49:34.663 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:49:34 compute-0 podman[428975]: 2025-12-03 18:49:34.908483927 +0000 UTC m=+0.071565962 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, version=9.4, container_name=kepler, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-type=git, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public)
Dec  3 18:49:34 compute-0 podman[428977]: 2025-12-03 18:49:34.916293959 +0000 UTC m=+0.071866310 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 18:49:34 compute-0 podman[428976]: 2025-12-03 18:49:34.932340491 +0000 UTC m=+0.079542547 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  3 18:49:35 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1509: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:49:37 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1510: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:49:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 18:49:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/187191286' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 18:49:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 18:49:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/187191286' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 18:49:37 compute-0 nova_compute[348325]: 2025-12-03 18:49:37.982 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:49:38 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:49:39 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1511: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:49:39 compute-0 nova_compute[348325]: 2025-12-03 18:49:39.668 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:49:41 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1512: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:49:42 compute-0 nova_compute[348325]: 2025-12-03 18:49:42.986 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:49:43 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1513: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:49:43 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:49:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:49:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:49:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:49:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:49:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:49:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:49:44 compute-0 nova_compute[348325]: 2025-12-03 18:49:44.671 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:49:45 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1514: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:49:45 compute-0 podman[429027]: 2025-12-03 18:49:45.957438067 +0000 UTC m=+0.104012258 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  3 18:49:47 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1515: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:49:47 compute-0 podman[429052]: 2025-12-03 18:49:47.935021235 +0000 UTC m=+0.092645309 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, io.buildah.version=1.41.4, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  3 18:49:47 compute-0 podman[429051]: 2025-12-03 18:49:47.964572148 +0000 UTC m=+0.126432866 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Dec  3 18:49:47 compute-0 nova_compute[348325]: 2025-12-03 18:49:47.988 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:49:48 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:49:49 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1516: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:49:49 compute-0 nova_compute[348325]: 2025-12-03 18:49:49.674 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:49:51 compute-0 podman[429266]: 2025-12-03 18:49:51.068180449 +0000 UTC m=+0.089155974 container exec c4418ca0ee5df95c133db330bc8714b98e7c86be83b29540d0d4d94c3c723743 (image=quay.io/ceph/ceph:v18, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mon-compute-0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:49:51 compute-0 podman[429266]: 2025-12-03 18:49:51.188505294 +0000 UTC m=+0.209480819 container exec_died c4418ca0ee5df95c133db330bc8714b98e7c86be83b29540d0d4d94c3c723743 (image=quay.io/ceph/ceph:v18, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 18:49:51 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1517: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:49:52 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:49:52 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:49:52 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:49:52 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:49:52 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:49:52 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:49:52 compute-0 nova_compute[348325]: 2025-12-03 18:49:52.990 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:49:53 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:49:53 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:49:53 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 18:49:53 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:49:53 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 18:49:53 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:49:53 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 781d1ea3-a459-492f-a9d1-c98ae323683f does not exist
Dec  3 18:49:53 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 6dfbdf55-efd6-4392-994c-e293a334a1d9 does not exist
Dec  3 18:49:53 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 216fb82f-bec7-4d2a-9ae6-41e8babe747e does not exist
Dec  3 18:49:53 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 18:49:53 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 18:49:53 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 18:49:53 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:49:53 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:49:53 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:49:53 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1518: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:49:53 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:49:53 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:49:53 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:49:53 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:49:53 compute-0 podman[429688]: 2025-12-03 18:49:53.991343061 +0000 UTC m=+0.063237470 container create 13d952655050e715c17ca7dde1c05e827e8703feeba5ce7f02fb56b2424584e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:49:54 compute-0 podman[429688]: 2025-12-03 18:49:53.962391882 +0000 UTC m=+0.034286291 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:49:54 compute-0 systemd[1]: Started libpod-conmon-13d952655050e715c17ca7dde1c05e827e8703feeba5ce7f02fb56b2424584e6.scope.
Dec  3 18:49:54 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:49:54 compute-0 podman[429688]: 2025-12-03 18:49:54.146185761 +0000 UTC m=+0.218080180 container init 13d952655050e715c17ca7dde1c05e827e8703feeba5ce7f02fb56b2424584e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:49:54 compute-0 podman[429688]: 2025-12-03 18:49:54.155085639 +0000 UTC m=+0.226980008 container start 13d952655050e715c17ca7dde1c05e827e8703feeba5ce7f02fb56b2424584e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  3 18:49:54 compute-0 podman[429688]: 2025-12-03 18:49:54.160147603 +0000 UTC m=+0.232042032 container attach 13d952655050e715c17ca7dde1c05e827e8703feeba5ce7f02fb56b2424584e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  3 18:49:54 compute-0 relaxed_ishizaka[429704]: 167 167
Dec  3 18:49:54 compute-0 systemd[1]: libpod-13d952655050e715c17ca7dde1c05e827e8703feeba5ce7f02fb56b2424584e6.scope: Deactivated successfully.
Dec  3 18:49:54 compute-0 podman[429688]: 2025-12-03 18:49:54.165269699 +0000 UTC m=+0.237164108 container died 13d952655050e715c17ca7dde1c05e827e8703feeba5ce7f02fb56b2424584e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:49:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-bb1e1b5783a410302152aeb15496c48e4545fa9a651e6b9058c8f90891a14314-merged.mount: Deactivated successfully.
Dec  3 18:49:54 compute-0 podman[429688]: 2025-12-03 18:49:54.275941128 +0000 UTC m=+0.347835497 container remove 13d952655050e715c17ca7dde1c05e827e8703feeba5ce7f02fb56b2424584e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:49:54 compute-0 systemd[1]: libpod-conmon-13d952655050e715c17ca7dde1c05e827e8703feeba5ce7f02fb56b2424584e6.scope: Deactivated successfully.
Dec  3 18:49:54 compute-0 podman[429727]: 2025-12-03 18:49:54.481925751 +0000 UTC m=+0.060757818 container create 09e2de295fe3afe372fd7ac8776704fa96491ba2bfc075fa11f08a0195db0db7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bartik, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  3 18:49:54 compute-0 systemd[1]: Started libpod-conmon-09e2de295fe3afe372fd7ac8776704fa96491ba2bfc075fa11f08a0195db0db7.scope.
Dec  3 18:49:54 compute-0 podman[429727]: 2025-12-03 18:49:54.459159384 +0000 UTC m=+0.037991461 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:49:54 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:49:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/513c65bb4f63a13c0cc1ba9b1647aa7ddd42a86ff8fabf756f0801839db31883/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:49:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/513c65bb4f63a13c0cc1ba9b1647aa7ddd42a86ff8fabf756f0801839db31883/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:49:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/513c65bb4f63a13c0cc1ba9b1647aa7ddd42a86ff8fabf756f0801839db31883/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:49:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/513c65bb4f63a13c0cc1ba9b1647aa7ddd42a86ff8fabf756f0801839db31883/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:49:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/513c65bb4f63a13c0cc1ba9b1647aa7ddd42a86ff8fabf756f0801839db31883/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 18:49:54 compute-0 podman[429727]: 2025-12-03 18:49:54.586137941 +0000 UTC m=+0.164970038 container init 09e2de295fe3afe372fd7ac8776704fa96491ba2bfc075fa11f08a0195db0db7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bartik, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:49:54 compute-0 podman[429727]: 2025-12-03 18:49:54.599064359 +0000 UTC m=+0.177896426 container start 09e2de295fe3afe372fd7ac8776704fa96491ba2bfc075fa11f08a0195db0db7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bartik, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:49:54 compute-0 podman[429727]: 2025-12-03 18:49:54.60688313 +0000 UTC m=+0.185715197 container attach 09e2de295fe3afe372fd7ac8776704fa96491ba2bfc075fa11f08a0195db0db7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:49:54 compute-0 nova_compute[348325]: 2025-12-03 18:49:54.679 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:49:55 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1519: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:49:55 compute-0 trusting_bartik[429743]: --> passed data devices: 0 physical, 3 LVM
Dec  3 18:49:55 compute-0 trusting_bartik[429743]: --> relative data size: 1.0
Dec  3 18:49:55 compute-0 trusting_bartik[429743]: --> All data devices are unavailable
Dec  3 18:49:55 compute-0 systemd[1]: libpod-09e2de295fe3afe372fd7ac8776704fa96491ba2bfc075fa11f08a0195db0db7.scope: Deactivated successfully.
Dec  3 18:49:55 compute-0 systemd[1]: libpod-09e2de295fe3afe372fd7ac8776704fa96491ba2bfc075fa11f08a0195db0db7.scope: Consumed 1.052s CPU time.
Dec  3 18:49:55 compute-0 podman[429772]: 2025-12-03 18:49:55.796330109 +0000 UTC m=+0.036957367 container died 09e2de295fe3afe372fd7ac8776704fa96491ba2bfc075fa11f08a0195db0db7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 18:49:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-513c65bb4f63a13c0cc1ba9b1647aa7ddd42a86ff8fabf756f0801839db31883-merged.mount: Deactivated successfully.
Dec  3 18:49:55 compute-0 podman[429772]: 2025-12-03 18:49:55.868240009 +0000 UTC m=+0.108867247 container remove 09e2de295fe3afe372fd7ac8776704fa96491ba2bfc075fa11f08a0195db0db7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:49:55 compute-0 systemd[1]: libpod-conmon-09e2de295fe3afe372fd7ac8776704fa96491ba2bfc075fa11f08a0195db0db7.scope: Deactivated successfully.
Dec  3 18:49:56 compute-0 podman[429928]: 2025-12-03 18:49:56.669475133 +0000 UTC m=+0.053099891 container create 0e4bfdeaef4668ce4e5cdb0b47b93d86a9b012d3fabd75b4e7533bcb8218e8fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mcclintock, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Dec  3 18:49:56 compute-0 podman[429928]: 2025-12-03 18:49:56.647799923 +0000 UTC m=+0.031424691 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:49:56 compute-0 systemd[1]: Started libpod-conmon-0e4bfdeaef4668ce4e5cdb0b47b93d86a9b012d3fabd75b4e7533bcb8218e8fa.scope.
Dec  3 18:49:56 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:49:56 compute-0 podman[429928]: 2025-12-03 18:49:56.846868787 +0000 UTC m=+0.230493585 container init 0e4bfdeaef4668ce4e5cdb0b47b93d86a9b012d3fabd75b4e7533bcb8218e8fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mcclintock, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec  3 18:49:56 compute-0 podman[429928]: 2025-12-03 18:49:56.8568033 +0000 UTC m=+0.240428058 container start 0e4bfdeaef4668ce4e5cdb0b47b93d86a9b012d3fabd75b4e7533bcb8218e8fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mcclintock, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Dec  3 18:49:56 compute-0 podman[429928]: 2025-12-03 18:49:56.861623828 +0000 UTC m=+0.245248636 container attach 0e4bfdeaef4668ce4e5cdb0b47b93d86a9b012d3fabd75b4e7533bcb8218e8fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mcclintock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:49:56 compute-0 sweet_mcclintock[429943]: 167 167
Dec  3 18:49:56 compute-0 systemd[1]: libpod-0e4bfdeaef4668ce4e5cdb0b47b93d86a9b012d3fabd75b4e7533bcb8218e8fa.scope: Deactivated successfully.
Dec  3 18:49:56 compute-0 podman[429928]: 2025-12-03 18:49:56.867157253 +0000 UTC m=+0.250782051 container died 0e4bfdeaef4668ce4e5cdb0b47b93d86a9b012d3fabd75b4e7533bcb8218e8fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mcclintock, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec  3 18:49:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-c448f2c29b0bbb9652870a010fe02bb33d030f305b3f446871f024fa27e26a53-merged.mount: Deactivated successfully.
Dec  3 18:49:56 compute-0 podman[429928]: 2025-12-03 18:49:56.93811903 +0000 UTC m=+0.321743788 container remove 0e4bfdeaef4668ce4e5cdb0b47b93d86a9b012d3fabd75b4e7533bcb8218e8fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mcclintock, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec  3 18:49:56 compute-0 systemd[1]: libpod-conmon-0e4bfdeaef4668ce4e5cdb0b47b93d86a9b012d3fabd75b4e7533bcb8218e8fa.scope: Deactivated successfully.
Dec  3 18:49:57 compute-0 podman[429966]: 2025-12-03 18:49:57.145875487 +0000 UTC m=+0.057588402 container create abb5bf1bdbb60efe8acfd90b75957acf14b6c97774e057c57d4962e79f59b6c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:49:57 compute-0 systemd[1]: Started libpod-conmon-abb5bf1bdbb60efe8acfd90b75957acf14b6c97774e057c57d4962e79f59b6c0.scope.
Dec  3 18:49:57 compute-0 podman[429966]: 2025-12-03 18:49:57.121326555 +0000 UTC m=+0.033039520 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:49:57 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:49:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c269d92e4fe15c107ba8400b614612a48637193e880806556fd42b5a9d14038/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:49:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c269d92e4fe15c107ba8400b614612a48637193e880806556fd42b5a9d14038/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:49:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c269d92e4fe15c107ba8400b614612a48637193e880806556fd42b5a9d14038/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:49:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c269d92e4fe15c107ba8400b614612a48637193e880806556fd42b5a9d14038/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:49:57 compute-0 podman[429966]: 2025-12-03 18:49:57.28041617 +0000 UTC m=+0.192129105 container init abb5bf1bdbb60efe8acfd90b75957acf14b6c97774e057c57d4962e79f59b6c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_chatelet, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 18:49:57 compute-0 podman[429966]: 2025-12-03 18:49:57.293232454 +0000 UTC m=+0.204945369 container start abb5bf1bdbb60efe8acfd90b75957acf14b6c97774e057c57d4962e79f59b6c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_chatelet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  3 18:49:57 compute-0 podman[429966]: 2025-12-03 18:49:57.298085252 +0000 UTC m=+0.209798167 container attach abb5bf1bdbb60efe8acfd90b75957acf14b6c97774e057c57d4962e79f59b6c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_chatelet, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec  3 18:49:57 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1520: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:49:57 compute-0 nova_compute[348325]: 2025-12-03 18:49:57.992 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:49:58 compute-0 musing_chatelet[429982]: {
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:    "0": [
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:        {
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:            "devices": [
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:                "/dev/loop3"
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:            ],
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:            "lv_name": "ceph_lv0",
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:            "lv_size": "21470642176",
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=973fbbc8-5aff-4a53-bee8-42e5a6788dd6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:            "lv_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:            "name": "ceph_lv0",
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:            "tags": {
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:                "ceph.block_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:                "ceph.cluster_name": "ceph",
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:                "ceph.crush_device_class": "",
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:                "ceph.encrypted": "0",
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:                "ceph.osd_fsid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:                "ceph.osd_id": "0",
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:                "ceph.type": "block",
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:                "ceph.vdo": "0"
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:            },
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:            "type": "block",
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:            "vg_name": "ceph_vg0"
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:        }
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:    ],
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:    "1": [
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:        {
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:            "devices": [
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:                "/dev/loop4"
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:            ],
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:            "lv_name": "ceph_lv1",
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:            "lv_size": "21470642176",
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1e2b0083-5293-47cb-a3d1-bc27cedc4ede,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:            "lv_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:            "name": "ceph_lv1",
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:            "tags": {
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:                "ceph.block_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:                "ceph.cluster_name": "ceph",
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:                "ceph.crush_device_class": "",
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:                "ceph.encrypted": "0",
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:                "ceph.osd_fsid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:                "ceph.osd_id": "1",
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:                "ceph.type": "block",
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:                "ceph.vdo": "0"
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:            },
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:            "type": "block",
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:            "vg_name": "ceph_vg1"
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:        }
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:    ],
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:    "2": [
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:        {
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:            "devices": [
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:                "/dev/loop5"
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:            ],
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:            "lv_name": "ceph_lv2",
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:            "lv_size": "21470642176",
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2abec9de-afba-437e-9a17-384a1dd8cd50,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:            "lv_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:            "name": "ceph_lv2",
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:            "tags": {
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:                "ceph.block_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:                "ceph.cluster_name": "ceph",
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:                "ceph.crush_device_class": "",
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:                "ceph.encrypted": "0",
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:                "ceph.osd_fsid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:                "ceph.osd_id": "2",
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:                "ceph.type": "block",
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:                "ceph.vdo": "0"
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:            },
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:            "type": "block",
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:            "vg_name": "ceph_vg2"
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:        }
Dec  3 18:49:58 compute-0 musing_chatelet[429982]:    ]
Dec  3 18:49:58 compute-0 musing_chatelet[429982]: }
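The JSON dump closed above, printed by the short-lived cephadm helper container, matches the shape of `ceph-volume lvm list --format json`: a map from OSD id to its logical volumes, with the ceph.* LVM tags present both flattened in `lv_tags` and parsed under `tags`. A minimal consumer sketch, assuming the dump was saved to a hypothetical file `lvm_list.json`:

```python
# Minimal sketch: map OSD ids to their LV paths from the ceph-volume
# lvm list JSON captured above. The file name lvm_list.json is hypothetical.
import json

with open("lvm_list.json") as f:
    osds = json.load(f)  # {"0": [{...}], "1": [{...}], "2": [{...}]}

for osd_id, lvs in sorted(osds.items(), key=lambda kv: int(kv[0])):
    for lv in lvs:
        tags = lv["tags"]  # parsed form of the flattened lv_tags string
        print(f"osd.{osd_id}: {lv['lv_path']} "
              f"(osd_fsid={tags['ceph.osd_fsid']}, "
              f"devices={','.join(lv['devices'])})")
```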
Dec  3 18:49:58 compute-0 systemd[1]: libpod-abb5bf1bdbb60efe8acfd90b75957acf14b6c97774e057c57d4962e79f59b6c0.scope: Deactivated successfully.
Dec  3 18:49:58 compute-0 podman[429966]: 2025-12-03 18:49:58.086417881 +0000 UTC m=+0.998130806 container died abb5bf1bdbb60efe8acfd90b75957acf14b6c97774e057c57d4962e79f59b6c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default)
Dec  3 18:49:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c269d92e4fe15c107ba8400b614612a48637193e880806556fd42b5a9d14038-merged.mount: Deactivated successfully.
Dec  3 18:49:58 compute-0 podman[429966]: 2025-12-03 18:49:58.163597971 +0000 UTC m=+1.075310886 container remove abb5bf1bdbb60efe8acfd90b75957acf14b6c97774e057c57d4962e79f59b6c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_chatelet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default)
Dec  3 18:49:58 compute-0 systemd[1]: libpod-conmon-abb5bf1bdbb60efe8acfd90b75957acf14b6c97774e057c57d4962e79f59b6c0.scope: Deactivated successfully.
Dec  3 18:49:58 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:49:58 compute-0 podman[430141]: 2025-12-03 18:49:58.954841252 +0000 UTC m=+0.062912381 container create 124b20e126372d63080ec7d1aef6ccf99b13726b7d3b51b8c9f26deb56134a65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_lamport, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  3 18:49:58 compute-0 systemd[1]: Started libpod-conmon-124b20e126372d63080ec7d1aef6ccf99b13726b7d3b51b8c9f26deb56134a65.scope.
Dec  3 18:49:59 compute-0 podman[430141]: 2025-12-03 18:49:58.932518835 +0000 UTC m=+0.040589994 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:49:59 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:49:59 compute-0 podman[430141]: 2025-12-03 18:49:59.04586578 +0000 UTC m=+0.153936929 container init 124b20e126372d63080ec7d1aef6ccf99b13726b7d3b51b8c9f26deb56134a65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_lamport, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  3 18:49:59 compute-0 podman[430141]: 2025-12-03 18:49:59.055877445 +0000 UTC m=+0.163948574 container start 124b20e126372d63080ec7d1aef6ccf99b13726b7d3b51b8c9f26deb56134a65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec  3 18:49:59 compute-0 podman[430141]: 2025-12-03 18:49:59.060352055 +0000 UTC m=+0.168423214 container attach 124b20e126372d63080ec7d1aef6ccf99b13726b7d3b51b8c9f26deb56134a65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_lamport, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:49:59 compute-0 nostalgic_lamport[430157]: 167 167
Dec  3 18:49:59 compute-0 systemd[1]: libpod-124b20e126372d63080ec7d1aef6ccf99b13726b7d3b51b8c9f26deb56134a65.scope: Deactivated successfully.
Dec  3 18:49:59 compute-0 podman[430141]: 2025-12-03 18:49:59.063374008 +0000 UTC m=+0.171445157 container died 124b20e126372d63080ec7d1aef6ccf99b13726b7d3b51b8c9f26deb56134a65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3)
Dec  3 18:49:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-b315427251b29e96b98d4568db080579054bc2835821e2b74bdbc84fe67de889-merged.mount: Deactivated successfully.
Dec  3 18:49:59 compute-0 podman[430141]: 2025-12-03 18:49:59.110123062 +0000 UTC m=+0.218194191 container remove 124b20e126372d63080ec7d1aef6ccf99b13726b7d3b51b8c9f26deb56134a65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_lamport, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec  3 18:49:59 compute-0 systemd[1]: libpod-conmon-124b20e126372d63080ec7d1aef6ccf99b13726b7d3b51b8c9f26deb56134a65.scope: Deactivated successfully.
Dec  3 18:49:59 compute-0 podman[430181]: 2025-12-03 18:49:59.312803404 +0000 UTC m=+0.056967654 container create 6b83b5a2da758e3dc1999b2dcbefa061f26fcbd51674bc692cdf3a97fc41751b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_liskov, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 18:49:59 compute-0 systemd[1]: Started libpod-conmon-6b83b5a2da758e3dc1999b2dcbefa061f26fcbd51674bc692cdf3a97fc41751b.scope.
Dec  3 18:49:59 compute-0 podman[430181]: 2025-12-03 18:49:59.287375682 +0000 UTC m=+0.031539962 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:49:59 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:49:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33c4a3a479f3e7f6da5fea85be1cd135f7abff7e6e6e3134d2ad46eda90fe8c5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:49:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33c4a3a479f3e7f6da5fea85be1cd135f7abff7e6e6e3134d2ad46eda90fe8c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:49:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33c4a3a479f3e7f6da5fea85be1cd135f7abff7e6e6e3134d2ad46eda90fe8c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:49:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33c4a3a479f3e7f6da5fea85be1cd135f7abff7e6e6e3134d2ad46eda90fe8c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:49:59 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1521: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:49:59 compute-0 podman[430181]: 2025-12-03 18:49:59.439002164 +0000 UTC m=+0.183166434 container init 6b83b5a2da758e3dc1999b2dcbefa061f26fcbd51674bc692cdf3a97fc41751b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_liskov, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:49:59 compute-0 podman[430181]: 2025-12-03 18:49:59.456656316 +0000 UTC m=+0.200820566 container start 6b83b5a2da758e3dc1999b2dcbefa061f26fcbd51674bc692cdf3a97fc41751b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_liskov, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:49:59 compute-0 podman[430181]: 2025-12-03 18:49:59.461096805 +0000 UTC m=+0.205261055 container attach 6b83b5a2da758e3dc1999b2dcbefa061f26fcbd51674bc692cdf3a97fc41751b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_liskov, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  3 18:49:59 compute-0 nova_compute[348325]: 2025-12-03 18:49:59.682 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:49:59 compute-0 podman[158200]: time="2025-12-03T18:49:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:49:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:49:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45378 "" "Go-http-client/1.1"
Dec  3 18:49:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:49:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9052 "" "Go-http-client/1.1"
Dec  3 18:50:00 compute-0 lucid_liskov[430197]: {
Dec  3 18:50:00 compute-0 lucid_liskov[430197]:    "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
Dec  3 18:50:00 compute-0 lucid_liskov[430197]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:50:00 compute-0 lucid_liskov[430197]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 18:50:00 compute-0 lucid_liskov[430197]:        "osd_id": 1,
Dec  3 18:50:00 compute-0 lucid_liskov[430197]:        "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:50:00 compute-0 lucid_liskov[430197]:        "type": "bluestore"
Dec  3 18:50:00 compute-0 lucid_liskov[430197]:    },
Dec  3 18:50:00 compute-0 lucid_liskov[430197]:    "2abec9de-afba-437e-9a17-384a1dd8cd50": {
Dec  3 18:50:00 compute-0 lucid_liskov[430197]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:50:00 compute-0 lucid_liskov[430197]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 18:50:00 compute-0 lucid_liskov[430197]:        "osd_id": 2,
Dec  3 18:50:00 compute-0 lucid_liskov[430197]:        "osd_uuid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:50:00 compute-0 lucid_liskov[430197]:        "type": "bluestore"
Dec  3 18:50:00 compute-0 lucid_liskov[430197]:    },
Dec  3 18:50:00 compute-0 lucid_liskov[430197]:    "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": {
Dec  3 18:50:00 compute-0 lucid_liskov[430197]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:50:00 compute-0 lucid_liskov[430197]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 18:50:00 compute-0 lucid_liskov[430197]:        "osd_id": 0,
Dec  3 18:50:00 compute-0 lucid_liskov[430197]:        "osd_uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:50:00 compute-0 lucid_liskov[430197]:        "type": "bluestore"
Dec  3 18:50:00 compute-0 lucid_liskov[430197]:    }
Dec  3 18:50:00 compute-0 lucid_liskov[430197]: }
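This second dump, keyed by OSD fsid with the device-mapper path and `"type": "bluestore"`, is consistent with `ceph-volume raw list --format json`. Cross-checking it against the LVM listing captured earlier confirms the two inventories agree on osd_id and osd_fsid; a minimal sketch, reusing the hypothetical `lvm_list.json` from above and assuming this output was saved as `raw_list.json`:

```python
# Minimal sketch: verify the bluestore listing (keyed by osd_uuid) agrees
# with the LVM listing (keyed by osd_id). Both file names are hypothetical.
import json

with open("lvm_list.json") as f:
    lvm = json.load(f)
with open("raw_list.json") as f:
    raw = json.load(f)

for osd_uuid, info in sorted(raw.items(), key=lambda kv: kv[1]["osd_id"]):
    osd_id = str(info["osd_id"])
    tags = lvm[osd_id][0]["tags"]
    assert tags["ceph.osd_fsid"] == osd_uuid, f"fsid mismatch for osd.{osd_id}"
    print(f"osd.{osd_id} ok: {info['device']} ({info['type']})")
```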
Dec  3 18:50:00 compute-0 systemd[1]: libpod-6b83b5a2da758e3dc1999b2dcbefa061f26fcbd51674bc692cdf3a97fc41751b.scope: Deactivated successfully.
Dec  3 18:50:00 compute-0 podman[430181]: 2025-12-03 18:50:00.506132768 +0000 UTC m=+1.250297018 container died 6b83b5a2da758e3dc1999b2dcbefa061f26fcbd51674bc692cdf3a97fc41751b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  3 18:50:00 compute-0 systemd[1]: libpod-6b83b5a2da758e3dc1999b2dcbefa061f26fcbd51674bc692cdf3a97fc41751b.scope: Consumed 1.030s CPU time.
Dec  3 18:50:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-33c4a3a479f3e7f6da5fea85be1cd135f7abff7e6e6e3134d2ad46eda90fe8c5-merged.mount: Deactivated successfully.
Dec  3 18:50:00 compute-0 podman[430181]: 2025-12-03 18:50:00.586012634 +0000 UTC m=+1.330176884 container remove 6b83b5a2da758e3dc1999b2dcbefa061f26fcbd51674bc692cdf3a97fc41751b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_liskov, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Dec  3 18:50:00 compute-0 systemd[1]: libpod-conmon-6b83b5a2da758e3dc1999b2dcbefa061f26fcbd51674bc692cdf3a97fc41751b.scope: Deactivated successfully.
Dec  3 18:50:00 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:50:00 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:50:00 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:50:00 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:50:00 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev d71af52e-7861-410c-8bec-9f94f9394934 does not exist
Dec  3 18:50:00 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev e032d719-0d87-4764-a27b-ffd621dda109 does not exist
Dec  3 18:50:01 compute-0 openstack_network_exporter[365222]: ERROR   18:50:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:50:01 compute-0 openstack_network_exporter[365222]: ERROR   18:50:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:50:01 compute-0 openstack_network_exporter[365222]: ERROR   18:50:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:50:01 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1522: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:50:01 compute-0 openstack_network_exporter[365222]: ERROR   18:50:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:50:01 compute-0 openstack_network_exporter[365222]: 
Dec  3 18:50:01 compute-0 openstack_network_exporter[365222]: ERROR   18:50:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:50:01 compute-0 openstack_network_exporter[365222]: 
Dec  3 18:50:01 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:50:01 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:50:02 compute-0 nova_compute[348325]: 2025-12-03 18:50:02.994 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:50:03 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1523: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:50:03 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:50:03 compute-0 podman[430298]: 2025-12-03 18:50:03.931493096 +0000 UTC m=+0.094327620 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 18:50:03 compute-0 podman[430297]: 2025-12-03 18:50:03.945669783 +0000 UTC m=+0.110692511 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  3 18:50:03 compute-0 podman[430299]: 2025-12-03 18:50:03.961556832 +0000 UTC m=+0.115052578 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., architecture=x86_64, config_id=edpm, distribution-scope=public, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., io.buildah.version=1.33.7)
Dec  3 18:50:04 compute-0 nova_compute[348325]: 2025-12-03 18:50:04.686 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:50:05 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1524: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:50:05 compute-0 podman[430357]: 2025-12-03 18:50:05.953858586 +0000 UTC m=+0.097869497 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.openshift.expose-services=, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, release=1214.1726694543, release-0.7.12=, container_name=kepler, vcs-type=git, version=9.4, distribution-scope=public, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc.)
Dec  3 18:50:05 compute-0 podman[430358]: 2025-12-03 18:50:05.956688045 +0000 UTC m=+0.096532544 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3)
Dec  3 18:50:05 compute-0 podman[430359]: 2025-12-03 18:50:05.967285015 +0000 UTC m=+0.110014884 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
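One detail worth noting in the health_status events above: the embedded `config_data` label is a Python dict literal (single quotes, bare `True`), not JSON, so `json.loads` rejects it while `ast.literal_eval` parses it safely. A minimal sketch on a shortened, illustrative value:

```python
# Minimal sketch: parse the config_data label from a podman health_status
# event. The value below is a shortened, illustrative excerpt, not the
# full label from the log.
import ast

config_data = ("{'image': 'quay.io/prometheus/node-exporter:v1.5.0', "
               "'restart': 'always', 'privileged': True, "
               "'ports': ['9100:9100']}")
cfg = ast.literal_eval(config_data)  # dict literal, so literal_eval applies
print(cfg["image"], cfg["ports"])
```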
Dec  3 18:50:07 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1525: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:50:07 compute-0 nova_compute[348325]: 2025-12-03 18:50:07.997 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:50:08 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:50:09 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1526: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:50:09 compute-0 nova_compute[348325]: 2025-12-03 18:50:09.689 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:50:10 compute-0 nova_compute[348325]: 2025-12-03 18:50:10.164 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:50:11 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1527: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:50:11 compute-0 nova_compute[348325]: 2025-12-03 18:50:11.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:50:12 compute-0 nova_compute[348325]: 2025-12-03 18:50:12.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:50:13 compute-0 nova_compute[348325]: 2025-12-03 18:50:13.001 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:50:13 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1528: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:50:13 compute-0 nova_compute[348325]: 2025-12-03 18:50:13.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:50:13 compute-0 nova_compute[348325]: 2025-12-03 18:50:13.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:50:13 compute-0 nova_compute[348325]: 2025-12-03 18:50:13.488 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:50:13 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:50:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_18:50:13
Dec  3 18:50:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 18:50:13 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 18:50:13 compute-0 ceph-mgr[193091]: [balancer INFO root] pools ['vms', 'volumes', 'default.rgw.control', '.mgr', 'images', 'backups', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.log', 'cephfs.cephfs.meta', '.rgw.root']
Dec  3 18:50:13 compute-0 ceph-mgr[193091]: [balancer INFO root] prepared 0/10 changes
Dec  3 18:50:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:50:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:50:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:50:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:50:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:50:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:50:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 18:50:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:50:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 18:50:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:50:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:50:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:50:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:50:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:50:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:50:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:50:14 compute-0 nova_compute[348325]: 2025-12-03 18:50:14.691 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:50:14 compute-0 nova_compute[348325]: 2025-12-03 18:50:14.894 348329 DEBUG oslo_concurrency.lockutils [None req-a7c1f3d7-ef2f-42f1-b4fb-8e7972e719bc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Acquiring lock "de3992c5-c1ad-4da3-9276-954d6365c3c9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:50:14 compute-0 nova_compute[348325]: 2025-12-03 18:50:14.895 348329 DEBUG oslo_concurrency.lockutils [None req-a7c1f3d7-ef2f-42f1-b4fb-8e7972e719bc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "de3992c5-c1ad-4da3-9276-954d6365c3c9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:50:14 compute-0 nova_compute[348325]: 2025-12-03 18:50:14.895 348329 DEBUG oslo_concurrency.lockutils [None req-a7c1f3d7-ef2f-42f1-b4fb-8e7972e719bc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Acquiring lock "de3992c5-c1ad-4da3-9276-954d6365c3c9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:50:14 compute-0 nova_compute[348325]: 2025-12-03 18:50:14.896 348329 DEBUG oslo_concurrency.lockutils [None req-a7c1f3d7-ef2f-42f1-b4fb-8e7972e719bc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "de3992c5-c1ad-4da3-9276-954d6365c3c9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:50:14 compute-0 nova_compute[348325]: 2025-12-03 18:50:14.896 348329 DEBUG oslo_concurrency.lockutils [None req-a7c1f3d7-ef2f-42f1-b4fb-8e7972e719bc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "de3992c5-c1ad-4da3-9276-954d6365c3c9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:50:14 compute-0 nova_compute[348325]: 2025-12-03 18:50:14.898 348329 INFO nova.compute.manager [None req-a7c1f3d7-ef2f-42f1-b4fb-8e7972e719bc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] Terminating instance#033[00m
Dec  3 18:50:14 compute-0 nova_compute[348325]: 2025-12-03 18:50:14.900 348329 DEBUG nova.compute.manager [None req-a7c1f3d7-ef2f-42f1-b4fb-8e7972e719bc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
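The Acquiring/acquired/released lines above show the oslo.concurrency named-lock pattern nova-compute uses during teardown: one lock named after the instance UUID serializes do_terminate_instance, and a second short-lived `<uuid>-events` lock guards clearing pending external events. A minimal sketch of the same decorator API; the UUID and function body here are illustrative only:

```python
# Minimal sketch of the oslo.concurrency named-lock pattern seen above:
# concurrent calls for the same lock name serialize, different names
# proceed in parallel.
from oslo_concurrency import lockutils

INSTANCE_UUID = "de3992c5-c1ad-4da3-9276-954d6365c3c9"  # from the log above

@lockutils.synchronized(INSTANCE_UUID)
def do_terminate_instance():
    # body runs with the per-instance named lock held
    print("terminating under lock")

do_terminate_instance()
```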
Dec  3 18:50:15 compute-0 kernel: tapd2dfa631-e5 (unregistering): left promiscuous mode
Dec  3 18:50:15 compute-0 NetworkManager[49087]: <info>  [1764787815.0471] device (tapd2dfa631-e5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  3 18:50:15 compute-0 ovn_controller[89305]: 2025-12-03T18:50:15Z|00054|binding|INFO|Releasing lport d2dfa631-e553-46bc-bc20-3f0bdd977328 from this chassis (sb_readonly=0)
Dec  3 18:50:15 compute-0 ovn_controller[89305]: 2025-12-03T18:50:15Z|00055|binding|INFO|Setting lport d2dfa631-e553-46bc-bc20-3f0bdd977328 down in Southbound
Dec  3 18:50:15 compute-0 nova_compute[348325]: 2025-12-03 18:50:15.058 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:50:15 compute-0 ovn_controller[89305]: 2025-12-03T18:50:15Z|00056|binding|INFO|Removing iface tapd2dfa631-e5 ovn-installed in OVS
Dec  3 18:50:15 compute-0 nova_compute[348325]: 2025-12-03 18:50:15.066 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:50:15 compute-0 nova_compute[348325]: 2025-12-03 18:50:15.083 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:50:15 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:50:15.096 286999 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:73:73 192.168.0.212'], port_security=['fa:16:3e:e6:73:73 192.168.0.212'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-4jfpq66btob3-t73jgstwyk5c-ol75pntdsuyz-port-s2gk2jkwdast', 'neutron:cidrs': '192.168.0.212/24', 'neutron:device_id': 'de3992c5-c1ad-4da3-9276-954d6365c3c9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-85c8d446-ad7f-4d1b-a311-89b0b07e8aad', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-4jfpq66btob3-t73jgstwyk5c-ol75pntdsuyz-port-s2gk2jkwdast', 'neutron:project_id': 'd2770200bdb2436c90142fa2e5ddcd47', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8e48052e-a2fd-4fc1-8ebd-22e3b6e0bd66', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=12999ead-9a54-49b3-a532-a5f8bdddaf16, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f81e3e96760>], logical_port=d2dfa631-e553-46bc-bc20-3f0bdd977328) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f81e3e96760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 18:50:15 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:50:15.097 286999 INFO neutron.agent.ovn.metadata.agent [-] Port d2dfa631-e553-46bc-bc20-3f0bdd977328 in datapath 85c8d446-ad7f-4d1b-a311-89b0b07e8aad unbound from our chassis#033[00m
Dec  3 18:50:15 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:50:15.100 286999 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 85c8d446-ad7f-4d1b-a311-89b0b07e8aad#033[00m
Dec  3 18:50:15 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:50:15.127 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[cce15794-468e-408a-b67b-8f1997b2cb60]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:50:15 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Deactivated successfully.
Dec  3 18:50:15 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Consumed 1min 36.085s CPU time.
Dec  3 18:50:15 compute-0 systemd-machined[138702]: Machine qemu-3-instance-00000003 terminated.
Dec  3 18:50:15 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:50:15.175 411797 DEBUG oslo.privsep.daemon [-] privsep: reply[f6ec2439-02cd-484d-a3fb-0f0408bd8fd6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:50:15 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:50:15.178 411797 DEBUG oslo.privsep.daemon [-] privsep: reply[7f50df61-ac57-4020-88cc-1bf9788f9bdb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:50:15 compute-0 nova_compute[348325]: 2025-12-03 18:50:15.199 348329 DEBUG nova.compute.manager [req-7ffa0223-6d25-4fd4-a9a2-bc737efdf113 req-1065f6cc-d738-4a96-acdb-40c2d1c43ec7 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] Received event network-changed-d2dfa631-e553-46bc-bc20-3f0bdd977328 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 18:50:15 compute-0 nova_compute[348325]: 2025-12-03 18:50:15.200 348329 DEBUG nova.compute.manager [req-7ffa0223-6d25-4fd4-a9a2-bc737efdf113 req-1065f6cc-d738-4a96-acdb-40c2d1c43ec7 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] Refreshing instance network info cache due to event network-changed-d2dfa631-e553-46bc-bc20-3f0bdd977328. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  3 18:50:15 compute-0 nova_compute[348325]: 2025-12-03 18:50:15.201 348329 DEBUG oslo_concurrency.lockutils [req-7ffa0223-6d25-4fd4-a9a2-bc737efdf113 req-1065f6cc-d738-4a96-acdb-40c2d1c43ec7 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "refresh_cache-de3992c5-c1ad-4da3-9276-954d6365c3c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 18:50:15 compute-0 nova_compute[348325]: 2025-12-03 18:50:15.201 348329 DEBUG oslo_concurrency.lockutils [req-7ffa0223-6d25-4fd4-a9a2-bc737efdf113 req-1065f6cc-d738-4a96-acdb-40c2d1c43ec7 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquired lock "refresh_cache-de3992c5-c1ad-4da3-9276-954d6365c3c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 18:50:15 compute-0 nova_compute[348325]: 2025-12-03 18:50:15.202 348329 DEBUG nova.network.neutron [req-7ffa0223-6d25-4fd4-a9a2-bc737efdf113 req-1065f6cc-d738-4a96-acdb-40c2d1c43ec7 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] Refreshing network info cache for port d2dfa631-e553-46bc-bc20-3f0bdd977328 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  3 18:50:15 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:50:15.210 411797 DEBUG oslo.privsep.daemon [-] privsep: reply[819c8303-69ac-47d2-bb40-318565b90043]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:50:15 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:50:15.231 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[53ffb1d4-a3bc-495d-9187-946c493542bc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap85c8d446-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:c1:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 7, 'tx_packets': 13, 'rx_bytes': 574, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 7, 'tx_packets': 13, 'rx_bytes': 574, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 527503, 'reachable_time': 35416, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 430421, 'error': None, 'target': 'ovnmeta-85c8d446-ad7f-4d1b-a311-89b0b07e8aad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:50:15 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:50:15.249 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[069d5942-6cab-4b7f-8ac8-90cc785c97d0]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap85c8d446-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 527519, 'tstamp': 527519}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 430422, 'error': None, 'target': 'ovnmeta-85c8d446-ad7f-4d1b-a311-89b0b07e8aad', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap85c8d446-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 527523, 'tstamp': 527523}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 430422, 'error': None, 'target': 'ovnmeta-85c8d446-ad7f-4d1b-a311-89b0b07e8aad', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
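The two RTM_NEWADDR records above confirm the metadata datapath: inside the ovnmeta-85c8d446-... namespace (the 'target' field of each netlink header), tap85c8d446-a1 carries both the subnet address 192.168.0.2/24 and the metadata VIP 169.254.169.254/32. A minimal sketch for verifying this from the host, assuming root and that the namespace still exists while the network is served on this node; the namespace name is copied verbatim from the log:

    import subprocess

    # Namespace name taken from the 'target' field of the netlink replies above.
    NS = "ovnmeta-85c8d446-ad7f-4d1b-a311-89b0b07e8aad"

    # Expect both 192.168.0.2/24 and 169.254.169.254/32 on tap85c8d446-a1.
    out = subprocess.run(
        ["ip", "netns", "exec", NS, "ip", "-4", "addr", "show"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(out)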
Dec  3 18:50:15 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:50:15.251 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85c8d446-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:50:15 compute-0 nova_compute[348325]: 2025-12-03 18:50:15.253 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:50:15 compute-0 nova_compute[348325]: 2025-12-03 18:50:15.260 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:50:15 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:50:15.260 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap85c8d446-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:50:15 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:50:15.261 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 18:50:15 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:50:15.262 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap85c8d446-a0, col_values=(('external_ids', {'iface-id': '4db8340d-afa3-4a82-bd51-bca0a752f53f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:50:15 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:50:15.262 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
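The four transaction lines above are the agent's idempotent rewiring of the metadata port: del_port with if_exists=True, add_port with may_exist=True, then a db_set stamping external_ids:iface-id. "Transaction caused no change" means the desired state already held, so OVSDB was left untouched. A sketch of the same sequence through ovsdbapp (the library emitting these DEBUG lines), assuming a local ovsdb-server socket at the default path; the endpoint and timeout here are illustrative:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Assumption: local ovsdb-server socket; adjust endpoint/timeout as needed.
    idl = connection.OvsdbIdl.from_server(
        "unix:/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # The same three idempotent commands as in the log, batched in one txn.
    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port("tap85c8d446-a0", bridge="br-ex", if_exists=True))
        txn.add(api.add_port("br-int", "tap85c8d446-a0", may_exist=True))
        txn.add(api.db_set(
            "Interface", "tap85c8d446-a0",
            ("external_ids", {"iface-id": "4db8340d-afa3-4a82-bd51-bca0a752f53f"})))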
Dec  3 18:50:15 compute-0 nova_compute[348325]: 2025-12-03 18:50:15.330 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:50:15 compute-0 nova_compute[348325]: 2025-12-03 18:50:15.340 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:50:15 compute-0 nova_compute[348325]: 2025-12-03 18:50:15.347 348329 INFO nova.virt.libvirt.driver [-] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] Instance destroyed successfully.#033[00m
Dec  3 18:50:15 compute-0 nova_compute[348325]: 2025-12-03 18:50:15.347 348329 DEBUG nova.objects.instance [None req-a7c1f3d7-ef2f-42f1-b4fb-8e7972e719bc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lazy-loading 'resources' on Instance uuid de3992c5-c1ad-4da3-9276-954d6365c3c9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 18:50:15 compute-0 nova_compute[348325]: 2025-12-03 18:50:15.361 348329 DEBUG nova.virt.libvirt.vif [None req-a7c1f3d7-ef2f-42f1-b4fb-8e7972e719bc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-03T18:42:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-66btob3-t73jgstwyk5c-ol75pntdsuyz-vnf-noho2adux65j',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-66btob3-t73jgstwyk5c-ol75pntdsuyz-vnf-noho2adux65j',id=3,image_ref='e68cd467-b4e6-45e0-8e55-984fda402294',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-03T18:42:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='b322e118-e1cc-40be-8d8c-553648144092'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d2770200bdb2436c90142fa2e5ddcd47',ramdisk_id='',reservation_id='r-lwasdd16',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader',image_base_image_ref='e68cd467-b4e6-45e0-8e55-984fda402294',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-03T18:42:28Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT00NzQxODkyODA1MjIxNzExOTYxPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTQ3NDE4OTI4MDUyMjE3MTE5NjE9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NDc0MTg5MjgwNTIyMTcxMTk2MT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91
dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTQ3NDE4OTI4MDUyMjE3MTE5NjE9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT00NzQxODkyODA1MjIxNzExOTYxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT00NzQxODkyODA1MjIxNzExOTYxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0U
tMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKC
Dec  3 18:50:15 compute-0 nova_compute[348325]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NDc0MTg5MjgwNTIyMTcxMTk2MT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTQ3NDE4OTI4MDUyMjE3MTE5NjE9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT00NzQxODkyODA1MjIxNzExOTYxPT0tLQo=',user_id='56338958b09445f5af9aa9e4601a1a8a',uuid=de3992c5-c1ad-4da3-9276-954d6365c3c9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d2dfa631-e553-46bc-bc20-3f0bdd977328", "address": "fa:16:3e:e6:73:73", "network": {"id": "85c8d446-ad7f-4d1b-a311-89b0b07e8aad", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.212", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "d2770200bdb2436c90142fa2e5ddcd47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2dfa631-e5", "ovs_interfaceid": "d2dfa631-e553-46bc-bc20-3f0bdd977328", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  3 18:50:15 compute-0 nova_compute[348325]: 2025-12-03 18:50:15.361 348329 DEBUG nova.network.os_vif_util [None req-a7c1f3d7-ef2f-42f1-b4fb-8e7972e719bc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Converting VIF {"id": "d2dfa631-e553-46bc-bc20-3f0bdd977328", "address": "fa:16:3e:e6:73:73", "network": {"id": "85c8d446-ad7f-4d1b-a311-89b0b07e8aad", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.212", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2770200bdb2436c90142fa2e5ddcd47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2dfa631-e5", "ovs_interfaceid": "d2dfa631-e553-46bc-bc20-3f0bdd977328", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 18:50:15 compute-0 nova_compute[348325]: 2025-12-03 18:50:15.362 348329 DEBUG nova.network.os_vif_util [None req-a7c1f3d7-ef2f-42f1-b4fb-8e7972e719bc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e6:73:73,bridge_name='br-int',has_traffic_filtering=True,id=d2dfa631-e553-46bc-bc20-3f0bdd977328,network=Network(85c8d446-ad7f-4d1b-a311-89b0b07e8aad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd2dfa631-e5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 18:50:15 compute-0 nova_compute[348325]: 2025-12-03 18:50:15.362 348329 DEBUG os_vif [None req-a7c1f3d7-ef2f-42f1-b4fb-8e7972e719bc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e6:73:73,bridge_name='br-int',has_traffic_filtering=True,id=d2dfa631-e553-46bc-bc20-3f0bdd977328,network=Network(85c8d446-ad7f-4d1b-a311-89b0b07e8aad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd2dfa631-e5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  3 18:50:15 compute-0 nova_compute[348325]: 2025-12-03 18:50:15.364 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:50:15 compute-0 nova_compute[348325]: 2025-12-03 18:50:15.364 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd2dfa631-e5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:50:15 compute-0 nova_compute[348325]: 2025-12-03 18:50:15.366 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:50:15 compute-0 nova_compute[348325]: 2025-12-03 18:50:15.368 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:50:15 compute-0 nova_compute[348325]: 2025-12-03 18:50:15.371 348329 INFO os_vif [None req-a7c1f3d7-ef2f-42f1-b4fb-8e7972e719bc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e6:73:73,bridge_name='br-int',has_traffic_filtering=True,id=d2dfa631-e553-46bc-bc20-3f0bdd977328,network=Network(85c8d446-ad7f-4d1b-a311-89b0b07e8aad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd2dfa631-e5')#033[00m
Dec  3 18:50:15 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1529: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:50:15 compute-0 nova_compute[348325]: 2025-12-03 18:50:15.445 348329 DEBUG nova.compute.manager [req-28f0efd2-1180-43d7-aa34-8adf9dc6de0b req-2b1552d6-981f-468b-a9b6-316aefe58d1a 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] Received event network-vif-unplugged-d2dfa631-e553-46bc-bc20-3f0bdd977328 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 18:50:15 compute-0 nova_compute[348325]: 2025-12-03 18:50:15.446 348329 DEBUG oslo_concurrency.lockutils [req-28f0efd2-1180-43d7-aa34-8adf9dc6de0b req-2b1552d6-981f-468b-a9b6-316aefe58d1a 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "de3992c5-c1ad-4da3-9276-954d6365c3c9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:50:15 compute-0 nova_compute[348325]: 2025-12-03 18:50:15.446 348329 DEBUG oslo_concurrency.lockutils [req-28f0efd2-1180-43d7-aa34-8adf9dc6de0b req-2b1552d6-981f-468b-a9b6-316aefe58d1a 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "de3992c5-c1ad-4da3-9276-954d6365c3c9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:50:15 compute-0 nova_compute[348325]: 2025-12-03 18:50:15.447 348329 DEBUG oslo_concurrency.lockutils [req-28f0efd2-1180-43d7-aa34-8adf9dc6de0b req-2b1552d6-981f-468b-a9b6-316aefe58d1a 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "de3992c5-c1ad-4da3-9276-954d6365c3c9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:50:15 compute-0 nova_compute[348325]: 2025-12-03 18:50:15.447 348329 DEBUG nova.compute.manager [req-28f0efd2-1180-43d7-aa34-8adf9dc6de0b req-2b1552d6-981f-468b-a9b6-316aefe58d1a 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] No waiting events found dispatching network-vif-unplugged-d2dfa631-e553-46bc-bc20-3f0bdd977328 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  3 18:50:15 compute-0 nova_compute[348325]: 2025-12-03 18:50:15.448 348329 DEBUG nova.compute.manager [req-28f0efd2-1180-43d7-aa34-8adf9dc6de0b req-2b1552d6-981f-468b-a9b6-316aefe58d1a 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] Received event network-vif-unplugged-d2dfa631-e553-46bc-bc20-3f0bdd977328 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
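The Acquiring/acquired/released triple above is oslo.concurrency's in-process named lock, keyed on the instance UUID plus "-events", which nova uses to serialize external-event dispatch per instance. A minimal sketch of the same primitive, assuming oslo.concurrency is installed; _pop_event is a stand-in name for illustration, not nova's real callback:

    from oslo_concurrency import lockutils

    INSTANCE_UUID = "de3992c5-c1ad-4da3-9276-954d6365c3c9"

    # Serialize per-instance event handling on a named in-process lock,
    # mirroring the "<uuid>-events" lock in the log above.
    @lockutils.synchronized(f"{INSTANCE_UUID}-events")
    def _pop_event():
        return "network-vif-unplugged-d2dfa631-e553-46bc-bc20-3f0bdd977328"

    print(_pop_event())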
Dec  3 18:50:15 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:50:15.479 286999 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5a:63:53', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '8e:79:bd:f4:48:1d'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 18:50:15 compute-0 nova_compute[348325]: 2025-12-03 18:50:15.479 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:50:15 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:50:15.480 286999 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 second run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  3 18:50:15 compute-0 nova_compute[348325]: 2025-12-03 18:50:15.488 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:50:15 compute-0 nova_compute[348325]: 2025-12-03 18:50:15.489 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 18:50:15 compute-0 rsyslogd[188590]: message too long (8192) with configured size 8096, begin of message is: 2025-12-03 18:50:15.361 348329 DEBUG nova.virt.libvirt.vif [None req-a7c1f3d7-ef [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
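This rsyslogd complaint accounts for the mangled records earlier in this window: the nova.virt.libvirt.vif DEBUG message carrying the base64 user_data exceeded the configured 8096-byte ceiling, so it was truncated and its remainder logged as fresh lines. Raising rsyslog's global $MaxMessageSize directive (set near the top of /etc/rsyslog.conf, before any inputs are loaded, e.g. to 64k) avoids the truncation. A quick sketch for flagging other records that hit the ceiling, assuming the 8096-byte limit reported above:

    import sys

    LIMIT = 8096  # the "configured size" reported by rsyslogd above

    # Flag lines at or beyond the ceiling; they are truncation candidates.
    # Usage: python3 flag_long.py < /var/log/messages
    for n, line in enumerate(sys.stdin, 1):
        if len(line.rstrip("\n").encode()) >= LIMIT:
            print(f"line {n}: {len(line)} bytes, likely truncated")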
Dec  3 18:50:16 compute-0 nova_compute[348325]: 2025-12-03 18:50:16.368 348329 INFO nova.virt.libvirt.driver [None req-a7c1f3d7-ef2f-42f1-b4fb-8e7972e719bc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] Deleting instance files /var/lib/nova/instances/de3992c5-c1ad-4da3-9276-954d6365c3c9_del#033[00m
Dec  3 18:50:16 compute-0 nova_compute[348325]: 2025-12-03 18:50:16.369 348329 INFO nova.virt.libvirt.driver [None req-a7c1f3d7-ef2f-42f1-b4fb-8e7972e719bc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] Deletion of /var/lib/nova/instances/de3992c5-c1ad-4da3-9276-954d6365c3c9_del complete#033[00m
Dec  3 18:50:16 compute-0 nova_compute[348325]: 2025-12-03 18:50:16.441 348329 INFO nova.compute.manager [None req-a7c1f3d7-ef2f-42f1-b4fb-8e7972e719bc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] Took 1.54 seconds to destroy the instance on the hypervisor.#033[00m
Dec  3 18:50:16 compute-0 nova_compute[348325]: 2025-12-03 18:50:16.442 348329 DEBUG oslo.service.loopingcall [None req-a7c1f3d7-ef2f-42f1-b4fb-8e7972e719bc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  3 18:50:16 compute-0 nova_compute[348325]: 2025-12-03 18:50:16.442 348329 DEBUG nova.compute.manager [-] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  3 18:50:16 compute-0 nova_compute[348325]: 2025-12-03 18:50:16.442 348329 DEBUG nova.network.neutron [-] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  3 18:50:16 compute-0 nova_compute[348325]: 2025-12-03 18:50:16.451 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "refresh_cache-a6019a9c-c065-49d8-bef3-219bd2c79d8c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 18:50:16 compute-0 nova_compute[348325]: 2025-12-03 18:50:16.452 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquired lock "refresh_cache-a6019a9c-c065-49d8-bef3-219bd2c79d8c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 18:50:16 compute-0 nova_compute[348325]: 2025-12-03 18:50:16.452 348329 DEBUG nova.network.neutron [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  3 18:50:16 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:50:16.483 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1ac9fd0d-196b-4ea8-9a9a-8aa831092805, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:50:16 compute-0 podman[430453]: 2025-12-03 18:50:16.964213382 +0000 UTC m=+0.121655359 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  3 18:50:17 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1530: 321 pgs: 321 active+clean; 184 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 7.9 KiB/s rd, 767 B/s wr, 10 op/s
Dec  3 18:50:17 compute-0 nova_compute[348325]: 2025-12-03 18:50:17.647 348329 DEBUG nova.compute.manager [req-e0f584e5-aff9-4213-bd65-a643d431da6e req-4db0922e-6b33-4131-9356-2050570464b1 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] Received event network-vif-plugged-d2dfa631-e553-46bc-bc20-3f0bdd977328 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 18:50:17 compute-0 nova_compute[348325]: 2025-12-03 18:50:17.648 348329 DEBUG oslo_concurrency.lockutils [req-e0f584e5-aff9-4213-bd65-a643d431da6e req-4db0922e-6b33-4131-9356-2050570464b1 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "de3992c5-c1ad-4da3-9276-954d6365c3c9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:50:17 compute-0 nova_compute[348325]: 2025-12-03 18:50:17.649 348329 DEBUG oslo_concurrency.lockutils [req-e0f584e5-aff9-4213-bd65-a643d431da6e req-4db0922e-6b33-4131-9356-2050570464b1 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "de3992c5-c1ad-4da3-9276-954d6365c3c9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:50:17 compute-0 nova_compute[348325]: 2025-12-03 18:50:17.649 348329 DEBUG oslo_concurrency.lockutils [req-e0f584e5-aff9-4213-bd65-a643d431da6e req-4db0922e-6b33-4131-9356-2050570464b1 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "de3992c5-c1ad-4da3-9276-954d6365c3c9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:50:17 compute-0 nova_compute[348325]: 2025-12-03 18:50:17.649 348329 DEBUG nova.compute.manager [req-e0f584e5-aff9-4213-bd65-a643d431da6e req-4db0922e-6b33-4131-9356-2050570464b1 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] No waiting events found dispatching network-vif-plugged-d2dfa631-e553-46bc-bc20-3f0bdd977328 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  3 18:50:17 compute-0 nova_compute[348325]: 2025-12-03 18:50:17.650 348329 WARNING nova.compute.manager [req-e0f584e5-aff9-4213-bd65-a643d431da6e req-4db0922e-6b33-4131-9356-2050570464b1 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] Received unexpected event network-vif-plugged-d2dfa631-e553-46bc-bc20-3f0bdd977328 for instance with vm_state active and task_state deleting.#033[00m
Dec  3 18:50:18 compute-0 nova_compute[348325]: 2025-12-03 18:50:18.004 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:50:18 compute-0 nova_compute[348325]: 2025-12-03 18:50:18.556 348329 DEBUG nova.network.neutron [req-7ffa0223-6d25-4fd4-a9a2-bc737efdf113 req-1065f6cc-d738-4a96-acdb-40c2d1c43ec7 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] Updated VIF entry in instance network info cache for port d2dfa631-e553-46bc-bc20-3f0bdd977328. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  3 18:50:18 compute-0 nova_compute[348325]: 2025-12-03 18:50:18.557 348329 DEBUG nova.network.neutron [req-7ffa0223-6d25-4fd4-a9a2-bc737efdf113 req-1065f6cc-d738-4a96-acdb-40c2d1c43ec7 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] Updating instance_info_cache with network_info: [{"id": "d2dfa631-e553-46bc-bc20-3f0bdd977328", "address": "fa:16:3e:e6:73:73", "network": {"id": "85c8d446-ad7f-4d1b-a311-89b0b07e8aad", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.212", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2770200bdb2436c90142fa2e5ddcd47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2dfa631-e5", "ovs_interfaceid": "d2dfa631-e553-46bc-bc20-3f0bdd977328", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 18:50:18 compute-0 nova_compute[348325]: 2025-12-03 18:50:18.585 348329 DEBUG oslo_concurrency.lockutils [req-7ffa0223-6d25-4fd4-a9a2-bc737efdf113 req-1065f6cc-d738-4a96-acdb-40c2d1c43ec7 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Releasing lock "refresh_cache-de3992c5-c1ad-4da3-9276-954d6365c3c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 18:50:18 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:50:18 compute-0 podman[430476]: 2025-12-03 18:50:18.956346852 +0000 UTC m=+0.107406321 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm)
Dec  3 18:50:19 compute-0 podman[430475]: 2025-12-03 18:50:19.04855807 +0000 UTC m=+0.196223695 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Dec  3 18:50:19 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1531: 321 pgs: 321 active+clean; 155 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.1 KiB/s wr, 23 op/s
Dec  3 18:50:19 compute-0 nova_compute[348325]: 2025-12-03 18:50:19.534 348329 DEBUG nova.network.neutron [-] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 18:50:19 compute-0 nova_compute[348325]: 2025-12-03 18:50:19.565 348329 INFO nova.compute.manager [-] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] Took 3.12 seconds to deallocate network for instance.#033[00m
Dec  3 18:50:19 compute-0 nova_compute[348325]: 2025-12-03 18:50:19.615 348329 DEBUG oslo_concurrency.lockutils [None req-a7c1f3d7-ef2f-42f1-b4fb-8e7972e719bc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:50:19 compute-0 nova_compute[348325]: 2025-12-03 18:50:19.616 348329 DEBUG oslo_concurrency.lockutils [None req-a7c1f3d7-ef2f-42f1-b4fb-8e7972e719bc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:50:19 compute-0 nova_compute[348325]: 2025-12-03 18:50:19.644 348329 DEBUG nova.network.neutron [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] Updating instance_info_cache with network_info: [{"id": "bdba7a40-8840-4832-a614-279c23eb82ca", "address": "fa:16:3e:93:41:b2", "network": {"id": "85c8d446-ad7f-4d1b-a311-89b0b07e8aad", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.189", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2770200bdb2436c90142fa2e5ddcd47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdba7a40-88", "ovs_interfaceid": "bdba7a40-8840-4832-a614-279c23eb82ca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 18:50:19 compute-0 nova_compute[348325]: 2025-12-03 18:50:19.677 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Releasing lock "refresh_cache-a6019a9c-c065-49d8-bef3-219bd2c79d8c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 18:50:19 compute-0 nova_compute[348325]: 2025-12-03 18:50:19.677 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  3 18:50:19 compute-0 nova_compute[348325]: 2025-12-03 18:50:19.678 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:50:19 compute-0 nova_compute[348325]: 2025-12-03 18:50:19.678 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 18:50:19 compute-0 nova_compute[348325]: 2025-12-03 18:50:19.753 348329 DEBUG oslo_concurrency.processutils [None req-a7c1f3d7-ef2f-42f1-b4fb-8e7972e719bc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:50:20 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:50:20 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/175099115' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:50:20 compute-0 nova_compute[348325]: 2025-12-03 18:50:20.169 348329 DEBUG oslo_concurrency.processutils [None req-a7c1f3d7-ef2f-42f1-b4fb-8e7972e719bc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
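The 0.4 s round trip above is nova's libvirt driver shelling out to ceph df to size the RBD-backed disk inventory; the matching dispatch shows up in the ceph-mon audit lines. A sketch that runs the same command and pulls the cluster totals; the stats.total_bytes / stats.total_avail_bytes key names are assumed from the usual ceph df JSON layout:

    import json
    import subprocess

    cmd = ["ceph", "df", "--format=json", "--id", "openstack",
           "--conf", "/etc/ceph/ceph.conf"]
    stats = json.loads(subprocess.run(
        cmd, capture_output=True, text=True, check=True).stdout)

    # Cluster-wide totals; key names assumed from the usual `ceph df` JSON.
    total = stats["stats"]["total_bytes"]
    avail = stats["stats"]["total_avail_bytes"]
    print(f"{avail / 2**30:.1f} GiB free of {total / 2**30:.1f} GiB")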
Dec  3 18:50:20 compute-0 nova_compute[348325]: 2025-12-03 18:50:20.176 348329 DEBUG nova.compute.provider_tree [None req-a7c1f3d7-ef2f-42f1-b4fb-8e7972e719bc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 18:50:20 compute-0 nova_compute[348325]: 2025-12-03 18:50:20.202 348329 DEBUG nova.scheduler.client.report [None req-a7c1f3d7-ef2f-42f1-b4fb-8e7972e719bc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
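The inventory dict above is what placement checks allocations against, with usable capacity derived as (total − reserved) × allocation_ratio: 8 × 4.0 = 32 VCPU, (7679 − 512) × 1.0 = 7167 MB of RAM, and (59 − 1) × 0.9 = 52.2 GB of disk. A worked check of that arithmetic:

    # Inventory exactly as reported in the log line above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    # Placement checks allocations against (total - reserved) * allocation_ratio.
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {cap:g}")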
Dec  3 18:50:20 compute-0 nova_compute[348325]: 2025-12-03 18:50:20.233 348329 DEBUG oslo_concurrency.lockutils [None req-a7c1f3d7-ef2f-42f1-b4fb-8e7972e719bc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:50:20 compute-0 nova_compute[348325]: 2025-12-03 18:50:20.269 348329 INFO nova.scheduler.client.report [None req-a7c1f3d7-ef2f-42f1-b4fb-8e7972e719bc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Deleted allocations for instance de3992c5-c1ad-4da3-9276-954d6365c3c9#033[00m
Dec  3 18:50:20 compute-0 nova_compute[348325]: 2025-12-03 18:50:20.351 348329 DEBUG oslo_concurrency.lockutils [None req-a7c1f3d7-ef2f-42f1-b4fb-8e7972e719bc 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "de3992c5-c1ad-4da3-9276-954d6365c3c9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.456s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:50:20 compute-0 nova_compute[348325]: 2025-12-03 18:50:20.366 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:50:21 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1532: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec  3 18:50:22 compute-0 nova_compute[348325]: 2025-12-03 18:50:22.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:50:22 compute-0 nova_compute[348325]: 2025-12-03 18:50:22.508 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:50:22 compute-0 nova_compute[348325]: 2025-12-03 18:50:22.530 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:50:22 compute-0 nova_compute[348325]: 2025-12-03 18:50:22.531 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:50:22 compute-0 nova_compute[348325]: 2025-12-03 18:50:22.531 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:50:22 compute-0 nova_compute[348325]: 2025-12-03 18:50:22.531 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 18:50:22 compute-0 nova_compute[348325]: 2025-12-03 18:50:22.532 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:50:22 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:50:22 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1510559496' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:50:23 compute-0 nova_compute[348325]: 2025-12-03 18:50:23.005 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:50:23 compute-0 nova_compute[348325]: 2025-12-03 18:50:23.022 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:50:23 compute-0 nova_compute[348325]: 2025-12-03 18:50:23.261 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 18:50:23 compute-0 nova_compute[348325]: 2025-12-03 18:50:23.262 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 18:50:23 compute-0 nova_compute[348325]: 2025-12-03 18:50:23.262 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 18:50:23 compute-0 nova_compute[348325]: 2025-12-03 18:50:23.270 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 18:50:23 compute-0 nova_compute[348325]: 2025-12-03 18:50:23.270 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 18:50:23 compute-0 nova_compute[348325]: 2025-12-03 18:50:23.270 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 18:50:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:50:23.346 286999 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:50:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:50:23.347 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:50:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:50:23.348 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:50:23 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1533: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec  3 18:50:23 compute-0 nova_compute[348325]: 2025-12-03 18:50:23.654 348329 WARNING nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 18:50:23 compute-0 nova_compute[348325]: 2025-12-03 18:50:23.655 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3648MB free_disk=59.92203140258789GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 18:50:23 compute-0 nova_compute[348325]: 2025-12-03 18:50:23.655 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:50:23 compute-0 nova_compute[348325]: 2025-12-03 18:50:23.655 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:50:23 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:50:23 compute-0 nova_compute[348325]: 2025-12-03 18:50:23.759 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance 1ca1fbdb-089c-4544-821e-0542089b8424 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 18:50:23 compute-0 nova_compute[348325]: 2025-12-03 18:50:23.760 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance a6019a9c-c065-49d8-bef3-219bd2c79d8c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 18:50:23 compute-0 nova_compute[348325]: 2025-12-03 18:50:23.760 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 18:50:23 compute-0 nova_compute[348325]: 2025-12-03 18:50:23.760 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 18:50:23 compute-0 nova_compute[348325]: 2025-12-03 18:50:23.832 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:50:24 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:50:24 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2653264486' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:50:24 compute-0 nova_compute[348325]: 2025-12-03 18:50:24.331 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:50:24 compute-0 nova_compute[348325]: 2025-12-03 18:50:24.342 348329 DEBUG nova.compute.provider_tree [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 18:50:24 compute-0 nova_compute[348325]: 2025-12-03 18:50:24.367 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 18:50:24 compute-0 nova_compute[348325]: 2025-12-03 18:50:24.369 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 18:50:24 compute-0 nova_compute[348325]: 2025-12-03 18:50:24.369 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.714s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:50:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 18:50:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:50:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 18:50:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:50:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001104379822719281 of space, bias 1.0, pg target 0.33131394681578435 quantized to 32 (current 32)
Dec  3 18:50:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:50:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:50:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:50:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:50:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:50:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec  3 18:50:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:50:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 18:50:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:50:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:50:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:50:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 18:50:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:50:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 18:50:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:50:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:50:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:50:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 18:50:25 compute-0 nova_compute[348325]: 2025-12-03 18:50:25.370 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:50:25 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1534: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec  3 18:50:27 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1535: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec  3 18:50:28 compute-0 nova_compute[348325]: 2025-12-03 18:50:28.006 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:50:28 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:50:29 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1536: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1023 B/s wr, 29 op/s
Dec  3 18:50:29 compute-0 podman[158200]: time="2025-12-03T18:50:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:50:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:50:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 18:50:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:50:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8642 "" "Go-http-client/1.1"
Dec  3 18:50:30 compute-0 nova_compute[348325]: 2025-12-03 18:50:30.343 348329 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764787815.342967, de3992c5-c1ad-4da3-9276-954d6365c3c9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 18:50:30 compute-0 nova_compute[348325]: 2025-12-03 18:50:30.344 348329 INFO nova.compute.manager [-] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] VM Stopped (Lifecycle Event)#033[00m
Dec  3 18:50:30 compute-0 nova_compute[348325]: 2025-12-03 18:50:30.372 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:50:30 compute-0 nova_compute[348325]: 2025-12-03 18:50:30.848 348329 DEBUG nova.compute.manager [None req-c78e54e6-b377-489c-a4ca-bbb7c47da73f - - - - - -] [instance: de3992c5-c1ad-4da3-9276-954d6365c3c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 18:50:31 compute-0 openstack_network_exporter[365222]: ERROR   18:50:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:50:31 compute-0 openstack_network_exporter[365222]: ERROR   18:50:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:50:31 compute-0 openstack_network_exporter[365222]: ERROR   18:50:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:50:31 compute-0 openstack_network_exporter[365222]: ERROR   18:50:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:50:31 compute-0 openstack_network_exporter[365222]: ERROR   18:50:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:50:31 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1537: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 682 B/s wr, 16 op/s
Dec  3 18:50:33 compute-0 nova_compute[348325]: 2025-12-03 18:50:33.009 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:50:33 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1538: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:50:33 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:50:34 compute-0 podman[430588]: 2025-12-03 18:50:34.93075057 +0000 UTC m=+0.089337828 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  3 18:50:34 compute-0 podman[430589]: 2025-12-03 18:50:34.945021769 +0000 UTC m=+0.104834538 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  3 18:50:34 compute-0 podman[430590]: 2025-12-03 18:50:34.952318567 +0000 UTC m=+0.103793442 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, name=ubi9-minimal, vcs-type=git, io.openshift.expose-services=, container_name=openstack_network_exporter, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., version=9.6)
Dec  3 18:50:35 compute-0 nova_compute[348325]: 2025-12-03 18:50:35.374 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:50:35 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1539: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:50:36 compute-0 podman[430652]: 2025-12-03 18:50:36.918185224 +0000 UTC m=+0.081622039 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  3 18:50:36 compute-0 podman[430651]: 2025-12-03 18:50:36.941329701 +0000 UTC m=+0.108497827 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 18:50:36 compute-0 podman[430650]: 2025-12-03 18:50:36.957199549 +0000 UTC m=+0.124242752 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., architecture=x86_64, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, release-0.7.12=, io.openshift.expose-services=, vcs-type=git, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=kepler, com.redhat.component=ubi9-container)
Dec  3 18:50:37 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1540: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:50:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 18:50:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3260354037' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 18:50:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 18:50:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3260354037' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 18:50:38 compute-0 nova_compute[348325]: 2025-12-03 18:50:38.093 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:50:38 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:50:39 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1541: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:50:40 compute-0 nova_compute[348325]: 2025-12-03 18:50:40.376 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:50:40 compute-0 systemd-logind[784]: New session 62 of user zuul.
Dec  3 18:50:40 compute-0 systemd[1]: Started Session 62 of User zuul.
Dec  3 18:50:41 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1542: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:50:41 compute-0 python3[430883]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep ceilometer_agent_compute#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:50:43 compute-0 nova_compute[348325]: 2025-12-03 18:50:43.095 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:50:43 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1543: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:50:43 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:50:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:50:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:50:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:50:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:50:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:50:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:50:45 compute-0 nova_compute[348325]: 2025-12-03 18:50:45.378 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:50:45 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1544: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:50:47 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1545: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:50:47 compute-0 podman[430920]: 2025-12-03 18:50:47.969261869 +0000 UTC m=+0.121969587 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  3 18:50:48 compute-0 nova_compute[348325]: 2025-12-03 18:50:48.099 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:50:48 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:50:49 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1546: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 255 B/s wr, 2 op/s
Dec  3 18:50:49 compute-0 podman[430945]: 2025-12-03 18:50:49.932278817 +0000 UTC m=+0.090303381 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true)
Dec  3 18:50:49 compute-0 podman[430944]: 2025-12-03 18:50:49.993382433 +0000 UTC m=+0.146384025 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller)
Dec  3 18:50:50 compute-0 nova_compute[348325]: 2025-12-03 18:50:50.381 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:50:51 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1547: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 596 B/s wr, 4 op/s
Dec  3 18:50:51 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Dec  3 18:50:51 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Dec  3 18:50:51 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Dec  3 18:50:51 compute-0 ovn_controller[89305]: 2025-12-03T18:50:51Z|00057|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Dec  3 18:50:53 compute-0 nova_compute[348325]: 2025-12-03 18:50:53.103 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:50:53 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1549: 321 pgs: 321 active+clean; 147 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 820 KiB/s wr, 15 op/s
Dec  3 18:50:53 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:50:55 compute-0 nova_compute[348325]: 2025-12-03 18:50:55.383 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:50:55 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1550: 321 pgs: 321 active+clean; 155 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.6 MiB/s wr, 18 op/s
Dec  3 18:50:57 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1551: 321 pgs: 321 active+clean; 155 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.6 MiB/s wr, 18 op/s
Dec  3 18:50:58 compute-0 nova_compute[348325]: 2025-12-03 18:50:58.070 348329 DEBUG oslo_concurrency.lockutils [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Acquiring lock "c43f7e6f-80d9-491d-a394-ed3d8387e266" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:50:58 compute-0 nova_compute[348325]: 2025-12-03 18:50:58.071 348329 DEBUG oslo_concurrency.lockutils [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "c43f7e6f-80d9-491d-a394-ed3d8387e266" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:50:58 compute-0 nova_compute[348325]: 2025-12-03 18:50:58.089 348329 DEBUG nova.compute.manager [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: c43f7e6f-80d9-491d-a394-ed3d8387e266] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  3 18:50:58 compute-0 nova_compute[348325]: 2025-12-03 18:50:58.105 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:50:58 compute-0 nova_compute[348325]: 2025-12-03 18:50:58.186 348329 DEBUG oslo_concurrency.lockutils [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:50:58 compute-0 nova_compute[348325]: 2025-12-03 18:50:58.186 348329 DEBUG oslo_concurrency.lockutils [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:50:58 compute-0 nova_compute[348325]: 2025-12-03 18:50:58.195 348329 DEBUG nova.virt.hardware [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  3 18:50:58 compute-0 nova_compute[348325]: 2025-12-03 18:50:58.196 348329 INFO nova.compute.claims [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: c43f7e6f-80d9-491d-a394-ed3d8387e266] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  3 18:50:58 compute-0 nova_compute[348325]: 2025-12-03 18:50:58.357 348329 DEBUG oslo_concurrency.processutils [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:50:58 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:50:58 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:50:58 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1278037024' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:50:58 compute-0 nova_compute[348325]: 2025-12-03 18:50:58.807 348329 DEBUG oslo_concurrency.processutils [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:50:58 compute-0 nova_compute[348325]: 2025-12-03 18:50:58.818 348329 DEBUG nova.compute.provider_tree [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 18:50:58 compute-0 nova_compute[348325]: 2025-12-03 18:50:58.838 348329 DEBUG nova.scheduler.client.report [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 18:50:58 compute-0 nova_compute[348325]: 2025-12-03 18:50:58.866 348329 DEBUG oslo_concurrency.lockutils [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.680s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:50:58 compute-0 nova_compute[348325]: 2025-12-03 18:50:58.867 348329 DEBUG nova.compute.manager [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: c43f7e6f-80d9-491d-a394-ed3d8387e266] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  3 18:50:58 compute-0 nova_compute[348325]: 2025-12-03 18:50:58.935 348329 DEBUG nova.compute.manager [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: c43f7e6f-80d9-491d-a394-ed3d8387e266] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Dec  3 18:50:58 compute-0 nova_compute[348325]: 2025-12-03 18:50:58.954 348329 INFO nova.virt.libvirt.driver [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: c43f7e6f-80d9-491d-a394-ed3d8387e266] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  3 18:50:58 compute-0 nova_compute[348325]: 2025-12-03 18:50:58.987 348329 DEBUG nova.compute.manager [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: c43f7e6f-80d9-491d-a394-ed3d8387e266] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  3 18:50:59 compute-0 nova_compute[348325]: 2025-12-03 18:50:59.088 348329 DEBUG nova.compute.manager [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: c43f7e6f-80d9-491d-a394-ed3d8387e266] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  3 18:50:59 compute-0 nova_compute[348325]: 2025-12-03 18:50:59.089 348329 DEBUG nova.virt.libvirt.driver [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: c43f7e6f-80d9-491d-a394-ed3d8387e266] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  3 18:50:59 compute-0 nova_compute[348325]: 2025-12-03 18:50:59.090 348329 INFO nova.virt.libvirt.driver [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: c43f7e6f-80d9-491d-a394-ed3d8387e266] Creating image(s)#033[00m
Dec  3 18:50:59 compute-0 nova_compute[348325]: 2025-12-03 18:50:59.121 348329 DEBUG nova.storage.rbd_utils [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] rbd image c43f7e6f-80d9-491d-a394-ed3d8387e266_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 18:50:59 compute-0 nova_compute[348325]: 2025-12-03 18:50:59.161 348329 DEBUG nova.storage.rbd_utils [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] rbd image c43f7e6f-80d9-491d-a394-ed3d8387e266_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 18:50:59 compute-0 nova_compute[348325]: 2025-12-03 18:50:59.198 348329 DEBUG nova.storage.rbd_utils [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] rbd image c43f7e6f-80d9-491d-a394-ed3d8387e266_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 18:50:59 compute-0 nova_compute[348325]: 2025-12-03 18:50:59.205 348329 DEBUG oslo_concurrency.lockutils [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Acquiring lock "b31c907458f7ba86221dfe584fd8b9e7faaaf884" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:50:59 compute-0 nova_compute[348325]: 2025-12-03 18:50:59.206 348329 DEBUG oslo_concurrency.lockutils [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "b31c907458f7ba86221dfe584fd8b9e7faaaf884" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:50:59 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1552: 321 pgs: 321 active+clean; 155 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 MiB/s wr, 14 op/s
Dec  3 18:50:59 compute-0 nova_compute[348325]: 2025-12-03 18:50:59.566 348329 DEBUG nova.virt.libvirt.imagebackend [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Image locations are: [{'url': 'rbd://c1caf3ba-b2a5-5005-a11e-e955c344dccc/images/7773c994-edaf-40a8-900c-d4cc47ee23ef/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://c1caf3ba-b2a5-5005-a11e-e955c344dccc/images/7773c994-edaf-40a8-900c-d4cc47ee23ef/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Dec  3 18:50:59 compute-0 podman[158200]: time="2025-12-03T18:50:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:50:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:50:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 18:50:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:50:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8643 "" "Go-http-client/1.1"
Dec  3 18:51:00 compute-0 nova_compute[348325]: 2025-12-03 18:51:00.418 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:51:00 compute-0 nova_compute[348325]: 2025-12-03 18:51:00.735 348329 DEBUG oslo_concurrency.processutils [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b31c907458f7ba86221dfe584fd8b9e7faaaf884.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:51:00 compute-0 nova_compute[348325]: 2025-12-03 18:51:00.794 348329 DEBUG oslo_concurrency.processutils [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b31c907458f7ba86221dfe584fd8b9e7faaaf884.part --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:51:00 compute-0 nova_compute[348325]: 2025-12-03 18:51:00.795 348329 DEBUG nova.virt.images [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] 7773c994-edaf-40a8-900c-d4cc47ee23ef was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Dec  3 18:51:00 compute-0 nova_compute[348325]: 2025-12-03 18:51:00.796 348329 DEBUG nova.privsep.utils [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Dec  3 18:51:00 compute-0 nova_compute[348325]: 2025-12-03 18:51:00.797 348329 DEBUG oslo_concurrency.processutils [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/b31c907458f7ba86221dfe584fd8b9e7faaaf884.part /var/lib/nova/instances/_base/b31c907458f7ba86221dfe584fd8b9e7faaaf884.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:51:01 compute-0 nova_compute[348325]: 2025-12-03 18:51:01.018 348329 DEBUG oslo_concurrency.processutils [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/b31c907458f7ba86221dfe584fd8b9e7faaaf884.part /var/lib/nova/instances/_base/b31c907458f7ba86221dfe584fd8b9e7faaaf884.converted" returned: 0 in 0.221s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:51:01 compute-0 nova_compute[348325]: 2025-12-03 18:51:01.023 348329 DEBUG oslo_concurrency.processutils [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b31c907458f7ba86221dfe584fd8b9e7faaaf884.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:51:01 compute-0 nova_compute[348325]: 2025-12-03 18:51:01.086 348329 DEBUG oslo_concurrency.processutils [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b31c907458f7ba86221dfe584fd8b9e7faaaf884.converted --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:51:01 compute-0 nova_compute[348325]: 2025-12-03 18:51:01.088 348329 DEBUG oslo_concurrency.lockutils [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "b31c907458f7ba86221dfe584fd8b9e7faaaf884" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.882s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:51:01 compute-0 nova_compute[348325]: 2025-12-03 18:51:01.126 348329 DEBUG nova.storage.rbd_utils [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] rbd image c43f7e6f-80d9-491d-a394-ed3d8387e266_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 18:51:01 compute-0 nova_compute[348325]: 2025-12-03 18:51:01.135 348329 DEBUG oslo_concurrency.processutils [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b31c907458f7ba86221dfe584fd8b9e7faaaf884 c43f7e6f-80d9-491d-a394-ed3d8387e266_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:51:01 compute-0 openstack_network_exporter[365222]: ERROR   18:51:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:51:01 compute-0 openstack_network_exporter[365222]: ERROR   18:51:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:51:01 compute-0 openstack_network_exporter[365222]: ERROR   18:51:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:51:01 compute-0 openstack_network_exporter[365222]: ERROR   18:51:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:51:01 compute-0 openstack_network_exporter[365222]: ERROR   18:51:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:51:01 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1553: 321 pgs: 321 active+clean; 155 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s rd, 1.6 MiB/s wr, 12 op/s
Dec  3 18:51:01 compute-0 nova_compute[348325]: 2025-12-03 18:51:01.483 348329 DEBUG oslo_concurrency.processutils [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b31c907458f7ba86221dfe584fd8b9e7faaaf884 c43f7e6f-80d9-491d-a394-ed3d8387e266_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.347s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:51:01 compute-0 nova_compute[348325]: 2025-12-03 18:51:01.606 348329 DEBUG nova.storage.rbd_utils [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] resizing rbd image c43f7e6f-80d9-491d-a394-ed3d8387e266_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  3 18:51:01 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:51:01 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:51:01 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 18:51:01 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:51:01 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 18:51:01 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:51:01 compute-0 nova_compute[348325]: 2025-12-03 18:51:01.787 348329 DEBUG nova.objects.instance [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lazy-loading 'migration_context' on Instance uuid c43f7e6f-80d9-491d-a394-ed3d8387e266 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 18:51:01 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 7186243d-5b11-4c96-8d20-c5587c90d3f0 does not exist
Dec  3 18:51:01 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev bf2c9504-fc90-4934-a7ae-677a61daa277 does not exist
Dec  3 18:51:01 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 6d145892-3206-434e-86f7-82a1821d8df7 does not exist
Dec  3 18:51:01 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 18:51:01 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 18:51:01 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 18:51:01 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:51:01 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:51:01 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:51:01 compute-0 nova_compute[348325]: 2025-12-03 18:51:01.834 348329 DEBUG nova.storage.rbd_utils [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] rbd image c43f7e6f-80d9-491d-a394-ed3d8387e266_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 18:51:01 compute-0 nova_compute[348325]: 2025-12-03 18:51:01.876 348329 DEBUG nova.storage.rbd_utils [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] rbd image c43f7e6f-80d9-491d-a394-ed3d8387e266_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 18:51:01 compute-0 nova_compute[348325]: 2025-12-03 18:51:01.884 348329 DEBUG oslo_concurrency.processutils [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:51:01 compute-0 nova_compute[348325]: 2025-12-03 18:51:01.944 348329 DEBUG oslo_concurrency.processutils [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:51:01 compute-0 nova_compute[348325]: 2025-12-03 18:51:01.945 348329 DEBUG oslo_concurrency.lockutils [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:51:01 compute-0 nova_compute[348325]: 2025-12-03 18:51:01.946 348329 DEBUG oslo_concurrency.lockutils [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:51:01 compute-0 nova_compute[348325]: 2025-12-03 18:51:01.947 348329 DEBUG oslo_concurrency.lockutils [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:51:01 compute-0 nova_compute[348325]: 2025-12-03 18:51:01.983 348329 DEBUG nova.storage.rbd_utils [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] rbd image c43f7e6f-80d9-491d-a394-ed3d8387e266_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 18:51:01 compute-0 nova_compute[348325]: 2025-12-03 18:51:01.993 348329 DEBUG oslo_concurrency.processutils [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 c43f7e6f-80d9-491d-a394-ed3d8387e266_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:51:02 compute-0 nova_compute[348325]: 2025-12-03 18:51:02.428 348329 DEBUG oslo_concurrency.processutils [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 c43f7e6f-80d9-491d-a394-ed3d8387e266_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:51:02 compute-0 podman[431540]: 2025-12-03 18:51:02.48454933 +0000 UTC m=+0.049673807 container create d59e06cd6d41878bb59a25db8369afae028eb4fad5c7a9c422ccb6eb6f19df12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_agnesi, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507)
Dec  3 18:51:02 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:51:02 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:51:02 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:51:02 compute-0 systemd[1]: Started libpod-conmon-d59e06cd6d41878bb59a25db8369afae028eb4fad5c7a9c422ccb6eb6f19df12.scope.
Dec  3 18:51:02 compute-0 podman[431540]: 2025-12-03 18:51:02.464877958 +0000 UTC m=+0.030002455 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:51:02 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:51:02 compute-0 nova_compute[348325]: 2025-12-03 18:51:02.578 348329 DEBUG nova.virt.libvirt.driver [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: c43f7e6f-80d9-491d-a394-ed3d8387e266] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  3 18:51:02 compute-0 nova_compute[348325]: 2025-12-03 18:51:02.579 348329 DEBUG nova.virt.libvirt.driver [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: c43f7e6f-80d9-491d-a394-ed3d8387e266] Ensure instance console log exists: /var/lib/nova/instances/c43f7e6f-80d9-491d-a394-ed3d8387e266/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  3 18:51:02 compute-0 nova_compute[348325]: 2025-12-03 18:51:02.579 348329 DEBUG oslo_concurrency.lockutils [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:51:02 compute-0 nova_compute[348325]: 2025-12-03 18:51:02.580 348329 DEBUG oslo_concurrency.lockutils [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:51:02 compute-0 nova_compute[348325]: 2025-12-03 18:51:02.580 348329 DEBUG oslo_concurrency.lockutils [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:51:02 compute-0 nova_compute[348325]: 2025-12-03 18:51:02.582 348329 DEBUG nova.virt.libvirt.driver [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: c43f7e6f-80d9-491d-a394-ed3d8387e266] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-03T18:50:46Z,direct_url=<?>,disk_format='qcow2',id=7773c994-edaf-40a8-900c-d4cc47ee23ef,min_disk=0,min_ram=0,name='fvt_testing_image',owner='d2770200bdb2436c90142fa2e5ddcd47',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-03T18:50:51Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'encryption_format': None, 'guest_format': None, 'disk_bus': 'virtio', 'size': 0, 'boot_index': 0, 'encryption_options': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'image_id': '7773c994-edaf-40a8-900c-d4cc47ee23ef'}], 'ephemerals': [{'encryption_secret_uuid': None, 'encrypted': False, 'encryption_format': None, 'guest_format': None, 'disk_bus': 'virtio', 'size': 1, 'encryption_options': None, 'device_type': 'disk', 'device_name': '/dev/vdb'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  3 18:51:02 compute-0 podman[431540]: 2025-12-03 18:51:02.58339611 +0000 UTC m=+0.148520607 container init d59e06cd6d41878bb59a25db8369afae028eb4fad5c7a9c422ccb6eb6f19df12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec  3 18:51:02 compute-0 nova_compute[348325]: 2025-12-03 18:51:02.589 348329 WARNING nova.virt.libvirt.driver [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 18:51:02 compute-0 podman[431540]: 2025-12-03 18:51:02.59240846 +0000 UTC m=+0.157532927 container start d59e06cd6d41878bb59a25db8369afae028eb4fad5c7a9c422ccb6eb6f19df12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  3 18:51:02 compute-0 podman[431540]: 2025-12-03 18:51:02.596189363 +0000 UTC m=+0.161313930 container attach d59e06cd6d41878bb59a25db8369afae028eb4fad5c7a9c422ccb6eb6f19df12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_agnesi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec  3 18:51:02 compute-0 nova_compute[348325]: 2025-12-03 18:51:02.597 348329 DEBUG nova.virt.libvirt.host [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  3 18:51:02 compute-0 friendly_agnesi[431599]: 167 167
Dec  3 18:51:02 compute-0 nova_compute[348325]: 2025-12-03 18:51:02.598 348329 DEBUG nova.virt.libvirt.host [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  3 18:51:02 compute-0 systemd[1]: libpod-d59e06cd6d41878bb59a25db8369afae028eb4fad5c7a9c422ccb6eb6f19df12.scope: Deactivated successfully.
Dec  3 18:51:02 compute-0 nova_compute[348325]: 2025-12-03 18:51:02.606 348329 DEBUG nova.virt.libvirt.host [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  3 18:51:02 compute-0 nova_compute[348325]: 2025-12-03 18:51:02.606 348329 DEBUG nova.virt.libvirt.host [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  3 18:51:02 compute-0 nova_compute[348325]: 2025-12-03 18:51:02.607 348329 DEBUG nova.virt.libvirt.driver [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  3 18:51:02 compute-0 nova_compute[348325]: 2025-12-03 18:51:02.607 348329 DEBUG nova.virt.hardware [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-03T18:50:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='7d9da478-95c0-4f4c-b69e-110a26f3b5dc',id=2,is_public=True,memory_mb=512,name='fvt_testing_flavor',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-03T18:50:46Z,direct_url=<?>,disk_format='qcow2',id=7773c994-edaf-40a8-900c-d4cc47ee23ef,min_disk=0,min_ram=0,name='fvt_testing_image',owner='d2770200bdb2436c90142fa2e5ddcd47',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-03T18:50:51Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  3 18:51:02 compute-0 nova_compute[348325]: 2025-12-03 18:51:02.608 348329 DEBUG nova.virt.hardware [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  3 18:51:02 compute-0 nova_compute[348325]: 2025-12-03 18:51:02.608 348329 DEBUG nova.virt.hardware [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  3 18:51:02 compute-0 nova_compute[348325]: 2025-12-03 18:51:02.608 348329 DEBUG nova.virt.hardware [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  3 18:51:02 compute-0 nova_compute[348325]: 2025-12-03 18:51:02.609 348329 DEBUG nova.virt.hardware [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  3 18:51:02 compute-0 nova_compute[348325]: 2025-12-03 18:51:02.609 348329 DEBUG nova.virt.hardware [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  3 18:51:02 compute-0 nova_compute[348325]: 2025-12-03 18:51:02.610 348329 DEBUG nova.virt.hardware [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  3 18:51:02 compute-0 nova_compute[348325]: 2025-12-03 18:51:02.610 348329 DEBUG nova.virt.hardware [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  3 18:51:02 compute-0 nova_compute[348325]: 2025-12-03 18:51:02.610 348329 DEBUG nova.virt.hardware [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  3 18:51:02 compute-0 nova_compute[348325]: 2025-12-03 18:51:02.611 348329 DEBUG nova.virt.hardware [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  3 18:51:02 compute-0 nova_compute[348325]: 2025-12-03 18:51:02.611 348329 DEBUG nova.virt.hardware [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  3 18:51:02 compute-0 nova_compute[348325]: 2025-12-03 18:51:02.614 348329 DEBUG oslo_concurrency.processutils [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:51:02 compute-0 podman[431615]: 2025-12-03 18:51:02.64508901 +0000 UTC m=+0.032408884 container died d59e06cd6d41878bb59a25db8369afae028eb4fad5c7a9c422ccb6eb6f19df12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_agnesi, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec  3 18:51:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-01616142904a6330be1bc2f4907092897e17785b124739ba64bb51fee91f68ae-merged.mount: Deactivated successfully.
Dec  3 18:51:02 compute-0 podman[431615]: 2025-12-03 18:51:02.693780012 +0000 UTC m=+0.081099876 container remove d59e06cd6d41878bb59a25db8369afae028eb4fad5c7a9c422ccb6eb6f19df12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_agnesi, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:51:02 compute-0 systemd[1]: libpod-conmon-d59e06cd6d41878bb59a25db8369afae028eb4fad5c7a9c422ccb6eb6f19df12.scope: Deactivated successfully.
Dec  3 18:51:02 compute-0 podman[431656]: 2025-12-03 18:51:02.927160325 +0000 UTC m=+0.059126508 container create ee8661f505e0deb325007d06956b97ca03ee519a2104867f5aca8719daec31a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_bouman, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  3 18:51:02 compute-0 systemd[1]: Started libpod-conmon-ee8661f505e0deb325007d06956b97ca03ee519a2104867f5aca8719daec31a6.scope.
Dec  3 18:51:02 compute-0 podman[431656]: 2025-12-03 18:51:02.905793472 +0000 UTC m=+0.037759675 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:51:03 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:51:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/788598b826274c22fc70747b58ef491f1bb31acc0ed32b511bb94cff6b2369b4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:51:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/788598b826274c22fc70747b58ef491f1bb31acc0ed32b511bb94cff6b2369b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:51:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/788598b826274c22fc70747b58ef491f1bb31acc0ed32b511bb94cff6b2369b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:51:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/788598b826274c22fc70747b58ef491f1bb31acc0ed32b511bb94cff6b2369b4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:51:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/788598b826274c22fc70747b58ef491f1bb31acc0ed32b511bb94cff6b2369b4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 18:51:03 compute-0 podman[431656]: 2025-12-03 18:51:03.037375884 +0000 UTC m=+0.169342067 container init ee8661f505e0deb325007d06956b97ca03ee519a2104867f5aca8719daec31a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_bouman, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:51:03 compute-0 podman[431656]: 2025-12-03 18:51:03.060105109 +0000 UTC m=+0.192071312 container start ee8661f505e0deb325007d06956b97ca03ee519a2104867f5aca8719daec31a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:51:03 compute-0 podman[431656]: 2025-12-03 18:51:03.067223794 +0000 UTC m=+0.199190007 container attach ee8661f505e0deb325007d06956b97ca03ee519a2104867f5aca8719daec31a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:51:03 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 18:51:03 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/753717919' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 18:51:03 compute-0 nova_compute[348325]: 2025-12-03 18:51:03.104 348329 DEBUG oslo_concurrency.processutils [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:51:03 compute-0 nova_compute[348325]: 2025-12-03 18:51:03.107 348329 DEBUG oslo_concurrency.processutils [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:51:03 compute-0 nova_compute[348325]: 2025-12-03 18:51:03.124 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:51:03 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1554: 321 pgs: 321 active+clean; 163 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 718 KiB/s rd, 1.5 MiB/s wr, 34 op/s
Dec  3 18:51:03 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 18:51:03 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1088170053' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 18:51:03 compute-0 nova_compute[348325]: 2025-12-03 18:51:03.586 348329 DEBUG oslo_concurrency.processutils [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:51:03 compute-0 nova_compute[348325]: 2025-12-03 18:51:03.620 348329 DEBUG nova.storage.rbd_utils [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] rbd image c43f7e6f-80d9-491d-a394-ed3d8387e266_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 18:51:03 compute-0 nova_compute[348325]: 2025-12-03 18:51:03.628 348329 DEBUG oslo_concurrency.processutils [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:51:03 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:51:04 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 18:51:04 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/284132979' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 18:51:04 compute-0 nova_compute[348325]: 2025-12-03 18:51:04.130 348329 DEBUG oslo_concurrency.processutils [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:51:04 compute-0 nova_compute[348325]: 2025-12-03 18:51:04.135 348329 DEBUG nova.objects.instance [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lazy-loading 'pci_devices' on Instance uuid c43f7e6f-80d9-491d-a394-ed3d8387e266 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 18:51:04 compute-0 nova_compute[348325]: 2025-12-03 18:51:04.156 348329 DEBUG nova.virt.libvirt.driver [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: c43f7e6f-80d9-491d-a394-ed3d8387e266] End _get_guest_xml xml=<domain type="kvm">
Dec  3 18:51:04 compute-0 nova_compute[348325]:  <uuid>c43f7e6f-80d9-491d-a394-ed3d8387e266</uuid>
Dec  3 18:51:04 compute-0 nova_compute[348325]:  <name>instance-00000005</name>
Dec  3 18:51:04 compute-0 nova_compute[348325]:  <memory>524288</memory>
Dec  3 18:51:04 compute-0 nova_compute[348325]:  <vcpu>1</vcpu>
Dec  3 18:51:04 compute-0 nova_compute[348325]:  <metadata>
Dec  3 18:51:04 compute-0 nova_compute[348325]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  3 18:51:04 compute-0 nova_compute[348325]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  3 18:51:04 compute-0 nova_compute[348325]:      <nova:name>fvt_testing_server</nova:name>
Dec  3 18:51:04 compute-0 nova_compute[348325]:      <nova:creationTime>2025-12-03 18:51:02</nova:creationTime>
Dec  3 18:51:04 compute-0 nova_compute[348325]:      <nova:flavor name="fvt_testing_flavor">
Dec  3 18:51:04 compute-0 nova_compute[348325]:        <nova:memory>512</nova:memory>
Dec  3 18:51:04 compute-0 nova_compute[348325]:        <nova:disk>1</nova:disk>
Dec  3 18:51:04 compute-0 nova_compute[348325]:        <nova:swap>0</nova:swap>
Dec  3 18:51:04 compute-0 nova_compute[348325]:        <nova:ephemeral>1</nova:ephemeral>
Dec  3 18:51:04 compute-0 nova_compute[348325]:        <nova:vcpus>1</nova:vcpus>
Dec  3 18:51:04 compute-0 nova_compute[348325]:      </nova:flavor>
Dec  3 18:51:04 compute-0 nova_compute[348325]:      <nova:owner>
Dec  3 18:51:04 compute-0 nova_compute[348325]:        <nova:user uuid="56338958b09445f5af9aa9e4601a1a8a">admin</nova:user>
Dec  3 18:51:04 compute-0 nova_compute[348325]:        <nova:project uuid="d2770200bdb2436c90142fa2e5ddcd47">admin</nova:project>
Dec  3 18:51:04 compute-0 nova_compute[348325]:      </nova:owner>
Dec  3 18:51:04 compute-0 nova_compute[348325]:      <nova:root type="image" uuid="7773c994-edaf-40a8-900c-d4cc47ee23ef"/>
Dec  3 18:51:04 compute-0 nova_compute[348325]:      <nova:ports/>
Dec  3 18:51:04 compute-0 nova_compute[348325]:    </nova:instance>
Dec  3 18:51:04 compute-0 nova_compute[348325]:  </metadata>
Dec  3 18:51:04 compute-0 nova_compute[348325]:  <sysinfo type="smbios">
Dec  3 18:51:04 compute-0 nova_compute[348325]:    <system>
Dec  3 18:51:04 compute-0 nova_compute[348325]:      <entry name="manufacturer">RDO</entry>
Dec  3 18:51:04 compute-0 nova_compute[348325]:      <entry name="product">OpenStack Compute</entry>
Dec  3 18:51:04 compute-0 nova_compute[348325]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  3 18:51:04 compute-0 nova_compute[348325]:      <entry name="serial">c43f7e6f-80d9-491d-a394-ed3d8387e266</entry>
Dec  3 18:51:04 compute-0 nova_compute[348325]:      <entry name="uuid">c43f7e6f-80d9-491d-a394-ed3d8387e266</entry>
Dec  3 18:51:04 compute-0 nova_compute[348325]:      <entry name="family">Virtual Machine</entry>
Dec  3 18:51:04 compute-0 nova_compute[348325]:    </system>
Dec  3 18:51:04 compute-0 nova_compute[348325]:  </sysinfo>
Dec  3 18:51:04 compute-0 nova_compute[348325]:  <os>
Dec  3 18:51:04 compute-0 nova_compute[348325]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  3 18:51:04 compute-0 nova_compute[348325]:    <boot dev="hd"/>
Dec  3 18:51:04 compute-0 nova_compute[348325]:    <smbios mode="sysinfo"/>
Dec  3 18:51:04 compute-0 nova_compute[348325]:  </os>
Dec  3 18:51:04 compute-0 nova_compute[348325]:  <features>
Dec  3 18:51:04 compute-0 nova_compute[348325]:    <acpi/>
Dec  3 18:51:04 compute-0 nova_compute[348325]:    <apic/>
Dec  3 18:51:04 compute-0 nova_compute[348325]:    <vmcoreinfo/>
Dec  3 18:51:04 compute-0 nova_compute[348325]:  </features>
Dec  3 18:51:04 compute-0 nova_compute[348325]:  <clock offset="utc">
Dec  3 18:51:04 compute-0 nova_compute[348325]:    <timer name="pit" tickpolicy="delay"/>
Dec  3 18:51:04 compute-0 nova_compute[348325]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  3 18:51:04 compute-0 nova_compute[348325]:    <timer name="hpet" present="no"/>
Dec  3 18:51:04 compute-0 nova_compute[348325]:  </clock>
Dec  3 18:51:04 compute-0 nova_compute[348325]:  <cpu mode="host-model" match="exact">
Dec  3 18:51:04 compute-0 nova_compute[348325]:    <topology sockets="1" cores="1" threads="1"/>
Dec  3 18:51:04 compute-0 nova_compute[348325]:  </cpu>
Dec  3 18:51:04 compute-0 nova_compute[348325]:  <devices>
Dec  3 18:51:04 compute-0 nova_compute[348325]:    <disk type="network" device="disk">
Dec  3 18:51:04 compute-0 nova_compute[348325]:      <driver type="raw" cache="none"/>
Dec  3 18:51:04 compute-0 nova_compute[348325]:      <source protocol="rbd" name="vms/c43f7e6f-80d9-491d-a394-ed3d8387e266_disk">
Dec  3 18:51:04 compute-0 nova_compute[348325]:        <host name="192.168.122.100" port="6789"/>
Dec  3 18:51:04 compute-0 nova_compute[348325]:      </source>
Dec  3 18:51:04 compute-0 nova_compute[348325]:      <auth username="openstack">
Dec  3 18:51:04 compute-0 nova_compute[348325]:        <secret type="ceph" uuid="c1caf3ba-b2a5-5005-a11e-e955c344dccc"/>
Dec  3 18:51:04 compute-0 nova_compute[348325]:      </auth>
Dec  3 18:51:04 compute-0 nova_compute[348325]:      <target dev="vda" bus="virtio"/>
Dec  3 18:51:04 compute-0 nova_compute[348325]:    </disk>
Dec  3 18:51:04 compute-0 nova_compute[348325]:    <disk type="network" device="disk">
Dec  3 18:51:04 compute-0 nova_compute[348325]:      <driver type="raw" cache="none"/>
Dec  3 18:51:04 compute-0 nova_compute[348325]:      <source protocol="rbd" name="vms/c43f7e6f-80d9-491d-a394-ed3d8387e266_disk.eph0">
Dec  3 18:51:04 compute-0 nova_compute[348325]:        <host name="192.168.122.100" port="6789"/>
Dec  3 18:51:04 compute-0 nova_compute[348325]:      </source>
Dec  3 18:51:04 compute-0 nova_compute[348325]:      <auth username="openstack">
Dec  3 18:51:04 compute-0 nova_compute[348325]:        <secret type="ceph" uuid="c1caf3ba-b2a5-5005-a11e-e955c344dccc"/>
Dec  3 18:51:04 compute-0 nova_compute[348325]:      </auth>
Dec  3 18:51:04 compute-0 nova_compute[348325]:      <target dev="vdb" bus="virtio"/>
Dec  3 18:51:04 compute-0 nova_compute[348325]:    </disk>
Dec  3 18:51:04 compute-0 nova_compute[348325]:    <disk type="network" device="cdrom">
Dec  3 18:51:04 compute-0 nova_compute[348325]:      <driver type="raw" cache="none"/>
Dec  3 18:51:04 compute-0 nova_compute[348325]:      <source protocol="rbd" name="vms/c43f7e6f-80d9-491d-a394-ed3d8387e266_disk.config">
Dec  3 18:51:04 compute-0 nova_compute[348325]:        <host name="192.168.122.100" port="6789"/>
Dec  3 18:51:04 compute-0 nova_compute[348325]:      </source>
Dec  3 18:51:04 compute-0 nova_compute[348325]:      <auth username="openstack">
Dec  3 18:51:04 compute-0 nova_compute[348325]:        <secret type="ceph" uuid="c1caf3ba-b2a5-5005-a11e-e955c344dccc"/>
Dec  3 18:51:04 compute-0 nova_compute[348325]:      </auth>
Dec  3 18:51:04 compute-0 nova_compute[348325]:      <target dev="sda" bus="sata"/>
Dec  3 18:51:04 compute-0 nova_compute[348325]:    </disk>
Dec  3 18:51:04 compute-0 nova_compute[348325]:    <serial type="pty">
Dec  3 18:51:04 compute-0 nova_compute[348325]:      <log file="/var/lib/nova/instances/c43f7e6f-80d9-491d-a394-ed3d8387e266/console.log" append="off"/>
Dec  3 18:51:04 compute-0 nova_compute[348325]:    </serial>
Dec  3 18:51:04 compute-0 nova_compute[348325]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  3 18:51:04 compute-0 nova_compute[348325]:    <video>
Dec  3 18:51:04 compute-0 nova_compute[348325]:      <model type="virtio"/>
Dec  3 18:51:04 compute-0 nova_compute[348325]:    </video>
Dec  3 18:51:04 compute-0 nova_compute[348325]:    <input type="tablet" bus="usb"/>
Dec  3 18:51:04 compute-0 nova_compute[348325]:    <rng model="virtio">
Dec  3 18:51:04 compute-0 nova_compute[348325]:      <backend model="random">/dev/urandom</backend>
Dec  3 18:51:04 compute-0 nova_compute[348325]:    </rng>
Dec  3 18:51:04 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root"/>
Dec  3 18:51:04 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:51:04 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:51:04 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:51:04 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:51:04 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:51:04 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:51:04 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:51:04 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:51:04 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:51:04 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:51:04 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:51:04 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:51:04 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:51:04 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:51:04 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:51:04 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:51:04 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:51:04 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:51:04 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:51:04 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:51:04 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:51:04 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:51:04 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:51:04 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:51:04 compute-0 nova_compute[348325]:    <controller type="usb" index="0"/>
Dec  3 18:51:04 compute-0 nova_compute[348325]:    <memballoon model="virtio">
Dec  3 18:51:04 compute-0 nova_compute[348325]:      <stats period="10"/>
Dec  3 18:51:04 compute-0 nova_compute[348325]:    </memballoon>
Dec  3 18:51:04 compute-0 nova_compute[348325]:  </devices>
Dec  3 18:51:04 compute-0 nova_compute[348325]: </domain>
Dec  3 18:51:04 compute-0 nova_compute[348325]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  3 18:51:04 compute-0 youthful_bouman[431672]: --> passed data devices: 0 physical, 3 LVM
Dec  3 18:51:04 compute-0 youthful_bouman[431672]: --> relative data size: 1.0
Dec  3 18:51:04 compute-0 youthful_bouman[431672]: --> All data devices are unavailable
Dec  3 18:51:04 compute-0 systemd[1]: libpod-ee8661f505e0deb325007d06956b97ca03ee519a2104867f5aca8719daec31a6.scope: Deactivated successfully.
Dec  3 18:51:04 compute-0 systemd[1]: libpod-ee8661f505e0deb325007d06956b97ca03ee519a2104867f5aca8719daec31a6.scope: Consumed 1.108s CPU time.
Dec  3 18:51:04 compute-0 podman[431656]: 2025-12-03 18:51:04.223579313 +0000 UTC m=+1.355545506 container died ee8661f505e0deb325007d06956b97ca03ee519a2104867f5aca8719daec31a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_bouman, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  3 18:51:04 compute-0 nova_compute[348325]: 2025-12-03 18:51:04.234 348329 DEBUG nova.virt.libvirt.driver [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  3 18:51:04 compute-0 nova_compute[348325]: 2025-12-03 18:51:04.235 348329 DEBUG nova.virt.libvirt.driver [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  3 18:51:04 compute-0 nova_compute[348325]: 2025-12-03 18:51:04.236 348329 DEBUG nova.virt.libvirt.driver [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  3 18:51:04 compute-0 nova_compute[348325]: 2025-12-03 18:51:04.236 348329 INFO nova.virt.libvirt.driver [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: c43f7e6f-80d9-491d-a394-ed3d8387e266] Using config drive#033[00m
Dec  3 18:51:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-788598b826274c22fc70747b58ef491f1bb31acc0ed32b511bb94cff6b2369b4-merged.mount: Deactivated successfully.
Dec  3 18:51:04 compute-0 nova_compute[348325]: 2025-12-03 18:51:04.286 348329 DEBUG nova.storage.rbd_utils [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] rbd image c43f7e6f-80d9-491d-a394-ed3d8387e266_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 18:51:04 compute-0 podman[431656]: 2025-12-03 18:51:04.296305203 +0000 UTC m=+1.428271386 container remove ee8661f505e0deb325007d06956b97ca03ee519a2104867f5aca8719daec31a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Dec  3 18:51:04 compute-0 systemd[1]: libpod-conmon-ee8661f505e0deb325007d06956b97ca03ee519a2104867f5aca8719daec31a6.scope: Deactivated successfully.
Dec  3 18:51:04 compute-0 nova_compute[348325]: 2025-12-03 18:51:04.685 348329 INFO nova.virt.libvirt.driver [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: c43f7e6f-80d9-491d-a394-ed3d8387e266] Creating config drive at /var/lib/nova/instances/c43f7e6f-80d9-491d-a394-ed3d8387e266/disk.config
Dec  3 18:51:04 compute-0 nova_compute[348325]: 2025-12-03 18:51:04.691 348329 DEBUG oslo_concurrency.processutils [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c43f7e6f-80d9-491d-a394-ed3d8387e266/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqvxc9ahr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 18:51:04 compute-0 nova_compute[348325]: 2025-12-03 18:51:04.816 348329 DEBUG oslo_concurrency.processutils [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c43f7e6f-80d9-491d-a394-ed3d8387e266/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqvxc9ahr" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 18:51:04 compute-0 nova_compute[348325]: 2025-12-03 18:51:04.855 348329 DEBUG nova.storage.rbd_utils [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] rbd image c43f7e6f-80d9-491d-a394-ed3d8387e266_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  3 18:51:04 compute-0 nova_compute[348325]: 2025-12-03 18:51:04.863 348329 DEBUG oslo_concurrency.processutils [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c43f7e6f-80d9-491d-a394-ed3d8387e266/disk.config c43f7e6f-80d9-491d-a394-ed3d8387e266_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 18:51:05 compute-0 nova_compute[348325]: 2025-12-03 18:51:05.089 348329 DEBUG oslo_concurrency.processutils [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c43f7e6f-80d9-491d-a394-ed3d8387e266/disk.config c43f7e6f-80d9-491d-a394-ed3d8387e266_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.226s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 18:51:05 compute-0 nova_compute[348325]: 2025-12-03 18:51:05.090 348329 INFO nova.virt.libvirt.driver [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: c43f7e6f-80d9-491d-a394-ed3d8387e266] Deleting local config drive /var/lib/nova/instances/c43f7e6f-80d9-491d-a394-ed3d8387e266/disk.config because it was imported into RBD.
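Taken together, the nova_compute lines above show the full config-drive path: check RBD for an existing <uuid>_disk.config image, build the ISO with mkisofs, import it into the "vms" pool, then delete the local file. A minimal sketch of the same sequence, with illustrative paths (the metadata directory below stands in for nova's temp dir /tmp/tmpqvxc9ahr, which is left as-is above):

    import os
    import subprocess

    instance = "c43f7e6f-80d9-491d-a394-ed3d8387e266"              # from the log
    local_iso = f"/var/lib/nova/instances/{instance}/disk.config"
    rbd_image = f"{instance}_disk.config"

    # 1. Build the config drive; volume label "config-2" is what cloud-init looks for.
    subprocess.run(["/usr/bin/mkisofs", "-o", local_iso,
                    "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
                    "-quiet", "-J", "-r", "-V", "config-2",
                    "/path/to/metadata_dir"],                       # illustrative
                   check=True)

    # 2. Import into the Ceph "vms" pool so the guest reads it from the cluster.
    subprocess.run(["rbd", "import", "--pool", "vms", local_iso, rbd_image,
                    "--image-format=2", "--id", "openstack",
                    "--conf", "/etc/ceph/ceph.conf"], check=True)

    # 3. Remove the local copy, as nova does once the import returns 0.
    os.remove(local_iso)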
Dec  3 18:51:05 compute-0 podman[431974]: 2025-12-03 18:51:05.11682064 +0000 UTC m=+0.060565214 container create 9c2170baf66c77e01d56dd18dce5eb73d1feb95be861904f50c20c356be08de0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_hopper, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:51:05 compute-0 systemd[1]: Starting libvirt secret daemon...
Dec  3 18:51:05 compute-0 systemd[1]: Started libvirt secret daemon.
Dec  3 18:51:05 compute-0 systemd[1]: Started libpod-conmon-9c2170baf66c77e01d56dd18dce5eb73d1feb95be861904f50c20c356be08de0.scope.
Dec  3 18:51:05 compute-0 podman[431974]: 2025-12-03 18:51:05.093736315 +0000 UTC m=+0.037480909 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:51:05 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:51:05 compute-0 systemd-machined[138702]: New machine qemu-5-instance-00000005.
Dec  3 18:51:05 compute-0 podman[431989]: 2025-12-03 18:51:05.236245454 +0000 UTC m=+0.106795326 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, architecture=x86_64, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, version=9.6, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.buildah.version=1.33.7)
Dec  3 18:51:05 compute-0 podman[431988]: 2025-12-03 18:51:05.236731246 +0000 UTC m=+0.107274498 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  3 18:51:05 compute-0 podman[431974]: 2025-12-03 18:51:05.236444759 +0000 UTC m=+0.180189363 container init 9c2170baf66c77e01d56dd18dce5eb73d1feb95be861904f50c20c356be08de0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_hopper, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:51:05 compute-0 systemd[1]: Started Virtual Machine qemu-5-instance-00000005.
Dec  3 18:51:05 compute-0 podman[431974]: 2025-12-03 18:51:05.246765021 +0000 UTC m=+0.190509595 container start 9c2170baf66c77e01d56dd18dce5eb73d1feb95be861904f50c20c356be08de0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_hopper, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec  3 18:51:05 compute-0 podman[431987]: 2025-12-03 18:51:05.241321408 +0000 UTC m=+0.111471159 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  3 18:51:05 compute-0 podman[431974]: 2025-12-03 18:51:05.25160255 +0000 UTC m=+0.195347184 container attach 9c2170baf66c77e01d56dd18dce5eb73d1feb95be861904f50c20c356be08de0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_hopper, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:51:05 compute-0 zen_hopper[432035]: 167 167
Dec  3 18:51:05 compute-0 systemd[1]: libpod-9c2170baf66c77e01d56dd18dce5eb73d1feb95be861904f50c20c356be08de0.scope: Deactivated successfully.
Dec  3 18:51:05 compute-0 podman[431974]: 2025-12-03 18:51:05.253758403 +0000 UTC m=+0.197502987 container died 9c2170baf66c77e01d56dd18dce5eb73d1feb95be861904f50c20c356be08de0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_hopper, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:51:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d7b569463e0263955386621c6d2eb039552a20cb717157f16d6968ef52a3a0b-merged.mount: Deactivated successfully.
Dec  3 18:51:05 compute-0 podman[431974]: 2025-12-03 18:51:05.29614374 +0000 UTC m=+0.239888314 container remove 9c2170baf66c77e01d56dd18dce5eb73d1feb95be861904f50c20c356be08de0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_hopper, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:51:05 compute-0 systemd[1]: libpod-conmon-9c2170baf66c77e01d56dd18dce5eb73d1feb95be861904f50c20c356be08de0.scope: Deactivated successfully.
Dec  3 18:51:05 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec  3 18:51:05 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec  3 18:51:05 compute-0 nova_compute[348325]: 2025-12-03 18:51:05.420 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:51:05 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1555: 321 pgs: 321 active+clean; 171 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.4 MiB/s wr, 44 op/s
Dec  3 18:51:05 compute-0 podman[432126]: 2025-12-03 18:51:05.482053701 +0000 UTC m=+0.049824530 container create 2066a288514a02509d3d1e6acd1f5de993f3dab5cf41df7cb3e375123030a188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec  3 18:51:05 compute-0 systemd[1]: Started libpod-conmon-2066a288514a02509d3d1e6acd1f5de993f3dab5cf41df7cb3e375123030a188.scope.
Dec  3 18:51:05 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:51:05 compute-0 podman[432126]: 2025-12-03 18:51:05.462729218 +0000 UTC m=+0.030500067 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:51:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74b012904f9915505b13586640226d710184b7e1831ff66fdf687e314c343ac9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:51:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74b012904f9915505b13586640226d710184b7e1831ff66fdf687e314c343ac9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:51:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74b012904f9915505b13586640226d710184b7e1831ff66fdf687e314c343ac9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:51:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74b012904f9915505b13586640226d710184b7e1831ff66fdf687e314c343ac9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
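The xfs warnings above are the kernel's year-2038 notice: this filesystem stores 32-bit inode timestamps, so 0x7fffffff is the largest representable epoch second. A quick check of what that limit means:

    from datetime import datetime, timezone

    # 0x7fffffff = 2147483647, the signed 32-bit time_t maximum the kernel prints.
    print(datetime.fromtimestamp(0x7fffffff, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00, the "year 2038" boundary the log refers to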
Dec  3 18:51:05 compute-0 podman[432126]: 2025-12-03 18:51:05.614840762 +0000 UTC m=+0.182611641 container init 2066a288514a02509d3d1e6acd1f5de993f3dab5cf41df7cb3e375123030a188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mcnulty, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec  3 18:51:05 compute-0 podman[432126]: 2025-12-03 18:51:05.626423696 +0000 UTC m=+0.194194525 container start 2066a288514a02509d3d1e6acd1f5de993f3dab5cf41df7cb3e375123030a188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mcnulty, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:51:05 compute-0 podman[432126]: 2025-12-03 18:51:05.630665279 +0000 UTC m=+0.198436198 container attach 2066a288514a02509d3d1e6acd1f5de993f3dab5cf41df7cb3e375123030a188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mcnulty, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec  3 18:51:06 compute-0 nova_compute[348325]: 2025-12-03 18:51:06.247 348329 DEBUG nova.virt.driver [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Emitting event <LifecycleEvent: 1764787866.2463746, c43f7e6f-80d9-491d-a394-ed3d8387e266 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  3 18:51:06 compute-0 nova_compute[348325]: 2025-12-03 18:51:06.248 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: c43f7e6f-80d9-491d-a394-ed3d8387e266] VM Resumed (Lifecycle Event)
Dec  3 18:51:06 compute-0 nova_compute[348325]: 2025-12-03 18:51:06.253 348329 DEBUG nova.compute.manager [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: c43f7e6f-80d9-491d-a394-ed3d8387e266] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec  3 18:51:06 compute-0 nova_compute[348325]: 2025-12-03 18:51:06.254 348329 DEBUG nova.virt.libvirt.driver [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: c43f7e6f-80d9-491d-a394-ed3d8387e266] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec  3 18:51:06 compute-0 nova_compute[348325]: 2025-12-03 18:51:06.261 348329 INFO nova.virt.libvirt.driver [-] [instance: c43f7e6f-80d9-491d-a394-ed3d8387e266] Instance spawned successfully.
Dec  3 18:51:06 compute-0 nova_compute[348325]: 2025-12-03 18:51:06.261 348329 DEBUG nova.virt.libvirt.driver [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: c43f7e6f-80d9-491d-a394-ed3d8387e266] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec  3 18:51:06 compute-0 nova_compute[348325]: 2025-12-03 18:51:06.299 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: c43f7e6f-80d9-491d-a394-ed3d8387e266] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  3 18:51:06 compute-0 nova_compute[348325]: 2025-12-03 18:51:06.308 348329 DEBUG nova.virt.libvirt.driver [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: c43f7e6f-80d9-491d-a394-ed3d8387e266] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  3 18:51:06 compute-0 nova_compute[348325]: 2025-12-03 18:51:06.309 348329 DEBUG nova.virt.libvirt.driver [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: c43f7e6f-80d9-491d-a394-ed3d8387e266] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  3 18:51:06 compute-0 nova_compute[348325]: 2025-12-03 18:51:06.309 348329 DEBUG nova.virt.libvirt.driver [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: c43f7e6f-80d9-491d-a394-ed3d8387e266] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  3 18:51:06 compute-0 nova_compute[348325]: 2025-12-03 18:51:06.311 348329 DEBUG nova.virt.libvirt.driver [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: c43f7e6f-80d9-491d-a394-ed3d8387e266] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  3 18:51:06 compute-0 nova_compute[348325]: 2025-12-03 18:51:06.311 348329 DEBUG nova.virt.libvirt.driver [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: c43f7e6f-80d9-491d-a394-ed3d8387e266] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  3 18:51:06 compute-0 nova_compute[348325]: 2025-12-03 18:51:06.312 348329 DEBUG nova.virt.libvirt.driver [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: c43f7e6f-80d9-491d-a394-ed3d8387e266] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  3 18:51:06 compute-0 nova_compute[348325]: 2025-12-03 18:51:06.318 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: c43f7e6f-80d9-491d-a394-ed3d8387e266] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  3 18:51:06 compute-0 nova_compute[348325]: 2025-12-03 18:51:06.371 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: c43f7e6f-80d9-491d-a394-ed3d8387e266] During sync_power_state the instance has a pending task (spawning). Skip.
Dec  3 18:51:06 compute-0 nova_compute[348325]: 2025-12-03 18:51:06.371 348329 DEBUG nova.virt.driver [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Emitting event <LifecycleEvent: 1764787866.248001, c43f7e6f-80d9-491d-a394-ed3d8387e266 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  3 18:51:06 compute-0 nova_compute[348325]: 2025-12-03 18:51:06.372 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: c43f7e6f-80d9-491d-a394-ed3d8387e266] VM Started (Lifecycle Event)
Dec  3 18:51:06 compute-0 nova_compute[348325]: 2025-12-03 18:51:06.401 348329 INFO nova.compute.manager [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: c43f7e6f-80d9-491d-a394-ed3d8387e266] Took 7.31 seconds to spawn the instance on the hypervisor.
Dec  3 18:51:06 compute-0 nova_compute[348325]: 2025-12-03 18:51:06.401 348329 DEBUG nova.compute.manager [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: c43f7e6f-80d9-491d-a394-ed3d8387e266] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  3 18:51:06 compute-0 nova_compute[348325]: 2025-12-03 18:51:06.403 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: c43f7e6f-80d9-491d-a394-ed3d8387e266] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  3 18:51:06 compute-0 nova_compute[348325]: 2025-12-03 18:51:06.416 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: c43f7e6f-80d9-491d-a394-ed3d8387e266] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
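The two sync lines above compare nova's DB power_state 0 against the hypervisor's power_state 1. In nova.compute.power_state, 0 is NOSTATE and 1 is RUNNING, so the states disagree; the sync is skipped only because task_state is still "spawning". A toy sketch of that decision (a simplification for illustration, not nova's actual code path):

    from typing import Optional

    # Constants as defined in nova.compute.power_state.
    NOSTATE, RUNNING, PAUSED, SHUTDOWN, CRASHED, SUSPENDED = 0, 1, 3, 4, 6, 7

    def needs_sync(db_state: int, vm_state: int, task_state: Optional[str]) -> bool:
        """Simplified form of the check behind the log lines above."""
        if task_state is not None:      # pending task such as "spawning": skip sync
            return False
        return db_state != vm_state

    print(needs_sync(NOSTATE, RUNNING, "spawning"))   # False -> "Skip." in the log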
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]: {
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:    "0": [
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:        {
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:            "devices": [
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:                "/dev/loop3"
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:            ],
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:            "lv_name": "ceph_lv0",
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:            "lv_size": "21470642176",
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=973fbbc8-5aff-4a53-bee8-42e5a6788dd6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:            "lv_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:            "name": "ceph_lv0",
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:            "tags": {
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:                "ceph.block_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:                "ceph.cluster_name": "ceph",
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:                "ceph.crush_device_class": "",
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:                "ceph.encrypted": "0",
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:                "ceph.osd_fsid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:                "ceph.osd_id": "0",
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:                "ceph.type": "block",
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:                "ceph.vdo": "0"
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:            },
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:            "type": "block",
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:            "vg_name": "ceph_vg0"
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:        }
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:    ],
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:    "1": [
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:        {
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:            "devices": [
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:                "/dev/loop4"
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:            ],
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:            "lv_name": "ceph_lv1",
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:            "lv_size": "21470642176",
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1e2b0083-5293-47cb-a3d1-bc27cedc4ede,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:            "lv_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:            "name": "ceph_lv1",
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:            "tags": {
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:                "ceph.block_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:                "ceph.cluster_name": "ceph",
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:                "ceph.crush_device_class": "",
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:                "ceph.encrypted": "0",
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:                "ceph.osd_fsid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:                "ceph.osd_id": "1",
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:                "ceph.type": "block",
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:                "ceph.vdo": "0"
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:            },
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:            "type": "block",
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:            "vg_name": "ceph_vg1"
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:        }
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:    ],
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:    "2": [
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:        {
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:            "devices": [
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:                "/dev/loop5"
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:            ],
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:            "lv_name": "ceph_lv2",
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:            "lv_size": "21470642176",
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2abec9de-afba-437e-9a17-384a1dd8cd50,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:            "lv_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:            "name": "ceph_lv2",
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:            "tags": {
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:                "ceph.block_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:                "ceph.cluster_name": "ceph",
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:                "ceph.crush_device_class": "",
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:                "ceph.encrypted": "0",
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:                "ceph.osd_fsid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:                "ceph.osd_id": "2",
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:                "ceph.type": "block",
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:                "ceph.vdo": "0"
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:            },
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:            "type": "block",
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:            "vg_name": "ceph_vg2"
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:        }
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]:    ]
Dec  3 18:51:06 compute-0 condescending_mcnulty[432142]: }
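The block above is "ceph-volume lvm list --format json" output, but every line arrives wrapped in a syslog prefix. A small sketch to strip the prefixes back off and recover the OSD-to-device mapping, assuming the capture lives at an illustrative file path:

    import json
    import re

    # Matches the syslog prefix up to and including "condescending_mcnulty[PID]: ".
    PREFIX = re.compile(r"^.*?condescending_mcnulty\[\d+\]:\s?")

    with open("/var/log/messages") as f:                 # illustrative path
        body = "\n".join(PREFIX.sub("", line.rstrip("\n"))
                         for line in f if "condescending_mcnulty[" in line)

    for osd_id, lvs in json.loads(body).items():
        for lv in lvs:
            print(osd_id, lv["lv_path"], lv["devices"], lv["tags"]["ceph.osd_fsid"])
    # e.g. 0 /dev/ceph_vg0/ceph_lv0 ['/dev/loop3'] 973fbbc8-5aff-4a53-bee8-42e5a6788dd6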
Dec  3 18:51:06 compute-0 nova_compute[348325]: 2025-12-03 18:51:06.457 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: c43f7e6f-80d9-491d-a394-ed3d8387e266] During sync_power_state the instance has a pending task (spawning). Skip.
Dec  3 18:51:06 compute-0 nova_compute[348325]: 2025-12-03 18:51:06.477 348329 INFO nova.compute.manager [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: c43f7e6f-80d9-491d-a394-ed3d8387e266] Took 8.33 seconds to build instance.
Dec  3 18:51:06 compute-0 systemd[1]: libpod-2066a288514a02509d3d1e6acd1f5de993f3dab5cf41df7cb3e375123030a188.scope: Deactivated successfully.
Dec  3 18:51:06 compute-0 podman[432126]: 2025-12-03 18:51:06.494313532 +0000 UTC m=+1.062084371 container died 2066a288514a02509d3d1e6acd1f5de993f3dab5cf41df7cb3e375123030a188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  3 18:51:06 compute-0 nova_compute[348325]: 2025-12-03 18:51:06.492 348329 DEBUG oslo_concurrency.lockutils [None req-7b4704aa-0c7e-4366-80a6-d10bd00acc67 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "c43f7e6f-80d9-491d-a394-ed3d8387e266" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.421s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:51:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-74b012904f9915505b13586640226d710184b7e1831ff66fdf687e314c343ac9-merged.mount: Deactivated successfully.
Dec  3 18:51:06 compute-0 podman[432126]: 2025-12-03 18:51:06.606026747 +0000 UTC m=+1.173797576 container remove 2066a288514a02509d3d1e6acd1f5de993f3dab5cf41df7cb3e375123030a188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mcnulty, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec  3 18:51:06 compute-0 systemd[1]: libpod-conmon-2066a288514a02509d3d1e6acd1f5de993f3dab5cf41df7cb3e375123030a188.scope: Deactivated successfully.
Dec  3 18:51:07 compute-0 podman[432324]: 2025-12-03 18:51:07.103185508 +0000 UTC m=+0.086288353 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Dec  3 18:51:07 compute-0 podman[432322]: 2025-12-03 18:51:07.107651377 +0000 UTC m=+0.092306930 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, config_id=edpm, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, build-date=2024-09-18T21:23:30, distribution-scope=public, io.buildah.version=1.29.0, managed_by=edpm_ansible, vcs-type=git, architecture=x86_64, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec  3 18:51:07 compute-0 podman[432323]: 2025-12-03 18:51:07.108068828 +0000 UTC m=+0.094238738 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 18:51:07 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1556: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.4 MiB/s wr, 45 op/s
Dec  3 18:51:07 compute-0 podman[432416]: 2025-12-03 18:51:07.432869529 +0000 UTC m=+0.033754737 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:51:07 compute-0 podman[432416]: 2025-12-03 18:51:07.562184785 +0000 UTC m=+0.163069963 container create a7f121a80c87eb8f77dea5057faa4b301224bc4b8e84fe9e4bd3abd0131bb689 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_grothendieck, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:51:07 compute-0 systemd[1]: Started libpod-conmon-a7f121a80c87eb8f77dea5057faa4b301224bc4b8e84fe9e4bd3abd0131bb689.scope.
Dec  3 18:51:07 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:51:07 compute-0 podman[432416]: 2025-12-03 18:51:07.708562978 +0000 UTC m=+0.309448166 container init a7f121a80c87eb8f77dea5057faa4b301224bc4b8e84fe9e4bd3abd0131bb689 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:51:07 compute-0 podman[432416]: 2025-12-03 18:51:07.728814444 +0000 UTC m=+0.329699622 container start a7f121a80c87eb8f77dea5057faa4b301224bc4b8e84fe9e4bd3abd0131bb689 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_grothendieck, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec  3 18:51:07 compute-0 podman[432416]: 2025-12-03 18:51:07.732820253 +0000 UTC m=+0.333705431 container attach a7f121a80c87eb8f77dea5057faa4b301224bc4b8e84fe9e4bd3abd0131bb689 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:51:07 compute-0 youthful_grothendieck[432432]: 167 167
Dec  3 18:51:07 compute-0 systemd[1]: libpod-a7f121a80c87eb8f77dea5057faa4b301224bc4b8e84fe9e4bd3abd0131bb689.scope: Deactivated successfully.
Dec  3 18:51:07 compute-0 conmon[432432]: conmon a7f121a80c87eb8f77de <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a7f121a80c87eb8f77dea5057faa4b301224bc4b8e84fe9e4bd3abd0131bb689.scope/container/memory.events
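The conmon warning above is benign: conmon reads the container cgroup's memory.events (OOM and limit counters) at exit, but this short-lived libpod scope was torn down before the read. Under cgroup v2 the file is plain key/value pairs; a sketch of reading it, with a deliberately placeholder scope name:

    from pathlib import Path

    def read_memory_events(scope: str) -> dict:
        """Return memory.events counters for a machine.slice scope, {} if gone."""
        path = (Path("/sys/fs/cgroup/machine.slice") / scope
                / "container" / "memory.events")
        if not path.exists():          # scope already cleaned up, as in the log
            return {}
        return {key: int(val) for key, val in
                (line.split() for line in path.read_text().splitlines())}

    print(read_memory_events("libpod-<container-id>.scope"))   # placeholder name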
Dec  3 18:51:07 compute-0 podman[432416]: 2025-12-03 18:51:07.74291939 +0000 UTC m=+0.343804568 container died a7f121a80c87eb8f77dea5057faa4b301224bc4b8e84fe9e4bd3abd0131bb689 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_grothendieck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:51:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-2bc682241ac0149d63cf92dfdc113d62500807f97c75951bf38f8de105128eb0-merged.mount: Deactivated successfully.
Dec  3 18:51:07 compute-0 podman[432416]: 2025-12-03 18:51:07.836039189 +0000 UTC m=+0.436924367 container remove a7f121a80c87eb8f77dea5057faa4b301224bc4b8e84fe9e4bd3abd0131bb689 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_grothendieck, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec  3 18:51:07 compute-0 systemd[1]: libpod-conmon-a7f121a80c87eb8f77dea5057faa4b301224bc4b8e84fe9e4bd3abd0131bb689.scope: Deactivated successfully.
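The block above is one complete throwaway-container cycle: podman creates, inits, starts and attaches a short-lived ceph container (youthful_grothendieck), the payload prints its output ("167 167", most likely the ceph user and group IDs inside the image), and within roughly 0.4 s the container dies and is removed. A minimal sketch for watching the same lifecycle outside the journal, assuming podman is on PATH and that its JSON events expose Status/ID/Name fields as recent releases do:

    #!/usr/bin/env python3
    # Follow podman lifecycle events (create/init/start/attach/died/remove)
    # as JSON lines. Sketch only: assumes `podman` is on PATH and that each
    # event object carries Status/ID/Name fields, as in recent podman.
    import json
    import subprocess

    proc = subprocess.Popen(
        ["podman", "events", "--format", "json"],
        stdout=subprocess.PIPE, text=True,
    )
    for line in proc.stdout:
        ev = json.loads(line)
        # Mirrors the journal lines above: "container <status> <id> (... name=...)"
        print(ev.get("Status"), str(ev.get("ID", ""))[:12], ev.get("Name"))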
Dec  3 18:51:08 compute-0 podman[432456]: 2025-12-03 18:51:08.070797026 +0000 UTC m=+0.072431974 container create d3bb95097aadbca03a65f680099865dc70465b54ee8ed7f0cd2ea6fc6a6ece4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_jang, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:51:08 compute-0 nova_compute[348325]: 2025-12-03 18:51:08.110 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:51:08 compute-0 podman[432456]: 2025-12-03 18:51:08.04806497 +0000 UTC m=+0.049699938 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:51:08 compute-0 systemd[1]: Started libpod-conmon-d3bb95097aadbca03a65f680099865dc70465b54ee8ed7f0cd2ea6fc6a6ece4e.scope.
Dec  3 18:51:08 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:51:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88b2ca0953ef98fe310bcc3c23b0fa0ced6df005b69a8016694132afe24feaab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:51:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88b2ca0953ef98fe310bcc3c23b0fa0ced6df005b69a8016694132afe24feaab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:51:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88b2ca0953ef98fe310bcc3c23b0fa0ced6df005b69a8016694132afe24feaab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:51:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88b2ca0953ef98fe310bcc3c23b0fa0ced6df005b69a8016694132afe24feaab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
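The four kernel lines above are the XFS "timestamps until 2038" warning: these overlay-backed mounts lack the bigtime feature, so their inode timestamps top out at 0x7fffffff seconds after the epoch, i.e. 2038-01-19. A sketch for checking the flag on a given mount, assuming xfsprogs is new enough for xfs_info to print a bigtime=0/1 field (older releases do not):

    #!/usr/bin/env python3
    # Check whether an XFS filesystem has the y2038-safe "bigtime" feature.
    # Sketch only: assumes xfsprogs is installed and reports a bigtime flag.
    import re
    import subprocess
    import sys

    def has_bigtime(mountpoint: str) -> bool:
        out = subprocess.run(["xfs_info", mountpoint],
                             capture_output=True, text=True, check=True).stdout
        m = re.search(r"bigtime=(\d)", out)
        if not m:
            raise RuntimeError("xfs_info did not report a bigtime flag")
        return m.group(1) == "1"

    if __name__ == "__main__":
        mp = sys.argv[1] if len(sys.argv) > 1 else "/"
        # bigtime=0 matches the kernel warning above: timestamps end at
        # 2038-01-19 (0x7fffffff seconds since the epoch).
        print(mp, "bigtime:", has_bigtime(mp))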
Dec  3 18:51:08 compute-0 podman[432456]: 2025-12-03 18:51:08.220114422 +0000 UTC m=+0.221749380 container init d3bb95097aadbca03a65f680099865dc70465b54ee8ed7f0cd2ea6fc6a6ece4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:51:08 compute-0 podman[432456]: 2025-12-03 18:51:08.241486575 +0000 UTC m=+0.243121513 container start d3bb95097aadbca03a65f680099865dc70465b54ee8ed7f0cd2ea6fc6a6ece4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_jang, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Dec  3 18:51:08 compute-0 podman[432456]: 2025-12-03 18:51:08.246132208 +0000 UTC m=+0.247767146 container attach d3bb95097aadbca03a65f680099865dc70465b54ee8ed7f0cd2ea6fc6a6ece4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_jang, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec  3 18:51:08 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:51:09 compute-0 nova_compute[348325]: 2025-12-03 18:51:09.361 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:51:09 compute-0 vigilant_jang[432472]: {
Dec  3 18:51:09 compute-0 vigilant_jang[432472]:    "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
Dec  3 18:51:09 compute-0 vigilant_jang[432472]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:51:09 compute-0 vigilant_jang[432472]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 18:51:09 compute-0 vigilant_jang[432472]:        "osd_id": 1,
Dec  3 18:51:09 compute-0 vigilant_jang[432472]:        "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:51:09 compute-0 vigilant_jang[432472]:        "type": "bluestore"
Dec  3 18:51:09 compute-0 vigilant_jang[432472]:    },
Dec  3 18:51:09 compute-0 vigilant_jang[432472]:    "2abec9de-afba-437e-9a17-384a1dd8cd50": {
Dec  3 18:51:09 compute-0 vigilant_jang[432472]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:51:09 compute-0 vigilant_jang[432472]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 18:51:09 compute-0 vigilant_jang[432472]:        "osd_id": 2,
Dec  3 18:51:09 compute-0 vigilant_jang[432472]:        "osd_uuid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:51:09 compute-0 vigilant_jang[432472]:        "type": "bluestore"
Dec  3 18:51:09 compute-0 vigilant_jang[432472]:    },
Dec  3 18:51:09 compute-0 vigilant_jang[432472]:    "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": {
Dec  3 18:51:09 compute-0 vigilant_jang[432472]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:51:09 compute-0 vigilant_jang[432472]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 18:51:09 compute-0 vigilant_jang[432472]:        "osd_id": 0,
Dec  3 18:51:09 compute-0 vigilant_jang[432472]:        "osd_uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:51:09 compute-0 vigilant_jang[432472]:        "type": "bluestore"
Dec  3 18:51:09 compute-0 vigilant_jang[432472]:    }
Dec  3 18:51:09 compute-0 vigilant_jang[432472]: }
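The JSON that vigilant_jang printed has the shape of a ceph-volume raw/lvm listing: one object per OSD keyed by osd_uuid, with the backing device, the OSD id and the cluster fsid (all three OSDs here belong to fsid c1caf3ba-b2a5-5005-a11e-e955c344dccc). A short sketch that turns such a blob into an osd_id-to-device report; RAW_LIST is trimmed to a single entry copied from the log:

    #!/usr/bin/env python3
    # Parse an OSD listing like the one vigilant_jang printed above.
    # Sketch only; RAW_LIST is abbreviated to one entry from the log.
    import json

    RAW_LIST = """
    {
      "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
        "osd_id": 1,
        "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
        "type": "bluestore"
      }
    }
    """

    osds = json.loads(RAW_LIST)
    for uuid, osd in sorted(osds.items(), key=lambda kv: kv[1]["osd_id"]):
        # e.g. "osd.1 (bluestore) -> /dev/mapper/ceph_vg1-ceph_lv1"
        print(f"osd.{osd['osd_id']} ({osd['type']}) -> {osd['device']}")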
Dec  3 18:51:09 compute-0 systemd[1]: libpod-d3bb95097aadbca03a65f680099865dc70465b54ee8ed7f0cd2ea6fc6a6ece4e.scope: Deactivated successfully.
Dec  3 18:51:09 compute-0 systemd[1]: libpod-d3bb95097aadbca03a65f680099865dc70465b54ee8ed7f0cd2ea6fc6a6ece4e.scope: Consumed 1.100s CPU time.
Dec  3 18:51:09 compute-0 podman[432456]: 2025-12-03 18:51:09.409824167 +0000 UTC m=+1.411459115 container died d3bb95097aadbca03a65f680099865dc70465b54ee8ed7f0cd2ea6fc6a6ece4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_jang, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:51:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-88b2ca0953ef98fe310bcc3c23b0fa0ced6df005b69a8016694132afe24feaab-merged.mount: Deactivated successfully.
Dec  3 18:51:09 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1557: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.4 MiB/s wr, 51 op/s
Dec  3 18:51:09 compute-0 podman[432456]: 2025-12-03 18:51:09.500264071 +0000 UTC m=+1.501899009 container remove d3bb95097aadbca03a65f680099865dc70465b54ee8ed7f0cd2ea6fc6a6ece4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_jang, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:51:09 compute-0 systemd[1]: libpod-conmon-d3bb95097aadbca03a65f680099865dc70465b54ee8ed7f0cd2ea6fc6a6ece4e.scope: Deactivated successfully.
Dec  3 18:51:09 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:51:09 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:51:09 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:51:09 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
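The two handle_command/audit pairs above are the cephadm mgr module persisting what it just learned from that device scan into the monitor's config-key store, under host.compute-0 keys. A hedged sketch for reading one of those keys back, assuming the ceph CLI and an admin keyring are available on the host:

    #!/usr/bin/env python3
    # Read back a cephadm cache entry from the monitor config-key store.
    # Sketch only: assumes the `ceph` CLI plus an admin keyring are usable;
    # the key name is copied from the audit line above.
    import subprocess

    KEY = "mgr/cephadm/host.compute-0.devices.0"

    raw = subprocess.run(["ceph", "config-key", "get", KEY],
                         capture_output=True, text=True, check=True).stdout
    # The value's schema is a cephadm implementation detail; just peek at it.
    print(raw[:200])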
Dec  3 18:51:09 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 24271924-aede-4638-861e-6d5aab28c316 does not exist
Dec  3 18:51:09 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 7cb2382d-b539-476a-890c-f3f45852c20a does not exist
Dec  3 18:51:10 compute-0 nova_compute[348325]: 2025-12-03 18:51:10.421 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:51:10 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:51:10 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:51:11 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1558: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 1.4 MiB/s wr, 99 op/s
Dec  3 18:51:13 compute-0 nova_compute[348325]: 2025-12-03 18:51:13.113 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:51:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:13.250 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is larger than the number of worker threads available to execute them. Therefore, one can expect the polling process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 18:51:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:13.251 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
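The two lines above state a simple capacity fact: this source defines more pollsters than the single worker thread allotted to it, so the pollsters run serially and the polling cycle stretches accordingly. An illustrative ThreadPoolExecutor sketch of the same effect (generic code, not ceilometer's):

    #!/usr/bin/env python3
    # Illustrate the condition ceilometer warns about above: more tasks
    # than worker threads. Generic sketch; pollster names are placeholders.
    import time
    from concurrent.futures import ThreadPoolExecutor

    POLLSTERS = [f"pollster-{i}" for i in range(5)]  # 5 tasks ...
    WORKERS = 1                                      # ... but 1 worker thread

    def poll(name: str) -> str:
        time.sleep(0.1)  # stand-in for a libvirt/Nova query
        return name

    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=WORKERS) as pool:
        list(pool.map(poll, POLLSTERS))
    # With tasks > workers the batch takes ~0.5 s instead of ~0.1 s, which
    # is exactly the "expect the process to take longer" warning.
    print(f"elapsed with {WORKERS} worker(s): {time.monotonic() - start:.2f}s")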
Dec  3 18:51:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:13.251 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:51:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:13.252 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7eff8d7fffe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:51:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:13.253 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:51:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:13.253 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff9026f920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:51:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:13.253 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:51:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:13.253 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:51:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:13.253 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffa10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:51:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:13.254 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8daba2d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:51:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:13.254 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:51:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:13.254 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff90799b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:51:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:13.254 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:51:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:13.254 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8f46ebd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:51:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:13.254 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:51:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:13.254 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:51:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:13.254 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:51:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:13.255 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:51:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:13.255 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:51:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:13.255 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:51:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:13.255 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:51:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:13.255 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:51:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:13.255 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:51:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:13.255 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:51:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:13.256 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:51:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:13.256 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7fff50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:51:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:13.256 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:51:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:13.256 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7fffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:51:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:13.256 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8ef7c7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:51:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:13.261 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '1ca1fbdb-089c-4544-821e-0542089b8424', 'name': 'test_0', 'flavor': {'id': '6cb250a4-d28c-4125-888b-653b31e29275', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e68cd467-b4e6-45e0-8e55-984fda402294'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'd2770200bdb2436c90142fa2e5ddcd47', 'user_id': '56338958b09445f5af9aa9e4601a1a8a', 'hostId': '233c08f520fd9700ef62a871bc5d558f2659759d89ea6c0726998878', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  3 18:51:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:13.264 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance c43f7e6f-80d9-491d-a394-ed3d8387e266 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec  3 18:51:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:13.266 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/c43f7e6f-80d9-491d-a394-ed3d8387e266 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}381125532ab0338283f553a8d9011c877e61445a70740cb69aa0e3ed00495f3c" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec  3 18:51:13 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1559: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 1.4 MiB/s wr, 104 op/s
Dec  3 18:51:13 compute-0 nova_compute[348325]: 2025-12-03 18:51:13.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:51:13 compute-0 nova_compute[348325]: 2025-12-03 18:51:13.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:51:13 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:51:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_18:51:13
Dec  3 18:51:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 18:51:13 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 18:51:13 compute-0 ceph-mgr[193091]: [balancer INFO root] pools ['images', 'vms', 'default.rgw.meta', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'backups', 'default.rgw.control', '.mgr', '.rgw.root']
Dec  3 18:51:13 compute-0 ceph-mgr[193091]: [balancer INFO root] prepared 0/10 changes
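The balancer pass above ran in upmap mode with a 5% max-misplaced budget over 11 pools and prepared 0 of up to 10 changes, i.e. placement was already optimal. A sketch for querying the same state from the CLI, assuming the ceph binary is present and that balancer status accepts the usual -f json flag, as mgr commands generally do:

    #!/usr/bin/env python3
    # Query the mgr balancer state matching the log lines above.
    # Sketch only: assumes a usable `ceph` CLI and that "balancer status"
    # accepts -f json (typical for mgr commands).
    import json
    import subprocess

    out = subprocess.run(["ceph", "balancer", "status", "-f", "json"],
                         capture_output=True, text=True, check=True).stdout
    status = json.loads(out)
    # Typical fields include "active" and "mode" (e.g. "upmap").
    print("mode:", status.get("mode"), "active:", status.get("active"))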
Dec  3 18:51:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:51:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:51:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:51:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:51:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:51:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.118 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1572 Content-Type: application/json Date: Wed, 03 Dec 2025 18:51:13 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-60fb1d31-8cf9-4347-a188-9b27c4b8cb48 x-openstack-request-id: req-60fb1d31-8cf9-4347-a188-9b27c4b8cb48 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.119 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "c43f7e6f-80d9-491d-a394-ed3d8387e266", "name": "fvt_testing_server", "status": "ACTIVE", "tenant_id": "d2770200bdb2436c90142fa2e5ddcd47", "user_id": "56338958b09445f5af9aa9e4601a1a8a", "metadata": {}, "hostId": "233c08f520fd9700ef62a871bc5d558f2659759d89ea6c0726998878", "image": {"id": "7773c994-edaf-40a8-900c-d4cc47ee23ef", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/7773c994-edaf-40a8-900c-d4cc47ee23ef"}]}, "flavor": {"id": "7d9da478-95c0-4f4c-b69e-110a26f3b5dc", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/7d9da478-95c0-4f4c-b69e-110a26f3b5dc"}]}, "created": "2025-12-03T18:50:57Z", "updated": "2025-12-03T18:51:06Z", "addresses": {}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/c43f7e6f-80d9-491d-a394-ed3d8387e266"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/c43f7e6f-80d9-491d-a394-ed3d8387e266"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-03T18:51:06.000000", "OS-SRV-USG:terminated_at": null, "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000005", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.119 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/c43f7e6f-80d9-491d-a394-ed3d8387e266 used request id req-60fb1d31-8cf9-4347-a188-9b27c4b8cb48 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
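The REQ/RESP pair above is ceilometer fetching metadata for an instance it had not cached (c43f7e6f-80d9-491d-a394-ed3d8387e266): a plain GET to the Nova compute API, authenticated by a Keystone token that the log records only as a SHA256 digest. A minimal sketch of the same call with the requests library; TOKEN is a placeholder, not the hashed value from the log:

    #!/usr/bin/env python3
    # Replay the Nova API lookup ceilometer logs above. Sketch only:
    # NOVA_URL and SERVER_ID are taken from the log; TOKEN is a placeholder
    # for a real Keystone token (the log stores only a SHA256 digest).
    import requests

    NOVA_URL = "https://nova-internal.openstack.svc:8774/v2.1"
    SERVER_ID = "c43f7e6f-80d9-491d-a394-ed3d8387e266"
    TOKEN = "<keystone-token>"  # placeholder

    resp = requests.get(
        f"{NOVA_URL}/servers/{SERVER_ID}",
        headers={
            "Accept": "application/json",
            "X-Auth-Token": TOKEN,
            # Pin the microversion, as novaclient does in the logged request.
            "X-OpenStack-Nova-API-Version": "2.1",
        },
        timeout=10,
    )
    resp.raise_for_status()
    server = resp.json()["server"]
    # Mirrors the fields ceilometer folds into its "instance data" record.
    print(server["name"], server["status"], server["OS-EXT-SRV-ATTR:instance_name"])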
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.120 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'c43f7e6f-80d9-491d-a394-ed3d8387e266', 'name': 'fvt_testing_server', 'flavor': {'id': '7d9da478-95c0-4f4c-b69e-110a26f3b5dc', 'name': 'fvt_testing_flavor', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '7773c994-edaf-40a8-900c-d4cc47ee23ef'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000005', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'd2770200bdb2436c90142fa2e5ddcd47', 'user_id': '56338958b09445f5af9aa9e4601a1a8a', 'hostId': '233c08f520fd9700ef62a871bc5d558f2659759d89ea6c0726998878', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.124 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a6019a9c-c065-49d8-bef3-219bd2c79d8c', 'name': 'vn-66btob3-zeembfmsdvyd-qc6d57h54o3l-vnf-m24sgrg35czm', 'flavor': {'id': '6cb250a4-d28c-4125-888b-653b31e29275', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e68cd467-b4e6-45e0-8e55-984fda402294'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'd2770200bdb2436c90142fa2e5ddcd47', 'user_id': '56338958b09445f5af9aa9e4601a1a8a', 'hostId': '233c08f520fd9700ef62a871bc5d558f2659759d89ea6c0726998878', 'status': 'active', 'metadata': {'metering.server_group': 'b322e118-e1cc-40be-8d8c-553648144092'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.124 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.124 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.125 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.126 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.127 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-03T18:51:14.125799) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.131 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.139 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.140 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.140 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7eff8d8a80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.140 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.140 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.140 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.141 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.141 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-03T18:51:14.140930) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.141 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.outgoing.bytes volume: 2384 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.141 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.142 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.142 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7eff8d8a8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.142 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.142 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff9026f920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.142 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff9026f920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.143 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.143 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-03T18:51:14.142893) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.143 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.143 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.144 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.144 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7eff8d8a8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.144 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.144 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.144 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.145 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.145 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-03T18:51:14.144854) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.145 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.145 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.146 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.146 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7eff8d8a81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.146 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.146 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.146 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.146 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.147 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-03T18:51:14.146831) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.147 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.147 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: fvt_testing_server>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: fvt_testing_server>]
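The DEBUG/ERROR pair above shows ceilometer's permanent-failure handling: the libvirt inspector cannot supply rate data for this pollster, so the pollster raises PollsterPermanentError and the manager stops polling those resources from that source instead of failing every cycle. An illustrative sketch of that blacklist-on-permanent-error pattern (names are illustrative, not ceilometer's code):

    #!/usr/bin/env python3
    # Illustrate the blacklist-on-permanent-error pattern logged above.
    # Generic sketch, not ceilometer's implementation.
    class PollsterPermanentError(Exception):
        """Raised when a resource can never yield data for this pollster."""
        def __init__(self, resources):
            super().__init__(resources)
            self.resources = resources

    blacklist: set[str] = set()

    def poll(name: str, resources: list[str]) -> None:
        todo = [r for r in resources if r not in blacklist]
        try:
            for r in todo:
                raise PollsterPermanentError([r])  # stand-in for "no data"
        except PollsterPermanentError as exc:
            # Matches the log: "Prevent pollster ... from polling ... anymore!"
            blacklist.update(exc.resources)
            print(f"Prevent pollster {name} from polling {exc.resources} anymore!")

    poll("network.outgoing.bytes.rate", ["fvt_testing_server"])
    poll("network.outgoing.bytes.rate", ["fvt_testing_server"])  # skipped now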
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.147 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7eff8d7ff9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.148 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.148 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ffa10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.148 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ffa10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.148 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.149 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-03T18:51:14.148509) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.149 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.incoming.bytes volume: 2346 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.149 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/network.incoming.bytes volume: 1654 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.149 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.150 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7eff8d7fe840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.150 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.150 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8daba2d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.150 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8daba2d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.150 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.151 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-03T18:51:14.150513) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.173 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.174 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.175 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.205 14 DEBUG ceilometer.compute.pollsters [-] c43f7e6f-80d9-491d-a394-ed3d8387e266/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.206 14 DEBUG ceilometer.compute.pollsters [-] c43f7e6f-80d9-491d-a394-ed3d8387e266/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.207 14 DEBUG ceilometer.compute.pollsters [-] c43f7e6f-80d9-491d-a394-ed3d8387e266/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.231 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.232 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.232 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.234 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.234 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7eff8d8a82c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.234 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.234 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a82f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.235 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a82f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.236 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-03T18:51:14.235305) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.235 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.236 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.237 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.237 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.238 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7eff8d7ff9b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.238 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.238 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff90799b20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.238 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff90799b20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.240 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-03T18:51:14.238985) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.239 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.267 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/memory.usage volume: 48.91796875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.293 14 DEBUG ceilometer.compute.pollsters [-] c43f7e6f-80d9-491d-a394-ed3d8387e266/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.293 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic is not available for instance c43f7e6f-80d9-491d-a394-ed3d8387e266: ceilometer.compute.pollsters.NoVolumeException
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.327 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/memory.usage volume: 49.01171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.327 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.327 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7eff8d8a8350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.328 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.328 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.328 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.328 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.329 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.329 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-03T18:51:14.328396) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.329 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.330 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.330 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7eff8f682330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.330 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.330 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8f46ebd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.330 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8f46ebd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.330 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.331 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.331 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-03T18:51:14.330820) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.331 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.332 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.332 14 DEBUG ceilometer.compute.pollsters [-] c43f7e6f-80d9-491d-a394-ed3d8387e266/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.332 14 DEBUG ceilometer.compute.pollsters [-] c43f7e6f-80d9-491d-a394-ed3d8387e266/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.333 14 DEBUG ceilometer.compute.pollsters [-] c43f7e6f-80d9-491d-a394-ed3d8387e266/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.333 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.334 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.334 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.334 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.335 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7eff8d7ff4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.335 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.335 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.335 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.335 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.336 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-03T18:51:14.335580) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.398 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.399 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.400 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 18:51:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:51:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 18:51:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:51:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:51:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:51:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:51:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:51:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:51:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.484 14 DEBUG ceilometer.compute.pollsters [-] c43f7e6f-80d9-491d-a394-ed3d8387e266/disk.device.read.bytes volume: 18348032 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.484 14 DEBUG ceilometer.compute.pollsters [-] c43f7e6f-80d9-491d-a394-ed3d8387e266/disk.device.read.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.484 14 DEBUG ceilometer.compute.pollsters [-] c43f7e6f-80d9-491d-a394-ed3d8387e266/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 nova_compute[348325]: 2025-12-03 18:51:14.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:51:14 compute-0 nova_compute[348325]: 2025-12-03 18:51:14.489 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:51:14 compute-0 nova_compute[348325]: 2025-12-03 18:51:14.491 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.553 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.554 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.554 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.555 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7eff8d930c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.555 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.555 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ffce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.555 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ffce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.555 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.556 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.556 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-03T18:51:14.555746) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.556 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: fvt_testing_server>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: fvt_testing_server>]
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.556 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7eff8d7ff4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.556 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.557 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.557 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.557 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.557 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-03T18:51:14.557295) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.557 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.latency volume: 1682579508 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.558 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.latency volume: 260360075 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.558 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.latency volume: 147233249 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.558 14 DEBUG ceilometer.compute.pollsters [-] c43f7e6f-80d9-491d-a394-ed3d8387e266/disk.device.read.latency volume: 1150679264 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.559 14 DEBUG ceilometer.compute.pollsters [-] c43f7e6f-80d9-491d-a394-ed3d8387e266/disk.device.read.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.559 14 DEBUG ceilometer.compute.pollsters [-] c43f7e6f-80d9-491d-a394-ed3d8387e266/disk.device.read.latency volume: 3245979 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.559 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.read.latency volume: 1330892351 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.559 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.read.latency volume: 190600353 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.560 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.read.latency volume: 156629474 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.560 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.560 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7eff8d7ff530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.560 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.561 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.561 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.561 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.561 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-03T18:51:14.561264) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.561 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.562 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.562 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.562 14 DEBUG ceilometer.compute.pollsters [-] c43f7e6f-80d9-491d-a394-ed3d8387e266/disk.device.read.requests volume: 573 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.562 14 DEBUG ceilometer.compute.pollsters [-] c43f7e6f-80d9-491d-a394-ed3d8387e266/disk.device.read.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.563 14 DEBUG ceilometer.compute.pollsters [-] c43f7e6f-80d9-491d-a394-ed3d8387e266/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.563 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.563 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.563 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.564 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.564 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7eff8d7ff590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.564 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.564 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.565 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.565 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.565 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-03T18:51:14.565097) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.565 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.565 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.566 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.566 14 DEBUG ceilometer.compute.pollsters [-] c43f7e6f-80d9-491d-a394-ed3d8387e266/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.566 14 DEBUG ceilometer.compute.pollsters [-] c43f7e6f-80d9-491d-a394-ed3d8387e266/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.566 14 DEBUG ceilometer.compute.pollsters [-] c43f7e6f-80d9-491d-a394-ed3d8387e266/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.567 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.567 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.567 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.568 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.568 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7eff8d7ff5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.568 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.568 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.568 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.569 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.569 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-03T18:51:14.568948) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.569 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.569 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.570 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.570 14 DEBUG ceilometer.compute.pollsters [-] c43f7e6f-80d9-491d-a394-ed3d8387e266/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.570 14 DEBUG ceilometer.compute.pollsters [-] c43f7e6f-80d9-491d-a394-ed3d8387e266/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.570 14 DEBUG ceilometer.compute.pollsters [-] c43f7e6f-80d9-491d-a394-ed3d8387e266/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.571 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.571 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.571 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.572 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.572 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7eff8d8a8620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.572 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.572 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.572 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.572 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.573 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-03T18:51:14.572769) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.573 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.573 14 DEBUG ceilometer.compute.pollsters [-] c43f7e6f-80d9-491d-a394-ed3d8387e266/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.573 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.574 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.574 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7eff8d7ff650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.574 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.574 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.574 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.575 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.575 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-03T18:51:14.574967) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.575 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.latency volume: 6303799002 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.575 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.latency volume: 23959545 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.576 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.576 14 DEBUG ceilometer.compute.pollsters [-] c43f7e6f-80d9-491d-a394-ed3d8387e266/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.576 14 DEBUG ceilometer.compute.pollsters [-] c43f7e6f-80d9-491d-a394-ed3d8387e266/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.576 14 DEBUG ceilometer.compute.pollsters [-] c43f7e6f-80d9-491d-a394-ed3d8387e266/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.577 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.write.latency volume: 6164702929 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.577 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.write.latency volume: 24431067 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.577 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.578 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.578 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7eff8d7ff6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.578 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.578 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.578 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.579 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.579 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.579 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-03T18:51:14.578862) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.579 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.579 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.580 14 DEBUG ceilometer.compute.pollsters [-] c43f7e6f-80d9-491d-a394-ed3d8387e266/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.580 14 DEBUG ceilometer.compute.pollsters [-] c43f7e6f-80d9-491d-a394-ed3d8387e266/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.580 14 DEBUG ceilometer.compute.pollsters [-] c43f7e6f-80d9-491d-a394-ed3d8387e266/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.581 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.581 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.581 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.582 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.582 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7eff8d7ffa40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.582 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.582 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ffef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.582 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ffef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.583 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-03T18:51:14.583039) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.583 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.583 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.583 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.584 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.584 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7eff8d7ff710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.584 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.584 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.584 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.585 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.585 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-03T18:51:14.584960) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.586 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.586 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7eff8d7fff20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.586 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.586 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7fff50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.586 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7fff50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.586 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.587 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-03T18:51:14.586683) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.587 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.incoming.packets volume: 26 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.587 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/network.incoming.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.588 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.588 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7eff8d7ff770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.588 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.588 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff7a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.588 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff7a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.589 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-03T18:51:14.588928) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.589 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.590 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.590 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7eff8d7fff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.590 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.590 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7fffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.590 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7fffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.591 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.591 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-03T18:51:14.590851) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.591 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.591 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.592 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.592 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7eff8d7fdac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.592 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.592 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8ef7c7d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.592 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8ef7c7d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.592 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.593 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-03T18:51:14.592816) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.593 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/cpu volume: 45840000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.593 14 DEBUG ceilometer.compute.pollsters [-] c43f7e6f-80d9-491d-a394-ed3d8387e266/cpu volume: 7730000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.593 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/cpu volume: 41830000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.594 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.594 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.594 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.594 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.595 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.595 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.595 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.595 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.595 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.595 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.595 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.595 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.595 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.595 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.595 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.595 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.595 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.595 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.595 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.596 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.596 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.596 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.596 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.596 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.596 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.596 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:51:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:51:14.596 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
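
The ceilometer cycle above (18:51:14.5xx) runs every pollster through the same steps: discovery over local_instances, a coordination check that is skipped because no hashring or coordination group is configured, a per-meter heartbeat, and one sample per instance or device before the "Finished polling" marker. The cpu volumes are cumulative guest CPU time in nanoseconds, so 45840000000 ns is about 45.8 s for instance 1ca1fbdb-089c-4544-821e-0542089b8424. A minimal sketch of that control flow, with stand-in classes rather than ceilometer's real internals:

    # Sketch of the polling loop traced above; class and method names are
    # illustrative stand-ins, not ceilometer's actual API.
    import logging

    logging.basicConfig(level=logging.DEBUG, format="%(levelname)s %(message)s")
    LOG = logging.getLogger("polling.sketch")

    class Instance:
        def __init__(self, uuid):
            self.uuid = uuid

    class CpuPollster:
        name = "cpu"
        def get_samples(self, inst):
            yield 45840000000  # cumulative guest CPU time, in nanoseconds

    def run_pollster(pollster, instances, needs_coordination=False):
        LOG.info("Polling pollster %s in the context of pollsters", pollster.name)
        if needs_coordination:
            raise NotImplementedError("no hashring is configured in this trace")
        LOG.debug("Pollster heartbeat update: %s", pollster.name)
        for inst in instances:
            for volume in pollster.get_samples(inst):
                LOG.debug("%s/%s volume: %s", inst.uuid, pollster.name, volume)
        LOG.info("Finished polling pollster %s in the context of pollsters", pollster.name)

    run_pollster(CpuPollster(), [Instance("1ca1fbdb-089c-4544-821e-0542089b8424")])
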
Dec  3 18:51:15 compute-0 nova_compute[348325]: 2025-12-03 18:51:15.424 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:51:15 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1560: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.2 MiB/s wr, 80 op/s
Dec  3 18:51:16 compute-0 nova_compute[348325]: 2025-12-03 18:51:16.491 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:51:16 compute-0 nova_compute[348325]: 2025-12-03 18:51:16.492 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 18:51:16 compute-0 nova_compute[348325]: 2025-12-03 18:51:16.493 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  3 18:51:17 compute-0 nova_compute[348325]: 2025-12-03 18:51:17.395 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "refresh_cache-1ca1fbdb-089c-4544-821e-0542089b8424" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 18:51:17 compute-0 nova_compute[348325]: 2025-12-03 18:51:17.396 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquired lock "refresh_cache-1ca1fbdb-089c-4544-821e-0542089b8424" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 18:51:17 compute-0 nova_compute[348325]: 2025-12-03 18:51:17.396 348329 DEBUG nova.network.neutron [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  3 18:51:17 compute-0 nova_compute[348325]: 2025-12-03 18:51:17.396 348329 DEBUG nova.objects.instance [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lazy-loading 'info_cache' on Instance uuid 1ca1fbdb-089c-4544-821e-0542089b8424 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 18:51:17 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1561: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 639 KiB/s wr, 61 op/s
Dec  3 18:51:18 compute-0 nova_compute[348325]: 2025-12-03 18:51:18.115 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:51:18 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:51:18 compute-0 podman[432568]: 2025-12-03 18:51:18.94010092 +0000 UTC m=+0.098904103 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 18:51:19 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1562: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 255 B/s wr, 58 op/s
Dec  3 18:51:19 compute-0 nova_compute[348325]: 2025-12-03 18:51:19.735 348329 DEBUG nova.network.neutron [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Updating instance_info_cache with network_info: [{"id": "3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b", "address": "fa:16:3e:ea:1b:25", "network": {"id": "85c8d446-ad7f-4d1b-a311-89b0b07e8aad", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.128", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2770200bdb2436c90142fa2e5ddcd47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d8505a1-5c", "ovs_interfaceid": "3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 18:51:19 compute-0 nova_compute[348325]: 2025-12-03 18:51:19.776 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Releasing lock "refresh_cache-1ca1fbdb-089c-4544-821e-0542089b8424" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 18:51:19 compute-0 nova_compute[348325]: 2025-12-03 18:51:19.777 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  3 18:51:19 compute-0 nova_compute[348325]: 2025-12-03 18:51:19.778 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:51:19 compute-0 nova_compute[348325]: 2025-12-03 18:51:19.779 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
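
The _reclaim_queued_deletes task short-circuits here because nova's reclaim_instance_interval option is at its default of 0; with a positive value, deleted instances are instead held in SOFT_DELETED state and reaped by this task once the interval elapses. Enabling it is a one-line change in nova.conf (the value below is only an example):

    [DEFAULT]
    # Seconds to keep deleted instances in SOFT_DELETED state before the
    # _reclaim_queued_deletes periodic task reaps them. 0 (the default)
    # deletes immediately, which is why the task above logs
    # "CONF.reclaim_instance_interval <= 0, skipping..."
    reclaim_instance_interval = 3600
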
Dec  3 18:51:20 compute-0 nova_compute[348325]: 2025-12-03 18:51:20.427 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:51:20 compute-0 podman[432593]: 2025-12-03 18:51:20.911134283 +0000 UTC m=+0.077901849 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 18:51:20 compute-0 podman[432592]: 2025-12-03 18:51:20.966899918 +0000 UTC m=+0.131713645 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_controller)
Dec  3 18:51:21 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1563: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 255 B/s wr, 53 op/s
Dec  3 18:51:22 compute-0 nova_compute[348325]: 2025-12-03 18:51:22.777 348329 DEBUG oslo_concurrency.lockutils [None req-bad42410-4d87-4816-be54-8ac78b9a1d58 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Acquiring lock "c43f7e6f-80d9-491d-a394-ed3d8387e266" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:51:22 compute-0 nova_compute[348325]: 2025-12-03 18:51:22.778 348329 DEBUG oslo_concurrency.lockutils [None req-bad42410-4d87-4816-be54-8ac78b9a1d58 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "c43f7e6f-80d9-491d-a394-ed3d8387e266" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:51:22 compute-0 nova_compute[348325]: 2025-12-03 18:51:22.778 348329 DEBUG oslo_concurrency.lockutils [None req-bad42410-4d87-4816-be54-8ac78b9a1d58 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Acquiring lock "c43f7e6f-80d9-491d-a394-ed3d8387e266-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:51:22 compute-0 nova_compute[348325]: 2025-12-03 18:51:22.778 348329 DEBUG oslo_concurrency.lockutils [None req-bad42410-4d87-4816-be54-8ac78b9a1d58 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "c43f7e6f-80d9-491d-a394-ed3d8387e266-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:51:22 compute-0 nova_compute[348325]: 2025-12-03 18:51:22.779 348329 DEBUG oslo_concurrency.lockutils [None req-bad42410-4d87-4816-be54-8ac78b9a1d58 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "c43f7e6f-80d9-491d-a394-ed3d8387e266-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:51:22 compute-0 nova_compute[348325]: 2025-12-03 18:51:22.780 348329 INFO nova.compute.manager [None req-bad42410-4d87-4816-be54-8ac78b9a1d58 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: c43f7e6f-80d9-491d-a394-ed3d8387e266] Terminating instance#033[00m
Dec  3 18:51:22 compute-0 nova_compute[348325]: 2025-12-03 18:51:22.780 348329 DEBUG oslo_concurrency.lockutils [None req-bad42410-4d87-4816-be54-8ac78b9a1d58 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Acquiring lock "refresh_cache-c43f7e6f-80d9-491d-a394-ed3d8387e266" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 18:51:22 compute-0 nova_compute[348325]: 2025-12-03 18:51:22.781 348329 DEBUG oslo_concurrency.lockutils [None req-bad42410-4d87-4816-be54-8ac78b9a1d58 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Acquired lock "refresh_cache-c43f7e6f-80d9-491d-a394-ed3d8387e266" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 18:51:22 compute-0 nova_compute[348325]: 2025-12-03 18:51:22.781 348329 DEBUG nova.network.neutron [None req-bad42410-4d87-4816-be54-8ac78b9a1d58 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: c43f7e6f-80d9-491d-a394-ed3d8387e266] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  3 18:51:23 compute-0 nova_compute[348325]: 2025-12-03 18:51:23.119 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:51:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:51:23.347 286999 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:51:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:51:23.348 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:51:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:51:23.350 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:51:23 compute-0 nova_compute[348325]: 2025-12-03 18:51:23.412 348329 DEBUG nova.network.neutron [None req-bad42410-4d87-4816-be54-8ac78b9a1d58 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: c43f7e6f-80d9-491d-a394-ed3d8387e266] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  3 18:51:23 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1564: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 152 KiB/s rd, 4 op/s
Dec  3 18:51:23 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:51:23 compute-0 nova_compute[348325]: 2025-12-03 18:51:23.889 348329 DEBUG nova.network.neutron [None req-bad42410-4d87-4816-be54-8ac78b9a1d58 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: c43f7e6f-80d9-491d-a394-ed3d8387e266] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 18:51:23 compute-0 nova_compute[348325]: 2025-12-03 18:51:23.907 348329 DEBUG oslo_concurrency.lockutils [None req-bad42410-4d87-4816-be54-8ac78b9a1d58 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Releasing lock "refresh_cache-c43f7e6f-80d9-491d-a394-ed3d8387e266" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 18:51:23 compute-0 nova_compute[348325]: 2025-12-03 18:51:23.908 348329 DEBUG nova.compute.manager [None req-bad42410-4d87-4816-be54-8ac78b9a1d58 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: c43f7e6f-80d9-491d-a394-ed3d8387e266] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  3 18:51:24 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Deactivated successfully.
Dec  3 18:51:24 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Consumed 18.990s CPU time.
Dec  3 18:51:24 compute-0 systemd-machined[138702]: Machine qemu-5-instance-00000005 terminated.
Dec  3 18:51:24 compute-0 nova_compute[348325]: 2025-12-03 18:51:24.136 348329 INFO nova.virt.libvirt.driver [-] [instance: c43f7e6f-80d9-491d-a394-ed3d8387e266] Instance destroyed successfully.#033[00m
Dec  3 18:51:24 compute-0 nova_compute[348325]: 2025-12-03 18:51:24.137 348329 DEBUG nova.objects.instance [None req-bad42410-4d87-4816-be54-8ac78b9a1d58 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lazy-loading 'resources' on Instance uuid c43f7e6f-80d9-491d-a394-ed3d8387e266 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 18:51:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 18:51:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:51:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 18:51:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:51:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0013717151583968223 of space, bias 1.0, pg target 0.4115145475190467 quantized to 32 (current 32)
Dec  3 18:51:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:51:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:51:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:51:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:51:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:51:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0005066271692062251 of space, bias 1.0, pg target 0.15198815076186756 quantized to 32 (current 32)
Dec  3 18:51:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:51:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 18:51:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:51:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:51:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:51:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 18:51:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:51:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 18:51:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:51:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:51:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:51:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
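
Each pg_autoscaler line above is the same calculation: the pool's share of raw capacity, times its bias, times a cluster-wide PG budget, quantized to a power of two subject to per-pool minimums (which is why tiny pools sit at 16 or 32 rather than 1). The printed targets are all consistent with a budget of 300 PGs; assuming the usual default of mon_target_pg_per_osd = 100, that would imply 3 OSDs, though the log states neither number directly. A quick check in Python:

    # Reproducing the raw "pg target" numbers from the autoscaler lines above.
    # The 300-PG budget is inferred from the printed ratios (an assumption,
    # e.g. mon_target_pg_per_osd=100 across 3 OSDs), not read from the log.
    CLUSTER_PG_BUDGET = 300
    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0),
        "vms":                (0.0013717151583968223, 1.0),
        "images":             (0.0005066271692062251, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
    }
    for name, (usage_ratio, bias) in pools.items():
        raw = usage_ratio * bias * CLUSTER_PG_BUDGET
        print(f"{name}: pg target {raw}")
    # .mgr               -> 0.0021557249951162337 (quantized to 1)
    # vms                -> 0.4115145475190467    (quantized to 32)
    # images             -> 0.15198815076186...   (quantized to 32)
    # cephfs.cephfs.meta -> 0.0006104707950771635 (quantized to 16)
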
Dec  3 18:51:24 compute-0 nova_compute[348325]: 2025-12-03 18:51:24.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:51:24 compute-0 nova_compute[348325]: 2025-12-03 18:51:24.513 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:51:24 compute-0 nova_compute[348325]: 2025-12-03 18:51:24.514 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:51:24 compute-0 nova_compute[348325]: 2025-12-03 18:51:24.514 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:51:24 compute-0 nova_compute[348325]: 2025-12-03 18:51:24.514 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 18:51:24 compute-0 nova_compute[348325]: 2025-12-03 18:51:24.514 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:51:25 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:51:25 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/201281542' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:51:25 compute-0 nova_compute[348325]: 2025-12-03 18:51:25.032 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
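
Before reporting resources, the tracker shells out to the ceph CLI; the matching mon-side view is the handle_command/audit "df" pair above, and the round trip took 0.518 s here. The same query can be reproduced directly with subprocess; the JSON field names below follow current ceph releases and should be verified against the deployed version:

    import json
    import subprocess

    # The exact command nova ran above, invoked without oslo's processutils.
    cmd = ["ceph", "df", "--format=json",
           "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
    df = json.loads(out)
    # "stats" holds cluster-wide totals; "pools" holds the per-pool breakdown.
    print(df["stats"]["total_avail_bytes"], "of", df["stats"]["total_bytes"], "bytes free")
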
Dec  3 18:51:25 compute-0 nova_compute[348325]: 2025-12-03 18:51:25.142 348329 INFO nova.virt.libvirt.driver [None req-bad42410-4d87-4816-be54-8ac78b9a1d58 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: c43f7e6f-80d9-491d-a394-ed3d8387e266] Deleting instance files /var/lib/nova/instances/c43f7e6f-80d9-491d-a394-ed3d8387e266_del#033[00m
Dec  3 18:51:25 compute-0 nova_compute[348325]: 2025-12-03 18:51:25.144 348329 INFO nova.virt.libvirt.driver [None req-bad42410-4d87-4816-be54-8ac78b9a1d58 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: c43f7e6f-80d9-491d-a394-ed3d8387e266] Deletion of /var/lib/nova/instances/c43f7e6f-80d9-491d-a394-ed3d8387e266_del complete#033[00m
Dec  3 18:51:25 compute-0 nova_compute[348325]: 2025-12-03 18:51:25.156 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 18:51:25 compute-0 nova_compute[348325]: 2025-12-03 18:51:25.157 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 18:51:25 compute-0 nova_compute[348325]: 2025-12-03 18:51:25.158 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 18:51:25 compute-0 nova_compute[348325]: 2025-12-03 18:51:25.164 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 18:51:25 compute-0 nova_compute[348325]: 2025-12-03 18:51:25.164 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 18:51:25 compute-0 nova_compute[348325]: 2025-12-03 18:51:25.165 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 18:51:25 compute-0 nova_compute[348325]: 2025-12-03 18:51:25.171 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 18:51:25 compute-0 nova_compute[348325]: 2025-12-03 18:51:25.171 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 18:51:25 compute-0 nova_compute[348325]: 2025-12-03 18:51:25.172 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 18:51:25 compute-0 nova_compute[348325]: 2025-12-03 18:51:25.228 348329 INFO nova.compute.manager [None req-bad42410-4d87-4816-be54-8ac78b9a1d58 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: c43f7e6f-80d9-491d-a394-ed3d8387e266] Took 1.32 seconds to destroy the instance on the hypervisor.#033[00m
Dec  3 18:51:25 compute-0 nova_compute[348325]: 2025-12-03 18:51:25.229 348329 DEBUG oslo.service.loopingcall [None req-bad42410-4d87-4816-be54-8ac78b9a1d58 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  3 18:51:25 compute-0 nova_compute[348325]: 2025-12-03 18:51:25.230 348329 DEBUG nova.compute.manager [-] [instance: c43f7e6f-80d9-491d-a394-ed3d8387e266] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  3 18:51:25 compute-0 nova_compute[348325]: 2025-12-03 18:51:25.230 348329 DEBUG nova.network.neutron [-] [instance: c43f7e6f-80d9-491d-a394-ed3d8387e266] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  3 18:51:25 compute-0 nova_compute[348325]: 2025-12-03 18:51:25.400 348329 DEBUG nova.network.neutron [-] [instance: c43f7e6f-80d9-491d-a394-ed3d8387e266] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  3 18:51:25 compute-0 nova_compute[348325]: 2025-12-03 18:51:25.421 348329 DEBUG nova.network.neutron [-] [instance: c43f7e6f-80d9-491d-a394-ed3d8387e266] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 18:51:25 compute-0 nova_compute[348325]: 2025-12-03 18:51:25.431 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:51:25 compute-0 nova_compute[348325]: 2025-12-03 18:51:25.440 348329 INFO nova.compute.manager [-] [instance: c43f7e6f-80d9-491d-a394-ed3d8387e266] Took 0.21 seconds to deallocate network for instance.#033[00m
Dec  3 18:51:25 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1565: 321 pgs: 321 active+clean; 180 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 8.9 KiB/s rd, 341 B/s wr, 12 op/s
Dec  3 18:51:25 compute-0 nova_compute[348325]: 2025-12-03 18:51:25.480 348329 DEBUG oslo_concurrency.lockutils [None req-bad42410-4d87-4816-be54-8ac78b9a1d58 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:51:25 compute-0 nova_compute[348325]: 2025-12-03 18:51:25.481 348329 DEBUG oslo_concurrency.lockutils [None req-bad42410-4d87-4816-be54-8ac78b9a1d58 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:51:25 compute-0 nova_compute[348325]: 2025-12-03 18:51:25.577 348329 DEBUG oslo_concurrency.processutils [None req-bad42410-4d87-4816-be54-8ac78b9a1d58 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 18:51:25 compute-0 nova_compute[348325]: 2025-12-03 18:51:25.631 348329 WARNING nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  3 18:51:25 compute-0 nova_compute[348325]: 2025-12-03 18:51:25.632 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3647MB free_disk=59.9059944152832GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  3 18:51:25 compute-0 nova_compute[348325]: 2025-12-03 18:51:25.633 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:51:26 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:51:26 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/244847531' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:51:26 compute-0 nova_compute[348325]: 2025-12-03 18:51:26.063 348329 DEBUG oslo_concurrency.processutils [None req-bad42410-4d87-4816-be54-8ac78b9a1d58 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 18:51:26 compute-0 nova_compute[348325]: 2025-12-03 18:51:26.073 348329 DEBUG nova.compute.provider_tree [None req-bad42410-4d87-4816-be54-8ac78b9a1d58 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  3 18:51:26 compute-0 nova_compute[348325]: 2025-12-03 18:51:26.092 348329 DEBUG nova.scheduler.client.report [None req-bad42410-4d87-4816-be54-8ac78b9a1d58 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  3 18:51:26 compute-0 nova_compute[348325]: 2025-12-03 18:51:26.118 348329 DEBUG oslo_concurrency.lockutils [None req-bad42410-4d87-4816-be54-8ac78b9a1d58 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.637s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:51:26 compute-0 nova_compute[348325]: 2025-12-03 18:51:26.120 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.488s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:51:26 compute-0 nova_compute[348325]: 2025-12-03 18:51:26.169 348329 INFO nova.scheduler.client.report [None req-bad42410-4d87-4816-be54-8ac78b9a1d58 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Deleted allocations for instance c43f7e6f-80d9-491d-a394-ed3d8387e266
Dec  3 18:51:26 compute-0 nova_compute[348325]: 2025-12-03 18:51:26.220 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance 1ca1fbdb-089c-4544-821e-0542089b8424 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  3 18:51:26 compute-0 nova_compute[348325]: 2025-12-03 18:51:26.221 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance a6019a9c-c065-49d8-bef3-219bd2c79d8c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  3 18:51:26 compute-0 nova_compute[348325]: 2025-12-03 18:51:26.221 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  3 18:51:26 compute-0 nova_compute[348325]: 2025-12-03 18:51:26.222 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  3 18:51:26 compute-0 nova_compute[348325]: 2025-12-03 18:51:26.239 348329 DEBUG oslo_concurrency.lockutils [None req-bad42410-4d87-4816-be54-8ac78b9a1d58 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "c43f7e6f-80d9-491d-a394-ed3d8387e266" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.461s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:51:26 compute-0 nova_compute[348325]: 2025-12-03 18:51:26.288 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 18:51:26 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:51:26 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2464054107' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:51:26 compute-0 nova_compute[348325]: 2025-12-03 18:51:26.736 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 18:51:26 compute-0 nova_compute[348325]: 2025-12-03 18:51:26.746 348329 DEBUG nova.compute.provider_tree [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  3 18:51:26 compute-0 nova_compute[348325]: 2025-12-03 18:51:26.775 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  3 18:51:26 compute-0 nova_compute[348325]: 2025-12-03 18:51:26.808 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  3 18:51:26 compute-0 nova_compute[348325]: 2025-12-03 18:51:26.809 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:51:27 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1566: 321 pgs: 321 active+clean; 163 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 1.4 KiB/s wr, 18 op/s
Dec  3 18:51:27 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Dec  3 18:51:27 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Dec  3 18:51:27 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Dec  3 18:51:28 compute-0 nova_compute[348325]: 2025-12-03 18:51:28.122 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:51:28 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:51:28 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Dec  3 18:51:28 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:51:28.741249) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 18:51:28 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Dec  3 18:51:28 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764787888741368, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 2059, "num_deletes": 251, "total_data_size": 3423474, "memory_usage": 3485576, "flush_reason": "Manual Compaction"}
Dec  3 18:51:28 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Dec  3 18:51:28 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764787888773151, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 3357375, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30160, "largest_seqno": 32218, "table_properties": {"data_size": 3347990, "index_size": 5943, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18873, "raw_average_key_size": 20, "raw_value_size": 3329205, "raw_average_value_size": 3553, "num_data_blocks": 264, "num_entries": 937, "num_filter_entries": 937, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764787663, "oldest_key_time": 1764787663, "file_creation_time": 1764787888, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a1ac3b74-8599-4a51-8b4c-6fd35a134427", "db_session_id": "TYOLZSJOOVNJYKF8Y1CE", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Dec  3 18:51:28 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 32034 microseconds, and 16629 cpu microseconds.
Dec  3 18:51:28 compute-0 ceph-mon[192802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 18:51:28 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:51:28.773284) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 3357375 bytes OK
Dec  3 18:51:28 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:51:28.773311) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Dec  3 18:51:28 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:51:28.775564) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Dec  3 18:51:28 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:51:28.775584) EVENT_LOG_v1 {"time_micros": 1764787888775577, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 18:51:28 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:51:28.775608) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 18:51:28 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 3414858, prev total WAL file size 3414858, number of live WAL files 2.
Dec  3 18:51:28 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 18:51:28 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:51:28.777676) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Dec  3 18:51:28 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 18:51:28 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(3278KB)], [68(7197KB)]
Dec  3 18:51:28 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764787888777758, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 10727844, "oldest_snapshot_seqno": -1}
Dec  3 18:51:28 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 5373 keys, 8982482 bytes, temperature: kUnknown
Dec  3 18:51:28 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764787888864629, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 8982482, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8945846, "index_size": 22126, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13445, "raw_key_size": 134726, "raw_average_key_size": 25, "raw_value_size": 8847937, "raw_average_value_size": 1646, "num_data_blocks": 912, "num_entries": 5373, "num_filter_entries": 5373, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764784942, "oldest_key_time": 0, "file_creation_time": 1764787888, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a1ac3b74-8599-4a51-8b4c-6fd35a134427", "db_session_id": "TYOLZSJOOVNJYKF8Y1CE", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Dec  3 18:51:28 compute-0 ceph-mon[192802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 18:51:28 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:51:28.865005) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 8982482 bytes
Dec  3 18:51:28 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:51:28.868848) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 123.3 rd, 103.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.0 +0.0 blob) out(8.6 +0.0 blob), read-write-amplify(5.9) write-amplify(2.7) OK, records in: 5891, records dropped: 518 output_compression: NoCompression
Dec  3 18:51:28 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:51:28.868879) EVENT_LOG_v1 {"time_micros": 1764787888868865, "job": 38, "event": "compaction_finished", "compaction_time_micros": 87017, "compaction_time_cpu_micros": 30031, "output_level": 6, "num_output_files": 1, "total_output_size": 8982482, "num_input_records": 5891, "num_output_records": 5373, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 18:51:28 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 18:51:28 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764787888870262, "job": 38, "event": "table_file_deletion", "file_number": 70}
Dec  3 18:51:28 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 18:51:28 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764787888873218, "job": 38, "event": "table_file_deletion", "file_number": 68}
Dec  3 18:51:28 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:51:28.777496) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:51:28 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:51:28.873808) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:51:28 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:51:28.873852) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:51:28 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:51:28.873855) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:51:28 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:51:28.873858) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:51:28 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:51:28.873861) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:51:29 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1568: 321 pgs: 2 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 311 active+clean; 155 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 2.7 KiB/s wr, 45 op/s
Dec  3 18:51:29 compute-0 podman[158200]: time="2025-12-03T18:51:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:51:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:51:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 18:51:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:51:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8645 "" "Go-http-client/1.1"
Dec  3 18:51:30 compute-0 nova_compute[348325]: 2025-12-03 18:51:30.433 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:51:31 compute-0 openstack_network_exporter[365222]: ERROR   18:51:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:51:31 compute-0 openstack_network_exporter[365222]: ERROR   18:51:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:51:31 compute-0 openstack_network_exporter[365222]: 
Dec  3 18:51:31 compute-0 openstack_network_exporter[365222]: ERROR   18:51:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:51:31 compute-0 openstack_network_exporter[365222]: ERROR   18:51:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:51:31 compute-0 openstack_network_exporter[365222]: ERROR   18:51:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:51:31 compute-0 openstack_network_exporter[365222]: 
Dec  3 18:51:31 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1569: 321 pgs: 2 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 311 active+clean; 147 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 3.5 KiB/s wr, 61 op/s
Dec  3 18:51:33 compute-0 nova_compute[348325]: 2025-12-03 18:51:33.126 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:51:33 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1570: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 3.5 KiB/s wr, 71 op/s
Dec  3 18:51:33 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:51:33 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Dec  3 18:51:33 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Dec  3 18:51:33 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Dec  3 18:51:35 compute-0 nova_compute[348325]: 2025-12-03 18:51:35.435 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:51:35 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1572: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 46 KiB/s rd, 2.2 KiB/s wr, 61 op/s
Dec  3 18:51:35 compute-0 podman[432725]: 2025-12-03 18:51:35.943755568 +0000 UTC m=+0.102621893 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 18:51:35 compute-0 podman[432726]: 2025-12-03 18:51:35.954668085 +0000 UTC m=+0.097230470 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, container_name=openstack_network_exporter, name=ubi9-minimal, vendor=Red Hat, Inc., io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, version=9.6, managed_by=edpm_ansible, io.openshift.expose-services=, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, vcs-type=git, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container)
Dec  3 18:51:35 compute-0 podman[432724]: 2025-12-03 18:51:35.970926434 +0000 UTC m=+0.127613955 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd)
Dec  3 18:51:37 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1573: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 1.8 KiB/s wr, 50 op/s
Dec  3 18:51:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 18:51:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1641545365' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 18:51:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 18:51:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1641545365' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 18:51:37 compute-0 podman[432795]: 2025-12-03 18:51:37.934610975 +0000 UTC m=+0.075942110 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  3 18:51:37 compute-0 podman[432789]: 2025-12-03 18:51:37.94827105 +0000 UTC m=+0.104554871 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, vcs-type=git, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, managed_by=edpm_ansible, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  3 18:51:37 compute-0 podman[432790]: 2025-12-03 18:51:37.966520037 +0000 UTC m=+0.116448632 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  3 18:51:38 compute-0 nova_compute[348325]: 2025-12-03 18:51:38.129 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:51:39 compute-0 nova_compute[348325]: 2025-12-03 18:51:39.133 348329 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764787884.131332, c43f7e6f-80d9-491d-a394-ed3d8387e266 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  3 18:51:39 compute-0 nova_compute[348325]: 2025-12-03 18:51:39.134 348329 INFO nova.compute.manager [-] [instance: c43f7e6f-80d9-491d-a394-ed3d8387e266] VM Stopped (Lifecycle Event)
Dec  3 18:51:39 compute-0 nova_compute[348325]: 2025-12-03 18:51:39.180 348329 DEBUG nova.compute.manager [None req-b259623c-2a57-4c54-beed-921d51253a75 - - - - - -] [instance: c43f7e6f-80d9-491d-a394-ed3d8387e266] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  3 18:51:39 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:51:39 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1574: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 818 B/s wr, 26 op/s
Dec  3 18:51:40 compute-0 nova_compute[348325]: 2025-12-03 18:51:40.438 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:51:41 compute-0 systemd[1]: session-62.scope: Deactivated successfully.
Dec  3 18:51:41 compute-0 systemd-logind[784]: Session 62 logged out. Waiting for processes to exit.
Dec  3 18:51:41 compute-0 systemd-logind[784]: Removed session 62.
Dec  3 18:51:41 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1575: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 102 B/s wr, 12 op/s
Dec  3 18:51:43 compute-0 nova_compute[348325]: 2025-12-03 18:51:43.131 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:51:43 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1576: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 716 B/s wr, 5 op/s
Dec  3 18:51:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:51:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:51:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:51:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:51:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:51:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:51:44 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:51:44 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Dec  3 18:51:44 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Dec  3 18:51:44 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Dec  3 18:51:45 compute-0 nova_compute[348325]: 2025-12-03 18:51:45.441 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:51:45 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1578: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 716 B/s wr, 5 op/s
Dec  3 18:51:47 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1579: 321 pgs: 321 active+clean; 147 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 820 KiB/s wr, 15 op/s
Dec  3 18:51:48 compute-0 nova_compute[348325]: 2025-12-03 18:51:48.133 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:51:49 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:51:49 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1580: 321 pgs: 321 active+clean; 155 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.6 MiB/s wr, 18 op/s
Dec  3 18:51:49 compute-0 podman[432845]: 2025-12-03 18:51:49.933758525 +0000 UTC m=+0.092611538 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  3 18:51:50 compute-0 nova_compute[348325]: 2025-12-03 18:51:50.443 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:51:51 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1581: 321 pgs: 321 active+clean; 155 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 MiB/s wr, 15 op/s
Dec  3 18:51:51 compute-0 podman[432869]: 2025-12-03 18:51:51.918985546 +0000 UTC m=+0.080941763 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm)
Dec  3 18:51:51 compute-0 podman[432868]: 2025-12-03 18:51:51.981439786 +0000 UTC m=+0.136123655 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec  3 18:51:52 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Dec  3 18:51:52 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Dec  3 18:51:52 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Dec  3 18:51:53 compute-0 nova_compute[348325]: 2025-12-03 18:51:53.135 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:51:53 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1583: 321 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 314 active+clean; 155 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 1.7 MiB/s wr, 16 op/s
Dec  3 18:51:54 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:51:55 compute-0 nova_compute[348325]: 2025-12-03 18:51:55.447 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:51:55 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1584: 321 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 314 active+clean; 155 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 1.6 MiB/s wr, 15 op/s
Dec  3 18:51:57 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1585: 321 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 314 active+clean; 147 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 774 KiB/s wr, 23 op/s
Dec  3 18:51:58 compute-0 nova_compute[348325]: 2025-12-03 18:51:58.137 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:51:59 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:51:59 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1586: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Dec  3 18:51:59 compute-0 podman[158200]: time="2025-12-03T18:51:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:51:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:51:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 18:51:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:51:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8644 "" "Go-http-client/1.1"
Dec  3 18:52:00 compute-0 nova_compute[348325]: 2025-12-03 18:52:00.449 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:52:01 compute-0 openstack_network_exporter[365222]: ERROR   18:52:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:52:01 compute-0 openstack_network_exporter[365222]: ERROR   18:52:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:52:01 compute-0 openstack_network_exporter[365222]: ERROR   18:52:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:52:01 compute-0 openstack_network_exporter[365222]: ERROR   18:52:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:52:01 compute-0 openstack_network_exporter[365222]: 
Dec  3 18:52:01 compute-0 openstack_network_exporter[365222]: ERROR   18:52:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:52:01 compute-0 openstack_network_exporter[365222]: 
Dec  3 18:52:01 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1587: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Dec  3 18:52:02 compute-0 systemd-logind[784]: New session 63 of user zuul.
Dec  3 18:52:02 compute-0 systemd[1]: Started Session 63 of User zuul.
Dec  3 18:52:03 compute-0 python3[433092]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep node_exporter#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:52:03 compute-0 nova_compute[348325]: 2025-12-03 18:52:03.139 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:52:03 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1588: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.3 KiB/s wr, 23 op/s
Dec  3 18:52:04 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:52:04 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Dec  3 18:52:04 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Dec  3 18:52:04 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Dec  3 18:52:05 compute-0 nova_compute[348325]: 2025-12-03 18:52:05.452 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:52:05 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1590: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1023 B/s wr, 22 op/s
Dec  3 18:52:06 compute-0 podman[433131]: 2025-12-03 18:52:06.935009141 +0000 UTC m=+0.087357130 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 18:52:06 compute-0 podman[433132]: 2025-12-03 18:52:06.935947233 +0000 UTC m=+0.090562177 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  3 18:52:06 compute-0 podman[433133]: 2025-12-03 18:52:06.937085921 +0000 UTC m=+0.089981123 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, architecture=x86_64, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, config_id=edpm, io.buildah.version=1.33.7, container_name=openstack_network_exporter, managed_by=edpm_ansible, name=ubi9-minimal, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec  3 18:52:07 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1591: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 409 B/s wr, 4 op/s
Dec  3 18:52:08 compute-0 nova_compute[348325]: 2025-12-03 18:52:08.142 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:52:08 compute-0 podman[433194]: 2025-12-03 18:52:08.928915564 +0000 UTC m=+0.094224627 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  3 18:52:08 compute-0 podman[433195]: 2025-12-03 18:52:08.937984825 +0000 UTC m=+0.095722624 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  3 18:52:08 compute-0 podman[433193]: 2025-12-03 18:52:08.942021954 +0000 UTC m=+0.109800178 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, io.openshift.expose-services=, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, release=1214.1726694543, vendor=Red Hat, Inc., config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.tags=base rhel9, version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9.)
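The podman events above (multipathd, node_exporter, openstack_network_exporter, ceilometer_agent_ipmi, ovn_metadata_agent, kepler) are periodic healthcheck results: per each container's config_data, /var/lib/openstack/healthchecks/<name> is bind-mounted at /openstack and podman runs the /openstack/healthcheck script found there, emitting a health_status event with health_failing_streak. A minimal sketch of pulling (name, status) pairs out of such journal lines; the regex targets the name=.../health_status=... fields visible above, and the helper itself is illustrative, not part of edpm_ansible:

    import re

    # The leading space keeps this from matching container_name=...
    EVENT_RE = re.compile(r" name=([^,)]+).*? health_status=([^,)]+)")

    def health_events(lines):
        for line in lines:
            if "container health_status" not in line:
                continue
            m = EVENT_RE.search(line)
            if m:
                yield m.group(1), m.group(2)

    # Fed this section of the journal, it yields ("multipathd", "healthy"),
    # ("node_exporter", "healthy"), ..., ("kepler", "healthy").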
Dec  3 18:52:09 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:52:09 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1592: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:52:09 compute-0 nova_compute[348325]: 2025-12-03 18:52:09.800 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:52:10 compute-0 nova_compute[348325]: 2025-12-03 18:52:10.453 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:52:10 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:52:10 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:52:10 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 18:52:10 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:52:10 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 18:52:10 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:52:10 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 4f8445e5-a118-4673-ba05-0bb60aed4627 does not exist
Dec  3 18:52:10 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 32f023f4-3ad6-44b1-9c4f-1beae30e4595 does not exist
Dec  3 18:52:10 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 4936ba16-ffa5-42a0-90d6-4028c0d6f6cd does not exist
Dec  3 18:52:10 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 18:52:10 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 18:52:10 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 18:52:10 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:52:10 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:52:10 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:52:11 compute-0 python3[433604]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep podman_exporter#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
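The ansible task above is a liveness probe: it lists all containers and greps the output for podman_exporter. The same check without a shell pipeline, sketched in Python (the container name comes from the log line; the helper is illustrative):

    import subprocess

    def container_status(name):
        fmt = "{{.Names}} {{.Status}}"
        out = subprocess.run(
            ["podman", "ps", "-a", "--format", fmt],
            capture_output=True, text=True, check=True).stdout
        for line in out.splitlines():
            cname, _, status = line.partition(" ")
            if cname == name:
                return status  # e.g. "Up 2 hours (healthy)"
        return None

    print(container_status("podman_exporter"))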
Dec  3 18:52:11 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1593: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:52:11 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:52:11 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:52:11 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
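The handle_command/audit pairs above show mgr.compute-0.etccde driving the monitor through the mon_command interface with JSON-formatted commands (config generate-minimal-conf, auth get, osd tree with states=["destroyed"]). The same interface is reachable from Python through the rados bindings; a minimal sketch, assuming a readable /etc/ceph/ceph.conf and an admin keyring on the node:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        # Same command the mgr dispatched in the audit log above.
        cmd = json.dumps({"prefix": "osd tree",
                          "states": ["destroyed"], "format": "json"})
        # mon_command returns (return code, output buffer, status string).
        ret, outbuf, outs = cluster.mon_command(cmd, b"")
        destroyed = json.loads(outbuf) if ret == 0 else None
        print(ret, outs, destroyed)
    finally:
        cluster.shutdown()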
Dec  3 18:52:11 compute-0 podman[433736]: 2025-12-03 18:52:11.758410994 +0000 UTC m=+0.068438052 container create 47a546c8561a82b7eceadc49377c9a1d1c76ecd736a97766f8a6cfa1016d6eb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_jepsen, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:52:11 compute-0 systemd[1]: Started libpod-conmon-47a546c8561a82b7eceadc49377c9a1d1c76ecd736a97766f8a6cfa1016d6eb3.scope.
Dec  3 18:52:11 compute-0 podman[433736]: 2025-12-03 18:52:11.731180699 +0000 UTC m=+0.041207767 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:52:11 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:52:11 compute-0 podman[433736]: 2025-12-03 18:52:11.898393852 +0000 UTC m=+0.208420920 container init 47a546c8561a82b7eceadc49377c9a1d1c76ecd736a97766f8a6cfa1016d6eb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_jepsen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:52:11 compute-0 podman[433736]: 2025-12-03 18:52:11.917317763 +0000 UTC m=+0.227344781 container start 47a546c8561a82b7eceadc49377c9a1d1c76ecd736a97766f8a6cfa1016d6eb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_jepsen, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec  3 18:52:11 compute-0 podman[433736]: 2025-12-03 18:52:11.92455523 +0000 UTC m=+0.234582308 container attach 47a546c8561a82b7eceadc49377c9a1d1c76ecd736a97766f8a6cfa1016d6eb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_jepsen, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:52:11 compute-0 affectionate_jepsen[433751]: 167 167
Dec  3 18:52:11 compute-0 systemd[1]: libpod-47a546c8561a82b7eceadc49377c9a1d1c76ecd736a97766f8a6cfa1016d6eb3.scope: Deactivated successfully.
Dec  3 18:52:11 compute-0 podman[433736]: 2025-12-03 18:52:11.927736388 +0000 UTC m=+0.237763496 container died 47a546c8561a82b7eceadc49377c9a1d1c76ecd736a97766f8a6cfa1016d6eb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  3 18:52:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4d8c9d34de997a4235d8ab28f036835792388835891c198fcd5749eafaab376-merged.mount: Deactivated successfully.
Dec  3 18:52:11 compute-0 podman[433736]: 2025-12-03 18:52:11.998181758 +0000 UTC m=+0.308208786 container remove 47a546c8561a82b7eceadc49377c9a1d1c76ecd736a97766f8a6cfa1016d6eb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_jepsen, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec  3 18:52:12 compute-0 systemd[1]: libpod-conmon-47a546c8561a82b7eceadc49377c9a1d1c76ecd736a97766f8a6cfa1016d6eb3.scope: Deactivated successfully.
Dec  3 18:52:12 compute-0 podman[433776]: 2025-12-03 18:52:12.23991716 +0000 UTC m=+0.066731660 container create d53565a15459c635247354964d34cd2efe42240470a0f118dce2f2b76988dfc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_moser, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:52:12 compute-0 systemd[1]: Started libpod-conmon-d53565a15459c635247354964d34cd2efe42240470a0f118dce2f2b76988dfc3.scope.
Dec  3 18:52:12 compute-0 podman[433776]: 2025-12-03 18:52:12.213201917 +0000 UTC m=+0.040016447 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:52:12 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:52:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63b2dcd40c499274c49366b1b447b1daf48124d91177f9230f364a8151472ccf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:52:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63b2dcd40c499274c49366b1b447b1daf48124d91177f9230f364a8151472ccf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:52:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63b2dcd40c499274c49366b1b447b1daf48124d91177f9230f364a8151472ccf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:52:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63b2dcd40c499274c49366b1b447b1daf48124d91177f9230f364a8151472ccf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:52:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63b2dcd40c499274c49366b1b447b1daf48124d91177f9230f364a8151472ccf/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 18:52:12 compute-0 podman[433776]: 2025-12-03 18:52:12.378197516 +0000 UTC m=+0.205012036 container init d53565a15459c635247354964d34cd2efe42240470a0f118dce2f2b76988dfc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_moser, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Dec  3 18:52:12 compute-0 podman[433776]: 2025-12-03 18:52:12.399409073 +0000 UTC m=+0.226223573 container start d53565a15459c635247354964d34cd2efe42240470a0f118dce2f2b76988dfc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_moser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  3 18:52:12 compute-0 podman[433776]: 2025-12-03 18:52:12.404018077 +0000 UTC m=+0.230832577 container attach d53565a15459c635247354964d34cd2efe42240470a0f118dce2f2b76988dfc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_moser, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:52:13 compute-0 nova_compute[348325]: 2025-12-03 18:52:13.145 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:52:13 compute-0 nova_compute[348325]: 2025-12-03 18:52:13.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:52:13 compute-0 hungry_moser[433792]: --> passed data devices: 0 physical, 3 LVM
Dec  3 18:52:13 compute-0 hungry_moser[433792]: --> relative data size: 1.0
Dec  3 18:52:13 compute-0 hungry_moser[433792]: --> All data devices are unavailable
Dec  3 18:52:13 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1594: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:52:13 compute-0 systemd[1]: libpod-d53565a15459c635247354964d34cd2efe42240470a0f118dce2f2b76988dfc3.scope: Deactivated successfully.
Dec  3 18:52:13 compute-0 podman[433776]: 2025-12-03 18:52:13.515634147 +0000 UTC m=+1.342448647 container died d53565a15459c635247354964d34cd2efe42240470a0f118dce2f2b76988dfc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_moser, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec  3 18:52:13 compute-0 systemd[1]: libpod-d53565a15459c635247354964d34cd2efe42240470a0f118dce2f2b76988dfc3.scope: Consumed 1.044s CPU time.
Dec  3 18:52:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-63b2dcd40c499274c49366b1b447b1daf48124d91177f9230f364a8151472ccf-merged.mount: Deactivated successfully.
Dec  3 18:52:13 compute-0 podman[433776]: 2025-12-03 18:52:13.587846549 +0000 UTC m=+1.414661049 container remove d53565a15459c635247354964d34cd2efe42240470a0f118dce2f2b76988dfc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_moser, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Dec  3 18:52:13 compute-0 systemd[1]: libpod-conmon-d53565a15459c635247354964d34cd2efe42240470a0f118dce2f2b76988dfc3.scope: Deactivated successfully.
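affectionate_jepsen and hungry_moser are cephadm's disposable helper containers: each journal sequence (container create, init, start, attach, died, remove) is one short-lived run of the pinned ceph image, here executing ceph-volume, whose output at 18:52:13 reports the three LVM data devices as already consumed ("All data devices are unavailable"). A hedged sketch of the same one-shot pattern using podman directly; the exact cephadm command line is not in the log, so the flags and the inventory subcommand are illustrative:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    def run_in_ceph_container(*args):
        # --rm reproduces the create/init/start/attach/died/remove
        # lifecycle recorded by podman in the journal above.
        return subprocess.run(
            ["podman", "run", "--rm", "--privileged", "--net=host",
             "-v", "/dev:/dev", "-v", "/run/udev:/run/udev",
             IMAGE, *args],
            capture_output=True, text=True, check=False)

    result = run_in_ceph_container("ceph-volume", "inventory", "--format", "json")
    print(result.returncode, result.stdout[:200])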
Dec  3 18:52:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_18:52:13
Dec  3 18:52:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 18:52:13 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 18:52:13 compute-0 ceph-mgr[193091]: [balancer INFO root] pools ['.mgr', 'backups', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', 'images', 'volumes', 'cephfs.cephfs.meta', 'vms']
Dec  3 18:52:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:52:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:52:13 compute-0 ceph-mgr[193091]: [balancer INFO root] prepared 0/10 changes
Dec  3 18:52:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:52:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:52:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:52:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:52:14 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:52:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 18:52:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:52:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 18:52:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:52:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:52:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:52:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:52:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:52:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:52:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:52:14 compute-0 nova_compute[348325]: 2025-12-03 18:52:14.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:52:14 compute-0 nova_compute[348325]: 2025-12-03 18:52:14.488 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:52:14 compute-0 podman[433974]: 2025-12-03 18:52:14.513598901 +0000 UTC m=+0.065245643 container create b35c4ece844a003ebedcdc42c6464d1430be2ffdbf427598fed1b81c47a018e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_morse, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec  3 18:52:14 compute-0 systemd[1]: Started libpod-conmon-b35c4ece844a003ebedcdc42c6464d1430be2ffdbf427598fed1b81c47a018e8.scope.
Dec  3 18:52:14 compute-0 podman[433974]: 2025-12-03 18:52:14.484909622 +0000 UTC m=+0.036556384 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:52:14 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:52:14 compute-0 podman[433974]: 2025-12-03 18:52:14.625979096 +0000 UTC m=+0.177625858 container init b35c4ece844a003ebedcdc42c6464d1430be2ffdbf427598fed1b81c47a018e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_morse, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:52:14 compute-0 podman[433974]: 2025-12-03 18:52:14.63517223 +0000 UTC m=+0.186818972 container start b35c4ece844a003ebedcdc42c6464d1430be2ffdbf427598fed1b81c47a018e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_morse, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:52:14 compute-0 podman[433974]: 2025-12-03 18:52:14.639869645 +0000 UTC m=+0.191516407 container attach b35c4ece844a003ebedcdc42c6464d1430be2ffdbf427598fed1b81c47a018e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_morse, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  3 18:52:14 compute-0 naughty_morse[433991]: 167 167
Dec  3 18:52:14 compute-0 systemd[1]: libpod-b35c4ece844a003ebedcdc42c6464d1430be2ffdbf427598fed1b81c47a018e8.scope: Deactivated successfully.
Dec  3 18:52:14 compute-0 conmon[433991]: conmon b35c4ece844a003ebedc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b35c4ece844a003ebedcdc42c6464d1430be2ffdbf427598fed1b81c47a018e8.scope/container/memory.events
Dec  3 18:52:14 compute-0 podman[433996]: 2025-12-03 18:52:14.710917259 +0000 UTC m=+0.049268644 container died b35c4ece844a003ebedcdc42c6464d1430be2ffdbf427598fed1b81c47a018e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_morse, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec  3 18:52:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1ad2e0ea33d836c448e0a7fb887446436a16027fd8414bec289b36211e5c0b4-merged.mount: Deactivated successfully.
Dec  3 18:52:14 compute-0 podman[433996]: 2025-12-03 18:52:14.776312206 +0000 UTC m=+0.114663441 container remove b35c4ece844a003ebedcdc42c6464d1430be2ffdbf427598fed1b81c47a018e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec  3 18:52:14 compute-0 systemd[1]: libpod-conmon-b35c4ece844a003ebedcdc42c6464d1430be2ffdbf427598fed1b81c47a018e8.scope: Deactivated successfully.
Dec  3 18:52:15 compute-0 podman[434017]: 2025-12-03 18:52:15.057563953 +0000 UTC m=+0.072459090 container create 23012daffd17c859839d96302c23c4ef758c4cca95e1381d43e6c6821518d717 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:52:15 compute-0 podman[434017]: 2025-12-03 18:52:15.02263464 +0000 UTC m=+0.037529827 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:52:15 compute-0 systemd[1]: Started libpod-conmon-23012daffd17c859839d96302c23c4ef758c4cca95e1381d43e6c6821518d717.scope.
Dec  3 18:52:15 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:52:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70c90d6d126e3fb4ba4380cada3b5b66a1a9e0d329f9c50962328047d483433c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:52:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70c90d6d126e3fb4ba4380cada3b5b66a1a9e0d329f9c50962328047d483433c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:52:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70c90d6d126e3fb4ba4380cada3b5b66a1a9e0d329f9c50962328047d483433c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:52:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70c90d6d126e3fb4ba4380cada3b5b66a1a9e0d329f9c50962328047d483433c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:52:15 compute-0 podman[434017]: 2025-12-03 18:52:15.216830642 +0000 UTC m=+0.231725829 container init 23012daffd17c859839d96302c23c4ef758c4cca95e1381d43e6c6821518d717 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Dec  3 18:52:15 compute-0 podman[434017]: 2025-12-03 18:52:15.24587682 +0000 UTC m=+0.260771967 container start 23012daffd17c859839d96302c23c4ef758c4cca95e1381d43e6c6821518d717 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_beaver, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  3 18:52:15 compute-0 podman[434017]: 2025-12-03 18:52:15.250993285 +0000 UTC m=+0.265888432 container attach 23012daffd17c859839d96302c23c4ef758c4cca95e1381d43e6c6821518d717 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_beaver, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:52:15 compute-0 nova_compute[348325]: 2025-12-03 18:52:15.455 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:52:15 compute-0 nova_compute[348325]: 2025-12-03 18:52:15.485 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:52:15 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1595: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]: {
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:    "0": [
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:        {
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:            "devices": [
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:                "/dev/loop3"
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:            ],
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:            "lv_name": "ceph_lv0",
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:            "lv_size": "21470642176",
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=973fbbc8-5aff-4a53-bee8-42e5a6788dd6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:            "lv_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:            "name": "ceph_lv0",
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:            "tags": {
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:                "ceph.block_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:                "ceph.cluster_name": "ceph",
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:                "ceph.crush_device_class": "",
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:                "ceph.encrypted": "0",
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:                "ceph.osd_fsid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:                "ceph.osd_id": "0",
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:                "ceph.type": "block",
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:                "ceph.vdo": "0"
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:            },
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:            "type": "block",
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:            "vg_name": "ceph_vg0"
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:        }
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:    ],
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:    "1": [
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:        {
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:            "devices": [
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:                "/dev/loop4"
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:            ],
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:            "lv_name": "ceph_lv1",
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:            "lv_size": "21470642176",
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1e2b0083-5293-47cb-a3d1-bc27cedc4ede,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:            "lv_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:            "name": "ceph_lv1",
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:            "tags": {
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:                "ceph.block_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:                "ceph.cluster_name": "ceph",
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:                "ceph.crush_device_class": "",
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:                "ceph.encrypted": "0",
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:                "ceph.osd_fsid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:                "ceph.osd_id": "1",
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:                "ceph.type": "block",
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:                "ceph.vdo": "0"
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:            },
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:            "type": "block",
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:            "vg_name": "ceph_vg1"
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:        }
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:    ],
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:    "2": [
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:        {
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:            "devices": [
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:                "/dev/loop5"
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:            ],
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:            "lv_name": "ceph_lv2",
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:            "lv_size": "21470642176",
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2abec9de-afba-437e-9a17-384a1dd8cd50,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:            "lv_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:            "name": "ceph_lv2",
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:            "tags": {
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:                "ceph.block_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:                "ceph.cluster_name": "ceph",
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:                "ceph.crush_device_class": "",
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:                "ceph.encrypted": "0",
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:                "ceph.osd_fsid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:                "ceph.osd_id": "2",
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:                "ceph.type": "block",
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:                "ceph.vdo": "0"
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:            },
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:            "type": "block",
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:            "vg_name": "ceph_vg2"
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:        }
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]:    ]
Dec  3 18:52:16 compute-0 affectionate_beaver[434033]: }
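The JSON that affectionate_beaver printed is ceph-volume lvm list --format json output: a map from OSD id ("0", "1", "2") to its logical volumes, each carrying the backing device, lv_path, and the ceph.* tags (cluster_fsid, osd_fsid, osdspec_affinity, and so on). A short sketch reducing it to an osd-id to device summary; raw stands for the blob above:

    import json

    def summarize(raw):
        # raw: the JSON text printed by the container above
        out = {}
        for osd_id, lvs in json.loads(raw).items():
            for lv in lvs:
                out[osd_id] = {
                    "lv_path": lv["lv_path"],
                    "devices": lv["devices"],
                    "osd_fsid": lv["tags"]["ceph.osd_fsid"],
                }
        return out

    # -> {"0": {"lv_path": "/dev/ceph_vg0/ceph_lv0", "devices": ["/dev/loop3"],
    #           "osd_fsid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6"}, "1": ..., "2": ...}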
Dec  3 18:52:16 compute-0 systemd[1]: libpod-23012daffd17c859839d96302c23c4ef758c4cca95e1381d43e6c6821518d717.scope: Deactivated successfully.
Dec  3 18:52:16 compute-0 podman[434017]: 2025-12-03 18:52:16.075829704 +0000 UTC m=+1.090724911 container died 23012daffd17c859839d96302c23c4ef758c4cca95e1381d43e6c6821518d717 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_beaver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:52:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-70c90d6d126e3fb4ba4380cada3b5b66a1a9e0d329f9c50962328047d483433c-merged.mount: Deactivated successfully.
Dec  3 18:52:16 compute-0 podman[434017]: 2025-12-03 18:52:16.183874261 +0000 UTC m=+1.198769398 container remove 23012daffd17c859839d96302c23c4ef758c4cca95e1381d43e6c6821518d717 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_beaver, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec  3 18:52:16 compute-0 systemd[1]: libpod-conmon-23012daffd17c859839d96302c23c4ef758c4cca95e1381d43e6c6821518d717.scope: Deactivated successfully.
Dec  3 18:52:16 compute-0 nova_compute[348325]: 2025-12-03 18:52:16.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:52:16 compute-0 nova_compute[348325]: 2025-12-03 18:52:16.487 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 18:52:17 compute-0 podman[434192]: 2025-12-03 18:52:17.073782429 +0000 UTC m=+0.065231934 container create 9360cb87ca3651f3a30b93ce4bc187ba62fdde7d37532fb91e90bc04ee2caeaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_meninsky, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:52:17 compute-0 systemd[1]: Started libpod-conmon-9360cb87ca3651f3a30b93ce4bc187ba62fdde7d37532fb91e90bc04ee2caeaf.scope.
Dec  3 18:52:17 compute-0 podman[434192]: 2025-12-03 18:52:17.048377689 +0000 UTC m=+0.039827194 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:52:17 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:52:17 compute-0 podman[434192]: 2025-12-03 18:52:17.199201562 +0000 UTC m=+0.190651077 container init 9360cb87ca3651f3a30b93ce4bc187ba62fdde7d37532fb91e90bc04ee2caeaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_meninsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:52:17 compute-0 podman[434192]: 2025-12-03 18:52:17.214012293 +0000 UTC m=+0.205461748 container start 9360cb87ca3651f3a30b93ce4bc187ba62fdde7d37532fb91e90bc04ee2caeaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_meninsky, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:52:17 compute-0 podman[434192]: 2025-12-03 18:52:17.218743789 +0000 UTC m=+0.210193264 container attach 9360cb87ca3651f3a30b93ce4bc187ba62fdde7d37532fb91e90bc04ee2caeaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:52:17 compute-0 zealous_meninsky[434207]: 167 167
Dec  3 18:52:17 compute-0 systemd[1]: libpod-9360cb87ca3651f3a30b93ce4bc187ba62fdde7d37532fb91e90bc04ee2caeaf.scope: Deactivated successfully.
Dec  3 18:52:17 compute-0 podman[434192]: 2025-12-03 18:52:17.224839077 +0000 UTC m=+0.216288542 container died 9360cb87ca3651f3a30b93ce4bc187ba62fdde7d37532fb91e90bc04ee2caeaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_meninsky, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:52:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8529e112cc71f99a06483ebf2a4b9c5fca6514700c0611277f3307926cc8d37-merged.mount: Deactivated successfully.
Dec  3 18:52:17 compute-0 podman[434192]: 2025-12-03 18:52:17.287049496 +0000 UTC m=+0.278498961 container remove 9360cb87ca3651f3a30b93ce4bc187ba62fdde7d37532fb91e90bc04ee2caeaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec  3 18:52:17 compute-0 systemd[1]: libpod-conmon-9360cb87ca3651f3a30b93ce4bc187ba62fdde7d37532fb91e90bc04ee2caeaf.scope: Deactivated successfully.
Dec  3 18:52:17 compute-0 podman[434230]: 2025-12-03 18:52:17.483672077 +0000 UTC m=+0.059053003 container create 68328e5f5dbc622567fa0b6a1dac3821587be186f1715f5798b92a9657112fd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_lumiere, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:52:17 compute-0 nova_compute[348325]: 2025-12-03 18:52:17.482 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "refresh_cache-a6019a9c-c065-49d8-bef3-219bd2c79d8c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 18:52:17 compute-0 nova_compute[348325]: 2025-12-03 18:52:17.483 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquired lock "refresh_cache-a6019a9c-c065-49d8-bef3-219bd2c79d8c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 18:52:17 compute-0 nova_compute[348325]: 2025-12-03 18:52:17.483 348329 DEBUG nova.network.neutron [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  3 18:52:17 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1596: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:52:17 compute-0 systemd[1]: Started libpod-conmon-68328e5f5dbc622567fa0b6a1dac3821587be186f1715f5798b92a9657112fd7.scope.
Dec  3 18:52:17 compute-0 podman[434230]: 2025-12-03 18:52:17.458874481 +0000 UTC m=+0.034255437 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:52:17 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:52:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be86639dcdcd1ec515e758e71b58d9b445c4f986ed07ac17a0e78cb28ef3c3b3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:52:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be86639dcdcd1ec515e758e71b58d9b445c4f986ed07ac17a0e78cb28ef3c3b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:52:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be86639dcdcd1ec515e758e71b58d9b445c4f986ed07ac17a0e78cb28ef3c3b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:52:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be86639dcdcd1ec515e758e71b58d9b445c4f986ed07ac17a0e78cb28ef3c3b3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:52:17 compute-0 podman[434230]: 2025-12-03 18:52:17.596376498 +0000 UTC m=+0.171757454 container init 68328e5f5dbc622567fa0b6a1dac3821587be186f1715f5798b92a9657112fd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_lumiere, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  3 18:52:17 compute-0 podman[434230]: 2025-12-03 18:52:17.605115531 +0000 UTC m=+0.180496447 container start 68328e5f5dbc622567fa0b6a1dac3821587be186f1715f5798b92a9657112fd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec  3 18:52:17 compute-0 podman[434230]: 2025-12-03 18:52:17.609729425 +0000 UTC m=+0.185110371 container attach 68328e5f5dbc622567fa0b6a1dac3821587be186f1715f5798b92a9657112fd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_lumiere, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec  3 18:52:18 compute-0 nova_compute[348325]: 2025-12-03 18:52:18.152 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:52:18 compute-0 goofy_lumiere[434246]: {
Dec  3 18:52:18 compute-0 goofy_lumiere[434246]:    "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
Dec  3 18:52:18 compute-0 goofy_lumiere[434246]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:52:18 compute-0 goofy_lumiere[434246]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 18:52:18 compute-0 goofy_lumiere[434246]:        "osd_id": 1,
Dec  3 18:52:18 compute-0 goofy_lumiere[434246]:        "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:52:18 compute-0 goofy_lumiere[434246]:        "type": "bluestore"
Dec  3 18:52:18 compute-0 goofy_lumiere[434246]:    },
Dec  3 18:52:18 compute-0 goofy_lumiere[434246]:    "2abec9de-afba-437e-9a17-384a1dd8cd50": {
Dec  3 18:52:18 compute-0 goofy_lumiere[434246]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:52:18 compute-0 goofy_lumiere[434246]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 18:52:18 compute-0 goofy_lumiere[434246]:        "osd_id": 2,
Dec  3 18:52:18 compute-0 goofy_lumiere[434246]:        "osd_uuid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:52:18 compute-0 goofy_lumiere[434246]:        "type": "bluestore"
Dec  3 18:52:18 compute-0 goofy_lumiere[434246]:    },
Dec  3 18:52:18 compute-0 goofy_lumiere[434246]:    "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": {
Dec  3 18:52:18 compute-0 goofy_lumiere[434246]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:52:18 compute-0 goofy_lumiere[434246]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 18:52:18 compute-0 goofy_lumiere[434246]:        "osd_id": 0,
Dec  3 18:52:18 compute-0 goofy_lumiere[434246]:        "osd_uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:52:18 compute-0 goofy_lumiere[434246]:        "type": "bluestore"
Dec  3 18:52:18 compute-0 goofy_lumiere[434246]:    }
Dec  3 18:52:18 compute-0 goofy_lumiere[434246]: }
Dec  3 18:52:18 compute-0 systemd[1]: libpod-68328e5f5dbc622567fa0b6a1dac3821587be186f1715f5798b92a9657112fd7.scope: Deactivated successfully.
Dec  3 18:52:18 compute-0 podman[434230]: 2025-12-03 18:52:18.716181418 +0000 UTC m=+1.291562324 container died 68328e5f5dbc622567fa0b6a1dac3821587be186f1715f5798b92a9657112fd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_lumiere, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:52:18 compute-0 systemd[1]: libpod-68328e5f5dbc622567fa0b6a1dac3821587be186f1715f5798b92a9657112fd7.scope: Consumed 1.086s CPU time.
Dec  3 18:52:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-be86639dcdcd1ec515e758e71b58d9b445c4f986ed07ac17a0e78cb28ef3c3b3-merged.mount: Deactivated successfully.
Dec  3 18:52:18 compute-0 podman[434230]: 2025-12-03 18:52:18.778856349 +0000 UTC m=+1.354237255 container remove 68328e5f5dbc622567fa0b6a1dac3821587be186f1715f5798b92a9657112fd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_lumiere, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  3 18:52:18 compute-0 systemd[1]: libpod-conmon-68328e5f5dbc622567fa0b6a1dac3821587be186f1715f5798b92a9657112fd7.scope: Deactivated successfully.
Dec  3 18:52:18 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:52:18 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:52:18 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:52:18 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:52:18 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev f3518815-fc9e-438e-938f-07a75ddb340e does not exist
Dec  3 18:52:18 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev f911586f-c1ef-41da-87c5-82a6d1e65fde does not exist
Dec  3 18:52:19 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:52:19 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1597: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:52:19 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:52:19 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:52:20 compute-0 nova_compute[348325]: 2025-12-03 18:52:20.460 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:52:20 compute-0 nova_compute[348325]: 2025-12-03 18:52:20.497 348329 DEBUG nova.network.neutron [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] Updating instance_info_cache with network_info: [{"id": "bdba7a40-8840-4832-a614-279c23eb82ca", "address": "fa:16:3e:93:41:b2", "network": {"id": "85c8d446-ad7f-4d1b-a311-89b0b07e8aad", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.189", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2770200bdb2436c90142fa2e5ddcd47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdba7a40-88", "ovs_interfaceid": "bdba7a40-8840-4832-a614-279c23eb82ca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 18:52:20 compute-0 nova_compute[348325]: 2025-12-03 18:52:20.516 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Releasing lock "refresh_cache-a6019a9c-c065-49d8-bef3-219bd2c79d8c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 18:52:20 compute-0 nova_compute[348325]: 2025-12-03 18:52:20.517 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  3 18:52:20 compute-0 nova_compute[348325]: 2025-12-03 18:52:20.519 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:52:20 compute-0 podman[434419]: 2025-12-03 18:52:20.957002468 +0000 UTC m=+0.120092482 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  3 18:52:21 compute-0 python3[434544]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep kepler#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 18:52:21 compute-0 nova_compute[348325]: 2025-12-03 18:52:21.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:52:21 compute-0 nova_compute[348325]: 2025-12-03 18:52:21.488 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 18:52:21 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1598: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:52:22 compute-0 podman[434584]: 2025-12-03 18:52:22.965475555 +0000 UTC m=+0.136233897 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.license=GPLv2)
Dec  3 18:52:22 compute-0 podman[434585]: 2025-12-03 18:52:22.969917673 +0000 UTC m=+0.124365847 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, org.label-schema.build-date=20251125)
Dec  3 18:52:23 compute-0 nova_compute[348325]: 2025-12-03 18:52:23.155 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:52:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:52:23.348 286999 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:52:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:52:23.349 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:52:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:52:23.351 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:52:23 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1599: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:52:24 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:52:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 18:52:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:52:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 18:52:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:52:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001104379822719281 of space, bias 1.0, pg target 0.33131394681578435 quantized to 32 (current 32)
Dec  3 18:52:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:52:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:52:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:52:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:52:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:52:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec  3 18:52:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:52:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 18:52:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:52:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:52:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:52:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 18:52:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:52:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 18:52:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:52:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:52:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:52:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 18:52:24 compute-0 nova_compute[348325]: 2025-12-03 18:52:24.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:52:24 compute-0 nova_compute[348325]: 2025-12-03 18:52:24.521 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:52:24 compute-0 nova_compute[348325]: 2025-12-03 18:52:24.521 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:52:24 compute-0 nova_compute[348325]: 2025-12-03 18:52:24.521 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:52:24 compute-0 nova_compute[348325]: 2025-12-03 18:52:24.521 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 18:52:24 compute-0 nova_compute[348325]: 2025-12-03 18:52:24.522 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:52:24 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:52:24 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2565172694' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:52:25 compute-0 nova_compute[348325]: 2025-12-03 18:52:25.024 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:52:25 compute-0 nova_compute[348325]: 2025-12-03 18:52:25.281 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 18:52:25 compute-0 nova_compute[348325]: 2025-12-03 18:52:25.282 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 18:52:25 compute-0 nova_compute[348325]: 2025-12-03 18:52:25.284 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 18:52:25 compute-0 nova_compute[348325]: 2025-12-03 18:52:25.291 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 18:52:25 compute-0 nova_compute[348325]: 2025-12-03 18:52:25.292 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 18:52:25 compute-0 nova_compute[348325]: 2025-12-03 18:52:25.292 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 18:52:25 compute-0 nova_compute[348325]: 2025-12-03 18:52:25.463 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:52:25 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1600: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:52:25 compute-0 nova_compute[348325]: 2025-12-03 18:52:25.805 348329 WARNING nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 18:52:25 compute-0 nova_compute[348325]: 2025-12-03 18:52:25.806 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3631MB free_disk=59.92203140258789GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 18:52:25 compute-0 nova_compute[348325]: 2025-12-03 18:52:25.806 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:52:25 compute-0 nova_compute[348325]: 2025-12-03 18:52:25.807 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:52:25 compute-0 nova_compute[348325]: 2025-12-03 18:52:25.927 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance 1ca1fbdb-089c-4544-821e-0542089b8424 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 18:52:25 compute-0 nova_compute[348325]: 2025-12-03 18:52:25.928 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance a6019a9c-c065-49d8-bef3-219bd2c79d8c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 18:52:25 compute-0 nova_compute[348325]: 2025-12-03 18:52:25.930 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 18:52:25 compute-0 nova_compute[348325]: 2025-12-03 18:52:25.931 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 18:52:25 compute-0 nova_compute[348325]: 2025-12-03 18:52:25.949 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Refreshing inventories for resource provider 00cd1895-22aa-49c6-bdb2-0991af662704 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  3 18:52:25 compute-0 nova_compute[348325]: 2025-12-03 18:52:25.970 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Updating ProviderTree inventory for provider 00cd1895-22aa-49c6-bdb2-0991af662704 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  3 18:52:25 compute-0 nova_compute[348325]: 2025-12-03 18:52:25.971 348329 DEBUG nova.compute.provider_tree [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Updating inventory in ProviderTree for provider 00cd1895-22aa-49c6-bdb2-0991af662704 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  3 18:52:25 compute-0 nova_compute[348325]: 2025-12-03 18:52:25.994 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Refreshing aggregate associations for resource provider 00cd1895-22aa-49c6-bdb2-0991af662704, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  3 18:52:26 compute-0 nova_compute[348325]: 2025-12-03 18:52:26.039 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Refreshing trait associations for resource provider 00cd1895-22aa-49c6-bdb2-0991af662704, traits: COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_FMA3,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AESNI,HW_CPU_X86_AMD_SVM,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SVM,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_ABM,HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_BMI,HW_CPU_X86_SHA,COMPUTE_NODE,HW_CPU_X86_SSE42,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE4A,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,HW_CPU_X86_AVX2,COMPUTE_ACCELERATORS,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ARI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  3 18:52:26 compute-0 nova_compute[348325]: 2025-12-03 18:52:26.094 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:52:26 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 18:52:26 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.0 total, 600.0 interval#012Cumulative writes: 7285 writes, 32K keys, 7285 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s#012Cumulative WAL: 7285 writes, 7285 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1340 writes, 6068 keys, 1340 commit groups, 1.0 writes per commit group, ingest: 8.70 MB, 0.01 MB/s#012Interval WAL: 1340 writes, 1340 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    100.1      0.39              0.19        19    0.021       0      0       0.0       0.0#012  L6      1/0    8.57 MB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   3.3    111.6     90.3      1.45              0.53        18    0.080     86K    10K       0.0       0.0#012 Sum      1/0    8.57 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   4.3     87.8     92.4      1.84              0.71        37    0.050     86K    10K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.4    110.9    115.0      0.35              0.16         8    0.044     22K   2528       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   0.0    111.6     90.3      1.45              0.53        18    0.080     86K    10K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    101.9      0.39              0.19        18    0.021       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      6.8      0.01              0.00         1    0.008       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 3000.0 total, 600.0 interval#012Flush(GB): cumulative 0.038, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.17 GB write, 0.06 MB/s write, 0.16 GB read, 0.05 MB/s read, 1.8 seconds#012Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.06 MB/s read, 0.4 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55911062f1f0#2 capacity: 308.00 MB usage: 19.78 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.000143 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1281,19.10 MB,6.20143%) FilterBlock(38,248.17 KB,0.0786868%) IndexBlock(38,449.45 KB,0.142506%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Dec  3 18:52:26 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:52:26 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4280093683' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:52:26 compute-0 nova_compute[348325]: 2025-12-03 18:52:26.554 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:52:26 compute-0 nova_compute[348325]: 2025-12-03 18:52:26.568 348329 DEBUG nova.compute.provider_tree [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 18:52:26 compute-0 nova_compute[348325]: 2025-12-03 18:52:26.582 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 18:52:26 compute-0 nova_compute[348325]: 2025-12-03 18:52:26.583 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 18:52:26 compute-0 nova_compute[348325]: 2025-12-03 18:52:26.584 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.777s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:52:27 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1601: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:52:27 compute-0 nova_compute[348325]: 2025-12-03 18:52:27.575 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:52:28 compute-0 nova_compute[348325]: 2025-12-03 18:52:28.158 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:52:29 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:52:29 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1602: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:52:29 compute-0 podman[158200]: time="2025-12-03T18:52:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:52:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:52:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 18:52:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:52:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8648 "" "Go-http-client/1.1"
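
These two GETs are a collector scraping the libpod REST API over the podman socket. A stdlib sketch of the same request; the socket path is an assumption taken from the CONTAINER_HOST setting in the podman_exporter config recorded later in this log:

    # Hedged sketch: issue the same libpod query over the Unix socket.
    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, sock_path):
            super().__init__("localhost")
            self.sock_path = sock_path

        def connect(self):
            # Swap the TCP connect for a Unix-domain socket connect.
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self.sock_path)
            self.sock = s

    conn = UnixHTTPConnection("/run/podman/podman.sock")  # assumed path
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    print(resp.status, len(resp.read()))  # expect 200 and a JSON array
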
Dec  3 18:52:30 compute-0 nova_compute[348325]: 2025-12-03 18:52:30.465 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:52:31 compute-0 openstack_network_exporter[365222]: ERROR   18:52:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:52:31 compute-0 openstack_network_exporter[365222]: ERROR   18:52:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:52:31 compute-0 openstack_network_exporter[365222]: ERROR   18:52:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:52:31 compute-0 openstack_network_exporter[365222]: ERROR   18:52:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:52:31 compute-0 openstack_network_exporter[365222]: ERROR   18:52:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
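
The exporter's appctl calls fail because no daemon control sockets are visible in its mounted run directories, and the same four errors recur on every scrape (see 18:53:01 below). A quick diagnostic sketch; the directories are assumptions based on the /run/openvswitch and /run/ovn volume mounts in the exporter's config_data recorded later in this log:

    # Hedged diagnostic: list the "<daemon>.<pid>.ctl" control sockets
    # appctl would look for in each assumed run directory.
    from pathlib import Path

    for rundir in ("/run/openvswitch", "/run/ovn"):
        d = Path(rundir)
        ctls = sorted(p.name for p in d.glob("*.ctl")) if d.is_dir() else []
        print(rundir, "->", ctls or "no control sockets found")
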
Dec  3 18:52:31 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1603: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:52:33 compute-0 nova_compute[348325]: 2025-12-03 18:52:33.161 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:52:33 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1604: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:52:34 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:52:35 compute-0 nova_compute[348325]: 2025-12-03 18:52:35.468 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:52:35 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1605: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:52:36 compute-0 python3[434844]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep openstack_network_exporter _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
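
The ansible task above runs a shell pipeline to check the exporter container's status. The same check without a shell, with the podman command taken verbatim from the log line:

    # Hedged sketch of the check the ansible task above performs: list
    # container names/statuses and keep the exporter's row (no shell, no grep).
    import subprocess

    ps = subprocess.run(
        ["podman", "ps", "-a", "--format", "{{.Names}} {{.Status}}"],
        check=True, capture_output=True, text=True,
    ).stdout
    print([l for l in ps.splitlines() if "openstack_network_exporter" in l])
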
Dec  3 18:52:37 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1606: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:52:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 18:52:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3868556659' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 18:52:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 18:52:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3868556659' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
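
The two audit entries show client.openstack dispatching "df" and "osd pool get-quota" as mon commands, which is how OpenStack's Ceph-backed drivers poll capacity and pool quota. A sketch with the python rados bindings, assuming python3-rados is installed and the same client keyring is readable:

    # Hedged sketch: dispatch the same mon commands as the audit lines above.
    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                          name="client.openstack")
    cluster.connect()
    try:
        for cmd in ({"prefix": "df", "format": "json"},
                    {"prefix": "osd pool get-quota", "pool": "volumes",
                     "format": "json"}):
            ret, out, errs = cluster.mon_command(json.dumps(cmd), b"")
            print(cmd["prefix"], ret, out[:80])  # rc and start of JSON reply
    finally:
        cluster.shutdown()
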
Dec  3 18:52:38 compute-0 podman[434886]: 2025-12-03 18:52:38.137531108 +0000 UTC m=+0.300727073 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  3 18:52:38 compute-0 podman[434885]: 2025-12-03 18:52:38.163577865 +0000 UTC m=+0.318396516 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Dec  3 18:52:38 compute-0 nova_compute[348325]: 2025-12-03 18:52:38.164 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:52:38 compute-0 podman[434887]: 2025-12-03 18:52:38.175588007 +0000 UTC m=+0.323668663 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, vcs-type=git, io.openshift.expose-services=, maintainer=Red Hat, Inc., version=9.6, architecture=x86_64, name=ubi9-minimal, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, release=1755695350, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
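
Each of these podman events is a container's configured healthcheck firing (the 'healthcheck' key in config_data mounts a script into the container as /openstack/healthcheck). Querying the state podman records as a result, with the container name taken from the first event above:

    # Hedged sketch: read the recorded health state for one container.
    import json
    import subprocess

    out = subprocess.run(
        ["podman", "inspect", "--format", "{{json .State.Health}}",
         "node_exporter"],
        check=True, capture_output=True, text=True,
    ).stdout
    print(json.loads(out)["Status"])  # e.g. "healthy", as in the events above
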
Dec  3 18:52:39 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:52:39 compute-0 rsyslogd[188590]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  3 18:52:39 compute-0 rsyslogd[188590]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  3 18:52:39 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1607: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:52:39 compute-0 podman[434950]: 2025-12-03 18:52:39.959007119 +0000 UTC m=+0.108306905 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  3 18:52:39 compute-0 podman[434949]: 2025-12-03 18:52:39.964029472 +0000 UTC m=+0.115900071 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, config_id=edpm, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, io.buildah.version=1.29.0, managed_by=edpm_ansible, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, version=9.4, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, io.openshift.expose-services=)
Dec  3 18:52:39 compute-0 podman[434951]: 2025-12-03 18:52:39.98157956 +0000 UTC m=+0.124089981 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec  3 18:52:40 compute-0 nova_compute[348325]: 2025-12-03 18:52:40.471 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:52:41 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1608: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:52:43 compute-0 nova_compute[348325]: 2025-12-03 18:52:43.165 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:52:43 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1609: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:52:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:52:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:52:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:52:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:52:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:52:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:52:44 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:52:45 compute-0 nova_compute[348325]: 2025-12-03 18:52:45.474 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:52:45 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1610: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:52:47 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1611: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:52:48 compute-0 nova_compute[348325]: 2025-12-03 18:52:48.168 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:52:49 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:52:49 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1612: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:52:50 compute-0 nova_compute[348325]: 2025-12-03 18:52:50.476 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:52:51 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1613: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:52:51 compute-0 podman[435008]: 2025-12-03 18:52:51.945059771 +0000 UTC m=+0.106291286 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 18:52:53 compute-0 nova_compute[348325]: 2025-12-03 18:52:53.171 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:52:53 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1614: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:52:53 compute-0 podman[435032]: 2025-12-03 18:52:53.980388843 +0000 UTC m=+0.136275778 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Dec  3 18:52:53 compute-0 podman[435031]: 2025-12-03 18:52:53.98843506 +0000 UTC m=+0.141885395 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 18:52:54 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:52:55 compute-0 nova_compute[348325]: 2025-12-03 18:52:55.481 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:52:55 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1615: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:52:57 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1616: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:52:58 compute-0 nova_compute[348325]: 2025-12-03 18:52:58.174 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:52:59 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:52:59 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1617: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:52:59 compute-0 podman[158200]: time="2025-12-03T18:52:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:52:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:52:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 18:52:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:52:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8653 "" "Go-http-client/1.1"
Dec  3 18:53:00 compute-0 nova_compute[348325]: 2025-12-03 18:53:00.484 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:53:01 compute-0 openstack_network_exporter[365222]: ERROR   18:53:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:53:01 compute-0 openstack_network_exporter[365222]: ERROR   18:53:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:53:01 compute-0 openstack_network_exporter[365222]: ERROR   18:53:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:53:01 compute-0 openstack_network_exporter[365222]: ERROR   18:53:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:53:01 compute-0 openstack_network_exporter[365222]: ERROR   18:53:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:53:01 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1618: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:53:03 compute-0 nova_compute[348325]: 2025-12-03 18:53:03.177 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:53:03 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1619: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:53:04 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:53:05 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec  3 18:53:05 compute-0 nova_compute[348325]: 2025-12-03 18:53:05.486 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:53:05 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1620: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:53:07 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1621: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:53:08 compute-0 nova_compute[348325]: 2025-12-03 18:53:08.179 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:53:08 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec  3 18:53:08 compute-0 podman[435077]: 2025-12-03 18:53:08.562832302 +0000 UTC m=+0.089226819 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec  3 18:53:08 compute-0 podman[435084]: 2025-12-03 18:53:08.601335443 +0000 UTC m=+0.096618550 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., architecture=x86_64, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, config_id=edpm, io.openshift.expose-services=, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, name=ubi9-minimal, vcs-type=git, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, vendor=Red Hat, Inc.)
Dec  3 18:53:08 compute-0 podman[435078]: 2025-12-03 18:53:08.606424246 +0000 UTC m=+0.117080738 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 18:53:09 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:53:09 compute-0 nova_compute[348325]: 2025-12-03 18:53:09.522 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:53:09 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1622: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:53:10 compute-0 nova_compute[348325]: 2025-12-03 18:53:10.488 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:53:10 compute-0 podman[435138]: 2025-12-03 18:53:10.935723767 +0000 UTC m=+0.097994384 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., config_id=edpm, managed_by=edpm_ansible, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., name=ubi9, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=base rhel9, release-0.7.12=)
Dec  3 18:53:10 compute-0 podman[435139]: 2025-12-03 18:53:10.941566409 +0000 UTC m=+0.091949396 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=edpm)
Dec  3 18:53:10 compute-0 podman[435145]: 2025-12-03 18:53:10.954962777 +0000 UTC m=+0.097848780 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:53:11 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1623: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:53:13 compute-0 nova_compute[348325]: 2025-12-03 18:53:13.182 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.251 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is larger than the number of worker threads available to execute them, so the polling cycle can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.251 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.252 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.252 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7eff8d7fffe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.253 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.253 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff9026f920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.253 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.253 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.253 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffa10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.253 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8daba2d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.253 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.253 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff90799b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.254 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.254 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8f46ebd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.254 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.254 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.255 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.255 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.255 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.255 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.255 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.255 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.256 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.256 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.256 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.256 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7fff50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.256 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.256 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7fffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.256 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8ef7c7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
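The registration burst above shows every stevedore Extension from the [pollsters] source being bound to one shared ThreadPoolExecutor together with empty cache, history, and discovery-cache dicts. A minimal sketch of that wiring, with illustrative names rather than ceilometer's actual classes:

    # Minimal sketch of the registration pattern logged above; the class
    # and method names are illustrative, not ceilometer's real code.
    from concurrent.futures import ThreadPoolExecutor

    class PollsterRegistry:
        def __init__(self):
            self.executor = ThreadPoolExecutor()  # shared by all pollsters
            self.cache = {}                       # per-cycle sample cache
            self.history = {}                     # pollster history
            self.discovery_cache = {}             # discovered resources

        def register(self, extension, source):
            # Mirrors "Registering pollster [...] from source [pollsters]
            # to be executed via executor [...]".
            self.executor.submit(self._run, extension, source)

        def _run(self, extension, source):
            pass  # actual polling happens later, once per cycle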
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.260 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '1ca1fbdb-089c-4544-821e-0542089b8424', 'name': 'test_0', 'flavor': {'id': '6cb250a4-d28c-4125-888b-653b31e29275', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e68cd467-b4e6-45e0-8e55-984fda402294'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'd2770200bdb2436c90142fa2e5ddcd47', 'user_id': '56338958b09445f5af9aa9e4601a1a8a', 'hostId': '233c08f520fd9700ef62a871bc5d558f2659759d89ea6c0726998878', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.265 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a6019a9c-c065-49d8-bef3-219bd2c79d8c', 'name': 'vn-66btob3-zeembfmsdvyd-qc6d57h54o3l-vnf-m24sgrg35czm', 'flavor': {'id': '6cb250a4-d28c-4125-888b-653b31e29275', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e68cd467-b4e6-45e0-8e55-984fda402294'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'd2770200bdb2436c90142fa2e5ddcd47', 'user_id': '56338958b09445f5af9aa9e4601a1a8a', 'hostId': '233c08f520fd9700ef62a871bc5d558f2659759d89ea6c0726998878', 'status': 'active', 'metadata': {'metering.server_group': 'b322e118-e1cc-40be-8d8c-553648144092'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
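Each "instance data:" record is a flat dict describing one libvirt guest; the two records above differ mainly in that the second carries a metering.server_group entry in its metadata. A hypothetical extraction of the fields a pollster uses downstream, with values copied from the first record:

    # Hypothetical field extraction; the dict is abridged from the first
    # discovery record above, not produced by ceilometer code.
    instance = {
        'id': '1ca1fbdb-089c-4544-821e-0542089b8424',
        'name': 'test_0',
        'flavor': {'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1},
        'OS-EXT-STS:vm_state': 'running',
        'metadata': {},
    }
    resource_id = instance['id']
    server_group = instance['metadata'].get('metering.server_group')  # None here
    print(resource_id, instance['flavor']['name'], server_group)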
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.266 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.266 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.266 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.266 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636

Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.268 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-03T18:53:13.266684) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.273 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.278 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.279 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
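The network.incoming.packets.error run above traces the fixed per-pollster sequence: a coordination check (group name [None], so no hashring wait), a heartbeat update, then one _stats_to_sample line per discovered instance. A rough, self-contained sketch of that cycle; every name below is an assumption for illustration, not ceilometer's API:

    # Rough shape of one polling cycle as traced in the log above.
    def needs_coordination(pollster):
        return False            # log shows coordination group name [None]

    def heartbeat(name):
        print("heartbeat", name)  # stands in for "Pollster heartbeat update: ..."

    def poll_once(pollster, resources):
        if needs_coordination(pollster):
            return []           # would defer to the hashring here
        heartbeat(pollster.name)
        samples = []
        for resource in resources:   # one DEBUG sample line per instance
            samples.extend(pollster.get_samples(resource))
        return samples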
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.279 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7eff8d8a80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.279 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.279 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.279 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.279 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.279 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.outgoing.bytes volume: 2384 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.280 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.280 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.280 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7eff8d8a8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.280 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.280 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff9026f920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.280 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff9026f920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.280 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.280 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.281 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.281 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-03T18:53:13.279689) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.281 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-03T18:53:13.280757) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.281 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.281 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7eff8d8a8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.281 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.281 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.282 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.282 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.282 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.282 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.282 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.282 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7eff8d8a81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.282 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
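Unlike the cumulative meters, network.outgoing.bytes.rate is skipped because discovery turned up no resources not already handled this cycle. A sketch of that guard as the log presents it (illustrative logic only; why the rate pollsters in particular hit this path is not visible here):

    # Sketch of the skip guard seen above for the *.rate pollsters.
    def maybe_poll(name, discovered):
        if not discovered:
            print(f"Skip pollster {name}, no new resources found this cycle")
            return []
        return discovered

    maybe_poll('network.outgoing.bytes.rate', [])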
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.283 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7eff8d7ff9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.283 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.283 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ffa10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.283 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ffa10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.283 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.283 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.incoming.bytes volume: 2346 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.283 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/network.incoming.bytes volume: 1654 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.283 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.283 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7eff8d7fe840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.284 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.284 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8daba2d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.284 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8daba2d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.284 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.285 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-03T18:53:13.282108) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.285 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-03T18:53:13.283283) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.285 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-03T18:53:13.284291) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.302 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.302 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.303 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.325 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.325 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.326 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.326 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
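disk.device.capacity emits three samples per instance: two of exactly 1073741824 bytes, which is 2**30, i.e. 1 GiB, consistent with the m1.small flavor's disk=1 and ephemeral=1 from the discovery records, plus one much smaller device (plausibly a config drive, an assumption since the log omits device names). A quick check of the conversion:

    # 1073741824 bytes is exactly 1 GiB (2**30); the third, smaller
    # device is an assumption (likely a config drive).
    for volume in (1073741824, 1073741824, 485376):
        print(volume, '=', volume / 2**30, 'GiB')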
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.326 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7eff8d8a82c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.327 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.327 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a82f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.327 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a82f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.327 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-03T18:53:13.327433) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.327 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.328 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.328 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.329 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.329 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7eff8d7ff9b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.329 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.329 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff90799b20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.329 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff90799b20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.329 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.329 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-03T18:53:13.329594) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.354 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/memory.usage volume: 48.91796875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.378 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/memory.usage volume: 49.01171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.379 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
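memory.usage is reported in MB, so against the flavor's 512 MB of RAM both guests sit just under 10% utilization. A back-of-envelope check using the two sample values above:

    # Percentage of flavor RAM in use, from the two samples above
    # (assumes the meter's MB unit).
    for usage_mb in (48.91796875, 49.01171875):
        print(f"{100 * usage_mb / 512:.1f}%")   # ~9.6% each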
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.379 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7eff8d8a8350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.379 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.380 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.380 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.380 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.380 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-03T18:53:13.380183) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.380 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.381 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.381 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.381 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7eff8f682330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.381 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.381 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8f46ebd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.381 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8f46ebd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.381 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.381 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.382 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.382 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.382 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.382 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.382 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.383 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.383 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7eff8d7ff4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.383 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.383 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.383 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.383 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.384 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-03T18:53:13.381820) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.384 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-03T18:53:13.383839) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.442 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.442 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.442 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.504 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.505 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.505 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.506 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.506 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7eff8d930c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.506 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.506 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7eff8d7ff4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.506 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.506 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.506 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.507 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.507 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.latency volume: 1682579508 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.507 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.latency volume: 260360075 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.507 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.latency volume: 147233249 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.508 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.read.latency volume: 1330892351 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.508 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.read.latency volume: 190600353 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.508 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-03T18:53:13.507077) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.509 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.read.latency volume: 156629474 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.509 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
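The disk.device.read.latency values look like cumulative nanosecond counters (an assumption consistent with ceilometer's ns unit for this meter), so the largest sample above decodes to roughly 1.68 seconds of total read wait since boot:

    # Interpreting the largest latency sample above as cumulative ns
    # (an assumption; the log does not state the unit).
    print(1682579508 / 1e9, 'seconds')   # ~1.68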
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.509 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7eff8d7ff530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.510 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.510 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.510 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.510 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.510 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.510 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-03T18:53:13.510532) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.511 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.511 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.511 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.512 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.512 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.512 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.513 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7eff8d7ff590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.513 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.513 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.513 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.513 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.513 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.514 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-03T18:53:13.513502) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.514 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.514 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.514 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.514 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.515 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.515 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.515 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7eff8d7ff5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.515 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.516 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.516 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.516 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.516 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.516 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-03T18:53:13.516490) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.517 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.517 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.517 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.517 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.518 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.518 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.519 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7eff8d8a8620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.519 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.519 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.519 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.519 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.519 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.520 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-03T18:53:13.519537) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.520 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.520 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.520 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7eff8d7ff650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.520 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.520 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.521 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.521 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.521 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-03T18:53:13.521138) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.521 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.latency volume: 6303799002 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.521 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.latency volume: 23959545 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.521 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.522 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.write.latency volume: 6164702929 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.522 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.write.latency volume: 24431067 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.522 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.523 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.523 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7eff8d7ff6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.523 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.523 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.523 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.523 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.524 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.524 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.524 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.525 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.525 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.526 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.526 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
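
The ceilometer entries above repeat one cycle per meter: discovery of local instances, a coordination check (no hashring is configured, so the agent polls everything itself), a heartbeat update, one DEBUG sample per instance/device, and a closing INFO line. A minimal, self-contained sketch of that flow; every name in it is a hypothetical stand-in, not the real ceilometer API:

    # Sketch of the per-pollster cycle visible in the log above; all names
    # here are hypothetical stand-ins, not ceilometer's actual classes.
    import datetime

    def discover_instances():
        # Stand-in for the "local_instances" discovery step.
        return ["1ca1fbdb-089c-4544-821e-0542089b8424",
                "a6019a9c-c065-49d8-bef3-219bd2c79d8c"]

    def run_pollster(name, get_stats, heartbeats):
        instances = discover_instances()
        # No coordination group is configured, so no hashring lookup happens.
        heartbeats[name] = datetime.datetime.utcnow()   # "Updated heartbeat ..."
        for uuid in instances:
            for volume in get_stats(uuid):              # one DEBUG line per value
                print(f"{uuid}/{name} volume: {volume}")
        print(f"Finished polling pollster {name}")

    run_pollster("disk.device.write.requests", lambda uuid: [234, 1, 0], {})
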
Dec  3 18:53:13 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1624: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.527 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-03T18:53:13.523876) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.528 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7eff8d7ffa40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.528 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.528 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ffef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.528 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ffef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.528 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.528 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.528 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-03T18:53:13.528525) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.528 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.529 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
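
network.incoming.bytes.delta reports 0 for both instances because a delta meter is the difference between two consecutive cumulative readings. A small illustration of that bookkeeping (not ceilometer's code):

    # Deriving a ".delta" value from a cumulative counter: remember the
    # previous reading per key and report the difference. Illustration only.
    _previous = {}

    def bytes_delta(key, cumulative):
        last = _previous.get(key, cumulative)   # first poll yields 0
        _previous[key] = cumulative
        return cumulative - last

    print(bytes_delta(("instance-1", "tap0"), 1000))  # 0 on the first poll
    print(bytes_delta(("instance-1", "tap0"), 1000))  # still 0: no new traffic
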
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.530 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7eff8d7ff710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.530 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.530 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.530 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.530 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.530 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-03T18:53:13.530570) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.531 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.531 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7eff8d7fff20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.531 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.531 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7fff50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.531 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7fff50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.531 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.532 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.incoming.packets volume: 26 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.532 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-03T18:53:13.531867) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.532 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/network.incoming.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.532 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.532 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7eff8d7ff770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.532 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.533 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff7a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.533 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff7a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.533 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.533 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.533 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-03T18:53:13.533164) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.533 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7eff8d7fff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.534 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.534 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7fffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.534 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7fffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.534 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.534 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.535 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.535 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.535 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7eff8d7fdac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.535 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.536 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8ef7c7d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.536 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-03T18:53:13.534391) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.536 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8ef7c7d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.536 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.536 14 DEBUG ceilometer.compute.pollsters [-] 1ca1fbdb-089c-4544-821e-0542089b8424/cpu volume: 47460000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.537 14 DEBUG ceilometer.compute.pollsters [-] a6019a9c-c065-49d8-bef3-219bd2c79d8c/cpu volume: 43460000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.537 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-03T18:53:13.536579) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.537 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
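
The cpu meter above is cumulative guest CPU time in nanoseconds (47460000000 ns is roughly 47.5 s), so utilization can only be derived from two polls. A worked example with an assumed second reading, a 300 s polling interval, and one vCPU:

    # Two consecutive cumulative readings; the second value is assumed.
    ns1, ns2 = 47_460_000_000, 47_760_000_000
    interval_s, vcpus = 300, 1          # assumed polling interval and flavor
    util_pct = (ns2 - ns1) / (interval_s * 1e9 * vcpus) * 100
    print(f"{util_pct:.2f}% CPU")       # 0.10%
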
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.538 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.538 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.538 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.538 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.538 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.538 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.538 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.538 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.538 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.538 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.538 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.538 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.538 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.538 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.539 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.539 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.539 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.539 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.539 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.539 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.539 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.539 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.539 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.539 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.539 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:53:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:53:13.539 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
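
Each "Polling pollster X" INFO line above is eventually matched by a "Finished polling pollster X" line, which makes a quick completeness check easy to script. A sketch over a saved extract of this journal (the file name is an assumption):

    # Pair up "Polling pollster" and "Finished polling pollster" lines.
    import re

    started, finished = set(), set()
    with open("compute-0-messages.log") as fh:   # assumed journal extract
        for line in fh:
            if m := re.search(r"\] Polling pollster (\S+)", line):
                started.add(m.group(1))
            elif m := re.search(r"\] Finished polling pollster (\S+)", line):
                finished.add(m.group(1))
    print("unfinished:", started - finished)
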
Dec  3 18:53:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:53:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:53:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_18:53:13
Dec  3 18:53:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 18:53:13 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 18:53:13 compute-0 ceph-mgr[193091]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', 'volumes', 'backups', 'default.rgw.control', 'default.rgw.log', '.mgr', '.rgw.root', 'cephfs.cephfs.meta', 'images', 'default.rgw.meta']
Dec  3 18:53:13 compute-0 ceph-mgr[193091]: [balancer INFO root] prepared 0/10 changes
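
The balancer pass above ran in upmap mode with a 5% misplaced-PG ceiling and prepared 0 of the 10 changes it was allowed, i.e. the cluster was already balanced. The same state can be queried afterwards with the standard "ceph balancer status" mgr command; the JSON field names below are assumed from a Reef-era mgr:

    # Query the balancer that produced the log lines above.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "balancer", "status", "--format", "json"],
        capture_output=True, text=True, check=True).stdout
    status = json.loads(out)
    print(status.get("mode"), status.get("active"), status.get("optimize_result"))
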
Dec  3 18:53:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:53:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:53:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:53:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:53:14 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:53:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 18:53:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 18:53:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:53:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:53:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:53:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:53:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:53:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:53:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:53:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:53:14 compute-0 nova_compute[348325]: 2025-12-03 18:53:14.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:53:14 compute-0 nova_compute[348325]: 2025-12-03 18:53:14.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:53:15 compute-0 nova_compute[348325]: 2025-12-03 18:53:15.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:53:15 compute-0 nova_compute[348325]: 2025-12-03 18:53:15.489 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:53:15 compute-0 nova_compute[348325]: 2025-12-03 18:53:15.490 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:53:15 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1625: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:53:17 compute-0 nova_compute[348325]: 2025-12-03 18:53:17.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:53:17 compute-0 nova_compute[348325]: 2025-12-03 18:53:17.488 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  3 18:53:17 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1626: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:53:17 compute-0 nova_compute[348325]: 2025-12-03 18:53:17.531 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  3 18:53:18 compute-0 nova_compute[348325]: 2025-12-03 18:53:18.185 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:53:18 compute-0 nova_compute[348325]: 2025-12-03 18:53:18.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:53:19 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:53:19 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1627: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
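
The recurring pgmap DEBUG line has a stable shape, so its counters can be pulled out with one regular expression, as in this sketch:

    # Parse a pgmap line of the exact form logged above.
    import re

    line = ("pgmap v1627: 321 pgs: 321 active+clean; "
            "139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail")
    m = re.search(r"pgmap v(\d+): (\d+) pgs: (\d+) (\S+); (.+?) data, (.+?) used", line)
    version, pgs_total, pgs_clean = int(m.group(1)), int(m.group(2)), int(m.group(3))
    print(version, pgs_total, pgs_clean, m.group(4), m.group(5), m.group(6))
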
Dec  3 18:53:19 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:53:19 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:53:19 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 18:53:19 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:53:19 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 18:53:19 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:53:19 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 17f0fbb6-79da-413a-b726-6b7d9f37577c does not exist
Dec  3 18:53:19 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev d0eefc90-3c99-4586-bde3-165e88c965d3 does not exist
Dec  3 18:53:19 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 6534d796-9297-4f7b-b2b3-cd23924bcaee does not exist
Dec  3 18:53:19 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 18:53:19 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 18:53:19 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 18:53:19 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:53:19 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:53:19 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
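
The audited mon_command payloads above are plain JSON with a "prefix" key, and the same calls can be issued from Python through the rados bindings. A sketch that assumes python3-rados is installed and that /etc/ceph/ceph.conf plus an admin keyring are readable:

    # Reissue one of the audited mon commands via librados.
    import json
    import rados

    with rados.Rados(conffile="/etc/ceph/ceph.conf") as cluster:
        cmd = json.dumps({"prefix": "osd tree",
                          "states": ["destroyed"], "format": "json"})
        ret, outbuf, errs = cluster.mon_command(cmd, b"")
        print(ret, errs, json.loads(outbuf or b"{}").get("nodes", [])[:1])
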
Dec  3 18:53:20 compute-0 nova_compute[348325]: 2025-12-03 18:53:20.493 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:53:20 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:53:20 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:53:20 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:53:20 compute-0 podman[435461]: 2025-12-03 18:53:20.821196357 +0000 UTC m=+0.061871681 container create 968cb8c0a52b81e1c7a9db17f70cdf5249883c0ab55ae4ba623982fd37d58a94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True)
Dec  3 18:53:20 compute-0 systemd[1]: Started libpod-conmon-968cb8c0a52b81e1c7a9db17f70cdf5249883c0ab55ae4ba623982fd37d58a94.scope.
Dec  3 18:53:20 compute-0 podman[435461]: 2025-12-03 18:53:20.800321027 +0000 UTC m=+0.040996371 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:53:20 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:53:20 compute-0 podman[435461]: 2025-12-03 18:53:20.947611674 +0000 UTC m=+0.188287018 container init 968cb8c0a52b81e1c7a9db17f70cdf5249883c0ab55ae4ba623982fd37d58a94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_allen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:53:20 compute-0 podman[435461]: 2025-12-03 18:53:20.958346256 +0000 UTC m=+0.199021580 container start 968cb8c0a52b81e1c7a9db17f70cdf5249883c0ab55ae4ba623982fd37d58a94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_allen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec  3 18:53:20 compute-0 podman[435461]: 2025-12-03 18:53:20.962747923 +0000 UTC m=+0.203423247 container attach 968cb8c0a52b81e1c7a9db17f70cdf5249883c0ab55ae4ba623982fd37d58a94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  3 18:53:20 compute-0 modest_allen[435477]: 167 167
Dec  3 18:53:20 compute-0 systemd[1]: libpod-968cb8c0a52b81e1c7a9db17f70cdf5249883c0ab55ae4ba623982fd37d58a94.scope: Deactivated successfully.
Dec  3 18:53:21 compute-0 podman[435482]: 2025-12-03 18:53:21.048755934 +0000 UTC m=+0.053642672 container died 968cb8c0a52b81e1c7a9db17f70cdf5249883c0ab55ae4ba623982fd37d58a94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:53:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-8618573fbd20c465dab69d76d45e8cf5ed9c50f03a017dc4a61ae0d4d38f5b21-merged.mount: Deactivated successfully.
Dec  3 18:53:21 compute-0 podman[435482]: 2025-12-03 18:53:21.120234069 +0000 UTC m=+0.125120787 container remove 968cb8c0a52b81e1c7a9db17f70cdf5249883c0ab55ae4ba623982fd37d58a94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_allen, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:53:21 compute-0 systemd[1]: libpod-conmon-968cb8c0a52b81e1c7a9db17f70cdf5249883c0ab55ae4ba623982fd37d58a94.scope: Deactivated successfully.
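
The modest_allen container above lives for under a second: create, init, start, attach, exit, remove. That is cephadm's usual pattern of ephemeral helper containers, equivalent to "podman run --rm"; a sketch with an assumed image tag:

    # Run a throwaway command in the ceph image, as cephadm does.
    import subprocess

    res = subprocess.run(
        ["podman", "run", "--rm",
         "quay.io/ceph/ceph:v18",        # assumed tag; the log pins a digest
         "ceph", "--version"],
        capture_output=True, text=True)
    print(res.returncode, res.stdout.strip())
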
Dec  3 18:53:21 compute-0 podman[435503]: 2025-12-03 18:53:21.348675086 +0000 UTC m=+0.068288428 container create 9813c5aa93a6c6ca689387cf34f83936bead754fb18040980d3ed79b2aec9483 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Dec  3 18:53:21 compute-0 systemd[1]: Started libpod-conmon-9813c5aa93a6c6ca689387cf34f83936bead754fb18040980d3ed79b2aec9483.scope.
Dec  3 18:53:21 compute-0 podman[435503]: 2025-12-03 18:53:21.323561633 +0000 UTC m=+0.043175015 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:53:21 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:53:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/371b6d9e4a51f7be103354bf4e0106c5d8df1daa25980b5ab029af928ca1367d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:53:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/371b6d9e4a51f7be103354bf4e0106c5d8df1daa25980b5ab029af928ca1367d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:53:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/371b6d9e4a51f7be103354bf4e0106c5d8df1daa25980b5ab029af928ca1367d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:53:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/371b6d9e4a51f7be103354bf4e0106c5d8df1daa25980b5ab029af928ca1367d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:53:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/371b6d9e4a51f7be103354bf4e0106c5d8df1daa25980b5ab029af928ca1367d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 18:53:21 compute-0 podman[435503]: 2025-12-03 18:53:21.44835717 +0000 UTC m=+0.167970532 container init 9813c5aa93a6c6ca689387cf34f83936bead754fb18040980d3ed79b2aec9483 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_blackwell, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:53:21 compute-0 podman[435503]: 2025-12-03 18:53:21.462950146 +0000 UTC m=+0.182563488 container start 9813c5aa93a6c6ca689387cf34f83936bead754fb18040980d3ed79b2aec9483 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:53:21 compute-0 podman[435503]: 2025-12-03 18:53:21.496000343 +0000 UTC m=+0.215613715 container attach 9813c5aa93a6c6ca689387cf34f83936bead754fb18040980d3ed79b2aec9483 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:53:21 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1628: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:53:22 compute-0 clever_blackwell[435519]: --> passed data devices: 0 physical, 3 LVM
Dec  3 18:53:22 compute-0 clever_blackwell[435519]: --> relative data size: 1.0
Dec  3 18:53:22 compute-0 clever_blackwell[435519]: --> All data devices are unavailable
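
The clever_blackwell container is cephadm's ceph-volume batch planner: it saw 0 physical and 3 LVM-backed data devices, all already consumed, so it reported "All data devices are unavailable" and prepared no new OSDs. The per-device reasoning can be inspected with ceph-volume's inventory subcommand; a sketch run through cephadm shell (an assumption about this deployment):

    # List devices and why ceph-volume rejects them.
    import json
    import subprocess

    out = subprocess.run(
        ["cephadm", "shell", "--", "ceph-volume", "inventory", "--format", "json"],
        capture_output=True, text=True, check=True).stdout
    for dev in json.loads(out):
        print(dev["path"], dev["available"], dev.get("rejected_reasons"))
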
Dec  3 18:53:22 compute-0 systemd[1]: libpod-9813c5aa93a6c6ca689387cf34f83936bead754fb18040980d3ed79b2aec9483.scope: Deactivated successfully.
Dec  3 18:53:22 compute-0 podman[435503]: 2025-12-03 18:53:22.629955278 +0000 UTC m=+1.349568620 container died 9813c5aa93a6c6ca689387cf34f83936bead754fb18040980d3ed79b2aec9483 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_blackwell, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:53:22 compute-0 systemd[1]: libpod-9813c5aa93a6c6ca689387cf34f83936bead754fb18040980d3ed79b2aec9483.scope: Consumed 1.070s CPU time.
Dec  3 18:53:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-371b6d9e4a51f7be103354bf4e0106c5d8df1daa25980b5ab029af928ca1367d-merged.mount: Deactivated successfully.
Dec  3 18:53:22 compute-0 podman[435503]: 2025-12-03 18:53:22.701351762 +0000 UTC m=+1.420965094 container remove 9813c5aa93a6c6ca689387cf34f83936bead754fb18040980d3ed79b2aec9483 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_blackwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:53:22 compute-0 systemd[1]: libpod-conmon-9813c5aa93a6c6ca689387cf34f83936bead754fb18040980d3ed79b2aec9483.scope: Deactivated successfully.
Dec  3 18:53:22 compute-0 podman[435548]: 2025-12-03 18:53:22.773023281 +0000 UTC m=+0.113882011 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
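
The health_status event above embeds its config_data as a Python-literal dict (single quotes), so json.loads will not parse it but ast.literal_eval will. A sketch against an abbreviated copy of that blob:

    # Parse the Python-repr config_data from the podman health event.
    import ast

    config_data = ("{'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', "
                   "'ports': ['9882:9882'], "
                   "'healthcheck': {'test': '/openstack/healthcheck podman_exporter'}}")
    cfg = ast.literal_eval(config_data)   # abbreviated copy of the logged blob
    print(cfg["healthcheck"]["test"], cfg["ports"])
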
Dec  3 18:53:23 compute-0 nova_compute[348325]: 2025-12-03 18:53:23.187 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:53:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:53:23.349 286999 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:53:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:53:23.350 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:53:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:53:23.350 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:53:23 compute-0 podman[435716]: 2025-12-03 18:53:23.439136095 +0000 UTC m=+0.051371716 container create 2c52eb324de6481ac4e390fdfbc1aeeea415f02a789f28de666dca7f384d57b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_poitras, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:53:23 compute-0 nova_compute[348325]: 2025-12-03 18:53:23.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:53:23 compute-0 nova_compute[348325]: 2025-12-03 18:53:23.487 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  3 18:53:23 compute-0 systemd[1]: Started libpod-conmon-2c52eb324de6481ac4e390fdfbc1aeeea415f02a789f28de666dca7f384d57b7.scope.
Dec  3 18:53:23 compute-0 podman[435716]: 2025-12-03 18:53:23.419987907 +0000 UTC m=+0.032223558 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:53:23 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1629: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:53:23 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:53:23 compute-0 podman[435716]: 2025-12-03 18:53:23.552674827 +0000 UTC m=+0.164910488 container init 2c52eb324de6481ac4e390fdfbc1aeeea415f02a789f28de666dca7f384d57b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_poitras, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:53:23 compute-0 podman[435716]: 2025-12-03 18:53:23.561147554 +0000 UTC m=+0.173383185 container start 2c52eb324de6481ac4e390fdfbc1aeeea415f02a789f28de666dca7f384d57b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_poitras, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:53:23 compute-0 podman[435716]: 2025-12-03 18:53:23.565028119 +0000 UTC m=+0.177263780 container attach 2c52eb324de6481ac4e390fdfbc1aeeea415f02a789f28de666dca7f384d57b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec  3 18:53:23 compute-0 dreamy_poitras[435731]: 167 167
Dec  3 18:53:23 compute-0 systemd[1]: libpod-2c52eb324de6481ac4e390fdfbc1aeeea415f02a789f28de666dca7f384d57b7.scope: Deactivated successfully.
Dec  3 18:53:23 compute-0 podman[435716]: 2025-12-03 18:53:23.568352109 +0000 UTC m=+0.180587740 container died 2c52eb324de6481ac4e390fdfbc1aeeea415f02a789f28de666dca7f384d57b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_poitras, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  3 18:53:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-51e52dc4129c38bdd884f075f6b846276ef9026b4ba67f1c9e117f7c947d0108-merged.mount: Deactivated successfully.
Dec  3 18:53:23 compute-0 podman[435716]: 2025-12-03 18:53:23.609621578 +0000 UTC m=+0.221857209 container remove 2c52eb324de6481ac4e390fdfbc1aeeea415f02a789f28de666dca7f384d57b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_poitras, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:53:23 compute-0 systemd[1]: libpod-conmon-2c52eb324de6481ac4e390fdfbc1aeeea415f02a789f28de666dca7f384d57b7.scope: Deactivated successfully.
Dec  3 18:53:23 compute-0 podman[435753]: 2025-12-03 18:53:23.806584606 +0000 UTC m=+0.060828146 container create d1f0ed7b2c0a6ef93c9b7fbcde3df2f5722ac5652edcfdae48cbce6e4be2fdfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mendeleev, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec  3 18:53:23 compute-0 systemd[1]: Started libpod-conmon-d1f0ed7b2c0a6ef93c9b7fbcde3df2f5722ac5652edcfdae48cbce6e4be2fdfa.scope.
Dec  3 18:53:23 compute-0 podman[435753]: 2025-12-03 18:53:23.784867596 +0000 UTC m=+0.039111166 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:53:23 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:53:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e896ee79a65bf7eee2b0385c4c3db079ea71fc6c5b567ef36f8a5393544722a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:53:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e896ee79a65bf7eee2b0385c4c3db079ea71fc6c5b567ef36f8a5393544722a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:53:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e896ee79a65bf7eee2b0385c4c3db079ea71fc6c5b567ef36f8a5393544722a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:53:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e896ee79a65bf7eee2b0385c4c3db079ea71fc6c5b567ef36f8a5393544722a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:53:23 compute-0 podman[435753]: 2025-12-03 18:53:23.921205864 +0000 UTC m=+0.175449424 container init d1f0ed7b2c0a6ef93c9b7fbcde3df2f5722ac5652edcfdae48cbce6e4be2fdfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:53:23 compute-0 podman[435753]: 2025-12-03 18:53:23.938192009 +0000 UTC m=+0.192435549 container start d1f0ed7b2c0a6ef93c9b7fbcde3df2f5722ac5652edcfdae48cbce6e4be2fdfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mendeleev, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:53:23 compute-0 podman[435753]: 2025-12-03 18:53:23.942610188 +0000 UTC m=+0.196853728 container attach d1f0ed7b2c0a6ef93c9b7fbcde3df2f5722ac5652edcfdae48cbce6e4be2fdfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mendeleev, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:53:24 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:53:24 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Dec  3 18:53:24 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:53:24.265060) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 18:53:24 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Dec  3 18:53:24 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764788004265126, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 1202, "num_deletes": 257, "total_data_size": 1757956, "memory_usage": 1779728, "flush_reason": "Manual Compaction"}
Dec  3 18:53:24 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Dec  3 18:53:24 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764788004278712, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 1740652, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 32219, "largest_seqno": 33420, "table_properties": {"data_size": 1734802, "index_size": 3181, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12167, "raw_average_key_size": 19, "raw_value_size": 1723090, "raw_average_value_size": 2774, "num_data_blocks": 142, "num_entries": 621, "num_filter_entries": 621, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764787889, "oldest_key_time": 1764787889, "file_creation_time": 1764788004, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a1ac3b74-8599-4a51-8b4c-6fd35a134427", "db_session_id": "TYOLZSJOOVNJYKF8Y1CE", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Dec  3 18:53:24 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 13706 microseconds, and 7396 cpu microseconds.
Dec  3 18:53:24 compute-0 ceph-mon[192802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 18:53:24 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:53:24.278772) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 1740652 bytes OK
Dec  3 18:53:24 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:53:24.278792) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Dec  3 18:53:24 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:53:24.281283) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Dec  3 18:53:24 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:53:24.281298) EVENT_LOG_v1 {"time_micros": 1764788004281293, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 18:53:24 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:53:24.281316) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 18:53:24 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 1752453, prev total WAL file size 1752453, number of live WAL files 2.
Dec  3 18:53:24 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 18:53:24 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:53:24.282180) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303035' seq:72057594037927935, type:22 .. '6C6F676D0031323537' seq:0, type:0; will stop at (end)
Dec  3 18:53:24 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 18:53:24 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(1699KB)], [71(8771KB)]
Dec  3 18:53:24 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764788004282234, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 10723134, "oldest_snapshot_seqno": -1}
Dec  3 18:53:24 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 5464 keys, 10618245 bytes, temperature: kUnknown
Dec  3 18:53:24 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764788004351225, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 10618245, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10578524, "index_size": 24955, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13701, "raw_key_size": 137547, "raw_average_key_size": 25, "raw_value_size": 10476590, "raw_average_value_size": 1917, "num_data_blocks": 1032, "num_entries": 5464, "num_filter_entries": 5464, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764784942, "oldest_key_time": 0, "file_creation_time": 1764788004, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a1ac3b74-8599-4a51-8b4c-6fd35a134427", "db_session_id": "TYOLZSJOOVNJYKF8Y1CE", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Dec  3 18:53:24 compute-0 ceph-mon[192802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 18:53:24 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:53:24.351677) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 10618245 bytes
Dec  3 18:53:24 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:53:24.353782) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 154.8 rd, 153.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 8.6 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(12.3) write-amplify(6.1) OK, records in: 5994, records dropped: 530 output_compression: NoCompression
Dec  3 18:53:24 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:53:24.353799) EVENT_LOG_v1 {"time_micros": 1764788004353791, "job": 40, "event": "compaction_finished", "compaction_time_micros": 69287, "compaction_time_cpu_micros": 29052, "output_level": 6, "num_output_files": 1, "total_output_size": 10618245, "num_input_records": 5994, "num_output_records": 5464, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 18:53:24 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 18:53:24 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764788004355197, "job": 40, "event": "table_file_deletion", "file_number": 73}
Dec  3 18:53:24 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 18:53:24 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764788004356965, "job": 40, "event": "table_file_deletion", "file_number": 71}
Dec  3 18:53:24 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:53:24.282020) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:53:24 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:53:24.357276) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:53:24 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:53:24.357280) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:53:24 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:53:24.357282) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:53:24 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:53:24.357283) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:53:24 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:53:24.357284) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:53:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 18:53:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:53:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 18:53:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:53:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001104379822719281 of space, bias 1.0, pg target 0.33131394681578435 quantized to 32 (current 32)
Dec  3 18:53:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:53:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:53:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:53:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:53:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:53:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec  3 18:53:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:53:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 18:53:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:53:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:53:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:53:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 18:53:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:53:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 18:53:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:53:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:53:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:53:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]: {
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:    "0": [
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:        {
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:            "devices": [
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:                "/dev/loop3"
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:            ],
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:            "lv_name": "ceph_lv0",
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:            "lv_size": "21470642176",
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=973fbbc8-5aff-4a53-bee8-42e5a6788dd6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:            "lv_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:            "name": "ceph_lv0",
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:            "tags": {
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:                "ceph.block_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:                "ceph.cluster_name": "ceph",
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:                "ceph.crush_device_class": "",
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:                "ceph.encrypted": "0",
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:                "ceph.osd_fsid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:                "ceph.osd_id": "0",
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:                "ceph.type": "block",
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:                "ceph.vdo": "0"
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:            },
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:            "type": "block",
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:            "vg_name": "ceph_vg0"
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:        }
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:    ],
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:    "1": [
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:        {
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:            "devices": [
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:                "/dev/loop4"
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:            ],
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:            "lv_name": "ceph_lv1",
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:            "lv_size": "21470642176",
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1e2b0083-5293-47cb-a3d1-bc27cedc4ede,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:            "lv_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:            "name": "ceph_lv1",
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:            "tags": {
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:                "ceph.block_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:                "ceph.cluster_name": "ceph",
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:                "ceph.crush_device_class": "",
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:                "ceph.encrypted": "0",
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:                "ceph.osd_fsid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:                "ceph.osd_id": "1",
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:                "ceph.type": "block",
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:                "ceph.vdo": "0"
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:            },
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:            "type": "block",
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:            "vg_name": "ceph_vg1"
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:        }
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:    ],
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:    "2": [
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:        {
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:            "devices": [
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:                "/dev/loop5"
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:            ],
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:            "lv_name": "ceph_lv2",
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:            "lv_size": "21470642176",
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2abec9de-afba-437e-9a17-384a1dd8cd50,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:            "lv_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:            "name": "ceph_lv2",
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:            "tags": {
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:                "ceph.block_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:                "ceph.cluster_name": "ceph",
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:                "ceph.crush_device_class": "",
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:                "ceph.encrypted": "0",
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:                "ceph.osd_fsid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:                "ceph.osd_id": "2",
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:                "ceph.type": "block",
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:                "ceph.vdo": "0"
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:            },
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:            "type": "block",
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:            "vg_name": "ceph_vg2"
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:        }
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]:    ]
Dec  3 18:53:24 compute-0 distracted_mendeleev[435770]: }
Dec  3 18:53:24 compute-0 systemd[1]: libpod-d1f0ed7b2c0a6ef93c9b7fbcde3df2f5722ac5652edcfdae48cbce6e4be2fdfa.scope: Deactivated successfully.
Dec  3 18:53:24 compute-0 podman[435753]: 2025-12-03 18:53:24.768626575 +0000 UTC m=+1.022870165 container died d1f0ed7b2c0a6ef93c9b7fbcde3df2f5722ac5652edcfdae48cbce6e4be2fdfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mendeleev, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:53:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e896ee79a65bf7eee2b0385c4c3db079ea71fc6c5b567ef36f8a5393544722a-merged.mount: Deactivated successfully.
Dec  3 18:53:24 compute-0 podman[435753]: 2025-12-03 18:53:24.840013248 +0000 UTC m=+1.094256788 container remove d1f0ed7b2c0a6ef93c9b7fbcde3df2f5722ac5652edcfdae48cbce6e4be2fdfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mendeleev, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:53:24 compute-0 systemd[1]: libpod-conmon-d1f0ed7b2c0a6ef93c9b7fbcde3df2f5722ac5652edcfdae48cbce6e4be2fdfa.scope: Deactivated successfully.
Dec  3 18:53:24 compute-0 podman[435787]: 2025-12-03 18:53:24.903203561 +0000 UTC m=+0.096611450 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Dec  3 18:53:24 compute-0 podman[435780]: 2025-12-03 18:53:24.970862242 +0000 UTC m=+0.165230095 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  3 18:53:25 compute-0 nova_compute[348325]: 2025-12-03 18:53:25.495 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:53:25 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1630: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:53:25 compute-0 podman[435971]: 2025-12-03 18:53:25.737180672 +0000 UTC m=+0.057282640 container create 984c24f87e41157a86ab6ac329e6cc109a1e62c533bc05be9b4fa1f54e687c95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:53:25 compute-0 systemd[1]: Started libpod-conmon-984c24f87e41157a86ab6ac329e6cc109a1e62c533bc05be9b4fa1f54e687c95.scope.
Dec  3 18:53:25 compute-0 podman[435971]: 2025-12-03 18:53:25.715670067 +0000 UTC m=+0.035772065 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:53:25 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:53:25 compute-0 podman[435971]: 2025-12-03 18:53:25.841803626 +0000 UTC m=+0.161905594 container init 984c24f87e41157a86ab6ac329e6cc109a1e62c533bc05be9b4fa1f54e687c95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_driscoll, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec  3 18:53:25 compute-0 podman[435971]: 2025-12-03 18:53:25.852100787 +0000 UTC m=+0.172202735 container start 984c24f87e41157a86ab6ac329e6cc109a1e62c533bc05be9b4fa1f54e687c95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_driscoll, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:53:25 compute-0 podman[435971]: 2025-12-03 18:53:25.855762407 +0000 UTC m=+0.175864445 container attach 984c24f87e41157a86ab6ac329e6cc109a1e62c533bc05be9b4fa1f54e687c95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_driscoll, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:53:25 compute-0 crazy_driscoll[435986]: 167 167
Dec  3 18:53:25 compute-0 systemd[1]: libpod-984c24f87e41157a86ab6ac329e6cc109a1e62c533bc05be9b4fa1f54e687c95.scope: Deactivated successfully.
Dec  3 18:53:25 compute-0 conmon[435986]: conmon 984c24f87e41157a86ab <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-984c24f87e41157a86ab6ac329e6cc109a1e62c533bc05be9b4fa1f54e687c95.scope/container/memory.events
Dec  3 18:53:25 compute-0 podman[435971]: 2025-12-03 18:53:25.861480307 +0000 UTC m=+0.181582245 container died 984c24f87e41157a86ab6ac329e6cc109a1e62c533bc05be9b4fa1f54e687c95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec  3 18:53:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-493cf838ccb15908de9700c49a3ec3fe3e08caf5fabff86408112b4e0ace413a-merged.mount: Deactivated successfully.
Dec  3 18:53:25 compute-0 podman[435971]: 2025-12-03 18:53:25.926022392 +0000 UTC m=+0.246124370 container remove 984c24f87e41157a86ab6ac329e6cc109a1e62c533bc05be9b4fa1f54e687c95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:53:25 compute-0 systemd[1]: libpod-conmon-984c24f87e41157a86ab6ac329e6cc109a1e62c533bc05be9b4fa1f54e687c95.scope: Deactivated successfully.
Dec  3 18:53:26 compute-0 podman[436010]: 2025-12-03 18:53:26.165663033 +0000 UTC m=+0.068911563 container create a5412e619392ae71c9d3d3549b9bb89006ed374fbcde42092169447a6546354a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:53:26 compute-0 systemd[1]: Started libpod-conmon-a5412e619392ae71c9d3d3549b9bb89006ed374fbcde42092169447a6546354a.scope.
Dec  3 18:53:26 compute-0 podman[436010]: 2025-12-03 18:53:26.142728684 +0000 UTC m=+0.045977304 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:53:26 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:53:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2bdf703d073beac5773d565996fdf34baf2f9524dc3a27fc7dd07f4a79580bc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:53:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2bdf703d073beac5773d565996fdf34baf2f9524dc3a27fc7dd07f4a79580bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:53:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2bdf703d073beac5773d565996fdf34baf2f9524dc3a27fc7dd07f4a79580bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:53:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2bdf703d073beac5773d565996fdf34baf2f9524dc3a27fc7dd07f4a79580bc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:53:26 compute-0 podman[436010]: 2025-12-03 18:53:26.287865457 +0000 UTC m=+0.191114007 container init a5412e619392ae71c9d3d3549b9bb89006ed374fbcde42092169447a6546354a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hermann, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:53:26 compute-0 podman[436010]: 2025-12-03 18:53:26.306162204 +0000 UTC m=+0.209410734 container start a5412e619392ae71c9d3d3549b9bb89006ed374fbcde42092169447a6546354a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  3 18:53:26 compute-0 podman[436010]: 2025-12-03 18:53:26.311573146 +0000 UTC m=+0.214821716 container attach a5412e619392ae71c9d3d3549b9bb89006ed374fbcde42092169447a6546354a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hermann, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  3 18:53:26 compute-0 nova_compute[348325]: 2025-12-03 18:53:26.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:53:26 compute-0 nova_compute[348325]: 2025-12-03 18:53:26.530 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:53:26 compute-0 nova_compute[348325]: 2025-12-03 18:53:26.533 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:53:26 compute-0 nova_compute[348325]: 2025-12-03 18:53:26.534 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:53:26 compute-0 nova_compute[348325]: 2025-12-03 18:53:26.535 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  3 18:53:26 compute-0 nova_compute[348325]: 2025-12-03 18:53:26.536 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 18:53:26 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:53:26 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2523954177' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:53:27 compute-0 nova_compute[348325]: 2025-12-03 18:53:27.011 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 18:53:27 compute-0 nova_compute[348325]: 2025-12-03 18:53:27.123 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:53:27 compute-0 nova_compute[348325]: 2025-12-03 18:53:27.124 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:53:27 compute-0 nova_compute[348325]: 2025-12-03 18:53:27.124 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:53:27 compute-0 nova_compute[348325]: 2025-12-03 18:53:27.129 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:53:27 compute-0 nova_compute[348325]: 2025-12-03 18:53:27.129 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:53:27 compute-0 nova_compute[348325]: 2025-12-03 18:53:27.130 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 18:53:27 compute-0 condescending_hermann[436026]: {
Dec  3 18:53:27 compute-0 condescending_hermann[436026]:    "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
Dec  3 18:53:27 compute-0 condescending_hermann[436026]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:53:27 compute-0 condescending_hermann[436026]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 18:53:27 compute-0 condescending_hermann[436026]:        "osd_id": 1,
Dec  3 18:53:27 compute-0 condescending_hermann[436026]:        "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:53:27 compute-0 condescending_hermann[436026]:        "type": "bluestore"
Dec  3 18:53:27 compute-0 condescending_hermann[436026]:    },
Dec  3 18:53:27 compute-0 condescending_hermann[436026]:    "2abec9de-afba-437e-9a17-384a1dd8cd50": {
Dec  3 18:53:27 compute-0 condescending_hermann[436026]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:53:27 compute-0 condescending_hermann[436026]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 18:53:27 compute-0 condescending_hermann[436026]:        "osd_id": 2,
Dec  3 18:53:27 compute-0 condescending_hermann[436026]:        "osd_uuid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:53:27 compute-0 condescending_hermann[436026]:        "type": "bluestore"
Dec  3 18:53:27 compute-0 condescending_hermann[436026]:    },
Dec  3 18:53:27 compute-0 condescending_hermann[436026]:    "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": {
Dec  3 18:53:27 compute-0 condescending_hermann[436026]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:53:27 compute-0 condescending_hermann[436026]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 18:53:27 compute-0 condescending_hermann[436026]:        "osd_id": 0,
Dec  3 18:53:27 compute-0 condescending_hermann[436026]:        "osd_uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:53:27 compute-0 condescending_hermann[436026]:        "type": "bluestore"
Dec  3 18:53:27 compute-0 condescending_hermann[436026]:    }
Dec  3 18:53:27 compute-0 condescending_hermann[436026]: }
Dec  3 18:53:27 compute-0 systemd[1]: libpod-a5412e619392ae71c9d3d3549b9bb89006ed374fbcde42092169447a6546354a.scope: Deactivated successfully.
Dec  3 18:53:27 compute-0 systemd[1]: libpod-a5412e619392ae71c9d3d3549b9bb89006ed374fbcde42092169447a6546354a.scope: Consumed 1.087s CPU time.
Dec  3 18:53:27 compute-0 podman[436010]: 2025-12-03 18:53:27.426566769 +0000 UTC m=+1.329815309 container died a5412e619392ae71c9d3d3549b9bb89006ed374fbcde42092169447a6546354a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:53:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-e2bdf703d073beac5773d565996fdf34baf2f9524dc3a27fc7dd07f4a79580bc-merged.mount: Deactivated successfully.
Dec  3 18:53:27 compute-0 podman[436010]: 2025-12-03 18:53:27.50934502 +0000 UTC m=+1.412593550 container remove a5412e619392ae71c9d3d3549b9bb89006ed374fbcde42092169447a6546354a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hermann, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 18:53:27 compute-0 nova_compute[348325]: 2025-12-03 18:53:27.520 348329 WARNING nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  3 18:53:27 compute-0 nova_compute[348325]: 2025-12-03 18:53:27.521 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3549MB free_disk=59.92203140258789GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  3 18:53:27 compute-0 nova_compute[348325]: 2025-12-03 18:53:27.522 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:53:27 compute-0 nova_compute[348325]: 2025-12-03 18:53:27.522 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:53:27 compute-0 systemd[1]: libpod-conmon-a5412e619392ae71c9d3d3549b9bb89006ed374fbcde42092169447a6546354a.scope: Deactivated successfully.
Dec  3 18:53:27 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1631: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:53:27 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:53:27 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:53:27 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:53:27 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:53:27 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 02eca05a-0d1c-4257-80a4-a1ee3fe5fe86 does not exist
Dec  3 18:53:27 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 2c2c325f-904b-4da9-be2d-c0a8a066e71a does not exist
Dec  3 18:53:27 compute-0 nova_compute[348325]: 2025-12-03 18:53:27.637 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance 1ca1fbdb-089c-4544-821e-0542089b8424 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  3 18:53:27 compute-0 nova_compute[348325]: 2025-12-03 18:53:27.638 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance a6019a9c-c065-49d8-bef3-219bd2c79d8c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  3 18:53:27 compute-0 nova_compute[348325]: 2025-12-03 18:53:27.638 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  3 18:53:27 compute-0 nova_compute[348325]: 2025-12-03 18:53:27.639 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  3 18:53:27 compute-0 nova_compute[348325]: 2025-12-03 18:53:27.684 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 18:53:28 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:53:28 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4145990991' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:53:28 compute-0 nova_compute[348325]: 2025-12-03 18:53:28.127 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 18:53:28 compute-0 nova_compute[348325]: 2025-12-03 18:53:28.143 348329 DEBUG nova.compute.provider_tree [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  3 18:53:28 compute-0 nova_compute[348325]: 2025-12-03 18:53:28.188 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  3 18:53:28 compute-0 nova_compute[348325]: 2025-12-03 18:53:28.192 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  3 18:53:28 compute-0 nova_compute[348325]: 2025-12-03 18:53:28.192 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:53:28 compute-0 nova_compute[348325]: 2025-12-03 18:53:28.193 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:53:28 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:53:28 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:53:29 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:53:29 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1632: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:53:29 compute-0 podman[158200]: time="2025-12-03T18:53:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:53:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:53:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 18:53:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:53:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8649 "" "Go-http-client/1.1"
Dec  3 18:53:30 compute-0 nova_compute[348325]: 2025-12-03 18:53:30.499 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:53:31 compute-0 openstack_network_exporter[365222]: ERROR   18:53:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:53:31 compute-0 openstack_network_exporter[365222]: ERROR   18:53:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:53:31 compute-0 openstack_network_exporter[365222]: ERROR   18:53:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:53:31 compute-0 openstack_network_exporter[365222]: ERROR   18:53:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:53:31 compute-0 openstack_network_exporter[365222]: ERROR   18:53:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:53:31 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1633: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:53:33 compute-0 nova_compute[348325]: 2025-12-03 18:53:33.193 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:53:33 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1634: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:53:34 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:53:35 compute-0 nova_compute[348325]: 2025-12-03 18:53:35.502 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:53:35 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1635: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:53:36 compute-0 systemd[1]: session-63.scope: Deactivated successfully.
Dec  3 18:53:36 compute-0 systemd[1]: session-63.scope: Consumed 4.163s CPU time.
Dec  3 18:53:36 compute-0 systemd-logind[784]: Session 63 logged out. Waiting for processes to exit.
Dec  3 18:53:36 compute-0 systemd-logind[784]: Removed session 63.
Dec  3 18:53:37 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1636: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:53:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 18:53:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1057483680' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 18:53:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 18:53:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1057483680' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 18:53:38 compute-0 nova_compute[348325]: 2025-12-03 18:53:38.195 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:53:38 compute-0 podman[436169]: 2025-12-03 18:53:38.971289077 +0000 UTC m=+0.121724563 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 18:53:38 compute-0 podman[436168]: 2025-12-03 18:53:38.973763997 +0000 UTC m=+0.124089111 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Dec  3 18:53:38 compute-0 podman[436170]: 2025-12-03 18:53:38.982096581 +0000 UTC m=+0.126701984 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=Red Hat, Inc., architecture=x86_64, container_name=openstack_network_exporter, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., vcs-type=git, version=9.6, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal)
Dec  3 18:53:39 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:53:39 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1637: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:53:40 compute-0 nova_compute[348325]: 2025-12-03 18:53:40.505 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:53:41 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1638: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:53:41 compute-0 podman[436232]: 2025-12-03 18:53:41.933731665 +0000 UTC m=+0.092352466 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  3 18:53:41 compute-0 podman[436233]: 2025-12-03 18:53:41.93597162 +0000 UTC m=+0.091290630 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  3 18:53:41 compute-0 podman[436231]: 2025-12-03 18:53:41.957766751 +0000 UTC m=+0.116679349 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, maintainer=Red Hat, Inc., release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, vendor=Red Hat, Inc., version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1214.1726694543, vcs-type=git, config_id=edpm, com.redhat.component=ubi9-container)
Dec  3 18:53:43 compute-0 nova_compute[348325]: 2025-12-03 18:53:43.197 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:53:43 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1639: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:53:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:53:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:53:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:53:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:53:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:53:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:53:44 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:53:45 compute-0 nova_compute[348325]: 2025-12-03 18:53:45.507 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:53:45 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1640: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:53:47 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1641: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:53:48 compute-0 nova_compute[348325]: 2025-12-03 18:53:48.198 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:53:49 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:53:49 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1642: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:53:50 compute-0 nova_compute[348325]: 2025-12-03 18:53:50.509 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:53:51 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1643: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:53:52 compute-0 podman[436290]: 2025-12-03 18:53:52.946283228 +0000 UTC m=+0.113925982 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 18:53:53 compute-0 nova_compute[348325]: 2025-12-03 18:53:53.201 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:53:53 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1644: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:53:54 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:53:54 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Dec  3 18:53:54 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:53:54.276159) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 18:53:54 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Dec  3 18:53:54 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764788034276222, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 491, "num_deletes": 250, "total_data_size": 480963, "memory_usage": 489208, "flush_reason": "Manual Compaction"}
Dec  3 18:53:54 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Dec  3 18:53:54 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764788034280897, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 358285, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33421, "largest_seqno": 33911, "table_properties": {"data_size": 355663, "index_size": 658, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6947, "raw_average_key_size": 20, "raw_value_size": 350364, "raw_average_value_size": 1033, "num_data_blocks": 29, "num_entries": 339, "num_filter_entries": 339, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764788004, "oldest_key_time": 1764788004, "file_creation_time": 1764788034, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a1ac3b74-8599-4a51-8b4c-6fd35a134427", "db_session_id": "TYOLZSJOOVNJYKF8Y1CE", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Dec  3 18:53:54 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 4778 microseconds, and 1773 cpu microseconds.
Dec  3 18:53:54 compute-0 ceph-mon[192802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 18:53:54 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:53:54.280949) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 358285 bytes OK
Dec  3 18:53:54 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:53:54.280962) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Dec  3 18:53:54 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:53:54.283540) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Dec  3 18:53:54 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:53:54.283552) EVENT_LOG_v1 {"time_micros": 1764788034283549, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 18:53:54 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:53:54.283567) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 18:53:54 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 478092, prev total WAL file size 478092, number of live WAL files 2.
Dec  3 18:53:54 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 18:53:54 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:53:54.284328) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323533' seq:72057594037927935, type:22 .. '6D6772737461740031353034' seq:0, type:0; will stop at (end)
Dec  3 18:53:54 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 18:53:54 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(349KB)], [74(10MB)]
Dec  3 18:53:54 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764788034284384, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 10976530, "oldest_snapshot_seqno": -1}
Dec  3 18:53:54 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 5302 keys, 7795475 bytes, temperature: kUnknown
Dec  3 18:53:54 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764788034346944, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 7795475, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7761312, "index_size": 19781, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13317, "raw_key_size": 134356, "raw_average_key_size": 25, "raw_value_size": 7666644, "raw_average_value_size": 1445, "num_data_blocks": 817, "num_entries": 5302, "num_filter_entries": 5302, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764784942, "oldest_key_time": 0, "file_creation_time": 1764788034, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a1ac3b74-8599-4a51-8b4c-6fd35a134427", "db_session_id": "TYOLZSJOOVNJYKF8Y1CE", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Dec  3 18:53:54 compute-0 ceph-mon[192802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 18:53:54 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:53:54.347165) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 7795475 bytes
Dec  3 18:53:54 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:53:54.349293) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 175.3 rd, 124.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 10.1 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(52.4) write-amplify(21.8) OK, records in: 5803, records dropped: 501 output_compression: NoCompression
Dec  3 18:53:54 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:53:54.349314) EVENT_LOG_v1 {"time_micros": 1764788034349306, "job": 42, "event": "compaction_finished", "compaction_time_micros": 62619, "compaction_time_cpu_micros": 35921, "output_level": 6, "num_output_files": 1, "total_output_size": 7795475, "num_input_records": 5803, "num_output_records": 5302, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 18:53:54 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 18:53:54 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764788034349543, "job": 42, "event": "table_file_deletion", "file_number": 76}
Dec  3 18:53:54 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 18:53:54 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764788034351871, "job": 42, "event": "table_file_deletion", "file_number": 74}
Dec  3 18:53:54 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:53:54.284170) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:53:54 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:53:54.352158) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:53:54 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:53:54.352163) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:53:54 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:53:54.352164) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:53:54 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:53:54.352166) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:53:54 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:53:54.352167) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:53:55 compute-0 nova_compute[348325]: 2025-12-03 18:53:55.512 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:53:55 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1645: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:53:55 compute-0 podman[436315]: 2025-12-03 18:53:55.978748995 +0000 UTC m=+0.124015938 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.license=GPLv2)
Dec  3 18:53:56 compute-0 podman[436314]: 2025-12-03 18:53:56.011243958 +0000 UTC m=+0.173574007 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  3 18:53:57 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1646: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:53:58 compute-0 nova_compute[348325]: 2025-12-03 18:53:58.204 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:53:59 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:53:59 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1647: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:53:59 compute-0 podman[158200]: time="2025-12-03T18:53:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:53:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:53:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 18:53:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:53:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8646 "" "Go-http-client/1.1"
Dec  3 18:54:00 compute-0 nova_compute[348325]: 2025-12-03 18:54:00.514 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:54:01 compute-0 openstack_network_exporter[365222]: ERROR   18:54:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:54:01 compute-0 openstack_network_exporter[365222]: ERROR   18:54:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:54:01 compute-0 openstack_network_exporter[365222]: ERROR   18:54:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:54:01 compute-0 openstack_network_exporter[365222]: ERROR   18:54:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:54:01 compute-0 openstack_network_exporter[365222]: ERROR   18:54:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:54:01 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1648: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:54:03 compute-0 nova_compute[348325]: 2025-12-03 18:54:03.208 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:54:03 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1649: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:54:04 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:54:05 compute-0 nova_compute[348325]: 2025-12-03 18:54:05.517 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:54:05 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1650: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:54:07 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1651: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:54:08 compute-0 nova_compute[348325]: 2025-12-03 18:54:08.211 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:54:09 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:54:09 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1652: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:54:09 compute-0 podman[436359]: 2025-12-03 18:54:09.980868048 +0000 UTC m=+0.116000454 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 18:54:09 compute-0 podman[436360]: 2025-12-03 18:54:09.996560411 +0000 UTC m=+0.125879065 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, architecture=x86_64, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec  3 18:54:09 compute-0 podman[436358]: 2025-12-03 18:54:09.997181785 +0000 UTC m=+0.133063829 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd)
Dec  3 18:54:10 compute-0 nova_compute[348325]: 2025-12-03 18:54:10.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:54:10 compute-0 nova_compute[348325]: 2025-12-03 18:54:10.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:54:10 compute-0 nova_compute[348325]: 2025-12-03 18:54:10.487 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  3 18:54:10 compute-0 nova_compute[348325]: 2025-12-03 18:54:10.520 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:54:10 compute-0 nova_compute[348325]: 2025-12-03 18:54:10.529 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
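
The "Running periodic task" lines above are emitted by oslo.service's PeriodicTasks dispatcher, which nova's ComputeManager builds on. A minimal sketch of that pattern follows; the Manager class, task body, and 60-second spacing are illustrative assumptions, not taken from this log:

    # Minimal sketch of the oslo.service periodic-task pattern; the Manager
    # class, task body, and 60s spacing here are illustrative assumptions.
    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(CONF)

        @periodic_task.periodic_task(spacing=60)
        def _run_pending_deletes(self, context):
            # run_periodic_tasks() logs "Running periodic task <name>" at
            # DEBUG before invoking each task, producing lines like the ones
            # above.
            pass

    mgr = Manager()
    mgr.run_periodic_tasks(context=None)
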
Dec  3 18:54:11 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1653: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:54:12 compute-0 podman[436418]: 2025-12-03 18:54:12.91737009 +0000 UTC m=+0.081786918 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, vcs-type=git, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, io.openshift.expose-services=, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, maintainer=Red Hat, Inc., config_id=edpm, io.buildah.version=1.29.0)
Dec  3 18:54:12 compute-0 podman[436419]: 2025-12-03 18:54:12.945985839 +0000 UTC m=+0.095973945 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 18:54:12 compute-0 podman[436420]: 2025-12-03 18:54:12.958147585 +0000 UTC m=+0.105486666 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  3 18:54:13 compute-0 nova_compute[348325]: 2025-12-03 18:54:13.215 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:54:13 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1654: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:54:13 compute-0 nova_compute[348325]: 2025-12-03 18:54:13.602 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:54:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:54:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:54:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_18:54:13
Dec  3 18:54:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 18:54:13 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 18:54:13 compute-0 ceph-mgr[193091]: [balancer INFO root] pools ['images', '.rgw.root', 'volumes', 'default.rgw.log', 'vms', 'default.rgw.meta', 'backups', 'cephfs.cephfs.data', 'default.rgw.control', '.mgr', 'cephfs.cephfs.meta']
Dec  3 18:54:13 compute-0 ceph-mgr[193091]: [balancer INFO root] prepared 0/10 changes
Dec  3 18:54:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:54:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:54:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:54:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:54:14 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:54:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 18:54:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:54:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 18:54:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:54:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:54:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:54:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:54:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:54:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:54:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:54:15 compute-0 nova_compute[348325]: 2025-12-03 18:54:15.522 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:54:15 compute-0 nova_compute[348325]: 2025-12-03 18:54:15.528 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:54:15 compute-0 nova_compute[348325]: 2025-12-03 18:54:15.529 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:54:15 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1655: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:54:16 compute-0 nova_compute[348325]: 2025-12-03 18:54:16.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:54:16 compute-0 nova_compute[348325]: 2025-12-03 18:54:16.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:54:17 compute-0 nova_compute[348325]: 2025-12-03 18:54:17.512 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:54:17 compute-0 nova_compute[348325]: 2025-12-03 18:54:17.513 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 18:54:17 compute-0 nova_compute[348325]: 2025-12-03 18:54:17.513 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  3 18:54:17 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1656: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:54:18 compute-0 nova_compute[348325]: 2025-12-03 18:54:18.219 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:54:18 compute-0 nova_compute[348325]: 2025-12-03 18:54:18.532 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "refresh_cache-1ca1fbdb-089c-4544-821e-0542089b8424" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 18:54:18 compute-0 nova_compute[348325]: 2025-12-03 18:54:18.532 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquired lock "refresh_cache-1ca1fbdb-089c-4544-821e-0542089b8424" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 18:54:18 compute-0 nova_compute[348325]: 2025-12-03 18:54:18.532 348329 DEBUG nova.network.neutron [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  3 18:54:18 compute-0 nova_compute[348325]: 2025-12-03 18:54:18.533 348329 DEBUG nova.objects.instance [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lazy-loading 'info_cache' on Instance uuid 1ca1fbdb-089c-4544-821e-0542089b8424 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 18:54:19 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:54:19 compute-0 ceph-osd[206694]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 18:54:19 compute-0 ceph-osd[206694]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 3000.2 total, 600.0 interval
Cumulative writes: 7549 writes, 29K keys, 7549 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
Cumulative WAL: 7549 writes, 1713 syncs, 4.41 writes per sync, written: 0.02 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 894 writes, 2717 keys, 894 commit groups, 1.0 writes per commit group, ingest: 2.34 MB, 0.00 MB/s
Interval WAL: 894 writes, 387 syncs, 2.31 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  3 18:54:19 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1657: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:54:20 compute-0 nova_compute[348325]: 2025-12-03 18:54:20.526 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:54:21 compute-0 nova_compute[348325]: 2025-12-03 18:54:21.580 348329 DEBUG nova.network.neutron [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Updating instance_info_cache with network_info: [{"id": "3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b", "address": "fa:16:3e:ea:1b:25", "network": {"id": "85c8d446-ad7f-4d1b-a311-89b0b07e8aad", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.128", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2770200bdb2436c90142fa2e5ddcd47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d8505a1-5c", "ovs_interfaceid": "3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 18:54:21 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1658: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:54:21 compute-0 nova_compute[348325]: 2025-12-03 18:54:21.600 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Releasing lock "refresh_cache-1ca1fbdb-089c-4544-821e-0542089b8424" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 18:54:21 compute-0 nova_compute[348325]: 2025-12-03 18:54:21.601 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  3 18:54:21 compute-0 nova_compute[348325]: 2025-12-03 18:54:21.602 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:54:21 compute-0 nova_compute[348325]: 2025-12-03 18:54:21.602 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:54:22 compute-0 nova_compute[348325]: 2025-12-03 18:54:22.537 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:54:22 compute-0 nova_compute[348325]: 2025-12-03 18:54:22.560 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Triggering sync for uuid 1ca1fbdb-089c-4544-821e-0542089b8424 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec  3 18:54:22 compute-0 nova_compute[348325]: 2025-12-03 18:54:22.561 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Triggering sync for uuid a6019a9c-c065-49d8-bef3-219bd2c79d8c _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec  3 18:54:22 compute-0 nova_compute[348325]: 2025-12-03 18:54:22.562 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "1ca1fbdb-089c-4544-821e-0542089b8424" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:54:22 compute-0 nova_compute[348325]: 2025-12-03 18:54:22.562 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "1ca1fbdb-089c-4544-821e-0542089b8424" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:54:22 compute-0 nova_compute[348325]: 2025-12-03 18:54:22.563 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "a6019a9c-c065-49d8-bef3-219bd2c79d8c" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:54:22 compute-0 nova_compute[348325]: 2025-12-03 18:54:22.564 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "a6019a9c-c065-49d8-bef3-219bd2c79d8c" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:54:22 compute-0 nova_compute[348325]: 2025-12-03 18:54:22.601 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "1ca1fbdb-089c-4544-821e-0542089b8424" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.039s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:54:22 compute-0 nova_compute[348325]: 2025-12-03 18:54:22.602 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "a6019a9c-c065-49d8-bef3-219bd2c79d8c" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.038s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:54:23 compute-0 nova_compute[348325]: 2025-12-03 18:54:23.220 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:54:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:54:23.350 286999 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:54:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:54:23.351 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:54:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:54:23.351 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
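
The three-line acquire/acquired/released sequence above is oslo.concurrency's standard DEBUG instrumentation around a named in-process lock. A sketch of the call pattern that produces it, with an illustrative stand-in body:

    # Sketch of the oslo.concurrency lock instrumentation seen above; the
    # guarded body is an illustrative stand-in, not neutron's actual code.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('_check_child_processes')
    def _check_child_processes():
        # lockutils logs "Acquiring lock ...", "Lock ... acquired ... waited Ns"
        # and 'Lock ... "released" ... held Ns' around this body at DEBUG.
        pass

    _check_child_processes()
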
Dec  3 18:54:23 compute-0 nova_compute[348325]: 2025-12-03 18:54:23.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:54:23 compute-0 nova_compute[348325]: 2025-12-03 18:54:23.488 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 18:54:23 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1659: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:54:23 compute-0 podman[436478]: 2025-12-03 18:54:23.993059379 +0000 UTC m=+0.141748421 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 18:54:24 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:54:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 18:54:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:54:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 18:54:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:54:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001104379822719281 of space, bias 1.0, pg target 0.33131394681578435 quantized to 32 (current 32)
Dec  3 18:54:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:54:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:54:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:54:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:54:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:54:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec  3 18:54:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:54:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 18:54:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:54:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:54:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:54:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 18:54:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:54:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 18:54:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:54:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:54:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:54:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
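
The pg_autoscaler arithmetic above is internally consistent: each raw pg target equals usage_ratio x bias x a cluster-wide PG budget of 300, which plausibly corresponds to mon_target_pg_per_osd (default 100) times the 3 OSDs in this cluster. The 300 factor is inferred from the logged numbers, not stated in the log; a check against two of the lines:

    # Reconstruction of the pg_autoscaler targets logged above. The 300-PG
    # budget is inferred (e.g. mon_target_pg_per_osd=100 x 3 OSDs).
    def pg_target(usage_ratio: float, bias: float, pg_budget: float = 300.0) -> float:
        return usage_ratio * bias * pg_budget

    # Pool 'vms': usage 0.001104379822719281, bias 1.0 -> 0.33131394681578435
    assert abs(pg_target(0.001104379822719281, 1.0) - 0.33131394681578435) < 1e-12
    # Pool 'cephfs.cephfs.meta': usage 5.087256625643029e-07, bias 4.0
    assert abs(pg_target(5.087256625643029e-07, 4.0) - 0.0006104707950771635) < 1e-12
    # The raw target is then quantized to a power of two; pg_num only changes
    # when the target differs from the current value by a large enough factor.
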
Dec  3 18:54:25 compute-0 nova_compute[348325]: 2025-12-03 18:54:25.529 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:54:25 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1660: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:54:26 compute-0 nova_compute[348325]: 2025-12-03 18:54:26.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:54:26 compute-0 nova_compute[348325]: 2025-12-03 18:54:26.532 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:54:26 compute-0 nova_compute[348325]: 2025-12-03 18:54:26.533 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:54:26 compute-0 nova_compute[348325]: 2025-12-03 18:54:26.533 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:54:26 compute-0 nova_compute[348325]: 2025-12-03 18:54:26.534 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 18:54:26 compute-0 nova_compute[348325]: 2025-12-03 18:54:26.535 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:54:26 compute-0 podman[436522]: 2025-12-03 18:54:26.974302476 +0000 UTC m=+0.140792709 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec  3 18:54:26 compute-0 podman[436523]: 2025-12-03 18:54:26.974744986 +0000 UTC m=+0.129960024 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Dec  3 18:54:27 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:54:27 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3273788717' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:54:27 compute-0 nova_compute[348325]: 2025-12-03 18:54:27.048 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
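
The half-second `ceph df` round trips above are nova's resource tracker measuring RBD pool capacity. A self-contained sketch of issuing and parsing the same command; the 'vms' pool lookup and the max_avail key follow the standard `ceph df --format=json` layout, so treat the exact key names as an assumption:

    # Sketch of the "ceph df --format=json" call logged above and one way
    # its output might be consumed; key names assume the standard layout.
    import json
    import subprocess

    out = subprocess.check_output([
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])
    stats = json.loads(out)
    for pool in stats["pools"]:
        if pool["name"] == "vms":
            # max_avail: bytes still writable into this pool given replication
            print(pool["stats"]["max_avail"])
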
Dec  3 18:54:27 compute-0 nova_compute[348325]: 2025-12-03 18:54:27.153 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 18:54:27 compute-0 nova_compute[348325]: 2025-12-03 18:54:27.153 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 18:54:27 compute-0 nova_compute[348325]: 2025-12-03 18:54:27.154 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 18:54:27 compute-0 nova_compute[348325]: 2025-12-03 18:54:27.159 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 18:54:27 compute-0 nova_compute[348325]: 2025-12-03 18:54:27.160 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 18:54:27 compute-0 nova_compute[348325]: 2025-12-03 18:54:27.160 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 18:54:27 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1661: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:54:27 compute-0 nova_compute[348325]: 2025-12-03 18:54:27.610 348329 WARNING nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 18:54:27 compute-0 nova_compute[348325]: 2025-12-03 18:54:27.612 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3659MB free_disk=59.92203140258789GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 18:54:27 compute-0 nova_compute[348325]: 2025-12-03 18:54:27.612 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:54:27 compute-0 nova_compute[348325]: 2025-12-03 18:54:27.613 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:54:27 compute-0 nova_compute[348325]: 2025-12-03 18:54:27.892 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance 1ca1fbdb-089c-4544-821e-0542089b8424 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 18:54:27 compute-0 nova_compute[348325]: 2025-12-03 18:54:27.893 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance a6019a9c-c065-49d8-bef3-219bd2c79d8c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 18:54:27 compute-0 nova_compute[348325]: 2025-12-03 18:54:27.893 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 18:54:27 compute-0 nova_compute[348325]: 2025-12-03 18:54:27.894 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 18:54:28 compute-0 nova_compute[348325]: 2025-12-03 18:54:28.060 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:54:28 compute-0 ceph-osd[207851]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 18:54:28 compute-0 ceph-osd[207851]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 3000.2 total, 600.0 interval
Cumulative writes: 8440 writes, 32K keys, 8440 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
Cumulative WAL: 8440 writes, 1958 syncs, 4.31 writes per sync, written: 0.02 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 697 writes, 1877 keys, 697 commit groups, 1.0 writes per commit group, ingest: 1.00 MB, 0.00 MB/s
Interval WAL: 697 writes, 315 syncs, 2.21 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  3 18:54:28 compute-0 nova_compute[348325]: 2025-12-03 18:54:28.224 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:54:28 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:54:28 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3901487264' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:54:28 compute-0 nova_compute[348325]: 2025-12-03 18:54:28.577 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:54:28 compute-0 nova_compute[348325]: 2025-12-03 18:54:28.588 348329 DEBUG nova.compute.provider_tree [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  3 18:54:28 compute-0 nova_compute[348325]: 2025-12-03 18:54:28.611 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
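[Annotation] The inventory dict also shows how much placement will actually schedule: capacity per resource class is (total - reserved) * allocation_ratio, which is why 8 physical vcpus can back 32 VCPU allocations here. Worked out from the logged values:

    # Schedulable capacity per resource class, per the placement model.
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2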
Dec  3 18:54:28 compute-0 nova_compute[348325]: 2025-12-03 18:54:28.614 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  3 18:54:28 compute-0 nova_compute[348325]: 2025-12-03 18:54:28.614 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:54:28 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:54:28 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:54:28 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 18:54:28 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:54:28 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 18:54:28 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:54:28 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev b450e37c-74ab-438e-94b3-4fefa31c5bca does not exist
Dec  3 18:54:28 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev d8f13879-4741-4aa6-bb48-835b5f702173 does not exist
Dec  3 18:54:28 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev e82e689d-15e0-4885-94a8-a72419a18293 does not exist
Dec  3 18:54:28 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 18:54:28 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 18:54:28 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 18:54:28 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:54:28 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:54:28 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
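[Annotation] Every mon_command the mgr sends is echoed on the audit channel as in the dispatch lines above; running the same command from the CLI produces an identical audit entry. A sketch, assuming a client with sufficient mon caps (the mgr used its own mgr.compute-0.etccde identity):

    import subprocess

    # Same command the cephadm mgr just issued; the mon logs the dispatch
    # on log_channel(audit) exactly as in the lines above.
    conf = subprocess.run(
        ["ceph", "config", "generate-minimal-conf"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(conf)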
Dec  3 18:54:29 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:54:29 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1662: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:54:29 compute-0 podman[158200]: time="2025-12-03T18:54:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:54:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:54:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
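[Annotation] The podman[158200] entries are the libpod REST API service answering a Go client over the API socket. The same endpoint can be queried directly; a sketch, assuming the default rootful socket path /run/podman/podman.sock:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client over an AF_UNIX socket; the host name is ignored."""
        def __init__(self, sock_path):
            super().__init__("localhost")
            self._sock_path = sock_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._sock_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c["Id"][:12], c.get("Names"), c.get("State"))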
Dec  3 18:54:29 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:54:29 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:54:29 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:54:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:54:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8652 "" "Go-http-client/1.1"
Dec  3 18:54:29 compute-0 podman[436856]: 2025-12-03 18:54:29.794092801 +0000 UTC m=+0.082228858 container create cbb388cbd278c895cbd9de5d3f65727febf512ae44d6611db35adcca2e3a5215 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_spence, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 18:54:29 compute-0 systemd[1]: Started libpod-conmon-cbb388cbd278c895cbd9de5d3f65727febf512ae44d6611db35adcca2e3a5215.scope.
Dec  3 18:54:29 compute-0 podman[436856]: 2025-12-03 18:54:29.762434588 +0000 UTC m=+0.050570655 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:54:29 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:54:29 compute-0 podman[436856]: 2025-12-03 18:54:29.911920748 +0000 UTC m=+0.200056835 container init cbb388cbd278c895cbd9de5d3f65727febf512ae44d6611db35adcca2e3a5215 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_spence, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:54:29 compute-0 podman[436856]: 2025-12-03 18:54:29.92347496 +0000 UTC m=+0.211610997 container start cbb388cbd278c895cbd9de5d3f65727febf512ae44d6611db35adcca2e3a5215 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_spence, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 18:54:29 compute-0 podman[436856]: 2025-12-03 18:54:29.928415591 +0000 UTC m=+0.216551668 container attach cbb388cbd278c895cbd9de5d3f65727febf512ae44d6611db35adcca2e3a5215 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_spence, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:54:29 compute-0 magical_spence[436872]: 167 167
Dec  3 18:54:29 compute-0 systemd[1]: libpod-cbb388cbd278c895cbd9de5d3f65727febf512ae44d6611db35adcca2e3a5215.scope: Deactivated successfully.
Dec  3 18:54:29 compute-0 podman[436856]: 2025-12-03 18:54:29.932909531 +0000 UTC m=+0.221045608 container died cbb388cbd278c895cbd9de5d3f65727febf512ae44d6611db35adcca2e3a5215 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_spence, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:54:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff1d5159114b0915af913beea2b7c5991f12bc47662f367d667017891c6e9815-merged.mount: Deactivated successfully.
Dec  3 18:54:30 compute-0 podman[436856]: 2025-12-03 18:54:30.009272104 +0000 UTC m=+0.297408171 container remove cbb388cbd278c895cbd9de5d3f65727febf512ae44d6611db35adcca2e3a5215 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default)
Dec  3 18:54:30 compute-0 systemd[1]: libpod-conmon-cbb388cbd278c895cbd9de5d3f65727febf512ae44d6611db35adcca2e3a5215.scope: Deactivated successfully.
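[Annotation] The create → init → start → attach → died → remove burst above is cephadm running a one-shot probe container that prints the ceph uid/gid ("167 167") and exits, so the whole lifecycle fits inside a second; nothing is crashing. The probed command itself is not in the log, so the `id` call below only illustrates the pattern:

    import subprocess

    image = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    # --rm reproduces the same journal pattern: the container dies and is
    # removed as soon as its single command finishes.
    out = subprocess.run(["podman", "run", "--rm", image, "id", "-u", "ceph"],
                         capture_output=True, text=True, check=True).stdout
    print(out.strip())  # 167 in the ceph images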
Dec  3 18:54:30 compute-0 podman[436896]: 2025-12-03 18:54:30.270070423 +0000 UTC m=+0.081078721 container create 3b8c7af8c352764fcc2fd7583f0e01b892a9a012382d7114108173bf904dc929 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_khayyam, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  3 18:54:30 compute-0 podman[436896]: 2025-12-03 18:54:30.238609444 +0000 UTC m=+0.049617842 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:54:30 compute-0 systemd[1]: Started libpod-conmon-3b8c7af8c352764fcc2fd7583f0e01b892a9a012382d7114108173bf904dc929.scope.
Dec  3 18:54:30 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:54:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f434595832e2e8b43a2feab2b6a0302171473762411040473c51f6defe68ab4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:54:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f434595832e2e8b43a2feab2b6a0302171473762411040473c51f6defe68ab4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:54:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f434595832e2e8b43a2feab2b6a0302171473762411040473c51f6defe68ab4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:54:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f434595832e2e8b43a2feab2b6a0302171473762411040473c51f6defe68ab4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:54:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f434595832e2e8b43a2feab2b6a0302171473762411040473c51f6defe68ab4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
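[Annotation] The xfs messages are informational: the overlay's backing filesystem was created without the bigtime feature, so inode timestamps saturate at 2038-01-19 (0x7fffffff seconds). Whether a given filesystem has the feature can be checked with xfs_info; the mount point below is an assumption for illustration:

    import subprocess

    # xfs_info reports "bigtime=1" for filesystems with 64-bit timestamps.
    info = subprocess.run(["xfs_info", "/var/lib/containers"],
                          capture_output=True, text=True, check=True).stdout
    print("ok past 2038" if "bigtime=1" in info else "timestamps capped at 2038")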
Dec  3 18:54:30 compute-0 podman[436896]: 2025-12-03 18:54:30.422940295 +0000 UTC m=+0.233948613 container init 3b8c7af8c352764fcc2fd7583f0e01b892a9a012382d7114108173bf904dc929 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec  3 18:54:30 compute-0 podman[436896]: 2025-12-03 18:54:30.449519194 +0000 UTC m=+0.260527482 container start 3b8c7af8c352764fcc2fd7583f0e01b892a9a012382d7114108173bf904dc929 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:54:30 compute-0 podman[436896]: 2025-12-03 18:54:30.453751527 +0000 UTC m=+0.264759855 container attach 3b8c7af8c352764fcc2fd7583f0e01b892a9a012382d7114108173bf904dc929 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_khayyam, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec  3 18:54:30 compute-0 nova_compute[348325]: 2025-12-03 18:54:30.531 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:54:31 compute-0 openstack_network_exporter[365222]: ERROR   18:54:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:54:31 compute-0 openstack_network_exporter[365222]: ERROR   18:54:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:54:31 compute-0 openstack_network_exporter[365222]: ERROR   18:54:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:54:31 compute-0 openstack_network_exporter[365222]: ERROR   18:54:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:54:31 compute-0 openstack_network_exporter[365222]: ERROR   18:54:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
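[Annotation] These exporter errors are expected on a compute-only node: openstack_network_exporter locates each daemon by its appctl control socket, and neither the ovsdb-server socket it wants nor ovn-northd (which only runs on controllers) is present here. A quick probe of the usual socket locations (paths are the packaging defaults, not taken from the log):

    import glob

    # Empty matches explain the "no control socket files found" errors above.
    for pattern in ("/var/run/openvswitch/ovsdb-server.*.ctl",
                    "/var/run/ovn/ovn-northd.*.ctl"):
        print(pattern, "->", glob.glob(pattern) or "no control socket files")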
Dec  3 18:54:31 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1663: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:54:31 compute-0 elastic_khayyam[436911]: --> passed data devices: 0 physical, 3 LVM
Dec  3 18:54:31 compute-0 elastic_khayyam[436911]: --> relative data size: 1.0
Dec  3 18:54:31 compute-0 elastic_khayyam[436911]: --> All data devices are unavailable
Dec  3 18:54:31 compute-0 systemd[1]: libpod-3b8c7af8c352764fcc2fd7583f0e01b892a9a012382d7114108173bf904dc929.scope: Deactivated successfully.
Dec  3 18:54:31 compute-0 systemd[1]: libpod-3b8c7af8c352764fcc2fd7583f0e01b892a9a012382d7114108173bf904dc929.scope: Consumed 1.139s CPU time.
Dec  3 18:54:31 compute-0 podman[436940]: 2025-12-03 18:54:31.749575604 +0000 UTC m=+0.047643374 container died 3b8c7af8c352764fcc2fd7583f0e01b892a9a012382d7114108173bf904dc929 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_khayyam, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:54:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-9f434595832e2e8b43a2feab2b6a0302171473762411040473c51f6defe68ab4-merged.mount: Deactivated successfully.
Dec  3 18:54:31 compute-0 podman[436940]: 2025-12-03 18:54:31.835928163 +0000 UTC m=+0.133995933 container remove 3b8c7af8c352764fcc2fd7583f0e01b892a9a012382d7114108173bf904dc929 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_khayyam, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:54:31 compute-0 systemd[1]: libpod-conmon-3b8c7af8c352764fcc2fd7583f0e01b892a9a012382d7114108173bf904dc929.scope: Deactivated successfully.
Dec  3 18:54:32 compute-0 nova_compute[348325]: 2025-12-03 18:54:32.607 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:54:32 compute-0 podman[437093]: 2025-12-03 18:54:32.68692632 +0000 UTC m=+0.053500437 container create e1e0d9540c95a6f33c9ecfb9ed9edb5a5fc5851369e225f03a298c012ac8e6b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:54:32 compute-0 systemd[1]: Started libpod-conmon-e1e0d9540c95a6f33c9ecfb9ed9edb5a5fc5851369e225f03a298c012ac8e6b1.scope.
Dec  3 18:54:32 compute-0 podman[437093]: 2025-12-03 18:54:32.668035818 +0000 UTC m=+0.034609955 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:54:32 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:54:32 compute-0 podman[437093]: 2025-12-03 18:54:32.806256873 +0000 UTC m=+0.172831070 container init e1e0d9540c95a6f33c9ecfb9ed9edb5a5fc5851369e225f03a298c012ac8e6b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_curie, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:54:32 compute-0 podman[437093]: 2025-12-03 18:54:32.819861446 +0000 UTC m=+0.186435563 container start e1e0d9540c95a6f33c9ecfb9ed9edb5a5fc5851369e225f03a298c012ac8e6b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_curie, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:54:32 compute-0 podman[437093]: 2025-12-03 18:54:32.824784386 +0000 UTC m=+0.191358593 container attach e1e0d9540c95a6f33c9ecfb9ed9edb5a5fc5851369e225f03a298c012ac8e6b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_curie, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  3 18:54:32 compute-0 zen_curie[437109]: 167 167
Dec  3 18:54:32 compute-0 systemd[1]: libpod-e1e0d9540c95a6f33c9ecfb9ed9edb5a5fc5851369e225f03a298c012ac8e6b1.scope: Deactivated successfully.
Dec  3 18:54:32 compute-0 podman[437093]: 2025-12-03 18:54:32.830658989 +0000 UTC m=+0.197233106 container died e1e0d9540c95a6f33c9ecfb9ed9edb5a5fc5851369e225f03a298c012ac8e6b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:54:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-81413a1c0438a7b05639374df47f35d5db91b2fd4e46fee75c2e0ad57b7e5bfd-merged.mount: Deactivated successfully.
Dec  3 18:54:32 compute-0 podman[437093]: 2025-12-03 18:54:32.887904777 +0000 UTC m=+0.254478894 container remove e1e0d9540c95a6f33c9ecfb9ed9edb5a5fc5851369e225f03a298c012ac8e6b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_curie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:54:32 compute-0 systemd[1]: libpod-conmon-e1e0d9540c95a6f33c9ecfb9ed9edb5a5fc5851369e225f03a298c012ac8e6b1.scope: Deactivated successfully.
Dec  3 18:54:33 compute-0 podman[437132]: 2025-12-03 18:54:33.116798076 +0000 UTC m=+0.081110432 container create 5c1cdf88e34a329562bdc12c4d7bfc355c338e6663b74c2b5ee23c963d187ef1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Dec  3 18:54:33 compute-0 podman[437132]: 2025-12-03 18:54:33.090917563 +0000 UTC m=+0.055229939 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:54:33 compute-0 systemd[1]: Started libpod-conmon-5c1cdf88e34a329562bdc12c4d7bfc355c338e6663b74c2b5ee23c963d187ef1.scope.
Dec  3 18:54:33 compute-0 nova_compute[348325]: 2025-12-03 18:54:33.226 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:54:33 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:54:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5aaff04023ec6dba00b5577c96bcd88f2d4698c787c88549b2930a059dce63d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:54:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5aaff04023ec6dba00b5577c96bcd88f2d4698c787c88549b2930a059dce63d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:54:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5aaff04023ec6dba00b5577c96bcd88f2d4698c787c88549b2930a059dce63d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:54:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5aaff04023ec6dba00b5577c96bcd88f2d4698c787c88549b2930a059dce63d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:54:33 compute-0 podman[437132]: 2025-12-03 18:54:33.273129292 +0000 UTC m=+0.237441628 container init 5c1cdf88e34a329562bdc12c4d7bfc355c338e6663b74c2b5ee23c963d187ef1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_tu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:54:33 compute-0 podman[437132]: 2025-12-03 18:54:33.292374472 +0000 UTC m=+0.256686788 container start 5c1cdf88e34a329562bdc12c4d7bfc355c338e6663b74c2b5ee23c963d187ef1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_tu, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  3 18:54:33 compute-0 podman[437132]: 2025-12-03 18:54:33.297100908 +0000 UTC m=+0.261413294 container attach 5c1cdf88e34a329562bdc12c4d7bfc355c338e6663b74c2b5ee23c963d187ef1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:54:33 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1664: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:54:34 compute-0 quirky_tu[437146]: {
Dec  3 18:54:34 compute-0 quirky_tu[437146]:    "0": [
Dec  3 18:54:34 compute-0 quirky_tu[437146]:        {
Dec  3 18:54:34 compute-0 quirky_tu[437146]:            "devices": [
Dec  3 18:54:34 compute-0 quirky_tu[437146]:                "/dev/loop3"
Dec  3 18:54:34 compute-0 quirky_tu[437146]:            ],
Dec  3 18:54:34 compute-0 quirky_tu[437146]:            "lv_name": "ceph_lv0",
Dec  3 18:54:34 compute-0 quirky_tu[437146]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:54:34 compute-0 quirky_tu[437146]:            "lv_size": "21470642176",
Dec  3 18:54:34 compute-0 quirky_tu[437146]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=973fbbc8-5aff-4a53-bee8-42e5a6788dd6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:54:34 compute-0 quirky_tu[437146]:            "lv_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:54:34 compute-0 quirky_tu[437146]:            "name": "ceph_lv0",
Dec  3 18:54:34 compute-0 quirky_tu[437146]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:54:34 compute-0 quirky_tu[437146]:            "tags": {
Dec  3 18:54:34 compute-0 quirky_tu[437146]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:54:34 compute-0 quirky_tu[437146]:                "ceph.block_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:54:34 compute-0 quirky_tu[437146]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:54:34 compute-0 quirky_tu[437146]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:54:34 compute-0 quirky_tu[437146]:                "ceph.cluster_name": "ceph",
Dec  3 18:54:34 compute-0 quirky_tu[437146]:                "ceph.crush_device_class": "",
Dec  3 18:54:34 compute-0 quirky_tu[437146]:                "ceph.encrypted": "0",
Dec  3 18:54:34 compute-0 quirky_tu[437146]:                "ceph.osd_fsid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:54:34 compute-0 quirky_tu[437146]:                "ceph.osd_id": "0",
Dec  3 18:54:34 compute-0 quirky_tu[437146]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:54:34 compute-0 quirky_tu[437146]:                "ceph.type": "block",
Dec  3 18:54:34 compute-0 quirky_tu[437146]:                "ceph.vdo": "0"
Dec  3 18:54:34 compute-0 quirky_tu[437146]:            },
Dec  3 18:54:34 compute-0 quirky_tu[437146]:            "type": "block",
Dec  3 18:54:34 compute-0 quirky_tu[437146]:            "vg_name": "ceph_vg0"
Dec  3 18:54:34 compute-0 quirky_tu[437146]:        }
Dec  3 18:54:34 compute-0 quirky_tu[437146]:    ],
Dec  3 18:54:34 compute-0 quirky_tu[437146]:    "1": [
Dec  3 18:54:34 compute-0 quirky_tu[437146]:        {
Dec  3 18:54:34 compute-0 quirky_tu[437146]:            "devices": [
Dec  3 18:54:34 compute-0 quirky_tu[437146]:                "/dev/loop4"
Dec  3 18:54:34 compute-0 quirky_tu[437146]:            ],
Dec  3 18:54:34 compute-0 quirky_tu[437146]:            "lv_name": "ceph_lv1",
Dec  3 18:54:34 compute-0 quirky_tu[437146]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:54:34 compute-0 quirky_tu[437146]:            "lv_size": "21470642176",
Dec  3 18:54:34 compute-0 quirky_tu[437146]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1e2b0083-5293-47cb-a3d1-bc27cedc4ede,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:54:34 compute-0 quirky_tu[437146]:            "lv_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:54:34 compute-0 quirky_tu[437146]:            "name": "ceph_lv1",
Dec  3 18:54:34 compute-0 quirky_tu[437146]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:54:34 compute-0 quirky_tu[437146]:            "tags": {
Dec  3 18:54:34 compute-0 quirky_tu[437146]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:54:34 compute-0 quirky_tu[437146]:                "ceph.block_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:54:34 compute-0 quirky_tu[437146]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:54:34 compute-0 quirky_tu[437146]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:54:34 compute-0 quirky_tu[437146]:                "ceph.cluster_name": "ceph",
Dec  3 18:54:34 compute-0 quirky_tu[437146]:                "ceph.crush_device_class": "",
Dec  3 18:54:34 compute-0 quirky_tu[437146]:                "ceph.encrypted": "0",
Dec  3 18:54:34 compute-0 quirky_tu[437146]:                "ceph.osd_fsid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:54:34 compute-0 quirky_tu[437146]:                "ceph.osd_id": "1",
Dec  3 18:54:34 compute-0 quirky_tu[437146]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:54:34 compute-0 quirky_tu[437146]:                "ceph.type": "block",
Dec  3 18:54:34 compute-0 quirky_tu[437146]:                "ceph.vdo": "0"
Dec  3 18:54:34 compute-0 quirky_tu[437146]:            },
Dec  3 18:54:34 compute-0 quirky_tu[437146]:            "type": "block",
Dec  3 18:54:34 compute-0 quirky_tu[437146]:            "vg_name": "ceph_vg1"
Dec  3 18:54:34 compute-0 quirky_tu[437146]:        }
Dec  3 18:54:34 compute-0 quirky_tu[437146]:    ],
Dec  3 18:54:34 compute-0 quirky_tu[437146]:    "2": [
Dec  3 18:54:34 compute-0 quirky_tu[437146]:        {
Dec  3 18:54:34 compute-0 quirky_tu[437146]:            "devices": [
Dec  3 18:54:34 compute-0 quirky_tu[437146]:                "/dev/loop5"
Dec  3 18:54:34 compute-0 quirky_tu[437146]:            ],
Dec  3 18:54:34 compute-0 quirky_tu[437146]:            "lv_name": "ceph_lv2",
Dec  3 18:54:34 compute-0 quirky_tu[437146]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:54:34 compute-0 quirky_tu[437146]:            "lv_size": "21470642176",
Dec  3 18:54:34 compute-0 quirky_tu[437146]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2abec9de-afba-437e-9a17-384a1dd8cd50,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:54:34 compute-0 quirky_tu[437146]:            "lv_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:54:34 compute-0 quirky_tu[437146]:            "name": "ceph_lv2",
Dec  3 18:54:34 compute-0 quirky_tu[437146]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:54:34 compute-0 quirky_tu[437146]:            "tags": {
Dec  3 18:54:34 compute-0 quirky_tu[437146]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:54:34 compute-0 quirky_tu[437146]:                "ceph.block_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:54:34 compute-0 quirky_tu[437146]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:54:34 compute-0 quirky_tu[437146]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:54:34 compute-0 quirky_tu[437146]:                "ceph.cluster_name": "ceph",
Dec  3 18:54:34 compute-0 quirky_tu[437146]:                "ceph.crush_device_class": "",
Dec  3 18:54:34 compute-0 quirky_tu[437146]:                "ceph.encrypted": "0",
Dec  3 18:54:34 compute-0 quirky_tu[437146]:                "ceph.osd_fsid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:54:34 compute-0 quirky_tu[437146]:                "ceph.osd_id": "2",
Dec  3 18:54:34 compute-0 quirky_tu[437146]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:54:34 compute-0 quirky_tu[437146]:                "ceph.type": "block",
Dec  3 18:54:34 compute-0 quirky_tu[437146]:                "ceph.vdo": "0"
Dec  3 18:54:34 compute-0 quirky_tu[437146]:            },
Dec  3 18:54:34 compute-0 quirky_tu[437146]:            "type": "block",
Dec  3 18:54:34 compute-0 quirky_tu[437146]:            "vg_name": "ceph_vg2"
Dec  3 18:54:34 compute-0 quirky_tu[437146]:        }
Dec  3 18:54:34 compute-0 quirky_tu[437146]:    ]
Dec  3 18:54:34 compute-0 quirky_tu[437146]: }
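[Annotation] The JSON block above is ceph-volume's LVM listing (by the look of the schema, `ceph-volume lvm list --format json`), keyed by OSD id. Reducing it to an OSD → device map is a short parse; a sketch, assuming the block has been saved to lvm_list.json:

    import json

    with open("lvm_list.json") as f:  # hypothetical capture of the log output
        lvm_list = json.load(f)

    for osd_id, lvs in sorted(lvm_list.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"(backing {', '.join(lv['devices'])}, "
                  f"osd_fsid {lv['tags']['ceph.osd_fsid']})")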
Dec  3 18:54:34 compute-0 systemd[1]: libpod-5c1cdf88e34a329562bdc12c4d7bfc355c338e6663b74c2b5ee23c963d187ef1.scope: Deactivated successfully.
Dec  3 18:54:34 compute-0 podman[437155]: 2025-12-03 18:54:34.178402704 +0000 UTC m=+0.036971904 container died 5c1cdf88e34a329562bdc12c4d7bfc355c338e6663b74c2b5ee23c963d187ef1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_tu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:54:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-e5aaff04023ec6dba00b5577c96bcd88f2d4698c787c88549b2930a059dce63d-merged.mount: Deactivated successfully.
Dec  3 18:54:34 compute-0 podman[437155]: 2025-12-03 18:54:34.239042965 +0000 UTC m=+0.097612104 container remove 5c1cdf88e34a329562bdc12c4d7bfc355c338e6663b74c2b5ee23c963d187ef1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_tu, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec  3 18:54:34 compute-0 systemd[1]: libpod-conmon-5c1cdf88e34a329562bdc12c4d7bfc355c338e6663b74c2b5ee23c963d187ef1.scope: Deactivated successfully.
Dec  3 18:54:34 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:54:34 compute-0 ceph-osd[208881]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 18:54:34 compute-0 ceph-osd[208881]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 3000.2 total, 600.0 interval
Cumulative writes: 7427 writes, 29K keys, 7427 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
Cumulative WAL: 7427 writes, 1646 syncs, 4.51 writes per sync, written: 0.02 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 729 writes, 2111 keys, 729 commit groups, 1.0 writes per commit group, ingest: 1.09 MB, 0.00 MB/s
Interval WAL: 729 writes, 334 syncs, 2.18 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  3 18:54:35 compute-0 podman[437310]: 2025-12-03 18:54:35.209362435 +0000 UTC m=+0.069555598 container create 4afb949609db9df92599e1088d44c732ce5f0992ef5b311bd034602ffa3c068b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:54:35 compute-0 systemd[1]: Started libpod-conmon-4afb949609db9df92599e1088d44c732ce5f0992ef5b311bd034602ffa3c068b.scope.
Dec  3 18:54:35 compute-0 podman[437310]: 2025-12-03 18:54:35.184941549 +0000 UTC m=+0.045134732 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:54:35 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:54:35 compute-0 podman[437310]: 2025-12-03 18:54:35.329925079 +0000 UTC m=+0.190118282 container init 4afb949609db9df92599e1088d44c732ce5f0992ef5b311bd034602ffa3c068b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mestorf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:54:35 compute-0 podman[437310]: 2025-12-03 18:54:35.341007359 +0000 UTC m=+0.201200522 container start 4afb949609db9df92599e1088d44c732ce5f0992ef5b311bd034602ffa3c068b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mestorf, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec  3 18:54:35 compute-0 podman[437310]: 2025-12-03 18:54:35.345228422 +0000 UTC m=+0.205421635 container attach 4afb949609db9df92599e1088d44c732ce5f0992ef5b311bd034602ffa3c068b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mestorf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Dec  3 18:54:35 compute-0 agitated_mestorf[437326]: 167 167
Dec  3 18:54:35 compute-0 systemd[1]: libpod-4afb949609db9df92599e1088d44c732ce5f0992ef5b311bd034602ffa3c068b.scope: Deactivated successfully.
Dec  3 18:54:35 compute-0 conmon[437326]: conmon 4afb949609db9df92599 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4afb949609db9df92599e1088d44c732ce5f0992ef5b311bd034602ffa3c068b.scope/container/memory.events
Dec  3 18:54:35 compute-0 podman[437310]: 2025-12-03 18:54:35.351972437 +0000 UTC m=+0.212165610 container died 4afb949609db9df92599e1088d44c732ce5f0992ef5b311bd034602ffa3c068b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mestorf, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec  3 18:54:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-1db80adf5437d0e67191674306a4c12c94b20d0a9cadb0b0b3bb72387c43255d-merged.mount: Deactivated successfully.
Dec  3 18:54:35 compute-0 podman[437310]: 2025-12-03 18:54:35.40736779 +0000 UTC m=+0.267560953 container remove 4afb949609db9df92599e1088d44c732ce5f0992ef5b311bd034602ffa3c068b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mestorf, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:54:35 compute-0 systemd[1]: libpod-conmon-4afb949609db9df92599e1088d44c732ce5f0992ef5b311bd034602ffa3c068b.scope: Deactivated successfully.
Dec  3 18:54:35 compute-0 nova_compute[348325]: 2025-12-03 18:54:35.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:54:35 compute-0 nova_compute[348325]: 2025-12-03 18:54:35.489 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec  3 18:54:35 compute-0 nova_compute[348325]: 2025-12-03 18:54:35.533 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:54:35 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1665: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:54:35 compute-0 podman[437349]: 2025-12-03 18:54:35.604541654 +0000 UTC m=+0.060346935 container create aff0f684f8af2d911d49a1cffbbdf247ad3393550d7cdf5bd9d135359d589f5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_lamport, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True)
Dec  3 18:54:35 compute-0 podman[437349]: 2025-12-03 18:54:35.576865658 +0000 UTC m=+0.032670979 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:54:35 compute-0 systemd[1]: Started libpod-conmon-aff0f684f8af2d911d49a1cffbbdf247ad3393550d7cdf5bd9d135359d589f5b.scope.
Dec  3 18:54:35 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:54:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae865b4567c2ca49753df8564593cebb2e3d87847d4b61ecea2972df2f9e678a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:54:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae865b4567c2ca49753df8564593cebb2e3d87847d4b61ecea2972df2f9e678a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:54:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae865b4567c2ca49753df8564593cebb2e3d87847d4b61ecea2972df2f9e678a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:54:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae865b4567c2ca49753df8564593cebb2e3d87847d4b61ecea2972df2f9e678a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:54:35 compute-0 podman[437349]: 2025-12-03 18:54:35.743521107 +0000 UTC m=+0.199326428 container init aff0f684f8af2d911d49a1cffbbdf247ad3393550d7cdf5bd9d135359d589f5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_lamport, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  3 18:54:35 compute-0 podman[437349]: 2025-12-03 18:54:35.762818649 +0000 UTC m=+0.218623960 container start aff0f684f8af2d911d49a1cffbbdf247ad3393550d7cdf5bd9d135359d589f5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:54:35 compute-0 podman[437349]: 2025-12-03 18:54:35.769275566 +0000 UTC m=+0.225080867 container attach aff0f684f8af2d911d49a1cffbbdf247ad3393550d7cdf5bd9d135359d589f5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  3 18:54:36 compute-0 xenodochial_lamport[437365]: {
Dec  3 18:54:36 compute-0 xenodochial_lamport[437365]:    "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
Dec  3 18:54:36 compute-0 xenodochial_lamport[437365]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:54:36 compute-0 xenodochial_lamport[437365]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 18:54:36 compute-0 xenodochial_lamport[437365]:        "osd_id": 1,
Dec  3 18:54:36 compute-0 xenodochial_lamport[437365]:        "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:54:36 compute-0 xenodochial_lamport[437365]:        "type": "bluestore"
Dec  3 18:54:36 compute-0 xenodochial_lamport[437365]:    },
Dec  3 18:54:36 compute-0 xenodochial_lamport[437365]:    "2abec9de-afba-437e-9a17-384a1dd8cd50": {
Dec  3 18:54:36 compute-0 xenodochial_lamport[437365]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:54:36 compute-0 xenodochial_lamport[437365]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 18:54:36 compute-0 xenodochial_lamport[437365]:        "osd_id": 2,
Dec  3 18:54:36 compute-0 xenodochial_lamport[437365]:        "osd_uuid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:54:36 compute-0 xenodochial_lamport[437365]:        "type": "bluestore"
Dec  3 18:54:36 compute-0 xenodochial_lamport[437365]:    },
Dec  3 18:54:36 compute-0 xenodochial_lamport[437365]:    "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": {
Dec  3 18:54:36 compute-0 xenodochial_lamport[437365]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:54:36 compute-0 xenodochial_lamport[437365]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 18:54:36 compute-0 xenodochial_lamport[437365]:        "osd_id": 0,
Dec  3 18:54:36 compute-0 xenodochial_lamport[437365]:        "osd_uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:54:36 compute-0 xenodochial_lamport[437365]:        "type": "bluestore"
Dec  3 18:54:36 compute-0 xenodochial_lamport[437365]:    }
Dec  3 18:54:36 compute-0 xenodochial_lamport[437365]: }
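
The JSON printed by the one-shot ceph container above maps each OSD UUID to its device, ID, and store type; it looks like "ceph-volume raw list --format json" output, though the exact subcommand is not visible in the log. A minimal sketch of consuming such output in Python, with the embedded literal trimmed to one of the three entries shown above:

    import json

    # Stdout of the one-shot container above, keyed by osd_uuid
    # (one entry shown; the log shows three).
    raw_json = '''
    {
      "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
        "osd_id": 1,
        "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
        "type": "bluestore"
      }
    }
    '''

    for osd_uuid, osd in sorted(json.loads(raw_json).items(),
                                key=lambda kv: kv[1]["osd_id"]):
        print(f"osd.{osd['osd_id']}: {osd['device']} ({osd['type']})")
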
Dec  3 18:54:36 compute-0 systemd[1]: libpod-aff0f684f8af2d911d49a1cffbbdf247ad3393550d7cdf5bd9d135359d589f5b.scope: Deactivated successfully.
Dec  3 18:54:36 compute-0 podman[437349]: 2025-12-03 18:54:36.917794068 +0000 UTC m=+1.373599349 container died aff0f684f8af2d911d49a1cffbbdf247ad3393550d7cdf5bd9d135359d589f5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_lamport, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:54:36 compute-0 systemd[1]: libpod-aff0f684f8af2d911d49a1cffbbdf247ad3393550d7cdf5bd9d135359d589f5b.scope: Consumed 1.153s CPU time.
Dec  3 18:54:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae865b4567c2ca49753df8564593cebb2e3d87847d4b61ecea2972df2f9e678a-merged.mount: Deactivated successfully.
Dec  3 18:54:37 compute-0 podman[437349]: 2025-12-03 18:54:37.002110726 +0000 UTC m=+1.457915997 container remove aff0f684f8af2d911d49a1cffbbdf247ad3393550d7cdf5bd9d135359d589f5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Dec  3 18:54:37 compute-0 systemd[1]: libpod-conmon-aff0f684f8af2d911d49a1cffbbdf247ad3393550d7cdf5bd9d135359d589f5b.scope: Deactivated successfully.
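
The create, init, start, attach, died, remove sequence for xenodochial_lamport (and agitated_mestorf before it) is the footprint of a short-lived "podman run" with an auto-generated name, removed as soon as its entrypoint exits. A hedged sketch of reproducing that lifecycle: the image digest is taken from the log, while the --rm flag and the ceph-volume arguments are assumptions inferred from the immediate removal and the JSON listing above.

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # One-shot container: podman logs create/init/start/attach, then
    # died/remove once the process exits; --rm triggers the removal.
    result = subprocess.run(
        ["podman", "run", "--rm", IMAGE, "ceph-volume", "raw", "list"],
        capture_output=True, text=True, check=False)
    print(result.stdout)
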
Dec  3 18:54:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:54:37 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:54:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:54:37 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:54:37 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 68d14595-2573-470e-b8b8-bc79d1a186fc does not exist
Dec  3 18:54:37 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 1be22543-3583-4dd7-949a-566d834c46b0 does not exist
Dec  3 18:54:37 compute-0 ceph-mgr[193091]: [devicehealth INFO root] Check health
Dec  3 18:54:37 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1666: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:54:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 18:54:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3132134988' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 18:54:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 18:54:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3132134988' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
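
The two audit lines show client.openstack driving the monitor with JSON mon commands ("df", then "osd pool get-quota" on the volumes pool), which is how OpenStack-side capacity polling usually looks. A minimal python-rados sketch of the same calls; the conffile path and rados_id are assumptions:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                          rados_id="openstack")  # assumed client name
    cluster.connect()
    try:
        for cmd in ({"prefix": "df", "format": "json"},
                    {"prefix": "osd pool get-quota",
                     "pool": "volumes", "format": "json"}):
            ret, out, errs = cluster.mon_command(json.dumps(cmd), b"")
            print(cmd["prefix"], "->", ret, json.loads(out or "{}"))
    finally:
        cluster.shutdown()
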
Dec  3 18:54:38 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:54:38 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:54:38 compute-0 nova_compute[348325]: 2025-12-03 18:54:38.229 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:54:39 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:54:39 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1667: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:54:40 compute-0 nova_compute[348325]: 2025-12-03 18:54:40.537 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:54:40 compute-0 podman[437460]: 2025-12-03 18:54:40.948823075 +0000 UTC m=+0.101207032 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., vcs-type=git, com.redhat.component=ubi9-minimal-container, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vendor=Red Hat, Inc., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.openshift.expose-services=, config_id=edpm, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec  3 18:54:40 compute-0 podman[437459]: 2025-12-03 18:54:40.951290445 +0000 UTC m=+0.100065054 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  3 18:54:40 compute-0 podman[437458]: 2025-12-03 18:54:40.955321283 +0000 UTC m=+0.105183178 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
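
The health_status=healthy events above come from podman running each container's configured healthcheck (the test commands are visible in the config_data labels). A sketch that reads the same status back via podman inspect; the State.Health field name is an assumption and varies across podman versions:

    import json
    import subprocess

    def health_status(name: str) -> str:
        """Return the current healthcheck status of a container."""
        out = subprocess.run(
            ["podman", "inspect", "--format", "json", name],
            capture_output=True, text=True, check=True).stdout
        state = json.loads(out)[0]["State"]
        # Older podman exposes State.Healthcheck, newer State.Health.
        health = state.get("Health") or state.get("Healthcheck") or {}
        return health.get("Status", "unknown")

    for name in ("openstack_network_exporter", "node_exporter", "multipathd"):
        print(name, health_status(name))
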
Dec  3 18:54:41 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1668: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:54:43 compute-0 nova_compute[348325]: 2025-12-03 18:54:43.231 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:54:43 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1669: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:54:43 compute-0 podman[437522]: 2025-12-03 18:54:43.962853988 +0000 UTC m=+0.098506565 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  3 18:54:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:54:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:54:43 compute-0 podman[437523]: 2025-12-03 18:54:43.968949248 +0000 UTC m=+0.099955402 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  3 18:54:43 compute-0 podman[437521]: 2025-12-03 18:54:43.97642217 +0000 UTC m=+0.120136684 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, release=1214.1726694543, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9)
Dec  3 18:54:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:54:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:54:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:54:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:54:44 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:54:45 compute-0 nova_compute[348325]: 2025-12-03 18:54:45.142 348329 DEBUG nova.compute.manager [req-b961684d-7ae1-4608-88df-ee574a1fc19b req-06f829e5-9d68-4685-98fa-6a1d12391642 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] Received event network-changed-bdba7a40-8840-4832-a614-279c23eb82ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 18:54:45 compute-0 nova_compute[348325]: 2025-12-03 18:54:45.143 348329 DEBUG nova.compute.manager [req-b961684d-7ae1-4608-88df-ee574a1fc19b req-06f829e5-9d68-4685-98fa-6a1d12391642 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] Refreshing instance network info cache due to event network-changed-bdba7a40-8840-4832-a614-279c23eb82ca. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  3 18:54:45 compute-0 nova_compute[348325]: 2025-12-03 18:54:45.144 348329 DEBUG oslo_concurrency.lockutils [req-b961684d-7ae1-4608-88df-ee574a1fc19b req-06f829e5-9d68-4685-98fa-6a1d12391642 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "refresh_cache-a6019a9c-c065-49d8-bef3-219bd2c79d8c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 18:54:45 compute-0 nova_compute[348325]: 2025-12-03 18:54:45.145 348329 DEBUG oslo_concurrency.lockutils [req-b961684d-7ae1-4608-88df-ee574a1fc19b req-06f829e5-9d68-4685-98fa-6a1d12391642 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquired lock "refresh_cache-a6019a9c-c065-49d8-bef3-219bd2c79d8c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 18:54:45 compute-0 nova_compute[348325]: 2025-12-03 18:54:45.146 348329 DEBUG nova.network.neutron [req-b961684d-7ae1-4608-88df-ee574a1fc19b req-06f829e5-9d68-4685-98fa-6a1d12391642 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] Refreshing network info cache for port bdba7a40-8840-4832-a614-279c23eb82ca _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  3 18:54:45 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:54:45.371 286999 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5a:63:53', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '8e:79:bd:f4:48:1d'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 18:54:45 compute-0 nova_compute[348325]: 2025-12-03 18:54:45.373 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:54:45 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:54:45.373 286999 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  3 18:54:45 compute-0 nova_compute[348325]: 2025-12-03 18:54:45.539 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:54:45 compute-0 nova_compute[348325]: 2025-12-03 18:54:45.548 348329 DEBUG oslo_concurrency.lockutils [None req-7ff46d59-0c8d-42d4-bb07-d11759cc84f1 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Acquiring lock "a6019a9c-c065-49d8-bef3-219bd2c79d8c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:54:45 compute-0 nova_compute[348325]: 2025-12-03 18:54:45.549 348329 DEBUG oslo_concurrency.lockutils [None req-7ff46d59-0c8d-42d4-bb07-d11759cc84f1 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "a6019a9c-c065-49d8-bef3-219bd2c79d8c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:54:45 compute-0 nova_compute[348325]: 2025-12-03 18:54:45.549 348329 DEBUG oslo_concurrency.lockutils [None req-7ff46d59-0c8d-42d4-bb07-d11759cc84f1 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Acquiring lock "a6019a9c-c065-49d8-bef3-219bd2c79d8c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:54:45 compute-0 nova_compute[348325]: 2025-12-03 18:54:45.549 348329 DEBUG oslo_concurrency.lockutils [None req-7ff46d59-0c8d-42d4-bb07-d11759cc84f1 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "a6019a9c-c065-49d8-bef3-219bd2c79d8c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:54:45 compute-0 nova_compute[348325]: 2025-12-03 18:54:45.549 348329 DEBUG oslo_concurrency.lockutils [None req-7ff46d59-0c8d-42d4-bb07-d11759cc84f1 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "a6019a9c-c065-49d8-bef3-219bd2c79d8c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
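
The four lockutils lines above are the usual oslo.concurrency nesting in a terminate path: an outer lock on the instance UUID serializes do_terminate_instance, and a short-lived "<uuid>-events" lock is taken and released inside it while pending external events are cleared. A minimal sketch of that pattern, with illustrative names rather than nova's actual code:

    from oslo_concurrency import lockutils

    INSTANCE_UUID = "a6019a9c-c065-49d8-bef3-219bd2c79d8c"

    @lockutils.synchronized(INSTANCE_UUID)
    def do_terminate_instance():
        # Inner lock guards clearing pending external events, matching
        # the "-events" acquire/release pair in the log.
        with lockutils.lock(INSTANCE_UUID + "-events"):
            pass  # clear_events_for_instance work would happen here
        # ... shutdown/destroy continues under the outer lock ...

    do_terminate_instance()
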
Dec  3 18:54:45 compute-0 nova_compute[348325]: 2025-12-03 18:54:45.551 348329 INFO nova.compute.manager [None req-7ff46d59-0c8d-42d4-bb07-d11759cc84f1 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] Terminating instance#033[00m
Dec  3 18:54:45 compute-0 nova_compute[348325]: 2025-12-03 18:54:45.551 348329 DEBUG nova.compute.manager [None req-7ff46d59-0c8d-42d4-bb07-d11759cc84f1 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  3 18:54:45 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1670: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:54:45 compute-0 kernel: tapbdba7a40-88 (unregistering): left promiscuous mode
Dec  3 18:54:45 compute-0 NetworkManager[49087]: <info>  [1764788085.7311] device (tapbdba7a40-88): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  3 18:54:45 compute-0 ovn_controller[89305]: 2025-12-03T18:54:45Z|00058|binding|INFO|Releasing lport bdba7a40-8840-4832-a614-279c23eb82ca from this chassis (sb_readonly=0)
Dec  3 18:54:45 compute-0 ovn_controller[89305]: 2025-12-03T18:54:45Z|00059|binding|INFO|Setting lport bdba7a40-8840-4832-a614-279c23eb82ca down in Southbound
Dec  3 18:54:45 compute-0 nova_compute[348325]: 2025-12-03 18:54:45.743 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:54:45 compute-0 ovn_controller[89305]: 2025-12-03T18:54:45Z|00060|binding|INFO|Removing iface tapbdba7a40-88 ovn-installed in OVS
Dec  3 18:54:45 compute-0 nova_compute[348325]: 2025-12-03 18:54:45.747 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:54:45 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:54:45.755 286999 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:93:41:b2 192.168.0.189'], port_security=['fa:16:3e:93:41:b2 192.168.0.189'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-4jfpq66btob3-zeembfmsdvyd-qc6d57h54o3l-port-r6aaxu66huxr', 'neutron:cidrs': '192.168.0.189/24', 'neutron:device_id': 'a6019a9c-c065-49d8-bef3-219bd2c79d8c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-85c8d446-ad7f-4d1b-a311-89b0b07e8aad', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-4jfpq66btob3-zeembfmsdvyd-qc6d57h54o3l-port-r6aaxu66huxr', 'neutron:project_id': 'd2770200bdb2436c90142fa2e5ddcd47', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8e48052e-a2fd-4fc1-8ebd-22e3b6e0bd66', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=12999ead-9a54-49b3-a532-a5f8bdddaf16, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f81e3e96760>], logical_port=bdba7a40-8840-4832-a614-279c23eb82ca) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f81e3e96760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 18:54:45 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:54:45.759 286999 INFO neutron.agent.ovn.metadata.agent [-] Port bdba7a40-8840-4832-a614-279c23eb82ca in datapath 85c8d446-ad7f-4d1b-a311-89b0b07e8aad unbound from our chassis#033[00m
Dec  3 18:54:45 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:54:45.764 286999 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 85c8d446-ad7f-4d1b-a311-89b0b07e8aad#033[00m
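
The "Matched UPDATE ... table='Port_Binding'" line is ovsdbapp's row-event machinery firing as the port's chassis/up columns change, after which the agent concludes the port is no longer bound to this chassis. A hedged sketch of such an event class using ovsdbapp's RowEvent base class (neutron's real handler carries more logic):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        """Fires when a Port_Binding row changes, e.g. chassis released."""

        def __init__(self):
            # events=('update',), table='Port_Binding', conditions=None:
            # the same tuple visible in the matched-event log line.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # 'old' holds prior values of the changed columns (up, chassis).
            print('Port_Binding updated for lport %s' % row.logical_port)
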
Dec  3 18:54:45 compute-0 nova_compute[348325]: 2025-12-03 18:54:45.778 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:54:45 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:54:45.785 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[5c774705-4f0c-41ef-b488-2d0d5ab7314f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:54:45 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:54:45.824 411797 DEBUG oslo.privsep.daemon [-] privsep: reply[c3566360-e795-4c3c-9ff9-697c47ec5698]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:54:45 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:54:45.827 411797 DEBUG oslo.privsep.daemon [-] privsep: reply[440440ac-fd40-4743-9685-df0b2234c966]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
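
The privsep reply[...] lines show the unprivileged agent receiving serialized results back from its privileged helper process over the daemon channel. A minimal oslo.privsep sketch of how such a channel is declared; the context name, capability, and function are illustrative, not neutron's actual definitions:

    from oslo_privsep import capabilities, priv_context

    # Illustrative privileged context (neutron defines its own).
    default = priv_context.PrivContext(
        __name__,
        cfg_section='privsep',
        pypath=__name__ + '.default',
        capabilities=[capabilities.CAP_NET_ADMIN],
    )

    @default.entrypoint
    def get_link_attrs(ifname):
        # Runs inside the privileged daemon; the return value travels
        # back as one of the reply[...] messages seen in the log.
        return {'ifname': ifname}
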
Dec  3 18:54:45 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Deactivated successfully.
Dec  3 18:54:45 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Consumed 1min 57.361s CPU time.
Dec  3 18:54:45 compute-0 systemd-machined[138702]: Machine qemu-4-instance-00000004 terminated.
Dec  3 18:54:45 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:54:45.858 411797 DEBUG oslo.privsep.daemon [-] privsep: reply[3eddc507-6d10-41ec-a6aa-62d12f348101]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:54:45 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:54:45.872 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[d9ec6ec0-45e1-4304-9c46-21f0df75cc86]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap85c8d446-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:c1:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 7, 'tx_packets': 15, 'rx_bytes': 574, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 7, 'tx_packets': 15, 'rx_bytes': 574, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 527503, 'reachable_time': 37621, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 437585, 'error': None, 'target': 'ovnmeta-85c8d446-ad7f-4d1b-a311-89b0b07e8aad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:54:45 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:54:45.890 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[96bfd8da-3cd3-461f-9454-54c8f2b9739d]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap85c8d446-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 527519, 'tstamp': 527519}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 437586, 'error': None, 'target': 'ovnmeta-85c8d446-ad7f-4d1b-a311-89b0b07e8aad', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap85c8d446-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 527523, 'tstamp': 527523}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 437586, 'error': None, 'target': 'ovnmeta-85c8d446-ad7f-4d1b-a311-89b0b07e8aad', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:54:45 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:54:45.893 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85c8d446-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:54:45 compute-0 nova_compute[348325]: 2025-12-03 18:54:45.895 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:54:45 compute-0 nova_compute[348325]: 2025-12-03 18:54:45.901 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:54:45 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:54:45.903 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap85c8d446-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:54:45 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:54:45.904 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 18:54:45 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:54:45.905 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap85c8d446-a0, col_values=(('external_ids', {'iface-id': '4db8340d-afa3-4a82-bd51-bca0a752f53f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:54:45 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:54:45.906 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
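
The three transactions above remove the metadata tap from br-ex, ensure it is on br-int, and pin its iface-id; the add and set report "Transaction caused no change" because those rows were already in the desired state. A sketch of the same commands through ovsdbapp's public API; the ovsdb-server socket path is an assumption:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')  # assumed endpoint
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    PORT = 'tap85c8d446-a0'
    with api.transaction(check_error=True) as txn:
        # Same commands as logged: DelPort, AddPort, DbSet. Re-running
        # against an already-converged database is a no-op.
        txn.add(api.del_port(PORT, bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', PORT, may_exist=True))
        txn.add(api.db_set(
            'Interface', PORT,
            ('external_ids',
             {'iface-id': '4db8340d-afa3-4a82-bd51-bca0a752f53f'})))
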
Dec  3 18:54:45 compute-0 nova_compute[348325]: 2025-12-03 18:54:45.976 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:54:45 compute-0 nova_compute[348325]: 2025-12-03 18:54:45.986 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:54:46 compute-0 nova_compute[348325]: 2025-12-03 18:54:46.000 348329 INFO nova.virt.libvirt.driver [-] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] Instance destroyed successfully.#033[00m
Dec  3 18:54:46 compute-0 nova_compute[348325]: 2025-12-03 18:54:46.000 348329 DEBUG nova.objects.instance [None req-7ff46d59-0c8d-42d4-bb07-d11759cc84f1 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lazy-loading 'resources' on Instance uuid a6019a9c-c065-49d8-bef3-219bd2c79d8c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 18:54:46 compute-0 nova_compute[348325]: 2025-12-03 18:54:46.013 348329 DEBUG nova.virt.libvirt.vif [None req-7ff46d59-0c8d-42d4-bb07-d11759cc84f1 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-03T18:44:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-66btob3-zeembfmsdvyd-qc6d57h54o3l-vnf-m24sgrg35czm',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-66btob3-zeembfmsdvyd-qc6d57h54o3l-vnf-m24sgrg35czm',id=4,image_ref='e68cd467-b4e6-45e0-8e55-984fda402294',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-03T18:44:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='b322e118-e1cc-40be-8d8c-553648144092'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d2770200bdb2436c90142fa2e5ddcd47',ramdisk_id='',reservation_id='r-xha30ma8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader',image_base_image_ref='e68cd467-b4e6-45e0-8e55-984fda402294',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-03T18:44:35Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0zNDk5MjIzMjc0MDkxMDEzMjk4PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTM0OTkyMjMyNzQwOTEwMTMyOTg9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MzQ5OTIyMzI3NDA5MTAxMzI5OD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91
dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTM0OTkyMjMyNzQwOTEwMTMyOTg9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0zNDk5MjIzMjc0MDkxMDEzMjk4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0zNDk5MjIzMjc0MDkxMDEzMjk4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0U
tMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKC
Dec  3 18:54:46 compute-0 nova_compute[348325]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MzQ5OTIyMzI3NDA5MTAxMzI5OD09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTM0OTkyMjMyNzQwOTEwMTMyOTg9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0zNDk5MjIzMjc0MDkxMDEzMjk4PT0tLQo=',user_id='56338958b09445f5af9aa9e4601a1a8a',uuid=a6019a9c-c065-49d8-bef3-219bd2c79d8c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bdba7a40-8840-4832-a614-279c23eb82ca", "address": "fa:16:3e:93:41:b2", "network": {"id": "85c8d446-ad7f-4d1b-a311-89b0b07e8aad", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.189", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "d2770200bdb2436c90142fa2e5ddcd47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdba7a40-88", "ovs_interfaceid": "bdba7a40-8840-4832-a614-279c23eb82ca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  3 18:54:46 compute-0 nova_compute[348325]: 2025-12-03 18:54:46.014 348329 DEBUG nova.network.os_vif_util [None req-7ff46d59-0c8d-42d4-bb07-d11759cc84f1 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Converting VIF {"id": "bdba7a40-8840-4832-a614-279c23eb82ca", "address": "fa:16:3e:93:41:b2", "network": {"id": "85c8d446-ad7f-4d1b-a311-89b0b07e8aad", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.189", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2770200bdb2436c90142fa2e5ddcd47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdba7a40-88", "ovs_interfaceid": "bdba7a40-8840-4832-a614-279c23eb82ca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 18:54:46 compute-0 nova_compute[348325]: 2025-12-03 18:54:46.015 348329 DEBUG nova.network.os_vif_util [None req-7ff46d59-0c8d-42d4-bb07-d11759cc84f1 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:93:41:b2,bridge_name='br-int',has_traffic_filtering=True,id=bdba7a40-8840-4832-a614-279c23eb82ca,network=Network(85c8d446-ad7f-4d1b-a311-89b0b07e8aad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbdba7a40-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 18:54:46 compute-0 nova_compute[348325]: 2025-12-03 18:54:46.015 348329 DEBUG os_vif [None req-7ff46d59-0c8d-42d4-bb07-d11759cc84f1 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:93:41:b2,bridge_name='br-int',has_traffic_filtering=True,id=bdba7a40-8840-4832-a614-279c23eb82ca,network=Network(85c8d446-ad7f-4d1b-a311-89b0b07e8aad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbdba7a40-88') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  3 18:54:46 compute-0 nova_compute[348325]: 2025-12-03 18:54:46.018 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:54:46 compute-0 nova_compute[348325]: 2025-12-03 18:54:46.019 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbdba7a40-88, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:54:46 compute-0 nova_compute[348325]: 2025-12-03 18:54:46.024 348329 DEBUG nova.compute.manager [req-53b779d0-1f2f-4088-acc8-8169c5c37957 req-a63ac6cb-abe0-494e-8848-7f7f11932d82 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] Received event network-vif-unplugged-bdba7a40-8840-4832-a614-279c23eb82ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 18:54:46 compute-0 nova_compute[348325]: 2025-12-03 18:54:46.024 348329 DEBUG oslo_concurrency.lockutils [req-53b779d0-1f2f-4088-acc8-8169c5c37957 req-a63ac6cb-abe0-494e-8848-7f7f11932d82 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "a6019a9c-c065-49d8-bef3-219bd2c79d8c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:54:46 compute-0 nova_compute[348325]: 2025-12-03 18:54:46.025 348329 DEBUG oslo_concurrency.lockutils [req-53b779d0-1f2f-4088-acc8-8169c5c37957 req-a63ac6cb-abe0-494e-8848-7f7f11932d82 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "a6019a9c-c065-49d8-bef3-219bd2c79d8c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:54:46 compute-0 nova_compute[348325]: 2025-12-03 18:54:46.025 348329 DEBUG oslo_concurrency.lockutils [req-53b779d0-1f2f-4088-acc8-8169c5c37957 req-a63ac6cb-abe0-494e-8848-7f7f11932d82 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "a6019a9c-c065-49d8-bef3-219bd2c79d8c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:54:46 compute-0 nova_compute[348325]: 2025-12-03 18:54:46.026 348329 DEBUG nova.compute.manager [req-53b779d0-1f2f-4088-acc8-8169c5c37957 req-a63ac6cb-abe0-494e-8848-7f7f11932d82 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] No waiting events found dispatching network-vif-unplugged-bdba7a40-8840-4832-a614-279c23eb82ca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  3 18:54:46 compute-0 nova_compute[348325]: 2025-12-03 18:54:46.026 348329 DEBUG nova.compute.manager [req-53b779d0-1f2f-4088-acc8-8169c5c37957 req-a63ac6cb-abe0-494e-8848-7f7f11932d82 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] Received event network-vif-unplugged-bdba7a40-8840-4832-a614-279c23eb82ca for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  3 18:54:46 compute-0 nova_compute[348325]: 2025-12-03 18:54:46.027 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:54:46 compute-0 nova_compute[348325]: 2025-12-03 18:54:46.029 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  3 18:54:46 compute-0 nova_compute[348325]: 2025-12-03 18:54:46.032 348329 INFO os_vif [None req-7ff46d59-0c8d-42d4-bb07-d11759cc84f1 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:93:41:b2,bridge_name='br-int',has_traffic_filtering=True,id=bdba7a40-8840-4832-a614-279c23eb82ca,network=Network(85c8d446-ad7f-4d1b-a311-89b0b07e8aad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbdba7a40-88')#033[00m
Dec  3 18:54:46 compute-0 rsyslogd[188590]: message too long (8192) with configured size 8096, begin of message is: 2025-12-03 18:54:46.013 348329 DEBUG nova.virt.libvirt.vif [None req-7ff46d59-0c [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec  3 18:54:46 compute-0 nova_compute[348325]: 2025-12-03 18:54:46.585 348329 DEBUG nova.network.neutron [req-b961684d-7ae1-4608-88df-ee574a1fc19b req-06f829e5-9d68-4685-98fa-6a1d12391642 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] Updated VIF entry in instance network info cache for port bdba7a40-8840-4832-a614-279c23eb82ca. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  3 18:54:46 compute-0 nova_compute[348325]: 2025-12-03 18:54:46.586 348329 DEBUG nova.network.neutron [req-b961684d-7ae1-4608-88df-ee574a1fc19b req-06f829e5-9d68-4685-98fa-6a1d12391642 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] Updating instance_info_cache with network_info: [{"id": "bdba7a40-8840-4832-a614-279c23eb82ca", "address": "fa:16:3e:93:41:b2", "network": {"id": "85c8d446-ad7f-4d1b-a311-89b0b07e8aad", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.189", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2770200bdb2436c90142fa2e5ddcd47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdba7a40-88", "ovs_interfaceid": "bdba7a40-8840-4832-a614-279c23eb82ca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 18:54:46 compute-0 nova_compute[348325]: 2025-12-03 18:54:46.619 348329 DEBUG oslo_concurrency.lockutils [req-b961684d-7ae1-4608-88df-ee574a1fc19b req-06f829e5-9d68-4685-98fa-6a1d12391642 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Releasing lock "refresh_cache-a6019a9c-c065-49d8-bef3-219bd2c79d8c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 18:54:47 compute-0 nova_compute[348325]: 2025-12-03 18:54:47.151 348329 INFO nova.virt.libvirt.driver [None req-7ff46d59-0c8d-42d4-bb07-d11759cc84f1 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] Deleting instance files /var/lib/nova/instances/a6019a9c-c065-49d8-bef3-219bd2c79d8c_del#033[00m
Dec  3 18:54:47 compute-0 nova_compute[348325]: 2025-12-03 18:54:47.151 348329 INFO nova.virt.libvirt.driver [None req-7ff46d59-0c8d-42d4-bb07-d11759cc84f1 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] Deletion of /var/lib/nova/instances/a6019a9c-c065-49d8-bef3-219bd2c79d8c_del complete#033[00m
Dec  3 18:54:47 compute-0 nova_compute[348325]: 2025-12-03 18:54:47.268 348329 INFO nova.compute.manager [None req-7ff46d59-0c8d-42d4-bb07-d11759cc84f1 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] Took 1.72 seconds to destroy the instance on the hypervisor.#033[00m
Dec  3 18:54:47 compute-0 nova_compute[348325]: 2025-12-03 18:54:47.269 348329 DEBUG oslo.service.loopingcall [None req-7ff46d59-0c8d-42d4-bb07-d11759cc84f1 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  3 18:54:47 compute-0 nova_compute[348325]: 2025-12-03 18:54:47.270 348329 DEBUG nova.compute.manager [-] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  3 18:54:47 compute-0 nova_compute[348325]: 2025-12-03 18:54:47.270 348329 DEBUG nova.network.neutron [-] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  3 18:54:47 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1671: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s
Dec  3 18:54:48 compute-0 nova_compute[348325]: 2025-12-03 18:54:48.109 348329 DEBUG nova.compute.manager [req-78a29804-fe38-4774-89ec-abbc0fc5ba89 req-cd2059e4-51c4-4fdb-ba2e-39eb72f901d0 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] Received event network-vif-plugged-bdba7a40-8840-4832-a614-279c23eb82ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 18:54:48 compute-0 nova_compute[348325]: 2025-12-03 18:54:48.109 348329 DEBUG oslo_concurrency.lockutils [req-78a29804-fe38-4774-89ec-abbc0fc5ba89 req-cd2059e4-51c4-4fdb-ba2e-39eb72f901d0 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "a6019a9c-c065-49d8-bef3-219bd2c79d8c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:54:48 compute-0 nova_compute[348325]: 2025-12-03 18:54:48.110 348329 DEBUG oslo_concurrency.lockutils [req-78a29804-fe38-4774-89ec-abbc0fc5ba89 req-cd2059e4-51c4-4fdb-ba2e-39eb72f901d0 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "a6019a9c-c065-49d8-bef3-219bd2c79d8c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:54:48 compute-0 nova_compute[348325]: 2025-12-03 18:54:48.110 348329 DEBUG oslo_concurrency.lockutils [req-78a29804-fe38-4774-89ec-abbc0fc5ba89 req-cd2059e4-51c4-4fdb-ba2e-39eb72f901d0 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "a6019a9c-c065-49d8-bef3-219bd2c79d8c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:54:48 compute-0 nova_compute[348325]: 2025-12-03 18:54:48.110 348329 DEBUG nova.compute.manager [req-78a29804-fe38-4774-89ec-abbc0fc5ba89 req-cd2059e4-51c4-4fdb-ba2e-39eb72f901d0 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] No waiting events found dispatching network-vif-plugged-bdba7a40-8840-4832-a614-279c23eb82ca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  3 18:54:48 compute-0 nova_compute[348325]: 2025-12-03 18:54:48.111 348329 WARNING nova.compute.manager [req-78a29804-fe38-4774-89ec-abbc0fc5ba89 req-cd2059e4-51c4-4fdb-ba2e-39eb72f901d0 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] Received unexpected event network-vif-plugged-bdba7a40-8840-4832-a614-279c23eb82ca for instance with vm_state active and task_state deleting.#033[00m
Dec  3 18:54:48 compute-0 nova_compute[348325]: 2025-12-03 18:54:48.235 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:54:49 compute-0 nova_compute[348325]: 2025-12-03 18:54:49.071 348329 DEBUG nova.network.neutron [-] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 18:54:49 compute-0 nova_compute[348325]: 2025-12-03 18:54:49.102 348329 INFO nova.compute.manager [-] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] Took 1.83 seconds to deallocate network for instance.#033[00m
Dec  3 18:54:49 compute-0 nova_compute[348325]: 2025-12-03 18:54:49.140 348329 DEBUG oslo_concurrency.lockutils [None req-7ff46d59-0c8d-42d4-bb07-d11759cc84f1 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:54:49 compute-0 nova_compute[348325]: 2025-12-03 18:54:49.141 348329 DEBUG oslo_concurrency.lockutils [None req-7ff46d59-0c8d-42d4-bb07-d11759cc84f1 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:54:49 compute-0 nova_compute[348325]: 2025-12-03 18:54:49.217 348329 DEBUG oslo_concurrency.processutils [None req-7ff46d59-0c8d-42d4-bb07-d11759cc84f1 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:54:49 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:54:49 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1672: 321 pgs: 321 active+clean; 126 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 255 B/s wr, 19 op/s
Dec  3 18:54:49 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:54:49 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2846116983' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:54:49 compute-0 nova_compute[348325]: 2025-12-03 18:54:49.698 348329 DEBUG oslo_concurrency.processutils [None req-7ff46d59-0c8d-42d4-bb07-d11759cc84f1 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:54:49 compute-0 nova_compute[348325]: 2025-12-03 18:54:49.709 348329 DEBUG nova.compute.provider_tree [None req-7ff46d59-0c8d-42d4-bb07-d11759cc84f1 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 18:54:49 compute-0 nova_compute[348325]: 2025-12-03 18:54:49.746 348329 DEBUG nova.scheduler.client.report [None req-7ff46d59-0c8d-42d4-bb07-d11759cc84f1 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 18:54:49 compute-0 nova_compute[348325]: 2025-12-03 18:54:49.776 348329 DEBUG oslo_concurrency.lockutils [None req-7ff46d59-0c8d-42d4-bb07-d11759cc84f1 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.635s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:54:49 compute-0 nova_compute[348325]: 2025-12-03 18:54:49.821 348329 INFO nova.scheduler.client.report [None req-7ff46d59-0c8d-42d4-bb07-d11759cc84f1 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Deleted allocations for instance a6019a9c-c065-49d8-bef3-219bd2c79d8c#033[00m
Dec  3 18:54:49 compute-0 nova_compute[348325]: 2025-12-03 18:54:49.912 348329 DEBUG oslo_concurrency.lockutils [None req-7ff46d59-0c8d-42d4-bb07-d11759cc84f1 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "a6019a9c-c065-49d8-bef3-219bd2c79d8c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.363s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:54:51 compute-0 nova_compute[348325]: 2025-12-03 18:54:51.022 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:54:51 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1673: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec  3 18:54:52 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:54:52.376 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1ac9fd0d-196b-4ea8-9a9a-8aa831092805, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:54:53 compute-0 nova_compute[348325]: 2025-12-03 18:54:53.237 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:54:53 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1674: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec  3 18:54:54 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:54:54 compute-0 podman[437641]: 2025-12-03 18:54:54.96083294 +0000 UTC m=+0.113814280 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  3 18:54:55 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1675: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec  3 18:54:56 compute-0 nova_compute[348325]: 2025-12-03 18:54:56.024 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:54:57 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1676: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec  3 18:54:57 compute-0 podman[437664]: 2025-12-03 18:54:57.98366558 +0000 UTC m=+0.131421270 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true)
Dec  3 18:54:57 compute-0 podman[437665]: 2025-12-03 18:54:57.985115196 +0000 UTC m=+0.130319243 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.4)
Dec  3 18:54:58 compute-0 nova_compute[348325]: 2025-12-03 18:54:58.239 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:54:59 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:54:59 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1677: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 1.5 KiB/s wr, 38 op/s
Dec  3 18:54:59 compute-0 podman[158200]: time="2025-12-03T18:54:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:54:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:54:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 18:54:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:54:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8645 "" "Go-http-client/1.1"
Dec  3 18:55:01 compute-0 nova_compute[348325]: 2025-12-03 18:55:00.999 348329 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764788085.9968708, a6019a9c-c065-49d8-bef3-219bd2c79d8c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 18:55:01 compute-0 nova_compute[348325]: 2025-12-03 18:55:01.000 348329 INFO nova.compute.manager [-] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] VM Stopped (Lifecycle Event)#033[00m
Dec  3 18:55:01 compute-0 nova_compute[348325]: 2025-12-03 18:55:01.018 348329 DEBUG nova.compute.manager [None req-bc86ffa5-2455-4c09-b5c5-a1ca07325958 - - - - - -] [instance: a6019a9c-c065-49d8-bef3-219bd2c79d8c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 18:55:01 compute-0 nova_compute[348325]: 2025-12-03 18:55:01.028 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:55:01 compute-0 openstack_network_exporter[365222]: ERROR   18:55:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:55:01 compute-0 openstack_network_exporter[365222]: ERROR   18:55:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:55:01 compute-0 openstack_network_exporter[365222]: ERROR   18:55:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:55:01 compute-0 openstack_network_exporter[365222]: ERROR   18:55:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:55:01 compute-0 openstack_network_exporter[365222]: ERROR   18:55:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:55:01 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1678: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.5 KiB/s wr, 20 op/s
Dec  3 18:55:03 compute-0 nova_compute[348325]: 2025-12-03 18:55:03.242 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:55:03 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1679: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:55:04 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:55:05 compute-0 nova_compute[348325]: 2025-12-03 18:55:05.058 348329 DEBUG oslo_concurrency.lockutils [None req-6b9fc0e7-a30c-408d-9b8c-e15839155674 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Acquiring lock "1ca1fbdb-089c-4544-821e-0542089b8424" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:55:05 compute-0 nova_compute[348325]: 2025-12-03 18:55:05.059 348329 DEBUG oslo_concurrency.lockutils [None req-6b9fc0e7-a30c-408d-9b8c-e15839155674 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "1ca1fbdb-089c-4544-821e-0542089b8424" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:55:05 compute-0 nova_compute[348325]: 2025-12-03 18:55:05.059 348329 DEBUG oslo_concurrency.lockutils [None req-6b9fc0e7-a30c-408d-9b8c-e15839155674 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Acquiring lock "1ca1fbdb-089c-4544-821e-0542089b8424-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:55:05 compute-0 nova_compute[348325]: 2025-12-03 18:55:05.060 348329 DEBUG oslo_concurrency.lockutils [None req-6b9fc0e7-a30c-408d-9b8c-e15839155674 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "1ca1fbdb-089c-4544-821e-0542089b8424-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:55:05 compute-0 nova_compute[348325]: 2025-12-03 18:55:05.060 348329 DEBUG oslo_concurrency.lockutils [None req-6b9fc0e7-a30c-408d-9b8c-e15839155674 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "1ca1fbdb-089c-4544-821e-0542089b8424-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:55:05 compute-0 nova_compute[348325]: 2025-12-03 18:55:05.062 348329 INFO nova.compute.manager [None req-6b9fc0e7-a30c-408d-9b8c-e15839155674 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Terminating instance#033[00m
Dec  3 18:55:05 compute-0 nova_compute[348325]: 2025-12-03 18:55:05.064 348329 DEBUG nova.compute.manager [None req-6b9fc0e7-a30c-408d-9b8c-e15839155674 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  3 18:55:05 compute-0 kernel: tap3d8505a1-5c (unregistering): left promiscuous mode
Dec  3 18:55:05 compute-0 NetworkManager[49087]: <info>  [1764788105.2022] device (tap3d8505a1-5c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  3 18:55:05 compute-0 nova_compute[348325]: 2025-12-03 18:55:05.217 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:55:05 compute-0 ovn_controller[89305]: 2025-12-03T18:55:05Z|00061|binding|INFO|Releasing lport 3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b from this chassis (sb_readonly=0)
Dec  3 18:55:05 compute-0 ovn_controller[89305]: 2025-12-03T18:55:05Z|00062|binding|INFO|Setting lport 3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b down in Southbound
Dec  3 18:55:05 compute-0 ovn_controller[89305]: 2025-12-03T18:55:05Z|00063|binding|INFO|Removing iface tap3d8505a1-5c ovn-installed in OVS
Dec  3 18:55:05 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:55:05.224 286999 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ea:1b:25 192.168.0.128'], port_security=['fa:16:3e:ea:1b:25 192.168.0.128'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.128/24', 'neutron:device_id': '1ca1fbdb-089c-4544-821e-0542089b8424', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-85c8d446-ad7f-4d1b-a311-89b0b07e8aad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd2770200bdb2436c90142fa2e5ddcd47', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8e48052e-a2fd-4fc1-8ebd-22e3b6e0bd66', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.225'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=12999ead-9a54-49b3-a532-a5f8bdddaf16, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f81e3e96760>], logical_port=3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f81e3e96760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 18:55:05 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:55:05.226 286999 INFO neutron.agent.ovn.metadata.agent [-] Port 3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b in datapath 85c8d446-ad7f-4d1b-a311-89b0b07e8aad unbound from our chassis#033[00m
Dec  3 18:55:05 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:55:05.228 286999 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 85c8d446-ad7f-4d1b-a311-89b0b07e8aad, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  3 18:55:05 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:55:05.229 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[c8a01090-0544-4039-8c0b-3962c213465f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:55:05 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:55:05.229 286999 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-85c8d446-ad7f-4d1b-a311-89b0b07e8aad namespace which is not needed anymore#033[00m
Dec  3 18:55:05 compute-0 nova_compute[348325]: 2025-12-03 18:55:05.246 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:55:05 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Dec  3 18:55:05 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 3min 1.797s CPU time.
Dec  3 18:55:05 compute-0 systemd-machined[138702]: Machine qemu-1-instance-00000001 terminated.
Dec  3 18:55:05 compute-0 nova_compute[348325]: 2025-12-03 18:55:05.288 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:55:05 compute-0 nova_compute[348325]: 2025-12-03 18:55:05.295 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:55:05 compute-0 nova_compute[348325]: 2025-12-03 18:55:05.304 348329 INFO nova.virt.libvirt.driver [-] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Instance destroyed successfully.#033[00m
Dec  3 18:55:05 compute-0 nova_compute[348325]: 2025-12-03 18:55:05.305 348329 DEBUG nova.objects.instance [None req-6b9fc0e7-a30c-408d-9b8c-e15839155674 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lazy-loading 'resources' on Instance uuid 1ca1fbdb-089c-4544-821e-0542089b8424 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 18:55:05 compute-0 nova_compute[348325]: 2025-12-03 18:55:05.333 348329 DEBUG nova.virt.libvirt.vif [None req-6b9fc0e7-a30c-408d-9b8c-e15839155674 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-03T18:36:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='e68cd467-b4e6-45e0-8e55-984fda402294',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-03T18:36:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d2770200bdb2436c90142fa2e5ddcd47',ramdisk_id='',reservation_id='r-aytiw8mr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader',image_base_image_ref='e68cd467-b4e6-45e0-8e55-984fda402294',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-03T18:36:32Z,user_data=None,user_id='56338958b09445f5af9aa9e4601a1a8a',uuid=1ca1fbdb-089c-4544-821e-0542089b8424,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b", "address": "fa:16:3e:ea:1b:25", "network": {"id": "85c8d446-ad7f-4d1b-a311-89b0b07e8aad", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.128", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2770200bdb2436c90142fa2e5ddcd47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d8505a1-5c", "ovs_interfaceid": "3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  3 18:55:05 compute-0 nova_compute[348325]: 2025-12-03 18:55:05.333 348329 DEBUG nova.network.os_vif_util [None req-6b9fc0e7-a30c-408d-9b8c-e15839155674 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Converting VIF {"id": "3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b", "address": "fa:16:3e:ea:1b:25", "network": {"id": "85c8d446-ad7f-4d1b-a311-89b0b07e8aad", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.128", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2770200bdb2436c90142fa2e5ddcd47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d8505a1-5c", "ovs_interfaceid": "3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 18:55:05 compute-0 nova_compute[348325]: 2025-12-03 18:55:05.334 348329 DEBUG nova.network.os_vif_util [None req-6b9fc0e7-a30c-408d-9b8c-e15839155674 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ea:1b:25,bridge_name='br-int',has_traffic_filtering=True,id=3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b,network=Network(85c8d446-ad7f-4d1b-a311-89b0b07e8aad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d8505a1-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 18:55:05 compute-0 nova_compute[348325]: 2025-12-03 18:55:05.335 348329 DEBUG os_vif [None req-6b9fc0e7-a30c-408d-9b8c-e15839155674 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ea:1b:25,bridge_name='br-int',has_traffic_filtering=True,id=3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b,network=Network(85c8d446-ad7f-4d1b-a311-89b0b07e8aad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d8505a1-5c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  3 18:55:05 compute-0 nova_compute[348325]: 2025-12-03 18:55:05.336 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:55:05 compute-0 nova_compute[348325]: 2025-12-03 18:55:05.337 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3d8505a1-5c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:55:05 compute-0 nova_compute[348325]: 2025-12-03 18:55:05.338 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:55:05 compute-0 nova_compute[348325]: 2025-12-03 18:55:05.340 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:55:05 compute-0 nova_compute[348325]: 2025-12-03 18:55:05.342 348329 INFO os_vif [None req-6b9fc0e7-a30c-408d-9b8c-e15839155674 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ea:1b:25,bridge_name='br-int',has_traffic_filtering=True,id=3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b,network=Network(85c8d446-ad7f-4d1b-a311-89b0b07e8aad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d8505a1-5c')#033[00m
Dec  3 18:55:05 compute-0 neutron-haproxy-ovnmeta-85c8d446-ad7f-4d1b-a311-89b0b07e8aad[411876]: [NOTICE]   (411880) : haproxy version is 2.8.14-c23fe91
Dec  3 18:55:05 compute-0 neutron-haproxy-ovnmeta-85c8d446-ad7f-4d1b-a311-89b0b07e8aad[411876]: [NOTICE]   (411880) : path to executable is /usr/sbin/haproxy
Dec  3 18:55:05 compute-0 neutron-haproxy-ovnmeta-85c8d446-ad7f-4d1b-a311-89b0b07e8aad[411876]: [WARNING]  (411880) : Exiting Master process...
Dec  3 18:55:05 compute-0 neutron-haproxy-ovnmeta-85c8d446-ad7f-4d1b-a311-89b0b07e8aad[411876]: [ALERT]    (411880) : Current worker (411882) exited with code 143 (Terminated)
Dec  3 18:55:05 compute-0 neutron-haproxy-ovnmeta-85c8d446-ad7f-4d1b-a311-89b0b07e8aad[411876]: [WARNING]  (411880) : All workers exited. Exiting... (0)
Dec  3 18:55:05 compute-0 systemd[1]: libpod-40df282ea2ef783ada208d3e16810b2eaf1c5942a628833f687be999a1612533.scope: Deactivated successfully.
Dec  3 18:55:05 compute-0 podman[437737]: 2025-12-03 18:55:05.407560432 +0000 UTC m=+0.065866239 container died 40df282ea2ef783ada208d3e16810b2eaf1c5942a628833f687be999a1612533 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85c8d446-ad7f-4d1b-a311-89b0b07e8aad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Dec  3 18:55:05 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-40df282ea2ef783ada208d3e16810b2eaf1c5942a628833f687be999a1612533-userdata-shm.mount: Deactivated successfully.
Dec  3 18:55:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0f361f35212d4696e56f07bf75702a4c2a7b158dc73201a706cc1cf8997c680-merged.mount: Deactivated successfully.
Dec  3 18:55:05 compute-0 podman[437737]: 2025-12-03 18:55:05.472151719 +0000 UTC m=+0.130457526 container cleanup 40df282ea2ef783ada208d3e16810b2eaf1c5942a628833f687be999a1612533 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85c8d446-ad7f-4d1b-a311-89b0b07e8aad, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:55:05 compute-0 systemd[1]: libpod-conmon-40df282ea2ef783ada208d3e16810b2eaf1c5942a628833f687be999a1612533.scope: Deactivated successfully.
Dec  3 18:55:05 compute-0 podman[437782]: 2025-12-03 18:55:05.578878255 +0000 UTC m=+0.071800354 container remove 40df282ea2ef783ada208d3e16810b2eaf1c5942a628833f687be999a1612533 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85c8d446-ad7f-4d1b-a311-89b0b07e8aad, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec  3 18:55:05 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:55:05.586 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[9cb39e8c-66b6-4a84-a69a-1c3b68c900aa]: (4, ('Wed Dec  3 06:55:05 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-85c8d446-ad7f-4d1b-a311-89b0b07e8aad (40df282ea2ef783ada208d3e16810b2eaf1c5942a628833f687be999a1612533)\n40df282ea2ef783ada208d3e16810b2eaf1c5942a628833f687be999a1612533\nWed Dec  3 06:55:05 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-85c8d446-ad7f-4d1b-a311-89b0b07e8aad (40df282ea2ef783ada208d3e16810b2eaf1c5942a628833f687be999a1612533)\n40df282ea2ef783ada208d3e16810b2eaf1c5942a628833f687be999a1612533\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:55:05 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:55:05.589 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[f39f9c1c-bb26-46fe-9948-7cdb56f0f869]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:55:05 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:55:05.593 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85c8d446-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:55:05 compute-0 nova_compute[348325]: 2025-12-03 18:55:05.596 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:55:05 compute-0 kernel: tap85c8d446-a0: left promiscuous mode
Dec  3 18:55:05 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1680: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:55:05 compute-0 nova_compute[348325]: 2025-12-03 18:55:05.611 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:55:05 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:55:05.614 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[408e8739-537b-43d1-b5bb-8112c0136bab]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:55:05 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:55:05.635 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[0342c8f2-e410-4f7a-b171-db8e97921dff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:55:05 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:55:05.636 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[a8b50e05-8b35-4589-9de7-6401a542efcd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:55:05 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:55:05.651 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[ff548cf6-0718-4281-84fb-9e5a75b317d0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 527490, 'reachable_time': 20735, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 
'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 437797, 'error': None, 'target': 'ovnmeta-85c8d446-ad7f-4d1b-a311-89b0b07e8aad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
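
The privsep reply above carries the serialized result of a netlink RTM_GETLINK dump executed inside the metadata namespace (note the 'target': 'ovnmeta-...' field in the message header): one message per interface, with IFLA_* attributes flattened into [name, value] pairs. A minimal sketch of how such a dump is produced with pyroute2, the library that neutron's ip_lib wraps; the namespace name is copied from the log and the snippet assumes the namespace still exists:

    # Minimal sketch, assuming pyroute2 is installed and the netns still exists.
    from pyroute2 import NetNS

    NS_NAME = 'ovnmeta-85c8d446-ad7f-4d1b-a311-89b0b07e8aad'  # from the log above

    with NetNS(NS_NAME) as ns:
        for link in ns.get_links():  # RTM_GETLINK dump, one message per device
            print(link.get_attr('IFLA_IFNAME'),
                  link.get_attr('IFLA_MTU'),
                  link.get_attr('IFLA_OPERSTATE'))

Attributes the parser does not recognize surface as the ['UNKNOWN', {'header': ...}] entries seen above.
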
Dec  3 18:55:05 compute-0 systemd[1]: run-netns-ovnmeta\x2d85c8d446\x2dad7f\x2d4d1b\x2da311\x2d89b0b07e8aad.mount: Deactivated successfully.
Dec  3 18:55:05 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:55:05.672 287110 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-85c8d446-ad7f-4d1b-a311-89b0b07e8aad deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec  3 18:55:05 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:55:05.673 287110 DEBUG oslo.privsep.daemon [-] privsep: reply[d95af705-8492-444b-8647-8b7293b33ac7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
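
The remove_netns call and its (4, None) reply show the privsep round trip: the unprivileged agent sends a serialized call over a socket, the privsep daemon runs the decorated function with elevated capabilities, and the reply tuple carries the return value back. A minimal sketch of declaring such an entrypoint with oslo.privsep; the context name, config section, capability set, and the pyroute2-based body are illustrative assumptions, not neutron's actual definitions:

    # Illustrative oslo.privsep entrypoint; not neutron's real privsep context.
    from oslo_privsep import capabilities, priv_context
    from pyroute2 import netns

    default = priv_context.PrivContext(
        __name__,
        cfg_section='privsep',
        pypath=__name__ + '.default',
        capabilities=[capabilities.CAP_SYS_ADMIN],  # netns manipulation
    )

    @default.entrypoint
    def remove_netns(name):
        # Runs inside the forked privsep daemon; a successful call returning
        # None travels back to the agent as the "(4, None)" reply logged above.
        netns.remove(name)
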
Dec  3 18:55:05 compute-0 nova_compute[348325]: 2025-12-03 18:55:05.841 348329 DEBUG nova.compute.manager [req-6ef0c477-39f4-4e5e-b734-b05a8610afa8 req-01bd9913-fc17-40d6-af73-3f5c7c5e2347 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Received event network-vif-unplugged-3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  3 18:55:05 compute-0 nova_compute[348325]: 2025-12-03 18:55:05.842 348329 DEBUG oslo_concurrency.lockutils [req-6ef0c477-39f4-4e5e-b734-b05a8610afa8 req-01bd9913-fc17-40d6-af73-3f5c7c5e2347 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "1ca1fbdb-089c-4544-821e-0542089b8424-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:55:05 compute-0 nova_compute[348325]: 2025-12-03 18:55:05.842 348329 DEBUG oslo_concurrency.lockutils [req-6ef0c477-39f4-4e5e-b734-b05a8610afa8 req-01bd9913-fc17-40d6-af73-3f5c7c5e2347 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "1ca1fbdb-089c-4544-821e-0542089b8424-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:55:05 compute-0 nova_compute[348325]: 2025-12-03 18:55:05.842 348329 DEBUG oslo_concurrency.lockutils [req-6ef0c477-39f4-4e5e-b734-b05a8610afa8 req-01bd9913-fc17-40d6-af73-3f5c7c5e2347 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "1ca1fbdb-089c-4544-821e-0542089b8424-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:55:05 compute-0 nova_compute[348325]: 2025-12-03 18:55:05.842 348329 DEBUG nova.compute.manager [req-6ef0c477-39f4-4e5e-b734-b05a8610afa8 req-01bd9913-fc17-40d6-af73-3f5c7c5e2347 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] No waiting events found dispatching network-vif-unplugged-3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  3 18:55:05 compute-0 nova_compute[348325]: 2025-12-03 18:55:05.843 348329 DEBUG nova.compute.manager [req-6ef0c477-39f4-4e5e-b734-b05a8610afa8 req-01bd9913-fc17-40d6-af73-3f5c7c5e2347 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Received event network-vif-unplugged-3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
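
The Acquiring/acquired/released triplets around the event dispatch come from oslo.concurrency, which logs wait and hold times for every named lock. A minimal sketch of the same pattern, reusing the per-instance lock name from the log; the critical-section body is elided:

    # Minimal sketch of the oslo.concurrency locking pattern logged above.
    from oslo_concurrency import lockutils

    INSTANCE_UUID = '1ca1fbdb-089c-4544-821e-0542089b8424'  # from the log

    def pop_instance_event(instance_uuid):
        # lockutils.lock() emits the same "Acquiring"/"acquired"/"released"
        # DEBUG lines, including the waited/held durations.
        with lockutils.lock(instance_uuid + '-events'):
            pass  # look up and pop a matching waiting event, if any

    pop_instance_event(INSTANCE_UUID)
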
Dec  3 18:55:06 compute-0 nova_compute[348325]: 2025-12-03 18:55:06.148 348329 INFO nova.virt.libvirt.driver [None req-6b9fc0e7-a30c-408d-9b8c-e15839155674 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Deleting instance files /var/lib/nova/instances/1ca1fbdb-089c-4544-821e-0542089b8424_del
Dec  3 18:55:06 compute-0 nova_compute[348325]: 2025-12-03 18:55:06.148 348329 INFO nova.virt.libvirt.driver [None req-6b9fc0e7-a30c-408d-9b8c-e15839155674 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Deletion of /var/lib/nova/instances/1ca1fbdb-089c-4544-821e-0542089b8424_del complete
Dec  3 18:55:06 compute-0 nova_compute[348325]: 2025-12-03 18:55:06.230 348329 INFO nova.compute.manager [None req-6b9fc0e7-a30c-408d-9b8c-e15839155674 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Took 1.17 seconds to destroy the instance on the hypervisor.
Dec  3 18:55:06 compute-0 nova_compute[348325]: 2025-12-03 18:55:06.230 348329 DEBUG oslo.service.loopingcall [None req-6b9fc0e7-a30c-408d-9b8c-e15839155674 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec  3 18:55:06 compute-0 nova_compute[348325]: 2025-12-03 18:55:06.231 348329 DEBUG nova.compute.manager [-] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec  3 18:55:06 compute-0 nova_compute[348325]: 2025-12-03 18:55:06.231 348329 DEBUG nova.network.neutron [-] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec  3 18:55:07 compute-0 nova_compute[348325]: 2025-12-03 18:55:07.439 348329 DEBUG nova.network.neutron [-] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  3 18:55:07 compute-0 nova_compute[348325]: 2025-12-03 18:55:07.458 348329 INFO nova.compute.manager [-] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Took 1.23 seconds to deallocate network for instance.
Dec  3 18:55:07 compute-0 nova_compute[348325]: 2025-12-03 18:55:07.496 348329 DEBUG oslo_concurrency.lockutils [None req-6b9fc0e7-a30c-408d-9b8c-e15839155674 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:55:07 compute-0 nova_compute[348325]: 2025-12-03 18:55:07.497 348329 DEBUG oslo_concurrency.lockutils [None req-6b9fc0e7-a30c-408d-9b8c-e15839155674 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:55:07 compute-0 nova_compute[348325]: 2025-12-03 18:55:07.588 348329 DEBUG oslo_concurrency.processutils [None req-6b9fc0e7-a30c-408d-9b8c-e15839155674 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 18:55:07 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1681: 321 pgs: 321 active+clean; 48 MiB data, 239 MiB used, 60 GiB / 60 GiB avail; 9.7 KiB/s rd, 1.1 KiB/s wr, 14 op/s
Dec  3 18:55:07 compute-0 nova_compute[348325]: 2025-12-03 18:55:07.963 348329 DEBUG nova.compute.manager [req-b95037f5-aa18-4985-9411-e7927466bc98 req-59f309cd-8b5d-4ff8-88db-bb493a4fa194 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Received event network-vif-plugged-3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  3 18:55:07 compute-0 nova_compute[348325]: 2025-12-03 18:55:07.964 348329 DEBUG oslo_concurrency.lockutils [req-b95037f5-aa18-4985-9411-e7927466bc98 req-59f309cd-8b5d-4ff8-88db-bb493a4fa194 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "1ca1fbdb-089c-4544-821e-0542089b8424-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:55:07 compute-0 nova_compute[348325]: 2025-12-03 18:55:07.965 348329 DEBUG oslo_concurrency.lockutils [req-b95037f5-aa18-4985-9411-e7927466bc98 req-59f309cd-8b5d-4ff8-88db-bb493a4fa194 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "1ca1fbdb-089c-4544-821e-0542089b8424-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:55:07 compute-0 nova_compute[348325]: 2025-12-03 18:55:07.965 348329 DEBUG oslo_concurrency.lockutils [req-b95037f5-aa18-4985-9411-e7927466bc98 req-59f309cd-8b5d-4ff8-88db-bb493a4fa194 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "1ca1fbdb-089c-4544-821e-0542089b8424-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:55:07 compute-0 nova_compute[348325]: 2025-12-03 18:55:07.966 348329 DEBUG nova.compute.manager [req-b95037f5-aa18-4985-9411-e7927466bc98 req-59f309cd-8b5d-4ff8-88db-bb493a4fa194 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] No waiting events found dispatching network-vif-plugged-3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  3 18:55:07 compute-0 nova_compute[348325]: 2025-12-03 18:55:07.966 348329 WARNING nova.compute.manager [req-b95037f5-aa18-4985-9411-e7927466bc98 req-59f309cd-8b5d-4ff8-88db-bb493a4fa194 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Received unexpected event network-vif-plugged-3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b for instance with vm_state deleted and task_state None.
Dec  3 18:55:07 compute-0 nova_compute[348325]: 2025-12-03 18:55:07.967 348329 DEBUG nova.compute.manager [req-b95037f5-aa18-4985-9411-e7927466bc98 req-59f309cd-8b5d-4ff8-88db-bb493a4fa194 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Received event network-vif-deleted-3d8505a1-5c8c-4f6e-a5b6-7087f5d1600b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  3 18:55:08 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:55:08 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3111089826' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:55:08 compute-0 nova_compute[348325]: 2025-12-03 18:55:08.073 348329 DEBUG oslo_concurrency.processutils [None req-6b9fc0e7-a30c-408d-9b8c-e15839155674 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
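
The Running cmd / returned pair is oslo.concurrency's subprocess wrapper: the resource tracker shells out to ceph df to refresh RBD pool capacity. A minimal sketch of the same call; processutils.execute raises ProcessExecutionError on a non-zero exit, so a plain return corresponds to the logged "returned: 0":

    # Minimal sketch of the subprocess call logged above; assumes a reachable
    # Ceph cluster and the client.openstack keyring referenced by the conf.
    from oslo_concurrency import processutils

    stdout, stderr = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf',
    )
    print(stdout[:200])  # JSON usage summary consumed by the resource tracker
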
Dec  3 18:55:08 compute-0 nova_compute[348325]: 2025-12-03 18:55:08.083 348329 DEBUG nova.compute.provider_tree [None req-6b9fc0e7-a30c-408d-9b8c-e15839155674 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  3 18:55:08 compute-0 nova_compute[348325]: 2025-12-03 18:55:08.103 348329 DEBUG nova.scheduler.client.report [None req-6b9fc0e7-a30c-408d-9b8c-e15839155674 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
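
The inventory dictionary above is what placement schedules against; usable capacity per resource class follows the standard placement formula (total - reserved) * allocation_ratio. Checking it against the logged values:

    # Usable capacity implied by the inventory in the log above.
    inventory = {
        'VCPU': {'total': 8, 'reserved': 0, 'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB': {'total': 59, 'reserved': 1, 'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
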
Dec  3 18:55:08 compute-0 nova_compute[348325]: 2025-12-03 18:55:08.124 348329 DEBUG oslo_concurrency.lockutils [None req-6b9fc0e7-a30c-408d-9b8c-e15839155674 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:55:08 compute-0 nova_compute[348325]: 2025-12-03 18:55:08.152 348329 INFO nova.scheduler.client.report [None req-6b9fc0e7-a30c-408d-9b8c-e15839155674 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Deleted allocations for instance 1ca1fbdb-089c-4544-821e-0542089b8424
Dec  3 18:55:08 compute-0 nova_compute[348325]: 2025-12-03 18:55:08.240 348329 DEBUG oslo_concurrency.lockutils [None req-6b9fc0e7-a30c-408d-9b8c-e15839155674 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Lock "1ca1fbdb-089c-4544-821e-0542089b8424" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.181s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:55:08 compute-0 nova_compute[348325]: 2025-12-03 18:55:08.244 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:55:09 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:55:09 compute-0 nova_compute[348325]: 2025-12-03 18:55:09.496 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:55:09 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1682: 321 pgs: 321 active+clean; 36 MiB data, 233 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 1.1 KiB/s wr, 15 op/s
Dec  3 18:55:10 compute-0 nova_compute[348325]: 2025-12-03 18:55:10.342 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:55:11 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1683: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec  3 18:55:11 compute-0 podman[437826]: 2025-12-03 18:55:11.926223177 +0000 UTC m=+0.085694513 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, managed_by=edpm_ansible, vcs-type=git, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1755695350, com.redhat.component=ubi9-minimal-container, config_id=edpm, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public)
Dec  3 18:55:11 compute-0 podman[437825]: 2025-12-03 18:55:11.927320354 +0000 UTC m=+0.083160352 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  3 18:55:11 compute-0 podman[437824]: 2025-12-03 18:55:11.95133646 +0000 UTC m=+0.114261651 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
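
The three health_status entries above come from podman's periodic healthcheck timers: each runs the container's configured test command and records the result plus the failing streak. The same check can be run by hand; a small sketch using the container name from the log:

    # Sketch: manually trigger the healthcheck podman runs on its timer.
    import subprocess

    result = subprocess.run(
        ['podman', 'healthcheck', 'run', 'openstack_network_exporter'],
        capture_output=True, text=True,
    )
    # Exit code 0 means the test command passed, i.e. health_status=healthy.
    print('healthy' if result.returncode == 0 else 'unhealthy')
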
Dec  3 18:55:13 compute-0 nova_compute[348325]: 2025-12-03 18:55:13.246 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.252 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them; therefore, the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.253 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.253 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c390e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.254 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7eff8d7fffe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.255 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c390e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.256 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff9026f920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c390e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.256 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c390e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.256 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c390e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.256 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffa10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c390e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.257 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.258 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7eff8d8a80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.258 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.257 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8daba2d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c390e60>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.258 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7eff8d8a8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.259 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c390e60>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.260 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff90799b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c390e60>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.260 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c390e60>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.260 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8f46ebd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c390e60>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.260 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c390e60>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.260 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c390e60>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.260 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c390e60>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.260 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c390e60>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.261 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c390e60>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.259 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.261 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c390e60>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.261 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c390e60>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.262 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c390e60>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.262 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c390e60>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.262 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c390e60>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.263 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c390e60>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.263 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7fff50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c390e60>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.264 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c390e60>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.264 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7fffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c390e60>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.264 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8ef7c7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c390e60>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.261 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7eff8d8a8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.265 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.265 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7eff8d8a81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.265 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.265 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7eff8d7ff9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.266 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.266 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7eff8d7fe840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.266 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.266 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7eff8d8a82c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.266 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.266 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7eff8d7ff9b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.266 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.266 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7eff8d8a8350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.266 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.267 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7eff8f682330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.267 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.267 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7eff8d7ff4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.267 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.267 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7eff8d930c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.267 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.267 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7eff8d7ff4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.267 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.268 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7eff8d7ff530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.268 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.268 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7eff8d7ff590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.268 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.268 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7eff8d7ff5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.268 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.269 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7eff8d8a8620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.269 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.269 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7eff8d7ff650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.269 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.269 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7eff8d7ff6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.269 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.269 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7eff8d7ffa40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.269 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.270 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7eff8d7ff710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.270 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.270 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7eff8d7fff20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.270 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.270 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7eff8d7ff770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.270 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.270 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7eff8d7fff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.270 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.270 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7eff8d7fdac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.271 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.271 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.271 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.272 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.272 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.272 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.272 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.273 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.273 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.273 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.274 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.274 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.274 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.275 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.275 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.275 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.276 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.276 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.276 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.277 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.277 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.277 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.277 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.278 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.278 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.278 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:55:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:55:13.279 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:55:13 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1684: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec  3 18:55:13 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Dec  3 18:55:13 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:55:13.680344) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 18:55:13 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Dec  3 18:55:13 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764788113680494, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 871, "num_deletes": 251, "total_data_size": 1187027, "memory_usage": 1208496, "flush_reason": "Manual Compaction"}
Dec  3 18:55:13 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Dec  3 18:55:13 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764788113694857, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 1175909, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33912, "largest_seqno": 34782, "table_properties": {"data_size": 1171476, "index_size": 2085, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9608, "raw_average_key_size": 19, "raw_value_size": 1162667, "raw_average_value_size": 2367, "num_data_blocks": 93, "num_entries": 491, "num_filter_entries": 491, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764788034, "oldest_key_time": 1764788034, "file_creation_time": 1764788113, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a1ac3b74-8599-4a51-8b4c-6fd35a134427", "db_session_id": "TYOLZSJOOVNJYKF8Y1CE", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Dec  3 18:55:13 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 14589 microseconds, and 8288 cpu microseconds.
Dec  3 18:55:13 compute-0 ceph-mon[192802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 18:55:13 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:55:13.694937) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 1175909 bytes OK
Dec  3 18:55:13 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:55:13.694961) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Dec  3 18:55:13 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:55:13.698094) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Dec  3 18:55:13 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:55:13.698116) EVENT_LOG_v1 {"time_micros": 1764788113698109, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 18:55:13 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:55:13.698137) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 18:55:13 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 1182767, prev total WAL file size 1182767, number of live WAL files 2.
Dec  3 18:55:13 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 18:55:13 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:55:13.699420) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Dec  3 18:55:13 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 18:55:13 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(1148KB)], [77(7612KB)]
Dec  3 18:55:13 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764788113699536, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 8971384, "oldest_snapshot_seqno": -1}
Dec  3 18:55:13 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 5279 keys, 7213726 bytes, temperature: kUnknown
Dec  3 18:55:13 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764788113756875, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 7213726, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7180294, "index_size": 19114, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13253, "raw_key_size": 134537, "raw_average_key_size": 25, "raw_value_size": 7086480, "raw_average_value_size": 1342, "num_data_blocks": 782, "num_entries": 5279, "num_filter_entries": 5279, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764784942, "oldest_key_time": 0, "file_creation_time": 1764788113, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a1ac3b74-8599-4a51-8b4c-6fd35a134427", "db_session_id": "TYOLZSJOOVNJYKF8Y1CE", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Dec  3 18:55:13 compute-0 ceph-mon[192802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 18:55:13 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:55:13.757129) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 7213726 bytes
Dec  3 18:55:13 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:55:13.758953) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 156.3 rd, 125.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 7.4 +0.0 blob) out(6.9 +0.0 blob), read-write-amplify(13.8) write-amplify(6.1) OK, records in: 5793, records dropped: 514 output_compression: NoCompression
Dec  3 18:55:13 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:55:13.758973) EVENT_LOG_v1 {"time_micros": 1764788113758963, "job": 44, "event": "compaction_finished", "compaction_time_micros": 57409, "compaction_time_cpu_micros": 36333, "output_level": 6, "num_output_files": 1, "total_output_size": 7213726, "num_input_records": 5793, "num_output_records": 5279, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 18:55:13 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 18:55:13 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764788113759426, "job": 44, "event": "table_file_deletion", "file_number": 79}
Dec  3 18:55:13 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 18:55:13 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764788113761915, "job": 44, "event": "table_file_deletion", "file_number": 77}
Dec  3 18:55:13 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:55:13.699202) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:55:13 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:55:13.762101) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:55:13 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:55:13.762108) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:55:13 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:55:13.762110) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:55:13 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:55:13.762112) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:55:13 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:55:13.762114) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:55:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:55:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:55:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_18:55:13
Dec  3 18:55:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 18:55:13 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 18:55:13 compute-0 ceph-mgr[193091]: [balancer INFO root] pools ['cephfs.cephfs.data', '.rgw.root', 'backups', 'default.rgw.log', '.mgr', 'vms', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.meta', 'images', 'default.rgw.control']
Dec  3 18:55:13 compute-0 ceph-mgr[193091]: [balancer INFO root] prepared 0/10 changes
Dec  3 18:55:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:55:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:55:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:55:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:55:14 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:55:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 18:55:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:55:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 18:55:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:55:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:55:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:55:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:55:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:55:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:55:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:55:14 compute-0 podman[437888]: 2025-12-03 18:55:14.838812937 +0000 UTC m=+0.117415038 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  3 18:55:14 compute-0 podman[437889]: 2025-12-03 18:55:14.839239608 +0000 UTC m=+0.110291144 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:55:14 compute-0 podman[437887]: 2025-12-03 18:55:14.861002389 +0000 UTC m=+0.142004738 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, architecture=x86_64, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., io.buildah.version=1.29.0, name=ubi9, managed_by=edpm_ansible, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, io.openshift.tags=base rhel9, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, maintainer=Red Hat, Inc., config_id=edpm, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, com.redhat.component=ubi9-container)
Dec  3 18:55:15 compute-0 nova_compute[348325]: 2025-12-03 18:55:15.346 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:55:15 compute-0 nova_compute[348325]: 2025-12-03 18:55:15.485 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:55:15 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1685: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec  3 18:55:16 compute-0 nova_compute[348325]: 2025-12-03 18:55:16.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:55:17 compute-0 nova_compute[348325]: 2025-12-03 18:55:17.488 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:55:17 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1686: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec  3 18:55:18 compute-0 nova_compute[348325]: 2025-12-03 18:55:18.249 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:55:18 compute-0 nova_compute[348325]: 2025-12-03 18:55:18.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:55:19 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:55:19 compute-0 nova_compute[348325]: 2025-12-03 18:55:19.488 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:55:19 compute-0 nova_compute[348325]: 2025-12-03 18:55:19.489 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  3 18:55:19 compute-0 nova_compute[348325]: 2025-12-03 18:55:19.509 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  3 18:55:19 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1687: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 682 B/s wr, 25 op/s
Dec  3 18:55:20 compute-0 nova_compute[348325]: 2025-12-03 18:55:20.301 348329 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764788105.300334, 1ca1fbdb-089c-4544-821e-0542089b8424 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  3 18:55:20 compute-0 nova_compute[348325]: 2025-12-03 18:55:20.302 348329 INFO nova.compute.manager [-] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] VM Stopped (Lifecycle Event)
Dec  3 18:55:20 compute-0 nova_compute[348325]: 2025-12-03 18:55:20.332 348329 DEBUG nova.compute.manager [None req-4c90f201-91bb-4cec-8a9d-278e9d7a617f - - - - - -] [instance: 1ca1fbdb-089c-4544-821e-0542089b8424] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  3 18:55:20 compute-0 nova_compute[348325]: 2025-12-03 18:55:20.350 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:55:20 compute-0 nova_compute[348325]: 2025-12-03 18:55:20.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:55:21 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1688: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 682 B/s wr, 24 op/s
Dec  3 18:55:23 compute-0 nova_compute[348325]: 2025-12-03 18:55:23.252 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:55:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:55:23.351 286999 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:55:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:55:23.352 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:55:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:55:23.352 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:55:23 compute-0 nova_compute[348325]: 2025-12-03 18:55:23.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:55:23 compute-0 nova_compute[348325]: 2025-12-03 18:55:23.486 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  3 18:55:23 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1689: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:55:24 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:55:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 18:55:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:55:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 18:55:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:55:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  3 18:55:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:55:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:55:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:55:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:55:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:55:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec  3 18:55:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:55:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 18:55:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:55:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:55:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:55:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 18:55:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:55:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 18:55:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:55:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:55:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:55:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 18:55:25 compute-0 nova_compute[348325]: 2025-12-03 18:55:25.354 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:55:25 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1690: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:55:25 compute-0 podman[437940]: 2025-12-03 18:55:25.948595085 +0000 UTC m=+0.097322807 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  3 18:55:27 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1691: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:55:28 compute-0 nova_compute[348325]: 2025-12-03 18:55:28.255 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:55:28 compute-0 nova_compute[348325]: 2025-12-03 18:55:28.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:55:28 compute-0 nova_compute[348325]: 2025-12-03 18:55:28.593 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:55:28 compute-0 nova_compute[348325]: 2025-12-03 18:55:28.593 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:55:28 compute-0 nova_compute[348325]: 2025-12-03 18:55:28.594 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:55:28 compute-0 nova_compute[348325]: 2025-12-03 18:55:28.594 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  3 18:55:28 compute-0 nova_compute[348325]: 2025-12-03 18:55:28.595 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 18:55:28 compute-0 podman[437984]: 2025-12-03 18:55:28.93916668 +0000 UTC m=+0.104820610 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, config_id=edpm, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  3 18:55:28 compute-0 podman[437983]: 2025-12-03 18:55:28.989179181 +0000 UTC m=+0.148470826 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller)
Dec  3 18:55:29 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:55:29 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/222323555' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:55:29 compute-0 nova_compute[348325]: 2025-12-03 18:55:29.036 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 18:55:29 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:55:29 compute-0 nova_compute[348325]: 2025-12-03 18:55:29.372 348329 WARNING nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  3 18:55:29 compute-0 nova_compute[348325]: 2025-12-03 18:55:29.373 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4163MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  3 18:55:29 compute-0 nova_compute[348325]: 2025-12-03 18:55:29.373 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:55:29 compute-0 nova_compute[348325]: 2025-12-03 18:55:29.374 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:55:29 compute-0 nova_compute[348325]: 2025-12-03 18:55:29.479 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  3 18:55:29 compute-0 nova_compute[348325]: 2025-12-03 18:55:29.480 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  3 18:55:29 compute-0 nova_compute[348325]: 2025-12-03 18:55:29.508 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 18:55:29 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1692: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:55:29 compute-0 podman[158200]: time="2025-12-03T18:55:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:55:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:55:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42578 "" "Go-http-client/1.1"
Dec  3 18:55:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:55:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8172 "" "Go-http-client/1.1"
Dec  3 18:55:29 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:55:29 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1270536740' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:55:29 compute-0 nova_compute[348325]: 2025-12-03 18:55:29.975 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 18:55:29 compute-0 nova_compute[348325]: 2025-12-03 18:55:29.984 348329 DEBUG nova.compute.provider_tree [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  3 18:55:30 compute-0 nova_compute[348325]: 2025-12-03 18:55:30.007 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  3 18:55:30 compute-0 nova_compute[348325]: 2025-12-03 18:55:30.056 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  3 18:55:30 compute-0 nova_compute[348325]: 2025-12-03 18:55:30.056 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.683s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:55:30 compute-0 nova_compute[348325]: 2025-12-03 18:55:30.359 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:55:31 compute-0 openstack_network_exporter[365222]: ERROR   18:55:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:55:31 compute-0 openstack_network_exporter[365222]: ERROR   18:55:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:55:31 compute-0 openstack_network_exporter[365222]: ERROR   18:55:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:55:31 compute-0 openstack_network_exporter[365222]: ERROR   18:55:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:55:31 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1693: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:55:33 compute-0 nova_compute[348325]: 2025-12-03 18:55:33.259 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:55:33 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1694: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:55:34 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:55:35 compute-0 nova_compute[348325]: 2025-12-03 18:55:35.364 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:55:35 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1695: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:55:37 compute-0 ovn_controller[89305]: 2025-12-03T18:55:37Z|00064|memory_trim|INFO|Detected inactivity (last active 30007 ms ago): trimming memory
Dec  3 18:55:37 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1696: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:55:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 18:55:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4268179006' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 18:55:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 18:55:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4268179006' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 18:55:38 compute-0 nova_compute[348325]: 2025-12-03 18:55:38.263 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:55:38 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:55:38 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:55:38 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 18:55:38 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:55:38 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 18:55:38 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:55:38 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev c139f46f-a6e2-4b63-9288-53b99de66cf0 does not exist
Dec  3 18:55:38 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev c75ca2e8-7c3e-4883-b182-85cb9170d123 does not exist
Dec  3 18:55:38 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev f0537053-d127-483e-8513-bff1c0c8d790 does not exist
Dec  3 18:55:38 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 18:55:38 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 18:55:38 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 18:55:38 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:55:38 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:55:38 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:55:39 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:55:39 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:55:39 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:55:39 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:55:39 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1697: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:55:40 compute-0 podman[438320]: 2025-12-03 18:55:40.248714572 +0000 UTC m=+0.090397637 container create bb6e6d380510cbca411e8f86569071e4ba4cb64d75dad96ae709608a9c2f6bc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_mirzakhani, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Dec  3 18:55:40 compute-0 podman[438320]: 2025-12-03 18:55:40.20886106 +0000 UTC m=+0.050544175 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:55:40 compute-0 systemd[1]: Started libpod-conmon-bb6e6d380510cbca411e8f86569071e4ba4cb64d75dad96ae709608a9c2f6bc6.scope.
Dec  3 18:55:40 compute-0 nova_compute[348325]: 2025-12-03 18:55:40.369 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:55:40 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:55:40 compute-0 podman[438320]: 2025-12-03 18:55:40.399367741 +0000 UTC m=+0.241050866 container init bb6e6d380510cbca411e8f86569071e4ba4cb64d75dad96ae709608a9c2f6bc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_mirzakhani, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:55:40 compute-0 podman[438320]: 2025-12-03 18:55:40.418074747 +0000 UTC m=+0.259757812 container start bb6e6d380510cbca411e8f86569071e4ba4cb64d75dad96ae709608a9c2f6bc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:55:40 compute-0 podman[438320]: 2025-12-03 18:55:40.423888069 +0000 UTC m=+0.265571134 container attach bb6e6d380510cbca411e8f86569071e4ba4cb64d75dad96ae709608a9c2f6bc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_mirzakhani, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:55:40 compute-0 nostalgic_mirzakhani[438336]: 167 167
Dec  3 18:55:40 compute-0 systemd[1]: libpod-bb6e6d380510cbca411e8f86569071e4ba4cb64d75dad96ae709608a9c2f6bc6.scope: Deactivated successfully.
Dec  3 18:55:40 compute-0 podman[438320]: 2025-12-03 18:55:40.429685752 +0000 UTC m=+0.271368797 container died bb6e6d380510cbca411e8f86569071e4ba4cb64d75dad96ae709608a9c2f6bc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  3 18:55:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-7afc40e96ab1b1ac99bbce50aa47f9cc3e2d08d26df039cbaa88708e59327a57-merged.mount: Deactivated successfully.
Dec  3 18:55:40 compute-0 podman[438320]: 2025-12-03 18:55:40.487517013 +0000 UTC m=+0.329200048 container remove bb6e6d380510cbca411e8f86569071e4ba4cb64d75dad96ae709608a9c2f6bc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_mirzakhani, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec  3 18:55:40 compute-0 systemd[1]: libpod-conmon-bb6e6d380510cbca411e8f86569071e4ba4cb64d75dad96ae709608a9c2f6bc6.scope: Deactivated successfully.
Dec  3 18:55:40 compute-0 podman[438359]: 2025-12-03 18:55:40.689665348 +0000 UTC m=+0.061310328 container create 42809bb4e89c7905e7597913771c112a882adc38e458a46c61ea737e83466a81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Dec  3 18:55:40 compute-0 systemd[1]: Started libpod-conmon-42809bb4e89c7905e7597913771c112a882adc38e458a46c61ea737e83466a81.scope.
Dec  3 18:55:40 compute-0 podman[438359]: 2025-12-03 18:55:40.660384723 +0000 UTC m=+0.032029723 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:55:40 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:55:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/885ea516aebc7b9478997c53f69cf916a0cf72e97ef1b68847c605c80eb78d17/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:55:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/885ea516aebc7b9478997c53f69cf916a0cf72e97ef1b68847c605c80eb78d17/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:55:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/885ea516aebc7b9478997c53f69cf916a0cf72e97ef1b68847c605c80eb78d17/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:55:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/885ea516aebc7b9478997c53f69cf916a0cf72e97ef1b68847c605c80eb78d17/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:55:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/885ea516aebc7b9478997c53f69cf916a0cf72e97ef1b68847c605c80eb78d17/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 18:55:40 compute-0 podman[438359]: 2025-12-03 18:55:40.832340522 +0000 UTC m=+0.203985562 container init 42809bb4e89c7905e7597913771c112a882adc38e458a46c61ea737e83466a81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_keller, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:55:40 compute-0 podman[438359]: 2025-12-03 18:55:40.858562322 +0000 UTC m=+0.230207302 container start 42809bb4e89c7905e7597913771c112a882adc38e458a46c61ea737e83466a81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_keller, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:55:40 compute-0 podman[438359]: 2025-12-03 18:55:40.863707828 +0000 UTC m=+0.235352828 container attach 42809bb4e89c7905e7597913771c112a882adc38e458a46c61ea737e83466a81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_keller, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:55:41 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1698: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:55:42 compute-0 brave_keller[438374]: --> passed data devices: 0 physical, 3 LVM
Dec  3 18:55:42 compute-0 brave_keller[438374]: --> relative data size: 1.0
Dec  3 18:55:42 compute-0 brave_keller[438374]: --> All data devices are unavailable
Dec  3 18:55:42 compute-0 systemd[1]: libpod-42809bb4e89c7905e7597913771c112a882adc38e458a46c61ea737e83466a81.scope: Deactivated successfully.
Dec  3 18:55:42 compute-0 podman[438359]: 2025-12-03 18:55:42.127635346 +0000 UTC m=+1.499280426 container died 42809bb4e89c7905e7597913771c112a882adc38e458a46c61ea737e83466a81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_keller, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:55:42 compute-0 systemd[1]: libpod-42809bb4e89c7905e7597913771c112a882adc38e458a46c61ea737e83466a81.scope: Consumed 1.192s CPU time.
Dec  3 18:55:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-885ea516aebc7b9478997c53f69cf916a0cf72e97ef1b68847c605c80eb78d17-merged.mount: Deactivated successfully.
Dec  3 18:55:42 compute-0 podman[438359]: 2025-12-03 18:55:42.229178176 +0000 UTC m=+1.600823166 container remove 42809bb4e89c7905e7597913771c112a882adc38e458a46c61ea737e83466a81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec  3 18:55:42 compute-0 systemd[1]: libpod-conmon-42809bb4e89c7905e7597913771c112a882adc38e458a46c61ea737e83466a81.scope: Deactivated successfully.
Dec  3 18:55:42 compute-0 podman[438406]: 2025-12-03 18:55:42.297304259 +0000 UTC m=+0.104150444 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  3 18:55:42 compute-0 podman[438404]: 2025-12-03 18:55:42.302351082 +0000 UTC m=+0.119291973 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  3 18:55:42 compute-0 podman[438412]: 2025-12-03 18:55:42.315904213 +0000 UTC m=+0.118143736 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, release=1755695350, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, vcs-type=git, config_id=edpm, managed_by=edpm_ansible, name=ubi9-minimal)
Dec  3 18:55:43 compute-0 podman[438612]: 2025-12-03 18:55:43.028818369 +0000 UTC m=+0.063136132 container create 07af80646f8b9da2c9c4a9d968e6a6a827c6bf0f7711fb51f406b44a932a73ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_taussig, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:55:43 compute-0 systemd[1]: Started libpod-conmon-07af80646f8b9da2c9c4a9d968e6a6a827c6bf0f7711fb51f406b44a932a73ae.scope.
Dec  3 18:55:43 compute-0 podman[438612]: 2025-12-03 18:55:43.003918731 +0000 UTC m=+0.038236504 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:55:43 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:55:43 compute-0 podman[438612]: 2025-12-03 18:55:43.144126894 +0000 UTC m=+0.178444707 container init 07af80646f8b9da2c9c4a9d968e6a6a827c6bf0f7711fb51f406b44a932a73ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_taussig, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec  3 18:55:43 compute-0 podman[438612]: 2025-12-03 18:55:43.160095385 +0000 UTC m=+0.194413108 container start 07af80646f8b9da2c9c4a9d968e6a6a827c6bf0f7711fb51f406b44a932a73ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec  3 18:55:43 compute-0 podman[438612]: 2025-12-03 18:55:43.164744338 +0000 UTC m=+0.199062161 container attach 07af80646f8b9da2c9c4a9d968e6a6a827c6bf0f7711fb51f406b44a932a73ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 18:55:43 compute-0 lucid_taussig[438628]: 167 167
Dec  3 18:55:43 compute-0 systemd[1]: libpod-07af80646f8b9da2c9c4a9d968e6a6a827c6bf0f7711fb51f406b44a932a73ae.scope: Deactivated successfully.
Dec  3 18:55:43 compute-0 podman[438612]: 2025-12-03 18:55:43.171476942 +0000 UTC m=+0.205794675 container died 07af80646f8b9da2c9c4a9d968e6a6a827c6bf0f7711fb51f406b44a932a73ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:55:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-70cea479643a2ea3bb015b306e1b61abab67d8b91f10ad3de9301785626a6c44-merged.mount: Deactivated successfully.
Dec  3 18:55:43 compute-0 podman[438612]: 2025-12-03 18:55:43.251865155 +0000 UTC m=+0.286182888 container remove 07af80646f8b9da2c9c4a9d968e6a6a827c6bf0f7711fb51f406b44a932a73ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Dec  3 18:55:43 compute-0 systemd[1]: libpod-conmon-07af80646f8b9da2c9c4a9d968e6a6a827c6bf0f7711fb51f406b44a932a73ae.scope: Deactivated successfully.
Dec  3 18:55:43 compute-0 nova_compute[348325]: 2025-12-03 18:55:43.265 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:55:43 compute-0 podman[438650]: 2025-12-03 18:55:43.468437573 +0000 UTC m=+0.073414904 container create 2c10e3cece5877cc1b84f8604d2314b32fa73c25a0d7a4ca05386e82d710e25e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_wozniak, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:55:43 compute-0 podman[438650]: 2025-12-03 18:55:43.442285484 +0000 UTC m=+0.047262855 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:55:43 compute-0 systemd[1]: Started libpod-conmon-2c10e3cece5877cc1b84f8604d2314b32fa73c25a0d7a4ca05386e82d710e25e.scope.
Dec  3 18:55:43 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:55:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73e7980e09b730725fa001878afc966366f552de95071c022a9625e6e3b6319c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:55:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73e7980e09b730725fa001878afc966366f552de95071c022a9625e6e3b6319c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:55:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73e7980e09b730725fa001878afc966366f552de95071c022a9625e6e3b6319c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:55:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73e7980e09b730725fa001878afc966366f552de95071c022a9625e6e3b6319c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:55:43 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1699: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:55:43 compute-0 podman[438650]: 2025-12-03 18:55:43.64117405 +0000 UTC m=+0.246151421 container init 2c10e3cece5877cc1b84f8604d2314b32fa73c25a0d7a4ca05386e82d710e25e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_wozniak, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:55:43 compute-0 podman[438650]: 2025-12-03 18:55:43.676040161 +0000 UTC m=+0.281017502 container start 2c10e3cece5877cc1b84f8604d2314b32fa73c25a0d7a4ca05386e82d710e25e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_wozniak, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:55:43 compute-0 podman[438650]: 2025-12-03 18:55:43.681077314 +0000 UTC m=+0.286054675 container attach 2c10e3cece5877cc1b84f8604d2314b32fa73c25a0d7a4ca05386e82d710e25e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_wozniak, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  3 18:55:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:55:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:55:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:55:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:55:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:55:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:55:44 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]: {
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:    "0": [
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:        {
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:            "devices": [
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:                "/dev/loop3"
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:            ],
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:            "lv_name": "ceph_lv0",
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:            "lv_size": "21470642176",
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=973fbbc8-5aff-4a53-bee8-42e5a6788dd6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:            "lv_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:            "name": "ceph_lv0",
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:            "tags": {
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:                "ceph.block_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:                "ceph.cluster_name": "ceph",
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:                "ceph.crush_device_class": "",
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:                "ceph.encrypted": "0",
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:                "ceph.osd_fsid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:                "ceph.osd_id": "0",
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:                "ceph.type": "block",
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:                "ceph.vdo": "0"
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:            },
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:            "type": "block",
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:            "vg_name": "ceph_vg0"
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:        }
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:    ],
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:    "1": [
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:        {
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:            "devices": [
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:                "/dev/loop4"
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:            ],
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:            "lv_name": "ceph_lv1",
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:            "lv_size": "21470642176",
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1e2b0083-5293-47cb-a3d1-bc27cedc4ede,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:            "lv_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:            "name": "ceph_lv1",
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:            "tags": {
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:                "ceph.block_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:                "ceph.cluster_name": "ceph",
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:                "ceph.crush_device_class": "",
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:                "ceph.encrypted": "0",
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:                "ceph.osd_fsid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:                "ceph.osd_id": "1",
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:                "ceph.type": "block",
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:                "ceph.vdo": "0"
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:            },
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:            "type": "block",
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:            "vg_name": "ceph_vg1"
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:        }
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:    ],
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:    "2": [
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:        {
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:            "devices": [
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:                "/dev/loop5"
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:            ],
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:            "lv_name": "ceph_lv2",
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:            "lv_size": "21470642176",
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2abec9de-afba-437e-9a17-384a1dd8cd50,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:            "lv_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:            "name": "ceph_lv2",
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:            "tags": {
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:                "ceph.block_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:                "ceph.cluster_name": "ceph",
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:                "ceph.crush_device_class": "",
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:                "ceph.encrypted": "0",
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:                "ceph.osd_fsid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:                "ceph.osd_id": "2",
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:                "ceph.type": "block",
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:                "ceph.vdo": "0"
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:            },
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:            "type": "block",
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:            "vg_name": "ceph_vg2"
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:        }
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]:    ]
Dec  3 18:55:44 compute-0 eloquent_wozniak[438666]: }
Dec  3 18:55:44 compute-0 systemd[1]: libpod-2c10e3cece5877cc1b84f8604d2314b32fa73c25a0d7a4ca05386e82d710e25e.scope: Deactivated successfully.
Dec  3 18:55:44 compute-0 podman[438650]: 2025-12-03 18:55:44.508700871 +0000 UTC m=+1.113678192 container died 2c10e3cece5877cc1b84f8604d2314b32fa73c25a0d7a4ca05386e82d710e25e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:55:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-73e7980e09b730725fa001878afc966366f552de95071c022a9625e6e3b6319c-merged.mount: Deactivated successfully.
Dec  3 18:55:44 compute-0 podman[438650]: 2025-12-03 18:55:44.58035806 +0000 UTC m=+1.185335401 container remove 2c10e3cece5877cc1b84f8604d2314b32fa73c25a0d7a4ca05386e82d710e25e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_wozniak, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:55:44 compute-0 systemd[1]: libpod-conmon-2c10e3cece5877cc1b84f8604d2314b32fa73c25a0d7a4ca05386e82d710e25e.scope: Deactivated successfully.
Dec  3 18:55:45 compute-0 podman[438762]: 2025-12-03 18:55:45.00099416 +0000 UTC m=+0.067780426 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec  3 18:55:45 compute-0 podman[438760]: 2025-12-03 18:55:45.001924003 +0000 UTC m=+0.073725922 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., container_name=kepler, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, distribution-scope=public, release-0.7.12=)
Dec  3 18:55:45 compute-0 podman[438761]: 2025-12-03 18:55:45.037738667 +0000 UTC m=+0.108051938 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  3 18:55:45 compute-0 nova_compute[348325]: 2025-12-03 18:55:45.373 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:55:45 compute-0 podman[438878]: 2025-12-03 18:55:45.41961105 +0000 UTC m=+0.040013637 container create 40c934c9f671badf698fc9ceb5259f53ee0bc9638d22dff433f8df5b9b4fa118 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_euler, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Dec  3 18:55:45 compute-0 systemd[1]: Started libpod-conmon-40c934c9f671badf698fc9ceb5259f53ee0bc9638d22dff433f8df5b9b4fa118.scope.
Dec  3 18:55:45 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:55:45 compute-0 podman[438878]: 2025-12-03 18:55:45.403149839 +0000 UTC m=+0.023552456 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:55:45 compute-0 podman[438878]: 2025-12-03 18:55:45.510573941 +0000 UTC m=+0.130976568 container init 40c934c9f671badf698fc9ceb5259f53ee0bc9638d22dff433f8df5b9b4fa118 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_euler, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  3 18:55:45 compute-0 podman[438878]: 2025-12-03 18:55:45.521812485 +0000 UTC m=+0.142215092 container start 40c934c9f671badf698fc9ceb5259f53ee0bc9638d22dff433f8df5b9b4fa118 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_euler, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec  3 18:55:45 compute-0 podman[438878]: 2025-12-03 18:55:45.526968462 +0000 UTC m=+0.147371149 container attach 40c934c9f671badf698fc9ceb5259f53ee0bc9638d22dff433f8df5b9b4fa118 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:55:45 compute-0 trusting_euler[438894]: 167 167
Dec  3 18:55:45 compute-0 systemd[1]: libpod-40c934c9f671badf698fc9ceb5259f53ee0bc9638d22dff433f8df5b9b4fa118.scope: Deactivated successfully.
Dec  3 18:55:45 compute-0 podman[438878]: 2025-12-03 18:55:45.534108946 +0000 UTC m=+0.154511583 container died 40c934c9f671badf698fc9ceb5259f53ee0bc9638d22dff433f8df5b9b4fa118 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:55:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f135df659ea93c69bb73b6dcf97062002a4f718937665680060497d86153789-merged.mount: Deactivated successfully.
Dec  3 18:55:45 compute-0 podman[438878]: 2025-12-03 18:55:45.584847794 +0000 UTC m=+0.205250391 container remove 40c934c9f671badf698fc9ceb5259f53ee0bc9638d22dff433f8df5b9b4fa118 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:55:45 compute-0 systemd[1]: libpod-conmon-40c934c9f671badf698fc9ceb5259f53ee0bc9638d22dff433f8df5b9b4fa118.scope: Deactivated successfully.
Dec  3 18:55:45 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1700: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:55:45 compute-0 podman[438917]: 2025-12-03 18:55:45.812824741 +0000 UTC m=+0.058165471 container create 8a40258418fc3b6e36ba81fa8ee1c637556a6629b44feeba4762c314d96d6b69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec  3 18:55:45 compute-0 systemd[1]: Started libpod-conmon-8a40258418fc3b6e36ba81fa8ee1c637556a6629b44feeba4762c314d96d6b69.scope.
Dec  3 18:55:45 compute-0 podman[438917]: 2025-12-03 18:55:45.786717584 +0000 UTC m=+0.032058284 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:55:45 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:55:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96632baddc81c02103d246873240115a7df70e3b0f0284ed7f5c5e98396567f6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:55:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96632baddc81c02103d246873240115a7df70e3b0f0284ed7f5c5e98396567f6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:55:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96632baddc81c02103d246873240115a7df70e3b0f0284ed7f5c5e98396567f6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:55:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96632baddc81c02103d246873240115a7df70e3b0f0284ed7f5c5e98396567f6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:55:45 compute-0 podman[438917]: 2025-12-03 18:55:45.935767813 +0000 UTC m=+0.181108563 container init 8a40258418fc3b6e36ba81fa8ee1c637556a6629b44feeba4762c314d96d6b69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_lamarr, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec  3 18:55:45 compute-0 podman[438917]: 2025-12-03 18:55:45.956041178 +0000 UTC m=+0.201381848 container start 8a40258418fc3b6e36ba81fa8ee1c637556a6629b44feeba4762c314d96d6b69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:55:45 compute-0 podman[438917]: 2025-12-03 18:55:45.960865975 +0000 UTC m=+0.206206645 container attach 8a40258418fc3b6e36ba81fa8ee1c637556a6629b44feeba4762c314d96d6b69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_lamarr, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:55:47 compute-0 zealous_lamarr[438934]: {
Dec  3 18:55:47 compute-0 zealous_lamarr[438934]:    "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
Dec  3 18:55:47 compute-0 zealous_lamarr[438934]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:55:47 compute-0 zealous_lamarr[438934]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 18:55:47 compute-0 zealous_lamarr[438934]:        "osd_id": 1,
Dec  3 18:55:47 compute-0 zealous_lamarr[438934]:        "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:55:47 compute-0 zealous_lamarr[438934]:        "type": "bluestore"
Dec  3 18:55:47 compute-0 zealous_lamarr[438934]:    },
Dec  3 18:55:47 compute-0 zealous_lamarr[438934]:    "2abec9de-afba-437e-9a17-384a1dd8cd50": {
Dec  3 18:55:47 compute-0 zealous_lamarr[438934]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:55:47 compute-0 zealous_lamarr[438934]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 18:55:47 compute-0 zealous_lamarr[438934]:        "osd_id": 2,
Dec  3 18:55:47 compute-0 zealous_lamarr[438934]:        "osd_uuid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:55:47 compute-0 zealous_lamarr[438934]:        "type": "bluestore"
Dec  3 18:55:47 compute-0 zealous_lamarr[438934]:    },
Dec  3 18:55:47 compute-0 zealous_lamarr[438934]:    "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": {
Dec  3 18:55:47 compute-0 zealous_lamarr[438934]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:55:47 compute-0 zealous_lamarr[438934]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 18:55:47 compute-0 zealous_lamarr[438934]:        "osd_id": 0,
Dec  3 18:55:47 compute-0 zealous_lamarr[438934]:        "osd_uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:55:47 compute-0 zealous_lamarr[438934]:        "type": "bluestore"
Dec  3 18:55:47 compute-0 zealous_lamarr[438934]:    }
Dec  3 18:55:47 compute-0 zealous_lamarr[438934]: }
Dec  3 18:55:47 compute-0 systemd[1]: libpod-8a40258418fc3b6e36ba81fa8ee1c637556a6629b44feeba4762c314d96d6b69.scope: Deactivated successfully.
Dec  3 18:55:47 compute-0 podman[438917]: 2025-12-03 18:55:47.063754433 +0000 UTC m=+1.309095113 container died 8a40258418fc3b6e36ba81fa8ee1c637556a6629b44feeba4762c314d96d6b69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_lamarr, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  3 18:55:47 compute-0 systemd[1]: libpod-8a40258418fc3b6e36ba81fa8ee1c637556a6629b44feeba4762c314d96d6b69.scope: Consumed 1.103s CPU time.
Dec  3 18:55:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-96632baddc81c02103d246873240115a7df70e3b0f0284ed7f5c5e98396567f6-merged.mount: Deactivated successfully.
Dec  3 18:55:47 compute-0 podman[438917]: 2025-12-03 18:55:47.147872177 +0000 UTC m=+1.393212857 container remove 8a40258418fc3b6e36ba81fa8ee1c637556a6629b44feeba4762c314d96d6b69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_lamarr, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  3 18:55:47 compute-0 systemd[1]: libpod-conmon-8a40258418fc3b6e36ba81fa8ee1c637556a6629b44feeba4762c314d96d6b69.scope: Deactivated successfully.
Dec  3 18:55:47 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:55:47 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:55:47 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:55:47 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:55:47 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 165043a0-0d7b-4d94-8726-ea486fa810ae does not exist
Dec  3 18:55:47 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 83877fc5-659a-4dcb-9baa-f03e8bce68d1 does not exist
Dec  3 18:55:47 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1701: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:55:48 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:55:48 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:55:48 compute-0 nova_compute[348325]: 2025-12-03 18:55:48.270 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:55:49 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:55:49 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1702: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:55:50 compute-0 nova_compute[348325]: 2025-12-03 18:55:50.377 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:55:51 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1703: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:55:53 compute-0 nova_compute[348325]: 2025-12-03 18:55:53.270 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:55:53 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1704: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:55:54 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:55:55 compute-0 nova_compute[348325]: 2025-12-03 18:55:55.382 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:55:55 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1705: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:55:56 compute-0 podman[439033]: 2025-12-03 18:55:56.94642219 +0000 UTC m=+0.105388444 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  3 18:55:57 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1706: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:55:58 compute-0 nova_compute[348325]: 2025-12-03 18:55:58.274 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:55:59 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:55:59 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1707: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:55:59 compute-0 podman[158200]: time="2025-12-03T18:55:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:55:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:55:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42578 "" "Go-http-client/1.1"
Dec  3 18:55:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:55:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8173 "" "Go-http-client/1.1"
Dec  3 18:55:59 compute-0 podman[439056]: 2025-12-03 18:55:59.962057488 +0000 UTC m=+0.107336252 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  3 18:56:00 compute-0 podman[439055]: 2025-12-03 18:56:00.016936587 +0000 UTC m=+0.167216933 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  3 18:56:00 compute-0 nova_compute[348325]: 2025-12-03 18:56:00.385 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:56:01 compute-0 openstack_network_exporter[365222]: ERROR   18:56:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:56:01 compute-0 openstack_network_exporter[365222]: ERROR   18:56:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:56:01 compute-0 openstack_network_exporter[365222]: ERROR   18:56:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:56:01 compute-0 openstack_network_exporter[365222]: ERROR   18:56:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:56:01 compute-0 openstack_network_exporter[365222]: ERROR   18:56:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:56:01 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1708: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:56:03 compute-0 nova_compute[348325]: 2025-12-03 18:56:03.278 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:56:03 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1709: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:56:04 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:56:05 compute-0 nova_compute[348325]: 2025-12-03 18:56:05.389 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:56:05 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1710: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:56:07 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1711: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:56:08 compute-0 nova_compute[348325]: 2025-12-03 18:56:08.280 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:56:09 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:56:09 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1712: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:56:10 compute-0 nova_compute[348325]: 2025-12-03 18:56:10.393 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:56:11 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1713: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 17 op/s
Dec  3 18:56:12 compute-0 podman[439102]: 2025-12-03 18:56:12.950260528 +0000 UTC m=+0.098452045 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, vcs-type=git, io.openshift.expose-services=, name=ubi9-minimal, vendor=Red Hat, Inc., version=9.6, architecture=x86_64, io.buildah.version=1.33.7, release=1755695350, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container)
Dec  3 18:56:12 compute-0 podman[439101]: 2025-12-03 18:56:12.951735764 +0000 UTC m=+0.108482589 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  3 18:56:12 compute-0 podman[439100]: 2025-12-03 18:56:12.954493951 +0000 UTC m=+0.114505207 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd)
Dec  3 18:56:13 compute-0 nova_compute[348325]: 2025-12-03 18:56:13.048 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:56:13 compute-0 nova_compute[348325]: 2025-12-03 18:56:13.283 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:56:13 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1714: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 36 op/s
Dec  3 18:56:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:56:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:56:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_18:56:13
Dec  3 18:56:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 18:56:13 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 18:56:13 compute-0 ceph-mgr[193091]: [balancer INFO root] pools ['volumes', '.rgw.root', 'default.rgw.control', '.mgr', 'images', 'default.rgw.meta', 'cephfs.cephfs.meta', 'backups', 'cephfs.cephfs.data', 'vms', 'default.rgw.log']
Dec  3 18:56:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:56:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:56:13 compute-0 ceph-mgr[193091]: [balancer INFO root] prepared 0/10 changes
Dec  3 18:56:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:56:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:56:14 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:56:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 18:56:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:56:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 18:56:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:56:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:56:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:56:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:56:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:56:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:56:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:56:15 compute-0 nova_compute[348325]: 2025-12-03 18:56:15.398 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:56:15 compute-0 nova_compute[348325]: 2025-12-03 18:56:15.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:56:15 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1715: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 55 op/s
Dec  3 18:56:15 compute-0 podman[439162]: 2025-12-03 18:56:15.899656518 +0000 UTC m=+0.067263103 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, managed_by=edpm_ansible, version=9.4, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=ubi9, architecture=x86_64, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  3 18:56:15 compute-0 podman[439169]: 2025-12-03 18:56:15.917985986 +0000 UTC m=+0.073296121 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 18:56:15 compute-0 podman[439163]: 2025-12-03 18:56:15.922611648 +0000 UTC m=+0.080762453 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Dec  3 18:56:17 compute-0 nova_compute[348325]: 2025-12-03 18:56:17.488 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:56:17 compute-0 nova_compute[348325]: 2025-12-03 18:56:17.489 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:56:17 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1716: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  3 18:56:18 compute-0 nova_compute[348325]: 2025-12-03 18:56:18.284 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:56:19 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:56:19 compute-0 nova_compute[348325]: 2025-12-03 18:56:19.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:56:19 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1717: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  3 18:56:20 compute-0 nova_compute[348325]: 2025-12-03 18:56:20.403 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:56:20 compute-0 nova_compute[348325]: 2025-12-03 18:56:20.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:56:20 compute-0 nova_compute[348325]: 2025-12-03 18:56:20.487 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  3 18:56:20 compute-0 nova_compute[348325]: 2025-12-03 18:56:20.487 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  3 18:56:20 compute-0 nova_compute[348325]: 2025-12-03 18:56:20.506 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  3 18:56:21 compute-0 nova_compute[348325]: 2025-12-03 18:56:21.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:56:21 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1718: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  3 18:56:23 compute-0 nova_compute[348325]: 2025-12-03 18:56:23.286 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:56:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:56:23.352 286999 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:56:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:56:23.353 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:56:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:56:23.353 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:56:23 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1719: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 0 B/s wr, 41 op/s
Dec  3 18:56:24 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:56:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 18:56:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:56:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 18:56:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:56:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  3 18:56:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:56:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:56:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:56:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:56:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:56:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec  3 18:56:24 compute-0 nova_compute[348325]: 2025-12-03 18:56:24.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:56:24 compute-0 nova_compute[348325]: 2025-12-03 18:56:24.487 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  3 18:56:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:56:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 18:56:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:56:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:56:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:56:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 18:56:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:56:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 18:56:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:56:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:56:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:56:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
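[annotation] Each pg_autoscaler pair above can be reproduced arithmetically: the raw pg target is the pool's share of raw space × its bias × the cluster PG budget (here apparently 100 target PGs per OSD × 3 OSDs = 300), then quantized to a power of two. A sketch of that arithmetic under those assumed budget values; the real mgr module adds per-pool minimums and only resizes past a change threshold:

    # Sketch of the pg_autoscaler arithmetic visible above, assuming a PG
    # budget of mon_target_pg_per_osd=100 x 3 OSDs = 300 (not confirmed in
    # the log itself, but it reproduces the logged targets exactly).
    def pg_target(space_ratio, bias, target_pg_per_osd=100, num_osds=3):
        return space_ratio * bias * target_pg_per_osd * num_osds

    def quantize_pow2(x, minimum=1):
        n = minimum
        while n < x:          # round up to the nearest power of two
            n *= 2
        return n

    raw = pg_target(7.185749983720779e-06, 1.0)   # pool '.mgr' from the log
    print(raw)                   # 0.0021557249951162337, as logged
    print(quantize_pow2(raw))    # 1 -> "quantized to 1 (current 1)"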
Dec  3 18:56:25 compute-0 nova_compute[348325]: 2025-12-03 18:56:25.408 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:56:25 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1720: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 22 op/s
Dec  3 18:56:26 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:56:26.225 286999 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5a:63:53', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '8e:79:bd:f4:48:1d'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  3 18:56:26 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:56:26.227 286999 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec  3 18:56:26 compute-0 nova_compute[348325]: 2025-12-03 18:56:26.226 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:56:27 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1721: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Dec  3 18:56:27 compute-0 podman[439217]: 2025-12-03 18:56:27.930388156 +0000 UTC m=+0.091195797 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  3 18:56:28 compute-0 nova_compute[348325]: 2025-12-03 18:56:28.288 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:56:29 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:56:29 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1722: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:56:29 compute-0 podman[158200]: time="2025-12-03T18:56:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:56:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:56:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42578 "" "Go-http-client/1.1"
Dec  3 18:56:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:56:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8170 "" "Go-http-client/1.1"
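[annotation] The two HTTP access lines above are prometheus-podman-exporter querying the libpod REST API over the Podman socket (/run/podman/podman.sock, mounted into the exporter per the podman_exporter config logged at 18:56:27). A stdlib sketch replaying the containers/json call over that socket; path and API version are copied from the log lines:

    # Sketch: issue the same libpod query over the Podman unix socket.
    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")  # the reply logged above was 42578 bytes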
Dec  3 18:56:30 compute-0 nova_compute[348325]: 2025-12-03 18:56:30.412 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:56:30 compute-0 nova_compute[348325]: 2025-12-03 18:56:30.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:56:30 compute-0 nova_compute[348325]: 2025-12-03 18:56:30.522 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:56:30 compute-0 nova_compute[348325]: 2025-12-03 18:56:30.522 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:56:30 compute-0 nova_compute[348325]: 2025-12-03 18:56:30.523 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:56:30 compute-0 nova_compute[348325]: 2025-12-03 18:56:30.523 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  3 18:56:30 compute-0 nova_compute[348325]: 2025-12-03 18:56:30.523 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 18:56:30 compute-0 podman[439262]: 2025-12-03 18:56:30.958921118 +0000 UTC m=+0.117128400 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  3 18:56:30 compute-0 podman[439261]: 2025-12-03 18:56:30.990405227 +0000 UTC m=+0.147839601 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 18:56:31 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:56:31 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1427026803' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:56:31 compute-0 nova_compute[348325]: 2025-12-03 18:56:31.068 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
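[annotation] The "Running cmd" / "CMD ... returned" pair above is nova's resource tracker shelling out to `ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf` to size its RBD-backed disk. A sketch running the same probe; the JSON key names ("stats", "total_avail_bytes") match recent Ceph releases but are worth verifying locally:

    # Sketch: run the probe nova logs above and pull out free space.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, check=True, text=True).stdout
    stats = json.loads(out)["stats"]
    print("avail GiB:", stats["total_avail_bytes"] / 2**30)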
Dec  3 18:56:31 compute-0 openstack_network_exporter[365222]: ERROR   18:56:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:56:31 compute-0 openstack_network_exporter[365222]: ERROR   18:56:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:56:31 compute-0 openstack_network_exporter[365222]: ERROR   18:56:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:56:31 compute-0 openstack_network_exporter[365222]: ERROR   18:56:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:56:31 compute-0 openstack_network_exporter[365222]: ERROR   18:56:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:56:31 compute-0 nova_compute[348325]: 2025-12-03 18:56:31.448 348329 WARNING nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  3 18:56:31 compute-0 nova_compute[348325]: 2025-12-03 18:56:31.450 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4144MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  3 18:56:31 compute-0 nova_compute[348325]: 2025-12-03 18:56:31.451 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:56:31 compute-0 nova_compute[348325]: 2025-12-03 18:56:31.451 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:56:31 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1723: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:56:31 compute-0 nova_compute[348325]: 2025-12-03 18:56:31.706 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  3 18:56:31 compute-0 nova_compute[348325]: 2025-12-03 18:56:31.707 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  3 18:56:31 compute-0 nova_compute[348325]: 2025-12-03 18:56:31.736 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 18:56:32 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:56:32 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1355683893' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:56:32 compute-0 nova_compute[348325]: 2025-12-03 18:56:32.227 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 18:56:32 compute-0 nova_compute[348325]: 2025-12-03 18:56:32.236 348329 DEBUG nova.compute.provider_tree [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  3 18:56:32 compute-0 nova_compute[348325]: 2025-12-03 18:56:32.266 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
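[annotation] The inventory dict above is what nova reports to placement; schedulable capacity per resource class follows placement's rule capacity = (total - reserved) * allocation_ratio, so this host advertises 32 vCPUs, 7167 MB of RAM and 52.2 GB of disk. A quick check of that arithmetic:

    # Capacity implied by the inventory dict logged above,
    # using capacity = (total - reserved) * allocation_ratio.
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # VCPU 32.0 / MEMORY_MB 7167.0 / DISK_GB 52.2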
Dec  3 18:56:32 compute-0 nova_compute[348325]: 2025-12-03 18:56:32.270 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  3 18:56:32 compute-0 nova_compute[348325]: 2025-12-03 18:56:32.271 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.820s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:56:33 compute-0 nova_compute[348325]: 2025-12-03 18:56:33.264 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:56:33 compute-0 nova_compute[348325]: 2025-12-03 18:56:33.293 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:56:33 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1724: 321 pgs: 321 active+clean; 24 MiB data, 229 MiB used, 60 GiB / 60 GiB avail; 4.5 KiB/s rd, 683 KiB/s wr, 5 op/s
Dec  3 18:56:33 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Dec  3 18:56:33 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Dec  3 18:56:33 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Dec  3 18:56:34 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:56:34.230 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1ac9fd0d-196b-4ea8-9a9a-8aa831092805, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  3 18:56:34 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:56:35 compute-0 nova_compute[348325]: 2025-12-03 18:56:35.417 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:56:35 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1726: 321 pgs: 321 active+clean; 32 MiB data, 229 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 1.6 MiB/s wr, 11 op/s
Dec  3 18:56:35 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Dec  3 18:56:35 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Dec  3 18:56:35 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Dec  3 18:56:37 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1728: 321 pgs: 321 active+clean; 48 MiB data, 245 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 4.0 MiB/s wr, 17 op/s
Dec  3 18:56:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 18:56:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1192847701' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 18:56:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 18:56:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1192847701' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 18:56:38 compute-0 nova_compute[348325]: 2025-12-03 18:56:38.295 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:56:39 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:56:39 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1729: 321 pgs: 321 active+clean; 57 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Dec  3 18:56:40 compute-0 nova_compute[348325]: 2025-12-03 18:56:40.422 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:56:41 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1730: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 4.1 MiB/s wr, 39 op/s
Dec  3 18:56:43 compute-0 nova_compute[348325]: 2025-12-03 18:56:43.297 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:56:43 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1731: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 3.3 MiB/s wr, 31 op/s
Dec  3 18:56:43 compute-0 podman[439333]: 2025-12-03 18:56:43.940286853 +0000 UTC m=+0.090381789 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., version=9.6, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, managed_by=edpm_ansible, config_id=edpm, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, vcs-type=git, container_name=openstack_network_exporter)
Dec  3 18:56:43 compute-0 podman[439331]: 2025-12-03 18:56:43.947275173 +0000 UTC m=+0.099773888 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  3 18:56:43 compute-0 podman[439332]: 2025-12-03 18:56:43.950646785 +0000 UTC m=+0.099406398 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 18:56:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:56:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:56:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:56:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:56:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:56:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:56:44 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:56:45 compute-0 nova_compute[348325]: 2025-12-03 18:56:45.426 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:56:45 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1732: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 2.5 MiB/s wr, 26 op/s
Dec  3 18:56:46 compute-0 podman[439394]: 2025-12-03 18:56:46.93875413 +0000 UTC m=+0.104416120 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, distribution-scope=public, io.openshift.tags=base rhel9, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, vendor=Red Hat, Inc., vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, name=ubi9, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, maintainer=Red Hat, Inc., release-0.7.12=, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  3 18:56:46 compute-0 podman[439396]: 2025-12-03 18:56:46.963576626 +0000 UTC m=+0.114902946 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 18:56:46 compute-0 podman[439395]: 2025-12-03 18:56:46.972685829 +0000 UTC m=+0.124910681 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  3 18:56:47 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1733: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 2.1 MiB/s wr, 22 op/s
Dec  3 18:56:48 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:56:48 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:56:48 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:56:48 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:56:48 compute-0 nova_compute[348325]: 2025-12-03 18:56:48.300 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:56:48 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:56:48 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:56:49 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:56:49 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:56:49 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 18:56:49 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:56:49 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 18:56:49 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:56:49 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 54e48c87-8397-40d7-aa96-56e4871b68ae does not exist
Dec  3 18:56:49 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 2614544d-e900-47ce-a632-a704f853620f does not exist
Dec  3 18:56:49 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 6feda9e3-193a-4c92-bc64-0c5443791bfd does not exist
Dec  3 18:56:49 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 18:56:49 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 18:56:49 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 18:56:49 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:56:49 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:56:49 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:56:49 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:56:49 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1734: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 759 KiB/s wr, 19 op/s
Dec  3 18:56:49 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:56:49 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:56:49 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:56:50 compute-0 podman[439839]: 2025-12-03 18:56:50.070875 +0000 UTC m=+0.100342700 container create 77bb8e6b2522c1037f0be7d602aab6216f1d5f17593bce45c577c4ccb8cb2de0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  3 18:56:50 compute-0 podman[439839]: 2025-12-03 18:56:50.034284637 +0000 UTC m=+0.063752387 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:56:50 compute-0 systemd[1]: Started libpod-conmon-77bb8e6b2522c1037f0be7d602aab6216f1d5f17593bce45c577c4ccb8cb2de0.scope.
Dec  3 18:56:50 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:56:50 compute-0 podman[439839]: 2025-12-03 18:56:50.206183544 +0000 UTC m=+0.235651274 container init 77bb8e6b2522c1037f0be7d602aab6216f1d5f17593bce45c577c4ccb8cb2de0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hugle, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True)
Dec  3 18:56:50 compute-0 podman[439839]: 2025-12-03 18:56:50.222964634 +0000 UTC m=+0.252432324 container start 77bb8e6b2522c1037f0be7d602aab6216f1d5f17593bce45c577c4ccb8cb2de0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec  3 18:56:50 compute-0 podman[439839]: 2025-12-03 18:56:50.227168757 +0000 UTC m=+0.256636487 container attach 77bb8e6b2522c1037f0be7d602aab6216f1d5f17593bce45c577c4ccb8cb2de0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hugle, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec  3 18:56:50 compute-0 elated_hugle[439855]: 167 167
Dec  3 18:56:50 compute-0 systemd[1]: libpod-77bb8e6b2522c1037f0be7d602aab6216f1d5f17593bce45c577c4ccb8cb2de0.scope: Deactivated successfully.
Dec  3 18:56:50 compute-0 podman[439839]: 2025-12-03 18:56:50.234893385 +0000 UTC m=+0.264361095 container died 77bb8e6b2522c1037f0be7d602aab6216f1d5f17593bce45c577c4ccb8cb2de0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hugle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec  3 18:56:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ace1e1583be2bb37bbde4383a290bfdf7bf905101d0697eeb7f20fa3cadfd14-merged.mount: Deactivated successfully.
Dec  3 18:56:50 compute-0 podman[439839]: 2025-12-03 18:56:50.294258495 +0000 UTC m=+0.323726185 container remove 77bb8e6b2522c1037f0be7d602aab6216f1d5f17593bce45c577c4ccb8cb2de0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hugle, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:56:50 compute-0 systemd[1]: libpod-conmon-77bb8e6b2522c1037f0be7d602aab6216f1d5f17593bce45c577c4ccb8cb2de0.scope: Deactivated successfully.
Dec  3 18:56:50 compute-0 nova_compute[348325]: 2025-12-03 18:56:50.430 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:56:50 compute-0 podman[439878]: 2025-12-03 18:56:50.526967546 +0000 UTC m=+0.080334432 container create 21047beaa3da388c33d4616fb03718fc453c6639066ef60b1782a59b387fe870 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  3 18:56:50 compute-0 podman[439878]: 2025-12-03 18:56:50.490544897 +0000 UTC m=+0.043911863 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:56:50 compute-0 systemd[1]: Started libpod-conmon-21047beaa3da388c33d4616fb03718fc453c6639066ef60b1782a59b387fe870.scope.
Dec  3 18:56:50 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:56:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93b6446ca9b470440b0e74058cbeea7587589b49ab4279885eff55daa652b66c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:56:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93b6446ca9b470440b0e74058cbeea7587589b49ab4279885eff55daa652b66c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:56:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93b6446ca9b470440b0e74058cbeea7587589b49ab4279885eff55daa652b66c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:56:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93b6446ca9b470440b0e74058cbeea7587589b49ab4279885eff55daa652b66c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:56:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93b6446ca9b470440b0e74058cbeea7587589b49ab4279885eff55daa652b66c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 18:56:50 compute-0 podman[439878]: 2025-12-03 18:56:50.654173272 +0000 UTC m=+0.207540168 container init 21047beaa3da388c33d4616fb03718fc453c6639066ef60b1782a59b387fe870 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec  3 18:56:50 compute-0 podman[439878]: 2025-12-03 18:56:50.665909948 +0000 UTC m=+0.219276824 container start 21047beaa3da388c33d4616fb03718fc453c6639066ef60b1782a59b387fe870 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_sammet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:56:50 compute-0 podman[439878]: 2025-12-03 18:56:50.670417508 +0000 UTC m=+0.223784384 container attach 21047beaa3da388c33d4616fb03718fc453c6639066ef60b1782a59b387fe870 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_sammet, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Dec  3 18:56:51 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1735: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:56:51 compute-0 gracious_sammet[439894]: --> passed data devices: 0 physical, 3 LVM
Dec  3 18:56:51 compute-0 gracious_sammet[439894]: --> relative data size: 1.0
Dec  3 18:56:51 compute-0 gracious_sammet[439894]: --> All data devices are unavailable
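The three "-->" lines above are the stdout of the short-lived gracious_sammet container: a ceph-volume batch probe was handed 3 LVM data devices (0 physical disks) and reports them all unavailable, consistent with the LVs already carrying OSDs, as the listings further below confirm, so this pass creates nothing new. A minimal sketch for pulling these summary lines out of a journal excerpt like this one (the filename is hypothetical):

    import re
    import sys

    # Minimal sketch, assuming the journal excerpt is saved as plain text
    # (filename hypothetical). Extract the "--> ..." summary lines that the
    # short-lived ceph-volume containers emit on stdout.
    SUMMARY = re.compile(r"\S+\[\d+\]: (--> .*)$")

    def batch_summaries(path):
        with open(path) as fh:
            for line in fh:
                m = SUMMARY.search(line)
                if m:
                    yield m.group(1)

    if __name__ == "__main__":
        path = sys.argv[1] if len(sys.argv) > 1 else "messages.log"
        for s in batch_summaries(path):
            print(s)
            if "All data devices are unavailable" in s:
                print("  (nothing for this batch run to create)")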
Dec  3 18:56:51 compute-0 systemd[1]: libpod-21047beaa3da388c33d4616fb03718fc453c6639066ef60b1782a59b387fe870.scope: Deactivated successfully.
Dec  3 18:56:51 compute-0 systemd[1]: libpod-21047beaa3da388c33d4616fb03718fc453c6639066ef60b1782a59b387fe870.scope: Consumed 1.070s CPU time.
Dec  3 18:56:51 compute-0 podman[439923]: 2025-12-03 18:56:51.864053461 +0000 UTC m=+0.042908539 container died 21047beaa3da388c33d4616fb03718fc453c6639066ef60b1782a59b387fe870 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_sammet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  3 18:56:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-93b6446ca9b470440b0e74058cbeea7587589b49ab4279885eff55daa652b66c-merged.mount: Deactivated successfully.
Dec  3 18:56:51 compute-0 podman[439923]: 2025-12-03 18:56:51.95656975 +0000 UTC m=+0.135424708 container remove 21047beaa3da388c33d4616fb03718fc453c6639066ef60b1782a59b387fe870 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_sammet, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec  3 18:56:51 compute-0 systemd[1]: libpod-conmon-21047beaa3da388c33d4616fb03718fc453c6639066ef60b1782a59b387fe870.scope: Deactivated successfully.
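Each of these one-shot cephadm helpers follows the same podman event sequence, create, init, start, attach, died, remove, bracketed by systemd starting and deactivating the matching libpod- and libpod-conmon- scopes. A sketch (same hypothetical text file) that regroups the events per container ID so each lifecycle reads in order:

    import re
    from collections import defaultdict

    # Minimal sketch: collect the podman "container <event> <id>" records
    # from the journal text so each short-lived container's create/init/
    # start/attach/died/remove sequence can be inspected as one list. Only
    # fields visible in the log lines above are parsed.
    EVENT = re.compile(
        r"podman\[\d+\]: (?P<ts>\S+ \S+) .* container (?P<event>\w+)"
        r" (?P<cid>[0-9a-f]{64})"
    )

    def lifecycles(lines):
        seq = defaultdict(list)
        for line in lines:
            m = EVENT.search(line)
            if m:
                seq[m.group("cid")].append((m.group("ts"), m.group("event")))
        return seq

    # Usage:
    #   for cid, events in lifecycles(open("messages.log")).items():
    #       print(cid[:12], [e for _, e in events])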
Dec  3 18:56:52 compute-0 podman[440073]: 2025-12-03 18:56:52.817731465 +0000 UTC m=+0.066721020 container create 075650e763831316077f5253cdce3fe2f11701a8cdfe106909a234cd92b14cbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_saha, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:56:52 compute-0 systemd[1]: Started libpod-conmon-075650e763831316077f5253cdce3fe2f11701a8cdfe106909a234cd92b14cbc.scope.
Dec  3 18:56:52 compute-0 podman[440073]: 2025-12-03 18:56:52.791227118 +0000 UTC m=+0.040216653 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:56:52 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:56:52 compute-0 podman[440073]: 2025-12-03 18:56:52.924260356 +0000 UTC m=+0.173249891 container init 075650e763831316077f5253cdce3fe2f11701a8cdfe106909a234cd92b14cbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_saha, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec  3 18:56:52 compute-0 podman[440073]: 2025-12-03 18:56:52.936175737 +0000 UTC m=+0.185165262 container start 075650e763831316077f5253cdce3fe2f11701a8cdfe106909a234cd92b14cbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_saha, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  3 18:56:52 compute-0 vibrant_saha[440088]: 167 167
Dec  3 18:56:52 compute-0 systemd[1]: libpod-075650e763831316077f5253cdce3fe2f11701a8cdfe106909a234cd92b14cbc.scope: Deactivated successfully.
Dec  3 18:56:52 compute-0 podman[440073]: 2025-12-03 18:56:52.941058086 +0000 UTC m=+0.190047621 container attach 075650e763831316077f5253cdce3fe2f11701a8cdfe106909a234cd92b14cbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_saha, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:56:52 compute-0 podman[440073]: 2025-12-03 18:56:52.948525879 +0000 UTC m=+0.197515434 container died 075650e763831316077f5253cdce3fe2f11701a8cdfe106909a234cd92b14cbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_saha, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec  3 18:56:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-7d647e4c932f07e830fd04464904f6e1a4e301c5603f514a1cc8f27592f5eecd-merged.mount: Deactivated successfully.
Dec  3 18:56:53 compute-0 podman[440073]: 2025-12-03 18:56:53.030116021 +0000 UTC m=+0.279105556 container remove 075650e763831316077f5253cdce3fe2f11701a8cdfe106909a234cd92b14cbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_saha, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec  3 18:56:53 compute-0 systemd[1]: libpod-conmon-075650e763831316077f5253cdce3fe2f11701a8cdfe106909a234cd92b14cbc.scope: Deactivated successfully.
Dec  3 18:56:53 compute-0 podman[440109]: 2025-12-03 18:56:53.261869189 +0000 UTC m=+0.056958111 container create 39059b2000737740011cd054acc363e9775a6a096357efc62f11e5b843497e76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_carson, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec  3 18:56:53 compute-0 nova_compute[348325]: 2025-12-03 18:56:53.302 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:56:53 compute-0 systemd[1]: Started libpod-conmon-39059b2000737740011cd054acc363e9775a6a096357efc62f11e5b843497e76.scope.
Dec  3 18:56:53 compute-0 podman[440109]: 2025-12-03 18:56:53.241959093 +0000 UTC m=+0.037048035 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:56:53 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:56:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6c8654ef46b7548fbe87a9d0c0af9eb1475202aa69538e9b0a05d0bd759a852/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:56:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6c8654ef46b7548fbe87a9d0c0af9eb1475202aa69538e9b0a05d0bd759a852/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:56:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6c8654ef46b7548fbe87a9d0c0af9eb1475202aa69538e9b0a05d0bd759a852/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:56:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6c8654ef46b7548fbe87a9d0c0af9eb1475202aa69538e9b0a05d0bd759a852/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:56:53 compute-0 podman[440109]: 2025-12-03 18:56:53.394974398 +0000 UTC m=+0.190063340 container init 39059b2000737740011cd054acc363e9775a6a096357efc62f11e5b843497e76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_carson, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:56:53 compute-0 podman[440109]: 2025-12-03 18:56:53.411933833 +0000 UTC m=+0.207022755 container start 39059b2000737740011cd054acc363e9775a6a096357efc62f11e5b843497e76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  3 18:56:53 compute-0 podman[440109]: 2025-12-03 18:56:53.415586972 +0000 UTC m=+0.210675894 container attach 39059b2000737740011cd054acc363e9775a6a096357efc62f11e5b843497e76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec  3 18:56:53 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1736: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:56:54 compute-0 naughty_carson[440124]: {
Dec  3 18:56:54 compute-0 naughty_carson[440124]:    "0": [
Dec  3 18:56:54 compute-0 naughty_carson[440124]:        {
Dec  3 18:56:54 compute-0 naughty_carson[440124]:            "devices": [
Dec  3 18:56:54 compute-0 naughty_carson[440124]:                "/dev/loop3"
Dec  3 18:56:54 compute-0 naughty_carson[440124]:            ],
Dec  3 18:56:54 compute-0 naughty_carson[440124]:            "lv_name": "ceph_lv0",
Dec  3 18:56:54 compute-0 naughty_carson[440124]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:56:54 compute-0 naughty_carson[440124]:            "lv_size": "21470642176",
Dec  3 18:56:54 compute-0 naughty_carson[440124]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=973fbbc8-5aff-4a53-bee8-42e5a6788dd6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:56:54 compute-0 naughty_carson[440124]:            "lv_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:56:54 compute-0 naughty_carson[440124]:            "name": "ceph_lv0",
Dec  3 18:56:54 compute-0 naughty_carson[440124]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:56:54 compute-0 naughty_carson[440124]:            "tags": {
Dec  3 18:56:54 compute-0 naughty_carson[440124]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:56:54 compute-0 naughty_carson[440124]:                "ceph.block_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:56:54 compute-0 naughty_carson[440124]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:56:54 compute-0 naughty_carson[440124]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:56:54 compute-0 naughty_carson[440124]:                "ceph.cluster_name": "ceph",
Dec  3 18:56:54 compute-0 naughty_carson[440124]:                "ceph.crush_device_class": "",
Dec  3 18:56:54 compute-0 naughty_carson[440124]:                "ceph.encrypted": "0",
Dec  3 18:56:54 compute-0 naughty_carson[440124]:                "ceph.osd_fsid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:56:54 compute-0 naughty_carson[440124]:                "ceph.osd_id": "0",
Dec  3 18:56:54 compute-0 naughty_carson[440124]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:56:54 compute-0 naughty_carson[440124]:                "ceph.type": "block",
Dec  3 18:56:54 compute-0 naughty_carson[440124]:                "ceph.vdo": "0"
Dec  3 18:56:54 compute-0 naughty_carson[440124]:            },
Dec  3 18:56:54 compute-0 naughty_carson[440124]:            "type": "block",
Dec  3 18:56:54 compute-0 naughty_carson[440124]:            "vg_name": "ceph_vg0"
Dec  3 18:56:54 compute-0 naughty_carson[440124]:        }
Dec  3 18:56:54 compute-0 naughty_carson[440124]:    ],
Dec  3 18:56:54 compute-0 naughty_carson[440124]:    "1": [
Dec  3 18:56:54 compute-0 naughty_carson[440124]:        {
Dec  3 18:56:54 compute-0 naughty_carson[440124]:            "devices": [
Dec  3 18:56:54 compute-0 naughty_carson[440124]:                "/dev/loop4"
Dec  3 18:56:54 compute-0 naughty_carson[440124]:            ],
Dec  3 18:56:54 compute-0 naughty_carson[440124]:            "lv_name": "ceph_lv1",
Dec  3 18:56:54 compute-0 naughty_carson[440124]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:56:54 compute-0 naughty_carson[440124]:            "lv_size": "21470642176",
Dec  3 18:56:54 compute-0 naughty_carson[440124]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1e2b0083-5293-47cb-a3d1-bc27cedc4ede,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:56:54 compute-0 naughty_carson[440124]:            "lv_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:56:54 compute-0 naughty_carson[440124]:            "name": "ceph_lv1",
Dec  3 18:56:54 compute-0 naughty_carson[440124]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:56:54 compute-0 naughty_carson[440124]:            "tags": {
Dec  3 18:56:54 compute-0 naughty_carson[440124]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:56:54 compute-0 naughty_carson[440124]:                "ceph.block_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:56:54 compute-0 naughty_carson[440124]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:56:54 compute-0 naughty_carson[440124]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:56:54 compute-0 naughty_carson[440124]:                "ceph.cluster_name": "ceph",
Dec  3 18:56:54 compute-0 naughty_carson[440124]:                "ceph.crush_device_class": "",
Dec  3 18:56:54 compute-0 naughty_carson[440124]:                "ceph.encrypted": "0",
Dec  3 18:56:54 compute-0 naughty_carson[440124]:                "ceph.osd_fsid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:56:54 compute-0 naughty_carson[440124]:                "ceph.osd_id": "1",
Dec  3 18:56:54 compute-0 naughty_carson[440124]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:56:54 compute-0 naughty_carson[440124]:                "ceph.type": "block",
Dec  3 18:56:54 compute-0 naughty_carson[440124]:                "ceph.vdo": "0"
Dec  3 18:56:54 compute-0 naughty_carson[440124]:            },
Dec  3 18:56:54 compute-0 naughty_carson[440124]:            "type": "block",
Dec  3 18:56:54 compute-0 naughty_carson[440124]:            "vg_name": "ceph_vg1"
Dec  3 18:56:54 compute-0 naughty_carson[440124]:        }
Dec  3 18:56:54 compute-0 naughty_carson[440124]:    ],
Dec  3 18:56:54 compute-0 naughty_carson[440124]:    "2": [
Dec  3 18:56:54 compute-0 naughty_carson[440124]:        {
Dec  3 18:56:54 compute-0 naughty_carson[440124]:            "devices": [
Dec  3 18:56:54 compute-0 naughty_carson[440124]:                "/dev/loop5"
Dec  3 18:56:54 compute-0 naughty_carson[440124]:            ],
Dec  3 18:56:54 compute-0 naughty_carson[440124]:            "lv_name": "ceph_lv2",
Dec  3 18:56:54 compute-0 naughty_carson[440124]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:56:54 compute-0 naughty_carson[440124]:            "lv_size": "21470642176",
Dec  3 18:56:54 compute-0 naughty_carson[440124]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2abec9de-afba-437e-9a17-384a1dd8cd50,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:56:54 compute-0 naughty_carson[440124]:            "lv_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:56:54 compute-0 naughty_carson[440124]:            "name": "ceph_lv2",
Dec  3 18:56:54 compute-0 naughty_carson[440124]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:56:54 compute-0 naughty_carson[440124]:            "tags": {
Dec  3 18:56:54 compute-0 naughty_carson[440124]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:56:54 compute-0 naughty_carson[440124]:                "ceph.block_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:56:54 compute-0 naughty_carson[440124]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:56:54 compute-0 naughty_carson[440124]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:56:54 compute-0 naughty_carson[440124]:                "ceph.cluster_name": "ceph",
Dec  3 18:56:54 compute-0 naughty_carson[440124]:                "ceph.crush_device_class": "",
Dec  3 18:56:54 compute-0 naughty_carson[440124]:                "ceph.encrypted": "0",
Dec  3 18:56:54 compute-0 naughty_carson[440124]:                "ceph.osd_fsid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:56:54 compute-0 naughty_carson[440124]:                "ceph.osd_id": "2",
Dec  3 18:56:54 compute-0 naughty_carson[440124]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:56:54 compute-0 naughty_carson[440124]:                "ceph.type": "block",
Dec  3 18:56:54 compute-0 naughty_carson[440124]:                "ceph.vdo": "0"
Dec  3 18:56:54 compute-0 naughty_carson[440124]:            },
Dec  3 18:56:54 compute-0 naughty_carson[440124]:            "type": "block",
Dec  3 18:56:54 compute-0 naughty_carson[440124]:            "vg_name": "ceph_vg2"
Dec  3 18:56:54 compute-0 naughty_carson[440124]:        }
Dec  3 18:56:54 compute-0 naughty_carson[440124]:    ]
Dec  3 18:56:54 compute-0 naughty_carson[440124]: }
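The JSON printed by naughty_carson looks like a ceph-volume lvm list pass: a map from OSD id to its logical-volume records, with the lv_tags string duplicated as the structured "tags" object. Assuming the block has been captured to a file (filename hypothetical), flattening it to one line per OSD is straightforward:

    import json

    # Minimal sketch: reduce the id-keyed ceph-volume LVM listing above to
    # an OSD -> device table (filename hypothetical).
    with open("lvm_list.json") as fh:
        listing = json.load(fh)

    for osd_id, lvs in sorted(listing.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            print(osd_id, lv["lv_path"], ",".join(lv["devices"]),
                  lv["tags"]["ceph.osd_fsid"])
    # -> 0 /dev/ceph_vg0/ceph_lv0 /dev/loop3 973fbbc8-5aff-4a53-bee8-42e5a6788dd6
    #    1 /dev/ceph_vg1/ceph_lv1 /dev/loop4 ...
    #    2 /dev/ceph_vg2/ceph_lv2 /dev/loop5 ...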
Dec  3 18:56:54 compute-0 systemd[1]: libpod-39059b2000737740011cd054acc363e9775a6a096357efc62f11e5b843497e76.scope: Deactivated successfully.
Dec  3 18:56:54 compute-0 podman[440109]: 2025-12-03 18:56:54.252165898 +0000 UTC m=+1.047254860 container died 39059b2000737740011cd054acc363e9775a6a096357efc62f11e5b843497e76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:56:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-f6c8654ef46b7548fbe87a9d0c0af9eb1475202aa69538e9b0a05d0bd759a852-merged.mount: Deactivated successfully.
Dec  3 18:56:54 compute-0 podman[440109]: 2025-12-03 18:56:54.332815046 +0000 UTC m=+1.127903998 container remove 39059b2000737740011cd054acc363e9775a6a096357efc62f11e5b843497e76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_carson, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:56:54 compute-0 systemd[1]: libpod-conmon-39059b2000737740011cd054acc363e9775a6a096357efc62f11e5b843497e76.scope: Deactivated successfully.
Dec  3 18:56:54 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:56:55 compute-0 podman[440281]: 2025-12-03 18:56:55.30000498 +0000 UTC m=+0.073321281 container create 8f03afc28213454c29878d5e6173a59c81b11c6df59af3ae04c0a9a1a7a2e520 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_haibt, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:56:55 compute-0 podman[440281]: 2025-12-03 18:56:55.27336887 +0000 UTC m=+0.046685201 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:56:55 compute-0 systemd[1]: Started libpod-conmon-8f03afc28213454c29878d5e6173a59c81b11c6df59af3ae04c0a9a1a7a2e520.scope.
Dec  3 18:56:55 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:56:55 compute-0 podman[440281]: 2025-12-03 18:56:55.430800613 +0000 UTC m=+0.204116984 container init 8f03afc28213454c29878d5e6173a59c81b11c6df59af3ae04c0a9a1a7a2e520 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_haibt, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  3 18:56:55 compute-0 nova_compute[348325]: 2025-12-03 18:56:55.437 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:56:55 compute-0 podman[440281]: 2025-12-03 18:56:55.450647999 +0000 UTC m=+0.223964330 container start 8f03afc28213454c29878d5e6173a59c81b11c6df59af3ae04c0a9a1a7a2e520 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_haibt, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:56:55 compute-0 podman[440281]: 2025-12-03 18:56:55.459203537 +0000 UTC m=+0.232519858 container attach 8f03afc28213454c29878d5e6173a59c81b11c6df59af3ae04c0a9a1a7a2e520 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 18:56:55 compute-0 boring_haibt[440297]: 167 167
Dec  3 18:56:55 compute-0 systemd[1]: libpod-8f03afc28213454c29878d5e6173a59c81b11c6df59af3ae04c0a9a1a7a2e520.scope: Deactivated successfully.
Dec  3 18:56:55 compute-0 podman[440281]: 2025-12-03 18:56:55.463145524 +0000 UTC m=+0.236461845 container died 8f03afc28213454c29878d5e6173a59c81b11c6df59af3ae04c0a9a1a7a2e520 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  3 18:56:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4c0a60b556a73cf1093c85366b308343a4a6e36a28f11049898a5364a494a56-merged.mount: Deactivated successfully.
Dec  3 18:56:55 compute-0 podman[440281]: 2025-12-03 18:56:55.550824014 +0000 UTC m=+0.324140335 container remove 8f03afc28213454c29878d5e6173a59c81b11c6df59af3ae04c0a9a1a7a2e520 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  3 18:56:55 compute-0 systemd[1]: libpod-conmon-8f03afc28213454c29878d5e6173a59c81b11c6df59af3ae04c0a9a1a7a2e520.scope: Deactivated successfully.
Dec  3 18:56:55 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1737: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:56:55 compute-0 podman[440319]: 2025-12-03 18:56:55.74685796 +0000 UTC m=+0.056247294 container create f814a483a60ab1fee66f031134edb56ba9b7de27336402039e0d0bd53241c83e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_banzai, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec  3 18:56:55 compute-0 systemd[1]: Started libpod-conmon-f814a483a60ab1fee66f031134edb56ba9b7de27336402039e0d0bd53241c83e.scope.
Dec  3 18:56:55 compute-0 podman[440319]: 2025-12-03 18:56:55.72762167 +0000 UTC m=+0.037011034 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:56:55 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:56:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5066a3913d29716922b0a8922848785c7aba46972e1bf6b1e7e02a60f14791c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:56:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5066a3913d29716922b0a8922848785c7aba46972e1bf6b1e7e02a60f14791c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:56:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5066a3913d29716922b0a8922848785c7aba46972e1bf6b1e7e02a60f14791c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:56:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5066a3913d29716922b0a8922848785c7aba46972e1bf6b1e7e02a60f14791c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:56:55 compute-0 podman[440319]: 2025-12-03 18:56:55.873555004 +0000 UTC m=+0.182944388 container init f814a483a60ab1fee66f031134edb56ba9b7de27336402039e0d0bd53241c83e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_banzai, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  3 18:56:55 compute-0 podman[440319]: 2025-12-03 18:56:55.899708152 +0000 UTC m=+0.209097516 container start f814a483a60ab1fee66f031134edb56ba9b7de27336402039e0d0bd53241c83e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_banzai, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  3 18:56:55 compute-0 podman[440319]: 2025-12-03 18:56:55.906331364 +0000 UTC m=+0.215720738 container attach f814a483a60ab1fee66f031134edb56ba9b7de27336402039e0d0bd53241c83e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_banzai, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec  3 18:56:56 compute-0 ovn_controller[89305]: 2025-12-03T18:56:56Z|00065|memory_trim|INFO|Detected inactivity (last active 30023 ms ago): trimming memory
Dec  3 18:56:56 compute-0 admiring_banzai[440335]: {
Dec  3 18:56:56 compute-0 admiring_banzai[440335]:    "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
Dec  3 18:56:56 compute-0 admiring_banzai[440335]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:56:56 compute-0 admiring_banzai[440335]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 18:56:56 compute-0 admiring_banzai[440335]:        "osd_id": 1,
Dec  3 18:56:56 compute-0 admiring_banzai[440335]:        "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:56:56 compute-0 admiring_banzai[440335]:        "type": "bluestore"
Dec  3 18:56:56 compute-0 admiring_banzai[440335]:    },
Dec  3 18:56:56 compute-0 admiring_banzai[440335]:    "2abec9de-afba-437e-9a17-384a1dd8cd50": {
Dec  3 18:56:56 compute-0 admiring_banzai[440335]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:56:56 compute-0 admiring_banzai[440335]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 18:56:56 compute-0 admiring_banzai[440335]:        "osd_id": 2,
Dec  3 18:56:56 compute-0 admiring_banzai[440335]:        "osd_uuid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:56:56 compute-0 admiring_banzai[440335]:        "type": "bluestore"
Dec  3 18:56:56 compute-0 admiring_banzai[440335]:    },
Dec  3 18:56:56 compute-0 admiring_banzai[440335]:    "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": {
Dec  3 18:56:56 compute-0 admiring_banzai[440335]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:56:56 compute-0 admiring_banzai[440335]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 18:56:56 compute-0 admiring_banzai[440335]:        "osd_id": 0,
Dec  3 18:56:56 compute-0 admiring_banzai[440335]:        "osd_uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:56:56 compute-0 admiring_banzai[440335]:        "type": "bluestore"
Dec  3 18:56:56 compute-0 admiring_banzai[440335]:    }
Dec  3 18:56:56 compute-0 admiring_banzai[440335]: }
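admiring_banzai prints a second inventory of the same three OSDs, this time keyed by OSD uuid with the device-mapper path and store type (bluestore), in the shape of a ceph-volume raw-style listing. The two payloads should agree; a sketch cross-checking them (both filenames hypothetical):

    import json

    # Minimal sketch: verify the uuid-keyed inventory above against the
    # id-keyed LVM listing printed earlier (filenames hypothetical).
    by_id = json.load(open("lvm_list.json"))    # naughty_carson output
    by_uuid = json.load(open("raw_list.json"))  # admiring_banzai output

    for uuid, osd in sorted(by_uuid.items()):
        lv = by_id[str(osd["osd_id"])][0]
        assert lv["tags"]["ceph.osd_fsid"] == uuid == osd["osd_uuid"]
        assert lv["tags"]["ceph.cluster_fsid"] == osd["ceph_fsid"]
        print(f"osd.{osd['osd_id']}: {osd['device']} ({osd['type']}) consistent")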
Dec  3 18:56:56 compute-0 systemd[1]: libpod-f814a483a60ab1fee66f031134edb56ba9b7de27336402039e0d0bd53241c83e.scope: Deactivated successfully.
Dec  3 18:56:56 compute-0 podman[440319]: 2025-12-03 18:56:56.948811376 +0000 UTC m=+1.258200750 container died f814a483a60ab1fee66f031134edb56ba9b7de27336402039e0d0bd53241c83e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec  3 18:56:56 compute-0 systemd[1]: libpod-f814a483a60ab1fee66f031134edb56ba9b7de27336402039e0d0bd53241c83e.scope: Consumed 1.050s CPU time.
Dec  3 18:56:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-e5066a3913d29716922b0a8922848785c7aba46972e1bf6b1e7e02a60f14791c-merged.mount: Deactivated successfully.
Dec  3 18:56:57 compute-0 podman[440319]: 2025-12-03 18:56:57.026012801 +0000 UTC m=+1.335402175 container remove f814a483a60ab1fee66f031134edb56ba9b7de27336402039e0d0bd53241c83e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_banzai, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:56:57 compute-0 systemd[1]: libpod-conmon-f814a483a60ab1fee66f031134edb56ba9b7de27336402039e0d0bd53241c83e.scope: Deactivated successfully.
Dec  3 18:56:57 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:56:57 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:56:57 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:56:57 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:56:57 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 705a9c95-c16b-45da-a9a1-c83bb6f6b782 does not exist
Dec  3 18:56:57 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 66dd5189-3b3b-4df7-8553-a32a58d2dfcd does not exist
Dec  3 18:56:57 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1738: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:56:58 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:56:58 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:56:58 compute-0 nova_compute[348325]: 2025-12-03 18:56:58.305 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:56:58 compute-0 podman[440429]: 2025-12-03 18:56:58.932084297 +0000 UTC m=+0.099436099 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
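The health_status records carry the full edpm_ansible deployment spec inline as a config_data label, formatted as a Python dict literal. Assuming that format holds (it does for every such line in this log), the label can be recovered for inspection with ast.literal_eval; the brace matching below is naive but sufficient for these lines:

    import ast

    # Minimal sketch: slice the config_data={...} label out of a
    # health_status log line and parse it as a Python literal. Breaks if a
    # string value ever contains an unbalanced brace, which none of the
    # lines here do.
    def config_data(log_line):
        start = log_line.index("config_data=") + len("config_data=")
        depth = 0
        for i, ch in enumerate(log_line[start:], start):
            if ch == "{":
                depth += 1
            elif ch == "}":
                depth -= 1
                if depth == 0:
                    return ast.literal_eval(log_line[start:i + 1])
        raise ValueError("unbalanced config_data literal")

    # e.g. config_data(line)["healthcheck"]["test"]
    #   -> '/openstack/healthcheck podman_exporter'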
Dec  3 18:56:59 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:56:59 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1739: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:56:59 compute-0 podman[158200]: time="2025-12-03T18:56:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:56:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:56:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42578 "" "Go-http-client/1.1"
Dec  3 18:56:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:56:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8175 "" "Go-http-client/1.1"
Dec  3 18:57:00 compute-0 nova_compute[348325]: 2025-12-03 18:57:00.443 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:01 compute-0 openstack_network_exporter[365222]: ERROR   18:57:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:57:01 compute-0 openstack_network_exporter[365222]: ERROR   18:57:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:57:01 compute-0 openstack_network_exporter[365222]: ERROR   18:57:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:57:01 compute-0 openstack_network_exporter[365222]: ERROR   18:57:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:57:01 compute-0 openstack_network_exporter[365222]: ERROR   18:57:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
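openstack_network_exporter probes the OVS/OVN daemons through their appctl control sockets; the recurring "no control socket files found" errors suggest that ovn-northd (normally a control-plane daemon, not expected on a compute node) and a reachable ovsdb-server are simply absent from where the exporter looks, so these lines repeat on every scrape. A sketch, assuming the conventional runtime directories, to see which control sockets actually exist on the host:

    import glob

    # Minimal sketch, assuming the usual OVS/OVN runtime directories
    # (/var/run/openvswitch, /var/run/ovn): list the appctl control sockets
    # that are present, i.e. the daemons the exporter could actually reach.
    for pattern in ("/var/run/openvswitch/*.ctl", "/var/run/ovn/*.ctl"):
        hits = glob.glob(pattern)
        print(pattern, "->", ", ".join(hits) if hits else "none")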
Dec  3 18:57:01 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1740: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:57:01 compute-0 podman[440452]: 2025-12-03 18:57:01.976756471 +0000 UTC m=+0.123328832 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Dec  3 18:57:02 compute-0 podman[440451]: 2025-12-03 18:57:02.010554207 +0000 UTC m=+0.161244908 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec  3 18:57:03 compute-0 nova_compute[348325]: 2025-12-03 18:57:03.264 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:03 compute-0 nova_compute[348325]: 2025-12-03 18:57:03.307 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:03 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1741: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:57:04 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:57:04 compute-0 nova_compute[348325]: 2025-12-03 18:57:04.708 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:05 compute-0 nova_compute[348325]: 2025-12-03 18:57:05.447 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:05 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1742: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:57:05 compute-0 nova_compute[348325]: 2025-12-03 18:57:05.715 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:06 compute-0 nova_compute[348325]: 2025-12-03 18:57:06.305 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:07 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1743: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:57:08 compute-0 nova_compute[348325]: 2025-12-03 18:57:08.311 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:09 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:57:09 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1744: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:57:10 compute-0 nova_compute[348325]: 2025-12-03 18:57:10.452 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:11 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1745: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:57:12 compute-0 nova_compute[348325]: 2025-12-03 18:57:12.744 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:12 compute-0 nova_compute[348325]: 2025-12-03 18:57:12.893 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.252 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.253 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.253 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff9026e270>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.254 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7eff8d7fffe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.254 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff9026e270>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.255 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff9026f920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff9026e270>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.255 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff9026e270>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.255 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff9026e270>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.255 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffa10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff9026e270>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.255 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8daba2d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff9026e270>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.255 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff9026e270>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.255 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff90799b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff9026e270>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.255 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff9026e270>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.255 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8f46ebd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff9026e270>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.255 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff9026e270>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.256 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff9026e270>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.256 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff9026e270>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.256 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff9026e270>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.256 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff9026e270>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.256 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff9026e270>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.256 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff9026e270>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.256 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff9026e270>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.256 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff9026e270>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.256 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff9026e270>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.256 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff9026e270>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.256 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7fff50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff9026e270>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.256 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff9026e270>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.256 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7fffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff9026e270>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.256 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8ef7c7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff9026e270>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.258 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.258 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7eff8d8a80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.258 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.258 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7eff8d8a8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.258 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.258 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7eff8d8a8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.258 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.259 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7eff8d8a81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.259 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.259 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7eff8d7ff9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.259 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.259 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7eff8d7fe840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.259 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.259 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7eff8d8a82c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.259 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.259 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7eff8d7ff9b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.259 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.260 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7eff8d8a8350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.260 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.260 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7eff8f682330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.260 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.260 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7eff8d7ff4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.260 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.260 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7eff8d930c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.260 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.260 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7eff8d7ff4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.261 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.261 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7eff8d7ff530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.261 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.261 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7eff8d7ff590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.261 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.261 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7eff8d7ff5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.261 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.261 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7eff8d8a8620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.262 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.262 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7eff8d7ff650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.262 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.262 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7eff8d7ff6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.262 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.262 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7eff8d7ffa40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.262 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.262 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7eff8d7ff710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.262 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.262 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7eff8d7fff20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.263 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.263 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7eff8d7ff770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.263 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.263 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7eff8d7fff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.263 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.263 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7eff8d7fdac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.263 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.263 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.264 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.264 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.264 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.264 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.264 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.264 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.264 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.264 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.264 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.264 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.264 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.264 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.264 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.264 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.264 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.264 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.264 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.265 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.265 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.265 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.265 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.265 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.265 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.265 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:57:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:57:13.265 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
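The burst above is one complete ceilometer polling cycle: each pollster from the [pollsters] source is registered against a shared ThreadPoolExecutor (sized [1] here, hence the earlier notice about having more pollsters than worker threads), discovery runs per pollster via local_instances, and any pollster whose discovery returns no resources is skipped. A minimal sketch of that fan-out; the names below are illustrative, not ceilometer's actual API:

    from concurrent.futures import ThreadPoolExecutor

    def discover_local_instances():
        return []  # no instances on this host yet, matching the log

    def run_pollster(name):
        resources = discover_local_instances()
        if not resources:
            return f"Skip pollster {name}, no resources found this cycle"
        return f"Polled {name} for {len(resources)} resources"

    pollsters = ["network.incoming.packets.error", "network.outgoing.bytes",
                 "memory.usage", "power.state", "cpu"]
    # One worker thread, as in "Processing pollsters ... with [1] threads".
    with ThreadPoolExecutor(max_workers=1) as executor:
        for result in executor.map(run_pollster, pollsters):
            print(result)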
Dec  3 18:57:13 compute-0 nova_compute[348325]: 2025-12-03 18:57:13.314 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:57:13 compute-0 nova_compute[348325]: 2025-12-03 18:57:13.501 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:57:13 compute-0 nova_compute[348325]: 2025-12-03 18:57:13.679 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:57:13 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1746: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:57:13 compute-0 nova_compute[348325]: 2025-12-03 18:57:13.920 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:57:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:57:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:57:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:57:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:57:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_18:57:13
Dec  3 18:57:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 18:57:13 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 18:57:13 compute-0 ceph-mgr[193091]: [balancer INFO root] pools ['backups', 'default.rgw.meta', 'volumes', 'images', 'default.rgw.control', 'default.rgw.log', '.rgw.root', 'vms', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr']
Dec  3 18:57:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:57:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:57:13 compute-0 ceph-mgr[193091]: [balancer INFO root] prepared 0/10 changes
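The balancer block above shows an upmap optimization pass that found nothing to do ("prepared 0/10 changes"): all 321 PGs are active+clean, so there is nothing to rebalance. The max misplaced figure (0.050000) is the throttle that caps how much data the balancer may set in motion at once; a rough sketch of that check, illustrative rather than ceph-mgr's actual code:

    # Throttle sketch: only keep applying plan changes while the
    # fraction of misplaced PGs stays under max_misplaced (0.05 here).
    MAX_MISPLACED = 0.05

    def may_apply_plan(misplaced_pgs, total_pgs):
        return misplaced_pgs / total_pgs <= MAX_MISPLACED

    print(may_apply_plan(0, 321))   # True: cluster already balanced
    print(may_apply_plan(32, 321))  # False: ~10% misplaced exceeds the cap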
Dec  3 18:57:14 compute-0 nova_compute[348325]: 2025-12-03 18:57:14.193 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:57:14 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:57:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 18:57:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:57:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 18:57:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:57:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:57:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:57:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:57:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:57:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:57:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:57:14 compute-0 nova_compute[348325]: 2025-12-03 18:57:14.650 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:57:14 compute-0 podman[440495]: 2025-12-03 18:57:14.840966849 +0000 UTC m=+0.117276743 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 18:57:14 compute-0 podman[440496]: 2025-12-03 18:57:14.851409745 +0000 UTC m=+0.125053014 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 18:57:14 compute-0 podman[440497]: 2025-12-03 18:57:14.861356428 +0000 UTC m=+0.128847537 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9-minimal)
Dec  3 18:57:15 compute-0 nova_compute[348325]: 2025-12-03 18:57:15.455 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:57:15 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1747: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:57:17 compute-0 nova_compute[348325]: 2025-12-03 18:57:17.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:57:17 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1748: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:57:17 compute-0 nova_compute[348325]: 2025-12-03 18:57:17.825 348329 DEBUG oslo_concurrency.lockutils [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] Acquiring lock "67a42a04-754c-489b-9aeb-12d68487d4d9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:57:17 compute-0 nova_compute[348325]: 2025-12-03 18:57:17.825 348329 DEBUG oslo_concurrency.lockutils [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] Lock "67a42a04-754c-489b-9aeb-12d68487d4d9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:57:17 compute-0 nova_compute[348325]: 2025-12-03 18:57:17.841 348329 DEBUG nova.compute.manager [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec  3 18:57:17 compute-0 podman[440556]: 2025-12-03 18:57:17.940777142 +0000 UTC m=+0.084826951 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true)
Dec  3 18:57:17 compute-0 podman[440555]: 2025-12-03 18:57:17.942433833 +0000 UTC m=+0.097970233 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible)
Dec  3 18:57:17 compute-0 podman[440554]: 2025-12-03 18:57:17.957353377 +0000 UTC m=+0.107452935 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, vendor=Red Hat, Inc., config_id=edpm, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, container_name=kepler, managed_by=edpm_ansible, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, io.openshift.tags=base rhel9, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9)
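
The three podman records above are periodic container healthchecks: each container's configured test (the /openstack/healthcheck script named in config_data) ran and reported health_status=healthy with a failing streak of 0. A minimal Python sketch for re-running those probes by hand, assuming local podman access and using the container_name values from the log:

    # Re-run the healthchecks behind the health_status=healthy events.
    # Container names are the container_name= values logged above.
    import subprocess

    for name in ("ovn_metadata_agent", "ceilometer_agent_ipmi", "kepler"):
        # "podman healthcheck run" executes the container's configured
        # test command and exits 0 when the container is healthy.
        result = subprocess.run(["podman", "healthcheck", "run", name])
        print(name, "healthy" if result.returncode == 0 else "unhealthy")
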
Dec  3 18:57:17 compute-0 nova_compute[348325]: 2025-12-03 18:57:17.991 348329 DEBUG oslo_concurrency.lockutils [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:57:17 compute-0 nova_compute[348325]: 2025-12-03 18:57:17.992 348329 DEBUG oslo_concurrency.lockutils [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:57:18 compute-0 nova_compute[348325]: 2025-12-03 18:57:18.001 348329 DEBUG nova.virt.hardware [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  3 18:57:18 compute-0 nova_compute[348325]: 2025-12-03 18:57:18.001 348329 INFO nova.compute.claims [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] Claim successful on node compute-0.ctlplane.example.com#033[00m
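
The Acquiring/acquired pair above is oslo.concurrency's standard lock tracing: the resource tracker serializes all claims on a single "compute_resources" semaphore, so the NUMA-fit check and the claim commit happen atomically per host. A minimal sketch of that locking pattern, with a placeholder body rather than Nova's real claim logic:

    # oslo.concurrency emits the "Acquiring lock ..." /
    # "Lock ... acquired by ..." DEBUG lines seen above.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def instance_claim():
        # Placeholder: while the semaphore is held, no other claim or
        # resource-tracker update can run on this host.
        pass

    instance_claim()
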
Dec  3 18:57:18 compute-0 nova_compute[348325]: 2025-12-03 18:57:18.091 348329 DEBUG oslo_concurrency.processutils [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:57:18 compute-0 nova_compute[348325]: 2025-12-03 18:57:18.317 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:18 compute-0 nova_compute[348325]: 2025-12-03 18:57:18.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:57:18 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:57:18 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2247582862' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:57:18 compute-0 nova_compute[348325]: 2025-12-03 18:57:18.528 348329 DEBUG oslo_concurrency.processutils [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
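
Nova's RBD image backend shells out to ceph df to learn pool capacity before committing disk claims; here the call took 0.437s and shows up on the mon side as the client.openstack audit entries above. A sketch of the same query, assuming the client.openstack keyring is readable by the calling user:

    # Same command as logged above; parse the JSON the monitor returns.
    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)
    # "stats" holds cluster-wide totals; "pools" holds per-pool usage.
    print(stats["stats"]["total_bytes"], stats["stats"]["total_avail_bytes"])
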
Dec  3 18:57:18 compute-0 nova_compute[348325]: 2025-12-03 18:57:18.541 348329 DEBUG nova.compute.provider_tree [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 18:57:18 compute-0 nova_compute[348325]: 2025-12-03 18:57:18.567 348329 DEBUG nova.scheduler.client.report [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
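
The inventory dict above determines what placement will let this node schedule: capacity per resource class is (total - reserved) * allocation_ratio. A worked check with the logged numbers:

    # Placement capacity formula applied to the inventory logged above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
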
Dec  3 18:57:18 compute-0 nova_compute[348325]: 2025-12-03 18:57:18.598 348329 DEBUG oslo_concurrency.lockutils [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:57:18 compute-0 nova_compute[348325]: 2025-12-03 18:57:18.600 348329 DEBUG nova.compute.manager [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  3 18:57:18 compute-0 nova_compute[348325]: 2025-12-03 18:57:18.667 348329 DEBUG nova.compute.manager [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  3 18:57:18 compute-0 nova_compute[348325]: 2025-12-03 18:57:18.668 348329 DEBUG nova.network.neutron [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  3 18:57:18 compute-0 nova_compute[348325]: 2025-12-03 18:57:18.695 348329 INFO nova.virt.libvirt.driver [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  3 18:57:18 compute-0 nova_compute[348325]: 2025-12-03 18:57:18.726 348329 DEBUG nova.compute.manager [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  3 18:57:18 compute-0 nova_compute[348325]: 2025-12-03 18:57:18.843 348329 DEBUG nova.compute.manager [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  3 18:57:18 compute-0 nova_compute[348325]: 2025-12-03 18:57:18.845 348329 DEBUG nova.virt.libvirt.driver [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  3 18:57:18 compute-0 nova_compute[348325]: 2025-12-03 18:57:18.846 348329 INFO nova.virt.libvirt.driver [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] Creating image(s)#033[00m
Dec  3 18:57:18 compute-0 nova_compute[348325]: 2025-12-03 18:57:18.894 348329 DEBUG nova.storage.rbd_utils [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] rbd image 67a42a04-754c-489b-9aeb-12d68487d4d9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 18:57:18 compute-0 nova_compute[348325]: 2025-12-03 18:57:18.935 348329 DEBUG nova.storage.rbd_utils [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] rbd image 67a42a04-754c-489b-9aeb-12d68487d4d9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 18:57:18 compute-0 nova_compute[348325]: 2025-12-03 18:57:18.994 348329 DEBUG nova.storage.rbd_utils [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] rbd image 67a42a04-754c-489b-9aeb-12d68487d4d9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 18:57:19 compute-0 nova_compute[348325]: 2025-12-03 18:57:19.009 348329 DEBUG oslo_concurrency.lockutils [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] Acquiring lock "5cd3db9bb272569bd3ad2bd1318028e61915b864" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:57:19 compute-0 nova_compute[348325]: 2025-12-03 18:57:19.011 348329 DEBUG oslo_concurrency.lockutils [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] Lock "5cd3db9bb272569bd3ad2bd1318028e61915b864" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
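
The lock name 5cd3db9b... is not random: nova's libvirt image cache keys base files by the SHA-1 hex digest of the Glance image UUID, and the same digest names the _base file fetched and converted below. A quick check, assuming that naming scheme:

    # Assumed scheme: base-image cache key = sha1(glance image UUID).
    import hashlib

    image_id = "55982930-937b-484e-96ee-69e406a48023"   # from the log below
    print(hashlib.sha1(image_id.encode()).hexdigest())
    # expected to match the lock name above:
    # 5cd3db9bb272569bd3ad2bd1318028e61915b864
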
Dec  3 18:57:19 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:57:19 compute-0 nova_compute[348325]: 2025-12-03 18:57:19.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:57:19 compute-0 nova_compute[348325]: 2025-12-03 18:57:19.498 348329 DEBUG nova.virt.libvirt.imagebackend [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] Image locations are: [{'url': 'rbd://c1caf3ba-b2a5-5005-a11e-e955c344dccc/images/55982930-937b-484e-96ee-69e406a48023/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://c1caf3ba-b2a5-5005-a11e-e955c344dccc/images/55982930-937b-484e-96ee-69e406a48023/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Dec  3 18:57:19 compute-0 nova_compute[348325]: 2025-12-03 18:57:19.654 348329 DEBUG nova.policy [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0d49b8a0584445d09f42f33a803d4dfe', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'eda31966af554b3b92f3e55bf4c324c2', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
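
The failed network:attach_external_network check above is a routine DEBUG outcome, not an error: the tempest user only carries the reader and member roles, and that rule defaults to admin-only, so the port is simply not treated as an external-network attachment. A minimal oslo.policy sketch of the same decision; the rule strings here are assumptions standing in for the deployed policy, not copies of it:

    # Hypothetical defaults approximating the admin-only rule.
    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_default(
        policy.RuleDefault("context_is_admin", "role:admin"))
    enforcer.register_default(
        policy.RuleDefault("network:attach_external_network",
                           "rule:context_is_admin"))
    creds = {"roles": ["reader", "member"], "is_admin": False}
    print(enforcer.enforce("network:attach_external_network", {}, creds))
    # -> False, matching the "Policy check ... failed" line above
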
Dec  3 18:57:19 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1749: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec  3 18:57:20 compute-0 nova_compute[348325]: 2025-12-03 18:57:20.002 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:20 compute-0 nova_compute[348325]: 2025-12-03 18:57:20.459 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:20 compute-0 nova_compute[348325]: 2025-12-03 18:57:20.618 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:57:20 compute-0 nova_compute[348325]: 2025-12-03 18:57:20.842 348329 DEBUG oslo_concurrency.processutils [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5cd3db9bb272569bd3ad2bd1318028e61915b864.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:57:20 compute-0 nova_compute[348325]: 2025-12-03 18:57:20.951 348329 DEBUG oslo_concurrency.processutils [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5cd3db9bb272569bd3ad2bd1318028e61915b864.part --force-share --output=json" returned: 0 in 0.109s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:57:20 compute-0 nova_compute[348325]: 2025-12-03 18:57:20.953 348329 DEBUG nova.virt.images [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] 55982930-937b-484e-96ee-69e406a48023 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Dec  3 18:57:20 compute-0 nova_compute[348325]: 2025-12-03 18:57:20.954 348329 DEBUG nova.privsep.utils [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Dec  3 18:57:20 compute-0 nova_compute[348325]: 2025-12-03 18:57:20.954 348329 DEBUG oslo_concurrency.processutils [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/5cd3db9bb272569bd3ad2bd1318028e61915b864.part /var/lib/nova/instances/_base/5cd3db9bb272569bd3ad2bd1318028e61915b864.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:57:21 compute-0 nova_compute[348325]: 2025-12-03 18:57:21.257 348329 DEBUG oslo_concurrency.processutils [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/5cd3db9bb272569bd3ad2bd1318028e61915b864.part /var/lib/nova/instances/_base/5cd3db9bb272569bd3ad2bd1318028e61915b864.converted" returned: 0 in 0.303s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:57:21 compute-0 nova_compute[348325]: 2025-12-03 18:57:21.264 348329 DEBUG oslo_concurrency.processutils [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5cd3db9bb272569bd3ad2bd1318028e61915b864.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:57:21 compute-0 nova_compute[348325]: 2025-12-03 18:57:21.357 348329 DEBUG oslo_concurrency.processutils [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5cd3db9bb272569bd3ad2bd1318028e61915b864.converted --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:57:21 compute-0 nova_compute[348325]: 2025-12-03 18:57:21.358 348329 DEBUG oslo_concurrency.lockutils [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] Lock "5cd3db9bb272569bd3ad2bd1318028e61915b864" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.348s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
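
The 18:57:20-18:57:21 lines are the image-fetch critical section held under that lock: qemu-img info on the downloaded .part file (wrapped in oslo_concurrency.prlimit to cap address space at 1 GiB and CPU time at 30s while parsing untrusted images), a qcow2-to-raw conversion, then a second info to verify the result. A condensed sketch of the same sequence, with the prlimit wrapper omitted:

    # Inspect, convert, and re-verify the cached base image, following
    # the _base paths in the log above.
    import json
    import subprocess

    base = "/var/lib/nova/instances/_base/" \
           "5cd3db9bb272569bd3ad2bd1318028e61915b864"

    def qemu_img_info(path):
        out = subprocess.check_output(
            ["qemu-img", "info", "--force-share", "--output=json", path])
        return json.loads(out)

    assert qemu_img_info(base + ".part")["format"] == "qcow2"
    subprocess.check_call(
        ["qemu-img", "convert", "-t", "none", "-O", "raw", "-f", "qcow2",
         base + ".part", base + ".converted"])
    assert qemu_img_info(base + ".converted")["format"] == "raw"
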
Dec  3 18:57:21 compute-0 nova_compute[348325]: 2025-12-03 18:57:21.392 348329 DEBUG nova.storage.rbd_utils [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] rbd image 67a42a04-754c-489b-9aeb-12d68487d4d9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 18:57:21 compute-0 nova_compute[348325]: 2025-12-03 18:57:21.399 348329 DEBUG oslo_concurrency.processutils [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/5cd3db9bb272569bd3ad2bd1318028e61915b864 67a42a04-754c-489b-9aeb-12d68487d4d9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:57:21 compute-0 nova_compute[348325]: 2025-12-03 18:57:21.422 348329 DEBUG nova.network.neutron [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] Successfully created port: 856126a0-9e4c-43b6-9e00-a5fade4f2abf _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  3 18:57:21 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1750: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 683 KiB/s rd, 0 op/s
Dec  3 18:57:21 compute-0 nova_compute[348325]: 2025-12-03 18:57:21.844 348329 DEBUG oslo_concurrency.processutils [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/5cd3db9bb272569bd3ad2bd1318028e61915b864 67a42a04-754c-489b-9aeb-12d68487d4d9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:57:21 compute-0 nova_compute[348325]: 2025-12-03 18:57:21.961 348329 DEBUG nova.storage.rbd_utils [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] resizing rbd image 67a42a04-754c-489b-9aeb-12d68487d4d9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
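
With the raw base in place, the root disk is created by importing it into the vms pool and resizing it to the flavor's root disk (1073741824 bytes = 1 GiB, matching m1.nano's root_gb=1 seen further down). Nova performs the resize through librbd rather than the CLI; a CLI-based sketch of the equivalent steps:

    # Import the converted base file as the instance's root RBD image,
    # then grow it to the flavor size; the import command mirrors the
    # log above, the resize is the CLI equivalent of nova's librbd call.
    import subprocess

    base = "/var/lib/nova/instances/_base/" \
           "5cd3db9bb272569bd3ad2bd1318028e61915b864"
    disk = "67a42a04-754c-489b-9aeb-12d68487d4d9_disk"
    ceph = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

    subprocess.check_call(
        ["rbd", "import", "--pool", "vms", base, disk,
         "--image-format=2"] + ceph)
    subprocess.check_call(
        ["rbd", "resize", "--pool", "vms", disk, "--size", "1G"] + ceph)
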
Dec  3 18:57:22 compute-0 nova_compute[348325]: 2025-12-03 18:57:22.126 348329 DEBUG nova.network.neutron [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] Successfully updated port: 856126a0-9e4c-43b6-9e00-a5fade4f2abf _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  3 18:57:22 compute-0 nova_compute[348325]: 2025-12-03 18:57:22.138 348329 DEBUG nova.objects.instance [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] Lazy-loading 'migration_context' on Instance uuid 67a42a04-754c-489b-9aeb-12d68487d4d9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 18:57:22 compute-0 nova_compute[348325]: 2025-12-03 18:57:22.143 348329 DEBUG oslo_concurrency.lockutils [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] Acquiring lock "refresh_cache-67a42a04-754c-489b-9aeb-12d68487d4d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 18:57:22 compute-0 nova_compute[348325]: 2025-12-03 18:57:22.143 348329 DEBUG oslo_concurrency.lockutils [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] Acquired lock "refresh_cache-67a42a04-754c-489b-9aeb-12d68487d4d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 18:57:22 compute-0 nova_compute[348325]: 2025-12-03 18:57:22.144 348329 DEBUG nova.network.neutron [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  3 18:57:22 compute-0 nova_compute[348325]: 2025-12-03 18:57:22.171 348329 DEBUG nova.virt.libvirt.driver [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  3 18:57:22 compute-0 nova_compute[348325]: 2025-12-03 18:57:22.172 348329 DEBUG nova.virt.libvirt.driver [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] Ensure instance console log exists: /var/lib/nova/instances/67a42a04-754c-489b-9aeb-12d68487d4d9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  3 18:57:22 compute-0 nova_compute[348325]: 2025-12-03 18:57:22.172 348329 DEBUG oslo_concurrency.lockutils [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:57:22 compute-0 nova_compute[348325]: 2025-12-03 18:57:22.173 348329 DEBUG oslo_concurrency.lockutils [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:57:22 compute-0 nova_compute[348325]: 2025-12-03 18:57:22.173 348329 DEBUG oslo_concurrency.lockutils [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:57:22 compute-0 nova_compute[348325]: 2025-12-03 18:57:22.382 348329 DEBUG nova.network.neutron [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  3 18:57:22 compute-0 nova_compute[348325]: 2025-12-03 18:57:22.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:57:22 compute-0 nova_compute[348325]: 2025-12-03 18:57:22.487 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 18:57:22 compute-0 nova_compute[348325]: 2025-12-03 18:57:22.487 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  3 18:57:22 compute-0 nova_compute[348325]: 2025-12-03 18:57:22.504 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Dec  3 18:57:22 compute-0 nova_compute[348325]: 2025-12-03 18:57:22.505 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  3 18:57:22 compute-0 nova_compute[348325]: 2025-12-03 18:57:22.505 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:57:23 compute-0 nova_compute[348325]: 2025-12-03 18:57:23.182 348329 DEBUG nova.network.neutron [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] Updating instance_info_cache with network_info: [{"id": "856126a0-9e4c-43b6-9e00-a5fade4f2abf", "address": "fa:16:3e:38:0a:cb", "network": {"id": "59ddf46a-73fc-4bab-9c16-51c1e99fd6f1", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-549721878-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eda31966af554b3b92f3e55bf4c324c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap856126a0-9e", "ovs_interfaceid": "856126a0-9e4c-43b6-9e00-a5fade4f2abf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 18:57:23 compute-0 nova_compute[348325]: 2025-12-03 18:57:23.206 348329 DEBUG oslo_concurrency.lockutils [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] Releasing lock "refresh_cache-67a42a04-754c-489b-9aeb-12d68487d4d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 18:57:23 compute-0 nova_compute[348325]: 2025-12-03 18:57:23.207 348329 DEBUG nova.compute.manager [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] Instance network_info: |[{"id": "856126a0-9e4c-43b6-9e00-a5fade4f2abf", "address": "fa:16:3e:38:0a:cb", "network": {"id": "59ddf46a-73fc-4bab-9c16-51c1e99fd6f1", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-549721878-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eda31966af554b3b92f3e55bf4c324c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap856126a0-9e", "ovs_interfaceid": "856126a0-9e4c-43b6-9e00-a5fade4f2abf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
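
One detail worth noting in the cached network_info above: the network is tunneled (OVN Geneve) and carries mtu 1442, i.e. the 1500-byte physical MTU minus the encapsulation overhead. A worked check, assuming the typical 58-byte overhead OVN budgets for Geneve over an IPv4 underlay:

    # MTU arithmetic behind "mtu": 1442 in the port metadata above.
    physical_mtu = 1500
    geneve_overhead = 58   # assumed: outer Ethernet + IPv4 + UDP +
                           # Geneve header with OVN metadata option
    print(physical_mtu - geneve_overhead)   # -> 1442
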
Dec  3 18:57:23 compute-0 nova_compute[348325]: 2025-12-03 18:57:23.210 348329 DEBUG nova.virt.libvirt.driver [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] Start _get_guest_xml network_info=[{"id": "856126a0-9e4c-43b6-9e00-a5fade4f2abf", "address": "fa:16:3e:38:0a:cb", "network": {"id": "59ddf46a-73fc-4bab-9c16-51c1e99fd6f1", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-549721878-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eda31966af554b3b92f3e55bf4c324c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap856126a0-9e", "ovs_interfaceid": "856126a0-9e4c-43b6-9e00-a5fade4f2abf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-03T18:56:32Z,direct_url=<?>,disk_format='qcow2',id=55982930-937b-484e-96ee-69e406a48023,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d2770200bdb2436c90142fa2e5ddcd47',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-03T18:56:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'encryption_format': None, 'guest_format': None, 'disk_bus': 'virtio', 'size': 0, 'boot_index': 0, 'encryption_options': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'image_id': '55982930-937b-484e-96ee-69e406a48023'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  3 18:57:23 compute-0 nova_compute[348325]: 2025-12-03 18:57:23.218 348329 WARNING nova.virt.libvirt.driver [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 18:57:23 compute-0 nova_compute[348325]: 2025-12-03 18:57:23.226 348329 DEBUG nova.virt.libvirt.host [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  3 18:57:23 compute-0 nova_compute[348325]: 2025-12-03 18:57:23.227 348329 DEBUG nova.virt.libvirt.host [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  3 18:57:23 compute-0 nova_compute[348325]: 2025-12-03 18:57:23.234 348329 DEBUG nova.virt.libvirt.host [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  3 18:57:23 compute-0 nova_compute[348325]: 2025-12-03 18:57:23.235 348329 DEBUG nova.virt.libvirt.host [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
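
The two probes above show the host exposing its CPU controller through cgroups v2 only, which the driver needs to know before writing CPU tuning into the guest definition. A simplified sketch of such a detection using the standard sysfs interfaces; nova's real check goes through its own host-inspection helpers:

    # Simplified cgroup CPU-controller probe; the paths are the
    # standard kernel interfaces, the logic is an approximation.
    import os

    def has_cgroupsv1_cpu():
        # v1 mounts one directory per controller
        return os.path.isdir("/sys/fs/cgroup/cpu")

    def has_cgroupsv2_cpu():
        # v2 lists enabled controllers in a single file
        try:
            with open("/sys/fs/cgroup/cgroup.controllers") as f:
                return "cpu" in f.read().split()
        except FileNotFoundError:
            return False

    print(has_cgroupsv1_cpu(), has_cgroupsv2_cpu())   # here: False True
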
Dec  3 18:57:23 compute-0 nova_compute[348325]: 2025-12-03 18:57:23.235 348329 DEBUG nova.virt.libvirt.driver [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  3 18:57:23 compute-0 nova_compute[348325]: 2025-12-03 18:57:23.236 348329 DEBUG nova.virt.hardware [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-03T18:56:30Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a94cfbfb-a20a-4689-ac91-e7436db75880',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-03T18:56:32Z,direct_url=<?>,disk_format='qcow2',id=55982930-937b-484e-96ee-69e406a48023,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d2770200bdb2436c90142fa2e5ddcd47',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-03T18:56:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  3 18:57:23 compute-0 nova_compute[348325]: 2025-12-03 18:57:23.237 348329 DEBUG nova.virt.hardware [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  3 18:57:23 compute-0 nova_compute[348325]: 2025-12-03 18:57:23.237 348329 DEBUG nova.virt.hardware [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  3 18:57:23 compute-0 nova_compute[348325]: 2025-12-03 18:57:23.238 348329 DEBUG nova.virt.hardware [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  3 18:57:23 compute-0 nova_compute[348325]: 2025-12-03 18:57:23.238 348329 DEBUG nova.virt.hardware [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  3 18:57:23 compute-0 nova_compute[348325]: 2025-12-03 18:57:23.239 348329 DEBUG nova.virt.hardware [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  3 18:57:23 compute-0 nova_compute[348325]: 2025-12-03 18:57:23.239 348329 DEBUG nova.virt.hardware [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  3 18:57:23 compute-0 nova_compute[348325]: 2025-12-03 18:57:23.240 348329 DEBUG nova.virt.hardware [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  3 18:57:23 compute-0 nova_compute[348325]: 2025-12-03 18:57:23.241 348329 DEBUG nova.virt.hardware [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  3 18:57:23 compute-0 nova_compute[348325]: 2025-12-03 18:57:23.241 348329 DEBUG nova.virt.hardware [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  3 18:57:23 compute-0 nova_compute[348325]: 2025-12-03 18:57:23.242 348329 DEBUG nova.virt.hardware [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
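
The topology search above is fully determined by the m1.nano flavor: with 1 vCPU, no flavor or image topology hints (all the 0:0:0 limits and preferences), and 65536 as the per-dimension ceiling, the only sockets*cores*threads factorization is 1:1:1. An illustrative enumeration of that search; the real selection logic lives in nova.virt.hardware:

    # Enumerate factorizations of vcpus within the logged limits;
    # an approximation of the search, not nova's actual code.
    vcpus = 1
    max_sockets = max_cores = max_threads = 65536   # limits from the log

    topologies = [(s, c, t)
                  for s in range(1, min(vcpus, max_sockets) + 1)
                  for c in range(1, min(vcpus, max_cores) + 1)
                  for t in range(1, min(vcpus, max_threads) + 1)
                  if s * c * t == vcpus]
    print(topologies)   # -> [(1, 1, 1)]
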
Dec  3 18:57:23 compute-0 nova_compute[348325]: 2025-12-03 18:57:23.246 348329 DEBUG oslo_concurrency.processutils [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:57:23 compute-0 nova_compute[348325]: 2025-12-03 18:57:23.319 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:23.353 286999 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:57:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:23.354 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:57:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:23.354 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:57:23 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1751: 321 pgs: 321 active+clean; 65 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 396 KiB/s wr, 7 op/s
Dec  3 18:57:23 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 18:57:23 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/359983426' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 18:57:23 compute-0 nova_compute[348325]: 2025-12-03 18:57:23.768 348329 DEBUG oslo_concurrency.processutils [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
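
The mon dump calls here and just below fetch the cluster's monitor list, which nova uses to build the RBD source of the guest's disk definition. A sketch of the same query, with field names taken from ceph's usual mon dump JSON:

    # Collect monitor addresses the way the logged command does.
    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    monmap = json.loads(out)
    print([m.get("public_addr") for m in monmap.get("mons", [])])
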
Dec  3 18:57:23 compute-0 nova_compute[348325]: 2025-12-03 18:57:23.804 348329 DEBUG nova.storage.rbd_utils [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] rbd image 67a42a04-754c-489b-9aeb-12d68487d4d9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 18:57:23 compute-0 nova_compute[348325]: 2025-12-03 18:57:23.812 348329 DEBUG oslo_concurrency.processutils [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:57:24 compute-0 nova_compute[348325]: 2025-12-03 18:57:24.089 348329 DEBUG oslo_concurrency.lockutils [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Acquiring lock "eff2304f-0e67-4c93-ae65-20d4ddb87625" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:57:24 compute-0 nova_compute[348325]: 2025-12-03 18:57:24.090 348329 DEBUG oslo_concurrency.lockutils [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Lock "eff2304f-0e67-4c93-ae65-20d4ddb87625" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:57:24 compute-0 nova_compute[348325]: 2025-12-03 18:57:24.113 348329 DEBUG nova.compute.manager [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  3 18:57:24 compute-0 nova_compute[348325]: 2025-12-03 18:57:24.196 348329 DEBUG oslo_concurrency.lockutils [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:57:24 compute-0 nova_compute[348325]: 2025-12-03 18:57:24.197 348329 DEBUG oslo_concurrency.lockutils [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:57:24 compute-0 nova_compute[348325]: 2025-12-03 18:57:24.204 348329 DEBUG nova.virt.hardware [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  3 18:57:24 compute-0 nova_compute[348325]: 2025-12-03 18:57:24.205 348329 INFO nova.compute.claims [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  3 18:57:24 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 18:57:24 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/456598955' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 18:57:24 compute-0 nova_compute[348325]: 2025-12-03 18:57:24.259 348329 DEBUG oslo_concurrency.processutils [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:57:24 compute-0 nova_compute[348325]: 2025-12-03 18:57:24.261 348329 DEBUG nova.virt.libvirt.vif [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T18:57:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1180311471',display_name='tempest-ServerAddressesTestJSON-server-1180311471',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1180311471',id=6,image_ref='55982930-937b-484e-96ee-69e406a48023',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='eda31966af554b3b92f3e55bf4c324c2',ramdisk_id='',reservation_id='r-0hmdbmdo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='55982930-937b-484e-96ee-69e406a48023',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-939673438',owner_user_name='tempest-ServerAddressesTestJSON-939673438-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T18:57:18Z,user_data=None,user_id='0d49b8a0584445d09f42f33a803d4dfe',uuid=67a42a04-754c-489b-9aeb-12d68487d4d9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "856126a0-9e4c-43b6-9e00-a5fade4f2abf", "address": "fa:16:3e:38:0a:cb", "network": {"id": "59ddf46a-73fc-4bab-9c16-51c1e99fd6f1", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-549721878-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eda31966af554b3b92f3e55bf4c324c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap856126a0-9e", "ovs_interfaceid": "856126a0-9e4c-43b6-9e00-a5fade4f2abf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  3 18:57:24 compute-0 nova_compute[348325]: 2025-12-03 18:57:24.261 348329 DEBUG nova.network.os_vif_util [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] Converting VIF {"id": "856126a0-9e4c-43b6-9e00-a5fade4f2abf", "address": "fa:16:3e:38:0a:cb", "network": {"id": "59ddf46a-73fc-4bab-9c16-51c1e99fd6f1", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-549721878-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eda31966af554b3b92f3e55bf4c324c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap856126a0-9e", "ovs_interfaceid": "856126a0-9e4c-43b6-9e00-a5fade4f2abf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 18:57:24 compute-0 nova_compute[348325]: 2025-12-03 18:57:24.262 348329 DEBUG nova.network.os_vif_util [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:0a:cb,bridge_name='br-int',has_traffic_filtering=True,id=856126a0-9e4c-43b6-9e00-a5fade4f2abf,network=Network(59ddf46a-73fc-4bab-9c16-51c1e99fd6f1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap856126a0-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 18:57:24 compute-0 nova_compute[348325]: 2025-12-03 18:57:24.263 348329 DEBUG nova.objects.instance [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 67a42a04-754c-489b-9aeb-12d68487d4d9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 18:57:24 compute-0 nova_compute[348325]: 2025-12-03 18:57:24.278 348329 DEBUG nova.virt.libvirt.driver [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] End _get_guest_xml xml=<domain type="kvm">
Dec  3 18:57:24 compute-0 nova_compute[348325]:  <uuid>67a42a04-754c-489b-9aeb-12d68487d4d9</uuid>
Dec  3 18:57:24 compute-0 nova_compute[348325]:  <name>instance-00000006</name>
Dec  3 18:57:24 compute-0 nova_compute[348325]:  <memory>131072</memory>
Dec  3 18:57:24 compute-0 nova_compute[348325]:  <vcpu>1</vcpu>
Dec  3 18:57:24 compute-0 nova_compute[348325]:  <metadata>
Dec  3 18:57:24 compute-0 nova_compute[348325]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  3 18:57:24 compute-0 nova_compute[348325]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  3 18:57:24 compute-0 nova_compute[348325]:      <nova:name>tempest-ServerAddressesTestJSON-server-1180311471</nova:name>
Dec  3 18:57:24 compute-0 nova_compute[348325]:      <nova:creationTime>2025-12-03 18:57:23</nova:creationTime>
Dec  3 18:57:24 compute-0 nova_compute[348325]:      <nova:flavor name="m1.nano">
Dec  3 18:57:24 compute-0 nova_compute[348325]:        <nova:memory>128</nova:memory>
Dec  3 18:57:24 compute-0 nova_compute[348325]:        <nova:disk>1</nova:disk>
Dec  3 18:57:24 compute-0 nova_compute[348325]:        <nova:swap>0</nova:swap>
Dec  3 18:57:24 compute-0 nova_compute[348325]:        <nova:ephemeral>0</nova:ephemeral>
Dec  3 18:57:24 compute-0 nova_compute[348325]:        <nova:vcpus>1</nova:vcpus>
Dec  3 18:57:24 compute-0 nova_compute[348325]:      </nova:flavor>
Dec  3 18:57:24 compute-0 nova_compute[348325]:      <nova:owner>
Dec  3 18:57:24 compute-0 nova_compute[348325]:        <nova:user uuid="0d49b8a0584445d09f42f33a803d4dfe">tempest-ServerAddressesTestJSON-939673438-project-member</nova:user>
Dec  3 18:57:24 compute-0 nova_compute[348325]:        <nova:project uuid="eda31966af554b3b92f3e55bf4c324c2">tempest-ServerAddressesTestJSON-939673438</nova:project>
Dec  3 18:57:24 compute-0 nova_compute[348325]:      </nova:owner>
Dec  3 18:57:24 compute-0 nova_compute[348325]:      <nova:root type="image" uuid="55982930-937b-484e-96ee-69e406a48023"/>
Dec  3 18:57:24 compute-0 nova_compute[348325]:      <nova:ports>
Dec  3 18:57:24 compute-0 nova_compute[348325]:        <nova:port uuid="856126a0-9e4c-43b6-9e00-a5fade4f2abf">
Dec  3 18:57:24 compute-0 nova_compute[348325]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec  3 18:57:24 compute-0 nova_compute[348325]:        </nova:port>
Dec  3 18:57:24 compute-0 nova_compute[348325]:      </nova:ports>
Dec  3 18:57:24 compute-0 nova_compute[348325]:    </nova:instance>
Dec  3 18:57:24 compute-0 nova_compute[348325]:  </metadata>
Dec  3 18:57:24 compute-0 nova_compute[348325]:  <sysinfo type="smbios">
Dec  3 18:57:24 compute-0 nova_compute[348325]:    <system>
Dec  3 18:57:24 compute-0 nova_compute[348325]:      <entry name="manufacturer">RDO</entry>
Dec  3 18:57:24 compute-0 nova_compute[348325]:      <entry name="product">OpenStack Compute</entry>
Dec  3 18:57:24 compute-0 nova_compute[348325]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  3 18:57:24 compute-0 nova_compute[348325]:      <entry name="serial">67a42a04-754c-489b-9aeb-12d68487d4d9</entry>
Dec  3 18:57:24 compute-0 nova_compute[348325]:      <entry name="uuid">67a42a04-754c-489b-9aeb-12d68487d4d9</entry>
Dec  3 18:57:24 compute-0 nova_compute[348325]:      <entry name="family">Virtual Machine</entry>
Dec  3 18:57:24 compute-0 nova_compute[348325]:    </system>
Dec  3 18:57:24 compute-0 nova_compute[348325]:  </sysinfo>
Dec  3 18:57:24 compute-0 nova_compute[348325]:  <os>
Dec  3 18:57:24 compute-0 nova_compute[348325]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  3 18:57:24 compute-0 nova_compute[348325]:    <boot dev="hd"/>
Dec  3 18:57:24 compute-0 nova_compute[348325]:    <smbios mode="sysinfo"/>
Dec  3 18:57:24 compute-0 nova_compute[348325]:  </os>
Dec  3 18:57:24 compute-0 nova_compute[348325]:  <features>
Dec  3 18:57:24 compute-0 nova_compute[348325]:    <acpi/>
Dec  3 18:57:24 compute-0 nova_compute[348325]:    <apic/>
Dec  3 18:57:24 compute-0 nova_compute[348325]:    <vmcoreinfo/>
Dec  3 18:57:24 compute-0 nova_compute[348325]:  </features>
Dec  3 18:57:24 compute-0 nova_compute[348325]:  <clock offset="utc">
Dec  3 18:57:24 compute-0 nova_compute[348325]:    <timer name="pit" tickpolicy="delay"/>
Dec  3 18:57:24 compute-0 nova_compute[348325]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  3 18:57:24 compute-0 nova_compute[348325]:    <timer name="hpet" present="no"/>
Dec  3 18:57:24 compute-0 nova_compute[348325]:  </clock>
Dec  3 18:57:24 compute-0 nova_compute[348325]:  <cpu mode="host-model" match="exact">
Dec  3 18:57:24 compute-0 nova_compute[348325]:    <topology sockets="1" cores="1" threads="1"/>
Dec  3 18:57:24 compute-0 nova_compute[348325]:  </cpu>
Dec  3 18:57:24 compute-0 nova_compute[348325]:  <devices>
Dec  3 18:57:24 compute-0 nova_compute[348325]:    <disk type="network" device="disk">
Dec  3 18:57:24 compute-0 nova_compute[348325]:      <driver type="raw" cache="none"/>
Dec  3 18:57:24 compute-0 nova_compute[348325]:      <source protocol="rbd" name="vms/67a42a04-754c-489b-9aeb-12d68487d4d9_disk">
Dec  3 18:57:24 compute-0 nova_compute[348325]:        <host name="192.168.122.100" port="6789"/>
Dec  3 18:57:24 compute-0 nova_compute[348325]:      </source>
Dec  3 18:57:24 compute-0 nova_compute[348325]:      <auth username="openstack">
Dec  3 18:57:24 compute-0 nova_compute[348325]:        <secret type="ceph" uuid="c1caf3ba-b2a5-5005-a11e-e955c344dccc"/>
Dec  3 18:57:24 compute-0 nova_compute[348325]:      </auth>
Dec  3 18:57:24 compute-0 nova_compute[348325]:      <target dev="vda" bus="virtio"/>
Dec  3 18:57:24 compute-0 nova_compute[348325]:    </disk>
Dec  3 18:57:24 compute-0 nova_compute[348325]:    <disk type="network" device="cdrom">
Dec  3 18:57:24 compute-0 nova_compute[348325]:      <driver type="raw" cache="none"/>
Dec  3 18:57:24 compute-0 nova_compute[348325]:      <source protocol="rbd" name="vms/67a42a04-754c-489b-9aeb-12d68487d4d9_disk.config">
Dec  3 18:57:24 compute-0 nova_compute[348325]:        <host name="192.168.122.100" port="6789"/>
Dec  3 18:57:24 compute-0 nova_compute[348325]:      </source>
Dec  3 18:57:24 compute-0 nova_compute[348325]:      <auth username="openstack">
Dec  3 18:57:24 compute-0 nova_compute[348325]:        <secret type="ceph" uuid="c1caf3ba-b2a5-5005-a11e-e955c344dccc"/>
Dec  3 18:57:24 compute-0 nova_compute[348325]:      </auth>
Dec  3 18:57:24 compute-0 nova_compute[348325]:      <target dev="sda" bus="sata"/>
Dec  3 18:57:24 compute-0 nova_compute[348325]:    </disk>
Dec  3 18:57:24 compute-0 nova_compute[348325]:    <interface type="ethernet">
Dec  3 18:57:24 compute-0 nova_compute[348325]:      <mac address="fa:16:3e:38:0a:cb"/>
Dec  3 18:57:24 compute-0 nova_compute[348325]:      <model type="virtio"/>
Dec  3 18:57:24 compute-0 nova_compute[348325]:      <driver name="vhost" rx_queue_size="512"/>
Dec  3 18:57:24 compute-0 nova_compute[348325]:      <mtu size="1442"/>
Dec  3 18:57:24 compute-0 nova_compute[348325]:      <target dev="tap856126a0-9e"/>
Dec  3 18:57:24 compute-0 nova_compute[348325]:    </interface>
Dec  3 18:57:24 compute-0 nova_compute[348325]:    <serial type="pty">
Dec  3 18:57:24 compute-0 nova_compute[348325]:      <log file="/var/lib/nova/instances/67a42a04-754c-489b-9aeb-12d68487d4d9/console.log" append="off"/>
Dec  3 18:57:24 compute-0 nova_compute[348325]:    </serial>
Dec  3 18:57:24 compute-0 nova_compute[348325]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  3 18:57:24 compute-0 nova_compute[348325]:    <video>
Dec  3 18:57:24 compute-0 nova_compute[348325]:      <model type="virtio"/>
Dec  3 18:57:24 compute-0 nova_compute[348325]:    </video>
Dec  3 18:57:24 compute-0 nova_compute[348325]:    <input type="tablet" bus="usb"/>
Dec  3 18:57:24 compute-0 nova_compute[348325]:    <rng model="virtio">
Dec  3 18:57:24 compute-0 nova_compute[348325]:      <backend model="random">/dev/urandom</backend>
Dec  3 18:57:24 compute-0 nova_compute[348325]:    </rng>
Dec  3 18:57:24 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root"/>
Dec  3 18:57:24 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:24 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:24 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:24 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:24 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:24 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:24 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:24 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:24 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:24 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:24 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:24 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:24 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:24 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:24 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:24 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:24 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:24 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:24 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:24 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:24 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:24 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:24 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:24 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:24 compute-0 nova_compute[348325]:    <controller type="usb" index="0"/>
Dec  3 18:57:24 compute-0 nova_compute[348325]:    <memballoon model="virtio">
Dec  3 18:57:24 compute-0 nova_compute[348325]:      <stats period="10"/>
Dec  3 18:57:24 compute-0 nova_compute[348325]:    </memballoon>
Dec  3 18:57:24 compute-0 nova_compute[348325]:  </devices>
Dec  3 18:57:24 compute-0 nova_compute[348325]: </domain>
Dec  3 18:57:24 compute-0 nova_compute[348325]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
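The XML dump above is exactly what Nova hands to libvirt: memory is expressed in KiB (131072 KiB = the flavor's 128 MB), the root disk and config drive are RBD network disks served by the monitor at 192.168.122.100:6789, and the virtio interface carries the tunnel MTU of 1442 from the VIF above. A short standard-library sketch that pulls those facts back out of the document (domain_xml is assumed to hold the dumped text):

    import xml.etree.ElementTree as ET

    dom = ET.fromstring(domain_xml)  # assumption: the <domain> text above
    src = dom.find("./devices/disk[@device='disk']/source")
    print(src.get("protocol"), src.get("name"))
    # -> rbd vms/67a42a04-754c-489b-9aeb-12d68487d4d9_disk
    iface = dom.find("./devices/interface")
    print(iface.find("target").get("dev"), iface.find("mtu").get("size"))
    # -> tap856126a0-9e 1442
    print(int(dom.find("memory").text) // 1024, "MB")
    # -> 128 MB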
Dec  3 18:57:24 compute-0 nova_compute[348325]: 2025-12-03 18:57:24.279 348329 DEBUG nova.compute.manager [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] Preparing to wait for external event network-vif-plugged-856126a0-9e4c-43b6-9e00-a5fade4f2abf prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  3 18:57:24 compute-0 nova_compute[348325]: 2025-12-03 18:57:24.279 348329 DEBUG oslo_concurrency.lockutils [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] Acquiring lock "67a42a04-754c-489b-9aeb-12d68487d4d9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:57:24 compute-0 nova_compute[348325]: 2025-12-03 18:57:24.280 348329 DEBUG oslo_concurrency.lockutils [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] Lock "67a42a04-754c-489b-9aeb-12d68487d4d9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:57:24 compute-0 nova_compute[348325]: 2025-12-03 18:57:24.280 348329 DEBUG oslo_concurrency.lockutils [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] Lock "67a42a04-754c-489b-9aeb-12d68487d4d9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
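The three lockutils entries above are the standard oslo.concurrency pattern: an in-process lock keyed by "<instance-uuid>-events" serializes access to the shared event registry, and the log records how long the caller waited for and then held it. A minimal sketch of the same primitive (the function body is hypothetical; the decorator is plain oslo.concurrency usage):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("67a42a04-754c-489b-9aeb-12d68487d4d9-events")
    def _create_or_get_event():
        # Mutate the per-instance event dict while no other greenthread can
        # touch it; lockutils logs the waited/held durations seen above.
        pass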
Dec  3 18:57:24 compute-0 nova_compute[348325]: 2025-12-03 18:57:24.281 348329 DEBUG nova.virt.libvirt.vif [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T18:57:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1180311471',display_name='tempest-ServerAddressesTestJSON-server-1180311471',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1180311471',id=6,image_ref='55982930-937b-484e-96ee-69e406a48023',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='eda31966af554b3b92f3e55bf4c324c2',ramdisk_id='',reservation_id='r-0hmdbmdo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='55982930-937b-484e-96ee-69e406a48023',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-939673438',owner_user_name='tempest-ServerAddressesTestJSON-939673438-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T18:57:18Z,user_data=None,user_id='0d49b8a0584445d09f42f33a803d4dfe',uuid=67a42a04-754c-489b-9aeb-12d68487d4d9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "856126a0-9e4c-43b6-9e00-a5fade4f2abf", "address": "fa:16:3e:38:0a:cb", "network": {"id": "59ddf46a-73fc-4bab-9c16-51c1e99fd6f1", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-549721878-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eda31966af554b3b92f3e55bf4c324c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap856126a0-9e", "ovs_interfaceid": "856126a0-9e4c-43b6-9e00-a5fade4f2abf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  3 18:57:24 compute-0 nova_compute[348325]: 2025-12-03 18:57:24.281 348329 DEBUG nova.network.os_vif_util [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] Converting VIF {"id": "856126a0-9e4c-43b6-9e00-a5fade4f2abf", "address": "fa:16:3e:38:0a:cb", "network": {"id": "59ddf46a-73fc-4bab-9c16-51c1e99fd6f1", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-549721878-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eda31966af554b3b92f3e55bf4c324c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap856126a0-9e", "ovs_interfaceid": "856126a0-9e4c-43b6-9e00-a5fade4f2abf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 18:57:24 compute-0 nova_compute[348325]: 2025-12-03 18:57:24.281 348329 DEBUG nova.network.os_vif_util [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:0a:cb,bridge_name='br-int',has_traffic_filtering=True,id=856126a0-9e4c-43b6-9e00-a5fade4f2abf,network=Network(59ddf46a-73fc-4bab-9c16-51c1e99fd6f1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap856126a0-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 18:57:24 compute-0 nova_compute[348325]: 2025-12-03 18:57:24.282 348329 DEBUG os_vif [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:0a:cb,bridge_name='br-int',has_traffic_filtering=True,id=856126a0-9e4c-43b6-9e00-a5fade4f2abf,network=Network(59ddf46a-73fc-4bab-9c16-51c1e99fd6f1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap856126a0-9e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  3 18:57:24 compute-0 nova_compute[348325]: 2025-12-03 18:57:24.282 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:24 compute-0 nova_compute[348325]: 2025-12-03 18:57:24.283 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:57:24 compute-0 nova_compute[348325]: 2025-12-03 18:57:24.283 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 18:57:24 compute-0 nova_compute[348325]: 2025-12-03 18:57:24.288 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:24 compute-0 nova_compute[348325]: 2025-12-03 18:57:24.288 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap856126a0-9e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:57:24 compute-0 nova_compute[348325]: 2025-12-03 18:57:24.289 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap856126a0-9e, col_values=(('external_ids', {'iface-id': '856126a0-9e4c-43b6-9e00-a5fade4f2abf', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:38:0a:cb', 'vm-uuid': '67a42a04-754c-489b-9aeb-12d68487d4d9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
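The two transactions above are os-vif's OVS plugin at work: an idempotent AddPortCommand (may_exist=True), then a DbSetCommand stamping the Interface row with the external_ids that OVN matches against, where iface-id is the Neutron port UUID. A sketch of the same calls through ovsdbapp's Open_vSwitch API; the database socket path is an assumption:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        "unix:/run/openvswitch/db.sock", "Open_vSwitch")  # assumed path
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port("br-int", "tap856126a0-9e", may_exist=True))
        txn.add(api.db_set(
            "Interface", "tap856126a0-9e",
            ("external_ids", {
                "iface-id": "856126a0-9e4c-43b6-9e00-a5fade4f2abf",
                "iface-status": "active",
                "attached-mac": "fa:16:3e:38:0a:cb",
                "vm-uuid": "67a42a04-754c-489b-9aeb-12d68487d4d9"})))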
Dec  3 18:57:24 compute-0 NetworkManager[49087]: <info>  [1764788244.2927] manager: (tap856126a0-9e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/35)
Dec  3 18:57:24 compute-0 nova_compute[348325]: 2025-12-03 18:57:24.294 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  3 18:57:24 compute-0 nova_compute[348325]: 2025-12-03 18:57:24.305 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:24 compute-0 nova_compute[348325]: 2025-12-03 18:57:24.307 348329 INFO os_vif [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:0a:cb,bridge_name='br-int',has_traffic_filtering=True,id=856126a0-9e4c-43b6-9e00-a5fade4f2abf,network=Network(59ddf46a-73fc-4bab-9c16-51c1e99fd6f1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap856126a0-9e')#033[00m
Dec  3 18:57:24 compute-0 nova_compute[348325]: 2025-12-03 18:57:24.357 348329 DEBUG oslo_concurrency.processutils [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:57:24 compute-0 nova_compute[348325]: 2025-12-03 18:57:24.400 348329 DEBUG nova.virt.libvirt.driver [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  3 18:57:24 compute-0 nova_compute[348325]: 2025-12-03 18:57:24.401 348329 DEBUG nova.virt.libvirt.driver [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  3 18:57:24 compute-0 nova_compute[348325]: 2025-12-03 18:57:24.402 348329 DEBUG nova.virt.libvirt.driver [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] No VIF found with MAC fa:16:3e:38:0a:cb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  3 18:57:24 compute-0 nova_compute[348325]: 2025-12-03 18:57:24.403 348329 INFO nova.virt.libvirt.driver [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] Using config drive#033[00m
Dec  3 18:57:24 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:57:24 compute-0 nova_compute[348325]: 2025-12-03 18:57:24.443 348329 DEBUG nova.storage.rbd_utils [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] rbd image 67a42a04-754c-489b-9aeb-12d68487d4d9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 18:57:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 18:57:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:57:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 18:57:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:57:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 7.567294230644006e-05 of space, bias 1.0, pg target 0.022701882691932018 quantized to 32 (current 32)
Dec  3 18:57:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:57:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:57:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:57:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:57:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:57:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec  3 18:57:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:57:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 18:57:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:57:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:57:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:57:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 18:57:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:57:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 18:57:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:57:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:57:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:57:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
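Each pg_autoscaler line above reports the pool's share of raw capacity, its bias, and the resulting PG target; dividing target by ratio times bias gives a constant budget of 300 PGs in every row, consistent with the default mon_target_pg_per_osd=100 on a 3-OSD cluster (the pgmap entry further down shows 60 GiB raw). A quick check of that inferred arithmetic:

    # Budget inferred from the autoscaler lines above: target = ratio * bias * 300.
    budget = 300  # assumption: mon_target_pg_per_osd (100) * 3 OSDs
    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0, 0.0021557249951162337),
        "vms":                (7.567294230644006e-05, 1.0, 0.022701882691932018),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0, 0.0006104707950771635),
    }
    for name, (ratio, bias, target) in pools.items():
        assert abs(ratio * bias * budget - target) < 1e-12, name
    # The "quantized to" column is that target rounded to a power of two,
    # subject to the autoscaler's minimums and change thresholds.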
Dec  3 18:57:24 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:57:24 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1185170701' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:57:24 compute-0 nova_compute[348325]: 2025-12-03 18:57:24.795 348329 DEBUG oslo_concurrency.processutils [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
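The ceph df call above (completing 0.438s after it was launched) is how Nova's RBD image backend sizes the hypervisor's DISK_GB inventory when instances live in the vms pool. A sketch of the same probe and the fields of interest:

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]
    print(stats["total_bytes"] / 1024**3, "GiB total")
    print(stats["total_avail_bytes"] / 1024**3, "GiB avail")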
Dec  3 18:57:24 compute-0 nova_compute[348325]: 2025-12-03 18:57:24.803 348329 DEBUG nova.compute.provider_tree [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 18:57:24 compute-0 nova_compute[348325]: 2025-12-03 18:57:24.822 348329 DEBUG nova.scheduler.client.report [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
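Placement derives usable capacity per resource class as (total - reserved) * allocation_ratio, so the unchanged inventory above advertises 32 VCPUs, 7167 MB of RAM, and 52.2 GB of disk:

    inv = {  # the inventory data logged above
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, r in inv.items():
        print(rc, (r["total"] - r["reserved"]) * r["allocation_ratio"])
    # VCPU 32.0 / MEMORY_MB 7167.0 / DISK_GB 52.2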
Dec  3 18:57:24 compute-0 nova_compute[348325]: 2025-12-03 18:57:24.846 348329 DEBUG oslo_concurrency.lockutils [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:57:24 compute-0 nova_compute[348325]: 2025-12-03 18:57:24.847 348329 DEBUG nova.compute.manager [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  3 18:57:24 compute-0 nova_compute[348325]: 2025-12-03 18:57:24.897 348329 DEBUG nova.compute.manager [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  3 18:57:24 compute-0 nova_compute[348325]: 2025-12-03 18:57:24.898 348329 DEBUG nova.network.neutron [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  3 18:57:24 compute-0 nova_compute[348325]: 2025-12-03 18:57:24.923 348329 INFO nova.virt.libvirt.driver [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  3 18:57:24 compute-0 nova_compute[348325]: 2025-12-03 18:57:24.944 348329 DEBUG nova.compute.manager [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  3 18:57:25 compute-0 nova_compute[348325]: 2025-12-03 18:57:25.062 348329 DEBUG nova.compute.manager [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  3 18:57:25 compute-0 nova_compute[348325]: 2025-12-03 18:57:25.065 348329 DEBUG nova.virt.libvirt.driver [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  3 18:57:25 compute-0 nova_compute[348325]: 2025-12-03 18:57:25.066 348329 INFO nova.virt.libvirt.driver [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Creating image(s)#033[00m
Dec  3 18:57:25 compute-0 nova_compute[348325]: 2025-12-03 18:57:25.113 348329 DEBUG nova.storage.rbd_utils [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] rbd image eff2304f-0e67-4c93-ae65-20d4ddb87625_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 18:57:25 compute-0 nova_compute[348325]: 2025-12-03 18:57:25.163 348329 DEBUG nova.storage.rbd_utils [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] rbd image eff2304f-0e67-4c93-ae65-20d4ddb87625_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 18:57:25 compute-0 nova_compute[348325]: 2025-12-03 18:57:25.214 348329 DEBUG nova.storage.rbd_utils [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] rbd image eff2304f-0e67-4c93-ae65-20d4ddb87625_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 18:57:25 compute-0 nova_compute[348325]: 2025-12-03 18:57:25.223 348329 DEBUG oslo_concurrency.processutils [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5cd3db9bb272569bd3ad2bd1318028e61915b864 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:57:25 compute-0 nova_compute[348325]: 2025-12-03 18:57:25.252 348329 DEBUG nova.policy [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a7a79cf3930c41baa4cb453d75b59c70', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b1bc217751704d588f690e1b293cade8', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  3 18:57:25 compute-0 nova_compute[348325]: 2025-12-03 18:57:25.258 348329 INFO nova.virt.libvirt.driver [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] Creating config drive at /var/lib/nova/instances/67a42a04-754c-489b-9aeb-12d68487d4d9/disk.config#033[00m
Dec  3 18:57:25 compute-0 nova_compute[348325]: 2025-12-03 18:57:25.265 348329 DEBUG oslo_concurrency.processutils [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/67a42a04-754c-489b-9aeb-12d68487d4d9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf1_orw5u execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:57:25 compute-0 nova_compute[348325]: 2025-12-03 18:57:25.314 348329 DEBUG oslo_concurrency.processutils [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5cd3db9bb272569bd3ad2bd1318028e61915b864 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
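The qemu-img probe above runs under oslo's prlimit wrapper (address space capped at 1 GiB, CPU at 30 s) so that a malformed cached base image cannot wedge the compute agent; the JSON it returns carries the virtual size consulted when the RBD copy is later resized to the flavor's root_gb. A sketch of the same guarded call:

    import json
    import subprocess

    base = "/var/lib/nova/instances/_base/5cd3db9bb272569bd3ad2bd1318028e61915b864"
    out = subprocess.check_output(
        ["/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
         "--as=1073741824", "--cpu=30", "--",
         "env", "LC_ALL=C", "LANG=C",
         "qemu-img", "info", base, "--force-share", "--output=json"])
    print(json.loads(out)["virtual-size"])  # bytes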
Dec  3 18:57:25 compute-0 nova_compute[348325]: 2025-12-03 18:57:25.315 348329 DEBUG oslo_concurrency.lockutils [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Acquiring lock "5cd3db9bb272569bd3ad2bd1318028e61915b864" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:57:25 compute-0 nova_compute[348325]: 2025-12-03 18:57:25.316 348329 DEBUG oslo_concurrency.lockutils [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Lock "5cd3db9bb272569bd3ad2bd1318028e61915b864" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:57:25 compute-0 nova_compute[348325]: 2025-12-03 18:57:25.316 348329 DEBUG oslo_concurrency.lockutils [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Lock "5cd3db9bb272569bd3ad2bd1318028e61915b864" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:57:25 compute-0 nova_compute[348325]: 2025-12-03 18:57:25.356 348329 DEBUG nova.storage.rbd_utils [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] rbd image eff2304f-0e67-4c93-ae65-20d4ddb87625_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 18:57:25 compute-0 nova_compute[348325]: 2025-12-03 18:57:25.365 348329 DEBUG oslo_concurrency.processutils [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/5cd3db9bb272569bd3ad2bd1318028e61915b864 eff2304f-0e67-4c93-ae65-20d4ddb87625_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:57:25 compute-0 nova_compute[348325]: 2025-12-03 18:57:25.398 348329 DEBUG oslo_concurrency.processutils [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/67a42a04-754c-489b-9aeb-12d68487d4d9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf1_orw5u" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
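The config drive is built locally with mkisofs using the flags logged above: Joliet (-J) plus Rock Ridge (-r) and the volume label config-2 that cloud-init and other guests look for. A sketch of the same invocation, with iso_path and staging_dir standing in for the instance path and the temporary metadata tree (both assumptions):

    import subprocess

    subprocess.check_call([
        "/usr/bin/mkisofs", "-o", iso_path,   # .../67a42a04-.../disk.config
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
        "-quiet", "-J", "-r", "-V", "config-2",
        staging_dir,                          # e.g. the /tmp/tmpf1_orw5u tree
    ])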
Dec  3 18:57:25 compute-0 nova_compute[348325]: 2025-12-03 18:57:25.448 348329 DEBUG nova.storage.rbd_utils [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] rbd image 67a42a04-754c-489b-9aeb-12d68487d4d9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 18:57:25 compute-0 nova_compute[348325]: 2025-12-03 18:57:25.460 348329 DEBUG oslo_concurrency.processutils [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/67a42a04-754c-489b-9aeb-12d68487d4d9/disk.config 67a42a04-754c-489b-9aeb-12d68487d4d9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:57:25 compute-0 nova_compute[348325]: 2025-12-03 18:57:25.499 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:57:25 compute-0 nova_compute[348325]: 2025-12-03 18:57:25.502 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 18:57:25 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1752: 321 pgs: 321 active+clean; 90 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.3 MiB/s wr, 31 op/s
Dec  3 18:57:25 compute-0 nova_compute[348325]: 2025-12-03 18:57:25.754 348329 DEBUG oslo_concurrency.processutils [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/5cd3db9bb272569bd3ad2bd1318028e61915b864 eff2304f-0e67-4c93-ae65-20d4ddb87625_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.389s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:57:25 compute-0 nova_compute[348325]: 2025-12-03 18:57:25.814 348329 DEBUG oslo_concurrency.processutils [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/67a42a04-754c-489b-9aeb-12d68487d4d9/disk.config 67a42a04-754c-489b-9aeb-12d68487d4d9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.354s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:57:25 compute-0 nova_compute[348325]: 2025-12-03 18:57:25.816 348329 INFO nova.virt.libvirt.driver [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] Deleting local config drive /var/lib/nova/instances/67a42a04-754c-489b-9aeb-12d68487d4d9/disk.config because it was imported into RBD.#033[00m
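With RBD-backed instances nothing persistent stays on local disk: the ISO is rbd-imported into the vms pool as <uuid>_disk.config and the local copy is deleted, exactly as the two entries above record. The equivalent steps, sketched (iso_path is the local disk.config built by mkisofs above):

    import os
    import subprocess

    subprocess.check_call([
        "rbd", "import", "--pool", "vms", iso_path,
        "67a42a04-754c-489b-9aeb-12d68487d4d9_disk.config",
        "--image-format=2", "--id", "openstack",
        "--conf", "/etc/ceph/ceph.conf"])
    os.unlink(iso_path)  # the "Deleting local config drive" step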
Dec  3 18:57:25 compute-0 systemd[1]: Starting libvirt secret daemon...
Dec  3 18:57:25 compute-0 systemd[1]: Started libvirt secret daemon.
Dec  3 18:57:25 compute-0 nova_compute[348325]: 2025-12-03 18:57:25.878 348329 DEBUG nova.compute.manager [req-e2991b0e-85de-46ff-ba13-945053e71d11 req-a2b5fc88-d61b-4e04-b351-cc91f4577f6e 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] Received event network-changed-856126a0-9e4c-43b6-9e00-a5fade4f2abf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 18:57:25 compute-0 nova_compute[348325]: 2025-12-03 18:57:25.879 348329 DEBUG nova.compute.manager [req-e2991b0e-85de-46ff-ba13-945053e71d11 req-a2b5fc88-d61b-4e04-b351-cc91f4577f6e 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] Refreshing instance network info cache due to event network-changed-856126a0-9e4c-43b6-9e00-a5fade4f2abf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  3 18:57:25 compute-0 nova_compute[348325]: 2025-12-03 18:57:25.879 348329 DEBUG oslo_concurrency.lockutils [req-e2991b0e-85de-46ff-ba13-945053e71d11 req-a2b5fc88-d61b-4e04-b351-cc91f4577f6e 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "refresh_cache-67a42a04-754c-489b-9aeb-12d68487d4d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 18:57:25 compute-0 nova_compute[348325]: 2025-12-03 18:57:25.880 348329 DEBUG oslo_concurrency.lockutils [req-e2991b0e-85de-46ff-ba13-945053e71d11 req-a2b5fc88-d61b-4e04-b351-cc91f4577f6e 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquired lock "refresh_cache-67a42a04-754c-489b-9aeb-12d68487d4d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 18:57:25 compute-0 nova_compute[348325]: 2025-12-03 18:57:25.880 348329 DEBUG nova.network.neutron [req-e2991b0e-85de-46ff-ba13-945053e71d11 req-a2b5fc88-d61b-4e04-b351-cc91f4577f6e 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] Refreshing network info cache for port 856126a0-9e4c-43b6-9e00-a5fade4f2abf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  3 18:57:25 compute-0 nova_compute[348325]: 2025-12-03 18:57:25.891 348329 DEBUG nova.storage.rbd_utils [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] resizing rbd image eff2304f-0e67-4c93-ae65-20d4ddb87625_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  3 18:57:25 compute-0 kernel: tap856126a0-9e: entered promiscuous mode
Dec  3 18:57:25 compute-0 NetworkManager[49087]: <info>  [1764788245.9338] manager: (tap856126a0-9e): new Tun device (/org/freedesktop/NetworkManager/Devices/36)
Dec  3 18:57:25 compute-0 ovn_controller[89305]: 2025-12-03T18:57:25Z|00066|binding|INFO|Claiming lport 856126a0-9e4c-43b6-9e00-a5fade4f2abf for this chassis.
Dec  3 18:57:25 compute-0 ovn_controller[89305]: 2025-12-03T18:57:25Z|00067|binding|INFO|856126a0-9e4c-43b6-9e00-a5fade4f2abf: Claiming fa:16:3e:38:0a:cb 10.100.0.3
Dec  3 18:57:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:25.944 286999 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:0a:cb 10.100.0.3'], port_security=['fa:16:3e:38:0a:cb 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '67a42a04-754c-489b-9aeb-12d68487d4d9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-59ddf46a-73fc-4bab-9c16-51c1e99fd6f1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eda31966af554b3b92f3e55bf4c324c2', 'neutron:revision_number': '2', 'neutron:security_group_ids': '082dee80-d213-44eb-9d8e-4eef7ebaf4fb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eb886f75-8d99-4958-8b8c-820bcf2c4689, chassis=[<ovs.db.idl.Row object at 0x7f81e3e96760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f81e3e96760>], logical_port=856126a0-9e4c-43b6-9e00-a5fade4f2abf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 18:57:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:25.945 286999 INFO neutron.agent.ovn.metadata.agent [-] Port 856126a0-9e4c-43b6-9e00-a5fade4f2abf in datapath 59ddf46a-73fc-4bab-9c16-51c1e99fd6f1 bound to our chassis#033[00m
Dec  3 18:57:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:25.949 286999 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 59ddf46a-73fc-4bab-9c16-51c1e99fd6f1#033[00m
Dec  3 18:57:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:25.960 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[f991cb64-b7b9-4415-9b55-6c4348aea73a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:25.962 286999 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap59ddf46a-71 in ovnmeta-59ddf46a-73fc-4bab-9c16-51c1e99fd6f1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
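The metadata agent provisions one namespace per datapath, and the device names that follow are all derived from the network UUID: namespace ovnmeta-<uuid>, plus a veth pair whose host and namespace ends differ only in a trailing 0/1 (tap59ddf46a-70 / tap59ddf46a-71 here). A sketch of that derivation as inferred from these lines, not from the Neutron source:

    net_id = "59ddf46a-73fc-4bab-9c16-51c1e99fd6f1"
    namespace = "ovnmeta-" + net_id        # ovnmeta-59ddf46a-73fc-...
    veth_host = "tap" + net_id[:10] + "0"  # tap59ddf46a-70 (host side)
    veth_ns   = "tap" + net_id[:10] + "1"  # tap59ddf46a-71 (namespace side)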
Dec  3 18:57:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:25.963 411759 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap59ddf46a-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  3 18:57:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:25.964 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[6e3d5099-4d9b-4ad0-ab4f-783d5f46d01e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:25.965 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[e0f80d6c-52b9-4373-bc95-479000b28f26]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:25 compute-0 ovn_controller[89305]: 2025-12-03T18:57:25Z|00068|binding|INFO|Setting lport 856126a0-9e4c-43b6-9e00-a5fade4f2abf up in Southbound
Dec  3 18:57:25 compute-0 ovn_controller[89305]: 2025-12-03T18:57:25Z|00069|binding|INFO|Setting lport 856126a0-9e4c-43b6-9e00-a5fade4f2abf ovn-installed in OVS
Dec  3 18:57:25 compute-0 systemd-udevd[441130]: Network interface NamePolicy= disabled on kernel command line.
Dec  3 18:57:25 compute-0 nova_compute[348325]: 2025-12-03 18:57:25.977 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:25 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:25.980 287110 DEBUG oslo.privsep.daemon [-] privsep: reply[71417cfc-fcda-4870-b1d2-5f7cfea97c03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:25 compute-0 systemd-machined[138702]: New machine qemu-6-instance-00000006.
Dec  3 18:57:25 compute-0 NetworkManager[49087]: <info>  [1764788245.9969] device (tap856126a0-9e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  3 18:57:25 compute-0 NetworkManager[49087]: <info>  [1764788245.9981] device (tap856126a0-9e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  3 18:57:26 compute-0 systemd[1]: Started Virtual Machine qemu-6-instance-00000006.
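systemd-machined registering qemu-6-instance-00000006 means libvirt has forked the qemu process for the guest. In libvirt-python terms the spawn path reduces to defining the XML dumped earlier and creating the domain; a minimal sketch, assuming domain_xml holds that document:

    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.defineXML(domain_xml)  # persist instance-00000006
    dom.create()                      # boot it; machined names it qemu-N-<name>
    print(dom.name(), dom.ID())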
Dec  3 18:57:26 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:26.012 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[20b808cd-5b53-473e-ab55-e9ccb9218f6c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:26 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:26.045 411797 DEBUG oslo.privsep.daemon [-] privsep: reply[e02fd072-f0ac-4a22-bcd3-899befdcfcd1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:26 compute-0 NetworkManager[49087]: <info>  [1764788246.0590] manager: (tap59ddf46a-70): new Veth device (/org/freedesktop/NetworkManager/Devices/37)
Dec  3 18:57:26 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:26.058 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[aee74c96-6fee-49b1-bb9c-be28fa2cd403]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:26 compute-0 systemd-udevd[441135]: Network interface NamePolicy= disabled on kernel command line.
Dec  3 18:57:26 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:26.093 411797 DEBUG oslo.privsep.daemon [-] privsep: reply[7eba4905-1ed7-46c8-b3a3-6df46d623cf0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:26 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:26.097 411797 DEBUG oslo.privsep.daemon [-] privsep: reply[ab156369-bcd0-45f9-86e5-cfc9fcedca36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:26 compute-0 NetworkManager[49087]: <info>  [1764788246.1241] device (tap59ddf46a-70): carrier: link connected
Dec  3 18:57:26 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:26.130 411797 DEBUG oslo.privsep.daemon [-] privsep: reply[831e4fc5-4c17-4445-8740-2e5be53401b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:26 compute-0 nova_compute[348325]: 2025-12-03 18:57:26.140 348329 DEBUG nova.objects.instance [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Lazy-loading 'migration_context' on Instance uuid eff2304f-0e67-4c93-ae65-20d4ddb87625 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 18:57:26 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:26.148 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[9510145e-e384-465f-8e1c-ff0aa44c6b5b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap59ddf46a-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:df:b0:bd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 652606, 'reachable_time': 26932, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 441183, 'error': None, 'target': 'ovnmeta-59ddf46a-73fc-4bab-9c16-51c1e99fd6f1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
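The privsep reply above is the serialized result of a netlink RTM_GETLINK issued inside the ovnmeta- namespace; neutron drives pyroute2 through the privsep daemon so the agent itself need not run as root. A rough direct equivalent with pyroute2 (namespace and device names taken from the log; running this needs the same privileges privsep provides):
```python
from pyroute2 import NetNS

# Read the same link attributes the RTM_NEWLINK dump above contains.
with NetNS('ovnmeta-59ddf46a-73fc-4bab-9c16-51c1e99fd6f1') as ns:
    for link in ns.get_links():
        if link.get_attr('IFLA_IFNAME') == 'tap59ddf46a-71':
            print(link.get_attr('IFLA_OPERSTATE'),   # 'UP' in the reply
                  link.get_attr('IFLA_ADDRESS'),     # fa:16:3e:df:b0:bd
                  link.get_attr('IFLA_MTU'))         # 1500
```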
Dec  3 18:57:26 compute-0 nova_compute[348325]: 2025-12-03 18:57:26.162 348329 DEBUG nova.virt.libvirt.driver [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  3 18:57:26 compute-0 nova_compute[348325]: 2025-12-03 18:57:26.162 348329 DEBUG nova.virt.libvirt.driver [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Ensure instance console log exists: /var/lib/nova/instances/eff2304f-0e67-4c93-ae65-20d4ddb87625/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  3 18:57:26 compute-0 nova_compute[348325]: 2025-12-03 18:57:26.163 348329 DEBUG oslo_concurrency.lockutils [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:57:26 compute-0 nova_compute[348325]: 2025-12-03 18:57:26.163 348329 DEBUG oslo_concurrency.lockutils [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:57:26 compute-0 nova_compute[348325]: 2025-12-03 18:57:26.163 348329 DEBUG oslo_concurrency.lockutils [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
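The acquire/held/release triplet above is the standard trace from oslo.concurrency's lock decorator; _allocate_mdevs has nothing to do for this flavor (no vGPU requested), so the lock is held for 0.000s. The shape of such a critical section, using the lock name from the log:
```python
from oslo_concurrency import lockutils

@lockutils.synchronized('vgpu_resources')
def _allocate_mdevs():
    # Runs with the "vgpu_resources" lock held; the waited/held timings in
    # the log are emitted by this decorator's wrapper around the call.
    pass
```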
Dec  3 18:57:26 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:26.167 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[8b89bec7-741f-4d2f-8ad7-6b72d2d86e64]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedf:b0bd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 652606, 'tstamp': 652606}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 441184, 'error': None, 'target': 'ovnmeta-59ddf46a-73fc-4bab-9c16-51c1e99fd6f1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:26 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:26.186 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[e9d10be9-0523-4d1d-92db-182062fa3c7f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap59ddf46a-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:df:b0:bd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 652606, 'reachable_time': 26932, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 441185, 'error': None, 'target': 'ovnmeta-59ddf46a-73fc-4bab-9c16-51c1e99fd6f1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:26 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:26.220 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[0ed91002-1517-406c-b8bc-909a28b9bd75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:26 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:26.282 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[5ae86a9b-da94-4135-8e78-94825481dacb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:26 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:26.284 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap59ddf46a-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:57:26 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:26.284 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 18:57:26 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:26.285 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap59ddf46a-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:57:26 compute-0 NetworkManager[49087]: <info>  [1764788246.2880] manager: (tap59ddf46a-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/38)
Dec  3 18:57:26 compute-0 kernel: tap59ddf46a-70: entered promiscuous mode
Dec  3 18:57:26 compute-0 nova_compute[348325]: 2025-12-03 18:57:26.287 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:26 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:26.292 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap59ddf46a-70, col_values=(('external_ids', {'iface-id': 'bb1f708f-3b4b-4cce-acdd-ffcbf5646f27'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
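The transactions above drop the tap from br-ex if present, add it to br-int, and point its external_ids:iface-id at the Neutron port so ovn-controller can bind it. A hedged sketch of the same sequence through ovsdbapp's Open_vSwitch API, batched into one transaction for brevity where the agent logs three single-command transactions (the socket path is the usual default, not taken from this log):
```python
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

idl = connection.OvsdbIdl.from_server(
    'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
ovs = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

with ovs.transaction(check_error=True) as txn:
    txn.add(ovs.del_port('tap59ddf46a-70', bridge='br-ex', if_exists=True))
    txn.add(ovs.add_port('br-int', 'tap59ddf46a-70', may_exist=True))
    txn.add(ovs.db_set(
        'Interface', 'tap59ddf46a-70',
        ('external_ids', {'iface-id': 'bb1f708f-3b4b-4cce-acdd-ffcbf5646f27'})))
```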
Dec  3 18:57:26 compute-0 nova_compute[348325]: 2025-12-03 18:57:26.293 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:26 compute-0 ovn_controller[89305]: 2025-12-03T18:57:26Z|00070|binding|INFO|Releasing lport bb1f708f-3b4b-4cce-acdd-ffcbf5646f27 from this chassis (sb_readonly=0)
Dec  3 18:57:26 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:26.314 286999 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/59ddf46a-73fc-4bab-9c16-51c1e99fd6f1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/59ddf46a-73fc-4bab-9c16-51c1e99fd6f1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  3 18:57:26 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:26.315 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[d75c8265-b825-4721-874c-61174b1da867]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:26 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:26.316 286999 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  3 18:57:26 compute-0 ovn_metadata_agent[286994]: global
Dec  3 18:57:26 compute-0 ovn_metadata_agent[286994]:    log         /dev/log local0 debug
Dec  3 18:57:26 compute-0 ovn_metadata_agent[286994]:    log-tag     haproxy-metadata-proxy-59ddf46a-73fc-4bab-9c16-51c1e99fd6f1
Dec  3 18:57:26 compute-0 ovn_metadata_agent[286994]:    user        root
Dec  3 18:57:26 compute-0 ovn_metadata_agent[286994]:    group       root
Dec  3 18:57:26 compute-0 ovn_metadata_agent[286994]:    maxconn     1024
Dec  3 18:57:26 compute-0 ovn_metadata_agent[286994]:    pidfile     /var/lib/neutron/external/pids/59ddf46a-73fc-4bab-9c16-51c1e99fd6f1.pid.haproxy
Dec  3 18:57:26 compute-0 ovn_metadata_agent[286994]:    daemon
Dec  3 18:57:26 compute-0 ovn_metadata_agent[286994]: 
Dec  3 18:57:26 compute-0 ovn_metadata_agent[286994]: defaults
Dec  3 18:57:26 compute-0 ovn_metadata_agent[286994]:    log global
Dec  3 18:57:26 compute-0 ovn_metadata_agent[286994]:    mode http
Dec  3 18:57:26 compute-0 ovn_metadata_agent[286994]:    option httplog
Dec  3 18:57:26 compute-0 ovn_metadata_agent[286994]:    option dontlognull
Dec  3 18:57:26 compute-0 ovn_metadata_agent[286994]:    option http-server-close
Dec  3 18:57:26 compute-0 ovn_metadata_agent[286994]:    option forwardfor
Dec  3 18:57:26 compute-0 ovn_metadata_agent[286994]:    retries                 3
Dec  3 18:57:26 compute-0 ovn_metadata_agent[286994]:    timeout http-request    30s
Dec  3 18:57:26 compute-0 ovn_metadata_agent[286994]:    timeout connect         30s
Dec  3 18:57:26 compute-0 ovn_metadata_agent[286994]:    timeout client          32s
Dec  3 18:57:26 compute-0 ovn_metadata_agent[286994]:    timeout server          32s
Dec  3 18:57:26 compute-0 ovn_metadata_agent[286994]:    timeout http-keep-alive 30s
Dec  3 18:57:26 compute-0 ovn_metadata_agent[286994]: 
Dec  3 18:57:26 compute-0 ovn_metadata_agent[286994]: 
Dec  3 18:57:26 compute-0 ovn_metadata_agent[286994]: listen listener
Dec  3 18:57:26 compute-0 ovn_metadata_agent[286994]:    bind 169.254.169.254:80
Dec  3 18:57:26 compute-0 ovn_metadata_agent[286994]:    server metadata /var/lib/neutron/metadata_proxy
Dec  3 18:57:26 compute-0 ovn_metadata_agent[286994]:    http-request add-header X-OVN-Network-ID 59ddf46a-73fc-4bab-9c16-51c1e99fd6f1
Dec  3 18:57:26 compute-0 ovn_metadata_agent[286994]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
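The rendered haproxy configuration above binds the metadata VIP inside the namespace and forwards requests to the agent's UNIX socket, tagging each with X-OVN-Network-ID so the metadata service can resolve the caller. Before the agent launches it (next line), the same file can be syntax-checked offline with haproxy's check mode; a small sketch:
```python
import subprocess

# haproxy -c parses the configuration and exits non-zero on errors.
cfg = ('/var/lib/neutron/ovn-metadata-proxy/'
       '59ddf46a-73fc-4bab-9c16-51c1e99fd6f1.conf')
res = subprocess.run(['haproxy', '-c', '-f', cfg],
                     capture_output=True, text=True)
print(res.returncode, res.stdout or res.stderr)
```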
Dec  3 18:57:26 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:26.316 286999 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-59ddf46a-73fc-4bab-9c16-51c1e99fd6f1', 'env', 'PROCESS_TAG=haproxy-59ddf46a-73fc-4bab-9c16-51c1e99fd6f1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/59ddf46a-73fc-4bab-9c16-51c1e99fd6f1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  3 18:57:26 compute-0 nova_compute[348325]: 2025-12-03 18:57:26.317 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:26 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec  3 18:57:26 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec  3 18:57:26 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:26.397 286999 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5a:63:53', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '8e:79:bd:f4:48:1d'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
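The "Matched UPDATE" line is ovsdbapp's event dispatcher pairing an SB_Global row change (nb_cfg 11 to 12) with a registered RowEvent; the constructor arguments are printed in the log itself. A sketch of the handler shape (the real neutron class goes on to delay the chassis update, per the line at 18:57:27 below):
```python
from ovsdbapp.backend.ovs_idl import event as row_event

class SbGlobalUpdateEvent(row_event.RowEvent):
    def __init__(self):
        # Mirrors the log: events=('update',), table='SB_Global', no conditions.
        super().__init__((self.ROW_UPDATE,), 'SB_Global', None)
        self.event_name = 'SbGlobalUpdateEvent'

    def run(self, event, row, old):
        # Invoked after matches() succeeds; old carries the changed columns.
        print('nb_cfg moved from', old.nb_cfg, 'to', row.nb_cfg)
```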
Dec  3 18:57:26 compute-0 nova_compute[348325]: 2025-12-03 18:57:26.398 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:26 compute-0 nova_compute[348325]: 2025-12-03 18:57:26.406 348329 DEBUG nova.network.neutron [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Successfully created port: b709b4ab-585a-4aed-9f06-3c9650d54c09 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  3 18:57:26 compute-0 nova_compute[348325]: 2025-12-03 18:57:26.513 348329 DEBUG oslo_concurrency.lockutils [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Acquiring lock "59c4595c-fa0d-4410-9dda-f266cca0c9e4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:57:26 compute-0 nova_compute[348325]: 2025-12-03 18:57:26.522 348329 DEBUG oslo_concurrency.lockutils [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Lock "59c4595c-fa0d-4410-9dda-f266cca0c9e4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.009s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:57:26 compute-0 nova_compute[348325]: 2025-12-03 18:57:26.545 348329 DEBUG nova.compute.manager [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  3 18:57:26 compute-0 nova_compute[348325]: 2025-12-03 18:57:26.562 348329 DEBUG nova.virt.driver [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Emitting event <LifecycleEvent: 1764788246.5619266, 67a42a04-754c-489b-9aeb-12d68487d4d9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 18:57:26 compute-0 nova_compute[348325]: 2025-12-03 18:57:26.563 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] VM Started (Lifecycle Event)#033[00m
Dec  3 18:57:26 compute-0 nova_compute[348325]: 2025-12-03 18:57:26.597 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 18:57:26 compute-0 nova_compute[348325]: 2025-12-03 18:57:26.603 348329 DEBUG nova.virt.driver [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Emitting event <LifecycleEvent: 1764788246.5620646, 67a42a04-754c-489b-9aeb-12d68487d4d9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 18:57:26 compute-0 nova_compute[348325]: 2025-12-03 18:57:26.603 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] VM Paused (Lifecycle Event)#033[00m
Dec  3 18:57:26 compute-0 nova_compute[348325]: 2025-12-03 18:57:26.619 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 18:57:26 compute-0 nova_compute[348325]: 2025-12-03 18:57:26.625 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
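The power-state sync above compares the integer states nova stores: the DB still holds the pre-spawn value while libvirt already reports the paused guest. Decoding the integers (values from nova.compute.power_state):
```python
# nova.compute.power_state constants, as used in the line above.
POWER_STATE = {0: 'NOSTATE', 1: 'RUNNING', 3: 'PAUSED',
               4: 'SHUTDOWN', 6: 'CRASHED', 7: 'SUSPENDED'}

db_power_state, vm_power_state = 0, 3
print(POWER_STATE[db_power_state], '->', POWER_STATE[vm_power_state])
# NOSTATE -> PAUSED: the guest is created paused and resumed once VIF
# plugging completes, matching the Resumed event later in this log.
```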
Dec  3 18:57:26 compute-0 nova_compute[348325]: 2025-12-03 18:57:26.627 348329 DEBUG oslo_concurrency.lockutils [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:57:26 compute-0 nova_compute[348325]: 2025-12-03 18:57:26.627 348329 DEBUG oslo_concurrency.lockutils [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:57:26 compute-0 nova_compute[348325]: 2025-12-03 18:57:26.636 348329 DEBUG nova.virt.hardware [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  3 18:57:26 compute-0 nova_compute[348325]: 2025-12-03 18:57:26.636 348329 INFO nova.compute.claims [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  3 18:57:26 compute-0 nova_compute[348325]: 2025-12-03 18:57:26.642 348329 DEBUG oslo_concurrency.lockutils [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] Acquiring lock "47c940fc-9b39-48b6-a183-42c0547ac964" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:57:26 compute-0 nova_compute[348325]: 2025-12-03 18:57:26.642 348329 DEBUG oslo_concurrency.lockutils [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] Lock "47c940fc-9b39-48b6-a183-42c0547ac964" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:57:26 compute-0 nova_compute[348325]: 2025-12-03 18:57:26.643 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  3 18:57:26 compute-0 nova_compute[348325]: 2025-12-03 18:57:26.663 348329 DEBUG nova.compute.manager [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  3 18:57:26 compute-0 nova_compute[348325]: 2025-12-03 18:57:26.731 348329 DEBUG nova.scheduler.client.report [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Refreshing inventories for resource provider 00cd1895-22aa-49c6-bdb2-0991af662704 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  3 18:57:26 compute-0 nova_compute[348325]: 2025-12-03 18:57:26.752 348329 DEBUG oslo_concurrency.lockutils [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:57:26 compute-0 nova_compute[348325]: 2025-12-03 18:57:26.753 348329 DEBUG nova.scheduler.client.report [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Updating ProviderTree inventory for provider 00cd1895-22aa-49c6-bdb2-0991af662704 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  3 18:57:26 compute-0 nova_compute[348325]: 2025-12-03 18:57:26.753 348329 DEBUG nova.compute.provider_tree [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Updating inventory in ProviderTree for provider 00cd1895-22aa-49c6-bdb2-0991af662704 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  3 18:57:26 compute-0 nova_compute[348325]: 2025-12-03 18:57:26.768 348329 DEBUG nova.scheduler.client.report [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Refreshing aggregate associations for resource provider 00cd1895-22aa-49c6-bdb2-0991af662704, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  3 18:57:26 compute-0 nova_compute[348325]: 2025-12-03 18:57:26.788 348329 DEBUG nova.scheduler.client.report [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Refreshing trait associations for resource provider 00cd1895-22aa-49c6-bdb2-0991af662704, traits: COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_FMA3,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AESNI,HW_CPU_X86_AMD_SVM,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SVM,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_ABM,HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_BMI,HW_CPU_X86_SHA,COMPUTE_NODE,HW_CPU_X86_SSE42,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE4A,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,HW_CPU_X86_AVX2,COMPUTE_ACCELERATORS,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ARI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
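The inventory being refreshed above is what placement schedules against; usable capacity per resource class is (total - reserved) * allocation_ratio. Worked out for the figures in the log:
```python
inventory = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
}
for rc, inv in inventory.items():
    usable = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
    print(rc, usable)
# VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2 -- CPU is overcommitted 4x
# while disk is slightly undercommitted (ratio 0.9).
```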
Dec  3 18:57:26 compute-0 podman[441278]: 2025-12-03 18:57:26.793384021 +0000 UTC m=+0.084859203 container create bb1cf235ae31add9dd248b325bfc8490b5426af14d5f974e49032724129a2d3e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-59ddf46a-73fc-4bab-9c16-51c1e99fd6f1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 18:57:26 compute-0 podman[441278]: 2025-12-03 18:57:26.750020742 +0000 UTC m=+0.041495934 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  3 18:57:26 compute-0 systemd[1]: Started libpod-conmon-bb1cf235ae31add9dd248b325bfc8490b5426af14d5f974e49032724129a2d3e.scope.
Dec  3 18:57:26 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:57:26 compute-0 nova_compute[348325]: 2025-12-03 18:57:26.910 348329 DEBUG oslo_concurrency.processutils [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:57:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3d9fde7bf1872c9e512fdce77a21415bf6168ff113654fcdd9f984f763a5bbe/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  3 18:57:26 compute-0 podman[441278]: 2025-12-03 18:57:26.939435787 +0000 UTC m=+0.230911049 container init bb1cf235ae31add9dd248b325bfc8490b5426af14d5f974e49032724129a2d3e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-59ddf46a-73fc-4bab-9c16-51c1e99fd6f1, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec  3 18:57:26 compute-0 podman[441278]: 2025-12-03 18:57:26.954254279 +0000 UTC m=+0.245729481 container start bb1cf235ae31add9dd248b325bfc8490b5426af14d5f974e49032724129a2d3e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-59ddf46a-73fc-4bab-9c16-51c1e99fd6f1, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Dec  3 18:57:26 compute-0 neutron-haproxy-ovnmeta-59ddf46a-73fc-4bab-9c16-51c1e99fd6f1[441293]: [NOTICE]   (441298) : New worker (441300) forked
Dec  3 18:57:26 compute-0 neutron-haproxy-ovnmeta-59ddf46a-73fc-4bab-9c16-51c1e99fd6f1[441293]: [NOTICE]   (441298) : Loading success.
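podman has now created and started the per-network haproxy container, and its master process reports a worker forked and the config loaded. The labels recorded in the create/start events can be read back afterwards; one way, assuming the container is still running:
```python
import json
import subprocess

out = subprocess.check_output(
    ['podman', 'inspect',
     'neutron-haproxy-ovnmeta-59ddf46a-73fc-4bab-9c16-51c1e99fd6f1'])
labels = json.loads(out)[0]['Config']['Labels']
print(labels['tcib_build_tag'])   # fa2bb8efef6782c26ea7f1675eeb36dd above
```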
Dec  3 18:57:27 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:27.018 286999 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  3 18:57:27 compute-0 ovn_controller[89305]: 2025-12-03T18:57:27Z|00071|memory|INFO|peak resident set size grew 50% in last 3953.3 seconds, from 16256 kB to 24384 kB
Dec  3 18:57:27 compute-0 ovn_controller[89305]: 2025-12-03T18:57:27Z|00072|memory|INFO|idl-cells-OVN_Southbound:10465 idl-cells-Open_vSwitch:756 if_status_mgr_ifaces_state_usage-KB:1 if_status_mgr_ifaces_usage-KB:1 lflow-cache-entries-cache-expr:345 lflow-cache-entries-cache-matches:292 lflow-cache-size-KB:1471 local_datapath_usage-KB:3 ofctrl_desired_flow_usage-KB:609 ofctrl_installed_flow_usage-KB:444 ofctrl_sb_flow_ref_usage-KB:231
Dec  3 18:57:27 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:57:27 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2753320732' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:57:27 compute-0 nova_compute[348325]: 2025-12-03 18:57:27.397 348329 DEBUG oslo_concurrency.processutils [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
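The ceph df round-trip above (issued at 18:57:26.910, dispatched by the co-located mon, answered in 0.487s) is how the RBD image backend sizes the cluster. A rough equivalent of the call and the fields of interest (top-level JSON keys assumed from ceph's df output format):
```python
import json
import subprocess

out = subprocess.check_output(
    ['ceph', 'df', '--format=json', '--id', 'openstack',
     '--conf', '/etc/ceph/ceph.conf'])
df = json.loads(out)
print(df['stats']['total_bytes'], df['stats']['total_avail_bytes'])
print([p['name'] for p in df['pools']])   # e.g. the 'vms' pool used below
```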
Dec  3 18:57:27 compute-0 nova_compute[348325]: 2025-12-03 18:57:27.410 348329 DEBUG nova.compute.provider_tree [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 18:57:27 compute-0 nova_compute[348325]: 2025-12-03 18:57:27.428 348329 DEBUG nova.scheduler.client.report [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 18:57:27 compute-0 nova_compute[348325]: 2025-12-03 18:57:27.444 348329 DEBUG nova.network.neutron [req-e2991b0e-85de-46ff-ba13-945053e71d11 req-a2b5fc88-d61b-4e04-b351-cc91f4577f6e 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] Updated VIF entry in instance network info cache for port 856126a0-9e4c-43b6-9e00-a5fade4f2abf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  3 18:57:27 compute-0 nova_compute[348325]: 2025-12-03 18:57:27.444 348329 DEBUG nova.network.neutron [req-e2991b0e-85de-46ff-ba13-945053e71d11 req-a2b5fc88-d61b-4e04-b351-cc91f4577f6e 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] Updating instance_info_cache with network_info: [{"id": "856126a0-9e4c-43b6-9e00-a5fade4f2abf", "address": "fa:16:3e:38:0a:cb", "network": {"id": "59ddf46a-73fc-4bab-9c16-51c1e99fd6f1", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-549721878-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eda31966af554b3b92f3e55bf4c324c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap856126a0-9e", "ovs_interfaceid": "856126a0-9e4c-43b6-9e00-a5fade4f2abf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 18:57:27 compute-0 nova_compute[348325]: 2025-12-03 18:57:27.460 348329 DEBUG oslo_concurrency.lockutils [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.833s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:57:27 compute-0 nova_compute[348325]: 2025-12-03 18:57:27.461 348329 DEBUG nova.compute.manager [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  3 18:57:27 compute-0 nova_compute[348325]: 2025-12-03 18:57:27.466 348329 DEBUG oslo_concurrency.lockutils [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.715s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:57:27 compute-0 nova_compute[348325]: 2025-12-03 18:57:27.469 348329 DEBUG oslo_concurrency.lockutils [req-e2991b0e-85de-46ff-ba13-945053e71d11 req-a2b5fc88-d61b-4e04-b351-cc91f4577f6e 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Releasing lock "refresh_cache-67a42a04-754c-489b-9aeb-12d68487d4d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 18:57:27 compute-0 nova_compute[348325]: 2025-12-03 18:57:27.479 348329 DEBUG nova.virt.hardware [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  3 18:57:27 compute-0 nova_compute[348325]: 2025-12-03 18:57:27.479 348329 INFO nova.compute.claims [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  3 18:57:27 compute-0 nova_compute[348325]: 2025-12-03 18:57:27.526 348329 DEBUG nova.compute.manager [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  3 18:57:27 compute-0 nova_compute[348325]: 2025-12-03 18:57:27.526 348329 DEBUG nova.network.neutron [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  3 18:57:27 compute-0 nova_compute[348325]: 2025-12-03 18:57:27.556 348329 INFO nova.virt.libvirt.driver [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  3 18:57:27 compute-0 nova_compute[348325]: 2025-12-03 18:57:27.582 348329 DEBUG nova.compute.manager [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  3 18:57:27 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1753: 321 pgs: 321 active+clean; 113 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.9 MiB/s wr, 35 op/s
Dec  3 18:57:27 compute-0 nova_compute[348325]: 2025-12-03 18:57:27.716 348329 DEBUG nova.compute.manager [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  3 18:57:27 compute-0 nova_compute[348325]: 2025-12-03 18:57:27.719 348329 DEBUG nova.virt.libvirt.driver [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  3 18:57:27 compute-0 nova_compute[348325]: 2025-12-03 18:57:27.720 348329 INFO nova.virt.libvirt.driver [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] Creating image(s)#033[00m
Dec  3 18:57:27 compute-0 nova_compute[348325]: 2025-12-03 18:57:27.774 348329 DEBUG nova.storage.rbd_utils [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] rbd image 59c4595c-fa0d-4410-9dda-f266cca0c9e4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 18:57:27 compute-0 nova_compute[348325]: 2025-12-03 18:57:27.830 348329 DEBUG nova.storage.rbd_utils [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] rbd image 59c4595c-fa0d-4410-9dda-f266cca0c9e4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 18:57:27 compute-0 nova_compute[348325]: 2025-12-03 18:57:27.882 348329 DEBUG nova.storage.rbd_utils [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] rbd image 59c4595c-fa0d-4410-9dda-f266cca0c9e4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 18:57:27 compute-0 nova_compute[348325]: 2025-12-03 18:57:27.894 348329 DEBUG oslo_concurrency.processutils [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5cd3db9bb272569bd3ad2bd1318028e61915b864 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:57:27 compute-0 nova_compute[348325]: 2025-12-03 18:57:27.928 348329 DEBUG nova.policy [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5d41669fc94f4811803f4ebf54dbcebc', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '86bd600007a042cea64439c21bd920b0', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
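The failed policy check above is nova asking oslo.policy whether this member/reader token may attach to external networks; the answer for these credentials is no, and nova allocates accordingly. A schematic of the check (the rule string below assumes the admin-only default; enforcer setup is abbreviated):
```python
from oslo_config import cfg
from oslo_policy import policy

enforcer = policy.Enforcer(cfg.CONF)
enforcer.register_default(policy.RuleDefault(
    'network:attach_external_network', 'is_admin:True'))

creds = {'is_admin': False, 'roles': ['member', 'reader'],
         'project_id': '86bd600007a042cea64439c21bd920b0'}
print(enforcer.enforce('network:attach_external_network', {}, creds))
# False, matching the "Policy check ... failed" line above.
```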
Dec  3 18:57:27 compute-0 nova_compute[348325]: 2025-12-03 18:57:27.937 348329 DEBUG oslo_concurrency.processutils [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:57:27 compute-0 nova_compute[348325]: 2025-12-03 18:57:27.970 348329 DEBUG nova.network.neutron [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Successfully updated port: b709b4ab-585a-4aed-9f06-3c9650d54c09 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  3 18:57:27 compute-0 nova_compute[348325]: 2025-12-03 18:57:27.974 348329 DEBUG oslo_concurrency.processutils [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5cd3db9bb272569bd3ad2bd1318028e61915b864 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
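The qemu-img probe above is wrapped in oslo_concurrency.prlimit to cap its address space (1 GiB) and CPU time (30 s) so a malformed image cannot wedge the compute service. Stripped of the wrapper, the call and the fields nova reads:
```python
import json
import subprocess

base = '/var/lib/nova/instances/_base/5cd3db9bb272569bd3ad2bd1318028e61915b864'
info = json.loads(subprocess.check_output(
    ['qemu-img', 'info', base, '--force-share', '--output=json']))
print(info['format'], info['virtual-size'])   # format and size in bytes
```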
Dec  3 18:57:27 compute-0 nova_compute[348325]: 2025-12-03 18:57:27.975 348329 DEBUG oslo_concurrency.lockutils [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Acquiring lock "5cd3db9bb272569bd3ad2bd1318028e61915b864" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:57:27 compute-0 nova_compute[348325]: 2025-12-03 18:57:27.976 348329 DEBUG oslo_concurrency.lockutils [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Lock "5cd3db9bb272569bd3ad2bd1318028e61915b864" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:57:27 compute-0 nova_compute[348325]: 2025-12-03 18:57:27.976 348329 DEBUG oslo_concurrency.lockutils [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Lock "5cd3db9bb272569bd3ad2bd1318028e61915b864" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:57:28 compute-0 nova_compute[348325]: 2025-12-03 18:57:28.011 348329 DEBUG nova.storage.rbd_utils [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] rbd image 59c4595c-fa0d-4410-9dda-f266cca0c9e4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 18:57:28 compute-0 nova_compute[348325]: 2025-12-03 18:57:28.018 348329 DEBUG oslo_concurrency.processutils [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/5cd3db9bb272569bd3ad2bd1318028e61915b864 59c4595c-fa0d-4410-9dda-f266cca0c9e4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
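The repeated "does not exist" probes, followed by the rbd import above, are nova's RBD backend checking for the instance disk and then seeding it from the cached base image. The probe reduces to an open attempt with the python rbd bindings; a sketch using the pool, image name and credentials from the log:
```python
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
cluster.connect()
ioctx = cluster.open_ioctx('vms')
try:
    with rbd.Image(ioctx, '59c4595c-fa0d-4410-9dda-f266cca0c9e4_disk'):
        print('image exists')
except rbd.ImageNotFound:
    print('rbd image does not exist')   # the DEBUG lines above
finally:
    ioctx.close()
    cluster.shutdown()
```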
Dec  3 18:57:28 compute-0 nova_compute[348325]: 2025-12-03 18:57:28.042 348329 DEBUG oslo_concurrency.lockutils [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Acquiring lock "refresh_cache-eff2304f-0e67-4c93-ae65-20d4ddb87625" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 18:57:28 compute-0 nova_compute[348325]: 2025-12-03 18:57:28.042 348329 DEBUG oslo_concurrency.lockutils [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Acquired lock "refresh_cache-eff2304f-0e67-4c93-ae65-20d4ddb87625" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 18:57:28 compute-0 nova_compute[348325]: 2025-12-03 18:57:28.042 348329 DEBUG nova.network.neutron [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  3 18:57:28 compute-0 nova_compute[348325]: 2025-12-03 18:57:28.216 348329 DEBUG nova.compute.manager [req-145c5f5b-a78f-4376-8edc-d501fe7cda49 req-9a1eb116-ab93-43b1-8b24-3f613f2f77f3 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] Received event network-vif-plugged-856126a0-9e4c-43b6-9e00-a5fade4f2abf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 18:57:28 compute-0 nova_compute[348325]: 2025-12-03 18:57:28.216 348329 DEBUG oslo_concurrency.lockutils [req-145c5f5b-a78f-4376-8edc-d501fe7cda49 req-9a1eb116-ab93-43b1-8b24-3f613f2f77f3 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "67a42a04-754c-489b-9aeb-12d68487d4d9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:57:28 compute-0 nova_compute[348325]: 2025-12-03 18:57:28.216 348329 DEBUG oslo_concurrency.lockutils [req-145c5f5b-a78f-4376-8edc-d501fe7cda49 req-9a1eb116-ab93-43b1-8b24-3f613f2f77f3 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "67a42a04-754c-489b-9aeb-12d68487d4d9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:57:28 compute-0 nova_compute[348325]: 2025-12-03 18:57:28.216 348329 DEBUG oslo_concurrency.lockutils [req-145c5f5b-a78f-4376-8edc-d501fe7cda49 req-9a1eb116-ab93-43b1-8b24-3f613f2f77f3 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "67a42a04-754c-489b-9aeb-12d68487d4d9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:57:28 compute-0 nova_compute[348325]: 2025-12-03 18:57:28.217 348329 DEBUG nova.compute.manager [req-145c5f5b-a78f-4376-8edc-d501fe7cda49 req-9a1eb116-ab93-43b1-8b24-3f613f2f77f3 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] Processing event network-vif-plugged-856126a0-9e4c-43b6-9e00-a5fade4f2abf _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  3 18:57:28 compute-0 nova_compute[348325]: 2025-12-03 18:57:28.217 348329 DEBUG nova.compute.manager [req-145c5f5b-a78f-4376-8edc-d501fe7cda49 req-9a1eb116-ab93-43b1-8b24-3f613f2f77f3 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] Received event network-vif-plugged-856126a0-9e4c-43b6-9e00-a5fade4f2abf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 18:57:28 compute-0 nova_compute[348325]: 2025-12-03 18:57:28.217 348329 DEBUG oslo_concurrency.lockutils [req-145c5f5b-a78f-4376-8edc-d501fe7cda49 req-9a1eb116-ab93-43b1-8b24-3f613f2f77f3 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "67a42a04-754c-489b-9aeb-12d68487d4d9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:57:28 compute-0 nova_compute[348325]: 2025-12-03 18:57:28.217 348329 DEBUG oslo_concurrency.lockutils [req-145c5f5b-a78f-4376-8edc-d501fe7cda49 req-9a1eb116-ab93-43b1-8b24-3f613f2f77f3 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "67a42a04-754c-489b-9aeb-12d68487d4d9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:57:28 compute-0 nova_compute[348325]: 2025-12-03 18:57:28.217 348329 DEBUG oslo_concurrency.lockutils [req-145c5f5b-a78f-4376-8edc-d501fe7cda49 req-9a1eb116-ab93-43b1-8b24-3f613f2f77f3 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "67a42a04-754c-489b-9aeb-12d68487d4d9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:57:28 compute-0 nova_compute[348325]: 2025-12-03 18:57:28.217 348329 DEBUG nova.compute.manager [req-145c5f5b-a78f-4376-8edc-d501fe7cda49 req-9a1eb116-ab93-43b1-8b24-3f613f2f77f3 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] No waiting events found dispatching network-vif-plugged-856126a0-9e4c-43b6-9e00-a5fade4f2abf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  3 18:57:28 compute-0 nova_compute[348325]: 2025-12-03 18:57:28.218 348329 WARNING nova.compute.manager [req-145c5f5b-a78f-4376-8edc-d501fe7cda49 req-9a1eb116-ab93-43b1-8b24-3f613f2f77f3 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] Received unexpected event network-vif-plugged-856126a0-9e4c-43b6-9e00-a5fade4f2abf for instance with vm_state building and task_state spawning.#033[00m
Dec  3 18:57:28 compute-0 nova_compute[348325]: 2025-12-03 18:57:28.218 348329 DEBUG nova.compute.manager [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
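Reading the event traffic above: neutron delivered network-vif-plugged twice; the first pop completed the thread blocked in wait_for_instance_event (the "wait completed in 1 seconds" line), and the second found nothing registered, hence "No waiting events found" and the WARNING. A minimal latch illustrating that sequencing (schematic only; nova keys real events per instance and event name and uses its own eventing rather than bare threading):
```python
import threading

_waiters = {}

def prepare(key):
    _waiters[key] = threading.Event()

def pop(key):
    ev = _waiters.pop(key, None)
    if ev is None:
        print('No waiting events found dispatching', key)   # second delivery
        return
    ev.set()                                                 # completes waiter

key = 'network-vif-plugged-856126a0-9e4c-43b6-9e00-a5fade4f2abf'
prepare(key)   # wait_for_instance_event registers before plugging the VIF
pop(key)       # first delivery completes the wait
pop(key)       # repeat delivery is "unexpected"
```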
Dec  3 18:57:28 compute-0 nova_compute[348325]: 2025-12-03 18:57:28.223 348329 DEBUG nova.virt.driver [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Emitting event <LifecycleEvent: 1764788248.2227616, 67a42a04-754c-489b-9aeb-12d68487d4d9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 18:57:28 compute-0 nova_compute[348325]: 2025-12-03 18:57:28.224 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] VM Resumed (Lifecycle Event)#033[00m
Dec  3 18:57:28 compute-0 nova_compute[348325]: 2025-12-03 18:57:28.226 348329 DEBUG nova.virt.libvirt.driver [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  3 18:57:28 compute-0 nova_compute[348325]: 2025-12-03 18:57:28.233 348329 INFO nova.virt.libvirt.driver [-] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] Instance spawned successfully.#033[00m
Dec  3 18:57:28 compute-0 nova_compute[348325]: 2025-12-03 18:57:28.233 348329 DEBUG nova.virt.libvirt.driver [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  3 18:57:28 compute-0 nova_compute[348325]: 2025-12-03 18:57:28.246 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 18:57:28 compute-0 nova_compute[348325]: 2025-12-03 18:57:28.255 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  3 18:57:28 compute-0 nova_compute[348325]: 2025-12-03 18:57:28.269 348329 DEBUG nova.virt.libvirt.driver [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 18:57:28 compute-0 nova_compute[348325]: 2025-12-03 18:57:28.269 348329 DEBUG nova.virt.libvirt.driver [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 18:57:28 compute-0 nova_compute[348325]: 2025-12-03 18:57:28.270 348329 DEBUG nova.virt.libvirt.driver [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 18:57:28 compute-0 nova_compute[348325]: 2025-12-03 18:57:28.270 348329 DEBUG nova.virt.libvirt.driver [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 18:57:28 compute-0 nova_compute[348325]: 2025-12-03 18:57:28.271 348329 DEBUG nova.virt.libvirt.driver [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 18:57:28 compute-0 nova_compute[348325]: 2025-12-03 18:57:28.271 348329 DEBUG nova.virt.libvirt.driver [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 18:57:28 compute-0 nova_compute[348325]: 2025-12-03 18:57:28.276 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
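The "Skip" above follows directly from the two states logged a few lines earlier: the database still records power_state 0 (NOSTATE) while libvirt reports 1 (RUNNING), but a task (spawning) is still in flight, so the lifecycle handler defers rather than correcting the record mid-operation. A hedged sketch of that decision rule, with hypothetical constants mirroring nova's power-state codes:

```python
# Hypothetical constants mirroring nova's power-state codes (0 = NOSTATE, 1 = RUNNING).
NOSTATE, RUNNING = 0, 1

def sync_power_state(db_power_state, vm_power_state, task_state):
    """Return the action a simplified lifecycle-event handler would take.

    If any task is pending (e.g. 'spawning'), the sync is skipped so the
    in-flight operation can finish and record the final state itself.
    """
    if task_state is not None:
        return "skip: pending task %s" % task_state
    if db_power_state != vm_power_state:
        return "update DB power_state %d -> %d" % (db_power_state, vm_power_state)
    return "in sync"

# The situation logged above: DB says 0, hypervisor says 1, task 'spawning'.
print(sync_power_state(NOSTATE, RUNNING, "spawning"))  # skip: pending task spawning
```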
Dec  3 18:57:28 compute-0 nova_compute[348325]: 2025-12-03 18:57:28.322 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:28 compute-0 nova_compute[348325]: 2025-12-03 18:57:28.336 348329 INFO nova.compute.manager [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] Took 9.49 seconds to spawn the instance on the hypervisor.#033[00m
Dec  3 18:57:28 compute-0 nova_compute[348325]: 2025-12-03 18:57:28.336 348329 DEBUG nova.compute.manager [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 18:57:28 compute-0 nova_compute[348325]: 2025-12-03 18:57:28.387 348329 INFO nova.compute.manager [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] Took 10.43 seconds to build instance.#033[00m
Dec  3 18:57:28 compute-0 nova_compute[348325]: 2025-12-03 18:57:28.400 348329 DEBUG oslo_concurrency.lockutils [None req-3d519e76-6ae4-4361-9226-bc8196d0bca0 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] Lock "67a42a04-754c-489b-9aeb-12d68487d4d9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.575s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:57:28 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:57:28 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2844726711' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:57:28 compute-0 nova_compute[348325]: 2025-12-03 18:57:28.443 348329 DEBUG oslo_concurrency.processutils [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/5cd3db9bb272569bd3ad2bd1318028e61915b864 59c4595c-fa0d-4410-9dda-f266cca0c9e4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:57:28 compute-0 nova_compute[348325]: 2025-12-03 18:57:28.493 348329 DEBUG oslo_concurrency.processutils [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:57:28 compute-0 nova_compute[348325]: 2025-12-03 18:57:28.568 348329 DEBUG nova.storage.rbd_utils [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] resizing rbd image 59c4595c-fa0d-4410-9dda-f266cca0c9e4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  3 18:57:28 compute-0 nova_compute[348325]: 2025-12-03 18:57:28.634 348329 DEBUG nova.compute.provider_tree [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 18:57:28 compute-0 nova_compute[348325]: 2025-12-03 18:57:28.654 348329 DEBUG nova.scheduler.client.report [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
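The inventory dict above is enough to recompute what placement will actually schedule against: usable capacity per resource class is (total - reserved) * allocation_ratio. A small worked check against the logged numbers:

```python
# Inventory exactly as logged for provider 00cd1895-22aa-49c6-bdb2-0991af662704.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
}

for rc, inv in inventory.items():
    # Placement's effective capacity: what the scheduler can hand out in total.
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {capacity:g}")

# VCPU: 32   MEMORY_MB: 7167   DISK_GB: 52.2
```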
Dec  3 18:57:28 compute-0 nova_compute[348325]: 2025-12-03 18:57:28.696 348329 DEBUG oslo_concurrency.lockutils [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.230s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:57:28 compute-0 nova_compute[348325]: 2025-12-03 18:57:28.697 348329 DEBUG nova.compute.manager [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  3 18:57:28 compute-0 nova_compute[348325]: 2025-12-03 18:57:28.780 348329 DEBUG nova.objects.instance [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Lazy-loading 'migration_context' on Instance uuid 59c4595c-fa0d-4410-9dda-f266cca0c9e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 18:57:28 compute-0 nova_compute[348325]: 2025-12-03 18:57:28.792 348329 DEBUG nova.compute.manager [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  3 18:57:28 compute-0 nova_compute[348325]: 2025-12-03 18:57:28.793 348329 DEBUG nova.network.neutron [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  3 18:57:28 compute-0 nova_compute[348325]: 2025-12-03 18:57:28.801 348329 DEBUG nova.virt.libvirt.driver [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  3 18:57:28 compute-0 nova_compute[348325]: 2025-12-03 18:57:28.801 348329 DEBUG nova.virt.libvirt.driver [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] Ensure instance console log exists: /var/lib/nova/instances/59c4595c-fa0d-4410-9dda-f266cca0c9e4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  3 18:57:28 compute-0 nova_compute[348325]: 2025-12-03 18:57:28.802 348329 DEBUG oslo_concurrency.lockutils [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:57:28 compute-0 nova_compute[348325]: 2025-12-03 18:57:28.803 348329 DEBUG oslo_concurrency.lockutils [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:57:28 compute-0 nova_compute[348325]: 2025-12-03 18:57:28.803 348329 DEBUG oslo_concurrency.lockutils [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:57:28 compute-0 nova_compute[348325]: 2025-12-03 18:57:28.814 348329 DEBUG nova.network.neutron [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  3 18:57:28 compute-0 nova_compute[348325]: 2025-12-03 18:57:28.820 348329 INFO nova.virt.libvirt.driver [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  3 18:57:28 compute-0 nova_compute[348325]: 2025-12-03 18:57:28.857 348329 DEBUG nova.compute.manager [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  3 18:57:28 compute-0 nova_compute[348325]: 2025-12-03 18:57:28.995 348329 DEBUG nova.compute.manager [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  3 18:57:28 compute-0 nova_compute[348325]: 2025-12-03 18:57:28.998 348329 DEBUG nova.virt.libvirt.driver [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  3 18:57:29 compute-0 nova_compute[348325]: 2025-12-03 18:57:29.000 348329 INFO nova.virt.libvirt.driver [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] Creating image(s)#033[00m
Dec  3 18:57:29 compute-0 nova_compute[348325]: 2025-12-03 18:57:29.049 348329 DEBUG nova.storage.rbd_utils [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] rbd image 47c940fc-9b39-48b6-a183-42c0547ac964_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 18:57:29 compute-0 nova_compute[348325]: 2025-12-03 18:57:29.094 348329 DEBUG nova.storage.rbd_utils [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] rbd image 47c940fc-9b39-48b6-a183-42c0547ac964_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 18:57:29 compute-0 nova_compute[348325]: 2025-12-03 18:57:29.133 348329 DEBUG nova.storage.rbd_utils [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] rbd image 47c940fc-9b39-48b6-a183-42c0547ac964_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 18:57:29 compute-0 nova_compute[348325]: 2025-12-03 18:57:29.140 348329 DEBUG oslo_concurrency.processutils [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5cd3db9bb272569bd3ad2bd1318028e61915b864 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:57:29 compute-0 nova_compute[348325]: 2025-12-03 18:57:29.209 348329 DEBUG oslo_concurrency.processutils [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5cd3db9bb272569bd3ad2bd1318028e61915b864 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
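The prlimit wrapper in the CMD above caps address space (1 GiB) and CPU time (30 s) so that a corrupt or hostile image cannot exhaust the compute host through qemu-img. A sketch of issuing the same logged command and reading its JSON report (paths taken from the log; error handling kept minimal):

```python
import json
import subprocess

BASE = "/var/lib/nova/instances/_base/5cd3db9bb272569bd3ad2bd1318028e61915b864"

# Same invocation as the logged CMD: resource-limited qemu-img info with JSON output.
cmd = [
    "/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
    "--as=1073741824",   # cap address space at 1 GiB
    "--cpu=30",          # cap CPU time at 30 s
    "--", "env", "LC_ALL=C", "LANG=C",
    "qemu-img", "info", BASE, "--force-share", "--output=json",
]
out = subprocess.run(cmd, capture_output=True, check=True, text=True).stdout
info = json.loads(out)
# qemu-img's JSON report carries the fields the image cache cares about.
print(info["format"], info["virtual-size"])
```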
Dec  3 18:57:29 compute-0 nova_compute[348325]: 2025-12-03 18:57:29.210 348329 DEBUG oslo_concurrency.lockutils [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] Acquiring lock "5cd3db9bb272569bd3ad2bd1318028e61915b864" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:57:29 compute-0 nova_compute[348325]: 2025-12-03 18:57:29.211 348329 DEBUG oslo_concurrency.lockutils [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] Lock "5cd3db9bb272569bd3ad2bd1318028e61915b864" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:57:29 compute-0 nova_compute[348325]: 2025-12-03 18:57:29.211 348329 DEBUG oslo_concurrency.lockutils [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] Lock "5cd3db9bb272569bd3ad2bd1318028e61915b864" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:57:29 compute-0 nova_compute[348325]: 2025-12-03 18:57:29.239 348329 DEBUG nova.storage.rbd_utils [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] rbd image 47c940fc-9b39-48b6-a183-42c0547ac964_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 18:57:29 compute-0 nova_compute[348325]: 2025-12-03 18:57:29.245 348329 DEBUG oslo_concurrency.processutils [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/5cd3db9bb272569bd3ad2bd1318028e61915b864 47c940fc-9b39-48b6-a183-42c0547ac964_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:57:29 compute-0 nova_compute[348325]: 2025-12-03 18:57:29.292 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:29 compute-0 nova_compute[348325]: 2025-12-03 18:57:29.391 348329 DEBUG nova.policy [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd1fe8dd2488b4bf3ab1fb503816c5da9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8356f2a17c1f4ae2a3e07cdcc6e6f6da', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
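The policy failure above is expected for a regular tenant: nova's default for network:attach_external_network has historically been admin-only, and the logged credentials show roles ['reader', 'member'] with is_admin False. A deliberately simplified stand-in for that check (not the real oslo.policy evaluation):

```python
# Hypothetical simplification: treat network:attach_external_network as admin-only,
# which is nova's historical default for this policy.
def check_attach_external_network(creds):
    return creds.get("is_admin", False)

creds = {"is_admin": False, "roles": ["reader", "member"]}
print(check_attach_external_network(creds))  # False -> the check fails, as logged
```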
Dec  3 18:57:29 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:57:29 compute-0 nova_compute[348325]: 2025-12-03 18:57:29.593 348329 DEBUG oslo_concurrency.processutils [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/5cd3db9bb272569bd3ad2bd1318028e61915b864 47c940fc-9b39-48b6-a183-42c0547ac964_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.348s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:57:29 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1754: 321 pgs: 321 active+clean; 137 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.9 MiB/s wr, 39 op/s
Dec  3 18:57:29 compute-0 nova_compute[348325]: 2025-12-03 18:57:29.708 348329 DEBUG nova.storage.rbd_utils [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] resizing rbd image 47c940fc-9b39-48b6-a183-42c0547ac964_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
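The import/resize pair above is the Rbd image backend's cache-miss path: the flat base file is pushed into the vms pool, then grown to the flavor's root disk (1 GiB here, i.e. the 1073741824 bytes in the resize message). A hedged CLI-level sketch of the same two steps, with the pool, client ID, and conf path taken from the log:

```python
import subprocess

base = "/var/lib/nova/instances/_base/5cd3db9bb272569bd3ad2bd1318028e61915b864"
image = "47c940fc-9b39-48b6-a183-42c0547ac964_disk"
root_gb = 1  # flavor root disk; 1 GiB == 1073741824 bytes, the logged resize target

# Step 1: import the cached flat base file into the 'vms' pool (image-format 2
# enables layering/cloning), exactly as the logged CMD does.
subprocess.run(["rbd", "import", "--pool", "vms", base, image,
                "--image-format=2", "--id", "openstack",
                "--conf", "/etc/ceph/ceph.conf"], check=True)

# Step 2: grow the image to the flavor size. Nova itself calls librbd's
# resize() with a byte count; the CLI equivalent takes a unit suffix.
subprocess.run(["rbd", "resize", "--pool", "vms", image,
                "--size", f"{root_gb}G", "--id", "openstack",
                "--conf", "/etc/ceph/ceph.conf"], check=True)
```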
Dec  3 18:57:29 compute-0 podman[158200]: time="2025-12-03T18:57:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:57:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:57:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 18:57:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:57:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8637 "" "Go-http-client/1.1"
Dec  3 18:57:29 compute-0 podman[441666]: 2025-12-03 18:57:29.899697011 +0000 UTC m=+0.073141917 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 18:57:29 compute-0 nova_compute[348325]: 2025-12-03 18:57:29.900 348329 DEBUG nova.objects.instance [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] Lazy-loading 'migration_context' on Instance uuid 47c940fc-9b39-48b6-a183-42c0547ac964 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 18:57:29 compute-0 nova_compute[348325]: 2025-12-03 18:57:29.916 348329 DEBUG nova.virt.libvirt.driver [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  3 18:57:29 compute-0 nova_compute[348325]: 2025-12-03 18:57:29.917 348329 DEBUG nova.virt.libvirt.driver [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] Ensure instance console log exists: /var/lib/nova/instances/47c940fc-9b39-48b6-a183-42c0547ac964/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  3 18:57:29 compute-0 nova_compute[348325]: 2025-12-03 18:57:29.917 348329 DEBUG oslo_concurrency.lockutils [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:57:29 compute-0 nova_compute[348325]: 2025-12-03 18:57:29.917 348329 DEBUG oslo_concurrency.lockutils [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:57:29 compute-0 nova_compute[348325]: 2025-12-03 18:57:29.918 348329 DEBUG oslo_concurrency.lockutils [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:57:29 compute-0 nova_compute[348325]: 2025-12-03 18:57:29.930 348329 DEBUG nova.network.neutron [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] Successfully created port: fa1f26e3-cb99-46c5-b405-4fbdc024f8cf _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  3 18:57:30 compute-0 nova_compute[348325]: 2025-12-03 18:57:30.568 348329 DEBUG nova.compute.manager [req-4a16f6a0-1da6-4220-b3d8-fe2168c994a9 req-7cae4484-e4e1-4d03-9180-7ba7220b93a8 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Received event network-changed-b709b4ab-585a-4aed-9f06-3c9650d54c09 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 18:57:30 compute-0 nova_compute[348325]: 2025-12-03 18:57:30.569 348329 DEBUG nova.compute.manager [req-4a16f6a0-1da6-4220-b3d8-fe2168c994a9 req-7cae4484-e4e1-4d03-9180-7ba7220b93a8 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Refreshing instance network info cache due to event network-changed-b709b4ab-585a-4aed-9f06-3c9650d54c09. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  3 18:57:30 compute-0 nova_compute[348325]: 2025-12-03 18:57:30.569 348329 DEBUG oslo_concurrency.lockutils [req-4a16f6a0-1da6-4220-b3d8-fe2168c994a9 req-7cae4484-e4e1-4d03-9180-7ba7220b93a8 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "refresh_cache-eff2304f-0e67-4c93-ae65-20d4ddb87625" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 18:57:30 compute-0 nova_compute[348325]: 2025-12-03 18:57:30.635 348329 DEBUG nova.network.neutron [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Updating instance_info_cache with network_info: [{"id": "b709b4ab-585a-4aed-9f06-3c9650d54c09", "address": "fa:16:3e:6e:88:19", "network": {"id": "c136d05b-f7ca-4f17-81e0-62c23fcd54a3", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-203684476-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1bc217751704d588f690e1b293cade8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb709b4ab-58", "ovs_interfaceid": "b709b4ab-585a-4aed-9f06-3c9650d54c09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 18:57:30 compute-0 nova_compute[348325]: 2025-12-03 18:57:30.652 348329 DEBUG oslo_concurrency.lockutils [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Releasing lock "refresh_cache-eff2304f-0e67-4c93-ae65-20d4ddb87625" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 18:57:30 compute-0 nova_compute[348325]: 2025-12-03 18:57:30.653 348329 DEBUG nova.compute.manager [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Instance network_info: |[{"id": "b709b4ab-585a-4aed-9f06-3c9650d54c09", "address": "fa:16:3e:6e:88:19", "network": {"id": "c136d05b-f7ca-4f17-81e0-62c23fcd54a3", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-203684476-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1bc217751704d588f690e1b293cade8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb709b4ab-58", "ovs_interfaceid": "b709b4ab-585a-4aed-9f06-3c9650d54c09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  3 18:57:30 compute-0 nova_compute[348325]: 2025-12-03 18:57:30.653 348329 DEBUG oslo_concurrency.lockutils [req-4a16f6a0-1da6-4220-b3d8-fe2168c994a9 req-7cae4484-e4e1-4d03-9180-7ba7220b93a8 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquired lock "refresh_cache-eff2304f-0e67-4c93-ae65-20d4ddb87625" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 18:57:30 compute-0 nova_compute[348325]: 2025-12-03 18:57:30.654 348329 DEBUG nova.network.neutron [req-4a16f6a0-1da6-4220-b3d8-fe2168c994a9 req-7cae4484-e4e1-4d03-9180-7ba7220b93a8 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Refreshing network info cache for port b709b4ab-585a-4aed-9f06-3c9650d54c09 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  3 18:57:30 compute-0 nova_compute[348325]: 2025-12-03 18:57:30.657 348329 DEBUG nova.virt.libvirt.driver [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Start _get_guest_xml network_info=[{"id": "b709b4ab-585a-4aed-9f06-3c9650d54c09", "address": "fa:16:3e:6e:88:19", "network": {"id": "c136d05b-f7ca-4f17-81e0-62c23fcd54a3", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-203684476-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1bc217751704d588f690e1b293cade8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb709b4ab-58", "ovs_interfaceid": "b709b4ab-585a-4aed-9f06-3c9650d54c09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-03T18:56:32Z,direct_url=<?>,disk_format='qcow2',id=55982930-937b-484e-96ee-69e406a48023,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d2770200bdb2436c90142fa2e5ddcd47',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-03T18:56:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'encryption_format': None, 'guest_format': None, 'disk_bus': 'virtio', 'size': 0, 'boot_index': 0, 'encryption_options': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'image_id': '55982930-937b-484e-96ee-69e406a48023'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
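The disk_info mapping above pairs each bus with a guest device name: the virtio root disk becomes vda and the sata config-drive cdrom becomes sda. A minimal sketch of the bus-to-prefix rule that reproduces those names (an illustration, not nova.virt.libvirt.blockinfo itself):

```python
# Bus-to-device-prefix rule that yields the mapping above (vda for virtio, sda for sata).
PREFIX = {"virtio": "vd", "sata": "sd", "scsi": "sd", "ide": "hd"}

def dev_name(bus, index):
    """Compose a guest device name from the bus prefix and a drive index."""
    return PREFIX[bus] + "abcdefghijklmnopqrstuvwxyz"[index]

print(dev_name("virtio", 0))  # vda -- the root disk
print(dev_name("sata", 0))    # sda -- the config-drive cdrom
```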
Dec  3 18:57:30 compute-0 nova_compute[348325]: 2025-12-03 18:57:30.665 348329 WARNING nova.virt.libvirt.driver [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 18:57:30 compute-0 nova_compute[348325]: 2025-12-03 18:57:30.670 348329 DEBUG nova.virt.libvirt.host [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  3 18:57:30 compute-0 nova_compute[348325]: 2025-12-03 18:57:30.670 348329 DEBUG nova.virt.libvirt.host [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  3 18:57:30 compute-0 nova_compute[348325]: 2025-12-03 18:57:30.674 348329 DEBUG nova.virt.libvirt.host [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  3 18:57:30 compute-0 nova_compute[348325]: 2025-12-03 18:57:30.674 348329 DEBUG nova.virt.libvirt.host [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  3 18:57:30 compute-0 nova_compute[348325]: 2025-12-03 18:57:30.675 348329 DEBUG nova.virt.libvirt.driver [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  3 18:57:30 compute-0 nova_compute[348325]: 2025-12-03 18:57:30.675 348329 DEBUG nova.virt.hardware [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-03T18:56:30Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a94cfbfb-a20a-4689-ac91-e7436db75880',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-03T18:56:32Z,direct_url=<?>,disk_format='qcow2',id=55982930-937b-484e-96ee-69e406a48023,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d2770200bdb2436c90142fa2e5ddcd47',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-03T18:56:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  3 18:57:30 compute-0 nova_compute[348325]: 2025-12-03 18:57:30.675 348329 DEBUG nova.virt.hardware [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  3 18:57:30 compute-0 nova_compute[348325]: 2025-12-03 18:57:30.675 348329 DEBUG nova.virt.hardware [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  3 18:57:30 compute-0 nova_compute[348325]: 2025-12-03 18:57:30.676 348329 DEBUG nova.virt.hardware [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  3 18:57:30 compute-0 nova_compute[348325]: 2025-12-03 18:57:30.676 348329 DEBUG nova.virt.hardware [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  3 18:57:30 compute-0 nova_compute[348325]: 2025-12-03 18:57:30.676 348329 DEBUG nova.virt.hardware [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  3 18:57:30 compute-0 nova_compute[348325]: 2025-12-03 18:57:30.676 348329 DEBUG nova.virt.hardware [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  3 18:57:30 compute-0 nova_compute[348325]: 2025-12-03 18:57:30.676 348329 DEBUG nova.virt.hardware [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  3 18:57:30 compute-0 nova_compute[348325]: 2025-12-03 18:57:30.676 348329 DEBUG nova.virt.hardware [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  3 18:57:30 compute-0 nova_compute[348325]: 2025-12-03 18:57:30.677 348329 DEBUG nova.virt.hardware [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  3 18:57:30 compute-0 nova_compute[348325]: 2025-12-03 18:57:30.677 348329 DEBUG nova.virt.hardware [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
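The topology walk logged above (limits 65536:65536:65536, no flavor or image preference, one vCPU) reduces to enumerating factorizations of the vCPU count into sockets x cores x threads within the limits; with one vCPU the only solution is 1:1:1, exactly the single topology found. A simplified sketch of that enumeration (not nova.virt.hardware itself):

```python
from itertools import product

def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
    """Yield (sockets, cores, threads) triples whose product equals vcpus."""
    for s, c, t in product(range(1, min(vcpus, max_sockets) + 1),
                           range(1, min(vcpus, max_cores) + 1),
                           range(1, min(vcpus, max_threads) + 1)):
        if s * c * t == vcpus:
            yield (s, c, t)

print(list(possible_topologies(1)))      # [(1, 1, 1)] -- the one topology logged above
print(list(possible_topologies(4))[:4])  # several candidates once vcpus > 1
```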
Dec  3 18:57:30 compute-0 nova_compute[348325]: 2025-12-03 18:57:30.679 348329 DEBUG oslo_concurrency.processutils [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:57:30 compute-0 nova_compute[348325]: 2025-12-03 18:57:30.831 348329 DEBUG nova.network.neutron [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] Successfully updated port: fa1f26e3-cb99-46c5-b405-4fbdc024f8cf _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  3 18:57:30 compute-0 nova_compute[348325]: 2025-12-03 18:57:30.845 348329 DEBUG oslo_concurrency.lockutils [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Acquiring lock "refresh_cache-59c4595c-fa0d-4410-9dda-f266cca0c9e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 18:57:30 compute-0 nova_compute[348325]: 2025-12-03 18:57:30.845 348329 DEBUG oslo_concurrency.lockutils [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Acquired lock "refresh_cache-59c4595c-fa0d-4410-9dda-f266cca0c9e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 18:57:30 compute-0 nova_compute[348325]: 2025-12-03 18:57:30.845 348329 DEBUG nova.network.neutron [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  3 18:57:30 compute-0 nova_compute[348325]: 2025-12-03 18:57:30.891 348329 DEBUG nova.network.neutron [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] Successfully created port: df320f97-b085-4528-84d7-d0b7e40923a4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  3 18:57:31 compute-0 nova_compute[348325]: 2025-12-03 18:57:31.108 348329 DEBUG nova.network.neutron [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  3 18:57:31 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 18:57:31 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/503444283' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 18:57:31 compute-0 nova_compute[348325]: 2025-12-03 18:57:31.159 348329 DEBUG oslo_concurrency.processutils [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:57:31 compute-0 nova_compute[348325]: 2025-12-03 18:57:31.205 348329 DEBUG nova.storage.rbd_utils [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] rbd image eff2304f-0e67-4c93-ae65-20d4ddb87625_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 18:57:31 compute-0 nova_compute[348325]: 2025-12-03 18:57:31.216 348329 DEBUG oslo_concurrency.processutils [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:57:31 compute-0 openstack_network_exporter[365222]: ERROR   18:57:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:57:31 compute-0 openstack_network_exporter[365222]: ERROR   18:57:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:57:31 compute-0 openstack_network_exporter[365222]: ERROR   18:57:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:57:31 compute-0 openstack_network_exporter[365222]: ERROR   18:57:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:57:31 compute-0 openstack_network_exporter[365222]: ERROR   18:57:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:57:31 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 18:57:31 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4004276991' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 18:57:31 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1755: 321 pgs: 321 active+clean; 214 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 6.1 MiB/s wr, 123 op/s
Dec  3 18:57:31 compute-0 nova_compute[348325]: 2025-12-03 18:57:31.715 348329 DEBUG oslo_concurrency.processutils [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:57:31 compute-0 nova_compute[348325]: 2025-12-03 18:57:31.717 348329 DEBUG nova.virt.libvirt.vif [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T18:57:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-348328150',display_name='tempest-ServerActionsTestJSON-server-348328150',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-348328150',id=7,image_ref='55982930-937b-484e-96ee-69e406a48023',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNxroBr01vWxkEduOl12EjIwANWWsGp13Iiwr/acK4U//64QHGHGw2vAnRgMxYbVrlypsXXM19ulgGJI9w1cDLg4nw16FcL2L12/Hkr+U1wJ9evpJospwbYXLvOwZj+bkQ==',key_name='tempest-keypair-1397268451',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b1bc217751704d588f690e1b293cade8',ramdisk_id='',reservation_id='r-2us6ueb7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='55982930-937b-484e-96ee-69e406a48023',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-2101343937',owner_user_name='tempest-ServerActionsTestJSON-2101343937-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T18:57:25Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a7a79cf3930c41baa4cb453d75b59c70',uuid=eff2304f-0e67-4c93-ae65-20d4ddb87625,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b709b4ab-585a-4aed-9f06-3c9650d54c09", "address": "fa:16:3e:6e:88:19", "network": {"id": "c136d05b-f7ca-4f17-81e0-62c23fcd54a3", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-203684476-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1bc217751704d588f690e1b293cade8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb709b4ab-58", "ovs_interfaceid": "b709b4ab-585a-4aed-9f06-3c9650d54c09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  3 18:57:31 compute-0 nova_compute[348325]: 2025-12-03 18:57:31.718 348329 DEBUG nova.network.os_vif_util [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Converting VIF {"id": "b709b4ab-585a-4aed-9f06-3c9650d54c09", "address": "fa:16:3e:6e:88:19", "network": {"id": "c136d05b-f7ca-4f17-81e0-62c23fcd54a3", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-203684476-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1bc217751704d588f690e1b293cade8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb709b4ab-58", "ovs_interfaceid": "b709b4ab-585a-4aed-9f06-3c9650d54c09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 18:57:31 compute-0 nova_compute[348325]: 2025-12-03 18:57:31.719 348329 DEBUG nova.network.os_vif_util [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6e:88:19,bridge_name='br-int',has_traffic_filtering=True,id=b709b4ab-585a-4aed-9f06-3c9650d54c09,network=Network(c136d05b-f7ca-4f17-81e0-62c23fcd54a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb709b4ab-58') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
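The conversion above maps the Neutron-style VIF dict onto a typed os-vif object, and only a handful of fields survive the translation (id, MAC address, bridge, traffic filtering, plugin, tap name). A hedged sketch with a stand-in dataclass rather than the real os_vif model:

```python
from dataclasses import dataclass

@dataclass
class VIFOpenVSwitchSketch:
    """Stand-in for os_vif's VIFOpenVSwitch object (illustrative only)."""
    id: str
    address: str
    bridge_name: str
    has_traffic_filtering: bool
    plugin: str
    vif_name: str

def nova_to_osvif(vif: dict) -> VIFOpenVSwitchSketch:
    details = vif.get("details", {})
    return VIFOpenVSwitchSketch(
        id=vif["id"],
        address=vif["address"],
        bridge_name=details.get("bridge_name", ""),
        has_traffic_filtering=details.get("port_filter", False),
        plugin="ovs",
        # The tap device name is the port UUID prefix, as in 'tapb709b4ab-58' above.
        vif_name=vif.get("devname", "tap" + vif["id"][:11]),
    )

vif = {"id": "b709b4ab-585a-4aed-9f06-3c9650d54c09",
       "address": "fa:16:3e:6e:88:19",
       "devname": "tapb709b4ab-58",
       "details": {"port_filter": True, "bridge_name": "br-int"}}
print(nova_to_osvif(vif))
```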
Dec  3 18:57:31 compute-0 nova_compute[348325]: 2025-12-03 18:57:31.720 348329 DEBUG nova.objects.instance [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Lazy-loading 'pci_devices' on Instance uuid eff2304f-0e67-4c93-ae65-20d4ddb87625 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 18:57:31 compute-0 nova_compute[348325]: 2025-12-03 18:57:31.746 348329 DEBUG nova.virt.libvirt.driver [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] End _get_guest_xml xml=<domain type="kvm">
Dec  3 18:57:31 compute-0 nova_compute[348325]:  <uuid>eff2304f-0e67-4c93-ae65-20d4ddb87625</uuid>
Dec  3 18:57:31 compute-0 nova_compute[348325]:  <name>instance-00000007</name>
Dec  3 18:57:31 compute-0 nova_compute[348325]:  <memory>131072</memory>
Dec  3 18:57:31 compute-0 nova_compute[348325]:  <vcpu>1</vcpu>
Dec  3 18:57:31 compute-0 nova_compute[348325]:  <metadata>
Dec  3 18:57:31 compute-0 nova_compute[348325]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  3 18:57:31 compute-0 nova_compute[348325]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  3 18:57:31 compute-0 nova_compute[348325]:      <nova:name>tempest-ServerActionsTestJSON-server-348328150</nova:name>
Dec  3 18:57:31 compute-0 nova_compute[348325]:      <nova:creationTime>2025-12-03 18:57:30</nova:creationTime>
Dec  3 18:57:31 compute-0 nova_compute[348325]:      <nova:flavor name="m1.nano">
Dec  3 18:57:31 compute-0 nova_compute[348325]:        <nova:memory>128</nova:memory>
Dec  3 18:57:31 compute-0 nova_compute[348325]:        <nova:disk>1</nova:disk>
Dec  3 18:57:31 compute-0 nova_compute[348325]:        <nova:swap>0</nova:swap>
Dec  3 18:57:31 compute-0 nova_compute[348325]:        <nova:ephemeral>0</nova:ephemeral>
Dec  3 18:57:31 compute-0 nova_compute[348325]:        <nova:vcpus>1</nova:vcpus>
Dec  3 18:57:31 compute-0 nova_compute[348325]:      </nova:flavor>
Dec  3 18:57:31 compute-0 nova_compute[348325]:      <nova:owner>
Dec  3 18:57:31 compute-0 nova_compute[348325]:        <nova:user uuid="a7a79cf3930c41baa4cb453d75b59c70">tempest-ServerActionsTestJSON-2101343937-project-member</nova:user>
Dec  3 18:57:31 compute-0 nova_compute[348325]:        <nova:project uuid="b1bc217751704d588f690e1b293cade8">tempest-ServerActionsTestJSON-2101343937</nova:project>
Dec  3 18:57:31 compute-0 nova_compute[348325]:      </nova:owner>
Dec  3 18:57:31 compute-0 nova_compute[348325]:      <nova:root type="image" uuid="55982930-937b-484e-96ee-69e406a48023"/>
Dec  3 18:57:31 compute-0 nova_compute[348325]:      <nova:ports>
Dec  3 18:57:31 compute-0 nova_compute[348325]:        <nova:port uuid="b709b4ab-585a-4aed-9f06-3c9650d54c09">
Dec  3 18:57:31 compute-0 nova_compute[348325]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec  3 18:57:31 compute-0 nova_compute[348325]:        </nova:port>
Dec  3 18:57:31 compute-0 nova_compute[348325]:      </nova:ports>
Dec  3 18:57:31 compute-0 nova_compute[348325]:    </nova:instance>
Dec  3 18:57:31 compute-0 nova_compute[348325]:  </metadata>
Dec  3 18:57:31 compute-0 nova_compute[348325]:  <sysinfo type="smbios">
Dec  3 18:57:31 compute-0 nova_compute[348325]:    <system>
Dec  3 18:57:31 compute-0 nova_compute[348325]:      <entry name="manufacturer">RDO</entry>
Dec  3 18:57:31 compute-0 nova_compute[348325]:      <entry name="product">OpenStack Compute</entry>
Dec  3 18:57:31 compute-0 nova_compute[348325]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  3 18:57:31 compute-0 nova_compute[348325]:      <entry name="serial">eff2304f-0e67-4c93-ae65-20d4ddb87625</entry>
Dec  3 18:57:31 compute-0 nova_compute[348325]:      <entry name="uuid">eff2304f-0e67-4c93-ae65-20d4ddb87625</entry>
Dec  3 18:57:31 compute-0 nova_compute[348325]:      <entry name="family">Virtual Machine</entry>
Dec  3 18:57:31 compute-0 nova_compute[348325]:    </system>
Dec  3 18:57:31 compute-0 nova_compute[348325]:  </sysinfo>
Dec  3 18:57:31 compute-0 nova_compute[348325]:  <os>
Dec  3 18:57:31 compute-0 nova_compute[348325]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  3 18:57:31 compute-0 nova_compute[348325]:    <boot dev="hd"/>
Dec  3 18:57:31 compute-0 nova_compute[348325]:    <smbios mode="sysinfo"/>
Dec  3 18:57:31 compute-0 nova_compute[348325]:  </os>
Dec  3 18:57:31 compute-0 nova_compute[348325]:  <features>
Dec  3 18:57:31 compute-0 nova_compute[348325]:    <acpi/>
Dec  3 18:57:31 compute-0 nova_compute[348325]:    <apic/>
Dec  3 18:57:31 compute-0 nova_compute[348325]:    <vmcoreinfo/>
Dec  3 18:57:31 compute-0 nova_compute[348325]:  </features>
Dec  3 18:57:31 compute-0 nova_compute[348325]:  <clock offset="utc">
Dec  3 18:57:31 compute-0 nova_compute[348325]:    <timer name="pit" tickpolicy="delay"/>
Dec  3 18:57:31 compute-0 nova_compute[348325]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  3 18:57:31 compute-0 nova_compute[348325]:    <timer name="hpet" present="no"/>
Dec  3 18:57:31 compute-0 nova_compute[348325]:  </clock>
Dec  3 18:57:31 compute-0 nova_compute[348325]:  <cpu mode="host-model" match="exact">
Dec  3 18:57:31 compute-0 nova_compute[348325]:    <topology sockets="1" cores="1" threads="1"/>
Dec  3 18:57:31 compute-0 nova_compute[348325]:  </cpu>
Dec  3 18:57:31 compute-0 nova_compute[348325]:  <devices>
Dec  3 18:57:31 compute-0 nova_compute[348325]:    <disk type="network" device="disk">
Dec  3 18:57:31 compute-0 nova_compute[348325]:      <driver type="raw" cache="none"/>
Dec  3 18:57:31 compute-0 nova_compute[348325]:      <source protocol="rbd" name="vms/eff2304f-0e67-4c93-ae65-20d4ddb87625_disk">
Dec  3 18:57:31 compute-0 nova_compute[348325]:        <host name="192.168.122.100" port="6789"/>
Dec  3 18:57:31 compute-0 nova_compute[348325]:      </source>
Dec  3 18:57:31 compute-0 nova_compute[348325]:      <auth username="openstack">
Dec  3 18:57:31 compute-0 nova_compute[348325]:        <secret type="ceph" uuid="c1caf3ba-b2a5-5005-a11e-e955c344dccc"/>
Dec  3 18:57:31 compute-0 nova_compute[348325]:      </auth>
Dec  3 18:57:31 compute-0 nova_compute[348325]:      <target dev="vda" bus="virtio"/>
Dec  3 18:57:31 compute-0 nova_compute[348325]:    </disk>
Dec  3 18:57:31 compute-0 nova_compute[348325]:    <disk type="network" device="cdrom">
Dec  3 18:57:31 compute-0 nova_compute[348325]:      <driver type="raw" cache="none"/>
Dec  3 18:57:31 compute-0 nova_compute[348325]:      <source protocol="rbd" name="vms/eff2304f-0e67-4c93-ae65-20d4ddb87625_disk.config">
Dec  3 18:57:31 compute-0 nova_compute[348325]:        <host name="192.168.122.100" port="6789"/>
Dec  3 18:57:31 compute-0 nova_compute[348325]:      </source>
Dec  3 18:57:31 compute-0 nova_compute[348325]:      <auth username="openstack">
Dec  3 18:57:31 compute-0 nova_compute[348325]:        <secret type="ceph" uuid="c1caf3ba-b2a5-5005-a11e-e955c344dccc"/>
Dec  3 18:57:31 compute-0 nova_compute[348325]:      </auth>
Dec  3 18:57:31 compute-0 nova_compute[348325]:      <target dev="sda" bus="sata"/>
Dec  3 18:57:31 compute-0 nova_compute[348325]:    </disk>
Dec  3 18:57:31 compute-0 nova_compute[348325]:    <interface type="ethernet">
Dec  3 18:57:31 compute-0 nova_compute[348325]:      <mac address="fa:16:3e:6e:88:19"/>
Dec  3 18:57:31 compute-0 nova_compute[348325]:      <model type="virtio"/>
Dec  3 18:57:31 compute-0 nova_compute[348325]:      <driver name="vhost" rx_queue_size="512"/>
Dec  3 18:57:31 compute-0 nova_compute[348325]:      <mtu size="1442"/>
Dec  3 18:57:31 compute-0 nova_compute[348325]:      <target dev="tapb709b4ab-58"/>
Dec  3 18:57:31 compute-0 nova_compute[348325]:    </interface>
Dec  3 18:57:31 compute-0 nova_compute[348325]:    <serial type="pty">
Dec  3 18:57:31 compute-0 nova_compute[348325]:      <log file="/var/lib/nova/instances/eff2304f-0e67-4c93-ae65-20d4ddb87625/console.log" append="off"/>
Dec  3 18:57:31 compute-0 nova_compute[348325]:    </serial>
Dec  3 18:57:31 compute-0 nova_compute[348325]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  3 18:57:31 compute-0 nova_compute[348325]:    <video>
Dec  3 18:57:31 compute-0 nova_compute[348325]:      <model type="virtio"/>
Dec  3 18:57:31 compute-0 nova_compute[348325]:    </video>
Dec  3 18:57:31 compute-0 nova_compute[348325]:    <input type="tablet" bus="usb"/>
Dec  3 18:57:31 compute-0 nova_compute[348325]:    <rng model="virtio">
Dec  3 18:57:31 compute-0 nova_compute[348325]:      <backend model="random">/dev/urandom</backend>
Dec  3 18:57:31 compute-0 nova_compute[348325]:    </rng>
Dec  3 18:57:31 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root"/>
Dec  3 18:57:31 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:31 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:31 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:31 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:31 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:31 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:31 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:31 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:31 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:31 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:31 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:31 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:31 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:31 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:31 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:31 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:31 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:31 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:31 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:31 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:31 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:31 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:31 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:31 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:31 compute-0 nova_compute[348325]:    <controller type="usb" index="0"/>
Dec  3 18:57:31 compute-0 nova_compute[348325]:    <memballoon model="virtio">
Dec  3 18:57:31 compute-0 nova_compute[348325]:      <stats period="10"/>
Dec  3 18:57:31 compute-0 nova_compute[348325]:    </memballoon>
Dec  3 18:57:31 compute-0 nova_compute[348325]:  </devices>
Dec  3 18:57:31 compute-0 nova_compute[348325]: </domain>
Dec  3 18:57:31 compute-0 nova_compute[348325]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
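The domain XML dumped above fully describes the guest: RBD-backed root disk on virtio, RBD config-drive CD-ROM on SATA, one OVS-backed virtio NIC. A small standard-library sketch for pulling the disk layout back out of such a dump (domain_xml here stands for the <domain>...</domain> text above; not Nova code):

    import xml.etree.ElementTree as ET

    root = ET.fromstring(domain_xml)
    for disk in root.findall('./devices/disk'):
        src = disk.find('source')
        tgt = disk.find('target')
        print(disk.get('device'),                   # disk / cdrom
              src.get('protocol'), src.get('name'), # rbd, vms/<uuid>_disk[.config]
              '->', tgt.get('dev'), tgt.get('bus')) # vda/virtio, sda/sata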
Dec  3 18:57:31 compute-0 nova_compute[348325]: 2025-12-03 18:57:31.747 348329 DEBUG nova.compute.manager [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Preparing to wait for external event network-vif-plugged-b709b4ab-585a-4aed-9f06-3c9650d54c09 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  3 18:57:31 compute-0 nova_compute[348325]: 2025-12-03 18:57:31.747 348329 DEBUG oslo_concurrency.lockutils [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Acquiring lock "eff2304f-0e67-4c93-ae65-20d4ddb87625-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:57:31 compute-0 nova_compute[348325]: 2025-12-03 18:57:31.747 348329 DEBUG oslo_concurrency.lockutils [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Lock "eff2304f-0e67-4c93-ae65-20d4ddb87625-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:57:31 compute-0 nova_compute[348325]: 2025-12-03 18:57:31.747 348329 DEBUG oslo_concurrency.lockutils [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Lock "eff2304f-0e67-4c93-ae65-20d4ddb87625-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
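The acquire/release pair above is plain oslo.concurrency usage around the per-instance event list. A sketch of the two equivalent idioms, with the lock name copied from the log (the decorated function body is illustrative):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('eff2304f-0e67-4c93-ae65-20d4ddb87625-events')
    def _create_or_get_event():
        ...  # runs with the per-instance events lock held

    # or, as a context manager:
    with lockutils.lock('eff2304f-0e67-4c93-ae65-20d4ddb87625-events'):
        ...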
Dec  3 18:57:31 compute-0 nova_compute[348325]: 2025-12-03 18:57:31.748 348329 DEBUG nova.virt.libvirt.vif [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T18:57:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-348328150',display_name='tempest-ServerActionsTestJSON-server-348328150',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-348328150',id=7,image_ref='55982930-937b-484e-96ee-69e406a48023',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNxroBr01vWxkEduOl12EjIwANWWsGp13Iiwr/acK4U//64QHGHGw2vAnRgMxYbVrlypsXXM19ulgGJI9w1cDLg4nw16FcL2L12/Hkr+U1wJ9evpJospwbYXLvOwZj+bkQ==',key_name='tempest-keypair-1397268451',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b1bc217751704d588f690e1b293cade8',ramdisk_id='',reservation_id='r-2us6ueb7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='55982930-937b-484e-96ee-69e406a48023',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-2101343937',owner_user_name='tempest-ServerActionsTestJSON-2101343937-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T18:57:25Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a7a79cf3930c41baa4cb453d75b59c70',uuid=eff2304f-0e67-4c93-ae65-20d4ddb87625,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b709b4ab-585a-4aed-9f06-3c9650d54c09", "address": "fa:16:3e:6e:88:19", "network": {"id": "c136d05b-f7ca-4f17-81e0-62c23fcd54a3", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-203684476-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1bc217751704d588f690e1b293cade8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb709b4ab-58", "ovs_interfaceid": "b709b4ab-585a-4aed-9f06-3c9650d54c09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  3 18:57:31 compute-0 nova_compute[348325]: 2025-12-03 18:57:31.748 348329 DEBUG nova.network.os_vif_util [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Converting VIF {"id": "b709b4ab-585a-4aed-9f06-3c9650d54c09", "address": "fa:16:3e:6e:88:19", "network": {"id": "c136d05b-f7ca-4f17-81e0-62c23fcd54a3", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-203684476-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1bc217751704d588f690e1b293cade8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb709b4ab-58", "ovs_interfaceid": "b709b4ab-585a-4aed-9f06-3c9650d54c09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 18:57:31 compute-0 nova_compute[348325]: 2025-12-03 18:57:31.748 348329 DEBUG nova.network.os_vif_util [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6e:88:19,bridge_name='br-int',has_traffic_filtering=True,id=b709b4ab-585a-4aed-9f06-3c9650d54c09,network=Network(c136d05b-f7ca-4f17-81e0-62c23fcd54a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb709b4ab-58') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 18:57:31 compute-0 nova_compute[348325]: 2025-12-03 18:57:31.749 348329 DEBUG os_vif [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6e:88:19,bridge_name='br-int',has_traffic_filtering=True,id=b709b4ab-585a-4aed-9f06-3c9650d54c09,network=Network(c136d05b-f7ca-4f17-81e0-62c23fcd54a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb709b4ab-58') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  3 18:57:31 compute-0 nova_compute[348325]: 2025-12-03 18:57:31.749 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:31 compute-0 nova_compute[348325]: 2025-12-03 18:57:31.750 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:57:31 compute-0 nova_compute[348325]: 2025-12-03 18:57:31.750 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 18:57:31 compute-0 nova_compute[348325]: 2025-12-03 18:57:31.753 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:31 compute-0 nova_compute[348325]: 2025-12-03 18:57:31.753 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb709b4ab-58, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:57:31 compute-0 nova_compute[348325]: 2025-12-03 18:57:31.753 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb709b4ab-58, col_values=(('external_ids', {'iface-id': 'b709b4ab-585a-4aed-9f06-3c9650d54c09', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6e:88:19', 'vm-uuid': 'eff2304f-0e67-4c93-ae65-20d4ddb87625'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
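The AddBridgeCommand/AddPortCommand/DbSetCommand transactions above map onto ovsdbapp's public OVS-schema API. A sketch of the same three steps under that assumption (the local ovsdb socket path and timeout are assumptions; arguments are copied from the logged commands):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')  # assumed socket path
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tapb709b4ab-58', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapb709b4ab-58',
            ('external_ids', {
                'iface-id': 'b709b4ab-585a-4aed-9f06-3c9650d54c09',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:6e:88:19',
                'vm-uuid': 'eff2304f-0e67-4c93-ae65-20d4ddb87625'})))

The iface-id in external_ids is what lets ovn-controller match the OVS interface to the Neutron port and claim the lport later in this log.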
Dec  3 18:57:31 compute-0 NetworkManager[49087]: <info>  [1764788251.7561] manager: (tapb709b4ab-58): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/39)
Dec  3 18:57:31 compute-0 nova_compute[348325]: 2025-12-03 18:57:31.755 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:31 compute-0 nova_compute[348325]: 2025-12-03 18:57:31.759 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  3 18:57:31 compute-0 nova_compute[348325]: 2025-12-03 18:57:31.765 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:31 compute-0 nova_compute[348325]: 2025-12-03 18:57:31.766 348329 INFO os_vif [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6e:88:19,bridge_name='br-int',has_traffic_filtering=True,id=b709b4ab-585a-4aed-9f06-3c9650d54c09,network=Network(c136d05b-f7ca-4f17-81e0-62c23fcd54a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb709b4ab-58')#033[00m
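The "Plugging vif ... Successfully plugged vif" pair above is the os-vif entry point (os_vif/__init__.py:76 in the log). A sketch of calling it directly, reusing the VIFOpenVSwitch object built earlier; the InstanceInfo field values are taken from this log, the rest is illustrative:

    import os_vif
    from os_vif.objects import instance_info

    os_vif.initialize()   # loads registered plugins, including the 'ovs' plugin
    info = instance_info.InstanceInfo(
        uuid='eff2304f-0e67-4c93-ae65-20d4ddb87625',
        name='instance-00000007')
    os_vif.plug(osvif_vif, info)   # osvif_vif: the VIFOpenVSwitch sketched above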
Dec  3 18:57:31 compute-0 nova_compute[348325]: 2025-12-03 18:57:31.827 348329 DEBUG nova.virt.libvirt.driver [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  3 18:57:31 compute-0 nova_compute[348325]: 2025-12-03 18:57:31.827 348329 DEBUG nova.virt.libvirt.driver [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  3 18:57:31 compute-0 nova_compute[348325]: 2025-12-03 18:57:31.827 348329 DEBUG nova.virt.libvirt.driver [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] No VIF found with MAC fa:16:3e:6e:88:19, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  3 18:57:31 compute-0 nova_compute[348325]: 2025-12-03 18:57:31.828 348329 INFO nova.virt.libvirt.driver [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Using config drive#033[00m
Dec  3 18:57:31 compute-0 nova_compute[348325]: 2025-12-03 18:57:31.862 348329 DEBUG nova.storage.rbd_utils [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] rbd image eff2304f-0e67-4c93-ae65-20d4ddb87625_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 18:57:31 compute-0 nova_compute[348325]: 2025-12-03 18:57:31.933 348329 DEBUG nova.compute.manager [req-4b04e203-6e90-440b-9bdc-6ba63596dd93 req-8efda565-ba8e-4d05-bfdc-495e3076af00 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] Received event network-changed-fa1f26e3-cb99-46c5-b405-4fbdc024f8cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 18:57:31 compute-0 nova_compute[348325]: 2025-12-03 18:57:31.934 348329 DEBUG nova.compute.manager [req-4b04e203-6e90-440b-9bdc-6ba63596dd93 req-8efda565-ba8e-4d05-bfdc-495e3076af00 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] Refreshing instance network info cache due to event network-changed-fa1f26e3-cb99-46c5-b405-4fbdc024f8cf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  3 18:57:31 compute-0 nova_compute[348325]: 2025-12-03 18:57:31.934 348329 DEBUG oslo_concurrency.lockutils [req-4b04e203-6e90-440b-9bdc-6ba63596dd93 req-8efda565-ba8e-4d05-bfdc-495e3076af00 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "refresh_cache-59c4595c-fa0d-4410-9dda-f266cca0c9e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 18:57:32 compute-0 ovn_controller[89305]: 2025-12-03T18:57:32Z|00073|binding|INFO|Releasing lport bb1f708f-3b4b-4cce-acdd-ffcbf5646f27 from this chassis (sb_readonly=0)
Dec  3 18:57:32 compute-0 nova_compute[348325]: 2025-12-03 18:57:32.291 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:32 compute-0 nova_compute[348325]: 2025-12-03 18:57:32.483 348329 DEBUG nova.network.neutron [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] Successfully updated port: df320f97-b085-4528-84d7-d0b7e40923a4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  3 18:57:32 compute-0 nova_compute[348325]: 2025-12-03 18:57:32.485 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:57:32 compute-0 nova_compute[348325]: 2025-12-03 18:57:32.510 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:57:32 compute-0 nova_compute[348325]: 2025-12-03 18:57:32.511 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:57:32 compute-0 nova_compute[348325]: 2025-12-03 18:57:32.511 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:57:32 compute-0 nova_compute[348325]: 2025-12-03 18:57:32.511 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 18:57:32 compute-0 nova_compute[348325]: 2025-12-03 18:57:32.512 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:57:32 compute-0 nova_compute[348325]: 2025-12-03 18:57:32.530 348329 DEBUG oslo_concurrency.lockutils [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] Acquiring lock "refresh_cache-47c940fc-9b39-48b6-a183-42c0547ac964" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 18:57:32 compute-0 nova_compute[348325]: 2025-12-03 18:57:32.530 348329 DEBUG oslo_concurrency.lockutils [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] Acquired lock "refresh_cache-47c940fc-9b39-48b6-a183-42c0547ac964" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 18:57:32 compute-0 nova_compute[348325]: 2025-12-03 18:57:32.530 348329 DEBUG nova.network.neutron [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  3 18:57:32 compute-0 nova_compute[348325]: 2025-12-03 18:57:32.586 348329 INFO nova.virt.libvirt.driver [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Creating config drive at /var/lib/nova/instances/eff2304f-0e67-4c93-ae65-20d4ddb87625/disk.config#033[00m
Dec  3 18:57:32 compute-0 nova_compute[348325]: 2025-12-03 18:57:32.593 348329 DEBUG oslo_concurrency.processutils [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/eff2304f-0e67-4c93-ae65-20d4ddb87625/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzu9dj8gi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:57:32 compute-0 ovn_controller[89305]: 2025-12-03T18:57:32Z|00074|binding|INFO|Releasing lport bb1f708f-3b4b-4cce-acdd-ffcbf5646f27 from this chassis (sb_readonly=0)
Dec  3 18:57:32 compute-0 nova_compute[348325]: 2025-12-03 18:57:32.620 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:32 compute-0 nova_compute[348325]: 2025-12-03 18:57:32.723 348329 DEBUG nova.network.neutron [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] Updating instance_info_cache with network_info: [{"id": "fa1f26e3-cb99-46c5-b405-4fbdc024f8cf", "address": "fa:16:3e:61:69:24", "network": {"id": "6cdaa8da-4e85-47a7-84f8-76fb36b9391a", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1996345903-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86bd600007a042cea64439c21bd920b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa1f26e3-cb", "ovs_interfaceid": "fa1f26e3-cb99-46c5-b405-4fbdc024f8cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 18:57:32 compute-0 nova_compute[348325]: 2025-12-03 18:57:32.725 348329 DEBUG oslo_concurrency.processutils [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/eff2304f-0e67-4c93-ae65-20d4ddb87625/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzu9dj8gi" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:57:32 compute-0 nova_compute[348325]: 2025-12-03 18:57:32.761 348329 DEBUG nova.storage.rbd_utils [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] rbd image eff2304f-0e67-4c93-ae65-20d4ddb87625_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 18:57:32 compute-0 nova_compute[348325]: 2025-12-03 18:57:32.774 348329 DEBUG oslo_concurrency.processutils [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/eff2304f-0e67-4c93-ae65-20d4ddb87625/disk.config eff2304f-0e67-4c93-ae65-20d4ddb87625_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:57:32 compute-0 nova_compute[348325]: 2025-12-03 18:57:32.792 348329 DEBUG nova.network.neutron [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  3 18:57:32 compute-0 nova_compute[348325]: 2025-12-03 18:57:32.796 348329 DEBUG nova.network.neutron [req-4a16f6a0-1da6-4220-b3d8-fe2168c994a9 req-7cae4484-e4e1-4d03-9180-7ba7220b93a8 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Updated VIF entry in instance network info cache for port b709b4ab-585a-4aed-9f06-3c9650d54c09. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  3 18:57:32 compute-0 nova_compute[348325]: 2025-12-03 18:57:32.796 348329 DEBUG nova.network.neutron [req-4a16f6a0-1da6-4220-b3d8-fe2168c994a9 req-7cae4484-e4e1-4d03-9180-7ba7220b93a8 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Updating instance_info_cache with network_info: [{"id": "b709b4ab-585a-4aed-9f06-3c9650d54c09", "address": "fa:16:3e:6e:88:19", "network": {"id": "c136d05b-f7ca-4f17-81e0-62c23fcd54a3", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-203684476-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1bc217751704d588f690e1b293cade8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb709b4ab-58", "ovs_interfaceid": "b709b4ab-585a-4aed-9f06-3c9650d54c09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 18:57:32 compute-0 nova_compute[348325]: 2025-12-03 18:57:32.798 348329 DEBUG oslo_concurrency.lockutils [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Releasing lock "refresh_cache-59c4595c-fa0d-4410-9dda-f266cca0c9e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 18:57:32 compute-0 nova_compute[348325]: 2025-12-03 18:57:32.799 348329 DEBUG nova.compute.manager [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] Instance network_info: |[{"id": "fa1f26e3-cb99-46c5-b405-4fbdc024f8cf", "address": "fa:16:3e:61:69:24", "network": {"id": "6cdaa8da-4e85-47a7-84f8-76fb36b9391a", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1996345903-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86bd600007a042cea64439c21bd920b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa1f26e3-cb", "ovs_interfaceid": "fa1f26e3-cb99-46c5-b405-4fbdc024f8cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  3 18:57:32 compute-0 nova_compute[348325]: 2025-12-03 18:57:32.799 348329 DEBUG oslo_concurrency.lockutils [req-4b04e203-6e90-440b-9bdc-6ba63596dd93 req-8efda565-ba8e-4d05-bfdc-495e3076af00 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquired lock "refresh_cache-59c4595c-fa0d-4410-9dda-f266cca0c9e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 18:57:32 compute-0 nova_compute[348325]: 2025-12-03 18:57:32.799 348329 DEBUG nova.network.neutron [req-4b04e203-6e90-440b-9bdc-6ba63596dd93 req-8efda565-ba8e-4d05-bfdc-495e3076af00 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] Refreshing network info cache for port fa1f26e3-cb99-46c5-b405-4fbdc024f8cf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  3 18:57:32 compute-0 nova_compute[348325]: 2025-12-03 18:57:32.802 348329 DEBUG nova.virt.libvirt.driver [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] Start _get_guest_xml network_info=[{"id": "fa1f26e3-cb99-46c5-b405-4fbdc024f8cf", "address": "fa:16:3e:61:69:24", "network": {"id": "6cdaa8da-4e85-47a7-84f8-76fb36b9391a", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1996345903-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86bd600007a042cea64439c21bd920b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa1f26e3-cb", "ovs_interfaceid": "fa1f26e3-cb99-46c5-b405-4fbdc024f8cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-03T18:56:32Z,direct_url=<?>,disk_format='qcow2',id=55982930-937b-484e-96ee-69e406a48023,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d2770200bdb2436c90142fa2e5ddcd47',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-03T18:56:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'encryption_format': None, 'guest_format': None, 'disk_bus': 'virtio', 'size': 0, 'boot_index': 0, 'encryption_options': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'image_id': '55982930-937b-484e-96ee-69e406a48023'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  3 18:57:32 compute-0 nova_compute[348325]: 2025-12-03 18:57:32.808 348329 WARNING nova.virt.libvirt.driver [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 18:57:32 compute-0 nova_compute[348325]: 2025-12-03 18:57:32.817 348329 DEBUG oslo_concurrency.lockutils [req-4a16f6a0-1da6-4220-b3d8-fe2168c994a9 req-7cae4484-e4e1-4d03-9180-7ba7220b93a8 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Releasing lock "refresh_cache-eff2304f-0e67-4c93-ae65-20d4ddb87625" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 18:57:32 compute-0 nova_compute[348325]: 2025-12-03 18:57:32.823 348329 DEBUG nova.virt.libvirt.host [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  3 18:57:32 compute-0 nova_compute[348325]: 2025-12-03 18:57:32.823 348329 DEBUG nova.virt.libvirt.host [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  3 18:57:32 compute-0 nova_compute[348325]: 2025-12-03 18:57:32.838 348329 DEBUG nova.virt.libvirt.host [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  3 18:57:32 compute-0 nova_compute[348325]: 2025-12-03 18:57:32.839 348329 DEBUG nova.virt.libvirt.host [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  3 18:57:32 compute-0 nova_compute[348325]: 2025-12-03 18:57:32.840 348329 DEBUG nova.virt.libvirt.driver [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  3 18:57:32 compute-0 nova_compute[348325]: 2025-12-03 18:57:32.840 348329 DEBUG nova.virt.hardware [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-03T18:56:30Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a94cfbfb-a20a-4689-ac91-e7436db75880',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-03T18:56:32Z,direct_url=<?>,disk_format='qcow2',id=55982930-937b-484e-96ee-69e406a48023,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d2770200bdb2436c90142fa2e5ddcd47',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-03T18:56:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  3 18:57:32 compute-0 nova_compute[348325]: 2025-12-03 18:57:32.840 348329 DEBUG nova.virt.hardware [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  3 18:57:32 compute-0 nova_compute[348325]: 2025-12-03 18:57:32.840 348329 DEBUG nova.virt.hardware [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  3 18:57:32 compute-0 nova_compute[348325]: 2025-12-03 18:57:32.841 348329 DEBUG nova.virt.hardware [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  3 18:57:32 compute-0 nova_compute[348325]: 2025-12-03 18:57:32.841 348329 DEBUG nova.virt.hardware [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  3 18:57:32 compute-0 nova_compute[348325]: 2025-12-03 18:57:32.841 348329 DEBUG nova.virt.hardware [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  3 18:57:32 compute-0 nova_compute[348325]: 2025-12-03 18:57:32.841 348329 DEBUG nova.virt.hardware [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  3 18:57:32 compute-0 nova_compute[348325]: 2025-12-03 18:57:32.841 348329 DEBUG nova.virt.hardware [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  3 18:57:32 compute-0 nova_compute[348325]: 2025-12-03 18:57:32.842 348329 DEBUG nova.virt.hardware [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  3 18:57:32 compute-0 nova_compute[348325]: 2025-12-03 18:57:32.842 348329 DEBUG nova.virt.hardware [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  3 18:57:32 compute-0 nova_compute[348325]: 2025-12-03 18:57:32.842 348329 DEBUG nova.virt.hardware [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
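The hardware.py lines above walk a topology search: with no flavor or image constraints, limits default to 65536 each and the only factorization of 1 vCPU is 1:1:1. A toy version of that enumeration (not Nova's actual implementation) that reproduces the logged result:

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # yield every (sockets, cores, threads) whose product is the vCPU count
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))   # [(1, 1, 1)] -- matches the log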
Dec  3 18:57:32 compute-0 nova_compute[348325]: 2025-12-03 18:57:32.845 348329 DEBUG oslo_concurrency.processutils [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:57:32 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:57:32 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2328098845' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:57:32 compute-0 podman[441836]: 2025-12-03 18:57:32.945675818 +0000 UTC m=+0.115292386 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm)
Dec  3 18:57:32 compute-0 nova_compute[348325]: 2025-12-03 18:57:32.955 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
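The resource tracker's `ceph df --format=json` call (issued at 18:57:32.512 above, returned here after 0.444s) is how pool capacity is audited. A sketch of running the same command and reading the cluster totals; the top-level "stats" keys are standard `ceph df` JSON fields, but treat the exact schema as an assumption for your Ceph release:

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)['stats']
    total_gb = stats['total_bytes'] / (1024 ** 3)
    avail_gb = stats['total_avail_bytes'] / (1024 ** 3)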
Dec  3 18:57:32 compute-0 podman[441835]: 2025-12-03 18:57:32.96174227 +0000 UTC m=+0.132971447 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Dec  3 18:57:32 compute-0 nova_compute[348325]: 2025-12-03 18:57:32.995 348329 DEBUG oslo_concurrency.processutils [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/eff2304f-0e67-4c93-ae65-20d4ddb87625/disk.config eff2304f-0e67-4c93-ae65-20d4ddb87625_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.221s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:57:32 compute-0 nova_compute[348325]: 2025-12-03 18:57:32.996 348329 INFO nova.virt.libvirt.driver [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Deleting local config drive /var/lib/nova/instances/eff2304f-0e67-4c93-ae65-20d4ddb87625/disk.config because it was imported into RBD.#033[00m
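The config-drive flow logged above is three steps: build the ISO with mkisofs, import it into the Ceph vms pool, then delete the local copy. A condensed sketch of the same sequence (the metadata staging directory is an assumption; the log used a throwaway /tmp/tmpzu9dj8gi, and the -publisher flag is omitted here):

    import os
    from oslo_concurrency import processutils

    iso = ('/var/lib/nova/instances/'
           'eff2304f-0e67-4c93-ae65-20d4ddb87625/disk.config')
    processutils.execute(
        '/usr/bin/mkisofs', '-o', iso, '-ldots', '-allow-lowercase',
        '-allow-multidot', '-l', '-quiet', '-J', '-r', '-V', 'config-2',
        '/tmp/metadata_dir')   # assumed staging dir with the config-drive tree
    processutils.execute(
        'rbd', 'import', '--pool', 'vms', iso,
        'eff2304f-0e67-4c93-ae65-20d4ddb87625_disk.config',
        '--image-format=2', '--id', 'openstack',
        '--conf', '/etc/ceph/ceph.conf')
    os.unlink(iso)   # "Deleting local config drive ... imported into RBD"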
Dec  3 18:57:33 compute-0 kernel: tapb709b4ab-58: entered promiscuous mode
Dec  3 18:57:33 compute-0 NetworkManager[49087]: <info>  [1764788253.0483] manager: (tapb709b4ab-58): new Tun device (/org/freedesktop/NetworkManager/Devices/40)
Dec  3 18:57:33 compute-0 ovn_controller[89305]: 2025-12-03T18:57:33Z|00075|binding|INFO|Claiming lport b709b4ab-585a-4aed-9f06-3c9650d54c09 for this chassis.
Dec  3 18:57:33 compute-0 ovn_controller[89305]: 2025-12-03T18:57:33Z|00076|binding|INFO|b709b4ab-585a-4aed-9f06-3c9650d54c09: Claiming fa:16:3e:6e:88:19 10.100.0.3
Dec  3 18:57:33 compute-0 nova_compute[348325]: 2025-12-03 18:57:33.057 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:33 compute-0 nova_compute[348325]: 2025-12-03 18:57:33.066 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:33 compute-0 nova_compute[348325]: 2025-12-03 18:57:33.082 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 18:57:33 compute-0 nova_compute[348325]: 2025-12-03 18:57:33.082 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 18:57:33 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:33.086 286999 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:88:19 10.100.0.3'], port_security=['fa:16:3e:6e:88:19 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'eff2304f-0e67-4c93-ae65-20d4ddb87625', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c136d05b-f7ca-4f17-81e0-62c23fcd54a3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b1bc217751704d588f690e1b293cade8', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a1a397ab-712e-407d-b87f-48e90c61a0b1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2f565d4f-7cf7-4751-884a-5071b91cf9b2, chassis=[<ovs.db.idl.Row object at 0x7f81e3e96760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f81e3e96760>], logical_port=b709b4ab-585a-4aed-9f06-3c9650d54c09) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 18:57:33 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:33.087 286999 INFO neutron.agent.ovn.metadata.agent [-] Port b709b4ab-585a-4aed-9f06-3c9650d54c09 in datapath c136d05b-f7ca-4f17-81e0-62c23fcd54a3 bound to our chassis#033[00m
Dec  3 18:57:33 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:33.089 286999 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c136d05b-f7ca-4f17-81e0-62c23fcd54a3#033[00m
Dec  3 18:57:33 compute-0 systemd-udevd[441932]: Network interface NamePolicy= disabled on kernel command line.
Dec  3 18:57:33 compute-0 systemd-machined[138702]: New machine qemu-7-instance-00000007.
Dec  3 18:57:33 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:33.107 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[28cddcfe-dd41-4523-80e9-027f8e80c638]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:33 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:33.108 286999 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc136d05b-f1 in ovnmeta-c136d05b-f7ca-4f17-81e0-62c23fcd54a3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
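The namespace and VETH names above are derived from the Neutron network UUID. A minimal sketch of the convention as it appears in these logs; the 10-character UUID prefix is inferred from the device names shown here rather than taken from the Neutron source, so treat it as an assumption (the asserts check it against the logged names):

    network_id = "c136d05b-f7ca-4f17-81e0-62c23fcd54a3"

    namespace = f"ovnmeta-{network_id}"        # namespace holding the proxy
    veth_ovs_side = f"tap{network_id[:10]}0"   # plugged into br-int
    veth_ns_side = f"tap{network_id[:10]}1"    # moved into the namespace

    assert namespace == "ovnmeta-c136d05b-f7ca-4f17-81e0-62c23fcd54a3"
    assert veth_ovs_side == "tapc136d05b-f0"
    assert veth_ns_side == "tapc136d05b-f1"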
Dec  3 18:57:33 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:33.110 411759 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc136d05b-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  3 18:57:33 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:33.110 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[a0deb03e-76c0-4c34-b0ee-ec738decb85d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:33 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:33.114 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[5b6cf83b-bdeb-41ec-b4eb-90726724104a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:33 compute-0 NetworkManager[49087]: <info>  [1764788253.1195] device (tapb709b4ab-58): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  3 18:57:33 compute-0 NetworkManager[49087]: <info>  [1764788253.1201] device (tapb709b4ab-58): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  3 18:57:33 compute-0 systemd[1]: Started Virtual Machine qemu-7-instance-00000007.
Dec  3 18:57:33 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:33.131 287110 DEBUG oslo.privsep.daemon [-] privsep: reply[86938c57-70e8-446f-a823-af5f06eeab1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:33 compute-0 nova_compute[348325]: 2025-12-03 18:57:33.136 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:33 compute-0 ovn_controller[89305]: 2025-12-03T18:57:33Z|00077|binding|INFO|Setting lport b709b4ab-585a-4aed-9f06-3c9650d54c09 ovn-installed in OVS
Dec  3 18:57:33 compute-0 ovn_controller[89305]: 2025-12-03T18:57:33Z|00078|binding|INFO|Setting lport b709b4ab-585a-4aed-9f06-3c9650d54c09 up in Southbound
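Once ovn-controller reports the lport up in the Southbound DB (messages 00077/00078 above), the binding can be confirmed from the host. A sketch using `ovn-sbctl find`, assuming the OVN client tools are installed and pointed at the Southbound connection this chassis uses:

    import subprocess

    lport = "b709b4ab-585a-4aed-9f06-3c9650d54c09"
    # Query the Southbound Port_Binding row for this logical port;
    # the output shows chassis, mac, up, external_ids, etc.
    out = subprocess.run(
        ["ovn-sbctl", "find", "Port_Binding", f"logical_port={lport}"],
        capture_output=True, text=True, check=True,
    )
    print(out.stdout)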
Dec  3 18:57:33 compute-0 nova_compute[348325]: 2025-12-03 18:57:33.140 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:33 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:33.150 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[67238650-b4bf-479f-a280-60c4113218e2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:33 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:33.182 411797 DEBUG oslo.privsep.daemon [-] privsep: reply[c8410ced-2687-4be5-93ca-3785aa0bd179]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:33 compute-0 NetworkManager[49087]: <info>  [1764788253.1892] manager: (tapc136d05b-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/41)
Dec  3 18:57:33 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:33.188 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[395bb78f-b89c-49a3-8e3b-aed045e6168d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:33 compute-0 nova_compute[348325]: 2025-12-03 18:57:33.217 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 18:57:33 compute-0 nova_compute[348325]: 2025-12-03 18:57:33.218 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 18:57:33 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:33.222 411797 DEBUG oslo.privsep.daemon [-] privsep: reply[05eeb1a1-f8e2-4e87-9c7a-a5b8cbbfe68d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:33 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:33.224 411797 DEBUG oslo.privsep.daemon [-] privsep: reply[5d026bbf-7e0d-45c2-adb9-0e40398eb35d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:33 compute-0 NetworkManager[49087]: <info>  [1764788253.2503] device (tapc136d05b-f0): carrier: link connected
Dec  3 18:57:33 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:33.257 411797 DEBUG oslo.privsep.daemon [-] privsep: reply[22dec97a-3aff-4310-ba95-be9dd06e1331]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:33 compute-0 nova_compute[348325]: 2025-12-03 18:57:33.283 348329 DEBUG nova.compute.manager [req-3e18dcd6-d9f5-4083-96c0-4767d4f1e02d req-a94ce93a-6508-46e2-b91e-0528b5a6b786 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] Received event network-changed-df320f97-b085-4528-84d7-d0b7e40923a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 18:57:33 compute-0 nova_compute[348325]: 2025-12-03 18:57:33.283 348329 DEBUG nova.compute.manager [req-3e18dcd6-d9f5-4083-96c0-4767d4f1e02d req-a94ce93a-6508-46e2-b91e-0528b5a6b786 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] Refreshing instance network info cache due to event network-changed-df320f97-b085-4528-84d7-d0b7e40923a4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  3 18:57:33 compute-0 nova_compute[348325]: 2025-12-03 18:57:33.283 348329 DEBUG oslo_concurrency.lockutils [req-3e18dcd6-d9f5-4083-96c0-4767d4f1e02d req-a94ce93a-6508-46e2-b91e-0528b5a6b786 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "refresh_cache-47c940fc-9b39-48b6-a183-42c0547ac964" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 18:57:33 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:33.289 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[6122d860-c14a-4277-b97f-9f33a9c20f21]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc136d05b-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:62:79:cb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 653318, 'reachable_time': 36752, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 441964, 'error': None, 'target': 'ovnmeta-c136d05b-f7ca-4f17-81e0-62c23fcd54a3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:33 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:33.310 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[31274c03-a98d-4f33-8f1e-31bc12b4f9c4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe62:79cb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 653318, 'tstamp': 653318}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 441965, 'error': None, 'target': 'ovnmeta-c136d05b-f7ca-4f17-81e0-62c23fcd54a3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
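The two privsep replies above are pyroute2-style netlink messages serialized into the log; their 'attrs' member is a list of [name, value] pairs, so individual fields can be recovered generically. A minimal sketch over a trimmed copy of the RTM_NEWLINK payload shown above:

    # Trimmed copy of the RTM_NEWLINK message logged above.
    msg = {
        "attrs": [
            ["IFLA_IFNAME", "tapc136d05b-f1"],
            ["IFLA_OPERSTATE", "UP"],
            ["IFLA_MTU", 1500],
            ["IFLA_ADDRESS", "fa:16:3e:62:79:cb"],
        ],
        "state": "up",
        "event": "RTM_NEWLINK",
    }

    def get_attr(message, name):
        """Return the first netlink attribute with the given name."""
        for key, value in message["attrs"]:
            if key == name:
                return value
        return None

    assert get_attr(msg, "IFLA_IFNAME") == "tapc136d05b-f1"
    assert get_attr(msg, "IFLA_MTU") == 1500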
Dec  3 18:57:33 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 18:57:33 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3417849826' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 18:57:33 compute-0 nova_compute[348325]: 2025-12-03 18:57:33.325 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:33 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:33.332 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[8718fcf1-1e60-4b18-8514-296c2fbd322e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc136d05b-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:62:79:cb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 653318, 'reachable_time': 36752, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 441966, 'error': None, 'target': 'ovnmeta-c136d05b-f7ca-4f17-81e0-62c23fcd54a3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:33 compute-0 nova_compute[348325]: 2025-12-03 18:57:33.364 348329 DEBUG oslo_concurrency.processutils [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:57:33 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:33.373 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[8b4178d5-c0ea-4ccc-8045-31698a96f436]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:33 compute-0 nova_compute[348325]: 2025-12-03 18:57:33.402 348329 DEBUG nova.storage.rbd_utils [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] rbd image 59c4595c-fa0d-4410-9dda-f266cca0c9e4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 18:57:33 compute-0 nova_compute[348325]: 2025-12-03 18:57:33.410 348329 DEBUG oslo_concurrency.processutils [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:57:33 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:33.426 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[4d9c734c-acd2-4c48-a047-2b66bf611a7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:33 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:33.427 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc136d05b-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:57:33 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:33.428 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 18:57:33 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:33.428 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc136d05b-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:57:33 compute-0 kernel: tapc136d05b-f0: entered promiscuous mode
Dec  3 18:57:33 compute-0 NetworkManager[49087]: <info>  [1764788253.4315] manager: (tapc136d05b-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/42)
Dec  3 18:57:33 compute-0 nova_compute[348325]: 2025-12-03 18:57:33.433 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:33 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:33.435 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc136d05b-f0, col_values=(('external_ids', {'iface-id': 'b52268a2-5f2a-45ba-8c23-e32c70c8253f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:57:33 compute-0 ovn_controller[89305]: 2025-12-03T18:57:33Z|00079|binding|INFO|Releasing lport b52268a2-5f2a-45ba-8c23-e32c70c8253f from this chassis (sb_readonly=0)
Dec  3 18:57:33 compute-0 nova_compute[348325]: 2025-12-03 18:57:33.437 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:33 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:33.440 286999 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c136d05b-f7ca-4f17-81e0-62c23fcd54a3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c136d05b-f7ca-4f17-81e0-62c23fcd54a3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  3 18:57:33 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:33.441 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[47a31307-dbd8-40ce-b1c0-57f2d8fb4106]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:33 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:33.442 286999 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  3 18:57:33 compute-0 ovn_metadata_agent[286994]: global
Dec  3 18:57:33 compute-0 ovn_metadata_agent[286994]:    log         /dev/log local0 debug
Dec  3 18:57:33 compute-0 ovn_metadata_agent[286994]:    log-tag     haproxy-metadata-proxy-c136d05b-f7ca-4f17-81e0-62c23fcd54a3
Dec  3 18:57:33 compute-0 ovn_metadata_agent[286994]:    user        root
Dec  3 18:57:33 compute-0 ovn_metadata_agent[286994]:    group       root
Dec  3 18:57:33 compute-0 ovn_metadata_agent[286994]:    maxconn     1024
Dec  3 18:57:33 compute-0 ovn_metadata_agent[286994]:    pidfile     /var/lib/neutron/external/pids/c136d05b-f7ca-4f17-81e0-62c23fcd54a3.pid.haproxy
Dec  3 18:57:33 compute-0 ovn_metadata_agent[286994]:    daemon
Dec  3 18:57:33 compute-0 ovn_metadata_agent[286994]: 
Dec  3 18:57:33 compute-0 ovn_metadata_agent[286994]: defaults
Dec  3 18:57:33 compute-0 ovn_metadata_agent[286994]:    log global
Dec  3 18:57:33 compute-0 ovn_metadata_agent[286994]:    mode http
Dec  3 18:57:33 compute-0 ovn_metadata_agent[286994]:    option httplog
Dec  3 18:57:33 compute-0 ovn_metadata_agent[286994]:    option dontlognull
Dec  3 18:57:33 compute-0 ovn_metadata_agent[286994]:    option http-server-close
Dec  3 18:57:33 compute-0 ovn_metadata_agent[286994]:    option forwardfor
Dec  3 18:57:33 compute-0 ovn_metadata_agent[286994]:    retries                 3
Dec  3 18:57:33 compute-0 ovn_metadata_agent[286994]:    timeout http-request    30s
Dec  3 18:57:33 compute-0 ovn_metadata_agent[286994]:    timeout connect         30s
Dec  3 18:57:33 compute-0 ovn_metadata_agent[286994]:    timeout client          32s
Dec  3 18:57:33 compute-0 ovn_metadata_agent[286994]:    timeout server          32s
Dec  3 18:57:33 compute-0 ovn_metadata_agent[286994]:    timeout http-keep-alive 30s
Dec  3 18:57:33 compute-0 ovn_metadata_agent[286994]: 
Dec  3 18:57:33 compute-0 ovn_metadata_agent[286994]: 
Dec  3 18:57:33 compute-0 ovn_metadata_agent[286994]: listen listener
Dec  3 18:57:33 compute-0 ovn_metadata_agent[286994]:    bind 169.254.169.254:80
Dec  3 18:57:33 compute-0 ovn_metadata_agent[286994]:    server metadata /var/lib/neutron/metadata_proxy
Dec  3 18:57:33 compute-0 ovn_metadata_agent[286994]:    http-request add-header X-OVN-Network-ID c136d05b-f7ca-4f17-81e0-62c23fcd54a3
Dec  3 18:57:33 compute-0 ovn_metadata_agent[286994]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  3 18:57:33 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:33.443 286999 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c136d05b-f7ca-4f17-81e0-62c23fcd54a3', 'env', 'PROCESS_TAG=haproxy-c136d05b-f7ca-4f17-81e0-62c23fcd54a3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c136d05b-f7ca-4f17-81e0-62c23fcd54a3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
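The rootwrap command above launches haproxy inside the ovnmeta namespace against the config just rendered. That rendered file can be validated standalone with haproxy's check mode, which parses the config without binding any sockets; a sketch using the config path from the command line above:

    import subprocess

    conf = ("/var/lib/neutron/ovn-metadata-proxy/"
            "c136d05b-f7ca-4f17-81e0-62c23fcd54a3.conf")
    # "haproxy -c -f <file>" parses the config and exits 0 when it is
    # valid, without starting the daemon.
    check = subprocess.run(["haproxy", "-c", "-f", conf],
                           capture_output=True, text=True)
    print(check.returncode, check.stdout or check.stderr)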
Dec  3 18:57:33 compute-0 nova_compute[348325]: 2025-12-03 18:57:33.450 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:33 compute-0 nova_compute[348325]: 2025-12-03 18:57:33.612 348329 DEBUG nova.virt.driver [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Emitting event <LifecycleEvent: 1764788253.611827, eff2304f-0e67-4c93-ae65-20d4ddb87625 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 18:57:33 compute-0 nova_compute[348325]: 2025-12-03 18:57:33.612 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] VM Started (Lifecycle Event)#033[00m
Dec  3 18:57:33 compute-0 nova_compute[348325]: 2025-12-03 18:57:33.643 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 18:57:33 compute-0 nova_compute[348325]: 2025-12-03 18:57:33.651 348329 DEBUG nova.virt.driver [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Emitting event <LifecycleEvent: 1764788253.613568, eff2304f-0e67-4c93-ae65-20d4ddb87625 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 18:57:33 compute-0 nova_compute[348325]: 2025-12-03 18:57:33.651 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] VM Paused (Lifecycle Event)#033[00m
Dec  3 18:57:33 compute-0 nova_compute[348325]: 2025-12-03 18:57:33.689 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 18:57:33 compute-0 nova_compute[348325]: 2025-12-03 18:57:33.694 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  3 18:57:33 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1756: 321 pgs: 321 active+clean; 242 MiB data, 355 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 7.1 MiB/s wr, 153 op/s
Dec  3 18:57:33 compute-0 nova_compute[348325]: 2025-12-03 18:57:33.718 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
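The "current DB power_state: 0, VM power_state: 3" pair two entries above reads with nova's integer power-state constants (values as defined in nova.compute.power_state). A small sketch of the mapping and why the sync is skipped here:

    # Integer power states as defined in nova.compute.power_state.
    POWER_STATES = {
        0: "NOSTATE",    # DB value before the first successful sync
        1: "RUNNING",
        3: "PAUSED",     # libvirt holds the domain paused briefly during spawn
        4: "SHUTDOWN",
        6: "CRASHED",
        7: "SUSPENDED",
    }

    db_state, vm_state = 0, 3  # values from the log line above
    print(POWER_STATES[db_state], "->", POWER_STATES[vm_state])
    # Because task_state is still "spawning", the power-state sync skips
    # the update ("has a pending task (spawning). Skip.") rather than
    # flipping the building instance to PAUSED.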
Dec  3 18:57:33 compute-0 nova_compute[348325]: 2025-12-03 18:57:33.794 348329 WARNING nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 18:57:33 compute-0 nova_compute[348325]: 2025-12-03 18:57:33.795 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3924MB free_disk=59.91630935668945GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 18:57:33 compute-0 nova_compute[348325]: 2025-12-03 18:57:33.795 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:57:33 compute-0 nova_compute[348325]: 2025-12-03 18:57:33.795 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:57:33 compute-0 podman[442079]: 2025-12-03 18:57:33.890639379 +0000 UTC m=+0.072027029 container create a3c2606a0b78937c6a810e1f81ba2b507ea15546afe25e9205232afeac2be0a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c136d05b-f7ca-4f17-81e0-62c23fcd54a3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  3 18:57:33 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 18:57:33 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3484187968' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 18:57:33 compute-0 nova_compute[348325]: 2025-12-03 18:57:33.929 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance 67a42a04-754c-489b-9aeb-12d68487d4d9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 18:57:33 compute-0 nova_compute[348325]: 2025-12-03 18:57:33.930 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance eff2304f-0e67-4c93-ae65-20d4ddb87625 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 18:57:33 compute-0 nova_compute[348325]: 2025-12-03 18:57:33.931 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance 59c4595c-fa0d-4410-9dda-f266cca0c9e4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 18:57:33 compute-0 nova_compute[348325]: 2025-12-03 18:57:33.931 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance 47c940fc-9b39-48b6-a183-42c0547ac964 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 18:57:33 compute-0 nova_compute[348325]: 2025-12-03 18:57:33.931 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 18:57:33 compute-0 nova_compute[348325]: 2025-12-03 18:57:33.932 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
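The "Final resource view" above aggregates the four per-instance placement allocations listed just before it, plus a host memory reservation. A rough reconstruction, assuming nova's default reserved_host_memory_mb of 512 (the disk and vCPU sums follow directly from the allocation lines):

    # One allocation per instance, from the resource_tracker lines above.
    allocations = [{"DISK_GB": 1, "MEMORY_MB": 128, "VCPU": 1}] * 4

    reserved_host_memory_mb = 512  # assumed nova default

    used_ram = reserved_host_memory_mb + sum(a["MEMORY_MB"] for a in allocations)
    used_disk = sum(a["DISK_GB"] for a in allocations)
    used_vcpus = sum(a["VCPU"] for a in allocations)

    print(used_ram, used_disk, used_vcpus)  # 1024 MB, 4 GB, 4 -- matches the log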
Dec  3 18:57:33 compute-0 systemd[1]: Started libpod-conmon-a3c2606a0b78937c6a810e1f81ba2b507ea15546afe25e9205232afeac2be0a8.scope.
Dec  3 18:57:33 compute-0 nova_compute[348325]: 2025-12-03 18:57:33.940 348329 DEBUG oslo_concurrency.processutils [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:57:33 compute-0 nova_compute[348325]: 2025-12-03 18:57:33.941 348329 DEBUG nova.virt.libvirt.vif [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T18:57:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-409272578',display_name='tempest-ServersTestManualDisk-server-409272578',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-409272578',id=8,image_ref='55982930-937b-484e-96ee-69e406a48023',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOyEylQdsOZgqNvI6p6U81aBCMQI9I6HE/rZ64AA6VXtw55ZDq33/c4iUiQRgkwJcFnMLXJcfswV9BTF6Bz2FtXf8FspT1mJN+g5WZ340+UlnXyRTxwyquLZEBQoD68AgA==',key_name='tempest-keypair-230422229',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='86bd600007a042cea64439c21bd920b0',ramdisk_id='',reservation_id='r-5t4dy7m4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='55982930-937b-484e-96ee-69e406a48023',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-1695020550',owner_user_name='tempest-ServersTestManualDisk-1695020550-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T18:57:27Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5d41669fc94f4811803f4ebf54dbcebc',uuid=59c4595c-fa0d-4410-9dda-f266cca0c9e4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fa1f26e3-cb99-46c5-b405-4fbdc024f8cf", "address": "fa:16:3e:61:69:24", "network": {"id": "6cdaa8da-4e85-47a7-84f8-76fb36b9391a", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1996345903-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86bd600007a042cea64439c21bd920b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa1f26e3-cb", "ovs_interfaceid": 
"fa1f26e3-cb99-46c5-b405-4fbdc024f8cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  3 18:57:33 compute-0 nova_compute[348325]: 2025-12-03 18:57:33.942 348329 DEBUG nova.network.os_vif_util [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Converting VIF {"id": "fa1f26e3-cb99-46c5-b405-4fbdc024f8cf", "address": "fa:16:3e:61:69:24", "network": {"id": "6cdaa8da-4e85-47a7-84f8-76fb36b9391a", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1996345903-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86bd600007a042cea64439c21bd920b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa1f26e3-cb", "ovs_interfaceid": "fa1f26e3-cb99-46c5-b405-4fbdc024f8cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 18:57:33 compute-0 nova_compute[348325]: 2025-12-03 18:57:33.943 348329 DEBUG nova.network.os_vif_util [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:69:24,bridge_name='br-int',has_traffic_filtering=True,id=fa1f26e3-cb99-46c5-b405-4fbdc024f8cf,network=Network(6cdaa8da-4e85-47a7-84f8-76fb36b9391a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa1f26e3-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 18:57:33 compute-0 nova_compute[348325]: 2025-12-03 18:57:33.944 348329 DEBUG nova.objects.instance [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 59c4595c-fa0d-4410-9dda-f266cca0c9e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 18:57:33 compute-0 podman[442079]: 2025-12-03 18:57:33.856530547 +0000 UTC m=+0.037918217 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  3 18:57:33 compute-0 nova_compute[348325]: 2025-12-03 18:57:33.959 348329 DEBUG nova.virt.libvirt.driver [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] End _get_guest_xml xml=<domain type="kvm">
Dec  3 18:57:33 compute-0 nova_compute[348325]:  <uuid>59c4595c-fa0d-4410-9dda-f266cca0c9e4</uuid>
Dec  3 18:57:33 compute-0 nova_compute[348325]:  <name>instance-00000008</name>
Dec  3 18:57:33 compute-0 nova_compute[348325]:  <memory>131072</memory>
Dec  3 18:57:33 compute-0 nova_compute[348325]:  <vcpu>1</vcpu>
Dec  3 18:57:33 compute-0 nova_compute[348325]:  <metadata>
Dec  3 18:57:33 compute-0 nova_compute[348325]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  3 18:57:33 compute-0 nova_compute[348325]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  3 18:57:33 compute-0 nova_compute[348325]:      <nova:name>tempest-ServersTestManualDisk-server-409272578</nova:name>
Dec  3 18:57:33 compute-0 nova_compute[348325]:      <nova:creationTime>2025-12-03 18:57:32</nova:creationTime>
Dec  3 18:57:33 compute-0 nova_compute[348325]:      <nova:flavor name="m1.nano">
Dec  3 18:57:33 compute-0 nova_compute[348325]:        <nova:memory>128</nova:memory>
Dec  3 18:57:33 compute-0 nova_compute[348325]:        <nova:disk>1</nova:disk>
Dec  3 18:57:33 compute-0 nova_compute[348325]:        <nova:swap>0</nova:swap>
Dec  3 18:57:33 compute-0 nova_compute[348325]:        <nova:ephemeral>0</nova:ephemeral>
Dec  3 18:57:33 compute-0 nova_compute[348325]:        <nova:vcpus>1</nova:vcpus>
Dec  3 18:57:33 compute-0 nova_compute[348325]:      </nova:flavor>
Dec  3 18:57:33 compute-0 nova_compute[348325]:      <nova:owner>
Dec  3 18:57:33 compute-0 nova_compute[348325]:        <nova:user uuid="5d41669fc94f4811803f4ebf54dbcebc">tempest-ServersTestManualDisk-1695020550-project-member</nova:user>
Dec  3 18:57:33 compute-0 nova_compute[348325]:        <nova:project uuid="86bd600007a042cea64439c21bd920b0">tempest-ServersTestManualDisk-1695020550</nova:project>
Dec  3 18:57:33 compute-0 nova_compute[348325]:      </nova:owner>
Dec  3 18:57:33 compute-0 nova_compute[348325]:      <nova:root type="image" uuid="55982930-937b-484e-96ee-69e406a48023"/>
Dec  3 18:57:33 compute-0 nova_compute[348325]:      <nova:ports>
Dec  3 18:57:33 compute-0 nova_compute[348325]:        <nova:port uuid="fa1f26e3-cb99-46c5-b405-4fbdc024f8cf">
Dec  3 18:57:33 compute-0 nova_compute[348325]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec  3 18:57:33 compute-0 nova_compute[348325]:        </nova:port>
Dec  3 18:57:33 compute-0 nova_compute[348325]:      </nova:ports>
Dec  3 18:57:33 compute-0 nova_compute[348325]:    </nova:instance>
Dec  3 18:57:33 compute-0 nova_compute[348325]:  </metadata>
Dec  3 18:57:33 compute-0 nova_compute[348325]:  <sysinfo type="smbios">
Dec  3 18:57:33 compute-0 nova_compute[348325]:    <system>
Dec  3 18:57:33 compute-0 nova_compute[348325]:      <entry name="manufacturer">RDO</entry>
Dec  3 18:57:33 compute-0 nova_compute[348325]:      <entry name="product">OpenStack Compute</entry>
Dec  3 18:57:33 compute-0 nova_compute[348325]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  3 18:57:33 compute-0 nova_compute[348325]:      <entry name="serial">59c4595c-fa0d-4410-9dda-f266cca0c9e4</entry>
Dec  3 18:57:33 compute-0 nova_compute[348325]:      <entry name="uuid">59c4595c-fa0d-4410-9dda-f266cca0c9e4</entry>
Dec  3 18:57:33 compute-0 nova_compute[348325]:      <entry name="family">Virtual Machine</entry>
Dec  3 18:57:33 compute-0 nova_compute[348325]:    </system>
Dec  3 18:57:33 compute-0 nova_compute[348325]:  </sysinfo>
Dec  3 18:57:33 compute-0 nova_compute[348325]:  <os>
Dec  3 18:57:33 compute-0 nova_compute[348325]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  3 18:57:33 compute-0 nova_compute[348325]:    <boot dev="hd"/>
Dec  3 18:57:33 compute-0 nova_compute[348325]:    <smbios mode="sysinfo"/>
Dec  3 18:57:33 compute-0 nova_compute[348325]:  </os>
Dec  3 18:57:33 compute-0 nova_compute[348325]:  <features>
Dec  3 18:57:33 compute-0 nova_compute[348325]:    <acpi/>
Dec  3 18:57:33 compute-0 nova_compute[348325]:    <apic/>
Dec  3 18:57:33 compute-0 nova_compute[348325]:    <vmcoreinfo/>
Dec  3 18:57:33 compute-0 nova_compute[348325]:  </features>
Dec  3 18:57:33 compute-0 nova_compute[348325]:  <clock offset="utc">
Dec  3 18:57:33 compute-0 nova_compute[348325]:    <timer name="pit" tickpolicy="delay"/>
Dec  3 18:57:33 compute-0 nova_compute[348325]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  3 18:57:33 compute-0 nova_compute[348325]:    <timer name="hpet" present="no"/>
Dec  3 18:57:33 compute-0 nova_compute[348325]:  </clock>
Dec  3 18:57:33 compute-0 nova_compute[348325]:  <cpu mode="host-model" match="exact">
Dec  3 18:57:33 compute-0 nova_compute[348325]:    <topology sockets="1" cores="1" threads="1"/>
Dec  3 18:57:33 compute-0 nova_compute[348325]:  </cpu>
Dec  3 18:57:33 compute-0 nova_compute[348325]:  <devices>
Dec  3 18:57:33 compute-0 nova_compute[348325]:    <disk type="network" device="disk">
Dec  3 18:57:33 compute-0 nova_compute[348325]:      <driver type="raw" cache="none"/>
Dec  3 18:57:33 compute-0 nova_compute[348325]:      <source protocol="rbd" name="vms/59c4595c-fa0d-4410-9dda-f266cca0c9e4_disk">
Dec  3 18:57:33 compute-0 nova_compute[348325]:        <host name="192.168.122.100" port="6789"/>
Dec  3 18:57:33 compute-0 nova_compute[348325]:      </source>
Dec  3 18:57:33 compute-0 nova_compute[348325]:      <auth username="openstack">
Dec  3 18:57:33 compute-0 nova_compute[348325]:        <secret type="ceph" uuid="c1caf3ba-b2a5-5005-a11e-e955c344dccc"/>
Dec  3 18:57:33 compute-0 nova_compute[348325]:      </auth>
Dec  3 18:57:33 compute-0 nova_compute[348325]:      <target dev="vda" bus="virtio"/>
Dec  3 18:57:33 compute-0 nova_compute[348325]:    </disk>
Dec  3 18:57:33 compute-0 nova_compute[348325]:    <disk type="network" device="cdrom">
Dec  3 18:57:33 compute-0 nova_compute[348325]:      <driver type="raw" cache="none"/>
Dec  3 18:57:33 compute-0 nova_compute[348325]:      <source protocol="rbd" name="vms/59c4595c-fa0d-4410-9dda-f266cca0c9e4_disk.config">
Dec  3 18:57:33 compute-0 nova_compute[348325]:        <host name="192.168.122.100" port="6789"/>
Dec  3 18:57:33 compute-0 nova_compute[348325]:      </source>
Dec  3 18:57:33 compute-0 nova_compute[348325]:      <auth username="openstack">
Dec  3 18:57:33 compute-0 nova_compute[348325]:        <secret type="ceph" uuid="c1caf3ba-b2a5-5005-a11e-e955c344dccc"/>
Dec  3 18:57:33 compute-0 nova_compute[348325]:      </auth>
Dec  3 18:57:33 compute-0 nova_compute[348325]:      <target dev="sda" bus="sata"/>
Dec  3 18:57:33 compute-0 nova_compute[348325]:    </disk>
Dec  3 18:57:33 compute-0 nova_compute[348325]:    <interface type="ethernet">
Dec  3 18:57:33 compute-0 nova_compute[348325]:      <mac address="fa:16:3e:61:69:24"/>
Dec  3 18:57:33 compute-0 nova_compute[348325]:      <model type="virtio"/>
Dec  3 18:57:33 compute-0 nova_compute[348325]:      <driver name="vhost" rx_queue_size="512"/>
Dec  3 18:57:33 compute-0 nova_compute[348325]:      <mtu size="1442"/>
Dec  3 18:57:33 compute-0 nova_compute[348325]:      <target dev="tapfa1f26e3-cb"/>
Dec  3 18:57:33 compute-0 nova_compute[348325]:    </interface>
Dec  3 18:57:33 compute-0 nova_compute[348325]:    <serial type="pty">
Dec  3 18:57:33 compute-0 nova_compute[348325]:      <log file="/var/lib/nova/instances/59c4595c-fa0d-4410-9dda-f266cca0c9e4/console.log" append="off"/>
Dec  3 18:57:33 compute-0 nova_compute[348325]:    </serial>
Dec  3 18:57:33 compute-0 nova_compute[348325]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  3 18:57:33 compute-0 nova_compute[348325]:    <video>
Dec  3 18:57:33 compute-0 nova_compute[348325]:      <model type="virtio"/>
Dec  3 18:57:33 compute-0 nova_compute[348325]:    </video>
Dec  3 18:57:33 compute-0 nova_compute[348325]:    <input type="tablet" bus="usb"/>
Dec  3 18:57:33 compute-0 nova_compute[348325]:    <rng model="virtio">
Dec  3 18:57:33 compute-0 nova_compute[348325]:      <backend model="random">/dev/urandom</backend>
Dec  3 18:57:33 compute-0 nova_compute[348325]:    </rng>
Dec  3 18:57:33 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root"/>
Dec  3 18:57:33 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:33 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:33 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:33 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:33 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:33 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:33 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:33 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:33 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:33 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:33 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:33 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:33 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:33 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:33 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:33 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:33 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:33 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:33 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:33 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:33 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:33 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:33 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:33 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:33 compute-0 nova_compute[348325]:    <controller type="usb" index="0"/>
Dec  3 18:57:33 compute-0 nova_compute[348325]:    <memballoon model="virtio">
Dec  3 18:57:33 compute-0 nova_compute[348325]:      <stats period="10"/>
Dec  3 18:57:33 compute-0 nova_compute[348325]:    </memballoon>
Dec  3 18:57:33 compute-0 nova_compute[348325]:  </devices>
Dec  3 18:57:33 compute-0 nova_compute[348325]: </domain>
Dec  3 18:57:33 compute-0 nova_compute[348325]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
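The block above is the guest definition that nova-compute's _get_guest_xml built for instance 59c4595c-fa0d-4410-9dda-f266cca0c9e4. For comparison, the live XML of an already-defined domain can be fetched with the libvirt Python bindings; a minimal sketch (UUID taken from the log, run on the compute host itself):

    import libvirt  # from the libvirt-python package

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('59c4595c-fa0d-4410-9dda-f266cca0c9e4')
    print(dom.XMLDesc())  # dumps the current domain XML, comparable to the log dump above
    conn.close()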
Dec  3 18:57:33 compute-0 nova_compute[348325]: 2025-12-03 18:57:33.960 348329 DEBUG nova.compute.manager [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] Preparing to wait for external event network-vif-plugged-fa1f26e3-cb99-46c5-b405-4fbdc024f8cf prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  3 18:57:33 compute-0 nova_compute[348325]: 2025-12-03 18:57:33.961 348329 DEBUG oslo_concurrency.lockutils [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Acquiring lock "59c4595c-fa0d-4410-9dda-f266cca0c9e4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:57:33 compute-0 nova_compute[348325]: 2025-12-03 18:57:33.961 348329 DEBUG oslo_concurrency.lockutils [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Lock "59c4595c-fa0d-4410-9dda-f266cca0c9e4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:57:33 compute-0 nova_compute[348325]: 2025-12-03 18:57:33.961 348329 DEBUG oslo_concurrency.lockutils [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Lock "59c4595c-fa0d-4410-9dda-f266cca0c9e4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
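The acquire/release pairs above come from oslo.concurrency's lockutils, which Nova uses to serialize per-instance event bookkeeping. The same pattern, reduced to a sketch (lock name copied from the log; the body is illustrative):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('59c4595c-fa0d-4410-9dda-f266cca0c9e4-events')
    def _create_or_get_event():
        # runs with the named in-process lock held; entering and leaving
        # produce the "acquired"/"released" DEBUG lines seen above
        pass

    _create_or_get_event()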
Dec  3 18:57:33 compute-0 nova_compute[348325]: 2025-12-03 18:57:33.962 348329 DEBUG nova.virt.libvirt.vif [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T18:57:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-409272578',display_name='tempest-ServersTestManualDisk-server-409272578',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-409272578',id=8,image_ref='55982930-937b-484e-96ee-69e406a48023',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOyEylQdsOZgqNvI6p6U81aBCMQI9I6HE/rZ64AA6VXtw55ZDq33/c4iUiQRgkwJcFnMLXJcfswV9BTF6Bz2FtXf8FspT1mJN+g5WZ340+UlnXyRTxwyquLZEBQoD68AgA==',key_name='tempest-keypair-230422229',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='86bd600007a042cea64439c21bd920b0',ramdisk_id='',reservation_id='r-5t4dy7m4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='55982930-937b-484e-96ee-69e406a48023',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-1695020550',owner_user_name='tempest-ServersTestManualDisk-1695020550-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T18:57:27Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5d41669fc94f4811803f4ebf54dbcebc',uuid=59c4595c-fa0d-4410-9dda-f266cca0c9e4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fa1f26e3-cb99-46c5-b405-4fbdc024f8cf", "address": "fa:16:3e:61:69:24", "network": {"id": "6cdaa8da-4e85-47a7-84f8-76fb36b9391a", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1996345903-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86bd600007a042cea64439c21bd920b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa1f26e3-cb", "ovs_interfaceid": "fa1f26e3-cb99-46c5-b405-4fbdc024f8cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  3 18:57:33 compute-0 nova_compute[348325]: 2025-12-03 18:57:33.962 348329 DEBUG nova.network.os_vif_util [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Converting VIF {"id": "fa1f26e3-cb99-46c5-b405-4fbdc024f8cf", "address": "fa:16:3e:61:69:24", "network": {"id": "6cdaa8da-4e85-47a7-84f8-76fb36b9391a", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1996345903-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86bd600007a042cea64439c21bd920b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa1f26e3-cb", "ovs_interfaceid": "fa1f26e3-cb99-46c5-b405-4fbdc024f8cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 18:57:33 compute-0 nova_compute[348325]: 2025-12-03 18:57:33.963 348329 DEBUG nova.network.os_vif_util [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:69:24,bridge_name='br-int',has_traffic_filtering=True,id=fa1f26e3-cb99-46c5-b405-4fbdc024f8cf,network=Network(6cdaa8da-4e85-47a7-84f8-76fb36b9391a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa1f26e3-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
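nova_to_osvif_vif converts Nova's network-model dict into an os-vif versioned object before handing it to the plugin. A rough sketch of building the equivalent object by hand (field values from the log; the field set is trimmed, and this bypasses Nova's converter entirely):

    from os_vif.objects import network, vif

    net = network.Network(id='6cdaa8da-4e85-47a7-84f8-76fb36b9391a',
                          bridge='br-int')
    osvif = vif.VIFOpenVSwitch(id='fa1f26e3-cb99-46c5-b405-4fbdc024f8cf',
                               address='fa:16:3e:61:69:24',
                               vif_name='tapfa1f26e3-cb',
                               bridge_name='br-int',
                               network=net)
    # repr(osvif) resembles the "Converted object VIFOpenVSwitch(...)" line above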
Dec  3 18:57:33 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:57:33 compute-0 nova_compute[348325]: 2025-12-03 18:57:33.963 348329 DEBUG os_vif [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:69:24,bridge_name='br-int',has_traffic_filtering=True,id=fa1f26e3-cb99-46c5-b405-4fbdc024f8cf,network=Network(6cdaa8da-4e85-47a7-84f8-76fb36b9391a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa1f26e3-cb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  3 18:57:33 compute-0 nova_compute[348325]: 2025-12-03 18:57:33.964 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:33 compute-0 nova_compute[348325]: 2025-12-03 18:57:33.964 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:57:33 compute-0 nova_compute[348325]: 2025-12-03 18:57:33.965 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 18:57:33 compute-0 nova_compute[348325]: 2025-12-03 18:57:33.967 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:33 compute-0 nova_compute[348325]: 2025-12-03 18:57:33.967 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfa1f26e3-cb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:57:33 compute-0 nova_compute[348325]: 2025-12-03 18:57:33.968 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfa1f26e3-cb, col_values=(('external_ids', {'iface-id': 'fa1f26e3-cb99-46c5-b405-4fbdc024f8cf', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:61:69:24', 'vm-uuid': '59c4595c-fa0d-4410-9dda-f266cca0c9e4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:57:33 compute-0 nova_compute[348325]: 2025-12-03 18:57:33.969 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f92c0a4e1a9e9c26c45ab8077b64b27dad3be736ad48cf300bfd2cb85db47dab/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  3 18:57:33 compute-0 NetworkManager[49087]: <info>  [1764788253.9715] manager: (tapfa1f26e3-cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/43)
Dec  3 18:57:33 compute-0 nova_compute[348325]: 2025-12-03 18:57:33.984 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  3 18:57:33 compute-0 nova_compute[348325]: 2025-12-03 18:57:33.985 348329 INFO os_vif [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:69:24,bridge_name='br-int',has_traffic_filtering=True,id=fa1f26e3-cb99-46c5-b405-4fbdc024f8cf,network=Network(6cdaa8da-4e85-47a7-84f8-76fb36b9391a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa1f26e3-cb')#033[00m
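The plug itself is the two ovsdbapp commands logged above: AddPortCommand on br-int, then DbSetCommand writing the external_ids (iface-id, attached-mac, vm-uuid) that OVN uses to bind the port. A standalone sketch with ovsdbapp's Open_vSwitch schema API, under the assumption of local ovsdb-server access at the usual socket path (in Nova this runs inside os-vif's plugin, not as a script):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))
    # one transaction, two commands, mirroring the do_commit lines above
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tapfa1f26e3-cb', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapfa1f26e3-cb',
            ('external_ids', {'iface-id': 'fa1f26e3-cb99-46c5-b405-4fbdc024f8cf',
                              'attached-mac': 'fa:16:3e:61:69:24'})))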
Dec  3 18:57:33 compute-0 podman[442079]: 2025-12-03 18:57:33.989659587 +0000 UTC m=+0.171047257 container init a3c2606a0b78937c6a810e1f81ba2b507ea15546afe25e9205232afeac2be0a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c136d05b-f7ca-4f17-81e0-62c23fcd54a3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  3 18:57:33 compute-0 podman[442079]: 2025-12-03 18:57:33.998368029 +0000 UTC m=+0.179755679 container start a3c2606a0b78937c6a810e1f81ba2b507ea15546afe25e9205232afeac2be0a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c136d05b-f7ca-4f17-81e0-62c23fcd54a3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:57:34 compute-0 neutron-haproxy-ovnmeta-c136d05b-f7ca-4f17-81e0-62c23fcd54a3[442097]: [NOTICE]   (442103) : New worker (442105) forked
Dec  3 18:57:34 compute-0 neutron-haproxy-ovnmeta-c136d05b-f7ca-4f17-81e0-62c23fcd54a3[442097]: [NOTICE]   (442103) : Loading success.
Dec  3 18:57:34 compute-0 nova_compute[348325]: 2025-12-03 18:57:34.039 348329 DEBUG oslo_concurrency.lockutils [None req-259aaa3a-23e2-4bef-b896-a70eca550d71 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] Acquiring lock "67a42a04-754c-489b-9aeb-12d68487d4d9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:57:34 compute-0 nova_compute[348325]: 2025-12-03 18:57:34.039 348329 DEBUG oslo_concurrency.lockutils [None req-259aaa3a-23e2-4bef-b896-a70eca550d71 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] Lock "67a42a04-754c-489b-9aeb-12d68487d4d9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:57:34 compute-0 nova_compute[348325]: 2025-12-03 18:57:34.040 348329 DEBUG oslo_concurrency.lockutils [None req-259aaa3a-23e2-4bef-b896-a70eca550d71 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] Acquiring lock "67a42a04-754c-489b-9aeb-12d68487d4d9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:57:34 compute-0 nova_compute[348325]: 2025-12-03 18:57:34.040 348329 DEBUG oslo_concurrency.lockutils [None req-259aaa3a-23e2-4bef-b896-a70eca550d71 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] Lock "67a42a04-754c-489b-9aeb-12d68487d4d9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:57:34 compute-0 nova_compute[348325]: 2025-12-03 18:57:34.040 348329 DEBUG oslo_concurrency.lockutils [None req-259aaa3a-23e2-4bef-b896-a70eca550d71 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] Lock "67a42a04-754c-489b-9aeb-12d68487d4d9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:57:34 compute-0 nova_compute[348325]: 2025-12-03 18:57:34.041 348329 INFO nova.compute.manager [None req-259aaa3a-23e2-4bef-b896-a70eca550d71 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] Terminating instance#033[00m
Dec  3 18:57:34 compute-0 nova_compute[348325]: 2025-12-03 18:57:34.042 348329 DEBUG nova.compute.manager [None req-259aaa3a-23e2-4bef-b896-a70eca550d71 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  3 18:57:34 compute-0 nova_compute[348325]: 2025-12-03 18:57:34.063 348329 DEBUG nova.virt.libvirt.driver [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  3 18:57:34 compute-0 nova_compute[348325]: 2025-12-03 18:57:34.064 348329 DEBUG nova.virt.libvirt.driver [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  3 18:57:34 compute-0 nova_compute[348325]: 2025-12-03 18:57:34.064 348329 DEBUG nova.virt.libvirt.driver [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] No VIF found with MAC fa:16:3e:61:69:24, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  3 18:57:34 compute-0 nova_compute[348325]: 2025-12-03 18:57:34.065 348329 INFO nova.virt.libvirt.driver [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] Using config drive#033[00m
Dec  3 18:57:34 compute-0 kernel: tap856126a0-9e (unregistering): left promiscuous mode
Dec  3 18:57:34 compute-0 nova_compute[348325]: 2025-12-03 18:57:34.090 348329 DEBUG nova.storage.rbd_utils [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] rbd image 59c4595c-fa0d-4410-9dda-f266cca0c9e4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
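rbd_utils is probing Ceph for a per-instance config-drive image and finds none. The same existence check can be reproduced with the python-rbd bindings; the client id and conf path are from the log, while the 'vms' pool name is an assumption:

    import rados
    import rbd

    with rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack') as cluster:
        with cluster.open_ioctx('vms') as ioctx:  # pool name assumed
            try:
                with rbd.Image(ioctx, '59c4595c-fa0d-4410-9dda-f266cca0c9e4_disk.config'):
                    print('image exists')
            except rbd.ImageNotFound:
                print('image does not exist')  # matches the DEBUG line above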
Dec  3 18:57:34 compute-0 NetworkManager[49087]: <info>  [1764788254.0965] device (tap856126a0-9e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  3 18:57:34 compute-0 ovn_controller[89305]: 2025-12-03T18:57:34Z|00080|binding|INFO|Releasing lport 856126a0-9e4c-43b6-9e00-a5fade4f2abf from this chassis (sb_readonly=0)
Dec  3 18:57:34 compute-0 ovn_controller[89305]: 2025-12-03T18:57:34Z|00081|binding|INFO|Setting lport 856126a0-9e4c-43b6-9e00-a5fade4f2abf down in Southbound
Dec  3 18:57:34 compute-0 ovn_controller[89305]: 2025-12-03T18:57:34Z|00082|binding|INFO|Removing iface tap856126a0-9e ovn-installed in OVS
Dec  3 18:57:34 compute-0 nova_compute[348325]: 2025-12-03 18:57:34.106 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:34 compute-0 nova_compute[348325]: 2025-12-03 18:57:34.108 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:57:34 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:34.114 286999 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:0a:cb 10.100.0.3'], port_security=['fa:16:3e:38:0a:cb 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '67a42a04-754c-489b-9aeb-12d68487d4d9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-59ddf46a-73fc-4bab-9c16-51c1e99fd6f1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eda31966af554b3b92f3e55bf4c324c2', 'neutron:revision_number': '4', 'neutron:security_group_ids': '082dee80-d213-44eb-9d8e-4eef7ebaf4fb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eb886f75-8d99-4958-8b8c-820bcf2c4689, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f81e3e96760>], logical_port=856126a0-9e4c-43b6-9e00-a5fade4f2abf) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f81e3e96760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
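The metadata agent reacts to the Port_Binding change through an ovsdbapp row event; the "Matched UPDATE" line above is the match dump for such an event. A skeletal version of an event class of this shape (the name mirrors the log; the handler body is illustrative, not Neutron's code):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # watch 'update' changes on the Port_Binding table, no conditions
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # invoked when a matched row changes, e.g. a chassis releasing
            # a logical port as in the log above
            print('port', row.logical_port, 'updated')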
Dec  3 18:57:34 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:34.115 286999 INFO neutron.agent.ovn.metadata.agent [-] Port 856126a0-9e4c-43b6-9e00-a5fade4f2abf in datapath 59ddf46a-73fc-4bab-9c16-51c1e99fd6f1 unbound from our chassis#033[00m
Dec  3 18:57:34 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:34.117 286999 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 59ddf46a-73fc-4bab-9c16-51c1e99fd6f1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  3 18:57:34 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:34.118 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[44de6e81-b7d8-42e5-be9d-8eb6a5ff2ff5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:34 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:34.119 286999 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-59ddf46a-73fc-4bab-9c16-51c1e99fd6f1 namespace which is not needed anymore#033[00m
Dec  3 18:57:34 compute-0 nova_compute[348325]: 2025-12-03 18:57:34.131 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:34 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Deactivated successfully.
Dec  3 18:57:34 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Consumed 6.576s CPU time.
Dec  3 18:57:34 compute-0 systemd-machined[138702]: Machine qemu-6-instance-00000006 terminated.
Dec  3 18:57:34 compute-0 neutron-haproxy-ovnmeta-59ddf46a-73fc-4bab-9c16-51c1e99fd6f1[441293]: [NOTICE]   (441298) : haproxy version is 2.8.14-c23fe91
Dec  3 18:57:34 compute-0 neutron-haproxy-ovnmeta-59ddf46a-73fc-4bab-9c16-51c1e99fd6f1[441293]: [NOTICE]   (441298) : path to executable is /usr/sbin/haproxy
Dec  3 18:57:34 compute-0 neutron-haproxy-ovnmeta-59ddf46a-73fc-4bab-9c16-51c1e99fd6f1[441293]: [WARNING]  (441298) : Exiting Master process...
Dec  3 18:57:34 compute-0 neutron-haproxy-ovnmeta-59ddf46a-73fc-4bab-9c16-51c1e99fd6f1[441293]: [WARNING]  (441298) : Exiting Master process...
Dec  3 18:57:34 compute-0 neutron-haproxy-ovnmeta-59ddf46a-73fc-4bab-9c16-51c1e99fd6f1[441293]: [ALERT]    (441298) : Current worker (441300) exited with code 143 (Terminated)
Dec  3 18:57:34 compute-0 neutron-haproxy-ovnmeta-59ddf46a-73fc-4bab-9c16-51c1e99fd6f1[441293]: [WARNING]  (441298) : All workers exited. Exiting... (0)
Dec  3 18:57:34 compute-0 systemd[1]: libpod-bb1cf235ae31add9dd248b325bfc8490b5426af14d5f974e49032724129a2d3e.scope: Deactivated successfully.
Dec  3 18:57:34 compute-0 podman[442154]: 2025-12-03 18:57:34.283286705 +0000 UTC m=+0.063269646 container died bb1cf235ae31add9dd248b325bfc8490b5426af14d5f974e49032724129a2d3e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-59ddf46a-73fc-4bab-9c16-51c1e99fd6f1, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  3 18:57:34 compute-0 nova_compute[348325]: 2025-12-03 18:57:34.282 348329 INFO nova.virt.libvirt.driver [-] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] Instance destroyed successfully.#033[00m
Dec  3 18:57:34 compute-0 nova_compute[348325]: 2025-12-03 18:57:34.282 348329 DEBUG nova.objects.instance [None req-259aaa3a-23e2-4bef-b896-a70eca550d71 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] Lazy-loading 'resources' on Instance uuid 67a42a04-754c-489b-9aeb-12d68487d4d9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 18:57:34 compute-0 nova_compute[348325]: 2025-12-03 18:57:34.296 348329 DEBUG nova.virt.libvirt.vif [None req-259aaa3a-23e2-4bef-b896-a70eca550d71 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-03T18:57:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1180311471',display_name='tempest-ServerAddressesTestJSON-server-1180311471',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1180311471',id=6,image_ref='55982930-937b-484e-96ee-69e406a48023',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-03T18:57:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='eda31966af554b3b92f3e55bf4c324c2',ramdisk_id='',reservation_id='r-0hmdbmdo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='55982930-937b-484e-96ee-69e406a48023',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesTestJSON-939673438',owner_user_name='tempest-ServerAddressesTestJSON-939673438-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-03T18:57:28Z,user_data=None,user_id='0d49b8a0584445d09f42f33a803d4dfe',uuid=67a42a04-754c-489b-9aeb-12d68487d4d9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "856126a0-9e4c-43b6-9e00-a5fade4f2abf", "address": "fa:16:3e:38:0a:cb", "network": {"id": "59ddf46a-73fc-4bab-9c16-51c1e99fd6f1", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-549721878-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eda31966af554b3b92f3e55bf4c324c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap856126a0-9e", "ovs_interfaceid": "856126a0-9e4c-43b6-9e00-a5fade4f2abf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  3 18:57:34 compute-0 nova_compute[348325]: 2025-12-03 18:57:34.298 348329 DEBUG nova.network.os_vif_util [None req-259aaa3a-23e2-4bef-b896-a70eca550d71 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] Converting VIF {"id": "856126a0-9e4c-43b6-9e00-a5fade4f2abf", "address": "fa:16:3e:38:0a:cb", "network": {"id": "59ddf46a-73fc-4bab-9c16-51c1e99fd6f1", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-549721878-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eda31966af554b3b92f3e55bf4c324c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap856126a0-9e", "ovs_interfaceid": "856126a0-9e4c-43b6-9e00-a5fade4f2abf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 18:57:34 compute-0 nova_compute[348325]: 2025-12-03 18:57:34.300 348329 DEBUG nova.network.os_vif_util [None req-259aaa3a-23e2-4bef-b896-a70eca550d71 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:0a:cb,bridge_name='br-int',has_traffic_filtering=True,id=856126a0-9e4c-43b6-9e00-a5fade4f2abf,network=Network(59ddf46a-73fc-4bab-9c16-51c1e99fd6f1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap856126a0-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 18:57:34 compute-0 nova_compute[348325]: 2025-12-03 18:57:34.301 348329 DEBUG os_vif [None req-259aaa3a-23e2-4bef-b896-a70eca550d71 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:0a:cb,bridge_name='br-int',has_traffic_filtering=True,id=856126a0-9e4c-43b6-9e00-a5fade4f2abf,network=Network(59ddf46a-73fc-4bab-9c16-51c1e99fd6f1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap856126a0-9e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  3 18:57:34 compute-0 nova_compute[348325]: 2025-12-03 18:57:34.310 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:34 compute-0 nova_compute[348325]: 2025-12-03 18:57:34.311 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap856126a0-9e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:57:34 compute-0 nova_compute[348325]: 2025-12-03 18:57:34.313 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:34 compute-0 nova_compute[348325]: 2025-12-03 18:57:34.316 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  3 18:57:34 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bb1cf235ae31add9dd248b325bfc8490b5426af14d5f974e49032724129a2d3e-userdata-shm.mount: Deactivated successfully.
Dec  3 18:57:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3d9fde7bf1872c9e512fdce77a21415bf6168ff113654fcdd9f984f763a5bbe-merged.mount: Deactivated successfully.
Dec  3 18:57:34 compute-0 nova_compute[348325]: 2025-12-03 18:57:34.330 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:34 compute-0 podman[442154]: 2025-12-03 18:57:34.332966518 +0000 UTC m=+0.112949459 container cleanup bb1cf235ae31add9dd248b325bfc8490b5426af14d5f974e49032724129a2d3e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-59ddf46a-73fc-4bab-9c16-51c1e99fd6f1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Dec  3 18:57:34 compute-0 nova_compute[348325]: 2025-12-03 18:57:34.334 348329 INFO os_vif [None req-259aaa3a-23e2-4bef-b896-a70eca550d71 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:0a:cb,bridge_name='br-int',has_traffic_filtering=True,id=856126a0-9e4c-43b6-9e00-a5fade4f2abf,network=Network(59ddf46a-73fc-4bab-9c16-51c1e99fd6f1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap856126a0-9e')#033[00m
Dec  3 18:57:34 compute-0 systemd[1]: libpod-conmon-bb1cf235ae31add9dd248b325bfc8490b5426af14d5f974e49032724129a2d3e.scope: Deactivated successfully.
Dec  3 18:57:34 compute-0 podman[442222]: 2025-12-03 18:57:34.419760228 +0000 UTC m=+0.057207458 container remove bb1cf235ae31add9dd248b325bfc8490b5426af14d5f974e49032724129a2d3e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-59ddf46a-73fc-4bab-9c16-51c1e99fd6f1, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 18:57:34 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:57:34 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:34.430 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[0078df65-7258-4155-9da4-c5b92c49c245]: (4, ('Wed Dec  3 06:57:34 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-59ddf46a-73fc-4bab-9c16-51c1e99fd6f1 (bb1cf235ae31add9dd248b325bfc8490b5426af14d5f974e49032724129a2d3e)\nbb1cf235ae31add9dd248b325bfc8490b5426af14d5f974e49032724129a2d3e\nWed Dec  3 06:57:34 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-59ddf46a-73fc-4bab-9c16-51c1e99fd6f1 (bb1cf235ae31add9dd248b325bfc8490b5426af14d5f974e49032724129a2d3e)\nbb1cf235ae31add9dd248b325bfc8490b5426af14d5f974e49032724129a2d3e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:34 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:34.432 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[8c76d074-434a-49ca-925d-9211ae967476]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:34 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:34.433 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap59ddf46a-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:57:34 compute-0 nova_compute[348325]: 2025-12-03 18:57:34.434 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:34 compute-0 kernel: tap59ddf46a-70: left promiscuous mode
Dec  3 18:57:34 compute-0 nova_compute[348325]: 2025-12-03 18:57:34.441 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:34 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:34.442 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[3d0dd2b0-196a-415f-b1a9-8f98a136f674]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:34 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:34.463 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[57fd313c-c2a2-4c54-94fe-ca2104113133]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:34 compute-0 nova_compute[348325]: 2025-12-03 18:57:34.466 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:34 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:34.467 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[068c7084-36e8-41f9-ac64-ee308aa0d2b7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:34 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:34.485 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[ed474bd4-3c79-4b85-b368-686883c62879]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 652597, 'reachable_time': 41107, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 442249, 'error': None, 'target': 'ovnmeta-59ddf46a-73fc-4bab-9c16-51c1e99fd6f1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:34 compute-0 systemd[1]: run-netns-ovnmeta\x2d59ddf46a\x2d73fc\x2d4bab\x2d9c16\x2d51c1e99fd6f1.mount: Deactivated successfully.
Dec  3 18:57:34 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:34.488 287110 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-59ddf46a-73fc-4bab-9c16-51c1e99fd6f1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  3 18:57:34 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:34.488 287110 DEBUG oslo.privsep.daemon [-] privsep: reply[902b1948-4a95-471f-a2c1-bf66d0fd1349]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
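Namespace teardown runs through Neutron's privsep daemon, which uses pyroute2 underneath; the large privsep reply above is a pyroute2 RTM_NEWLINK dump for 'lo' inside the namespace being removed. A minimal equivalent with pyroute2 directly (requires root; namespace name from the log):

    from pyroute2 import netns

    # removes the netns mount, same effect as `ip netns delete`
    netns.remove('ovnmeta-59ddf46a-73fc-4bab-9c16-51c1e99fd6f1')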
Dec  3 18:57:34 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:57:34 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3454863203' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:57:34 compute-0 nova_compute[348325]: 2025-12-03 18:57:34.606 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
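The periodic resource audit shells out to `ceph df` through oslo.concurrency's processutils and parses the JSON result. A sketch of that call path (command arguments copied from the log; the 'stats' key is the cluster-wide totals section of `ceph df -f json` output):

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute('ceph', 'df', '--format=json',
                                     '--id', 'openstack',
                                     '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    print(stats['stats'])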
Dec  3 18:57:34 compute-0 nova_compute[348325]: 2025-12-03 18:57:34.612 348329 DEBUG nova.compute.provider_tree [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 18:57:34 compute-0 nova_compute[348325]: 2025-12-03 18:57:34.638 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
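The inventory dict above is what the resource tracker reports to Placement. Schedulable capacity per resource class is (total - reserved) * allocation_ratio, which with the logged numbers works out as follows:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, cap)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2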
Dec  3 18:57:34 compute-0 nova_compute[348325]: 2025-12-03 18:57:34.658 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 18:57:34 compute-0 nova_compute[348325]: 2025-12-03 18:57:34.658 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.863s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:57:34 compute-0 nova_compute[348325]: 2025-12-03 18:57:34.664 348329 DEBUG nova.network.neutron [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] Updating instance_info_cache with network_info: [{"id": "df320f97-b085-4528-84d7-d0b7e40923a4", "address": "fa:16:3e:50:3c:c0", "network": {"id": "42b9af68-948e-4963-878b-ef07a3b43e57", "bridge": "br-int", "label": "tempest-ServersTestJSON-1382517516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8356f2a17c1f4ae2a3e07cdcc6e6f6da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf320f97-b0", "ovs_interfaceid": "df320f97-b085-4528-84d7-d0b7e40923a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 18:57:34 compute-0 nova_compute[348325]: 2025-12-03 18:57:34.681 348329 DEBUG oslo_concurrency.lockutils [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] Releasing lock "refresh_cache-47c940fc-9b39-48b6-a183-42c0547ac964" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 18:57:34 compute-0 nova_compute[348325]: 2025-12-03 18:57:34.682 348329 DEBUG nova.compute.manager [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] Instance network_info: |[{"id": "df320f97-b085-4528-84d7-d0b7e40923a4", "address": "fa:16:3e:50:3c:c0", "network": {"id": "42b9af68-948e-4963-878b-ef07a3b43e57", "bridge": "br-int", "label": "tempest-ServersTestJSON-1382517516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8356f2a17c1f4ae2a3e07cdcc6e6f6da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf320f97-b0", "ovs_interfaceid": "df320f97-b085-4528-84d7-d0b7e40923a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  3 18:57:34 compute-0 nova_compute[348325]: 2025-12-03 18:57:34.683 348329 DEBUG oslo_concurrency.lockutils [req-3e18dcd6-d9f5-4083-96c0-4767d4f1e02d req-a94ce93a-6508-46e2-b91e-0528b5a6b786 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquired lock "refresh_cache-47c940fc-9b39-48b6-a183-42c0547ac964" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 18:57:34 compute-0 nova_compute[348325]: 2025-12-03 18:57:34.683 348329 DEBUG nova.network.neutron [req-3e18dcd6-d9f5-4083-96c0-4767d4f1e02d req-a94ce93a-6508-46e2-b91e-0528b5a6b786 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] Refreshing network info cache for port df320f97-b085-4528-84d7-d0b7e40923a4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  3 18:57:34 compute-0 nova_compute[348325]: 2025-12-03 18:57:34.687 348329 DEBUG nova.virt.libvirt.driver [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] Start _get_guest_xml network_info=[{"id": "df320f97-b085-4528-84d7-d0b7e40923a4", "address": "fa:16:3e:50:3c:c0", "network": {"id": "42b9af68-948e-4963-878b-ef07a3b43e57", "bridge": "br-int", "label": "tempest-ServersTestJSON-1382517516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8356f2a17c1f4ae2a3e07cdcc6e6f6da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf320f97-b0", "ovs_interfaceid": "df320f97-b085-4528-84d7-d0b7e40923a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-03T18:56:32Z,direct_url=<?>,disk_format='qcow2',id=55982930-937b-484e-96ee-69e406a48023,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d2770200bdb2436c90142fa2e5ddcd47',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-03T18:56:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'encryption_format': None, 'guest_format': None, 'disk_bus': 'virtio', 'size': 0, 'boot_index': 0, 'encryption_options': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'image_id': '55982930-937b-484e-96ee-69e406a48023'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  3 18:57:34 compute-0 nova_compute[348325]: 2025-12-03 18:57:34.693 348329 WARNING nova.virt.libvirt.driver [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 18:57:34 compute-0 nova_compute[348325]: 2025-12-03 18:57:34.702 348329 DEBUG nova.virt.libvirt.host [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  3 18:57:34 compute-0 nova_compute[348325]: 2025-12-03 18:57:34.704 348329 DEBUG nova.virt.libvirt.host [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  3 18:57:34 compute-0 nova_compute[348325]: 2025-12-03 18:57:34.708 348329 DEBUG nova.virt.libvirt.host [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  3 18:57:34 compute-0 nova_compute[348325]: 2025-12-03 18:57:34.709 348329 DEBUG nova.virt.libvirt.host [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  3 18:57:34 compute-0 nova_compute[348325]: 2025-12-03 18:57:34.709 348329 DEBUG nova.virt.libvirt.driver [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  3 18:57:34 compute-0 nova_compute[348325]: 2025-12-03 18:57:34.710 348329 DEBUG nova.virt.hardware [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-03T18:56:30Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a94cfbfb-a20a-4689-ac91-e7436db75880',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-03T18:56:32Z,direct_url=<?>,disk_format='qcow2',id=55982930-937b-484e-96ee-69e406a48023,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d2770200bdb2436c90142fa2e5ddcd47',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-03T18:56:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  3 18:57:34 compute-0 nova_compute[348325]: 2025-12-03 18:57:34.710 348329 DEBUG nova.virt.hardware [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  3 18:57:34 compute-0 nova_compute[348325]: 2025-12-03 18:57:34.711 348329 DEBUG nova.virt.hardware [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  3 18:57:34 compute-0 nova_compute[348325]: 2025-12-03 18:57:34.711 348329 DEBUG nova.virt.hardware [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  3 18:57:34 compute-0 nova_compute[348325]: 2025-12-03 18:57:34.712 348329 DEBUG nova.virt.hardware [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  3 18:57:34 compute-0 nova_compute[348325]: 2025-12-03 18:57:34.712 348329 DEBUG nova.virt.hardware [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  3 18:57:34 compute-0 nova_compute[348325]: 2025-12-03 18:57:34.713 348329 DEBUG nova.virt.hardware [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  3 18:57:34 compute-0 nova_compute[348325]: 2025-12-03 18:57:34.713 348329 DEBUG nova.virt.hardware [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  3 18:57:34 compute-0 nova_compute[348325]: 2025-12-03 18:57:34.714 348329 DEBUG nova.virt.hardware [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  3 18:57:34 compute-0 nova_compute[348325]: 2025-12-03 18:57:34.714 348329 DEBUG nova.virt.hardware [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  3 18:57:34 compute-0 nova_compute[348325]: 2025-12-03 18:57:34.715 348329 DEBUG nova.virt.hardware [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
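
With every flavor and image constraint unset (the 0:0:0 limits and preferences above), topology selection reduces to enumerating the sockets/cores/threads factorizations of the vCPU count under the 65536-per-dimension cap; for the 1-vCPU m1.nano the only candidate is 1:1:1. A toy enumeration illustrating the shape of that search, not Nova's actual implementation in nova.virt.hardware:

    # List every sockets/cores/threads split whose product equals the vCPU
    # count, mirroring "Build topologies for 1 vcpu(s) 1:1:1" above.
    def possible_topologies(vcpus, max_each=65536):
        cap = min(vcpus, max_each)
        return [(s, c, t)
                for s in range(1, cap + 1)
                for c in range(1, cap + 1)
                for t in range(1, cap + 1)
                if s * c * t == vcpus]

    print(possible_topologies(1))  # [(1, 1, 1)] -- matches the log
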
Dec  3 18:57:34 compute-0 nova_compute[348325]: 2025-12-03 18:57:34.717 348329 DEBUG oslo_concurrency.processutils [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
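
This `ceph mon dump` is how the RBD image backend discovers the monitor endpoints that later appear as the `<host name="192.168.122.100" port="6789"/>` elements in the guest XML. A sketch of the same round trip, assuming the usual mon-dump JSON layout ("mons" entries carrying an "addr" of the form ip:port/nonce):

    # Fetch the Ceph monitor list the way the log shows and pull out
    # host:port endpoints for the libvirt <source protocol="rbd"> element.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True).stdout
    endpoints = [m["addr"].split("/")[0] for m in json.loads(out)["mons"]]
    print(endpoints)  # e.g. ['192.168.122.100:6789']
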
Dec  3 18:57:34 compute-0 nova_compute[348325]: 2025-12-03 18:57:34.796 348329 INFO nova.virt.libvirt.driver [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] Creating config drive at /var/lib/nova/instances/59c4595c-fa0d-4410-9dda-f266cca0c9e4/disk.config#033[00m
Dec  3 18:57:34 compute-0 nova_compute[348325]: 2025-12-03 18:57:34.812 348329 DEBUG oslo_concurrency.processutils [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/59c4595c-fa0d-4410-9dda-f266cca0c9e4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoiu62muy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
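
One quirk worth knowing when reading this line: oslo's processutils prints argv space-separated and unquoted, so "OpenStack Compute 27.5.2-..." is a single `-publisher` argument, not three. Rebuilding the same ISO9660 config drive as an argv list, assuming mkisofs (or its genisoimage alias) is installed and a populated staging directory:

    # Rebuild the config-drive ISO from a staging directory, as one argv
    # list; "-V config-2" is the volume label cloud-init probes for.
    import subprocess

    subprocess.run(
        ["/usr/bin/mkisofs", "-o", "/tmp/disk.config",
         "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
         "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
         "-quiet", "-J", "-r", "-V", "config-2",
         "/tmp/config_drive_staging"],  # hypothetical path; the log used an ephemeral tmpdir
        check=True)
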
Dec  3 18:57:34 compute-0 nova_compute[348325]: 2025-12-03 18:57:34.961 348329 INFO nova.virt.libvirt.driver [None req-259aaa3a-23e2-4bef-b896-a70eca550d71 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] Deleting instance files /var/lib/nova/instances/67a42a04-754c-489b-9aeb-12d68487d4d9_del#033[00m
Dec  3 18:57:34 compute-0 nova_compute[348325]: 2025-12-03 18:57:34.963 348329 INFO nova.virt.libvirt.driver [None req-259aaa3a-23e2-4bef-b896-a70eca550d71 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] Deletion of /var/lib/nova/instances/67a42a04-754c-489b-9aeb-12d68487d4d9_del complete#033[00m
Dec  3 18:57:35 compute-0 nova_compute[348325]: 2025-12-03 18:57:35.049 348329 INFO nova.compute.manager [None req-259aaa3a-23e2-4bef-b896-a70eca550d71 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] Took 1.01 seconds to destroy the instance on the hypervisor.#033[00m
Dec  3 18:57:35 compute-0 nova_compute[348325]: 2025-12-03 18:57:35.050 348329 DEBUG oslo.service.loopingcall [None req-259aaa3a-23e2-4bef-b896-a70eca550d71 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  3 18:57:35 compute-0 nova_compute[348325]: 2025-12-03 18:57:35.051 348329 DEBUG nova.compute.manager [-] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  3 18:57:35 compute-0 nova_compute[348325]: 2025-12-03 18:57:35.052 348329 DEBUG nova.network.neutron [-] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
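
The deallocation is wrapped in oslo.service's looping-call machinery: the function is re-invoked on an interval until it signals completion by raising LoopingCallDone, whose carried value becomes the result of .wait(). A minimal sketch of that documented pattern (Nova layers retry bookkeeping on top):

    # The pattern behind "Waiting for function ... to return": loop the
    # callable until it raises LoopingCallDone, then wait() yields its value.
    from oslo_service import loopingcall

    state = {"attempts": 0}

    def deallocate_once():
        state["attempts"] += 1
        succeeded = True  # stand-in for the actual neutron deallocation call
        if succeeded:
            raise loopingcall.LoopingCallDone(retvalue=state["attempts"])

    timer = loopingcall.FixedIntervalLoopingCall(deallocate_once)
    print(timer.start(interval=1).wait())  # -> 1
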
Dec  3 18:57:35 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 18:57:35 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2692478003' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 18:57:35 compute-0 nova_compute[348325]: 2025-12-03 18:57:35.178 348329 DEBUG oslo_concurrency.processutils [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:57:35 compute-0 nova_compute[348325]: 2025-12-03 18:57:35.220 348329 DEBUG nova.storage.rbd_utils [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] rbd image 47c940fc-9b39-48b6-a183-42c0547ac964_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 18:57:35 compute-0 nova_compute[348325]: 2025-12-03 18:57:35.235 348329 DEBUG oslo_concurrency.processutils [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:57:35 compute-0 nova_compute[348325]: 2025-12-03 18:57:35.366 348329 DEBUG oslo_concurrency.processutils [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/59c4595c-fa0d-4410-9dda-f266cca0c9e4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoiu62muy" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:57:35 compute-0 nova_compute[348325]: 2025-12-03 18:57:35.427 348329 DEBUG nova.storage.rbd_utils [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] rbd image 59c4595c-fa0d-4410-9dda-f266cca0c9e4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 18:57:35 compute-0 nova_compute[348325]: 2025-12-03 18:57:35.439 348329 DEBUG oslo_concurrency.processutils [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/59c4595c-fa0d-4410-9dda-f266cca0c9e4/disk.config 59c4595c-fa0d-4410-9dda-f266cca0c9e4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:57:35 compute-0 nova_compute[348325]: 2025-12-03 18:57:35.641 348329 DEBUG oslo_concurrency.processutils [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/59c4595c-fa0d-4410-9dda-f266cca0c9e4/disk.config 59c4595c-fa0d-4410-9dda-f266cca0c9e4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.202s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:57:35 compute-0 nova_compute[348325]: 2025-12-03 18:57:35.643 348329 INFO nova.virt.libvirt.driver [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] Deleting local config drive /var/lib/nova/instances/59c4595c-fa0d-4410-9dda-f266cca0c9e4/disk.config because it was imported into RBD.#033[00m
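
Because instance storage is RBD-backed here, the ISO built a moment ago is pushed into the `vms` pool and the local copy deleted, which is exactly the import/delete pair logged above. The same two steps in isolation:

    # Push the local config-drive ISO into the "vms" pool, then remove the
    # on-disk copy -- mirroring the rbd import / "Deleting local config
    # drive" pair above.
    import os
    import subprocess

    src = "/var/lib/nova/instances/59c4595c-fa0d-4410-9dda-f266cca0c9e4/disk.config"
    subprocess.run(
        ["rbd", "import", "--pool", "vms", src,
         "59c4595c-fa0d-4410-9dda-f266cca0c9e4_disk.config",
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True)
    os.remove(src)
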
Dec  3 18:57:35 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1757: 321 pgs: 321 active+clean; 218 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 6.7 MiB/s wr, 197 op/s
Dec  3 18:57:35 compute-0 kernel: tapfa1f26e3-cb: entered promiscuous mode
Dec  3 18:57:35 compute-0 NetworkManager[49087]: <info>  [1764788255.7190] manager: (tapfa1f26e3-cb): new Tun device (/org/freedesktop/NetworkManager/Devices/44)
Dec  3 18:57:35 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 18:57:35 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3499576685' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 18:57:35 compute-0 nova_compute[348325]: 2025-12-03 18:57:35.721 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:35 compute-0 ovn_controller[89305]: 2025-12-03T18:57:35Z|00083|binding|INFO|Claiming lport fa1f26e3-cb99-46c5-b405-4fbdc024f8cf for this chassis.
Dec  3 18:57:35 compute-0 ovn_controller[89305]: 2025-12-03T18:57:35Z|00084|binding|INFO|fa1f26e3-cb99-46c5-b405-4fbdc024f8cf: Claiming fa:16:3e:61:69:24 10.100.0.14
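
These two binding lines are ovn-controller claiming the logical port for this chassis and binding its MAC/IP. After the fact, the claim can be confirmed against the Southbound database; a small wrapper, assuming ovn-sbctl is installed and can reach the SB DB:

    # Ask the OVN Southbound DB who holds the port binding just claimed;
    # the "chassis" and "up" columns should point at compute-0.
    import subprocess

    port = "fa1f26e3-cb99-46c5-b405-4fbdc024f8cf"
    print(subprocess.run(
        ["ovn-sbctl", "find", "Port_Binding", f"logical_port={port}"],
        capture_output=True, text=True, check=True).stdout)
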
Dec  3 18:57:35 compute-0 nova_compute[348325]: 2025-12-03 18:57:35.729 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:35 compute-0 NetworkManager[49087]: <info>  [1764788255.7315] device (tapfa1f26e3-cb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  3 18:57:35 compute-0 NetworkManager[49087]: <info>  [1764788255.7321] device (tapfa1f26e3-cb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  3 18:57:35 compute-0 nova_compute[348325]: 2025-12-03 18:57:35.734 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:35 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:35.753 286999 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:61:69:24 10.100.0.14'], port_security=['fa:16:3e:61:69:24 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '59c4595c-fa0d-4410-9dda-f266cca0c9e4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6cdaa8da-4e85-47a7-84f8-76fb36b9391a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '86bd600007a042cea64439c21bd920b0', 'neutron:revision_number': '2', 'neutron:security_group_ids': '142c06d6-c365-4857-882f-8a558306b53d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=84b6be5c-95ca-4372-beb1-f543a67676cf, chassis=[<ovs.db.idl.Row object at 0x7f81e3e96760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f81e3e96760>], logical_port=fa1f26e3-cb99-46c5-b405-4fbdc024f8cf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 18:57:35 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:35.754 286999 INFO neutron.agent.ovn.metadata.agent [-] Port fa1f26e3-cb99-46c5-b405-4fbdc024f8cf in datapath 6cdaa8da-4e85-47a7-84f8-76fb36b9391a bound to our chassis#033[00m
Dec  3 18:57:35 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:35.756 286999 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6cdaa8da-4e85-47a7-84f8-76fb36b9391a#033[00m
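
"Provisioning metadata" means the agent sets up an ovnmeta-<network-uuid> namespace, wires a VETH pair into br-int (the tap6cdaa8da-4x pair appearing below), and serves the 169.254.169.254 metadata endpoint from inside it. A quick way to inspect the result on the host, assuming root and standard iproute2:

    # Look inside the metadata namespace being provisioned above.
    import subprocess

    ns = "ovnmeta-6cdaa8da-4e85-47a7-84f8-76fb36b9391a"
    subprocess.run(["ip", "netns", "exec", ns, "ip", "addr", "show"], check=True)
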
Dec  3 18:57:35 compute-0 systemd-machined[138702]: New machine qemu-8-instance-00000008.
Dec  3 18:57:35 compute-0 nova_compute[348325]: 2025-12-03 18:57:35.761 348329 DEBUG oslo_concurrency.processutils [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:57:35 compute-0 nova_compute[348325]: 2025-12-03 18:57:35.762 348329 DEBUG nova.virt.libvirt.vif [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T18:57:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1383669703',display_name='tempest-ServersTestJSON-server-1383669703',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1383669703',id=9,image_ref='55982930-937b-484e-96ee-69e406a48023',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP3vKpPdVoGefI6WDHjH4oyoBPwanxLle9S+lz6fT5yBHqHXfcuia4MYuTcaOYAt4ZduC0R3h+eUyW8pi3ofrwS/9Sdj6knUryEscX3qGYO1YU3dERdGjQkdvKc/i/cU+Q==',key_name='tempest-keypair-342267088',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8356f2a17c1f4ae2a3e07cdcc6e6f6da',ramdisk_id='',reservation_id='r-gg1jpipc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='55982930-937b-484e-96ee-69e406a48023',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1597108075',owner_user_name='tempest-ServersTestJSON-1597108075-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T18:57:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d1fe8dd2488b4bf3ab1fb503816c5da9',uuid=47c940fc-9b39-48b6-a183-42c0547ac964,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "df320f97-b085-4528-84d7-d0b7e40923a4", "address": "fa:16:3e:50:3c:c0", "network": {"id": "42b9af68-948e-4963-878b-ef07a3b43e57", "bridge": "br-int", "label": "tempest-ServersTestJSON-1382517516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8356f2a17c1f4ae2a3e07cdcc6e6f6da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf320f97-b0", "ovs_interfaceid": "df320f97-b085-4528-84d7-d0b7e40923a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  3 18:57:35 compute-0 nova_compute[348325]: 2025-12-03 18:57:35.762 348329 DEBUG nova.network.os_vif_util [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] Converting VIF {"id": "df320f97-b085-4528-84d7-d0b7e40923a4", "address": "fa:16:3e:50:3c:c0", "network": {"id": "42b9af68-948e-4963-878b-ef07a3b43e57", "bridge": "br-int", "label": "tempest-ServersTestJSON-1382517516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8356f2a17c1f4ae2a3e07cdcc6e6f6da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf320f97-b0", "ovs_interfaceid": "df320f97-b085-4528-84d7-d0b7e40923a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 18:57:35 compute-0 nova_compute[348325]: 2025-12-03 18:57:35.764 348329 DEBUG nova.network.os_vif_util [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:50:3c:c0,bridge_name='br-int',has_traffic_filtering=True,id=df320f97-b085-4528-84d7-d0b7e40923a4,network=Network(42b9af68-948e-4963-878b-ef07a3b43e57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf320f97-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 18:57:35 compute-0 nova_compute[348325]: 2025-12-03 18:57:35.765 348329 DEBUG nova.objects.instance [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] Lazy-loading 'pci_devices' on Instance uuid 47c940fc-9b39-48b6-a183-42c0547ac964 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 18:57:35 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:35.767 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[91f05504-dfaa-452f-b933-c6763d0cab93]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:35 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:35.768 286999 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6cdaa8da-41 in ovnmeta-6cdaa8da-4e85-47a7-84f8-76fb36b9391a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  3 18:57:35 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:35.770 411759 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6cdaa8da-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  3 18:57:35 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:35.771 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[56dc62cd-7e6f-43c2-8f1b-6c12e3f0307e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:35 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:35.772 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[5a9c4530-8d39-4452-b90a-6a55690b932e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:35 compute-0 systemd[1]: Started Virtual Machine qemu-8-instance-00000008.
Dec  3 18:57:35 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:35.784 287110 DEBUG oslo.privsep.daemon [-] privsep: reply[1345c821-9b76-4b06-8b54-3958fce603f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:35 compute-0 nova_compute[348325]: 2025-12-03 18:57:35.803 348329 DEBUG nova.virt.libvirt.driver [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] End _get_guest_xml xml=<domain type="kvm">
Dec  3 18:57:35 compute-0 nova_compute[348325]:  <uuid>47c940fc-9b39-48b6-a183-42c0547ac964</uuid>
Dec  3 18:57:35 compute-0 nova_compute[348325]:  <name>instance-00000009</name>
Dec  3 18:57:35 compute-0 nova_compute[348325]:  <memory>131072</memory>
Dec  3 18:57:35 compute-0 nova_compute[348325]:  <vcpu>1</vcpu>
Dec  3 18:57:35 compute-0 nova_compute[348325]:  <metadata>
Dec  3 18:57:35 compute-0 nova_compute[348325]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  3 18:57:35 compute-0 nova_compute[348325]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  3 18:57:35 compute-0 nova_compute[348325]:      <nova:name>tempest-ServersTestJSON-server-1383669703</nova:name>
Dec  3 18:57:35 compute-0 nova_compute[348325]:      <nova:creationTime>2025-12-03 18:57:34</nova:creationTime>
Dec  3 18:57:35 compute-0 nova_compute[348325]:      <nova:flavor name="m1.nano">
Dec  3 18:57:35 compute-0 nova_compute[348325]:        <nova:memory>128</nova:memory>
Dec  3 18:57:35 compute-0 nova_compute[348325]:        <nova:disk>1</nova:disk>
Dec  3 18:57:35 compute-0 nova_compute[348325]:        <nova:swap>0</nova:swap>
Dec  3 18:57:35 compute-0 nova_compute[348325]:        <nova:ephemeral>0</nova:ephemeral>
Dec  3 18:57:35 compute-0 nova_compute[348325]:        <nova:vcpus>1</nova:vcpus>
Dec  3 18:57:35 compute-0 nova_compute[348325]:      </nova:flavor>
Dec  3 18:57:35 compute-0 nova_compute[348325]:      <nova:owner>
Dec  3 18:57:35 compute-0 nova_compute[348325]:        <nova:user uuid="d1fe8dd2488b4bf3ab1fb503816c5da9">tempest-ServersTestJSON-1597108075-project-member</nova:user>
Dec  3 18:57:35 compute-0 nova_compute[348325]:        <nova:project uuid="8356f2a17c1f4ae2a3e07cdcc6e6f6da">tempest-ServersTestJSON-1597108075</nova:project>
Dec  3 18:57:35 compute-0 nova_compute[348325]:      </nova:owner>
Dec  3 18:57:35 compute-0 nova_compute[348325]:      <nova:root type="image" uuid="55982930-937b-484e-96ee-69e406a48023"/>
Dec  3 18:57:35 compute-0 nova_compute[348325]:      <nova:ports>
Dec  3 18:57:35 compute-0 nova_compute[348325]:        <nova:port uuid="df320f97-b085-4528-84d7-d0b7e40923a4">
Dec  3 18:57:35 compute-0 nova_compute[348325]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec  3 18:57:35 compute-0 nova_compute[348325]:        </nova:port>
Dec  3 18:57:35 compute-0 nova_compute[348325]:      </nova:ports>
Dec  3 18:57:35 compute-0 nova_compute[348325]:    </nova:instance>
Dec  3 18:57:35 compute-0 nova_compute[348325]:  </metadata>
Dec  3 18:57:35 compute-0 nova_compute[348325]:  <sysinfo type="smbios">
Dec  3 18:57:35 compute-0 nova_compute[348325]:    <system>
Dec  3 18:57:35 compute-0 nova_compute[348325]:      <entry name="manufacturer">RDO</entry>
Dec  3 18:57:35 compute-0 nova_compute[348325]:      <entry name="product">OpenStack Compute</entry>
Dec  3 18:57:35 compute-0 nova_compute[348325]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  3 18:57:35 compute-0 nova_compute[348325]:      <entry name="serial">47c940fc-9b39-48b6-a183-42c0547ac964</entry>
Dec  3 18:57:35 compute-0 nova_compute[348325]:      <entry name="uuid">47c940fc-9b39-48b6-a183-42c0547ac964</entry>
Dec  3 18:57:35 compute-0 nova_compute[348325]:      <entry name="family">Virtual Machine</entry>
Dec  3 18:57:35 compute-0 nova_compute[348325]:    </system>
Dec  3 18:57:35 compute-0 nova_compute[348325]:  </sysinfo>
Dec  3 18:57:35 compute-0 nova_compute[348325]:  <os>
Dec  3 18:57:35 compute-0 nova_compute[348325]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  3 18:57:35 compute-0 nova_compute[348325]:    <boot dev="hd"/>
Dec  3 18:57:35 compute-0 nova_compute[348325]:    <smbios mode="sysinfo"/>
Dec  3 18:57:35 compute-0 nova_compute[348325]:  </os>
Dec  3 18:57:35 compute-0 nova_compute[348325]:  <features>
Dec  3 18:57:35 compute-0 nova_compute[348325]:    <acpi/>
Dec  3 18:57:35 compute-0 nova_compute[348325]:    <apic/>
Dec  3 18:57:35 compute-0 nova_compute[348325]:    <vmcoreinfo/>
Dec  3 18:57:35 compute-0 nova_compute[348325]:  </features>
Dec  3 18:57:35 compute-0 nova_compute[348325]:  <clock offset="utc">
Dec  3 18:57:35 compute-0 nova_compute[348325]:    <timer name="pit" tickpolicy="delay"/>
Dec  3 18:57:35 compute-0 nova_compute[348325]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  3 18:57:35 compute-0 nova_compute[348325]:    <timer name="hpet" present="no"/>
Dec  3 18:57:35 compute-0 nova_compute[348325]:  </clock>
Dec  3 18:57:35 compute-0 nova_compute[348325]:  <cpu mode="host-model" match="exact">
Dec  3 18:57:35 compute-0 nova_compute[348325]:    <topology sockets="1" cores="1" threads="1"/>
Dec  3 18:57:35 compute-0 nova_compute[348325]:  </cpu>
Dec  3 18:57:35 compute-0 nova_compute[348325]:  <devices>
Dec  3 18:57:35 compute-0 nova_compute[348325]:    <disk type="network" device="disk">
Dec  3 18:57:35 compute-0 nova_compute[348325]:      <driver type="raw" cache="none"/>
Dec  3 18:57:35 compute-0 nova_compute[348325]:      <source protocol="rbd" name="vms/47c940fc-9b39-48b6-a183-42c0547ac964_disk">
Dec  3 18:57:35 compute-0 nova_compute[348325]:        <host name="192.168.122.100" port="6789"/>
Dec  3 18:57:35 compute-0 nova_compute[348325]:      </source>
Dec  3 18:57:35 compute-0 nova_compute[348325]:      <auth username="openstack">
Dec  3 18:57:35 compute-0 nova_compute[348325]:        <secret type="ceph" uuid="c1caf3ba-b2a5-5005-a11e-e955c344dccc"/>
Dec  3 18:57:35 compute-0 nova_compute[348325]:      </auth>
Dec  3 18:57:35 compute-0 nova_compute[348325]:      <target dev="vda" bus="virtio"/>
Dec  3 18:57:35 compute-0 nova_compute[348325]:    </disk>
Dec  3 18:57:35 compute-0 nova_compute[348325]:    <disk type="network" device="cdrom">
Dec  3 18:57:35 compute-0 nova_compute[348325]:      <driver type="raw" cache="none"/>
Dec  3 18:57:35 compute-0 nova_compute[348325]:      <source protocol="rbd" name="vms/47c940fc-9b39-48b6-a183-42c0547ac964_disk.config">
Dec  3 18:57:35 compute-0 nova_compute[348325]:        <host name="192.168.122.100" port="6789"/>
Dec  3 18:57:35 compute-0 nova_compute[348325]:      </source>
Dec  3 18:57:35 compute-0 nova_compute[348325]:      <auth username="openstack">
Dec  3 18:57:35 compute-0 nova_compute[348325]:        <secret type="ceph" uuid="c1caf3ba-b2a5-5005-a11e-e955c344dccc"/>
Dec  3 18:57:35 compute-0 nova_compute[348325]:      </auth>
Dec  3 18:57:35 compute-0 nova_compute[348325]:      <target dev="sda" bus="sata"/>
Dec  3 18:57:35 compute-0 nova_compute[348325]:    </disk>
Dec  3 18:57:35 compute-0 nova_compute[348325]:    <interface type="ethernet">
Dec  3 18:57:35 compute-0 nova_compute[348325]:      <mac address="fa:16:3e:50:3c:c0"/>
Dec  3 18:57:35 compute-0 nova_compute[348325]:      <model type="virtio"/>
Dec  3 18:57:35 compute-0 nova_compute[348325]:      <driver name="vhost" rx_queue_size="512"/>
Dec  3 18:57:35 compute-0 nova_compute[348325]:      <mtu size="1442"/>
Dec  3 18:57:35 compute-0 nova_compute[348325]:      <target dev="tapdf320f97-b0"/>
Dec  3 18:57:35 compute-0 nova_compute[348325]:    </interface>
Dec  3 18:57:35 compute-0 nova_compute[348325]:    <serial type="pty">
Dec  3 18:57:35 compute-0 nova_compute[348325]:      <log file="/var/lib/nova/instances/47c940fc-9b39-48b6-a183-42c0547ac964/console.log" append="off"/>
Dec  3 18:57:35 compute-0 nova_compute[348325]:    </serial>
Dec  3 18:57:35 compute-0 nova_compute[348325]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  3 18:57:35 compute-0 nova_compute[348325]:    <video>
Dec  3 18:57:35 compute-0 nova_compute[348325]:      <model type="virtio"/>
Dec  3 18:57:35 compute-0 nova_compute[348325]:    </video>
Dec  3 18:57:35 compute-0 nova_compute[348325]:    <input type="tablet" bus="usb"/>
Dec  3 18:57:35 compute-0 nova_compute[348325]:    <rng model="virtio">
Dec  3 18:57:35 compute-0 nova_compute[348325]:      <backend model="random">/dev/urandom</backend>
Dec  3 18:57:35 compute-0 nova_compute[348325]:    </rng>
Dec  3 18:57:35 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root"/>
Dec  3 18:57:35 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:35 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:35 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:35 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:35 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:35 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:35 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:35 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:35 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:35 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:35 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:35 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:35 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:35 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:35 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:35 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:35 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:35 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:35 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:35 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:35 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:35 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:35 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:35 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:57:35 compute-0 nova_compute[348325]:    <controller type="usb" index="0"/>
Dec  3 18:57:35 compute-0 nova_compute[348325]:    <memballoon model="virtio">
Dec  3 18:57:35 compute-0 nova_compute[348325]:      <stats period="10"/>
Dec  3 18:57:35 compute-0 nova_compute[348325]:    </memballoon>
Dec  3 18:57:35 compute-0 nova_compute[348325]:  </devices>
Dec  3 18:57:35 compute-0 nova_compute[348325]: </domain>
Dec  3 18:57:35 compute-0 nova_compute[348325]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
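
The XML block above is the return value of _get_guest_xml, echoed verbatim into the log. When auditing such dumps it is often easier to pull fields out mechanically; a self-contained sketch against a trimmed copy of the disk element:

    # Pull the RBD image and monitor endpoint out of (a trimmed copy of)
    # the domain XML dumped above.
    import xml.etree.ElementTree as ET

    xml = """
    <domain type="kvm">
      <devices>
        <disk type="network" device="disk">
          <source protocol="rbd" name="vms/47c940fc-9b39-48b6-a183-42c0547ac964_disk">
            <host name="192.168.122.100" port="6789"/>
          </source>
          <target dev="vda" bus="virtio"/>
        </disk>
      </devices>
    </domain>
    """
    for disk in ET.fromstring(xml).iter("disk"):
        src = disk.find("source")
        host = src.find("host")
        print(src.get("name"), "via", host.get("name"), host.get("port"))
    # vms/47c940fc-9b39-48b6-a183-42c0547ac964_disk via 192.168.122.100 6789
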
Dec  3 18:57:35 compute-0 nova_compute[348325]: 2025-12-03 18:57:35.804 348329 DEBUG nova.compute.manager [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] Preparing to wait for external event network-vif-plugged-df320f97-b085-4528-84d7-d0b7e40923a4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  3 18:57:35 compute-0 nova_compute[348325]: 2025-12-03 18:57:35.805 348329 DEBUG oslo_concurrency.lockutils [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] Acquiring lock "47c940fc-9b39-48b6-a183-42c0547ac964-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:57:35 compute-0 nova_compute[348325]: 2025-12-03 18:57:35.806 348329 DEBUG oslo_concurrency.lockutils [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] Lock "47c940fc-9b39-48b6-a183-42c0547ac964-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:57:35 compute-0 nova_compute[348325]: 2025-12-03 18:57:35.806 348329 DEBUG oslo_concurrency.lockutils [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] Lock "47c940fc-9b39-48b6-a183-42c0547ac964-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
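
The lock dance above registers a waiter for network-vif-plugged-df320f97-... before the port is actually plugged, so the Neutron-sent event cannot slip past between plug and wait. Reduced to its core, as a simplified analogue of InstanceEvents rather than Nova's code:

    # Register-then-act: create the waiter under a lock first, trigger the
    # action second, and only then block on the event.
    import threading

    _events, _lock = {}, threading.Lock()

    def prepare(name):
        with _lock:                        # "Acquiring lock ...-events"
            return _events.setdefault(name, threading.Event())

    def deliver(name):                     # runs when neutron posts the event
        with _lock:
            ev = _events.get(name)
        if ev:
            ev.set()

    waiter = prepare("network-vif-plugged-df320f97")
    # ... plug the VIF and start the domain here ...
    deliver("network-vif-plugged-df320f97")  # normally arrives via the API
    waiter.wait(timeout=300)
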
Dec  3 18:57:35 compute-0 nova_compute[348325]: 2025-12-03 18:57:35.808 348329 DEBUG nova.virt.libvirt.vif [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T18:57:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1383669703',display_name='tempest-ServersTestJSON-server-1383669703',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1383669703',id=9,image_ref='55982930-937b-484e-96ee-69e406a48023',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP3vKpPdVoGefI6WDHjH4oyoBPwanxLle9S+lz6fT5yBHqHXfcuia4MYuTcaOYAt4ZduC0R3h+eUyW8pi3ofrwS/9Sdj6knUryEscX3qGYO1YU3dERdGjQkdvKc/i/cU+Q==',key_name='tempest-keypair-342267088',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8356f2a17c1f4ae2a3e07cdcc6e6f6da',ramdisk_id='',reservation_id='r-gg1jpipc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='55982930-937b-484e-96ee-69e406a48023',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1597108075',owner_user_name='tempest-ServersTestJSON-1597108075-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T18:57:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d1fe8dd2488b4bf3ab1fb503816c5da9',uuid=47c940fc-9b39-48b6-a183-42c0547ac964,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "df320f97-b085-4528-84d7-d0b7e40923a4", "address": "fa:16:3e:50:3c:c0", "network": {"id": "42b9af68-948e-4963-878b-ef07a3b43e57", "bridge": "br-int", "label": "tempest-ServersTestJSON-1382517516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8356f2a17c1f4ae2a3e07cdcc6e6f6da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf320f97-b0", "ovs_interfaceid": "df320f97-b085-4528-84d7-d0b7e40923a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  3 18:57:35 compute-0 nova_compute[348325]: 2025-12-03 18:57:35.809 348329 DEBUG nova.network.os_vif_util [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] Converting VIF {"id": "df320f97-b085-4528-84d7-d0b7e40923a4", "address": "fa:16:3e:50:3c:c0", "network": {"id": "42b9af68-948e-4963-878b-ef07a3b43e57", "bridge": "br-int", "label": "tempest-ServersTestJSON-1382517516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8356f2a17c1f4ae2a3e07cdcc6e6f6da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf320f97-b0", "ovs_interfaceid": "df320f97-b085-4528-84d7-d0b7e40923a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 18:57:35 compute-0 nova_compute[348325]: 2025-12-03 18:57:35.810 348329 DEBUG nova.network.os_vif_util [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:50:3c:c0,bridge_name='br-int',has_traffic_filtering=True,id=df320f97-b085-4528-84d7-d0b7e40923a4,network=Network(42b9af68-948e-4963-878b-ef07a3b43e57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf320f97-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 18:57:35 compute-0 nova_compute[348325]: 2025-12-03 18:57:35.811 348329 DEBUG os_vif [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:50:3c:c0,bridge_name='br-int',has_traffic_filtering=True,id=df320f97-b085-4528-84d7-d0b7e40923a4,network=Network(42b9af68-948e-4963-878b-ef07a3b43e57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf320f97-b0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  3 18:57:35 compute-0 nova_compute[348325]: 2025-12-03 18:57:35.812 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:35 compute-0 nova_compute[348325]: 2025-12-03 18:57:35.812 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:57:35 compute-0 nova_compute[348325]: 2025-12-03 18:57:35.813 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 18:57:35 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:35.813 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[78b1e3f1-c589-4e21-a39e-8414553a124e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:35 compute-0 nova_compute[348325]: 2025-12-03 18:57:35.816 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:35 compute-0 nova_compute[348325]: 2025-12-03 18:57:35.817 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdf320f97-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:57:35 compute-0 nova_compute[348325]: 2025-12-03 18:57:35.817 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdf320f97-b0, col_values=(('external_ids', {'iface-id': 'df320f97-b085-4528-84d7-d0b7e40923a4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:50:3c:c0', 'vm-uuid': '47c940fc-9b39-48b6-a183-42c0547ac964'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
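
The two ovsdbapp transactions are the os-vif ovs plugin making sure br-int exists (a no-op here, hence "Transaction caused no change") and attaching the tap with the external_ids that ovn-controller matches against the logical port (iface-id). The CLI equivalent, run through subprocess; the MAC value is double-quoted so ovs-vsctl takes its colons literally:

    # CLI equivalent of the AddBridgeCommand / AddPortCommand / DbSetCommand
    # transactions above: idempotent bridge + port + OVN-facing metadata.
    import subprocess

    def vsctl(*args):
        subprocess.run(["ovs-vsctl", *args], check=True)

    vsctl("--may-exist", "add-br", "br-int",
          "--", "set", "Bridge", "br-int", "datapath_type=system")
    vsctl("--may-exist", "add-port", "br-int", "tapdf320f97-b0",
          "--", "set", "Interface", "tapdf320f97-b0",
          "external_ids:iface-id=df320f97-b085-4528-84d7-d0b7e40923a4",
          "external_ids:iface-status=active",
          'external_ids:attached-mac="fa:16:3e:50:3c:c0"',
          "external_ids:vm-uuid=47c940fc-9b39-48b6-a183-42c0547ac964")
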
Dec  3 18:57:35 compute-0 nova_compute[348325]: 2025-12-03 18:57:35.821 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:35 compute-0 NetworkManager[49087]: <info>  [1764788255.8225] manager: (tapdf320f97-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/45)
Dec  3 18:57:35 compute-0 nova_compute[348325]: 2025-12-03 18:57:35.824 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  3 18:57:35 compute-0 nova_compute[348325]: 2025-12-03 18:57:35.835 348329 DEBUG nova.network.neutron [req-4b04e203-6e90-440b-9bdc-6ba63596dd93 req-8efda565-ba8e-4d05-bfdc-495e3076af00 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] Updated VIF entry in instance network info cache for port fa1f26e3-cb99-46c5-b405-4fbdc024f8cf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  3 18:57:35 compute-0 nova_compute[348325]: 2025-12-03 18:57:35.835 348329 DEBUG nova.network.neutron [req-4b04e203-6e90-440b-9bdc-6ba63596dd93 req-8efda565-ba8e-4d05-bfdc-495e3076af00 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] Updating instance_info_cache with network_info: [{"id": "fa1f26e3-cb99-46c5-b405-4fbdc024f8cf", "address": "fa:16:3e:61:69:24", "network": {"id": "6cdaa8da-4e85-47a7-84f8-76fb36b9391a", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1996345903-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86bd600007a042cea64439c21bd920b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa1f26e3-cb", "ovs_interfaceid": "fa1f26e3-cb99-46c5-b405-4fbdc024f8cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 18:57:35 compute-0 nova_compute[348325]: 2025-12-03 18:57:35.836 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:35 compute-0 nova_compute[348325]: 2025-12-03 18:57:35.837 348329 INFO os_vif [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:50:3c:c0,bridge_name='br-int',has_traffic_filtering=True,id=df320f97-b085-4528-84d7-d0b7e40923a4,network=Network(42b9af68-948e-4963-878b-ef07a3b43e57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf320f97-b0')#033[00m
Dec  3 18:57:35 compute-0 ovn_controller[89305]: 2025-12-03T18:57:35Z|00085|binding|INFO|Setting lport fa1f26e3-cb99-46c5-b405-4fbdc024f8cf ovn-installed in OVS
Dec  3 18:57:35 compute-0 ovn_controller[89305]: 2025-12-03T18:57:35Z|00086|binding|INFO|Setting lport fa1f26e3-cb99-46c5-b405-4fbdc024f8cf up in Southbound
Dec  3 18:57:35 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:35.844 411797 DEBUG oslo.privsep.daemon [-] privsep: reply[1d6d5a91-e434-4d23-9cb8-e34908adbe1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:35 compute-0 nova_compute[348325]: 2025-12-03 18:57:35.848 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:35 compute-0 nova_compute[348325]: 2025-12-03 18:57:35.855 348329 DEBUG oslo_concurrency.lockutils [req-4b04e203-6e90-440b-9bdc-6ba63596dd93 req-8efda565-ba8e-4d05-bfdc-495e3076af00 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Releasing lock "refresh_cache-59c4595c-fa0d-4410-9dda-f266cca0c9e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 18:57:35 compute-0 NetworkManager[49087]: <info>  [1764788255.8564] manager: (tap6cdaa8da-40): new Veth device (/org/freedesktop/NetworkManager/Devices/46)
Dec  3 18:57:35 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:35.857 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[ba3bb668-b3d2-47e9-90d2-6af830f6a29c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:35 compute-0 nova_compute[348325]: 2025-12-03 18:57:35.885 348329 DEBUG nova.virt.libvirt.driver [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  3 18:57:35 compute-0 nova_compute[348325]: 2025-12-03 18:57:35.886 348329 DEBUG nova.virt.libvirt.driver [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  3 18:57:35 compute-0 nova_compute[348325]: 2025-12-03 18:57:35.886 348329 DEBUG nova.virt.libvirt.driver [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] No VIF found with MAC fa:16:3e:50:3c:c0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  3 18:57:35 compute-0 nova_compute[348325]: 2025-12-03 18:57:35.886 348329 INFO nova.virt.libvirt.driver [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] Using config drive#033[00m
Dec  3 18:57:35 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:35.891 411797 DEBUG oslo.privsep.daemon [-] privsep: reply[4cd9abd6-1861-4f48-bdbe-e61afbbb7f71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:35 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:35.896 411797 DEBUG oslo.privsep.daemon [-] privsep: reply[dfcd2046-ba42-4d50-83e6-c2832dba7097]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:35 compute-0 nova_compute[348325]: 2025-12-03 18:57:35.914 348329 DEBUG nova.storage.rbd_utils [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] rbd image 47c940fc-9b39-48b6-a183-42c0547ac964_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 18:57:35 compute-0 NetworkManager[49087]: <info>  [1764788255.9221] device (tap6cdaa8da-40): carrier: link connected
Dec  3 18:57:35 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:35.926 411797 DEBUG oslo.privsep.daemon [-] privsep: reply[21c187d1-08c2-400c-8593-4c3fe1c2d6ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:35 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:35.942 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[adcabb87-b2b6-48be-bdd0-db96fa855763]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6cdaa8da-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:54:a4:dc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 653586, 'reachable_time': 31225, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 442408, 'error': None, 'target': 'ovnmeta-6cdaa8da-4e85-47a7-84f8-76fb36b9391a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:35 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:35.955 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[5c43be33-02d8-4984-ae87-4e790f52d3b6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe54:a4dc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 653586, 'tstamp': 653586}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 442409, 'error': None, 'target': 'ovnmeta-6cdaa8da-4e85-47a7-84f8-76fb36b9391a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:35 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:35.969 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[1dbf9ad8-781d-4e4a-9465-ffe8c21bd339]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6cdaa8da-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:54:a4:dc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 653586, 'reachable_time': 31225, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 442410, 'error': None, 'target': 'ovnmeta-6cdaa8da-4e85-47a7-84f8-76fb36b9391a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:35 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:35.999 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[1e161637-f005-4315-89e7-c83f8f2e5f2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
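The (4, ...) tuples in these privsep replies are the daemon's marshalled return values, and the large dictionaries are pyroute2-style RTM_NEWLINK/RTM_NEWADDR messages whose 'attrs' field is a list of [name, value] pairs. A minimal sketch (illustrative, not neutron's code) of pulling a named IFLA_*/IFA_* attribute out of one of these dumps:

    def get_attr(msg, name, default=None):
        # 'msg' is one decoded netlink message as logged above; attrs is a
        # list of [attribute-name, value] pairs, so a linear scan suffices.
        for attr_name, value in msg.get('attrs', []):
            if attr_name == name:
                return value
        return default

    # Against the RTM_NEWLINK payload logged at 18:57:35.942:
    #   get_attr(msg, 'IFLA_IFNAME')  -> 'tap6cdaa8da-41'
    #   get_attr(msg, 'IFLA_ADDRESS') -> 'fa:16:3e:54:a4:dc'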
Dec  3 18:57:36 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:36.075 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[bf7d004f-883d-44f8-8ede-2043367f4f05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:36 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:36.077 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6cdaa8da-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:57:36 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:36.079 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 18:57:36 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:36.079 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6cdaa8da-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:57:36 compute-0 NetworkManager[49087]: <info>  [1764788256.0828] manager: (tap6cdaa8da-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/47)
Dec  3 18:57:36 compute-0 nova_compute[348325]: 2025-12-03 18:57:36.082 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:36 compute-0 kernel: tap6cdaa8da-40: entered promiscuous mode
Dec  3 18:57:36 compute-0 nova_compute[348325]: 2025-12-03 18:57:36.091 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:36 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:36.094 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6cdaa8da-40, col_values=(('external_ids', {'iface-id': '3ec83a16-d304-4a23-9fc9-bc37d2bda5a7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
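The three ovsdbapp commands above (DelPortCommand on br-ex, AddPortCommand on br-int, DbSetCommand on the Interface row) each commit as a single-command transaction (txn n=1). A rough stand-alone equivalent using ovsdbapp's Open_vSwitch schema API; the socket path and timeout are assumptions, and the commands are batched here only for brevity:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Assumed local ovsdb-server socket; adjust for the host's layout.
    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tap6cdaa8da-40', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tap6cdaa8da-40', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap6cdaa8da-40',
            ('external_ids', {'iface-id': '3ec83a16-d304-4a23-9fc9-bc37d2bda5a7'})))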
Dec  3 18:57:36 compute-0 nova_compute[348325]: 2025-12-03 18:57:36.096 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:36 compute-0 ovn_controller[89305]: 2025-12-03T18:57:36Z|00087|binding|INFO|Releasing lport 3ec83a16-d304-4a23-9fc9-bc37d2bda5a7 from this chassis (sb_readonly=0)
Dec  3 18:57:36 compute-0 nova_compute[348325]: 2025-12-03 18:57:36.129 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:36 compute-0 nova_compute[348325]: 2025-12-03 18:57:36.131 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:36 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:36.133 286999 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6cdaa8da-4e85-47a7-84f8-76fb36b9391a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6cdaa8da-4e85-47a7-84f8-76fb36b9391a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
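The ENOENT above is the expected first-run case: no haproxy pidfile exists yet, so the read is logged at DEBUG and treated as "no value" rather than raised. A sketch of that tolerant-read pattern (illustrative, not the neutron source):

    def get_value_from_file(path, converter=None):
        # A missing file or unparsable contents both mean "no value yet";
        # the caller treats None as "proxy not running, start one".
        try:
            with open(path) as f:
                data = f.read().strip()
            return converter(data) if converter else data
        except (OSError, ValueError):
            return None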
Dec  3 18:57:36 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:36.134 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[2abca521-d70a-4cba-8384-df59db92e90c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:36 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:36.135 286999 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  3 18:57:36 compute-0 ovn_metadata_agent[286994]: global
Dec  3 18:57:36 compute-0 ovn_metadata_agent[286994]:    log         /dev/log local0 debug
Dec  3 18:57:36 compute-0 ovn_metadata_agent[286994]:    log-tag     haproxy-metadata-proxy-6cdaa8da-4e85-47a7-84f8-76fb36b9391a
Dec  3 18:57:36 compute-0 ovn_metadata_agent[286994]:    user        root
Dec  3 18:57:36 compute-0 ovn_metadata_agent[286994]:    group       root
Dec  3 18:57:36 compute-0 ovn_metadata_agent[286994]:    maxconn     1024
Dec  3 18:57:36 compute-0 ovn_metadata_agent[286994]:    pidfile     /var/lib/neutron/external/pids/6cdaa8da-4e85-47a7-84f8-76fb36b9391a.pid.haproxy
Dec  3 18:57:36 compute-0 ovn_metadata_agent[286994]:    daemon
Dec  3 18:57:36 compute-0 ovn_metadata_agent[286994]: 
Dec  3 18:57:36 compute-0 ovn_metadata_agent[286994]: defaults
Dec  3 18:57:36 compute-0 ovn_metadata_agent[286994]:    log global
Dec  3 18:57:36 compute-0 ovn_metadata_agent[286994]:    mode http
Dec  3 18:57:36 compute-0 ovn_metadata_agent[286994]:    option httplog
Dec  3 18:57:36 compute-0 ovn_metadata_agent[286994]:    option dontlognull
Dec  3 18:57:36 compute-0 ovn_metadata_agent[286994]:    option http-server-close
Dec  3 18:57:36 compute-0 ovn_metadata_agent[286994]:    option forwardfor
Dec  3 18:57:36 compute-0 ovn_metadata_agent[286994]:    retries                 3
Dec  3 18:57:36 compute-0 ovn_metadata_agent[286994]:    timeout http-request    30s
Dec  3 18:57:36 compute-0 ovn_metadata_agent[286994]:    timeout connect         30s
Dec  3 18:57:36 compute-0 ovn_metadata_agent[286994]:    timeout client          32s
Dec  3 18:57:36 compute-0 ovn_metadata_agent[286994]:    timeout server          32s
Dec  3 18:57:36 compute-0 ovn_metadata_agent[286994]:    timeout http-keep-alive 30s
Dec  3 18:57:36 compute-0 ovn_metadata_agent[286994]: 
Dec  3 18:57:36 compute-0 ovn_metadata_agent[286994]: 
Dec  3 18:57:36 compute-0 ovn_metadata_agent[286994]: listen listener
Dec  3 18:57:36 compute-0 ovn_metadata_agent[286994]:    bind 169.254.169.254:80
Dec  3 18:57:36 compute-0 ovn_metadata_agent[286994]:    server metadata /var/lib/neutron/metadata_proxy
Dec  3 18:57:36 compute-0 ovn_metadata_agent[286994]:    http-request add-header X-OVN-Network-ID 6cdaa8da-4e85-47a7-84f8-76fb36b9391a
Dec  3 18:57:36 compute-0 ovn_metadata_agent[286994]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  3 18:57:36 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:36.137 286999 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6cdaa8da-4e85-47a7-84f8-76fb36b9391a', 'env', 'PROCESS_TAG=haproxy-6cdaa8da-4e85-47a7-84f8-76fb36b9391a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6cdaa8da-4e85-47a7-84f8-76fb36b9391a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
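Taken together, the config dump and the rootwrap command show the provisioning step: render the haproxy config to /var/lib/neutron/ovn-metadata-proxy/<network>.conf, then start haproxy inside the ovnmeta- namespace. A condensed sketch under those assumptions (the config here is abbreviated to the listener stanza; the full rendering is in the log above):

    import subprocess

    network_id = '6cdaa8da-4e85-47a7-84f8-76fb36b9391a'
    cfg_path = f'/var/lib/neutron/ovn-metadata-proxy/{network_id}.conf'

    cfg = (
        'listen listener\n'
        '    bind 169.254.169.254:80\n'
        '    server metadata /var/lib/neutron/metadata_proxy\n'
        f'    http-request add-header X-OVN-Network-ID {network_id}\n'
    )
    with open(cfg_path, 'w') as f:
        f.write(cfg)

    # Launch haproxy in the network's metadata namespace (root required).
    subprocess.run(['ip', 'netns', 'exec', f'ovnmeta-{network_id}',
                    'haproxy', '-f', cfg_path], check=True)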
Dec  3 18:57:36 compute-0 nova_compute[348325]: 2025-12-03 18:57:36.298 348329 DEBUG nova.virt.driver [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Emitting event <LifecycleEvent: 1764788256.298226, 59c4595c-fa0d-4410-9dda-f266cca0c9e4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 18:57:36 compute-0 nova_compute[348325]: 2025-12-03 18:57:36.299 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] VM Started (Lifecycle Event)#033[00m
Dec  3 18:57:36 compute-0 nova_compute[348325]: 2025-12-03 18:57:36.329 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 18:57:36 compute-0 nova_compute[348325]: 2025-12-03 18:57:36.335 348329 DEBUG nova.virt.driver [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Emitting event <LifecycleEvent: 1764788256.2983675, 59c4595c-fa0d-4410-9dda-f266cca0c9e4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 18:57:36 compute-0 nova_compute[348325]: 2025-12-03 18:57:36.335 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] VM Paused (Lifecycle Event)#033[00m
Dec  3 18:57:36 compute-0 nova_compute[348325]: 2025-12-03 18:57:36.361 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 18:57:36 compute-0 nova_compute[348325]: 2025-12-03 18:57:36.366 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  3 18:57:36 compute-0 nova_compute[348325]: 2025-12-03 18:57:36.397 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
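The power-state numbers in the sync message are nova's hypervisor-independent constants (DB power_state 0 = NOSTATE, VM power_state 3 = PAUSED); because task_state is still 'spawning', the handler skips rather than stomping on the in-flight build. A compact sketch of that decision, with the constant values as defined in nova.compute.power_state:

    # Values from nova/compute/power_state.py.
    NOSTATE, RUNNING, PAUSED, SHUTDOWN = 0x00, 0x01, 0x03, 0x04
    CRASHED, SUSPENDED = 0x06, 0x07

    def sync_power_state(db_state, vm_state, task_state):
        # Mirrors the "pending task (spawning). Skip." branch above:
        # never reconcile while another task owns the instance.
        if task_state is not None:
            return 'skip'
        return 'in-sync' if db_state == vm_state else 'reconcile'

    assert sync_power_state(NOSTATE, PAUSED, 'spawning') == 'skip'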
Dec  3 18:57:36 compute-0 nova_compute[348325]: 2025-12-03 18:57:36.496 348329 DEBUG nova.compute.manager [req-20c9c1a4-09f3-439a-a53f-93a5d4c920e5 req-f3520b57-d52b-437b-9a4a-1cf55a801a5d 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] Received event network-vif-unplugged-856126a0-9e4c-43b6-9e00-a5fade4f2abf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 18:57:36 compute-0 nova_compute[348325]: 2025-12-03 18:57:36.496 348329 DEBUG oslo_concurrency.lockutils [req-20c9c1a4-09f3-439a-a53f-93a5d4c920e5 req-f3520b57-d52b-437b-9a4a-1cf55a801a5d 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "67a42a04-754c-489b-9aeb-12d68487d4d9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:57:36 compute-0 nova_compute[348325]: 2025-12-03 18:57:36.497 348329 DEBUG oslo_concurrency.lockutils [req-20c9c1a4-09f3-439a-a53f-93a5d4c920e5 req-f3520b57-d52b-437b-9a4a-1cf55a801a5d 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "67a42a04-754c-489b-9aeb-12d68487d4d9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:57:36 compute-0 nova_compute[348325]: 2025-12-03 18:57:36.497 348329 DEBUG oslo_concurrency.lockutils [req-20c9c1a4-09f3-439a-a53f-93a5d4c920e5 req-f3520b57-d52b-437b-9a4a-1cf55a801a5d 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "67a42a04-754c-489b-9aeb-12d68487d4d9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:57:36 compute-0 nova_compute[348325]: 2025-12-03 18:57:36.497 348329 DEBUG nova.compute.manager [req-20c9c1a4-09f3-439a-a53f-93a5d4c920e5 req-f3520b57-d52b-437b-9a4a-1cf55a801a5d 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] No waiting events found dispatching network-vif-unplugged-856126a0-9e4c-43b6-9e00-a5fade4f2abf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  3 18:57:36 compute-0 nova_compute[348325]: 2025-12-03 18:57:36.497 348329 DEBUG nova.compute.manager [req-20c9c1a4-09f3-439a-a53f-93a5d4c920e5 req-f3520b57-d52b-437b-9a4a-1cf55a801a5d 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] Received event network-vif-unplugged-856126a0-9e4c-43b6-9e00-a5fade4f2abf for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  3 18:57:36 compute-0 nova_compute[348325]: 2025-12-03 18:57:36.497 348329 DEBUG nova.compute.manager [req-20c9c1a4-09f3-439a-a53f-93a5d4c920e5 req-f3520b57-d52b-437b-9a4a-1cf55a801a5d 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] Received event network-vif-plugged-856126a0-9e4c-43b6-9e00-a5fade4f2abf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 18:57:36 compute-0 nova_compute[348325]: 2025-12-03 18:57:36.497 348329 DEBUG oslo_concurrency.lockutils [req-20c9c1a4-09f3-439a-a53f-93a5d4c920e5 req-f3520b57-d52b-437b-9a4a-1cf55a801a5d 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "67a42a04-754c-489b-9aeb-12d68487d4d9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:57:36 compute-0 nova_compute[348325]: 2025-12-03 18:57:36.498 348329 DEBUG oslo_concurrency.lockutils [req-20c9c1a4-09f3-439a-a53f-93a5d4c920e5 req-f3520b57-d52b-437b-9a4a-1cf55a801a5d 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "67a42a04-754c-489b-9aeb-12d68487d4d9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:57:36 compute-0 nova_compute[348325]: 2025-12-03 18:57:36.498 348329 DEBUG oslo_concurrency.lockutils [req-20c9c1a4-09f3-439a-a53f-93a5d4c920e5 req-f3520b57-d52b-437b-9a4a-1cf55a801a5d 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "67a42a04-754c-489b-9aeb-12d68487d4d9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:57:36 compute-0 nova_compute[348325]: 2025-12-03 18:57:36.498 348329 DEBUG nova.compute.manager [req-20c9c1a4-09f3-439a-a53f-93a5d4c920e5 req-f3520b57-d52b-437b-9a4a-1cf55a801a5d 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] No waiting events found dispatching network-vif-plugged-856126a0-9e4c-43b6-9e00-a5fade4f2abf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  3 18:57:36 compute-0 nova_compute[348325]: 2025-12-03 18:57:36.498 348329 WARNING nova.compute.manager [req-20c9c1a4-09f3-439a-a53f-93a5d4c920e5 req-f3520b57-d52b-437b-9a4a-1cf55a801a5d 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] Received unexpected event network-vif-plugged-856126a0-9e4c-43b6-9e00-a5fade4f2abf for instance with vm_state active and task_state deleting.#033[00m
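The acquire/release pairs around pop_instance_event come from oslo.concurrency: each instance gets a '<uuid>-events' lock so external events and waiters cannot race on the same event table. A minimal sketch of the pattern (names and data shape are illustrative):

    from oslo_concurrency import lockutils

    _events = {}  # {instance_uuid: {event_name: payload}}

    def pop_instance_event(instance_uuid, event_name):
        # Same shape as the log above: take the per-instance lock, then pop
        # any waiter registered for this event; None means "no waiting events".
        with lockutils.lock(f'{instance_uuid}-events'):
            return _events.get(instance_uuid, {}).pop(event_name, None)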
Dec  3 18:57:36 compute-0 podman[442490]: 2025-12-03 18:57:36.55089092 +0000 UTC m=+0.058868219 container create 00a98fc42fae7c3ad6e1b61af7e42c8a0a8ab4310659d95424c83f717c151827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6cdaa8da-4e85-47a7-84f8-76fb36b9391a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS)
Dec  3 18:57:36 compute-0 systemd[1]: Started libpod-conmon-00a98fc42fae7c3ad6e1b61af7e42c8a0a8ab4310659d95424c83f717c151827.scope.
Dec  3 18:57:36 compute-0 podman[442490]: 2025-12-03 18:57:36.523136672 +0000 UTC m=+0.031114001 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  3 18:57:36 compute-0 nova_compute[348325]: 2025-12-03 18:57:36.623 348329 INFO nova.virt.libvirt.driver [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] Creating config drive at /var/lib/nova/instances/47c940fc-9b39-48b6-a183-42c0547ac964/disk.config#033[00m
Dec  3 18:57:36 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:57:36 compute-0 nova_compute[348325]: 2025-12-03 18:57:36.629 348329 DEBUG oslo_concurrency.processutils [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/47c940fc-9b39-48b6-a183-42c0547ac964/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvvticqdt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:57:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66d7f6bfb76a92d6ae02465e0e0fbc351f4f265b766209368740f46ca4bc2c1f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  3 18:57:36 compute-0 podman[442490]: 2025-12-03 18:57:36.647239512 +0000 UTC m=+0.155216831 container init 00a98fc42fae7c3ad6e1b61af7e42c8a0a8ab4310659d95424c83f717c151827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6cdaa8da-4e85-47a7-84f8-76fb36b9391a, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec  3 18:57:36 compute-0 podman[442490]: 2025-12-03 18:57:36.653902245 +0000 UTC m=+0.161879544 container start 00a98fc42fae7c3ad6e1b61af7e42c8a0a8ab4310659d95424c83f717c151827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6cdaa8da-4e85-47a7-84f8-76fb36b9391a, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  3 18:57:36 compute-0 neutron-haproxy-ovnmeta-6cdaa8da-4e85-47a7-84f8-76fb36b9391a[442505]: [NOTICE]   (442510) : New worker (442514) forked
Dec  3 18:57:36 compute-0 neutron-haproxy-ovnmeta-6cdaa8da-4e85-47a7-84f8-76fb36b9391a[442505]: [NOTICE]   (442510) : Loading success.
Dec  3 18:57:36 compute-0 nova_compute[348325]: 2025-12-03 18:57:36.767 348329 DEBUG oslo_concurrency.processutils [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/47c940fc-9b39-48b6-a183-42c0547ac964/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvvticqdt" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:57:36 compute-0 nova_compute[348325]: 2025-12-03 18:57:36.801 348329 DEBUG nova.storage.rbd_utils [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] rbd image 47c940fc-9b39-48b6-a183-42c0547ac964_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 18:57:36 compute-0 nova_compute[348325]: 2025-12-03 18:57:36.807 348329 DEBUG oslo_concurrency.processutils [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/47c940fc-9b39-48b6-a183-42c0547ac964/disk.config 47c940fc-9b39-48b6-a183-42c0547ac964_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:57:36 compute-0 nova_compute[348325]: 2025-12-03 18:57:36.832 348329 DEBUG nova.compute.manager [req-aba871f6-9276-458c-8e70-714c5be2c632 req-b6448809-eb9f-4093-9c26-3c1f8e3a8b9e 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Received event network-vif-plugged-b709b4ab-585a-4aed-9f06-3c9650d54c09 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 18:57:36 compute-0 nova_compute[348325]: 2025-12-03 18:57:36.833 348329 DEBUG oslo_concurrency.lockutils [req-aba871f6-9276-458c-8e70-714c5be2c632 req-b6448809-eb9f-4093-9c26-3c1f8e3a8b9e 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "eff2304f-0e67-4c93-ae65-20d4ddb87625-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:57:36 compute-0 nova_compute[348325]: 2025-12-03 18:57:36.833 348329 DEBUG oslo_concurrency.lockutils [req-aba871f6-9276-458c-8e70-714c5be2c632 req-b6448809-eb9f-4093-9c26-3c1f8e3a8b9e 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "eff2304f-0e67-4c93-ae65-20d4ddb87625-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:57:36 compute-0 nova_compute[348325]: 2025-12-03 18:57:36.833 348329 DEBUG oslo_concurrency.lockutils [req-aba871f6-9276-458c-8e70-714c5be2c632 req-b6448809-eb9f-4093-9c26-3c1f8e3a8b9e 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "eff2304f-0e67-4c93-ae65-20d4ddb87625-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:57:36 compute-0 nova_compute[348325]: 2025-12-03 18:57:36.834 348329 DEBUG nova.compute.manager [req-aba871f6-9276-458c-8e70-714c5be2c632 req-b6448809-eb9f-4093-9c26-3c1f8e3a8b9e 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Processing event network-vif-plugged-b709b4ab-585a-4aed-9f06-3c9650d54c09 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  3 18:57:36 compute-0 nova_compute[348325]: 2025-12-03 18:57:36.835 348329 DEBUG nova.compute.manager [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  3 18:57:36 compute-0 nova_compute[348325]: 2025-12-03 18:57:36.839 348329 DEBUG nova.virt.driver [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Emitting event <LifecycleEvent: 1764788256.8396783, eff2304f-0e67-4c93-ae65-20d4ddb87625 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 18:57:36 compute-0 nova_compute[348325]: 2025-12-03 18:57:36.840 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] VM Resumed (Lifecycle Event)#033[00m
Dec  3 18:57:36 compute-0 nova_compute[348325]: 2025-12-03 18:57:36.842 348329 DEBUG nova.virt.libvirt.driver [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  3 18:57:36 compute-0 nova_compute[348325]: 2025-12-03 18:57:36.847 348329 INFO nova.virt.libvirt.driver [-] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Instance spawned successfully.#033[00m
Dec  3 18:57:36 compute-0 nova_compute[348325]: 2025-12-03 18:57:36.847 348329 DEBUG nova.virt.libvirt.driver [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  3 18:57:36 compute-0 nova_compute[348325]: 2025-12-03 18:57:36.863 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 18:57:36 compute-0 nova_compute[348325]: 2025-12-03 18:57:36.869 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  3 18:57:36 compute-0 nova_compute[348325]: 2025-12-03 18:57:36.874 348329 DEBUG nova.virt.libvirt.driver [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 18:57:36 compute-0 nova_compute[348325]: 2025-12-03 18:57:36.875 348329 DEBUG nova.virt.libvirt.driver [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 18:57:36 compute-0 nova_compute[348325]: 2025-12-03 18:57:36.875 348329 DEBUG nova.virt.libvirt.driver [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 18:57:36 compute-0 nova_compute[348325]: 2025-12-03 18:57:36.876 348329 DEBUG nova.virt.libvirt.driver [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 18:57:36 compute-0 nova_compute[348325]: 2025-12-03 18:57:36.876 348329 DEBUG nova.virt.libvirt.driver [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 18:57:36 compute-0 nova_compute[348325]: 2025-12-03 18:57:36.878 348329 DEBUG nova.virt.libvirt.driver [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 18:57:36 compute-0 nova_compute[348325]: 2025-12-03 18:57:36.890 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  3 18:57:36 compute-0 nova_compute[348325]: 2025-12-03 18:57:36.946 348329 INFO nova.compute.manager [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Took 11.88 seconds to spawn the instance on the hypervisor.#033[00m
Dec  3 18:57:36 compute-0 nova_compute[348325]: 2025-12-03 18:57:36.947 348329 DEBUG nova.compute.manager [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 18:57:36 compute-0 nova_compute[348325]: 2025-12-03 18:57:36.961 348329 DEBUG nova.network.neutron [-] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 18:57:36 compute-0 nova_compute[348325]: 2025-12-03 18:57:36.990 348329 INFO nova.compute.manager [-] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] Took 1.94 seconds to deallocate network for instance.#033[00m
Dec  3 18:57:37 compute-0 nova_compute[348325]: 2025-12-03 18:57:37.013 348329 DEBUG oslo_concurrency.processutils [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/47c940fc-9b39-48b6-a183-42c0547ac964/disk.config 47c940fc-9b39-48b6-a183-42c0547ac964_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.206s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:57:37 compute-0 nova_compute[348325]: 2025-12-03 18:57:37.015 348329 INFO nova.compute.manager [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Took 12.85 seconds to build instance.#033[00m
Dec  3 18:57:37 compute-0 nova_compute[348325]: 2025-12-03 18:57:37.016 348329 INFO nova.virt.libvirt.driver [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] Deleting local config drive /var/lib/nova/instances/47c940fc-9b39-48b6-a183-42c0547ac964/disk.config because it was imported into RBD.#033[00m
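This sequence is the Ceph-backed config-drive path: mkisofs builds the ISO9660 image locally, rbd import copies it into the vms pool as <uuid>_disk.config, and the local file is removed once imported. A stand-alone sketch of the same three steps (the staged metadata directory is a placeholder, and the -publisher flag from the logged command is omitted):

    import os
    import subprocess

    inst = '47c940fc-9b39-48b6-a183-42c0547ac964'
    iso = f'/var/lib/nova/instances/{inst}/disk.config'
    src_dir = '/tmp/metadata_dir'  # placeholder for the staged metadata tree

    # 1. Build the config drive with the volume label cloud-init expects.
    subprocess.run(['/usr/bin/mkisofs', '-o', iso, '-ldots', '-allow-lowercase',
                    '-allow-multidot', '-l', '-J', '-r', '-V', 'config-2',
                    src_dir], check=True)

    # 2. Import into the Ceph pool used for ephemeral disks.
    subprocess.run(['rbd', 'import', '--pool', 'vms', iso,
                    f'{inst}_disk.config', '--image-format=2',
                    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
                   check=True)

    # 3. Drop the local copy, as the log notes, now that RBD holds it.
    os.remove(iso)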
Dec  3 18:57:37 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:37.024 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1ac9fd0d-196b-4ea8-9a9a-8aa831092805, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:57:37 compute-0 nova_compute[348325]: 2025-12-03 18:57:37.047 348329 DEBUG oslo_concurrency.lockutils [None req-259aaa3a-23e2-4bef-b896-a70eca550d71 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:57:37 compute-0 nova_compute[348325]: 2025-12-03 18:57:37.048 348329 DEBUG oslo_concurrency.lockutils [None req-259aaa3a-23e2-4bef-b896-a70eca550d71 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:57:37 compute-0 nova_compute[348325]: 2025-12-03 18:57:37.055 348329 DEBUG oslo_concurrency.lockutils [None req-5c5d3ba4-19ba-43cb-b093-c3c148837ae7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Lock "eff2304f-0e67-4c93-ae65-20d4ddb87625" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.965s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:57:37 compute-0 NetworkManager[49087]: <info>  [1764788257.0831] manager: (tapdf320f97-b0): new Tun device (/org/freedesktop/NetworkManager/Devices/48)
Dec  3 18:57:37 compute-0 kernel: tapdf320f97-b0: entered promiscuous mode
Dec  3 18:57:37 compute-0 systemd-udevd[442466]: Network interface NamePolicy= disabled on kernel command line.
Dec  3 18:57:37 compute-0 ovn_controller[89305]: 2025-12-03T18:57:37Z|00088|binding|INFO|Claiming lport df320f97-b085-4528-84d7-d0b7e40923a4 for this chassis.
Dec  3 18:57:37 compute-0 nova_compute[348325]: 2025-12-03 18:57:37.089 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:37 compute-0 ovn_controller[89305]: 2025-12-03T18:57:37Z|00089|binding|INFO|df320f97-b085-4528-84d7-d0b7e40923a4: Claiming fa:16:3e:50:3c:c0 10.100.0.13
Dec  3 18:57:37 compute-0 nova_compute[348325]: 2025-12-03 18:57:37.093 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:37 compute-0 nova_compute[348325]: 2025-12-03 18:57:37.096 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:37 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:37.102 286999 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:50:3c:c0 10.100.0.13'], port_security=['fa:16:3e:50:3c:c0 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '47c940fc-9b39-48b6-a183-42c0547ac964', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-42b9af68-948e-4963-878b-ef07a3b43e57', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8356f2a17c1f4ae2a3e07cdcc6e6f6da', 'neutron:revision_number': '2', 'neutron:security_group_ids': '70c27b20-70e8-4341-9a33-af610f2b2903', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1bc91bca-2d27-4247-9561-64ecb11b3efc, chassis=[<ovs.db.idl.Row object at 0x7f81e3e96760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f81e3e96760>], logical_port=df320f97-b085-4528-84d7-d0b7e40923a4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 18:57:37 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:37.103 286999 INFO neutron.agent.ovn.metadata.agent [-] Port df320f97-b085-4528-84d7-d0b7e40923a4 in datapath 42b9af68-948e-4963-878b-ef07a3b43e57 bound to our chassis#033[00m
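PortBindingUpdatedEvent is an ovsdbapp row event: the IDL delivers Port_Binding updates, and the handler fires when the row's chassis column becomes this chassis (the matched old=Port_Binding(chassis=[]) shows the state before the claim). A bare-bones sketch of such an event class; neutron's real version carries more matching logic:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdated(row_event.RowEvent):
        def __init__(self, on_bound):
            # Watch 'update' events on Port_Binding rows, no row conditions.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)
            self.on_bound = on_bound

        def run(self, event, row, old):
            # 'old' carries prior values only for changed columns; an empty
            # old chassis plus a populated new one means the port was bound.
            if getattr(old, 'chassis', None) == [] and row.chassis:
                self.on_bound(row.logical_port)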
Dec  3 18:57:37 compute-0 NetworkManager[49087]: <info>  [1764788257.1036] device (tapdf320f97-b0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  3 18:57:37 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:37.106 286999 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 42b9af68-948e-4963-878b-ef07a3b43e57#033[00m
Dec  3 18:57:37 compute-0 NetworkManager[49087]: <info>  [1764788257.1119] device (tapdf320f97-b0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  3 18:57:37 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:37.116 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[84a3d540-9e06-4b64-93ef-fca9e636333d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:37 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:37.117 286999 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap42b9af68-91 in ovnmeta-42b9af68-948e-4963-878b-ef07a3b43e57 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  3 18:57:37 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:37.119 411759 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap42b9af68-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  3 18:57:37 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:37.119 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[13d2f448-7909-4103-a796-21db448ddcbb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:37 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:37.120 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[df24bcce-14a6-41e7-9270-47c6a8af43b6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:37 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:37.140 287110 DEBUG oslo.privsep.daemon [-] privsep: reply[2083c4bf-d255-4235-8cda-70d8aac60d85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
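The "Creating VETH tap42b9af68-91 in ovnmeta-..." step is carried out through the privsep daemon with pyroute2. A rough plain-pyroute2 equivalent, assuming the namespace already exists (the agent creates it just before this):

    from pyroute2 import IPRoute

    ns = 'ovnmeta-42b9af68-948e-4963-878b-ef07a3b43e57'

    with IPRoute() as ipr:
        # Create the veth pair named in the log; tap42b9af68-90 stays in the
        # root namespace (it gets plugged into br-int), -91 is the peer.
        ipr.link('add', ifname='tap42b9af68-90', kind='veth',
                 peer={'ifname': 'tap42b9af68-91'})
        # Move the peer end into the metadata namespace by name.
        idx = ipr.link_lookup(ifname='tap42b9af68-91')[0]
        ipr.link('set', index=idx, net_ns_fd=ns)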
Dec  3 18:57:37 compute-0 systemd-machined[138702]: New machine qemu-9-instance-00000009.
Dec  3 18:57:37 compute-0 systemd[1]: Started Virtual Machine qemu-9-instance-00000009.
Dec  3 18:57:37 compute-0 ovn_controller[89305]: 2025-12-03T18:57:37Z|00090|binding|INFO|Setting lport df320f97-b085-4528-84d7-d0b7e40923a4 ovn-installed in OVS
Dec  3 18:57:37 compute-0 ovn_controller[89305]: 2025-12-03T18:57:37Z|00091|binding|INFO|Setting lport df320f97-b085-4528-84d7-d0b7e40923a4 up in Southbound
Dec  3 18:57:37 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:37.167 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[2fa3d457-2b5a-42c5-a217-18091a265a03]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:37 compute-0 nova_compute[348325]: 2025-12-03 18:57:37.169 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:37 compute-0 nova_compute[348325]: 2025-12-03 18:57:37.187 348329 DEBUG nova.network.neutron [req-3e18dcd6-d9f5-4083-96c0-4767d4f1e02d req-a94ce93a-6508-46e2-b91e-0528b5a6b786 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] Updated VIF entry in instance network info cache for port df320f97-b085-4528-84d7-d0b7e40923a4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  3 18:57:37 compute-0 nova_compute[348325]: 2025-12-03 18:57:37.188 348329 DEBUG nova.network.neutron [req-3e18dcd6-d9f5-4083-96c0-4767d4f1e02d req-a94ce93a-6508-46e2-b91e-0528b5a6b786 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] Updating instance_info_cache with network_info: [{"id": "df320f97-b085-4528-84d7-d0b7e40923a4", "address": "fa:16:3e:50:3c:c0", "network": {"id": "42b9af68-948e-4963-878b-ef07a3b43e57", "bridge": "br-int", "label": "tempest-ServersTestJSON-1382517516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8356f2a17c1f4ae2a3e07cdcc6e6f6da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf320f97-b0", "ovs_interfaceid": "df320f97-b085-4528-84d7-d0b7e40923a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 18:57:37 compute-0 nova_compute[348325]: 2025-12-03 18:57:37.192 348329 DEBUG oslo_concurrency.processutils [None req-259aaa3a-23e2-4bef-b896-a70eca550d71 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
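The 'ceph df --format=json' call is how the resource tracker sizes the RBD backend: it reads per-pool stats from the JSON report. A small sketch of consuming that output; the 'pools'/'name'/'stats' key names reflect current ceph df JSON and should be treated as an assumption:

    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True, capture_output=True, text=True).stdout

    report = json.loads(out)
    # 'pools' is a list of {'name': ..., 'stats': {...}} entries; pick the
    # pool the log imports into and inspect its usage counters.
    vms = next(p for p in report['pools'] if p['name'] == 'vms')
    print(vms['stats'])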
Dec  3 18:57:37 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:37.199 411797 DEBUG oslo.privsep.daemon [-] privsep: reply[949d24d1-fcc0-4de4-8f15-71864e66c1cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:37 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:37.208 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[f9df1943-4508-4d30-93ef-894f87c6015b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:37 compute-0 NetworkManager[49087]: <info>  [1764788257.2098] manager: (tap42b9af68-90): new Veth device (/org/freedesktop/NetworkManager/Devices/49)
Dec  3 18:57:37 compute-0 nova_compute[348325]: 2025-12-03 18:57:37.224 348329 DEBUG oslo_concurrency.lockutils [req-3e18dcd6-d9f5-4083-96c0-4767d4f1e02d req-a94ce93a-6508-46e2-b91e-0528b5a6b786 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Releasing lock "refresh_cache-47c940fc-9b39-48b6-a183-42c0547ac964" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 18:57:37 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:37.239 411797 DEBUG oslo.privsep.daemon [-] privsep: reply[e60e1c1d-cc59-4f67-a48d-90481e7bcb75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:37 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:37.243 411797 DEBUG oslo.privsep.daemon [-] privsep: reply[fa906472-713d-4bac-9134-bfecdaa347bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:37 compute-0 NetworkManager[49087]: <info>  [1764788257.2642] device (tap42b9af68-90): carrier: link connected
Dec  3 18:57:37 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:37.269 411797 DEBUG oslo.privsep.daemon [-] privsep: reply[96847cbd-3365-4928-b2ff-a16f07440525]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:37 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:37.286 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[f63ec566-240d-4c6b-a578-c6dcdab71a7a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap42b9af68-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:10:6e:46'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 653720, 'reachable_time': 28802, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 442608, 'error': None, 'target': 'ovnmeta-42b9af68-948e-4963-878b-ef07a3b43e57', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:37 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:37.303 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[d2ab56b6-17f1-4552-ae42-b9a8858af934]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe10:6e46'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 653720, 'tstamp': 653720}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 442610, 'error': None, 'target': 'ovnmeta-42b9af68-948e-4963-878b-ef07a3b43e57', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:37 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:37.317 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[b66aa74a-fb74-4432-8382-09759911ec84]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap42b9af68-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:10:6e:46'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 653720, 'reachable_time': 28802, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 442611, 'error': None, 'target': 'ovnmeta-42b9af68-948e-4963-878b-ef07a3b43e57', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:37 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:37.346 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[b3000569-7f88-40a0-8819-527a0b17e66b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:37 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:37.403 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[d56af896-819a-4cb3-9914-6b1bf245ab5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:37 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:37.406 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap42b9af68-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:57:37 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:37.406 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 18:57:37 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:37.407 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap42b9af68-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:57:37 compute-0 NetworkManager[49087]: <info>  [1764788257.4103] manager: (tap42b9af68-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/50)
Dec  3 18:57:37 compute-0 nova_compute[348325]: 2025-12-03 18:57:37.410 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:37 compute-0 kernel: tap42b9af68-90: entered promiscuous mode
Dec  3 18:57:37 compute-0 nova_compute[348325]: 2025-12-03 18:57:37.416 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:37 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:37.417 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap42b9af68-90, col_values=(('external_ids', {'iface-id': '86a532e1-740b-4244-9c38-3dc4d023bacf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:57:37 compute-0 nova_compute[348325]: 2025-12-03 18:57:37.418 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:37 compute-0 ovn_controller[89305]: 2025-12-03T18:57:37Z|00092|binding|INFO|Releasing lport 86a532e1-740b-4244-9c38-3dc4d023bacf from this chassis (sb_readonly=0)
Dec  3 18:57:37 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:37.438 286999 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/42b9af68-948e-4963-878b-ef07a3b43e57.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/42b9af68-948e-4963-878b-ef07a3b43e57.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  3 18:57:37 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:37.439 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[7604e5d4-6014-4977-a496-0f6013c421fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:37 compute-0 nova_compute[348325]: 2025-12-03 18:57:37.441 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:37 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:37.442 286999 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  3 18:57:37 compute-0 ovn_metadata_agent[286994]: global
Dec  3 18:57:37 compute-0 ovn_metadata_agent[286994]:    log         /dev/log local0 debug
Dec  3 18:57:37 compute-0 ovn_metadata_agent[286994]:    log-tag     haproxy-metadata-proxy-42b9af68-948e-4963-878b-ef07a3b43e57
Dec  3 18:57:37 compute-0 ovn_metadata_agent[286994]:    user        root
Dec  3 18:57:37 compute-0 ovn_metadata_agent[286994]:    group       root
Dec  3 18:57:37 compute-0 ovn_metadata_agent[286994]:    maxconn     1024
Dec  3 18:57:37 compute-0 ovn_metadata_agent[286994]:    pidfile     /var/lib/neutron/external/pids/42b9af68-948e-4963-878b-ef07a3b43e57.pid.haproxy
Dec  3 18:57:37 compute-0 ovn_metadata_agent[286994]:    daemon
Dec  3 18:57:37 compute-0 ovn_metadata_agent[286994]: 
Dec  3 18:57:37 compute-0 ovn_metadata_agent[286994]: defaults
Dec  3 18:57:37 compute-0 ovn_metadata_agent[286994]:    log global
Dec  3 18:57:37 compute-0 ovn_metadata_agent[286994]:    mode http
Dec  3 18:57:37 compute-0 ovn_metadata_agent[286994]:    option httplog
Dec  3 18:57:37 compute-0 ovn_metadata_agent[286994]:    option dontlognull
Dec  3 18:57:37 compute-0 ovn_metadata_agent[286994]:    option http-server-close
Dec  3 18:57:37 compute-0 ovn_metadata_agent[286994]:    option forwardfor
Dec  3 18:57:37 compute-0 ovn_metadata_agent[286994]:    retries                 3
Dec  3 18:57:37 compute-0 ovn_metadata_agent[286994]:    timeout http-request    30s
Dec  3 18:57:37 compute-0 ovn_metadata_agent[286994]:    timeout connect         30s
Dec  3 18:57:37 compute-0 ovn_metadata_agent[286994]:    timeout client          32s
Dec  3 18:57:37 compute-0 ovn_metadata_agent[286994]:    timeout server          32s
Dec  3 18:57:37 compute-0 ovn_metadata_agent[286994]:    timeout http-keep-alive 30s
Dec  3 18:57:37 compute-0 ovn_metadata_agent[286994]: 
Dec  3 18:57:37 compute-0 ovn_metadata_agent[286994]: 
Dec  3 18:57:37 compute-0 ovn_metadata_agent[286994]: listen listener
Dec  3 18:57:37 compute-0 ovn_metadata_agent[286994]:    bind 169.254.169.254:80
Dec  3 18:57:37 compute-0 ovn_metadata_agent[286994]:    server metadata /var/lib/neutron/metadata_proxy
Dec  3 18:57:37 compute-0 ovn_metadata_agent[286994]:    http-request add-header X-OVN-Network-ID 42b9af68-948e-4963-878b-ef07a3b43e57
Dec  3 18:57:37 compute-0 ovn_metadata_agent[286994]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  3 18:57:37 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:37.444 286999 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-42b9af68-948e-4963-878b-ef07a3b43e57', 'env', 'PROCESS_TAG=haproxy-42b9af68-948e-4963-878b-ef07a3b43e57', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/42b9af68-948e-4963-878b-ef07a3b43e57.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  3 18:57:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 18:57:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/91952304' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 18:57:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 18:57:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/91952304' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 18:57:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:57:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1152588201' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:57:37 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1758: 321 pgs: 321 active+clean; 205 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.8 MiB/s wr, 188 op/s
Dec  3 18:57:37 compute-0 nova_compute[348325]: 2025-12-03 18:57:37.704 348329 DEBUG oslo_concurrency.processutils [None req-259aaa3a-23e2-4bef-b896-a70eca550d71 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:57:37 compute-0 nova_compute[348325]: 2025-12-03 18:57:37.716 348329 DEBUG nova.compute.provider_tree [None req-259aaa3a-23e2-4bef-b896-a70eca550d71 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 18:57:37 compute-0 nova_compute[348325]: 2025-12-03 18:57:37.736 348329 DEBUG nova.scheduler.client.report [None req-259aaa3a-23e2-4bef-b896-a70eca550d71 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 18:57:37 compute-0 nova_compute[348325]: 2025-12-03 18:57:37.752 348329 DEBUG nova.virt.driver [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Emitting event <LifecycleEvent: 1764788257.7524755, 47c940fc-9b39-48b6-a183-42c0547ac964 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 18:57:37 compute-0 nova_compute[348325]: 2025-12-03 18:57:37.753 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] VM Started (Lifecycle Event)#033[00m
Dec  3 18:57:37 compute-0 nova_compute[348325]: 2025-12-03 18:57:37.759 348329 DEBUG oslo_concurrency.lockutils [None req-259aaa3a-23e2-4bef-b896-a70eca550d71 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.711s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:57:37 compute-0 nova_compute[348325]: 2025-12-03 18:57:37.781 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 18:57:37 compute-0 nova_compute[348325]: 2025-12-03 18:57:37.786 348329 DEBUG nova.virt.driver [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Emitting event <LifecycleEvent: 1764788257.7526023, 47c940fc-9b39-48b6-a183-42c0547ac964 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 18:57:37 compute-0 nova_compute[348325]: 2025-12-03 18:57:37.786 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] VM Paused (Lifecycle Event)#033[00m
Dec  3 18:57:37 compute-0 nova_compute[348325]: 2025-12-03 18:57:37.797 348329 INFO nova.scheduler.client.report [None req-259aaa3a-23e2-4bef-b896-a70eca550d71 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] Deleted allocations for instance 67a42a04-754c-489b-9aeb-12d68487d4d9#033[00m
Dec  3 18:57:37 compute-0 nova_compute[348325]: 2025-12-03 18:57:37.812 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 18:57:37 compute-0 nova_compute[348325]: 2025-12-03 18:57:37.820 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  3 18:57:37 compute-0 podman[442703]: 2025-12-03 18:57:37.849612588 +0000 UTC m=+0.053216400 container create 2432967a3abec4a73d06d5bc6acd8cd3d93f1b6a638f9fbd498a5d5e964f6db0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-42b9af68-948e-4963-878b-ef07a3b43e57, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  3 18:57:37 compute-0 nova_compute[348325]: 2025-12-03 18:57:37.868 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  3 18:57:37 compute-0 systemd[1]: Started libpod-conmon-2432967a3abec4a73d06d5bc6acd8cd3d93f1b6a638f9fbd498a5d5e964f6db0.scope.
Dec  3 18:57:37 compute-0 podman[442703]: 2025-12-03 18:57:37.826882473 +0000 UTC m=+0.030486305 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  3 18:57:37 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:57:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31d66dbc512761ecd457beeecd8b3e33555bc6c12204eb467c7472e69c38ad10/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  3 18:57:37 compute-0 podman[442703]: 2025-12-03 18:57:37.955288818 +0000 UTC m=+0.158892650 container init 2432967a3abec4a73d06d5bc6acd8cd3d93f1b6a638f9fbd498a5d5e964f6db0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-42b9af68-948e-4963-878b-ef07a3b43e57, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125)
Dec  3 18:57:37 compute-0 podman[442703]: 2025-12-03 18:57:37.971423923 +0000 UTC m=+0.175027735 container start 2432967a3abec4a73d06d5bc6acd8cd3d93f1b6a638f9fbd498a5d5e964f6db0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-42b9af68-948e-4963-878b-ef07a3b43e57, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  3 18:57:37 compute-0 nova_compute[348325]: 2025-12-03 18:57:37.984 348329 DEBUG oslo_concurrency.lockutils [None req-259aaa3a-23e2-4bef-b896-a70eca550d71 0d49b8a0584445d09f42f33a803d4dfe eda31966af554b3b92f3e55bf4c324c2 - - default default] Lock "67a42a04-754c-489b-9aeb-12d68487d4d9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.945s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:57:38 compute-0 neutron-haproxy-ovnmeta-42b9af68-948e-4963-878b-ef07a3b43e57[442719]: [NOTICE]   (442723) : New worker (442725) forked
Dec  3 18:57:38 compute-0 neutron-haproxy-ovnmeta-42b9af68-948e-4963-878b-ef07a3b43e57[442719]: [NOTICE]   (442723) : Loading success.
Dec  3 18:57:38 compute-0 nova_compute[348325]: 2025-12-03 18:57:38.359 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:38 compute-0 nova_compute[348325]: 2025-12-03 18:57:38.729 348329 DEBUG nova.compute.manager [req-ce045984-c5ed-4021-a105-ba8bf7f88fd9 req-d9c99ced-2845-4166-b387-84d40a35bf8d 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] Received event network-vif-plugged-fa1f26e3-cb99-46c5-b405-4fbdc024f8cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 18:57:38 compute-0 nova_compute[348325]: 2025-12-03 18:57:38.730 348329 DEBUG oslo_concurrency.lockutils [req-ce045984-c5ed-4021-a105-ba8bf7f88fd9 req-d9c99ced-2845-4166-b387-84d40a35bf8d 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "59c4595c-fa0d-4410-9dda-f266cca0c9e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:57:38 compute-0 nova_compute[348325]: 2025-12-03 18:57:38.731 348329 DEBUG oslo_concurrency.lockutils [req-ce045984-c5ed-4021-a105-ba8bf7f88fd9 req-d9c99ced-2845-4166-b387-84d40a35bf8d 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "59c4595c-fa0d-4410-9dda-f266cca0c9e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:57:38 compute-0 nova_compute[348325]: 2025-12-03 18:57:38.732 348329 DEBUG oslo_concurrency.lockutils [req-ce045984-c5ed-4021-a105-ba8bf7f88fd9 req-d9c99ced-2845-4166-b387-84d40a35bf8d 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "59c4595c-fa0d-4410-9dda-f266cca0c9e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:57:38 compute-0 nova_compute[348325]: 2025-12-03 18:57:38.732 348329 DEBUG nova.compute.manager [req-ce045984-c5ed-4021-a105-ba8bf7f88fd9 req-d9c99ced-2845-4166-b387-84d40a35bf8d 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] Processing event network-vif-plugged-fa1f26e3-cb99-46c5-b405-4fbdc024f8cf _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  3 18:57:38 compute-0 nova_compute[348325]: 2025-12-03 18:57:38.732 348329 DEBUG nova.compute.manager [req-ce045984-c5ed-4021-a105-ba8bf7f88fd9 req-d9c99ced-2845-4166-b387-84d40a35bf8d 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] Received event network-vif-plugged-fa1f26e3-cb99-46c5-b405-4fbdc024f8cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 18:57:38 compute-0 nova_compute[348325]: 2025-12-03 18:57:38.733 348329 DEBUG oslo_concurrency.lockutils [req-ce045984-c5ed-4021-a105-ba8bf7f88fd9 req-d9c99ced-2845-4166-b387-84d40a35bf8d 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "59c4595c-fa0d-4410-9dda-f266cca0c9e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:57:38 compute-0 nova_compute[348325]: 2025-12-03 18:57:38.733 348329 DEBUG oslo_concurrency.lockutils [req-ce045984-c5ed-4021-a105-ba8bf7f88fd9 req-d9c99ced-2845-4166-b387-84d40a35bf8d 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "59c4595c-fa0d-4410-9dda-f266cca0c9e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:57:38 compute-0 nova_compute[348325]: 2025-12-03 18:57:38.733 348329 DEBUG oslo_concurrency.lockutils [req-ce045984-c5ed-4021-a105-ba8bf7f88fd9 req-d9c99ced-2845-4166-b387-84d40a35bf8d 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "59c4595c-fa0d-4410-9dda-f266cca0c9e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:57:38 compute-0 nova_compute[348325]: 2025-12-03 18:57:38.733 348329 DEBUG nova.compute.manager [req-ce045984-c5ed-4021-a105-ba8bf7f88fd9 req-d9c99ced-2845-4166-b387-84d40a35bf8d 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] No waiting events found dispatching network-vif-plugged-fa1f26e3-cb99-46c5-b405-4fbdc024f8cf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  3 18:57:38 compute-0 nova_compute[348325]: 2025-12-03 18:57:38.734 348329 WARNING nova.compute.manager [req-ce045984-c5ed-4021-a105-ba8bf7f88fd9 req-d9c99ced-2845-4166-b387-84d40a35bf8d 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] Received unexpected event network-vif-plugged-fa1f26e3-cb99-46c5-b405-4fbdc024f8cf for instance with vm_state building and task_state spawning.#033[00m
Dec  3 18:57:38 compute-0 nova_compute[348325]: 2025-12-03 18:57:38.734 348329 DEBUG nova.compute.manager [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  3 18:57:38 compute-0 nova_compute[348325]: 2025-12-03 18:57:38.739 348329 DEBUG nova.virt.libvirt.driver [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  3 18:57:38 compute-0 nova_compute[348325]: 2025-12-03 18:57:38.740 348329 DEBUG nova.virt.driver [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Emitting event <LifecycleEvent: 1764788258.7400687, 59c4595c-fa0d-4410-9dda-f266cca0c9e4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 18:57:38 compute-0 nova_compute[348325]: 2025-12-03 18:57:38.740 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] VM Resumed (Lifecycle Event)#033[00m
Dec  3 18:57:38 compute-0 nova_compute[348325]: 2025-12-03 18:57:38.746 348329 INFO nova.virt.libvirt.driver [-] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] Instance spawned successfully.#033[00m
Dec  3 18:57:38 compute-0 nova_compute[348325]: 2025-12-03 18:57:38.746 348329 DEBUG nova.virt.libvirt.driver [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  3 18:57:38 compute-0 nova_compute[348325]: 2025-12-03 18:57:38.763 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 18:57:38 compute-0 nova_compute[348325]: 2025-12-03 18:57:38.771 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  3 18:57:38 compute-0 nova_compute[348325]: 2025-12-03 18:57:38.780 348329 DEBUG nova.virt.libvirt.driver [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 18:57:38 compute-0 nova_compute[348325]: 2025-12-03 18:57:38.780 348329 DEBUG nova.virt.libvirt.driver [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 18:57:38 compute-0 nova_compute[348325]: 2025-12-03 18:57:38.781 348329 DEBUG nova.virt.libvirt.driver [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 18:57:38 compute-0 nova_compute[348325]: 2025-12-03 18:57:38.781 348329 DEBUG nova.virt.libvirt.driver [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 18:57:38 compute-0 nova_compute[348325]: 2025-12-03 18:57:38.782 348329 DEBUG nova.virt.libvirt.driver [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 18:57:38 compute-0 nova_compute[348325]: 2025-12-03 18:57:38.782 348329 DEBUG nova.virt.libvirt.driver [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 18:57:38 compute-0 nova_compute[348325]: 2025-12-03 18:57:38.809 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  3 18:57:38 compute-0 nova_compute[348325]: 2025-12-03 18:57:38.893 348329 INFO nova.compute.manager [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] Took 11.18 seconds to spawn the instance on the hypervisor.#033[00m
Dec  3 18:57:38 compute-0 nova_compute[348325]: 2025-12-03 18:57:38.894 348329 DEBUG nova.compute.manager [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 18:57:38 compute-0 nova_compute[348325]: 2025-12-03 18:57:38.980 348329 DEBUG nova.compute.manager [req-94cf4cc1-b988-4b2e-a483-aa53d4bbe397 req-13a98c47-8a6c-4107-a550-73a00311f6f9 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Received event network-vif-plugged-b709b4ab-585a-4aed-9f06-3c9650d54c09 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 18:57:38 compute-0 nova_compute[348325]: 2025-12-03 18:57:38.980 348329 DEBUG oslo_concurrency.lockutils [req-94cf4cc1-b988-4b2e-a483-aa53d4bbe397 req-13a98c47-8a6c-4107-a550-73a00311f6f9 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "eff2304f-0e67-4c93-ae65-20d4ddb87625-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:57:38 compute-0 nova_compute[348325]: 2025-12-03 18:57:38.980 348329 DEBUG oslo_concurrency.lockutils [req-94cf4cc1-b988-4b2e-a483-aa53d4bbe397 req-13a98c47-8a6c-4107-a550-73a00311f6f9 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "eff2304f-0e67-4c93-ae65-20d4ddb87625-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:57:38 compute-0 nova_compute[348325]: 2025-12-03 18:57:38.981 348329 DEBUG oslo_concurrency.lockutils [req-94cf4cc1-b988-4b2e-a483-aa53d4bbe397 req-13a98c47-8a6c-4107-a550-73a00311f6f9 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "eff2304f-0e67-4c93-ae65-20d4ddb87625-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:57:38 compute-0 nova_compute[348325]: 2025-12-03 18:57:38.981 348329 DEBUG nova.compute.manager [req-94cf4cc1-b988-4b2e-a483-aa53d4bbe397 req-13a98c47-8a6c-4107-a550-73a00311f6f9 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] No waiting events found dispatching network-vif-plugged-b709b4ab-585a-4aed-9f06-3c9650d54c09 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  3 18:57:38 compute-0 nova_compute[348325]: 2025-12-03 18:57:38.981 348329 WARNING nova.compute.manager [req-94cf4cc1-b988-4b2e-a483-aa53d4bbe397 req-13a98c47-8a6c-4107-a550-73a00311f6f9 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Received unexpected event network-vif-plugged-b709b4ab-585a-4aed-9f06-3c9650d54c09 for instance with vm_state active and task_state None.#033[00m
Dec  3 18:57:38 compute-0 nova_compute[348325]: 2025-12-03 18:57:38.982 348329 DEBUG nova.compute.manager [req-94cf4cc1-b988-4b2e-a483-aa53d4bbe397 req-13a98c47-8a6c-4107-a550-73a00311f6f9 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] Received event network-vif-deleted-856126a0-9e4c-43b6-9e00-a5fade4f2abf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 18:57:38 compute-0 nova_compute[348325]: 2025-12-03 18:57:38.982 348329 DEBUG nova.compute.manager [req-94cf4cc1-b988-4b2e-a483-aa53d4bbe397 req-13a98c47-8a6c-4107-a550-73a00311f6f9 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] Received event network-vif-plugged-df320f97-b085-4528-84d7-d0b7e40923a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 18:57:38 compute-0 nova_compute[348325]: 2025-12-03 18:57:38.984 348329 DEBUG oslo_concurrency.lockutils [req-94cf4cc1-b988-4b2e-a483-aa53d4bbe397 req-13a98c47-8a6c-4107-a550-73a00311f6f9 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "47c940fc-9b39-48b6-a183-42c0547ac964-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:57:38 compute-0 nova_compute[348325]: 2025-12-03 18:57:38.985 348329 DEBUG oslo_concurrency.lockutils [req-94cf4cc1-b988-4b2e-a483-aa53d4bbe397 req-13a98c47-8a6c-4107-a550-73a00311f6f9 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "47c940fc-9b39-48b6-a183-42c0547ac964-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:57:38 compute-0 nova_compute[348325]: 2025-12-03 18:57:38.986 348329 DEBUG oslo_concurrency.lockutils [req-94cf4cc1-b988-4b2e-a483-aa53d4bbe397 req-13a98c47-8a6c-4107-a550-73a00311f6f9 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "47c940fc-9b39-48b6-a183-42c0547ac964-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:57:38 compute-0 nova_compute[348325]: 2025-12-03 18:57:38.987 348329 DEBUG nova.compute.manager [req-94cf4cc1-b988-4b2e-a483-aa53d4bbe397 req-13a98c47-8a6c-4107-a550-73a00311f6f9 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] Processing event network-vif-plugged-df320f97-b085-4528-84d7-d0b7e40923a4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  3 18:57:38 compute-0 nova_compute[348325]: 2025-12-03 18:57:38.987 348329 DEBUG nova.compute.manager [req-94cf4cc1-b988-4b2e-a483-aa53d4bbe397 req-13a98c47-8a6c-4107-a550-73a00311f6f9 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] Received event network-vif-plugged-df320f97-b085-4528-84d7-d0b7e40923a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 18:57:38 compute-0 nova_compute[348325]: 2025-12-03 18:57:38.989 348329 DEBUG oslo_concurrency.lockutils [req-94cf4cc1-b988-4b2e-a483-aa53d4bbe397 req-13a98c47-8a6c-4107-a550-73a00311f6f9 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "47c940fc-9b39-48b6-a183-42c0547ac964-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:57:38 compute-0 nova_compute[348325]: 2025-12-03 18:57:38.990 348329 DEBUG oslo_concurrency.lockutils [req-94cf4cc1-b988-4b2e-a483-aa53d4bbe397 req-13a98c47-8a6c-4107-a550-73a00311f6f9 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "47c940fc-9b39-48b6-a183-42c0547ac964-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:57:38 compute-0 nova_compute[348325]: 2025-12-03 18:57:38.991 348329 DEBUG oslo_concurrency.lockutils [req-94cf4cc1-b988-4b2e-a483-aa53d4bbe397 req-13a98c47-8a6c-4107-a550-73a00311f6f9 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "47c940fc-9b39-48b6-a183-42c0547ac964-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:57:38 compute-0 nova_compute[348325]: 2025-12-03 18:57:38.993 348329 DEBUG nova.compute.manager [req-94cf4cc1-b988-4b2e-a483-aa53d4bbe397 req-13a98c47-8a6c-4107-a550-73a00311f6f9 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] No waiting events found dispatching network-vif-plugged-df320f97-b085-4528-84d7-d0b7e40923a4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  3 18:57:38 compute-0 nova_compute[348325]: 2025-12-03 18:57:38.993 348329 WARNING nova.compute.manager [req-94cf4cc1-b988-4b2e-a483-aa53d4bbe397 req-13a98c47-8a6c-4107-a550-73a00311f6f9 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] Received unexpected event network-vif-plugged-df320f97-b085-4528-84d7-d0b7e40923a4 for instance with vm_state building and task_state spawning.#033[00m
Dec  3 18:57:38 compute-0 nova_compute[348325]: 2025-12-03 18:57:38.996 348329 INFO nova.compute.manager [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] Took 12.39 seconds to build instance.#033[00m
Dec  3 18:57:38 compute-0 nova_compute[348325]: 2025-12-03 18:57:38.998 348329 DEBUG nova.compute.manager [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  3 18:57:39 compute-0 nova_compute[348325]: 2025-12-03 18:57:39.012 348329 DEBUG nova.virt.libvirt.driver [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  3 18:57:39 compute-0 nova_compute[348325]: 2025-12-03 18:57:39.013 348329 DEBUG nova.virt.driver [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Emitting event <LifecycleEvent: 1764788259.01282, 47c940fc-9b39-48b6-a183-42c0547ac964 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 18:57:39 compute-0 nova_compute[348325]: 2025-12-03 18:57:39.013 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] VM Resumed (Lifecycle Event)#033[00m
Dec  3 18:57:39 compute-0 nova_compute[348325]: 2025-12-03 18:57:39.020 348329 INFO nova.virt.libvirt.driver [-] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] Instance spawned successfully.#033[00m
Dec  3 18:57:39 compute-0 nova_compute[348325]: 2025-12-03 18:57:39.020 348329 DEBUG nova.virt.libvirt.driver [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  3 18:57:39 compute-0 nova_compute[348325]: 2025-12-03 18:57:39.023 348329 DEBUG oslo_concurrency.lockutils [None req-dbf57251-6c18-422c-b708-a2823282075b 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Lock "59c4595c-fa0d-4410-9dda-f266cca0c9e4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.501s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:57:39 compute-0 nova_compute[348325]: 2025-12-03 18:57:39.031 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 18:57:39 compute-0 nova_compute[348325]: 2025-12-03 18:57:39.038 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  3 18:57:39 compute-0 nova_compute[348325]: 2025-12-03 18:57:39.046 348329 DEBUG nova.virt.libvirt.driver [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 18:57:39 compute-0 nova_compute[348325]: 2025-12-03 18:57:39.046 348329 DEBUG nova.virt.libvirt.driver [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 18:57:39 compute-0 nova_compute[348325]: 2025-12-03 18:57:39.047 348329 DEBUG nova.virt.libvirt.driver [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 18:57:39 compute-0 nova_compute[348325]: 2025-12-03 18:57:39.047 348329 DEBUG nova.virt.libvirt.driver [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 18:57:39 compute-0 nova_compute[348325]: 2025-12-03 18:57:39.048 348329 DEBUG nova.virt.libvirt.driver [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 18:57:39 compute-0 nova_compute[348325]: 2025-12-03 18:57:39.048 348329 DEBUG nova.virt.libvirt.driver [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 18:57:39 compute-0 nova_compute[348325]: 2025-12-03 18:57:39.057 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  3 18:57:39 compute-0 nova_compute[348325]: 2025-12-03 18:57:39.109 348329 INFO nova.compute.manager [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] Took 10.11 seconds to spawn the instance on the hypervisor.#033[00m
Dec  3 18:57:39 compute-0 nova_compute[348325]: 2025-12-03 18:57:39.110 348329 DEBUG nova.compute.manager [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 18:57:39 compute-0 nova_compute[348325]: 2025-12-03 18:57:39.175 348329 INFO nova.compute.manager [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] Took 12.46 seconds to build instance.#033[00m
Dec  3 18:57:39 compute-0 nova_compute[348325]: 2025-12-03 18:57:39.192 348329 DEBUG oslo_concurrency.lockutils [None req-7550e430-6c4f-4410-a59d-d85100221125 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] Lock "47c940fc-9b39-48b6-a183-42c0547ac964" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.549s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:57:39 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:57:39 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1759: 321 pgs: 321 active+clean; 196 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.2 MiB/s wr, 193 op/s
Dec  3 18:57:40 compute-0 NetworkManager[49087]: <info>  [1764788260.0936] manager: (patch-br-int-to-provnet-54c4f4a3-bc4d-431f-a4cd-85ec1868fe98): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/51)
Dec  3 18:57:40 compute-0 NetworkManager[49087]: <info>  [1764788260.0951] manager: (patch-provnet-54c4f4a3-bc4d-431f-a4cd-85ec1868fe98-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/52)
Dec  3 18:57:40 compute-0 nova_compute[348325]: 2025-12-03 18:57:40.098 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:40 compute-0 nova_compute[348325]: 2025-12-03 18:57:40.208 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:40 compute-0 ovn_controller[89305]: 2025-12-03T18:57:40Z|00093|binding|INFO|Releasing lport 3ec83a16-d304-4a23-9fc9-bc37d2bda5a7 from this chassis (sb_readonly=0)
Dec  3 18:57:40 compute-0 ovn_controller[89305]: 2025-12-03T18:57:40Z|00094|binding|INFO|Releasing lport b52268a2-5f2a-45ba-8c23-e32c70c8253f from this chassis (sb_readonly=0)
Dec  3 18:57:40 compute-0 ovn_controller[89305]: 2025-12-03T18:57:40Z|00095|binding|INFO|Releasing lport 86a532e1-740b-4244-9c38-3dc4d023bacf from this chassis (sb_readonly=0)
Dec  3 18:57:40 compute-0 nova_compute[348325]: 2025-12-03 18:57:40.249 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:40 compute-0 nova_compute[348325]: 2025-12-03 18:57:40.821 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:41 compute-0 nova_compute[348325]: 2025-12-03 18:57:41.201 348329 DEBUG nova.compute.manager [req-cd1dc56c-209d-46ea-91ff-5c799f2af19d req-0bd57da5-297f-45fa-9c74-7a09fba12b99 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Received event network-changed-b709b4ab-585a-4aed-9f06-3c9650d54c09 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 18:57:41 compute-0 nova_compute[348325]: 2025-12-03 18:57:41.202 348329 DEBUG nova.compute.manager [req-cd1dc56c-209d-46ea-91ff-5c799f2af19d req-0bd57da5-297f-45fa-9c74-7a09fba12b99 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Refreshing instance network info cache due to event network-changed-b709b4ab-585a-4aed-9f06-3c9650d54c09. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  3 18:57:41 compute-0 nova_compute[348325]: 2025-12-03 18:57:41.203 348329 DEBUG oslo_concurrency.lockutils [req-cd1dc56c-209d-46ea-91ff-5c799f2af19d req-0bd57da5-297f-45fa-9c74-7a09fba12b99 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "refresh_cache-eff2304f-0e67-4c93-ae65-20d4ddb87625" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 18:57:41 compute-0 nova_compute[348325]: 2025-12-03 18:57:41.203 348329 DEBUG oslo_concurrency.lockutils [req-cd1dc56c-209d-46ea-91ff-5c799f2af19d req-0bd57da5-297f-45fa-9c74-7a09fba12b99 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquired lock "refresh_cache-eff2304f-0e67-4c93-ae65-20d4ddb87625" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 18:57:41 compute-0 nova_compute[348325]: 2025-12-03 18:57:41.204 348329 DEBUG nova.network.neutron [req-cd1dc56c-209d-46ea-91ff-5c799f2af19d req-0bd57da5-297f-45fa-9c74-7a09fba12b99 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Refreshing network info cache for port b709b4ab-585a-4aed-9f06-3c9650d54c09 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  3 18:57:41 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1760: 321 pgs: 321 active+clean; 196 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 4.2 MiB/s wr, 264 op/s
Dec  3 18:57:41 compute-0 ovn_controller[89305]: 2025-12-03T18:57:41Z|00096|binding|INFO|Releasing lport 3ec83a16-d304-4a23-9fc9-bc37d2bda5a7 from this chassis (sb_readonly=0)
Dec  3 18:57:41 compute-0 ovn_controller[89305]: 2025-12-03T18:57:41Z|00097|binding|INFO|Releasing lport b52268a2-5f2a-45ba-8c23-e32c70c8253f from this chassis (sb_readonly=0)
Dec  3 18:57:41 compute-0 ovn_controller[89305]: 2025-12-03T18:57:41Z|00098|binding|INFO|Releasing lport 86a532e1-740b-4244-9c38-3dc4d023bacf from this chassis (sb_readonly=0)
Dec  3 18:57:42 compute-0 nova_compute[348325]: 2025-12-03 18:57:42.006 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:42 compute-0 nova_compute[348325]: 2025-12-03 18:57:42.753 348329 DEBUG nova.compute.manager [req-5d068b6a-4621-4e02-9f36-353e94a48232 req-690dfd9c-c20f-47cd-ad56-bbbaf11407cc 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] Received event network-changed-df320f97-b085-4528-84d7-d0b7e40923a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 18:57:42 compute-0 nova_compute[348325]: 2025-12-03 18:57:42.754 348329 DEBUG nova.compute.manager [req-5d068b6a-4621-4e02-9f36-353e94a48232 req-690dfd9c-c20f-47cd-ad56-bbbaf11407cc 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] Refreshing instance network info cache due to event network-changed-df320f97-b085-4528-84d7-d0b7e40923a4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  3 18:57:42 compute-0 nova_compute[348325]: 2025-12-03 18:57:42.755 348329 DEBUG oslo_concurrency.lockutils [req-5d068b6a-4621-4e02-9f36-353e94a48232 req-690dfd9c-c20f-47cd-ad56-bbbaf11407cc 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "refresh_cache-47c940fc-9b39-48b6-a183-42c0547ac964" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 18:57:42 compute-0 nova_compute[348325]: 2025-12-03 18:57:42.756 348329 DEBUG oslo_concurrency.lockutils [req-5d068b6a-4621-4e02-9f36-353e94a48232 req-690dfd9c-c20f-47cd-ad56-bbbaf11407cc 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquired lock "refresh_cache-47c940fc-9b39-48b6-a183-42c0547ac964" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 18:57:42 compute-0 nova_compute[348325]: 2025-12-03 18:57:42.757 348329 DEBUG nova.network.neutron [req-5d068b6a-4621-4e02-9f36-353e94a48232 req-690dfd9c-c20f-47cd-ad56-bbbaf11407cc 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] Refreshing network info cache for port df320f97-b085-4528-84d7-d0b7e40923a4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  3 18:57:43 compute-0 nova_compute[348325]: 2025-12-03 18:57:43.288 348329 DEBUG nova.network.neutron [req-cd1dc56c-209d-46ea-91ff-5c799f2af19d req-0bd57da5-297f-45fa-9c74-7a09fba12b99 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Updated VIF entry in instance network info cache for port b709b4ab-585a-4aed-9f06-3c9650d54c09. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  3 18:57:43 compute-0 nova_compute[348325]: 2025-12-03 18:57:43.289 348329 DEBUG nova.network.neutron [req-cd1dc56c-209d-46ea-91ff-5c799f2af19d req-0bd57da5-297f-45fa-9c74-7a09fba12b99 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Updating instance_info_cache with network_info: [{"id": "b709b4ab-585a-4aed-9f06-3c9650d54c09", "address": "fa:16:3e:6e:88:19", "network": {"id": "c136d05b-f7ca-4f17-81e0-62c23fcd54a3", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-203684476-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1bc217751704d588f690e1b293cade8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb709b4ab-58", "ovs_interfaceid": "b709b4ab-585a-4aed-9f06-3c9650d54c09", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 18:57:43 compute-0 nova_compute[348325]: 2025-12-03 18:57:43.319 348329 DEBUG oslo_concurrency.lockutils [req-cd1dc56c-209d-46ea-91ff-5c799f2af19d req-0bd57da5-297f-45fa-9c74-7a09fba12b99 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Releasing lock "refresh_cache-eff2304f-0e67-4c93-ae65-20d4ddb87625" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 18:57:43 compute-0 nova_compute[348325]: 2025-12-03 18:57:43.362 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:43 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1761: 321 pgs: 321 active+clean; 196 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 5.3 MiB/s rd, 1024 KiB/s wr, 249 op/s
Dec  3 18:57:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:57:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:57:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:57:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:57:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:57:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:57:44 compute-0 nova_compute[348325]: 2025-12-03 18:57:44.113 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:44 compute-0 nova_compute[348325]: 2025-12-03 18:57:44.198 348329 DEBUG oslo_concurrency.lockutils [None req-e122ff7d-d4b7-4954-85b1-94d756f9f682 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Acquiring lock "59c4595c-fa0d-4410-9dda-f266cca0c9e4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:57:44 compute-0 nova_compute[348325]: 2025-12-03 18:57:44.199 348329 DEBUG oslo_concurrency.lockutils [None req-e122ff7d-d4b7-4954-85b1-94d756f9f682 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Lock "59c4595c-fa0d-4410-9dda-f266cca0c9e4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:57:44 compute-0 nova_compute[348325]: 2025-12-03 18:57:44.199 348329 DEBUG oslo_concurrency.lockutils [None req-e122ff7d-d4b7-4954-85b1-94d756f9f682 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Acquiring lock "59c4595c-fa0d-4410-9dda-f266cca0c9e4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:57:44 compute-0 nova_compute[348325]: 2025-12-03 18:57:44.200 348329 DEBUG oslo_concurrency.lockutils [None req-e122ff7d-d4b7-4954-85b1-94d756f9f682 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Lock "59c4595c-fa0d-4410-9dda-f266cca0c9e4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:57:44 compute-0 nova_compute[348325]: 2025-12-03 18:57:44.200 348329 DEBUG oslo_concurrency.lockutils [None req-e122ff7d-d4b7-4954-85b1-94d756f9f682 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Lock "59c4595c-fa0d-4410-9dda-f266cca0c9e4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:57:44 compute-0 nova_compute[348325]: 2025-12-03 18:57:44.202 348329 INFO nova.compute.manager [None req-e122ff7d-d4b7-4954-85b1-94d756f9f682 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] Terminating instance#033[00m
Dec  3 18:57:44 compute-0 nova_compute[348325]: 2025-12-03 18:57:44.203 348329 DEBUG nova.compute.manager [None req-e122ff7d-d4b7-4954-85b1-94d756f9f682 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  3 18:57:44 compute-0 nova_compute[348325]: 2025-12-03 18:57:44.209 348329 DEBUG oslo_concurrency.lockutils [None req-3ed8303b-2611-40f1-a47d-2333a69a4a74 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] Acquiring lock "47c940fc-9b39-48b6-a183-42c0547ac964" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:57:44 compute-0 nova_compute[348325]: 2025-12-03 18:57:44.209 348329 DEBUG oslo_concurrency.lockutils [None req-3ed8303b-2611-40f1-a47d-2333a69a4a74 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] Lock "47c940fc-9b39-48b6-a183-42c0547ac964" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:57:44 compute-0 nova_compute[348325]: 2025-12-03 18:57:44.210 348329 DEBUG oslo_concurrency.lockutils [None req-3ed8303b-2611-40f1-a47d-2333a69a4a74 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] Acquiring lock "47c940fc-9b39-48b6-a183-42c0547ac964-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:57:44 compute-0 nova_compute[348325]: 2025-12-03 18:57:44.211 348329 DEBUG oslo_concurrency.lockutils [None req-3ed8303b-2611-40f1-a47d-2333a69a4a74 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] Lock "47c940fc-9b39-48b6-a183-42c0547ac964-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:57:44 compute-0 nova_compute[348325]: 2025-12-03 18:57:44.211 348329 DEBUG oslo_concurrency.lockutils [None req-3ed8303b-2611-40f1-a47d-2333a69a4a74 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] Lock "47c940fc-9b39-48b6-a183-42c0547ac964-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:57:44 compute-0 nova_compute[348325]: 2025-12-03 18:57:44.213 348329 INFO nova.compute.manager [None req-3ed8303b-2611-40f1-a47d-2333a69a4a74 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] Terminating instance#033[00m
Dec  3 18:57:44 compute-0 nova_compute[348325]: 2025-12-03 18:57:44.214 348329 DEBUG nova.compute.manager [None req-3ed8303b-2611-40f1-a47d-2333a69a4a74 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  3 18:57:44 compute-0 kernel: tapfa1f26e3-cb (unregistering): left promiscuous mode
Dec  3 18:57:44 compute-0 nova_compute[348325]: 2025-12-03 18:57:44.287 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:44 compute-0 ovn_controller[89305]: 2025-12-03T18:57:44Z|00099|binding|INFO|Releasing lport fa1f26e3-cb99-46c5-b405-4fbdc024f8cf from this chassis (sb_readonly=0)
Dec  3 18:57:44 compute-0 ovn_controller[89305]: 2025-12-03T18:57:44Z|00100|binding|INFO|Setting lport fa1f26e3-cb99-46c5-b405-4fbdc024f8cf down in Southbound
Dec  3 18:57:44 compute-0 NetworkManager[49087]: <info>  [1764788264.2928] device (tapfa1f26e3-cb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  3 18:57:44 compute-0 ovn_controller[89305]: 2025-12-03T18:57:44Z|00101|binding|INFO|Removing iface tapfa1f26e3-cb ovn-installed in OVS
Dec  3 18:57:44 compute-0 kernel: tapdf320f97-b0 (unregistering): left promiscuous mode
Dec  3 18:57:44 compute-0 nova_compute[348325]: 2025-12-03 18:57:44.301 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:44 compute-0 NetworkManager[49087]: <info>  [1764788264.3125] device (tapdf320f97-b0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  3 18:57:44 compute-0 nova_compute[348325]: 2025-12-03 18:57:44.324 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:44 compute-0 nova_compute[348325]: 2025-12-03 18:57:44.331 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:44 compute-0 ovn_controller[89305]: 2025-12-03T18:57:44Z|00102|binding|INFO|Releasing lport df320f97-b085-4528-84d7-d0b7e40923a4 from this chassis (sb_readonly=1)
Dec  3 18:57:44 compute-0 ovn_controller[89305]: 2025-12-03T18:57:44Z|00103|binding|INFO|Removing iface tapdf320f97-b0 ovn-installed in OVS
Dec  3 18:57:44 compute-0 ovn_controller[89305]: 2025-12-03T18:57:44Z|00104|if_status|INFO|Not setting lport df320f97-b085-4528-84d7-d0b7e40923a4 down as sb is readonly
Dec  3 18:57:44 compute-0 nova_compute[348325]: 2025-12-03 18:57:44.333 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:44 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Deactivated successfully.
Dec  3 18:57:44 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Consumed 6.082s CPU time.
Dec  3 18:57:44 compute-0 nova_compute[348325]: 2025-12-03 18:57:44.359 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:44 compute-0 systemd-machined[138702]: Machine qemu-8-instance-00000008 terminated.
Dec  3 18:57:44 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Deactivated successfully.
Dec  3 18:57:44 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Consumed 6.146s CPU time.
Dec  3 18:57:44 compute-0 systemd-machined[138702]: Machine qemu-9-instance-00000009 terminated.
Dec  3 18:57:44 compute-0 ovn_controller[89305]: 2025-12-03T18:57:44Z|00105|binding|INFO|Setting lport df320f97-b085-4528-84d7-d0b7e40923a4 down in Southbound
Dec  3 18:57:44 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:44.422 286999 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:61:69:24 10.100.0.14'], port_security=['fa:16:3e:61:69:24 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '59c4595c-fa0d-4410-9dda-f266cca0c9e4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6cdaa8da-4e85-47a7-84f8-76fb36b9391a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '86bd600007a042cea64439c21bd920b0', 'neutron:revision_number': '4', 'neutron:security_group_ids': '142c06d6-c365-4857-882f-8a558306b53d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.176'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=84b6be5c-95ca-4372-beb1-f543a67676cf, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f81e3e96760>], logical_port=fa1f26e3-cb99-46c5-b405-4fbdc024f8cf) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f81e3e96760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 18:57:44 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:44.423 286999 INFO neutron.agent.ovn.metadata.agent [-] Port fa1f26e3-cb99-46c5-b405-4fbdc024f8cf in datapath 6cdaa8da-4e85-47a7-84f8-76fb36b9391a unbound from our chassis#033[00m
Dec  3 18:57:44 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:44.426 286999 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6cdaa8da-4e85-47a7-84f8-76fb36b9391a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  3 18:57:44 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:44.427 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[acc0d84e-8bbe-4de1-b852-31dc21477c0c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:44 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:44.428 286999 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6cdaa8da-4e85-47a7-84f8-76fb36b9391a namespace which is not needed anymore#033[00m
Dec  3 18:57:44 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:57:44 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:44.439 286999 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:50:3c:c0 10.100.0.13'], port_security=['fa:16:3e:50:3c:c0 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '47c940fc-9b39-48b6-a183-42c0547ac964', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-42b9af68-948e-4963-878b-ef07a3b43e57', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8356f2a17c1f4ae2a3e07cdcc6e6f6da', 'neutron:revision_number': '4', 'neutron:security_group_ids': '70c27b20-70e8-4341-9a33-af610f2b2903', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.240'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1bc91bca-2d27-4247-9561-64ecb11b3efc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f81e3e96760>], logical_port=df320f97-b085-4528-84d7-d0b7e40923a4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f81e3e96760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 18:57:44 compute-0 NetworkManager[49087]: <info>  [1764788264.4462] manager: (tapdf320f97-b0): new Tun device (/org/freedesktop/NetworkManager/Devices/53)
Dec  3 18:57:44 compute-0 nova_compute[348325]: 2025-12-03 18:57:44.450 348329 INFO nova.virt.libvirt.driver [-] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] Instance destroyed successfully.#033[00m
Dec  3 18:57:44 compute-0 nova_compute[348325]: 2025-12-03 18:57:44.451 348329 DEBUG nova.objects.instance [None req-e122ff7d-d4b7-4954-85b1-94d756f9f682 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Lazy-loading 'resources' on Instance uuid 59c4595c-fa0d-4410-9dda-f266cca0c9e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 18:57:44 compute-0 nova_compute[348325]: 2025-12-03 18:57:44.462 348329 INFO nova.virt.libvirt.driver [-] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] Instance destroyed successfully.#033[00m
Dec  3 18:57:44 compute-0 nova_compute[348325]: 2025-12-03 18:57:44.463 348329 DEBUG nova.objects.instance [None req-3ed8303b-2611-40f1-a47d-2333a69a4a74 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] Lazy-loading 'resources' on Instance uuid 47c940fc-9b39-48b6-a183-42c0547ac964 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 18:57:44 compute-0 nova_compute[348325]: 2025-12-03 18:57:44.483 348329 DEBUG nova.virt.libvirt.vif [None req-e122ff7d-d4b7-4954-85b1-94d756f9f682 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-03T18:57:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-409272578',display_name='tempest-ServersTestManualDisk-server-409272578',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-409272578',id=8,image_ref='55982930-937b-484e-96ee-69e406a48023',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOyEylQdsOZgqNvI6p6U81aBCMQI9I6HE/rZ64AA6VXtw55ZDq33/c4iUiQRgkwJcFnMLXJcfswV9BTF6Bz2FtXf8FspT1mJN+g5WZ340+UlnXyRTxwyquLZEBQoD68AgA==',key_name='tempest-keypair-230422229',keypairs=<?>,launch_index=0,launched_at=2025-12-03T18:57:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='86bd600007a042cea64439c21bd920b0',ramdisk_id='',reservation_id='r-5t4dy7m4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='55982930-937b-484e-96ee-69e406a48023',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestManualDisk-1695020550',owner_user_name='tempest-ServersTestManualDisk-1695020550-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-03T18:57:38Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5d41669fc94f4811803f4ebf54dbcebc',uuid=59c4595c-fa0d-4410-9dda-f266cca0c9e4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fa1f26e3-cb99-46c5-b405-4fbdc024f8cf", "address": "fa:16:3e:61:69:24", "network": {"id": "6cdaa8da-4e85-47a7-84f8-76fb36b9391a", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1996345903-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86bd600007a042cea64439c21bd920b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa1f26e3-cb", "ovs_interfaceid": "fa1f26e3-cb99-46c5-b405-4fbdc024f8cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  3 18:57:44 compute-0 nova_compute[348325]: 2025-12-03 18:57:44.483 348329 DEBUG nova.network.os_vif_util [None req-e122ff7d-d4b7-4954-85b1-94d756f9f682 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Converting VIF {"id": "fa1f26e3-cb99-46c5-b405-4fbdc024f8cf", "address": "fa:16:3e:61:69:24", "network": {"id": "6cdaa8da-4e85-47a7-84f8-76fb36b9391a", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1996345903-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86bd600007a042cea64439c21bd920b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa1f26e3-cb", "ovs_interfaceid": "fa1f26e3-cb99-46c5-b405-4fbdc024f8cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 18:57:44 compute-0 nova_compute[348325]: 2025-12-03 18:57:44.484 348329 DEBUG nova.network.os_vif_util [None req-e122ff7d-d4b7-4954-85b1-94d756f9f682 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:69:24,bridge_name='br-int',has_traffic_filtering=True,id=fa1f26e3-cb99-46c5-b405-4fbdc024f8cf,network=Network(6cdaa8da-4e85-47a7-84f8-76fb36b9391a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa1f26e3-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 18:57:44 compute-0 nova_compute[348325]: 2025-12-03 18:57:44.484 348329 DEBUG os_vif [None req-e122ff7d-d4b7-4954-85b1-94d756f9f682 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:69:24,bridge_name='br-int',has_traffic_filtering=True,id=fa1f26e3-cb99-46c5-b405-4fbdc024f8cf,network=Network(6cdaa8da-4e85-47a7-84f8-76fb36b9391a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa1f26e3-cb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  3 18:57:44 compute-0 nova_compute[348325]: 2025-12-03 18:57:44.487 348329 DEBUG nova.virt.libvirt.vif [None req-3ed8303b-2611-40f1-a47d-2333a69a4a74 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-03T18:57:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1383669703',display_name='tempest-ServersTestJSON-server-1383669703',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1383669703',id=9,image_ref='55982930-937b-484e-96ee-69e406a48023',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP3vKpPdVoGefI6WDHjH4oyoBPwanxLle9S+lz6fT5yBHqHXfcuia4MYuTcaOYAt4ZduC0R3h+eUyW8pi3ofrwS/9Sdj6knUryEscX3qGYO1YU3dERdGjQkdvKc/i/cU+Q==',key_name='tempest-keypair-342267088',keypairs=<?>,launch_index=0,launched_at=2025-12-03T18:57:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8356f2a17c1f4ae2a3e07cdcc6e6f6da',ramdisk_id='',reservation_id='r-gg1jpipc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='55982930-937b-484e-96ee-69e406a48023',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1597108075',owner_user_name='tempest-ServersTestJSON-1597108075-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-03T18:57:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d1fe8dd2488b4bf3ab1fb503816c5da9',uuid=47c940fc-9b39-48b6-a183-42c0547ac964,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "df320f97-b085-4528-84d7-d0b7e40923a4", "address": "fa:16:3e:50:3c:c0", "network": {"id": "42b9af68-948e-4963-878b-ef07a3b43e57", "bridge": "br-int", "label": "tempest-ServersTestJSON-1382517516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8356f2a17c1f4ae2a3e07cdcc6e6f6da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf320f97-b0", "ovs_interfaceid": "df320f97-b085-4528-84d7-d0b7e40923a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  3 18:57:44 compute-0 nova_compute[348325]: 2025-12-03 18:57:44.487 348329 DEBUG nova.network.os_vif_util [None req-3ed8303b-2611-40f1-a47d-2333a69a4a74 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] Converting VIF {"id": "df320f97-b085-4528-84d7-d0b7e40923a4", "address": "fa:16:3e:50:3c:c0", "network": {"id": "42b9af68-948e-4963-878b-ef07a3b43e57", "bridge": "br-int", "label": "tempest-ServersTestJSON-1382517516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8356f2a17c1f4ae2a3e07cdcc6e6f6da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf320f97-b0", "ovs_interfaceid": "df320f97-b085-4528-84d7-d0b7e40923a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 18:57:44 compute-0 nova_compute[348325]: 2025-12-03 18:57:44.488 348329 DEBUG nova.network.os_vif_util [None req-3ed8303b-2611-40f1-a47d-2333a69a4a74 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:50:3c:c0,bridge_name='br-int',has_traffic_filtering=True,id=df320f97-b085-4528-84d7-d0b7e40923a4,network=Network(42b9af68-948e-4963-878b-ef07a3b43e57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf320f97-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 18:57:44 compute-0 nova_compute[348325]: 2025-12-03 18:57:44.488 348329 DEBUG os_vif [None req-3ed8303b-2611-40f1-a47d-2333a69a4a74 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:50:3c:c0,bridge_name='br-int',has_traffic_filtering=True,id=df320f97-b085-4528-84d7-d0b7e40923a4,network=Network(42b9af68-948e-4963-878b-ef07a3b43e57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf320f97-b0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  3 18:57:44 compute-0 nova_compute[348325]: 2025-12-03 18:57:44.489 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:44 compute-0 nova_compute[348325]: 2025-12-03 18:57:44.490 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfa1f26e3-cb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:57:44 compute-0 nova_compute[348325]: 2025-12-03 18:57:44.491 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:44 compute-0 nova_compute[348325]: 2025-12-03 18:57:44.497 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  3 18:57:44 compute-0 nova_compute[348325]: 2025-12-03 18:57:44.500 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:44 compute-0 nova_compute[348325]: 2025-12-03 18:57:44.500 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:44 compute-0 nova_compute[348325]: 2025-12-03 18:57:44.500 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdf320f97-b0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:57:44 compute-0 nova_compute[348325]: 2025-12-03 18:57:44.502 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:44 compute-0 nova_compute[348325]: 2025-12-03 18:57:44.503 348329 INFO os_vif [None req-e122ff7d-d4b7-4954-85b1-94d756f9f682 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:69:24,bridge_name='br-int',has_traffic_filtering=True,id=fa1f26e3-cb99-46c5-b405-4fbdc024f8cf,network=Network(6cdaa8da-4e85-47a7-84f8-76fb36b9391a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa1f26e3-cb')#033[00m
Dec  3 18:57:44 compute-0 nova_compute[348325]: 2025-12-03 18:57:44.522 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:44 compute-0 nova_compute[348325]: 2025-12-03 18:57:44.526 348329 INFO os_vif [None req-3ed8303b-2611-40f1-a47d-2333a69a4a74 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:50:3c:c0,bridge_name='br-int',has_traffic_filtering=True,id=df320f97-b085-4528-84d7-d0b7e40923a4,network=Network(42b9af68-948e-4963-878b-ef07a3b43e57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf320f97-b0')#033[00m
Dec  3 18:57:44 compute-0 neutron-haproxy-ovnmeta-6cdaa8da-4e85-47a7-84f8-76fb36b9391a[442505]: [NOTICE]   (442510) : haproxy version is 2.8.14-c23fe91
Dec  3 18:57:44 compute-0 neutron-haproxy-ovnmeta-6cdaa8da-4e85-47a7-84f8-76fb36b9391a[442505]: [NOTICE]   (442510) : path to executable is /usr/sbin/haproxy
Dec  3 18:57:44 compute-0 neutron-haproxy-ovnmeta-6cdaa8da-4e85-47a7-84f8-76fb36b9391a[442505]: [WARNING]  (442510) : Exiting Master process...
Dec  3 18:57:44 compute-0 neutron-haproxy-ovnmeta-6cdaa8da-4e85-47a7-84f8-76fb36b9391a[442505]: [ALERT]    (442510) : Current worker (442514) exited with code 143 (Terminated)
Dec  3 18:57:44 compute-0 neutron-haproxy-ovnmeta-6cdaa8da-4e85-47a7-84f8-76fb36b9391a[442505]: [WARNING]  (442510) : All workers exited. Exiting... (0)
Dec  3 18:57:44 compute-0 systemd[1]: libpod-00a98fc42fae7c3ad6e1b61af7e42c8a0a8ab4310659d95424c83f717c151827.scope: Deactivated successfully.
Dec  3 18:57:44 compute-0 podman[442817]: 2025-12-03 18:57:44.655212427 +0000 UTC m=+0.078124668 container died 00a98fc42fae7c3ad6e1b61af7e42c8a0a8ab4310659d95424c83f717c151827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6cdaa8da-4e85-47a7-84f8-76fb36b9391a, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  3 18:57:44 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-00a98fc42fae7c3ad6e1b61af7e42c8a0a8ab4310659d95424c83f717c151827-userdata-shm.mount: Deactivated successfully.
Dec  3 18:57:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-66d7f6bfb76a92d6ae02465e0e0fbc351f4f265b766209368740f46ca4bc2c1f-merged.mount: Deactivated successfully.
Dec  3 18:57:44 compute-0 podman[442817]: 2025-12-03 18:57:44.73270316 +0000 UTC m=+0.155615361 container cleanup 00a98fc42fae7c3ad6e1b61af7e42c8a0a8ab4310659d95424c83f717c151827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6cdaa8da-4e85-47a7-84f8-76fb36b9391a, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  3 18:57:44 compute-0 systemd[1]: libpod-conmon-00a98fc42fae7c3ad6e1b61af7e42c8a0a8ab4310659d95424c83f717c151827.scope: Deactivated successfully.
Dec  3 18:57:44 compute-0 podman[442854]: 2025-12-03 18:57:44.838716907 +0000 UTC m=+0.069898737 container remove 00a98fc42fae7c3ad6e1b61af7e42c8a0a8ab4310659d95424c83f717c151827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6cdaa8da-4e85-47a7-84f8-76fb36b9391a, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:57:44 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:44.846 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[79de99ef-2ed5-4862-8a63-fdcd4c512b49]: (4, ('Wed Dec  3 06:57:44 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6cdaa8da-4e85-47a7-84f8-76fb36b9391a (00a98fc42fae7c3ad6e1b61af7e42c8a0a8ab4310659d95424c83f717c151827)\n00a98fc42fae7c3ad6e1b61af7e42c8a0a8ab4310659d95424c83f717c151827\nWed Dec  3 06:57:44 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6cdaa8da-4e85-47a7-84f8-76fb36b9391a (00a98fc42fae7c3ad6e1b61af7e42c8a0a8ab4310659d95424c83f717c151827)\n00a98fc42fae7c3ad6e1b61af7e42c8a0a8ab4310659d95424c83f717c151827\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:44 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:44.849 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[00ce8020-44d6-430b-84f3-fe55e01bc4c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:44 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:44.851 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6cdaa8da-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:57:44 compute-0 nova_compute[348325]: 2025-12-03 18:57:44.852 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:44 compute-0 kernel: tap6cdaa8da-40: left promiscuous mode
Dec  3 18:57:44 compute-0 nova_compute[348325]: 2025-12-03 18:57:44.872 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:44 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:44.874 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[81e3a07a-81ef-41ad-8534-70291dcf9a38]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:44 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:44.890 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[4e02fc8a-ab7c-44ae-8d99-3177b684a07f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:44 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:44.891 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[60a9366c-f7e5-4ff8-9c6e-9b6733ff130c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:44 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:44.904 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[1fc0f8f8-8a72-4cd0-a5c9-0706084461e8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 653577, 'reachable_time': 15312, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 442872, 'error': None, 'target': 'ovnmeta-6cdaa8da-4e85-47a7-84f8-76fb36b9391a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:44 compute-0 systemd[1]: run-netns-ovnmeta\x2d6cdaa8da\x2d4e85\x2d47a7\x2d84f8\x2d76fb36b9391a.mount: Deactivated successfully.
Dec  3 18:57:44 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:44.907 287110 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6cdaa8da-4e85-47a7-84f8-76fb36b9391a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  3 18:57:44 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:44.907 287110 DEBUG oslo.privsep.daemon [-] privsep: reply[236f6498-5198-4e57-8d42-9074e4607aec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:44 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:44.908 286999 INFO neutron.agent.ovn.metadata.agent [-] Port df320f97-b085-4528-84d7-d0b7e40923a4 in datapath 42b9af68-948e-4963-878b-ef07a3b43e57 unbound from our chassis#033[00m
Dec  3 18:57:44 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:44.909 286999 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 42b9af68-948e-4963-878b-ef07a3b43e57, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  3 18:57:44 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:44.910 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[bd2a5f18-b63b-469b-abb3-26577d85b40e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:44 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:44.911 286999 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-42b9af68-948e-4963-878b-ef07a3b43e57 namespace which is not needed anymore#033[00m
Dec  3 18:57:44 compute-0 podman[442868]: 2025-12-03 18:57:44.994002649 +0000 UTC m=+0.095567324 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 18:57:45 compute-0 podman[442870]: 2025-12-03 18:57:45.019233445 +0000 UTC m=+0.107971167 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, vcs-type=git, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, config_id=edpm, distribution-scope=public, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, release=1755695350, architecture=x86_64, vendor=Red Hat, Inc.)
Dec  3 18:57:45 compute-0 podman[442867]: 2025-12-03 18:57:45.045984488 +0000 UTC m=+0.143567656 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 18:57:45 compute-0 neutron-haproxy-ovnmeta-42b9af68-948e-4963-878b-ef07a3b43e57[442719]: [NOTICE]   (442723) : haproxy version is 2.8.14-c23fe91
Dec  3 18:57:45 compute-0 neutron-haproxy-ovnmeta-42b9af68-948e-4963-878b-ef07a3b43e57[442719]: [NOTICE]   (442723) : path to executable is /usr/sbin/haproxy
Dec  3 18:57:45 compute-0 neutron-haproxy-ovnmeta-42b9af68-948e-4963-878b-ef07a3b43e57[442719]: [WARNING]  (442723) : Exiting Master process...
Dec  3 18:57:45 compute-0 neutron-haproxy-ovnmeta-42b9af68-948e-4963-878b-ef07a3b43e57[442719]: [ALERT]    (442723) : Current worker (442725) exited with code 143 (Terminated)
Dec  3 18:57:45 compute-0 neutron-haproxy-ovnmeta-42b9af68-948e-4963-878b-ef07a3b43e57[442719]: [WARNING]  (442723) : All workers exited. Exiting... (0)
Dec  3 18:57:45 compute-0 systemd[1]: libpod-2432967a3abec4a73d06d5bc6acd8cd3d93f1b6a638f9fbd498a5d5e964f6db0.scope: Deactivated successfully.
Dec  3 18:57:45 compute-0 podman[442946]: 2025-12-03 18:57:45.123899601 +0000 UTC m=+0.080904877 container died 2432967a3abec4a73d06d5bc6acd8cd3d93f1b6a638f9fbd498a5d5e964f6db0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-42b9af68-948e-4963-878b-ef07a3b43e57, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec  3 18:57:45 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2432967a3abec4a73d06d5bc6acd8cd3d93f1b6a638f9fbd498a5d5e964f6db0-userdata-shm.mount: Deactivated successfully.
Dec  3 18:57:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-31d66dbc512761ecd457beeecd8b3e33555bc6c12204eb467c7472e69c38ad10-merged.mount: Deactivated successfully.
Dec  3 18:57:45 compute-0 podman[442946]: 2025-12-03 18:57:45.178572195 +0000 UTC m=+0.135577471 container cleanup 2432967a3abec4a73d06d5bc6acd8cd3d93f1b6a638f9fbd498a5d5e964f6db0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-42b9af68-948e-4963-878b-ef07a3b43e57, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:57:45 compute-0 systemd[1]: libpod-conmon-2432967a3abec4a73d06d5bc6acd8cd3d93f1b6a638f9fbd498a5d5e964f6db0.scope: Deactivated successfully.
Dec  3 18:57:45 compute-0 nova_compute[348325]: 2025-12-03 18:57:45.227 348329 INFO nova.virt.libvirt.driver [None req-e122ff7d-d4b7-4954-85b1-94d756f9f682 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] Deleting instance files /var/lib/nova/instances/59c4595c-fa0d-4410-9dda-f266cca0c9e4_del#033[00m
Dec  3 18:57:45 compute-0 nova_compute[348325]: 2025-12-03 18:57:45.228 348329 INFO nova.virt.libvirt.driver [None req-e122ff7d-d4b7-4954-85b1-94d756f9f682 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] Deletion of /var/lib/nova/instances/59c4595c-fa0d-4410-9dda-f266cca0c9e4_del complete#033[00m
Dec  3 18:57:45 compute-0 podman[442971]: 2025-12-03 18:57:45.294591848 +0000 UTC m=+0.081841470 container remove 2432967a3abec4a73d06d5bc6acd8cd3d93f1b6a638f9fbd498a5d5e964f6db0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-42b9af68-948e-4963-878b-ef07a3b43e57, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  3 18:57:45 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:45.304 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[e366f0b2-3d0d-42aa-bc9d-462e01b5072c]: (4, ('Wed Dec  3 06:57:45 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-42b9af68-948e-4963-878b-ef07a3b43e57 (2432967a3abec4a73d06d5bc6acd8cd3d93f1b6a638f9fbd498a5d5e964f6db0)\n2432967a3abec4a73d06d5bc6acd8cd3d93f1b6a638f9fbd498a5d5e964f6db0\nWed Dec  3 06:57:45 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-42b9af68-948e-4963-878b-ef07a3b43e57 (2432967a3abec4a73d06d5bc6acd8cd3d93f1b6a638f9fbd498a5d5e964f6db0)\n2432967a3abec4a73d06d5bc6acd8cd3d93f1b6a638f9fbd498a5d5e964f6db0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:45 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:45.306 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[ac5d4556-1bc1-4ab4-aa1b-cad86ea07f6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:45 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:45.308 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap42b9af68-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:57:45 compute-0 kernel: tap42b9af68-90: left promiscuous mode
Dec  3 18:57:45 compute-0 nova_compute[348325]: 2025-12-03 18:57:45.315 348329 INFO nova.virt.libvirt.driver [None req-3ed8303b-2611-40f1-a47d-2333a69a4a74 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] Deleting instance files /var/lib/nova/instances/47c940fc-9b39-48b6-a183-42c0547ac964_del#033[00m
Dec  3 18:57:45 compute-0 nova_compute[348325]: 2025-12-03 18:57:45.316 348329 INFO nova.virt.libvirt.driver [None req-3ed8303b-2611-40f1-a47d-2333a69a4a74 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] Deletion of /var/lib/nova/instances/47c940fc-9b39-48b6-a183-42c0547ac964_del complete#033[00m
Dec  3 18:57:45 compute-0 nova_compute[348325]: 2025-12-03 18:57:45.319 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:45 compute-0 ovn_controller[89305]: 2025-12-03T18:57:45Z|00106|binding|INFO|Releasing lport b52268a2-5f2a-45ba-8c23-e32c70c8253f from this chassis (sb_readonly=0)
Dec  3 18:57:45 compute-0 nova_compute[348325]: 2025-12-03 18:57:45.329 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:45 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:45.332 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[868958a4-22a1-4a6f-b2ed-652df878f118]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:45 compute-0 nova_compute[348325]: 2025-12-03 18:57:45.336 348329 INFO nova.compute.manager [None req-e122ff7d-d4b7-4954-85b1-94d756f9f682 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] Took 1.13 seconds to destroy the instance on the hypervisor.#033[00m
Dec  3 18:57:45 compute-0 nova_compute[348325]: 2025-12-03 18:57:45.336 348329 DEBUG oslo.service.loopingcall [None req-e122ff7d-d4b7-4954-85b1-94d756f9f682 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  3 18:57:45 compute-0 nova_compute[348325]: 2025-12-03 18:57:45.337 348329 DEBUG nova.compute.manager [-] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  3 18:57:45 compute-0 nova_compute[348325]: 2025-12-03 18:57:45.337 348329 DEBUG nova.network.neutron [-] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  3 18:57:45 compute-0 nova_compute[348325]: 2025-12-03 18:57:45.349 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:45 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:45.359 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[e28f33ce-94e9-44f7-822a-f3b6667a89d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:45 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:45.361 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[f4fb59a5-6314-41bf-9e69-e143b2f8a45f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:45 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:45.376 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[3eb21a81-1f32-4524-8c02-54201faa9dac]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 653713, 'reachable_time': 16376, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 442984, 'error': None, 'target': 'ovnmeta-42b9af68-948e-4963-878b-ef07a3b43e57', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:45 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:45.378 287110 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-42b9af68-948e-4963-878b-ef07a3b43e57 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  3 18:57:45 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:57:45.378 287110 DEBUG oslo.privsep.daemon [-] privsep: reply[02614e6d-e4af-46d4-9af0-cab686c5f4b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:57:45 compute-0 nova_compute[348325]: 2025-12-03 18:57:45.378 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:45 compute-0 nova_compute[348325]: 2025-12-03 18:57:45.396 348329 INFO nova.compute.manager [None req-3ed8303b-2611-40f1-a47d-2333a69a4a74 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] Took 1.18 seconds to destroy the instance on the hypervisor.#033[00m
Dec  3 18:57:45 compute-0 nova_compute[348325]: 2025-12-03 18:57:45.396 348329 DEBUG oslo.service.loopingcall [None req-3ed8303b-2611-40f1-a47d-2333a69a4a74 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  3 18:57:45 compute-0 nova_compute[348325]: 2025-12-03 18:57:45.398 348329 DEBUG nova.compute.manager [-] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  3 18:57:45 compute-0 nova_compute[348325]: 2025-12-03 18:57:45.399 348329 DEBUG nova.network.neutron [-] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  3 18:57:45 compute-0 nova_compute[348325]: 2025-12-03 18:57:45.451 348329 DEBUG nova.compute.manager [req-aba90ef3-cce7-4bfc-acd0-f310655dbbc0 req-f45fe15c-53bb-4966-9874-dc4f476245eb 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] Received event network-vif-unplugged-df320f97-b085-4528-84d7-d0b7e40923a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 18:57:45 compute-0 nova_compute[348325]: 2025-12-03 18:57:45.452 348329 DEBUG oslo_concurrency.lockutils [req-aba90ef3-cce7-4bfc-acd0-f310655dbbc0 req-f45fe15c-53bb-4966-9874-dc4f476245eb 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "47c940fc-9b39-48b6-a183-42c0547ac964-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:57:45 compute-0 nova_compute[348325]: 2025-12-03 18:57:45.452 348329 DEBUG oslo_concurrency.lockutils [req-aba90ef3-cce7-4bfc-acd0-f310655dbbc0 req-f45fe15c-53bb-4966-9874-dc4f476245eb 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "47c940fc-9b39-48b6-a183-42c0547ac964-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:57:45 compute-0 nova_compute[348325]: 2025-12-03 18:57:45.453 348329 DEBUG oslo_concurrency.lockutils [req-aba90ef3-cce7-4bfc-acd0-f310655dbbc0 req-f45fe15c-53bb-4966-9874-dc4f476245eb 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "47c940fc-9b39-48b6-a183-42c0547ac964-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:57:45 compute-0 nova_compute[348325]: 2025-12-03 18:57:45.453 348329 DEBUG nova.compute.manager [req-aba90ef3-cce7-4bfc-acd0-f310655dbbc0 req-f45fe15c-53bb-4966-9874-dc4f476245eb 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] No waiting events found dispatching network-vif-unplugged-df320f97-b085-4528-84d7-d0b7e40923a4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  3 18:57:45 compute-0 nova_compute[348325]: 2025-12-03 18:57:45.453 348329 DEBUG nova.compute.manager [req-aba90ef3-cce7-4bfc-acd0-f310655dbbc0 req-f45fe15c-53bb-4966-9874-dc4f476245eb 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] Received event network-vif-unplugged-df320f97-b085-4528-84d7-d0b7e40923a4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  3 18:57:45 compute-0 nova_compute[348325]: 2025-12-03 18:57:45.551 348329 DEBUG nova.compute.manager [req-c7570a33-a7e0-4ffe-aef5-e8817e62f088 req-cf440ad4-2536-4b92-8be0-bed79e120cad 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] Received event network-changed-fa1f26e3-cb99-46c5-b405-4fbdc024f8cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 18:57:45 compute-0 nova_compute[348325]: 2025-12-03 18:57:45.552 348329 DEBUG nova.compute.manager [req-c7570a33-a7e0-4ffe-aef5-e8817e62f088 req-cf440ad4-2536-4b92-8be0-bed79e120cad 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] Refreshing instance network info cache due to event network-changed-fa1f26e3-cb99-46c5-b405-4fbdc024f8cf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  3 18:57:45 compute-0 nova_compute[348325]: 2025-12-03 18:57:45.552 348329 DEBUG oslo_concurrency.lockutils [req-c7570a33-a7e0-4ffe-aef5-e8817e62f088 req-cf440ad4-2536-4b92-8be0-bed79e120cad 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "refresh_cache-59c4595c-fa0d-4410-9dda-f266cca0c9e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 18:57:45 compute-0 nova_compute[348325]: 2025-12-03 18:57:45.552 348329 DEBUG oslo_concurrency.lockutils [req-c7570a33-a7e0-4ffe-aef5-e8817e62f088 req-cf440ad4-2536-4b92-8be0-bed79e120cad 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquired lock "refresh_cache-59c4595c-fa0d-4410-9dda-f266cca0c9e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 18:57:45 compute-0 nova_compute[348325]: 2025-12-03 18:57:45.553 348329 DEBUG nova.network.neutron [req-c7570a33-a7e0-4ffe-aef5-e8817e62f088 req-cf440ad4-2536-4b92-8be0-bed79e120cad 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] Refreshing network info cache for port fa1f26e3-cb99-46c5-b405-4fbdc024f8cf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  3 18:57:45 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1762: 321 pgs: 321 active+clean; 168 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 6.1 MiB/s rd, 45 KiB/s wr, 273 op/s
Dec  3 18:57:45 compute-0 systemd[1]: run-netns-ovnmeta\x2d42b9af68\x2d948e\x2d4963\x2d878b\x2def07a3b43e57.mount: Deactivated successfully.
Dec  3 18:57:45 compute-0 nova_compute[348325]: 2025-12-03 18:57:45.825 348329 DEBUG nova.network.neutron [req-5d068b6a-4621-4e02-9f36-353e94a48232 req-690dfd9c-c20f-47cd-ad56-bbbaf11407cc 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] Updated VIF entry in instance network info cache for port df320f97-b085-4528-84d7-d0b7e40923a4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  3 18:57:45 compute-0 nova_compute[348325]: 2025-12-03 18:57:45.826 348329 DEBUG nova.network.neutron [req-5d068b6a-4621-4e02-9f36-353e94a48232 req-690dfd9c-c20f-47cd-ad56-bbbaf11407cc 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] Updating instance_info_cache with network_info: [{"id": "df320f97-b085-4528-84d7-d0b7e40923a4", "address": "fa:16:3e:50:3c:c0", "network": {"id": "42b9af68-948e-4963-878b-ef07a3b43e57", "bridge": "br-int", "label": "tempest-ServersTestJSON-1382517516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8356f2a17c1f4ae2a3e07cdcc6e6f6da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf320f97-b0", "ovs_interfaceid": "df320f97-b085-4528-84d7-d0b7e40923a4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 18:57:45 compute-0 nova_compute[348325]: 2025-12-03 18:57:45.852 348329 DEBUG oslo_concurrency.lockutils [req-5d068b6a-4621-4e02-9f36-353e94a48232 req-690dfd9c-c20f-47cd-ad56-bbbaf11407cc 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Releasing lock "refresh_cache-47c940fc-9b39-48b6-a183-42c0547ac964" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 18:57:47 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1763: 321 pgs: 321 active+clean; 132 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 5.9 MiB/s rd, 32 KiB/s wr, 263 op/s
Dec  3 18:57:47 compute-0 nova_compute[348325]: 2025-12-03 18:57:47.709 348329 DEBUG nova.network.neutron [-] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 18:57:47 compute-0 nova_compute[348325]: 2025-12-03 18:57:47.731 348329 INFO nova.compute.manager [-] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] Took 2.33 seconds to deallocate network for instance.#033[00m
Dec  3 18:57:47 compute-0 nova_compute[348325]: 2025-12-03 18:57:47.791 348329 DEBUG nova.compute.manager [req-d7330468-d0e0-477e-a2bd-ef214c8dbd86 req-6924a039-1ea7-4ea3-99ce-89e65bd2b13b 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] Received event network-vif-plugged-df320f97-b085-4528-84d7-d0b7e40923a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 18:57:47 compute-0 nova_compute[348325]: 2025-12-03 18:57:47.792 348329 DEBUG oslo_concurrency.lockutils [req-d7330468-d0e0-477e-a2bd-ef214c8dbd86 req-6924a039-1ea7-4ea3-99ce-89e65bd2b13b 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "47c940fc-9b39-48b6-a183-42c0547ac964-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:57:47 compute-0 nova_compute[348325]: 2025-12-03 18:57:47.792 348329 DEBUG oslo_concurrency.lockutils [req-d7330468-d0e0-477e-a2bd-ef214c8dbd86 req-6924a039-1ea7-4ea3-99ce-89e65bd2b13b 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "47c940fc-9b39-48b6-a183-42c0547ac964-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:57:47 compute-0 nova_compute[348325]: 2025-12-03 18:57:47.793 348329 DEBUG oslo_concurrency.lockutils [req-d7330468-d0e0-477e-a2bd-ef214c8dbd86 req-6924a039-1ea7-4ea3-99ce-89e65bd2b13b 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "47c940fc-9b39-48b6-a183-42c0547ac964-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:57:47 compute-0 nova_compute[348325]: 2025-12-03 18:57:47.793 348329 DEBUG nova.compute.manager [req-d7330468-d0e0-477e-a2bd-ef214c8dbd86 req-6924a039-1ea7-4ea3-99ce-89e65bd2b13b 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] No waiting events found dispatching network-vif-plugged-df320f97-b085-4528-84d7-d0b7e40923a4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  3 18:57:47 compute-0 nova_compute[348325]: 2025-12-03 18:57:47.793 348329 WARNING nova.compute.manager [req-d7330468-d0e0-477e-a2bd-ef214c8dbd86 req-6924a039-1ea7-4ea3-99ce-89e65bd2b13b 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] Received unexpected event network-vif-plugged-df320f97-b085-4528-84d7-d0b7e40923a4 for instance with vm_state active and task_state deleting.#033[00m
Dec  3 18:57:47 compute-0 nova_compute[348325]: 2025-12-03 18:57:47.813 348329 DEBUG oslo_concurrency.lockutils [None req-3ed8303b-2611-40f1-a47d-2333a69a4a74 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:57:47 compute-0 nova_compute[348325]: 2025-12-03 18:57:47.814 348329 DEBUG oslo_concurrency.lockutils [None req-3ed8303b-2611-40f1-a47d-2333a69a4a74 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:57:47 compute-0 nova_compute[348325]: 2025-12-03 18:57:47.883 348329 DEBUG nova.compute.manager [req-6bf071ed-d155-4716-9d4c-07d743aaf0b6 req-4e963056-b681-45a6-aace-da484aa5a422 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] Received event network-vif-unplugged-fa1f26e3-cb99-46c5-b405-4fbdc024f8cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 18:57:47 compute-0 nova_compute[348325]: 2025-12-03 18:57:47.883 348329 DEBUG oslo_concurrency.lockutils [req-6bf071ed-d155-4716-9d4c-07d743aaf0b6 req-4e963056-b681-45a6-aace-da484aa5a422 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "59c4595c-fa0d-4410-9dda-f266cca0c9e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:57:47 compute-0 nova_compute[348325]: 2025-12-03 18:57:47.884 348329 DEBUG oslo_concurrency.lockutils [req-6bf071ed-d155-4716-9d4c-07d743aaf0b6 req-4e963056-b681-45a6-aace-da484aa5a422 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "59c4595c-fa0d-4410-9dda-f266cca0c9e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:57:47 compute-0 nova_compute[348325]: 2025-12-03 18:57:47.884 348329 DEBUG oslo_concurrency.lockutils [req-6bf071ed-d155-4716-9d4c-07d743aaf0b6 req-4e963056-b681-45a6-aace-da484aa5a422 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "59c4595c-fa0d-4410-9dda-f266cca0c9e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:57:47 compute-0 nova_compute[348325]: 2025-12-03 18:57:47.885 348329 DEBUG nova.compute.manager [req-6bf071ed-d155-4716-9d4c-07d743aaf0b6 req-4e963056-b681-45a6-aace-da484aa5a422 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] No waiting events found dispatching network-vif-unplugged-fa1f26e3-cb99-46c5-b405-4fbdc024f8cf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  3 18:57:47 compute-0 nova_compute[348325]: 2025-12-03 18:57:47.885 348329 DEBUG nova.compute.manager [req-6bf071ed-d155-4716-9d4c-07d743aaf0b6 req-4e963056-b681-45a6-aace-da484aa5a422 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] Received event network-vif-unplugged-fa1f26e3-cb99-46c5-b405-4fbdc024f8cf for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  3 18:57:47 compute-0 nova_compute[348325]: 2025-12-03 18:57:47.924 348329 DEBUG oslo_concurrency.processutils [None req-3ed8303b-2611-40f1-a47d-2333a69a4a74 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:57:48 compute-0 nova_compute[348325]: 2025-12-03 18:57:48.244 348329 DEBUG nova.network.neutron [-] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 18:57:48 compute-0 nova_compute[348325]: 2025-12-03 18:57:48.274 348329 INFO nova.compute.manager [-] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] Took 2.94 seconds to deallocate network for instance.#033[00m
Dec  3 18:57:48 compute-0 nova_compute[348325]: 2025-12-03 18:57:48.340 348329 DEBUG oslo_concurrency.lockutils [None req-e122ff7d-d4b7-4954-85b1-94d756f9f682 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:57:48 compute-0 nova_compute[348325]: 2025-12-03 18:57:48.358 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:48 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:57:48 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2863428611' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:57:48 compute-0 nova_compute[348325]: 2025-12-03 18:57:48.417 348329 DEBUG oslo_concurrency.processutils [None req-3ed8303b-2611-40f1-a47d-2333a69a4a74 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:57:48 compute-0 nova_compute[348325]: 2025-12-03 18:57:48.428 348329 DEBUG nova.compute.provider_tree [None req-3ed8303b-2611-40f1-a47d-2333a69a4a74 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 18:57:48 compute-0 nova_compute[348325]: 2025-12-03 18:57:48.441 348329 DEBUG nova.scheduler.client.report [None req-3ed8303b-2611-40f1-a47d-2333a69a4a74 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 18:57:48 compute-0 nova_compute[348325]: 2025-12-03 18:57:48.488 348329 DEBUG oslo_concurrency.lockutils [None req-3ed8303b-2611-40f1-a47d-2333a69a4a74 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.674s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:57:48 compute-0 nova_compute[348325]: 2025-12-03 18:57:48.490 348329 DEBUG oslo_concurrency.lockutils [None req-e122ff7d-d4b7-4954-85b1-94d756f9f682 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.151s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:57:48 compute-0 nova_compute[348325]: 2025-12-03 18:57:48.532 348329 INFO nova.scheduler.client.report [None req-3ed8303b-2611-40f1-a47d-2333a69a4a74 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] Deleted allocations for instance 47c940fc-9b39-48b6-a183-42c0547ac964#033[00m
Dec  3 18:57:48 compute-0 nova_compute[348325]: 2025-12-03 18:57:48.574 348329 DEBUG oslo_concurrency.processutils [None req-e122ff7d-d4b7-4954-85b1-94d756f9f682 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:57:48 compute-0 nova_compute[348325]: 2025-12-03 18:57:48.749 348329 DEBUG oslo_concurrency.lockutils [None req-3ed8303b-2611-40f1-a47d-2333a69a4a74 d1fe8dd2488b4bf3ab1fb503816c5da9 8356f2a17c1f4ae2a3e07cdcc6e6f6da - - default default] Lock "47c940fc-9b39-48b6-a183-42c0547ac964" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.540s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:57:48 compute-0 nova_compute[348325]: 2025-12-03 18:57:48.878 348329 DEBUG nova.network.neutron [req-c7570a33-a7e0-4ffe-aef5-e8817e62f088 req-cf440ad4-2536-4b92-8be0-bed79e120cad 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] Updated VIF entry in instance network info cache for port fa1f26e3-cb99-46c5-b405-4fbdc024f8cf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  3 18:57:48 compute-0 nova_compute[348325]: 2025-12-03 18:57:48.878 348329 DEBUG nova.network.neutron [req-c7570a33-a7e0-4ffe-aef5-e8817e62f088 req-cf440ad4-2536-4b92-8be0-bed79e120cad 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] Updating instance_info_cache with network_info: [{"id": "fa1f26e3-cb99-46c5-b405-4fbdc024f8cf", "address": "fa:16:3e:61:69:24", "network": {"id": "6cdaa8da-4e85-47a7-84f8-76fb36b9391a", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1996345903-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86bd600007a042cea64439c21bd920b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa1f26e3-cb", "ovs_interfaceid": "fa1f26e3-cb99-46c5-b405-4fbdc024f8cf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 18:57:48 compute-0 nova_compute[348325]: 2025-12-03 18:57:48.901 348329 DEBUG oslo_concurrency.lockutils [req-c7570a33-a7e0-4ffe-aef5-e8817e62f088 req-cf440ad4-2536-4b92-8be0-bed79e120cad 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Releasing lock "refresh_cache-59c4595c-fa0d-4410-9dda-f266cca0c9e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 18:57:48 compute-0 podman[443027]: 2025-12-03 18:57:48.912750006 +0000 UTC m=+0.082310951 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, managed_by=edpm_ansible, com.redhat.component=ubi9-container)
Dec  3 18:57:48 compute-0 podman[443028]: 2025-12-03 18:57:48.936606318 +0000 UTC m=+0.098938307 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec  3 18:57:48 compute-0 podman[443033]: 2025-12-03 18:57:48.940394971 +0000 UTC m=+0.085063878 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 18:57:48 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:57:48 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/619879487' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:57:49 compute-0 nova_compute[348325]: 2025-12-03 18:57:49.017 348329 DEBUG oslo_concurrency.processutils [None req-e122ff7d-d4b7-4954-85b1-94d756f9f682 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:57:49 compute-0 nova_compute[348325]: 2025-12-03 18:57:49.025 348329 DEBUG nova.compute.provider_tree [None req-e122ff7d-d4b7-4954-85b1-94d756f9f682 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 18:57:49 compute-0 nova_compute[348325]: 2025-12-03 18:57:49.041 348329 DEBUG nova.scheduler.client.report [None req-e122ff7d-d4b7-4954-85b1-94d756f9f682 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 18:57:49 compute-0 nova_compute[348325]: 2025-12-03 18:57:49.065 348329 DEBUG oslo_concurrency.lockutils [None req-e122ff7d-d4b7-4954-85b1-94d756f9f682 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.575s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:57:49 compute-0 nova_compute[348325]: 2025-12-03 18:57:49.099 348329 INFO nova.scheduler.client.report [None req-e122ff7d-d4b7-4954-85b1-94d756f9f682 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Deleted allocations for instance 59c4595c-fa0d-4410-9dda-f266cca0c9e4#033[00m
Dec  3 18:57:49 compute-0 nova_compute[348325]: 2025-12-03 18:57:49.205 348329 DEBUG oslo_concurrency.lockutils [None req-e122ff7d-d4b7-4954-85b1-94d756f9f682 5d41669fc94f4811803f4ebf54dbcebc 86bd600007a042cea64439c21bd920b0 - - default default] Lock "59c4595c-fa0d-4410-9dda-f266cca0c9e4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:57:49 compute-0 nova_compute[348325]: 2025-12-03 18:57:49.339 348329 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764788254.2763402, 67a42a04-754c-489b-9aeb-12d68487d4d9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 18:57:49 compute-0 nova_compute[348325]: 2025-12-03 18:57:49.339 348329 INFO nova.compute.manager [-] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] VM Stopped (Lifecycle Event)#033[00m
Dec  3 18:57:49 compute-0 nova_compute[348325]: 2025-12-03 18:57:49.356 348329 DEBUG nova.compute.manager [None req-6a7552f7-779c-4fde-86c3-cfc61b3e1167 - - - - - -] [instance: 67a42a04-754c-489b-9aeb-12d68487d4d9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 18:57:49 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:57:49 compute-0 nova_compute[348325]: 2025-12-03 18:57:49.504 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:49 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1764: 321 pgs: 321 active+clean; 104 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 31 KiB/s wr, 260 op/s
Dec  3 18:57:49 compute-0 nova_compute[348325]: 2025-12-03 18:57:49.779 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:57:50 compute-0 nova_compute[348325]: 2025-12-03 18:57:50.276 348329 DEBUG nova.compute.manager [req-c6ad7191-4936-481f-bac8-2f8cbc350ac4 req-da6f3b6e-785f-454e-8cbf-db6ff9081aa6 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] Received event network-vif-deleted-df320f97-b085-4528-84d7-d0b7e40923a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 18:57:50 compute-0 nova_compute[348325]: 2025-12-03 18:57:50.277 348329 DEBUG nova.compute.manager [req-c6ad7191-4936-481f-bac8-2f8cbc350ac4 req-da6f3b6e-785f-454e-8cbf-db6ff9081aa6 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] Received event network-vif-plugged-fa1f26e3-cb99-46c5-b405-4fbdc024f8cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 18:57:50 compute-0 nova_compute[348325]: 2025-12-03 18:57:50.277 348329 DEBUG oslo_concurrency.lockutils [req-c6ad7191-4936-481f-bac8-2f8cbc350ac4 req-da6f3b6e-785f-454e-8cbf-db6ff9081aa6 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "59c4595c-fa0d-4410-9dda-f266cca0c9e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:57:50 compute-0 nova_compute[348325]: 2025-12-03 18:57:50.277 348329 DEBUG oslo_concurrency.lockutils [req-c6ad7191-4936-481f-bac8-2f8cbc350ac4 req-da6f3b6e-785f-454e-8cbf-db6ff9081aa6 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "59c4595c-fa0d-4410-9dda-f266cca0c9e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:57:50 compute-0 nova_compute[348325]: 2025-12-03 18:57:50.278 348329 DEBUG oslo_concurrency.lockutils [req-c6ad7191-4936-481f-bac8-2f8cbc350ac4 req-da6f3b6e-785f-454e-8cbf-db6ff9081aa6 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "59c4595c-fa0d-4410-9dda-f266cca0c9e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:57:50 compute-0 nova_compute[348325]: 2025-12-03 18:57:50.278 348329 DEBUG nova.compute.manager [req-c6ad7191-4936-481f-bac8-2f8cbc350ac4 req-da6f3b6e-785f-454e-8cbf-db6ff9081aa6 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] No waiting events found dispatching network-vif-plugged-fa1f26e3-cb99-46c5-b405-4fbdc024f8cf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  3 18:57:50 compute-0 nova_compute[348325]: 2025-12-03 18:57:50.278 348329 WARNING nova.compute.manager [req-c6ad7191-4936-481f-bac8-2f8cbc350ac4 req-da6f3b6e-785f-454e-8cbf-db6ff9081aa6 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] Received unexpected event network-vif-plugged-fa1f26e3-cb99-46c5-b405-4fbdc024f8cf for instance with vm_state deleted and task_state None.
Dec  3 18:57:50 compute-0 nova_compute[348325]: 2025-12-03 18:57:50.278 348329 DEBUG nova.compute.manager [req-c6ad7191-4936-481f-bac8-2f8cbc350ac4 req-da6f3b6e-785f-454e-8cbf-db6ff9081aa6 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] Received event network-vif-deleted-fa1f26e3-cb99-46c5-b405-4fbdc024f8cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  3 18:57:50 compute-0 nova_compute[348325]: 2025-12-03 18:57:50.279 348329 INFO nova.compute.manager [req-c6ad7191-4936-481f-bac8-2f8cbc350ac4 req-da6f3b6e-785f-454e-8cbf-db6ff9081aa6 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] Neutron deleted interface fa1f26e3-cb99-46c5-b405-4fbdc024f8cf; detaching it from the instance and deleting it from the info cache
Dec  3 18:57:50 compute-0 nova_compute[348325]: 2025-12-03 18:57:50.279 348329 DEBUG nova.network.neutron [req-c6ad7191-4936-481f-bac8-2f8cbc350ac4 req-da6f3b6e-785f-454e-8cbf-db6ff9081aa6 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Dec  3 18:57:50 compute-0 nova_compute[348325]: 2025-12-03 18:57:50.283 348329 DEBUG nova.compute.manager [req-c6ad7191-4936-481f-bac8-2f8cbc350ac4 req-da6f3b6e-785f-454e-8cbf-db6ff9081aa6 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] Detach interface failed, port_id=fa1f26e3-cb99-46c5-b405-4fbdc024f8cf, reason: Instance 59c4595c-fa0d-4410-9dda-f266cca0c9e4 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
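The WARNING and the failed detach above are the expected tail of a race: Neutron's network-vif-plugged and network-vif-deleted notifications arrived after the instance had already reached vm_state deleted, so there is no waiter to wake and no guest to detach from. A hedged sketch of that dispatch decision (names are illustrative, not Nova's actual code):

    # Illustrative only: hand an external event to whichever greenthread is
    # waiting on it; if nobody is, this is the "No waiting events found" path.
    def dispatch_external_event(waiters, instance_uuid, event_key):
        waiter = waiters.pop((instance_uuid, event_key), None)
        if waiter is None:
            # For a deleted instance the event is logged as unexpected
            # and simply dropped, as in the WARNING above.
            return False
        waiter.set_result(event_key)  # wake the thread blocked on the vif plug
        return True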
Dec  3 18:57:51 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1765: 321 pgs: 321 active+clean; 103 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 17 KiB/s wr, 259 op/s
Dec  3 18:57:53 compute-0 nova_compute[348325]: 2025-12-03 18:57:53.360 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:57:53 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1766: 321 pgs: 321 active+clean; 103 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.3 KiB/s wr, 184 op/s
Dec  3 18:57:54 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:57:54 compute-0 nova_compute[348325]: 2025-12-03 18:57:54.509 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:57:55 compute-0 ovn_controller[89305]: 2025-12-03T18:57:55Z|00107|binding|INFO|Releasing lport b52268a2-5f2a-45ba-8c23-e32c70c8253f from this chassis (sb_readonly=0)
Dec  3 18:57:55 compute-0 nova_compute[348325]: 2025-12-03 18:57:55.672 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:57:55 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1767: 321 pgs: 321 active+clean; 103 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.3 KiB/s wr, 115 op/s
Dec  3 18:57:55 compute-0 ovn_controller[89305]: 2025-12-03T18:57:55Z|00108|binding|INFO|Releasing lport b52268a2-5f2a-45ba-8c23-e32c70c8253f from this chassis (sb_readonly=0)
Dec  3 18:57:55 compute-0 nova_compute[348325]: 2025-12-03 18:57:55.824 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:57:57 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1768: 321 pgs: 321 active+clean; 103 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 695 KiB/s rd, 2.0 KiB/s wr, 61 op/s
Dec  3 18:57:57 compute-0 nova_compute[348325]: 2025-12-03 18:57:57.788 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:57:58 compute-0 nova_compute[348325]: 2025-12-03 18:57:58.368 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:57:58 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Dec  3 18:57:58 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  3 18:57:58 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:57:58 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:57:58 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 18:57:58 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:57:58 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 18:57:58 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:57:58 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev bb66917c-0f58-4101-87c2-2cc703880dfc does not exist
Dec  3 18:57:58 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 9bd16f9e-2444-41e3-a0e3-ce1ffd00ee40 does not exist
Dec  3 18:57:58 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev e6f4b50b-b0c5-44ad-a0e0-9a92719f7906 does not exist
Dec  3 18:57:58 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 18:57:58 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 18:57:58 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 18:57:58 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:57:58 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:57:58 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:57:58 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  3 18:57:58 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:57:58 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:57:58 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:57:59 compute-0 podman[443360]: 2025-12-03 18:57:59.152242484 +0000 UTC m=+0.055256000 container create 72e6c769b498151d1a1e6ad90775ad38b4e8e0dd72ab0a103f6f2097a4c36f47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:57:59 compute-0 systemd[1]: Started libpod-conmon-72e6c769b498151d1a1e6ad90775ad38b4e8e0dd72ab0a103f6f2097a4c36f47.scope.
Dec  3 18:57:59 compute-0 podman[443360]: 2025-12-03 18:57:59.126961327 +0000 UTC m=+0.029974873 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:57:59 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:57:59 compute-0 podman[443360]: 2025-12-03 18:57:59.242677712 +0000 UTC m=+0.145691248 container init 72e6c769b498151d1a1e6ad90775ad38b4e8e0dd72ab0a103f6f2097a4c36f47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_archimedes, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:57:59 compute-0 podman[443360]: 2025-12-03 18:57:59.251878546 +0000 UTC m=+0.154892062 container start 72e6c769b498151d1a1e6ad90775ad38b4e8e0dd72ab0a103f6f2097a4c36f47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:57:59 compute-0 podman[443360]: 2025-12-03 18:57:59.256565961 +0000 UTC m=+0.159579497 container attach 72e6c769b498151d1a1e6ad90775ad38b4e8e0dd72ab0a103f6f2097a4c36f47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_archimedes, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec  3 18:57:59 compute-0 goofy_archimedes[443375]: 167 167
Dec  3 18:57:59 compute-0 systemd[1]: libpod-72e6c769b498151d1a1e6ad90775ad38b4e8e0dd72ab0a103f6f2097a4c36f47.scope: Deactivated successfully.
Dec  3 18:57:59 compute-0 podman[443360]: 2025-12-03 18:57:59.261226595 +0000 UTC m=+0.164240121 container died 72e6c769b498151d1a1e6ad90775ad38b4e8e0dd72ab0a103f6f2097a4c36f47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:57:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-15776c615198fa9da21705643844efeee615a8a8168c021d2db464f1772a4014-merged.mount: Deactivated successfully.
Dec  3 18:57:59 compute-0 podman[443360]: 2025-12-03 18:57:59.306625483 +0000 UTC m=+0.209638999 container remove 72e6c769b498151d1a1e6ad90775ad38b4e8e0dd72ab0a103f6f2097a4c36f47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  3 18:57:59 compute-0 systemd[1]: libpod-conmon-72e6c769b498151d1a1e6ad90775ad38b4e8e0dd72ab0a103f6f2097a4c36f47.scope: Deactivated successfully.
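The goofy_archimedes container lives for well under a second: podman records create, init, start, attach, died and remove, and systemd tears down the matching libpod/conmon scopes. That is cephadm probing the host through one-shot `podman run --rm` containers. A hedged reproduction of the pattern (the image digest comes from the log; the stat command is an assumption about what printed "167 167", the ceph uid/gid):

    import subprocess

    IMAGE = ('quay.io/ceph/ceph@sha256:'
             '1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0')

    # 'podman run --rm' yields exactly the create/init/start/attach/died/remove
    # sequence journald records above, then deletes the container.
    out = subprocess.run(
        ['podman', 'run', '--rm', '--entrypoint', 'stat', IMAGE,
         '-c', '%u %g', '/var/lib/ceph'],
        capture_output=True, text=True, check=True)
    print(out.stdout.strip())  # e.g. "167 167"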
Dec  3 18:57:59 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:57:59 compute-0 nova_compute[348325]: 2025-12-03 18:57:59.447 348329 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764788264.4461768, 59c4595c-fa0d-4410-9dda-f266cca0c9e4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  3 18:57:59 compute-0 nova_compute[348325]: 2025-12-03 18:57:59.449 348329 INFO nova.compute.manager [-] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] VM Stopped (Lifecycle Event)
Dec  3 18:57:59 compute-0 nova_compute[348325]: 2025-12-03 18:57:59.456 348329 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764788264.4555662, 47c940fc-9b39-48b6-a183-42c0547ac964 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  3 18:57:59 compute-0 nova_compute[348325]: 2025-12-03 18:57:59.456 348329 INFO nova.compute.manager [-] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] VM Stopped (Lifecycle Event)
Dec  3 18:57:59 compute-0 nova_compute[348325]: 2025-12-03 18:57:59.481 348329 DEBUG nova.compute.manager [None req-38387642-8ad1-4554-84d9-1e2ac0f353fa - - - - - -] [instance: 59c4595c-fa0d-4410-9dda-f266cca0c9e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  3 18:57:59 compute-0 nova_compute[348325]: 2025-12-03 18:57:59.484 348329 DEBUG nova.compute.manager [None req-761f080e-d230-4f7d-9a20-8b69c68b8caa - - - - - -] [instance: 47c940fc-9b39-48b6-a183-42c0547ac964] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  3 18:57:59 compute-0 podman[443400]: 2025-12-03 18:57:59.510262465 +0000 UTC m=+0.060472987 container create 267ec45b4f5d07083f157674cda3c04bcb5fbf6fc5a222ca1c41814992f55baa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_haibt, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:57:59 compute-0 nova_compute[348325]: 2025-12-03 18:57:59.513 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:57:59 compute-0 systemd[1]: Started libpod-conmon-267ec45b4f5d07083f157674cda3c04bcb5fbf6fc5a222ca1c41814992f55baa.scope.
Dec  3 18:57:59 compute-0 podman[443400]: 2025-12-03 18:57:59.48592138 +0000 UTC m=+0.036131912 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:57:59 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:57:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08ce8adca8b3129b46cb56115579d378cc60ee9ed4ffeb173a8eaf3e311f8132/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:57:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08ce8adca8b3129b46cb56115579d378cc60ee9ed4ffeb173a8eaf3e311f8132/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:57:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08ce8adca8b3129b46cb56115579d378cc60ee9ed4ffeb173a8eaf3e311f8132/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:57:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08ce8adca8b3129b46cb56115579d378cc60ee9ed4ffeb173a8eaf3e311f8132/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:57:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08ce8adca8b3129b46cb56115579d378cc60ee9ed4ffeb173a8eaf3e311f8132/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 18:57:59 compute-0 podman[443400]: 2025-12-03 18:57:59.657953311 +0000 UTC m=+0.208163833 container init 267ec45b4f5d07083f157674cda3c04bcb5fbf6fc5a222ca1c41814992f55baa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  3 18:57:59 compute-0 podman[443400]: 2025-12-03 18:57:59.673971722 +0000 UTC m=+0.224182234 container start 267ec45b4f5d07083f157674cda3c04bcb5fbf6fc5a222ca1c41814992f55baa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_haibt, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec  3 18:57:59 compute-0 podman[443400]: 2025-12-03 18:57:59.677951 +0000 UTC m=+0.228161512 container attach 267ec45b4f5d07083f157674cda3c04bcb5fbf6fc5a222ca1c41814992f55baa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_haibt, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  3 18:57:59 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1769: 321 pgs: 321 active+clean; 103 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1023 B/s wr, 19 op/s
Dec  3 18:57:59 compute-0 podman[158200]: time="2025-12-03T18:57:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:57:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:57:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45516 "" "Go-http-client/1.1"
Dec  3 18:57:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:57:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9058 "" "Go-http-client/1.1"
Dec  3 18:58:00 compute-0 vigorous_haibt[443416]: --> passed data devices: 0 physical, 3 LVM
Dec  3 18:58:00 compute-0 vigorous_haibt[443416]: --> relative data size: 1.0
Dec  3 18:58:00 compute-0 vigorous_haibt[443416]: --> All data devices are unavailable
Dec  3 18:58:00 compute-0 systemd[1]: libpod-267ec45b4f5d07083f157674cda3c04bcb5fbf6fc5a222ca1c41814992f55baa.scope: Deactivated successfully.
Dec  3 18:58:00 compute-0 systemd[1]: libpod-267ec45b4f5d07083f157674cda3c04bcb5fbf6fc5a222ca1c41814992f55baa.scope: Consumed 1.032s CPU time.
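vigorous_haibt is a ceph-volume probe: it was handed 0 physical and 3 LVM data devices and reports them all unavailable because each LV already carries an OSD (see the lvm list output from keen_williamson further below). A hedged sketch of checking device availability the same way, assuming it runs where ceph-volume is installed:

    import json
    import subprocess

    # 'ceph-volume inventory' reports, per device, whether it can take an OSD.
    raw = subprocess.run(
        ['ceph-volume', 'inventory', '--format', 'json'],
        capture_output=True, text=True, check=True).stdout
    for dev in json.loads(raw):
        print(dev['path'], 'available' if dev['available'] else 'unavailable')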
Dec  3 18:58:00 compute-0 nova_compute[348325]: 2025-12-03 18:58:00.802 348329 DEBUG oslo_concurrency.lockutils [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Acquiring lock "c9937213-8842-4393-90b0-edb363037633" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:58:00 compute-0 nova_compute[348325]: 2025-12-03 18:58:00.804 348329 DEBUG oslo_concurrency.lockutils [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Lock "c9937213-8842-4393-90b0-edb363037633" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:58:00 compute-0 podman[443445]: 2025-12-03 18:58:00.827521287 +0000 UTC m=+0.052262948 container died 267ec45b4f5d07083f157674cda3c04bcb5fbf6fc5a222ca1c41814992f55baa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_haibt, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:58:00 compute-0 nova_compute[348325]: 2025-12-03 18:58:00.842 348329 DEBUG nova.compute.manager [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec  3 18:58:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-08ce8adca8b3129b46cb56115579d378cc60ee9ed4ffeb173a8eaf3e311f8132-merged.mount: Deactivated successfully.
Dec  3 18:58:00 compute-0 podman[443446]: 2025-12-03 18:58:00.889294405 +0000 UTC m=+0.105634411 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 18:58:00 compute-0 podman[443445]: 2025-12-03 18:58:00.920029085 +0000 UTC m=+0.144770666 container remove 267ec45b4f5d07083f157674cda3c04bcb5fbf6fc5a222ca1c41814992f55baa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec  3 18:58:00 compute-0 nova_compute[348325]: 2025-12-03 18:58:00.925 348329 DEBUG oslo_concurrency.lockutils [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:58:00 compute-0 nova_compute[348325]: 2025-12-03 18:58:00.926 348329 DEBUG oslo_concurrency.lockutils [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:58:00 compute-0 systemd[1]: libpod-conmon-267ec45b4f5d07083f157674cda3c04bcb5fbf6fc5a222ca1c41814992f55baa.scope: Deactivated successfully.
Dec  3 18:58:00 compute-0 nova_compute[348325]: 2025-12-03 18:58:00.943 348329 DEBUG nova.virt.hardware [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec  3 18:58:00 compute-0 nova_compute[348325]: 2025-12-03 18:58:00.943 348329 INFO nova.compute.claims [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] Claim successful on node compute-0.ctlplane.example.com
Dec  3 18:58:01 compute-0 nova_compute[348325]: 2025-12-03 18:58:01.294 348329 DEBUG oslo_concurrency.processutils [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 18:58:01 compute-0 openstack_network_exporter[365222]: ERROR   18:58:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:58:01 compute-0 openstack_network_exporter[365222]: ERROR   18:58:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:58:01 compute-0 openstack_network_exporter[365222]: ERROR   18:58:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:58:01 compute-0 openstack_network_exporter[365222]: ERROR   18:58:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:58:01 compute-0 openstack_network_exporter[365222]: ERROR   18:58:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:58:01 compute-0 nova_compute[348325]: 2025-12-03 18:58:01.499 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:58:01 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1770: 321 pgs: 321 active+clean; 103 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 4.1 KiB/s rd, 682 B/s wr, 7 op/s
Dec  3 18:58:01 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:58:01 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2876302577' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:58:01 compute-0 podman[443638]: 2025-12-03 18:58:01.760044664 +0000 UTC m=+0.069745684 container create e745eca043ef4c2f5d01b51d739b628a6e63b2708e6743d85db3fa05e427e948 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_margulis, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:58:01 compute-0 nova_compute[348325]: 2025-12-03 18:58:01.790 348329 DEBUG oslo_concurrency.processutils [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 18:58:01 compute-0 nova_compute[348325]: 2025-12-03 18:58:01.801 348329 DEBUG nova.compute.provider_tree [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  3 18:58:01 compute-0 podman[443638]: 2025-12-03 18:58:01.729294143 +0000 UTC m=+0.038995243 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:58:01 compute-0 nova_compute[348325]: 2025-12-03 18:58:01.824 348329 DEBUG nova.scheduler.client.report [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
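Placement turns that inventory into schedulable capacity as (total - reserved) × allocation_ratio: 8 × 4.0 = 32 VCPU, (7679 - 512) × 1.0 = 7167 MB of RAM, and (59 - 1) × 0.9 = 52.2 GB of disk. A worked restatement (my arithmetic, not Nova output):

    # Effective capacity placement derives from the inventory logged above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2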
Dec  3 18:58:01 compute-0 systemd[1]: Started libpod-conmon-e745eca043ef4c2f5d01b51d739b628a6e63b2708e6743d85db3fa05e427e948.scope.
Dec  3 18:58:01 compute-0 nova_compute[348325]: 2025-12-03 18:58:01.861 348329 DEBUG oslo_concurrency.lockutils [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.935s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:58:01 compute-0 nova_compute[348325]: 2025-12-03 18:58:01.862 348329 DEBUG nova.compute.manager [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec  3 18:58:01 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:58:01 compute-0 podman[443638]: 2025-12-03 18:58:01.907362161 +0000 UTC m=+0.217063271 container init e745eca043ef4c2f5d01b51d739b628a6e63b2708e6743d85db3fa05e427e948 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_margulis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  3 18:58:01 compute-0 podman[443638]: 2025-12-03 18:58:01.917199361 +0000 UTC m=+0.226900391 container start e745eca043ef4c2f5d01b51d739b628a6e63b2708e6743d85db3fa05e427e948 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_margulis, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  3 18:58:01 compute-0 serene_margulis[443656]: 167 167
Dec  3 18:58:01 compute-0 podman[443638]: 2025-12-03 18:58:01.925668418 +0000 UTC m=+0.235369528 container attach e745eca043ef4c2f5d01b51d739b628a6e63b2708e6743d85db3fa05e427e948 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_margulis, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:58:01 compute-0 systemd[1]: libpod-e745eca043ef4c2f5d01b51d739b628a6e63b2708e6743d85db3fa05e427e948.scope: Deactivated successfully.
Dec  3 18:58:01 compute-0 podman[443638]: 2025-12-03 18:58:01.928799024 +0000 UTC m=+0.238500114 container died e745eca043ef4c2f5d01b51d739b628a6e63b2708e6743d85db3fa05e427e948 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  3 18:58:01 compute-0 nova_compute[348325]: 2025-12-03 18:58:01.952 348329 DEBUG nova.compute.manager [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec  3 18:58:01 compute-0 nova_compute[348325]: 2025-12-03 18:58:01.953 348329 DEBUG nova.network.neutron [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec  3 18:58:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f845779b0da743e6cf63c620f3a3409d04888c9a177d239539882b4d171f01c-merged.mount: Deactivated successfully.
Dec  3 18:58:01 compute-0 nova_compute[348325]: 2025-12-03 18:58:01.977 348329 INFO nova.virt.libvirt.driver [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec  3 18:58:01 compute-0 podman[443638]: 2025-12-03 18:58:01.988266706 +0000 UTC m=+0.297967726 container remove e745eca043ef4c2f5d01b51d739b628a6e63b2708e6743d85db3fa05e427e948 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:58:02 compute-0 nova_compute[348325]: 2025-12-03 18:58:01.998 348329 DEBUG nova.compute.manager [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec  3 18:58:02 compute-0 systemd[1]: libpod-conmon-e745eca043ef4c2f5d01b51d739b628a6e63b2708e6743d85db3fa05e427e948.scope: Deactivated successfully.
Dec  3 18:58:02 compute-0 nova_compute[348325]: 2025-12-03 18:58:02.132 348329 DEBUG nova.compute.manager [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec  3 18:58:02 compute-0 nova_compute[348325]: 2025-12-03 18:58:02.133 348329 DEBUG nova.virt.libvirt.driver [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec  3 18:58:02 compute-0 nova_compute[348325]: 2025-12-03 18:58:02.133 348329 INFO nova.virt.libvirt.driver [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] Creating image(s)
Dec  3 18:58:02 compute-0 nova_compute[348325]: 2025-12-03 18:58:02.181 348329 DEBUG nova.storage.rbd_utils [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] rbd image c9937213-8842-4393-90b0-edb363037633_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  3 18:58:02 compute-0 nova_compute[348325]: 2025-12-03 18:58:02.226 348329 DEBUG nova.storage.rbd_utils [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] rbd image c9937213-8842-4393-90b0-edb363037633_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  3 18:58:02 compute-0 podman[443693]: 2025-12-03 18:58:02.232900388 +0000 UTC m=+0.065812668 container create ad65c6555dcc3c2a767486207def8819d11d9212bde860bf67832f306f00ca47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_williamson, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec  3 18:58:02 compute-0 nova_compute[348325]: 2025-12-03 18:58:02.280 348329 DEBUG nova.storage.rbd_utils [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] rbd image c9937213-8842-4393-90b0-edb363037633_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  3 18:58:02 compute-0 podman[443693]: 2025-12-03 18:58:02.197612477 +0000 UTC m=+0.030524787 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:58:02 compute-0 nova_compute[348325]: 2025-12-03 18:58:02.294 348329 DEBUG oslo_concurrency.processutils [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5cd3db9bb272569bd3ad2bd1318028e61915b864 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 18:58:02 compute-0 systemd[1]: Started libpod-conmon-ad65c6555dcc3c2a767486207def8819d11d9212bde860bf67832f306f00ca47.scope.
Dec  3 18:58:02 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:58:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9478b026a5d8a3e38f7f30b3b3f75f0debd3585ffdfcdaf91b3ed14bd1e672a5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:58:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9478b026a5d8a3e38f7f30b3b3f75f0debd3585ffdfcdaf91b3ed14bd1e672a5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:58:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9478b026a5d8a3e38f7f30b3b3f75f0debd3585ffdfcdaf91b3ed14bd1e672a5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:58:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9478b026a5d8a3e38f7f30b3b3f75f0debd3585ffdfcdaf91b3ed14bd1e672a5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:58:02 compute-0 nova_compute[348325]: 2025-12-03 18:58:02.373 348329 DEBUG oslo_concurrency.processutils [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5cd3db9bb272569bd3ad2bd1318028e61915b864 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 18:58:02 compute-0 nova_compute[348325]: 2025-12-03 18:58:02.374 348329 DEBUG oslo_concurrency.lockutils [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Acquiring lock "5cd3db9bb272569bd3ad2bd1318028e61915b864" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:58:02 compute-0 nova_compute[348325]: 2025-12-03 18:58:02.375 348329 DEBUG oslo_concurrency.lockutils [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Lock "5cd3db9bb272569bd3ad2bd1318028e61915b864" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:58:02 compute-0 nova_compute[348325]: 2025-12-03 18:58:02.375 348329 DEBUG oslo_concurrency.lockutils [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Lock "5cd3db9bb272569bd3ad2bd1318028e61915b864" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
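The qemu-img probe above runs under oslo.concurrency's prlimit wrapper, capping address space at 1 GiB (--as=1073741824) and CPU time at 30 s so a hostile image can't exhaust the compute host. The same guard can be requested directly; a minimal sketch using oslo.concurrency's ProcessLimits, with the image path copied from the log:

    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(address_space=1024 ** 3, cpu_time=30)
    out, _err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info',
        '/var/lib/nova/instances/_base/5cd3db9bb272569bd3ad2bd1318028e61915b864',
        '--force-share', '--output=json',
        prlimit=limits)
    print(out)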
Dec  3 18:58:02 compute-0 podman[443693]: 2025-12-03 18:58:02.377020997 +0000 UTC m=+0.209933287 container init ad65c6555dcc3c2a767486207def8819d11d9212bde860bf67832f306f00ca47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  3 18:58:02 compute-0 podman[443693]: 2025-12-03 18:58:02.390303972 +0000 UTC m=+0.223216262 container start ad65c6555dcc3c2a767486207def8819d11d9212bde860bf67832f306f00ca47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_williamson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec  3 18:58:02 compute-0 podman[443693]: 2025-12-03 18:58:02.396212736 +0000 UTC m=+0.229125036 container attach ad65c6555dcc3c2a767486207def8819d11d9212bde860bf67832f306f00ca47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_williamson, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec  3 18:58:02 compute-0 nova_compute[348325]: 2025-12-03 18:58:02.420 348329 DEBUG nova.storage.rbd_utils [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] rbd image c9937213-8842-4393-90b0-edb363037633_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  3 18:58:02 compute-0 nova_compute[348325]: 2025-12-03 18:58:02.432 348329 DEBUG oslo_concurrency.processutils [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/5cd3db9bb272569bd3ad2bd1318028e61915b864 c9937213-8842-4393-90b0-edb363037633_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 18:58:02 compute-0 nova_compute[348325]: 2025-12-03 18:58:02.492 348329 DEBUG nova.policy [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '78734fd37e3f4665b1cb2cbcba2e9f65', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '82b2746c38174502bdcb70a8ab378edf', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec  3 18:58:02 compute-0 nova_compute[348325]: 2025-12-03 18:58:02.873 348329 DEBUG oslo_concurrency.processutils [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/5cd3db9bb272569bd3ad2bd1318028e61915b864 c9937213-8842-4393-90b0-edb363037633_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 18:58:02 compute-0 nova_compute[348325]: 2025-12-03 18:58:02.982 348329 DEBUG nova.storage.rbd_utils [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] resizing rbd image c9937213-8842-4393-90b0-edb363037633_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
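After the rbd import succeeds, Nova grows the freshly imported image to the flavor's root-disk size (1073741824 bytes = 1 GiB). The resize step can be expressed with the python-rbd binding; a hedged sketch, with pool, image name and size copied from the log and the connection details assumed:

    import rados
    import rbd

    # Same --id openstack / ceph.conf pair the rbd import command above used.
    with rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack') as cluster:
        with cluster.open_ioctx('vms') as ioctx:
            with rbd.Image(ioctx, 'c9937213-8842-4393-90b0-edb363037633_disk') as image:
                image.resize(1073741824)  # 1 GiB, matching "resizing rbd image"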
Dec  3 18:58:03 compute-0 keen_williamson[443751]: {
Dec  3 18:58:03 compute-0 keen_williamson[443751]:    "0": [
Dec  3 18:58:03 compute-0 keen_williamson[443751]:        {
Dec  3 18:58:03 compute-0 keen_williamson[443751]:            "devices": [
Dec  3 18:58:03 compute-0 keen_williamson[443751]:                "/dev/loop3"
Dec  3 18:58:03 compute-0 keen_williamson[443751]:            ],
Dec  3 18:58:03 compute-0 keen_williamson[443751]:            "lv_name": "ceph_lv0",
Dec  3 18:58:03 compute-0 keen_williamson[443751]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:58:03 compute-0 keen_williamson[443751]:            "lv_size": "21470642176",
Dec  3 18:58:03 compute-0 keen_williamson[443751]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=973fbbc8-5aff-4a53-bee8-42e5a6788dd6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:58:03 compute-0 keen_williamson[443751]:            "lv_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:58:03 compute-0 keen_williamson[443751]:            "name": "ceph_lv0",
Dec  3 18:58:03 compute-0 keen_williamson[443751]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:58:03 compute-0 keen_williamson[443751]:            "tags": {
Dec  3 18:58:03 compute-0 keen_williamson[443751]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:58:03 compute-0 keen_williamson[443751]:                "ceph.block_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:58:03 compute-0 keen_williamson[443751]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:58:03 compute-0 keen_williamson[443751]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:58:03 compute-0 keen_williamson[443751]:                "ceph.cluster_name": "ceph",
Dec  3 18:58:03 compute-0 keen_williamson[443751]:                "ceph.crush_device_class": "",
Dec  3 18:58:03 compute-0 keen_williamson[443751]:                "ceph.encrypted": "0",
Dec  3 18:58:03 compute-0 keen_williamson[443751]:                "ceph.osd_fsid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:58:03 compute-0 keen_williamson[443751]:                "ceph.osd_id": "0",
Dec  3 18:58:03 compute-0 keen_williamson[443751]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:58:03 compute-0 keen_williamson[443751]:                "ceph.type": "block",
Dec  3 18:58:03 compute-0 keen_williamson[443751]:                "ceph.vdo": "0"
Dec  3 18:58:03 compute-0 keen_williamson[443751]:            },
Dec  3 18:58:03 compute-0 keen_williamson[443751]:            "type": "block",
Dec  3 18:58:03 compute-0 keen_williamson[443751]:            "vg_name": "ceph_vg0"
Dec  3 18:58:03 compute-0 keen_williamson[443751]:        }
Dec  3 18:58:03 compute-0 keen_williamson[443751]:    ],
Dec  3 18:58:03 compute-0 keen_williamson[443751]:    "1": [
Dec  3 18:58:03 compute-0 keen_williamson[443751]:        {
Dec  3 18:58:03 compute-0 keen_williamson[443751]:            "devices": [
Dec  3 18:58:03 compute-0 keen_williamson[443751]:                "/dev/loop4"
Dec  3 18:58:03 compute-0 keen_williamson[443751]:            ],
Dec  3 18:58:03 compute-0 keen_williamson[443751]:            "lv_name": "ceph_lv1",
Dec  3 18:58:03 compute-0 keen_williamson[443751]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:58:03 compute-0 keen_williamson[443751]:            "lv_size": "21470642176",
Dec  3 18:58:03 compute-0 keen_williamson[443751]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1e2b0083-5293-47cb-a3d1-bc27cedc4ede,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:58:03 compute-0 keen_williamson[443751]:            "lv_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:58:03 compute-0 keen_williamson[443751]:            "name": "ceph_lv1",
Dec  3 18:58:03 compute-0 keen_williamson[443751]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:58:03 compute-0 keen_williamson[443751]:            "tags": {
Dec  3 18:58:03 compute-0 keen_williamson[443751]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:58:03 compute-0 keen_williamson[443751]:                "ceph.block_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:58:03 compute-0 keen_williamson[443751]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:58:03 compute-0 keen_williamson[443751]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:58:03 compute-0 keen_williamson[443751]:                "ceph.cluster_name": "ceph",
Dec  3 18:58:03 compute-0 keen_williamson[443751]:                "ceph.crush_device_class": "",
Dec  3 18:58:03 compute-0 keen_williamson[443751]:                "ceph.encrypted": "0",
Dec  3 18:58:03 compute-0 keen_williamson[443751]:                "ceph.osd_fsid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:58:03 compute-0 keen_williamson[443751]:                "ceph.osd_id": "1",
Dec  3 18:58:03 compute-0 keen_williamson[443751]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:58:03 compute-0 keen_williamson[443751]:                "ceph.type": "block",
Dec  3 18:58:03 compute-0 keen_williamson[443751]:                "ceph.vdo": "0"
Dec  3 18:58:03 compute-0 keen_williamson[443751]:            },
Dec  3 18:58:03 compute-0 keen_williamson[443751]:            "type": "block",
Dec  3 18:58:03 compute-0 keen_williamson[443751]:            "vg_name": "ceph_vg1"
Dec  3 18:58:03 compute-0 keen_williamson[443751]:        }
Dec  3 18:58:03 compute-0 keen_williamson[443751]:    ],
Dec  3 18:58:03 compute-0 keen_williamson[443751]:    "2": [
Dec  3 18:58:03 compute-0 keen_williamson[443751]:        {
Dec  3 18:58:03 compute-0 keen_williamson[443751]:            "devices": [
Dec  3 18:58:03 compute-0 keen_williamson[443751]:                "/dev/loop5"
Dec  3 18:58:03 compute-0 keen_williamson[443751]:            ],
Dec  3 18:58:03 compute-0 keen_williamson[443751]:            "lv_name": "ceph_lv2",
Dec  3 18:58:03 compute-0 keen_williamson[443751]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:58:03 compute-0 keen_williamson[443751]:            "lv_size": "21470642176",
Dec  3 18:58:03 compute-0 keen_williamson[443751]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2abec9de-afba-437e-9a17-384a1dd8cd50,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:58:03 compute-0 keen_williamson[443751]:            "lv_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:58:03 compute-0 keen_williamson[443751]:            "name": "ceph_lv2",
Dec  3 18:58:03 compute-0 keen_williamson[443751]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:58:03 compute-0 keen_williamson[443751]:            "tags": {
Dec  3 18:58:03 compute-0 keen_williamson[443751]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:58:03 compute-0 keen_williamson[443751]:                "ceph.block_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:58:03 compute-0 keen_williamson[443751]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:58:03 compute-0 keen_williamson[443751]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:58:03 compute-0 keen_williamson[443751]:                "ceph.cluster_name": "ceph",
Dec  3 18:58:03 compute-0 keen_williamson[443751]:                "ceph.crush_device_class": "",
Dec  3 18:58:03 compute-0 keen_williamson[443751]:                "ceph.encrypted": "0",
Dec  3 18:58:03 compute-0 keen_williamson[443751]:                "ceph.osd_fsid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:58:03 compute-0 keen_williamson[443751]:                "ceph.osd_id": "2",
Dec  3 18:58:03 compute-0 keen_williamson[443751]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:58:03 compute-0 keen_williamson[443751]:                "ceph.type": "block",
Dec  3 18:58:03 compute-0 keen_williamson[443751]:                "ceph.vdo": "0"
Dec  3 18:58:03 compute-0 keen_williamson[443751]:            },
Dec  3 18:58:03 compute-0 keen_williamson[443751]:            "type": "block",
Dec  3 18:58:03 compute-0 keen_williamson[443751]:            "vg_name": "ceph_vg2"
Dec  3 18:58:03 compute-0 keen_williamson[443751]:        }
Dec  3 18:58:03 compute-0 keen_williamson[443751]:    ]
Dec  3 18:58:03 compute-0 keen_williamson[443751]: }
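The JSON block printed by the keen_williamson container has the shape of `ceph-volume lvm list --format json`: top-level keys are OSD ids, each mapping to a list of logical volumes whose `ceph.*` LV tags carry the OSD metadata. A small sketch, assuming only that structure, that reduces it to one row per OSD:

```python
# Sketch: flatten the ceph-volume JSON above into one summary row per OSD.
import json

def summarize_lvm_list(raw: str):
    data = json.loads(raw)
    rows = []
    for osd_id in sorted(data, key=int):
        for lv in data[osd_id]:
            tags = lv.get("tags", {})
            rows.append({
                "osd_id": osd_id,
                "lv_path": lv["lv_path"],                    # /dev/ceph_vg0/ceph_lv0
                "backing": ",".join(lv.get("devices", [])),  # /dev/loop3 in this lab
                "osd_fsid": tags.get("ceph.osd_fsid"),
                "encrypted": tags.get("ceph.encrypted") == "1",
                "size_bytes": int(lv["lv_size"]),            # 21470642176 ~ 20 GiB
            })
    return rows
```

For OSD 0 above this yields lv_path=/dev/ceph_vg0/ceph_lv0 backed by /dev/loop3 with osd_fsid 973fbbc8-5aff-4a53-bee8-42e5a6788dd6.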
Dec  3 18:58:03 compute-0 systemd[1]: libpod-ad65c6555dcc3c2a767486207def8819d11d9212bde860bf67832f306f00ca47.scope: Deactivated successfully.
Dec  3 18:58:03 compute-0 nova_compute[348325]: 2025-12-03 18:58:03.215 348329 DEBUG nova.objects.instance [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Lazy-loading 'migration_context' on Instance uuid c9937213-8842-4393-90b0-edb363037633 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 18:58:03 compute-0 nova_compute[348325]: 2025-12-03 18:58:03.235 348329 DEBUG nova.virt.libvirt.driver [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  3 18:58:03 compute-0 nova_compute[348325]: 2025-12-03 18:58:03.236 348329 DEBUG nova.virt.libvirt.driver [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] Ensure instance console log exists: /var/lib/nova/instances/c9937213-8842-4393-90b0-edb363037633/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  3 18:58:03 compute-0 nova_compute[348325]: 2025-12-03 18:58:03.237 348329 DEBUG oslo_concurrency.lockutils [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:58:03 compute-0 nova_compute[348325]: 2025-12-03 18:58:03.237 348329 DEBUG oslo_concurrency.lockutils [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:58:03 compute-0 nova_compute[348325]: 2025-12-03 18:58:03.238 348329 DEBUG oslo_concurrency.lockutils [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
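The three lockutils lines above are the stock oslo.concurrency pattern: _allocate_mdevs runs under a named in-process lock, and the library logs the acquire, the time waited, and the time held. A sketch of the same pattern, assuming oslo.concurrency is installed; the held time is ~0s here because this flavor requests no vGPUs.

```python
# Sketch of the pattern behind the "vgpu_resources" lock lines above.
from oslo_concurrency import lockutils

def allocate_mdevs(requested_mdevs):
    # Equivalent in effect to decorating with
    # @lockutils.synchronized("vgpu_resources"): one thread at a time.
    with lockutils.lock("vgpu_resources"):
        # Critical section: inspect free mediated devices and claim some.
        # Held 0.000s in the log because there is nothing to allocate.
        return []
```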
Dec  3 18:58:03 compute-0 podman[443871]: 2025-12-03 18:58:03.247829858 +0000 UTC m=+0.043073362 container died ad65c6555dcc3c2a767486207def8819d11d9212bde860bf67832f306f00ca47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec  3 18:58:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-9478b026a5d8a3e38f7f30b3b3f75f0debd3585ffdfcdaf91b3ed14bd1e672a5-merged.mount: Deactivated successfully.
Dec  3 18:58:03 compute-0 podman[443871]: 2025-12-03 18:58:03.321375514 +0000 UTC m=+0.116618968 container remove ad65c6555dcc3c2a767486207def8819d11d9212bde860bf67832f306f00ca47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_williamson, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec  3 18:58:03 compute-0 podman[443878]: 2025-12-03 18:58:03.328792225 +0000 UTC m=+0.105399474 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 18:58:03 compute-0 systemd[1]: libpod-conmon-ad65c6555dcc3c2a767486207def8819d11d9212bde860bf67832f306f00ca47.scope: Deactivated successfully.
Dec  3 18:58:03 compute-0 nova_compute[348325]: 2025-12-03 18:58:03.364 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:58:03 compute-0 podman[443872]: 2025-12-03 18:58:03.39413139 +0000 UTC m=+0.170339290 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.license=GPLv2)
Dec  3 18:58:03 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1771: 321 pgs: 321 active+clean; 120 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 453 KiB/s wr, 1 op/s
Dec  3 18:58:04 compute-0 podman[444066]: 2025-12-03 18:58:04.105257293 +0000 UTC m=+0.067219752 container create f409d05e39ca59b37812d8c411bad8c9e24523dd6b17591237b3b745d3952638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_lalande, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  3 18:58:04 compute-0 nova_compute[348325]: 2025-12-03 18:58:04.155 348329 DEBUG nova.network.neutron [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] Successfully created port: 2c007b4e-e674-4c1f-becb-67fc1b96681b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  3 18:58:04 compute-0 podman[444066]: 2025-12-03 18:58:04.078959721 +0000 UTC m=+0.040922200 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:58:04 compute-0 systemd[1]: Started libpod-conmon-f409d05e39ca59b37812d8c411bad8c9e24523dd6b17591237b3b745d3952638.scope.
Dec  3 18:58:04 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:58:04 compute-0 podman[444066]: 2025-12-03 18:58:04.252653671 +0000 UTC m=+0.214616170 container init f409d05e39ca59b37812d8c411bad8c9e24523dd6b17591237b3b745d3952638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_lalande, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  3 18:58:04 compute-0 podman[444066]: 2025-12-03 18:58:04.271161323 +0000 UTC m=+0.233123822 container start f409d05e39ca59b37812d8c411bad8c9e24523dd6b17591237b3b745d3952638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  3 18:58:04 compute-0 podman[444066]: 2025-12-03 18:58:04.277740424 +0000 UTC m=+0.239702903 container attach f409d05e39ca59b37812d8c411bad8c9e24523dd6b17591237b3b745d3952638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  3 18:58:04 compute-0 jovial_lalande[444081]: 167 167
Dec  3 18:58:04 compute-0 podman[444066]: 2025-12-03 18:58:04.289638474 +0000 UTC m=+0.251601043 container died f409d05e39ca59b37812d8c411bad8c9e24523dd6b17591237b3b745d3952638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_lalande, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  3 18:58:04 compute-0 systemd[1]: libpod-f409d05e39ca59b37812d8c411bad8c9e24523dd6b17591237b3b745d3952638.scope: Deactivated successfully.
Dec  3 18:58:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-b29654339f8fa5bec5c9ebfc20eb8b3ec3949dbe8daec23ccc43f0bb755c75b9-merged.mount: Deactivated successfully.
Dec  3 18:58:04 compute-0 podman[444066]: 2025-12-03 18:58:04.384324826 +0000 UTC m=+0.346287325 container remove f409d05e39ca59b37812d8c411bad8c9e24523dd6b17591237b3b745d3952638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 18:58:04 compute-0 systemd[1]: libpod-conmon-f409d05e39ca59b37812d8c411bad8c9e24523dd6b17591237b3b745d3952638.scope: Deactivated successfully.
Dec  3 18:58:04 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 18:58:04 compute-0 nova_compute[348325]: 2025-12-03 18:58:04.517 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:58:04 compute-0 podman[444104]: 2025-12-03 18:58:04.621873926 +0000 UTC m=+0.055915556 container create 74efa7a476d6540752d33b78d91a7bf11df948d1274ea24de84137d3b4b5474d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_elgamal, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:58:04 compute-0 systemd[1]: Started libpod-conmon-74efa7a476d6540752d33b78d91a7bf11df948d1274ea24de84137d3b4b5474d.scope.
Dec  3 18:58:04 compute-0 podman[444104]: 2025-12-03 18:58:04.598547416 +0000 UTC m=+0.032589076 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:58:04 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:58:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9035f546c0c985aec867cca0c5692fe5d6758f70af0862bcad31a0cd0cdbc2e1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:58:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9035f546c0c985aec867cca0c5692fe5d6758f70af0862bcad31a0cd0cdbc2e1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:58:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9035f546c0c985aec867cca0c5692fe5d6758f70af0862bcad31a0cd0cdbc2e1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:58:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9035f546c0c985aec867cca0c5692fe5d6758f70af0862bcad31a0cd0cdbc2e1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:58:04 compute-0 podman[444104]: 2025-12-03 18:58:04.738934623 +0000 UTC m=+0.172976263 container init 74efa7a476d6540752d33b78d91a7bf11df948d1274ea24de84137d3b4b5474d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec  3 18:58:04 compute-0 podman[444104]: 2025-12-03 18:58:04.758231055 +0000 UTC m=+0.192272685 container start 74efa7a476d6540752d33b78d91a7bf11df948d1274ea24de84137d3b4b5474d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_elgamal, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  3 18:58:04 compute-0 podman[444104]: 2025-12-03 18:58:04.762631172 +0000 UTC m=+0.196672802 container attach 74efa7a476d6540752d33b78d91a7bf11df948d1274ea24de84137d3b4b5474d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_elgamal, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:58:04 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Dec  3 18:58:04 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Dec  3 18:58:04 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Dec  3 18:58:05 compute-0 nova_compute[348325]: 2025-12-03 18:58:05.686 348329 DEBUG nova.network.neutron [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] Successfully updated port: 2c007b4e-e674-4c1f-becb-67fc1b96681b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  3 18:58:05 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1773: 321 pgs: 321 active+clean; 142 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 1.7 MiB/s wr, 18 op/s
Dec  3 18:58:05 compute-0 nova_compute[348325]: 2025-12-03 18:58:05.710 348329 DEBUG oslo_concurrency.lockutils [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Acquiring lock "refresh_cache-c9937213-8842-4393-90b0-edb363037633" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 18:58:05 compute-0 nova_compute[348325]: 2025-12-03 18:58:05.712 348329 DEBUG oslo_concurrency.lockutils [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Acquired lock "refresh_cache-c9937213-8842-4393-90b0-edb363037633" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 18:58:05 compute-0 nova_compute[348325]: 2025-12-03 18:58:05.712 348329 DEBUG nova.network.neutron [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  3 18:58:05 compute-0 elegant_elgamal[444119]: {
Dec  3 18:58:05 compute-0 elegant_elgamal[444119]:    "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
Dec  3 18:58:05 compute-0 elegant_elgamal[444119]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:58:05 compute-0 elegant_elgamal[444119]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 18:58:05 compute-0 elegant_elgamal[444119]:        "osd_id": 1,
Dec  3 18:58:05 compute-0 elegant_elgamal[444119]:        "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:58:05 compute-0 elegant_elgamal[444119]:        "type": "bluestore"
Dec  3 18:58:05 compute-0 elegant_elgamal[444119]:    },
Dec  3 18:58:05 compute-0 elegant_elgamal[444119]:    "2abec9de-afba-437e-9a17-384a1dd8cd50": {
Dec  3 18:58:05 compute-0 elegant_elgamal[444119]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:58:05 compute-0 elegant_elgamal[444119]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 18:58:05 compute-0 elegant_elgamal[444119]:        "osd_id": 2,
Dec  3 18:58:05 compute-0 elegant_elgamal[444119]:        "osd_uuid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:58:05 compute-0 elegant_elgamal[444119]:        "type": "bluestore"
Dec  3 18:58:05 compute-0 elegant_elgamal[444119]:    },
Dec  3 18:58:05 compute-0 elegant_elgamal[444119]:    "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": {
Dec  3 18:58:05 compute-0 elegant_elgamal[444119]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:58:05 compute-0 elegant_elgamal[444119]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 18:58:05 compute-0 elegant_elgamal[444119]:        "osd_id": 0,
Dec  3 18:58:05 compute-0 elegant_elgamal[444119]:        "osd_uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:58:05 compute-0 elegant_elgamal[444119]:        "type": "bluestore"
Dec  3 18:58:05 compute-0 elegant_elgamal[444119]:    }
Dec  3 18:58:05 compute-0 elegant_elgamal[444119]: }
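The elegant_elgamal output has the shape of `ceph-volume raw list --format json`: keys are OSD fsids, values name the dm device, osd_id, and store type. Joining it with the earlier LVM listing on the OSD fsid gives a complete per-OSD picture; a sketch assuming only those two shapes:

```python
# Sketch: join the two ceph-volume outputs above on the OSD fsid.
import json

def correlate(lvm_list_raw: str, raw_list_raw: str):
    lvm = json.loads(lvm_list_raw)     # {osd_id: [lv, ...]} from keen_williamson
    raw = json.loads(raw_list_raw)     # {osd_fsid: {...}} from elegant_elgamal
    lv_by_fsid = {lv["tags"]["ceph.osd_fsid"]: lv
                  for lvs in lvm.values() for lv in lvs}
    merged = {}
    for fsid, entry in raw.items():
        lv = lv_by_fsid.get(fsid, {})
        merged[entry["osd_id"]] = {
            "store": entry["type"],        # "bluestore" for all three OSDs
            "dm_device": entry["device"],  # /dev/mapper/ceph_vgN-ceph_lvN
            "lv_path": lv.get("lv_path"),  # /dev/ceph_vgN/ceph_lvN
            "backing": lv.get("devices"),  # loop devices in this deployment
        }
    return merged                          # {0: {...}, 1: {...}, 2: {...}}
```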
Dec  3 18:58:05 compute-0 systemd[1]: libpod-74efa7a476d6540752d33b78d91a7bf11df948d1274ea24de84137d3b4b5474d.scope: Deactivated successfully.
Dec  3 18:58:05 compute-0 podman[444104]: 2025-12-03 18:58:05.870971303 +0000 UTC m=+1.305012953 container died 74efa7a476d6540752d33b78d91a7bf11df948d1274ea24de84137d3b4b5474d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_elgamal, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec  3 18:58:05 compute-0 systemd[1]: libpod-74efa7a476d6540752d33b78d91a7bf11df948d1274ea24de84137d3b4b5474d.scope: Consumed 1.089s CPU time.
Dec  3 18:58:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-9035f546c0c985aec867cca0c5692fe5d6758f70af0862bcad31a0cd0cdbc2e1-merged.mount: Deactivated successfully.
Dec  3 18:58:05 compute-0 podman[444104]: 2025-12-03 18:58:05.942907439 +0000 UTC m=+1.376949069 container remove 74efa7a476d6540752d33b78d91a7bf11df948d1274ea24de84137d3b4b5474d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_elgamal, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec  3 18:58:05 compute-0 systemd[1]: libpod-conmon-74efa7a476d6540752d33b78d91a7bf11df948d1274ea24de84137d3b4b5474d.scope: Deactivated successfully.
Dec  3 18:58:06 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:58:06 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:58:06 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:58:06 compute-0 nova_compute[348325]: 2025-12-03 18:58:06.024 348329 DEBUG nova.compute.manager [req-64ceb645-3e23-4703-96da-342a5496d156 req-6ce1838e-9501-4827-a13e-734da6f946ce 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] Received event network-changed-2c007b4e-e674-4c1f-becb-67fc1b96681b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 18:58:06 compute-0 nova_compute[348325]: 2025-12-03 18:58:06.025 348329 DEBUG nova.compute.manager [req-64ceb645-3e23-4703-96da-342a5496d156 req-6ce1838e-9501-4827-a13e-734da6f946ce 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] Refreshing instance network info cache due to event network-changed-2c007b4e-e674-4c1f-becb-67fc1b96681b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  3 18:58:06 compute-0 nova_compute[348325]: 2025-12-03 18:58:06.025 348329 DEBUG oslo_concurrency.lockutils [req-64ceb645-3e23-4703-96da-342a5496d156 req-6ce1838e-9501-4827-a13e-734da6f946ce 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "refresh_cache-c9937213-8842-4393-90b0-edb363037633" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 18:58:06 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:58:06 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev e52bd6ee-1048-4269-ade1-842169734257 does not exist
Dec  3 18:58:06 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 06ccf771-bdd7-43b0-bd3b-453c8c401d17 does not exist
Dec  3 18:58:06 compute-0 nova_compute[348325]: 2025-12-03 18:58:06.378 348329 DEBUG nova.network.neutron [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  3 18:58:06 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:58:06 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:58:07 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1774: 321 pgs: 321 active+clean; 150 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 2.1 MiB/s wr, 50 op/s
Dec  3 18:58:07 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Dec  3 18:58:07 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Dec  3 18:58:07 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Dec  3 18:58:08 compute-0 nova_compute[348325]: 2025-12-03 18:58:08.339 348329 DEBUG nova.network.neutron [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] Updating instance_info_cache with network_info: [{"id": "2c007b4e-e674-4c1f-becb-67fc1b96681b", "address": "fa:16:3e:7c:33:2c", "network": {"id": "d518f3f9-88f0-4dc2-8769-17ebdac41174", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-282035089-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "82b2746c38174502bdcb70a8ab378edf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c007b4e-e6", "ovs_interfaceid": "2c007b4e-e674-4c1f-becb-67fc1b96681b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
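The network_info payload embedded in the line above is plain JSON, so extracting what the next steps need (MAC, fixed IP, MTU, tap device, bound driver) is a dict walk. A sketch, assuming the structure exactly as logged:

```python
# Sketch: pull the wiring facts out of the network_info JSON logged above.
import json

def vif_summary(network_info_json: str):
    out = []
    for vif in json.loads(network_info_json):
        net = vif["network"]
        fixed = [ip["address"]
                 for subnet in net["subnets"] for ip in subnet["ips"]]
        out.append({
            "port_id": vif["id"],            # 2c007b4e-e674-4c1f-becb-67fc1b96681b
            "mac": vif["address"],           # fa:16:3e:7c:33:2c
            "fixed_ips": fixed,              # ["10.100.0.11"]
            "mtu": net["meta"]["mtu"],       # 1442 = 1500 minus tunnel overhead
            "devname": vif["devname"],       # tap2c007b4e-e6
            "driver": vif["details"]["bound_drivers"]["0"],  # "ovn"
        })
    return out
```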
Dec  3 18:58:08 compute-0 nova_compute[348325]: 2025-12-03 18:58:08.368 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:58:08 compute-0 nova_compute[348325]: 2025-12-03 18:58:08.373 348329 DEBUG oslo_concurrency.lockutils [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Releasing lock "refresh_cache-c9937213-8842-4393-90b0-edb363037633" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 18:58:08 compute-0 nova_compute[348325]: 2025-12-03 18:58:08.373 348329 DEBUG nova.compute.manager [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] Instance network_info: |[{"id": "2c007b4e-e674-4c1f-becb-67fc1b96681b", "address": "fa:16:3e:7c:33:2c", "network": {"id": "d518f3f9-88f0-4dc2-8769-17ebdac41174", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-282035089-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "82b2746c38174502bdcb70a8ab378edf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c007b4e-e6", "ovs_interfaceid": "2c007b4e-e674-4c1f-becb-67fc1b96681b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  3 18:58:08 compute-0 nova_compute[348325]: 2025-12-03 18:58:08.374 348329 DEBUG oslo_concurrency.lockutils [req-64ceb645-3e23-4703-96da-342a5496d156 req-6ce1838e-9501-4827-a13e-734da6f946ce 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquired lock "refresh_cache-c9937213-8842-4393-90b0-edb363037633" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 18:58:08 compute-0 nova_compute[348325]: 2025-12-03 18:58:08.374 348329 DEBUG nova.network.neutron [req-64ceb645-3e23-4703-96da-342a5496d156 req-6ce1838e-9501-4827-a13e-734da6f946ce 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] Refreshing network info cache for port 2c007b4e-e674-4c1f-becb-67fc1b96681b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  3 18:58:08 compute-0 nova_compute[348325]: 2025-12-03 18:58:08.377 348329 DEBUG nova.virt.libvirt.driver [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] Start _get_guest_xml network_info=[{"id": "2c007b4e-e674-4c1f-becb-67fc1b96681b", "address": "fa:16:3e:7c:33:2c", "network": {"id": "d518f3f9-88f0-4dc2-8769-17ebdac41174", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-282035089-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "82b2746c38174502bdcb70a8ab378edf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c007b4e-e6", "ovs_interfaceid": "2c007b4e-e674-4c1f-becb-67fc1b96681b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-03T18:56:32Z,direct_url=<?>,disk_format='qcow2',id=55982930-937b-484e-96ee-69e406a48023,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d2770200bdb2436c90142fa2e5ddcd47',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-03T18:56:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'encryption_format': None, 'guest_format': None, 'disk_bus': 'virtio', 'size': 0, 'boot_index': 0, 'encryption_options': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'image_id': '55982930-937b-484e-96ee-69e406a48023'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  3 18:58:08 compute-0 nova_compute[348325]: 2025-12-03 18:58:08.394 348329 WARNING nova.virt.libvirt.driver [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 18:58:08 compute-0 nova_compute[348325]: 2025-12-03 18:58:08.402 348329 DEBUG nova.virt.libvirt.host [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  3 18:58:08 compute-0 nova_compute[348325]: 2025-12-03 18:58:08.402 348329 DEBUG nova.virt.libvirt.host [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  3 18:58:08 compute-0 nova_compute[348325]: 2025-12-03 18:58:08.406 348329 DEBUG nova.virt.libvirt.host [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  3 18:58:08 compute-0 nova_compute[348325]: 2025-12-03 18:58:08.407 348329 DEBUG nova.virt.libvirt.host [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
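The two probes above decide how Nova will apply CPU limits: the v1 check fails (no per-controller cgroup mounts on this host) and the v2 check succeeds. On a unified-hierarchy host the v2 check amounts to reading one file; a sketch of both probes (not Nova's exact code):

```python
# Sketch of the cgroup CPU-controller probes logged above.
from pathlib import Path

def has_cgroupsv2_cpu_controller() -> bool:
    # cgroup v2 advertises available controllers in one file at the root.
    ctrl = Path("/sys/fs/cgroup/cgroup.controllers")
    return ctrl.exists() and "cpu" in ctrl.read_text().split()

def has_cgroupsv1_cpu_controller() -> bool:
    # cgroup v1 mounts each controller separately; look for a cgroup mount
    # whose options include the standalone "cpu" controller.
    with open("/proc/mounts") as mounts:
        for line in mounts:
            fields = line.split()
            if fields[2] == "cgroup" and "cpu" in fields[3].split(","):
                return True
    return False
```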
Dec  3 18:58:08 compute-0 nova_compute[348325]: 2025-12-03 18:58:08.408 348329 DEBUG nova.virt.libvirt.driver [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  3 18:58:08 compute-0 nova_compute[348325]: 2025-12-03 18:58:08.409 348329 DEBUG nova.virt.hardware [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-03T18:56:30Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a94cfbfb-a20a-4689-ac91-e7436db75880',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-03T18:56:32Z,direct_url=<?>,disk_format='qcow2',id=55982930-937b-484e-96ee-69e406a48023,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d2770200bdb2436c90142fa2e5ddcd47',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-03T18:56:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  3 18:58:08 compute-0 nova_compute[348325]: 2025-12-03 18:58:08.410 348329 DEBUG nova.virt.hardware [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  3 18:58:08 compute-0 nova_compute[348325]: 2025-12-03 18:58:08.410 348329 DEBUG nova.virt.hardware [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  3 18:58:08 compute-0 nova_compute[348325]: 2025-12-03 18:58:08.410 348329 DEBUG nova.virt.hardware [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  3 18:58:08 compute-0 nova_compute[348325]: 2025-12-03 18:58:08.410 348329 DEBUG nova.virt.hardware [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  3 18:58:08 compute-0 nova_compute[348325]: 2025-12-03 18:58:08.410 348329 DEBUG nova.virt.hardware [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  3 18:58:08 compute-0 nova_compute[348325]: 2025-12-03 18:58:08.411 348329 DEBUG nova.virt.hardware [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  3 18:58:08 compute-0 nova_compute[348325]: 2025-12-03 18:58:08.411 348329 DEBUG nova.virt.hardware [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  3 18:58:08 compute-0 nova_compute[348325]: 2025-12-03 18:58:08.411 348329 DEBUG nova.virt.hardware [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  3 18:58:08 compute-0 nova_compute[348325]: 2025-12-03 18:58:08.411 348329 DEBUG nova.virt.hardware [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  3 18:58:08 compute-0 nova_compute[348325]: 2025-12-03 18:58:08.412 348329 DEBUG nova.virt.hardware [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
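The topology walk above shows the unconstrained case: flavor and image supply no limits (0 means "no preference", capped at 65536), and a single vCPU admits exactly one factorization, sockets=1, cores=1, threads=1. A worked sketch of the enumeration step; Nova's real selector also orders candidates by preference, which is omitted here.

```python
# Worked sketch: enumerate sockets*cores*threads factorizations of the
# vCPU count under the caps, as in the "Build topologies" lines above.
def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
    topologies = []
    for sockets in range(1, min(vcpus, max_sockets) + 1):
        if vcpus % sockets:
            continue
        per_socket = vcpus // sockets
        for cores in range(1, min(per_socket, max_cores) + 1):
            if per_socket % cores:
                continue
            threads = per_socket // cores
            if threads <= max_threads:
                topologies.append((sockets, cores, threads))
    return topologies

print(possible_topologies(1))   # [(1, 1, 1)] -- the single candidate chosen above
print(possible_topologies(4))   # six candidates, e.g. (1, 4, 1), (2, 2, 1), ...
```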
Dec  3 18:58:08 compute-0 nova_compute[348325]: 2025-12-03 18:58:08.420 348329 DEBUG oslo_concurrency.processutils [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:58:08 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 18:58:08 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1090950202' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 18:58:08 compute-0 nova_compute[348325]: 2025-12-03 18:58:08.852 348329 DEBUG oslo_concurrency.processutils [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:58:08 compute-0 nova_compute[348325]: 2025-12-03 18:58:08.911 348329 DEBUG nova.storage.rbd_utils [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] rbd image c9937213-8842-4393-90b0-edb363037633_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 18:58:08 compute-0 nova_compute[348325]: 2025-12-03 18:58:08.922 348329 DEBUG oslo_concurrency.processutils [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:58:09 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 18:58:09 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/350161691' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 18:58:09 compute-0 nova_compute[348325]: 2025-12-03 18:58:09.408 348329 DEBUG oslo_concurrency.processutils [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
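The mon dump round-trips above (dispatched and audited by ceph-mon) are how Nova learns the monitor endpoints it must embed in the libvirt RBD disk definition. A sketch of the extraction, assuming the `mons[].public_addrs.addrvec` layout that recent Ceph releases emit:

```python
# Sketch: recover monitor endpoints from `ceph mon dump --format=json`,
# the command logged above, before building the RBD <source> for libvirt.
import json
import subprocess

def monitor_endpoints():
    out = subprocess.run(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True).stdout
    endpoints = []
    for mon in json.loads(out)["mons"]:
        for addr in mon["public_addrs"]["addrvec"]:
            if addr.get("type") == "v1":        # keep the legacy msgr v1 port
                endpoints.append(addr["addr"])  # e.g. "192.168.122.100:6789"
    return endpoints
```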
Dec  3 18:58:09 compute-0 nova_compute[348325]: 2025-12-03 18:58:09.410 348329 DEBUG nova.virt.libvirt.vif [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T18:57:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1449486284',display_name='tempest-AttachInterfacesUnderV243Test-server-1449486284',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1449486284',id=10,image_ref='55982930-937b-484e-96ee-69e406a48023',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDguLuuxXUluMpTAPvWre/y/zCYbb1KHFibFt+PZdnBzNC2rwnEZ8uO6YAoyvDMtumWTT1JVJ8FZld71I9MbTqHtLcUWMLdncY7IzScsLtvRuzNIOeN8N3ta9kELYuUrYw==',key_name='tempest-keypair-1738864924',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='82b2746c38174502bdcb70a8ab378edf',ramdisk_id='',reservation_id='r-ngyn2v0h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='55982930-937b-484e-96ee-69e406a48023',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-699689894',owner_user_name='tempest-AttachInterfacesUnderV243Test-699689894-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T18:58:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='78734fd37e3f4665b1cb2cbcba2e9f65',uuid=c9937213-8842-4393-90b0-edb363037633,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2c007b4e-e674-4c1f-becb-67fc1b96681b", "address": "fa:16:3e:7c:33:2c", "network": {"id": "d518f3f9-88f0-4dc2-8769-17ebdac41174", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-282035089-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "82b2746c38174502bdcb70a8ab378edf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c007b4e-e6", "ovs_interfaceid": "2c007b4e-e674-4c1f-becb-67fc1b96681b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  3 18:58:09 compute-0 nova_compute[348325]: 2025-12-03 18:58:09.411 348329 DEBUG nova.network.os_vif_util [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Converting VIF {"id": "2c007b4e-e674-4c1f-becb-67fc1b96681b", "address": "fa:16:3e:7c:33:2c", "network": {"id": "d518f3f9-88f0-4dc2-8769-17ebdac41174", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-282035089-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "82b2746c38174502bdcb70a8ab378edf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c007b4e-e6", "ovs_interfaceid": "2c007b4e-e674-4c1f-becb-67fc1b96681b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 18:58:09 compute-0 nova_compute[348325]: 2025-12-03 18:58:09.412 348329 DEBUG nova.network.os_vif_util [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7c:33:2c,bridge_name='br-int',has_traffic_filtering=True,id=2c007b4e-e674-4c1f-becb-67fc1b96681b,network=Network(d518f3f9-88f0-4dc2-8769-17ebdac41174),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c007b4e-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 18:58:09 compute-0 nova_compute[348325]: 2025-12-03 18:58:09.413 348329 DEBUG nova.objects.instance [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Lazy-loading 'pci_devices' on Instance uuid c9937213-8842-4393-90b0-edb363037633 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
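
The user_data field in the instance dump above is base64-encoded; decoding it (pure standard library) recovers the tempest boot script:

    import base64

    # Value copied from the user_data= field logged above.
    user_data = ("IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6"
                 "ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwg"
                 "dHJ1ZQo=")
    print(base64.b64decode(user_data).decode())
    # #!/bin/sh
    # echo "Printing cirros user authorized keys"
    # cat ~cirros/.ssh/authorized_keys || true
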
Dec  3 18:58:09 compute-0 nova_compute[348325]: 2025-12-03 18:58:09.436 348329 DEBUG nova.virt.libvirt.driver [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] End _get_guest_xml xml=<domain type="kvm">
Dec  3 18:58:09 compute-0 nova_compute[348325]:  <uuid>c9937213-8842-4393-90b0-edb363037633</uuid>
Dec  3 18:58:09 compute-0 nova_compute[348325]:  <name>instance-0000000a</name>
Dec  3 18:58:09 compute-0 nova_compute[348325]:  <memory>131072</memory>
Dec  3 18:58:09 compute-0 nova_compute[348325]:  <vcpu>1</vcpu>
Dec  3 18:58:09 compute-0 nova_compute[348325]:  <metadata>
Dec  3 18:58:09 compute-0 nova_compute[348325]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  3 18:58:09 compute-0 nova_compute[348325]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  3 18:58:09 compute-0 nova_compute[348325]:      <nova:name>tempest-AttachInterfacesUnderV243Test-server-1449486284</nova:name>
Dec  3 18:58:09 compute-0 nova_compute[348325]:      <nova:creationTime>2025-12-03 18:58:08</nova:creationTime>
Dec  3 18:58:09 compute-0 nova_compute[348325]:      <nova:flavor name="m1.nano">
Dec  3 18:58:09 compute-0 nova_compute[348325]:        <nova:memory>128</nova:memory>
Dec  3 18:58:09 compute-0 nova_compute[348325]:        <nova:disk>1</nova:disk>
Dec  3 18:58:09 compute-0 nova_compute[348325]:        <nova:swap>0</nova:swap>
Dec  3 18:58:09 compute-0 nova_compute[348325]:        <nova:ephemeral>0</nova:ephemeral>
Dec  3 18:58:09 compute-0 nova_compute[348325]:        <nova:vcpus>1</nova:vcpus>
Dec  3 18:58:09 compute-0 nova_compute[348325]:      </nova:flavor>
Dec  3 18:58:09 compute-0 nova_compute[348325]:      <nova:owner>
Dec  3 18:58:09 compute-0 nova_compute[348325]:        <nova:user uuid="78734fd37e3f4665b1cb2cbcba2e9f65">tempest-AttachInterfacesUnderV243Test-699689894-project-member</nova:user>
Dec  3 18:58:09 compute-0 nova_compute[348325]:        <nova:project uuid="82b2746c38174502bdcb70a8ab378edf">tempest-AttachInterfacesUnderV243Test-699689894</nova:project>
Dec  3 18:58:09 compute-0 nova_compute[348325]:      </nova:owner>
Dec  3 18:58:09 compute-0 nova_compute[348325]:      <nova:root type="image" uuid="55982930-937b-484e-96ee-69e406a48023"/>
Dec  3 18:58:09 compute-0 nova_compute[348325]:      <nova:ports>
Dec  3 18:58:09 compute-0 nova_compute[348325]:        <nova:port uuid="2c007b4e-e674-4c1f-becb-67fc1b96681b">
Dec  3 18:58:09 compute-0 nova_compute[348325]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Dec  3 18:58:09 compute-0 nova_compute[348325]:        </nova:port>
Dec  3 18:58:09 compute-0 nova_compute[348325]:      </nova:ports>
Dec  3 18:58:09 compute-0 nova_compute[348325]:    </nova:instance>
Dec  3 18:58:09 compute-0 nova_compute[348325]:  </metadata>
Dec  3 18:58:09 compute-0 nova_compute[348325]:  <sysinfo type="smbios">
Dec  3 18:58:09 compute-0 nova_compute[348325]:    <system>
Dec  3 18:58:09 compute-0 nova_compute[348325]:      <entry name="manufacturer">RDO</entry>
Dec  3 18:58:09 compute-0 nova_compute[348325]:      <entry name="product">OpenStack Compute</entry>
Dec  3 18:58:09 compute-0 nova_compute[348325]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  3 18:58:09 compute-0 nova_compute[348325]:      <entry name="serial">c9937213-8842-4393-90b0-edb363037633</entry>
Dec  3 18:58:09 compute-0 nova_compute[348325]:      <entry name="uuid">c9937213-8842-4393-90b0-edb363037633</entry>
Dec  3 18:58:09 compute-0 nova_compute[348325]:      <entry name="family">Virtual Machine</entry>
Dec  3 18:58:09 compute-0 nova_compute[348325]:    </system>
Dec  3 18:58:09 compute-0 nova_compute[348325]:  </sysinfo>
Dec  3 18:58:09 compute-0 nova_compute[348325]:  <os>
Dec  3 18:58:09 compute-0 nova_compute[348325]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  3 18:58:09 compute-0 nova_compute[348325]:    <boot dev="hd"/>
Dec  3 18:58:09 compute-0 nova_compute[348325]:    <smbios mode="sysinfo"/>
Dec  3 18:58:09 compute-0 nova_compute[348325]:  </os>
Dec  3 18:58:09 compute-0 nova_compute[348325]:  <features>
Dec  3 18:58:09 compute-0 nova_compute[348325]:    <acpi/>
Dec  3 18:58:09 compute-0 nova_compute[348325]:    <apic/>
Dec  3 18:58:09 compute-0 nova_compute[348325]:    <vmcoreinfo/>
Dec  3 18:58:09 compute-0 nova_compute[348325]:  </features>
Dec  3 18:58:09 compute-0 nova_compute[348325]:  <clock offset="utc">
Dec  3 18:58:09 compute-0 nova_compute[348325]:    <timer name="pit" tickpolicy="delay"/>
Dec  3 18:58:09 compute-0 nova_compute[348325]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  3 18:58:09 compute-0 nova_compute[348325]:    <timer name="hpet" present="no"/>
Dec  3 18:58:09 compute-0 nova_compute[348325]:  </clock>
Dec  3 18:58:09 compute-0 nova_compute[348325]:  <cpu mode="host-model" match="exact">
Dec  3 18:58:09 compute-0 nova_compute[348325]:    <topology sockets="1" cores="1" threads="1"/>
Dec  3 18:58:09 compute-0 nova_compute[348325]:  </cpu>
Dec  3 18:58:09 compute-0 nova_compute[348325]:  <devices>
Dec  3 18:58:09 compute-0 nova_compute[348325]:    <disk type="network" device="disk">
Dec  3 18:58:09 compute-0 nova_compute[348325]:      <driver type="raw" cache="none"/>
Dec  3 18:58:09 compute-0 nova_compute[348325]:      <source protocol="rbd" name="vms/c9937213-8842-4393-90b0-edb363037633_disk">
Dec  3 18:58:09 compute-0 nova_compute[348325]:        <host name="192.168.122.100" port="6789"/>
Dec  3 18:58:09 compute-0 nova_compute[348325]:      </source>
Dec  3 18:58:09 compute-0 nova_compute[348325]:      <auth username="openstack">
Dec  3 18:58:09 compute-0 nova_compute[348325]:        <secret type="ceph" uuid="c1caf3ba-b2a5-5005-a11e-e955c344dccc"/>
Dec  3 18:58:09 compute-0 nova_compute[348325]:      </auth>
Dec  3 18:58:09 compute-0 nova_compute[348325]:      <target dev="vda" bus="virtio"/>
Dec  3 18:58:09 compute-0 nova_compute[348325]:    </disk>
Dec  3 18:58:09 compute-0 nova_compute[348325]:    <disk type="network" device="cdrom">
Dec  3 18:58:09 compute-0 nova_compute[348325]:      <driver type="raw" cache="none"/>
Dec  3 18:58:09 compute-0 nova_compute[348325]:      <source protocol="rbd" name="vms/c9937213-8842-4393-90b0-edb363037633_disk.config">
Dec  3 18:58:09 compute-0 nova_compute[348325]:        <host name="192.168.122.100" port="6789"/>
Dec  3 18:58:09 compute-0 nova_compute[348325]:      </source>
Dec  3 18:58:09 compute-0 nova_compute[348325]:      <auth username="openstack">
Dec  3 18:58:09 compute-0 nova_compute[348325]:        <secret type="ceph" uuid="c1caf3ba-b2a5-5005-a11e-e955c344dccc"/>
Dec  3 18:58:09 compute-0 nova_compute[348325]:      </auth>
Dec  3 18:58:09 compute-0 nova_compute[348325]:      <target dev="sda" bus="sata"/>
Dec  3 18:58:09 compute-0 nova_compute[348325]:    </disk>
Dec  3 18:58:09 compute-0 nova_compute[348325]:    <interface type="ethernet">
Dec  3 18:58:09 compute-0 nova_compute[348325]:      <mac address="fa:16:3e:7c:33:2c"/>
Dec  3 18:58:09 compute-0 nova_compute[348325]:      <model type="virtio"/>
Dec  3 18:58:09 compute-0 nova_compute[348325]:      <driver name="vhost" rx_queue_size="512"/>
Dec  3 18:58:09 compute-0 nova_compute[348325]:      <mtu size="1442"/>
Dec  3 18:58:09 compute-0 nova_compute[348325]:      <target dev="tap2c007b4e-e6"/>
Dec  3 18:58:09 compute-0 nova_compute[348325]:    </interface>
Dec  3 18:58:09 compute-0 nova_compute[348325]:    <serial type="pty">
Dec  3 18:58:09 compute-0 nova_compute[348325]:      <log file="/var/lib/nova/instances/c9937213-8842-4393-90b0-edb363037633/console.log" append="off"/>
Dec  3 18:58:09 compute-0 nova_compute[348325]:    </serial>
Dec  3 18:58:09 compute-0 nova_compute[348325]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  3 18:58:09 compute-0 nova_compute[348325]:    <video>
Dec  3 18:58:09 compute-0 nova_compute[348325]:      <model type="virtio"/>
Dec  3 18:58:09 compute-0 nova_compute[348325]:    </video>
Dec  3 18:58:09 compute-0 nova_compute[348325]:    <input type="tablet" bus="usb"/>
Dec  3 18:58:09 compute-0 nova_compute[348325]:    <rng model="virtio">
Dec  3 18:58:09 compute-0 nova_compute[348325]:      <backend model="random">/dev/urandom</backend>
Dec  3 18:58:09 compute-0 nova_compute[348325]:    </rng>
Dec  3 18:58:09 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root"/>
Dec  3 18:58:09 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:58:09 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:58:09 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:58:09 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:58:09 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:58:09 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:58:09 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:58:09 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:58:09 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:58:09 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:58:09 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:58:09 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:58:09 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:58:09 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:58:09 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:58:09 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:58:09 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:58:09 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:58:09 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:58:09 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:58:09 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:58:09 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:58:09 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:58:09 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:58:09 compute-0 nova_compute[348325]:    <controller type="usb" index="0"/>
Dec  3 18:58:09 compute-0 nova_compute[348325]:    <memballoon model="virtio">
Dec  3 18:58:09 compute-0 nova_compute[348325]:      <stats period="10"/>
Dec  3 18:58:09 compute-0 nova_compute[348325]:    </memballoon>
Dec  3 18:58:09 compute-0 nova_compute[348325]:  </devices>
Dec  3 18:58:09 compute-0 nova_compute[348325]: </domain>
Dec  3 18:58:09 compute-0 nova_compute[348325]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  3 18:58:09 compute-0 nova_compute[348325]: 2025-12-03 18:58:09.437 348329 DEBUG nova.compute.manager [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] Preparing to wait for external event network-vif-plugged-2c007b4e-e674-4c1f-becb-67fc1b96681b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  3 18:58:09 compute-0 nova_compute[348325]: 2025-12-03 18:58:09.437 348329 DEBUG oslo_concurrency.lockutils [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Acquiring lock "c9937213-8842-4393-90b0-edb363037633-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:58:09 compute-0 nova_compute[348325]: 2025-12-03 18:58:09.437 348329 DEBUG oslo_concurrency.lockutils [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Lock "c9937213-8842-4393-90b0-edb363037633-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:58:09 compute-0 nova_compute[348325]: 2025-12-03 18:58:09.437 348329 DEBUG oslo_concurrency.lockutils [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Lock "c9937213-8842-4393-90b0-edb363037633-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
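
The acquire/release pair above is oslo.concurrency's decorator-based locking around event registration. A minimal sketch of the same pattern (the lock name matches the log; the function body is a placeholder, as nova records the event in a per-instance dict):

    from oslo_concurrency import lockutils

    # Serialize event bookkeeping per instance, keyed "<uuid>-events".
    @lockutils.synchronized("c9937213-8842-4393-90b0-edb363037633-events")
    def _create_or_get_event():
        pass  # placeholder; the real code creates/returns an event object

    _create_or_get_event()
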
Dec  3 18:58:09 compute-0 nova_compute[348325]: 2025-12-03 18:58:09.438 348329 DEBUG nova.virt.libvirt.vif [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T18:57:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1449486284',display_name='tempest-AttachInterfacesUnderV243Test-server-1449486284',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1449486284',id=10,image_ref='55982930-937b-484e-96ee-69e406a48023',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDguLuuxXUluMpTAPvWre/y/zCYbb1KHFibFt+PZdnBzNC2rwnEZ8uO6YAoyvDMtumWTT1JVJ8FZld71I9MbTqHtLcUWMLdncY7IzScsLtvRuzNIOeN8N3ta9kELYuUrYw==',key_name='tempest-keypair-1738864924',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='82b2746c38174502bdcb70a8ab378edf',ramdisk_id='',reservation_id='r-ngyn2v0h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='55982930-937b-484e-96ee-69e406a48023',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-699689894',owner_user_name='tempest-AttachInterfacesUnderV243Test-699689894-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T18:58:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='78734fd37e3f4665b1cb2cbcba2e9f65',uuid=c9937213-8842-4393-90b0-edb363037633,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2c007b4e-e674-4c1f-becb-67fc1b96681b", "address": "fa:16:3e:7c:33:2c", "network": {"id": "d518f3f9-88f0-4dc2-8769-17ebdac41174", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-282035089-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "82b2746c38174502bdcb70a8ab378edf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c007b4e-e6", "ovs_interfaceid": "2c007b4e-e674-4c1f-becb-67fc1b96681b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  3 18:58:09 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 18:58:09 compute-0 nova_compute[348325]: 2025-12-03 18:58:09.438 348329 DEBUG nova.network.os_vif_util [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Converting VIF {"id": "2c007b4e-e674-4c1f-becb-67fc1b96681b", "address": "fa:16:3e:7c:33:2c", "network": {"id": "d518f3f9-88f0-4dc2-8769-17ebdac41174", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-282035089-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "82b2746c38174502bdcb70a8ab378edf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c007b4e-e6", "ovs_interfaceid": "2c007b4e-e674-4c1f-becb-67fc1b96681b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 18:58:09 compute-0 nova_compute[348325]: 2025-12-03 18:58:09.440 348329 DEBUG nova.network.os_vif_util [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7c:33:2c,bridge_name='br-int',has_traffic_filtering=True,id=2c007b4e-e674-4c1f-becb-67fc1b96681b,network=Network(d518f3f9-88f0-4dc2-8769-17ebdac41174),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c007b4e-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 18:58:09 compute-0 nova_compute[348325]: 2025-12-03 18:58:09.440 348329 DEBUG os_vif [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:33:2c,bridge_name='br-int',has_traffic_filtering=True,id=2c007b4e-e674-4c1f-becb-67fc1b96681b,network=Network(d518f3f9-88f0-4dc2-8769-17ebdac41174),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c007b4e-e6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  3 18:58:09 compute-0 nova_compute[348325]: 2025-12-03 18:58:09.441 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:58:09 compute-0 nova_compute[348325]: 2025-12-03 18:58:09.441 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:58:09 compute-0 nova_compute[348325]: 2025-12-03 18:58:09.441 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 18:58:09 compute-0 nova_compute[348325]: 2025-12-03 18:58:09.444 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:58:09 compute-0 nova_compute[348325]: 2025-12-03 18:58:09.445 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2c007b4e-e6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:58:09 compute-0 nova_compute[348325]: 2025-12-03 18:58:09.445 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2c007b4e-e6, col_values=(('external_ids', {'iface-id': '2c007b4e-e674-4c1f-becb-67fc1b96681b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7c:33:2c', 'vm-uuid': 'c9937213-8842-4393-90b0-edb363037633'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:58:09 compute-0 nova_compute[348325]: 2025-12-03 18:58:09.448 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:58:09 compute-0 nova_compute[348325]: 2025-12-03 18:58:09.449 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  3 18:58:09 compute-0 NetworkManager[49087]: <info>  [1764788289.4496] manager: (tap2c007b4e-e6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/54)
Dec  3 18:58:09 compute-0 nova_compute[348325]: 2025-12-03 18:58:09.462 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:58:09 compute-0 nova_compute[348325]: 2025-12-03 18:58:09.463 348329 INFO os_vif [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:33:2c,bridge_name='br-int',has_traffic_filtering=True,id=2c007b4e-e674-4c1f-becb-67fc1b96681b,network=Network(d518f3f9-88f0-4dc2-8769-17ebdac41174),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c007b4e-e6')#033[00m
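
The three ovsdbapp transactions above (AddBridgeCommand, AddPortCommand, DbSetCommand) correspond to well-known ovs-vsctl operations. An illustrative equivalent via subprocess (not the path os-vif takes, which speaks OVSDB directly over the IDL):

    import subprocess

    # Idempotent bridge/port setup mirroring the logged transactions;
    # --may-exist makes both commands no-ops when the row already exists,
    # which is why the first transaction "caused no change" above.
    subprocess.run(
        ["ovs-vsctl", "--may-exist", "add-br", "br-int",
         "--", "set", "Bridge", "br-int", "datapath_type=system"],
        check=True)
    subprocess.run(
        ["ovs-vsctl", "--may-exist", "add-port", "br-int", "tap2c007b4e-e6",
         "--", "set", "Interface", "tap2c007b4e-e6",
         "external_ids:iface-id=2c007b4e-e674-4c1f-becb-67fc1b96681b",
         "external_ids:iface-status=active",
         "external_ids:attached-mac=fa:16:3e:7c:33:2c",
         "external_ids:vm-uuid=c9937213-8842-4393-90b0-edb363037633"],
        check=True)

The iface-id external_id is what lets ovn-controller match the OVS interface to the Neutron port, which is exactly the claim logged a moment later.
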
Dec  3 18:58:09 compute-0 nova_compute[348325]: 2025-12-03 18:58:09.554 348329 DEBUG nova.virt.libvirt.driver [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  3 18:58:09 compute-0 nova_compute[348325]: 2025-12-03 18:58:09.555 348329 DEBUG nova.virt.libvirt.driver [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  3 18:58:09 compute-0 nova_compute[348325]: 2025-12-03 18:58:09.555 348329 DEBUG nova.virt.libvirt.driver [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] No VIF found with MAC fa:16:3e:7c:33:2c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  3 18:58:09 compute-0 nova_compute[348325]: 2025-12-03 18:58:09.555 348329 INFO nova.virt.libvirt.driver [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] Using config drive#033[00m
Dec  3 18:58:09 compute-0 nova_compute[348325]: 2025-12-03 18:58:09.593 348329 DEBUG nova.storage.rbd_utils [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] rbd image c9937213-8842-4393-90b0-edb363037633_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 18:58:09 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1776: 321 pgs: 321 active+clean; 150 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 2.7 MiB/s wr, 70 op/s
Dec  3 18:58:09 compute-0 nova_compute[348325]: 2025-12-03 18:58:09.737 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:58:10 compute-0 nova_compute[348325]: 2025-12-03 18:58:10.351 348329 INFO nova.virt.libvirt.driver [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] Creating config drive at /var/lib/nova/instances/c9937213-8842-4393-90b0-edb363037633/disk.config#033[00m
Dec  3 18:58:10 compute-0 nova_compute[348325]: 2025-12-03 18:58:10.365 348329 DEBUG oslo_concurrency.processutils [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c9937213-8842-4393-90b0-edb363037633/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppt60wh4f execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:58:10 compute-0 nova_compute[348325]: 2025-12-03 18:58:10.498 348329 DEBUG oslo_concurrency.processutils [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c9937213-8842-4393-90b0-edb363037633/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppt60wh4f" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:58:10 compute-0 nova_compute[348325]: 2025-12-03 18:58:10.544 348329 DEBUG nova.storage.rbd_utils [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] rbd image c9937213-8842-4393-90b0-edb363037633_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 18:58:10 compute-0 nova_compute[348325]: 2025-12-03 18:58:10.552 348329 DEBUG oslo_concurrency.processutils [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c9937213-8842-4393-90b0-edb363037633/disk.config c9937213-8842-4393-90b0-edb363037633_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:58:10 compute-0 nova_compute[348325]: 2025-12-03 18:58:10.753 348329 DEBUG oslo_concurrency.processutils [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c9937213-8842-4393-90b0-edb363037633/disk.config c9937213-8842-4393-90b0-edb363037633_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.201s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:58:10 compute-0 nova_compute[348325]: 2025-12-03 18:58:10.754 348329 INFO nova.virt.libvirt.driver [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] Deleting local config drive /var/lib/nova/instances/c9937213-8842-4393-90b0-edb363037633/disk.config because it was imported into RBD.#033[00m
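
The config-drive sequence above (mkisofs, then rbd import, then local cleanup) reproduced as plain subprocess calls. Arguments are copied from the logged commands; note the publisher string is a single argument (processutils logs it space-joined), and /tmp/tmppt60wh4f was a throwaway staging directory:

    import os
    import subprocess

    uuid = "c9937213-8842-4393-90b0-edb363037633"
    iso = f"/var/lib/nova/instances/{uuid}/disk.config"

    # Build the ISO9660 config drive (volume label "config-2" is what
    # cloud-init probes for), push it into the vms pool, then delete the
    # local copy, as the INFO line above describes.
    subprocess.run(
        ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
         "-allow-multidot", "-l",
         "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
         "-quiet", "-J", "-r", "-V", "config-2", "/tmp/tmppt60wh4f"],
        check=True)
    subprocess.run(
        ["rbd", "import", "--pool", "vms", iso, f"{uuid}_disk.config",
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True)
    os.unlink(iso)
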
Dec  3 18:58:10 compute-0 kernel: tap2c007b4e-e6: entered promiscuous mode
Dec  3 18:58:10 compute-0 ovn_controller[89305]: 2025-12-03T18:58:10Z|00109|binding|INFO|Claiming lport 2c007b4e-e674-4c1f-becb-67fc1b96681b for this chassis.
Dec  3 18:58:10 compute-0 ovn_controller[89305]: 2025-12-03T18:58:10Z|00110|binding|INFO|2c007b4e-e674-4c1f-becb-67fc1b96681b: Claiming fa:16:3e:7c:33:2c 10.100.0.11
Dec  3 18:58:10 compute-0 nova_compute[348325]: 2025-12-03 18:58:10.813 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:58:10 compute-0 NetworkManager[49087]: <info>  [1764788290.8202] manager: (tap2c007b4e-e6): new Tun device (/org/freedesktop/NetworkManager/Devices/55)
Dec  3 18:58:10 compute-0 ovn_controller[89305]: 2025-12-03T18:58:10Z|00111|binding|INFO|Setting lport 2c007b4e-e674-4c1f-becb-67fc1b96681b ovn-installed in OVS
Dec  3 18:58:10 compute-0 nova_compute[348325]: 2025-12-03 18:58:10.838 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:58:10 compute-0 nova_compute[348325]: 2025-12-03 18:58:10.849 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:58:10 compute-0 systemd-machined[138702]: New machine qemu-10-instance-0000000a.
Dec  3 18:58:10 compute-0 ovn_controller[89305]: 2025-12-03T18:58:10Z|00112|binding|INFO|Setting lport 2c007b4e-e674-4c1f-becb-67fc1b96681b up in Southbound
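
To verify a claim like the one logged above, the Southbound Port_Binding row can be queried with ovn-sbctl (an illustrative check, not part of the boot path; a row bound to this chassis with up=true confirms the claim):

    import subprocess

    # "find" filters Southbound rows by column value.
    subprocess.run(
        ["ovn-sbctl", "find", "Port_Binding",
         "logical_port=2c007b4e-e674-4c1f-becb-67fc1b96681b"],
        check=True)
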
Dec  3 18:58:10 compute-0 systemd-udevd[444349]: Network interface NamePolicy= disabled on kernel command line.
Dec  3 18:58:10 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:10.860 286999 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7c:33:2c 10.100.0.11'], port_security=['fa:16:3e:7c:33:2c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'c9937213-8842-4393-90b0-edb363037633', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d518f3f9-88f0-4dc2-8769-17ebdac41174', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '82b2746c38174502bdcb70a8ab378edf', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5e73fa03-0484-41f3-9b8b-de18b4035c5c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e5f1938d-cee6-4d22-8bab-61d58d3ab44b, chassis=[<ovs.db.idl.Row object at 0x7f81e3e96760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f81e3e96760>], logical_port=2c007b4e-e674-4c1f-becb-67fc1b96681b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 18:58:10 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:10.862 286999 INFO neutron.agent.ovn.metadata.agent [-] Port 2c007b4e-e674-4c1f-becb-67fc1b96681b in datapath d518f3f9-88f0-4dc2-8769-17ebdac41174 bound to our chassis#033[00m
Dec  3 18:58:10 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:10.865 286999 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d518f3f9-88f0-4dc2-8769-17ebdac41174#033[00m
Dec  3 18:58:10 compute-0 systemd[1]: Started Virtual Machine qemu-10-instance-0000000a.
Dec  3 18:58:10 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:10.876 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[ded8551d-537d-48b8-8577-6b083faeddaa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:58:10 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:10.877 286999 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd518f3f9-81 in ovnmeta-d518f3f9-88f0-4dc2-8769-17ebdac41174 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
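
The VETH provisioning logged above (done by the agent through pyroute2 under oslo.privsep, hence the privsep reply lines) is equivalent to these iproute2 steps, shown via subprocess for illustration:

    import subprocess

    ns = "ovnmeta-d518f3f9-88f0-4dc2-8769-17ebdac41174"

    # Create the namespace and a veth pair, then move the -81 end inside;
    # the -80 end stays in the root namespace to be plugged into br-int,
    # which the DelPort/AddPort transactions below take care of.
    subprocess.run(["ip", "netns", "add", ns], check=True)
    subprocess.run(["ip", "link", "add", "tapd518f3f9-80",
                    "type", "veth", "peer", "name", "tapd518f3f9-81"],
                   check=True)
    subprocess.run(["ip", "link", "set", "tapd518f3f9-81", "netns", ns],
                   check=True)
    subprocess.run(["ip", "netns", "exec", ns,
                    "ip", "link", "set", "tapd518f3f9-81", "up"],
                   check=True)
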
Dec  3 18:58:10 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:10.879 411759 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd518f3f9-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  3 18:58:10 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:10.879 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[48e40bb7-f01b-43cf-b440-248c18393e8b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:58:10 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:10.881 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[5b1274cc-f97f-476b-862c-8742b6643fd2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:58:10 compute-0 NetworkManager[49087]: <info>  [1764788290.8875] device (tap2c007b4e-e6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  3 18:58:10 compute-0 NetworkManager[49087]: <info>  [1764788290.8884] device (tap2c007b4e-e6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  3 18:58:10 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:10.892 287110 DEBUG oslo.privsep.daemon [-] privsep: reply[f474ff33-d58f-457b-9120-0213dd791822]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:58:10 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:10.914 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[e4baf751-bae7-40c6-9839-dcda82d069cc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:58:10 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:10.945 411797 DEBUG oslo.privsep.daemon [-] privsep: reply[735b9dc5-42fb-4acc-9105-9719e7e2cd09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:58:10 compute-0 NetworkManager[49087]: <info>  [1764788290.9559] manager: (tapd518f3f9-80): new Veth device (/org/freedesktop/NetworkManager/Devices/56)
Dec  3 18:58:10 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:10.955 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[93e8957d-a502-4454-933e-55c96997d629]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:58:10 compute-0 systemd-udevd[444353]: Network interface NamePolicy= disabled on kernel command line.
Dec  3 18:58:10 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:10.986 411797 DEBUG oslo.privsep.daemon [-] privsep: reply[1a97ee70-a82a-455f-9267-ea998f146fb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:58:10 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:10.993 411797 DEBUG oslo.privsep.daemon [-] privsep: reply[3c4dcbb7-d1b3-4ced-a982-b91ccb7c599a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:58:11 compute-0 NetworkManager[49087]: <info>  [1764788291.0141] device (tapd518f3f9-80): carrier: link connected
Dec  3 18:58:11 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:11.021 411797 DEBUG oslo.privsep.daemon [-] privsep: reply[ea3e3b7e-ef89-4366-a345-55252fd96d21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:58:11 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:11.036 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[f531dee2-f655-422b-9b83-6fb966ad70f7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd518f3f9-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e5:6a:f0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 33], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 657095, 'reachable_time': 18508, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 444385, 'error': None, 'target': 'ovnmeta-d518f3f9-88f0-4dc2-8769-17ebdac41174', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:58:11 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:11.049 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[2acc48c4-8b3e-4486-96ea-c9391c5a0674]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee5:6af0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 657095, 'tstamp': 657095}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 444387, 'error': None, 'target': 'ovnmeta-d518f3f9-88f0-4dc2-8769-17ebdac41174', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:58:11 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:11.062 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[b9a8a476-5a67-494c-8e7d-516879f70705]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd518f3f9-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e5:6a:f0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 33], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 657095, 'reachable_time': 18508, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 444388, 'error': None, 'target': 'ovnmeta-d518f3f9-88f0-4dc2-8769-17ebdac41174', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:58:11 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:11.094 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[1fddb883-962d-47e2-b6b6-50a337f36768]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:58:11 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:11.160 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[f5278dbb-cc00-4b96-8c1b-d0d500ffafc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:58:11 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:11.162 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd518f3f9-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:58:11 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:11.162 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 18:58:11 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:11.163 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd518f3f9-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:58:11 compute-0 nova_compute[348325]: 2025-12-03 18:58:11.165 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:58:11 compute-0 kernel: tapd518f3f9-80: entered promiscuous mode
Dec  3 18:58:11 compute-0 NetworkManager[49087]: <info>  [1764788291.1658] manager: (tapd518f3f9-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/57)
Dec  3 18:58:11 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:11.167 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd518f3f9-80, col_values=(('external_ids', {'iface-id': '230273f0-8290-4d7b-8f3b-1217ad9086fb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:58:11 compute-0 ovn_controller[89305]: 2025-12-03T18:58:11Z|00113|binding|INFO|Releasing lport 230273f0-8290-4d7b-8f3b-1217ad9086fb from this chassis (sb_readonly=0)
Dec  3 18:58:11 compute-0 nova_compute[348325]: 2025-12-03 18:58:11.169 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:58:11 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:11.183 286999 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d518f3f9-88f0-4dc2-8769-17ebdac41174.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d518f3f9-88f0-4dc2-8769-17ebdac41174.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  3 18:58:11 compute-0 nova_compute[348325]: 2025-12-03 18:58:11.183 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:58:11 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:11.184 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[5149720f-5932-4c45-a7c6-16e6384b6718]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
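[annotation] The "Unable to access ... Error: [Errno 2]" line above is not a failure: before (re)spawning the proxy, the agent probes for an existing haproxy pidfile and treats ENOENT as "not currently running", which is why it is logged at DEBUG. A minimal sketch of that read-pid-or-None pattern (hypothetical helper name; the real code is neutron.agent.linux.utils.get_value_from_file):

    def get_pid_from_file(path):
        # Return the PID string from a pidfile, or None when the file
        # is absent, mirroring the ENOENT branch logged above.
        try:
            with open(path) as f:
                return f.read().strip()
        except FileNotFoundError:
            return None

    pid = get_pid_from_file(
        "/var/lib/neutron/external/pids/"
        "d518f3f9-88f0-4dc2-8769-17ebdac41174.pid.haproxy")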
Dec  3 18:58:11 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:11.185 286999 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  3 18:58:11 compute-0 ovn_metadata_agent[286994]: global
Dec  3 18:58:11 compute-0 ovn_metadata_agent[286994]:    log         /dev/log local0 debug
Dec  3 18:58:11 compute-0 ovn_metadata_agent[286994]:    log-tag     haproxy-metadata-proxy-d518f3f9-88f0-4dc2-8769-17ebdac41174
Dec  3 18:58:11 compute-0 ovn_metadata_agent[286994]:    user        root
Dec  3 18:58:11 compute-0 ovn_metadata_agent[286994]:    group       root
Dec  3 18:58:11 compute-0 ovn_metadata_agent[286994]:    maxconn     1024
Dec  3 18:58:11 compute-0 ovn_metadata_agent[286994]:    pidfile     /var/lib/neutron/external/pids/d518f3f9-88f0-4dc2-8769-17ebdac41174.pid.haproxy
Dec  3 18:58:11 compute-0 ovn_metadata_agent[286994]:    daemon
Dec  3 18:58:11 compute-0 ovn_metadata_agent[286994]: 
Dec  3 18:58:11 compute-0 ovn_metadata_agent[286994]: defaults
Dec  3 18:58:11 compute-0 ovn_metadata_agent[286994]:    log global
Dec  3 18:58:11 compute-0 ovn_metadata_agent[286994]:    mode http
Dec  3 18:58:11 compute-0 ovn_metadata_agent[286994]:    option httplog
Dec  3 18:58:11 compute-0 ovn_metadata_agent[286994]:    option dontlognull
Dec  3 18:58:11 compute-0 ovn_metadata_agent[286994]:    option http-server-close
Dec  3 18:58:11 compute-0 ovn_metadata_agent[286994]:    option forwardfor
Dec  3 18:58:11 compute-0 ovn_metadata_agent[286994]:    retries                 3
Dec  3 18:58:11 compute-0 ovn_metadata_agent[286994]:    timeout http-request    30s
Dec  3 18:58:11 compute-0 ovn_metadata_agent[286994]:    timeout connect         30s
Dec  3 18:58:11 compute-0 ovn_metadata_agent[286994]:    timeout client          32s
Dec  3 18:58:11 compute-0 ovn_metadata_agent[286994]:    timeout server          32s
Dec  3 18:58:11 compute-0 ovn_metadata_agent[286994]:    timeout http-keep-alive 30s
Dec  3 18:58:11 compute-0 ovn_metadata_agent[286994]: 
Dec  3 18:58:11 compute-0 ovn_metadata_agent[286994]: 
Dec  3 18:58:11 compute-0 ovn_metadata_agent[286994]: listen listener
Dec  3 18:58:11 compute-0 ovn_metadata_agent[286994]:    bind 169.254.169.254:80
Dec  3 18:58:11 compute-0 ovn_metadata_agent[286994]:    server metadata /var/lib/neutron/metadata_proxy
Dec  3 18:58:11 compute-0 ovn_metadata_agent[286994]:    http-request add-header X-OVN-Network-ID d518f3f9-88f0-4dc2-8769-17ebdac41174
Dec  3 18:58:11 compute-0 ovn_metadata_agent[286994]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  3 18:58:11 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:11.188 286999 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d518f3f9-88f0-4dc2-8769-17ebdac41174', 'env', 'PROCESS_TAG=haproxy-d518f3f9-88f0-4dc2-8769-17ebdac41174', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d518f3f9-88f0-4dc2-8769-17ebdac41174.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
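[annotation] The generated config binds 169.254.169.254:80 inside the ovnmeta-<network> namespace, proxies requests to the agent's UNIX socket at /var/lib/neutron/metadata_proxy, and stamps each request with X-OVN-Network-ID so the agent can resolve which instance is asking; the rootwrap command then launches haproxy inside that namespace. A hand-rolled, privileged equivalent of the logged command is sketched below, assuming plain subprocess is acceptable (the agent itself goes through sudo, neutron-rootwrap, and a haproxy wrapper script):

    import subprocess

    NETWORK_ID = "d518f3f9-88f0-4dc2-8769-17ebdac41174"
    CONF = f"/var/lib/neutron/ovn-metadata-proxy/{NETWORK_ID}.conf"

    # Enter the network namespace, tag the process for later cleanup,
    # and start haproxy with the generated config. haproxy backgrounds
    # itself via the 'daemon' keyword in the global section above.
    subprocess.run(
        ["ip", "netns", "exec", f"ovnmeta-{NETWORK_ID}",
         "env", f"PROCESS_TAG=haproxy-{NETWORK_ID}",
         "haproxy", "-f", CONF],
        check=True)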
Dec  3 18:58:11 compute-0 nova_compute[348325]: 2025-12-03 18:58:11.196 348329 DEBUG nova.network.neutron [req-64ceb645-3e23-4703-96da-342a5496d156 req-6ce1838e-9501-4827-a13e-734da6f946ce 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] Updated VIF entry in instance network info cache for port 2c007b4e-e674-4c1f-becb-67fc1b96681b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  3 18:58:11 compute-0 nova_compute[348325]: 2025-12-03 18:58:11.197 348329 DEBUG nova.network.neutron [req-64ceb645-3e23-4703-96da-342a5496d156 req-6ce1838e-9501-4827-a13e-734da6f946ce 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] Updating instance_info_cache with network_info: [{"id": "2c007b4e-e674-4c1f-becb-67fc1b96681b", "address": "fa:16:3e:7c:33:2c", "network": {"id": "d518f3f9-88f0-4dc2-8769-17ebdac41174", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-282035089-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "82b2746c38174502bdcb70a8ab378edf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c007b4e-e6", "ovs_interfaceid": "2c007b4e-e674-4c1f-becb-67fc1b96681b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 18:58:11 compute-0 nova_compute[348325]: 2025-12-03 18:58:11.228 348329 DEBUG oslo_concurrency.lockutils [req-64ceb645-3e23-4703-96da-342a5496d156 req-6ce1838e-9501-4827-a13e-734da6f946ce 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Releasing lock "refresh_cache-c9937213-8842-4393-90b0-edb363037633" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 18:58:11 compute-0 podman[444454]: 2025-12-03 18:58:11.630116032 +0000 UTC m=+0.070061732 container create 617bd8dde6600c8ba146a5636da73688797fbaf4cca7ec8826ea266ab8862f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d518f3f9-88f0-4dc2-8769-17ebdac41174, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 18:58:11 compute-0 nova_compute[348325]: 2025-12-03 18:58:11.650 348329 DEBUG nova.virt.driver [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Emitting event <LifecycleEvent: 1764788291.6496, c9937213-8842-4393-90b0-edb363037633 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 18:58:11 compute-0 nova_compute[348325]: 2025-12-03 18:58:11.650 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: c9937213-8842-4393-90b0-edb363037633] VM Started (Lifecycle Event)#033[00m
Dec  3 18:58:11 compute-0 systemd[1]: Started libpod-conmon-617bd8dde6600c8ba146a5636da73688797fbaf4cca7ec8826ea266ab8862f5d.scope.
Dec  3 18:58:11 compute-0 nova_compute[348325]: 2025-12-03 18:58:11.680 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: c9937213-8842-4393-90b0-edb363037633] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 18:58:11 compute-0 nova_compute[348325]: 2025-12-03 18:58:11.686 348329 DEBUG nova.virt.driver [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Emitting event <LifecycleEvent: 1764788291.6497276, c9937213-8842-4393-90b0-edb363037633 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 18:58:11 compute-0 nova_compute[348325]: 2025-12-03 18:58:11.686 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: c9937213-8842-4393-90b0-edb363037633] VM Paused (Lifecycle Event)#033[00m
Dec  3 18:58:11 compute-0 podman[444454]: 2025-12-03 18:58:11.593840946 +0000 UTC m=+0.033786626 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  3 18:58:11 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:58:11 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1777: 321 pgs: 321 active+clean; 150 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 2.0 MiB/s wr, 97 op/s
Dec  3 18:58:11 compute-0 nova_compute[348325]: 2025-12-03 18:58:11.715 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: c9937213-8842-4393-90b0-edb363037633] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 18:58:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dacd48a497e518160f856eaab4ae0dead65ae64958a7718d3d16b4d47879d26a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  3 18:58:11 compute-0 nova_compute[348325]: 2025-12-03 18:58:11.728 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: c9937213-8842-4393-90b0-edb363037633] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  3 18:58:11 compute-0 podman[444454]: 2025-12-03 18:58:11.735535336 +0000 UTC m=+0.175481016 container init 617bd8dde6600c8ba146a5636da73688797fbaf4cca7ec8826ea266ab8862f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d518f3f9-88f0-4dc2-8769-17ebdac41174, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  3 18:58:11 compute-0 podman[444454]: 2025-12-03 18:58:11.744323011 +0000 UTC m=+0.184268681 container start 617bd8dde6600c8ba146a5636da73688797fbaf4cca7ec8826ea266ab8862f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d518f3f9-88f0-4dc2-8769-17ebdac41174, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3)
Dec  3 18:58:11 compute-0 neutron-haproxy-ovnmeta-d518f3f9-88f0-4dc2-8769-17ebdac41174[444474]: [NOTICE]   (444478) : New worker (444480) forked
Dec  3 18:58:11 compute-0 neutron-haproxy-ovnmeta-d518f3f9-88f0-4dc2-8769-17ebdac41174[444474]: [NOTICE]   (444478) : Loading success.
Dec  3 18:58:11 compute-0 nova_compute[348325]: 2025-12-03 18:58:11.766 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: c9937213-8842-4393-90b0-edb363037633] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
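[annotation] The sync_power_state exchange above shows nova's reconciliation guard: libvirt reports the guest Paused (VM power_state 3) while the database still says building/spawning (DB power_state 0), and the manager skips reconciliation because a task is pending rather than wrongly "correcting" the instance mid-spawn. A condensed sketch of that guard, with constants matching the nova.compute.power_state values seen in these lines:

    # nova.compute.power_state values appearing in this log
    NOSTATE, RUNNING, PAUSED = 0, 1, 3

    def sync_power_state(db_power_state, vm_power_state, task_state):
        # A pending task (here: 'spawning') means the mismatch is an
        # expected transient; defer, i.e. the "Skip." in the log.
        if task_state is not None:
            return "skip"
        if db_power_state != vm_power_state:
            return "reconcile"
        return "in-sync"

    assert sync_power_state(NOSTATE, PAUSED, "spawning") == "skip"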
Dec  3 18:58:13 compute-0 nova_compute[348325]: 2025-12-03 18:58:13.371 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:58:13 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1778: 321 pgs: 321 active+clean; 154 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 93 KiB/s rd, 2.8 MiB/s wr, 112 op/s
Dec  3 18:58:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:58:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:58:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:58:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:58:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:58:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:58:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_18:58:13
Dec  3 18:58:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 18:58:13 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 18:58:13 compute-0 ceph-mgr[193091]: [balancer INFO root] pools ['backups', 'default.rgw.meta', '.rgw.root', '.mgr', 'default.rgw.control', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.log', 'vms', 'cephfs.cephfs.data', 'images']
Dec  3 18:58:13 compute-0 ceph-mgr[193091]: [balancer INFO root] prepared 0/10 changes
Dec  3 18:58:14 compute-0 nova_compute[348325]: 2025-12-03 18:58:14.213 348329 DEBUG nova.compute.manager [req-f12fd2c0-ae2f-4a92-9baa-c55a710decf3 req-3a788aa3-b8ea-4143-9ca1-d9f3baa22ca8 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] Received event network-vif-plugged-2c007b4e-e674-4c1f-becb-67fc1b96681b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 18:58:14 compute-0 nova_compute[348325]: 2025-12-03 18:58:14.213 348329 DEBUG oslo_concurrency.lockutils [req-f12fd2c0-ae2f-4a92-9baa-c55a710decf3 req-3a788aa3-b8ea-4143-9ca1-d9f3baa22ca8 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "c9937213-8842-4393-90b0-edb363037633-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:58:14 compute-0 nova_compute[348325]: 2025-12-03 18:58:14.214 348329 DEBUG oslo_concurrency.lockutils [req-f12fd2c0-ae2f-4a92-9baa-c55a710decf3 req-3a788aa3-b8ea-4143-9ca1-d9f3baa22ca8 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "c9937213-8842-4393-90b0-edb363037633-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:58:14 compute-0 nova_compute[348325]: 2025-12-03 18:58:14.215 348329 DEBUG oslo_concurrency.lockutils [req-f12fd2c0-ae2f-4a92-9baa-c55a710decf3 req-3a788aa3-b8ea-4143-9ca1-d9f3baa22ca8 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "c9937213-8842-4393-90b0-edb363037633-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
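[annotation] The Acquiring/acquired/released triplet above comes from oslo.concurrency's lockutils, which serializes handlers touching the same instance's event queue; the same pattern guards the "refresh_cache-<uuid>" sections elsewhere in this log. A minimal sketch using the real lockutils context manager:

    from oslo_concurrency import lockutils

    instance_uuid = "c9937213-8842-4393-90b0-edb363037633"

    # Equivalent of the lock dance logged above: only one thread at a
    # time may pop pending events for this instance.
    with lockutils.lock(f"{instance_uuid}-events"):
        pass  # the pop_instance_event() body would run here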
Dec  3 18:58:14 compute-0 nova_compute[348325]: 2025-12-03 18:58:14.215 348329 DEBUG nova.compute.manager [req-f12fd2c0-ae2f-4a92-9baa-c55a710decf3 req-3a788aa3-b8ea-4143-9ca1-d9f3baa22ca8 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] Processing event network-vif-plugged-2c007b4e-e674-4c1f-becb-67fc1b96681b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  3 18:58:14 compute-0 nova_compute[348325]: 2025-12-03 18:58:14.217 348329 DEBUG nova.compute.manager [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  3 18:58:14 compute-0 nova_compute[348325]: 2025-12-03 18:58:14.224 348329 DEBUG nova.virt.driver [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Emitting event <LifecycleEvent: 1764788294.2241426, c9937213-8842-4393-90b0-edb363037633 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 18:58:14 compute-0 nova_compute[348325]: 2025-12-03 18:58:14.225 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: c9937213-8842-4393-90b0-edb363037633] VM Resumed (Lifecycle Event)#033[00m
Dec  3 18:58:14 compute-0 nova_compute[348325]: 2025-12-03 18:58:14.230 348329 DEBUG nova.virt.libvirt.driver [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  3 18:58:14 compute-0 nova_compute[348325]: 2025-12-03 18:58:14.242 348329 INFO nova.virt.libvirt.driver [-] [instance: c9937213-8842-4393-90b0-edb363037633] Instance spawned successfully.#033[00m
Dec  3 18:58:14 compute-0 nova_compute[348325]: 2025-12-03 18:58:14.243 348329 DEBUG nova.virt.libvirt.driver [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  3 18:58:14 compute-0 nova_compute[348325]: 2025-12-03 18:58:14.248 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: c9937213-8842-4393-90b0-edb363037633] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 18:58:14 compute-0 nova_compute[348325]: 2025-12-03 18:58:14.262 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: c9937213-8842-4393-90b0-edb363037633] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  3 18:58:14 compute-0 nova_compute[348325]: 2025-12-03 18:58:14.268 348329 DEBUG nova.virt.libvirt.driver [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 18:58:14 compute-0 nova_compute[348325]: 2025-12-03 18:58:14.268 348329 DEBUG nova.virt.libvirt.driver [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 18:58:14 compute-0 nova_compute[348325]: 2025-12-03 18:58:14.268 348329 DEBUG nova.virt.libvirt.driver [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 18:58:14 compute-0 nova_compute[348325]: 2025-12-03 18:58:14.269 348329 DEBUG nova.virt.libvirt.driver [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 18:58:14 compute-0 nova_compute[348325]: 2025-12-03 18:58:14.269 348329 DEBUG nova.virt.libvirt.driver [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 18:58:14 compute-0 nova_compute[348325]: 2025-12-03 18:58:14.269 348329 DEBUG nova.virt.libvirt.driver [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 18:58:14 compute-0 nova_compute[348325]: 2025-12-03 18:58:14.292 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: c9937213-8842-4393-90b0-edb363037633] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  3 18:58:14 compute-0 nova_compute[348325]: 2025-12-03 18:58:14.352 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:58:14 compute-0 nova_compute[348325]: 2025-12-03 18:58:14.357 348329 INFO nova.compute.manager [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] Took 12.22 seconds to spawn the instance on the hypervisor.#033[00m
Dec  3 18:58:14 compute-0 nova_compute[348325]: 2025-12-03 18:58:14.357 348329 DEBUG nova.compute.manager [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 18:58:14 compute-0 nova_compute[348325]: 2025-12-03 18:58:14.414 348329 INFO nova.compute.manager [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] Took 13.53 seconds to build instance.#033[00m
Dec  3 18:58:14 compute-0 nova_compute[348325]: 2025-12-03 18:58:14.433 348329 DEBUG oslo_concurrency.lockutils [None req-945862e8-bc2a-4a7a-b8ce-93d2d780ea94 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Lock "c9937213-8842-4393-90b0-edb363037633" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
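[annotation] The accounting lines above fit together: 12.22 s spawning on the hypervisor, 13.53 s for the end-to-end build, and the per-instance build lock held 13.629 s. A throwaway sketch that pulls those figures out of a log stream; the regex is only as robust as these exact nova messages:

    import re

    PATTERN = re.compile(r"Took (\d+\.\d+) seconds to (spawn|build)")

    def spawn_timings(lines):
        # Yield (seconds, phase) for each nova timing line.
        for line in lines:
            m = PATTERN.search(line)
            if m:
                yield float(m.group(1)), m.group(2)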
Dec  3 18:58:14 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 18:58:14 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Dec  3 18:58:14 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Dec  3 18:58:14 compute-0 nova_compute[348325]: 2025-12-03 18:58:14.447 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:58:14 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Dec  3 18:58:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 18:58:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:58:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 18:58:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:58:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:58:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:58:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:58:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:58:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:58:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:58:14 compute-0 ovn_controller[89305]: 2025-12-03T18:58:14Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6e:88:19 10.100.0.3
Dec  3 18:58:14 compute-0 ovn_controller[89305]: 2025-12-03T18:58:14Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6e:88:19 10.100.0.3
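[annotation] The DHCPOFFER/DHCPACK pair above is OVN's native DHCP responder at work: ovn-controller's pinctrl thread answers the guest directly (no dnsmasq process), handing 10.100.0.3 to fa:16:3e:6e:88:19. A small parser for those pinctrl lines, again a sketch keyed to this exact message format:

    import re

    DHCP_RE = re.compile(
        r"\|pinctrl\([^)]*\)\|INFO\|(DHCPOFFER|DHCPACK) "
        r"([0-9a-f:]{17}) (\d+\.\d+\.\d+\.\d+)")

    def dhcp_events(lines):
        # Yield (msg_type, mac, ip) tuples from ovn-controller lines.
        for line in lines:
            m = DHCP_RE.search(line)
            if m:
                yield m.groups()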
Dec  3 18:58:15 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1780: 321 pgs: 321 active+clean; 168 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 263 KiB/s rd, 2.5 MiB/s wr, 117 op/s
Dec  3 18:58:15 compute-0 podman[444489]: 2025-12-03 18:58:15.880758801 +0000 UTC m=+0.109148506 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  3 18:58:15 compute-0 podman[444491]: 2025-12-03 18:58:15.884950894 +0000 UTC m=+0.114030526 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, architecture=x86_64, container_name=openstack_network_exporter, version=9.6, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec  3 18:58:15 compute-0 podman[444490]: 2025-12-03 18:58:15.903410404 +0000 UTC m=+0.139068596 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  3 18:58:16 compute-0 nova_compute[348325]: 2025-12-03 18:58:16.371 348329 DEBUG nova.compute.manager [req-79576878-6929-4594-b2db-f2fd22024555 req-95177d2b-5517-4f01-8639-0cfaccfbdf38 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] Received event network-vif-plugged-2c007b4e-e674-4c1f-becb-67fc1b96681b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 18:58:16 compute-0 nova_compute[348325]: 2025-12-03 18:58:16.372 348329 DEBUG oslo_concurrency.lockutils [req-79576878-6929-4594-b2db-f2fd22024555 req-95177d2b-5517-4f01-8639-0cfaccfbdf38 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "c9937213-8842-4393-90b0-edb363037633-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:58:16 compute-0 nova_compute[348325]: 2025-12-03 18:58:16.372 348329 DEBUG oslo_concurrency.lockutils [req-79576878-6929-4594-b2db-f2fd22024555 req-95177d2b-5517-4f01-8639-0cfaccfbdf38 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "c9937213-8842-4393-90b0-edb363037633-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:58:16 compute-0 nova_compute[348325]: 2025-12-03 18:58:16.373 348329 DEBUG oslo_concurrency.lockutils [req-79576878-6929-4594-b2db-f2fd22024555 req-95177d2b-5517-4f01-8639-0cfaccfbdf38 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "c9937213-8842-4393-90b0-edb363037633-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:58:16 compute-0 nova_compute[348325]: 2025-12-03 18:58:16.373 348329 DEBUG nova.compute.manager [req-79576878-6929-4594-b2db-f2fd22024555 req-95177d2b-5517-4f01-8639-0cfaccfbdf38 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] No waiting events found dispatching network-vif-plugged-2c007b4e-e674-4c1f-becb-67fc1b96681b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  3 18:58:16 compute-0 nova_compute[348325]: 2025-12-03 18:58:16.374 348329 WARNING nova.compute.manager [req-79576878-6929-4594-b2db-f2fd22024555 req-95177d2b-5517-4f01-8639-0cfaccfbdf38 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] Received unexpected event network-vif-plugged-2c007b4e-e674-4c1f-becb-67fc1b96681b for instance with vm_state active and task_state None.#033[00m
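[annotation] The WARNING above is benign: Neutron re-sent network-vif-plugged after the instance had already gone active, so pop_instance_event found no registered waiter ("No waiting events found dispatching"). The underlying pattern is a per-instance table of named events that the build thread waits on and the event handler pops; a toy version of that pattern, assuming threading.Event stands in for nova's eventlet machinery:

    import threading

    _waiters = {}  # (instance_uuid, event_name) -> threading.Event

    def prepare(instance_uuid, event_name):
        # Build thread registers interest before waiting.
        ev = threading.Event()
        _waiters[(instance_uuid, event_name)] = ev
        return ev

    def pop_event(instance_uuid, event_name):
        # Wake the waiter if one exists; None means "unexpected event",
        # the case that produced the WARNING above.
        ev = _waiters.pop((instance_uuid, event_name), None)
        if ev is not None:
            ev.set()
        return ev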
Dec  3 18:58:16 compute-0 nova_compute[348325]: 2025-12-03 18:58:16.650 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:58:17 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1781: 321 pgs: 321 active+clean; 177 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 277 KiB/s rd, 2.6 MiB/s wr, 105 op/s
Dec  3 18:58:18 compute-0 nova_compute[348325]: 2025-12-03 18:58:18.376 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:58:18 compute-0 nova_compute[348325]: 2025-12-03 18:58:18.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:58:19 compute-0 nova_compute[348325]: 2025-12-03 18:58:19.224 348329 DEBUG nova.compute.manager [req-00a60ed6-91c5-4b23-bec6-a155ba2f492b req-29f2cf07-0841-4f8e-b39f-91574fe6c250 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] Received event network-changed-2c007b4e-e674-4c1f-becb-67fc1b96681b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 18:58:19 compute-0 nova_compute[348325]: 2025-12-03 18:58:19.224 348329 DEBUG nova.compute.manager [req-00a60ed6-91c5-4b23-bec6-a155ba2f492b req-29f2cf07-0841-4f8e-b39f-91574fe6c250 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] Refreshing instance network info cache due to event network-changed-2c007b4e-e674-4c1f-becb-67fc1b96681b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  3 18:58:19 compute-0 nova_compute[348325]: 2025-12-03 18:58:19.225 348329 DEBUG oslo_concurrency.lockutils [req-00a60ed6-91c5-4b23-bec6-a155ba2f492b req-29f2cf07-0841-4f8e-b39f-91574fe6c250 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "refresh_cache-c9937213-8842-4393-90b0-edb363037633" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 18:58:19 compute-0 nova_compute[348325]: 2025-12-03 18:58:19.225 348329 DEBUG oslo_concurrency.lockutils [req-00a60ed6-91c5-4b23-bec6-a155ba2f492b req-29f2cf07-0841-4f8e-b39f-91574fe6c250 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquired lock "refresh_cache-c9937213-8842-4393-90b0-edb363037633" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 18:58:19 compute-0 nova_compute[348325]: 2025-12-03 18:58:19.225 348329 DEBUG nova.network.neutron [req-00a60ed6-91c5-4b23-bec6-a155ba2f492b req-29f2cf07-0841-4f8e-b39f-91574fe6c250 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] Refreshing network info cache for port 2c007b4e-e674-4c1f-becb-67fc1b96681b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  3 18:58:19 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 18:58:19 compute-0 nova_compute[348325]: 2025-12-03 18:58:19.451 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:58:19 compute-0 nova_compute[348325]: 2025-12-03 18:58:19.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:58:19 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1782: 321 pgs: 321 active+clean; 183 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.6 MiB/s wr, 134 op/s
Dec  3 18:58:19 compute-0 podman[444557]: 2025-12-03 18:58:19.95605094 +0000 UTC m=+0.075934245 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent)
Dec  3 18:58:19 compute-0 podman[444551]: 2025-12-03 18:58:19.98758001 +0000 UTC m=+0.129219176 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  3 18:58:19 compute-0 podman[444550]: 2025-12-03 18:58:19.987693122 +0000 UTC m=+0.139368853 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, name=ubi9, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, release-0.7.12=, managed_by=edpm_ansible, com.redhat.component=ubi9-container, container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4)
Dec  3 18:58:20 compute-0 nova_compute[348325]: 2025-12-03 18:58:20.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:58:20 compute-0 nova_compute[348325]: 2025-12-03 18:58:20.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:58:21 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1783: 321 pgs: 321 active+clean; 183 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.6 MiB/s wr, 162 op/s
Dec  3 18:58:22 compute-0 nova_compute[348325]: 2025-12-03 18:58:22.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:58:22 compute-0 nova_compute[348325]: 2025-12-03 18:58:22.487 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 18:58:22 compute-0 nova_compute[348325]: 2025-12-03 18:58:22.487 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  3 18:58:22 compute-0 nova_compute[348325]: 2025-12-03 18:58:22.834 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "refresh_cache-eff2304f-0e67-4c93-ae65-20d4ddb87625" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 18:58:22 compute-0 nova_compute[348325]: 2025-12-03 18:58:22.835 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquired lock "refresh_cache-eff2304f-0e67-4c93-ae65-20d4ddb87625" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 18:58:22 compute-0 nova_compute[348325]: 2025-12-03 18:58:22.835 348329 DEBUG nova.network.neutron [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  3 18:58:22 compute-0 nova_compute[348325]: 2025-12-03 18:58:22.835 348329 DEBUG nova.objects.instance [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lazy-loading 'info_cache' on Instance uuid eff2304f-0e67-4c93-ae65-20d4ddb87625 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 18:58:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:23.355 286999 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:58:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:23.356 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:58:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:23.357 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:58:23 compute-0 nova_compute[348325]: 2025-12-03 18:58:23.379 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:58:23 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1784: 321 pgs: 321 active+clean; 183 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.7 MiB/s wr, 141 op/s
Dec  3 18:58:23 compute-0 nova_compute[348325]: 2025-12-03 18:58:23.895 348329 DEBUG nova.network.neutron [req-00a60ed6-91c5-4b23-bec6-a155ba2f492b req-29f2cf07-0841-4f8e-b39f-91574fe6c250 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] Updated VIF entry in instance network info cache for port 2c007b4e-e674-4c1f-becb-67fc1b96681b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  3 18:58:23 compute-0 nova_compute[348325]: 2025-12-03 18:58:23.896 348329 DEBUG nova.network.neutron [req-00a60ed6-91c5-4b23-bec6-a155ba2f492b req-29f2cf07-0841-4f8e-b39f-91574fe6c250 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] Updating instance_info_cache with network_info: [{"id": "2c007b4e-e674-4c1f-becb-67fc1b96681b", "address": "fa:16:3e:7c:33:2c", "network": {"id": "d518f3f9-88f0-4dc2-8769-17ebdac41174", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-282035089-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "82b2746c38174502bdcb70a8ab378edf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c007b4e-e6", "ovs_interfaceid": "2c007b4e-e674-4c1f-becb-67fc1b96681b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 18:58:23 compute-0 nova_compute[348325]: 2025-12-03 18:58:23.942 348329 DEBUG oslo_concurrency.lockutils [req-00a60ed6-91c5-4b23-bec6-a155ba2f492b req-29f2cf07-0841-4f8e-b39f-91574fe6c250 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Releasing lock "refresh_cache-c9937213-8842-4393-90b0-edb363037633" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
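[annotation] Compared with the cache refresh at 18:58:11, this one reflects the completed attach: the fixed IP 10.100.0.11 now carries floating IP 192.168.122.196 and the VIF is "active": true. A snippet that extracts just those fields from the network_info payload embedded in such lines, assuming the JSON has already been isolated from the log prefix:

    import json

    def vif_summary(network_info_json):
        # network_info_json: the [...] payload logged after
        # "Updating instance_info_cache with network_info:"
        for vif in json.loads(network_info_json):
            for subnet in vif["network"]["subnets"]:
                for ip in subnet["ips"]:
                    floats = [f["address"] for f in ip["floating_ips"]]
                    yield vif["id"], ip["address"], floats, vif["active"]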
Dec  3 18:58:24 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 18:58:24 compute-0 nova_compute[348325]: 2025-12-03 18:58:24.454 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:58:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 18:58:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:58:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 18:58:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:58:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011079409023572312 of space, bias 1.0, pg target 0.33238227070716936 quantized to 32 (current 32)
Dec  3 18:58:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:58:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:58:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:58:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:58:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:58:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec  3 18:58:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:58:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 18:58:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:58:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:58:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:58:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 18:58:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:58:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 18:58:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:58:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:58:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:58:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
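
The _maybe_adjust pass above is the pg_autoscaler recomputing every pool's placement-group target: capacity ratio times bias times a per-root PG budget, quantized to a power of two. The printed numbers imply a budget of 300 PGs for this root (consistent with, for example, 3 OSDs at the default mon_target_pg_per_osd of 100; an assumption, since the OSD count is not in these lines). A minimal Python sketch of that arithmetic, with per-pool pg_num_min floors inferred from the quantized values above:

    # Rough sketch of the pg_autoscaler arithmetic in the lines above, not
    # the exact Ceph implementation. Assumptions: a root PG budget of 300
    # (e.g. 3 OSDs x mon_target_pg_per_osd=100) and per-pool pg_num_min
    # floors (1 for '.mgr', 16 for cephfs.cephfs.meta, 32 for the rest).
    def pg_target(usage_ratio, bias, pg_num_min, root_pg_budget=300):
        raw = usage_ratio * bias * root_pg_budget
        quantized = pg_num_min
        while quantized < raw:            # round up to the next power of two
            quantized *= 2
        return raw, quantized

    print(pg_target(0.0011079409023572312, 1.0, 32))  # 'vms'  -> (~0.33238, 32)
    print(pg_target(7.185749983720779e-06, 1.0, 1))   # '.mgr' -> (~0.00216, 1)
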
Dec  3 18:58:24 compute-0 nova_compute[348325]: 2025-12-03 18:58:24.936 348329 DEBUG nova.network.neutron [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Updating instance_info_cache with network_info: [{"id": "b709b4ab-585a-4aed-9f06-3c9650d54c09", "address": "fa:16:3e:6e:88:19", "network": {"id": "c136d05b-f7ca-4f17-81e0-62c23fcd54a3", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-203684476-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1bc217751704d588f690e1b293cade8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb709b4ab-58", "ovs_interfaceid": "b709b4ab-585a-4aed-9f06-3c9650d54c09", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 18:58:24 compute-0 nova_compute[348325]: 2025-12-03 18:58:24.966 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Releasing lock "refresh_cache-eff2304f-0e67-4c93-ae65-20d4ddb87625" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 18:58:24 compute-0 nova_compute[348325]: 2025-12-03 18:58:24.967 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  3 18:58:24 compute-0 nova_compute[348325]: 2025-12-03 18:58:24.969 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:58:25 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1785: 321 pgs: 321 active+clean; 183 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.5 MiB/s wr, 125 op/s
Dec  3 18:58:26 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:26.905 286999 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5a:63:53', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '8e:79:bd:f4:48:1d'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 18:58:26 compute-0 nova_compute[348325]: 2025-12-03 18:58:26.906 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:58:26 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:26.908 286999 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  3 18:58:27 compute-0 nova_compute[348325]: 2025-12-03 18:58:27.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:58:27 compute-0 nova_compute[348325]: 2025-12-03 18:58:27.487 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 18:58:27 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1786: 321 pgs: 321 active+clean; 183 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 509 KiB/s wr, 81 op/s
Dec  3 18:58:28 compute-0 nova_compute[348325]: 2025-12-03 18:58:28.382 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:58:29 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 18:58:29 compute-0 nova_compute[348325]: 2025-12-03 18:58:29.457 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:58:29 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1787: 321 pgs: 321 active+clean; 183 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 22 KiB/s wr, 68 op/s
Dec  3 18:58:29 compute-0 podman[158200]: time="2025-12-03T18:58:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:58:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:58:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45044 "" "Go-http-client/1.1"
Dec  3 18:58:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:58:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9106 "" "Go-http-client/1.1"
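
The two GET lines are a metrics scraper walking podman's libpod REST API over its unix socket; the same socket is mounted into the podman_exporter container as /run/podman/podman.sock (see the health_status line below). A minimal stand-alone sketch of the first query, assuming that socket path:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        # http.client over AF_UNIX; podman's API has no TCP listener here.
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self._path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")
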
Dec  3 18:58:31 compute-0 nova_compute[348325]: 2025-12-03 18:58:31.135 348329 DEBUG oslo_concurrency.lockutils [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] Acquiring lock "4e045c2f-f0fd-4171-b724-3e38bd7ec4eb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:58:31 compute-0 nova_compute[348325]: 2025-12-03 18:58:31.136 348329 DEBUG oslo_concurrency.lockutils [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] Lock "4e045c2f-f0fd-4171-b724-3e38bd7ec4eb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
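
The Acquiring/acquired/released triplets throughout this log come from oslo.concurrency's lockutils; here Nova serializes the whole build on the instance UUID so that concurrent operations on the same instance cannot interleave. A minimal sketch of the pattern (the function name is illustrative, not Nova's code):

    from oslo_concurrency import lockutils

    # Serialize all work on a single instance UUID, as the
    # "_locked_do_build_and_run_instance" lock above does.
    def build_and_run(instance_uuid):
        with lockutils.lock(instance_uuid):
            pass  # claim resources, build networks, spawn the guest
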
Dec  3 18:58:31 compute-0 nova_compute[348325]: 2025-12-03 18:58:31.159 348329 DEBUG nova.compute.manager [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  3 18:58:31 compute-0 nova_compute[348325]: 2025-12-03 18:58:31.253 348329 DEBUG oslo_concurrency.lockutils [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:58:31 compute-0 nova_compute[348325]: 2025-12-03 18:58:31.255 348329 DEBUG oslo_concurrency.lockutils [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:58:31 compute-0 nova_compute[348325]: 2025-12-03 18:58:31.272 348329 DEBUG nova.virt.hardware [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  3 18:58:31 compute-0 nova_compute[348325]: 2025-12-03 18:58:31.273 348329 INFO nova.compute.claims [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  3 18:58:31 compute-0 openstack_network_exporter[365222]: ERROR   18:58:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:58:31 compute-0 openstack_network_exporter[365222]: ERROR   18:58:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:58:31 compute-0 openstack_network_exporter[365222]: ERROR   18:58:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:58:31 compute-0 openstack_network_exporter[365222]: ERROR   18:58:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:58:31 compute-0 openstack_network_exporter[365222]: ERROR   18:58:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:58:31 compute-0 nova_compute[348325]: 2025-12-03 18:58:31.479 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:58:31 compute-0 nova_compute[348325]: 2025-12-03 18:58:31.611 348329 DEBUG oslo_concurrency.processutils [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:58:31 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1788: 321 pgs: 321 active+clean; 183 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 14 KiB/s wr, 43 op/s
Dec  3 18:58:31 compute-0 podman[444625]: 2025-12-03 18:58:31.945792141 +0000 UTC m=+0.107068465 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 18:58:32 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:58:32 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1769296048' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:58:32 compute-0 nova_compute[348325]: 2025-12-03 18:58:32.136 348329 DEBUG oslo_concurrency.processutils [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
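
The Running cmd / returned pair is oslo.concurrency's processutils shelling out to the ceph CLI; Nova parses the JSON to learn pool capacity for the RBD image backend. The equivalent stand-alone call, assuming the client.openstack keyring referenced by /etc/ceph/ceph.conf is readable:

    import json

    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf")
    df = json.loads(out)
    # Cluster-wide totals live under df["stats"]; per-pool usage under df["pools"].
    print(df["stats"]["total_avail_bytes"])
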
Dec  3 18:58:32 compute-0 nova_compute[348325]: 2025-12-03 18:58:32.146 348329 DEBUG nova.compute.provider_tree [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 18:58:32 compute-0 nova_compute[348325]: 2025-12-03 18:58:32.173 348329 DEBUG nova.scheduler.client.report [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 18:58:32 compute-0 nova_compute[348325]: 2025-12-03 18:58:32.204 348329 DEBUG oslo_concurrency.lockutils [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.949s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:58:32 compute-0 nova_compute[348325]: 2025-12-03 18:58:32.205 348329 DEBUG nova.compute.manager [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  3 18:58:32 compute-0 nova_compute[348325]: 2025-12-03 18:58:32.286 348329 DEBUG nova.compute.manager [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  3 18:58:32 compute-0 nova_compute[348325]: 2025-12-03 18:58:32.287 348329 DEBUG nova.network.neutron [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  3 18:58:32 compute-0 nova_compute[348325]: 2025-12-03 18:58:32.311 348329 INFO nova.virt.libvirt.driver [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  3 18:58:32 compute-0 nova_compute[348325]: 2025-12-03 18:58:32.363 348329 DEBUG nova.compute.manager [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  3 18:58:32 compute-0 nova_compute[348325]: 2025-12-03 18:58:32.459 348329 DEBUG nova.compute.manager [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  3 18:58:32 compute-0 nova_compute[348325]: 2025-12-03 18:58:32.461 348329 DEBUG nova.virt.libvirt.driver [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  3 18:58:32 compute-0 nova_compute[348325]: 2025-12-03 18:58:32.462 348329 INFO nova.virt.libvirt.driver [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] Creating image(s)#033[00m
Dec  3 18:58:32 compute-0 nova_compute[348325]: 2025-12-03 18:58:32.501 348329 DEBUG nova.storage.rbd_utils [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] rbd image 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 18:58:32 compute-0 nova_compute[348325]: 2025-12-03 18:58:32.558 348329 DEBUG nova.storage.rbd_utils [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] rbd image 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 18:58:32 compute-0 nova_compute[348325]: 2025-12-03 18:58:32.593 348329 DEBUG nova.storage.rbd_utils [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] rbd image 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 18:58:32 compute-0 nova_compute[348325]: 2025-12-03 18:58:32.599 348329 DEBUG oslo_concurrency.processutils [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5cd3db9bb272569bd3ad2bd1318028e61915b864 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:58:32 compute-0 nova_compute[348325]: 2025-12-03 18:58:32.618 348329 DEBUG nova.policy [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd3387836400c4ffa96fc7c863361df79', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0e342f56e114484b986071d1dfb8656a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  3 18:58:32 compute-0 nova_compute[348325]: 2025-12-03 18:58:32.622 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:58:32 compute-0 nova_compute[348325]: 2025-12-03 18:58:32.649 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:58:32 compute-0 nova_compute[348325]: 2025-12-03 18:58:32.650 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:58:32 compute-0 nova_compute[348325]: 2025-12-03 18:58:32.651 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:58:32 compute-0 nova_compute[348325]: 2025-12-03 18:58:32.651 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 18:58:32 compute-0 nova_compute[348325]: 2025-12-03 18:58:32.652 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:58:32 compute-0 nova_compute[348325]: 2025-12-03 18:58:32.673 348329 DEBUG oslo_concurrency.processutils [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5cd3db9bb272569bd3ad2bd1318028e61915b864 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
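
Nova probes the cached base image with qemu-img wrapped in oslo_concurrency.prlimit, capping address space at 1 GiB (--as=1073741824) and CPU time at 30 s so a malformed or hostile image cannot exhaust the compute host. The same guarded probe, runnable on its own:

    import json
    import subprocess

    cmd = [
        "/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
        "--as=1073741824", "--cpu=30", "--",
        "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info",
        "/var/lib/nova/instances/_base/5cd3db9bb272569bd3ad2bd1318028e61915b864",
        "--force-share", "--output=json",
    ]
    info = json.loads(subprocess.run(cmd, check=True, capture_output=True).stdout)
    print(info["format"], info["virtual-size"])
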
Dec  3 18:58:32 compute-0 nova_compute[348325]: 2025-12-03 18:58:32.675 348329 DEBUG oslo_concurrency.lockutils [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] Acquiring lock "5cd3db9bb272569bd3ad2bd1318028e61915b864" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:58:32 compute-0 nova_compute[348325]: 2025-12-03 18:58:32.676 348329 DEBUG oslo_concurrency.lockutils [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] Lock "5cd3db9bb272569bd3ad2bd1318028e61915b864" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:58:32 compute-0 nova_compute[348325]: 2025-12-03 18:58:32.677 348329 DEBUG oslo_concurrency.lockutils [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] Lock "5cd3db9bb272569bd3ad2bd1318028e61915b864" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:58:32 compute-0 nova_compute[348325]: 2025-12-03 18:58:32.703 348329 DEBUG nova.storage.rbd_utils [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] rbd image 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 18:58:32 compute-0 nova_compute[348325]: 2025-12-03 18:58:32.710 348329 DEBUG oslo_concurrency.processutils [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/5cd3db9bb272569bd3ad2bd1318028e61915b864 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:58:33 compute-0 nova_compute[348325]: 2025-12-03 18:58:33.069 348329 DEBUG oslo_concurrency.processutils [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/5cd3db9bb272569bd3ad2bd1318028e61915b864 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.359s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:58:33 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:58:33 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/518405354' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:58:33 compute-0 nova_compute[348325]: 2025-12-03 18:58:33.171 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:58:33 compute-0 nova_compute[348325]: 2025-12-03 18:58:33.181 348329 DEBUG nova.storage.rbd_utils [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] resizing rbd image 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
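
This is the Ceph-backed image path end to end: the flat base file is imported into the vms pool as <uuid>_disk (18:58:32.710 above), then grown to the flavor's 1 GiB root disk. A minimal sketch of the resize step with the python-rbd bindings, assuming they are installed next to the CLI used above:

    import rados
    import rbd

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx("vms")
        try:
            name = "4e045c2f-f0fd-4171-b724-3e38bd7ec4eb_disk"
            with rbd.Image(ioctx, name) as image:
                image.resize(1073741824)  # the 1 GiB target in the log line
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()
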
Dec  3 18:58:33 compute-0 nova_compute[348325]: 2025-12-03 18:58:33.408 348329 DEBUG nova.objects.instance [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] Lazy-loading 'migration_context' on Instance uuid 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 18:58:33 compute-0 nova_compute[348325]: 2025-12-03 18:58:33.411 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:58:33 compute-0 nova_compute[348325]: 2025-12-03 18:58:33.432 348329 DEBUG nova.virt.libvirt.driver [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  3 18:58:33 compute-0 nova_compute[348325]: 2025-12-03 18:58:33.432 348329 DEBUG nova.virt.libvirt.driver [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] Ensure instance console log exists: /var/lib/nova/instances/4e045c2f-f0fd-4171-b724-3e38bd7ec4eb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  3 18:58:33 compute-0 nova_compute[348325]: 2025-12-03 18:58:33.433 348329 DEBUG oslo_concurrency.lockutils [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:58:33 compute-0 nova_compute[348325]: 2025-12-03 18:58:33.433 348329 DEBUG oslo_concurrency.lockutils [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:58:33 compute-0 nova_compute[348325]: 2025-12-03 18:58:33.433 348329 DEBUG oslo_concurrency.lockutils [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:58:33 compute-0 nova_compute[348325]: 2025-12-03 18:58:33.462 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 18:58:33 compute-0 nova_compute[348325]: 2025-12-03 18:58:33.463 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 18:58:33 compute-0 nova_compute[348325]: 2025-12-03 18:58:33.468 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 18:58:33 compute-0 nova_compute[348325]: 2025-12-03 18:58:33.469 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 18:58:33 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1789: 321 pgs: 321 active+clean; 183 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 4.7 KiB/s rd, 4.6 KiB/s wr, 3 op/s
Dec  3 18:58:33 compute-0 nova_compute[348325]: 2025-12-03 18:58:33.930 348329 WARNING nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 18:58:33 compute-0 nova_compute[348325]: 2025-12-03 18:58:33.933 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3646MB free_disk=59.92177200317383GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 18:58:33 compute-0 nova_compute[348325]: 2025-12-03 18:58:33.933 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:58:33 compute-0 nova_compute[348325]: 2025-12-03 18:58:33.934 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:58:33 compute-0 podman[444842]: 2025-12-03 18:58:33.937374795 +0000 UTC m=+0.092275724 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, tcib_managed=true, config_id=edpm, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  3 18:58:33 compute-0 podman[444841]: 2025-12-03 18:58:33.965322508 +0000 UTC m=+0.132314721 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Dec  3 18:58:34 compute-0 nova_compute[348325]: 2025-12-03 18:58:34.048 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance eff2304f-0e67-4c93-ae65-20d4ddb87625 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 18:58:34 compute-0 nova_compute[348325]: 2025-12-03 18:58:34.049 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance c9937213-8842-4393-90b0-edb363037633 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 18:58:34 compute-0 nova_compute[348325]: 2025-12-03 18:58:34.049 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 18:58:34 compute-0 nova_compute[348325]: 2025-12-03 18:58:34.049 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 18:58:34 compute-0 nova_compute[348325]: 2025-12-03 18:58:34.055 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
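
The figures in this line follow from earlier entries: used_ram is the 512 MB host reservation plus three 128 MB m1.nano claims, and used_vcpus counts the same three single-vCPU instances. As a quick check:

    reserved_ram_mb = 512        # MEMORY_MB "reserved" in the inventory above
    instances = 3                # the three allocations listed at 18:58:34
    print(reserved_ram_mb + instances * 128)  # 896 -> used_ram=896MB
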
Dec  3 18:58:34 compute-0 nova_compute[348325]: 2025-12-03 18:58:34.168 348329 DEBUG nova.network.neutron [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] Successfully created port: 53ab68f2-6888-4d96-9480-47e55e38f422 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  3 18:58:34 compute-0 nova_compute[348325]: 2025-12-03 18:58:34.172 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:58:34 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 18:58:34 compute-0 nova_compute[348325]: 2025-12-03 18:58:34.460 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:58:34 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:58:34 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1696428230' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:58:34 compute-0 nova_compute[348325]: 2025-12-03 18:58:34.630 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:58:34 compute-0 nova_compute[348325]: 2025-12-03 18:58:34.637 348329 DEBUG nova.compute.provider_tree [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 18:58:34 compute-0 nova_compute[348325]: 2025-12-03 18:58:34.651 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
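
Placement derives schedulable capacity per resource class as (total - reserved) * allocation_ratio, so the unchanged inventory in this line advertises 32 vCPUs, 7167 MB of RAM and 52.2 GB of disk. A worked check against those numbers:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
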
Dec  3 18:58:34 compute-0 nova_compute[348325]: 2025-12-03 18:58:34.673 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 18:58:34 compute-0 nova_compute[348325]: 2025-12-03 18:58:34.674 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.740s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:58:35 compute-0 nova_compute[348325]: 2025-12-03 18:58:35.384 348329 DEBUG nova.network.neutron [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] Successfully updated port: 53ab68f2-6888-4d96-9480-47e55e38f422 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  3 18:58:35 compute-0 nova_compute[348325]: 2025-12-03 18:58:35.403 348329 DEBUG oslo_concurrency.lockutils [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] Acquiring lock "refresh_cache-4e045c2f-f0fd-4171-b724-3e38bd7ec4eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 18:58:35 compute-0 nova_compute[348325]: 2025-12-03 18:58:35.404 348329 DEBUG oslo_concurrency.lockutils [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] Acquired lock "refresh_cache-4e045c2f-f0fd-4171-b724-3e38bd7ec4eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 18:58:35 compute-0 nova_compute[348325]: 2025-12-03 18:58:35.404 348329 DEBUG nova.network.neutron [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  3 18:58:35 compute-0 nova_compute[348325]: 2025-12-03 18:58:35.551 348329 DEBUG nova.compute.manager [req-577335e8-01b4-412d-9c4d-fe0b2bf5cbfc req-e8bbc6ea-b875-4378-950f-d0f0b17a43b7 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] Received event network-changed-53ab68f2-6888-4d96-9480-47e55e38f422 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 18:58:35 compute-0 nova_compute[348325]: 2025-12-03 18:58:35.552 348329 DEBUG nova.compute.manager [req-577335e8-01b4-412d-9c4d-fe0b2bf5cbfc req-e8bbc6ea-b875-4378-950f-d0f0b17a43b7 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] Refreshing instance network info cache due to event network-changed-53ab68f2-6888-4d96-9480-47e55e38f422. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  3 18:58:35 compute-0 nova_compute[348325]: 2025-12-03 18:58:35.553 348329 DEBUG oslo_concurrency.lockutils [req-577335e8-01b4-412d-9c4d-fe0b2bf5cbfc req-e8bbc6ea-b875-4378-950f-d0f0b17a43b7 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "refresh_cache-4e045c2f-f0fd-4171-b724-3e38bd7ec4eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 18:58:35 compute-0 nova_compute[348325]: 2025-12-03 18:58:35.724 348329 DEBUG nova.network.neutron [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  3 18:58:35 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1790: 321 pgs: 321 active+clean; 209 MiB data, 351 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1.0 MiB/s wr, 4 op/s
Dec  3 18:58:35 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:35.912 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1ac9fd0d-196b-4ea8-9a9a-8aa831092805, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:58:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 18:58:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/60585853' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 18:58:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 18:58:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/60585853' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 18:58:37 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1791: 321 pgs: 321 active+clean; 213 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1.2 MiB/s wr, 6 op/s
Dec  3 18:58:37 compute-0 nova_compute[348325]: 2025-12-03 18:58:37.946 348329 DEBUG nova.network.neutron [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] Updating instance_info_cache with network_info: [{"id": "53ab68f2-6888-4d96-9480-47e55e38f422", "address": "fa:16:3e:a6:0c:ea", "network": {"id": "dbd0831a-c570-4257-bca6-ab48802d60d7", "bridge": "br-int", "label": "tempest-TestServerBasicOps-940353231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0e342f56e114484b986071d1dfb8656a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53ab68f2-68", "ovs_interfaceid": "53ab68f2-6888-4d96-9480-47e55e38f422", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 18:58:37 compute-0 nova_compute[348325]: 2025-12-03 18:58:37.973 348329 DEBUG oslo_concurrency.lockutils [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] Releasing lock "refresh_cache-4e045c2f-f0fd-4171-b724-3e38bd7ec4eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 18:58:37 compute-0 nova_compute[348325]: 2025-12-03 18:58:37.973 348329 DEBUG nova.compute.manager [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] Instance network_info: |[{"id": "53ab68f2-6888-4d96-9480-47e55e38f422", "address": "fa:16:3e:a6:0c:ea", "network": {"id": "dbd0831a-c570-4257-bca6-ab48802d60d7", "bridge": "br-int", "label": "tempest-TestServerBasicOps-940353231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0e342f56e114484b986071d1dfb8656a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53ab68f2-68", "ovs_interfaceid": "53ab68f2-6888-4d96-9480-47e55e38f422", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  3 18:58:37 compute-0 nova_compute[348325]: 2025-12-03 18:58:37.975 348329 DEBUG oslo_concurrency.lockutils [req-577335e8-01b4-412d-9c4d-fe0b2bf5cbfc req-e8bbc6ea-b875-4378-950f-d0f0b17a43b7 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquired lock "refresh_cache-4e045c2f-f0fd-4171-b724-3e38bd7ec4eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 18:58:37 compute-0 nova_compute[348325]: 2025-12-03 18:58:37.975 348329 DEBUG nova.network.neutron [req-577335e8-01b4-412d-9c4d-fe0b2bf5cbfc req-e8bbc6ea-b875-4378-950f-d0f0b17a43b7 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] Refreshing network info cache for port 53ab68f2-6888-4d96-9480-47e55e38f422 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  3 18:58:37 compute-0 nova_compute[348325]: 2025-12-03 18:58:37.978 348329 DEBUG nova.virt.libvirt.driver [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] Start _get_guest_xml network_info=[{"id": "53ab68f2-6888-4d96-9480-47e55e38f422", "address": "fa:16:3e:a6:0c:ea", "network": {"id": "dbd0831a-c570-4257-bca6-ab48802d60d7", "bridge": "br-int", "label": "tempest-TestServerBasicOps-940353231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0e342f56e114484b986071d1dfb8656a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53ab68f2-68", "ovs_interfaceid": "53ab68f2-6888-4d96-9480-47e55e38f422", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-03T18:56:32Z,direct_url=<?>,disk_format='qcow2',id=55982930-937b-484e-96ee-69e406a48023,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d2770200bdb2436c90142fa2e5ddcd47',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-03T18:56:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'encryption_format': None, 'guest_format': None, 'disk_bus': 'virtio', 'size': 0, 'boot_index': 0, 'encryption_options': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'image_id': '55982930-937b-484e-96ee-69e406a48023'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  3 18:58:37 compute-0 nova_compute[348325]: 2025-12-03 18:58:37.985 348329 WARNING nova.virt.libvirt.driver [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 18:58:37 compute-0 nova_compute[348325]: 2025-12-03 18:58:37.990 348329 DEBUG nova.virt.libvirt.host [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  3 18:58:37 compute-0 nova_compute[348325]: 2025-12-03 18:58:37.991 348329 DEBUG nova.virt.libvirt.host [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  3 18:58:37 compute-0 nova_compute[348325]: 2025-12-03 18:58:37.999 348329 DEBUG nova.virt.libvirt.host [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  3 18:58:37 compute-0 nova_compute[348325]: 2025-12-03 18:58:37.999 348329 DEBUG nova.virt.libvirt.host [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
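
The two probes above are nova working out where a usable "cpu" controller lives: it is absent from the cgroup v1 hierarchies on this host but present in the unified cgroup v2 hierarchy, so CPU tuning will go through cgroups v2. A minimal stand-alone sketch of an equivalent check, relying only on the standard /proc/mounts and /sys/fs/cgroup interfaces (function names are illustrative, not nova's):

# Probe for a usable "cpu" cgroup controller, mirroring what the
# _has_cgroupsv1_cpu_controller / _has_cgroupsv2_cpu_controller lines
# above report. Function names are illustrative, not nova's.

def has_cgroupsv1_cpu_controller():
    # cgroup v1 mounts one filesystem per controller; look for a
    # "cgroup" mount whose options include the cpu controller.
    with open('/proc/mounts') as mounts:
        for line in mounts:
            _dev, _mnt, fstype, opts = line.split()[:4]
            if fstype == 'cgroup' and 'cpu' in opts.split(','):
                return True
    return False

def has_cgroupsv2_cpu_controller():
    # cgroup v2 is a single unified hierarchy; the available
    # controllers are listed in cgroup.controllers at its root.
    try:
        with open('/sys/fs/cgroup/cgroup.controllers') as controllers:
            return 'cpu' in controllers.read().split()
    except FileNotFoundError:
        return False

if __name__ == '__main__':
    print('cgroup v1 cpu controller:', has_cgroupsv1_cpu_controller())
    print('cgroup v2 cpu controller:', has_cgroupsv2_cpu_controller())

On a cgroup-v2-only host such as this one, the first probe returns False and the second True, matching the "CPU controller missing." / "CPU controller found on host." pair above.
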
Dec  3 18:58:38 compute-0 nova_compute[348325]: 2025-12-03 18:58:38.000 348329 DEBUG nova.virt.libvirt.driver [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  3 18:58:38 compute-0 nova_compute[348325]: 2025-12-03 18:58:38.000 348329 DEBUG nova.virt.hardware [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-03T18:56:30Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a94cfbfb-a20a-4689-ac91-e7436db75880',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-03T18:56:32Z,direct_url=<?>,disk_format='qcow2',id=55982930-937b-484e-96ee-69e406a48023,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d2770200bdb2436c90142fa2e5ddcd47',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-03T18:56:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  3 18:58:38 compute-0 nova_compute[348325]: 2025-12-03 18:58:38.001 348329 DEBUG nova.virt.hardware [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  3 18:58:38 compute-0 nova_compute[348325]: 2025-12-03 18:58:38.001 348329 DEBUG nova.virt.hardware [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  3 18:58:38 compute-0 nova_compute[348325]: 2025-12-03 18:58:38.001 348329 DEBUG nova.virt.hardware [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  3 18:58:38 compute-0 nova_compute[348325]: 2025-12-03 18:58:38.002 348329 DEBUG nova.virt.hardware [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  3 18:58:38 compute-0 nova_compute[348325]: 2025-12-03 18:58:38.002 348329 DEBUG nova.virt.hardware [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  3 18:58:38 compute-0 nova_compute[348325]: 2025-12-03 18:58:38.003 348329 DEBUG nova.virt.hardware [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  3 18:58:38 compute-0 nova_compute[348325]: 2025-12-03 18:58:38.003 348329 DEBUG nova.virt.hardware [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  3 18:58:38 compute-0 nova_compute[348325]: 2025-12-03 18:58:38.003 348329 DEBUG nova.virt.hardware [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  3 18:58:38 compute-0 nova_compute[348325]: 2025-12-03 18:58:38.004 348329 DEBUG nova.virt.hardware [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  3 18:58:38 compute-0 nova_compute[348325]: 2025-12-03 18:58:38.004 348329 DEBUG nova.virt.hardware [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
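
The nova.virt.hardware lines above enumerate every sockets:cores:threads split of the flavor's vCPU count that fits within the 65536/65536/65536 limits and sort the candidates by preference; for the 1-vCPU m1.nano flavor there is exactly one candidate, 1:1:1. A simplified sketch of that enumeration, not nova's actual implementation:

# Simplified sketch of the topology enumeration the log lines above
# describe: every sockets*cores*threads factorization of the vCPU
# count that respects the per-dimension limits.
from collections import namedtuple

VirtCPUTopology = namedtuple('VirtCPUTopology', 'sockets cores threads')

def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                        max_threads=65536):
    for sockets in range(1, min(vcpus, max_sockets) + 1):
        if vcpus % sockets:
            continue
        for cores in range(1, min(vcpus // sockets, max_cores) + 1):
            if (vcpus // sockets) % cores:
                continue
            threads = vcpus // (sockets * cores)
            if threads <= max_threads:
                yield VirtCPUTopology(sockets, cores, threads)

# For the 1-vCPU flavor this yields the single topology logged above:
# [VirtCPUTopology(sockets=1, cores=1, threads=1)]
print(list(possible_topologies(1)))
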
Dec  3 18:58:38 compute-0 nova_compute[348325]: 2025-12-03 18:58:38.008 348329 DEBUG oslo_concurrency.processutils [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:58:38 compute-0 nova_compute[348325]: 2025-12-03 18:58:38.397 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:58:38 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 18:58:38 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1654324152' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 18:58:38 compute-0 nova_compute[348325]: 2025-12-03 18:58:38.457 348329 DEBUG oslo_concurrency.processutils [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:58:38 compute-0 nova_compute[348325]: 2025-12-03 18:58:38.486 348329 DEBUG nova.storage.rbd_utils [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] rbd image 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 18:58:38 compute-0 nova_compute[348325]: 2025-12-03 18:58:38.494 348329 DEBUG oslo_concurrency.processutils [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:58:38 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 18:58:38 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/790931980' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 18:58:38 compute-0 nova_compute[348325]: 2025-12-03 18:58:38.954 348329 DEBUG oslo_concurrency.processutils [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
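
Both subprocess calls above are nova running "ceph mon dump --format=json" to discover the monitor addresses that end up in the <host> elements of the RBD disk sources in the guest XML below (one call per RBD-backed image it inspects). The call can be reproduced with oslo.concurrency's processutils, the same helper these log lines come from, assuming the client.openstack keyring and /etc/ceph/ceph.conf from the log are in place:

# Reproduce the logged "ceph mon dump" call and list the monitors.
import json
from oslo_concurrency import processutils

out, _err = processutils.execute(
    'ceph', 'mon', 'dump', '--format=json',
    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
mon_map = json.loads(out)
for mon in mon_map['mons']:
    print(mon['name'], mon.get('addr'))
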
Dec  3 18:58:38 compute-0 nova_compute[348325]: 2025-12-03 18:58:38.956 348329 DEBUG nova.virt.libvirt.vif [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-03T18:58:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-2083585917',display_name='tempest-TestServerBasicOps-server-2083585917',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-2083585917',id=11,image_ref='55982930-937b-484e-96ee-69e406a48023',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFn6diZTg4q+Q2Qfd2HIztiCzt/4kYZPM7VsCMM6f37GRrqJAsGMHRV/wUzcEB54jMt3wOBRWvDnE75JUGheP+1nPMbymNECzCUBvV7xqhypdn3A4RREInS7UiMpzgGxxA==',key_name='tempest-TestServerBasicOps-2099261388',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0e342f56e114484b986071d1dfb8656a',ramdisk_id='',reservation_id='r-ty0uxce7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='55982930-937b-484e-96ee-69e406a48023',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-1171439068',owner_user_name='tempest-TestServerBasicOps-1171439068-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T18:58:32Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d3387836400c4ffa96fc7c863361df79',uuid=4e045c2f-f0fd-4171-b724-3e38bd7ec4eb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "53ab68f2-6888-4d96-9480-47e55e38f422", "address": "fa:16:3e:a6:0c:ea", "network": {"id": "dbd0831a-c570-4257-bca6-ab48802d60d7", "bridge": "br-int", "label": "tempest-TestServerBasicOps-940353231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0e342f56e114484b986071d1dfb8656a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53ab68f2-68", "ovs_interfaceid": "53ab68f2-6888-4d96-9480-47e55e38f422", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  3 18:58:38 compute-0 nova_compute[348325]: 2025-12-03 18:58:38.957 348329 DEBUG nova.network.os_vif_util [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] Converting VIF {"id": "53ab68f2-6888-4d96-9480-47e55e38f422", "address": "fa:16:3e:a6:0c:ea", "network": {"id": "dbd0831a-c570-4257-bca6-ab48802d60d7", "bridge": "br-int", "label": "tempest-TestServerBasicOps-940353231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0e342f56e114484b986071d1dfb8656a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53ab68f2-68", "ovs_interfaceid": "53ab68f2-6888-4d96-9480-47e55e38f422", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 18:58:38 compute-0 nova_compute[348325]: 2025-12-03 18:58:38.958 348329 DEBUG nova.network.os_vif_util [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a6:0c:ea,bridge_name='br-int',has_traffic_filtering=True,id=53ab68f2-6888-4d96-9480-47e55e38f422,network=Network(dbd0831a-c570-4257-bca6-ab48802d60d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53ab68f2-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 18:58:38 compute-0 nova_compute[348325]: 2025-12-03 18:58:38.960 348329 DEBUG nova.objects.instance [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] Lazy-loading 'pci_devices' on Instance uuid 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 18:58:38 compute-0 nova_compute[348325]: 2025-12-03 18:58:38.978 348329 DEBUG nova.virt.libvirt.driver [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] End _get_guest_xml xml=<domain type="kvm">
Dec  3 18:58:38 compute-0 nova_compute[348325]:  <uuid>4e045c2f-f0fd-4171-b724-3e38bd7ec4eb</uuid>
Dec  3 18:58:38 compute-0 nova_compute[348325]:  <name>instance-0000000b</name>
Dec  3 18:58:38 compute-0 nova_compute[348325]:  <memory>131072</memory>
Dec  3 18:58:38 compute-0 nova_compute[348325]:  <vcpu>1</vcpu>
Dec  3 18:58:38 compute-0 nova_compute[348325]:  <metadata>
Dec  3 18:58:38 compute-0 nova_compute[348325]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  3 18:58:38 compute-0 nova_compute[348325]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  3 18:58:38 compute-0 nova_compute[348325]:      <nova:name>tempest-TestServerBasicOps-server-2083585917</nova:name>
Dec  3 18:58:38 compute-0 nova_compute[348325]:      <nova:creationTime>2025-12-03 18:58:37</nova:creationTime>
Dec  3 18:58:38 compute-0 nova_compute[348325]:      <nova:flavor name="m1.nano">
Dec  3 18:58:38 compute-0 nova_compute[348325]:        <nova:memory>128</nova:memory>
Dec  3 18:58:38 compute-0 nova_compute[348325]:        <nova:disk>1</nova:disk>
Dec  3 18:58:38 compute-0 nova_compute[348325]:        <nova:swap>0</nova:swap>
Dec  3 18:58:38 compute-0 nova_compute[348325]:        <nova:ephemeral>0</nova:ephemeral>
Dec  3 18:58:38 compute-0 nova_compute[348325]:        <nova:vcpus>1</nova:vcpus>
Dec  3 18:58:38 compute-0 nova_compute[348325]:      </nova:flavor>
Dec  3 18:58:38 compute-0 nova_compute[348325]:      <nova:owner>
Dec  3 18:58:38 compute-0 nova_compute[348325]:        <nova:user uuid="d3387836400c4ffa96fc7c863361df79">tempest-TestServerBasicOps-1171439068-project-member</nova:user>
Dec  3 18:58:38 compute-0 nova_compute[348325]:        <nova:project uuid="0e342f56e114484b986071d1dfb8656a">tempest-TestServerBasicOps-1171439068</nova:project>
Dec  3 18:58:38 compute-0 nova_compute[348325]:      </nova:owner>
Dec  3 18:58:38 compute-0 nova_compute[348325]:      <nova:root type="image" uuid="55982930-937b-484e-96ee-69e406a48023"/>
Dec  3 18:58:38 compute-0 nova_compute[348325]:      <nova:ports>
Dec  3 18:58:38 compute-0 nova_compute[348325]:        <nova:port uuid="53ab68f2-6888-4d96-9480-47e55e38f422">
Dec  3 18:58:38 compute-0 nova_compute[348325]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec  3 18:58:38 compute-0 nova_compute[348325]:        </nova:port>
Dec  3 18:58:38 compute-0 nova_compute[348325]:      </nova:ports>
Dec  3 18:58:38 compute-0 nova_compute[348325]:    </nova:instance>
Dec  3 18:58:38 compute-0 nova_compute[348325]:  </metadata>
Dec  3 18:58:38 compute-0 nova_compute[348325]:  <sysinfo type="smbios">
Dec  3 18:58:38 compute-0 nova_compute[348325]:    <system>
Dec  3 18:58:38 compute-0 nova_compute[348325]:      <entry name="manufacturer">RDO</entry>
Dec  3 18:58:38 compute-0 nova_compute[348325]:      <entry name="product">OpenStack Compute</entry>
Dec  3 18:58:38 compute-0 nova_compute[348325]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  3 18:58:38 compute-0 nova_compute[348325]:      <entry name="serial">4e045c2f-f0fd-4171-b724-3e38bd7ec4eb</entry>
Dec  3 18:58:38 compute-0 nova_compute[348325]:      <entry name="uuid">4e045c2f-f0fd-4171-b724-3e38bd7ec4eb</entry>
Dec  3 18:58:38 compute-0 nova_compute[348325]:      <entry name="family">Virtual Machine</entry>
Dec  3 18:58:38 compute-0 nova_compute[348325]:    </system>
Dec  3 18:58:38 compute-0 nova_compute[348325]:  </sysinfo>
Dec  3 18:58:38 compute-0 nova_compute[348325]:  <os>
Dec  3 18:58:38 compute-0 nova_compute[348325]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  3 18:58:38 compute-0 nova_compute[348325]:    <boot dev="hd"/>
Dec  3 18:58:38 compute-0 nova_compute[348325]:    <smbios mode="sysinfo"/>
Dec  3 18:58:38 compute-0 nova_compute[348325]:  </os>
Dec  3 18:58:38 compute-0 nova_compute[348325]:  <features>
Dec  3 18:58:38 compute-0 nova_compute[348325]:    <acpi/>
Dec  3 18:58:38 compute-0 nova_compute[348325]:    <apic/>
Dec  3 18:58:38 compute-0 nova_compute[348325]:    <vmcoreinfo/>
Dec  3 18:58:38 compute-0 nova_compute[348325]:  </features>
Dec  3 18:58:38 compute-0 nova_compute[348325]:  <clock offset="utc">
Dec  3 18:58:38 compute-0 nova_compute[348325]:    <timer name="pit" tickpolicy="delay"/>
Dec  3 18:58:38 compute-0 nova_compute[348325]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  3 18:58:38 compute-0 nova_compute[348325]:    <timer name="hpet" present="no"/>
Dec  3 18:58:38 compute-0 nova_compute[348325]:  </clock>
Dec  3 18:58:38 compute-0 nova_compute[348325]:  <cpu mode="host-model" match="exact">
Dec  3 18:58:38 compute-0 nova_compute[348325]:    <topology sockets="1" cores="1" threads="1"/>
Dec  3 18:58:38 compute-0 nova_compute[348325]:  </cpu>
Dec  3 18:58:38 compute-0 nova_compute[348325]:  <devices>
Dec  3 18:58:38 compute-0 nova_compute[348325]:    <disk type="network" device="disk">
Dec  3 18:58:38 compute-0 nova_compute[348325]:      <driver type="raw" cache="none"/>
Dec  3 18:58:38 compute-0 nova_compute[348325]:      <source protocol="rbd" name="vms/4e045c2f-f0fd-4171-b724-3e38bd7ec4eb_disk">
Dec  3 18:58:38 compute-0 nova_compute[348325]:        <host name="192.168.122.100" port="6789"/>
Dec  3 18:58:38 compute-0 nova_compute[348325]:      </source>
Dec  3 18:58:38 compute-0 nova_compute[348325]:      <auth username="openstack">
Dec  3 18:58:38 compute-0 nova_compute[348325]:        <secret type="ceph" uuid="c1caf3ba-b2a5-5005-a11e-e955c344dccc"/>
Dec  3 18:58:38 compute-0 nova_compute[348325]:      </auth>
Dec  3 18:58:38 compute-0 nova_compute[348325]:      <target dev="vda" bus="virtio"/>
Dec  3 18:58:38 compute-0 nova_compute[348325]:    </disk>
Dec  3 18:58:38 compute-0 nova_compute[348325]:    <disk type="network" device="cdrom">
Dec  3 18:58:38 compute-0 nova_compute[348325]:      <driver type="raw" cache="none"/>
Dec  3 18:58:38 compute-0 nova_compute[348325]:      <source protocol="rbd" name="vms/4e045c2f-f0fd-4171-b724-3e38bd7ec4eb_disk.config">
Dec  3 18:58:38 compute-0 nova_compute[348325]:        <host name="192.168.122.100" port="6789"/>
Dec  3 18:58:38 compute-0 nova_compute[348325]:      </source>
Dec  3 18:58:38 compute-0 nova_compute[348325]:      <auth username="openstack">
Dec  3 18:58:38 compute-0 nova_compute[348325]:        <secret type="ceph" uuid="c1caf3ba-b2a5-5005-a11e-e955c344dccc"/>
Dec  3 18:58:38 compute-0 nova_compute[348325]:      </auth>
Dec  3 18:58:38 compute-0 nova_compute[348325]:      <target dev="sda" bus="sata"/>
Dec  3 18:58:38 compute-0 nova_compute[348325]:    </disk>
Dec  3 18:58:38 compute-0 nova_compute[348325]:    <interface type="ethernet">
Dec  3 18:58:38 compute-0 nova_compute[348325]:      <mac address="fa:16:3e:a6:0c:ea"/>
Dec  3 18:58:38 compute-0 nova_compute[348325]:      <model type="virtio"/>
Dec  3 18:58:38 compute-0 nova_compute[348325]:      <driver name="vhost" rx_queue_size="512"/>
Dec  3 18:58:38 compute-0 nova_compute[348325]:      <mtu size="1442"/>
Dec  3 18:58:38 compute-0 nova_compute[348325]:      <target dev="tap53ab68f2-68"/>
Dec  3 18:58:38 compute-0 nova_compute[348325]:    </interface>
Dec  3 18:58:38 compute-0 nova_compute[348325]:    <serial type="pty">
Dec  3 18:58:38 compute-0 nova_compute[348325]:      <log file="/var/lib/nova/instances/4e045c2f-f0fd-4171-b724-3e38bd7ec4eb/console.log" append="off"/>
Dec  3 18:58:38 compute-0 nova_compute[348325]:    </serial>
Dec  3 18:58:38 compute-0 nova_compute[348325]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  3 18:58:38 compute-0 nova_compute[348325]:    <video>
Dec  3 18:58:38 compute-0 nova_compute[348325]:      <model type="virtio"/>
Dec  3 18:58:38 compute-0 nova_compute[348325]:    </video>
Dec  3 18:58:38 compute-0 nova_compute[348325]:    <input type="tablet" bus="usb"/>
Dec  3 18:58:38 compute-0 nova_compute[348325]:    <rng model="virtio">
Dec  3 18:58:38 compute-0 nova_compute[348325]:      <backend model="random">/dev/urandom</backend>
Dec  3 18:58:38 compute-0 nova_compute[348325]:    </rng>
Dec  3 18:58:38 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root"/>
Dec  3 18:58:38 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:58:38 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:58:38 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:58:38 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:58:38 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:58:38 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:58:38 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:58:38 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:58:38 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:58:38 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:58:38 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:58:38 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:58:38 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:58:38 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:58:38 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:58:38 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:58:38 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:58:38 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:58:38 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:58:38 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:58:38 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:58:38 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:58:38 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:58:38 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:58:38 compute-0 nova_compute[348325]:    <controller type="usb" index="0"/>
Dec  3 18:58:38 compute-0 nova_compute[348325]:    <memballoon model="virtio">
Dec  3 18:58:38 compute-0 nova_compute[348325]:      <stats period="10"/>
Dec  3 18:58:38 compute-0 nova_compute[348325]:    </memballoon>
Dec  3 18:58:38 compute-0 nova_compute[348325]:  </devices>
Dec  3 18:58:38 compute-0 nova_compute[348325]: </domain>
Dec  3 18:58:38 compute-0 nova_compute[348325]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
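
The domain XML dumped above is what nova hands to libvirt to define the guest. Once systemd-machined reports the machine running (18:58:40 below), the live definition can be read back for comparison; a small sketch using the libvirt Python bindings, with the domain name taken from the <name> element above:

# Fetch the live domain XML back from libvirt for comparison with the
# _get_guest_xml dump above.
import libvirt

conn = libvirt.open('qemu:///system')
try:
    dom = conn.lookupByName('instance-0000000b')  # <name> from the XML above
    print(dom.XMLDesc())
finally:
    conn.close()
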
Dec  3 18:58:38 compute-0 nova_compute[348325]: 2025-12-03 18:58:38.980 348329 DEBUG nova.compute.manager [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] Preparing to wait for external event network-vif-plugged-53ab68f2-6888-4d96-9480-47e55e38f422 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  3 18:58:38 compute-0 nova_compute[348325]: 2025-12-03 18:58:38.980 348329 DEBUG oslo_concurrency.lockutils [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] Acquiring lock "4e045c2f-f0fd-4171-b724-3e38bd7ec4eb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:58:38 compute-0 nova_compute[348325]: 2025-12-03 18:58:38.981 348329 DEBUG oslo_concurrency.lockutils [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] Lock "4e045c2f-f0fd-4171-b724-3e38bd7ec4eb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:58:38 compute-0 nova_compute[348325]: 2025-12-03 18:58:38.981 348329 DEBUG oslo_concurrency.lockutils [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] Lock "4e045c2f-f0fd-4171-b724-3e38bd7ec4eb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
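
The acquire/release pair above registers a waiter for the external network-vif-plugged event under the per-instance "-events" lock before the VIF is actually plugged, so the callback Neutron sends once the port comes up cannot race the registration. A minimal sketch of that prepare-then-wait pattern (names illustrative, not nova's):

# Prepare-then-wait: register the event waiter under a lock *before*
# starting the action that will eventually trigger the callback.
import threading

_events_lock = threading.Lock()
_events = {}

def prepare_for_instance_event(name):
    with _events_lock:
        return _events.setdefault(name, threading.Event())

def emit_event(name):
    with _events_lock:
        event = _events.get(name)
    if event:
        event.set()

waiter = prepare_for_instance_event('network-vif-plugged-53ab68f2')
# ... plug the VIF and start the guest; in nova the matching
# emit_event() is driven by Neutron's callback to the compute API ...
emit_event('network-vif-plugged-53ab68f2')
print('plugged:', waiter.wait(timeout=300))
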
Dec  3 18:58:38 compute-0 nova_compute[348325]: 2025-12-03 18:58:38.982 348329 DEBUG nova.virt.libvirt.vif [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-03T18:58:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-2083585917',display_name='tempest-TestServerBasicOps-server-2083585917',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-2083585917',id=11,image_ref='55982930-937b-484e-96ee-69e406a48023',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFn6diZTg4q+Q2Qfd2HIztiCzt/4kYZPM7VsCMM6f37GRrqJAsGMHRV/wUzcEB54jMt3wOBRWvDnE75JUGheP+1nPMbymNECzCUBvV7xqhypdn3A4RREInS7UiMpzgGxxA==',key_name='tempest-TestServerBasicOps-2099261388',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0e342f56e114484b986071d1dfb8656a',ramdisk_id='',reservation_id='r-ty0uxce7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='55982930-937b-484e-96ee-69e406a48023',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-1171439068',owner_user_name='tempest-TestServerBasicOps-1171439068-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T18:58:32Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d3387836400c4ffa96fc7c863361df79',uuid=4e045c2f-f0fd-4171-b724-3e38bd7ec4eb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "53ab68f2-6888-4d96-9480-47e55e38f422", "address": "fa:16:3e:a6:0c:ea", "network": {"id": "dbd0831a-c570-4257-bca6-ab48802d60d7", "bridge": "br-int", "label": "tempest-TestServerBasicOps-940353231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0e342f56e114484b986071d1dfb8656a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53ab68f2-68", "ovs_interfaceid": "53ab68f2-6888-4d96-9480-47e55e38f422", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  3 18:58:38 compute-0 nova_compute[348325]: 2025-12-03 18:58:38.983 348329 DEBUG nova.network.os_vif_util [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] Converting VIF {"id": "53ab68f2-6888-4d96-9480-47e55e38f422", "address": "fa:16:3e:a6:0c:ea", "network": {"id": "dbd0831a-c570-4257-bca6-ab48802d60d7", "bridge": "br-int", "label": "tempest-TestServerBasicOps-940353231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0e342f56e114484b986071d1dfb8656a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53ab68f2-68", "ovs_interfaceid": "53ab68f2-6888-4d96-9480-47e55e38f422", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 18:58:38 compute-0 nova_compute[348325]: 2025-12-03 18:58:38.984 348329 DEBUG nova.network.os_vif_util [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a6:0c:ea,bridge_name='br-int',has_traffic_filtering=True,id=53ab68f2-6888-4d96-9480-47e55e38f422,network=Network(dbd0831a-c570-4257-bca6-ab48802d60d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53ab68f2-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 18:58:38 compute-0 nova_compute[348325]: 2025-12-03 18:58:38.984 348329 DEBUG os_vif [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:0c:ea,bridge_name='br-int',has_traffic_filtering=True,id=53ab68f2-6888-4d96-9480-47e55e38f422,network=Network(dbd0831a-c570-4257-bca6-ab48802d60d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53ab68f2-68') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  3 18:58:38 compute-0 nova_compute[348325]: 2025-12-03 18:58:38.985 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:58:38 compute-0 nova_compute[348325]: 2025-12-03 18:58:38.986 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:58:38 compute-0 nova_compute[348325]: 2025-12-03 18:58:38.987 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 18:58:39 compute-0 nova_compute[348325]: 2025-12-03 18:58:38.999 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:58:39 compute-0 nova_compute[348325]: 2025-12-03 18:58:39.000 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap53ab68f2-68, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:58:39 compute-0 nova_compute[348325]: 2025-12-03 18:58:39.001 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap53ab68f2-68, col_values=(('external_ids', {'iface-id': '53ab68f2-6888-4d96-9480-47e55e38f422', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a6:0c:ea', 'vm-uuid': '4e045c2f-f0fd-4171-b724-3e38bd7ec4eb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:58:39 compute-0 nova_compute[348325]: 2025-12-03 18:58:39.003 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:58:39 compute-0 NetworkManager[49087]: <info>  [1764788319.0039] manager: (tap53ab68f2-68): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/58)
Dec  3 18:58:39 compute-0 nova_compute[348325]: 2025-12-03 18:58:39.008 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  3 18:58:39 compute-0 nova_compute[348325]: 2025-12-03 18:58:39.014 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:58:39 compute-0 nova_compute[348325]: 2025-12-03 18:58:39.015 348329 INFO os_vif [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:0c:ea,bridge_name='br-int',has_traffic_filtering=True,id=53ab68f2-6888-4d96-9480-47e55e38f422,network=Network(dbd0831a-c570-4257-bca6-ab48802d60d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53ab68f2-68')#033[00m
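
The ovsdbapp transactions at 18:58:38.986-39.001 above are os-vif's ovs plugin at work: ensure br-int exists, add the tap port, and stamp the Neutron port UUID into the interface's external_ids as iface-id, which is the key ovn-controller matches when it claims the lport at 18:58:40. A condensed sketch of the same transaction issued directly through ovsdbapp; the unix socket path is an assumption about this host:

# Condensed equivalent of the logged AddBridgeCommand / AddPortCommand /
# DbSetCommand transaction, using ovsdbapp's Open_vSwitch idl directly.
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

idl = connection.OvsdbIdl.from_server(
    'unix:/var/run/openvswitch/db.sock', 'Open_vSwitch')
api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

with api.transaction(check_error=True) as txn:
    txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
    txn.add(api.add_port('br-int', 'tap53ab68f2-68', may_exist=True))
    txn.add(api.db_set(
        'Interface', 'tap53ab68f2-68',
        ('external_ids', {
            'iface-id': '53ab68f2-6888-4d96-9480-47e55e38f422',
            'iface-status': 'active',
            'attached-mac': 'fa:16:3e:a6:0c:ea',
            'vm-uuid': '4e045c2f-f0fd-4171-b724-3e38bd7ec4eb'})))
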
Dec  3 18:58:39 compute-0 nova_compute[348325]: 2025-12-03 18:58:39.085 348329 DEBUG nova.virt.libvirt.driver [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  3 18:58:39 compute-0 nova_compute[348325]: 2025-12-03 18:58:39.085 348329 DEBUG nova.virt.libvirt.driver [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  3 18:58:39 compute-0 nova_compute[348325]: 2025-12-03 18:58:39.085 348329 DEBUG nova.virt.libvirt.driver [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] No VIF found with MAC fa:16:3e:a6:0c:ea, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  3 18:58:39 compute-0 nova_compute[348325]: 2025-12-03 18:58:39.086 348329 INFO nova.virt.libvirt.driver [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] Using config drive#033[00m
Dec  3 18:58:39 compute-0 nova_compute[348325]: 2025-12-03 18:58:39.124 348329 DEBUG nova.storage.rbd_utils [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] rbd image 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 18:58:39 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 18:58:39 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1792: 321 pgs: 321 active+clean; 229 MiB data, 360 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec  3 18:58:40 compute-0 nova_compute[348325]: 2025-12-03 18:58:40.373 348329 INFO nova.virt.libvirt.driver [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] Creating config drive at /var/lib/nova/instances/4e045c2f-f0fd-4171-b724-3e38bd7ec4eb/disk.config#033[00m
Dec  3 18:58:40 compute-0 nova_compute[348325]: 2025-12-03 18:58:40.380 348329 DEBUG oslo_concurrency.processutils [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4e045c2f-f0fd-4171-b724-3e38bd7ec4eb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_351z8e2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:58:40 compute-0 nova_compute[348325]: 2025-12-03 18:58:40.507 348329 DEBUG oslo_concurrency.processutils [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4e045c2f-f0fd-4171-b724-3e38bd7ec4eb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_351z8e2" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
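
The config drive is an ISO 9660 image labelled config-2, built with Joliet (-J) and Rock Ridge (-r) extensions from a temporary directory of metadata files. The logged invocation, reproduced via subprocess; the source directory here is a placeholder for nova's temp dir (/tmp/tmp_351z8e2 in the log):

# Rebuild a config-2 ISO with the same mkisofs flags the log shows.
import subprocess

subprocess.run(
    ['/usr/bin/mkisofs', '-o', '/tmp/disk.config',
     '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
     '-publisher', 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
     '-quiet', '-J', '-r', '-V', 'config-2',
     '/tmp/configdrive-contents'],  # placeholder for nova's temp dir
    check=True)
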
Dec  3 18:58:40 compute-0 nova_compute[348325]: 2025-12-03 18:58:40.544 348329 DEBUG nova.storage.rbd_utils [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] rbd image 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 18:58:40 compute-0 nova_compute[348325]: 2025-12-03 18:58:40.551 348329 DEBUG oslo_concurrency.processutils [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4e045c2f-f0fd-4171-b724-3e38bd7ec4eb/disk.config 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:58:40 compute-0 nova_compute[348325]: 2025-12-03 18:58:40.759 348329 DEBUG oslo_concurrency.processutils [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4e045c2f-f0fd-4171-b724-3e38bd7ec4eb/disk.config 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.209s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:58:40 compute-0 nova_compute[348325]: 2025-12-03 18:58:40.760 348329 INFO nova.virt.libvirt.driver [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] Deleting local config drive /var/lib/nova/instances/4e045c2f-f0fd-4171-b724-3e38bd7ec4eb/disk.config because it was imported into RBD.#033[00m
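
Because this deployment keeps instance disks in the vms RBD pool, the freshly built ISO is uploaded with "rbd import" and the local copy deleted; the cdrom <disk> in the guest XML above already points at the resulting ..._disk.config image. The same import, scripted with every argument taken verbatim from the logged command:

# The logged "rbd import" of the config drive, scripted.
import subprocess

subprocess.run(
    ['rbd', 'import', '--pool', 'vms',
     '/var/lib/nova/instances/4e045c2f-f0fd-4171-b724-3e38bd7ec4eb/disk.config',
     '4e045c2f-f0fd-4171-b724-3e38bd7ec4eb_disk.config',
     '--image-format=2', '--id', 'openstack',
     '--conf', '/etc/ceph/ceph.conf'],
    check=True)
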
Dec  3 18:58:40 compute-0 kernel: tap53ab68f2-68: entered promiscuous mode
Dec  3 18:58:40 compute-0 NetworkManager[49087]: <info>  [1764788320.8188] manager: (tap53ab68f2-68): new Tun device (/org/freedesktop/NetworkManager/Devices/59)
Dec  3 18:58:40 compute-0 ovn_controller[89305]: 2025-12-03T18:58:40Z|00114|binding|INFO|Claiming lport 53ab68f2-6888-4d96-9480-47e55e38f422 for this chassis.
Dec  3 18:58:40 compute-0 ovn_controller[89305]: 2025-12-03T18:58:40Z|00115|binding|INFO|53ab68f2-6888-4d96-9480-47e55e38f422: Claiming fa:16:3e:a6:0c:ea 10.100.0.3
Dec  3 18:58:40 compute-0 nova_compute[348325]: 2025-12-03 18:58:40.820 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:58:40 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:40.833 286999 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a6:0c:ea 10.100.0.3'], port_security=['fa:16:3e:a6:0c:ea 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '4e045c2f-f0fd-4171-b724-3e38bd7ec4eb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dbd0831a-c570-4257-bca6-ab48802d60d7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0e342f56e114484b986071d1dfb8656a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0660123c-0df0-4256-8ef5-c6d73369a9fb c9884ab7-1707-4498-81ec-fcd45f0f391c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e3c144a3-104d-4043-bf40-c75e02dd90b0, chassis=[<ovs.db.idl.Row object at 0x7f81e3e96760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f81e3e96760>], logical_port=53ab68f2-6888-4d96-9480-47e55e38f422) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 18:58:40 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:40.835 286999 INFO neutron.agent.ovn.metadata.agent [-] Port 53ab68f2-6888-4d96-9480-47e55e38f422 in datapath dbd0831a-c570-4257-bca6-ab48802d60d7 bound to our chassis#033[00m
Dec  3 18:58:40 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:40.837 286999 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network dbd0831a-c570-4257-bca6-ab48802d60d7#033[00m
Dec  3 18:58:40 compute-0 ovn_controller[89305]: 2025-12-03T18:58:40Z|00116|binding|INFO|Setting lport 53ab68f2-6888-4d96-9480-47e55e38f422 ovn-installed in OVS
Dec  3 18:58:40 compute-0 ovn_controller[89305]: 2025-12-03T18:58:40Z|00117|binding|INFO|Setting lport 53ab68f2-6888-4d96-9480-47e55e38f422 up in Southbound
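
ovn-controller claims the lport because the tap interface's external_ids:iface-id matches a southbound Port_Binding whose requested-chassis is compute-0; marking the port up in the southbound database is what Neutron translates into the network-vif-plugged event nova registered for at 18:58:38.980. One way to inspect that binding from the chassis, assuming ovn-sbctl on this host can reach the southbound database:

# Show the southbound Port_Binding ovn-controller just claimed.
import subprocess

subprocess.run(
    ['ovn-sbctl', 'find', 'Port_Binding',
     'logical_port=53ab68f2-6888-4d96-9480-47e55e38f422'],
    check=True)
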
Dec  3 18:58:40 compute-0 nova_compute[348325]: 2025-12-03 18:58:40.853 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:58:40 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:40.852 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[ba6a8f60-fc2d-4e6a-a944-f747f5f72a16]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:58:40 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:40.854 286999 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapdbd0831a-c1 in ovnmeta-dbd0831a-c570-4257-bca6-ab48802d60d7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  3 18:58:40 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:40.856 411759 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapdbd0831a-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  3 18:58:40 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:40.856 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[829000d9-9871-46a2-8611-52c20b6b0a1a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:58:40 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:40.858 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[a0b1dced-a151-420c-b855-f5e4f14c6b55]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:58:40 compute-0 systemd-udevd[445048]: Network interface NamePolicy= disabled on kernel command line.
Dec  3 18:58:40 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:40.877 287110 DEBUG oslo.privsep.daemon [-] privsep: reply[638911a8-2b57-4591-97a6-e3340d2784a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:58:40 compute-0 systemd-machined[138702]: New machine qemu-11-instance-0000000b.
Dec  3 18:58:40 compute-0 systemd[1]: Started Virtual Machine qemu-11-instance-0000000b.
Dec  3 18:58:40 compute-0 NetworkManager[49087]: <info>  [1764788320.8928] device (tap53ab68f2-68): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  3 18:58:40 compute-0 NetworkManager[49087]: <info>  [1764788320.8974] device (tap53ab68f2-68): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  3 18:58:40 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:40.899 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[4b820231-8354-4fcf-989d-5cd741f290b9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:58:40 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:40.937 411797 DEBUG oslo.privsep.daemon [-] privsep: reply[be685ee6-35bf-4844-9704-e2675d23a69a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:58:40 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:40.946 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[54b652ea-e51e-42b1-8260-05e60505ed76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:58:40 compute-0 NetworkManager[49087]: <info>  [1764788320.9512] manager: (tapdbd0831a-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/60)
Dec  3 18:58:40 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:40.972 411797 DEBUG oslo.privsep.daemon [-] privsep: reply[70714aec-0bf8-44e4-b8dc-7873b7031508]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:58:40 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:40.980 411797 DEBUG oslo.privsep.daemon [-] privsep: reply[06f1dc59-499f-4e86-a4f7-33ed4535be2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:58:41 compute-0 NetworkManager[49087]: <info>  [1764788321.0022] device (tapdbd0831a-c0): carrier: link connected
Dec  3 18:58:41 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:41.006 411797 DEBUG oslo.privsep.daemon [-] privsep: reply[4a0b3fa9-0906-4bb3-9159-cd2c7a73a6c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:58:41 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:41.022 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[d318f801-33c2-4dbe-a00c-43211abcfbe7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdbd0831a-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:d6:b0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 35], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 660094, 'reachable_time': 44365, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 445080, 'error': None, 'target': 'ovnmeta-dbd0831a-c570-4257-bca6-ab48802d60d7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:58:41 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:41.038 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[e445441a-f370-4a0b-abb9-165b4bb137ad]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedd:d6b0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 660094, 'tstamp': 660094}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 445081, 'error': None, 'target': 'ovnmeta-dbd0831a-c570-4257-bca6-ab48802d60d7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:58:41 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:41.056 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[73be7b18-1199-4c40-a0d9-17cc7a418f58]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdbd0831a-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:d6:b0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 35], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 660094, 'reachable_time': 44365, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 445082, 'error': None, 'target': 'ovnmeta-dbd0831a-c570-4257-bca6-ab48802d60d7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:58:41 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:41.087 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[efdf82da-e9e8-442c-b7d8-d9b38d39260f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:58:41 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:41.150 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[0c1343b5-10da-440a-88c1-ed7c9b8bf92c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
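The large privsep replies above are netlink dumps taken inside the ovnmeta-dbd0831a-c570-4257-bca6-ab48802d60d7 namespace (the 'target' field in each header): RTM_NEWLINK dumps of the veth leg tapdbd0831a-c1 followed by an RTM_NEWADDR dump of its link-local address. The attrs/get_attr-style payload is pyroute2's message format; the agent routes these calls through the oslo.privsep daemon because entering another network namespace needs elevated privileges. A minimal sketch reproducing the same dumps with pyroute2, assuming the same privileges privsep provides:

    from pyroute2 import NetNS

    ns_name = 'ovnmeta-dbd0831a-c570-4257-bca6-ab48802d60d7'  # 'target' above
    with NetNS(ns_name) as ns:
        for link in ns.get_links():                        # RTM_NEWLINK dump
            print(link.get_attr('IFLA_IFNAME'),
                  link.get_attr('IFLA_OPERSTATE'),
                  link.get_attr('IFLA_ADDRESS'))
            for addr in ns.get_addr(index=link['index']):  # RTM_NEWADDR dump
                print('   ', addr.get_attr('IFA_ADDRESS'))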
Dec  3 18:58:41 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:41.151 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdbd0831a-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:58:41 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:41.152 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 18:58:41 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:41.152 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdbd0831a-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:58:41 compute-0 kernel: tapdbd0831a-c0: entered promiscuous mode
Dec  3 18:58:41 compute-0 NetworkManager[49087]: <info>  [1764788321.1553] manager: (tapdbd0831a-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/61)
Dec  3 18:58:41 compute-0 nova_compute[348325]: 2025-12-03 18:58:41.157 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:58:41 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:41.158 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdbd0831a-c0, col_values=(('external_ids', {'iface-id': 'a490a544-649c-430c-bdd4-7e78ebd7f7b9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
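The three ovsdbapp commands above re-plug the metadata port: DelPortCommand clears any stale copy of tapdbd0831a-c0 from br-ex (a no-op here, hence "Transaction caused no change"), AddPortCommand attaches it to br-int, and DbSetCommand writes external_ids:iface-id=a490a544-649c-430c-bdd4-7e78ebd7f7b9, the hook ovn-controller watches to claim or release the logical port (see the "Releasing lport" line that follows). A hedged standalone equivalent using ovsdbapp's Open_vSwitch API; the socket path below is an assumed default, not taken from this log:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')   # assumed path
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tapdbd0831a-c0', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tapdbd0831a-c0', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapdbd0831a-c0',
            ('external_ids', {'iface-id': 'a490a544-649c-430c-bdd4-7e78ebd7f7b9'})))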
Dec  3 18:58:41 compute-0 ovn_controller[89305]: 2025-12-03T18:58:41Z|00118|binding|INFO|Releasing lport a490a544-649c-430c-bdd4-7e78ebd7f7b9 from this chassis (sb_readonly=0)
Dec  3 18:58:41 compute-0 nova_compute[348325]: 2025-12-03 18:58:41.173 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:58:41 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:41.173 286999 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/dbd0831a-c570-4257-bca6-ab48802d60d7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/dbd0831a-c570-4257-bca6-ab48802d60d7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  3 18:58:41 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:41.174 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[d9719695-4acd-422e-9c0d-cc268ed2b122]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:58:41 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:41.175 286999 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  3 18:58:41 compute-0 ovn_metadata_agent[286994]: global
Dec  3 18:58:41 compute-0 ovn_metadata_agent[286994]:    log         /dev/log local0 debug
Dec  3 18:58:41 compute-0 ovn_metadata_agent[286994]:    log-tag     haproxy-metadata-proxy-dbd0831a-c570-4257-bca6-ab48802d60d7
Dec  3 18:58:41 compute-0 ovn_metadata_agent[286994]:    user        root
Dec  3 18:58:41 compute-0 ovn_metadata_agent[286994]:    group       root
Dec  3 18:58:41 compute-0 ovn_metadata_agent[286994]:    maxconn     1024
Dec  3 18:58:41 compute-0 ovn_metadata_agent[286994]:    pidfile     /var/lib/neutron/external/pids/dbd0831a-c570-4257-bca6-ab48802d60d7.pid.haproxy
Dec  3 18:58:41 compute-0 ovn_metadata_agent[286994]:    daemon
Dec  3 18:58:41 compute-0 ovn_metadata_agent[286994]: 
Dec  3 18:58:41 compute-0 ovn_metadata_agent[286994]: defaults
Dec  3 18:58:41 compute-0 ovn_metadata_agent[286994]:    log global
Dec  3 18:58:41 compute-0 ovn_metadata_agent[286994]:    mode http
Dec  3 18:58:41 compute-0 ovn_metadata_agent[286994]:    option httplog
Dec  3 18:58:41 compute-0 ovn_metadata_agent[286994]:    option dontlognull
Dec  3 18:58:41 compute-0 ovn_metadata_agent[286994]:    option http-server-close
Dec  3 18:58:41 compute-0 ovn_metadata_agent[286994]:    option forwardfor
Dec  3 18:58:41 compute-0 ovn_metadata_agent[286994]:    retries                 3
Dec  3 18:58:41 compute-0 ovn_metadata_agent[286994]:    timeout http-request    30s
Dec  3 18:58:41 compute-0 ovn_metadata_agent[286994]:    timeout connect         30s
Dec  3 18:58:41 compute-0 ovn_metadata_agent[286994]:    timeout client          32s
Dec  3 18:58:41 compute-0 ovn_metadata_agent[286994]:    timeout server          32s
Dec  3 18:58:41 compute-0 ovn_metadata_agent[286994]:    timeout http-keep-alive 30s
Dec  3 18:58:41 compute-0 ovn_metadata_agent[286994]: 
Dec  3 18:58:41 compute-0 ovn_metadata_agent[286994]: 
Dec  3 18:58:41 compute-0 ovn_metadata_agent[286994]: listen listener
Dec  3 18:58:41 compute-0 ovn_metadata_agent[286994]:    bind 169.254.169.254:80
Dec  3 18:58:41 compute-0 ovn_metadata_agent[286994]:    server metadata /var/lib/neutron/metadata_proxy
Dec  3 18:58:41 compute-0 ovn_metadata_agent[286994]:    http-request add-header X-OVN-Network-ID dbd0831a-c570-4257-bca6-ab48802d60d7
Dec  3 18:58:41 compute-0 ovn_metadata_agent[286994]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  3 18:58:41 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:41.176 286999 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-dbd0831a-c570-4257-bca6-ab48802d60d7', 'env', 'PROCESS_TAG=haproxy-dbd0831a-c570-4257-bca6-ab48802d60d7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/dbd0831a-c570-4257-bca6-ab48802d60d7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
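The haproxy_cfg dumped above is the per-network metadata proxy: inside the ovnmeta- namespace it binds the well-known metadata address 169.254.169.254:80, and its lone backend "server metadata /var/lib/neutron/metadata_proxy" is a path-style address, which haproxy treats as a UNIX socket, so every request is relayed to the metadata agent's socket with an added X-OVN-Network-ID header identifying the network. (The earlier ENOENT on the .pid.haproxy file is just the pre-spawn check that no proxy is already running.) A sketch of rendering such a config; _CFG and render_proxy_cfg are illustrative names, not the agent's internals (the real template lives in neutron/agent/ovn/metadata/driver.py, shown in use above):

    from string import Template

    _CFG = Template("""\
    global
        pidfile $pidfile
        daemon

    listen listener
        bind 169.254.169.254:80
        server metadata $socket
        http-request add-header X-OVN-Network-ID $network_id
    """)

    def render_proxy_cfg(network_id, socket='/var/lib/neutron/metadata_proxy'):
        return _CFG.substitute(
            pidfile='/var/lib/neutron/external/pids/%s.pid.haproxy' % network_id,
            socket=socket,
            network_id=network_id)

    print(render_proxy_cfg('dbd0831a-c570-4257-bca6-ab48802d60d7'))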
Dec  3 18:58:41 compute-0 nova_compute[348325]: 2025-12-03 18:58:41.282 348329 DEBUG nova.network.neutron [req-577335e8-01b4-412d-9c4d-fe0b2bf5cbfc req-e8bbc6ea-b875-4378-950f-d0f0b17a43b7 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] Updated VIF entry in instance network info cache for port 53ab68f2-6888-4d96-9480-47e55e38f422. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  3 18:58:41 compute-0 nova_compute[348325]: 2025-12-03 18:58:41.282 348329 DEBUG nova.network.neutron [req-577335e8-01b4-412d-9c4d-fe0b2bf5cbfc req-e8bbc6ea-b875-4378-950f-d0f0b17a43b7 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] Updating instance_info_cache with network_info: [{"id": "53ab68f2-6888-4d96-9480-47e55e38f422", "address": "fa:16:3e:a6:0c:ea", "network": {"id": "dbd0831a-c570-4257-bca6-ab48802d60d7", "bridge": "br-int", "label": "tempest-TestServerBasicOps-940353231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0e342f56e114484b986071d1dfb8656a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53ab68f2-68", "ovs_interfaceid": "53ab68f2-6888-4d96-9480-47e55e38f422", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 18:58:41 compute-0 nova_compute[348325]: 2025-12-03 18:58:41.303 348329 DEBUG oslo_concurrency.lockutils [req-577335e8-01b4-412d-9c4d-fe0b2bf5cbfc req-e8bbc6ea-b875-4378-950f-d0f0b17a43b7 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Releasing lock "refresh_cache-4e045c2f-f0fd-4171-b724-3e38bd7ec4eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 18:58:41 compute-0 nova_compute[348325]: 2025-12-03 18:58:41.486 348329 DEBUG nova.compute.manager [req-731db4c6-af78-4b95-8985-85c63a263ff4 req-925e77d8-a620-4bb2-8917-388903375277 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] Received event network-vif-plugged-53ab68f2-6888-4d96-9480-47e55e38f422 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 18:58:41 compute-0 nova_compute[348325]: 2025-12-03 18:58:41.486 348329 DEBUG oslo_concurrency.lockutils [req-731db4c6-af78-4b95-8985-85c63a263ff4 req-925e77d8-a620-4bb2-8917-388903375277 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "4e045c2f-f0fd-4171-b724-3e38bd7ec4eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:58:41 compute-0 nova_compute[348325]: 2025-12-03 18:58:41.487 348329 DEBUG oslo_concurrency.lockutils [req-731db4c6-af78-4b95-8985-85c63a263ff4 req-925e77d8-a620-4bb2-8917-388903375277 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "4e045c2f-f0fd-4171-b724-3e38bd7ec4eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:58:41 compute-0 nova_compute[348325]: 2025-12-03 18:58:41.487 348329 DEBUG oslo_concurrency.lockutils [req-731db4c6-af78-4b95-8985-85c63a263ff4 req-925e77d8-a620-4bb2-8917-388903375277 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "4e045c2f-f0fd-4171-b724-3e38bd7ec4eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:58:41 compute-0 nova_compute[348325]: 2025-12-03 18:58:41.487 348329 DEBUG nova.compute.manager [req-731db4c6-af78-4b95-8985-85c63a263ff4 req-925e77d8-a620-4bb2-8917-388903375277 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] Processing event network-vif-plugged-53ab68f2-6888-4d96-9480-47e55e38f422 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
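The Acquiring/acquired/released trio above is oslo.concurrency's named in-process lock: nova serializes pop_instance_event per instance so that external events from neutron (here network-vif-plugged) are matched to their waiters one at a time. Equivalent standalone usage of the same primitive, with the lock name taken from the log:

    from oslo_concurrency import lockutils

    with lockutils.lock('4e045c2f-f0fd-4171-b724-3e38bd7ec4eb-events'):
        pass  # pop the waiter for network-vif-plugged and dispatch the event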
Dec  3 18:58:41 compute-0 podman[445132]: 2025-12-03 18:58:41.579776565 +0000 UTC m=+0.043016530 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  3 18:58:41 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1793: 321 pgs: 321 active+clean; 229 MiB data, 360 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec  3 18:58:41 compute-0 podman[445132]: 2025-12-03 18:58:41.743433601 +0000 UTC m=+0.206673546 container create 1b683f986b7ff4ade4a42fc72eeab559390b8804744a62f9426d55948a8c56b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dbd0831a-c570-4257-bca6-ab48802d60d7, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec  3 18:58:41 compute-0 systemd[1]: Started libpod-conmon-1b683f986b7ff4ade4a42fc72eeab559390b8804744a62f9426d55948a8c56b1.scope.
Dec  3 18:58:41 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:58:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d3e9031c37ad1277ab1ead266fc1de47dd8cd034942aacec420b2dbb5016d77/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  3 18:58:41 compute-0 podman[445132]: 2025-12-03 18:58:41.845989446 +0000 UTC m=+0.309229401 container init 1b683f986b7ff4ade4a42fc72eeab559390b8804744a62f9426d55948a8c56b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dbd0831a-c570-4257-bca6-ab48802d60d7, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Dec  3 18:58:41 compute-0 podman[445132]: 2025-12-03 18:58:41.854092273 +0000 UTC m=+0.317332218 container start 1b683f986b7ff4ade4a42fc72eeab559390b8804744a62f9426d55948a8c56b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dbd0831a-c570-4257-bca6-ab48802d60d7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  3 18:58:41 compute-0 neutron-haproxy-ovnmeta-dbd0831a-c570-4257-bca6-ab48802d60d7[445169]: [NOTICE]   (445174) : New worker (445177) forked
Dec  3 18:58:41 compute-0 neutron-haproxy-ovnmeta-dbd0831a-c570-4257-bca6-ab48802d60d7[445169]: [NOTICE]   (445174) : Loading success.
Dec  3 18:58:41 compute-0 nova_compute[348325]: 2025-12-03 18:58:41.903 348329 DEBUG nova.virt.driver [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Emitting event <LifecycleEvent: 1764788321.9033098, 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 18:58:41 compute-0 nova_compute[348325]: 2025-12-03 18:58:41.904 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] VM Started (Lifecycle Event)#033[00m
Dec  3 18:58:41 compute-0 nova_compute[348325]: 2025-12-03 18:58:41.906 348329 DEBUG nova.compute.manager [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  3 18:58:41 compute-0 nova_compute[348325]: 2025-12-03 18:58:41.910 348329 DEBUG nova.virt.libvirt.driver [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  3 18:58:41 compute-0 nova_compute[348325]: 2025-12-03 18:58:41.915 348329 INFO nova.virt.libvirt.driver [-] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] Instance spawned successfully.#033[00m
Dec  3 18:58:41 compute-0 nova_compute[348325]: 2025-12-03 18:58:41.915 348329 DEBUG nova.virt.libvirt.driver [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  3 18:58:41 compute-0 nova_compute[348325]: 2025-12-03 18:58:41.939 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 18:58:41 compute-0 nova_compute[348325]: 2025-12-03 18:58:41.950 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  3 18:58:41 compute-0 nova_compute[348325]: 2025-12-03 18:58:41.955 348329 DEBUG nova.virt.libvirt.driver [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 18:58:41 compute-0 nova_compute[348325]: 2025-12-03 18:58:41.956 348329 DEBUG nova.virt.libvirt.driver [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 18:58:41 compute-0 nova_compute[348325]: 2025-12-03 18:58:41.956 348329 DEBUG nova.virt.libvirt.driver [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 18:58:41 compute-0 nova_compute[348325]: 2025-12-03 18:58:41.956 348329 DEBUG nova.virt.libvirt.driver [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 18:58:41 compute-0 nova_compute[348325]: 2025-12-03 18:58:41.957 348329 DEBUG nova.virt.libvirt.driver [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 18:58:41 compute-0 nova_compute[348325]: 2025-12-03 18:58:41.957 348329 DEBUG nova.virt.libvirt.driver [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 18:58:41 compute-0 nova_compute[348325]: 2025-12-03 18:58:41.992 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  3 18:58:41 compute-0 nova_compute[348325]: 2025-12-03 18:58:41.993 348329 DEBUG nova.virt.driver [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Emitting event <LifecycleEvent: 1764788321.9063704, 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 18:58:41 compute-0 nova_compute[348325]: 2025-12-03 18:58:41.993 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] VM Paused (Lifecycle Event)#033[00m
Dec  3 18:58:42 compute-0 nova_compute[348325]: 2025-12-03 18:58:42.022 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 18:58:42 compute-0 nova_compute[348325]: 2025-12-03 18:58:42.026 348329 DEBUG nova.virt.driver [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Emitting event <LifecycleEvent: 1764788321.9092894, 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 18:58:42 compute-0 nova_compute[348325]: 2025-12-03 18:58:42.026 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] VM Resumed (Lifecycle Event)#033[00m
Dec  3 18:58:42 compute-0 nova_compute[348325]: 2025-12-03 18:58:42.054 348329 INFO nova.compute.manager [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] Took 9.59 seconds to spawn the instance on the hypervisor.#033[00m
Dec  3 18:58:42 compute-0 nova_compute[348325]: 2025-12-03 18:58:42.055 348329 DEBUG nova.compute.manager [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 18:58:42 compute-0 nova_compute[348325]: 2025-12-03 18:58:42.056 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 18:58:42 compute-0 nova_compute[348325]: 2025-12-03 18:58:42.064 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  3 18:58:42 compute-0 nova_compute[348325]: 2025-12-03 18:58:42.097 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  3 18:58:42 compute-0 nova_compute[348325]: 2025-12-03 18:58:42.140 348329 INFO nova.compute.manager [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] Took 10.92 seconds to build instance.#033[00m
Dec  3 18:58:42 compute-0 nova_compute[348325]: 2025-12-03 18:58:42.155 348329 DEBUG oslo_concurrency.lockutils [None req-fd0a885b-7f6a-4272-b92d-f3ed15d66066 d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] Lock "4e045c2f-f0fd-4171-b724-3e38bd7ec4eb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.019s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
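The Started/Paused/Resumed sequence above is nova relaying libvirt domain lifecycle callbacks (libvirt's SUSPENDED/RESUMED pair at the end of spawn is reported as Paused/Resumed), and sync_power_state deliberately skips reconciliation while task_state is still "spawning". A minimal standalone listener for the same lifecycle events, sketched with the libvirt-python bindings rather than nova's wrapper:

    import libvirt

    NAMES = {libvirt.VIR_DOMAIN_EVENT_STARTED:   'Started',
             libvirt.VIR_DOMAIN_EVENT_SUSPENDED: 'Paused',
             libvirt.VIR_DOMAIN_EVENT_RESUMED:   'Resumed'}

    def lifecycle_cb(conn, dom, event, detail, opaque):
        print(dom.UUIDString(), NAMES.get(event, event))

    libvirt.virEventRegisterDefaultImpl()        # must precede open()
    conn = libvirt.open('qemu:///system')
    conn.domainEventRegisterAny(None, libvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE,
                                lifecycle_cb, None)
    while True:                                  # pump the event loop
        libvirt.virEventRunDefaultImpl()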
Dec  3 18:58:43 compute-0 nova_compute[348325]: 2025-12-03 18:58:43.399 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:58:43 compute-0 nova_compute[348325]: 2025-12-03 18:58:43.583 348329 DEBUG nova.compute.manager [req-31088448-3ccb-42e7-8a80-d405fc9ab902 req-d7664f6d-4a5a-4fcc-9de4-8d6a679996ab 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] Received event network-vif-plugged-53ab68f2-6888-4d96-9480-47e55e38f422 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 18:58:43 compute-0 nova_compute[348325]: 2025-12-03 18:58:43.584 348329 DEBUG oslo_concurrency.lockutils [req-31088448-3ccb-42e7-8a80-d405fc9ab902 req-d7664f6d-4a5a-4fcc-9de4-8d6a679996ab 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "4e045c2f-f0fd-4171-b724-3e38bd7ec4eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:58:43 compute-0 nova_compute[348325]: 2025-12-03 18:58:43.584 348329 DEBUG oslo_concurrency.lockutils [req-31088448-3ccb-42e7-8a80-d405fc9ab902 req-d7664f6d-4a5a-4fcc-9de4-8d6a679996ab 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "4e045c2f-f0fd-4171-b724-3e38bd7ec4eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:58:43 compute-0 nova_compute[348325]: 2025-12-03 18:58:43.585 348329 DEBUG oslo_concurrency.lockutils [req-31088448-3ccb-42e7-8a80-d405fc9ab902 req-d7664f6d-4a5a-4fcc-9de4-8d6a679996ab 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "4e045c2f-f0fd-4171-b724-3e38bd7ec4eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:58:43 compute-0 nova_compute[348325]: 2025-12-03 18:58:43.585 348329 DEBUG nova.compute.manager [req-31088448-3ccb-42e7-8a80-d405fc9ab902 req-d7664f6d-4a5a-4fcc-9de4-8d6a679996ab 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] No waiting events found dispatching network-vif-plugged-53ab68f2-6888-4d96-9480-47e55e38f422 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  3 18:58:43 compute-0 nova_compute[348325]: 2025-12-03 18:58:43.585 348329 WARNING nova.compute.manager [req-31088448-3ccb-42e7-8a80-d405fc9ab902 req-d7664f6d-4a5a-4fcc-9de4-8d6a679996ab 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] Received unexpected event network-vif-plugged-53ab68f2-6888-4d96-9480-47e55e38f422 for instance with vm_state active and task_state None.#033[00m
Dec  3 18:58:43 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1794: 321 pgs: 321 active+clean; 229 MiB data, 360 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Dec  3 18:58:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:58:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:58:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:58:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:58:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:58:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:58:44 compute-0 nova_compute[348325]: 2025-12-03 18:58:44.004 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:58:44 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 18:58:45 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1795: 321 pgs: 321 active+clean; 229 MiB data, 360 MiB used, 60 GiB / 60 GiB avail; 772 KiB/s rd, 1.8 MiB/s wr, 58 op/s
Dec  3 18:58:46 compute-0 nova_compute[348325]: 2025-12-03 18:58:46.312 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:58:46 compute-0 podman[445187]: 2025-12-03 18:58:46.938794557 +0000 UTC m=+0.100389462 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  3 18:58:46 compute-0 podman[445186]: 2025-12-03 18:58:46.945692445 +0000 UTC m=+0.113570153 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec  3 18:58:46 compute-0 podman[445188]: 2025-12-03 18:58:46.962840424 +0000 UTC m=+0.121559689 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, vcs-type=git, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, build-date=2025-08-20T13:12:41, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, maintainer=Red Hat, Inc.)
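The three health_status events above come from podman's built-in healthchecks; each line embeds the container's deployment spec in config_data, and its 'healthcheck' block is what drives the periodic probe. An illustrative mapping from that spec to the podman flags that produce such probes (the translation is a sketch of what the managed_by=edpm_ansible tooling sets up, not its actual code; spec values are copied from the node_exporter line):

    spec = {'image': 'quay.io/prometheus/node-exporter:v1.5.0',
            'healthcheck': {'test': '/openstack/healthcheck node_exporter',
                            'mount': '/var/lib/openstack/healthchecks/node_exporter'}}

    cmd = ['podman', 'run', '--detach', '--name', 'node_exporter',
           '--health-cmd', spec['healthcheck']['test'],
           '--volume', spec['healthcheck']['mount'] + ':/openstack:ro,z',
           spec['image']]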
Dec  3 18:58:47 compute-0 nova_compute[348325]: 2025-12-03 18:58:47.167 348329 DEBUG nova.compute.manager [req-81e451b3-89b8-433a-8a80-ec35f4eb3a15 req-bac87eea-e23d-4745-becb-110894e25821 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] Received event network-changed-53ab68f2-6888-4d96-9480-47e55e38f422 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 18:58:47 compute-0 nova_compute[348325]: 2025-12-03 18:58:47.168 348329 DEBUG nova.compute.manager [req-81e451b3-89b8-433a-8a80-ec35f4eb3a15 req-bac87eea-e23d-4745-becb-110894e25821 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] Refreshing instance network info cache due to event network-changed-53ab68f2-6888-4d96-9480-47e55e38f422. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  3 18:58:47 compute-0 nova_compute[348325]: 2025-12-03 18:58:47.168 348329 DEBUG oslo_concurrency.lockutils [req-81e451b3-89b8-433a-8a80-ec35f4eb3a15 req-bac87eea-e23d-4745-becb-110894e25821 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "refresh_cache-4e045c2f-f0fd-4171-b724-3e38bd7ec4eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 18:58:47 compute-0 nova_compute[348325]: 2025-12-03 18:58:47.168 348329 DEBUG oslo_concurrency.lockutils [req-81e451b3-89b8-433a-8a80-ec35f4eb3a15 req-bac87eea-e23d-4745-becb-110894e25821 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquired lock "refresh_cache-4e045c2f-f0fd-4171-b724-3e38bd7ec4eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 18:58:47 compute-0 nova_compute[348325]: 2025-12-03 18:58:47.168 348329 DEBUG nova.network.neutron [req-81e451b3-89b8-433a-8a80-ec35f4eb3a15 req-bac87eea-e23d-4745-becb-110894e25821 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] Refreshing network info cache for port 53ab68f2-6888-4d96-9480-47e55e38f422 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  3 18:58:47 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1796: 321 pgs: 321 active+clean; 229 MiB data, 360 MiB used, 60 GiB / 60 GiB avail; 925 KiB/s rd, 790 KiB/s wr, 62 op/s
Dec  3 18:58:47 compute-0 nova_compute[348325]: 2025-12-03 18:58:47.889 348329 DEBUG oslo_concurrency.lockutils [None req-6fe909a7-db18-4676-b163-bce1b76ae7cd a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Acquiring lock "eff2304f-0e67-4c93-ae65-20d4ddb87625" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:58:47 compute-0 nova_compute[348325]: 2025-12-03 18:58:47.890 348329 DEBUG oslo_concurrency.lockutils [None req-6fe909a7-db18-4676-b163-bce1b76ae7cd a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Lock "eff2304f-0e67-4c93-ae65-20d4ddb87625" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:58:47 compute-0 nova_compute[348325]: 2025-12-03 18:58:47.890 348329 INFO nova.compute.manager [None req-6fe909a7-db18-4676-b163-bce1b76ae7cd a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Rebooting instance#033[00m
Dec  3 18:58:47 compute-0 nova_compute[348325]: 2025-12-03 18:58:47.906 348329 DEBUG oslo_concurrency.lockutils [None req-6fe909a7-db18-4676-b163-bce1b76ae7cd a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Acquiring lock "refresh_cache-eff2304f-0e67-4c93-ae65-20d4ddb87625" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 18:58:47 compute-0 nova_compute[348325]: 2025-12-03 18:58:47.907 348329 DEBUG oslo_concurrency.lockutils [None req-6fe909a7-db18-4676-b163-bce1b76ae7cd a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Acquired lock "refresh_cache-eff2304f-0e67-4c93-ae65-20d4ddb87625" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 18:58:47 compute-0 nova_compute[348325]: 2025-12-03 18:58:47.907 348329 DEBUG nova.network.neutron [None req-6fe909a7-db18-4676-b163-bce1b76ae7cd a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  3 18:58:48 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Dec  3 18:58:48 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:58:48.342509) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 18:58:48 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Dec  3 18:58:48 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764788328342785, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 2080, "num_deletes": 251, "total_data_size": 3330746, "memory_usage": 3387160, "flush_reason": "Manual Compaction"}
Dec  3 18:58:48 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Dec  3 18:58:48 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764788328361923, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 3264015, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34783, "largest_seqno": 36862, "table_properties": {"data_size": 3254520, "index_size": 5989, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19349, "raw_average_key_size": 20, "raw_value_size": 3235497, "raw_average_value_size": 3387, "num_data_blocks": 265, "num_entries": 955, "num_filter_entries": 955, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764788114, "oldest_key_time": 1764788114, "file_creation_time": 1764788328, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a1ac3b74-8599-4a51-8b4c-6fd35a134427", "db_session_id": "TYOLZSJOOVNJYKF8Y1CE", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Dec  3 18:58:48 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 19261 microseconds, and 6888 cpu microseconds.
Dec  3 18:58:48 compute-0 ceph-mon[192802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 18:58:48 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:58:48.361983) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 3264015 bytes OK
Dec  3 18:58:48 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:58:48.362000) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Dec  3 18:58:48 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:58:48.364514) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Dec  3 18:58:48 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:58:48.364529) EVENT_LOG_v1 {"time_micros": 1764788328364524, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 18:58:48 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:58:48.364546) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 18:58:48 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 3322019, prev total WAL file size 3322019, number of live WAL files 2.
Dec  3 18:58:48 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 18:58:48 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:58:48.365715) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Dec  3 18:58:48 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 18:58:48 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(3187KB)], [80(7044KB)]
Dec  3 18:58:48 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764788328365769, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 10477741, "oldest_snapshot_seqno": -1}
Dec  3 18:58:48 compute-0 nova_compute[348325]: 2025-12-03 18:58:48.401 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:58:48 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 5716 keys, 8746392 bytes, temperature: kUnknown
Dec  3 18:58:48 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764788328427525, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 8746392, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8708530, "index_size": 22456, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14341, "raw_key_size": 144351, "raw_average_key_size": 25, "raw_value_size": 8605626, "raw_average_value_size": 1505, "num_data_blocks": 921, "num_entries": 5716, "num_filter_entries": 5716, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764784942, "oldest_key_time": 0, "file_creation_time": 1764788328, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a1ac3b74-8599-4a51-8b4c-6fd35a134427", "db_session_id": "TYOLZSJOOVNJYKF8Y1CE", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Dec  3 18:58:48 compute-0 ceph-mon[192802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 18:58:48 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:58:48.427687) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 8746392 bytes
Dec  3 18:58:48 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:58:48.429305) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 169.5 rd, 141.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 6.9 +0.0 blob) out(8.3 +0.0 blob), read-write-amplify(5.9) write-amplify(2.7) OK, records in: 6234, records dropped: 518 output_compression: NoCompression
Dec  3 18:58:48 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:58:48.429319) EVENT_LOG_v1 {"time_micros": 1764788328429312, "job": 46, "event": "compaction_finished", "compaction_time_micros": 61803, "compaction_time_cpu_micros": 25045, "output_level": 6, "num_output_files": 1, "total_output_size": 8746392, "num_input_records": 6234, "num_output_records": 5716, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 18:58:48 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 18:58:48 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764788328430047, "job": 46, "event": "table_file_deletion", "file_number": 82}
Dec  3 18:58:48 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 18:58:48 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764788328431138, "job": 46, "event": "table_file_deletion", "file_number": 80}
Dec  3 18:58:48 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:58:48.365593) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:58:48 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:58:48.431335) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:58:48 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:58:48.431340) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:58:48 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:58:48.431341) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:58:48 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:58:48.431343) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:58:48 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-18:58:48.431345) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 18:58:49 compute-0 nova_compute[348325]: 2025-12-03 18:58:49.007 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:58:49 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 18:58:49 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1797: 321 pgs: 321 active+clean; 229 MiB data, 360 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 566 KiB/s wr, 84 op/s
Dec  3 18:58:50 compute-0 nova_compute[348325]: 2025-12-03 18:58:50.049 348329 DEBUG nova.network.neutron [req-81e451b3-89b8-433a-8a80-ec35f4eb3a15 req-bac87eea-e23d-4745-becb-110894e25821 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] Updated VIF entry in instance network info cache for port 53ab68f2-6888-4d96-9480-47e55e38f422. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  3 18:58:50 compute-0 nova_compute[348325]: 2025-12-03 18:58:50.050 348329 DEBUG nova.network.neutron [req-81e451b3-89b8-433a-8a80-ec35f4eb3a15 req-bac87eea-e23d-4745-becb-110894e25821 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] Updating instance_info_cache with network_info: [{"id": "53ab68f2-6888-4d96-9480-47e55e38f422", "address": "fa:16:3e:a6:0c:ea", "network": {"id": "dbd0831a-c570-4257-bca6-ab48802d60d7", "bridge": "br-int", "label": "tempest-TestServerBasicOps-940353231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0e342f56e114484b986071d1dfb8656a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53ab68f2-68", "ovs_interfaceid": "53ab68f2-6888-4d96-9480-47e55e38f422", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 18:58:50 compute-0 nova_compute[348325]: 2025-12-03 18:58:50.088 348329 DEBUG oslo_concurrency.lockutils [req-81e451b3-89b8-433a-8a80-ec35f4eb3a15 req-bac87eea-e23d-4745-becb-110894e25821 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Releasing lock "refresh_cache-4e045c2f-f0fd-4171-b724-3e38bd7ec4eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 18:58:50 compute-0 ovn_controller[89305]: 2025-12-03T18:58:50Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7c:33:2c 10.100.0.11
Dec  3 18:58:50 compute-0 ovn_controller[89305]: 2025-12-03T18:58:50Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7c:33:2c 10.100.0.11
Dec  3 18:58:50 compute-0 podman[445250]: 2025-12-03 18:58:50.92706012 +0000 UTC m=+0.090105080 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  3 18:58:50 compute-0 podman[445251]: 2025-12-03 18:58:50.955304901 +0000 UTC m=+0.106743658 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  3 18:58:50 compute-0 podman[445249]: 2025-12-03 18:58:50.967051707 +0000 UTC m=+0.135396377 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, vendor=Red Hat, Inc., release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, maintainer=Red Hat, Inc., architecture=x86_64, version=9.4, io.openshift.tags=base rhel9, vcs-type=git, name=ubi9, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, com.redhat.component=ubi9-container)
Dec  3 18:58:51 compute-0 nova_compute[348325]: 2025-12-03 18:58:51.036 348329 DEBUG nova.network.neutron [None req-6fe909a7-db18-4676-b163-bce1b76ae7cd a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Updating instance_info_cache with network_info: [{"id": "b709b4ab-585a-4aed-9f06-3c9650d54c09", "address": "fa:16:3e:6e:88:19", "network": {"id": "c136d05b-f7ca-4f17-81e0-62c23fcd54a3", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-203684476-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1bc217751704d588f690e1b293cade8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb709b4ab-58", "ovs_interfaceid": "b709b4ab-585a-4aed-9f06-3c9650d54c09", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 18:58:51 compute-0 nova_compute[348325]: 2025-12-03 18:58:51.050 348329 DEBUG oslo_concurrency.lockutils [None req-6fe909a7-db18-4676-b163-bce1b76ae7cd a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Releasing lock "refresh_cache-eff2304f-0e67-4c93-ae65-20d4ddb87625" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 18:58:51 compute-0 nova_compute[348325]: 2025-12-03 18:58:51.052 348329 DEBUG nova.compute.manager [None req-6fe909a7-db18-4676-b163-bce1b76ae7cd a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 18:58:51 compute-0 kernel: tapb709b4ab-58 (unregistering): left promiscuous mode
Dec  3 18:58:51 compute-0 NetworkManager[49087]: <info>  [1764788331.2471] device (tapb709b4ab-58): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  3 18:58:51 compute-0 ovn_controller[89305]: 2025-12-03T18:58:51Z|00119|binding|INFO|Releasing lport b709b4ab-585a-4aed-9f06-3c9650d54c09 from this chassis (sb_readonly=0)
Dec  3 18:58:51 compute-0 ovn_controller[89305]: 2025-12-03T18:58:51Z|00120|binding|INFO|Setting lport b709b4ab-585a-4aed-9f06-3c9650d54c09 down in Southbound
Dec  3 18:58:51 compute-0 ovn_controller[89305]: 2025-12-03T18:58:51Z|00121|binding|INFO|Removing iface tapb709b4ab-58 ovn-installed in OVS
Dec  3 18:58:51 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:51.267 286999 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:88:19 10.100.0.3'], port_security=['fa:16:3e:6e:88:19 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'eff2304f-0e67-4c93-ae65-20d4ddb87625', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c136d05b-f7ca-4f17-81e0-62c23fcd54a3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b1bc217751704d588f690e1b293cade8', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a1a397ab-712e-407d-b87f-48e90c61a0b1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.232'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2f565d4f-7cf7-4751-884a-5071b91cf9b2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f81e3e96760>], logical_port=b709b4ab-585a-4aed-9f06-3c9650d54c09) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f81e3e96760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 18:58:51 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:51.269 286999 INFO neutron.agent.ovn.metadata.agent [-] Port b709b4ab-585a-4aed-9f06-3c9650d54c09 in datapath c136d05b-f7ca-4f17-81e0-62c23fcd54a3 unbound from our chassis#033[00m
Dec  3 18:58:51 compute-0 nova_compute[348325]: 2025-12-03 18:58:51.268 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:58:51 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:51.273 286999 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c136d05b-f7ca-4f17-81e0-62c23fcd54a3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  3 18:58:51 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:51.274 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[06eddc8a-0f42-459a-ae8b-9fa52676c45c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:58:51 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:51.275 286999 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c136d05b-f7ca-4f17-81e0-62c23fcd54a3 namespace which is not needed anymore#033[00m
Dec  3 18:58:51 compute-0 nova_compute[348325]: 2025-12-03 18:58:51.282 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:58:51 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Deactivated successfully.
Dec  3 18:58:51 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Consumed 42.605s CPU time.
Dec  3 18:58:51 compute-0 systemd-machined[138702]: Machine qemu-7-instance-00000007 terminated.
Dec  3 18:58:51 compute-0 nova_compute[348325]: 2025-12-03 18:58:51.374 348329 INFO nova.virt.libvirt.driver [-] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Instance destroyed successfully.#033[00m
Dec  3 18:58:51 compute-0 nova_compute[348325]: 2025-12-03 18:58:51.377 348329 DEBUG nova.objects.instance [None req-6fe909a7-db18-4676-b163-bce1b76ae7cd a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Lazy-loading 'resources' on Instance uuid eff2304f-0e67-4c93-ae65-20d4ddb87625 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 18:58:51 compute-0 nova_compute[348325]: 2025-12-03 18:58:51.394 348329 DEBUG nova.virt.libvirt.vif [None req-6fe909a7-db18-4676-b163-bce1b76ae7cd a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-03T18:57:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-348328150',display_name='tempest-ServerActionsTestJSON-server-348328150',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-348328150',id=7,image_ref='55982930-937b-484e-96ee-69e406a48023',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNxroBr01vWxkEduOl12EjIwANWWsGp13Iiwr/acK4U//64QHGHGw2vAnRgMxYbVrlypsXXM19ulgGJI9w1cDLg4nw16FcL2L12/Hkr+U1wJ9evpJospwbYXLvOwZj+bkQ==',key_name='tempest-keypair-1397268451',keypairs=<?>,launch_index=0,launched_at=2025-12-03T18:57:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b1bc217751704d588f690e1b293cade8',ramdisk_id='',reservation_id='r-2us6ueb7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='55982930-937b-484e-96ee-69e406a48023',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-2101343937',owner_user_name='tempest-ServerActionsTestJSON-2101343937-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-03T18:58:51Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a7a79cf3930c41baa4cb453d75b59c70',uuid=eff2304f-0e67-4c93-ae65-20d4ddb87625,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b709b4ab-585a-4aed-9f06-3c9650d54c09", "address": "fa:16:3e:6e:88:19", "network": {"id": "c136d05b-f7ca-4f17-81e0-62c23fcd54a3", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-203684476-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1bc217751704d588f690e1b293cade8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb709b4ab-58", "ovs_interfaceid": "b709b4ab-585a-4aed-9f06-3c9650d54c09", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  3 18:58:51 compute-0 nova_compute[348325]: 2025-12-03 18:58:51.395 348329 DEBUG nova.network.os_vif_util [None req-6fe909a7-db18-4676-b163-bce1b76ae7cd a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Converting VIF {"id": "b709b4ab-585a-4aed-9f06-3c9650d54c09", "address": "fa:16:3e:6e:88:19", "network": {"id": "c136d05b-f7ca-4f17-81e0-62c23fcd54a3", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-203684476-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1bc217751704d588f690e1b293cade8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb709b4ab-58", "ovs_interfaceid": "b709b4ab-585a-4aed-9f06-3c9650d54c09", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 18:58:51 compute-0 nova_compute[348325]: 2025-12-03 18:58:51.396 348329 DEBUG nova.network.os_vif_util [None req-6fe909a7-db18-4676-b163-bce1b76ae7cd a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6e:88:19,bridge_name='br-int',has_traffic_filtering=True,id=b709b4ab-585a-4aed-9f06-3c9650d54c09,network=Network(c136d05b-f7ca-4f17-81e0-62c23fcd54a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb709b4ab-58') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 18:58:51 compute-0 nova_compute[348325]: 2025-12-03 18:58:51.397 348329 DEBUG os_vif [None req-6fe909a7-db18-4676-b163-bce1b76ae7cd a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6e:88:19,bridge_name='br-int',has_traffic_filtering=True,id=b709b4ab-585a-4aed-9f06-3c9650d54c09,network=Network(c136d05b-f7ca-4f17-81e0-62c23fcd54a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb709b4ab-58') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  3 18:58:51 compute-0 nova_compute[348325]: 2025-12-03 18:58:51.399 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:58:51 compute-0 nova_compute[348325]: 2025-12-03 18:58:51.399 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb709b4ab-58, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:58:51 compute-0 nova_compute[348325]: 2025-12-03 18:58:51.405 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  3 18:58:51 compute-0 nova_compute[348325]: 2025-12-03 18:58:51.408 348329 INFO os_vif [None req-6fe909a7-db18-4676-b163-bce1b76ae7cd a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6e:88:19,bridge_name='br-int',has_traffic_filtering=True,id=b709b4ab-585a-4aed-9f06-3c9650d54c09,network=Network(c136d05b-f7ca-4f17-81e0-62c23fcd54a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb709b4ab-58')#033[00m
Dec  3 18:58:51 compute-0 nova_compute[348325]: 2025-12-03 18:58:51.416 348329 DEBUG nova.virt.libvirt.driver [None req-6fe909a7-db18-4676-b163-bce1b76ae7cd a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Start _get_guest_xml network_info=[{"id": "b709b4ab-585a-4aed-9f06-3c9650d54c09", "address": "fa:16:3e:6e:88:19", "network": {"id": "c136d05b-f7ca-4f17-81e0-62c23fcd54a3", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-203684476-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1bc217751704d588f690e1b293cade8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb709b4ab-58", "ovs_interfaceid": "b709b4ab-585a-4aed-9f06-3c9650d54c09", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=55982930-937b-484e-96ee-69e406a48023,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'encryption_format': None, 'guest_format': None, 'disk_bus': 'virtio', 'size': 0, 'boot_index': 0, 'encryption_options': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'image_id': '55982930-937b-484e-96ee-69e406a48023'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  3 18:58:51 compute-0 nova_compute[348325]: 2025-12-03 18:58:51.422 348329 WARNING nova.virt.libvirt.driver [None req-6fe909a7-db18-4676-b163-bce1b76ae7cd a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 18:58:51 compute-0 nova_compute[348325]: 2025-12-03 18:58:51.432 348329 DEBUG nova.virt.libvirt.host [None req-6fe909a7-db18-4676-b163-bce1b76ae7cd a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  3 18:58:51 compute-0 nova_compute[348325]: 2025-12-03 18:58:51.433 348329 DEBUG nova.virt.libvirt.host [None req-6fe909a7-db18-4676-b163-bce1b76ae7cd a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  3 18:58:51 compute-0 nova_compute[348325]: 2025-12-03 18:58:51.436 348329 DEBUG nova.virt.libvirt.host [None req-6fe909a7-db18-4676-b163-bce1b76ae7cd a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  3 18:58:51 compute-0 nova_compute[348325]: 2025-12-03 18:58:51.437 348329 DEBUG nova.virt.libvirt.host [None req-6fe909a7-db18-4676-b163-bce1b76ae7cd a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  3 18:58:51 compute-0 nova_compute[348325]: 2025-12-03 18:58:51.438 348329 DEBUG nova.virt.libvirt.driver [None req-6fe909a7-db18-4676-b163-bce1b76ae7cd a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  3 18:58:51 compute-0 nova_compute[348325]: 2025-12-03 18:58:51.438 348329 DEBUG nova.virt.hardware [None req-6fe909a7-db18-4676-b163-bce1b76ae7cd a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-03T18:56:30Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a94cfbfb-a20a-4689-ac91-e7436db75880',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=55982930-937b-484e-96ee-69e406a48023,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  3 18:58:51 compute-0 nova_compute[348325]: 2025-12-03 18:58:51.440 348329 DEBUG nova.virt.hardware [None req-6fe909a7-db18-4676-b163-bce1b76ae7cd a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  3 18:58:51 compute-0 nova_compute[348325]: 2025-12-03 18:58:51.440 348329 DEBUG nova.virt.hardware [None req-6fe909a7-db18-4676-b163-bce1b76ae7cd a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  3 18:58:51 compute-0 nova_compute[348325]: 2025-12-03 18:58:51.441 348329 DEBUG nova.virt.hardware [None req-6fe909a7-db18-4676-b163-bce1b76ae7cd a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  3 18:58:51 compute-0 nova_compute[348325]: 2025-12-03 18:58:51.441 348329 DEBUG nova.virt.hardware [None req-6fe909a7-db18-4676-b163-bce1b76ae7cd a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  3 18:58:51 compute-0 nova_compute[348325]: 2025-12-03 18:58:51.442 348329 DEBUG nova.virt.hardware [None req-6fe909a7-db18-4676-b163-bce1b76ae7cd a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  3 18:58:51 compute-0 nova_compute[348325]: 2025-12-03 18:58:51.442 348329 DEBUG nova.virt.hardware [None req-6fe909a7-db18-4676-b163-bce1b76ae7cd a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  3 18:58:51 compute-0 nova_compute[348325]: 2025-12-03 18:58:51.443 348329 DEBUG nova.virt.hardware [None req-6fe909a7-db18-4676-b163-bce1b76ae7cd a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  3 18:58:51 compute-0 nova_compute[348325]: 2025-12-03 18:58:51.443 348329 DEBUG nova.virt.hardware [None req-6fe909a7-db18-4676-b163-bce1b76ae7cd a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  3 18:58:51 compute-0 neutron-haproxy-ovnmeta-c136d05b-f7ca-4f17-81e0-62c23fcd54a3[442097]: [NOTICE]   (442103) : haproxy version is 2.8.14-c23fe91
Dec  3 18:58:51 compute-0 neutron-haproxy-ovnmeta-c136d05b-f7ca-4f17-81e0-62c23fcd54a3[442097]: [NOTICE]   (442103) : path to executable is /usr/sbin/haproxy
Dec  3 18:58:51 compute-0 neutron-haproxy-ovnmeta-c136d05b-f7ca-4f17-81e0-62c23fcd54a3[442097]: [WARNING]  (442103) : Exiting Master process...
Dec  3 18:58:51 compute-0 nova_compute[348325]: 2025-12-03 18:58:51.445 348329 DEBUG nova.virt.hardware [None req-6fe909a7-db18-4676-b163-bce1b76ae7cd a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  3 18:58:51 compute-0 nova_compute[348325]: 2025-12-03 18:58:51.445 348329 DEBUG nova.virt.hardware [None req-6fe909a7-db18-4676-b163-bce1b76ae7cd a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  3 18:58:51 compute-0 nova_compute[348325]: 2025-12-03 18:58:51.446 348329 DEBUG nova.objects.instance [None req-6fe909a7-db18-4676-b163-bce1b76ae7cd a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Lazy-loading 'vcpu_model' on Instance uuid eff2304f-0e67-4c93-ae65-20d4ddb87625 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 18:58:51 compute-0 neutron-haproxy-ovnmeta-c136d05b-f7ca-4f17-81e0-62c23fcd54a3[442097]: [ALERT]    (442103) : Current worker (442105) exited with code 143 (Terminated)
Dec  3 18:58:51 compute-0 neutron-haproxy-ovnmeta-c136d05b-f7ca-4f17-81e0-62c23fcd54a3[442097]: [WARNING]  (442103) : All workers exited. Exiting... (0)
Dec  3 18:58:51 compute-0 systemd[1]: libpod-a3c2606a0b78937c6a810e1f81ba2b507ea15546afe25e9205232afeac2be0a8.scope: Deactivated successfully.
Dec  3 18:58:51 compute-0 podman[445331]: 2025-12-03 18:58:51.455529394 +0000 UTC m=+0.059478894 container died a3c2606a0b78937c6a810e1f81ba2b507ea15546afe25e9205232afeac2be0a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c136d05b-f7ca-4f17-81e0-62c23fcd54a3, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 18:58:51 compute-0 nova_compute[348325]: 2025-12-03 18:58:51.482 348329 DEBUG oslo_concurrency.processutils [None req-6fe909a7-db18-4676-b163-bce1b76ae7cd a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:58:51 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a3c2606a0b78937c6a810e1f81ba2b507ea15546afe25e9205232afeac2be0a8-userdata-shm.mount: Deactivated successfully.
Dec  3 18:58:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-f92c0a4e1a9e9c26c45ab8077b64b27dad3be736ad48cf300bfd2cb85db47dab-merged.mount: Deactivated successfully.
Dec  3 18:58:51 compute-0 podman[445331]: 2025-12-03 18:58:51.547685224 +0000 UTC m=+0.151634724 container cleanup a3c2606a0b78937c6a810e1f81ba2b507ea15546afe25e9205232afeac2be0a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c136d05b-f7ca-4f17-81e0-62c23fcd54a3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:58:51 compute-0 systemd[1]: libpod-conmon-a3c2606a0b78937c6a810e1f81ba2b507ea15546afe25e9205232afeac2be0a8.scope: Deactivated successfully.
Dec  3 18:58:51 compute-0 podman[445359]: 2025-12-03 18:58:51.629986522 +0000 UTC m=+0.055294830 container remove a3c2606a0b78937c6a810e1f81ba2b507ea15546afe25e9205232afeac2be0a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c136d05b-f7ca-4f17-81e0-62c23fcd54a3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  3 18:58:51 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:51.638 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[00b07774-ff80-49e2-a24f-c11b8a2bc66d]: (4, ('Wed Dec  3 06:58:51 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c136d05b-f7ca-4f17-81e0-62c23fcd54a3 (a3c2606a0b78937c6a810e1f81ba2b507ea15546afe25e9205232afeac2be0a8)\na3c2606a0b78937c6a810e1f81ba2b507ea15546afe25e9205232afeac2be0a8\nWed Dec  3 06:58:51 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c136d05b-f7ca-4f17-81e0-62c23fcd54a3 (a3c2606a0b78937c6a810e1f81ba2b507ea15546afe25e9205232afeac2be0a8)\na3c2606a0b78937c6a810e1f81ba2b507ea15546afe25e9205232afeac2be0a8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:58:51 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:51.641 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[faaa77fa-e634-4ab5-814e-68acdc4488cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:58:51 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:51.642 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc136d05b-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:58:51 compute-0 nova_compute[348325]: 2025-12-03 18:58:51.645 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:58:51 compute-0 kernel: tapc136d05b-f0: left promiscuous mode
Dec  3 18:58:51 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:51.655 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[7dd0170d-d30c-43b7-a7b8-d9ee0ada7112]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:58:51 compute-0 nova_compute[348325]: 2025-12-03 18:58:51.668 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:58:51 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:51.670 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[b95576aa-7b17-4342-be6a-d696d44ffa45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:58:51 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:51.671 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[b61f3bc2-b745-4a45-8512-89b62a53c475]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:58:51 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:51.687 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[eabeaa05-ed0b-47df-bb19-51718fa40235]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 653311, 'reachable_time': 29874, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 445390, 'error': None, 'target': 'ovnmeta-c136d05b-f7ca-4f17-81e0-62c23fcd54a3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:58:51 compute-0 systemd[1]: run-netns-ovnmeta\x2dc136d05b\x2df7ca\x2d4f17\x2d81e0\x2d62c23fcd54a3.mount: Deactivated successfully.
Dec  3 18:58:51 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:51.690 287110 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c136d05b-f7ca-4f17-81e0-62c23fcd54a3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  3 18:58:51 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:51.690 287110 DEBUG oslo.privsep.daemon [-] privsep: reply[22eb67b8-5648-469d-926a-3fc04fd6b012]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:58:51 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1798: 321 pgs: 321 active+clean; 247 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.2 MiB/s wr, 105 op/s
Dec  3 18:58:52 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 18:58:52 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/671894933' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 18:58:52 compute-0 nova_compute[348325]: 2025-12-03 18:58:52.053 348329 DEBUG oslo_concurrency.processutils [None req-6fe909a7-db18-4676-b163-bce1b76ae7cd a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:58:52 compute-0 nova_compute[348325]: 2025-12-03 18:58:52.112 348329 DEBUG oslo_concurrency.processutils [None req-6fe909a7-db18-4676-b163-bce1b76ae7cd a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:58:52 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 18:58:52 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1803862862' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 18:58:52 compute-0 nova_compute[348325]: 2025-12-03 18:58:52.560 348329 DEBUG oslo_concurrency.processutils [None req-6fe909a7-db18-4676-b163-bce1b76ae7cd a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:58:52 compute-0 nova_compute[348325]: 2025-12-03 18:58:52.564 348329 DEBUG nova.virt.libvirt.vif [None req-6fe909a7-db18-4676-b163-bce1b76ae7cd a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-03T18:57:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-348328150',display_name='tempest-ServerActionsTestJSON-server-348328150',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-348328150',id=7,image_ref='55982930-937b-484e-96ee-69e406a48023',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNxroBr01vWxkEduOl12EjIwANWWsGp13Iiwr/acK4U//64QHGHGw2vAnRgMxYbVrlypsXXM19ulgGJI9w1cDLg4nw16FcL2L12/Hkr+U1wJ9evpJospwbYXLvOwZj+bkQ==',key_name='tempest-keypair-1397268451',keypairs=<?>,launch_index=0,launched_at=2025-12-03T18:57:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b1bc217751704d588f690e1b293cade8',ramdisk_id='',reservation_id='r-2us6ueb7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='55982930-937b-484e-96ee-69e406a48023',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-2101343937',owner_user_name='tempest-ServerActionsTestJSON-2101343937-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-03T18:58:51Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a7a79cf3930c41baa4cb453d75b59c70',uuid=eff2304f-0e67-4c93-ae65-20d4ddb87625,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b709b4ab-585a-4aed-9f06-3c9650d54c09", "address": "fa:16:3e:6e:88:19", "network": {"id": "c136d05b-f7ca-4f17-81e0-62c23fcd54a3", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-203684476-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1bc217751704d588f690e1b293cade8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb709b4ab-58", "ovs_interfaceid": "b709b4ab-585a-4aed-9f06-3c9650d54c09", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  3 18:58:52 compute-0 nova_compute[348325]: 2025-12-03 18:58:52.565 348329 DEBUG nova.network.os_vif_util [None req-6fe909a7-db18-4676-b163-bce1b76ae7cd a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Converting VIF {"id": "b709b4ab-585a-4aed-9f06-3c9650d54c09", "address": "fa:16:3e:6e:88:19", "network": {"id": "c136d05b-f7ca-4f17-81e0-62c23fcd54a3", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-203684476-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1bc217751704d588f690e1b293cade8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb709b4ab-58", "ovs_interfaceid": "b709b4ab-585a-4aed-9f06-3c9650d54c09", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 18:58:52 compute-0 nova_compute[348325]: 2025-12-03 18:58:52.568 348329 DEBUG nova.network.os_vif_util [None req-6fe909a7-db18-4676-b163-bce1b76ae7cd a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6e:88:19,bridge_name='br-int',has_traffic_filtering=True,id=b709b4ab-585a-4aed-9f06-3c9650d54c09,network=Network(c136d05b-f7ca-4f17-81e0-62c23fcd54a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb709b4ab-58') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 18:58:52 compute-0 nova_compute[348325]: 2025-12-03 18:58:52.571 348329 DEBUG nova.objects.instance [None req-6fe909a7-db18-4676-b163-bce1b76ae7cd a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Lazy-loading 'pci_devices' on Instance uuid eff2304f-0e67-4c93-ae65-20d4ddb87625 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 18:58:52 compute-0 nova_compute[348325]: 2025-12-03 18:58:52.629 348329 DEBUG nova.virt.libvirt.driver [None req-6fe909a7-db18-4676-b163-bce1b76ae7cd a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] End _get_guest_xml xml=<domain type="kvm">
Dec  3 18:58:52 compute-0 nova_compute[348325]:  <uuid>eff2304f-0e67-4c93-ae65-20d4ddb87625</uuid>
Dec  3 18:58:52 compute-0 nova_compute[348325]:  <name>instance-00000007</name>
Dec  3 18:58:52 compute-0 nova_compute[348325]:  <memory>131072</memory>
Dec  3 18:58:52 compute-0 nova_compute[348325]:  <vcpu>1</vcpu>
Dec  3 18:58:52 compute-0 nova_compute[348325]:  <metadata>
Dec  3 18:58:52 compute-0 nova_compute[348325]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  3 18:58:52 compute-0 nova_compute[348325]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  3 18:58:52 compute-0 nova_compute[348325]:      <nova:name>tempest-ServerActionsTestJSON-server-348328150</nova:name>
Dec  3 18:58:52 compute-0 nova_compute[348325]:      <nova:creationTime>2025-12-03 18:58:51</nova:creationTime>
Dec  3 18:58:52 compute-0 nova_compute[348325]:      <nova:flavor name="m1.nano">
Dec  3 18:58:52 compute-0 nova_compute[348325]:        <nova:memory>128</nova:memory>
Dec  3 18:58:52 compute-0 nova_compute[348325]:        <nova:disk>1</nova:disk>
Dec  3 18:58:52 compute-0 nova_compute[348325]:        <nova:swap>0</nova:swap>
Dec  3 18:58:52 compute-0 nova_compute[348325]:        <nova:ephemeral>0</nova:ephemeral>
Dec  3 18:58:52 compute-0 nova_compute[348325]:        <nova:vcpus>1</nova:vcpus>
Dec  3 18:58:52 compute-0 nova_compute[348325]:      </nova:flavor>
Dec  3 18:58:52 compute-0 nova_compute[348325]:      <nova:owner>
Dec  3 18:58:52 compute-0 nova_compute[348325]:        <nova:user uuid="a7a79cf3930c41baa4cb453d75b59c70">tempest-ServerActionsTestJSON-2101343937-project-member</nova:user>
Dec  3 18:58:52 compute-0 nova_compute[348325]:        <nova:project uuid="b1bc217751704d588f690e1b293cade8">tempest-ServerActionsTestJSON-2101343937</nova:project>
Dec  3 18:58:52 compute-0 nova_compute[348325]:      </nova:owner>
Dec  3 18:58:52 compute-0 nova_compute[348325]:      <nova:root type="image" uuid="55982930-937b-484e-96ee-69e406a48023"/>
Dec  3 18:58:52 compute-0 nova_compute[348325]:      <nova:ports>
Dec  3 18:58:52 compute-0 nova_compute[348325]:        <nova:port uuid="b709b4ab-585a-4aed-9f06-3c9650d54c09">
Dec  3 18:58:52 compute-0 nova_compute[348325]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec  3 18:58:52 compute-0 nova_compute[348325]:        </nova:port>
Dec  3 18:58:52 compute-0 nova_compute[348325]:      </nova:ports>
Dec  3 18:58:52 compute-0 nova_compute[348325]:    </nova:instance>
Dec  3 18:58:52 compute-0 nova_compute[348325]:  </metadata>
Dec  3 18:58:52 compute-0 nova_compute[348325]:  <sysinfo type="smbios">
Dec  3 18:58:52 compute-0 nova_compute[348325]:    <system>
Dec  3 18:58:52 compute-0 nova_compute[348325]:      <entry name="manufacturer">RDO</entry>
Dec  3 18:58:52 compute-0 nova_compute[348325]:      <entry name="product">OpenStack Compute</entry>
Dec  3 18:58:52 compute-0 nova_compute[348325]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  3 18:58:52 compute-0 nova_compute[348325]:      <entry name="serial">eff2304f-0e67-4c93-ae65-20d4ddb87625</entry>
Dec  3 18:58:52 compute-0 nova_compute[348325]:      <entry name="uuid">eff2304f-0e67-4c93-ae65-20d4ddb87625</entry>
Dec  3 18:58:52 compute-0 nova_compute[348325]:      <entry name="family">Virtual Machine</entry>
Dec  3 18:58:52 compute-0 nova_compute[348325]:    </system>
Dec  3 18:58:52 compute-0 nova_compute[348325]:  </sysinfo>
Dec  3 18:58:52 compute-0 nova_compute[348325]:  <os>
Dec  3 18:58:52 compute-0 nova_compute[348325]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  3 18:58:52 compute-0 nova_compute[348325]:    <boot dev="hd"/>
Dec  3 18:58:52 compute-0 nova_compute[348325]:    <smbios mode="sysinfo"/>
Dec  3 18:58:52 compute-0 nova_compute[348325]:  </os>
Dec  3 18:58:52 compute-0 nova_compute[348325]:  <features>
Dec  3 18:58:52 compute-0 nova_compute[348325]:    <acpi/>
Dec  3 18:58:52 compute-0 nova_compute[348325]:    <apic/>
Dec  3 18:58:52 compute-0 nova_compute[348325]:    <vmcoreinfo/>
Dec  3 18:58:52 compute-0 nova_compute[348325]:  </features>
Dec  3 18:58:52 compute-0 nova_compute[348325]:  <clock offset="utc">
Dec  3 18:58:52 compute-0 nova_compute[348325]:    <timer name="pit" tickpolicy="delay"/>
Dec  3 18:58:52 compute-0 nova_compute[348325]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  3 18:58:52 compute-0 nova_compute[348325]:    <timer name="hpet" present="no"/>
Dec  3 18:58:52 compute-0 nova_compute[348325]:  </clock>
Dec  3 18:58:52 compute-0 nova_compute[348325]:  <cpu mode="host-model" match="exact">
Dec  3 18:58:52 compute-0 nova_compute[348325]:    <topology sockets="1" cores="1" threads="1"/>
Dec  3 18:58:52 compute-0 nova_compute[348325]:  </cpu>
Dec  3 18:58:52 compute-0 nova_compute[348325]:  <devices>
Dec  3 18:58:52 compute-0 nova_compute[348325]:    <disk type="network" device="disk">
Dec  3 18:58:52 compute-0 nova_compute[348325]:      <driver type="raw" cache="none"/>
Dec  3 18:58:52 compute-0 nova_compute[348325]:      <source protocol="rbd" name="vms/eff2304f-0e67-4c93-ae65-20d4ddb87625_disk">
Dec  3 18:58:52 compute-0 nova_compute[348325]:        <host name="192.168.122.100" port="6789"/>
Dec  3 18:58:52 compute-0 nova_compute[348325]:      </source>
Dec  3 18:58:52 compute-0 nova_compute[348325]:      <auth username="openstack">
Dec  3 18:58:52 compute-0 nova_compute[348325]:        <secret type="ceph" uuid="c1caf3ba-b2a5-5005-a11e-e955c344dccc"/>
Dec  3 18:58:52 compute-0 nova_compute[348325]:      </auth>
Dec  3 18:58:52 compute-0 nova_compute[348325]:      <target dev="vda" bus="virtio"/>
Dec  3 18:58:52 compute-0 nova_compute[348325]:    </disk>
Dec  3 18:58:52 compute-0 nova_compute[348325]:    <disk type="network" device="cdrom">
Dec  3 18:58:52 compute-0 nova_compute[348325]:      <driver type="raw" cache="none"/>
Dec  3 18:58:52 compute-0 nova_compute[348325]:      <source protocol="rbd" name="vms/eff2304f-0e67-4c93-ae65-20d4ddb87625_disk.config">
Dec  3 18:58:52 compute-0 nova_compute[348325]:        <host name="192.168.122.100" port="6789"/>
Dec  3 18:58:52 compute-0 nova_compute[348325]:      </source>
Dec  3 18:58:52 compute-0 nova_compute[348325]:      <auth username="openstack">
Dec  3 18:58:52 compute-0 nova_compute[348325]:        <secret type="ceph" uuid="c1caf3ba-b2a5-5005-a11e-e955c344dccc"/>
Dec  3 18:58:52 compute-0 nova_compute[348325]:      </auth>
Dec  3 18:58:52 compute-0 nova_compute[348325]:      <target dev="sda" bus="sata"/>
Dec  3 18:58:52 compute-0 nova_compute[348325]:    </disk>
Dec  3 18:58:52 compute-0 nova_compute[348325]:    <interface type="ethernet">
Dec  3 18:58:52 compute-0 nova_compute[348325]:      <mac address="fa:16:3e:6e:88:19"/>
Dec  3 18:58:52 compute-0 nova_compute[348325]:      <model type="virtio"/>
Dec  3 18:58:52 compute-0 nova_compute[348325]:      <driver name="vhost" rx_queue_size="512"/>
Dec  3 18:58:52 compute-0 nova_compute[348325]:      <mtu size="1442"/>
Dec  3 18:58:52 compute-0 nova_compute[348325]:      <target dev="tapb709b4ab-58"/>
Dec  3 18:58:52 compute-0 nova_compute[348325]:    </interface>
Dec  3 18:58:52 compute-0 nova_compute[348325]:    <serial type="pty">
Dec  3 18:58:52 compute-0 nova_compute[348325]:      <log file="/var/lib/nova/instances/eff2304f-0e67-4c93-ae65-20d4ddb87625/console.log" append="off"/>
Dec  3 18:58:52 compute-0 nova_compute[348325]:    </serial>
Dec  3 18:58:52 compute-0 nova_compute[348325]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  3 18:58:52 compute-0 nova_compute[348325]:    <video>
Dec  3 18:58:52 compute-0 nova_compute[348325]:      <model type="virtio"/>
Dec  3 18:58:52 compute-0 nova_compute[348325]:    </video>
Dec  3 18:58:52 compute-0 nova_compute[348325]:    <input type="tablet" bus="usb"/>
Dec  3 18:58:52 compute-0 nova_compute[348325]:    <input type="keyboard" bus="usb"/>
Dec  3 18:58:52 compute-0 nova_compute[348325]:    <rng model="virtio">
Dec  3 18:58:52 compute-0 nova_compute[348325]:      <backend model="random">/dev/urandom</backend>
Dec  3 18:58:52 compute-0 nova_compute[348325]:    </rng>
Dec  3 18:58:52 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root"/>
Dec  3 18:58:52 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:58:52 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:58:52 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:58:52 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:58:52 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:58:52 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:58:52 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:58:52 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:58:52 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:58:52 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:58:52 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:58:52 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:58:52 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:58:52 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:58:52 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:58:52 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:58:52 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:58:52 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:58:52 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:58:52 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:58:52 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:58:52 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:58:52 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:58:52 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:58:52 compute-0 nova_compute[348325]:    <controller type="usb" index="0"/>
Dec  3 18:58:52 compute-0 nova_compute[348325]:    <memballoon model="virtio">
Dec  3 18:58:52 compute-0 nova_compute[348325]:      <stats period="10"/>
Dec  3 18:58:52 compute-0 nova_compute[348325]:    </memballoon>
Dec  3 18:58:52 compute-0 nova_compute[348325]:  </devices>
Dec  3 18:58:52 compute-0 nova_compute[348325]: </domain>
Dec  3 18:58:52 compute-0 nova_compute[348325]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  3 18:58:52 compute-0 nova_compute[348325]: 2025-12-03 18:58:52.632 348329 DEBUG nova.virt.libvirt.driver [None req-6fe909a7-db18-4676-b163-bce1b76ae7cd a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 18:58:52 compute-0 nova_compute[348325]: 2025-12-03 18:58:52.633 348329 DEBUG nova.virt.libvirt.driver [None req-6fe909a7-db18-4676-b163-bce1b76ae7cd a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 18:58:52 compute-0 nova_compute[348325]: 2025-12-03 18:58:52.636 348329 DEBUG nova.virt.libvirt.vif [None req-6fe909a7-db18-4676-b163-bce1b76ae7cd a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-03T18:57:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-348328150',display_name='tempest-ServerActionsTestJSON-server-348328150',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-348328150',id=7,image_ref='55982930-937b-484e-96ee-69e406a48023',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNxroBr01vWxkEduOl12EjIwANWWsGp13Iiwr/acK4U//64QHGHGw2vAnRgMxYbVrlypsXXM19ulgGJI9w1cDLg4nw16FcL2L12/Hkr+U1wJ9evpJospwbYXLvOwZj+bkQ==',key_name='tempest-keypair-1397268451',keypairs=<?>,launch_index=0,launched_at=2025-12-03T18:57:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='b1bc217751704d588f690e1b293cade8',ramdisk_id='',reservation_id='r-2us6ueb7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='55982930-937b-484e-96ee-69e406a48023',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-2101343937',owner_user_name='tempest-ServerActionsTestJSON-2101343937-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-03T18:58:51Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a7a79cf3930c41baa4cb453d75b59c70',uuid=eff2304f-0e67-4c93-ae65-20d4ddb87625,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b709b4ab-585a-4aed-9f06-3c9650d54c09", "address": "fa:16:3e:6e:88:19", "network": {"id": "c136d05b-f7ca-4f17-81e0-62c23fcd54a3", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-203684476-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1bc217751704d588f690e1b293cade8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb709b4ab-58", "ovs_interfaceid": "b709b4ab-585a-4aed-9f06-3c9650d54c09", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  3 18:58:52 compute-0 nova_compute[348325]: 2025-12-03 18:58:52.637 348329 DEBUG nova.network.os_vif_util [None req-6fe909a7-db18-4676-b163-bce1b76ae7cd a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Converting VIF {"id": "b709b4ab-585a-4aed-9f06-3c9650d54c09", "address": "fa:16:3e:6e:88:19", "network": {"id": "c136d05b-f7ca-4f17-81e0-62c23fcd54a3", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-203684476-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1bc217751704d588f690e1b293cade8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb709b4ab-58", "ovs_interfaceid": "b709b4ab-585a-4aed-9f06-3c9650d54c09", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 18:58:52 compute-0 nova_compute[348325]: 2025-12-03 18:58:52.640 348329 DEBUG nova.network.os_vif_util [None req-6fe909a7-db18-4676-b163-bce1b76ae7cd a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6e:88:19,bridge_name='br-int',has_traffic_filtering=True,id=b709b4ab-585a-4aed-9f06-3c9650d54c09,network=Network(c136d05b-f7ca-4f17-81e0-62c23fcd54a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb709b4ab-58') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 18:58:52 compute-0 nova_compute[348325]: 2025-12-03 18:58:52.641 348329 DEBUG os_vif [None req-6fe909a7-db18-4676-b163-bce1b76ae7cd a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6e:88:19,bridge_name='br-int',has_traffic_filtering=True,id=b709b4ab-585a-4aed-9f06-3c9650d54c09,network=Network(c136d05b-f7ca-4f17-81e0-62c23fcd54a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb709b4ab-58') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  3 18:58:52 compute-0 nova_compute[348325]: 2025-12-03 18:58:52.643 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:58:52 compute-0 nova_compute[348325]: 2025-12-03 18:58:52.644 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:58:52 compute-0 nova_compute[348325]: 2025-12-03 18:58:52.645 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 18:58:52 compute-0 nova_compute[348325]: 2025-12-03 18:58:52.650 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:58:52 compute-0 nova_compute[348325]: 2025-12-03 18:58:52.651 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb709b4ab-58, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:58:52 compute-0 nova_compute[348325]: 2025-12-03 18:58:52.652 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb709b4ab-58, col_values=(('external_ids', {'iface-id': 'b709b4ab-585a-4aed-9f06-3c9650d54c09', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6e:88:19', 'vm-uuid': 'eff2304f-0e67-4c93-ae65-20d4ddb87625'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:58:52 compute-0 nova_compute[348325]: 2025-12-03 18:58:52.654 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:58:52 compute-0 NetworkManager[49087]: <info>  [1764788332.6570] manager: (tapb709b4ab-58): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/62)
Dec  3 18:58:52 compute-0 nova_compute[348325]: 2025-12-03 18:58:52.658 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  3 18:58:52 compute-0 nova_compute[348325]: 2025-12-03 18:58:52.662 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:58:52 compute-0 nova_compute[348325]: 2025-12-03 18:58:52.663 348329 INFO os_vif [None req-6fe909a7-db18-4676-b163-bce1b76ae7cd a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6e:88:19,bridge_name='br-int',has_traffic_filtering=True,id=b709b4ab-585a-4aed-9f06-3c9650d54c09,network=Network(c136d05b-f7ca-4f17-81e0-62c23fcd54a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb709b4ab-58')#033[00m
Dec  3 18:58:52 compute-0 kernel: tapb709b4ab-58: entered promiscuous mode
Dec  3 18:58:52 compute-0 systemd-udevd[445303]: Network interface NamePolicy= disabled on kernel command line.
Dec  3 18:58:52 compute-0 ovn_controller[89305]: 2025-12-03T18:58:52Z|00122|binding|INFO|Claiming lport b709b4ab-585a-4aed-9f06-3c9650d54c09 for this chassis.
Dec  3 18:58:52 compute-0 ovn_controller[89305]: 2025-12-03T18:58:52Z|00123|binding|INFO|b709b4ab-585a-4aed-9f06-3c9650d54c09: Claiming fa:16:3e:6e:88:19 10.100.0.3
Dec  3 18:58:52 compute-0 nova_compute[348325]: 2025-12-03 18:58:52.762 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:58:52 compute-0 NetworkManager[49087]: <info>  [1764788332.7662] manager: (tapb709b4ab-58): new Tun device (/org/freedesktop/NetworkManager/Devices/63)
Dec  3 18:58:52 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:52.768 286999 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:88:19 10.100.0.3'], port_security=['fa:16:3e:6e:88:19 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'eff2304f-0e67-4c93-ae65-20d4ddb87625', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c136d05b-f7ca-4f17-81e0-62c23fcd54a3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b1bc217751704d588f690e1b293cade8', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a1a397ab-712e-407d-b87f-48e90c61a0b1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.232'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2f565d4f-7cf7-4751-884a-5071b91cf9b2, chassis=[<ovs.db.idl.Row object at 0x7f81e3e96760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f81e3e96760>], logical_port=b709b4ab-585a-4aed-9f06-3c9650d54c09) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 18:58:52 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:52.769 286999 INFO neutron.agent.ovn.metadata.agent [-] Port b709b4ab-585a-4aed-9f06-3c9650d54c09 in datapath c136d05b-f7ca-4f17-81e0-62c23fcd54a3 bound to our chassis#033[00m
Dec  3 18:58:52 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:52.771 286999 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c136d05b-f7ca-4f17-81e0-62c23fcd54a3#033[00m
Dec  3 18:58:52 compute-0 ovn_controller[89305]: 2025-12-03T18:58:52Z|00124|binding|INFO|Setting lport b709b4ab-585a-4aed-9f06-3c9650d54c09 ovn-installed in OVS
Dec  3 18:58:52 compute-0 ovn_controller[89305]: 2025-12-03T18:58:52Z|00125|binding|INFO|Setting lport b709b4ab-585a-4aed-9f06-3c9650d54c09 up in Southbound
Dec  3 18:58:52 compute-0 nova_compute[348325]: 2025-12-03 18:58:52.781 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:58:52 compute-0 NetworkManager[49087]: <info>  [1764788332.7824] device (tapb709b4ab-58): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  3 18:58:52 compute-0 nova_compute[348325]: 2025-12-03 18:58:52.783 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:58:52 compute-0 NetworkManager[49087]: <info>  [1764788332.7875] device (tapb709b4ab-58): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  3 18:58:52 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:52.786 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[f5575eed-5b10-4bd2-9be5-9c3152a373af]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:58:52 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:52.787 286999 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc136d05b-f1 in ovnmeta-c136d05b-f7ca-4f17-81e0-62c23fcd54a3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  3 18:58:52 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:52.790 411759 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc136d05b-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  3 18:58:52 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:52.791 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[6df92527-cf82-41c3-b3e1-f17ad77b7f65]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:58:52 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:52.792 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[76be7cee-4391-4740-bc95-db656c7bce1a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:58:52 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:52.808 287110 DEBUG oslo.privsep.daemon [-] privsep: reply[b4a1ffac-0b9f-4ce8-acbf-966379be3fce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:58:52 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:52.828 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[5d506fb5-cbb7-4dbd-b523-2fccc1e28b0c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:58:52 compute-0 systemd-machined[138702]: New machine qemu-12-instance-00000007.
Dec  3 18:58:52 compute-0 systemd[1]: Started Virtual Machine qemu-12-instance-00000007.
Dec  3 18:58:52 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:52.866 411797 DEBUG oslo.privsep.daemon [-] privsep: reply[678613f4-54bf-428e-a03b-c063d4fbb51f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:58:52 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:52.873 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[6306319a-3b1d-487c-a91e-34c9f8bdc7c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:58:52 compute-0 NetworkManager[49087]: <info>  [1764788332.8776] manager: (tapc136d05b-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/64)
Dec  3 18:58:52 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:52.908 411797 DEBUG oslo.privsep.daemon [-] privsep: reply[541df868-b435-4eba-815e-1d3e612583c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:58:52 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:52.913 411797 DEBUG oslo.privsep.daemon [-] privsep: reply[16f5d238-66de-4416-b015-f1dc48621b11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:58:52 compute-0 NetworkManager[49087]: <info>  [1764788332.9393] device (tapc136d05b-f0): carrier: link connected
Dec  3 18:58:52 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:52.945 411797 DEBUG oslo.privsep.daemon [-] privsep: reply[c9a6c945-d3c8-4dab-a1b5-df4f5b6a14e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:58:52 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:52.969 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[9b1dec3b-d7aa-4749-97b1-34d6bae66a79]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc136d05b-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:62:79:cb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 38], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 661287, 'reachable_time': 31770, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 445477, 'error': None, 'target': 'ovnmeta-c136d05b-f7ca-4f17-81e0-62c23fcd54a3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:58:52 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:52.985 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[e866e504-8ecd-4c2d-b6d1-eeff2fd3b250]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe62:79cb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 661287, 'tstamp': 661287}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 445479, 'error': None, 'target': 'ovnmeta-c136d05b-f7ca-4f17-81e0-62c23fcd54a3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:58:53 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:53.007 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[f6fbfbf1-ccca-4d88-8e5c-1c93ae4e8d6d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc136d05b-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:62:79:cb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 38], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 661287, 'reachable_time': 31770, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 445480, 'error': None, 'target': 'ovnmeta-c136d05b-f7ca-4f17-81e0-62c23fcd54a3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:58:53 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:53.051 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[f908a0cd-0638-4dcc-81cd-783c4cebef42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:58:53 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:53.127 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[be2a1048-7455-4ad3-81ca-2ad6dc430563]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:58:53 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:53.129 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc136d05b-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:58:53 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:53.129 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 18:58:53 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:53.130 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc136d05b-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:58:53 compute-0 nova_compute[348325]: 2025-12-03 18:58:53.132 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:58:53 compute-0 NetworkManager[49087]: <info>  [1764788333.1335] manager: (tapc136d05b-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/65)
Dec  3 18:58:53 compute-0 kernel: tapc136d05b-f0: entered promiscuous mode
Dec  3 18:58:53 compute-0 nova_compute[348325]: 2025-12-03 18:58:53.136 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:58:53 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:53.138 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc136d05b-f0, col_values=(('external_ids', {'iface-id': 'b52268a2-5f2a-45ba-8c23-e32c70c8253f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:58:53 compute-0 nova_compute[348325]: 2025-12-03 18:58:53.140 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:58:53 compute-0 ovn_controller[89305]: 2025-12-03T18:58:53Z|00126|binding|INFO|Releasing lport b52268a2-5f2a-45ba-8c23-e32c70c8253f from this chassis (sb_readonly=0)
Dec  3 18:58:53 compute-0 nova_compute[348325]: 2025-12-03 18:58:53.155 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:58:53 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:53.156 286999 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c136d05b-f7ca-4f17-81e0-62c23fcd54a3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c136d05b-f7ca-4f17-81e0-62c23fcd54a3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  3 18:58:53 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:53.158 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[132a9100-c9bd-4d24-b514-f8f3a9e13464]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:58:53 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:53.159 286999 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  3 18:58:53 compute-0 ovn_metadata_agent[286994]: global
Dec  3 18:58:53 compute-0 ovn_metadata_agent[286994]:    log         /dev/log local0 debug
Dec  3 18:58:53 compute-0 ovn_metadata_agent[286994]:    log-tag     haproxy-metadata-proxy-c136d05b-f7ca-4f17-81e0-62c23fcd54a3
Dec  3 18:58:53 compute-0 ovn_metadata_agent[286994]:    user        root
Dec  3 18:58:53 compute-0 ovn_metadata_agent[286994]:    group       root
Dec  3 18:58:53 compute-0 ovn_metadata_agent[286994]:    maxconn     1024
Dec  3 18:58:53 compute-0 ovn_metadata_agent[286994]:    pidfile     /var/lib/neutron/external/pids/c136d05b-f7ca-4f17-81e0-62c23fcd54a3.pid.haproxy
Dec  3 18:58:53 compute-0 ovn_metadata_agent[286994]:    daemon
Dec  3 18:58:53 compute-0 ovn_metadata_agent[286994]: 
Dec  3 18:58:53 compute-0 ovn_metadata_agent[286994]: defaults
Dec  3 18:58:53 compute-0 ovn_metadata_agent[286994]:    log global
Dec  3 18:58:53 compute-0 ovn_metadata_agent[286994]:    mode http
Dec  3 18:58:53 compute-0 ovn_metadata_agent[286994]:    option httplog
Dec  3 18:58:53 compute-0 ovn_metadata_agent[286994]:    option dontlognull
Dec  3 18:58:53 compute-0 ovn_metadata_agent[286994]:    option http-server-close
Dec  3 18:58:53 compute-0 ovn_metadata_agent[286994]:    option forwardfor
Dec  3 18:58:53 compute-0 ovn_metadata_agent[286994]:    retries                 3
Dec  3 18:58:53 compute-0 ovn_metadata_agent[286994]:    timeout http-request    30s
Dec  3 18:58:53 compute-0 ovn_metadata_agent[286994]:    timeout connect         30s
Dec  3 18:58:53 compute-0 ovn_metadata_agent[286994]:    timeout client          32s
Dec  3 18:58:53 compute-0 ovn_metadata_agent[286994]:    timeout server          32s
Dec  3 18:58:53 compute-0 ovn_metadata_agent[286994]:    timeout http-keep-alive 30s
Dec  3 18:58:53 compute-0 ovn_metadata_agent[286994]: 
Dec  3 18:58:53 compute-0 ovn_metadata_agent[286994]: 
Dec  3 18:58:53 compute-0 ovn_metadata_agent[286994]: listen listener
Dec  3 18:58:53 compute-0 ovn_metadata_agent[286994]:    bind 169.254.169.254:80
Dec  3 18:58:53 compute-0 ovn_metadata_agent[286994]:    server metadata /var/lib/neutron/metadata_proxy
Dec  3 18:58:53 compute-0 ovn_metadata_agent[286994]:    http-request add-header X-OVN-Network-ID c136d05b-f7ca-4f17-81e0-62c23fcd54a3
Dec  3 18:58:53 compute-0 ovn_metadata_agent[286994]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  3 18:58:53 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:58:53.161 286999 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c136d05b-f7ca-4f17-81e0-62c23fcd54a3', 'env', 'PROCESS_TAG=haproxy-c136d05b-f7ca-4f17-81e0-62c23fcd54a3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c136d05b-f7ca-4f17-81e0-62c23fcd54a3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  3 18:58:53 compute-0 nova_compute[348325]: 2025-12-03 18:58:53.402 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:58:53 compute-0 nova_compute[348325]: 2025-12-03 18:58:53.510 348329 DEBUG nova.virt.libvirt.host [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Removed pending event for eff2304f-0e67-4c93-ae65-20d4ddb87625 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Dec  3 18:58:53 compute-0 nova_compute[348325]: 2025-12-03 18:58:53.511 348329 DEBUG nova.virt.driver [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Emitting event <LifecycleEvent: 1764788333.509681, eff2304f-0e67-4c93-ae65-20d4ddb87625 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 18:58:53 compute-0 nova_compute[348325]: 2025-12-03 18:58:53.512 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] VM Resumed (Lifecycle Event)#033[00m
Dec  3 18:58:53 compute-0 nova_compute[348325]: 2025-12-03 18:58:53.514 348329 DEBUG nova.compute.manager [None req-6fe909a7-db18-4676-b163-bce1b76ae7cd a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  3 18:58:53 compute-0 nova_compute[348325]: 2025-12-03 18:58:53.519 348329 INFO nova.virt.libvirt.driver [-] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Instance rebooted successfully.#033[00m
Dec  3 18:58:53 compute-0 nova_compute[348325]: 2025-12-03 18:58:53.519 348329 DEBUG nova.compute.manager [None req-6fe909a7-db18-4676-b163-bce1b76ae7cd a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 18:58:53 compute-0 nova_compute[348325]: 2025-12-03 18:58:53.566 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 18:58:53 compute-0 nova_compute[348325]: 2025-12-03 18:58:53.570 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  3 18:58:53 compute-0 nova_compute[348325]: 2025-12-03 18:58:53.597 348329 DEBUG oslo_concurrency.lockutils [None req-6fe909a7-db18-4676-b163-bce1b76ae7cd a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Lock "eff2304f-0e67-4c93-ae65-20d4ddb87625" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 5.707s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:58:53 compute-0 nova_compute[348325]: 2025-12-03 18:58:53.608 348329 DEBUG nova.virt.driver [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Emitting event <LifecycleEvent: 1764788333.5098145, eff2304f-0e67-4c93-ae65-20d4ddb87625 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 18:58:53 compute-0 nova_compute[348325]: 2025-12-03 18:58:53.608 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] VM Started (Lifecycle Event)#033[00m
Dec  3 18:58:53 compute-0 podman[445554]: 2025-12-03 18:58:53.617812814 +0000 UTC m=+0.090216733 container create d7fc688e6f9d0922dd98a2dbaaa7752352640bad2fd05d8e186aace59a07de1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c136d05b-f7ca-4f17-81e0-62c23fcd54a3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  3 18:58:53 compute-0 nova_compute[348325]: 2025-12-03 18:58:53.628 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 18:58:53 compute-0 nova_compute[348325]: 2025-12-03 18:58:53.634 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  3 18:58:53 compute-0 systemd[1]: Started libpod-conmon-d7fc688e6f9d0922dd98a2dbaaa7752352640bad2fd05d8e186aace59a07de1d.scope.
Dec  3 18:58:53 compute-0 podman[445554]: 2025-12-03 18:58:53.583720452 +0000 UTC m=+0.056124371 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  3 18:58:53 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:58:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85783d4a42be2c487872c07ab9bf449768f42659261910cff7385c03cf12a47d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  3 18:58:53 compute-0 podman[445554]: 2025-12-03 18:58:53.732076594 +0000 UTC m=+0.204480593 container init d7fc688e6f9d0922dd98a2dbaaa7752352640bad2fd05d8e186aace59a07de1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c136d05b-f7ca-4f17-81e0-62c23fcd54a3, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 18:58:53 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1799: 321 pgs: 321 active+clean; 260 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Dec  3 18:58:53 compute-0 podman[445554]: 2025-12-03 18:58:53.742365705 +0000 UTC m=+0.214769614 container start d7fc688e6f9d0922dd98a2dbaaa7752352640bad2fd05d8e186aace59a07de1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c136d05b-f7ca-4f17-81e0-62c23fcd54a3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:58:53 compute-0 neutron-haproxy-ovnmeta-c136d05b-f7ca-4f17-81e0-62c23fcd54a3[445567]: [NOTICE]   (445571) : New worker (445573) forked
Dec  3 18:58:53 compute-0 neutron-haproxy-ovnmeta-c136d05b-f7ca-4f17-81e0-62c23fcd54a3[445567]: [NOTICE]   (445571) : Loading success.
Dec  3 18:58:54 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 18:58:55 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1800: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 141 op/s
Dec  3 18:58:56 compute-0 nova_compute[348325]: 2025-12-03 18:58:56.944 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:58:57 compute-0 nova_compute[348325]: 2025-12-03 18:58:57.065 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:58:57 compute-0 nova_compute[348325]: 2025-12-03 18:58:57.656 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:58:57 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1801: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.1 MiB/s wr, 123 op/s
Dec  3 18:58:58 compute-0 nova_compute[348325]: 2025-12-03 18:58:58.407 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:58:59 compute-0 nova_compute[348325]: 2025-12-03 18:58:59.053 348329 DEBUG nova.compute.manager [req-77a4cff2-8805-4109-ac6a-ff5c773a7904 req-f2e2fdae-04d3-42ab-9fbb-66c9fb45a420 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Received event network-vif-unplugged-b709b4ab-585a-4aed-9f06-3c9650d54c09 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  3 18:58:59 compute-0 nova_compute[348325]: 2025-12-03 18:58:59.053 348329 DEBUG oslo_concurrency.lockutils [req-77a4cff2-8805-4109-ac6a-ff5c773a7904 req-f2e2fdae-04d3-42ab-9fbb-66c9fb45a420 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "eff2304f-0e67-4c93-ae65-20d4ddb87625-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:58:59 compute-0 nova_compute[348325]: 2025-12-03 18:58:59.054 348329 DEBUG oslo_concurrency.lockutils [req-77a4cff2-8805-4109-ac6a-ff5c773a7904 req-f2e2fdae-04d3-42ab-9fbb-66c9fb45a420 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "eff2304f-0e67-4c93-ae65-20d4ddb87625-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:58:59 compute-0 nova_compute[348325]: 2025-12-03 18:58:59.054 348329 DEBUG oslo_concurrency.lockutils [req-77a4cff2-8805-4109-ac6a-ff5c773a7904 req-f2e2fdae-04d3-42ab-9fbb-66c9fb45a420 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "eff2304f-0e67-4c93-ae65-20d4ddb87625-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:58:59 compute-0 nova_compute[348325]: 2025-12-03 18:58:59.055 348329 DEBUG nova.compute.manager [req-77a4cff2-8805-4109-ac6a-ff5c773a7904 req-f2e2fdae-04d3-42ab-9fbb-66c9fb45a420 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] No waiting events found dispatching network-vif-unplugged-b709b4ab-585a-4aed-9f06-3c9650d54c09 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  3 18:58:59 compute-0 nova_compute[348325]: 2025-12-03 18:58:59.055 348329 WARNING nova.compute.manager [req-77a4cff2-8805-4109-ac6a-ff5c773a7904 req-f2e2fdae-04d3-42ab-9fbb-66c9fb45a420 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Received unexpected event network-vif-unplugged-b709b4ab-585a-4aed-9f06-3c9650d54c09 for instance with vm_state active and task_state None.
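This acquire/pop/release sequence is nova's external-event plumbing: Neutron posts network-vif-unplugged/plugged notifications, nova pops any waiter registered for that event under a per-instance "<uuid>-events" lock, and when nothing is waiting (as here, with the instance active and no task in flight) it logs the WARNING and drops the event. A minimal sketch of that waiter-registry pattern, with hypothetical names rather than nova's actual structures, using the same oslo.concurrency lock primitive the log shows:

from collections import defaultdict

from oslo_concurrency import lockutils

# Hypothetical sketch in the spirit of nova.compute.manager.InstanceEvents
# (not nova's actual code): events arriving when no waiter is registered
# are reported as unexpected and dropped, which is what the WARNING
# above records.
_waiters = defaultdict(dict)  # instance_uuid -> {event_name: callback}

def pop_instance_event(instance_uuid, event_name):
    # Same per-instance "<uuid>-events" lock name as in the log lines.
    with lockutils.lock(f"{instance_uuid}-events"):
        return _waiters[instance_uuid].pop(event_name, None)

def handle_external_event(instance_uuid, event_name):
    callback = pop_instance_event(instance_uuid, event_name)
    if callback is None:
        print(f"WARNING: unexpected event {event_name} for {instance_uuid}")
    else:
        callback()
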
Dec  3 18:58:59 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 18:58:59 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1802: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.2 MiB/s wr, 152 op/s
Dec  3 18:58:59 compute-0 podman[158200]: time="2025-12-03T18:58:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:58:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:58:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46276 "" "Go-http-client/1.1"
Dec  3 18:58:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:58:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9581 "" "Go-http-client/1.1"
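The two GET lines above are the libpod REST API answering over the podman socket (the same unix:///run/podman/podman.sock the podman_exporter config logged below points at). A small sketch of issuing the first query with only the standard library; the socket path and API version are copied from the log:

import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """http.client over an AF_UNIX socket, enough for the libpod API."""

    def __init__(self, path):
        super().__init__("localhost")
        self._path = path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self._path)

conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
resp = conn.getresponse()  # http.client transparently handles chunked bodies
print(len(json.loads(resp.read())), "containers")
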
Dec  3 18:59:01 compute-0 nova_compute[348325]: 2025-12-03 18:59:01.202 348329 DEBUG nova.compute.manager [req-bb577504-4ed1-4cd3-871c-0f5f5fe9a433 req-c92e248b-5f81-4533-b0b1-86c82fe148f8 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Received event network-vif-plugged-b709b4ab-585a-4aed-9f06-3c9650d54c09 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  3 18:59:01 compute-0 nova_compute[348325]: 2025-12-03 18:59:01.203 348329 DEBUG oslo_concurrency.lockutils [req-bb577504-4ed1-4cd3-871c-0f5f5fe9a433 req-c92e248b-5f81-4533-b0b1-86c82fe148f8 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "eff2304f-0e67-4c93-ae65-20d4ddb87625-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:59:01 compute-0 nova_compute[348325]: 2025-12-03 18:59:01.203 348329 DEBUG oslo_concurrency.lockutils [req-bb577504-4ed1-4cd3-871c-0f5f5fe9a433 req-c92e248b-5f81-4533-b0b1-86c82fe148f8 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "eff2304f-0e67-4c93-ae65-20d4ddb87625-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:59:01 compute-0 nova_compute[348325]: 2025-12-03 18:59:01.204 348329 DEBUG oslo_concurrency.lockutils [req-bb577504-4ed1-4cd3-871c-0f5f5fe9a433 req-c92e248b-5f81-4533-b0b1-86c82fe148f8 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "eff2304f-0e67-4c93-ae65-20d4ddb87625-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:59:01 compute-0 nova_compute[348325]: 2025-12-03 18:59:01.204 348329 DEBUG nova.compute.manager [req-bb577504-4ed1-4cd3-871c-0f5f5fe9a433 req-c92e248b-5f81-4533-b0b1-86c82fe148f8 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] No waiting events found dispatching network-vif-plugged-b709b4ab-585a-4aed-9f06-3c9650d54c09 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  3 18:59:01 compute-0 nova_compute[348325]: 2025-12-03 18:59:01.204 348329 WARNING nova.compute.manager [req-bb577504-4ed1-4cd3-871c-0f5f5fe9a433 req-c92e248b-5f81-4533-b0b1-86c82fe148f8 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Received unexpected event network-vif-plugged-b709b4ab-585a-4aed-9f06-3c9650d54c09 for instance with vm_state active and task_state None.
Dec  3 18:59:01 compute-0 nova_compute[348325]: 2025-12-03 18:59:01.205 348329 DEBUG nova.compute.manager [req-bb577504-4ed1-4cd3-871c-0f5f5fe9a433 req-c92e248b-5f81-4533-b0b1-86c82fe148f8 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Received event network-vif-plugged-b709b4ab-585a-4aed-9f06-3c9650d54c09 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  3 18:59:01 compute-0 nova_compute[348325]: 2025-12-03 18:59:01.205 348329 DEBUG oslo_concurrency.lockutils [req-bb577504-4ed1-4cd3-871c-0f5f5fe9a433 req-c92e248b-5f81-4533-b0b1-86c82fe148f8 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "eff2304f-0e67-4c93-ae65-20d4ddb87625-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:59:01 compute-0 nova_compute[348325]: 2025-12-03 18:59:01.206 348329 DEBUG oslo_concurrency.lockutils [req-bb577504-4ed1-4cd3-871c-0f5f5fe9a433 req-c92e248b-5f81-4533-b0b1-86c82fe148f8 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "eff2304f-0e67-4c93-ae65-20d4ddb87625-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:59:01 compute-0 nova_compute[348325]: 2025-12-03 18:59:01.206 348329 DEBUG oslo_concurrency.lockutils [req-bb577504-4ed1-4cd3-871c-0f5f5fe9a433 req-c92e248b-5f81-4533-b0b1-86c82fe148f8 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "eff2304f-0e67-4c93-ae65-20d4ddb87625-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:59:01 compute-0 nova_compute[348325]: 2025-12-03 18:59:01.207 348329 DEBUG nova.compute.manager [req-bb577504-4ed1-4cd3-871c-0f5f5fe9a433 req-c92e248b-5f81-4533-b0b1-86c82fe148f8 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] No waiting events found dispatching network-vif-plugged-b709b4ab-585a-4aed-9f06-3c9650d54c09 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  3 18:59:01 compute-0 nova_compute[348325]: 2025-12-03 18:59:01.207 348329 WARNING nova.compute.manager [req-bb577504-4ed1-4cd3-871c-0f5f5fe9a433 req-c92e248b-5f81-4533-b0b1-86c82fe148f8 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Received unexpected event network-vif-plugged-b709b4ab-585a-4aed-9f06-3c9650d54c09 for instance with vm_state active and task_state None.
Dec  3 18:59:01 compute-0 openstack_network_exporter[365222]: ERROR   18:59:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:59:01 compute-0 openstack_network_exporter[365222]: ERROR   18:59:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:59:01 compute-0 openstack_network_exporter[365222]: ERROR   18:59:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:59:01 compute-0 openstack_network_exporter[365222]: ERROR   18:59:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:59:01 compute-0 openstack_network_exporter[365222]: ERROR   18:59:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
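These exporter errors mean it found no unixctl control sockets for ovsdb-server or ovn-northd. On a compute node that is expected for ovn-northd (it runs on the controllers), while the ovsdb-server/datapath calls fail whenever the sockets are not where the exporter looks. A quick check of which control sockets exist, and a ping through one, might look like this (default run directories assumed; the containerized layout may relocate them):

import glob
import subprocess

# unixctl control sockets are named <daemon>.<pid>.ctl; default run
# directories assumed here.
sockets = glob.glob("/var/run/openvswitch/*.ctl") + glob.glob("/var/run/ovn/*.ctl")
for ctl in sockets:
    print("control socket:", ctl)
    # `ovs-appctl -t <socket> version` works against any daemon that
    # exposes unixctl (ovs-vswitchd, ovsdb-server, ovn-controller, ...).
    result = subprocess.run(["ovs-appctl", "-t", ctl, "version"],
                            capture_output=True, text=True)
    print((result.stdout or result.stderr).strip())
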
Dec  3 18:59:01 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1803: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.2 MiB/s wr, 149 op/s
Dec  3 18:59:02 compute-0 nova_compute[348325]: 2025-12-03 18:59:02.660 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:59:02 compute-0 podman[445582]: 2025-12-03 18:59:02.951040076 +0000 UTC m=+0.106156313 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  3 18:59:03 compute-0 nova_compute[348325]: 2025-12-03 18:59:03.307 348329 DEBUG nova.compute.manager [req-8ad912cb-0844-4a97-8048-4584123528b3 req-2e8de2e9-2863-440a-91f1-22f1a38e7bc4 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Received event network-vif-plugged-b709b4ab-585a-4aed-9f06-3c9650d54c09 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  3 18:59:03 compute-0 nova_compute[348325]: 2025-12-03 18:59:03.308 348329 DEBUG oslo_concurrency.lockutils [req-8ad912cb-0844-4a97-8048-4584123528b3 req-2e8de2e9-2863-440a-91f1-22f1a38e7bc4 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "eff2304f-0e67-4c93-ae65-20d4ddb87625-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:59:03 compute-0 nova_compute[348325]: 2025-12-03 18:59:03.309 348329 DEBUG oslo_concurrency.lockutils [req-8ad912cb-0844-4a97-8048-4584123528b3 req-2e8de2e9-2863-440a-91f1-22f1a38e7bc4 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "eff2304f-0e67-4c93-ae65-20d4ddb87625-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:59:03 compute-0 nova_compute[348325]: 2025-12-03 18:59:03.310 348329 DEBUG oslo_concurrency.lockutils [req-8ad912cb-0844-4a97-8048-4584123528b3 req-2e8de2e9-2863-440a-91f1-22f1a38e7bc4 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "eff2304f-0e67-4c93-ae65-20d4ddb87625-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:59:03 compute-0 nova_compute[348325]: 2025-12-03 18:59:03.311 348329 DEBUG nova.compute.manager [req-8ad912cb-0844-4a97-8048-4584123528b3 req-2e8de2e9-2863-440a-91f1-22f1a38e7bc4 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] No waiting events found dispatching network-vif-plugged-b709b4ab-585a-4aed-9f06-3c9650d54c09 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  3 18:59:03 compute-0 nova_compute[348325]: 2025-12-03 18:59:03.312 348329 WARNING nova.compute.manager [req-8ad912cb-0844-4a97-8048-4584123528b3 req-2e8de2e9-2863-440a-91f1-22f1a38e7bc4 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Received unexpected event network-vif-plugged-b709b4ab-585a-4aed-9f06-3c9650d54c09 for instance with vm_state active and task_state None.
Dec  3 18:59:03 compute-0 nova_compute[348325]: 2025-12-03 18:59:03.408 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:59:03 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1804: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 1001 KiB/s wr, 107 op/s
Dec  3 18:59:04 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 18:59:04 compute-0 nova_compute[348325]: 2025-12-03 18:59:04.503 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:59:04 compute-0 podman[445604]: 2025-12-03 18:59:04.90738899 +0000 UTC m=+0.080664250 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute)
Dec  3 18:59:04 compute-0 podman[445603]: 2025-12-03 18:59:04.937985537 +0000 UTC m=+0.115268525 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  3 18:59:05 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1805: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 20 KiB/s wr, 79 op/s
Dec  3 18:59:06 compute-0 nova_compute[348325]: 2025-12-03 18:59:06.314 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:59:07 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:59:07 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:59:07 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 18:59:07 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:59:07 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 18:59:07 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:59:07 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 31fc3ff8-8c71-4df5-a383-31842c349ba3 does not exist
Dec  3 18:59:07 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 27f86432-6cad-43d6-beb9-74ebcfcdcddb does not exist
Dec  3 18:59:07 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 3c7e0821-b9ac-48e0-8633-7b0057433dc4 does not exist
Dec  3 18:59:07 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 18:59:07 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 18:59:07 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 18:59:07 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:59:07 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 18:59:07 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 18:59:07 compute-0 nova_compute[348325]: 2025-12-03 18:59:07.663 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:59:07 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1806: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 13 KiB/s wr, 65 op/s
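The recurring ceph-mgr pgmap DBG lines are a cluster heartbeat: pg state counts, data/used/available capacity, and client IO rates. When scanning a long log, a small parser for those fields can help; a sketch matched to the format above (hypothetical helper, not a ceph tool):

import re

PGMAP = re.compile(
    r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: .*?; "
    r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, .*?"
    r"(?P<rd>\S+/s) rd, (?P<wr>\S+/s) wr, (?P<ops>\d+) op/s"
)

# Sample taken from the pgmap v1806 line above.
line = ("pgmap v1806: 321 pgs: 321 active+clean; 262 MiB data, "
        "385 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, "
        "13 KiB/s wr, 65 op/s")
match = PGMAP.search(line)
if match:
    print(match.groupdict())
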
Dec  3 18:59:08 compute-0 podman[445915]: 2025-12-03 18:59:08.039592883 +0000 UTC m=+0.042517309 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:59:08 compute-0 nova_compute[348325]: 2025-12-03 18:59:08.205 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:59:08 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 18:59:08 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:59:08 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 18:59:08 compute-0 podman[445915]: 2025-12-03 18:59:08.349896169 +0000 UTC m=+0.352820575 container create 3b8efea26e3668e0c9b2d87f8ca2b2aeee0a6c428c33fda54cde8bd01caa60f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_edison, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:59:08 compute-0 nova_compute[348325]: 2025-12-03 18:59:08.410 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:59:08 compute-0 systemd[1]: Started libpod-conmon-3b8efea26e3668e0c9b2d87f8ca2b2aeee0a6c428c33fda54cde8bd01caa60f8.scope.
Dec  3 18:59:08 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:59:08 compute-0 podman[445915]: 2025-12-03 18:59:08.569756927 +0000 UTC m=+0.572681363 container init 3b8efea26e3668e0c9b2d87f8ca2b2aeee0a6c428c33fda54cde8bd01caa60f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:59:08 compute-0 podman[445915]: 2025-12-03 18:59:08.58421909 +0000 UTC m=+0.587143496 container start 3b8efea26e3668e0c9b2d87f8ca2b2aeee0a6c428c33fda54cde8bd01caa60f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 18:59:08 compute-0 podman[445915]: 2025-12-03 18:59:08.593141379 +0000 UTC m=+0.596065805 container attach 3b8efea26e3668e0c9b2d87f8ca2b2aeee0a6c428c33fda54cde8bd01caa60f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_edison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec  3 18:59:08 compute-0 clever_edison[445929]: 167 167
Dec  3 18:59:08 compute-0 systemd[1]: libpod-3b8efea26e3668e0c9b2d87f8ca2b2aeee0a6c428c33fda54cde8bd01caa60f8.scope: Deactivated successfully.
Dec  3 18:59:08 compute-0 podman[445915]: 2025-12-03 18:59:08.595890916 +0000 UTC m=+0.598815342 container died 3b8efea26e3668e0c9b2d87f8ca2b2aeee0a6c428c33fda54cde8bd01caa60f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_edison, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:59:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-e3246f30d9fc23c4edd7be47a79c47c03ab6865e57b73b548d6793bd9a5bb492-merged.mount: Deactivated successfully.
Dec  3 18:59:08 compute-0 podman[445915]: 2025-12-03 18:59:08.680270896 +0000 UTC m=+0.683195302 container remove 3b8efea26e3668e0c9b2d87f8ca2b2aeee0a6c428c33fda54cde8bd01caa60f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec  3 18:59:08 compute-0 systemd[1]: libpod-conmon-3b8efea26e3668e0c9b2d87f8ca2b2aeee0a6c428c33fda54cde8bd01caa60f8.scope: Deactivated successfully.
Dec  3 18:59:08 compute-0 podman[445953]: 2025-12-03 18:59:08.934671857 +0000 UTC m=+0.058541930 container create 5b07b6b9d8d55939210d8456764476bcad2868e25677ec527b83911d517ccc00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_carson, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec  3 18:59:08 compute-0 systemd[1]: Started libpod-conmon-5b07b6b9d8d55939210d8456764476bcad2868e25677ec527b83911d517ccc00.scope.
Dec  3 18:59:09 compute-0 podman[445953]: 2025-12-03 18:59:08.909369679 +0000 UTC m=+0.033239772 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:59:09 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:59:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11fe7e8bb5ba6f5f367f53510ad95a74a3421111c7cdb3b6dc78afe836c9ba29/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:59:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11fe7e8bb5ba6f5f367f53510ad95a74a3421111c7cdb3b6dc78afe836c9ba29/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:59:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11fe7e8bb5ba6f5f367f53510ad95a74a3421111c7cdb3b6dc78afe836c9ba29/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:59:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11fe7e8bb5ba6f5f367f53510ad95a74a3421111c7cdb3b6dc78afe836c9ba29/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:59:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11fe7e8bb5ba6f5f367f53510ad95a74a3421111c7cdb3b6dc78afe836c9ba29/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 18:59:09 compute-0 podman[445953]: 2025-12-03 18:59:09.083521871 +0000 UTC m=+0.207391974 container init 5b07b6b9d8d55939210d8456764476bcad2868e25677ec527b83911d517ccc00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_carson, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:59:09 compute-0 podman[445953]: 2025-12-03 18:59:09.099199164 +0000 UTC m=+0.223069257 container start 5b07b6b9d8d55939210d8456764476bcad2868e25677ec527b83911d517ccc00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_carson, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Dec  3 18:59:09 compute-0 podman[445953]: 2025-12-03 18:59:09.105640921 +0000 UTC m=+0.229511044 container attach 5b07b6b9d8d55939210d8456764476bcad2868e25677ec527b83911d517ccc00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_carson, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec  3 18:59:09 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 18:59:09 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1807: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 12 KiB/s wr, 55 op/s
Dec  3 18:59:10 compute-0 modest_carson[445969]: --> passed data devices: 0 physical, 3 LVM
Dec  3 18:59:10 compute-0 modest_carson[445969]: --> relative data size: 1.0
Dec  3 18:59:10 compute-0 modest_carson[445969]: --> All data devices are unavailable
Dec  3 18:59:10 compute-0 systemd[1]: libpod-5b07b6b9d8d55939210d8456764476bcad2868e25677ec527b83911d517ccc00.scope: Deactivated successfully.
Dec  3 18:59:10 compute-0 systemd[1]: libpod-5b07b6b9d8d55939210d8456764476bcad2868e25677ec527b83911d517ccc00.scope: Consumed 1.151s CPU time.
Dec  3 18:59:10 compute-0 podman[445998]: 2025-12-03 18:59:10.393838533 +0000 UTC m=+0.048719711 container died 5b07b6b9d8d55939210d8456764476bcad2868e25677ec527b83911d517ccc00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec  3 18:59:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-11fe7e8bb5ba6f5f367f53510ad95a74a3421111c7cdb3b6dc78afe836c9ba29-merged.mount: Deactivated successfully.
Dec  3 18:59:10 compute-0 podman[445998]: 2025-12-03 18:59:10.468187248 +0000 UTC m=+0.123068416 container remove 5b07b6b9d8d55939210d8456764476bcad2868e25677ec527b83911d517ccc00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_carson, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:59:10 compute-0 systemd[1]: libpod-conmon-5b07b6b9d8d55939210d8456764476bcad2868e25677ec527b83911d517ccc00.scope: Deactivated successfully.
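The short-lived ceph containers here (clever_edison, modest_carson) are cephadm one-shot probes; modest_carson's output ("passed data devices: 0 physical, 3 LVM ... All data devices are unavailable") is ceph-volume's batch report concluding the three LVs are already consumed by OSDs, so there is nothing new to provision. To see the same availability view by hand, something like the following (run where ceph-volume is available, e.g. inside the ceph container via cephadm shell):

import json
import subprocess

# `ceph-volume inventory --format json` lists every device/LV with its
# availability and rejection reasons, the same view the probe above
# is acting on.
raw = subprocess.check_output(
    ["ceph-volume", "inventory", "--format", "json"], text=True)
for dev in json.loads(raw):
    state = "available" if dev["available"] else "unavailable"
    print(dev["path"], state, dev.get("rejected_reasons", []))
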
Dec  3 18:59:10 compute-0 nova_compute[348325]: 2025-12-03 18:59:10.993 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:59:11 compute-0 podman[446150]: 2025-12-03 18:59:11.251854412 +0000 UTC m=+0.024773327 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:59:11 compute-0 podman[446150]: 2025-12-03 18:59:11.372843395 +0000 UTC m=+0.145762290 container create ece950b517eea698a2decef4fb76878134570ed4f12c4222556480b32cdbc205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_khorana, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  3 18:59:11 compute-0 systemd[1]: Started libpod-conmon-ece950b517eea698a2decef4fb76878134570ed4f12c4222556480b32cdbc205.scope.
Dec  3 18:59:11 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:59:11 compute-0 podman[446150]: 2025-12-03 18:59:11.536627874 +0000 UTC m=+0.309546809 container init ece950b517eea698a2decef4fb76878134570ed4f12c4222556480b32cdbc205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_khorana, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True)
Dec  3 18:59:11 compute-0 podman[446150]: 2025-12-03 18:59:11.547138621 +0000 UTC m=+0.320057526 container start ece950b517eea698a2decef4fb76878134570ed4f12c4222556480b32cdbc205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_khorana, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:59:11 compute-0 interesting_khorana[446166]: 167 167
Dec  3 18:59:11 compute-0 systemd[1]: libpod-ece950b517eea698a2decef4fb76878134570ed4f12c4222556480b32cdbc205.scope: Deactivated successfully.
Dec  3 18:59:11 compute-0 conmon[446166]: conmon ece950b517eea698a2de <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ece950b517eea698a2decef4fb76878134570ed4f12c4222556480b32cdbc205.scope/container/memory.events
Dec  3 18:59:11 compute-0 podman[446150]: 2025-12-03 18:59:11.57372788 +0000 UTC m=+0.346646835 container attach ece950b517eea698a2decef4fb76878134570ed4f12c4222556480b32cdbc205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_khorana, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  3 18:59:11 compute-0 podman[446150]: 2025-12-03 18:59:11.576119898 +0000 UTC m=+0.349038803 container died ece950b517eea698a2decef4fb76878134570ed4f12c4222556480b32cdbc205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_khorana, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  3 18:59:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-95b8e5ad3d8a6df89e3c19325716e3888d39425cbcee5116b36e92965c40f218-merged.mount: Deactivated successfully.
Dec  3 18:59:11 compute-0 podman[446150]: 2025-12-03 18:59:11.62370033 +0000 UTC m=+0.396619215 container remove ece950b517eea698a2decef4fb76878134570ed4f12c4222556480b32cdbc205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_khorana, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:59:11 compute-0 systemd[1]: libpod-conmon-ece950b517eea698a2decef4fb76878134570ed4f12c4222556480b32cdbc205.scope: Deactivated successfully.
Dec  3 18:59:11 compute-0 nova_compute[348325]: 2025-12-03 18:59:11.741 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:59:11 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1808: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 646 KiB/s rd, 85 B/s wr, 20 op/s
Dec  3 18:59:11 compute-0 podman[446189]: 2025-12-03 18:59:11.832906758 +0000 UTC m=+0.051546980 container create 9176607bbe194e62cec13984202ef71f805a97880d941a1bfb7b26f37ff3aad5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_payne, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:59:11 compute-0 systemd[1]: Started libpod-conmon-9176607bbe194e62cec13984202ef71f805a97880d941a1bfb7b26f37ff3aad5.scope.
Dec  3 18:59:11 compute-0 podman[446189]: 2025-12-03 18:59:11.813232788 +0000 UTC m=+0.031873110 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:59:11 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:59:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35be790edbc9a32e05f040ee6d96e056ddf9b4ea18a11970267ac7472f1cdd1c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:59:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35be790edbc9a32e05f040ee6d96e056ddf9b4ea18a11970267ac7472f1cdd1c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:59:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35be790edbc9a32e05f040ee6d96e056ddf9b4ea18a11970267ac7472f1cdd1c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:59:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35be790edbc9a32e05f040ee6d96e056ddf9b4ea18a11970267ac7472f1cdd1c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 18:59:11 compute-0 podman[446189]: 2025-12-03 18:59:11.963340942 +0000 UTC m=+0.181981184 container init 9176607bbe194e62cec13984202ef71f805a97880d941a1bfb7b26f37ff3aad5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_payne, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:59:11 compute-0 podman[446189]: 2025-12-03 18:59:11.9821022 +0000 UTC m=+0.200742432 container start 9176607bbe194e62cec13984202ef71f805a97880d941a1bfb7b26f37ff3aad5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_payne, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec  3 18:59:11 compute-0 podman[446189]: 2025-12-03 18:59:11.986856567 +0000 UTC m=+0.205496789 container attach 9176607bbe194e62cec13984202ef71f805a97880d941a1bfb7b26f37ff3aad5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_payne, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  3 18:59:12 compute-0 nova_compute[348325]: 2025-12-03 18:59:12.666 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
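The JSON emitted next by blissful_payne has the shape of `ceph-volume lvm list --format json`: a map of OSD id to logical volumes, carrying the ceph.* lv_tags that cephadm uses to re-associate LVs with OSDs. A sketch for summarizing a captured copy of that output (file name hypothetical; keys taken from the JSON below):

import json

# Print one line per OSD id with its LV path, backing device, and
# osd_fsid tag, from a saved copy of the container's JSON output.
with open("lvm_list.json") as f:
    osds = json.load(f)

for osd_id, lvs in sorted(osds.items(), key=lambda kv: int(kv[0])):
    for lv in lvs:
        devices = ",".join(lv["devices"])
        fsid = lv["tags"]["ceph.osd_fsid"]
        print(f"osd.{osd_id}: {lv['lv_path']} on {devices} fsid={fsid}")
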
Dec  3 18:59:12 compute-0 blissful_payne[446206]: {
Dec  3 18:59:12 compute-0 blissful_payne[446206]:    "0": [
Dec  3 18:59:12 compute-0 blissful_payne[446206]:        {
Dec  3 18:59:12 compute-0 blissful_payne[446206]:            "devices": [
Dec  3 18:59:12 compute-0 blissful_payne[446206]:                "/dev/loop3"
Dec  3 18:59:12 compute-0 blissful_payne[446206]:            ],
Dec  3 18:59:12 compute-0 blissful_payne[446206]:            "lv_name": "ceph_lv0",
Dec  3 18:59:12 compute-0 blissful_payne[446206]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:59:12 compute-0 blissful_payne[446206]:            "lv_size": "21470642176",
Dec  3 18:59:12 compute-0 blissful_payne[446206]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=973fbbc8-5aff-4a53-bee8-42e5a6788dd6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:59:12 compute-0 blissful_payne[446206]:            "lv_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:59:12 compute-0 blissful_payne[446206]:            "name": "ceph_lv0",
Dec  3 18:59:12 compute-0 blissful_payne[446206]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:59:12 compute-0 blissful_payne[446206]:            "tags": {
Dec  3 18:59:12 compute-0 blissful_payne[446206]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 18:59:12 compute-0 blissful_payne[446206]:                "ceph.block_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 18:59:12 compute-0 blissful_payne[446206]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:59:12 compute-0 blissful_payne[446206]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:59:12 compute-0 blissful_payne[446206]:                "ceph.cluster_name": "ceph",
Dec  3 18:59:12 compute-0 blissful_payne[446206]:                "ceph.crush_device_class": "",
Dec  3 18:59:12 compute-0 blissful_payne[446206]:                "ceph.encrypted": "0",
Dec  3 18:59:12 compute-0 blissful_payne[446206]:                "ceph.osd_fsid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:59:12 compute-0 blissful_payne[446206]:                "ceph.osd_id": "0",
Dec  3 18:59:12 compute-0 blissful_payne[446206]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:59:12 compute-0 blissful_payne[446206]:                "ceph.type": "block",
Dec  3 18:59:12 compute-0 blissful_payne[446206]:                "ceph.vdo": "0"
Dec  3 18:59:12 compute-0 blissful_payne[446206]:            },
Dec  3 18:59:12 compute-0 blissful_payne[446206]:            "type": "block",
Dec  3 18:59:12 compute-0 blissful_payne[446206]:            "vg_name": "ceph_vg0"
Dec  3 18:59:12 compute-0 blissful_payne[446206]:        }
Dec  3 18:59:12 compute-0 blissful_payne[446206]:    ],
Dec  3 18:59:12 compute-0 blissful_payne[446206]:    "1": [
Dec  3 18:59:12 compute-0 blissful_payne[446206]:        {
Dec  3 18:59:12 compute-0 blissful_payne[446206]:            "devices": [
Dec  3 18:59:12 compute-0 blissful_payne[446206]:                "/dev/loop4"
Dec  3 18:59:12 compute-0 blissful_payne[446206]:            ],
Dec  3 18:59:12 compute-0 blissful_payne[446206]:            "lv_name": "ceph_lv1",
Dec  3 18:59:12 compute-0 blissful_payne[446206]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:59:12 compute-0 blissful_payne[446206]:            "lv_size": "21470642176",
Dec  3 18:59:12 compute-0 blissful_payne[446206]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1e2b0083-5293-47cb-a3d1-bc27cedc4ede,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:59:12 compute-0 blissful_payne[446206]:            "lv_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:59:12 compute-0 blissful_payne[446206]:            "name": "ceph_lv1",
Dec  3 18:59:12 compute-0 blissful_payne[446206]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:59:12 compute-0 blissful_payne[446206]:            "tags": {
Dec  3 18:59:12 compute-0 blissful_payne[446206]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 18:59:12 compute-0 blissful_payne[446206]:                "ceph.block_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 18:59:12 compute-0 blissful_payne[446206]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:59:12 compute-0 blissful_payne[446206]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:59:12 compute-0 blissful_payne[446206]:                "ceph.cluster_name": "ceph",
Dec  3 18:59:12 compute-0 blissful_payne[446206]:                "ceph.crush_device_class": "",
Dec  3 18:59:12 compute-0 blissful_payne[446206]:                "ceph.encrypted": "0",
Dec  3 18:59:12 compute-0 blissful_payne[446206]:                "ceph.osd_fsid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:59:12 compute-0 blissful_payne[446206]:                "ceph.osd_id": "1",
Dec  3 18:59:12 compute-0 blissful_payne[446206]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:59:12 compute-0 blissful_payne[446206]:                "ceph.type": "block",
Dec  3 18:59:12 compute-0 blissful_payne[446206]:                "ceph.vdo": "0"
Dec  3 18:59:12 compute-0 blissful_payne[446206]:            },
Dec  3 18:59:12 compute-0 blissful_payne[446206]:            "type": "block",
Dec  3 18:59:12 compute-0 blissful_payne[446206]:            "vg_name": "ceph_vg1"
Dec  3 18:59:12 compute-0 blissful_payne[446206]:        }
Dec  3 18:59:12 compute-0 blissful_payne[446206]:    ],
Dec  3 18:59:12 compute-0 blissful_payne[446206]:    "2": [
Dec  3 18:59:12 compute-0 blissful_payne[446206]:        {
Dec  3 18:59:12 compute-0 blissful_payne[446206]:            "devices": [
Dec  3 18:59:12 compute-0 blissful_payne[446206]:                "/dev/loop5"
Dec  3 18:59:12 compute-0 blissful_payne[446206]:            ],
Dec  3 18:59:12 compute-0 blissful_payne[446206]:            "lv_name": "ceph_lv2",
Dec  3 18:59:12 compute-0 blissful_payne[446206]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:59:12 compute-0 blissful_payne[446206]:            "lv_size": "21470642176",
Dec  3 18:59:12 compute-0 blissful_payne[446206]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2abec9de-afba-437e-9a17-384a1dd8cd50,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 18:59:12 compute-0 blissful_payne[446206]:            "lv_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:59:12 compute-0 blissful_payne[446206]:            "name": "ceph_lv2",
Dec  3 18:59:12 compute-0 blissful_payne[446206]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:59:12 compute-0 blissful_payne[446206]:            "tags": {
Dec  3 18:59:12 compute-0 blissful_payne[446206]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 18:59:12 compute-0 blissful_payne[446206]:                "ceph.block_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 18:59:12 compute-0 blissful_payne[446206]:                "ceph.cephx_lockbox_secret": "",
Dec  3 18:59:12 compute-0 blissful_payne[446206]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:59:12 compute-0 blissful_payne[446206]:                "ceph.cluster_name": "ceph",
Dec  3 18:59:12 compute-0 blissful_payne[446206]:                "ceph.crush_device_class": "",
Dec  3 18:59:12 compute-0 blissful_payne[446206]:                "ceph.encrypted": "0",
Dec  3 18:59:12 compute-0 blissful_payne[446206]:                "ceph.osd_fsid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:59:12 compute-0 blissful_payne[446206]:                "ceph.osd_id": "2",
Dec  3 18:59:12 compute-0 blissful_payne[446206]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 18:59:12 compute-0 blissful_payne[446206]:                "ceph.type": "block",
Dec  3 18:59:12 compute-0 blissful_payne[446206]:                "ceph.vdo": "0"
Dec  3 18:59:12 compute-0 blissful_payne[446206]:            },
Dec  3 18:59:12 compute-0 blissful_payne[446206]:            "type": "block",
Dec  3 18:59:12 compute-0 blissful_payne[446206]:            "vg_name": "ceph_vg2"
Dec  3 18:59:12 compute-0 blissful_payne[446206]:        }
Dec  3 18:59:12 compute-0 blissful_payne[446206]:    ]
Dec  3 18:59:12 compute-0 blissful_payne[446206]: }
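
The JSON payload printed by blissful_payne is consistent with `ceph-volume lvm list --format json` run via a short-lived cephadm container: a map from OSD id to the logical volumes backing it, with the ceph.* LVM tags carried both as the flat `lv_tags` string and as the parsed `tags` object. A minimal sketch of consuming that structure (the helper name is illustrative):

    import json

    def osd_devices(report_text):
        # Map each OSD id to its backing devices and LV, using the
        # report structure shown in the log above.
        report = json.loads(report_text)
        out = {}
        for osd_id, lvs in report.items():
            for lv in lvs:
                tags = lv.get("tags", {})
                out[int(osd_id)] = {
                    "devices": lv.get("devices", []),
                    "lv_path": lv.get("lv_path"),
                    "osd_fsid": tags.get("ceph.osd_fsid"),
                    "encrypted": tags.get("ceph.encrypted") == "1",
                }
        return out

For the report above this yields, e.g., OSD 0 backed by /dev/loop3 via /dev/ceph_vg0/ceph_lv0, with all three OSDs unencrypted and sharing the same cluster fsid.
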
Dec  3 18:59:12 compute-0 systemd[1]: libpod-9176607bbe194e62cec13984202ef71f805a97880d941a1bfb7b26f37ff3aad5.scope: Deactivated successfully.
Dec  3 18:59:12 compute-0 podman[446189]: 2025-12-03 18:59:12.797085898 +0000 UTC m=+1.015726130 container died 9176607bbe194e62cec13984202ef71f805a97880d941a1bfb7b26f37ff3aad5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_payne, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec  3 18:59:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-35be790edbc9a32e05f040ee6d96e056ddf9b4ea18a11970267ac7472f1cdd1c-merged.mount: Deactivated successfully.
Dec  3 18:59:13 compute-0 podman[446189]: 2025-12-03 18:59:13.200067987 +0000 UTC m=+1.418708219 container remove 9176607bbe194e62cec13984202ef71f805a97880d941a1bfb7b26f37ff3aad5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  3 18:59:13 compute-0 systemd[1]: libpod-conmon-9176607bbe194e62cec13984202ef71f805a97880d941a1bfb7b26f37ff3aad5.scope: Deactivated successfully.
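
podman logs one journal line per container lifecycle event, each carrying the 64-hex container id and the image labels; for blissful_payne the attach, died and remove events appear here (create, init and start precede this excerpt), bracketed by systemd deactivating the matching libpod and conmon scopes. To reconstruct such a sequence from a journal capture, a small extractor works against lines of this shape (the regex is an assumption about the exact format):

    import re

    EVENT_RE = re.compile(
        r"container (?P<event>create|init|start|attach|died|remove) "
        r"(?P<cid>[0-9a-f]{64})"
    )

    def lifecycle(journal_lines):
        # Collect (short container id, event) pairs in the order logged.
        return [(m["cid"][:12], m["event"])
                for line in journal_lines
                if (m := EVENT_RE.search(line))]
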
Dec  3 18:59:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:13.253 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them. Therefore, one can expect the processing to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 18:59:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:13.254 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  3 18:59:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:13.254 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:59:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:13.255 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7eff8d7fffe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:59:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:13.256 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:59:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:13.256 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff9026f920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:59:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:13.256 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:59:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:13.256 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:59:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:13.256 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffa10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:59:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:13.256 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8daba2d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:59:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:13.256 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:59:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:13.257 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff90799b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:59:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:13.257 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:59:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:13.257 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8f46ebd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:59:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:13.257 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:59:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:13.257 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:59:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:13.257 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:59:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:13.257 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:59:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:13.258 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:59:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:13.258 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:59:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:13.258 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:59:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:13.258 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:59:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:13.259 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:59:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:13.259 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:59:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:13.259 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:59:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:13.259 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7fff50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:59:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:13.259 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:59:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:13.259 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7fffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 18:59:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:13.259 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8ef7c7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
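
Each "Registering pollster" line corresponds to the manager submitting one polling task to a shared concurrent.futures.ThreadPoolExecutor together with the per-cycle cache, pollster-history and discovery-cache dicts; with only one worker thread configured (the "[1] threads" line above), the queued tasks run serially, which is what the earlier warning about having more pollsters than worker threads is pointing at. A minimal sketch of the pattern (names are illustrative, not ceilometer's API):

    from concurrent.futures import ThreadPoolExecutor

    def run_polling_cycle(pollsters, workers=1):
        # One submitted task per registered pollster, all sharing the
        # per-cycle caches, as in the log lines above.
        cache, history, discovery_cache = {}, {}, {}
        with ThreadPoolExecutor(max_workers=workers) as executor:
            futures = [executor.submit(p, cache, history, discovery_cache)
                       for p in pollsters]
            return [f.result() for f in futures]
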
Dec  3 18:59:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:13.261 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec  3 18:59:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:13.262 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/4e045c2f-f0fd-4171-b724-3e38bd7ec4eb -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}381125532ab0338283f553a8d9011c877e61445a70740cb69aa0e3ed00495f3c" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
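
Note that the X-Auth-Token header is logged as {SHA256}3811... rather than as the raw token: keystoneauth1's debug output substitutes a SHA-256 digest so requests can be correlated across log lines without leaking the credential. The masking amounts to:

    import hashlib

    def masked_token(token: str) -> str:
        # Same "{SHA256}<hex>" shape as the X-Auth-Token in the REQ line above.
        return "{SHA256}" + hashlib.sha256(token.encode("utf-8")).hexdigest()

The same digest appears on every REQ line in this excerpt, so all three server lookups below reuse a single token.
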
Dec  3 18:59:13 compute-0 nova_compute[348325]: 2025-12-03 18:59:13.413 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:59:13 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1809: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Dec  3 18:59:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:59:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:59:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:59:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:59:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:59:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:59:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_18:59:13
Dec  3 18:59:13 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 18:59:13 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 18:59:13 compute-0 ceph-mgr[193091]: [balancer INFO root] pools ['cephfs.cephfs.data', '.rgw.root', 'default.rgw.log', 'default.rgw.control', 'default.rgw.meta', '.mgr', 'vms', 'backups', 'volumes', 'cephfs.cephfs.meta', 'images']
Dec  3 18:59:13 compute-0 ceph-mgr[193091]: [balancer INFO root] prepared 0/10 changes
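
The pgmap line is a one-line cluster snapshot (PG count and states, logical data versus raw usage, client IO rates), and the balancer pass that follows runs in upmap mode with a 5% max-misplaced budget and finds nothing to move across the eleven pools ("prepared 0/10 changes"). For ad-hoc log analysis the summary fields can be pulled out with a regex keyed to this exact format; anything durable should query `ceph pg stat --format json` instead:

    import re

    PGMAP_RE = re.compile(
        r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: .*?; "
        r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
        r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail"
    )

    def parse_pgmap(line):
        # Returns e.g. {'ver': '1809', 'pgs': '321', 'data': '262 MiB', ...}
        m = PGMAP_RE.search(line)
        return m.groupdict() if m else None
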
Dec  3 18:59:14 compute-0 podman[446365]: 2025-12-03 18:59:14.036033687 +0000 UTC m=+0.065191122 container create 0a61b661f4ae5b819b2df16bfee9bb3ba9906eb0d69bddc7153e124e85fb0d9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  3 18:59:14 compute-0 systemd[1]: Started libpod-conmon-0a61b661f4ae5b819b2df16bfee9bb3ba9906eb0d69bddc7153e124e85fb0d9a.scope.
Dec  3 18:59:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:14.092 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 2083 Content-Type: application/json Date: Wed, 03 Dec 2025 18:59:13 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-195a3f39-438e-4219-9a93-8bcb8d8b49e7 x-openstack-request-id: req-195a3f39-438e-4219-9a93-8bcb8d8b49e7 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec  3 18:59:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:14.093 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "4e045c2f-f0fd-4171-b724-3e38bd7ec4eb", "name": "tempest-TestServerBasicOps-server-2083585917", "status": "ACTIVE", "tenant_id": "0e342f56e114484b986071d1dfb8656a", "user_id": "d3387836400c4ffa96fc7c863361df79", "metadata": {"meta1": "data1", "meta2": "data2", "metaN": "dataN"}, "hostId": "7d2db97d964e78ad4ccdc3ec651e5125bfcdb7b93a3ea1c2010e91ba", "image": {"id": "55982930-937b-484e-96ee-69e406a48023", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/55982930-937b-484e-96ee-69e406a48023"}]}, "flavor": {"id": "a94cfbfb-a20a-4689-ac91-e7436db75880", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/a94cfbfb-a20a-4689-ac91-e7436db75880"}]}, "created": "2025-12-03T18:58:29Z", "updated": "2025-12-03T18:58:42Z", "addresses": {"tempest-TestServerBasicOps-940353231-network": [{"version": 4, "addr": "10.100.0.3", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:a6:0c:ea"}, {"version": 4, "addr": "192.168.122.181", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:a6:0c:ea"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/4e045c2f-f0fd-4171-b724-3e38bd7ec4eb"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/4e045c2f-f0fd-4171-b724-3e38bd7ec4eb"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-TestServerBasicOps-2099261388", "OS-SRV-USG:launched_at": "2025-12-03T18:58:42.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-securitygroup--1153143234"}, {"name": "tempest-secgroup-smoke-774542994"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000b", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec  3 18:59:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:14.093 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/4e045c2f-f0fd-4171-b724-3e38bd7ec4eb used request id req-195a3f39-438e-4219-9a93-8bcb8d8b49e7 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec  3 18:59:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:14.094 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '4e045c2f-f0fd-4171-b724-3e38bd7ec4eb', 'name': 'tempest-TestServerBasicOps-server-2083585917', 'flavor': {'id': 'a94cfbfb-a20a-4689-ac91-e7436db75880', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '55982930-937b-484e-96ee-69e406a48023'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '0e342f56e114484b986071d1dfb8656a', 'user_id': 'd3387836400c4ffa96fc7c863361df79', 'hostId': '7d2db97d964e78ad4ccdc3ec651e5125bfcdb7b93a3ea1c2010e91ba', 'status': 'active', 'metadata': {'meta1': 'data1', 'meta2': 'data2', 'metaN': 'dataN'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  3 18:59:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:14.096 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance c9937213-8842-4393-90b0-edb363037633 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec  3 18:59:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:14.097 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/c9937213-8842-4393-90b0-edb363037633 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}381125532ab0338283f553a8d9011c877e61445a70740cb69aa0e3ed00495f3c" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec  3 18:59:14 compute-0 podman[446365]: 2025-12-03 18:59:14.014904201 +0000 UTC m=+0.044061656 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:59:14 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:59:14 compute-0 podman[446365]: 2025-12-03 18:59:14.148949224 +0000 UTC m=+0.178106709 container init 0a61b661f4ae5b819b2df16bfee9bb3ba9906eb0d69bddc7153e124e85fb0d9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_curie, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 18:59:14 compute-0 podman[446365]: 2025-12-03 18:59:14.16024702 +0000 UTC m=+0.189404485 container start 0a61b661f4ae5b819b2df16bfee9bb3ba9906eb0d69bddc7153e124e85fb0d9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_curie, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  3 18:59:14 compute-0 vigorous_curie[446378]: 167 167
Dec  3 18:59:14 compute-0 podman[446365]: 2025-12-03 18:59:14.167399344 +0000 UTC m=+0.196556809 container attach 0a61b661f4ae5b819b2df16bfee9bb3ba9906eb0d69bddc7153e124e85fb0d9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_curie, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Dec  3 18:59:14 compute-0 systemd[1]: libpod-0a61b661f4ae5b819b2df16bfee9bb3ba9906eb0d69bddc7153e124e85fb0d9a.scope: Deactivated successfully.
Dec  3 18:59:14 compute-0 podman[446365]: 2025-12-03 18:59:14.169629439 +0000 UTC m=+0.198786954 container died 0a61b661f4ae5b819b2df16bfee9bb3ba9906eb0d69bddc7153e124e85fb0d9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec  3 18:59:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-9572a40bd9b6faad17e4e334578f805023696f86584395945762ea725253ed86-merged.mount: Deactivated successfully.
Dec  3 18:59:14 compute-0 podman[446365]: 2025-12-03 18:59:14.228963028 +0000 UTC m=+0.258120463 container remove 0a61b661f4ae5b819b2df16bfee9bb3ba9906eb0d69bddc7153e124e85fb0d9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_curie, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  3 18:59:14 compute-0 systemd[1]: libpod-conmon-0a61b661f4ae5b819b2df16bfee9bb3ba9906eb0d69bddc7153e124e85fb0d9a.scope: Deactivated successfully.
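
vigorous_curie lives for well under a second and its only output is "167 167", consistent with cephadm's uid/gid probe: run the ceph image once, stat a ceph-owned path, and use the result (167:167 is the ceph user and group in these images) to chown host directories to match. A hedged sketch of such a probe; the exact path and entrypoint cephadm uses are assumptions:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    def probe_ceph_uid_gid(image=IMAGE):
        # Returns e.g. (167, 167), matching the container output above.
        out = subprocess.run(
            ["podman", "run", "--rm", "--entrypoint", "stat", image,
             "-c", "%u %g", "/var/lib/ceph"],
            check=True, capture_output=True, text=True,
        ).stdout.split()
        return int(out[0]), int(out[1])
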
Dec  3 18:59:14 compute-0 podman[446403]: 2025-12-03 18:59:14.430368705 +0000 UTC m=+0.043747589 container create ce0524f7719bca32df381a4213d606de06c6ddbf6b7b525750027fe9d717a1b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 18:59:14 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 18:59:14 compute-0 systemd[1]: Started libpod-conmon-ce0524f7719bca32df381a4213d606de06c6ddbf6b7b525750027fe9d717a1b5.scope.
Dec  3 18:59:14 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:59:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f559a47691031d2fdabf185a42d474b202065fdc7a1ca29d55956dada4023088/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 18:59:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f559a47691031d2fdabf185a42d474b202065fdc7a1ca29d55956dada4023088/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 18:59:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f559a47691031d2fdabf185a42d474b202065fdc7a1ca29d55956dada4023088/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 18:59:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f559a47691031d2fdabf185a42d474b202065fdc7a1ca29d55956dada4023088/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
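
The xfs warnings concern the year-2038 limit: the filesystem backing these container mounts was created without the xfs bigtime feature, so its inode timestamps top out at epoch second 0x7fffffff, which the kernel prints in hex. Decoding that limit:

    from datetime import datetime, timezone

    # 0x7fffffff is the limit printed in the kernel messages above.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00
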
Dec  3 18:59:14 compute-0 podman[446403]: 2025-12-03 18:59:14.408948242 +0000 UTC m=+0.022327146 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 18:59:14 compute-0 podman[446403]: 2025-12-03 18:59:14.518264531 +0000 UTC m=+0.131643425 container init ce0524f7719bca32df381a4213d606de06c6ddbf6b7b525750027fe9d717a1b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_beaver, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 18:59:14 compute-0 podman[446403]: 2025-12-03 18:59:14.531715599 +0000 UTC m=+0.145094473 container start ce0524f7719bca32df381a4213d606de06c6ddbf6b7b525750027fe9d717a1b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec  3 18:59:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 18:59:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:59:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 18:59:14 compute-0 podman[446403]: 2025-12-03 18:59:14.536186848 +0000 UTC m=+0.149565722 container attach ce0524f7719bca32df381a4213d606de06c6ddbf6b7b525750027fe9d717a1b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 18:59:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 18:59:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:59:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:59:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 18:59:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 18:59:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 18:59:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
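
The rbd_support mgr module is reloading its TrashPurgeScheduleHandler and MirrorSnapshotScheduleHandler state for each RBD pool (vms, volumes, backups, images); an empty start_after= means each pool scan starts from the beginning. The same schedule state can be listed from the CLI; a sketch, assuming these rbd subcommands are available in this release:

    import subprocess

    def show_rbd_schedules(pool):
        # Assumption: "rbd mirror snapshot schedule ls" and
        # "rbd trash purge schedule ls" exist in this Ceph release.
        for sub in (("mirror", "snapshot", "schedule", "ls"),
                    ("trash", "purge", "schedule", "ls")):
            out = subprocess.run(["rbd", "-p", pool, *sub],
                                 capture_output=True, text=True)
            print(pool, " ".join(sub), "->", out.stdout.strip() or "(none)")

    for pool in ("vms", "volumes", "backups", "images"):
        show_rbd_schedules(pool)
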
Dec  3 18:59:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:14.644 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1997 Content-Type: application/json Date: Wed, 03 Dec 2025 18:59:14 GMT Keep-Alive: timeout=5, max=99 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-01d15d5e-cb7e-43f7-8adc-655131f44beb x-openstack-request-id: req-01d15d5e-cb7e-43f7-8adc-655131f44beb _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec  3 18:59:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:14.645 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "c9937213-8842-4393-90b0-edb363037633", "name": "tempest-AttachInterfacesUnderV243Test-server-1449486284", "status": "ACTIVE", "tenant_id": "82b2746c38174502bdcb70a8ab378edf", "user_id": "78734fd37e3f4665b1cb2cbcba2e9f65", "metadata": {}, "hostId": "8e6c86cb332c6b27de4ca27e8f79e722ea3ba96a94cf58f7d01fe44e", "image": {"id": "55982930-937b-484e-96ee-69e406a48023", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/55982930-937b-484e-96ee-69e406a48023"}]}, "flavor": {"id": "a94cfbfb-a20a-4689-ac91-e7436db75880", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/a94cfbfb-a20a-4689-ac91-e7436db75880"}]}, "created": "2025-12-03T18:57:58Z", "updated": "2025-12-03T18:58:14Z", "addresses": {"tempest-AttachInterfacesUnderV243Test-282035089-network": [{"version": 4, "addr": "10.100.0.11", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:7c:33:2c"}, {"version": 4, "addr": "192.168.122.196", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:7c:33:2c"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/c9937213-8842-4393-90b0-edb363037633"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/c9937213-8842-4393-90b0-edb363037633"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-keypair-1738864924", "OS-SRV-USG:launched_at": "2025-12-03T18:58:14.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-securitygroup--1882739118"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000a", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec  3 18:59:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:14.645 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/c9937213-8842-4393-90b0-edb363037633 used request id req-01d15d5e-cb7e-43f7-8adc-655131f44beb request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec  3 18:59:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:14.647 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'c9937213-8842-4393-90b0-edb363037633', 'name': 'tempest-AttachInterfacesUnderV243Test-server-1449486284', 'flavor': {'id': 'a94cfbfb-a20a-4689-ac91-e7436db75880', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '55982930-937b-484e-96ee-69e406a48023'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000a', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '82b2746c38174502bdcb70a8ab378edf', 'user_id': '78734fd37e3f4665b1cb2cbcba2e9f65', 'hostId': '8e6c86cb332c6b27de4ca27e8f79e722ea3ba96a94cf58f7d01fe44e', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  3 18:59:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:14.649 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance eff2304f-0e67-4c93-ae65-20d4ddb87625 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec  3 18:59:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:14.650 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/eff2304f-0e67-4c93-ae65-20d4ddb87625 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}381125532ab0338283f553a8d9011c877e61445a70740cb69aa0e3ed00495f3c" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.168 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1979 Content-Type: application/json Date: Wed, 03 Dec 2025 18:59:14 GMT Keep-Alive: timeout=5, max=98 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-be3cc783-48b9-4640-8093-ff93469ce444 x-openstack-request-id: req-be3cc783-48b9-4640-8093-ff93469ce444 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.169 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "eff2304f-0e67-4c93-ae65-20d4ddb87625", "name": "tempest-ServerActionsTestJSON-server-348328150", "status": "ACTIVE", "tenant_id": "b1bc217751704d588f690e1b293cade8", "user_id": "a7a79cf3930c41baa4cb453d75b59c70", "metadata": {}, "hostId": "eff85ee4b2160d8d47c77051713ec45837df0c11964f1f840b18cdf4", "image": {"id": "55982930-937b-484e-96ee-69e406a48023", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/55982930-937b-484e-96ee-69e406a48023"}]}, "flavor": {"id": "a94cfbfb-a20a-4689-ac91-e7436db75880", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/a94cfbfb-a20a-4689-ac91-e7436db75880"}]}, "created": "2025-12-03T18:57:23Z", "updated": "2025-12-03T18:58:53Z", "addresses": {"tempest-ServerActionsTestJSON-203684476-network": [{"version": 4, "addr": "10.100.0.3", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:6e:88:19"}, {"version": 4, "addr": "192.168.122.232", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:6e:88:19"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/eff2304f-0e67-4c93-ae65-20d4ddb87625"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/eff2304f-0e67-4c93-ae65-20d4ddb87625"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-keypair-1397268451", "OS-SRV-USG:launched_at": "2025-12-03T18:57:36.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-securitygroup--1548711602"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000007", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.169 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/eff2304f-0e67-4c93-ae65-20d4ddb87625 used request id req-be3cc783-48b9-4640-8093-ff93469ce444 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.171 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'eff2304f-0e67-4c93-ae65-20d4ddb87625', 'name': 'tempest-ServerActionsTestJSON-server-348328150', 'flavor': {'id': 'a94cfbfb-a20a-4689-ac91-e7436db75880', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '55982930-937b-484e-96ee-69e406a48023'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000007', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'b1bc217751704d588f690e1b293cade8', 'user_id': 'a7a79cf3930c41baa4cb453d75b59c70', 'hostId': 'eff85ee4b2160d8d47c77051713ec45837df0c11964f1f840b18cdf4', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
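
The discovery pattern is identical for all three instances: GET /v2.1/servers/{id} against nova-internal with the cached token, then flatten the response into the compact instance-data dict the pollsters consume (flavor, image, vm_state, tenant and user ids, metadata). Stripped of the keystoneauth session plumbing, the call reduces to the following (requests usage is illustrative; the token value is a placeholder):

    import requests

    NOVA = "https://nova-internal.openstack.svc:8774/v2.1"

    def get_server(server_id: str, token: str) -> dict:
        # The same call the REQ/RESP lines above show.
        resp = requests.get(
            f"{NOVA}/servers/{server_id}",
            headers={
                "Accept": "application/json",
                "X-Auth-Token": token,  # raw token; only its SHA-256 is logged
                "X-OpenStack-Nova-API-Version": "2.1",
            },
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()["server"]
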
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.171 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.171 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.171 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.171 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.173 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-03T18:59:15.171865) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.179 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb / tap53ab68f2-68 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.179 14 DEBUG ceilometer.compute.pollsters [-] 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.186 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for c9937213-8842-4393-90b0-edb363037633 / tap2c007b4e-e6 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.187 14 DEBUG ceilometer.compute.pollsters [-] c9937213-8842-4393-90b0-edb363037633/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.193 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for eff2304f-0e67-4c93-ae65-20d4ddb87625 / tapb709b4ab-58 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.194 14 DEBUG ceilometer.compute.pollsters [-] eff2304f-0e67-4c93-ae65-20d4ddb87625/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.194 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
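
The "No delta meter predecessor" lines explain the zero volumes above: vNIC byte and packet counters are cumulative, ceilometer emits per-interval deltas, and the first observation of an instance/interface pair has nothing to subtract from, so the sample is reported as 0. The core of that bookkeeping (illustrative, not ceilometer's internals):

    _last = {}  # (instance_id, iface) -> last cumulative reading

    def delta_sample(instance_id, iface, cumulative):
        key = (instance_id, iface)
        prev = _last.get(key)
        _last[key] = cumulative
        if prev is None:
            # "No delta meter predecessor": first sight of this vNIC.
            return 0
        return max(cumulative - prev, 0)  # guard against counter resets
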
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.194 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7eff8d8a80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.195 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.195 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.195 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.195 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.195 14 DEBUG ceilometer.compute.pollsters [-] 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.195 14 DEBUG ceilometer.compute.pollsters [-] c9937213-8842-4393-90b0-edb363037633/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.195 14 DEBUG ceilometer.compute.pollsters [-] eff2304f-0e67-4c93-ae65-20d4ddb87625/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.196 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.196 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7eff8d8a8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.196 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.196 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff9026f920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.196 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff9026f920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.196 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.196 14 DEBUG ceilometer.compute.pollsters [-] 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.197 14 DEBUG ceilometer.compute.pollsters [-] c9937213-8842-4393-90b0-edb363037633/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.197 14 DEBUG ceilometer.compute.pollsters [-] eff2304f-0e67-4c93-ae65-20d4ddb87625/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.197 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.197 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7eff8d8a8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.198 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.198 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.198 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.198 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.198 14 DEBUG ceilometer.compute.pollsters [-] 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.198 14 DEBUG ceilometer.compute.pollsters [-] c9937213-8842-4393-90b0-edb363037633/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.198 14 DEBUG ceilometer.compute.pollsters [-] eff2304f-0e67-4c93-ae65-20d4ddb87625/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.199 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.199 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7eff8d8a81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.199 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.199 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.199 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.199 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.199 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.199 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: tempest-TestServerBasicOps-server-2083585917>, <NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1449486284>, <NovaLikeServer: tempest-ServerActionsTestJSON-server-348328150>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-TestServerBasicOps-server-2083585917>, <NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1449486284>, <NovaLikeServer: tempest-ServerActionsTestJSON-server-348328150>]
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.200 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7eff8d7ff9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.200 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.200 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ffa10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.200 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ffa10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.200 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.200 14 DEBUG ceilometer.compute.pollsters [-] 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.200 14 DEBUG ceilometer.compute.pollsters [-] c9937213-8842-4393-90b0-edb363037633/network.incoming.bytes volume: 1796 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.201 14 DEBUG ceilometer.compute.pollsters [-] eff2304f-0e67-4c93-ae65-20d4ddb87625/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.201 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.201 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7eff8d7fe840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.201 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.201 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8daba2d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.201 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8daba2d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.202 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.204 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-03T18:59:15.195191) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.204 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-03T18:59:15.196836) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.205 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-03T18:59:15.198183) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.205 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-03T18:59:15.199528) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.205 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-03T18:59:15.200644) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.205 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-03T18:59:15.202106) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.218 14 DEBUG ceilometer.compute.pollsters [-] 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.218 14 DEBUG ceilometer.compute.pollsters [-] 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.232 14 DEBUG ceilometer.compute.pollsters [-] c9937213-8842-4393-90b0-edb363037633/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.233 14 DEBUG ceilometer.compute.pollsters [-] c9937213-8842-4393-90b0-edb363037633/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.248 14 DEBUG ceilometer.compute.pollsters [-] eff2304f-0e67-4c93-ae65-20d4ddb87625/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.249 14 DEBUG ceilometer.compute.pollsters [-] eff2304f-0e67-4c93-ae65-20d4ddb87625/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.249 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.249 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7eff8d8a82c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.250 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.250 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a82f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.250 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a82f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.250 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.250 14 DEBUG ceilometer.compute.pollsters [-] 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.250 14 DEBUG ceilometer.compute.pollsters [-] c9937213-8842-4393-90b0-edb363037633/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.250 14 DEBUG ceilometer.compute.pollsters [-] eff2304f-0e67-4c93-ae65-20d4ddb87625/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.251 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.251 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7eff8d7ff9b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.251 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.251 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff90799b20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.251 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff90799b20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.251 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.253 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-03T18:59:15.250226) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.254 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-03T18:59:15.251593) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.278 14 DEBUG ceilometer.compute.pollsters [-] 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb/memory.usage volume: 40.4765625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.304 14 DEBUG ceilometer.compute.pollsters [-] c9937213-8842-4393-90b0-edb363037633/memory.usage volume: 42.71875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.328 14 DEBUG ceilometer.compute.pollsters [-] eff2304f-0e67-4c93-ae65-20d4ddb87625/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.328 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic in not available for instance eff2304f-0e67-4c93-ae65-20d4ddb87625: ceilometer.compute.pollsters.NoVolumeException
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.329 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.329 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7eff8d8a8350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.329 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.329 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.329 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.330 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.330 14 DEBUG ceilometer.compute.pollsters [-] 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.330 14 DEBUG ceilometer.compute.pollsters [-] c9937213-8842-4393-90b0-edb363037633/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.331 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-03T18:59:15.329950) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.331 14 DEBUG ceilometer.compute.pollsters [-] eff2304f-0e67-4c93-ae65-20d4ddb87625/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.331 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.332 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7eff8f682330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.332 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.332 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8f46ebd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.332 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8f46ebd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.332 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.333 14 DEBUG ceilometer.compute.pollsters [-] 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.333 14 DEBUG ceilometer.compute.pollsters [-] 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.333 14 DEBUG ceilometer.compute.pollsters [-] c9937213-8842-4393-90b0-edb363037633/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.334 14 DEBUG ceilometer.compute.pollsters [-] c9937213-8842-4393-90b0-edb363037633/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.334 14 DEBUG ceilometer.compute.pollsters [-] eff2304f-0e67-4c93-ae65-20d4ddb87625/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.335 14 DEBUG ceilometer.compute.pollsters [-] eff2304f-0e67-4c93-ae65-20d4ddb87625/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.336 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.336 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7eff8d7ff4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.336 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.337 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-03T18:59:15.332822) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.337 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.337 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.337 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.338 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-03T18:59:15.337649) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.378 14 DEBUG ceilometer.compute.pollsters [-] 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb/disk.device.read.bytes volume: 23775232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.378 14 DEBUG ceilometer.compute.pollsters [-] 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.424 14 DEBUG ceilometer.compute.pollsters [-] c9937213-8842-4393-90b0-edb363037633/disk.device.read.bytes volume: 31017472 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.425 14 DEBUG ceilometer.compute.pollsters [-] c9937213-8842-4393-90b0-edb363037633/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.479 14 DEBUG ceilometer.compute.pollsters [-] eff2304f-0e67-4c93-ae65-20d4ddb87625/disk.device.read.bytes volume: 23775232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.480 14 DEBUG ceilometer.compute.pollsters [-] eff2304f-0e67-4c93-ae65-20d4ddb87625/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.481 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.481 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7eff8d930c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.481 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.482 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ffce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.482 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ffce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.482 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.483 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.483 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: tempest-TestServerBasicOps-server-2083585917>, <NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1449486284>, <NovaLikeServer: tempest-ServerActionsTestJSON-server-348328150>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-TestServerBasicOps-server-2083585917>, <NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1449486284>, <NovaLikeServer: tempest-ServerActionsTestJSON-server-348328150>]
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.483 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-03T18:59:15.482415) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.484 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7eff8d7ff4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.484 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.484 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.484 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.484 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.485 14 DEBUG ceilometer.compute.pollsters [-] 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb/disk.device.read.latency volume: 1330646170 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.485 14 DEBUG ceilometer.compute.pollsters [-] 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb/disk.device.read.latency volume: 1982669 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.486 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-03T18:59:15.484789) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.486 14 DEBUG ceilometer.compute.pollsters [-] c9937213-8842-4393-90b0-edb363037633/disk.device.read.latency volume: 1968048524 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.486 14 DEBUG ceilometer.compute.pollsters [-] c9937213-8842-4393-90b0-edb363037633/disk.device.read.latency volume: 115551573 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.487 14 DEBUG ceilometer.compute.pollsters [-] eff2304f-0e67-4c93-ae65-20d4ddb87625/disk.device.read.latency volume: 1311198888 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.487 14 DEBUG ceilometer.compute.pollsters [-] eff2304f-0e67-4c93-ae65-20d4ddb87625/disk.device.read.latency volume: 2575523 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.488 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.488 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7eff8d7ff530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.489 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.489 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.489 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.489 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.490 14 DEBUG ceilometer.compute.pollsters [-] 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb/disk.device.read.requests volume: 760 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.490 14 DEBUG ceilometer.compute.pollsters [-] 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.491 14 DEBUG ceilometer.compute.pollsters [-] c9937213-8842-4393-90b0-edb363037633/disk.device.read.requests volume: 1135 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.491 14 DEBUG ceilometer.compute.pollsters [-] c9937213-8842-4393-90b0-edb363037633/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.491 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-03T18:59:15.489594) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.492 14 DEBUG ceilometer.compute.pollsters [-] eff2304f-0e67-4c93-ae65-20d4ddb87625/disk.device.read.requests volume: 760 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.492 14 DEBUG ceilometer.compute.pollsters [-] eff2304f-0e67-4c93-ae65-20d4ddb87625/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.493 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.493 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7eff8d7ff590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.493 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.493 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.494 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.494 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.494 14 DEBUG ceilometer.compute.pollsters [-] 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.495 14 DEBUG ceilometer.compute.pollsters [-] 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.495 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-03T18:59:15.494238) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.495 14 DEBUG ceilometer.compute.pollsters [-] c9937213-8842-4393-90b0-edb363037633/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.496 14 DEBUG ceilometer.compute.pollsters [-] c9937213-8842-4393-90b0-edb363037633/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.496 14 DEBUG ceilometer.compute.pollsters [-] eff2304f-0e67-4c93-ae65-20d4ddb87625/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.496 14 DEBUG ceilometer.compute.pollsters [-] eff2304f-0e67-4c93-ae65-20d4ddb87625/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.497 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.497 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7eff8d7ff5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.497 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.497 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.497 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.497 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.498 14 DEBUG ceilometer.compute.pollsters [-] 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.498 14 DEBUG ceilometer.compute.pollsters [-] 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.498 14 DEBUG ceilometer.compute.pollsters [-] c9937213-8842-4393-90b0-edb363037633/disk.device.write.bytes volume: 72929280 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.499 14 DEBUG ceilometer.compute.pollsters [-] c9937213-8842-4393-90b0-edb363037633/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.499 14 DEBUG ceilometer.compute.pollsters [-] eff2304f-0e67-4c93-ae65-20d4ddb87625/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.499 14 DEBUG ceilometer.compute.pollsters [-] eff2304f-0e67-4c93-ae65-20d4ddb87625/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.500 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.500 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7eff8d8a8620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.500 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-03T18:59:15.497783) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.500 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.500 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.500 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.501 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.501 14 DEBUG ceilometer.compute.pollsters [-] 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.501 14 DEBUG ceilometer.compute.pollsters [-] c9937213-8842-4393-90b0-edb363037633/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.502 14 DEBUG ceilometer.compute.pollsters [-] eff2304f-0e67-4c93-ae65-20d4ddb87625/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.502 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.502 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7eff8d7ff650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.502 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.502 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.503 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.503 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.503 14 DEBUG ceilometer.compute.pollsters [-] 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.503 14 DEBUG ceilometer.compute.pollsters [-] 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.504 14 DEBUG ceilometer.compute.pollsters [-] c9937213-8842-4393-90b0-edb363037633/disk.device.write.latency volume: 6790818053 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.504 14 DEBUG ceilometer.compute.pollsters [-] c9937213-8842-4393-90b0-edb363037633/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.504 14 DEBUG ceilometer.compute.pollsters [-] eff2304f-0e67-4c93-ae65-20d4ddb87625/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.504 14 DEBUG ceilometer.compute.pollsters [-] eff2304f-0e67-4c93-ae65-20d4ddb87625/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.505 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.505 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7eff8d7ff6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.505 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.506 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.506 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.506 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.506 14 DEBUG ceilometer.compute.pollsters [-] 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.506 14 DEBUG ceilometer.compute.pollsters [-] 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.507 14 DEBUG ceilometer.compute.pollsters [-] c9937213-8842-4393-90b0-edb363037633/disk.device.write.requests volume: 270 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.507 14 DEBUG ceilometer.compute.pollsters [-] c9937213-8842-4393-90b0-edb363037633/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.507 14 DEBUG ceilometer.compute.pollsters [-] eff2304f-0e67-4c93-ae65-20d4ddb87625/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.508 14 DEBUG ceilometer.compute.pollsters [-] eff2304f-0e67-4c93-ae65-20d4ddb87625/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.508 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.508 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7eff8d7ffa40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.509 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-03T18:59:15.501023) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.509 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-03T18:59:15.503117) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.509 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-03T18:59:15.506271) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.508 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.510 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ffef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.510 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ffef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.510 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.510 14 DEBUG ceilometer.compute.pollsters [-] 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.510 14 DEBUG ceilometer.compute.pollsters [-] c9937213-8842-4393-90b0-edb363037633/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.511 14 DEBUG ceilometer.compute.pollsters [-] eff2304f-0e67-4c93-ae65-20d4ddb87625/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.511 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.512 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7eff8d7ff710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.512 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.512 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.512 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.512 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.513 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.514 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7eff8d7fff20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.514 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.514 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7fff50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.514 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7fff50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.514 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.515 14 DEBUG ceilometer.compute.pollsters [-] 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.515 14 DEBUG ceilometer.compute.pollsters [-] c9937213-8842-4393-90b0-edb363037633/network.incoming.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.515 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-03T18:59:15.510292) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.515 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-03T18:59:15.512848) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.515 14 DEBUG ceilometer.compute.pollsters [-] eff2304f-0e67-4c93-ae65-20d4ddb87625/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.515 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-03T18:59:15.514911) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.516 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.516 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7eff8d7ff770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.516 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.517 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff7a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.517 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff7a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.517 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.518 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.518 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7eff8d7fff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.519 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.519 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7fffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.519 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7fffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.519 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.519 14 DEBUG ceilometer.compute.pollsters [-] 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.520 14 DEBUG ceilometer.compute.pollsters [-] c9937213-8842-4393-90b0-edb363037633/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.520 14 DEBUG ceilometer.compute.pollsters [-] eff2304f-0e67-4c93-ae65-20d4ddb87625/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.520 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.521 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7eff8d7fdac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.521 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.521 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8ef7c7d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.521 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8ef7c7d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.521 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.521 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-03T18:59:15.517399) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.522 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-03T18:59:15.519656) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.522 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-03T18:59:15.521850) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.522 14 DEBUG ceilometer.compute.pollsters [-] 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb/cpu volume: 32330000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.522 14 DEBUG ceilometer.compute.pollsters [-] c9937213-8842-4393-90b0-edb363037633/cpu volume: 35590000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.523 14 DEBUG ceilometer.compute.pollsters [-] eff2304f-0e67-4c93-ae65-20d4ddb87625/cpu volume: 21060000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.525 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.525 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.525 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.525 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.525 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.525 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.526 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.526 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.526 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.526 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.526 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.526 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.526 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.526 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.526 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.526 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.527 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.527 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.527 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.527 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.527 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.527 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.527 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.527 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.527 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.527 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:59:15 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 18:59:15.527 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 18:59:15 compute-0 eloquent_beaver[446420]: {
Dec  3 18:59:15 compute-0 eloquent_beaver[446420]:    "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
Dec  3 18:59:15 compute-0 eloquent_beaver[446420]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:59:15 compute-0 eloquent_beaver[446420]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 18:59:15 compute-0 eloquent_beaver[446420]:        "osd_id": 1,
Dec  3 18:59:15 compute-0 eloquent_beaver[446420]:        "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 18:59:15 compute-0 eloquent_beaver[446420]:        "type": "bluestore"
Dec  3 18:59:15 compute-0 eloquent_beaver[446420]:    },
Dec  3 18:59:15 compute-0 eloquent_beaver[446420]:    "2abec9de-afba-437e-9a17-384a1dd8cd50": {
Dec  3 18:59:15 compute-0 eloquent_beaver[446420]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:59:15 compute-0 eloquent_beaver[446420]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 18:59:15 compute-0 eloquent_beaver[446420]:        "osd_id": 2,
Dec  3 18:59:15 compute-0 eloquent_beaver[446420]:        "osd_uuid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 18:59:15 compute-0 eloquent_beaver[446420]:        "type": "bluestore"
Dec  3 18:59:15 compute-0 eloquent_beaver[446420]:    },
Dec  3 18:59:15 compute-0 eloquent_beaver[446420]:    "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": {
Dec  3 18:59:15 compute-0 eloquent_beaver[446420]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 18:59:15 compute-0 eloquent_beaver[446420]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 18:59:15 compute-0 eloquent_beaver[446420]:        "osd_id": 0,
Dec  3 18:59:15 compute-0 eloquent_beaver[446420]:        "osd_uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 18:59:15 compute-0 eloquent_beaver[446420]:        "type": "bluestore"
Dec  3 18:59:15 compute-0 eloquent_beaver[446420]:    }
Dec  3 18:59:15 compute-0 eloquent_beaver[446420]: }
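
The JSON block from the short-lived eloquent_beaver container is an OSD inventory for this host: three LVM-backed bluestore OSDs (ids 0-2), all in cluster fsid c1caf3ba-b2a5-5005-a11e-e955c344dccc, keyed by osd_uuid. Assuming the payload were captured into a string, a small hypothetical helper to index it:

    import json

    def osds_by_id(payload: str) -> dict:
        """Index the OSD inventory logged above as osd_id -> device path."""
        data = json.loads(payload)
        return {entry["osd_id"]: entry["device"] for entry in data.values()}

    # With the logged payload this yields:
    # {1: '/dev/mapper/ceph_vg1-ceph_lv1',
    #  2: '/dev/mapper/ceph_vg2-ceph_lv2',
    #  0: '/dev/mapper/ceph_vg0-ceph_lv0'}
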
Dec  3 18:59:15 compute-0 systemd[1]: libpod-ce0524f7719bca32df381a4213d606de06c6ddbf6b7b525750027fe9d717a1b5.scope: Deactivated successfully.
Dec  3 18:59:15 compute-0 systemd[1]: libpod-ce0524f7719bca32df381a4213d606de06c6ddbf6b7b525750027fe9d717a1b5.scope: Consumed 1.072s CPU time.
Dec  3 18:59:15 compute-0 podman[446453]: 2025-12-03 18:59:15.728036628 +0000 UTC m=+0.044077228 container died ce0524f7719bca32df381a4213d606de06c6ddbf6b7b525750027fe9d717a1b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Dec  3 18:59:15 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1810: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Dec  3 18:59:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-f559a47691031d2fdabf185a42d474b202065fdc7a1ca29d55956dada4023088-merged.mount: Deactivated successfully.
Dec  3 18:59:15 compute-0 podman[446453]: 2025-12-03 18:59:15.974958636 +0000 UTC m=+0.290999226 container remove ce0524f7719bca32df381a4213d606de06c6ddbf6b7b525750027fe9d717a1b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_beaver, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec  3 18:59:15 compute-0 systemd[1]: libpod-conmon-ce0524f7719bca32df381a4213d606de06c6ddbf6b7b525750027fe9d717a1b5.scope: Deactivated successfully.
Dec  3 18:59:16 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 18:59:16 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:59:16 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 18:59:16 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:59:16 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 803cf524-ab94-45e1-a989-37bce6513dd4 does not exist
Dec  3 18:59:16 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev e2888bae-66b4-4166-99ac-4f00bca7a23a does not exist
Dec  3 18:59:16 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:59:16 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 18:59:17 compute-0 nova_compute[348325]: 2025-12-03 18:59:17.532 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:59:17 compute-0 nova_compute[348325]: 2025-12-03 18:59:17.669 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:59:17 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1811: 321 pgs: 321 active+clean; 273 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 816 KiB/s wr, 8 op/s
Dec  3 18:59:17 compute-0 podman[446518]: 2025-12-03 18:59:17.920129678 +0000 UTC m=+0.081583783 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, release=1755695350, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, config_id=edpm, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible)
Dec  3 18:59:17 compute-0 podman[446516]: 2025-12-03 18:59:17.943819336 +0000 UTC m=+0.107125686 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 18:59:17 compute-0 podman[446517]: 2025-12-03 18:59:17.947617739 +0000 UTC m=+0.107694020 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 18:59:18 compute-0 nova_compute[348325]: 2025-12-03 18:59:18.415 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:59:18 compute-0 ovn_controller[89305]: 2025-12-03T18:59:18Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a6:0c:ea 10.100.0.3
Dec  3 18:59:18 compute-0 ovn_controller[89305]: 2025-12-03T18:59:18Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a6:0c:ea 10.100.0.3
Dec  3 18:59:18 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Dec  3 18:59:18 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Dec  3 18:59:18 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Dec  3 18:59:19 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 18:59:19 compute-0 nova_compute[348325]: 2025-12-03 18:59:19.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:59:19 compute-0 nova_compute[348325]: 2025-12-03 18:59:19.558 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:59:19 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1813: 321 pgs: 321 active+clean; 281 MiB data, 390 MiB used, 60 GiB / 60 GiB avail; 120 KiB/s rd, 1.5 MiB/s wr, 33 op/s
Dec  3 18:59:20 compute-0 nova_compute[348325]: 2025-12-03 18:59:20.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:59:20 compute-0 nova_compute[348325]: 2025-12-03 18:59:20.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:59:20 compute-0 nova_compute[348325]: 2025-12-03 18:59:20.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:59:20 compute-0 nova_compute[348325]: 2025-12-03 18:59:20.487 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec  3 18:59:20 compute-0 nova_compute[348325]: 2025-12-03 18:59:20.500 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
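
The "Running periodic task ComputeManager._*" lines come from oslo.service: methods on a PeriodicTasks subclass are registered with the periodic_task decorator and dispatched by run_periodic_tasks. A minimal sketch of the pattern, assuming an illustrative 60 s spacing and task body (nova's real definitions differ):

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        # spacing is an illustrative assumption; nova configures its own intervals
        @periodic_task.periodic_task(spacing=60)
        def _run_pending_deletes(self, context):
            print("Cleaning up deleted instances")

    mgr = Manager(cfg.CONF)
    mgr.run_periodic_tasks(context=None)
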
Dec  3 18:59:21 compute-0 nova_compute[348325]: 2025-12-03 18:59:21.500 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:59:21 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1814: 321 pgs: 321 active+clean; 305 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 448 KiB/s rd, 3.7 MiB/s wr, 84 op/s
Dec  3 18:59:21 compute-0 podman[446578]: 2025-12-03 18:59:21.932930421 +0000 UTC m=+0.097085151 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, tcib_managed=true, config_id=edpm)
Dec  3 18:59:21 compute-0 podman[446577]: 2025-12-03 18:59:21.947803454 +0000 UTC m=+0.117129491 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, container_name=kepler, managed_by=edpm_ansible, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, name=ubi9, architecture=x86_64, release=1214.1726694543, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container)
Dec  3 18:59:21 compute-0 podman[446579]: 2025-12-03 18:59:21.952108759 +0000 UTC m=+0.110280074 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  3 18:59:22 compute-0 nova_compute[348325]: 2025-12-03 18:59:22.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:59:22 compute-0 nova_compute[348325]: 2025-12-03 18:59:22.672 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:59:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:59:23.355 286999 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:59:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:59:23.356 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:59:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:59:23.357 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:59:23 compute-0 nova_compute[348325]: 2025-12-03 18:59:23.418 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:59:23 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1815: 321 pgs: 321 active+clean; 315 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 483 KiB/s rd, 4.6 MiB/s wr, 98 op/s
Dec  3 18:59:24 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 18:59:24 compute-0 nova_compute[348325]: 2025-12-03 18:59:24.498 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:59:24 compute-0 nova_compute[348325]: 2025-12-03 18:59:24.499 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 18:59:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 18:59:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:59:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 18:59:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:59:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0022773740191769226 of space, bias 1.0, pg target 0.6832122057530767 quantized to 32 (current 32)
Dec  3 18:59:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:59:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:59:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:59:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:59:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:59:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Dec  3 18:59:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:59:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 18:59:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:59:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:59:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:59:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 18:59:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:59:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 18:59:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:59:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 18:59:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 18:59:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 18:59:24 compute-0 nova_compute[348325]: 2025-12-03 18:59:24.963 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "refresh_cache-c9937213-8842-4393-90b0-edb363037633" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 18:59:24 compute-0 nova_compute[348325]: 2025-12-03 18:59:24.964 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquired lock "refresh_cache-c9937213-8842-4393-90b0-edb363037633" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 18:59:24 compute-0 nova_compute[348325]: 2025-12-03 18:59:24.964 348329 DEBUG nova.network.neutron [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: c9937213-8842-4393-90b0-edb363037633] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  3 18:59:25 compute-0 nova_compute[348325]: 2025-12-03 18:59:25.043 348329 DEBUG nova.objects.instance [None req-d9e1bcea-2218-4d85-8681-16e04410c04c 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Lazy-loading 'flavor' on Instance uuid c9937213-8842-4393-90b0-edb363037633 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 18:59:25 compute-0 nova_compute[348325]: 2025-12-03 18:59:25.079 348329 DEBUG oslo_concurrency.lockutils [None req-d9e1bcea-2218-4d85-8681-16e04410c04c 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Acquiring lock "refresh_cache-c9937213-8842-4393-90b0-edb363037633" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 18:59:25 compute-0 ovn_controller[89305]: 2025-12-03T18:59:25Z|00127|binding|INFO|Releasing lport b52268a2-5f2a-45ba-8c23-e32c70c8253f from this chassis (sb_readonly=0)
Dec  3 18:59:25 compute-0 ovn_controller[89305]: 2025-12-03T18:59:25Z|00128|binding|INFO|Releasing lport 230273f0-8290-4d7b-8f3b-1217ad9086fb from this chassis (sb_readonly=0)
Dec  3 18:59:25 compute-0 ovn_controller[89305]: 2025-12-03T18:59:25Z|00129|binding|INFO|Releasing lport a490a544-649c-430c-bdd4-7e78ebd7f7b9 from this chassis (sb_readonly=0)
Dec  3 18:59:25 compute-0 nova_compute[348325]: 2025-12-03 18:59:25.601 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:59:25 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1816: 321 pgs: 321 active+clean; 315 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 484 KiB/s rd, 4.6 MiB/s wr, 99 op/s
Dec  3 18:59:27 compute-0 nova_compute[348325]: 2025-12-03 18:59:27.678 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:59:27 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1817: 321 pgs: 321 active+clean; 315 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 465 KiB/s rd, 3.6 MiB/s wr, 89 op/s
Dec  3 18:59:27 compute-0 ovn_controller[89305]: 2025-12-03T18:59:27Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6e:88:19 10.100.0.3
Dec  3 18:59:28 compute-0 nova_compute[348325]: 2025-12-03 18:59:28.119 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:59:28 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:59:28.120 286999 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5a:63:53', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '8e:79:bd:f4:48:1d'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 18:59:28 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:59:28.121 286999 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  3 18:59:28 compute-0 nova_compute[348325]: 2025-12-03 18:59:28.192 348329 DEBUG nova.network.neutron [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: c9937213-8842-4393-90b0-edb363037633] Updating instance_info_cache with network_info: [{"id": "2c007b4e-e674-4c1f-becb-67fc1b96681b", "address": "fa:16:3e:7c:33:2c", "network": {"id": "d518f3f9-88f0-4dc2-8769-17ebdac41174", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-282035089-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "82b2746c38174502bdcb70a8ab378edf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c007b4e-e6", "ovs_interfaceid": "2c007b4e-e674-4c1f-becb-67fc1b96681b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 18:59:28 compute-0 nova_compute[348325]: 2025-12-03 18:59:28.210 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Releasing lock "refresh_cache-c9937213-8842-4393-90b0-edb363037633" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 18:59:28 compute-0 nova_compute[348325]: 2025-12-03 18:59:28.211 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: c9937213-8842-4393-90b0-edb363037633] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  3 18:59:28 compute-0 nova_compute[348325]: 2025-12-03 18:59:28.213 348329 DEBUG oslo_concurrency.lockutils [None req-d9e1bcea-2218-4d85-8681-16e04410c04c 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Acquired lock "refresh_cache-c9937213-8842-4393-90b0-edb363037633" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 18:59:28 compute-0 nova_compute[348325]: 2025-12-03 18:59:28.221 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:59:28 compute-0 nova_compute[348325]: 2025-12-03 18:59:28.419 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:59:28 compute-0 nova_compute[348325]: 2025-12-03 18:59:28.812 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:59:29 compute-0 nova_compute[348325]: 2025-12-03 18:59:29.210 348329 DEBUG oslo_concurrency.lockutils [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Acquiring lock "a4fc45c7-44e4-4b50-a3e0-98de13268f88" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:59:29 compute-0 nova_compute[348325]: 2025-12-03 18:59:29.211 348329 DEBUG oslo_concurrency.lockutils [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Lock "a4fc45c7-44e4-4b50-a3e0-98de13268f88" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:59:29 compute-0 nova_compute[348325]: 2025-12-03 18:59:29.228 348329 DEBUG nova.compute.manager [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  3 18:59:29 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 18:59:29 compute-0 nova_compute[348325]: 2025-12-03 18:59:29.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:59:29 compute-0 nova_compute[348325]: 2025-12-03 18:59:29.487 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 18:59:29 compute-0 nova_compute[348325]: 2025-12-03 18:59:29.579 348329 DEBUG oslo_concurrency.lockutils [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:59:29 compute-0 nova_compute[348325]: 2025-12-03 18:59:29.581 348329 DEBUG oslo_concurrency.lockutils [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:59:29 compute-0 nova_compute[348325]: 2025-12-03 18:59:29.595 348329 DEBUG nova.virt.hardware [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  3 18:59:29 compute-0 nova_compute[348325]: 2025-12-03 18:59:29.596 348329 INFO nova.compute.claims [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  3 18:59:29 compute-0 podman[158200]: time="2025-12-03T18:59:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:59:29 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1818: 321 pgs: 321 active+clean; 315 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 531 KiB/s rd, 2.9 MiB/s wr, 73 op/s
Dec  3 18:59:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:59:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46276 "" "Go-http-client/1.1"
Dec  3 18:59:29 compute-0 podman[158200]: @ - - [03/Dec/2025:18:59:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9583 "" "Go-http-client/1.1"
Dec  3 18:59:29 compute-0 nova_compute[348325]: 2025-12-03 18:59:29.861 348329 DEBUG nova.network.neutron [None req-d9e1bcea-2218-4d85-8681-16e04410c04c 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  3 18:59:30 compute-0 nova_compute[348325]: 2025-12-03 18:59:30.116 348329 DEBUG oslo_concurrency.processutils [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:59:30 compute-0 nova_compute[348325]: 2025-12-03 18:59:30.192 348329 DEBUG nova.compute.manager [req-3814c102-3ca5-4789-84f7-65ed9f812486 req-ad88e5e3-0966-421f-b95e-3782cc1742f2 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] Received event network-changed-2c007b4e-e674-4c1f-becb-67fc1b96681b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 18:59:30 compute-0 nova_compute[348325]: 2025-12-03 18:59:30.192 348329 DEBUG nova.compute.manager [req-3814c102-3ca5-4789-84f7-65ed9f812486 req-ad88e5e3-0966-421f-b95e-3782cc1742f2 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] Refreshing instance network info cache due to event network-changed-2c007b4e-e674-4c1f-becb-67fc1b96681b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  3 18:59:30 compute-0 nova_compute[348325]: 2025-12-03 18:59:30.193 348329 DEBUG oslo_concurrency.lockutils [req-3814c102-3ca5-4789-84f7-65ed9f812486 req-ad88e5e3-0966-421f-b95e-3782cc1742f2 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "refresh_cache-c9937213-8842-4393-90b0-edb363037633" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 18:59:30 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:59:30 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1272752050' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:59:30 compute-0 nova_compute[348325]: 2025-12-03 18:59:30.554 348329 DEBUG oslo_concurrency.processutils [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:59:30 compute-0 nova_compute[348325]: 2025-12-03 18:59:30.563 348329 DEBUG nova.compute.provider_tree [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 18:59:30 compute-0 nova_compute[348325]: 2025-12-03 18:59:30.585 348329 DEBUG nova.scheduler.client.report [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 18:59:30 compute-0 nova_compute[348325]: 2025-12-03 18:59:30.617 348329 DEBUG oslo_concurrency.lockutils [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.037s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:59:30 compute-0 nova_compute[348325]: 2025-12-03 18:59:30.618 348329 DEBUG nova.compute.manager [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  3 18:59:30 compute-0 nova_compute[348325]: 2025-12-03 18:59:30.906 348329 DEBUG nova.compute.manager [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  3 18:59:30 compute-0 nova_compute[348325]: 2025-12-03 18:59:30.908 348329 DEBUG nova.network.neutron [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  3 18:59:30 compute-0 nova_compute[348325]: 2025-12-03 18:59:30.927 348329 INFO nova.virt.libvirt.driver [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  3 18:59:30 compute-0 nova_compute[348325]: 2025-12-03 18:59:30.952 348329 DEBUG nova.compute.manager [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  3 18:59:31 compute-0 nova_compute[348325]: 2025-12-03 18:59:31.052 348329 DEBUG nova.compute.manager [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  3 18:59:31 compute-0 nova_compute[348325]: 2025-12-03 18:59:31.054 348329 DEBUG nova.virt.libvirt.driver [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  3 18:59:31 compute-0 nova_compute[348325]: 2025-12-03 18:59:31.055 348329 INFO nova.virt.libvirt.driver [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Creating image(s)#033[00m
Dec  3 18:59:31 compute-0 nova_compute[348325]: 2025-12-03 18:59:31.094 348329 DEBUG nova.storage.rbd_utils [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] rbd image a4fc45c7-44e4-4b50-a3e0-98de13268f88_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 18:59:31 compute-0 nova_compute[348325]: 2025-12-03 18:59:31.137 348329 DEBUG nova.storage.rbd_utils [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] rbd image a4fc45c7-44e4-4b50-a3e0-98de13268f88_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 18:59:31 compute-0 nova_compute[348325]: 2025-12-03 18:59:31.182 348329 DEBUG nova.storage.rbd_utils [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] rbd image a4fc45c7-44e4-4b50-a3e0-98de13268f88_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 18:59:31 compute-0 nova_compute[348325]: 2025-12-03 18:59:31.192 348329 DEBUG oslo_concurrency.lockutils [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Acquiring lock "fef3ab1a1bec0408321642c1c8701f42866ad11c" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:59:31 compute-0 nova_compute[348325]: 2025-12-03 18:59:31.193 348329 DEBUG oslo_concurrency.lockutils [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Lock "fef3ab1a1bec0408321642c1c8701f42866ad11c" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:59:31 compute-0 nova_compute[348325]: 2025-12-03 18:59:31.204 348329 DEBUG nova.policy [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5b5e6c2a7cce4e3b96611203def80123', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd29cef7b24ee4d30b2b3f5027ec6aafb', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  3 18:59:31 compute-0 openstack_network_exporter[365222]: ERROR   18:59:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 18:59:31 compute-0 openstack_network_exporter[365222]: ERROR   18:59:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:59:31 compute-0 openstack_network_exporter[365222]: ERROR   18:59:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 18:59:31 compute-0 openstack_network_exporter[365222]: ERROR   18:59:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 18:59:31 compute-0 openstack_network_exporter[365222]: ERROR   18:59:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 18:59:31 compute-0 nova_compute[348325]: 2025-12-03 18:59:31.599 348329 DEBUG nova.virt.libvirt.imagebackend [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Image locations are: [{'url': 'rbd://c1caf3ba-b2a5-5005-a11e-e955c344dccc/images/29e9e995-880d-46f8-bdd0-149d4e107ea9/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://c1caf3ba-b2a5-5005-a11e-e955c344dccc/images/29e9e995-880d-46f8-bdd0-149d4e107ea9/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Dec  3 18:59:31 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1819: 321 pgs: 321 active+clean; 315 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 746 KiB/s rd, 2.6 MiB/s wr, 94 op/s
Dec  3 18:59:31 compute-0 nova_compute[348325]: 2025-12-03 18:59:31.887 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:59:32 compute-0 nova_compute[348325]: 2025-12-03 18:59:32.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 18:59:32 compute-0 nova_compute[348325]: 2025-12-03 18:59:32.655 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:59:32 compute-0 nova_compute[348325]: 2025-12-03 18:59:32.656 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:59:32 compute-0 nova_compute[348325]: 2025-12-03 18:59:32.656 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:59:32 compute-0 nova_compute[348325]: 2025-12-03 18:59:32.656 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 18:59:32 compute-0 nova_compute[348325]: 2025-12-03 18:59:32.656 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:59:32 compute-0 nova_compute[348325]: 2025-12-03 18:59:32.684 348329 DEBUG nova.network.neutron [None req-d9e1bcea-2218-4d85-8681-16e04410c04c 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] Updating instance_info_cache with network_info: [{"id": "2c007b4e-e674-4c1f-becb-67fc1b96681b", "address": "fa:16:3e:7c:33:2c", "network": {"id": "d518f3f9-88f0-4dc2-8769-17ebdac41174", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-282035089-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "82b2746c38174502bdcb70a8ab378edf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c007b4e-e6", "ovs_interfaceid": "2c007b4e-e674-4c1f-becb-67fc1b96681b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 18:59:32 compute-0 nova_compute[348325]: 2025-12-03 18:59:32.687 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:59:32 compute-0 nova_compute[348325]: 2025-12-03 18:59:32.701 348329 DEBUG oslo_concurrency.lockutils [None req-d9e1bcea-2218-4d85-8681-16e04410c04c 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Releasing lock "refresh_cache-c9937213-8842-4393-90b0-edb363037633" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 18:59:32 compute-0 nova_compute[348325]: 2025-12-03 18:59:32.701 348329 DEBUG nova.compute.manager [None req-d9e1bcea-2218-4d85-8681-16e04410c04c 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144#033[00m
Dec  3 18:59:32 compute-0 nova_compute[348325]: 2025-12-03 18:59:32.701 348329 DEBUG nova.compute.manager [None req-d9e1bcea-2218-4d85-8681-16e04410c04c 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] network_info to inject: |[{"id": "2c007b4e-e674-4c1f-becb-67fc1b96681b", "address": "fa:16:3e:7c:33:2c", "network": {"id": "d518f3f9-88f0-4dc2-8769-17ebdac41174", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-282035089-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "82b2746c38174502bdcb70a8ab378edf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c007b4e-e6", "ovs_interfaceid": "2c007b4e-e674-4c1f-becb-67fc1b96681b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145#033[00m
Dec  3 18:59:32 compute-0 nova_compute[348325]: 2025-12-03 18:59:32.704 348329 DEBUG oslo_concurrency.lockutils [req-3814c102-3ca5-4789-84f7-65ed9f812486 req-ad88e5e3-0966-421f-b95e-3782cc1742f2 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquired lock "refresh_cache-c9937213-8842-4393-90b0-edb363037633" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 18:59:32 compute-0 nova_compute[348325]: 2025-12-03 18:59:32.704 348329 DEBUG nova.network.neutron [req-3814c102-3ca5-4789-84f7-65ed9f812486 req-ad88e5e3-0966-421f-b95e-3782cc1742f2 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] Refreshing network info cache for port 2c007b4e-e674-4c1f-becb-67fc1b96681b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  3 18:59:32 compute-0 nova_compute[348325]: 2025-12-03 18:59:32.810 348329 DEBUG nova.network.neutron [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Successfully created port: cf729fa8-9549-4bf2-9858-7e8de773e1bc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  3 18:59:32 compute-0 nova_compute[348325]: 2025-12-03 18:59:32.935 348329 DEBUG oslo_concurrency.processutils [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/fef3ab1a1bec0408321642c1c8701f42866ad11c.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:59:32 compute-0 nova_compute[348325]: 2025-12-03 18:59:32.996 348329 DEBUG oslo_concurrency.processutils [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/fef3ab1a1bec0408321642c1c8701f42866ad11c.part --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:59:32 compute-0 nova_compute[348325]: 2025-12-03 18:59:32.997 348329 DEBUG nova.virt.images [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] 29e9e995-880d-46f8-bdd0-149d4e107ea9 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Dec  3 18:59:32 compute-0 nova_compute[348325]: 2025-12-03 18:59:32.997 348329 DEBUG nova.privsep.utils [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Dec  3 18:59:32 compute-0 nova_compute[348325]: 2025-12-03 18:59:32.998 348329 DEBUG oslo_concurrency.processutils [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/fef3ab1a1bec0408321642c1c8701f42866ad11c.part /var/lib/nova/instances/_base/fef3ab1a1bec0408321642c1c8701f42866ad11c.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:59:33 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:59:33 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1423109934' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:59:33 compute-0 nova_compute[348325]: 2025-12-03 18:59:33.172 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:59:33 compute-0 nova_compute[348325]: 2025-12-03 18:59:33.242 348329 DEBUG oslo_concurrency.processutils [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/fef3ab1a1bec0408321642c1c8701f42866ad11c.part /var/lib/nova/instances/_base/fef3ab1a1bec0408321642c1c8701f42866ad11c.converted" returned: 0 in 0.244s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:59:33 compute-0 nova_compute[348325]: 2025-12-03 18:59:33.246 348329 DEBUG oslo_concurrency.processutils [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/fef3ab1a1bec0408321642c1c8701f42866ad11c.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:59:33 compute-0 nova_compute[348325]: 2025-12-03 18:59:33.308 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 18:59:33 compute-0 nova_compute[348325]: 2025-12-03 18:59:33.308 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 18:59:33 compute-0 nova_compute[348325]: 2025-12-03 18:59:33.313 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 18:59:33 compute-0 nova_compute[348325]: 2025-12-03 18:59:33.313 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 18:59:33 compute-0 nova_compute[348325]: 2025-12-03 18:59:33.319 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 18:59:33 compute-0 nova_compute[348325]: 2025-12-03 18:59:33.319 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 18:59:33 compute-0 nova_compute[348325]: 2025-12-03 18:59:33.323 348329 DEBUG oslo_concurrency.processutils [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/fef3ab1a1bec0408321642c1c8701f42866ad11c.converted --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:59:33 compute-0 nova_compute[348325]: 2025-12-03 18:59:33.324 348329 DEBUG oslo_concurrency.lockutils [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Lock "fef3ab1a1bec0408321642c1c8701f42866ad11c" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.130s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:59:33 compute-0 podman[446742]: 2025-12-03 18:59:33.346858994 +0000 UTC m=+0.108881919 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 18:59:33 compute-0 nova_compute[348325]: 2025-12-03 18:59:33.354 348329 DEBUG nova.storage.rbd_utils [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] rbd image a4fc45c7-44e4-4b50-a3e0-98de13268f88_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 18:59:33 compute-0 nova_compute[348325]: 2025-12-03 18:59:33.359 348329 DEBUG oslo_concurrency.processutils [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/fef3ab1a1bec0408321642c1c8701f42866ad11c a4fc45c7-44e4-4b50-a3e0-98de13268f88_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:59:33 compute-0 nova_compute[348325]: 2025-12-03 18:59:33.421 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:59:33 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1820: 321 pgs: 321 active+clean; 315 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 805 KiB/s wr, 58 op/s
Dec  3 18:59:33 compute-0 nova_compute[348325]: 2025-12-03 18:59:33.790 348329 DEBUG oslo_concurrency.processutils [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/fef3ab1a1bec0408321642c1c8701f42866ad11c a4fc45c7-44e4-4b50-a3e0-98de13268f88_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:59:33 compute-0 nova_compute[348325]: 2025-12-03 18:59:33.889 348329 DEBUG nova.storage.rbd_utils [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] resizing rbd image a4fc45c7-44e4-4b50-a3e0-98de13268f88_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  3 18:59:34 compute-0 nova_compute[348325]: 2025-12-03 18:59:34.042 348329 DEBUG nova.objects.instance [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Lazy-loading 'migration_context' on Instance uuid a4fc45c7-44e4-4b50-a3e0-98de13268f88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 18:59:34 compute-0 nova_compute[348325]: 2025-12-03 18:59:34.045 348329 WARNING nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 18:59:34 compute-0 nova_compute[348325]: 2025-12-03 18:59:34.046 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3431MB free_disk=59.851531982421875GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 18:59:34 compute-0 nova_compute[348325]: 2025-12-03 18:59:34.046 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:59:34 compute-0 nova_compute[348325]: 2025-12-03 18:59:34.046 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:59:34 compute-0 nova_compute[348325]: 2025-12-03 18:59:34.058 348329 DEBUG nova.virt.libvirt.driver [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  3 18:59:34 compute-0 nova_compute[348325]: 2025-12-03 18:59:34.058 348329 DEBUG nova.virt.libvirt.driver [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Ensure instance console log exists: /var/lib/nova/instances/a4fc45c7-44e4-4b50-a3e0-98de13268f88/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec  3 18:59:34 compute-0 nova_compute[348325]: 2025-12-03 18:59:34.059 348329 DEBUG oslo_concurrency.lockutils [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:59:34 compute-0 nova_compute[348325]: 2025-12-03 18:59:34.059 348329 DEBUG oslo_concurrency.lockutils [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:59:34 compute-0 nova_compute[348325]: 2025-12-03 18:59:34.059 348329 DEBUG oslo_concurrency.lockutils [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:59:34 compute-0 nova_compute[348325]: 2025-12-03 18:59:34.151 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance eff2304f-0e67-4c93-ae65-20d4ddb87625 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  3 18:59:34 compute-0 nova_compute[348325]: 2025-12-03 18:59:34.151 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance c9937213-8842-4393-90b0-edb363037633 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  3 18:59:34 compute-0 nova_compute[348325]: 2025-12-03 18:59:34.152 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  3 18:59:34 compute-0 nova_compute[348325]: 2025-12-03 18:59:34.152 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance a4fc45c7-44e4-4b50-a3e0-98de13268f88 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  3 18:59:34 compute-0 nova_compute[348325]: 2025-12-03 18:59:34.152 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  3 18:59:34 compute-0 nova_compute[348325]: 2025-12-03 18:59:34.153 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  3 18:59:34 compute-0 nova_compute[348325]: 2025-12-03 18:59:34.284 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 18:59:34 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 18:59:34 compute-0 nova_compute[348325]: 2025-12-03 18:59:34.522 348329 DEBUG nova.objects.instance [None req-806a0b8e-4e78-4688-aefb-22d85dfbc992 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Lazy-loading 'flavor' on Instance uuid c9937213-8842-4393-90b0-edb363037633 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  3 18:59:34 compute-0 nova_compute[348325]: 2025-12-03 18:59:34.563 348329 DEBUG oslo_concurrency.lockutils [None req-806a0b8e-4e78-4688-aefb-22d85dfbc992 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Acquiring lock "refresh_cache-c9937213-8842-4393-90b0-edb363037633" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  3 18:59:34 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:59:34 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1266491663' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:59:34 compute-0 nova_compute[348325]: 2025-12-03 18:59:34.764 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 18:59:34 compute-0 nova_compute[348325]: 2025-12-03 18:59:34.777 348329 DEBUG nova.compute.provider_tree [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  3 18:59:34 compute-0 nova_compute[348325]: 2025-12-03 18:59:34.793 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  3 18:59:34 compute-0 nova_compute[348325]: 2025-12-03 18:59:34.815 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  3 18:59:34 compute-0 nova_compute[348325]: 2025-12-03 18:59:34.816 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.769s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:59:35 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:59:35.124 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1ac9fd0d-196b-4ea8-9a9a-8aa831092805, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  3 18:59:35 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1821: 321 pgs: 321 active+clean; 333 MiB data, 439 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 688 KiB/s wr, 58 op/s
Dec  3 18:59:35 compute-0 podman[446901]: 2025-12-03 18:59:35.947248643 +0000 UTC m=+0.093231727 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  3 18:59:36 compute-0 podman[446900]: 2025-12-03 18:59:36.013727456 +0000 UTC m=+0.163826161 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller)
Dec  3 18:59:36 compute-0 nova_compute[348325]: 2025-12-03 18:59:36.042 348329 DEBUG nova.network.neutron [req-3814c102-3ca5-4789-84f7-65ed9f812486 req-ad88e5e3-0966-421f-b95e-3782cc1742f2 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] Updated VIF entry in instance network info cache for port 2c007b4e-e674-4c1f-becb-67fc1b96681b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec  3 18:59:36 compute-0 nova_compute[348325]: 2025-12-03 18:59:36.042 348329 DEBUG nova.network.neutron [req-3814c102-3ca5-4789-84f7-65ed9f812486 req-ad88e5e3-0966-421f-b95e-3782cc1742f2 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] Updating instance_info_cache with network_info: [{"id": "2c007b4e-e674-4c1f-becb-67fc1b96681b", "address": "fa:16:3e:7c:33:2c", "network": {"id": "d518f3f9-88f0-4dc2-8769-17ebdac41174", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-282035089-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "82b2746c38174502bdcb70a8ab378edf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c007b4e-e6", "ovs_interfaceid": "2c007b4e-e674-4c1f-becb-67fc1b96681b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  3 18:59:36 compute-0 nova_compute[348325]: 2025-12-03 18:59:36.087 348329 DEBUG oslo_concurrency.lockutils [req-3814c102-3ca5-4789-84f7-65ed9f812486 req-ad88e5e3-0966-421f-b95e-3782cc1742f2 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Releasing lock "refresh_cache-c9937213-8842-4393-90b0-edb363037633" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  3 18:59:36 compute-0 nova_compute[348325]: 2025-12-03 18:59:36.088 348329 DEBUG oslo_concurrency.lockutils [None req-806a0b8e-4e78-4688-aefb-22d85dfbc992 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Acquired lock "refresh_cache-c9937213-8842-4393-90b0-edb363037633" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  3 18:59:36 compute-0 nova_compute[348325]: 2025-12-03 18:59:36.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 18:59:36 compute-0 nova_compute[348325]: 2025-12-03 18:59:36.487 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec  3 18:59:37 compute-0 nova_compute[348325]: 2025-12-03 18:59:37.690 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:59:37 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1822: 321 pgs: 321 active+clean; 348 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.2 MiB/s wr, 64 op/s
Dec  3 18:59:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 18:59:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1063364189' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 18:59:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 18:59:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1063364189' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 18:59:38 compute-0 nova_compute[348325]: 2025-12-03 18:59:38.022 348329 DEBUG nova.network.neutron [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Successfully updated port: cf729fa8-9549-4bf2-9858-7e8de773e1bc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec  3 18:59:38 compute-0 nova_compute[348325]: 2025-12-03 18:59:38.050 348329 DEBUG oslo_concurrency.lockutils [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Acquiring lock "refresh_cache-a4fc45c7-44e4-4b50-a3e0-98de13268f88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  3 18:59:38 compute-0 nova_compute[348325]: 2025-12-03 18:59:38.050 348329 DEBUG oslo_concurrency.lockutils [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Acquired lock "refresh_cache-a4fc45c7-44e4-4b50-a3e0-98de13268f88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  3 18:59:38 compute-0 nova_compute[348325]: 2025-12-03 18:59:38.051 348329 DEBUG nova.network.neutron [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec  3 18:59:38 compute-0 nova_compute[348325]: 2025-12-03 18:59:38.104 348329 DEBUG nova.compute.manager [req-90ef5102-9662-4b81-97cf-95fe7174074e req-c6f59114-02f0-4c4b-b818-7223fcbab23e 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Received event network-changed-cf729fa8-9549-4bf2-9858-7e8de773e1bc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  3 18:59:38 compute-0 nova_compute[348325]: 2025-12-03 18:59:38.105 348329 DEBUG nova.compute.manager [req-90ef5102-9662-4b81-97cf-95fe7174074e req-c6f59114-02f0-4c4b-b818-7223fcbab23e 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Refreshing instance network info cache due to event network-changed-cf729fa8-9549-4bf2-9858-7e8de773e1bc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec  3 18:59:38 compute-0 nova_compute[348325]: 2025-12-03 18:59:38.105 348329 DEBUG oslo_concurrency.lockutils [req-90ef5102-9662-4b81-97cf-95fe7174074e req-c6f59114-02f0-4c4b-b818-7223fcbab23e 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "refresh_cache-a4fc45c7-44e4-4b50-a3e0-98de13268f88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  3 18:59:38 compute-0 nova_compute[348325]: 2025-12-03 18:59:38.423 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:59:39 compute-0 nova_compute[348325]: 2025-12-03 18:59:39.044 348329 DEBUG nova.network.neutron [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec  3 18:59:39 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 18:59:39 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1823: 321 pgs: 321 active+clean; 364 MiB data, 452 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 79 op/s
Dec  3 18:59:41 compute-0 nova_compute[348325]: 2025-12-03 18:59:41.042 348329 DEBUG nova.network.neutron [None req-806a0b8e-4e78-4688-aefb-22d85dfbc992 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec  3 18:59:41 compute-0 nova_compute[348325]: 2025-12-03 18:59:41.115 348329 DEBUG nova.compute.manager [req-010e6abc-76a2-40cf-8f89-591834a06b38 req-7057bd03-7d44-4932-b162-faed20e2d415 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] Received event network-changed-2c007b4e-e674-4c1f-becb-67fc1b96681b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  3 18:59:41 compute-0 nova_compute[348325]: 2025-12-03 18:59:41.116 348329 DEBUG nova.compute.manager [req-010e6abc-76a2-40cf-8f89-591834a06b38 req-7057bd03-7d44-4932-b162-faed20e2d415 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] Refreshing instance network info cache due to event network-changed-2c007b4e-e674-4c1f-becb-67fc1b96681b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec  3 18:59:41 compute-0 nova_compute[348325]: 2025-12-03 18:59:41.116 348329 DEBUG oslo_concurrency.lockutils [req-010e6abc-76a2-40cf-8f89-591834a06b38 req-7057bd03-7d44-4932-b162-faed20e2d415 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "refresh_cache-c9937213-8842-4393-90b0-edb363037633" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  3 18:59:41 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1824: 321 pgs: 321 active+clean; 364 MiB data, 452 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 68 op/s
Dec  3 18:59:42 compute-0 nova_compute[348325]: 2025-12-03 18:59:42.613 348329 DEBUG nova.network.neutron [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Updating instance_info_cache with network_info: [{"id": "cf729fa8-9549-4bf2-9858-7e8de773e1bc", "address": "fa:16:3e:8d:91:4c", "network": {"id": "04e258c0-609e-4010-a306-af20506c3a9d", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.160", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d29cef7b24ee4d30b2b3f5027ec6aafb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf729fa8-95", "ovs_interfaceid": "cf729fa8-9549-4bf2-9858-7e8de773e1bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  3 18:59:42 compute-0 nova_compute[348325]: 2025-12-03 18:59:42.661 348329 DEBUG oslo_concurrency.lockutils [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Releasing lock "refresh_cache-a4fc45c7-44e4-4b50-a3e0-98de13268f88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  3 18:59:42 compute-0 nova_compute[348325]: 2025-12-03 18:59:42.662 348329 DEBUG nova.compute.manager [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Instance network_info: |[{"id": "cf729fa8-9549-4bf2-9858-7e8de773e1bc", "address": "fa:16:3e:8d:91:4c", "network": {"id": "04e258c0-609e-4010-a306-af20506c3a9d", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.160", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d29cef7b24ee4d30b2b3f5027ec6aafb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf729fa8-95", "ovs_interfaceid": "cf729fa8-9549-4bf2-9858-7e8de773e1bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec  3 18:59:42 compute-0 nova_compute[348325]: 2025-12-03 18:59:42.663 348329 DEBUG oslo_concurrency.lockutils [req-90ef5102-9662-4b81-97cf-95fe7174074e req-c6f59114-02f0-4c4b-b818-7223fcbab23e 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquired lock "refresh_cache-a4fc45c7-44e4-4b50-a3e0-98de13268f88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  3 18:59:42 compute-0 nova_compute[348325]: 2025-12-03 18:59:42.664 348329 DEBUG nova.network.neutron [req-90ef5102-9662-4b81-97cf-95fe7174074e req-c6f59114-02f0-4c4b-b818-7223fcbab23e 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Refreshing network info cache for port cf729fa8-9549-4bf2-9858-7e8de773e1bc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec  3 18:59:42 compute-0 nova_compute[348325]: 2025-12-03 18:59:42.670 348329 DEBUG nova.virt.libvirt.driver [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Start _get_guest_xml network_info=[{"id": "cf729fa8-9549-4bf2-9858-7e8de773e1bc", "address": "fa:16:3e:8d:91:4c", "network": {"id": "04e258c0-609e-4010-a306-af20506c3a9d", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.160", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d29cef7b24ee4d30b2b3f5027ec6aafb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf729fa8-95", "ovs_interfaceid": "cf729fa8-9549-4bf2-9858-7e8de773e1bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-03T18:59:15Z,direct_url=<?>,disk_format='qcow2',id=29e9e995-880d-46f8-bdd0-149d4e107ea9,min_disk=0,min_ram=0,name='tempest-scenario-img--508019753',owner='d29cef7b24ee4d30b2b3f5027ec6aafb',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-03T18:59:19Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'encryption_format': None, 'guest_format': None, 'disk_bus': 'virtio', 'size': 0, 'boot_index': 0, 'encryption_options': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'image_id': '29e9e995-880d-46f8-bdd0-149d4e107ea9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec  3 18:59:42 compute-0 nova_compute[348325]: 2025-12-03 18:59:42.680 348329 WARNING nova.virt.libvirt.driver [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  3 18:59:42 compute-0 nova_compute[348325]: 2025-12-03 18:59:42.695 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:59:42 compute-0 nova_compute[348325]: 2025-12-03 18:59:42.712 348329 DEBUG nova.virt.libvirt.host [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec  3 18:59:42 compute-0 nova_compute[348325]: 2025-12-03 18:59:42.713 348329 DEBUG nova.virt.libvirt.host [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec  3 18:59:42 compute-0 nova_compute[348325]: 2025-12-03 18:59:42.721 348329 DEBUG nova.virt.libvirt.host [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec  3 18:59:42 compute-0 nova_compute[348325]: 2025-12-03 18:59:42.722 348329 DEBUG nova.virt.libvirt.host [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec  3 18:59:42 compute-0 nova_compute[348325]: 2025-12-03 18:59:42.722 348329 DEBUG nova.virt.libvirt.driver [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec  3 18:59:42 compute-0 nova_compute[348325]: 2025-12-03 18:59:42.723 348329 DEBUG nova.virt.hardware [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-03T18:56:30Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a94cfbfb-a20a-4689-ac91-e7436db75880',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-03T18:59:15Z,direct_url=<?>,disk_format='qcow2',id=29e9e995-880d-46f8-bdd0-149d4e107ea9,min_disk=0,min_ram=0,name='tempest-scenario-img--508019753',owner='d29cef7b24ee4d30b2b3f5027ec6aafb',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-03T18:59:19Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec  3 18:59:42 compute-0 nova_compute[348325]: 2025-12-03 18:59:42.724 348329 DEBUG nova.virt.hardware [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec  3 18:59:42 compute-0 nova_compute[348325]: 2025-12-03 18:59:42.724 348329 DEBUG nova.virt.hardware [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec  3 18:59:42 compute-0 nova_compute[348325]: 2025-12-03 18:59:42.724 348329 DEBUG nova.virt.hardware [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec  3 18:59:42 compute-0 nova_compute[348325]: 2025-12-03 18:59:42.725 348329 DEBUG nova.virt.hardware [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec  3 18:59:42 compute-0 nova_compute[348325]: 2025-12-03 18:59:42.725 348329 DEBUG nova.virt.hardware [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec  3 18:59:42 compute-0 nova_compute[348325]: 2025-12-03 18:59:42.726 348329 DEBUG nova.virt.hardware [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec  3 18:59:42 compute-0 nova_compute[348325]: 2025-12-03 18:59:42.726 348329 DEBUG nova.virt.hardware [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec  3 18:59:42 compute-0 nova_compute[348325]: 2025-12-03 18:59:42.727 348329 DEBUG nova.virt.hardware [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec  3 18:59:42 compute-0 nova_compute[348325]: 2025-12-03 18:59:42.727 348329 DEBUG nova.virt.hardware [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec  3 18:59:42 compute-0 nova_compute[348325]: 2025-12-03 18:59:42.728 348329 DEBUG nova.virt.hardware [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec  3 18:59:42 compute-0 nova_compute[348325]: 2025-12-03 18:59:42.730 348329 DEBUG oslo_concurrency.processutils [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 18:59:43 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 18:59:43 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/238475738' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 18:59:43 compute-0 nova_compute[348325]: 2025-12-03 18:59:43.223 348329 DEBUG oslo_concurrency.processutils [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 18:59:43 compute-0 nova_compute[348325]: 2025-12-03 18:59:43.253 348329 DEBUG nova.storage.rbd_utils [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] rbd image a4fc45c7-44e4-4b50-a3e0-98de13268f88_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  3 18:59:43 compute-0 nova_compute[348325]: 2025-12-03 18:59:43.259 348329 DEBUG oslo_concurrency.processutils [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 18:59:43 compute-0 nova_compute[348325]: 2025-12-03 18:59:43.425 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 18:59:43 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 18:59:43 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4048439295' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 18:59:43 compute-0 nova_compute[348325]: 2025-12-03 18:59:43.723 348329 DEBUG oslo_concurrency.processutils [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 18:59:43 compute-0 nova_compute[348325]: 2025-12-03 18:59:43.727 348329 DEBUG nova.virt.libvirt.vif [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T18:59:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-0714371-asg-eacwc356yfed-wjjibmhqaqmp-wkbbxaqu3pya',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-0714371-asg-eacwc356yfed-wjjibmhqaqmp-wkbbxaqu3pya',id=12,image_ref='29e9e995-880d-46f8-bdd0-149d4e107ea9',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='d721c97c-b9eb-44f9-a826-1b99239b172a'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d29cef7b24ee4d30b2b3f5027ec6aafb',ramdisk_id='',reservation_id='r-58mkdhyd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='29e9e995-880d-46f8-bdd0-149d4e107ea9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-463817161',owner_user_name='tempest-PrometheusGabbiTest-463817161-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T18:59:30Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='5b5e6c2a7cce4e3b96611203def80123',uuid=a4fc45c7-44e4-4b50-a3e0-98de13268f88,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cf729fa8-9549-4bf2-9858-7e8de773e1bc", "address": "fa:16:3e:8d:91:4c", "network": {"id": "04e258c0-609e-4010-a306-af20506c3a9d", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.160", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d29cef7b24ee4d30b2b3f5027ec6aafb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf729fa8-95", "ovs_interfaceid": "cf729fa8-9549-4bf2-9858-7e8de773e1bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec  3 18:59:43 compute-0 nova_compute[348325]: 2025-12-03 18:59:43.729 348329 DEBUG nova.network.os_vif_util [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Converting VIF {"id": "cf729fa8-9549-4bf2-9858-7e8de773e1bc", "address": "fa:16:3e:8d:91:4c", "network": {"id": "04e258c0-609e-4010-a306-af20506c3a9d", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.160", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d29cef7b24ee4d30b2b3f5027ec6aafb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf729fa8-95", "ovs_interfaceid": "cf729fa8-9549-4bf2-9858-7e8de773e1bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec  3 18:59:43 compute-0 nova_compute[348325]: 2025-12-03 18:59:43.731 348329 DEBUG nova.network.os_vif_util [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8d:91:4c,bridge_name='br-int',has_traffic_filtering=True,id=cf729fa8-9549-4bf2-9858-7e8de773e1bc,network=Network(04e258c0-609e-4010-a306-af20506c3a9d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf729fa8-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec  3 18:59:43 compute-0 nova_compute[348325]: 2025-12-03 18:59:43.733 348329 DEBUG nova.objects.instance [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Lazy-loading 'pci_devices' on Instance uuid a4fc45c7-44e4-4b50-a3e0-98de13268f88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  3 18:59:43 compute-0 nova_compute[348325]: 2025-12-03 18:59:43.766 348329 DEBUG nova.virt.libvirt.driver [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] End _get_guest_xml xml=<domain type="kvm">
Dec  3 18:59:43 compute-0 nova_compute[348325]:  <uuid>a4fc45c7-44e4-4b50-a3e0-98de13268f88</uuid>
Dec  3 18:59:43 compute-0 nova_compute[348325]:  <name>instance-0000000c</name>
Dec  3 18:59:43 compute-0 nova_compute[348325]:  <memory>131072</memory>
Dec  3 18:59:43 compute-0 nova_compute[348325]:  <vcpu>1</vcpu>
Dec  3 18:59:43 compute-0 nova_compute[348325]:  <metadata>
Dec  3 18:59:43 compute-0 nova_compute[348325]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  3 18:59:43 compute-0 nova_compute[348325]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  3 18:59:43 compute-0 nova_compute[348325]:      <nova:name>te-0714371-asg-eacwc356yfed-wjjibmhqaqmp-wkbbxaqu3pya</nova:name>
Dec  3 18:59:43 compute-0 nova_compute[348325]:      <nova:creationTime>2025-12-03 18:59:42</nova:creationTime>
Dec  3 18:59:43 compute-0 nova_compute[348325]:      <nova:flavor name="m1.nano">
Dec  3 18:59:43 compute-0 nova_compute[348325]:        <nova:memory>128</nova:memory>
Dec  3 18:59:43 compute-0 nova_compute[348325]:        <nova:disk>1</nova:disk>
Dec  3 18:59:43 compute-0 nova_compute[348325]:        <nova:swap>0</nova:swap>
Dec  3 18:59:43 compute-0 nova_compute[348325]:        <nova:ephemeral>0</nova:ephemeral>
Dec  3 18:59:43 compute-0 nova_compute[348325]:        <nova:vcpus>1</nova:vcpus>
Dec  3 18:59:43 compute-0 nova_compute[348325]:      </nova:flavor>
Dec  3 18:59:43 compute-0 nova_compute[348325]:      <nova:owner>
Dec  3 18:59:43 compute-0 nova_compute[348325]:        <nova:user uuid="5b5e6c2a7cce4e3b96611203def80123">tempest-PrometheusGabbiTest-463817161-project-member</nova:user>
Dec  3 18:59:43 compute-0 nova_compute[348325]:        <nova:project uuid="d29cef7b24ee4d30b2b3f5027ec6aafb">tempest-PrometheusGabbiTest-463817161</nova:project>
Dec  3 18:59:43 compute-0 nova_compute[348325]:      </nova:owner>
Dec  3 18:59:43 compute-0 nova_compute[348325]:      <nova:root type="image" uuid="29e9e995-880d-46f8-bdd0-149d4e107ea9"/>
Dec  3 18:59:43 compute-0 nova_compute[348325]:      <nova:ports>
Dec  3 18:59:43 compute-0 nova_compute[348325]:        <nova:port uuid="cf729fa8-9549-4bf2-9858-7e8de773e1bc">
Dec  3 18:59:43 compute-0 nova_compute[348325]:          <nova:ip type="fixed" address="10.100.3.160" ipVersion="4"/>
Dec  3 18:59:43 compute-0 nova_compute[348325]:        </nova:port>
Dec  3 18:59:43 compute-0 nova_compute[348325]:      </nova:ports>
Dec  3 18:59:43 compute-0 nova_compute[348325]:    </nova:instance>
Dec  3 18:59:43 compute-0 nova_compute[348325]:  </metadata>
Dec  3 18:59:43 compute-0 nova_compute[348325]:  <sysinfo type="smbios">
Dec  3 18:59:43 compute-0 nova_compute[348325]:    <system>
Dec  3 18:59:43 compute-0 nova_compute[348325]:      <entry name="manufacturer">RDO</entry>
Dec  3 18:59:43 compute-0 nova_compute[348325]:      <entry name="product">OpenStack Compute</entry>
Dec  3 18:59:43 compute-0 nova_compute[348325]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  3 18:59:43 compute-0 nova_compute[348325]:      <entry name="serial">a4fc45c7-44e4-4b50-a3e0-98de13268f88</entry>
Dec  3 18:59:43 compute-0 nova_compute[348325]:      <entry name="uuid">a4fc45c7-44e4-4b50-a3e0-98de13268f88</entry>
Dec  3 18:59:43 compute-0 nova_compute[348325]:      <entry name="family">Virtual Machine</entry>
Dec  3 18:59:43 compute-0 nova_compute[348325]:    </system>
Dec  3 18:59:43 compute-0 nova_compute[348325]:  </sysinfo>
Dec  3 18:59:43 compute-0 nova_compute[348325]:  <os>
Dec  3 18:59:43 compute-0 nova_compute[348325]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  3 18:59:43 compute-0 nova_compute[348325]:    <boot dev="hd"/>
Dec  3 18:59:43 compute-0 nova_compute[348325]:    <smbios mode="sysinfo"/>
Dec  3 18:59:43 compute-0 nova_compute[348325]:  </os>
Dec  3 18:59:43 compute-0 nova_compute[348325]:  <features>
Dec  3 18:59:43 compute-0 nova_compute[348325]:    <acpi/>
Dec  3 18:59:43 compute-0 nova_compute[348325]:    <apic/>
Dec  3 18:59:43 compute-0 nova_compute[348325]:    <vmcoreinfo/>
Dec  3 18:59:43 compute-0 nova_compute[348325]:  </features>
Dec  3 18:59:43 compute-0 nova_compute[348325]:  <clock offset="utc">
Dec  3 18:59:43 compute-0 nova_compute[348325]:    <timer name="pit" tickpolicy="delay"/>
Dec  3 18:59:43 compute-0 nova_compute[348325]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  3 18:59:43 compute-0 nova_compute[348325]:    <timer name="hpet" present="no"/>
Dec  3 18:59:43 compute-0 nova_compute[348325]:  </clock>
Dec  3 18:59:43 compute-0 nova_compute[348325]:  <cpu mode="host-model" match="exact">
Dec  3 18:59:43 compute-0 nova_compute[348325]:    <topology sockets="1" cores="1" threads="1"/>
Dec  3 18:59:43 compute-0 nova_compute[348325]:  </cpu>
Dec  3 18:59:43 compute-0 nova_compute[348325]:  <devices>
Dec  3 18:59:43 compute-0 nova_compute[348325]:    <disk type="network" device="disk">
Dec  3 18:59:43 compute-0 nova_compute[348325]:      <driver type="raw" cache="none"/>
Dec  3 18:59:43 compute-0 nova_compute[348325]:      <source protocol="rbd" name="vms/a4fc45c7-44e4-4b50-a3e0-98de13268f88_disk">
Dec  3 18:59:43 compute-0 nova_compute[348325]:        <host name="192.168.122.100" port="6789"/>
Dec  3 18:59:43 compute-0 nova_compute[348325]:      </source>
Dec  3 18:59:43 compute-0 nova_compute[348325]:      <auth username="openstack">
Dec  3 18:59:43 compute-0 nova_compute[348325]:        <secret type="ceph" uuid="c1caf3ba-b2a5-5005-a11e-e955c344dccc"/>
Dec  3 18:59:43 compute-0 nova_compute[348325]:      </auth>
Dec  3 18:59:43 compute-0 nova_compute[348325]:      <target dev="vda" bus="virtio"/>
Dec  3 18:59:43 compute-0 nova_compute[348325]:    </disk>
Dec  3 18:59:43 compute-0 nova_compute[348325]:    <disk type="network" device="cdrom">
Dec  3 18:59:43 compute-0 nova_compute[348325]:      <driver type="raw" cache="none"/>
Dec  3 18:59:43 compute-0 nova_compute[348325]:      <source protocol="rbd" name="vms/a4fc45c7-44e4-4b50-a3e0-98de13268f88_disk.config">
Dec  3 18:59:43 compute-0 nova_compute[348325]:        <host name="192.168.122.100" port="6789"/>
Dec  3 18:59:43 compute-0 nova_compute[348325]:      </source>
Dec  3 18:59:43 compute-0 nova_compute[348325]:      <auth username="openstack">
Dec  3 18:59:43 compute-0 nova_compute[348325]:        <secret type="ceph" uuid="c1caf3ba-b2a5-5005-a11e-e955c344dccc"/>
Dec  3 18:59:43 compute-0 nova_compute[348325]:      </auth>
Dec  3 18:59:43 compute-0 nova_compute[348325]:      <target dev="sda" bus="sata"/>
Dec  3 18:59:43 compute-0 nova_compute[348325]:    </disk>
Dec  3 18:59:43 compute-0 nova_compute[348325]:    <interface type="ethernet">
Dec  3 18:59:43 compute-0 nova_compute[348325]:      <mac address="fa:16:3e:8d:91:4c"/>
Dec  3 18:59:43 compute-0 nova_compute[348325]:      <model type="virtio"/>
Dec  3 18:59:43 compute-0 nova_compute[348325]:      <driver name="vhost" rx_queue_size="512"/>
Dec  3 18:59:43 compute-0 nova_compute[348325]:      <mtu size="1442"/>
Dec  3 18:59:43 compute-0 nova_compute[348325]:      <target dev="tapcf729fa8-95"/>
Dec  3 18:59:43 compute-0 nova_compute[348325]:    </interface>
Dec  3 18:59:43 compute-0 nova_compute[348325]:    <serial type="pty">
Dec  3 18:59:43 compute-0 nova_compute[348325]:      <log file="/var/lib/nova/instances/a4fc45c7-44e4-4b50-a3e0-98de13268f88/console.log" append="off"/>
Dec  3 18:59:43 compute-0 nova_compute[348325]:    </serial>
Dec  3 18:59:43 compute-0 nova_compute[348325]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  3 18:59:43 compute-0 nova_compute[348325]:    <video>
Dec  3 18:59:43 compute-0 nova_compute[348325]:      <model type="virtio"/>
Dec  3 18:59:43 compute-0 nova_compute[348325]:    </video>
Dec  3 18:59:43 compute-0 nova_compute[348325]:    <input type="tablet" bus="usb"/>
Dec  3 18:59:43 compute-0 nova_compute[348325]:    <rng model="virtio">
Dec  3 18:59:43 compute-0 nova_compute[348325]:      <backend model="random">/dev/urandom</backend>
Dec  3 18:59:43 compute-0 nova_compute[348325]:    </rng>
Dec  3 18:59:43 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root"/>
Dec  3 18:59:43 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:59:43 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:59:43 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:59:43 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:59:43 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:59:43 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:59:43 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:59:43 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:59:43 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:59:43 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:59:43 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:59:43 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:59:43 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:59:43 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:59:43 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:59:43 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:59:43 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:59:43 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:59:43 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:59:43 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:59:43 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:59:43 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:59:43 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:59:43 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 18:59:43 compute-0 nova_compute[348325]:    <controller type="usb" index="0"/>
Dec  3 18:59:43 compute-0 nova_compute[348325]:    <memballoon model="virtio">
Dec  3 18:59:43 compute-0 nova_compute[348325]:      <stats period="10"/>
Dec  3 18:59:43 compute-0 nova_compute[348325]:    </memballoon>
Dec  3 18:59:43 compute-0 nova_compute[348325]:  </devices>
Dec  3 18:59:43 compute-0 nova_compute[348325]: </domain>
Dec  3 18:59:43 compute-0 nova_compute[348325]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec  3 18:59:43 compute-0 nova_compute[348325]: 2025-12-03 18:59:43.768 348329 DEBUG nova.compute.manager [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Preparing to wait for external event network-vif-plugged-cf729fa8-9549-4bf2-9858-7e8de773e1bc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec  3 18:59:43 compute-0 nova_compute[348325]: 2025-12-03 18:59:43.768 348329 DEBUG oslo_concurrency.lockutils [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Acquiring lock "a4fc45c7-44e4-4b50-a3e0-98de13268f88-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 18:59:43 compute-0 nova_compute[348325]: 2025-12-03 18:59:43.769 348329 DEBUG oslo_concurrency.lockutils [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Lock "a4fc45c7-44e4-4b50-a3e0-98de13268f88-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 18:59:43 compute-0 nova_compute[348325]: 2025-12-03 18:59:43.770 348329 DEBUG oslo_concurrency.lockutils [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Lock "a4fc45c7-44e4-4b50-a3e0-98de13268f88-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 18:59:43 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1825: 321 pgs: 321 active+clean; 364 MiB data, 452 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 40 op/s
Dec  3 18:59:43 compute-0 nova_compute[348325]: 2025-12-03 18:59:43.773 348329 DEBUG nova.virt.libvirt.vif [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T18:59:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-0714371-asg-eacwc356yfed-wjjibmhqaqmp-wkbbxaqu3pya',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-0714371-asg-eacwc356yfed-wjjibmhqaqmp-wkbbxaqu3pya',id=12,image_ref='29e9e995-880d-46f8-bdd0-149d4e107ea9',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='d721c97c-b9eb-44f9-a826-1b99239b172a'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d29cef7b24ee4d30b2b3f5027ec6aafb',ramdisk_id='',reservation_id='r-58mkdhyd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='29e9e995-880d-46f8-bdd0-149d4e107ea9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-463817161',owner_user_name='tempest-PrometheusGabbiTest-463817161-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T18:59:30Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='5b5e6c2a7cce4e3b96611203def80123',uuid=a4fc45c7-44e4-4b50-a3e0-98de13268f88,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cf729fa8-9549-4bf2-9858-7e8de773e1bc", "address": "fa:16:3e:8d:91:4c", "network": {"id": "04e258c0-609e-4010-a306-af20506c3a9d", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.160", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d29cef7b24ee4d30b2b3f5027ec6aafb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf729fa8-95", "ovs_interfaceid": "cf729fa8-9549-4bf2-9858-7e8de773e1bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  3 18:59:43 compute-0 nova_compute[348325]: 2025-12-03 18:59:43.773 348329 DEBUG nova.network.os_vif_util [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Converting VIF {"id": "cf729fa8-9549-4bf2-9858-7e8de773e1bc", "address": "fa:16:3e:8d:91:4c", "network": {"id": "04e258c0-609e-4010-a306-af20506c3a9d", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.160", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d29cef7b24ee4d30b2b3f5027ec6aafb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf729fa8-95", "ovs_interfaceid": "cf729fa8-9549-4bf2-9858-7e8de773e1bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 18:59:43 compute-0 nova_compute[348325]: 2025-12-03 18:59:43.774 348329 DEBUG nova.network.os_vif_util [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8d:91:4c,bridge_name='br-int',has_traffic_filtering=True,id=cf729fa8-9549-4bf2-9858-7e8de773e1bc,network=Network(04e258c0-609e-4010-a306-af20506c3a9d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf729fa8-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 18:59:43 compute-0 nova_compute[348325]: 2025-12-03 18:59:43.776 348329 DEBUG os_vif [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8d:91:4c,bridge_name='br-int',has_traffic_filtering=True,id=cf729fa8-9549-4bf2-9858-7e8de773e1bc,network=Network(04e258c0-609e-4010-a306-af20506c3a9d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf729fa8-95') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  3 18:59:43 compute-0 nova_compute[348325]: 2025-12-03 18:59:43.777 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:59:43 compute-0 nova_compute[348325]: 2025-12-03 18:59:43.778 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:59:43 compute-0 nova_compute[348325]: 2025-12-03 18:59:43.778 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 18:59:43 compute-0 nova_compute[348325]: 2025-12-03 18:59:43.788 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:59:43 compute-0 nova_compute[348325]: 2025-12-03 18:59:43.789 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcf729fa8-95, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:59:43 compute-0 nova_compute[348325]: 2025-12-03 18:59:43.790 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcf729fa8-95, col_values=(('external_ids', {'iface-id': 'cf729fa8-9549-4bf2-9858-7e8de773e1bc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8d:91:4c', 'vm-uuid': 'a4fc45c7-44e4-4b50-a3e0-98de13268f88'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:59:43 compute-0 nova_compute[348325]: 2025-12-03 18:59:43.792 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:59:43 compute-0 NetworkManager[49087]: <info>  [1764788383.7957] manager: (tapcf729fa8-95): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/66)
Dec  3 18:59:43 compute-0 nova_compute[348325]: 2025-12-03 18:59:43.801 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  3 18:59:43 compute-0 nova_compute[348325]: 2025-12-03 18:59:43.804 348329 INFO os_vif [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8d:91:4c,bridge_name='br-int',has_traffic_filtering=True,id=cf729fa8-9549-4bf2-9858-7e8de773e1bc,network=Network(04e258c0-609e-4010-a306-af20506c3a9d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf729fa8-95')#033[00m
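The ovsdbapp transactions above (AddBridgeCommand, AddPortCommand, DbSetCommand) are what os-vif issues to attach tapcf729fa8-95 to br-int and stamp the Interface row with the Neutron port id. A hedged sketch of the same calls through ovsdbapp's Open_vSwitch API; the connection string is an assumption, and the names are copied from the log:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server("unix:/run/openvswitch/db.sock",
                                          "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # One transaction mirroring the commands in the log: add the port
    # idempotently, then set the external_ids that OVN uses for binding.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port("br-int", "tapcf729fa8-95", may_exist=True))
        txn.add(api.db_set(
            "Interface", "tapcf729fa8-95",
            ("external_ids",
             {"iface-id": "cf729fa8-9549-4bf2-9858-7e8de773e1bc"})))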
Dec  3 18:59:43 compute-0 nova_compute[348325]: 2025-12-03 18:59:43.880 348329 DEBUG nova.virt.libvirt.driver [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  3 18:59:43 compute-0 nova_compute[348325]: 2025-12-03 18:59:43.881 348329 DEBUG nova.virt.libvirt.driver [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  3 18:59:43 compute-0 nova_compute[348325]: 2025-12-03 18:59:43.882 348329 DEBUG nova.virt.libvirt.driver [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] No VIF found with MAC fa:16:3e:8d:91:4c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  3 18:59:43 compute-0 nova_compute[348325]: 2025-12-03 18:59:43.883 348329 INFO nova.virt.libvirt.driver [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Using config drive#033[00m
Dec  3 18:59:43 compute-0 nova_compute[348325]: 2025-12-03 18:59:43.943 348329 DEBUG nova.storage.rbd_utils [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] rbd image a4fc45c7-44e4-4b50-a3e0-98de13268f88_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 18:59:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:59:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:59:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:59:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:59:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 18:59:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 18:59:44 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 18:59:45 compute-0 nova_compute[348325]: 2025-12-03 18:59:45.129 348329 INFO nova.virt.libvirt.driver [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Creating config drive at /var/lib/nova/instances/a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.config#033[00m
Dec  3 18:59:45 compute-0 nova_compute[348325]: 2025-12-03 18:59:45.134 348329 DEBUG oslo_concurrency.processutils [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps2okhw1v execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:59:45 compute-0 nova_compute[348325]: 2025-12-03 18:59:45.284 348329 DEBUG oslo_concurrency.processutils [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps2okhw1v" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
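The two processutils lines above show the config drive being built: nova writes the metadata tree to a temporary directory and runs mkisofs over it with the config-2 volume label that cloud-init looks for. A sketch of the same invocation through oslo.concurrency, with placeholder paths:

    from oslo_concurrency import processutils

    # Build an ISO9660 image with Joliet (-J) and Rock Ridge (-r)
    # extensions, labelled 'config-2', from a directory of metadata files,
    # as in the logged command.
    out, err = processutils.execute(
        "/usr/bin/mkisofs", "-o", "/tmp/disk.config",
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", "OpenStack Compute", "-quiet", "-J", "-r",
        "-V", "config-2", "/tmp/metadata-tree")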
Dec  3 18:59:45 compute-0 nova_compute[348325]: 2025-12-03 18:59:45.324 348329 DEBUG nova.storage.rbd_utils [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] rbd image a4fc45c7-44e4-4b50-a3e0-98de13268f88_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 18:59:45 compute-0 nova_compute[348325]: 2025-12-03 18:59:45.333 348329 DEBUG oslo_concurrency.processutils [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.config a4fc45c7-44e4-4b50-a3e0-98de13268f88_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:59:45 compute-0 nova_compute[348325]: 2025-12-03 18:59:45.598 348329 DEBUG oslo_concurrency.processutils [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.config a4fc45c7-44e4-4b50-a3e0-98de13268f88_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.264s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:59:45 compute-0 nova_compute[348325]: 2025-12-03 18:59:45.599 348329 INFO nova.virt.libvirt.driver [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Deleting local config drive /var/lib/nova/instances/a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.config because it was imported into RBD.#033[00m
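Because the instance's disks live in Ceph, the freshly built ISO is pushed into the vms pool with `rbd import` and the local copy is deleted. A rough equivalent using the python rados/rbd bindings, a sketch only; file and image names below are illustrative:

    import rados
    import rbd

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx("vms")
        try:
            with open("/tmp/disk.config", "rb") as f:
                data = f.read()
            # Create an image sized to the file, then write the bytes in.
            rbd.RBD().create(ioctx, "example_disk.config", len(data))
            with rbd.Image(ioctx, "example_disk.config") as image:
                image.write(data, 0)
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()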
Dec  3 18:59:45 compute-0 kernel: tapcf729fa8-95: entered promiscuous mode
Dec  3 18:59:45 compute-0 NetworkManager[49087]: <info>  [1764788385.6817] manager: (tapcf729fa8-95): new Tun device (/org/freedesktop/NetworkManager/Devices/67)
Dec  3 18:59:45 compute-0 nova_compute[348325]: 2025-12-03 18:59:45.685 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:59:45 compute-0 ovn_controller[89305]: 2025-12-03T18:59:45Z|00130|binding|INFO|Claiming lport cf729fa8-9549-4bf2-9858-7e8de773e1bc for this chassis.
Dec  3 18:59:45 compute-0 ovn_controller[89305]: 2025-12-03T18:59:45Z|00131|binding|INFO|cf729fa8-9549-4bf2-9858-7e8de773e1bc: Claiming fa:16:3e:8d:91:4c 10.100.3.160
Dec  3 18:59:45 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:59:45.696 286999 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8d:91:4c 10.100.3.160'], port_security=['fa:16:3e:8d:91:4c 10.100.3.160'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.3.160/16', 'neutron:device_id': 'a4fc45c7-44e4-4b50-a3e0-98de13268f88', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-04e258c0-609e-4010-a306-af20506c3a9d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd29cef7b24ee4d30b2b3f5027ec6aafb', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4e47f9e7-514d-4fc2-9225-d05512482dee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b71f2b6d-7f9c-430c-a162-af2bdc131d68, chassis=[<ovs.db.idl.Row object at 0x7f81e3e96760>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f81e3e96760>], logical_port=cf729fa8-9549-4bf2-9858-7e8de773e1bc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 18:59:45 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:59:45.697 286999 INFO neutron.agent.ovn.metadata.agent [-] Port cf729fa8-9549-4bf2-9858-7e8de773e1bc in datapath 04e258c0-609e-4010-a306-af20506c3a9d bound to our chassis#033[00m
Dec  3 18:59:45 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:59:45.700 286999 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 04e258c0-609e-4010-a306-af20506c3a9d#033[00m
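The metadata agent reacts to the southbound Port_Binding update above through ovsdbapp's event matching: the chassis column flipping from empty to this chassis is what triggers provisioning of the metadata namespace. A skeletal version of such a watcher, matching the (events=('update',), table='Port_Binding') shape shown in the log; the class name and handler body are illustrative:

    from ovsdbapp.backend.ovs_idl import event

    class PortBindingUpdatedEvent(event.RowEvent):
        def __init__(self):
            # Fire on updates to the Port_Binding table, unconditionally.
            super().__init__((self.ROW_UPDATE,), "Port_Binding", None)

        def run(self, event_type, row, old):
            # 'old' carries previous values of changed columns: a port newly
            # bound here has old.chassis == [] while row.chassis is set.
            print("port %s bound" % row.logical_port)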
Dec  3 18:59:45 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:59:45.714 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[00881653-2d2a-45dc-a8ed-d12f2cd72de0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:59:45 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:59:45.715 286999 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap04e258c0-61 in ovnmeta-04e258c0-609e-4010-a306-af20506c3a9d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
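Provisioning the datapath starts with a VETH pair: one end stays in the root namespace to be plugged into br-int, while the other (tap04e258c0-61 here) is moved into the ovnmeta- namespace where the metadata proxy will listen. A minimal pyroute2 sketch of that plumbing, with made-up interface and namespace names:

    from pyroute2 import IPRoute, netns

    netns.create("ovnmeta-example")            # the per-network namespace
    ipr = IPRoute()
    # Create the veth pair in the root namespace...
    ipr.link("add", ifname="tap-example0", kind="veth", peer="tap-example1")
    # ...then move the inner end into the namespace.
    idx = ipr.link_lookup(ifname="tap-example1")[0]
    ipr.link("set", index=idx, net_ns_fd="ovnmeta-example")
    ipr.close()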
Dec  3 18:59:45 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:59:45.719 411759 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap04e258c0-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  3 18:59:45 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:59:45.719 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[da4e94c4-8d4b-47d2-a50f-518e7f7adc5e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:59:45 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:59:45.722 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[a9d05cb8-1c36-4d2f-a85f-a06f4e9e3d26]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:59:45 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:59:45.735 287110 DEBUG oslo.privsep.daemon [-] privsep: reply[2a7a0dff-d166-41ea-9765-a2f9a45626d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:59:45 compute-0 systemd-machined[138702]: New machine qemu-13-instance-0000000c.
Dec  3 18:59:45 compute-0 nova_compute[348325]: 2025-12-03 18:59:45.747 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:59:45 compute-0 ovn_controller[89305]: 2025-12-03T18:59:45Z|00132|binding|INFO|Setting lport cf729fa8-9549-4bf2-9858-7e8de773e1bc ovn-installed in OVS
Dec  3 18:59:45 compute-0 ovn_controller[89305]: 2025-12-03T18:59:45Z|00133|binding|INFO|Setting lport cf729fa8-9549-4bf2-9858-7e8de773e1bc up in Southbound
Dec  3 18:59:45 compute-0 nova_compute[348325]: 2025-12-03 18:59:45.750 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:59:45 compute-0 systemd[1]: Started Virtual Machine qemu-13-instance-0000000c.
Dec  3 18:59:45 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:59:45.767 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[598bbefa-6b53-4725-a3e0-90d9985580a5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:59:45 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1826: 321 pgs: 321 active+clean; 364 MiB data, 452 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Dec  3 18:59:45 compute-0 systemd-udevd[447081]: Network interface NamePolicy= disabled on kernel command line.
Dec  3 18:59:45 compute-0 NetworkManager[49087]: <info>  [1764788385.7994] device (tapcf729fa8-95): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  3 18:59:45 compute-0 NetworkManager[49087]: <info>  [1764788385.8010] device (tapcf729fa8-95): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  3 18:59:45 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:59:45.820 411797 DEBUG oslo.privsep.daemon [-] privsep: reply[a0299597-b3c7-45a3-a758-b3e74d3ce0e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:59:45 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:59:45.829 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[3fee529f-ea42-4500-9afb-e9735e773172]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:59:45 compute-0 NetworkManager[49087]: <info>  [1764788385.8310] manager: (tap04e258c0-60): new Veth device (/org/freedesktop/NetworkManager/Devices/68)
Dec  3 18:59:45 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:59:45.871 411797 DEBUG oslo.privsep.daemon [-] privsep: reply[5627f1e0-b4d9-49b3-b6bc-03d8517356c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:59:45 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:59:45.876 411797 DEBUG oslo.privsep.daemon [-] privsep: reply[1b7a9228-df9b-4823-9b1e-37aff7542f80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:59:45 compute-0 NetworkManager[49087]: <info>  [1764788385.9149] device (tap04e258c0-60): carrier: link connected
Dec  3 18:59:45 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:59:45.925 411797 DEBUG oslo.privsep.daemon [-] privsep: reply[3797da88-6296-4182-bacb-41cf20bcbfb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:59:45 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:59:45.951 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[f88c6c39-dcb8-4ccc-9e1a-e5217add32a7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap04e258c0-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0e:5b:40'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 40], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 666585, 'reachable_time': 32370, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 447111, 'error': None, 'target': 'ovnmeta-04e258c0-609e-4010-a306-af20506c3a9d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:59:45 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:59:45.973 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[f3fe121e-5295-4e4d-b832-bc3854248585]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0e:5b40'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 666585, 'tstamp': 666585}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 447112, 'error': None, 'target': 'ovnmeta-04e258c0-609e-4010-a306-af20506c3a9d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:59:46 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:59:45.998 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[88e4daf0-ac9a-4c3e-b18e-6038ae5e7f21]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap04e258c0-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0e:5b:40'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 40], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 666585, 'reachable_time': 32370, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 447113, 'error': None, 'target': 'ovnmeta-04e258c0-609e-4010-a306-af20506c3a9d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:59:46 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:59:46.046 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[6d99ffe3-b692-47a9-8f5c-f28ec0858751]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:59:46 compute-0 nova_compute[348325]: 2025-12-03 18:59:46.059 348329 DEBUG nova.network.neutron [req-90ef5102-9662-4b81-97cf-95fe7174074e req-c6f59114-02f0-4c4b-b818-7223fcbab23e 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Updated VIF entry in instance network info cache for port cf729fa8-9549-4bf2-9858-7e8de773e1bc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  3 18:59:46 compute-0 nova_compute[348325]: 2025-12-03 18:59:46.060 348329 DEBUG nova.network.neutron [req-90ef5102-9662-4b81-97cf-95fe7174074e req-c6f59114-02f0-4c4b-b818-7223fcbab23e 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Updating instance_info_cache with network_info: [{"id": "cf729fa8-9549-4bf2-9858-7e8de773e1bc", "address": "fa:16:3e:8d:91:4c", "network": {"id": "04e258c0-609e-4010-a306-af20506c3a9d", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.160", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d29cef7b24ee4d30b2b3f5027ec6aafb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf729fa8-95", "ovs_interfaceid": "cf729fa8-9549-4bf2-9858-7e8de773e1bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 18:59:46 compute-0 nova_compute[348325]: 2025-12-03 18:59:46.067 348329 DEBUG nova.compute.manager [req-d228ba79-a629-45d8-84d9-5ad7590ccc44 req-2ee214ac-2653-45fd-83ca-28631dab5e29 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Received event network-vif-plugged-cf729fa8-9549-4bf2-9858-7e8de773e1bc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 18:59:46 compute-0 nova_compute[348325]: 2025-12-03 18:59:46.067 348329 DEBUG oslo_concurrency.lockutils [req-d228ba79-a629-45d8-84d9-5ad7590ccc44 req-2ee214ac-2653-45fd-83ca-28631dab5e29 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "a4fc45c7-44e4-4b50-a3e0-98de13268f88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:59:46 compute-0 nova_compute[348325]: 2025-12-03 18:59:46.068 348329 DEBUG oslo_concurrency.lockutils [req-d228ba79-a629-45d8-84d9-5ad7590ccc44 req-2ee214ac-2653-45fd-83ca-28631dab5e29 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "a4fc45c7-44e4-4b50-a3e0-98de13268f88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:59:46 compute-0 nova_compute[348325]: 2025-12-03 18:59:46.068 348329 DEBUG oslo_concurrency.lockutils [req-d228ba79-a629-45d8-84d9-5ad7590ccc44 req-2ee214ac-2653-45fd-83ca-28631dab5e29 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "a4fc45c7-44e4-4b50-a3e0-98de13268f88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:59:46 compute-0 nova_compute[348325]: 2025-12-03 18:59:46.068 348329 DEBUG nova.compute.manager [req-d228ba79-a629-45d8-84d9-5ad7590ccc44 req-2ee214ac-2653-45fd-83ca-28631dab5e29 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Processing event network-vif-plugged-cf729fa8-9549-4bf2-9858-7e8de773e1bc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  3 18:59:46 compute-0 nova_compute[348325]: 2025-12-03 18:59:46.077 348329 DEBUG oslo_concurrency.lockutils [req-90ef5102-9662-4b81-97cf-95fe7174074e req-c6f59114-02f0-4c4b-b818-7223fcbab23e 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Releasing lock "refresh_cache-a4fc45c7-44e4-4b50-a3e0-98de13268f88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 18:59:46 compute-0 nova_compute[348325]: 2025-12-03 18:59:46.084 348329 DEBUG nova.network.neutron [None req-806a0b8e-4e78-4688-aefb-22d85dfbc992 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] Updating instance_info_cache with network_info: [{"id": "2c007b4e-e674-4c1f-becb-67fc1b96681b", "address": "fa:16:3e:7c:33:2c", "network": {"id": "d518f3f9-88f0-4dc2-8769-17ebdac41174", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-282035089-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "82b2746c38174502bdcb70a8ab378edf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c007b4e-e6", "ovs_interfaceid": "2c007b4e-e674-4c1f-becb-67fc1b96681b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 18:59:46 compute-0 nova_compute[348325]: 2025-12-03 18:59:46.103 348329 DEBUG oslo_concurrency.lockutils [None req-806a0b8e-4e78-4688-aefb-22d85dfbc992 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Releasing lock "refresh_cache-c9937213-8842-4393-90b0-edb363037633" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 18:59:46 compute-0 nova_compute[348325]: 2025-12-03 18:59:46.104 348329 DEBUG nova.compute.manager [None req-806a0b8e-4e78-4688-aefb-22d85dfbc992 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144#033[00m
Dec  3 18:59:46 compute-0 nova_compute[348325]: 2025-12-03 18:59:46.104 348329 DEBUG nova.compute.manager [None req-806a0b8e-4e78-4688-aefb-22d85dfbc992 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] network_info to inject: |[{"id": "2c007b4e-e674-4c1f-becb-67fc1b96681b", "address": "fa:16:3e:7c:33:2c", "network": {"id": "d518f3f9-88f0-4dc2-8769-17ebdac41174", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-282035089-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "82b2746c38174502bdcb70a8ab378edf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c007b4e-e6", "ovs_interfaceid": "2c007b4e-e674-4c1f-becb-67fc1b96681b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145#033[00m
Dec  3 18:59:46 compute-0 nova_compute[348325]: 2025-12-03 18:59:46.107 348329 DEBUG oslo_concurrency.lockutils [req-010e6abc-76a2-40cf-8f89-591834a06b38 req-7057bd03-7d44-4932-b162-faed20e2d415 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquired lock "refresh_cache-c9937213-8842-4393-90b0-edb363037633" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 18:59:46 compute-0 nova_compute[348325]: 2025-12-03 18:59:46.107 348329 DEBUG nova.network.neutron [req-010e6abc-76a2-40cf-8f89-591834a06b38 req-7057bd03-7d44-4932-b162-faed20e2d415 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] Refreshing network info cache for port 2c007b4e-e674-4c1f-becb-67fc1b96681b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  3 18:59:46 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:59:46.151 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[6e0862a8-25f5-4349-8ff4-d0087fe8b872]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:59:46 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:59:46.153 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap04e258c0-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:59:46 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:59:46.153 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 18:59:46 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:59:46.153 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap04e258c0-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:59:46 compute-0 nova_compute[348325]: 2025-12-03 18:59:46.155 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:59:46 compute-0 NetworkManager[49087]: <info>  [1764788386.1565] manager: (tap04e258c0-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/69)
Dec  3 18:59:46 compute-0 kernel: tap04e258c0-60: entered promiscuous mode
Dec  3 18:59:46 compute-0 nova_compute[348325]: 2025-12-03 18:59:46.163 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:59:46 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:59:46.165 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap04e258c0-60, col_values=(('external_ids', {'iface-id': 'f82febe8-1e88-4e67-9f7a-5af5921c9877'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:59:46 compute-0 ovn_controller[89305]: 2025-12-03T18:59:46Z|00134|binding|INFO|Releasing lport f82febe8-1e88-4e67-9f7a-5af5921c9877 from this chassis (sb_readonly=0)
Dec  3 18:59:46 compute-0 nova_compute[348325]: 2025-12-03 18:59:46.190 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:59:46 compute-0 nova_compute[348325]: 2025-12-03 18:59:46.192 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:59:46 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:59:46.192 286999 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/04e258c0-609e-4010-a306-af20506c3a9d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/04e258c0-609e-4010-a306-af20506c3a9d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  3 18:59:46 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:59:46.194 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[c6d16585-82d1-4cb0-a166-f528d971880a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:59:46 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:59:46.195 286999 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  3 18:59:46 compute-0 ovn_metadata_agent[286994]: global
Dec  3 18:59:46 compute-0 ovn_metadata_agent[286994]:    log         /dev/log local0 debug
Dec  3 18:59:46 compute-0 ovn_metadata_agent[286994]:    log-tag     haproxy-metadata-proxy-04e258c0-609e-4010-a306-af20506c3a9d
Dec  3 18:59:46 compute-0 ovn_metadata_agent[286994]:    user        root
Dec  3 18:59:46 compute-0 ovn_metadata_agent[286994]:    group       root
Dec  3 18:59:46 compute-0 ovn_metadata_agent[286994]:    maxconn     1024
Dec  3 18:59:46 compute-0 ovn_metadata_agent[286994]:    pidfile     /var/lib/neutron/external/pids/04e258c0-609e-4010-a306-af20506c3a9d.pid.haproxy
Dec  3 18:59:46 compute-0 ovn_metadata_agent[286994]:    daemon
Dec  3 18:59:46 compute-0 ovn_metadata_agent[286994]: 
Dec  3 18:59:46 compute-0 ovn_metadata_agent[286994]: defaults
Dec  3 18:59:46 compute-0 ovn_metadata_agent[286994]:    log global
Dec  3 18:59:46 compute-0 ovn_metadata_agent[286994]:    mode http
Dec  3 18:59:46 compute-0 ovn_metadata_agent[286994]:    option httplog
Dec  3 18:59:46 compute-0 ovn_metadata_agent[286994]:    option dontlognull
Dec  3 18:59:46 compute-0 ovn_metadata_agent[286994]:    option http-server-close
Dec  3 18:59:46 compute-0 ovn_metadata_agent[286994]:    option forwardfor
Dec  3 18:59:46 compute-0 ovn_metadata_agent[286994]:    retries                 3
Dec  3 18:59:46 compute-0 ovn_metadata_agent[286994]:    timeout http-request    30s
Dec  3 18:59:46 compute-0 ovn_metadata_agent[286994]:    timeout connect         30s
Dec  3 18:59:46 compute-0 ovn_metadata_agent[286994]:    timeout client          32s
Dec  3 18:59:46 compute-0 ovn_metadata_agent[286994]:    timeout server          32s
Dec  3 18:59:46 compute-0 ovn_metadata_agent[286994]:    timeout http-keep-alive 30s
Dec  3 18:59:46 compute-0 ovn_metadata_agent[286994]: 
Dec  3 18:59:46 compute-0 ovn_metadata_agent[286994]: 
Dec  3 18:59:46 compute-0 ovn_metadata_agent[286994]: listen listener
Dec  3 18:59:46 compute-0 ovn_metadata_agent[286994]:    bind 169.254.169.254:80
Dec  3 18:59:46 compute-0 ovn_metadata_agent[286994]:    server metadata /var/lib/neutron/metadata_proxy
Dec  3 18:59:46 compute-0 ovn_metadata_agent[286994]:    http-request add-header X-OVN-Network-ID 04e258c0-609e-4010-a306-af20506c3a9d
Dec  3 18:59:46 compute-0 ovn_metadata_agent[286994]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  3 18:59:46 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:59:46.195 286999 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-04e258c0-609e-4010-a306-af20506c3a9d', 'env', 'PROCESS_TAG=haproxy-04e258c0-609e-4010-a306-af20506c3a9d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/04e258c0-609e-4010-a306-af20506c3a9d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
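The haproxy config dumped above binds the well-known metadata address inside the ovnmeta- namespace, relays requests to the agent's unix socket, and tags each request with X-OVN-Network-ID so the agent can resolve which network the caller is on. From a guest on that network, the lookup is a plain HTTP request; the path below is the standard OpenStack metadata endpoint:

    import urllib.request

    # haproxy listening on 169.254.169.254:80 inside the namespace forwards
    # this to /var/lib/neutron/metadata_proxy, adding X-OVN-Network-ID.
    with urllib.request.urlopen(
            "http://169.254.169.254/openstack/latest/meta_data.json",
            timeout=10) as resp:
        print(resp.read().decode())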
Dec  3 18:59:46 compute-0 nova_compute[348325]: 2025-12-03 18:59:46.324 348329 DEBUG oslo_concurrency.lockutils [None req-2eed2737-c802-487d-8458-b36f3f429504 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Acquiring lock "c9937213-8842-4393-90b0-edb363037633" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:59:46 compute-0 nova_compute[348325]: 2025-12-03 18:59:46.324 348329 DEBUG oslo_concurrency.lockutils [None req-2eed2737-c802-487d-8458-b36f3f429504 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Lock "c9937213-8842-4393-90b0-edb363037633" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:59:46 compute-0 nova_compute[348325]: 2025-12-03 18:59:46.324 348329 DEBUG oslo_concurrency.lockutils [None req-2eed2737-c802-487d-8458-b36f3f429504 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Acquiring lock "c9937213-8842-4393-90b0-edb363037633-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:59:46 compute-0 nova_compute[348325]: 2025-12-03 18:59:46.325 348329 DEBUG oslo_concurrency.lockutils [None req-2eed2737-c802-487d-8458-b36f3f429504 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Lock "c9937213-8842-4393-90b0-edb363037633-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:59:46 compute-0 nova_compute[348325]: 2025-12-03 18:59:46.325 348329 DEBUG oslo_concurrency.lockutils [None req-2eed2737-c802-487d-8458-b36f3f429504 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Lock "c9937213-8842-4393-90b0-edb363037633-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:59:46 compute-0 nova_compute[348325]: 2025-12-03 18:59:46.326 348329 INFO nova.compute.manager [None req-2eed2737-c802-487d-8458-b36f3f429504 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] Terminating instance#033[00m
Dec  3 18:59:46 compute-0 nova_compute[348325]: 2025-12-03 18:59:46.327 348329 DEBUG nova.compute.manager [None req-2eed2737-c802-487d-8458-b36f3f429504 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  3 18:59:46 compute-0 kernel: tap2c007b4e-e6 (unregistering): left promiscuous mode
Dec  3 18:59:46 compute-0 NetworkManager[49087]: <info>  [1764788386.4308] device (tap2c007b4e-e6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  3 18:59:46 compute-0 nova_compute[348325]: 2025-12-03 18:59:46.444 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:59:46 compute-0 ovn_controller[89305]: 2025-12-03T18:59:46Z|00135|binding|INFO|Releasing lport 2c007b4e-e674-4c1f-becb-67fc1b96681b from this chassis (sb_readonly=0)
Dec  3 18:59:46 compute-0 ovn_controller[89305]: 2025-12-03T18:59:46Z|00136|binding|INFO|Setting lport 2c007b4e-e674-4c1f-becb-67fc1b96681b down in Southbound
Dec  3 18:59:46 compute-0 ovn_controller[89305]: 2025-12-03T18:59:46Z|00137|binding|INFO|Removing iface tap2c007b4e-e6 ovn-installed in OVS
Dec  3 18:59:46 compute-0 nova_compute[348325]: 2025-12-03 18:59:46.448 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:59:46 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:59:46.454 286999 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7c:33:2c 10.100.0.11'], port_security=['fa:16:3e:7c:33:2c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'c9937213-8842-4393-90b0-edb363037633', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d518f3f9-88f0-4dc2-8769-17ebdac41174', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '82b2746c38174502bdcb70a8ab378edf', 'neutron:revision_number': '6', 'neutron:security_group_ids': '5e73fa03-0484-41f3-9b8b-de18b4035c5c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.196'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e5f1938d-cee6-4d22-8bab-61d58d3ab44b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f81e3e96760>], logical_port=2c007b4e-e674-4c1f-becb-67fc1b96681b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f81e3e96760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 18:59:46 compute-0 nova_compute[348325]: 2025-12-03 18:59:46.463 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:59:46 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Dec  3 18:59:46 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Consumed 44.732s CPU time.
Dec  3 18:59:46 compute-0 systemd-machined[138702]: Machine qemu-10-instance-0000000a terminated.
Dec  3 18:59:46 compute-0 NetworkManager[49087]: <info>  [1764788386.5533] manager: (tap2c007b4e-e6): new Tun device (/org/freedesktop/NetworkManager/Devices/70)
Dec  3 18:59:46 compute-0 nova_compute[348325]: 2025-12-03 18:59:46.555 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:59:46 compute-0 nova_compute[348325]: 2025-12-03 18:59:46.564 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:59:46 compute-0 nova_compute[348325]: 2025-12-03 18:59:46.578 348329 INFO nova.virt.libvirt.driver [-] [instance: c9937213-8842-4393-90b0-edb363037633] Instance destroyed successfully.#033[00m
Dec  3 18:59:46 compute-0 nova_compute[348325]: 2025-12-03 18:59:46.578 348329 DEBUG nova.objects.instance [None req-2eed2737-c802-487d-8458-b36f3f429504 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Lazy-loading 'resources' on Instance uuid c9937213-8842-4393-90b0-edb363037633 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 18:59:46 compute-0 nova_compute[348325]: 2025-12-03 18:59:46.586 348329 DEBUG nova.virt.driver [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Emitting event <LifecycleEvent: 1764788386.5865462, a4fc45c7-44e4-4b50-a3e0-98de13268f88 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 18:59:46 compute-0 nova_compute[348325]: 2025-12-03 18:59:46.587 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] VM Started (Lifecycle Event)#033[00m
Dec  3 18:59:46 compute-0 nova_compute[348325]: 2025-12-03 18:59:46.590 348329 DEBUG nova.compute.manager [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  3 18:59:46 compute-0 nova_compute[348325]: 2025-12-03 18:59:46.597 348329 DEBUG nova.virt.libvirt.driver [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  3 18:59:46 compute-0 nova_compute[348325]: 2025-12-03 18:59:46.602 348329 INFO nova.virt.libvirt.driver [-] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Instance spawned successfully.#033[00m
Dec  3 18:59:46 compute-0 nova_compute[348325]: 2025-12-03 18:59:46.602 348329 DEBUG nova.virt.libvirt.driver [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  3 18:59:46 compute-0 nova_compute[348325]: 2025-12-03 18:59:46.613 348329 DEBUG nova.virt.libvirt.vif [None req-2eed2737-c802-487d-8458-b36f3f429504 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-03T18:57:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1449486284',display_name='tempest-AttachInterfacesUnderV243Test-server-1449486284',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1449486284',id=10,image_ref='55982930-937b-484e-96ee-69e406a48023',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDguLuuxXUluMpTAPvWre/y/zCYbb1KHFibFt+PZdnBzNC2rwnEZ8uO6YAoyvDMtumWTT1JVJ8FZld71I9MbTqHtLcUWMLdncY7IzScsLtvRuzNIOeN8N3ta9kELYuUrYw==',key_name='tempest-keypair-1738864924',keypairs=<?>,launch_index=0,launched_at=2025-12-03T18:58:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='82b2746c38174502bdcb70a8ab378edf',ramdisk_id='',reservation_id='r-ngyn2v0h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='55982930-937b-484e-96ee-69e406a48023',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesUnderV243Test-699689894',owner_user_name='tempest-AttachInterfacesUnderV243Test-699689894-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-03T18:59:46Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='78734fd37e3f4665b1cb2cbcba2e9f65',uuid=c9937213-8842-4393-90b0-edb363037633,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2c007b4e-e674-4c1f-becb-67fc1b96681b", "address": "fa:16:3e:7c:33:2c", "network": {"id": "d518f3f9-88f0-4dc2-8769-17ebdac41174", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-282035089-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "82b2746c38174502bdcb70a8ab378edf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c007b4e-e6", "ovs_interfaceid": "2c007b4e-e674-4c1f-becb-67fc1b96681b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  3 18:59:46 compute-0 nova_compute[348325]: 2025-12-03 18:59:46.613 348329 DEBUG nova.network.os_vif_util [None req-2eed2737-c802-487d-8458-b36f3f429504 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Converting VIF {"id": "2c007b4e-e674-4c1f-becb-67fc1b96681b", "address": "fa:16:3e:7c:33:2c", "network": {"id": "d518f3f9-88f0-4dc2-8769-17ebdac41174", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-282035089-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "82b2746c38174502bdcb70a8ab378edf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c007b4e-e6", "ovs_interfaceid": "2c007b4e-e674-4c1f-becb-67fc1b96681b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 18:59:46 compute-0 nova_compute[348325]: 2025-12-03 18:59:46.614 348329 DEBUG nova.network.os_vif_util [None req-2eed2737-c802-487d-8458-b36f3f429504 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7c:33:2c,bridge_name='br-int',has_traffic_filtering=True,id=2c007b4e-e674-4c1f-becb-67fc1b96681b,network=Network(d518f3f9-88f0-4dc2-8769-17ebdac41174),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c007b4e-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 18:59:46 compute-0 nova_compute[348325]: 2025-12-03 18:59:46.614 348329 DEBUG os_vif [None req-2eed2737-c802-487d-8458-b36f3f429504 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7c:33:2c,bridge_name='br-int',has_traffic_filtering=True,id=2c007b4e-e674-4c1f-becb-67fc1b96681b,network=Network(d518f3f9-88f0-4dc2-8769-17ebdac41174),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c007b4e-e6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  3 18:59:46 compute-0 nova_compute[348325]: 2025-12-03 18:59:46.615 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:59:46 compute-0 nova_compute[348325]: 2025-12-03 18:59:46.616 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2c007b4e-e6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:59:46 compute-0 nova_compute[348325]: 2025-12-03 18:59:46.618 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:59:46 compute-0 nova_compute[348325]: 2025-12-03 18:59:46.621 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:59:46 compute-0 nova_compute[348325]: 2025-12-03 18:59:46.622 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 18:59:46 compute-0 nova_compute[348325]: 2025-12-03 18:59:46.628 348329 INFO os_vif [None req-2eed2737-c802-487d-8458-b36f3f429504 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7c:33:2c,bridge_name='br-int',has_traffic_filtering=True,id=2c007b4e-e674-4c1f-becb-67fc1b96681b,network=Network(d518f3f9-88f0-4dc2-8769-17ebdac41174),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c007b4e-e6')#033[00m
Dec  3 18:59:46 compute-0 nova_compute[348325]: 2025-12-03 18:59:46.656 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  3 18:59:46 compute-0 nova_compute[348325]: 2025-12-03 18:59:46.664 348329 DEBUG nova.virt.libvirt.driver [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 18:59:46 compute-0 nova_compute[348325]: 2025-12-03 18:59:46.664 348329 DEBUG nova.virt.libvirt.driver [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 18:59:46 compute-0 nova_compute[348325]: 2025-12-03 18:59:46.665 348329 DEBUG nova.virt.libvirt.driver [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 18:59:46 compute-0 nova_compute[348325]: 2025-12-03 18:59:46.665 348329 DEBUG nova.virt.libvirt.driver [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 18:59:46 compute-0 nova_compute[348325]: 2025-12-03 18:59:46.665 348329 DEBUG nova.virt.libvirt.driver [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 18:59:46 compute-0 nova_compute[348325]: 2025-12-03 18:59:46.666 348329 DEBUG nova.virt.libvirt.driver [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 18:59:46 compute-0 nova_compute[348325]: 2025-12-03 18:59:46.680 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  3 18:59:46 compute-0 nova_compute[348325]: 2025-12-03 18:59:46.681 348329 DEBUG nova.virt.driver [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Emitting event <LifecycleEvent: 1764788386.5866566, a4fc45c7-44e4-4b50-a3e0-98de13268f88 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 18:59:46 compute-0 nova_compute[348325]: 2025-12-03 18:59:46.696 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] VM Paused (Lifecycle Event)#033[00m
Dec  3 18:59:46 compute-0 podman[447198]: 2025-12-03 18:59:46.707750702 +0000 UTC m=+0.082634478 container create 581205e24bb439d72cded2497f3fe4a978e6bbb09e13466e64fb3c4a9f15859d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-04e258c0-609e-4010-a306-af20506c3a9d, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 18:59:46 compute-0 nova_compute[348325]: 2025-12-03 18:59:46.725 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 18:59:46 compute-0 nova_compute[348325]: 2025-12-03 18:59:46.736 348329 INFO nova.compute.manager [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Took 15.68 seconds to spawn the instance on the hypervisor.#033[00m
Dec  3 18:59:46 compute-0 nova_compute[348325]: 2025-12-03 18:59:46.739 348329 DEBUG nova.compute.manager [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 18:59:46 compute-0 nova_compute[348325]: 2025-12-03 18:59:46.742 348329 DEBUG nova.virt.driver [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Emitting event <LifecycleEvent: 1764788386.5960033, a4fc45c7-44e4-4b50-a3e0-98de13268f88 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 18:59:46 compute-0 nova_compute[348325]: 2025-12-03 18:59:46.743 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] VM Resumed (Lifecycle Event)#033[00m
Dec  3 18:59:46 compute-0 systemd[1]: Started libpod-conmon-581205e24bb439d72cded2497f3fe4a978e6bbb09e13466e64fb3c4a9f15859d.scope.
Dec  3 18:59:46 compute-0 podman[447198]: 2025-12-03 18:59:46.671857975 +0000 UTC m=+0.046741771 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  3 18:59:46 compute-0 nova_compute[348325]: 2025-12-03 18:59:46.782 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 18:59:46 compute-0 nova_compute[348325]: 2025-12-03 18:59:46.793 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  3 18:59:46 compute-0 systemd[1]: Started libcrun container.
Dec  3 18:59:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b04f4a59d30f163fc6f6daec1ca37a27c6f6fe3f3a8752326fdb2dfce0987373/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  3 18:59:46 compute-0 nova_compute[348325]: 2025-12-03 18:59:46.822 348329 INFO nova.compute.manager [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Took 17.51 seconds to build instance.#033[00m
Dec  3 18:59:46 compute-0 podman[447198]: 2025-12-03 18:59:46.827477855 +0000 UTC m=+0.202361651 container init 581205e24bb439d72cded2497f3fe4a978e6bbb09e13466e64fb3c4a9f15859d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-04e258c0-609e-4010-a306-af20506c3a9d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Dec  3 18:59:46 compute-0 podman[447198]: 2025-12-03 18:59:46.836711371 +0000 UTC m=+0.211595147 container start 581205e24bb439d72cded2497f3fe4a978e6bbb09e13466e64fb3c4a9f15859d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-04e258c0-609e-4010-a306-af20506c3a9d, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec  3 18:59:46 compute-0 nova_compute[348325]: 2025-12-03 18:59:46.844 348329 DEBUG oslo_concurrency.lockutils [None req-4a1d15d6-2483-425c-9307-f8ac5cbd5d6d 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Lock "a4fc45c7-44e4-4b50-a3e0-98de13268f88" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:59:46 compute-0 neutron-haproxy-ovnmeta-04e258c0-609e-4010-a306-af20506c3a9d[447231]: [NOTICE]   (447235) : New worker (447237) forked
Dec  3 18:59:46 compute-0 neutron-haproxy-ovnmeta-04e258c0-609e-4010-a306-af20506c3a9d[447231]: [NOTICE]   (447235) : Loading success.
Dec  3 18:59:46 compute-0 ovn_controller[89305]: 2025-12-03T18:59:46Z|00138|binding|INFO|Releasing lport f82febe8-1e88-4e67-9f7a-5af5921c9877 from this chassis (sb_readonly=0)
Dec  3 18:59:46 compute-0 ovn_controller[89305]: 2025-12-03T18:59:46Z|00139|binding|INFO|Releasing lport b52268a2-5f2a-45ba-8c23-e32c70c8253f from this chassis (sb_readonly=0)
Dec  3 18:59:46 compute-0 ovn_controller[89305]: 2025-12-03T18:59:46Z|00140|binding|INFO|Releasing lport 230273f0-8290-4d7b-8f3b-1217ad9086fb from this chassis (sb_readonly=0)
Dec  3 18:59:46 compute-0 ovn_controller[89305]: 2025-12-03T18:59:46Z|00141|binding|INFO|Releasing lport a490a544-649c-430c-bdd4-7e78ebd7f7b9 from this chassis (sb_readonly=0)
Dec  3 18:59:46 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:59:46.923 286999 INFO neutron.agent.ovn.metadata.agent [-] Port 2c007b4e-e674-4c1f-becb-67fc1b96681b in datapath d518f3f9-88f0-4dc2-8769-17ebdac41174 unbound from our chassis#033[00m
Dec  3 18:59:46 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:59:46.932 286999 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d518f3f9-88f0-4dc2-8769-17ebdac41174, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  3 18:59:46 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:59:46.934 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[e59a7ae3-b52c-4964-959d-1e45ce0ad70f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:59:46 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:59:46.936 286999 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d518f3f9-88f0-4dc2-8769-17ebdac41174 namespace which is not needed anymore#033[00m
Dec  3 18:59:46 compute-0 nova_compute[348325]: 2025-12-03 18:59:46.991 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:59:47 compute-0 neutron-haproxy-ovnmeta-d518f3f9-88f0-4dc2-8769-17ebdac41174[444474]: [NOTICE]   (444478) : haproxy version is 2.8.14-c23fe91
Dec  3 18:59:47 compute-0 neutron-haproxy-ovnmeta-d518f3f9-88f0-4dc2-8769-17ebdac41174[444474]: [NOTICE]   (444478) : path to executable is /usr/sbin/haproxy
Dec  3 18:59:47 compute-0 neutron-haproxy-ovnmeta-d518f3f9-88f0-4dc2-8769-17ebdac41174[444474]: [ALERT]    (444478) : Current worker (444480) exited with code 143 (Terminated)
Dec  3 18:59:47 compute-0 neutron-haproxy-ovnmeta-d518f3f9-88f0-4dc2-8769-17ebdac41174[444474]: [WARNING]  (444478) : All workers exited. Exiting... (0)
Dec  3 18:59:47 compute-0 systemd[1]: libpod-617bd8dde6600c8ba146a5636da73688797fbaf4cca7ec8826ea266ab8862f5d.scope: Deactivated successfully.
Dec  3 18:59:47 compute-0 conmon[444474]: conmon 617bd8dde6600c8ba146 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-617bd8dde6600c8ba146a5636da73688797fbaf4cca7ec8826ea266ab8862f5d.scope/container/memory.events
Dec  3 18:59:47 compute-0 podman[447263]: 2025-12-03 18:59:47.131241151 +0000 UTC m=+0.063109822 container died 617bd8dde6600c8ba146a5636da73688797fbaf4cca7ec8826ea266ab8862f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d518f3f9-88f0-4dc2-8769-17ebdac41174, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  3 18:59:47 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-617bd8dde6600c8ba146a5636da73688797fbaf4cca7ec8826ea266ab8862f5d-userdata-shm.mount: Deactivated successfully.
Dec  3 18:59:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-dacd48a497e518160f856eaab4ae0dead65ae64958a7718d3d16b4d47879d26a-merged.mount: Deactivated successfully.
Dec  3 18:59:47 compute-0 podman[447263]: 2025-12-03 18:59:47.206022028 +0000 UTC m=+0.137890689 container cleanup 617bd8dde6600c8ba146a5636da73688797fbaf4cca7ec8826ea266ab8862f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d518f3f9-88f0-4dc2-8769-17ebdac41174, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  3 18:59:47 compute-0 systemd[1]: libpod-conmon-617bd8dde6600c8ba146a5636da73688797fbaf4cca7ec8826ea266ab8862f5d.scope: Deactivated successfully.
Dec  3 18:59:47 compute-0 nova_compute[348325]: 2025-12-03 18:59:47.285 348329 INFO nova.virt.libvirt.driver [None req-2eed2737-c802-487d-8458-b36f3f429504 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] Deleting instance files /var/lib/nova/instances/c9937213-8842-4393-90b0-edb363037633_del#033[00m
Dec  3 18:59:47 compute-0 nova_compute[348325]: 2025-12-03 18:59:47.286 348329 INFO nova.virt.libvirt.driver [None req-2eed2737-c802-487d-8458-b36f3f429504 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] Deletion of /var/lib/nova/instances/c9937213-8842-4393-90b0-edb363037633_del complete#033[00m
Dec  3 18:59:47 compute-0 podman[447293]: 2025-12-03 18:59:47.303345093 +0000 UTC m=+0.067017427 container remove 617bd8dde6600c8ba146a5636da73688797fbaf4cca7ec8826ea266ab8862f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d518f3f9-88f0-4dc2-8769-17ebdac41174, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Dec  3 18:59:47 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:59:47.315 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[7b365057-01b6-4682-93fd-e32889e9b042]: (4, ('Wed Dec  3 06:59:47 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d518f3f9-88f0-4dc2-8769-17ebdac41174 (617bd8dde6600c8ba146a5636da73688797fbaf4cca7ec8826ea266ab8862f5d)\n617bd8dde6600c8ba146a5636da73688797fbaf4cca7ec8826ea266ab8862f5d\nWed Dec  3 06:59:47 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d518f3f9-88f0-4dc2-8769-17ebdac41174 (617bd8dde6600c8ba146a5636da73688797fbaf4cca7ec8826ea266ab8862f5d)\n617bd8dde6600c8ba146a5636da73688797fbaf4cca7ec8826ea266ab8862f5d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:59:47 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:59:47.318 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[47e34e87-d560-4696-8278-0a82ea5da4da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:59:47 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:59:47.320 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd518f3f9-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:59:47 compute-0 nova_compute[348325]: 2025-12-03 18:59:47.323 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:59:47 compute-0 kernel: tapd518f3f9-80: left promiscuous mode
Dec  3 18:59:47 compute-0 nova_compute[348325]: 2025-12-03 18:59:47.348 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:59:47 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:59:47.354 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[413d2b02-c083-438e-8f86-731b319f5b9f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:59:47 compute-0 nova_compute[348325]: 2025-12-03 18:59:47.361 348329 INFO nova.compute.manager [None req-2eed2737-c802-487d-8458-b36f3f429504 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] Took 1.03 seconds to destroy the instance on the hypervisor.#033[00m
Dec  3 18:59:47 compute-0 nova_compute[348325]: 2025-12-03 18:59:47.362 348329 DEBUG oslo.service.loopingcall [None req-2eed2737-c802-487d-8458-b36f3f429504 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  3 18:59:47 compute-0 nova_compute[348325]: 2025-12-03 18:59:47.363 348329 DEBUG nova.compute.manager [-] [instance: c9937213-8842-4393-90b0-edb363037633] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  3 18:59:47 compute-0 nova_compute[348325]: 2025-12-03 18:59:47.363 348329 DEBUG nova.network.neutron [-] [instance: c9937213-8842-4393-90b0-edb363037633] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  3 18:59:47 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:59:47.372 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[d972d1c8-903b-4c82-adaf-c181980b79c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:59:47 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:59:47.374 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[c8671d16-53db-40b9-9d79-69d670faeb23]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:59:47 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:59:47.400 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[e6587574-fe42-491f-b65f-6956ebc364f4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 657087, 'reachable_time': 15483, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 447308, 'error': None, 'target': 'ovnmeta-d518f3f9-88f0-4dc2-8769-17ebdac41174', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:59:47 compute-0 systemd[1]: run-netns-ovnmeta\x2dd518f3f9\x2d88f0\x2d4dc2\x2d8769\x2d17ebdac41174.mount: Deactivated successfully.
Dec  3 18:59:47 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:59:47.404 287110 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d518f3f9-88f0-4dc2-8769-17ebdac41174 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  3 18:59:47 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:59:47.405 287110 DEBUG oslo.privsep.daemon [-] privsep: reply[e38ab93b-1ce1-417b-829b-55a581d0eea9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:59:47 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1827: 321 pgs: 321 active+clean; 364 MiB data, 452 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.2 MiB/s wr, 22 op/s
Dec  3 18:59:47 compute-0 nova_compute[348325]: 2025-12-03 18:59:47.829 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:59:48 compute-0 nova_compute[348325]: 2025-12-03 18:59:48.155 348329 DEBUG nova.compute.manager [req-f92f6b5d-9d3e-4900-a4a1-93c22106a285 req-36ca4e3c-370f-4ab9-9dc7-58b454e38747 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Received event network-vif-plugged-cf729fa8-9549-4bf2-9858-7e8de773e1bc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 18:59:48 compute-0 nova_compute[348325]: 2025-12-03 18:59:48.155 348329 DEBUG oslo_concurrency.lockutils [req-f92f6b5d-9d3e-4900-a4a1-93c22106a285 req-36ca4e3c-370f-4ab9-9dc7-58b454e38747 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "a4fc45c7-44e4-4b50-a3e0-98de13268f88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:59:48 compute-0 nova_compute[348325]: 2025-12-03 18:59:48.155 348329 DEBUG oslo_concurrency.lockutils [req-f92f6b5d-9d3e-4900-a4a1-93c22106a285 req-36ca4e3c-370f-4ab9-9dc7-58b454e38747 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "a4fc45c7-44e4-4b50-a3e0-98de13268f88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:59:48 compute-0 nova_compute[348325]: 2025-12-03 18:59:48.156 348329 DEBUG oslo_concurrency.lockutils [req-f92f6b5d-9d3e-4900-a4a1-93c22106a285 req-36ca4e3c-370f-4ab9-9dc7-58b454e38747 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "a4fc45c7-44e4-4b50-a3e0-98de13268f88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:59:48 compute-0 nova_compute[348325]: 2025-12-03 18:59:48.156 348329 DEBUG nova.compute.manager [req-f92f6b5d-9d3e-4900-a4a1-93c22106a285 req-36ca4e3c-370f-4ab9-9dc7-58b454e38747 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] No waiting events found dispatching network-vif-plugged-cf729fa8-9549-4bf2-9858-7e8de773e1bc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  3 18:59:48 compute-0 nova_compute[348325]: 2025-12-03 18:59:48.156 348329 WARNING nova.compute.manager [req-f92f6b5d-9d3e-4900-a4a1-93c22106a285 req-36ca4e3c-370f-4ab9-9dc7-58b454e38747 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Received unexpected event network-vif-plugged-cf729fa8-9549-4bf2-9858-7e8de773e1bc for instance with vm_state active and task_state None.#033[00m
Dec  3 18:59:48 compute-0 nova_compute[348325]: 2025-12-03 18:59:48.169 348329 DEBUG nova.network.neutron [req-010e6abc-76a2-40cf-8f89-591834a06b38 req-7057bd03-7d44-4932-b162-faed20e2d415 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] Updated VIF entry in instance network info cache for port 2c007b4e-e674-4c1f-becb-67fc1b96681b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  3 18:59:48 compute-0 nova_compute[348325]: 2025-12-03 18:59:48.169 348329 DEBUG nova.network.neutron [req-010e6abc-76a2-40cf-8f89-591834a06b38 req-7057bd03-7d44-4932-b162-faed20e2d415 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] Updating instance_info_cache with network_info: [{"id": "2c007b4e-e674-4c1f-becb-67fc1b96681b", "address": "fa:16:3e:7c:33:2c", "network": {"id": "d518f3f9-88f0-4dc2-8769-17ebdac41174", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-282035089-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "82b2746c38174502bdcb70a8ab378edf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c007b4e-e6", "ovs_interfaceid": "2c007b4e-e674-4c1f-becb-67fc1b96681b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 18:59:48 compute-0 nova_compute[348325]: 2025-12-03 18:59:48.187 348329 DEBUG oslo_concurrency.lockutils [req-010e6abc-76a2-40cf-8f89-591834a06b38 req-7057bd03-7d44-4932-b162-faed20e2d415 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Releasing lock "refresh_cache-c9937213-8842-4393-90b0-edb363037633" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 18:59:48 compute-0 nova_compute[348325]: 2025-12-03 18:59:48.427 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:59:48 compute-0 podman[447310]: 2025-12-03 18:59:48.916307114 +0000 UTC m=+0.080086226 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 18:59:48 compute-0 podman[447309]: 2025-12-03 18:59:48.920767753 +0000 UTC m=+0.084431772 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd)
Dec  3 18:59:48 compute-0 podman[447311]: 2025-12-03 18:59:48.931429913 +0000 UTC m=+0.087405674 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-type=git, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, io.buildah.version=1.33.7, config_id=edpm, distribution-scope=public, container_name=openstack_network_exporter, name=ubi9-minimal, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, io.openshift.tags=minimal rhel9)
Dec  3 18:59:49 compute-0 nova_compute[348325]: 2025-12-03 18:59:49.048 348329 DEBUG nova.network.neutron [-] [instance: c9937213-8842-4393-90b0-edb363037633] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 18:59:49 compute-0 nova_compute[348325]: 2025-12-03 18:59:49.070 348329 INFO nova.compute.manager [-] [instance: c9937213-8842-4393-90b0-edb363037633] Took 1.71 seconds to deallocate network for instance.#033[00m
Dec  3 18:59:49 compute-0 nova_compute[348325]: 2025-12-03 18:59:49.117 348329 DEBUG oslo_concurrency.lockutils [None req-2eed2737-c802-487d-8458-b36f3f429504 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:59:49 compute-0 nova_compute[348325]: 2025-12-03 18:59:49.118 348329 DEBUG oslo_concurrency.lockutils [None req-2eed2737-c802-487d-8458-b36f3f429504 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:59:49 compute-0 nova_compute[348325]: 2025-12-03 18:59:49.158 348329 DEBUG nova.compute.manager [req-fd28d32b-4ca5-4178-8aca-9ab320d19b17 req-785ac927-4646-4c00-a067-4bbbb15e70ab 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] Received event network-vif-deleted-2c007b4e-e674-4c1f-becb-67fc1b96681b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 18:59:49 compute-0 nova_compute[348325]: 2025-12-03 18:59:49.252 348329 DEBUG oslo_concurrency.processutils [None req-2eed2737-c802-487d-8458-b36f3f429504 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 18:59:49 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 18:59:49 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 18:59:49 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2426366029' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 18:59:49 compute-0 nova_compute[348325]: 2025-12-03 18:59:49.743 348329 DEBUG oslo_concurrency.processutils [None req-2eed2737-c802-487d-8458-b36f3f429504 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 18:59:49 compute-0 nova_compute[348325]: 2025-12-03 18:59:49.751 348329 DEBUG nova.compute.provider_tree [None req-2eed2737-c802-487d-8458-b36f3f429504 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 18:59:49 compute-0 nova_compute[348325]: 2025-12-03 18:59:49.770 348329 DEBUG nova.scheduler.client.report [None req-2eed2737-c802-487d-8458-b36f3f429504 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 18:59:49 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1828: 321 pgs: 321 active+clean; 334 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 634 KiB/s wr, 34 op/s
Dec  3 18:59:49 compute-0 nova_compute[348325]: 2025-12-03 18:59:49.799 348329 DEBUG oslo_concurrency.lockutils [None req-2eed2737-c802-487d-8458-b36f3f429504 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:59:49 compute-0 nova_compute[348325]: 2025-12-03 18:59:49.847 348329 INFO nova.scheduler.client.report [None req-2eed2737-c802-487d-8458-b36f3f429504 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Deleted allocations for instance c9937213-8842-4393-90b0-edb363037633#033[00m
Dec  3 18:59:49 compute-0 nova_compute[348325]: 2025-12-03 18:59:49.950 348329 DEBUG oslo_concurrency.lockutils [None req-2eed2737-c802-487d-8458-b36f3f429504 78734fd37e3f4665b1cb2cbcba2e9f65 82b2746c38174502bdcb70a8ab378edf - - default default] Lock "c9937213-8842-4393-90b0-edb363037633" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.626s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:59:51 compute-0 nova_compute[348325]: 2025-12-03 18:59:51.266 348329 DEBUG nova.compute.manager [req-7e469370-8199-424c-8ea4-f3f3dba3f04a req-35810c8c-f1bb-4661-b1ab-2539cb1f713f 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] Received event network-vif-plugged-2c007b4e-e674-4c1f-becb-67fc1b96681b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 18:59:51 compute-0 nova_compute[348325]: 2025-12-03 18:59:51.266 348329 DEBUG oslo_concurrency.lockutils [req-7e469370-8199-424c-8ea4-f3f3dba3f04a req-35810c8c-f1bb-4661-b1ab-2539cb1f713f 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "c9937213-8842-4393-90b0-edb363037633-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:59:51 compute-0 nova_compute[348325]: 2025-12-03 18:59:51.266 348329 DEBUG oslo_concurrency.lockutils [req-7e469370-8199-424c-8ea4-f3f3dba3f04a req-35810c8c-f1bb-4661-b1ab-2539cb1f713f 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "c9937213-8842-4393-90b0-edb363037633-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:59:51 compute-0 nova_compute[348325]: 2025-12-03 18:59:51.267 348329 DEBUG oslo_concurrency.lockutils [req-7e469370-8199-424c-8ea4-f3f3dba3f04a req-35810c8c-f1bb-4661-b1ab-2539cb1f713f 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "c9937213-8842-4393-90b0-edb363037633-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:59:51 compute-0 nova_compute[348325]: 2025-12-03 18:59:51.267 348329 DEBUG nova.compute.manager [req-7e469370-8199-424c-8ea4-f3f3dba3f04a req-35810c8c-f1bb-4661-b1ab-2539cb1f713f 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] No waiting events found dispatching network-vif-plugged-2c007b4e-e674-4c1f-becb-67fc1b96681b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  3 18:59:51 compute-0 nova_compute[348325]: 2025-12-03 18:59:51.267 348329 WARNING nova.compute.manager [req-7e469370-8199-424c-8ea4-f3f3dba3f04a req-35810c8c-f1bb-4661-b1ab-2539cb1f713f 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: c9937213-8842-4393-90b0-edb363037633] Received unexpected event network-vif-plugged-2c007b4e-e674-4c1f-becb-67fc1b96681b for instance with vm_state deleted and task_state None.#033[00m
Dec  3 18:59:51 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:59:51.528 287105 DEBUG eventlet.wsgi.server [-] (287105) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Dec  3 18:59:51 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:59:51.530 287105 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /latest/meta-data/public-ipv4 HTTP/1.0#015
Dec  3 18:59:51 compute-0 ovn_metadata_agent[286994]: Accept: */*#015
Dec  3 18:59:51 compute-0 ovn_metadata_agent[286994]: Connection: close#015
Dec  3 18:59:51 compute-0 ovn_metadata_agent[286994]: Content-Type: text/plain#015
Dec  3 18:59:51 compute-0 ovn_metadata_agent[286994]: Host: 169.254.169.254#015
Dec  3 18:59:51 compute-0 ovn_metadata_agent[286994]: User-Agent: curl/7.84.0#015
Dec  3 18:59:51 compute-0 ovn_metadata_agent[286994]: X-Forwarded-For: 10.100.0.3#015
Dec  3 18:59:51 compute-0 ovn_metadata_agent[286994]: X-Ovn-Network-Id: dbd0831a-c570-4257-bca6-ab48802d60d7 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Dec  3 18:59:51 compute-0 nova_compute[348325]: 2025-12-03 18:59:51.619 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:59:51 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1829: 321 pgs: 321 active+clean; 285 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 28 KiB/s wr, 75 op/s
Dec  3 18:59:52 compute-0 podman[447393]: 2025-12-03 18:59:52.941431058 +0000 UTC m=+0.108594513 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, container_name=kepler, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9)
Dec  3 18:59:52 compute-0 podman[447394]: 2025-12-03 18:59:52.946063881 +0000 UTC m=+0.095888862 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  3 18:59:52 compute-0 podman[447395]: 2025-12-03 18:59:52.974825693 +0000 UTC m=+0.114930756 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  3 18:59:53 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:59:53.272 287105 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Dec  3 18:59:53 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:59:53.272 287105 INFO eventlet.wsgi.server [-] 10.100.0.3,<local> "GET /latest/meta-data/public-ipv4 HTTP/1.1" status: 200  len: 151 time: 1.7425013#033[00m
Dec  3 18:59:53 compute-0 haproxy-metadata-proxy-dbd0831a-c570-4257-bca6-ab48802d60d7[445177]: 10.100.0.3:56740 [03/Dec/2025:18:59:51.526] listener listener/metadata 0/0/0/1746/1746 200 135 - - ---- 1/1/0/0/0 0/0 "GET /latest/meta-data/public-ipv4 HTTP/1.1"
Dec  3 18:59:53 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:59:53.412 287105 DEBUG eventlet.wsgi.server [-] (287105) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Dec  3 18:59:53 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:59:53.413 287105 DEBUG neutron.agent.ovn.metadata.server [-] Request: POST /openstack/2013-10-17/password HTTP/1.0#015
Dec  3 18:59:53 compute-0 ovn_metadata_agent[286994]: Accept: */*#015
Dec  3 18:59:53 compute-0 ovn_metadata_agent[286994]: Connection: close#015
Dec  3 18:59:53 compute-0 ovn_metadata_agent[286994]: Content-Length: 100#015
Dec  3 18:59:53 compute-0 ovn_metadata_agent[286994]: Content-Type: application/x-www-form-urlencoded#015
Dec  3 18:59:53 compute-0 ovn_metadata_agent[286994]: Host: 169.254.169.254#015
Dec  3 18:59:53 compute-0 ovn_metadata_agent[286994]: User-Agent: curl/7.84.0#015
Dec  3 18:59:53 compute-0 ovn_metadata_agent[286994]: X-Forwarded-For: 10.100.0.3#015
Dec  3 18:59:53 compute-0 ovn_metadata_agent[286994]: X-Ovn-Network-Id: dbd0831a-c570-4257-bca6-ab48802d60d7#015
Dec  3 18:59:53 compute-0 ovn_metadata_agent[286994]: #015
Dec  3 18:59:53 compute-0 ovn_metadata_agent[286994]: testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Dec  3 18:59:53 compute-0 nova_compute[348325]: 2025-12-03 18:59:53.430 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:59:53 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1830: 321 pgs: 321 active+clean; 285 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 29 KiB/s wr, 98 op/s
Dec  3 18:59:53 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:59:53.825 287105 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Dec  3 18:59:53 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:59:53.826 287105 INFO eventlet.wsgi.server [-] 10.100.0.3,<local> "POST /openstack/2013-10-17/password HTTP/1.1" status: 200  len: 134 time: 0.4129333#033[00m
Dec  3 18:59:53 compute-0 haproxy-metadata-proxy-dbd0831a-c570-4257-bca6-ab48802d60d7[445177]: 10.100.0.3:56750 [03/Dec/2025:18:59:53.411] listener listener/metadata 0/0/0/415/415 200 118 - - ---- 1/1/0/0/0 0/0 "POST /openstack/2013-10-17/password HTTP/1.1"
Dec  3 18:59:54 compute-0 ovn_controller[89305]: 2025-12-03T18:59:54Z|00142|binding|INFO|Releasing lport f82febe8-1e88-4e67-9f7a-5af5921c9877 from this chassis (sb_readonly=0)
Dec  3 18:59:54 compute-0 ovn_controller[89305]: 2025-12-03T18:59:54Z|00143|binding|INFO|Releasing lport b52268a2-5f2a-45ba-8c23-e32c70c8253f from this chassis (sb_readonly=0)
Dec  3 18:59:54 compute-0 ovn_controller[89305]: 2025-12-03T18:59:54Z|00144|binding|INFO|Releasing lport a490a544-649c-430c-bdd4-7e78ebd7f7b9 from this chassis (sb_readonly=0)
Dec  3 18:59:54 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 18:59:54 compute-0 nova_compute[348325]: 2025-12-03 18:59:54.569 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:59:55 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1831: 321 pgs: 321 active+clean; 285 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 103 op/s
Dec  3 18:59:56 compute-0 nova_compute[348325]: 2025-12-03 18:59:56.502 348329 DEBUG oslo_concurrency.lockutils [None req-fa72fd24-3afc-47b9-8942-8b31352d378b d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] Acquiring lock "4e045c2f-f0fd-4171-b724-3e38bd7ec4eb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:59:56 compute-0 nova_compute[348325]: 2025-12-03 18:59:56.503 348329 DEBUG oslo_concurrency.lockutils [None req-fa72fd24-3afc-47b9-8942-8b31352d378b d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] Lock "4e045c2f-f0fd-4171-b724-3e38bd7ec4eb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:59:56 compute-0 nova_compute[348325]: 2025-12-03 18:59:56.503 348329 DEBUG oslo_concurrency.lockutils [None req-fa72fd24-3afc-47b9-8942-8b31352d378b d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] Acquiring lock "4e045c2f-f0fd-4171-b724-3e38bd7ec4eb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:59:56 compute-0 nova_compute[348325]: 2025-12-03 18:59:56.504 348329 DEBUG oslo_concurrency.lockutils [None req-fa72fd24-3afc-47b9-8942-8b31352d378b d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] Lock "4e045c2f-f0fd-4171-b724-3e38bd7ec4eb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:59:56 compute-0 nova_compute[348325]: 2025-12-03 18:59:56.505 348329 DEBUG oslo_concurrency.lockutils [None req-fa72fd24-3afc-47b9-8942-8b31352d378b d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] Lock "4e045c2f-f0fd-4171-b724-3e38bd7ec4eb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:59:56 compute-0 nova_compute[348325]: 2025-12-03 18:59:56.507 348329 INFO nova.compute.manager [None req-fa72fd24-3afc-47b9-8942-8b31352d378b d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] Terminating instance#033[00m
Dec  3 18:59:56 compute-0 nova_compute[348325]: 2025-12-03 18:59:56.509 348329 DEBUG nova.compute.manager [None req-fa72fd24-3afc-47b9-8942-8b31352d378b d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  3 18:59:56 compute-0 kernel: tap53ab68f2-68 (unregistering): left promiscuous mode
Dec  3 18:59:56 compute-0 NetworkManager[49087]: <info>  [1764788396.6123] device (tap53ab68f2-68): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  3 18:59:56 compute-0 nova_compute[348325]: 2025-12-03 18:59:56.622 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:59:56 compute-0 ovn_controller[89305]: 2025-12-03T18:59:56Z|00145|binding|INFO|Releasing lport 53ab68f2-6888-4d96-9480-47e55e38f422 from this chassis (sb_readonly=0)
Dec  3 18:59:56 compute-0 nova_compute[348325]: 2025-12-03 18:59:56.626 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:59:56 compute-0 ovn_controller[89305]: 2025-12-03T18:59:56Z|00146|binding|INFO|Setting lport 53ab68f2-6888-4d96-9480-47e55e38f422 down in Southbound
Dec  3 18:59:56 compute-0 ovn_controller[89305]: 2025-12-03T18:59:56Z|00147|binding|INFO|Removing iface tap53ab68f2-68 ovn-installed in OVS
Dec  3 18:59:56 compute-0 nova_compute[348325]: 2025-12-03 18:59:56.638 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:59:56 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:59:56.642 286999 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a6:0c:ea 10.100.0.3'], port_security=['fa:16:3e:a6:0c:ea 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '4e045c2f-f0fd-4171-b724-3e38bd7ec4eb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dbd0831a-c570-4257-bca6-ab48802d60d7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0e342f56e114484b986071d1dfb8656a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0660123c-0df0-4256-8ef5-c6d73369a9fb c9884ab7-1707-4498-81ec-fcd45f0f391c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.181'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e3c144a3-104d-4043-bf40-c75e02dd90b0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f81e3e96760>], logical_port=53ab68f2-6888-4d96-9480-47e55e38f422) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f81e3e96760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 18:59:56 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:59:56.644 286999 INFO neutron.agent.ovn.metadata.agent [-] Port 53ab68f2-6888-4d96-9480-47e55e38f422 in datapath dbd0831a-c570-4257-bca6-ab48802d60d7 unbound from our chassis#033[00m
Dec  3 18:59:56 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:59:56.648 286999 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network dbd0831a-c570-4257-bca6-ab48802d60d7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  3 18:59:56 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:59:56.653 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[43fc6731-2328-4ced-8d43-f71110390946]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:59:56 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:59:56.655 286999 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-dbd0831a-c570-4257-bca6-ab48802d60d7 namespace which is not needed anymore#033[00m
Dec  3 18:59:56 compute-0 nova_compute[348325]: 2025-12-03 18:59:56.670 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:59:56 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Dec  3 18:59:56 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Consumed 43.385s CPU time.
Dec  3 18:59:56 compute-0 systemd-machined[138702]: Machine qemu-11-instance-0000000b terminated.
Dec  3 18:59:56 compute-0 nova_compute[348325]: 2025-12-03 18:59:56.755 348329 INFO nova.virt.libvirt.driver [-] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] Instance destroyed successfully.#033[00m
Dec  3 18:59:56 compute-0 nova_compute[348325]: 2025-12-03 18:59:56.756 348329 DEBUG nova.objects.instance [None req-fa72fd24-3afc-47b9-8942-8b31352d378b d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] Lazy-loading 'resources' on Instance uuid 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 18:59:56 compute-0 nova_compute[348325]: 2025-12-03 18:59:56.780 348329 DEBUG nova.virt.libvirt.vif [None req-fa72fd24-3afc-47b9-8942-8b31352d378b d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-03T18:58:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-2083585917',display_name='tempest-TestServerBasicOps-server-2083585917',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-2083585917',id=11,image_ref='55982930-937b-484e-96ee-69e406a48023',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFn6diZTg4q+Q2Qfd2HIztiCzt/4kYZPM7VsCMM6f37GRrqJAsGMHRV/wUzcEB54jMt3wOBRWvDnE75JUGheP+1nPMbymNECzCUBvV7xqhypdn3A4RREInS7UiMpzgGxxA==',key_name='tempest-TestServerBasicOps-2099261388',keypairs=<?>,launch_index=0,launched_at=2025-12-03T18:58:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0e342f56e114484b986071d1dfb8656a',ramdisk_id='',reservation_id='r-ty0uxce7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='55982930-937b-484e-96ee-69e406a48023',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerBasicOps-1171439068',owner_user_name='tempest-TestServerBasicOps-1171439068-project-member',password_0='testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-03T18:59:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d3387836400c4ffa96fc7c863361df79',uuid=4e045c2f-f0fd-4171-b724-3e38bd7ec4eb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "53ab68f2-6888-4d96-9480-47e55e38f422", "address": "fa:16:3e:a6:0c:ea", "network": {"id": "dbd0831a-c570-4257-bca6-ab48802d60d7", "bridge": "br-int", "label": "tempest-TestServerBasicOps-940353231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0e342f56e114484b986071d1dfb8656a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53ab68f2-68", "ovs_interfaceid": "53ab68f2-6888-4d96-9480-47e55e38f422", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  3 18:59:56 compute-0 nova_compute[348325]: 2025-12-03 18:59:56.782 348329 DEBUG nova.network.os_vif_util [None req-fa72fd24-3afc-47b9-8942-8b31352d378b d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] Converting VIF {"id": "53ab68f2-6888-4d96-9480-47e55e38f422", "address": "fa:16:3e:a6:0c:ea", "network": {"id": "dbd0831a-c570-4257-bca6-ab48802d60d7", "bridge": "br-int", "label": "tempest-TestServerBasicOps-940353231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0e342f56e114484b986071d1dfb8656a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53ab68f2-68", "ovs_interfaceid": "53ab68f2-6888-4d96-9480-47e55e38f422", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 18:59:56 compute-0 nova_compute[348325]: 2025-12-03 18:59:56.783 348329 DEBUG nova.network.os_vif_util [None req-fa72fd24-3afc-47b9-8942-8b31352d378b d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a6:0c:ea,bridge_name='br-int',has_traffic_filtering=True,id=53ab68f2-6888-4d96-9480-47e55e38f422,network=Network(dbd0831a-c570-4257-bca6-ab48802d60d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53ab68f2-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 18:59:56 compute-0 nova_compute[348325]: 2025-12-03 18:59:56.783 348329 DEBUG os_vif [None req-fa72fd24-3afc-47b9-8942-8b31352d378b d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a6:0c:ea,bridge_name='br-int',has_traffic_filtering=True,id=53ab68f2-6888-4d96-9480-47e55e38f422,network=Network(dbd0831a-c570-4257-bca6-ab48802d60d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53ab68f2-68') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  3 18:59:56 compute-0 nova_compute[348325]: 2025-12-03 18:59:56.785 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:59:56 compute-0 nova_compute[348325]: 2025-12-03 18:59:56.785 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap53ab68f2-68, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:59:56 compute-0 nova_compute[348325]: 2025-12-03 18:59:56.788 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:59:56 compute-0 nova_compute[348325]: 2025-12-03 18:59:56.792 348329 INFO os_vif [None req-fa72fd24-3afc-47b9-8942-8b31352d378b d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a6:0c:ea,bridge_name='br-int',has_traffic_filtering=True,id=53ab68f2-6888-4d96-9480-47e55e38f422,network=Network(dbd0831a-c570-4257-bca6-ab48802d60d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53ab68f2-68')#033[00m
Dec  3 18:59:56 compute-0 neutron-haproxy-ovnmeta-dbd0831a-c570-4257-bca6-ab48802d60d7[445169]: [NOTICE]   (445174) : haproxy version is 2.8.14-c23fe91
Dec  3 18:59:56 compute-0 neutron-haproxy-ovnmeta-dbd0831a-c570-4257-bca6-ab48802d60d7[445169]: [NOTICE]   (445174) : path to executable is /usr/sbin/haproxy
Dec  3 18:59:56 compute-0 neutron-haproxy-ovnmeta-dbd0831a-c570-4257-bca6-ab48802d60d7[445169]: [ALERT]    (445174) : Current worker (445177) exited with code 143 (Terminated)
Dec  3 18:59:56 compute-0 neutron-haproxy-ovnmeta-dbd0831a-c570-4257-bca6-ab48802d60d7[445169]: [WARNING]  (445174) : All workers exited. Exiting... (0)
Dec  3 18:59:56 compute-0 systemd[1]: libpod-1b683f986b7ff4ade4a42fc72eeab559390b8804744a62f9426d55948a8c56b1.scope: Deactivated successfully.
Dec  3 18:59:56 compute-0 podman[447484]: 2025-12-03 18:59:56.889374618 +0000 UTC m=+0.083462019 container died 1b683f986b7ff4ade4a42fc72eeab559390b8804744a62f9426d55948a8c56b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dbd0831a-c570-4257-bca6-ab48802d60d7, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 18:59:56 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1b683f986b7ff4ade4a42fc72eeab559390b8804744a62f9426d55948a8c56b1-userdata-shm.mount: Deactivated successfully.
Dec  3 18:59:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d3e9031c37ad1277ab1ead266fc1de47dd8cd034942aacec420b2dbb5016d77-merged.mount: Deactivated successfully.
Dec  3 18:59:56 compute-0 podman[447484]: 2025-12-03 18:59:56.944980795 +0000 UTC m=+0.139068196 container cleanup 1b683f986b7ff4ade4a42fc72eeab559390b8804744a62f9426d55948a8c56b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dbd0831a-c570-4257-bca6-ab48802d60d7, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec  3 18:59:56 compute-0 systemd[1]: libpod-conmon-1b683f986b7ff4ade4a42fc72eeab559390b8804744a62f9426d55948a8c56b1.scope: Deactivated successfully.
Dec  3 18:59:57 compute-0 podman[447528]: 2025-12-03 18:59:57.051018754 +0000 UTC m=+0.069020326 container remove 1b683f986b7ff4ade4a42fc72eeab559390b8804744a62f9426d55948a8c56b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dbd0831a-c570-4257-bca6-ab48802d60d7, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  3 18:59:57 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:59:57.066 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[005d9f88-f019-4757-8d84-062da7641d20]: (4, ('Wed Dec  3 06:59:56 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-dbd0831a-c570-4257-bca6-ab48802d60d7 (1b683f986b7ff4ade4a42fc72eeab559390b8804744a62f9426d55948a8c56b1)\n1b683f986b7ff4ade4a42fc72eeab559390b8804744a62f9426d55948a8c56b1\nWed Dec  3 06:59:56 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-dbd0831a-c570-4257-bca6-ab48802d60d7 (1b683f986b7ff4ade4a42fc72eeab559390b8804744a62f9426d55948a8c56b1)\n1b683f986b7ff4ade4a42fc72eeab559390b8804744a62f9426d55948a8c56b1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:59:57 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:59:57.068 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[49a25732-159e-4358-80e7-732cc55625a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:59:57 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:59:57.069 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdbd0831a-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 18:59:57 compute-0 kernel: tapdbd0831a-c0: left promiscuous mode
Dec  3 18:59:57 compute-0 nova_compute[348325]: 2025-12-03 18:59:57.081 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:59:57 compute-0 nova_compute[348325]: 2025-12-03 18:59:57.088 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:59:57 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:59:57.091 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[f89f3c13-97a4-4d14-969d-b00db4133837]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:59:57 compute-0 nova_compute[348325]: 2025-12-03 18:59:57.103 348329 DEBUG nova.compute.manager [req-0da49c47-9f2c-4a00-b63b-ddda3ddb8412 req-f4a06e24-2156-444c-8318-ecfddf216f41 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] Received event network-vif-unplugged-53ab68f2-6888-4d96-9480-47e55e38f422 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 18:59:57 compute-0 nova_compute[348325]: 2025-12-03 18:59:57.104 348329 DEBUG oslo_concurrency.lockutils [req-0da49c47-9f2c-4a00-b63b-ddda3ddb8412 req-f4a06e24-2156-444c-8318-ecfddf216f41 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "4e045c2f-f0fd-4171-b724-3e38bd7ec4eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:59:57 compute-0 nova_compute[348325]: 2025-12-03 18:59:57.104 348329 DEBUG oslo_concurrency.lockutils [req-0da49c47-9f2c-4a00-b63b-ddda3ddb8412 req-f4a06e24-2156-444c-8318-ecfddf216f41 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "4e045c2f-f0fd-4171-b724-3e38bd7ec4eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:59:57 compute-0 nova_compute[348325]: 2025-12-03 18:59:57.104 348329 DEBUG oslo_concurrency.lockutils [req-0da49c47-9f2c-4a00-b63b-ddda3ddb8412 req-f4a06e24-2156-444c-8318-ecfddf216f41 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "4e045c2f-f0fd-4171-b724-3e38bd7ec4eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:59:57 compute-0 nova_compute[348325]: 2025-12-03 18:59:57.104 348329 DEBUG nova.compute.manager [req-0da49c47-9f2c-4a00-b63b-ddda3ddb8412 req-f4a06e24-2156-444c-8318-ecfddf216f41 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] No waiting events found dispatching network-vif-unplugged-53ab68f2-6888-4d96-9480-47e55e38f422 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  3 18:59:57 compute-0 nova_compute[348325]: 2025-12-03 18:59:57.105 348329 DEBUG nova.compute.manager [req-0da49c47-9f2c-4a00-b63b-ddda3ddb8412 req-f4a06e24-2156-444c-8318-ecfddf216f41 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] Received event network-vif-unplugged-53ab68f2-6888-4d96-9480-47e55e38f422 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  3 18:59:57 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:59:57.110 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[e4b83599-d27c-4d6c-a077-1b51378214de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:59:57 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:59:57.112 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[1b9550a9-bb1c-4c5f-b01f-69780fcd0854]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:59:57 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:59:57.130 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[a74a1c7b-788a-4759-a651-b24ba4b144b4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 660087, 'reachable_time': 36869, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 447544, 'error': None, 'target': 'ovnmeta-dbd0831a-c570-4257-bca6-ab48802d60d7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:59:57 compute-0 systemd[1]: run-netns-ovnmeta\x2ddbd0831a\x2dc570\x2d4257\x2dbca6\x2dab48802d60d7.mount: Deactivated successfully.
Dec  3 18:59:57 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:59:57.134 287110 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-dbd0831a-c570-4257-bca6-ab48802d60d7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  3 18:59:57 compute-0 ovn_metadata_agent[286994]: 2025-12-03 18:59:57.134 287110 DEBUG oslo.privsep.daemon [-] privsep: reply[9dba9b2a-8dcb-4adb-b7f5-408808f37942]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 18:59:57 compute-0 nova_compute[348325]: 2025-12-03 18:59:57.400 348329 INFO nova.virt.libvirt.driver [None req-fa72fd24-3afc-47b9-8942-8b31352d378b d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] Deleting instance files /var/lib/nova/instances/4e045c2f-f0fd-4171-b724-3e38bd7ec4eb_del#033[00m
Dec  3 18:59:57 compute-0 nova_compute[348325]: 2025-12-03 18:59:57.401 348329 INFO nova.virt.libvirt.driver [None req-fa72fd24-3afc-47b9-8942-8b31352d378b d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] Deletion of /var/lib/nova/instances/4e045c2f-f0fd-4171-b724-3e38bd7ec4eb_del complete#033[00m
Dec  3 18:59:57 compute-0 nova_compute[348325]: 2025-12-03 18:59:57.468 348329 INFO nova.compute.manager [None req-fa72fd24-3afc-47b9-8942-8b31352d378b d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] Took 0.96 seconds to destroy the instance on the hypervisor.#033[00m
Dec  3 18:59:57 compute-0 nova_compute[348325]: 2025-12-03 18:59:57.469 348329 DEBUG oslo.service.loopingcall [None req-fa72fd24-3afc-47b9-8942-8b31352d378b d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  3 18:59:57 compute-0 nova_compute[348325]: 2025-12-03 18:59:57.471 348329 DEBUG nova.compute.manager [-] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  3 18:59:57 compute-0 nova_compute[348325]: 2025-12-03 18:59:57.471 348329 DEBUG nova.network.neutron [-] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  3 18:59:57 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1832: 321 pgs: 321 active+clean; 285 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 23 KiB/s wr, 105 op/s
Dec  3 18:59:58 compute-0 nova_compute[348325]: 2025-12-03 18:59:58.432 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 18:59:59 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 18:59:59 compute-0 nova_compute[348325]: 2025-12-03 18:59:59.747 348329 DEBUG nova.compute.manager [req-b53e6194-bab2-48fe-91c7-639c577bade7 req-937cf31d-f8f7-404d-bda3-841b266b2cba 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] Received event network-vif-plugged-53ab68f2-6888-4d96-9480-47e55e38f422 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 18:59:59 compute-0 nova_compute[348325]: 2025-12-03 18:59:59.749 348329 DEBUG oslo_concurrency.lockutils [req-b53e6194-bab2-48fe-91c7-639c577bade7 req-937cf31d-f8f7-404d-bda3-841b266b2cba 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "4e045c2f-f0fd-4171-b724-3e38bd7ec4eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:59:59 compute-0 nova_compute[348325]: 2025-12-03 18:59:59.749 348329 DEBUG oslo_concurrency.lockutils [req-b53e6194-bab2-48fe-91c7-639c577bade7 req-937cf31d-f8f7-404d-bda3-841b266b2cba 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "4e045c2f-f0fd-4171-b724-3e38bd7ec4eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:59:59 compute-0 nova_compute[348325]: 2025-12-03 18:59:59.749 348329 DEBUG oslo_concurrency.lockutils [req-b53e6194-bab2-48fe-91c7-639c577bade7 req-937cf31d-f8f7-404d-bda3-841b266b2cba 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "4e045c2f-f0fd-4171-b724-3e38bd7ec4eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:59:59 compute-0 podman[158200]: time="2025-12-03T18:59:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 18:59:59 compute-0 nova_compute[348325]: 2025-12-03 18:59:59.750 348329 DEBUG nova.compute.manager [req-b53e6194-bab2-48fe-91c7-639c577bade7 req-937cf31d-f8f7-404d-bda3-841b266b2cba 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] No waiting events found dispatching network-vif-plugged-53ab68f2-6888-4d96-9480-47e55e38f422 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  3 18:59:59 compute-0 nova_compute[348325]: 2025-12-03 18:59:59.750 348329 WARNING nova.compute.manager [req-b53e6194-bab2-48fe-91c7-639c577bade7 req-937cf31d-f8f7-404d-bda3-841b266b2cba 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] Received unexpected event network-vif-plugged-53ab68f2-6888-4d96-9480-47e55e38f422 for instance with vm_state active and task_state deleting.#033[00m
Dec  3 18:59:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:59:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45044 "" "Go-http-client/1.1"
Dec  3 18:59:59 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1833: 321 pgs: 321 active+clean; 257 MiB data, 394 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 24 KiB/s wr, 121 op/s
Dec  3 18:59:59 compute-0 podman[158200]: @ - - [03/Dec/2025:18:59:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9114 "" "Go-http-client/1.1"
Dec  3 18:59:59 compute-0 nova_compute[348325]: 2025-12-03 18:59:59.937 348329 DEBUG oslo_concurrency.lockutils [None req-4200eeae-6e87-4be4-9384-e181e8ae35b7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Acquiring lock "eff2304f-0e67-4c93-ae65-20d4ddb87625" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:59:59 compute-0 nova_compute[348325]: 2025-12-03 18:59:59.938 348329 DEBUG oslo_concurrency.lockutils [None req-4200eeae-6e87-4be4-9384-e181e8ae35b7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Lock "eff2304f-0e67-4c93-ae65-20d4ddb87625" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:59:59 compute-0 nova_compute[348325]: 2025-12-03 18:59:59.939 348329 DEBUG oslo_concurrency.lockutils [None req-4200eeae-6e87-4be4-9384-e181e8ae35b7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Acquiring lock "eff2304f-0e67-4c93-ae65-20d4ddb87625-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 18:59:59 compute-0 nova_compute[348325]: 2025-12-03 18:59:59.940 348329 DEBUG oslo_concurrency.lockutils [None req-4200eeae-6e87-4be4-9384-e181e8ae35b7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Lock "eff2304f-0e67-4c93-ae65-20d4ddb87625-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 18:59:59 compute-0 nova_compute[348325]: 2025-12-03 18:59:59.941 348329 DEBUG oslo_concurrency.lockutils [None req-4200eeae-6e87-4be4-9384-e181e8ae35b7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Lock "eff2304f-0e67-4c93-ae65-20d4ddb87625-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 18:59:59 compute-0 nova_compute[348325]: 2025-12-03 18:59:59.945 348329 INFO nova.compute.manager [None req-4200eeae-6e87-4be4-9384-e181e8ae35b7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Terminating instance#033[00m
Dec  3 18:59:59 compute-0 nova_compute[348325]: 2025-12-03 18:59:59.948 348329 DEBUG nova.compute.manager [None req-4200eeae-6e87-4be4-9384-e181e8ae35b7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  3 19:00:00 compute-0 kernel: tapb709b4ab-58 (unregistering): left promiscuous mode
Dec  3 19:00:00 compute-0 nova_compute[348325]: 2025-12-03 19:00:00.069 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:00:00 compute-0 ovn_controller[89305]: 2025-12-03T19:00:00Z|00148|binding|INFO|Releasing lport b709b4ab-585a-4aed-9f06-3c9650d54c09 from this chassis (sb_readonly=0)
Dec  3 19:00:00 compute-0 ovn_controller[89305]: 2025-12-03T19:00:00Z|00149|binding|INFO|Setting lport b709b4ab-585a-4aed-9f06-3c9650d54c09 down in Southbound
Dec  3 19:00:00 compute-0 ovn_controller[89305]: 2025-12-03T19:00:00Z|00150|binding|INFO|Removing iface tapb709b4ab-58 ovn-installed in OVS
Dec  3 19:00:00 compute-0 NetworkManager[49087]: <info>  [1764788400.0718] device (tapb709b4ab-58): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  3 19:00:00 compute-0 nova_compute[348325]: 2025-12-03 19:00:00.071 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:00:00 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:00:00.076 286999 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:88:19 10.100.0.3'], port_security=['fa:16:3e:6e:88:19 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'eff2304f-0e67-4c93-ae65-20d4ddb87625', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c136d05b-f7ca-4f17-81e0-62c23fcd54a3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b1bc217751704d588f690e1b293cade8', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'a1a397ab-712e-407d-b87f-48e90c61a0b1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.232'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2f565d4f-7cf7-4751-884a-5071b91cf9b2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f81e3e96760>], logical_port=b709b4ab-585a-4aed-9f06-3c9650d54c09) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f81e3e96760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 19:00:00 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:00:00.078 286999 INFO neutron.agent.ovn.metadata.agent [-] Port b709b4ab-585a-4aed-9f06-3c9650d54c09 in datapath c136d05b-f7ca-4f17-81e0-62c23fcd54a3 unbound from our chassis#033[00m
Dec  3 19:00:00 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:00:00.082 286999 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c136d05b-f7ca-4f17-81e0-62c23fcd54a3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  3 19:00:00 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:00:00.083 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[6011f9b7-6401-4ab9-85ed-b6bc49f82896]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 19:00:00 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:00:00.084 286999 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c136d05b-f7ca-4f17-81e0-62c23fcd54a3 namespace which is not needed anymore#033[00m
Dec  3 19:00:00 compute-0 nova_compute[348325]: 2025-12-03 19:00:00.092 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:00:00 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d00000007.scope: Deactivated successfully.
Dec  3 19:00:00 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d00000007.scope: Consumed 43.678s CPU time.
Dec  3 19:00:00 compute-0 systemd-machined[138702]: Machine qemu-12-instance-00000007 terminated.
Dec  3 19:00:00 compute-0 nova_compute[348325]: 2025-12-03 19:00:00.179 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:00:00 compute-0 nova_compute[348325]: 2025-12-03 19:00:00.188 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:00:00 compute-0 nova_compute[348325]: 2025-12-03 19:00:00.190 348329 INFO nova.virt.libvirt.driver [-] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Instance destroyed successfully.#033[00m
Dec  3 19:00:00 compute-0 nova_compute[348325]: 2025-12-03 19:00:00.191 348329 DEBUG nova.objects.instance [None req-4200eeae-6e87-4be4-9384-e181e8ae35b7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Lazy-loading 'resources' on Instance uuid eff2304f-0e67-4c93-ae65-20d4ddb87625 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 19:00:00 compute-0 nova_compute[348325]: 2025-12-03 19:00:00.203 348329 DEBUG nova.virt.libvirt.vif [None req-4200eeae-6e87-4be4-9384-e181e8ae35b7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-03T18:57:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-348328150',display_name='tempest-ServerActionsTestJSON-server-348328150',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-348328150',id=7,image_ref='55982930-937b-484e-96ee-69e406a48023',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNxroBr01vWxkEduOl12EjIwANWWsGp13Iiwr/acK4U//64QHGHGw2vAnRgMxYbVrlypsXXM19ulgGJI9w1cDLg4nw16FcL2L12/Hkr+U1wJ9evpJospwbYXLvOwZj+bkQ==',key_name='tempest-keypair-1397268451',keypairs=<?>,launch_index=0,launched_at=2025-12-03T18:57:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b1bc217751704d588f690e1b293cade8',ramdisk_id='',reservation_id='r-2us6ueb7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='55982930-937b-484e-96ee-69e406a48023',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-2101343937',owner_user_name='tempest-ServerActionsTestJSON-2101343937-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-03T18:58:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a7a79cf3930c41baa4cb453d75b59c70',uuid=eff2304f-0e67-4c93-ae65-20d4ddb87625,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b709b4ab-585a-4aed-9f06-3c9650d54c09", "address": "fa:16:3e:6e:88:19", "network": {"id": "c136d05b-f7ca-4f17-81e0-62c23fcd54a3", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-203684476-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1bc217751704d588f690e1b293cade8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb709b4ab-58", "ovs_interfaceid": "b709b4ab-585a-4aed-9f06-3c9650d54c09", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  3 19:00:00 compute-0 nova_compute[348325]: 2025-12-03 19:00:00.206 348329 DEBUG nova.network.os_vif_util [None req-4200eeae-6e87-4be4-9384-e181e8ae35b7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Converting VIF {"id": "b709b4ab-585a-4aed-9f06-3c9650d54c09", "address": "fa:16:3e:6e:88:19", "network": {"id": "c136d05b-f7ca-4f17-81e0-62c23fcd54a3", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-203684476-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1bc217751704d588f690e1b293cade8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb709b4ab-58", "ovs_interfaceid": "b709b4ab-585a-4aed-9f06-3c9650d54c09", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 19:00:00 compute-0 nova_compute[348325]: 2025-12-03 19:00:00.207 348329 DEBUG nova.network.os_vif_util [None req-4200eeae-6e87-4be4-9384-e181e8ae35b7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6e:88:19,bridge_name='br-int',has_traffic_filtering=True,id=b709b4ab-585a-4aed-9f06-3c9650d54c09,network=Network(c136d05b-f7ca-4f17-81e0-62c23fcd54a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb709b4ab-58') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 19:00:00 compute-0 nova_compute[348325]: 2025-12-03 19:00:00.209 348329 DEBUG os_vif [None req-4200eeae-6e87-4be4-9384-e181e8ae35b7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6e:88:19,bridge_name='br-int',has_traffic_filtering=True,id=b709b4ab-585a-4aed-9f06-3c9650d54c09,network=Network(c136d05b-f7ca-4f17-81e0-62c23fcd54a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb709b4ab-58') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  3 19:00:00 compute-0 nova_compute[348325]: 2025-12-03 19:00:00.212 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:00:00 compute-0 nova_compute[348325]: 2025-12-03 19:00:00.213 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb709b4ab-58, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 19:00:00 compute-0 nova_compute[348325]: 2025-12-03 19:00:00.218 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  3 19:00:00 compute-0 nova_compute[348325]: 2025-12-03 19:00:00.222 348329 INFO os_vif [None req-4200eeae-6e87-4be4-9384-e181e8ae35b7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6e:88:19,bridge_name='br-int',has_traffic_filtering=True,id=b709b4ab-585a-4aed-9f06-3c9650d54c09,network=Network(c136d05b-f7ca-4f17-81e0-62c23fcd54a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb709b4ab-58')#033[00m
Dec  3 19:00:00 compute-0 neutron-haproxy-ovnmeta-c136d05b-f7ca-4f17-81e0-62c23fcd54a3[445567]: [NOTICE]   (445571) : haproxy version is 2.8.14-c23fe91
Dec  3 19:00:00 compute-0 neutron-haproxy-ovnmeta-c136d05b-f7ca-4f17-81e0-62c23fcd54a3[445567]: [NOTICE]   (445571) : path to executable is /usr/sbin/haproxy
Dec  3 19:00:00 compute-0 neutron-haproxy-ovnmeta-c136d05b-f7ca-4f17-81e0-62c23fcd54a3[445567]: [WARNING]  (445571) : Exiting Master process...
Dec  3 19:00:00 compute-0 neutron-haproxy-ovnmeta-c136d05b-f7ca-4f17-81e0-62c23fcd54a3[445567]: [WARNING]  (445571) : Exiting Master process...
Dec  3 19:00:00 compute-0 neutron-haproxy-ovnmeta-c136d05b-f7ca-4f17-81e0-62c23fcd54a3[445567]: [ALERT]    (445571) : Current worker (445573) exited with code 143 (Terminated)
Dec  3 19:00:00 compute-0 neutron-haproxy-ovnmeta-c136d05b-f7ca-4f17-81e0-62c23fcd54a3[445567]: [WARNING]  (445571) : All workers exited. Exiting... (0)
Dec  3 19:00:00 compute-0 systemd[1]: libpod-d7fc688e6f9d0922dd98a2dbaaa7752352640bad2fd05d8e186aace59a07de1d.scope: Deactivated successfully.
Dec  3 19:00:00 compute-0 podman[447585]: 2025-12-03 19:00:00.322894007 +0000 UTC m=+0.067971461 container died d7fc688e6f9d0922dd98a2dbaaa7752352640bad2fd05d8e186aace59a07de1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c136d05b-f7ca-4f17-81e0-62c23fcd54a3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec  3 19:00:00 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d7fc688e6f9d0922dd98a2dbaaa7752352640bad2fd05d8e186aace59a07de1d-userdata-shm.mount: Deactivated successfully.
Dec  3 19:00:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-85783d4a42be2c487872c07ab9bf449768f42659261910cff7385c03cf12a47d-merged.mount: Deactivated successfully.
Dec  3 19:00:00 compute-0 podman[447585]: 2025-12-03 19:00:00.37174741 +0000 UTC m=+0.116824864 container cleanup d7fc688e6f9d0922dd98a2dbaaa7752352640bad2fd05d8e186aace59a07de1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c136d05b-f7ca-4f17-81e0-62c23fcd54a3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec  3 19:00:00 compute-0 systemd[1]: libpod-conmon-d7fc688e6f9d0922dd98a2dbaaa7752352640bad2fd05d8e186aace59a07de1d.scope: Deactivated successfully.
Dec  3 19:00:00 compute-0 podman[447625]: 2025-12-03 19:00:00.495168583 +0000 UTC m=+0.076518319 container remove d7fc688e6f9d0922dd98a2dbaaa7752352640bad2fd05d8e186aace59a07de1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c136d05b-f7ca-4f17-81e0-62c23fcd54a3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 19:00:00 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:00:00.506 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[3664c677-0834-47d5-aa4c-d9533b98f615]: (4, ('Wed Dec  3 07:00:00 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c136d05b-f7ca-4f17-81e0-62c23fcd54a3 (d7fc688e6f9d0922dd98a2dbaaa7752352640bad2fd05d8e186aace59a07de1d)\nd7fc688e6f9d0922dd98a2dbaaa7752352640bad2fd05d8e186aace59a07de1d\nWed Dec  3 07:00:00 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c136d05b-f7ca-4f17-81e0-62c23fcd54a3 (d7fc688e6f9d0922dd98a2dbaaa7752352640bad2fd05d8e186aace59a07de1d)\nd7fc688e6f9d0922dd98a2dbaaa7752352640bad2fd05d8e186aace59a07de1d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 19:00:00 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:00:00.509 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[b6a71298-197f-41a5-b2d0-c6ba401c57d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 19:00:00 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:00:00.510 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc136d05b-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 19:00:00 compute-0 nova_compute[348325]: 2025-12-03 19:00:00.512 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:00:00 compute-0 kernel: tapc136d05b-f0: left promiscuous mode
Dec  3 19:00:00 compute-0 nova_compute[348325]: 2025-12-03 19:00:00.534 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:00:00 compute-0 nova_compute[348325]: 2025-12-03 19:00:00.536 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:00:00 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:00:00.538 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[ac4ab1bb-6785-4319-bfff-ed6507a51a57]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 19:00:00 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:00:00.562 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[725c4723-0319-405f-bd3b-f8bd5b012b21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 19:00:00 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:00:00.563 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[9ba1c017-a2e1-4156-9f69-3cbae20a425d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 19:00:00 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:00:00.579 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[67dfb461-e276-4705-808b-fbaef2dc4b76]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 661279, 'reachable_time': 29334, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 447640, 'error': None, 'target': 'ovnmeta-c136d05b-f7ca-4f17-81e0-62c23fcd54a3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 19:00:00 compute-0 systemd[1]: run-netns-ovnmeta\x2dc136d05b\x2df7ca\x2d4f17\x2d81e0\x2d62c23fcd54a3.mount: Deactivated successfully.
Dec  3 19:00:00 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:00:00.585 287110 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c136d05b-f7ca-4f17-81e0-62c23fcd54a3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  3 19:00:00 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:00:00.586 287110 DEBUG oslo.privsep.daemon [-] privsep: reply[2b4424f0-1834-42cd-9294-7cba93b084b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 19:00:00 compute-0 nova_compute[348325]: 2025-12-03 19:00:00.865 348329 INFO nova.virt.libvirt.driver [None req-4200eeae-6e87-4be4-9384-e181e8ae35b7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Deleting instance files /var/lib/nova/instances/eff2304f-0e67-4c93-ae65-20d4ddb87625_del#033[00m
Dec  3 19:00:00 compute-0 nova_compute[348325]: 2025-12-03 19:00:00.867 348329 INFO nova.virt.libvirt.driver [None req-4200eeae-6e87-4be4-9384-e181e8ae35b7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Deletion of /var/lib/nova/instances/eff2304f-0e67-4c93-ae65-20d4ddb87625_del complete#033[00m
Dec  3 19:00:00 compute-0 nova_compute[348325]: 2025-12-03 19:00:00.931 348329 INFO nova.compute.manager [None req-4200eeae-6e87-4be4-9384-e181e8ae35b7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Took 0.98 seconds to destroy the instance on the hypervisor.#033[00m
Dec  3 19:00:00 compute-0 nova_compute[348325]: 2025-12-03 19:00:00.932 348329 DEBUG oslo.service.loopingcall [None req-4200eeae-6e87-4be4-9384-e181e8ae35b7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  3 19:00:00 compute-0 nova_compute[348325]: 2025-12-03 19:00:00.933 348329 DEBUG nova.compute.manager [-] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  3 19:00:00 compute-0 nova_compute[348325]: 2025-12-03 19:00:00.933 348329 DEBUG nova.network.neutron [-] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  3 19:00:01 compute-0 openstack_network_exporter[365222]: ERROR   19:00:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 19:00:01 compute-0 openstack_network_exporter[365222]: ERROR   19:00:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:00:01 compute-0 openstack_network_exporter[365222]: ERROR   19:00:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:00:01 compute-0 openstack_network_exporter[365222]: ERROR   19:00:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 19:00:01 compute-0 openstack_network_exporter[365222]: ERROR   19:00:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 19:00:01 compute-0 nova_compute[348325]: 2025-12-03 19:00:01.571 348329 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764788386.5696092, c9937213-8842-4393-90b0-edb363037633 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 19:00:01 compute-0 nova_compute[348325]: 2025-12-03 19:00:01.572 348329 INFO nova.compute.manager [-] [instance: c9937213-8842-4393-90b0-edb363037633] VM Stopped (Lifecycle Event)#033[00m
Dec  3 19:00:01 compute-0 nova_compute[348325]: 2025-12-03 19:00:01.591 348329 DEBUG nova.compute.manager [None req-fa36c789-4bf3-447e-aa4b-0130ce57b8cc - - - - - -] [instance: c9937213-8842-4393-90b0-edb363037633] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 19:00:01 compute-0 nova_compute[348325]: 2025-12-03 19:00:01.698 348329 DEBUG nova.network.neutron [-] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 19:00:01 compute-0 nova_compute[348325]: 2025-12-03 19:00:01.735 348329 INFO nova.compute.manager [-] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] Took 4.26 seconds to deallocate network for instance.#033[00m
Dec  3 19:00:01 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1834: 321 pgs: 321 active+clean; 185 MiB data, 351 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 9.6 KiB/s wr, 125 op/s
Dec  3 19:00:01 compute-0 nova_compute[348325]: 2025-12-03 19:00:01.791 348329 DEBUG oslo_concurrency.lockutils [None req-fa72fd24-3afc-47b9-8942-8b31352d378b d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 19:00:01 compute-0 nova_compute[348325]: 2025-12-03 19:00:01.792 348329 DEBUG oslo_concurrency.lockutils [None req-fa72fd24-3afc-47b9-8942-8b31352d378b d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 19:00:01 compute-0 nova_compute[348325]: 2025-12-03 19:00:01.898 348329 DEBUG oslo_concurrency.processutils [None req-fa72fd24-3afc-47b9-8942-8b31352d378b d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 19:00:02 compute-0 nova_compute[348325]: 2025-12-03 19:00:02.133 348329 DEBUG nova.compute.manager [req-9b81596a-7339-41c9-8250-09808b83605b req-495292a2-d63d-453c-83ab-fdfd1ebda9bc 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Received event network-vif-unplugged-b709b4ab-585a-4aed-9f06-3c9650d54c09 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 19:00:02 compute-0 nova_compute[348325]: 2025-12-03 19:00:02.135 348329 DEBUG oslo_concurrency.lockutils [req-9b81596a-7339-41c9-8250-09808b83605b req-495292a2-d63d-453c-83ab-fdfd1ebda9bc 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "eff2304f-0e67-4c93-ae65-20d4ddb87625-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 19:00:02 compute-0 nova_compute[348325]: 2025-12-03 19:00:02.136 348329 DEBUG oslo_concurrency.lockutils [req-9b81596a-7339-41c9-8250-09808b83605b req-495292a2-d63d-453c-83ab-fdfd1ebda9bc 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "eff2304f-0e67-4c93-ae65-20d4ddb87625-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 19:00:02 compute-0 nova_compute[348325]: 2025-12-03 19:00:02.137 348329 DEBUG oslo_concurrency.lockutils [req-9b81596a-7339-41c9-8250-09808b83605b req-495292a2-d63d-453c-83ab-fdfd1ebda9bc 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "eff2304f-0e67-4c93-ae65-20d4ddb87625-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 19:00:02 compute-0 nova_compute[348325]: 2025-12-03 19:00:02.138 348329 DEBUG nova.compute.manager [req-9b81596a-7339-41c9-8250-09808b83605b req-495292a2-d63d-453c-83ab-fdfd1ebda9bc 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] No waiting events found dispatching network-vif-unplugged-b709b4ab-585a-4aed-9f06-3c9650d54c09 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  3 19:00:02 compute-0 nova_compute[348325]: 2025-12-03 19:00:02.139 348329 DEBUG nova.compute.manager [req-9b81596a-7339-41c9-8250-09808b83605b req-495292a2-d63d-453c-83ab-fdfd1ebda9bc 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Received event network-vif-unplugged-b709b4ab-585a-4aed-9f06-3c9650d54c09 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  3 19:00:02 compute-0 nova_compute[348325]: 2025-12-03 19:00:02.140 348329 DEBUG nova.compute.manager [req-9b81596a-7339-41c9-8250-09808b83605b req-495292a2-d63d-453c-83ab-fdfd1ebda9bc 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] Received event network-vif-deleted-53ab68f2-6888-4d96-9480-47e55e38f422 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 19:00:02 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 19:00:02 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3463610627' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 19:00:02 compute-0 nova_compute[348325]: 2025-12-03 19:00:02.394 348329 DEBUG oslo_concurrency.processutils [None req-fa72fd24-3afc-47b9-8942-8b31352d378b d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 19:00:02 compute-0 nova_compute[348325]: 2025-12-03 19:00:02.405 348329 DEBUG nova.compute.provider_tree [None req-fa72fd24-3afc-47b9-8942-8b31352d378b d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 19:00:02 compute-0 nova_compute[348325]: 2025-12-03 19:00:02.423 348329 DEBUG nova.scheduler.client.report [None req-fa72fd24-3afc-47b9-8942-8b31352d378b d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 19:00:02 compute-0 nova_compute[348325]: 2025-12-03 19:00:02.447 348329 DEBUG oslo_concurrency.lockutils [None req-fa72fd24-3afc-47b9-8942-8b31352d378b d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.655s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 19:00:02 compute-0 nova_compute[348325]: 2025-12-03 19:00:02.482 348329 INFO nova.scheduler.client.report [None req-fa72fd24-3afc-47b9-8942-8b31352d378b d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] Deleted allocations for instance 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb#033[00m
Dec  3 19:00:02 compute-0 nova_compute[348325]: 2025-12-03 19:00:02.560 348329 DEBUG oslo_concurrency.lockutils [None req-fa72fd24-3afc-47b9-8942-8b31352d378b d3387836400c4ffa96fc7c863361df79 0e342f56e114484b986071d1dfb8656a - - default default] Lock "4e045c2f-f0fd-4171-b724-3e38bd7ec4eb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.058s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 19:00:02 compute-0 ovn_controller[89305]: 2025-12-03T19:00:02Z|00151|binding|INFO|Releasing lport f82febe8-1e88-4e67-9f7a-5af5921c9877 from this chassis (sb_readonly=0)
Dec  3 19:00:02 compute-0 nova_compute[348325]: 2025-12-03 19:00:02.894 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:00:03 compute-0 nova_compute[348325]: 2025-12-03 19:00:03.149 348329 DEBUG nova.network.neutron [-] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 19:00:03 compute-0 nova_compute[348325]: 2025-12-03 19:00:03.173 348329 INFO nova.compute.manager [-] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Took 2.24 seconds to deallocate network for instance.#033[00m
Dec  3 19:00:03 compute-0 ovn_controller[89305]: 2025-12-03T19:00:03Z|00152|binding|INFO|Releasing lport f82febe8-1e88-4e67-9f7a-5af5921c9877 from this chassis (sb_readonly=0)
Dec  3 19:00:03 compute-0 nova_compute[348325]: 2025-12-03 19:00:03.248 348329 DEBUG oslo_concurrency.lockutils [None req-4200eeae-6e87-4be4-9384-e181e8ae35b7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 19:00:03 compute-0 nova_compute[348325]: 2025-12-03 19:00:03.249 348329 DEBUG oslo_concurrency.lockutils [None req-4200eeae-6e87-4be4-9384-e181e8ae35b7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 19:00:03 compute-0 nova_compute[348325]: 2025-12-03 19:00:03.250 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:00:03 compute-0 nova_compute[348325]: 2025-12-03 19:00:03.320 348329 DEBUG oslo_concurrency.processutils [None req-4200eeae-6e87-4be4-9384-e181e8ae35b7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 19:00:03 compute-0 nova_compute[348325]: 2025-12-03 19:00:03.435 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:00:03 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 19:00:03 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3278894331' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 19:00:03 compute-0 nova_compute[348325]: 2025-12-03 19:00:03.778 348329 DEBUG oslo_concurrency.processutils [None req-4200eeae-6e87-4be4-9384-e181e8ae35b7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 19:00:03 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1835: 321 pgs: 321 active+clean; 151 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 876 KiB/s rd, 7.9 KiB/s wr, 74 op/s
Dec  3 19:00:03 compute-0 nova_compute[348325]: 2025-12-03 19:00:03.786 348329 DEBUG nova.compute.provider_tree [None req-4200eeae-6e87-4be4-9384-e181e8ae35b7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 19:00:03 compute-0 nova_compute[348325]: 2025-12-03 19:00:03.803 348329 DEBUG nova.scheduler.client.report [None req-4200eeae-6e87-4be4-9384-e181e8ae35b7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 19:00:03 compute-0 nova_compute[348325]: 2025-12-03 19:00:03.836 348329 DEBUG oslo_concurrency.lockutils [None req-4200eeae-6e87-4be4-9384-e181e8ae35b7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 19:00:03 compute-0 nova_compute[348325]: 2025-12-03 19:00:03.885 348329 INFO nova.scheduler.client.report [None req-4200eeae-6e87-4be4-9384-e181e8ae35b7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Deleted allocations for instance eff2304f-0e67-4c93-ae65-20d4ddb87625#033[00m
Dec  3 19:00:03 compute-0 podman[447686]: 2025-12-03 19:00:03.962374335 +0000 UTC m=+0.130942447 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  3 19:00:03 compute-0 nova_compute[348325]: 2025-12-03 19:00:03.970 348329 DEBUG oslo_concurrency.lockutils [None req-4200eeae-6e87-4be4-9384-e181e8ae35b7 a7a79cf3930c41baa4cb453d75b59c70 b1bc217751704d588f690e1b293cade8 - - default default] Lock "eff2304f-0e67-4c93-ae65-20d4ddb87625" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.032s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 19:00:04 compute-0 nova_compute[348325]: 2025-12-03 19:00:04.321 348329 DEBUG nova.compute.manager [req-35b00a0f-7924-4bf3-afec-5d70bf03bd38 req-2675f466-2108-490b-8ddf-bb577fbfca54 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Received event network-vif-plugged-b709b4ab-585a-4aed-9f06-3c9650d54c09 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 19:00:04 compute-0 nova_compute[348325]: 2025-12-03 19:00:04.322 348329 DEBUG oslo_concurrency.lockutils [req-35b00a0f-7924-4bf3-afec-5d70bf03bd38 req-2675f466-2108-490b-8ddf-bb577fbfca54 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "eff2304f-0e67-4c93-ae65-20d4ddb87625-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 19:00:04 compute-0 nova_compute[348325]: 2025-12-03 19:00:04.322 348329 DEBUG oslo_concurrency.lockutils [req-35b00a0f-7924-4bf3-afec-5d70bf03bd38 req-2675f466-2108-490b-8ddf-bb577fbfca54 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "eff2304f-0e67-4c93-ae65-20d4ddb87625-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 19:00:04 compute-0 nova_compute[348325]: 2025-12-03 19:00:04.323 348329 DEBUG oslo_concurrency.lockutils [req-35b00a0f-7924-4bf3-afec-5d70bf03bd38 req-2675f466-2108-490b-8ddf-bb577fbfca54 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "eff2304f-0e67-4c93-ae65-20d4ddb87625-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 19:00:04 compute-0 nova_compute[348325]: 2025-12-03 19:00:04.323 348329 DEBUG nova.compute.manager [req-35b00a0f-7924-4bf3-afec-5d70bf03bd38 req-2675f466-2108-490b-8ddf-bb577fbfca54 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] No waiting events found dispatching network-vif-plugged-b709b4ab-585a-4aed-9f06-3c9650d54c09 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  3 19:00:04 compute-0 nova_compute[348325]: 2025-12-03 19:00:04.324 348329 WARNING nova.compute.manager [req-35b00a0f-7924-4bf3-afec-5d70bf03bd38 req-2675f466-2108-490b-8ddf-bb577fbfca54 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Received unexpected event network-vif-plugged-b709b4ab-585a-4aed-9f06-3c9650d54c09 for instance with vm_state deleted and task_state None.#033[00m
Dec  3 19:00:04 compute-0 nova_compute[348325]: 2025-12-03 19:00:04.324 348329 DEBUG nova.compute.manager [req-35b00a0f-7924-4bf3-afec-5d70bf03bd38 req-2675f466-2108-490b-8ddf-bb577fbfca54 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Received event network-vif-deleted-b709b4ab-585a-4aed-9f06-3c9650d54c09 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 19:00:04 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:00:05 compute-0 nova_compute[348325]: 2025-12-03 19:00:05.218 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:00:05 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1836: 321 pgs: 321 active+clean; 124 MiB data, 312 MiB used, 60 GiB / 60 GiB avail; 205 KiB/s rd, 6.8 KiB/s wr, 64 op/s
Dec  3 19:00:06 compute-0 podman[447711]: 2025-12-03 19:00:06.900979982 +0000 UTC m=+0.065753716 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  3 19:00:06 compute-0 podman[447710]: 2025-12-03 19:00:06.943952162 +0000 UTC m=+0.108818789 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec  3 19:00:07 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1837: 321 pgs: 321 active+clean; 124 MiB data, 312 MiB used, 60 GiB / 60 GiB avail; 46 KiB/s rd, 6.7 KiB/s wr, 58 op/s
Dec  3 19:00:08 compute-0 nova_compute[348325]: 2025-12-03 19:00:08.437 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:00:09 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:00:09 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1838: 321 pgs: 321 active+clean; 124 MiB data, 312 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 6.7 KiB/s wr, 56 op/s
Dec  3 19:00:10 compute-0 nova_compute[348325]: 2025-12-03 19:00:10.225 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:00:11 compute-0 nova_compute[348325]: 2025-12-03 19:00:11.753 348329 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764788396.75141, 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 19:00:11 compute-0 nova_compute[348325]: 2025-12-03 19:00:11.754 348329 INFO nova.compute.manager [-] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] VM Stopped (Lifecycle Event)#033[00m
Dec  3 19:00:11 compute-0 nova_compute[348325]: 2025-12-03 19:00:11.779 348329 DEBUG nova.compute.manager [None req-54d4adee-8e9e-450f-a171-15ba0af09969 - - - - - -] [instance: 4e045c2f-f0fd-4171-b724-3e38bd7ec4eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 19:00:11 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1839: 321 pgs: 321 active+clean; 124 MiB data, 312 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec  3 19:00:13 compute-0 nova_compute[348325]: 2025-12-03 19:00:13.441 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:00:13 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1840: 321 pgs: 321 active+clean; 124 MiB data, 312 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 682 B/s wr, 17 op/s
Dec  3 19:00:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:00:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:00:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:00:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:00:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:00:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:00:14 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_19:00:13
Dec  3 19:00:14 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 19:00:14 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 19:00:14 compute-0 ceph-mgr[193091]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', 'default.rgw.meta', 'volumes', '.rgw.root', 'default.rgw.control', '.mgr', 'cephfs.cephfs.meta', 'backups', 'images', 'default.rgw.log']
Dec  3 19:00:14 compute-0 ceph-mgr[193091]: [balancer INFO root] prepared 0/10 changes
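The five balancer lines above are one scheduled optimization round: in upmap mode the module stages at most 10 pg-upmap changes per run (hence "prepared 0/10 changes") and refuses plans that would push the misplaced ratio past the 0.050000 cap it logs, so zero prepared changes means the PGs were already evenly placed across the listed pools. A hedged sketch of querying that state from a client, assuming a ceph CLI with a usable keyring (-f json is the standard mon/mgr formatter, but the exact result keys may vary by release):

    import json, subprocess

    # Ask the mgr balancer module for its state; the mode and the last
    # optimize result mirror the "Mode upmap ... prepared 0/10" lines above.
    out = subprocess.run(["ceph", "balancer", "status", "-f", "json"],
                         check=True, capture_output=True, text=True).stdout
    status = json.loads(out)
    print(status.get("mode"), status.get("optimize_result"))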
Dec  3 19:00:14 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:00:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 19:00:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 19:00:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 19:00:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 19:00:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 19:00:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 19:00:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 19:00:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 19:00:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 19:00:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 19:00:15 compute-0 nova_compute[348325]: 2025-12-03 19:00:15.188 348329 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764788400.1866724, eff2304f-0e67-4c93-ae65-20d4ddb87625 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  3 19:00:15 compute-0 nova_compute[348325]: 2025-12-03 19:00:15.189 348329 INFO nova.compute.manager [-] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] VM Stopped (Lifecycle Event)
Dec  3 19:00:15 compute-0 nova_compute[348325]: 2025-12-03 19:00:15.215 348329 DEBUG nova.compute.manager [None req-e0727753-7912-4772-8271-21298e24df5a - - - - - -] [instance: eff2304f-0e67-4c93-ae65-20d4ddb87625] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  3 19:00:15 compute-0 nova_compute[348325]: 2025-12-03 19:00:15.230 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:00:15 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1841: 321 pgs: 321 active+clean; 124 MiB data, 312 MiB used, 60 GiB / 60 GiB avail; 9.1 KiB/s rd, 341 B/s wr, 13 op/s
Dec  3 19:00:16 compute-0 nova_compute[348325]: 2025-12-03 19:00:16.497 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:00:17 compute-0 podman[447926]: 2025-12-03 19:00:17.413388433 +0000 UTC m=+0.085707963 container exec c4418ca0ee5df95c133db330bc8714b98e7c86be83b29540d0d4d94c3c723743 (image=quay.io/ceph/ceph:v18, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mon-compute-0, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec  3 19:00:17 compute-0 podman[447926]: 2025-12-03 19:00:17.504928408 +0000 UTC m=+0.177247948 container exec_died c4418ca0ee5df95c133db330bc8714b98e7c86be83b29540d0d4d94c3c723743 (image=quay.io/ceph/ceph:v18, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 19:00:17 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1842: 321 pgs: 321 active+clean; 124 MiB data, 312 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:00:18 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 19:00:18 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:00:18 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 19:00:18 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:00:18 compute-0 nova_compute[348325]: 2025-12-03 19:00:18.444 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:00:18 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:00:18 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:00:19 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 19:00:19 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 19:00:19 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 19:00:19 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 19:00:19 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 19:00:19 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:00:19 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev ea1925b3-4a8c-4a85-bd24-5252569f1a41 does not exist
Dec  3 19:00:19 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev ac7405c5-7fd2-40e0-8238-edfff37eb467 does not exist
Dec  3 19:00:19 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 72c0c577-4637-46d4-870a-fb2c418ecb98 does not exist
Dec  3 19:00:19 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 19:00:19 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 19:00:19 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 19:00:19 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 19:00:19 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 19:00:19 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 19:00:19 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:00:19 compute-0 nova_compute[348325]: 2025-12-03 19:00:19.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:00:19 compute-0 podman[448239]: 2025-12-03 19:00:19.671728818 +0000 UTC m=+0.107509885 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., distribution-scope=public, managed_by=edpm_ansible, name=ubi9-minimal, build-date=2025-08-20T13:12:41, config_id=edpm, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, version=9.6)
Dec  3 19:00:19 compute-0 podman[448237]: 2025-12-03 19:00:19.683590259 +0000 UTC m=+0.132002875 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec  3 19:00:19 compute-0 podman[448238]: 2025-12-03 19:00:19.687640877 +0000 UTC m=+0.136460653 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 19:00:19 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1843: 321 pgs: 321 active+clean; 124 MiB data, 312 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:00:19 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 19:00:19 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:00:19 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 19:00:20 compute-0 nova_compute[348325]: 2025-12-03 19:00:20.233 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:00:20 compute-0 podman[448412]: 2025-12-03 19:00:20.264113501 +0000 UTC m=+0.060818385 container create ae0185540663c85002a858f1cf1ee12516f4e0d5610a60b594e04de40df25228 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_banach, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec  3 19:00:20 compute-0 systemd[1]: Started libpod-conmon-ae0185540663c85002a858f1cf1ee12516f4e0d5610a60b594e04de40df25228.scope.
Dec  3 19:00:20 compute-0 podman[448412]: 2025-12-03 19:00:20.240600457 +0000 UTC m=+0.037305341 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:00:20 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:00:20 compute-0 podman[448412]: 2025-12-03 19:00:20.393756357 +0000 UTC m=+0.190461251 container init ae0185540663c85002a858f1cf1ee12516f4e0d5610a60b594e04de40df25228 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_banach, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 19:00:20 compute-0 podman[448412]: 2025-12-03 19:00:20.404854048 +0000 UTC m=+0.201558932 container start ae0185540663c85002a858f1cf1ee12516f4e0d5610a60b594e04de40df25228 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_banach, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec  3 19:00:20 compute-0 podman[448412]: 2025-12-03 19:00:20.410424424 +0000 UTC m=+0.207129298 container attach ae0185540663c85002a858f1cf1ee12516f4e0d5610a60b594e04de40df25228 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_banach, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec  3 19:00:20 compute-0 elated_banach[448427]: 167 167
Dec  3 19:00:20 compute-0 systemd[1]: libpod-ae0185540663c85002a858f1cf1ee12516f4e0d5610a60b594e04de40df25228.scope: Deactivated successfully.
Dec  3 19:00:20 compute-0 conmon[448427]: conmon ae0185540663c85002a8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ae0185540663c85002a858f1cf1ee12516f4e0d5610a60b594e04de40df25228.scope/container/memory.events
Dec  3 19:00:20 compute-0 podman[448412]: 2025-12-03 19:00:20.415758745 +0000 UTC m=+0.212463629 container died ae0185540663c85002a858f1cf1ee12516f4e0d5610a60b594e04de40df25228 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec  3 19:00:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-d1341fd4cd999205c723c44234fd2b1860347184fba0e6497358f6bc9ef4ddf6-merged.mount: Deactivated successfully.
Dec  3 19:00:20 compute-0 podman[448412]: 2025-12-03 19:00:20.480785322 +0000 UTC m=+0.277490206 container remove ae0185540663c85002a858f1cf1ee12516f4e0d5610a60b594e04de40df25228 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  3 19:00:20 compute-0 systemd[1]: libpod-conmon-ae0185540663c85002a858f1cf1ee12516f4e0d5610a60b594e04de40df25228.scope: Deactivated successfully.
Dec  3 19:00:20 compute-0 podman[448451]: 2025-12-03 19:00:20.745330231 +0000 UTC m=+0.081095001 container create 794cdcba2b1edd482530baa643897cde64a19201056af2835f495085a73133b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_bell, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 19:00:20 compute-0 podman[448451]: 2025-12-03 19:00:20.71008381 +0000 UTC m=+0.045848570 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:00:20 compute-0 systemd[1]: Started libpod-conmon-794cdcba2b1edd482530baa643897cde64a19201056af2835f495085a73133b7.scope.
Dec  3 19:00:20 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:00:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e81c5be4bc5242d0d26aa16871bedfc610b6c12731b8c297be19447c1ba319cf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 19:00:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e81c5be4bc5242d0d26aa16871bedfc610b6c12731b8c297be19447c1ba319cf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 19:00:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e81c5be4bc5242d0d26aa16871bedfc610b6c12731b8c297be19447c1ba319cf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 19:00:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e81c5be4bc5242d0d26aa16871bedfc610b6c12731b8c297be19447c1ba319cf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 19:00:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e81c5be4bc5242d0d26aa16871bedfc610b6c12731b8c297be19447c1ba319cf/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 19:00:20 compute-0 podman[448451]: 2025-12-03 19:00:20.932674035 +0000 UTC m=+0.268438815 container init 794cdcba2b1edd482530baa643897cde64a19201056af2835f495085a73133b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_bell, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Dec  3 19:00:20 compute-0 podman[448451]: 2025-12-03 19:00:20.94601894 +0000 UTC m=+0.281783720 container start 794cdcba2b1edd482530baa643897cde64a19201056af2835f495085a73133b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_bell, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec  3 19:00:20 compute-0 podman[448451]: 2025-12-03 19:00:20.953944714 +0000 UTC m=+0.289709474 container attach 794cdcba2b1edd482530baa643897cde64a19201056af2835f495085a73133b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_bell, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 19:00:21 compute-0 ovn_controller[89305]: 2025-12-03T19:00:21Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8d:91:4c 10.100.3.160
Dec  3 19:00:21 compute-0 ovn_controller[89305]: 2025-12-03T19:00:21Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8d:91:4c 10.100.3.160
Dec  3 19:00:21 compute-0 nova_compute[348325]: 2025-12-03 19:00:21.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:00:21 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1844: 321 pgs: 321 active+clean; 143 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 73 KiB/s rd, 1.3 MiB/s wr, 18 op/s
Dec  3 19:00:22 compute-0 affectionate_bell[448465]: --> passed data devices: 0 physical, 3 LVM
Dec  3 19:00:22 compute-0 affectionate_bell[448465]: --> relative data size: 1.0
Dec  3 19:00:22 compute-0 affectionate_bell[448465]: --> All data devices are unavailable
Dec  3 19:00:22 compute-0 systemd[1]: libpod-794cdcba2b1edd482530baa643897cde64a19201056af2835f495085a73133b7.scope: Deactivated successfully.
Dec  3 19:00:22 compute-0 systemd[1]: libpod-794cdcba2b1edd482530baa643897cde64a19201056af2835f495085a73133b7.scope: Consumed 1.149s CPU time.
Dec  3 19:00:22 compute-0 podman[448451]: 2025-12-03 19:00:22.193105118 +0000 UTC m=+1.528869898 container died 794cdcba2b1edd482530baa643897cde64a19201056af2835f495085a73133b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 19:00:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-e81c5be4bc5242d0d26aa16871bedfc610b6c12731b8c297be19447c1ba319cf-merged.mount: Deactivated successfully.
Dec  3 19:00:22 compute-0 podman[448451]: 2025-12-03 19:00:22.285037043 +0000 UTC m=+1.620801783 container remove 794cdcba2b1edd482530baa643897cde64a19201056af2835f495085a73133b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_bell, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  3 19:00:22 compute-0 systemd[1]: libpod-conmon-794cdcba2b1edd482530baa643897cde64a19201056af2835f495085a73133b7.scope: Deactivated successfully.
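The short-lived elated_banach, affectionate_bell and silly_chandrasekhar containers are one-shot cephadm helpers: the mgr pulls the pinned quay.io/ceph/ceph image, runs a single ceph-volume probe (affectionate_bell's report that all 3 LVM data devices are unavailable means they are already consumed by OSDs), then removes the container, which is why create, start, died and remove all land within a couple of seconds. A minimal sketch for watching that lifecycle live, assuming podman's JSON event stream keeps its current Type/Status/Name keys:

    import json, subprocess

    # Stream podman lifecycle events; cephadm helpers show up as a rapid
    # create -> init -> start -> attach -> died -> remove sequence.
    proc = subprocess.Popen(["podman", "events", "--format", "json"],
                            stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        ev = json.loads(line)
        if ev.get("Type") == "container":
            print(ev.get("Status"), ev.get("Name"), str(ev.get("ID", ""))[:12])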
Dec  3 19:00:22 compute-0 nova_compute[348325]: 2025-12-03 19:00:22.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:00:22 compute-0 nova_compute[348325]: 2025-12-03 19:00:22.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:00:23 compute-0 podman[448642]: 2025-12-03 19:00:23.252462693 +0000 UTC m=+0.057972137 container create 6800e5c5c1b37264db96a6fcdb269de0bfc3f8afadf2ed1d7a903fd252a21e92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_chandrasekhar, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec  3 19:00:23 compute-0 systemd[1]: Started libpod-conmon-6800e5c5c1b37264db96a6fcdb269de0bfc3f8afadf2ed1d7a903fd252a21e92.scope.
Dec  3 19:00:23 compute-0 podman[448642]: 2025-12-03 19:00:23.227957544 +0000 UTC m=+0.033467048 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:00:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:00:23.356 286999 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 19:00:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:00:23.358 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 19:00:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:00:23.359 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 19:00:23 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:00:23 compute-0 podman[448642]: 2025-12-03 19:00:23.380372266 +0000 UTC m=+0.185881730 container init 6800e5c5c1b37264db96a6fcdb269de0bfc3f8afadf2ed1d7a903fd252a21e92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_chandrasekhar, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 19:00:23 compute-0 podman[448642]: 2025-12-03 19:00:23.391845476 +0000 UTC m=+0.197354930 container start 6800e5c5c1b37264db96a6fcdb269de0bfc3f8afadf2ed1d7a903fd252a21e92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  3 19:00:23 compute-0 podman[448642]: 2025-12-03 19:00:23.397047323 +0000 UTC m=+0.202556787 container attach 6800e5c5c1b37264db96a6fcdb269de0bfc3f8afadf2ed1d7a903fd252a21e92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_chandrasekhar, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef)
Dec  3 19:00:23 compute-0 silly_chandrasekhar[448676]: 167 167
Dec  3 19:00:23 compute-0 systemd[1]: libpod-6800e5c5c1b37264db96a6fcdb269de0bfc3f8afadf2ed1d7a903fd252a21e92.scope: Deactivated successfully.
Dec  3 19:00:23 compute-0 conmon[448676]: conmon 6800e5c5c1b37264db96 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6800e5c5c1b37264db96a6fcdb269de0bfc3f8afadf2ed1d7a903fd252a21e92.scope/container/memory.events
Dec  3 19:00:23 compute-0 podman[448642]: 2025-12-03 19:00:23.402556677 +0000 UTC m=+0.208066141 container died 6800e5c5c1b37264db96a6fcdb269de0bfc3f8afadf2ed1d7a903fd252a21e92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_chandrasekhar, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  3 19:00:23 compute-0 podman[448655]: 2025-12-03 19:00:23.403150391 +0000 UTC m=+0.095911822 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, config_id=edpm, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, release-0.7.12=)
Dec  3 19:00:23 compute-0 podman[448659]: 2025-12-03 19:00:23.417647026 +0000 UTC m=+0.104109564 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 19:00:23 compute-0 podman[448658]: 2025-12-03 19:00:23.427053255 +0000 UTC m=+0.114677111 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  3 19:00:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-05dce7e6190036881ec29a81b78aa2d3c2187700d199268d8a4c9b6196c34998-merged.mount: Deactivated successfully.
Dec  3 19:00:23 compute-0 nova_compute[348325]: 2025-12-03 19:00:23.446 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:00:23 compute-0 podman[448642]: 2025-12-03 19:00:23.45715374 +0000 UTC m=+0.262663204 container remove 6800e5c5c1b37264db96a6fcdb269de0bfc3f8afadf2ed1d7a903fd252a21e92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_chandrasekhar, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 19:00:23 compute-0 systemd[1]: libpod-conmon-6800e5c5c1b37264db96a6fcdb269de0bfc3f8afadf2ed1d7a903fd252a21e92.scope: Deactivated successfully.
Dec  3 19:00:23 compute-0 podman[448732]: 2025-12-03 19:00:23.651998597 +0000 UTC m=+0.049267243 container create a483fbd8addd477a669d9ba0110dc973c709dd222b8a8172813e67c8586d69de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_wright, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:00:23 compute-0 systemd[1]: Started libpod-conmon-a483fbd8addd477a669d9ba0110dc973c709dd222b8a8172813e67c8586d69de.scope.
Dec  3 19:00:23 compute-0 podman[448732]: 2025-12-03 19:00:23.633150387 +0000 UTC m=+0.030419053 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:00:23 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:00:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f42c6ce82b298059b95c8f9016c08eabd71bc430281ab11951a5ff0ef42a792/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 19:00:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f42c6ce82b298059b95c8f9016c08eabd71bc430281ab11951a5ff0ef42a792/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 19:00:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f42c6ce82b298059b95c8f9016c08eabd71bc430281ab11951a5ff0ef42a792/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 19:00:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f42c6ce82b298059b95c8f9016c08eabd71bc430281ab11951a5ff0ef42a792/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 19:00:23 compute-0 podman[448732]: 2025-12-03 19:00:23.791424021 +0000 UTC m=+0.188692687 container init a483fbd8addd477a669d9ba0110dc973c709dd222b8a8172813e67c8586d69de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_wright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec  3 19:00:23 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1845: 321 pgs: 321 active+clean; 152 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 127 KiB/s rd, 2.0 MiB/s wr, 36 op/s
Dec  3 19:00:23 compute-0 podman[448732]: 2025-12-03 19:00:23.819695102 +0000 UTC m=+0.216963788 container start a483fbd8addd477a669d9ba0110dc973c709dd222b8a8172813e67c8586d69de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_wright, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:00:23 compute-0 podman[448732]: 2025-12-03 19:00:23.836349359 +0000 UTC m=+0.233618025 container attach a483fbd8addd477a669d9ba0110dc973c709dd222b8a8172813e67c8586d69de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_wright, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 19:00:24 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:00:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 19:00:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:00:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 19:00:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:00:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007511334407761933 of space, bias 1.0, pg target 0.22534003223285798 quantized to 32 (current 32)
Dec  3 19:00:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:00:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:00:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:00:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:00:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:00:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Dec  3 19:00:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:00:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 19:00:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:00:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:00:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:00:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 19:00:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:00:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 19:00:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:00:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:00:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:00:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
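The autoscaler arithmetic in the block above can be checked by hand: every "pg target" equals capacity_ratio * bias * 300, which is consistent with the default mon_target_pg_per_osd of 100 and three OSDs on this host (both inferred from the logged numbers, not read from cluster config). A minimal sketch under those assumptions:

```python
# Reproduce the pg_autoscaler "pg target" values logged above.
# ASSUMPTION: mon_target_pg_per_osd=100 (the default) and 3 OSDs, giving a
# 300-PG budget per CRUSH root; inferred because ratio * 300 matches each line.
TARGET_PGS = 100 * 3

def pg_target(capacity_ratio: float, bias: float) -> float:
    return capacity_ratio * bias * TARGET_PGS

# capacity_ratio and bias copied verbatim from the log lines above:
print(pg_target(7.185749983720779e-06, 1.0))  # 0.0021557... -> '.mgr'
print(pg_target(0.0007511334407761933, 1.0))  # 0.2253400... -> 'vms'
print(pg_target(5.087256625643029e-07, 4.0))  # 0.0006104... -> 'cephfs.cephfs.meta'
```

The module then quantizes each raw target to a power of two with a floor at the pool's pg_num_min (evidently 1 for '.mgr', 16 for the CephFS metadata pool, and 32 for the others here), which is why every pool stays at its current pg_num despite the tiny raw targets.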
Dec  3 19:00:24 compute-0 serene_wright[448749]: {
Dec  3 19:00:24 compute-0 serene_wright[448749]:    "0": [
Dec  3 19:00:24 compute-0 serene_wright[448749]:        {
Dec  3 19:00:24 compute-0 serene_wright[448749]:            "devices": [
Dec  3 19:00:24 compute-0 serene_wright[448749]:                "/dev/loop3"
Dec  3 19:00:24 compute-0 serene_wright[448749]:            ],
Dec  3 19:00:24 compute-0 serene_wright[448749]:            "lv_name": "ceph_lv0",
Dec  3 19:00:24 compute-0 serene_wright[448749]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 19:00:24 compute-0 serene_wright[448749]:            "lv_size": "21470642176",
Dec  3 19:00:24 compute-0 serene_wright[448749]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=973fbbc8-5aff-4a53-bee8-42e5a6788dd6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 19:00:24 compute-0 serene_wright[448749]:            "lv_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 19:00:24 compute-0 serene_wright[448749]:            "name": "ceph_lv0",
Dec  3 19:00:24 compute-0 serene_wright[448749]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 19:00:24 compute-0 serene_wright[448749]:            "tags": {
Dec  3 19:00:24 compute-0 serene_wright[448749]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 19:00:24 compute-0 serene_wright[448749]:                "ceph.block_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 19:00:24 compute-0 serene_wright[448749]:                "ceph.cephx_lockbox_secret": "",
Dec  3 19:00:24 compute-0 serene_wright[448749]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:00:24 compute-0 serene_wright[448749]:                "ceph.cluster_name": "ceph",
Dec  3 19:00:24 compute-0 serene_wright[448749]:                "ceph.crush_device_class": "",
Dec  3 19:00:24 compute-0 serene_wright[448749]:                "ceph.encrypted": "0",
Dec  3 19:00:24 compute-0 serene_wright[448749]:                "ceph.osd_fsid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 19:00:24 compute-0 serene_wright[448749]:                "ceph.osd_id": "0",
Dec  3 19:00:24 compute-0 serene_wright[448749]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 19:00:24 compute-0 serene_wright[448749]:                "ceph.type": "block",
Dec  3 19:00:24 compute-0 serene_wright[448749]:                "ceph.vdo": "0"
Dec  3 19:00:24 compute-0 serene_wright[448749]:            },
Dec  3 19:00:24 compute-0 serene_wright[448749]:            "type": "block",
Dec  3 19:00:24 compute-0 serene_wright[448749]:            "vg_name": "ceph_vg0"
Dec  3 19:00:24 compute-0 serene_wright[448749]:        }
Dec  3 19:00:24 compute-0 serene_wright[448749]:    ],
Dec  3 19:00:24 compute-0 serene_wright[448749]:    "1": [
Dec  3 19:00:24 compute-0 serene_wright[448749]:        {
Dec  3 19:00:24 compute-0 serene_wright[448749]:            "devices": [
Dec  3 19:00:24 compute-0 serene_wright[448749]:                "/dev/loop4"
Dec  3 19:00:24 compute-0 serene_wright[448749]:            ],
Dec  3 19:00:24 compute-0 serene_wright[448749]:            "lv_name": "ceph_lv1",
Dec  3 19:00:24 compute-0 serene_wright[448749]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 19:00:24 compute-0 serene_wright[448749]:            "lv_size": "21470642176",
Dec  3 19:00:24 compute-0 serene_wright[448749]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1e2b0083-5293-47cb-a3d1-bc27cedc4ede,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 19:00:24 compute-0 serene_wright[448749]:            "lv_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 19:00:24 compute-0 serene_wright[448749]:            "name": "ceph_lv1",
Dec  3 19:00:24 compute-0 serene_wright[448749]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 19:00:24 compute-0 serene_wright[448749]:            "tags": {
Dec  3 19:00:24 compute-0 serene_wright[448749]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 19:00:24 compute-0 serene_wright[448749]:                "ceph.block_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 19:00:24 compute-0 serene_wright[448749]:                "ceph.cephx_lockbox_secret": "",
Dec  3 19:00:24 compute-0 serene_wright[448749]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:00:24 compute-0 serene_wright[448749]:                "ceph.cluster_name": "ceph",
Dec  3 19:00:24 compute-0 serene_wright[448749]:                "ceph.crush_device_class": "",
Dec  3 19:00:24 compute-0 serene_wright[448749]:                "ceph.encrypted": "0",
Dec  3 19:00:24 compute-0 serene_wright[448749]:                "ceph.osd_fsid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 19:00:24 compute-0 serene_wright[448749]:                "ceph.osd_id": "1",
Dec  3 19:00:24 compute-0 serene_wright[448749]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 19:00:24 compute-0 serene_wright[448749]:                "ceph.type": "block",
Dec  3 19:00:24 compute-0 serene_wright[448749]:                "ceph.vdo": "0"
Dec  3 19:00:24 compute-0 serene_wright[448749]:            },
Dec  3 19:00:24 compute-0 serene_wright[448749]:            "type": "block",
Dec  3 19:00:24 compute-0 serene_wright[448749]:            "vg_name": "ceph_vg1"
Dec  3 19:00:24 compute-0 serene_wright[448749]:        }
Dec  3 19:00:24 compute-0 serene_wright[448749]:    ],
Dec  3 19:00:24 compute-0 serene_wright[448749]:    "2": [
Dec  3 19:00:24 compute-0 serene_wright[448749]:        {
Dec  3 19:00:24 compute-0 serene_wright[448749]:            "devices": [
Dec  3 19:00:24 compute-0 serene_wright[448749]:                "/dev/loop5"
Dec  3 19:00:24 compute-0 serene_wright[448749]:            ],
Dec  3 19:00:24 compute-0 serene_wright[448749]:            "lv_name": "ceph_lv2",
Dec  3 19:00:24 compute-0 serene_wright[448749]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 19:00:24 compute-0 serene_wright[448749]:            "lv_size": "21470642176",
Dec  3 19:00:24 compute-0 serene_wright[448749]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2abec9de-afba-437e-9a17-384a1dd8cd50,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 19:00:24 compute-0 serene_wright[448749]:            "lv_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 19:00:24 compute-0 serene_wright[448749]:            "name": "ceph_lv2",
Dec  3 19:00:24 compute-0 serene_wright[448749]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 19:00:24 compute-0 serene_wright[448749]:            "tags": {
Dec  3 19:00:24 compute-0 serene_wright[448749]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 19:00:24 compute-0 serene_wright[448749]:                "ceph.block_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 19:00:24 compute-0 serene_wright[448749]:                "ceph.cephx_lockbox_secret": "",
Dec  3 19:00:24 compute-0 serene_wright[448749]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:00:24 compute-0 serene_wright[448749]:                "ceph.cluster_name": "ceph",
Dec  3 19:00:24 compute-0 serene_wright[448749]:                "ceph.crush_device_class": "",
Dec  3 19:00:24 compute-0 serene_wright[448749]:                "ceph.encrypted": "0",
Dec  3 19:00:24 compute-0 serene_wright[448749]:                "ceph.osd_fsid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 19:00:24 compute-0 serene_wright[448749]:                "ceph.osd_id": "2",
Dec  3 19:00:24 compute-0 serene_wright[448749]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 19:00:24 compute-0 serene_wright[448749]:                "ceph.type": "block",
Dec  3 19:00:24 compute-0 serene_wright[448749]:                "ceph.vdo": "0"
Dec  3 19:00:24 compute-0 serene_wright[448749]:            },
Dec  3 19:00:24 compute-0 serene_wright[448749]:            "type": "block",
Dec  3 19:00:24 compute-0 serene_wright[448749]:            "vg_name": "ceph_vg2"
Dec  3 19:00:24 compute-0 serene_wright[448749]:        }
Dec  3 19:00:24 compute-0 serene_wright[448749]:    ]
Dec  3 19:00:24 compute-0 serene_wright[448749]: }
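The JSON block above, emitted by the short-lived serene_wright container and shaped like `ceph-volume lvm list --format json` output, is keyed by OSD id with one LV record per OSD. A small sketch of folding it into an OSD-to-device summary, using an abridged copy of the logged data:

```python
import json

# Abridged from the listing above; field names match the log exactly.
listing = json.loads("""
{
  "0": [{"devices": ["/dev/loop3"],
         "lv_path": "/dev/ceph_vg0/ceph_lv0",
         "tags": {"ceph.osd_fsid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
                  "ceph.type": "block"}}]
}
""")

for osd_id, lvs in sorted(listing.items()):
    for lv in lvs:
        print(f"osd.{osd_id}: {lv['lv_path']} on {lv['devices']} "
              f"(fsid {lv['tags']['ceph.osd_fsid']})")
```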
Dec  3 19:00:24 compute-0 systemd[1]: libpod-a483fbd8addd477a669d9ba0110dc973c709dd222b8a8172813e67c8586d69de.scope: Deactivated successfully.
Dec  3 19:00:24 compute-0 podman[448732]: 2025-12-03 19:00:24.632735272 +0000 UTC m=+1.030003948 container died a483fbd8addd477a669d9ba0110dc973c709dd222b8a8172813e67c8586d69de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_wright, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 19:00:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f42c6ce82b298059b95c8f9016c08eabd71bc430281ab11951a5ff0ef42a792-merged.mount: Deactivated successfully.
Dec  3 19:00:24 compute-0 podman[448732]: 2025-12-03 19:00:24.729188637 +0000 UTC m=+1.126457323 container remove a483fbd8addd477a669d9ba0110dc973c709dd222b8a8172813e67c8586d69de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_wright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 19:00:24 compute-0 systemd[1]: libpod-conmon-a483fbd8addd477a669d9ba0110dc973c709dd222b8a8172813e67c8586d69de.scope: Deactivated successfully.
Dec  3 19:00:25 compute-0 nova_compute[348325]: 2025-12-03 19:00:25.238 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:00:25 compute-0 nova_compute[348325]: 2025-12-03 19:00:25.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:00:25 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1846: 321 pgs: 321 active+clean; 155 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 215 KiB/s rd, 2.1 MiB/s wr, 55 op/s
Dec  3 19:00:25 compute-0 podman[448911]: 2025-12-03 19:00:25.847368488 +0000 UTC m=+0.086779520 container create 31d5566b1ba199855e7a6ef86efbf542f401dca92d2a728400eebc88f1863c58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_chaplygin, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec  3 19:00:25 compute-0 podman[448911]: 2025-12-03 19:00:25.810742694 +0000 UTC m=+0.050153836 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:00:25 compute-0 systemd[1]: Started libpod-conmon-31d5566b1ba199855e7a6ef86efbf542f401dca92d2a728400eebc88f1863c58.scope.
Dec  3 19:00:25 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:00:26 compute-0 podman[448911]: 2025-12-03 19:00:26.020764271 +0000 UTC m=+0.260175383 container init 31d5566b1ba199855e7a6ef86efbf542f401dca92d2a728400eebc88f1863c58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_chaplygin, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:00:26 compute-0 podman[448911]: 2025-12-03 19:00:26.041310922 +0000 UTC m=+0.280721954 container start 31d5566b1ba199855e7a6ef86efbf542f401dca92d2a728400eebc88f1863c58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_chaplygin, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 19:00:26 compute-0 podman[448911]: 2025-12-03 19:00:26.048321463 +0000 UTC m=+0.287732525 container attach 31d5566b1ba199855e7a6ef86efbf542f401dca92d2a728400eebc88f1863c58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_chaplygin, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 19:00:26 compute-0 pedantic_chaplygin[448927]: 167 167
Dec  3 19:00:26 compute-0 systemd[1]: libpod-31d5566b1ba199855e7a6ef86efbf542f401dca92d2a728400eebc88f1863c58.scope: Deactivated successfully.
Dec  3 19:00:26 compute-0 conmon[448927]: conmon 31d5566b1ba199855e7a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-31d5566b1ba199855e7a6ef86efbf542f401dca92d2a728400eebc88f1863c58.scope/container/memory.events
Dec  3 19:00:26 compute-0 podman[448911]: 2025-12-03 19:00:26.063664698 +0000 UTC m=+0.303075790 container died 31d5566b1ba199855e7a6ef86efbf542f401dca92d2a728400eebc88f1863c58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:00:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-72a1c80f8e1d9cb83f7cda6237094446131c04a11d55742d2cf9e0e7719f15de-merged.mount: Deactivated successfully.
Dec  3 19:00:26 compute-0 podman[448911]: 2025-12-03 19:00:26.155116831 +0000 UTC m=+0.394527863 container remove 31d5566b1ba199855e7a6ef86efbf542f401dca92d2a728400eebc88f1863c58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 19:00:26 compute-0 systemd[1]: libpod-conmon-31d5566b1ba199855e7a6ef86efbf542f401dca92d2a728400eebc88f1863c58.scope: Deactivated successfully.
Dec  3 19:00:26 compute-0 podman[448953]: 2025-12-03 19:00:26.392810194 +0000 UTC m=+0.091860923 container create cf5fe59df0c88450ecfff30a7877c9e7a7d9d0ebe2f6fc0729ac49dee14d3f5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_jemison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 19:00:26 compute-0 podman[448953]: 2025-12-03 19:00:26.356686833 +0000 UTC m=+0.055737582 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:00:26 compute-0 systemd[1]: Started libpod-conmon-cf5fe59df0c88450ecfff30a7877c9e7a7d9d0ebe2f6fc0729ac49dee14d3f5b.scope.
Dec  3 19:00:26 compute-0 nova_compute[348325]: 2025-12-03 19:00:26.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:00:26 compute-0 nova_compute[348325]: 2025-12-03 19:00:26.492 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 19:00:26 compute-0 nova_compute[348325]: 2025-12-03 19:00:26.493 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  3 19:00:26 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:00:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4eb47d6c77444928e73600a5d1ce6ef87955ab668510b461e5b3e2d8e286e763/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 19:00:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4eb47d6c77444928e73600a5d1ce6ef87955ab668510b461e5b3e2d8e286e763/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 19:00:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4eb47d6c77444928e73600a5d1ce6ef87955ab668510b461e5b3e2d8e286e763/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 19:00:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4eb47d6c77444928e73600a5d1ce6ef87955ab668510b461e5b3e2d8e286e763/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
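The four xfs messages above are informational: these filesystems lack the XFS bigtime feature, so their timestamps top out at the signed 32-bit epoch limit the kernel prints. Converting 0x7fffffff confirms the "until 2038" wording:

```python
from datetime import datetime, timezone

# 0x7fffffff is the classic signed 32-bit time_t limit named in the messages above.
print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
# 2038-01-19 03:14:07+00:00
```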
Dec  3 19:00:26 compute-0 podman[448953]: 2025-12-03 19:00:26.555805844 +0000 UTC m=+0.254856583 container init cf5fe59df0c88450ecfff30a7877c9e7a7d9d0ebe2f6fc0729ac49dee14d3f5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_jemison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  3 19:00:26 compute-0 podman[448953]: 2025-12-03 19:00:26.585336525 +0000 UTC m=+0.284387254 container start cf5fe59df0c88450ecfff30a7877c9e7a7d9d0ebe2f6fc0729ac49dee14d3f5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_jemison, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:00:26 compute-0 podman[448953]: 2025-12-03 19:00:26.593163987 +0000 UTC m=+0.292214766 container attach cf5fe59df0c88450ecfff30a7877c9e7a7d9d0ebe2f6fc0729ac49dee14d3f5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 19:00:27 compute-0 nova_compute[348325]: 2025-12-03 19:00:27.023 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "refresh_cache-a4fc45c7-44e4-4b50-a3e0-98de13268f88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 19:00:27 compute-0 nova_compute[348325]: 2025-12-03 19:00:27.024 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquired lock "refresh_cache-a4fc45c7-44e4-4b50-a3e0-98de13268f88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 19:00:27 compute-0 nova_compute[348325]: 2025-12-03 19:00:27.025 348329 DEBUG nova.network.neutron [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  3 19:00:27 compute-0 nova_compute[348325]: 2025-12-03 19:00:27.026 348329 DEBUG nova.objects.instance [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lazy-loading 'info_cache' on Instance uuid a4fc45c7-44e4-4b50-a3e0-98de13268f88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 19:00:27 compute-0 confident_jemison[448967]: {
Dec  3 19:00:27 compute-0 confident_jemison[448967]:    "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
Dec  3 19:00:27 compute-0 confident_jemison[448967]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:00:27 compute-0 confident_jemison[448967]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 19:00:27 compute-0 confident_jemison[448967]:        "osd_id": 1,
Dec  3 19:00:27 compute-0 confident_jemison[448967]:        "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 19:00:27 compute-0 confident_jemison[448967]:        "type": "bluestore"
Dec  3 19:00:27 compute-0 confident_jemison[448967]:    },
Dec  3 19:00:27 compute-0 confident_jemison[448967]:    "2abec9de-afba-437e-9a17-384a1dd8cd50": {
Dec  3 19:00:27 compute-0 confident_jemison[448967]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:00:27 compute-0 confident_jemison[448967]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 19:00:27 compute-0 confident_jemison[448967]:        "osd_id": 2,
Dec  3 19:00:27 compute-0 confident_jemison[448967]:        "osd_uuid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 19:00:27 compute-0 confident_jemison[448967]:        "type": "bluestore"
Dec  3 19:00:27 compute-0 confident_jemison[448967]:    },
Dec  3 19:00:27 compute-0 confident_jemison[448967]:    "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": {
Dec  3 19:00:27 compute-0 confident_jemison[448967]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:00:27 compute-0 confident_jemison[448967]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 19:00:27 compute-0 confident_jemison[448967]:        "osd_id": 0,
Dec  3 19:00:27 compute-0 confident_jemison[448967]:        "osd_uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 19:00:27 compute-0 confident_jemison[448967]:        "type": "bluestore"
Dec  3 19:00:27 compute-0 confident_jemison[448967]:    }
Dec  3 19:00:27 compute-0 confident_jemison[448967]: }
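This second listing, from the confident_jemison container, is keyed by OSD fsid and shaped like `ceph-volume raw list` output; it should agree with the LVM listing logged three seconds earlier, with each osd_uuid matching a ceph.osd_fsid tag and each device being the /dev/mapper alias of the corresponding LV. A quick consistency check with values copied from the two log blocks:

```python
# fsid -> (osd_id, device) from the listing above:
raw = {
    "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": (0, "/dev/mapper/ceph_vg0-ceph_lv0"),
    "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": (1, "/dev/mapper/ceph_vg1-ceph_lv1"),
    "2abec9de-afba-437e-9a17-384a1dd8cd50": (2, "/dev/mapper/ceph_vg2-ceph_lv2"),
}
# osd_id -> ceph.osd_fsid from the earlier LVM listing:
lvm = {
    0: "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
    1: "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
    2: "2abec9de-afba-437e-9a17-384a1dd8cd50",
}
for fsid, (osd_id, device) in raw.items():
    assert lvm[osd_id] == fsid, (osd_id, fsid)
    print(f"osd.{osd_id} -> {device}")
```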
Dec  3 19:00:27 compute-0 systemd[1]: libpod-cf5fe59df0c88450ecfff30a7877c9e7a7d9d0ebe2f6fc0729ac49dee14d3f5b.scope: Deactivated successfully.
Dec  3 19:00:27 compute-0 systemd[1]: libpod-cf5fe59df0c88450ecfff30a7877c9e7a7d9d0ebe2f6fc0729ac49dee14d3f5b.scope: Consumed 1.079s CPU time.
Dec  3 19:00:27 compute-0 podman[448953]: 2025-12-03 19:00:27.679749096 +0000 UTC m=+1.378799835 container died cf5fe59df0c88450ecfff30a7877c9e7a7d9d0ebe2f6fc0729ac49dee14d3f5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_jemison, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec  3 19:00:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-4eb47d6c77444928e73600a5d1ce6ef87955ab668510b461e5b3e2d8e286e763-merged.mount: Deactivated successfully.
Dec  3 19:00:27 compute-0 podman[448953]: 2025-12-03 19:00:27.751508037 +0000 UTC m=+1.450558766 container remove cf5fe59df0c88450ecfff30a7877c9e7a7d9d0ebe2f6fc0729ac49dee14d3f5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_jemison, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 19:00:27 compute-0 systemd[1]: libpod-conmon-cf5fe59df0c88450ecfff30a7877c9e7a7d9d0ebe2f6fc0729ac49dee14d3f5b.scope: Deactivated successfully.
Dec  3 19:00:27 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 19:00:27 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:00:27 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 19:00:27 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1847: 321 pgs: 321 active+clean; 157 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 216 KiB/s rd, 2.1 MiB/s wr, 55 op/s
Dec  3 19:00:27 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:00:27 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 975d9f7a-3718-4a21-8e4d-170a6872856e does not exist
Dec  3 19:00:27 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 81252d13-c6d4-455b-843a-1db548e89a23 does not exist
Dec  3 19:00:28 compute-0 nova_compute[348325]: 2025-12-03 19:00:28.449 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:00:28 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:00:28 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:00:29 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:00:29 compute-0 nova_compute[348325]: 2025-12-03 19:00:29.652 348329 DEBUG nova.network.neutron [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Updating instance_info_cache with network_info: [{"id": "cf729fa8-9549-4bf2-9858-7e8de773e1bc", "address": "fa:16:3e:8d:91:4c", "network": {"id": "04e258c0-609e-4010-a306-af20506c3a9d", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.160", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d29cef7b24ee4d30b2b3f5027ec6aafb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf729fa8-95", "ovs_interfaceid": "cf729fa8-9549-4bf2-9858-7e8de773e1bc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
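The network_info blob in the line above carries everything the info-cache heal records for instance a4fc45c7: one OVN-bound OVS port with a single fixed IP. A sketch of extracting the interesting fields, with values copied from that log line:

```python
# Abridged network_info from the log line above (one VIF).
network_info = [{
    "id": "cf729fa8-9549-4bf2-9858-7e8de773e1bc",
    "address": "fa:16:3e:8d:91:4c",
    "devname": "tapcf729fa8-95",
    "network": {"subnets": [{"cidr": "10.100.0.0/16",
                             "ips": [{"address": "10.100.3.160", "type": "fixed"}]}]},
}]

for vif in network_info:
    ips = [ip["address"]
           for subnet in vif["network"]["subnets"]
           for ip in subnet["ips"]]
    print(vif["devname"], vif["address"], ips)
# tapcf729fa8-95 fa:16:3e:8d:91:4c ['10.100.3.160']
```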
Dec  3 19:00:29 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:00:29.746 286999 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5a:63:53', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '8e:79:bd:f4:48:1d'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 19:00:29 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:00:29.748 286999 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  3 19:00:29 compute-0 podman[158200]: time="2025-12-03T19:00:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 19:00:29 compute-0 nova_compute[348325]: 2025-12-03 19:00:29.763 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:00:29 compute-0 podman[158200]: @ - - [03/Dec/2025:19:00:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 19:00:29 compute-0 nova_compute[348325]: 2025-12-03 19:00:29.785 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Releasing lock "refresh_cache-a4fc45c7-44e4-4b50-a3e0-98de13268f88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 19:00:29 compute-0 nova_compute[348325]: 2025-12-03 19:00:29.786 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  3 19:00:29 compute-0 nova_compute[348325]: 2025-12-03 19:00:29.787 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:00:29 compute-0 nova_compute[348325]: 2025-12-03 19:00:29.787 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 19:00:29 compute-0 podman[158200]: @ - - [03/Dec/2025:19:00:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8649 "" "Go-http-client/1.1"
Dec  3 19:00:29 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1848: 321 pgs: 321 active+clean; 157 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 216 KiB/s rd, 2.1 MiB/s wr, 55 op/s
Dec  3 19:00:30 compute-0 nova_compute[348325]: 2025-12-03 19:00:30.244 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:00:30 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:00:30.751 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1ac9fd0d-196b-4ea8-9a9a-8aa831092805, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 19:00:31 compute-0 openstack_network_exporter[365222]: ERROR   19:00:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd

Dec  3 19:00:31 compute-0 openstack_network_exporter[365222]: ERROR   19:00:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:00:31 compute-0 openstack_network_exporter[365222]: ERROR   19:00:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 19:00:31 compute-0 openstack_network_exporter[365222]: ERROR   19:00:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 19:00:31 compute-0 openstack_network_exporter[365222]: ERROR   19:00:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 19:00:31 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1849: 321 pgs: 321 active+clean; 157 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 216 KiB/s rd, 2.1 MiB/s wr, 56 op/s
Dec  3 19:00:33 compute-0 nova_compute[348325]: 2025-12-03 19:00:33.452 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:00:33 compute-0 nova_compute[348325]: 2025-12-03 19:00:33.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:00:33 compute-0 nova_compute[348325]: 2025-12-03 19:00:33.513 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 19:00:33 compute-0 nova_compute[348325]: 2025-12-03 19:00:33.513 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 19:00:33 compute-0 nova_compute[348325]: 2025-12-03 19:00:33.514 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 19:00:33 compute-0 nova_compute[348325]: 2025-12-03 19:00:33.514 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 19:00:33 compute-0 nova_compute[348325]: 2025-12-03 19:00:33.514 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 19:00:33 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1850: 321 pgs: 321 active+clean; 157 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 143 KiB/s rd, 872 KiB/s wr, 37 op/s
Dec  3 19:00:33 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 19:00:33 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/149615609' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 19:00:33 compute-0 nova_compute[348325]: 2025-12-03 19:00:33.980 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 19:00:34 compute-0 nova_compute[348325]: 2025-12-03 19:00:34.079 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 19:00:34 compute-0 nova_compute[348325]: 2025-12-03 19:00:34.080 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 19:00:34 compute-0 podman[449087]: 2025-12-03 19:00:34.114963272 +0000 UTC m=+0.073615069 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  3 19:00:34 compute-0 nova_compute[348325]: 2025-12-03 19:00:34.395 348329 WARNING nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 19:00:34 compute-0 nova_compute[348325]: 2025-12-03 19:00:34.396 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3783MB free_disk=59.942832946777344GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 19:00:34 compute-0 nova_compute[348325]: 2025-12-03 19:00:34.396 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 19:00:34 compute-0 nova_compute[348325]: 2025-12-03 19:00:34.397 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 19:00:34 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:00:34 compute-0 nova_compute[348325]: 2025-12-03 19:00:34.496 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance a4fc45c7-44e4-4b50-a3e0-98de13268f88 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 19:00:34 compute-0 nova_compute[348325]: 2025-12-03 19:00:34.497 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 19:00:34 compute-0 nova_compute[348325]: 2025-12-03 19:00:34.497 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 19:00:34 compute-0 nova_compute[348325]: 2025-12-03 19:00:34.534 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 19:00:35 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 19:00:35 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1720393911' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 19:00:35 compute-0 nova_compute[348325]: 2025-12-03 19:00:35.038 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 19:00:35 compute-0 nova_compute[348325]: 2025-12-03 19:00:35.049 348329 DEBUG nova.compute.provider_tree [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 19:00:35 compute-0 nova_compute[348325]: 2025-12-03 19:00:35.074 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 19:00:35 compute-0 nova_compute[348325]: 2025-12-03 19:00:35.105 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 19:00:35 compute-0 nova_compute[348325]: 2025-12-03 19:00:35.106 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
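The audit cycle above shells out to `ceph df --format=json` twice because this deployment backs the resource tracker's disk inventory with RBD. A sketch of the kind of derivation made from that output; the "pools"/"stats"/"max_avail" shape is standard `ceph df` JSON, and treating 'vms' as the nova pool is an assumption based on the pool names seen earlier in this log:

```python
import json
import subprocess

# Same command the resource tracker logs above.
out = subprocess.check_output(
    ["ceph", "df", "--format=json", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"])
df = json.loads(out)

for pool in df["pools"]:
    if pool["name"] == "vms":  # ASSUMED images_rbd_pool for this deployment
        print(f"vms free: {pool['stats']['max_avail'] / 2**30:.1f} GiB")
```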
Dec  3 19:00:35 compute-0 nova_compute[348325]: 2025-12-03 19:00:35.250 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:00:35 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1851: 321 pgs: 321 active+clean; 157 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 89 KiB/s rd, 118 KiB/s wr, 19 op/s
Dec  3 19:00:36 compute-0 nova_compute[348325]: 2025-12-03 19:00:36.099 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:00:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 19:00:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/512132530' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 19:00:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 19:00:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/512132530' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 19:00:37 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1852: 321 pgs: 321 active+clean; 157 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 9.3 KiB/s wr, 0 op/s
Dec  3 19:00:37 compute-0 podman[449134]: 2025-12-03 19:00:37.991250532 +0000 UTC m=+0.135741176 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125)
Dec  3 19:00:38 compute-0 podman[449133]: 2025-12-03 19:00:38.034824055 +0000 UTC m=+0.190810959 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec  3 19:00:38 compute-0 nova_compute[348325]: 2025-12-03 19:00:38.454 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:00:39 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
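This _set_new_cache_sizes line repeats every ~5 s in this window: the monitor's cache tuner is holding the total mon cache at 1020054731 bytes (~0.95 GiB), split between incremental osdmaps, full osdmaps, and the RocksDB (kv) block cache. A worked check of the figures above:

    # Figures copied from the _set_new_cache_sizes line above.
    cache_size = 1020054731              # ~0.95 GiB total mon cache target
    inc_alloc = full_alloc = 348127232   # 332 MiB each for inc/full osdmaps
    kv_alloc = 318767104                 # 304 MiB for the RocksDB block cache

    GiB = 1024 ** 3
    MiB = 1024 ** 2
    print(round(cache_size / GiB, 2))  # 0.95
    print(inc_alloc / MiB)             # 332.0
    print(kv_alloc / MiB)              # 304.0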
Dec  3 19:00:39 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #84. Immutable memtables: 0.
Dec  3 19:00:39 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:00:39.479693) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 19:00:39 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 84
Dec  3 19:00:39 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764788439479795, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 1187, "num_deletes": 255, "total_data_size": 1737257, "memory_usage": 1769712, "flush_reason": "Manual Compaction"}
Dec  3 19:00:39 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #85: started
Dec  3 19:00:39 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764788439497196, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 85, "file_size": 1698686, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36863, "largest_seqno": 38049, "table_properties": {"data_size": 1693003, "index_size": 3014, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12085, "raw_average_key_size": 19, "raw_value_size": 1681551, "raw_average_value_size": 2712, "num_data_blocks": 135, "num_entries": 620, "num_filter_entries": 620, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764788329, "oldest_key_time": 1764788329, "file_creation_time": 1764788439, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a1ac3b74-8599-4a51-8b4c-6fd35a134427", "db_session_id": "TYOLZSJOOVNJYKF8Y1CE", "orig_file_number": 85, "seqno_to_time_mapping": "N/A"}}
Dec  3 19:00:39 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 17613 microseconds, and 10256 cpu microseconds.
Dec  3 19:00:39 compute-0 ceph-mon[192802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 19:00:39 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:00:39.497309) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #85: 1698686 bytes OK
Dec  3 19:00:39 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:00:39.497336) [db/memtable_list.cc:519] [default] Level-0 commit table #85 started
Dec  3 19:00:39 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:00:39.500272) [db/memtable_list.cc:722] [default] Level-0 commit table #85: memtable #1 done
Dec  3 19:00:39 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:00:39.500296) EVENT_LOG_v1 {"time_micros": 1764788439500289, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 19:00:39 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:00:39.500319) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 19:00:39 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 1731801, prev total WAL file size 1731801, number of live WAL files 2.
Dec  3 19:00:39 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000081.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 19:00:39 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:00:39.501485) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031323536' seq:72057594037927935, type:22 .. '6C6F676D0031353037' seq:0, type:0; will stop at (end)
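The range endpoints on the manual-compaction line are hex-encoded RocksDB keys. Decoding them shows the monitor is compacting its "logm" (cluster log) keys between sequence numbers 1256 and 1507, consistent with the mon trimming old cluster-log state and then compacting the freed range:

    # Decode the key-range endpoints from the manual-compaction line above.
    start = bytes.fromhex("6C6F676D0031323536")
    end = bytes.fromhex("6C6F676D0031353037")
    print(start)  # b'logm\x001256'
    print(end)    # b'logm\x001507'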
Dec  3 19:00:39 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 19:00:39 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [85(1658KB)], [83(8541KB)]
Dec  3 19:00:39 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764788439501528, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [85], "files_L6": [83], "score": -1, "input_data_size": 10445078, "oldest_snapshot_seqno": -1}
Dec  3 19:00:39 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #86: 5810 keys, 10338445 bytes, temperature: kUnknown
Dec  3 19:00:39 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764788439573403, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 86, "file_size": 10338445, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10297830, "index_size": 25017, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14533, "raw_key_size": 147218, "raw_average_key_size": 25, "raw_value_size": 10191111, "raw_average_value_size": 1754, "num_data_blocks": 1030, "num_entries": 5810, "num_filter_entries": 5810, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764784942, "oldest_key_time": 0, "file_creation_time": 1764788439, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a1ac3b74-8599-4a51-8b4c-6fd35a134427", "db_session_id": "TYOLZSJOOVNJYKF8Y1CE", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Dec  3 19:00:39 compute-0 ceph-mon[192802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 19:00:39 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:00:39.573712) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 10338445 bytes
Dec  3 19:00:39 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:00:39.575709) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 145.1 rd, 143.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 8.3 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(12.2) write-amplify(6.1) OK, records in: 6336, records dropped: 526 output_compression: NoCompression
Dec  3 19:00:39 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:00:39.575732) EVENT_LOG_v1 {"time_micros": 1764788439575722, "job": 48, "event": "compaction_finished", "compaction_time_micros": 72005, "compaction_time_cpu_micros": 34267, "output_level": 6, "num_output_files": 1, "total_output_size": 10338445, "num_input_records": 6336, "num_output_records": 5810, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 19:00:39 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000085.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 19:00:39 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764788439576134, "job": 48, "event": "table_file_deletion", "file_number": 85}
Dec  3 19:00:39 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 19:00:39 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764788439577696, "job": 48, "event": "table_file_deletion", "file_number": 83}
Dec  3 19:00:39 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:00:39.501248) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:00:39 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:00:39.577799) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:00:39 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:00:39.577803) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:00:39 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:00:39.577805) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:00:39 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:00:39.577806) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:00:39 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:00:39.577808) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
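Job 48's summary lines give enough numbers to re-derive its amplification factors: the compaction read one L0 file (#85, 1698686 bytes) plus one L6 file (#83), wrote a single 10338445-byte L6 table (#86), and dropped 526 of 6336 input records. Reproducing the reported write-amplify(6.1) and read-write-amplify(12.2):

    # Byte counts copied from the JOB 48 event lines above.
    l0_in = 1_698_686        # file 85, the freshly flushed L0 table
    total_in = 10_445_078    # input_data_size = files 85 + 83
    l6_in = total_in - l0_in # file 83
    out = 10_338_445         # the new L6 table, file 86

    write_amp = out / l0_in                  # 6.09 -> logged as 6.1
    rw_amp = (l0_in + l6_in + out) / l0_in   # 12.2 -> logged as 12.2
    print(round(write_amp, 1), round(rw_amp, 1))
    # Record count check: 6336 in - 526 dropped = 5810, matching table 86.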
Dec  3 19:00:39 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1853: 321 pgs: 321 active+clean; 157 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 6.3 KiB/s wr, 0 op/s
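ceph-mgr emits one of these pgmap digests roughly every two seconds; throughout this window all 321 PGs stay active+clean, so only the trailing throughput figures change. A small sketch (assuming exactly this line shape) for flagging any interval where some PG leaves active+clean:

    import re

    PGMAP_RE = re.compile(r"pgmap v(\d+): (\d+) pgs: (\d+) active\+clean")

    def all_clean(line: str) -> bool:
        m = PGMAP_RE.search(line)
        # total PG count must equal the active+clean count
        return bool(m) and m.group(2) == m.group(3)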
Dec  3 19:00:40 compute-0 nova_compute[348325]: 2025-12-03 19:00:40.256 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:00:41 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1854: 321 pgs: 321 active+clean; 157 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Dec  3 19:00:43 compute-0 nova_compute[348325]: 2025-12-03 19:00:43.457 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:00:43 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1855: 321 pgs: 321 active+clean; 157 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Dec  3 19:00:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:00:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:00:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:00:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:00:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:00:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:00:44 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:00:45 compute-0 nova_compute[348325]: 2025-12-03 19:00:45.263 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:00:45 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1856: 321 pgs: 321 active+clean; 157 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Dec  3 19:00:47 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1857: 321 pgs: 321 active+clean; 157 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Dec  3 19:00:48 compute-0 nova_compute[348325]: 2025-12-03 19:00:48.459 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:00:49 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:00:49 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1858: 321 pgs: 321 active+clean; 157 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Dec  3 19:00:49 compute-0 podman[449183]: 2025-12-03 19:00:49.938344504 +0000 UTC m=+0.091755721 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 19:00:49 compute-0 podman[449182]: 2025-12-03 19:00:49.958148167 +0000 UTC m=+0.106153222 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Dec  3 19:00:49 compute-0 podman[449184]: 2025-12-03 19:00:49.968991382 +0000 UTC m=+0.113573373 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, architecture=x86_64, vendor=Red Hat, Inc., version=9.6, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
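The three healthchecks at 19:00:49 cover node_exporter (host metrics on 9100, per its 'ports' entry), multipathd, and openstack_network_exporter (OVS/OVN metrics on 9105). Both exporters serve TLS via the web config files mounted from /var/lib/openstack/config/telemetry. A scrape sketch under two explicit assumptions: that the telemetry CA bundle mounted above validates the exporter certificates, and that "compute-0" resolves from wherever this runs:

    import requests  # third-party HTTP client

    # Host-side path of the CA bundle mounted into the telemetry containers
    # (see the volume lists above); that it signed the exporter certs is an
    # assumption here.
    CA = "/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem"

    for port in (9100, 9105):  # node_exporter, openstack_network_exporter
        r = requests.get(f"https://compute-0:{port}/metrics", verify=CA, timeout=5)
        print(port, r.status_code, len(r.text.splitlines()), "metric lines")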
Dec  3 19:00:50 compute-0 nova_compute[348325]: 2025-12-03 19:00:50.268 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:00:51 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1859: 321 pgs: 321 active+clean; 157 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Dec  3 19:00:53 compute-0 nova_compute[348325]: 2025-12-03 19:00:53.462 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:00:53 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1860: 321 pgs: 321 active+clean; 157 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.7 KiB/s wr, 0 op/s
Dec  3 19:00:53 compute-0 podman[449244]: 2025-12-03 19:00:53.930895543 +0000 UTC m=+0.096612539 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Dec  3 19:00:53 compute-0 podman[449245]: 2025-12-03 19:00:53.933126118 +0000 UTC m=+0.093336100 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec  3 19:00:53 compute-0 podman[449243]: 2025-12-03 19:00:53.949122739 +0000 UTC m=+0.117172261 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, io.buildah.version=1.29.0, io.openshift.expose-services=, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, distribution-scope=public, vendor=Red Hat, Inc., version=9.4, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, container_name=kepler, name=ubi9, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  3 19:00:54 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:00:55 compute-0 nova_compute[348325]: 2025-12-03 19:00:55.273 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:00:55 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1861: 321 pgs: 321 active+clean; 157 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 682 B/s wr, 0 op/s
Dec  3 19:00:57 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1862: 321 pgs: 321 active+clean; 157 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 682 B/s wr, 0 op/s
Dec  3 19:00:58 compute-0 nova_compute[348325]: 2025-12-03 19:00:58.465 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:00:59 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:00:59 compute-0 podman[158200]: time="2025-12-03T19:00:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 19:00:59 compute-0 podman[158200]: @ - - [03/Dec/2025:19:00:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 19:00:59 compute-0 podman[158200]: @ - - [03/Dec/2025:19:00:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8647 "" "Go-http-client/1.1"
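These two lines are podman's REST API service answering list and stats queries (the last=0 parameter on the list call is what triggers the informational message above them). The same endpoint can be queried directly over the socket that podman_exporter mounts (unix:///run/podman/podman.sock, per its config later in this log); a stdlib-only sketch:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client.HTTPConnection over an AF_UNIX socket."""
        def __init__(self, path):
            super().__init__("localhost")  # host header only; not used to connect
            self.unix_path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.unix_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")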
Dec  3 19:00:59 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1863: 321 pgs: 321 active+clean; 157 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 682 B/s wr, 0 op/s
Dec  3 19:01:00 compute-0 nova_compute[348325]: 2025-12-03 19:01:00.276 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:01:01 compute-0 openstack_network_exporter[365222]: ERROR   19:01:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 19:01:01 compute-0 openstack_network_exporter[365222]: ERROR   19:01:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:01:01 compute-0 openstack_network_exporter[365222]: ERROR   19:01:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:01:01 compute-0 openstack_network_exporter[365222]: ERROR   19:01:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 19:01:01 compute-0 openstack_network_exporter[365222]: ERROR   19:01:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
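openstack_network_exporter drives the daemons through ovs/ovn-appctl-style control sockets. Here it finds none for ovsdb-server or ovn-northd (unsurprising on a compute node, where ovn-northd typically runs on the controllers, not under this container's mounted /run paths), and the two dpif-netdev calls fail because no userspace (netdev) datapath exists. A quick existence check for the sockets it is looking for, assuming the /run/openvswitch and ovn run-directory mounts from its config above:

    import glob

    # Control sockets are conventionally named <daemon>.<pid>.ctl under the
    # run directories that the exporter container mounts (see its config_data
    # above).
    for pattern in ("/var/run/openvswitch/*.ctl", "/var/lib/openvswitch/ovn/*.ctl"):
        print(pattern, "->", glob.glob(pattern) or "no control sockets")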
Dec  3 19:01:01 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1864: 321 pgs: 321 active+clean; 157 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Dec  3 19:01:03 compute-0 nova_compute[348325]: 2025-12-03 19:01:03.468 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:01:03 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1865: 321 pgs: 321 active+clean; 157 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Dec  3 19:01:04 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:01:04 compute-0 podman[449308]: 2025-12-03 19:01:04.966379984 +0000 UTC m=+0.124382467 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 19:01:05 compute-0 nova_compute[348325]: 2025-12-03 19:01:05.281 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:01:05 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1866: 321 pgs: 321 active+clean; 157 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec  3 19:01:07 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1867: 321 pgs: 321 active+clean; 157 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 341 B/s wr, 0 op/s
Dec  3 19:01:08 compute-0 nova_compute[348325]: 2025-12-03 19:01:08.472 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:01:08 compute-0 podman[449330]: 2025-12-03 19:01:08.922941775 +0000 UTC m=+0.087726012 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Dec  3 19:01:08 compute-0 podman[449329]: 2025-12-03 19:01:08.972901665 +0000 UTC m=+0.141497375 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:01:09 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:01:09 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1868: 321 pgs: 321 active+clean; 157 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 341 B/s wr, 0 op/s
Dec  3 19:01:10 compute-0 nova_compute[348325]: 2025-12-03 19:01:10.284 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:01:11 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1869: 321 pgs: 321 active+clean; 157 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 341 B/s wr, 0 op/s
Dec  3 19:01:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:13.254 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them. Therefore, one can expect the polling process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 19:01:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:13.254 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  3 19:01:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:13.254 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:01:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:13.255 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7eff8d7fffe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:01:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:13.255 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:01:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:13.256 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff9026f920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:01:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:13.257 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:01:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:13.257 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:01:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:13.257 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffa10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:01:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:13.257 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8daba2d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:01:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:13.258 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:01:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:13.258 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff90799b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:01:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:13.258 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:01:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:13.258 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8f46ebd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:01:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:13.259 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:01:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:13.259 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:01:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:13.259 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:01:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:13.259 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:01:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:13.260 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:01:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:13.260 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:01:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:13.260 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:01:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:13.260 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:01:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:13.261 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:01:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:13.261 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:01:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:13.261 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:01:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:13.262 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7fff50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:01:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:13.262 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:01:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:13.262 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7fffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:01:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:13.263 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8ef7c7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8eee8950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:01:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:13.265 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance a4fc45c7-44e4-4b50-a3e0-98de13268f88 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec  3 19:01:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:13.266 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/a4fc45c7-44e4-4b50-a3e0-98de13268f88 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}381125532ab0338283f553a8d9011c877e61445a70740cb69aa0e3ed00495f3c" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec  3 19:01:13 compute-0 nova_compute[348325]: 2025-12-03 19:01:13.476 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:01:13 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1870: 321 pgs: 321 active+clean; 157 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  3 19:01:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:01:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:01:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:01:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:01:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:01:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:01:14 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_19:01:14
Dec  3 19:01:14 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 19:01:14 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 19:01:14 compute-0 ceph-mgr[193091]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.meta', 'vms', 'default.rgw.log', '.mgr', '.rgw.root', 'volumes', 'backups', 'default.rgw.control', 'images', 'cephfs.cephfs.data']
Dec  3 19:01:14 compute-0 ceph-mgr[193091]: [balancer INFO root] prepared 0/10 changes
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.228 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1832 Content-Type: application/json Date: Wed, 03 Dec 2025 19:01:13 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-41dd4435-d690-46d0-a556-81435e5c4280 x-openstack-request-id: req-41dd4435-d690-46d0-a556-81435e5c4280 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.228 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "a4fc45c7-44e4-4b50-a3e0-98de13268f88", "name": "te-0714371-asg-eacwc356yfed-wjjibmhqaqmp-wkbbxaqu3pya", "status": "ACTIVE", "tenant_id": "d29cef7b24ee4d30b2b3f5027ec6aafb", "user_id": "5b5e6c2a7cce4e3b96611203def80123", "metadata": {"metering.server_group": "d721c97c-b9eb-44f9-a826-1b99239b172a"}, "hostId": "d87badab98086e7cd0aaefe9beb8cbc86d59712043f354b2bb8c77be", "image": {"id": "29e9e995-880d-46f8-bdd0-149d4e107ea9", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/29e9e995-880d-46f8-bdd0-149d4e107ea9"}]}, "flavor": {"id": "a94cfbfb-a20a-4689-ac91-e7436db75880", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/a94cfbfb-a20a-4689-ac91-e7436db75880"}]}, "created": "2025-12-03T18:59:27Z", "updated": "2025-12-03T18:59:46Z", "addresses": {"": [{"version": 4, "addr": "10.100.3.160", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:8d:91:4c"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/a4fc45c7-44e4-4b50-a3e0-98de13268f88"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/a4fc45c7-44e4-4b50-a3e0-98de13268f88"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-03T18:59:46.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "default"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000c", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.228 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/a4fc45c7-44e4-4b50-a3e0-98de13268f88 used request id req-41dd4435-d690-46d0-a556-81435e5c4280 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.230 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a4fc45c7-44e4-4b50-a3e0-98de13268f88', 'name': 'te-0714371-asg-eacwc356yfed-wjjibmhqaqmp-wkbbxaqu3pya', 'flavor': {'id': 'a94cfbfb-a20a-4689-ac91-e7436db75880', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '29e9e995-880d-46f8-bdd0-149d4e107ea9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000c', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'd29cef7b24ee4d30b2b3f5027ec6aafb', 'user_id': '5b5e6c2a7cce4e3b96611203def80123', 'hostId': 'd87badab98086e7cd0aaefe9beb8cbc86d59712043f354b2bb8c77be', 'status': 'active', 'metadata': {'metering.server_group': 'd721c97c-b9eb-44f9-a826-1b99239b172a'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
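The RESP/GET pair shows ceilometer's discovery step fetching the instance record from the Nova API through a keystoneauth1 session, then condensing it into the instance data it caches for the pollsters that follow. The same request can be reproduced with novaclient; the auth URL, credentials, and project below are placeholders, not values from this deployment:

    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from novaclient import client as nova_client

    # Placeholder credentials; only the server UUID is taken from the log.
    auth = v3.Password(auth_url="https://keystone.example.com/v3",
                       username="ceilometer", password="secret",
                       project_name="service",
                       user_domain_name="Default",
                       project_domain_name="Default")
    nova = nova_client.Client("2.1", session=session.Session(auth=auth))

    server = nova.servers.get("a4fc45c7-44e4-4b50-a3e0-98de13268f88")
    print(server.status, getattr(server, "OS-EXT-SRV-ATTR:instance_name"))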
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.231 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.231 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.232 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.232 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.234 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-03T19:01:14.232236) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.240 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for a4fc45c7-44e4-4b50-a3e0-98de13268f88 / tapcf729fa8-95 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.241 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.242 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
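Every meter below repeats the same cycle just completed for network.incoming.packets.error: discovery, a coordination check, a heartbeat update, the samples themselves, and a "Finished polling" line. Coordination only applies when a polling source is tied to a tooz hashring so several agents can partition the instances between them; with no group configured ("coordination group name [None]"), this agent simply keeps every locally discovered instance. A hypothetical sketch of that decision:

    def resources_for_this_agent(resources, group_name, partitioner=None):
        # No coordination group configured: the agent polls everything,
        # which is the branch taken in the log lines above.
        if group_name is None or partitioner is None:
            return list(resources)
        # With a tooz-style partitioner, each agent keeps only the
        # resources that hash onto its own slice of the ring.
        return [r for r in resources
                if partitioner.belongs_to_self(str(r))]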
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.243 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7eff8d8a80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.243 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.243 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.243 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.244 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.244 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.244 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-03T19:01:14.244003) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.245 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.246 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7eff8d8a8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.246 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.246 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff9026f920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.247 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff9026f920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.247 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.248 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-03T19:01:14.247331) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.247 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.248 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.249 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7eff8d8a8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.249 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.249 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.249 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.250 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.250 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.251 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-03T19:01:14.250169) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.252 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
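The *.delta meters report the difference between the current cumulative counter and the previous poll's value. The "No delta meter predecessor for a4fc45c7-... / tapcf729fa8-95" line at 19:01:14.240 marks the first poll for that vNIC, which is why the delta samples in this cycle come out as 0. A hypothetical sketch of the bookkeeping:

    # Keep the last cumulative reading per (instance, device) and emit the
    # difference on later polls; names here are illustrative.
    _previous = {}

    def delta_sample(instance_id, device, cumulative):
        key = (instance_id, device)
        if key not in _previous:          # "No delta meter predecessor"
            _previous[key] = cumulative
            return 0                      # first poll reports a zero delta
        value = cumulative - _previous[key]
        _previous[key] = cumulative
        return max(value, 0)              # guard against counter resets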
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.252 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7eff8d8a81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.252 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.252 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.253 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.253 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.253 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-03T19:01:14.253310) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.253 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.254 14 ERROR ceilometer.polling.manager [-] Preventing pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: te-0714371-asg-eacwc356yfed-wjjibmhqaqmp-wkbbxaqu3pya>] on source pollsters from now on: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-0714371-asg-eacwc356yfed-wjjibmhqaqmp-wkbbxaqu3pya>]
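The rate meters depend on the hypervisor inspector supplying precomputed rates, which the libvirt inspector does not ("LibvirtInspector does not provide data"), so rather than failing every cycle the pollster raises PollsterPermanentError and the manager blacklists the instance for that meter. A sketch of that contract; the import path is taken from the traceback above, while the function body is illustrative:

    from ceilometer.polling import plugin_base

    def poll_rate_meter(inspector_data, resources):
        # A pollster that can never serve these resources raises
        # PollsterPermanentError with the failed resources; the polling
        # manager then stops offering them to it instead of retrying.
        if inspector_data is None:
            raise plugin_base.PollsterPermanentError(resources)
        return inspector_data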
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.255 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7eff8d7ff9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.255 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.255 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ffa10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.255 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ffa10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.256 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.256 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/network.incoming.bytes volume: 1352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.257 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
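The per-vNIC byte and packet counters behind these samples come from libvirt interface statistics on the instance's tap device (tapcf729fa8-95 above); how host-side rx/tx maps to guest incoming/outgoing is left aside here. A sketch using libvirt-python:

    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("instance-0000000c")
    # interfaceStats returns cumulative counters for the host-side device:
    # (rx_bytes, rx_packets, rx_errs, rx_drop,
    #  tx_bytes, tx_packets, tx_errs, tx_drop)
    stats = dom.interfaceStats("tapcf729fa8-95")
    print(stats[0], stats[4])   # byte counters feeding the samples above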
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.257 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7eff8d7fe840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.258 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-03T19:01:14.256057) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.258 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.258 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8daba2d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.258 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8daba2d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.259 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.259 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-03T19:01:14.258976) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.276 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.277 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.278 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
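disk.device.capacity emits one sample per block device, and the two volumes line up with the instance's disks: 1073741824 bytes is the m1.nano flavor's 1 GiB root disk, and 509952 bytes is the config drive ("config_drive": "True" in the server record above). libvirt reports the same per-device figures; the device name below is an assumption:

    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("instance-0000000c")
    # blockInfo returns (capacity, allocation, physical) in bytes; the
    # capacity of the root disk should match the 1073741824 sample above.
    capacity, allocation, physical = dom.blockInfo("vda")  # device assumed
    print(capacity, allocation, physical)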
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.278 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7eff8d8a82c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.278 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.278 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a82f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.278 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a82f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.279 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.279 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.279 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.280 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-03T19:01:14.278968) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.280 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7eff8d7ff9b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.280 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.280 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff90799b20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.280 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff90799b20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.280 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.281 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-03T19:01:14.280818) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.310 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/memory.usage volume: 43.62890625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.311 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
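memory.usage is reported in MiB: roughly 43.63 MiB in use out of the flavor's 128 MiB, derived from libvirt's balloon memory statistics. The available-minus-unused formula below mirrors what the libvirt inspector is generally understood to use and is stated here as an assumption:

    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("instance-0000000c")
    stats = dom.memoryStats()                 # balloon statistics, in KiB
    if "available" in stats and "unused" in stats:
        used_mib = (stats["available"] - stats["unused"]) / 1024.0
        print(round(used_mib, 2), "MiB")      # ~43.63 MiB in the poll above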
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.311 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7eff8d8a8350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.311 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.312 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.312 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.312 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.313 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-03T19:01:14.312298) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.312 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.316 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.316 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7eff8f682330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.316 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.316 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8f46ebd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.316 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8f46ebd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.316 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.317 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.317 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.317 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-03T19:01:14.316735) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.317 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.317 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7eff8d7ff4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.317 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.317 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.317 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.318 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.318 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-03T19:01:14.318055) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.357 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.read.bytes volume: 29154304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.357 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.358 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.358 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7eff8d930c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.358 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.358 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ffce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.358 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ffce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.359 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.359 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.359 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-03T19:01:14.359000) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.359 14 ERROR ceilometer.polling.manager [-] Preventing pollster network.incoming.bytes.rate from polling [<NovaLikeServer: te-0714371-asg-eacwc356yfed-wjjibmhqaqmp-wkbbxaqu3pya>] on source pollsters from now on: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-0714371-asg-eacwc356yfed-wjjibmhqaqmp-wkbbxaqu3pya>]
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.360 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7eff8d7ff4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.360 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.360 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.360 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.360 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.361 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.read.latency volume: 1719418496 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.361 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-03T19:01:14.360552) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.361 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.read.latency volume: 125457767 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.361 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
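The read/write latency meters are cumulative device service times in nanoseconds, not instantaneous latencies; 1719418496 ns is about 1.7 s of total read time on the root disk since boot. They map onto libvirt's extended block statistics; the device name below is an assumption:

    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("instance-0000000c")
    stats = dom.blockStatsFlags("vda")        # device name assumed
    print(stats.get("rd_total_times"))        # cumulative read time, ns
    print(stats.get("wr_total_times"))        # cumulative write time, ns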
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.361 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7eff8d7ff530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.362 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.362 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.362 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.362 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.363 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.read.requests volume: 1046 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.363 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-03T19:01:14.362410) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.363 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.363 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.363 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7eff8d7ff590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.364 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.364 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.364 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.364 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.365 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.365 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-03T19:01:14.364402) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.365 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.365 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.365 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7eff8d7ff5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.366 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.366 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.366 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.366 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.367 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.write.bytes volume: 72806400 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.367 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-03T19:01:14.366502) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.367 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.367 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.367 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7eff8d8a8620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.368 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.368 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.368 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.368 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.368 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.369 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-03T19:01:14.368304) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.369 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
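power.state samples the raw libvirt domain state; volume 1 is VIR_DOMAIN_RUNNING, the same value Nova returned as "OS-EXT-STS:power_state": 1 in the server record above. A minimal check:

    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("instance-0000000c")
    state, _reason = dom.state()
    print(state == libvirt.VIR_DOMAIN_RUNNING)   # True while the guest runs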
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.369 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7eff8d7ff650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.369 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.369 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.369 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.370 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.370 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.write.latency volume: 8461903207 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.370 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.370 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-03T19:01:14.369896) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.371 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.371 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7eff8d7ff6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.371 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.371 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.371 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.371 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.372 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.write.requests volume: 307 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.372 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-03T19:01:14.371725) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.372 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.372 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.373 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7eff8d7ffa40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.373 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.373 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ffef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.373 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ffef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.373 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.374 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-03T19:01:14.373591) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.374 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.375 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.375 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7eff8d7ff710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.375 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.375 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.375 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.375 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.376 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-03T19:01:14.375697) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.376 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.376 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7eff8d7fff20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.376 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.377 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7fff50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.377 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7fff50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.377 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.378 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-03T19:01:14.377494) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.378 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/network.incoming.packets volume: 9 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.378 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.378 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7eff8d7ff770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.378 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.378 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff7a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.378 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff7a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.379 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.379 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-03T19:01:14.379025) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.380 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.380 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7eff8d7fff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.380 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.380 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7fffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.380 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7fffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.380 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.381 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-03T19:01:14.380708) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.381 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.381 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.381 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7eff8d7fdac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.381 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.382 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8ef7c7d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.382 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8ef7c7d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.382 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.382 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/cpu volume: 84590000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.382 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-03T19:01:14.382277) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.383 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
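[editor's note] For scale: ceilometer's cpu meter is cumulative guest CPU time in nanoseconds, so volume 84590000000 is 84 590 000 000 / 10^9 ≈ 84.6 s of CPU consumed by instance a4fc45c7-44e4-4b50-a3e0-98de13268f88 since boot; downstream rate transforms turn successive samples of this counter into a utilization figure.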
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.383 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.383 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.383 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.384 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.384 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.384 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.384 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.384 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.384 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.384 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.384 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.385 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.385 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.385 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.385 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.385 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.385 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.385 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.385 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.385 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.386 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.386 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.386 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.386 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.386 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:01:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:01:14.386 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
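[editor's note] The burst of "Finished processing pollster [...]" lines (26 of them, one per meter) is the tail of execute_polling_task_processing: it closes out this interval's polling task for the compute source, after which the agent idles until the next cycle.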
Dec  3 19:01:14 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
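[editor's note] The mon line records its periodic cache auto-tuning: a total budget of 1020054731 B (≈973 MiB) split into incremental-osdmap, full-osdmap and rocksdb (kv) allocations of 348127232 B (332 MiB), 348127232 B (332 MiB) and 318767104 B (304 MiB); 332 + 332 + 304 = 968 MiB, just under the budget.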
Dec  3 19:01:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 19:01:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 19:01:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 19:01:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 19:01:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 19:01:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 19:01:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 19:01:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 19:01:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 19:01:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
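[editor's note] The duplicated load_schedules lines are two rbd_support handlers, MirrorSnapshotScheduleHandler and TrashPurgeScheduleHandler, each reloading per-pool schedule state for the same four pools (vms, volumes, backups, images); the interleaving suggests the two refreshes run concurrently, and the empty start_after= means each scan starts from the beginning of the schedule list.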
Dec  3 19:01:15 compute-0 nova_compute[348325]: 2025-12-03 19:01:15.290 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:01:15 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1871: 321 pgs: 321 active+clean; 157 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  3 19:01:16 compute-0 nova_compute[348325]: 2025-12-03 19:01:16.509 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:01:17 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1872: 321 pgs: 321 active+clean; 157 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  3 19:01:18 compute-0 nova_compute[348325]: 2025-12-03 19:01:18.478 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:01:19 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:01:19 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1873: 321 pgs: 321 active+clean; 157 MiB data, 338 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:01:20 compute-0 nova_compute[348325]: 2025-12-03 19:01:20.293 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:01:20 compute-0 nova_compute[348325]: 2025-12-03 19:01:20.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:01:20 compute-0 podman[449375]: 2025-12-03 19:01:20.94061837 +0000 UTC m=+0.097636714 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  3 19:01:20 compute-0 podman[449376]: 2025-12-03 19:01:20.949777784 +0000 UTC m=+0.089101586 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  3 19:01:20 compute-0 podman[449377]: 2025-12-03 19:01:20.973135953 +0000 UTC m=+0.107982676 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, release=1755695350, build-date=2025-08-20T13:12:41, vcs-type=git, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, architecture=x86_64, container_name=openstack_network_exporter, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., io.openshift.expose-services=)
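[editor's note] These podman records are health-check events: a systemd timer periodically runs each container's configured healthcheck test (the /openstack/healthcheck script bind-mounted from /var/lib/openstack/healthchecks/<service>), and health_status=healthy with health_failing_streak=0 reports a zero exit with no consecutive failures. The long config_data blob is the edpm_ansible container definition echoed back as a label, not extra runtime state.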
Dec  3 19:01:21 compute-0 nova_compute[348325]: 2025-12-03 19:01:21.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:01:21 compute-0 nova_compute[348325]: 2025-12-03 19:01:21.520 348329 DEBUG oslo_concurrency.lockutils [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Acquiring lock "3bb34e64-ac61-46f3-99eb-2fdd346a8ecc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 19:01:21 compute-0 nova_compute[348325]: 2025-12-03 19:01:21.521 348329 DEBUG oslo_concurrency.lockutils [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Lock "3bb34e64-ac61-46f3-99eb-2fdd346a8ecc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 19:01:21 compute-0 nova_compute[348325]: 2025-12-03 19:01:21.544 348329 DEBUG nova.compute.manager [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  3 19:01:21 compute-0 nova_compute[348325]: 2025-12-03 19:01:21.650 348329 DEBUG oslo_concurrency.lockutils [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 19:01:21 compute-0 nova_compute[348325]: 2025-12-03 19:01:21.651 348329 DEBUG oslo_concurrency.lockutils [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 19:01:21 compute-0 nova_compute[348325]: 2025-12-03 19:01:21.662 348329 DEBUG nova.virt.hardware [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  3 19:01:21 compute-0 nova_compute[348325]: 2025-12-03 19:01:21.663 348329 INFO nova.compute.claims [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Claim successful on node compute-0.ctlplane.example.com#033[00m
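[editor's note] The Acquiring/acquired pairs above (and the later "released" lines) are oslo.concurrency's standard named-lock pattern; nova takes one lock keyed on the instance UUID around the whole build and another on "compute_resources" around the resource tracker's claim. A minimal sketch, assuming oslo.concurrency is installed (the lock name is the instance UUID from the log):

    from oslo_concurrency import lockutils

    # Emits the same DEBUG "Acquiring lock ... / Lock ... acquired ... /
    # Lock ... released" lines when oslo logging is configured; the body
    # is the critical section.
    with lockutils.lock('3bb34e64-ac61-46f3-99eb-2fdd346a8ecc'):
        pass  # e.g. claim resources, then build and run the instance

In nova the build path uses the decorator form (a synchronized wrapper keyed on the instance UUID), which goes through the same lockutils machinery and produces identical acquire/release DEBUG lines.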
Dec  3 19:01:21 compute-0 nova_compute[348325]: 2025-12-03 19:01:21.826 348329 DEBUG oslo_concurrency.processutils [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 19:01:21 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1874: 321 pgs: 321 active+clean; 157 MiB data, 338 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:01:22 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 19:01:22 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3602228773' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 19:01:22 compute-0 nova_compute[348325]: 2025-12-03 19:01:22.337 348329 DEBUG oslo_concurrency.processutils [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
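[editor's note] Both the "Running cmd" and the "returned: 0 in 0.511s" lines come from oslo.concurrency's processutils.execute, which nova's RBD image backend uses here to size the Ceph cluster. A sketch of the same call, reusing the client id and conf path from the log (the JSON field name assumes the layout of recent ceph df releases):

    import json
    from oslo_concurrency import processutils

    # Raises ProcessExecutionError on a non-zero exit; returns (stdout, stderr).
    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    total_avail = json.loads(out)['stats']['total_avail_bytes']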
Dec  3 19:01:22 compute-0 nova_compute[348325]: 2025-12-03 19:01:22.352 348329 DEBUG nova.compute.provider_tree [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 19:01:22 compute-0 nova_compute[348325]: 2025-12-03 19:01:22.396 348329 DEBUG nova.scheduler.client.report [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
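[editor's note] The inventory line also encodes the host's schedulable capacity: placement computes capacity as (total − reserved) × allocation_ratio, so this node offers 8 × 4.0 = 32 vCPUs, (7679 − 512) × 1.0 = 7167 MB of RAM, and (59 − 1) × 0.9 ≈ 52 GB of disk to the scheduler.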
Dec  3 19:01:22 compute-0 nova_compute[348325]: 2025-12-03 19:01:22.471 348329 DEBUG oslo_concurrency.lockutils [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.820s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 19:01:22 compute-0 nova_compute[348325]: 2025-12-03 19:01:22.473 348329 DEBUG nova.compute.manager [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  3 19:01:22 compute-0 nova_compute[348325]: 2025-12-03 19:01:22.546 348329 DEBUG nova.compute.manager [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  3 19:01:22 compute-0 nova_compute[348325]: 2025-12-03 19:01:22.547 348329 DEBUG nova.network.neutron [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  3 19:01:22 compute-0 nova_compute[348325]: 2025-12-03 19:01:22.574 348329 INFO nova.virt.libvirt.driver [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  3 19:01:22 compute-0 nova_compute[348325]: 2025-12-03 19:01:22.690 348329 DEBUG nova.compute.manager [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  3 19:01:22 compute-0 nova_compute[348325]: 2025-12-03 19:01:22.814 348329 DEBUG nova.compute.manager [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  3 19:01:22 compute-0 nova_compute[348325]: 2025-12-03 19:01:22.816 348329 DEBUG nova.virt.libvirt.driver [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  3 19:01:22 compute-0 nova_compute[348325]: 2025-12-03 19:01:22.817 348329 INFO nova.virt.libvirt.driver [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Creating image(s)#033[00m
Dec  3 19:01:22 compute-0 nova_compute[348325]: 2025-12-03 19:01:22.867 348329 DEBUG nova.storage.rbd_utils [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] rbd image 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 19:01:22 compute-0 nova_compute[348325]: 2025-12-03 19:01:22.929 348329 DEBUG nova.storage.rbd_utils [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] rbd image 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 19:01:22 compute-0 nova_compute[348325]: 2025-12-03 19:01:22.985 348329 DEBUG nova.storage.rbd_utils [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] rbd image 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 19:01:23 compute-0 nova_compute[348325]: 2025-12-03 19:01:22.999 348329 DEBUG oslo_concurrency.processutils [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5cd3db9bb272569bd3ad2bd1318028e61915b864 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 19:01:23 compute-0 nova_compute[348325]: 2025-12-03 19:01:23.104 348329 DEBUG nova.policy [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8fabb3dd3b1c42b491c99a1274242f68', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '014032eeba1145f99481402acd561743', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
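[editor's note] The policy-check "failed" line appears to be informational rather than an error: nova probes network:attach_external_network with the requester's credentials (roles member,reader here) to decide whether external networks are eligible during allocation, and for a non-admin user the probe simply comes back negative; the build continues on the following lines.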
Dec  3 19:01:23 compute-0 nova_compute[348325]: 2025-12-03 19:01:23.112 348329 DEBUG oslo_concurrency.processutils [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5cd3db9bb272569bd3ad2bd1318028e61915b864 --force-share --output=json" returned: 0 in 0.113s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
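[editor's note] The qemu-img probe is deliberately sandboxed: oslo.concurrency re-executes it under python -m oslo_concurrency.prlimit so the 1 GiB address-space cap (--as=1073741824) and the 30 s CPU cap apply to qemu-img alone, guarding against malformed images. The same call through the library API, as a sketch (the image path is the one from the log):

    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(address_space=1024 ** 3,  # 1073741824 B
                                        cpu_time=30)
    out, _err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info',
        '/var/lib/nova/instances/_base/5cd3db9bb272569bd3ad2bd1318028e61915b864',
        '--force-share', '--output=json',
        prlimit=limits)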
Dec  3 19:01:23 compute-0 nova_compute[348325]: 2025-12-03 19:01:23.114 348329 DEBUG oslo_concurrency.lockutils [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Acquiring lock "5cd3db9bb272569bd3ad2bd1318028e61915b864" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 19:01:23 compute-0 nova_compute[348325]: 2025-12-03 19:01:23.116 348329 DEBUG oslo_concurrency.lockutils [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Lock "5cd3db9bb272569bd3ad2bd1318028e61915b864" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 19:01:23 compute-0 nova_compute[348325]: 2025-12-03 19:01:23.117 348329 DEBUG oslo_concurrency.lockutils [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Lock "5cd3db9bb272569bd3ad2bd1318028e61915b864" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 19:01:23 compute-0 nova_compute[348325]: 2025-12-03 19:01:23.160 348329 DEBUG nova.storage.rbd_utils [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] rbd image 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 19:01:23 compute-0 nova_compute[348325]: 2025-12-03 19:01:23.171 348329 DEBUG oslo_concurrency.processutils [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/5cd3db9bb272569bd3ad2bd1318028e61915b864 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 19:01:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:01:23.358 286999 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 19:01:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:01:23.359 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 19:01:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:01:23.360 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 19:01:23 compute-0 nova_compute[348325]: 2025-12-03 19:01:23.479 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:01:23 compute-0 nova_compute[348325]: 2025-12-03 19:01:23.549 348329 DEBUG oslo_concurrency.processutils [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/5cd3db9bb272569bd3ad2bd1318028e61915b864 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.378s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 19:01:23 compute-0 nova_compute[348325]: 2025-12-03 19:01:23.677 348329 DEBUG nova.storage.rbd_utils [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] resizing rbd image 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
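[editor's note] Image creation here takes the Ceph-backed path: the cached base image is pushed into the vms pool with rbd import, then grown to the flavor's root-disk size, 1073741824 B = 1 GiB. The equivalent resize via the python-rbd bindings, as a sketch (assumes python3-rados/python3-rbd and the same client.openstack keyring):

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.openstack')
    cluster.connect()
    ioctx = cluster.open_ioctx('vms')
    try:
        with rbd.Image(ioctx, '3bb34e64-ac61-46f3-99eb-2fdd346a8ecc_disk') as image:
            image.resize(1024 ** 3)  # 1 GiB, matching the logged target size
    finally:
        ioctx.close()
        cluster.shutdown()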
Dec  3 19:01:23 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1875: 321 pgs: 321 active+clean; 157 MiB data, 338 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:01:23 compute-0 nova_compute[348325]: 2025-12-03 19:01:23.904 348329 DEBUG nova.objects.instance [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Lazy-loading 'migration_context' on Instance uuid 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 19:01:23 compute-0 nova_compute[348325]: 2025-12-03 19:01:23.928 348329 DEBUG nova.virt.libvirt.driver [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  3 19:01:23 compute-0 nova_compute[348325]: 2025-12-03 19:01:23.928 348329 DEBUG nova.virt.libvirt.driver [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Ensure instance console log exists: /var/lib/nova/instances/3bb34e64-ac61-46f3-99eb-2fdd346a8ecc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  3 19:01:23 compute-0 nova_compute[348325]: 2025-12-03 19:01:23.929 348329 DEBUG oslo_concurrency.lockutils [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 19:01:23 compute-0 nova_compute[348325]: 2025-12-03 19:01:23.929 348329 DEBUG oslo_concurrency.lockutils [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 19:01:23 compute-0 nova_compute[348325]: 2025-12-03 19:01:23.930 348329 DEBUG oslo_concurrency.lockutils [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 19:01:24 compute-0 nova_compute[348325]: 2025-12-03 19:01:24.109 348329 DEBUG nova.network.neutron [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Successfully created port: 92566cef-01e0-4398-bbab-0b7049af2e6b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  3 19:01:24 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:01:24 compute-0 nova_compute[348325]: 2025-12-03 19:01:24.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:01:24 compute-0 nova_compute[348325]: 2025-12-03 19:01:24.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:01:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 19:01:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:01:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 19:01:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:01:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007578104650973498 of space, bias 1.0, pg target 0.22734313952920493 quantized to 32 (current 32)
Dec  3 19:01:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:01:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:01:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:01:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:01:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:01:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Dec  3 19:01:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:01:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 19:01:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:01:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:01:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:01:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 19:01:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:01:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 19:01:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:01:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:01:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:01:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
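[editor's note] Every pg target above reproduces as capacity_ratio × bias × N with N = 300, consistent with mon_target_pg_per_osd (default 100) times 3 OSDs in this 60 GiB cluster (the OSD count is an inference, but it fits all ten pools): 0.0007578104650973498 × 1.0 × 300 = 0.22734… for vms, and 5.087256625643029e-07 × 4.0 × 300 = 0.00061047… for cephfs.cephfs.meta, exactly the logged values. The 64411926528 in the effective_target_ratio lines is the same raw capacity in bytes (≈60 GiB). The raw target is then quantized to a power of two and left at the pool's current pg_num when the change does not cross the autoscaler's threshold, hence "quantized to 32 (current 32)" almost everywhere.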
Dec  3 19:01:24 compute-0 podman[449623]: 2025-12-03 19:01:24.963267114 +0000 UTC m=+0.120062162 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, architecture=x86_64, io.buildah.version=1.29.0, name=ubi9, release=1214.1726694543, vcs-type=git, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec  3 19:01:24 compute-0 podman[449625]: 2025-12-03 19:01:24.973693579 +0000 UTC m=+0.108054318 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec  3 19:01:24 compute-0 podman[449624]: 2025-12-03 19:01:24.985868755 +0000 UTC m=+0.138841059 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec  3 19:01:25 compute-0 nova_compute[348325]: 2025-12-03 19:01:25.298 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:01:25 compute-0 nova_compute[348325]: 2025-12-03 19:01:25.488 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:01:25 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1876: 321 pgs: 321 active+clean; 185 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 6.6 KiB/s rd, 1.4 MiB/s wr, 13 op/s
Dec  3 19:01:26 compute-0 nova_compute[348325]: 2025-12-03 19:01:26.141 348329 DEBUG nova.network.neutron [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Successfully updated port: 92566cef-01e0-4398-bbab-0b7049af2e6b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  3 19:01:26 compute-0 nova_compute[348325]: 2025-12-03 19:01:26.602 348329 DEBUG oslo_concurrency.lockutils [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Acquiring lock "refresh_cache-3bb34e64-ac61-46f3-99eb-2fdd346a8ecc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 19:01:26 compute-0 nova_compute[348325]: 2025-12-03 19:01:26.602 348329 DEBUG oslo_concurrency.lockutils [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Acquired lock "refresh_cache-3bb34e64-ac61-46f3-99eb-2fdd346a8ecc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 19:01:26 compute-0 nova_compute[348325]: 2025-12-03 19:01:26.603 348329 DEBUG nova.network.neutron [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  3 19:01:26 compute-0 nova_compute[348325]: 2025-12-03 19:01:26.677 348329 DEBUG nova.compute.manager [req-9d2e9fef-9b8e-4016-9681-61dea424464e req-6221414a-90d2-4f9a-8d64-3ac2ace8b476 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Received event network-changed-92566cef-01e0-4398-bbab-0b7049af2e6b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 19:01:26 compute-0 nova_compute[348325]: 2025-12-03 19:01:26.678 348329 DEBUG nova.compute.manager [req-9d2e9fef-9b8e-4016-9681-61dea424464e req-6221414a-90d2-4f9a-8d64-3ac2ace8b476 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Refreshing instance network info cache due to event network-changed-92566cef-01e0-4398-bbab-0b7049af2e6b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  3 19:01:26 compute-0 nova_compute[348325]: 2025-12-03 19:01:26.678 348329 DEBUG oslo_concurrency.lockutils [req-9d2e9fef-9b8e-4016-9681-61dea424464e req-6221414a-90d2-4f9a-8d64-3ac2ace8b476 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "refresh_cache-3bb34e64-ac61-46f3-99eb-2fdd346a8ecc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 19:01:27 compute-0 nova_compute[348325]: 2025-12-03 19:01:27.093 348329 DEBUG nova.network.neutron [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  3 19:01:27 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1877: 321 pgs: 321 active+clean; 196 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 6.6 KiB/s rd, 1.6 MiB/s wr, 14 op/s
Dec  3 19:01:28 compute-0 nova_compute[348325]: 2025-12-03 19:01:28.088 348329 DEBUG nova.network.neutron [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Updating instance_info_cache with network_info: [{"id": "92566cef-01e0-4398-bbab-0b7049af2e6b", "address": "fa:16:3e:ec:63:6e", "network": {"id": "d9057d7e-a146-4d5d-b454-162ed672215e", "bridge": "br-int", "label": "tempest-network-smoke--493824106", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "014032eeba1145f99481402acd561743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92566cef-01", "ovs_interfaceid": "92566cef-01e0-4398-bbab-0b7049af2e6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 19:01:28 compute-0 nova_compute[348325]: 2025-12-03 19:01:28.113 348329 DEBUG oslo_concurrency.lockutils [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Releasing lock "refresh_cache-3bb34e64-ac61-46f3-99eb-2fdd346a8ecc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 19:01:28 compute-0 nova_compute[348325]: 2025-12-03 19:01:28.113 348329 DEBUG nova.compute.manager [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Instance network_info: |[{"id": "92566cef-01e0-4398-bbab-0b7049af2e6b", "address": "fa:16:3e:ec:63:6e", "network": {"id": "d9057d7e-a146-4d5d-b454-162ed672215e", "bridge": "br-int", "label": "tempest-network-smoke--493824106", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "014032eeba1145f99481402acd561743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92566cef-01", "ovs_interfaceid": "92566cef-01e0-4398-bbab-0b7049af2e6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  3 19:01:28 compute-0 nova_compute[348325]: 2025-12-03 19:01:28.114 348329 DEBUG oslo_concurrency.lockutils [req-9d2e9fef-9b8e-4016-9681-61dea424464e req-6221414a-90d2-4f9a-8d64-3ac2ace8b476 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquired lock "refresh_cache-3bb34e64-ac61-46f3-99eb-2fdd346a8ecc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 19:01:28 compute-0 nova_compute[348325]: 2025-12-03 19:01:28.114 348329 DEBUG nova.network.neutron [req-9d2e9fef-9b8e-4016-9681-61dea424464e req-6221414a-90d2-4f9a-8d64-3ac2ace8b476 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Refreshing network info cache for port 92566cef-01e0-4398-bbab-0b7049af2e6b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  3 19:01:28 compute-0 nova_compute[348325]: 2025-12-03 19:01:28.119 348329 DEBUG nova.virt.libvirt.driver [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Start _get_guest_xml network_info=[{"id": "92566cef-01e0-4398-bbab-0b7049af2e6b", "address": "fa:16:3e:ec:63:6e", "network": {"id": "d9057d7e-a146-4d5d-b454-162ed672215e", "bridge": "br-int", "label": "tempest-network-smoke--493824106", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "014032eeba1145f99481402acd561743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92566cef-01", "ovs_interfaceid": "92566cef-01e0-4398-bbab-0b7049af2e6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-03T18:56:32Z,direct_url=<?>,disk_format='qcow2',id=55982930-937b-484e-96ee-69e406a48023,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d2770200bdb2436c90142fa2e5ddcd47',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-03T18:56:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'encryption_format': None, 'guest_format': None, 'disk_bus': 'virtio', 'size': 0, 'boot_index': 0, 'encryption_options': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'image_id': '55982930-937b-484e-96ee-69e406a48023'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  3 19:01:28 compute-0 nova_compute[348325]: 2025-12-03 19:01:28.137 348329 WARNING nova.virt.libvirt.driver [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 19:01:28 compute-0 nova_compute[348325]: 2025-12-03 19:01:28.150 348329 DEBUG nova.virt.libvirt.host [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  3 19:01:28 compute-0 nova_compute[348325]: 2025-12-03 19:01:28.151 348329 DEBUG nova.virt.libvirt.host [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  3 19:01:28 compute-0 nova_compute[348325]: 2025-12-03 19:01:28.157 348329 DEBUG nova.virt.libvirt.host [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  3 19:01:28 compute-0 nova_compute[348325]: 2025-12-03 19:01:28.157 348329 DEBUG nova.virt.libvirt.host [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  3 19:01:28 compute-0 nova_compute[348325]: 2025-12-03 19:01:28.158 348329 DEBUG nova.virt.libvirt.driver [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  3 19:01:28 compute-0 nova_compute[348325]: 2025-12-03 19:01:28.158 348329 DEBUG nova.virt.hardware [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-03T18:56:30Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a94cfbfb-a20a-4689-ac91-e7436db75880',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-03T18:56:32Z,direct_url=<?>,disk_format='qcow2',id=55982930-937b-484e-96ee-69e406a48023,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d2770200bdb2436c90142fa2e5ddcd47',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-03T18:56:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  3 19:01:28 compute-0 nova_compute[348325]: 2025-12-03 19:01:28.159 348329 DEBUG nova.virt.hardware [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  3 19:01:28 compute-0 nova_compute[348325]: 2025-12-03 19:01:28.159 348329 DEBUG nova.virt.hardware [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  3 19:01:28 compute-0 nova_compute[348325]: 2025-12-03 19:01:28.159 348329 DEBUG nova.virt.hardware [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  3 19:01:28 compute-0 nova_compute[348325]: 2025-12-03 19:01:28.160 348329 DEBUG nova.virt.hardware [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  3 19:01:28 compute-0 nova_compute[348325]: 2025-12-03 19:01:28.160 348329 DEBUG nova.virt.hardware [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  3 19:01:28 compute-0 nova_compute[348325]: 2025-12-03 19:01:28.161 348329 DEBUG nova.virt.hardware [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  3 19:01:28 compute-0 nova_compute[348325]: 2025-12-03 19:01:28.161 348329 DEBUG nova.virt.hardware [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  3 19:01:28 compute-0 nova_compute[348325]: 2025-12-03 19:01:28.162 348329 DEBUG nova.virt.hardware [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  3 19:01:28 compute-0 nova_compute[348325]: 2025-12-03 19:01:28.163 348329 DEBUG nova.virt.hardware [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  3 19:01:28 compute-0 nova_compute[348325]: 2025-12-03 19:01:28.163 348329 DEBUG nova.virt.hardware [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  3 19:01:28 compute-0 nova_compute[348325]: 2025-12-03 19:01:28.166 348329 DEBUG oslo_concurrency.processutils [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 19:01:28 compute-0 nova_compute[348325]: 2025-12-03 19:01:28.482 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:01:28 compute-0 nova_compute[348325]: 2025-12-03 19:01:28.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:01:28 compute-0 nova_compute[348325]: 2025-12-03 19:01:28.487 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 19:01:28 compute-0 nova_compute[348325]: 2025-12-03 19:01:28.487 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  3 19:01:28 compute-0 nova_compute[348325]: 2025-12-03 19:01:28.508 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Dec  3 19:01:28 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 19:01:28 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4145284957' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 19:01:28 compute-0 nova_compute[348325]: 2025-12-03 19:01:28.660 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "refresh_cache-a4fc45c7-44e4-4b50-a3e0-98de13268f88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 19:01:28 compute-0 nova_compute[348325]: 2025-12-03 19:01:28.661 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquired lock "refresh_cache-a4fc45c7-44e4-4b50-a3e0-98de13268f88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 19:01:28 compute-0 nova_compute[348325]: 2025-12-03 19:01:28.661 348329 DEBUG nova.network.neutron [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  3 19:01:28 compute-0 nova_compute[348325]: 2025-12-03 19:01:28.662 348329 DEBUG nova.objects.instance [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lazy-loading 'info_cache' on Instance uuid a4fc45c7-44e4-4b50-a3e0-98de13268f88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 19:01:28 compute-0 nova_compute[348325]: 2025-12-03 19:01:28.671 348329 DEBUG oslo_concurrency.processutils [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 19:01:28 compute-0 nova_compute[348325]: 2025-12-03 19:01:28.723 348329 DEBUG nova.storage.rbd_utils [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] rbd image 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 19:01:28 compute-0 nova_compute[348325]: 2025-12-03 19:01:28.740 348329 DEBUG oslo_concurrency.processutils [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 19:01:29 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 19:01:29 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 19:01:29 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 19:01:29 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 19:01:29 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 19:01:29 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:01:29 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 3cfe8fed-f519-4696-83de-315bc03c6fae does not exist
Dec  3 19:01:29 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev ea91c9a5-e87a-4f4a-84c2-e4cc16325250 does not exist
Dec  3 19:01:29 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev bfc3d6e6-e812-4cfc-977e-fccb51d74ffa does not exist
Dec  3 19:01:29 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 19:01:29 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 19:01:29 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 19:01:29 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3837511146' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 19:01:29 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 19:01:29 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 19:01:29 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 19:01:29 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 19:01:29 compute-0 nova_compute[348325]: 2025-12-03 19:01:29.213 348329 DEBUG oslo_concurrency.processutils [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 19:01:29 compute-0 nova_compute[348325]: 2025-12-03 19:01:29.215 348329 DEBUG nova.virt.libvirt.vif [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T19:01:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-127548925',display_name='tempest-TestNetworkBasicOps-server-127548925',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-127548925',id=13,image_ref='55982930-937b-484e-96ee-69e406a48023',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI7rF1psYhD8tU9znnWXqUeQ6mnu/Zs10NhYFDe3X+4zojSzevNC27h/7cu/TNcDKRquyHQ51V1La4K+wMiQHVkejByxABkEgQekf3AyU+0qmLU9mTIvdAbamP7MzcryHQ==',key_name='tempest-TestNetworkBasicOps-1544093464',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='014032eeba1145f99481402acd561743',ramdisk_id='',reservation_id='r-xf0avdq0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='55982930-937b-484e-96ee-69e406a48023',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1083905166',owner_user_name='tempest-TestNetworkBasicOps-1083905166-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T19:01:22Z,user_data=None,user_id='8fabb3dd3b1c42b491c99a1274242f68',uuid=3bb34e64-ac61-46f3-99eb-2fdd346a8ecc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "92566cef-01e0-4398-bbab-0b7049af2e6b", "address": "fa:16:3e:ec:63:6e", "network": {"id": "d9057d7e-a146-4d5d-b454-162ed672215e", "bridge": "br-int", "label": "tempest-network-smoke--493824106", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "014032eeba1145f99481402acd561743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92566cef-01", "ovs_interfaceid": "92566cef-01e0-4398-bbab-0b7049af2e6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  3 19:01:29 compute-0 nova_compute[348325]: 2025-12-03 19:01:29.216 348329 DEBUG nova.network.os_vif_util [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Converting VIF {"id": "92566cef-01e0-4398-bbab-0b7049af2e6b", "address": "fa:16:3e:ec:63:6e", "network": {"id": "d9057d7e-a146-4d5d-b454-162ed672215e", "bridge": "br-int", "label": "tempest-network-smoke--493824106", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "014032eeba1145f99481402acd561743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92566cef-01", "ovs_interfaceid": "92566cef-01e0-4398-bbab-0b7049af2e6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 19:01:29 compute-0 nova_compute[348325]: 2025-12-03 19:01:29.216 348329 DEBUG nova.network.os_vif_util [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:63:6e,bridge_name='br-int',has_traffic_filtering=True,id=92566cef-01e0-4398-bbab-0b7049af2e6b,network=Network(d9057d7e-a146-4d5d-b454-162ed672215e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92566cef-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 19:01:29 compute-0 nova_compute[348325]: 2025-12-03 19:01:29.219 348329 DEBUG nova.objects.instance [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 19:01:29 compute-0 nova_compute[348325]: 2025-12-03 19:01:29.232 348329 DEBUG nova.virt.libvirt.driver [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] End _get_guest_xml xml=<domain type="kvm">
Dec  3 19:01:29 compute-0 nova_compute[348325]:  <uuid>3bb34e64-ac61-46f3-99eb-2fdd346a8ecc</uuid>
Dec  3 19:01:29 compute-0 nova_compute[348325]:  <name>instance-0000000d</name>
Dec  3 19:01:29 compute-0 nova_compute[348325]:  <memory>131072</memory>
Dec  3 19:01:29 compute-0 nova_compute[348325]:  <vcpu>1</vcpu>
Dec  3 19:01:29 compute-0 nova_compute[348325]:  <metadata>
Dec  3 19:01:29 compute-0 nova_compute[348325]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  3 19:01:29 compute-0 nova_compute[348325]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  3 19:01:29 compute-0 nova_compute[348325]:      <nova:name>tempest-TestNetworkBasicOps-server-127548925</nova:name>
Dec  3 19:01:29 compute-0 nova_compute[348325]:      <nova:creationTime>2025-12-03 19:01:28</nova:creationTime>
Dec  3 19:01:29 compute-0 nova_compute[348325]:      <nova:flavor name="m1.nano">
Dec  3 19:01:29 compute-0 nova_compute[348325]:        <nova:memory>128</nova:memory>
Dec  3 19:01:29 compute-0 nova_compute[348325]:        <nova:disk>1</nova:disk>
Dec  3 19:01:29 compute-0 nova_compute[348325]:        <nova:swap>0</nova:swap>
Dec  3 19:01:29 compute-0 nova_compute[348325]:        <nova:ephemeral>0</nova:ephemeral>
Dec  3 19:01:29 compute-0 nova_compute[348325]:        <nova:vcpus>1</nova:vcpus>
Dec  3 19:01:29 compute-0 nova_compute[348325]:      </nova:flavor>
Dec  3 19:01:29 compute-0 nova_compute[348325]:      <nova:owner>
Dec  3 19:01:29 compute-0 nova_compute[348325]:        <nova:user uuid="8fabb3dd3b1c42b491c99a1274242f68">tempest-TestNetworkBasicOps-1083905166-project-member</nova:user>
Dec  3 19:01:29 compute-0 nova_compute[348325]:        <nova:project uuid="014032eeba1145f99481402acd561743">tempest-TestNetworkBasicOps-1083905166</nova:project>
Dec  3 19:01:29 compute-0 nova_compute[348325]:      </nova:owner>
Dec  3 19:01:29 compute-0 nova_compute[348325]:      <nova:root type="image" uuid="55982930-937b-484e-96ee-69e406a48023"/>
Dec  3 19:01:29 compute-0 nova_compute[348325]:      <nova:ports>
Dec  3 19:01:29 compute-0 nova_compute[348325]:        <nova:port uuid="92566cef-01e0-4398-bbab-0b7049af2e6b">
Dec  3 19:01:29 compute-0 nova_compute[348325]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Dec  3 19:01:29 compute-0 nova_compute[348325]:        </nova:port>
Dec  3 19:01:29 compute-0 nova_compute[348325]:      </nova:ports>
Dec  3 19:01:29 compute-0 nova_compute[348325]:    </nova:instance>
Dec  3 19:01:29 compute-0 nova_compute[348325]:  </metadata>
Dec  3 19:01:29 compute-0 nova_compute[348325]:  <sysinfo type="smbios">
Dec  3 19:01:29 compute-0 nova_compute[348325]:    <system>
Dec  3 19:01:29 compute-0 nova_compute[348325]:      <entry name="manufacturer">RDO</entry>
Dec  3 19:01:29 compute-0 nova_compute[348325]:      <entry name="product">OpenStack Compute</entry>
Dec  3 19:01:29 compute-0 nova_compute[348325]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  3 19:01:29 compute-0 nova_compute[348325]:      <entry name="serial">3bb34e64-ac61-46f3-99eb-2fdd346a8ecc</entry>
Dec  3 19:01:29 compute-0 nova_compute[348325]:      <entry name="uuid">3bb34e64-ac61-46f3-99eb-2fdd346a8ecc</entry>
Dec  3 19:01:29 compute-0 nova_compute[348325]:      <entry name="family">Virtual Machine</entry>
Dec  3 19:01:29 compute-0 nova_compute[348325]:    </system>
Dec  3 19:01:29 compute-0 nova_compute[348325]:  </sysinfo>
Dec  3 19:01:29 compute-0 nova_compute[348325]:  <os>
Dec  3 19:01:29 compute-0 nova_compute[348325]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  3 19:01:29 compute-0 nova_compute[348325]:    <boot dev="hd"/>
Dec  3 19:01:29 compute-0 nova_compute[348325]:    <smbios mode="sysinfo"/>
Dec  3 19:01:29 compute-0 nova_compute[348325]:  </os>
Dec  3 19:01:29 compute-0 nova_compute[348325]:  <features>
Dec  3 19:01:29 compute-0 nova_compute[348325]:    <acpi/>
Dec  3 19:01:29 compute-0 nova_compute[348325]:    <apic/>
Dec  3 19:01:29 compute-0 nova_compute[348325]:    <vmcoreinfo/>
Dec  3 19:01:29 compute-0 nova_compute[348325]:  </features>
Dec  3 19:01:29 compute-0 nova_compute[348325]:  <clock offset="utc">
Dec  3 19:01:29 compute-0 nova_compute[348325]:    <timer name="pit" tickpolicy="delay"/>
Dec  3 19:01:29 compute-0 nova_compute[348325]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  3 19:01:29 compute-0 nova_compute[348325]:    <timer name="hpet" present="no"/>
Dec  3 19:01:29 compute-0 nova_compute[348325]:  </clock>
Dec  3 19:01:29 compute-0 nova_compute[348325]:  <cpu mode="host-model" match="exact">
Dec  3 19:01:29 compute-0 nova_compute[348325]:    <topology sockets="1" cores="1" threads="1"/>
Dec  3 19:01:29 compute-0 nova_compute[348325]:  </cpu>
Dec  3 19:01:29 compute-0 nova_compute[348325]:  <devices>
Dec  3 19:01:29 compute-0 nova_compute[348325]:    <disk type="network" device="disk">
Dec  3 19:01:29 compute-0 nova_compute[348325]:      <driver type="raw" cache="none"/>
Dec  3 19:01:29 compute-0 nova_compute[348325]:      <source protocol="rbd" name="vms/3bb34e64-ac61-46f3-99eb-2fdd346a8ecc_disk">
Dec  3 19:01:29 compute-0 nova_compute[348325]:        <host name="192.168.122.100" port="6789"/>
Dec  3 19:01:29 compute-0 nova_compute[348325]:      </source>
Dec  3 19:01:29 compute-0 nova_compute[348325]:      <auth username="openstack">
Dec  3 19:01:29 compute-0 nova_compute[348325]:        <secret type="ceph" uuid="c1caf3ba-b2a5-5005-a11e-e955c344dccc"/>
Dec  3 19:01:29 compute-0 nova_compute[348325]:      </auth>
Dec  3 19:01:29 compute-0 nova_compute[348325]:      <target dev="vda" bus="virtio"/>
Dec  3 19:01:29 compute-0 nova_compute[348325]:    </disk>
Dec  3 19:01:29 compute-0 nova_compute[348325]:    <disk type="network" device="cdrom">
Dec  3 19:01:29 compute-0 nova_compute[348325]:      <driver type="raw" cache="none"/>
Dec  3 19:01:29 compute-0 nova_compute[348325]:      <source protocol="rbd" name="vms/3bb34e64-ac61-46f3-99eb-2fdd346a8ecc_disk.config">
Dec  3 19:01:29 compute-0 nova_compute[348325]:        <host name="192.168.122.100" port="6789"/>
Dec  3 19:01:29 compute-0 nova_compute[348325]:      </source>
Dec  3 19:01:29 compute-0 nova_compute[348325]:      <auth username="openstack">
Dec  3 19:01:29 compute-0 nova_compute[348325]:        <secret type="ceph" uuid="c1caf3ba-b2a5-5005-a11e-e955c344dccc"/>
Dec  3 19:01:29 compute-0 nova_compute[348325]:      </auth>
Dec  3 19:01:29 compute-0 nova_compute[348325]:      <target dev="sda" bus="sata"/>
Dec  3 19:01:29 compute-0 nova_compute[348325]:    </disk>
Dec  3 19:01:29 compute-0 nova_compute[348325]:    <interface type="ethernet">
Dec  3 19:01:29 compute-0 nova_compute[348325]:      <mac address="fa:16:3e:ec:63:6e"/>
Dec  3 19:01:29 compute-0 nova_compute[348325]:      <model type="virtio"/>
Dec  3 19:01:29 compute-0 nova_compute[348325]:      <driver name="vhost" rx_queue_size="512"/>
Dec  3 19:01:29 compute-0 nova_compute[348325]:      <mtu size="1442"/>
Dec  3 19:01:29 compute-0 nova_compute[348325]:      <target dev="tap92566cef-01"/>
Dec  3 19:01:29 compute-0 nova_compute[348325]:    </interface>
Dec  3 19:01:29 compute-0 nova_compute[348325]:    <serial type="pty">
Dec  3 19:01:29 compute-0 nova_compute[348325]:      <log file="/var/lib/nova/instances/3bb34e64-ac61-46f3-99eb-2fdd346a8ecc/console.log" append="off"/>
Dec  3 19:01:29 compute-0 nova_compute[348325]:    </serial>
Dec  3 19:01:29 compute-0 nova_compute[348325]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  3 19:01:29 compute-0 nova_compute[348325]:    <video>
Dec  3 19:01:29 compute-0 nova_compute[348325]:      <model type="virtio"/>
Dec  3 19:01:29 compute-0 nova_compute[348325]:    </video>
Dec  3 19:01:29 compute-0 nova_compute[348325]:    <input type="tablet" bus="usb"/>
Dec  3 19:01:29 compute-0 nova_compute[348325]:    <rng model="virtio">
Dec  3 19:01:29 compute-0 nova_compute[348325]:      <backend model="random">/dev/urandom</backend>
Dec  3 19:01:29 compute-0 nova_compute[348325]:    </rng>
Dec  3 19:01:29 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root"/>
Dec  3 19:01:29 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 19:01:29 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 19:01:29 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 19:01:29 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 19:01:29 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 19:01:29 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 19:01:29 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 19:01:29 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 19:01:29 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 19:01:29 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 19:01:29 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 19:01:29 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 19:01:29 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 19:01:29 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 19:01:29 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 19:01:29 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 19:01:29 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 19:01:29 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 19:01:29 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 19:01:29 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 19:01:29 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 19:01:29 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 19:01:29 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 19:01:29 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 19:01:29 compute-0 nova_compute[348325]:    <controller type="usb" index="0"/>
Dec  3 19:01:29 compute-0 nova_compute[348325]:    <memballoon model="virtio">
Dec  3 19:01:29 compute-0 nova_compute[348325]:      <stats period="10"/>
Dec  3 19:01:29 compute-0 nova_compute[348325]:    </memballoon>
Dec  3 19:01:29 compute-0 nova_compute[348325]:  </devices>
Dec  3 19:01:29 compute-0 nova_compute[348325]: </domain>
Dec  3 19:01:29 compute-0 nova_compute[348325]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  3 19:01:29 compute-0 nova_compute[348325]: 2025-12-03 19:01:29.232 348329 DEBUG nova.compute.manager [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Preparing to wait for external event network-vif-plugged-92566cef-01e0-4398-bbab-0b7049af2e6b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  3 19:01:29 compute-0 nova_compute[348325]: 2025-12-03 19:01:29.232 348329 DEBUG oslo_concurrency.lockutils [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Acquiring lock "3bb34e64-ac61-46f3-99eb-2fdd346a8ecc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 19:01:29 compute-0 nova_compute[348325]: 2025-12-03 19:01:29.233 348329 DEBUG oslo_concurrency.lockutils [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Lock "3bb34e64-ac61-46f3-99eb-2fdd346a8ecc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 19:01:29 compute-0 nova_compute[348325]: 2025-12-03 19:01:29.233 348329 DEBUG oslo_concurrency.lockutils [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Lock "3bb34e64-ac61-46f3-99eb-2fdd346a8ecc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 19:01:29 compute-0 nova_compute[348325]: 2025-12-03 19:01:29.234 348329 DEBUG nova.virt.libvirt.vif [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T19:01:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-127548925',display_name='tempest-TestNetworkBasicOps-server-127548925',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-127548925',id=13,image_ref='55982930-937b-484e-96ee-69e406a48023',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI7rF1psYhD8tU9znnWXqUeQ6mnu/Zs10NhYFDe3X+4zojSzevNC27h/7cu/TNcDKRquyHQ51V1La4K+wMiQHVkejByxABkEgQekf3AyU+0qmLU9mTIvdAbamP7MzcryHQ==',key_name='tempest-TestNetworkBasicOps-1544093464',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='014032eeba1145f99481402acd561743',ramdisk_id='',reservation_id='r-xf0avdq0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='55982930-937b-484e-96ee-69e406a48023',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1083905166',owner_user_name='tempest-TestNetworkBasicOps-1083905166-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T19:01:22Z,user_data=None,user_id='8fabb3dd3b1c42b491c99a1274242f68',uuid=3bb34e64-ac61-46f3-99eb-2fdd346a8ecc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "92566cef-01e0-4398-bbab-0b7049af2e6b", "address": "fa:16:3e:ec:63:6e", "network": {"id": "d9057d7e-a146-4d5d-b454-162ed672215e", "bridge": "br-int", "label": "tempest-network-smoke--493824106", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "014032eeba1145f99481402acd561743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92566cef-01", "ovs_interfaceid": "92566cef-01e0-4398-bbab-0b7049af2e6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  3 19:01:29 compute-0 nova_compute[348325]: 2025-12-03 19:01:29.234 348329 DEBUG nova.network.os_vif_util [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Converting VIF {"id": "92566cef-01e0-4398-bbab-0b7049af2e6b", "address": "fa:16:3e:ec:63:6e", "network": {"id": "d9057d7e-a146-4d5d-b454-162ed672215e", "bridge": "br-int", "label": "tempest-network-smoke--493824106", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "014032eeba1145f99481402acd561743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92566cef-01", "ovs_interfaceid": "92566cef-01e0-4398-bbab-0b7049af2e6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 19:01:29 compute-0 nova_compute[348325]: 2025-12-03 19:01:29.235 348329 DEBUG nova.network.os_vif_util [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:63:6e,bridge_name='br-int',has_traffic_filtering=True,id=92566cef-01e0-4398-bbab-0b7049af2e6b,network=Network(d9057d7e-a146-4d5d-b454-162ed672215e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92566cef-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 19:01:29 compute-0 nova_compute[348325]: 2025-12-03 19:01:29.236 348329 DEBUG os_vif [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:63:6e,bridge_name='br-int',has_traffic_filtering=True,id=92566cef-01e0-4398-bbab-0b7049af2e6b,network=Network(d9057d7e-a146-4d5d-b454-162ed672215e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92566cef-01') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  3 19:01:29 compute-0 nova_compute[348325]: 2025-12-03 19:01:29.236 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:01:29 compute-0 nova_compute[348325]: 2025-12-03 19:01:29.237 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 19:01:29 compute-0 nova_compute[348325]: 2025-12-03 19:01:29.237 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 19:01:29 compute-0 nova_compute[348325]: 2025-12-03 19:01:29.242 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:01:29 compute-0 nova_compute[348325]: 2025-12-03 19:01:29.242 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap92566cef-01, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 19:01:29 compute-0 nova_compute[348325]: 2025-12-03 19:01:29.242 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap92566cef-01, col_values=(('external_ids', {'iface-id': '92566cef-01e0-4398-bbab-0b7049af2e6b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ec:63:6e', 'vm-uuid': '3bb34e64-ac61-46f3-99eb-2fdd346a8ecc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 19:01:29 compute-0 nova_compute[348325]: 2025-12-03 19:01:29.244 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:01:29 compute-0 NetworkManager[49087]: <info>  [1764788489.2457] manager: (tap92566cef-01): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/71)
Dec  3 19:01:29 compute-0 nova_compute[348325]: 2025-12-03 19:01:29.246 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  3 19:01:29 compute-0 nova_compute[348325]: 2025-12-03 19:01:29.252 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:01:29 compute-0 nova_compute[348325]: 2025-12-03 19:01:29.253 348329 INFO os_vif [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:63:6e,bridge_name='br-int',has_traffic_filtering=True,id=92566cef-01e0-4398-bbab-0b7049af2e6b,network=Network(d9057d7e-a146-4d5d-b454-162ed672215e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92566cef-01')#033[00m
Dec  3 19:01:29 compute-0 nova_compute[348325]: 2025-12-03 19:01:29.295 348329 DEBUG nova.virt.libvirt.driver [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  3 19:01:29 compute-0 nova_compute[348325]: 2025-12-03 19:01:29.296 348329 DEBUG nova.virt.libvirt.driver [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  3 19:01:29 compute-0 nova_compute[348325]: 2025-12-03 19:01:29.296 348329 DEBUG nova.virt.libvirt.driver [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] No VIF found with MAC fa:16:3e:ec:63:6e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  3 19:01:29 compute-0 nova_compute[348325]: 2025-12-03 19:01:29.297 348329 INFO nova.virt.libvirt.driver [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Using config drive#033[00m
Dec  3 19:01:29 compute-0 nova_compute[348325]: 2025-12-03 19:01:29.329 348329 DEBUG nova.storage.rbd_utils [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] rbd image 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 19:01:29 compute-0 nova_compute[348325]: 2025-12-03 19:01:29.349 348329 DEBUG nova.network.neutron [req-9d2e9fef-9b8e-4016-9681-61dea424464e req-6221414a-90d2-4f9a-8d64-3ac2ace8b476 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Updated VIF entry in instance network info cache for port 92566cef-01e0-4398-bbab-0b7049af2e6b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  3 19:01:29 compute-0 nova_compute[348325]: 2025-12-03 19:01:29.349 348329 DEBUG nova.network.neutron [req-9d2e9fef-9b8e-4016-9681-61dea424464e req-6221414a-90d2-4f9a-8d64-3ac2ace8b476 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Updating instance_info_cache with network_info: [{"id": "92566cef-01e0-4398-bbab-0b7049af2e6b", "address": "fa:16:3e:ec:63:6e", "network": {"id": "d9057d7e-a146-4d5d-b454-162ed672215e", "bridge": "br-int", "label": "tempest-network-smoke--493824106", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "014032eeba1145f99481402acd561743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92566cef-01", "ovs_interfaceid": "92566cef-01e0-4398-bbab-0b7049af2e6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 19:01:29 compute-0 nova_compute[348325]: 2025-12-03 19:01:29.368 348329 DEBUG oslo_concurrency.lockutils [req-9d2e9fef-9b8e-4016-9681-61dea424464e req-6221414a-90d2-4f9a-8d64-3ac2ace8b476 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Releasing lock "refresh_cache-3bb34e64-ac61-46f3-99eb-2fdd346a8ecc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 19:01:29 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:01:29 compute-0 nova_compute[348325]: 2025-12-03 19:01:29.650 348329 INFO nova.virt.libvirt.driver [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Creating config drive at /var/lib/nova/instances/3bb34e64-ac61-46f3-99eb-2fdd346a8ecc/disk.config#033[00m
Dec  3 19:01:29 compute-0 nova_compute[348325]: 2025-12-03 19:01:29.657 348329 DEBUG oslo_concurrency.processutils [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3bb34e64-ac61-46f3-99eb-2fdd346a8ecc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu42ff96o execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 19:01:29 compute-0 podman[158200]: time="2025-12-03T19:01:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 19:01:29 compute-0 podman[158200]: @ - - [03/Dec/2025:19:01:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 19:01:29 compute-0 podman[158200]: @ - - [03/Dec/2025:19:01:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8649 "" "Go-http-client/1.1"
Dec  3 19:01:29 compute-0 nova_compute[348325]: 2025-12-03 19:01:29.801 348329 DEBUG oslo_concurrency.processutils [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3bb34e64-ac61-46f3-99eb-2fdd346a8ecc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu42ff96o" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
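The config drive is built as an ISO9660 image with mkisofs, run through oslo.concurrency's processutils wrapper, which produces the matching "Running cmd" / "returned: 0 in 0.144s" pair above. A minimal sketch of that call (paths and publisher string copied from the log; the /tmp staging directory holds whatever metadata files nova rendered):

    from oslo_concurrency import processutils

    out_path = ('/var/lib/nova/instances/'
                '3bb34e64-ac61-46f3-99eb-2fdd346a8ecc/disk.config')
    stdout, stderr = processutils.execute(
        '/usr/bin/mkisofs', '-o', out_path,
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
        '-quiet', '-J', '-r',
        '-V', 'config-2',        # the volume label cloud-init looks for
        '/tmp/tmpu42ff96o')      # staging directory with the metadata files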
Dec  3 19:01:29 compute-0 nova_compute[348325]: 2025-12-03 19:01:29.839 348329 DEBUG nova.storage.rbd_utils [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] rbd image 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 19:01:29 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1878: 321 pgs: 321 active+clean; 203 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec  3 19:01:29 compute-0 nova_compute[348325]: 2025-12-03 19:01:29.851 348329 DEBUG oslo_concurrency.processutils [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3bb34e64-ac61-46f3-99eb-2fdd346a8ecc/disk.config 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 19:01:30 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 19:01:30 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:01:30 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 19:01:30 compute-0 nova_compute[348325]: 2025-12-03 19:01:30.114 348329 DEBUG oslo_concurrency.processutils [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3bb34e64-ac61-46f3-99eb-2fdd346a8ecc/disk.config 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.262s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 19:01:30 compute-0 nova_compute[348325]: 2025-12-03 19:01:30.115 348329 INFO nova.virt.libvirt.driver [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Deleting local config drive /var/lib/nova/instances/3bb34e64-ac61-46f3-99eb-2fdd346a8ecc/disk.config because it was imported into RBD.#033[00m
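Because this instance is RBD-backed, the freshly built ISO is pushed into the vms pool and the local copy removed. A rough python-rbd equivalent of the `rbd import` call above, a sketch only (the 4 MiB chunk size is an arbitrary choice; the CLI manages object sizes itself):

    import os
    import rados
    import rbd

    src = ('/var/lib/nova/instances/'
           '3bb34e64-ac61-46f3-99eb-2fdd346a8ecc/disk.config')
    name = '3bb34e64-ac61-46f3-99eb-2fdd346a8ecc_disk.config'
    with rados.Rados(conffile='/etc/ceph/ceph.conf',
                     rados_id='openstack') as cluster:
        with cluster.open_ioctx('vms') as ioctx:
            rbd.RBD().create(ioctx, name, os.path.getsize(src),
                             old_format=False)  # --image-format=2
            with rbd.Image(ioctx, name) as image, open(src, 'rb') as f:
                offset = 0
                while chunk := f.read(4 * 1024 * 1024):
                    image.write(chunk, offset)
                    offset += len(chunk)
    os.unlink(src)  # mirrors 'Deleting local config drive ...' above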
Dec  3 19:01:30 compute-0 podman[450066]: 2025-12-03 19:01:30.132908733 +0000 UTC m=+0.088775687 container create c10bddadbc054062b519e1f80518b105db395100a479c03a3ed488d236f5ef76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_galois, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec  3 19:01:30 compute-0 podman[450066]: 2025-12-03 19:01:30.096766462 +0000 UTC m=+0.052633496 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:01:30 compute-0 systemd[1]: Started libpod-conmon-c10bddadbc054062b519e1f80518b105db395100a479c03a3ed488d236f5ef76.scope.
Dec  3 19:01:30 compute-0 kernel: tap92566cef-01: entered promiscuous mode
Dec  3 19:01:30 compute-0 NetworkManager[49087]: <info>  [1764788490.2110] manager: (tap92566cef-01): new Tun device (/org/freedesktop/NetworkManager/Devices/72)
Dec  3 19:01:30 compute-0 nova_compute[348325]: 2025-12-03 19:01:30.213 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:01:30 compute-0 ovn_controller[89305]: 2025-12-03T19:01:30Z|00153|binding|INFO|Claiming lport 92566cef-01e0-4398-bbab-0b7049af2e6b for this chassis.
Dec  3 19:01:30 compute-0 ovn_controller[89305]: 2025-12-03T19:01:30Z|00154|binding|INFO|92566cef-01e0-4398-bbab-0b7049af2e6b: Claiming fa:16:3e:ec:63:6e 10.100.0.8
Dec  3 19:01:30 compute-0 nova_compute[348325]: 2025-12-03 19:01:30.224 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:01:30 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:01:30.229 286999 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:63:6e 10.100.0.8'], port_security=['fa:16:3e:ec:63:6e 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '3bb34e64-ac61-46f3-99eb-2fdd346a8ecc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d9057d7e-a146-4d5d-b454-162ed672215e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '014032eeba1145f99481402acd561743', 'neutron:revision_number': '2', 'neutron:security_group_ids': '91f53025-8b3a-4fbb-a061-2ee6f0cf5b08', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e6f5d93a-4b7a-44a9-a795-c197381d4f0f, chassis=[<ovs.db.idl.Row object at 0x7f81e3e96760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f81e3e96760>], logical_port=92566cef-01e0-4398-bbab-0b7049af2e6b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 19:01:30 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:01:30.230 286999 INFO neutron.agent.ovn.metadata.agent [-] Port 92566cef-01e0-4398-bbab-0b7049af2e6b in datapath d9057d7e-a146-4d5d-b454-162ed672215e bound to our chassis#033[00m
Dec  3 19:01:30 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:01:30.239 286999 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d9057d7e-a146-4d5d-b454-162ed672215e#033[00m
Dec  3 19:01:30 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:01:30.254 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[486312d4-1b28-461d-8d70-9eb0963a394f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 19:01:30 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:01:30.255 286999 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd9057d7e-a1 in ovnmeta-d9057d7e-a146-4d5d-b454-162ed672215e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  3 19:01:30 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:01:30.259 411759 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd9057d7e-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  3 19:01:30 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:01:30.259 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[a0d797f5-1c83-4d80-935d-3478781516ac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 19:01:30 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:01:30.260 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[68caadeb-65fd-4687-b2dd-f8a0918e1f96]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
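Provisioning metadata for the network means giving the ovnmeta- namespace a leg into the datapath: the agent creates a veth pair, keeps tapd9057d7e-a0 in the root namespace and moves tapd9057d7e-a1 inside; the privsep replies above are that work being done on the agent's behalf, and the AddPortCommand a few records below then plugs the a0 end into br-int. A rough pyroute2 sketch of the same steps (MAC/IP assignment and error handling omitted):

    from pyroute2 import IPRoute, netns

    ns = 'ovnmeta-d9057d7e-a146-4d5d-b454-162ed672215e'
    netns.create(ns)  # the agent reuses the namespace if it already exists
    with IPRoute() as ip:
        ip.link('add', ifname='tapd9057d7e-a0',
                kind='veth', peer='tapd9057d7e-a1')
        idx = ip.link_lookup(ifname='tapd9057d7e-a1')[0]
        ip.link('set', index=idx, net_ns_fd=ns)  # move one end into the ns
        ip.link('set', index=ip.link_lookup(ifname='tapd9057d7e-a0')[0],
                state='up')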
Dec  3 19:01:30 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:01:30 compute-0 systemd-udevd[450098]: Network interface NamePolicy= disabled on kernel command line.
Dec  3 19:01:30 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:01:30.276 287110 DEBUG oslo.privsep.daemon [-] privsep: reply[abe00c1b-0629-471b-900b-ca24b2be6b30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 19:01:30 compute-0 systemd-machined[138702]: New machine qemu-14-instance-0000000d.
Dec  3 19:01:30 compute-0 systemd[1]: Started Virtual Machine qemu-14-instance-0000000d.
Dec  3 19:01:30 compute-0 NetworkManager[49087]: <info>  [1764788490.2905] device (tap92566cef-01): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  3 19:01:30 compute-0 NetworkManager[49087]: <info>  [1764788490.2913] device (tap92566cef-01): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  3 19:01:30 compute-0 podman[450066]: 2025-12-03 19:01:30.291238219 +0000 UTC m=+0.247105193 container init c10bddadbc054062b519e1f80518b105db395100a479c03a3ed488d236f5ef76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_galois, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 19:01:30 compute-0 podman[450066]: 2025-12-03 19:01:30.303472207 +0000 UTC m=+0.259339151 container start c10bddadbc054062b519e1f80518b105db395100a479c03a3ed488d236f5ef76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 19:01:30 compute-0 podman[450066]: 2025-12-03 19:01:30.307926635 +0000 UTC m=+0.263793609 container attach c10bddadbc054062b519e1f80518b105db395100a479c03a3ed488d236f5ef76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_galois, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 19:01:30 compute-0 nervous_galois[450090]: 167 167
Dec  3 19:01:30 compute-0 nova_compute[348325]: 2025-12-03 19:01:30.310 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:01:30 compute-0 ovn_controller[89305]: 2025-12-03T19:01:30Z|00155|binding|INFO|Setting lport 92566cef-01e0-4398-bbab-0b7049af2e6b ovn-installed in OVS
Dec  3 19:01:30 compute-0 ovn_controller[89305]: 2025-12-03T19:01:30Z|00156|binding|INFO|Setting lport 92566cef-01e0-4398-bbab-0b7049af2e6b up in Southbound
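Here ovn-controller has matched the Interface's external_ids:iface-id against the logical switch port, claimed the Port_Binding for this chassis, and flipped it up in the Southbound DB, which is what eventually unblocks the network-vif-plugged event nova receives further down. The binding can be inspected with the same ovsdbapp library the agents use; a sketch, assuming the usual southbound socket path:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.ovn_southbound import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/ovn/ovnsb_db.sock', 'OVN_Southbound')
    sb = impl_idl.OvnSbApiIdlImpl(connection.Connection(idl, timeout=10))
    rows = sb.db_find(
        'Port_Binding',
        ('logical_port', '=', '92566cef-01e0-4398-bbab-0b7049af2e6b')
    ).execute(check_error=True)
    # chassis reference and up flag for the claimed port
    print(rows[0]['chassis'], rows[0]['up'])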
Dec  3 19:01:30 compute-0 systemd[1]: libpod-c10bddadbc054062b519e1f80518b105db395100a479c03a3ed488d236f5ef76.scope: Deactivated successfully.
Dec  3 19:01:30 compute-0 podman[450066]: 2025-12-03 19:01:30.314697101 +0000 UTC m=+0.270564045 container died c10bddadbc054062b519e1f80518b105db395100a479c03a3ed488d236f5ef76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_galois, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  3 19:01:30 compute-0 nova_compute[348325]: 2025-12-03 19:01:30.314 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:01:30 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:01:30.325 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[6b76c1fd-f910-424b-90b4-b3e8087d4a95]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 19:01:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac135a8c952df4985b13b2772f2725282e9fd749a1886dca6142836fc4075d5c-merged.mount: Deactivated successfully.
Dec  3 19:01:30 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:01:30.363 411797 DEBUG oslo.privsep.daemon [-] privsep: reply[16774a03-43a4-4373-9325-bea46bae5ccb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 19:01:30 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:01:30.369 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[c338471e-dc20-4811-8b18-4d7e2d1aeaef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 19:01:30 compute-0 NetworkManager[49087]: <info>  [1764788490.3706] manager: (tapd9057d7e-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/73)
Dec  3 19:01:30 compute-0 podman[450066]: 2025-12-03 19:01:30.379830701 +0000 UTC m=+0.335697645 container remove c10bddadbc054062b519e1f80518b105db395100a479c03a3ed488d236f5ef76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_galois, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:01:30 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:01:30.402 411797 DEBUG oslo.privsep.daemon [-] privsep: reply[9c714537-7eee-4b4e-8ac6-9e56d7cb4325]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 19:01:30 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:01:30.406 411797 DEBUG oslo.privsep.daemon [-] privsep: reply[30f134e1-8f65-45e4-9fa8-81f6344f3df4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 19:01:30 compute-0 systemd[1]: libpod-conmon-c10bddadbc054062b519e1f80518b105db395100a479c03a3ed488d236f5ef76.scope: Deactivated successfully.
Dec  3 19:01:30 compute-0 NetworkManager[49087]: <info>  [1764788490.4278] device (tapd9057d7e-a0): carrier: link connected
Dec  3 19:01:30 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:01:30.432 411797 DEBUG oslo.privsep.daemon [-] privsep: reply[42c06686-bfbb-4054-ade0-85a57f810c18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 19:01:30 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:01:30.448 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[c94f883c-edcb-4564-b0d4-9a9c464e1e22]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd9057d7e-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:6f:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 677036, 'reachable_time': 35272, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 450146, 'error': None, 'target': 'ovnmeta-d9057d7e-a146-4d5d-b454-162ed672215e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 19:01:30 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:01:30.463 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[725f7f64-9272-4b43-909b-a1f5c87880ae]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe08:6f76'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 677036, 'tstamp': 677036}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 450147, 'error': None, 'target': 'ovnmeta-d9057d7e-a146-4d5d-b454-162ed672215e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 19:01:30 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:01:30.484 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[3cb25e03-0ae3-4fc6-bd8f-2dd52d4ea722]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd9057d7e-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:6f:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 677036, 'reachable_time': 35272, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 450148, 'error': None, 'target': 'ovnmeta-d9057d7e-a146-4d5d-b454-162ed672215e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 19:01:30 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:01:30.514 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[d108e149-458b-4944-b065-33c5d1440976]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 19:01:30 compute-0 podman[450156]: 2025-12-03 19:01:30.566294002 +0000 UTC m=+0.045530503 container create 0ee042371c392d6365584a5ac133fdd23ade011d2b6df562833a1933b65cb842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_jones, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec  3 19:01:30 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:01:30.569 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[87639880-4069-40c9-8f5b-b3416123e0fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 19:01:30 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:01:30.571 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd9057d7e-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 19:01:30 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:01:30.572 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 19:01:30 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:01:30.573 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd9057d7e-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 19:01:30 compute-0 nova_compute[348325]: 2025-12-03 19:01:30.575 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:01:30 compute-0 NetworkManager[49087]: <info>  [1764788490.5760] manager: (tapd9057d7e-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/74)
Dec  3 19:01:30 compute-0 kernel: tapd9057d7e-a0: entered promiscuous mode
Dec  3 19:01:30 compute-0 nova_compute[348325]: 2025-12-03 19:01:30.579 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:01:30 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:01:30.582 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd9057d7e-a0, col_values=(('external_ids', {'iface-id': '61129b3d-6cea-46e4-9162-185c7245839a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 19:01:30 compute-0 nova_compute[348325]: 2025-12-03 19:01:30.584 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:01:30 compute-0 ovn_controller[89305]: 2025-12-03T19:01:30Z|00157|binding|INFO|Releasing lport 61129b3d-6cea-46e4-9162-185c7245839a from this chassis (sb_readonly=0)
Dec  3 19:01:30 compute-0 nova_compute[348325]: 2025-12-03 19:01:30.585 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:01:30 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:01:30.586 286999 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d9057d7e-a146-4d5d-b454-162ed672215e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d9057d7e-a146-4d5d-b454-162ed672215e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  3 19:01:30 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:01:30.587 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[8385ac0b-c98b-485d-b056-a4a85095717d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 19:01:30 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:01:30.588 286999 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  3 19:01:30 compute-0 ovn_metadata_agent[286994]: global
Dec  3 19:01:30 compute-0 ovn_metadata_agent[286994]:    log         /dev/log local0 debug
Dec  3 19:01:30 compute-0 ovn_metadata_agent[286994]:    log-tag     haproxy-metadata-proxy-d9057d7e-a146-4d5d-b454-162ed672215e
Dec  3 19:01:30 compute-0 ovn_metadata_agent[286994]:    user        root
Dec  3 19:01:30 compute-0 ovn_metadata_agent[286994]:    group       root
Dec  3 19:01:30 compute-0 ovn_metadata_agent[286994]:    maxconn     1024
Dec  3 19:01:30 compute-0 ovn_metadata_agent[286994]:    pidfile     /var/lib/neutron/external/pids/d9057d7e-a146-4d5d-b454-162ed672215e.pid.haproxy
Dec  3 19:01:30 compute-0 ovn_metadata_agent[286994]:    daemon
Dec  3 19:01:30 compute-0 ovn_metadata_agent[286994]: 
Dec  3 19:01:30 compute-0 ovn_metadata_agent[286994]: defaults
Dec  3 19:01:30 compute-0 ovn_metadata_agent[286994]:    log global
Dec  3 19:01:30 compute-0 ovn_metadata_agent[286994]:    mode http
Dec  3 19:01:30 compute-0 ovn_metadata_agent[286994]:    option httplog
Dec  3 19:01:30 compute-0 ovn_metadata_agent[286994]:    option dontlognull
Dec  3 19:01:30 compute-0 ovn_metadata_agent[286994]:    option http-server-close
Dec  3 19:01:30 compute-0 ovn_metadata_agent[286994]:    option forwardfor
Dec  3 19:01:30 compute-0 ovn_metadata_agent[286994]:    retries                 3
Dec  3 19:01:30 compute-0 ovn_metadata_agent[286994]:    timeout http-request    30s
Dec  3 19:01:30 compute-0 ovn_metadata_agent[286994]:    timeout connect         30s
Dec  3 19:01:30 compute-0 ovn_metadata_agent[286994]:    timeout client          32s
Dec  3 19:01:30 compute-0 ovn_metadata_agent[286994]:    timeout server          32s
Dec  3 19:01:30 compute-0 ovn_metadata_agent[286994]:    timeout http-keep-alive 30s
Dec  3 19:01:30 compute-0 ovn_metadata_agent[286994]: 
Dec  3 19:01:30 compute-0 ovn_metadata_agent[286994]: 
Dec  3 19:01:30 compute-0 ovn_metadata_agent[286994]: listen listener
Dec  3 19:01:30 compute-0 ovn_metadata_agent[286994]:    bind 169.254.169.254:80
Dec  3 19:01:30 compute-0 ovn_metadata_agent[286994]:    server metadata /var/lib/neutron/metadata_proxy
Dec  3 19:01:30 compute-0 ovn_metadata_agent[286994]:    http-request add-header X-OVN-Network-ID d9057d7e-a146-4d5d-b454-162ed672215e
Dec  3 19:01:30 compute-0 ovn_metadata_agent[286994]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
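The rendered haproxy config binds 169.254.169.254:80 inside the namespace, tags each request with X-OVN-Network-ID, and forwards it to the agent's UNIX socket backend (a bare path as a server address means a UNIX socket to haproxy). That last hop can be exercised directly over the socket with only the stdlib; a sketch, where X-Forwarded-For is supplied by hand because haproxy's 'option forwardfor' normally adds it:

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection over an AF_UNIX socket instead of TCP."""
        def __init__(self, path):
            super().__init__('localhost')
            self._path = path
        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection('/var/lib/neutron/metadata_proxy')
    conn.request('GET', '/openstack/latest/meta_data.json', headers={
        'X-OVN-Network-ID': 'd9057d7e-a146-4d5d-b454-162ed672215e',
        'X-Forwarded-For': '10.100.0.8',  # the instance's fixed IP
    })
    resp = conn.getresponse()
    print(resp.status, len(resp.read()))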
Dec  3 19:01:30 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:01:30.588 286999 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d9057d7e-a146-4d5d-b454-162ed672215e', 'env', 'PROCESS_TAG=haproxy-d9057d7e-a146-4d5d-b454-162ed672215e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d9057d7e-a146-4d5d-b454-162ed672215e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  3 19:01:30 compute-0 nova_compute[348325]: 2025-12-03 19:01:30.597 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:01:30 compute-0 systemd[1]: Started libpod-conmon-0ee042371c392d6365584a5ac133fdd23ade011d2b6df562833a1933b65cb842.scope.
Dec  3 19:01:30 compute-0 podman[450156]: 2025-12-03 19:01:30.548049326 +0000 UTC m=+0.027285847 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:01:30 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:01:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/905824d31457e5f0ad0204b1e83d45cc917e0ccddae1f7295655f265ce031b96/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 19:01:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/905824d31457e5f0ad0204b1e83d45cc917e0ccddae1f7295655f265ce031b96/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 19:01:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/905824d31457e5f0ad0204b1e83d45cc917e0ccddae1f7295655f265ce031b96/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 19:01:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/905824d31457e5f0ad0204b1e83d45cc917e0ccddae1f7295655f265ce031b96/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 19:01:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/905824d31457e5f0ad0204b1e83d45cc917e0ccddae1f7295655f265ce031b96/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 19:01:30 compute-0 podman[450156]: 2025-12-03 19:01:30.687258404 +0000 UTC m=+0.166494935 container init 0ee042371c392d6365584a5ac133fdd23ade011d2b6df562833a1933b65cb842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_jones, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  3 19:01:30 compute-0 podman[450156]: 2025-12-03 19:01:30.703168123 +0000 UTC m=+0.182404624 container start 0ee042371c392d6365584a5ac133fdd23ade011d2b6df562833a1933b65cb842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_jones, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 19:01:30 compute-0 podman[450156]: 2025-12-03 19:01:30.707426587 +0000 UTC m=+0.186663088 container attach 0ee042371c392d6365584a5ac133fdd23ade011d2b6df562833a1933b65cb842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_jones, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Dec  3 19:01:30 compute-0 nova_compute[348325]: 2025-12-03 19:01:30.950 348329 DEBUG nova.virt.driver [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Emitting event <LifecycleEvent: 1764788490.9497626, 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 19:01:30 compute-0 nova_compute[348325]: 2025-12-03 19:01:30.952 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] VM Started (Lifecycle Event)#033[00m
Dec  3 19:01:30 compute-0 nova_compute[348325]: 2025-12-03 19:01:30.981 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 19:01:30 compute-0 nova_compute[348325]: 2025-12-03 19:01:30.987 348329 DEBUG nova.virt.driver [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Emitting event <LifecycleEvent: 1764788490.9513566, 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 19:01:30 compute-0 nova_compute[348325]: 2025-12-03 19:01:30.988 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] VM Paused (Lifecycle Event)#033[00m
Dec  3 19:01:31 compute-0 nova_compute[348325]: 2025-12-03 19:01:31.005 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 19:01:31 compute-0 nova_compute[348325]: 2025-12-03 19:01:31.009 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
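The numeric states in that message come from nova.compute.power_state: the DB still records 0 (NOSTATE) while libvirt reports 3 (PAUSED). That mismatch is expected here, since the libvirt driver starts the domain paused while it waits for neutron's network-vif-plugged event, then resumes it. For reference, a minimal lookup against nova's own constants:

    from nova.compute import power_state

    print(power_state.NOSTATE)  # 0, the DB power_state above
    print(power_state.PAUSED)   # 3, the VM power_state above
    print(power_state.STATE_MAP[power_state.PAUSED])  # 'paused'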
Dec  3 19:01:31 compute-0 podman[450244]: 2025-12-03 19:01:31.019052532 +0000 UTC m=+0.089930816 container create 8ff60ed72243f63d9d94b25e808526df4f201de8be32fe85f9f12fa7a853c094 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9057d7e-a146-4d5d-b454-162ed672215e, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Dec  3 19:01:31 compute-0 nova_compute[348325]: 2025-12-03 19:01:31.030 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  3 19:01:31 compute-0 podman[450244]: 2025-12-03 19:01:30.977008576 +0000 UTC m=+0.047886880 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  3 19:01:31 compute-0 systemd[1]: Started libpod-conmon-8ff60ed72243f63d9d94b25e808526df4f201de8be32fe85f9f12fa7a853c094.scope.
Dec  3 19:01:31 compute-0 nova_compute[348325]: 2025-12-03 19:01:31.086 348329 DEBUG nova.network.neutron [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Updating instance_info_cache with network_info: [{"id": "cf729fa8-9549-4bf2-9858-7e8de773e1bc", "address": "fa:16:3e:8d:91:4c", "network": {"id": "04e258c0-609e-4010-a306-af20506c3a9d", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.160", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d29cef7b24ee4d30b2b3f5027ec6aafb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf729fa8-95", "ovs_interfaceid": "cf729fa8-9549-4bf2-9858-7e8de773e1bc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 19:01:31 compute-0 nova_compute[348325]: 2025-12-03 19:01:31.100 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Releasing lock "refresh_cache-a4fc45c7-44e4-4b50-a3e0-98de13268f88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 19:01:31 compute-0 nova_compute[348325]: 2025-12-03 19:01:31.100 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  3 19:01:31 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:01:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca2e43ff164973a8cce8744ccec369895addb289e3194db889ab9f9aef58cf13/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  3 19:01:31 compute-0 podman[450244]: 2025-12-03 19:01:31.137914523 +0000 UTC m=+0.208792827 container init 8ff60ed72243f63d9d94b25e808526df4f201de8be32fe85f9f12fa7a853c094 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9057d7e-a146-4d5d-b454-162ed672215e, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:01:31 compute-0 podman[450244]: 2025-12-03 19:01:31.145284164 +0000 UTC m=+0.216162448 container start 8ff60ed72243f63d9d94b25e808526df4f201de8be32fe85f9f12fa7a853c094 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9057d7e-a146-4d5d-b454-162ed672215e, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 19:01:31 compute-0 neutron-haproxy-ovnmeta-d9057d7e-a146-4d5d-b454-162ed672215e[450260]: [NOTICE]   (450264) : New worker (450266) forked
Dec  3 19:01:31 compute-0 neutron-haproxy-ovnmeta-d9057d7e-a146-4d5d-b454-162ed672215e[450260]: [NOTICE]   (450264) : Loading success.
Dec  3 19:01:31 compute-0 openstack_network_exporter[365222]: ERROR   19:01:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:01:31 compute-0 openstack_network_exporter[365222]: ERROR   19:01:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:01:31 compute-0 openstack_network_exporter[365222]: ERROR   19:01:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 19:01:31 compute-0 openstack_network_exporter[365222]: ERROR   19:01:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 19:01:31 compute-0 openstack_network_exporter[365222]: ERROR   19:01:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 19:01:31 compute-0 nova_compute[348325]: 2025-12-03 19:01:31.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:01:31 compute-0 nova_compute[348325]: 2025-12-03 19:01:31.486 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  3 19:01:31 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1879: 321 pgs: 321 active+clean; 203 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec  3 19:01:31 compute-0 nova_compute[348325]: 2025-12-03 19:01:31.856 348329 DEBUG nova.compute.manager [req-0083a908-1703-438b-897d-3607bde67c06 req-16846e53-c05f-4c3d-87a4-fd87ce03d9ca 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Received event network-vif-plugged-92566cef-01e0-4398-bbab-0b7049af2e6b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  3 19:01:31 compute-0 nova_compute[348325]: 2025-12-03 19:01:31.857 348329 DEBUG oslo_concurrency.lockutils [req-0083a908-1703-438b-897d-3607bde67c06 req-16846e53-c05f-4c3d-87a4-fd87ce03d9ca 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "3bb34e64-ac61-46f3-99eb-2fdd346a8ecc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 19:01:31 compute-0 nova_compute[348325]: 2025-12-03 19:01:31.858 348329 DEBUG oslo_concurrency.lockutils [req-0083a908-1703-438b-897d-3607bde67c06 req-16846e53-c05f-4c3d-87a4-fd87ce03d9ca 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "3bb34e64-ac61-46f3-99eb-2fdd346a8ecc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 19:01:31 compute-0 nova_compute[348325]: 2025-12-03 19:01:31.859 348329 DEBUG oslo_concurrency.lockutils [req-0083a908-1703-438b-897d-3607bde67c06 req-16846e53-c05f-4c3d-87a4-fd87ce03d9ca 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "3bb34e64-ac61-46f3-99eb-2fdd346a8ecc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 19:01:31 compute-0 nova_compute[348325]: 2025-12-03 19:01:31.859 348329 DEBUG nova.compute.manager [req-0083a908-1703-438b-897d-3607bde67c06 req-16846e53-c05f-4c3d-87a4-fd87ce03d9ca 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Processing event network-vif-plugged-92566cef-01e0-4398-bbab-0b7049af2e6b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec  3 19:01:31 compute-0 nova_compute[348325]: 2025-12-03 19:01:31.861 348329 DEBUG nova.compute.manager [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec  3 19:01:31 compute-0 nova_compute[348325]: 2025-12-03 19:01:31.871 348329 DEBUG nova.virt.libvirt.driver [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec  3 19:01:31 compute-0 nova_compute[348325]: 2025-12-03 19:01:31.873 348329 DEBUG nova.virt.driver [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Emitting event <LifecycleEvent: 1764788491.8712347, 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  3 19:01:31 compute-0 elastic_jones[450177]: --> passed data devices: 0 physical, 3 LVM
Dec  3 19:01:31 compute-0 elastic_jones[450177]: --> relative data size: 1.0
Dec  3 19:01:31 compute-0 elastic_jones[450177]: --> All data devices are unavailable
Dec  3 19:01:31 compute-0 nova_compute[348325]: 2025-12-03 19:01:31.874 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] VM Resumed (Lifecycle Event)
Dec  3 19:01:31 compute-0 nova_compute[348325]: 2025-12-03 19:01:31.896 348329 INFO nova.virt.libvirt.driver [-] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Instance spawned successfully.
Dec  3 19:01:31 compute-0 nova_compute[348325]: 2025-12-03 19:01:31.901 348329 DEBUG nova.virt.libvirt.driver [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec  3 19:01:31 compute-0 nova_compute[348325]: 2025-12-03 19:01:31.904 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  3 19:01:31 compute-0 nova_compute[348325]: 2025-12-03 19:01:31.915 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  3 19:01:31 compute-0 systemd[1]: libpod-0ee042371c392d6365584a5ac133fdd23ade011d2b6df562833a1933b65cb842.scope: Deactivated successfully.
Dec  3 19:01:31 compute-0 systemd[1]: libpod-0ee042371c392d6365584a5ac133fdd23ade011d2b6df562833a1933b65cb842.scope: Consumed 1.125s CPU time.
Dec  3 19:01:31 compute-0 conmon[450177]: conmon 0ee042371c392d636558 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0ee042371c392d6365584a5ac133fdd23ade011d2b6df562833a1933b65cb842.scope/container/memory.events
Dec  3 19:01:31 compute-0 nova_compute[348325]: 2025-12-03 19:01:31.941 348329 DEBUG nova.virt.libvirt.driver [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  3 19:01:31 compute-0 nova_compute[348325]: 2025-12-03 19:01:31.942 348329 DEBUG nova.virt.libvirt.driver [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  3 19:01:31 compute-0 nova_compute[348325]: 2025-12-03 19:01:31.943 348329 DEBUG nova.virt.libvirt.driver [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  3 19:01:31 compute-0 nova_compute[348325]: 2025-12-03 19:01:31.944 348329 DEBUG nova.virt.libvirt.driver [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  3 19:01:31 compute-0 nova_compute[348325]: 2025-12-03 19:01:31.945 348329 DEBUG nova.virt.libvirt.driver [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  3 19:01:31 compute-0 nova_compute[348325]: 2025-12-03 19:01:31.947 348329 DEBUG nova.virt.libvirt.driver [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  3 19:01:31 compute-0 nova_compute[348325]: 2025-12-03 19:01:31.952 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] During sync_power_state the instance has a pending task (spawning). Skip.
Dec  3 19:01:32 compute-0 podman[450299]: 2025-12-03 19:01:32.001699017 +0000 UTC m=+0.055361092 container died 0ee042371c392d6365584a5ac133fdd23ade011d2b6df562833a1933b65cb842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_jones, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec  3 19:01:32 compute-0 nova_compute[348325]: 2025-12-03 19:01:32.003 348329 INFO nova.compute.manager [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Took 9.19 seconds to spawn the instance on the hypervisor.
Dec  3 19:01:32 compute-0 nova_compute[348325]: 2025-12-03 19:01:32.004 348329 DEBUG nova.compute.manager [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  3 19:01:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-905824d31457e5f0ad0204b1e83d45cc917e0ccddae1f7295655f265ce031b96-merged.mount: Deactivated successfully.
Dec  3 19:01:32 compute-0 podman[450299]: 2025-12-03 19:01:32.082193002 +0000 UTC m=+0.135855067 container remove 0ee042371c392d6365584a5ac133fdd23ade011d2b6df562833a1933b65cb842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec  3 19:01:32 compute-0 nova_compute[348325]: 2025-12-03 19:01:32.086 348329 INFO nova.compute.manager [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Took 10.47 seconds to build instance.
Dec  3 19:01:32 compute-0 systemd[1]: libpod-conmon-0ee042371c392d6365584a5ac133fdd23ade011d2b6df562833a1933b65cb842.scope: Deactivated successfully.
Dec  3 19:01:32 compute-0 nova_compute[348325]: 2025-12-03 19:01:32.105 348329 DEBUG oslo_concurrency.lockutils [None req-35430560-b2b5-48b2-b5fa-e2275e792603 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Lock "3bb34e64-ac61-46f3-99eb-2fdd346a8ecc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.584s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 19:01:32 compute-0 podman[450452]: 2025-12-03 19:01:32.8478511 +0000 UTC m=+0.043471103 container create 231e91dbfeced613d366a8b5ea3a18eb55f173f4b282b56e428d782afdec4ab4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_lumiere, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec  3 19:01:32 compute-0 systemd[1]: Started libpod-conmon-231e91dbfeced613d366a8b5ea3a18eb55f173f4b282b56e428d782afdec4ab4.scope.
Dec  3 19:01:32 compute-0 podman[450452]: 2025-12-03 19:01:32.830516036 +0000 UTC m=+0.026136059 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:01:32 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:01:32 compute-0 podman[450452]: 2025-12-03 19:01:32.951287884 +0000 UTC m=+0.146907897 container init 231e91dbfeced613d366a8b5ea3a18eb55f173f4b282b56e428d782afdec4ab4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_lumiere, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:01:32 compute-0 podman[450452]: 2025-12-03 19:01:32.958850969 +0000 UTC m=+0.154470972 container start 231e91dbfeced613d366a8b5ea3a18eb55f173f4b282b56e428d782afdec4ab4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec  3 19:01:32 compute-0 silly_lumiere[450468]: 167 167
Dec  3 19:01:32 compute-0 podman[450452]: 2025-12-03 19:01:32.96384552 +0000 UTC m=+0.159465543 container attach 231e91dbfeced613d366a8b5ea3a18eb55f173f4b282b56e428d782afdec4ab4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  3 19:01:32 compute-0 systemd[1]: libpod-231e91dbfeced613d366a8b5ea3a18eb55f173f4b282b56e428d782afdec4ab4.scope: Deactivated successfully.
Dec  3 19:01:32 compute-0 podman[450452]: 2025-12-03 19:01:32.965227484 +0000 UTC m=+0.160847487 container died 231e91dbfeced613d366a8b5ea3a18eb55f173f4b282b56e428d782afdec4ab4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 19:01:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-00fc5635ebce4144285eb4037321d149ef0b93edc36746f2804c3d6155122411-merged.mount: Deactivated successfully.
Dec  3 19:01:33 compute-0 podman[450452]: 2025-12-03 19:01:33.024298057 +0000 UTC m=+0.219918060 container remove 231e91dbfeced613d366a8b5ea3a18eb55f173f4b282b56e428d782afdec4ab4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_lumiere, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec  3 19:01:33 compute-0 systemd[1]: libpod-conmon-231e91dbfeced613d366a8b5ea3a18eb55f173f4b282b56e428d782afdec4ab4.scope: Deactivated successfully.
Dec  3 19:01:33 compute-0 podman[450491]: 2025-12-03 19:01:33.272422733 +0000 UTC m=+0.086590055 container create 160078e0b9f326beee73be9d1ae9ee748551e14fee671467994ecbe3e64fda44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_brahmagupta, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 19:01:33 compute-0 podman[450491]: 2025-12-03 19:01:33.239886859 +0000 UTC m=+0.054054211 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:01:33 compute-0 systemd[1]: Started libpod-conmon-160078e0b9f326beee73be9d1ae9ee748551e14fee671467994ecbe3e64fda44.scope.
Dec  3 19:01:33 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:01:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14afb7de248802f12d4471185a036dc63ebf37a77c5cabdf3a1f0353e59b6103/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 19:01:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14afb7de248802f12d4471185a036dc63ebf37a77c5cabdf3a1f0353e59b6103/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 19:01:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14afb7de248802f12d4471185a036dc63ebf37a77c5cabdf3a1f0353e59b6103/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 19:01:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14afb7de248802f12d4471185a036dc63ebf37a77c5cabdf3a1f0353e59b6103/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 19:01:33 compute-0 podman[450491]: 2025-12-03 19:01:33.398070699 +0000 UTC m=+0.212238021 container init 160078e0b9f326beee73be9d1ae9ee748551e14fee671467994ecbe3e64fda44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_brahmagupta, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:01:33 compute-0 podman[450491]: 2025-12-03 19:01:33.421500651 +0000 UTC m=+0.235667973 container start 160078e0b9f326beee73be9d1ae9ee748551e14fee671467994ecbe3e64fda44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec  3 19:01:33 compute-0 podman[450491]: 2025-12-03 19:01:33.425845427 +0000 UTC m=+0.240012739 container attach 160078e0b9f326beee73be9d1ae9ee748551e14fee671467994ecbe3e64fda44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 19:01:33 compute-0 nova_compute[348325]: 2025-12-03 19:01:33.485 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:01:33 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1880: 321 pgs: 321 active+clean; 203 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Dec  3 19:01:33 compute-0 nova_compute[348325]: 2025-12-03 19:01:33.938 348329 DEBUG nova.compute.manager [req-41d92b6b-8dfd-47ce-a88a-5210fc210aad req-ac1ed9df-3412-42bd-a250-a06a3e01a520 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Received event network-vif-plugged-92566cef-01e0-4398-bbab-0b7049af2e6b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  3 19:01:33 compute-0 nova_compute[348325]: 2025-12-03 19:01:33.939 348329 DEBUG oslo_concurrency.lockutils [req-41d92b6b-8dfd-47ce-a88a-5210fc210aad req-ac1ed9df-3412-42bd-a250-a06a3e01a520 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "3bb34e64-ac61-46f3-99eb-2fdd346a8ecc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 19:01:33 compute-0 nova_compute[348325]: 2025-12-03 19:01:33.939 348329 DEBUG oslo_concurrency.lockutils [req-41d92b6b-8dfd-47ce-a88a-5210fc210aad req-ac1ed9df-3412-42bd-a250-a06a3e01a520 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "3bb34e64-ac61-46f3-99eb-2fdd346a8ecc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 19:01:33 compute-0 nova_compute[348325]: 2025-12-03 19:01:33.939 348329 DEBUG oslo_concurrency.lockutils [req-41d92b6b-8dfd-47ce-a88a-5210fc210aad req-ac1ed9df-3412-42bd-a250-a06a3e01a520 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "3bb34e64-ac61-46f3-99eb-2fdd346a8ecc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 19:01:33 compute-0 nova_compute[348325]: 2025-12-03 19:01:33.940 348329 DEBUG nova.compute.manager [req-41d92b6b-8dfd-47ce-a88a-5210fc210aad req-ac1ed9df-3412-42bd-a250-a06a3e01a520 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] No waiting events found dispatching network-vif-plugged-92566cef-01e0-4398-bbab-0b7049af2e6b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  3 19:01:33 compute-0 nova_compute[348325]: 2025-12-03 19:01:33.940 348329 WARNING nova.compute.manager [req-41d92b6b-8dfd-47ce-a88a-5210fc210aad req-ac1ed9df-3412-42bd-a250-a06a3e01a520 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Received unexpected event network-vif-plugged-92566cef-01e0-4398-bbab-0b7049af2e6b for instance with vm_state active and task_state None.
Dec  3 19:01:34 compute-0 nova_compute[348325]: 2025-12-03 19:01:34.246 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]: {
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:    "0": [
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:        {
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:            "devices": [
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:                "/dev/loop3"
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:            ],
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:            "lv_name": "ceph_lv0",
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:            "lv_size": "21470642176",
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=973fbbc8-5aff-4a53-bee8-42e5a6788dd6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:            "lv_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:            "name": "ceph_lv0",
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:            "tags": {
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:                "ceph.block_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:                "ceph.cephx_lockbox_secret": "",
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:                "ceph.cluster_name": "ceph",
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:                "ceph.crush_device_class": "",
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:                "ceph.encrypted": "0",
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:                "ceph.osd_fsid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:                "ceph.osd_id": "0",
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:                "ceph.type": "block",
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:                "ceph.vdo": "0"
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:            },
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:            "type": "block",
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:            "vg_name": "ceph_vg0"
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:        }
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:    ],
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:    "1": [
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:        {
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:            "devices": [
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:                "/dev/loop4"
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:            ],
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:            "lv_name": "ceph_lv1",
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:            "lv_size": "21470642176",
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1e2b0083-5293-47cb-a3d1-bc27cedc4ede,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:            "lv_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:            "name": "ceph_lv1",
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:            "tags": {
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:                "ceph.block_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:                "ceph.cephx_lockbox_secret": "",
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:                "ceph.cluster_name": "ceph",
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:                "ceph.crush_device_class": "",
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:                "ceph.encrypted": "0",
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:                "ceph.osd_fsid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:                "ceph.osd_id": "1",
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:                "ceph.type": "block",
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:                "ceph.vdo": "0"
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:            },
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:            "type": "block",
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:            "vg_name": "ceph_vg1"
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:        }
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:    ],
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:    "2": [
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:        {
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:            "devices": [
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:                "/dev/loop5"
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:            ],
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:            "lv_name": "ceph_lv2",
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:            "lv_size": "21470642176",
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2abec9de-afba-437e-9a17-384a1dd8cd50,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:            "lv_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:            "name": "ceph_lv2",
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:            "tags": {
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:                "ceph.block_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:                "ceph.cephx_lockbox_secret": "",
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:                "ceph.cluster_name": "ceph",
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:                "ceph.crush_device_class": "",
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:                "ceph.encrypted": "0",
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:                "ceph.osd_fsid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:                "ceph.osd_id": "2",
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:                "ceph.type": "block",
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:                "ceph.vdo": "0"
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:            },
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:            "type": "block",
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:            "vg_name": "ceph_vg2"
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:        }
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]:    ]
Dec  3 19:01:34 compute-0 sharp_brahmagupta[450507]: }
Dec  3 19:01:34 compute-0 systemd[1]: libpod-160078e0b9f326beee73be9d1ae9ee748551e14fee671467994ecbe3e64fda44.scope: Deactivated successfully.
Dec  3 19:01:34 compute-0 conmon[450507]: conmon 160078e0b9f326beee73 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-160078e0b9f326beee73be9d1ae9ee748551e14fee671467994ecbe3e64fda44.scope/container/memory.events
Dec  3 19:01:34 compute-0 podman[450491]: 2025-12-03 19:01:34.304746739 +0000 UTC m=+1.118914061 container died 160078e0b9f326beee73be9d1ae9ee748551e14fee671467994ecbe3e64fda44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_brahmagupta, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 19:01:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-14afb7de248802f12d4471185a036dc63ebf37a77c5cabdf3a1f0353e59b6103-merged.mount: Deactivated successfully.
Dec  3 19:01:34 compute-0 podman[450491]: 2025-12-03 19:01:34.367764257 +0000 UTC m=+1.181931569 container remove 160078e0b9f326beee73be9d1ae9ee748551e14fee671467994ecbe3e64fda44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_brahmagupta, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec  3 19:01:34 compute-0 systemd[1]: libpod-conmon-160078e0b9f326beee73be9d1ae9ee748551e14fee671467994ecbe3e64fda44.scope: Deactivated successfully.
Dec  3 19:01:34 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:01:35 compute-0 NetworkManager[49087]: <info>  [1764788495.0222] manager: (patch-br-int-to-provnet-54c4f4a3-bc4d-431f-a4cd-85ec1868fe98): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/75)
Dec  3 19:01:35 compute-0 NetworkManager[49087]: <info>  [1764788495.0233] manager: (patch-provnet-54c4f4a3-bc4d-431f-a4cd-85ec1868fe98-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/76)
Dec  3 19:01:35 compute-0 nova_compute[348325]: 2025-12-03 19:01:35.029 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:01:35 compute-0 nova_compute[348325]: 2025-12-03 19:01:35.173 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:01:35 compute-0 ovn_controller[89305]: 2025-12-03T19:01:35Z|00158|binding|INFO|Releasing lport f82febe8-1e88-4e67-9f7a-5af5921c9877 from this chassis (sb_readonly=0)
Dec  3 19:01:35 compute-0 ovn_controller[89305]: 2025-12-03T19:01:35Z|00159|binding|INFO|Releasing lport 61129b3d-6cea-46e4-9162-185c7245839a from this chassis (sb_readonly=0)
Dec  3 19:01:35 compute-0 nova_compute[348325]: 2025-12-03 19:01:35.194 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:01:35 compute-0 podman[450668]: 2025-12-03 19:01:35.282378041 +0000 UTC m=+0.053542278 container create 1f938f5b986f1308a547cc0608c0e41ac9f4aedfdcc73bd7dd818731e1a4a509 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_darwin, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec  3 19:01:35 compute-0 systemd[1]: Started libpod-conmon-1f938f5b986f1308a547cc0608c0e41ac9f4aedfdcc73bd7dd818731e1a4a509.scope.
Dec  3 19:01:35 compute-0 podman[450668]: 2025-12-03 19:01:35.261911852 +0000 UTC m=+0.033076099 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:01:35 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:01:35 compute-0 podman[450668]: 2025-12-03 19:01:35.388939132 +0000 UTC m=+0.160103409 container init 1f938f5b986f1308a547cc0608c0e41ac9f4aedfdcc73bd7dd818731e1a4a509 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec  3 19:01:35 compute-0 podman[450668]: 2025-12-03 19:01:35.398116416 +0000 UTC m=+0.169280663 container start 1f938f5b986f1308a547cc0608c0e41ac9f4aedfdcc73bd7dd818731e1a4a509 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 19:01:35 compute-0 podman[450668]: 2025-12-03 19:01:35.403303853 +0000 UTC m=+0.174468140 container attach 1f938f5b986f1308a547cc0608c0e41ac9f4aedfdcc73bd7dd818731e1a4a509 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_darwin, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Dec  3 19:01:35 compute-0 podman[450682]: 2025-12-03 19:01:35.404076061 +0000 UTC m=+0.082175516 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  3 19:01:35 compute-0 happy_darwin[450692]: 167 167
Dec  3 19:01:35 compute-0 systemd[1]: libpod-1f938f5b986f1308a547cc0608c0e41ac9f4aedfdcc73bd7dd818731e1a4a509.scope: Deactivated successfully.
Dec  3 19:01:35 compute-0 podman[450668]: 2025-12-03 19:01:35.411948334 +0000 UTC m=+0.183112591 container died 1f938f5b986f1308a547cc0608c0e41ac9f4aedfdcc73bd7dd818731e1a4a509 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 19:01:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-efe572d197b8ba7d6fb88c580ca0f1bc8fc7c411adfaff59ee644fc6d1e1da52-merged.mount: Deactivated successfully.
Dec  3 19:01:35 compute-0 podman[450668]: 2025-12-03 19:01:35.47777065 +0000 UTC m=+0.248934897 container remove 1f938f5b986f1308a547cc0608c0e41ac9f4aedfdcc73bd7dd818731e1a4a509 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_darwin, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Dec  3 19:01:35 compute-0 nova_compute[348325]: 2025-12-03 19:01:35.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:01:35 compute-0 systemd[1]: libpod-conmon-1f938f5b986f1308a547cc0608c0e41ac9f4aedfdcc73bd7dd818731e1a4a509.scope: Deactivated successfully.
Dec  3 19:01:35 compute-0 nova_compute[348325]: 2025-12-03 19:01:35.513 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 19:01:35 compute-0 nova_compute[348325]: 2025-12-03 19:01:35.513 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 19:01:35 compute-0 nova_compute[348325]: 2025-12-03 19:01:35.513 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 19:01:35 compute-0 nova_compute[348325]: 2025-12-03 19:01:35.514 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  3 19:01:35 compute-0 nova_compute[348325]: 2025-12-03 19:01:35.514 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 19:01:35 compute-0 podman[450735]: 2025-12-03 19:01:35.721022938 +0000 UTC m=+0.086409320 container create 9bcd6f49942bb4af1119d8fd08f01c235d3c44e6564dced43f3a85514b68431c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_bardeen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec  3 19:01:35 compute-0 podman[450735]: 2025-12-03 19:01:35.680624342 +0000 UTC m=+0.046010834 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:01:35 compute-0 systemd[1]: Started libpod-conmon-9bcd6f49942bb4af1119d8fd08f01c235d3c44e6564dced43f3a85514b68431c.scope.
Dec  3 19:01:35 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:01:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1df1d07467d52dbee865d982da56734e2f478e775e4a36822f5bd88e7df35f18/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 19:01:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1df1d07467d52dbee865d982da56734e2f478e775e4a36822f5bd88e7df35f18/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 19:01:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1df1d07467d52dbee865d982da56734e2f478e775e4a36822f5bd88e7df35f18/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 19:01:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1df1d07467d52dbee865d982da56734e2f478e775e4a36822f5bd88e7df35f18/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 19:01:35 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1881: 321 pgs: 321 active+clean; 203 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 1.8 MiB/s wr, 71 op/s
Dec  3 19:01:35 compute-0 podman[450735]: 2025-12-03 19:01:35.887950102 +0000 UTC m=+0.253336484 container init 9bcd6f49942bb4af1119d8fd08f01c235d3c44e6564dced43f3a85514b68431c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_bardeen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec  3 19:01:35 compute-0 podman[450735]: 2025-12-03 19:01:35.897108205 +0000 UTC m=+0.262494607 container start 9bcd6f49942bb4af1119d8fd08f01c235d3c44e6564dced43f3a85514b68431c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec  3 19:01:35 compute-0 podman[450735]: 2025-12-03 19:01:35.902854425 +0000 UTC m=+0.268240837 container attach 9bcd6f49942bb4af1119d8fd08f01c235d3c44e6564dced43f3a85514b68431c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_bardeen, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec  3 19:01:35 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 19:01:35 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4094485400' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 19:01:35 compute-0 nova_compute[348325]: 2025-12-03 19:01:35.966 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 19:01:36 compute-0 nova_compute[348325]: 2025-12-03 19:01:36.071 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 19:01:36 compute-0 nova_compute[348325]: 2025-12-03 19:01:36.071 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 19:01:36 compute-0 nova_compute[348325]: 2025-12-03 19:01:36.077 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 19:01:36 compute-0 nova_compute[348325]: 2025-12-03 19:01:36.077 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 19:01:36 compute-0 nova_compute[348325]: 2025-12-03 19:01:36.414 348329 WARNING nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  3 19:01:36 compute-0 nova_compute[348325]: 2025-12-03 19:01:36.415 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3608MB free_disk=59.92192459106445GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  3 19:01:36 compute-0 nova_compute[348325]: 2025-12-03 19:01:36.415 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 19:01:36 compute-0 nova_compute[348325]: 2025-12-03 19:01:36.415 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 19:01:36 compute-0 nova_compute[348325]: 2025-12-03 19:01:36.501 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance a4fc45c7-44e4-4b50-a3e0-98de13268f88 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  3 19:01:36 compute-0 nova_compute[348325]: 2025-12-03 19:01:36.501 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  3 19:01:36 compute-0 nova_compute[348325]: 2025-12-03 19:01:36.501 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  3 19:01:36 compute-0 nova_compute[348325]: 2025-12-03 19:01:36.501 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  3 19:01:36 compute-0 nova_compute[348325]: 2025-12-03 19:01:36.548 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 19:01:36 compute-0 jolly_bardeen[450765]: {
Dec  3 19:01:36 compute-0 jolly_bardeen[450765]:    "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
Dec  3 19:01:36 compute-0 jolly_bardeen[450765]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:01:36 compute-0 jolly_bardeen[450765]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 19:01:36 compute-0 jolly_bardeen[450765]:        "osd_id": 1,
Dec  3 19:01:36 compute-0 jolly_bardeen[450765]:        "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 19:01:36 compute-0 jolly_bardeen[450765]:        "type": "bluestore"
Dec  3 19:01:36 compute-0 jolly_bardeen[450765]:    },
Dec  3 19:01:36 compute-0 jolly_bardeen[450765]:    "2abec9de-afba-437e-9a17-384a1dd8cd50": {
Dec  3 19:01:36 compute-0 jolly_bardeen[450765]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:01:36 compute-0 jolly_bardeen[450765]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 19:01:36 compute-0 jolly_bardeen[450765]:        "osd_id": 2,
Dec  3 19:01:36 compute-0 jolly_bardeen[450765]:        "osd_uuid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 19:01:36 compute-0 jolly_bardeen[450765]:        "type": "bluestore"
Dec  3 19:01:36 compute-0 jolly_bardeen[450765]:    },
Dec  3 19:01:36 compute-0 jolly_bardeen[450765]:    "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": {
Dec  3 19:01:36 compute-0 jolly_bardeen[450765]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:01:36 compute-0 jolly_bardeen[450765]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 19:01:36 compute-0 jolly_bardeen[450765]:        "osd_id": 0,
Dec  3 19:01:36 compute-0 jolly_bardeen[450765]:        "osd_uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 19:01:36 compute-0 jolly_bardeen[450765]:        "type": "bluestore"
Dec  3 19:01:36 compute-0 jolly_bardeen[450765]:    }
Dec  3 19:01:36 compute-0 jolly_bardeen[450765]: }
Dec  3 19:01:36 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 19:01:36 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2749303476' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 19:01:37 compute-0 systemd[1]: libpod-9bcd6f49942bb4af1119d8fd08f01c235d3c44e6564dced43f3a85514b68431c.scope: Deactivated successfully.
Dec  3 19:01:37 compute-0 systemd[1]: libpod-9bcd6f49942bb4af1119d8fd08f01c235d3c44e6564dced43f3a85514b68431c.scope: Consumed 1.101s CPU time.
Dec  3 19:01:37 compute-0 podman[450735]: 2025-12-03 19:01:37.026904371 +0000 UTC m=+1.392290793 container died 9bcd6f49942bb4af1119d8fd08f01c235d3c44e6564dced43f3a85514b68431c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_bardeen, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:01:37 compute-0 nova_compute[348325]: 2025-12-03 19:01:37.029 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 19:01:37 compute-0 nova_compute[348325]: 2025-12-03 19:01:37.052 348329 DEBUG nova.compute.provider_tree [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  3 19:01:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-1df1d07467d52dbee865d982da56734e2f478e775e4a36822f5bd88e7df35f18-merged.mount: Deactivated successfully.
Dec  3 19:01:37 compute-0 nova_compute[348325]: 2025-12-03 19:01:37.093 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  3 19:01:37 compute-0 podman[450735]: 2025-12-03 19:01:37.111987348 +0000 UTC m=+1.477373730 container remove 9bcd6f49942bb4af1119d8fd08f01c235d3c44e6564dced43f3a85514b68431c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_bardeen, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec  3 19:01:37 compute-0 systemd[1]: libpod-conmon-9bcd6f49942bb4af1119d8fd08f01c235d3c44e6564dced43f3a85514b68431c.scope: Deactivated successfully.
Dec  3 19:01:37 compute-0 nova_compute[348325]: 2025-12-03 19:01:37.125 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  3 19:01:37 compute-0 nova_compute[348325]: 2025-12-03 19:01:37.125 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.710s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 19:01:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 19:01:37 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:01:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 19:01:37 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:01:37 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 0e8826c5-d1d8-4695-87de-52afa0790b88 does not exist
Dec  3 19:01:37 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 9574f8c9-ebb6-4c71-93a9-a44a54e40b17 does not exist
Dec  3 19:01:37 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:01:37.332 286999 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5a:63:53', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '8e:79:bd:f4:48:1d'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  3 19:01:37 compute-0 nova_compute[348325]: 2025-12-03 19:01:37.333 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:01:37 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:01:37.334 286999 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec  3 19:01:37 compute-0 nova_compute[348325]: 2025-12-03 19:01:37.371 348329 DEBUG nova.compute.manager [req-74660519-5340-418e-8203-f2c790d4b227 req-5b68a50e-5e27-49ac-8fa2-c327966c8344 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Received event network-changed-92566cef-01e0-4398-bbab-0b7049af2e6b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  3 19:01:37 compute-0 nova_compute[348325]: 2025-12-03 19:01:37.372 348329 DEBUG nova.compute.manager [req-74660519-5340-418e-8203-f2c790d4b227 req-5b68a50e-5e27-49ac-8fa2-c327966c8344 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Refreshing instance network info cache due to event network-changed-92566cef-01e0-4398-bbab-0b7049af2e6b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec  3 19:01:37 compute-0 nova_compute[348325]: 2025-12-03 19:01:37.372 348329 DEBUG oslo_concurrency.lockutils [req-74660519-5340-418e-8203-f2c790d4b227 req-5b68a50e-5e27-49ac-8fa2-c327966c8344 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "refresh_cache-3bb34e64-ac61-46f3-99eb-2fdd346a8ecc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  3 19:01:37 compute-0 nova_compute[348325]: 2025-12-03 19:01:37.372 348329 DEBUG oslo_concurrency.lockutils [req-74660519-5340-418e-8203-f2c790d4b227 req-5b68a50e-5e27-49ac-8fa2-c327966c8344 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquired lock "refresh_cache-3bb34e64-ac61-46f3-99eb-2fdd346a8ecc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  3 19:01:37 compute-0 nova_compute[348325]: 2025-12-03 19:01:37.372 348329 DEBUG nova.network.neutron [req-74660519-5340-418e-8203-f2c790d4b227 req-5b68a50e-5e27-49ac-8fa2-c327966c8344 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Refreshing network info cache for port 92566cef-01e0-4398-bbab-0b7049af2e6b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec  3 19:01:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 19:01:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/203636333' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 19:01:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 19:01:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/203636333' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 19:01:37 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1882: 321 pgs: 321 active+clean; 203 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 379 KiB/s wr, 63 op/s
Dec  3 19:01:37 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:01:37 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:01:38 compute-0 nova_compute[348325]: 2025-12-03 19:01:38.488 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:01:39 compute-0 nova_compute[348325]: 2025-12-03 19:01:39.249 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:01:39 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:01:39.338 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1ac9fd0d-196b-4ea8-9a9a-8aa831092805, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  3 19:01:39 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:01:39 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #87. Immutable memtables: 0.
Dec  3 19:01:39 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:01:39.491384) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 19:01:39 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 87
Dec  3 19:01:39 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764788499491524, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 723, "num_deletes": 250, "total_data_size": 907304, "memory_usage": 920680, "flush_reason": "Manual Compaction"}
Dec  3 19:01:39 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #88: started
Dec  3 19:01:39 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764788499504211, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 88, "file_size": 899910, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38050, "largest_seqno": 38772, "table_properties": {"data_size": 896096, "index_size": 1594, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 7405, "raw_average_key_size": 16, "raw_value_size": 888659, "raw_average_value_size": 2033, "num_data_blocks": 71, "num_entries": 437, "num_filter_entries": 437, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764788440, "oldest_key_time": 1764788440, "file_creation_time": 1764788499, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a1ac3b74-8599-4a51-8b4c-6fd35a134427", "db_session_id": "TYOLZSJOOVNJYKF8Y1CE", "orig_file_number": 88, "seqno_to_time_mapping": "N/A"}}
Dec  3 19:01:39 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 12903 microseconds, and 6700 cpu microseconds.
Dec  3 19:01:39 compute-0 ceph-mon[192802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 19:01:39 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:01:39.504288) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #88: 899910 bytes OK
Dec  3 19:01:39 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:01:39.504310) [db/memtable_list.cc:519] [default] Level-0 commit table #88 started
Dec  3 19:01:39 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:01:39.506876) [db/memtable_list.cc:722] [default] Level-0 commit table #88: memtable #1 done
Dec  3 19:01:39 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:01:39.506901) EVENT_LOG_v1 {"time_micros": 1764788499506892, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 19:01:39 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:01:39.506924) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 19:01:39 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 903577, prev total WAL file size 903577, number of live WAL files 2.
Dec  3 19:01:39 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000084.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 19:01:39 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:01:39.508482) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323531' seq:0, type:0; will stop at (end)
Dec  3 19:01:39 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 19:01:39 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [88(878KB)], [86(10096KB)]
Dec  3 19:01:39 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764788499508547, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [88], "files_L6": [86], "score": -1, "input_data_size": 11238355, "oldest_snapshot_seqno": -1}
Dec  3 19:01:39 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #89: 5736 keys, 10518706 bytes, temperature: kUnknown
Dec  3 19:01:39 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764788499581305, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 89, "file_size": 10518706, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10477887, "index_size": 25349, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14405, "raw_key_size": 147475, "raw_average_key_size": 25, "raw_value_size": 10371682, "raw_average_value_size": 1808, "num_data_blocks": 1025, "num_entries": 5736, "num_filter_entries": 5736, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764784942, "oldest_key_time": 0, "file_creation_time": 1764788499, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a1ac3b74-8599-4a51-8b4c-6fd35a134427", "db_session_id": "TYOLZSJOOVNJYKF8Y1CE", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Dec  3 19:01:39 compute-0 ceph-mon[192802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 19:01:39 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:01:39.581544) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 10518706 bytes
Dec  3 19:01:39 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:01:39.583008) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 154.4 rd, 144.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 9.9 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(24.2) write-amplify(11.7) OK, records in: 6247, records dropped: 511 output_compression: NoCompression
Dec  3 19:01:39 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:01:39.583022) EVENT_LOG_v1 {"time_micros": 1764788499583015, "job": 50, "event": "compaction_finished", "compaction_time_micros": 72807, "compaction_time_cpu_micros": 25297, "output_level": 6, "num_output_files": 1, "total_output_size": 10518706, "num_input_records": 6247, "num_output_records": 5736, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 19:01:39 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000088.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 19:01:39 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764788499583250, "job": 50, "event": "table_file_deletion", "file_number": 88}
Dec  3 19:01:39 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 19:01:39 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764788499584691, "job": 50, "event": "table_file_deletion", "file_number": 86}
Dec  3 19:01:39 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:01:39.508253) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:01:39 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:01:39.584825) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:01:39 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:01:39.584831) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:01:39 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:01:39.584832) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:01:39 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:01:39.584834) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:01:39 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:01:39.584835) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:01:39 compute-0 nova_compute[348325]: 2025-12-03 19:01:39.772 348329 DEBUG nova.network.neutron [req-74660519-5340-418e-8203-f2c790d4b227 req-5b68a50e-5e27-49ac-8fa2-c327966c8344 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Updated VIF entry in instance network info cache for port 92566cef-01e0-4398-bbab-0b7049af2e6b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec  3 19:01:39 compute-0 nova_compute[348325]: 2025-12-03 19:01:39.773 348329 DEBUG nova.network.neutron [req-74660519-5340-418e-8203-f2c790d4b227 req-5b68a50e-5e27-49ac-8fa2-c327966c8344 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Updating instance_info_cache with network_info: [{"id": "92566cef-01e0-4398-bbab-0b7049af2e6b", "address": "fa:16:3e:ec:63:6e", "network": {"id": "d9057d7e-a146-4d5d-b454-162ed672215e", "bridge": "br-int", "label": "tempest-network-smoke--493824106", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "014032eeba1145f99481402acd561743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92566cef-01", "ovs_interfaceid": "92566cef-01e0-4398-bbab-0b7049af2e6b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  3 19:01:39 compute-0 nova_compute[348325]: 2025-12-03 19:01:39.801 348329 DEBUG oslo_concurrency.lockutils [req-74660519-5340-418e-8203-f2c790d4b227 req-5b68a50e-5e27-49ac-8fa2-c327966c8344 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Releasing lock "refresh_cache-3bb34e64-ac61-46f3-99eb-2fdd346a8ecc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  3 19:01:39 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1883: 321 pgs: 321 active+clean; 203 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 232 KiB/s wr, 70 op/s
Dec  3 19:01:39 compute-0 podman[450885]: 2025-12-03 19:01:39.976294989 +0000 UTC m=+0.134634956 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, io.buildah.version=1.41.4)
Dec  3 19:01:39 compute-0 podman[450884]: 2025-12-03 19:01:39.990346332 +0000 UTC m=+0.142456838 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Dec  3 19:01:41 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1884: 321 pgs: 321 active+clean; 203 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec  3 19:01:43 compute-0 nova_compute[348325]: 2025-12-03 19:01:43.491 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:01:43 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1885: 321 pgs: 321 active+clean; 203 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec  3 19:01:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:01:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:01:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:01:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:01:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:01:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:01:44 compute-0 nova_compute[348325]: 2025-12-03 19:01:44.252 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:01:44 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:01:45 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1886: 321 pgs: 321 active+clean; 203 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 511 B/s wr, 70 op/s
Dec  3 19:01:47 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1887: 321 pgs: 321 active+clean; 203 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 939 KiB/s rd, 30 op/s
Dec  3 19:01:48 compute-0 nova_compute[348325]: 2025-12-03 19:01:48.494 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:01:49 compute-0 nova_compute[348325]: 2025-12-03 19:01:49.255 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:01:49 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:01:49 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1888: 321 pgs: 321 active+clean; 203 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 776 KiB/s rd, 24 op/s
Dec  3 19:01:51 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1889: 321 pgs: 321 active+clean; 203 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 542 KiB/s rd, 17 op/s
Dec  3 19:01:51 compute-0 podman[450932]: 2025-12-03 19:01:51.934490962 +0000 UTC m=+0.090444608 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Dec  3 19:01:51 compute-0 podman[450934]: 2025-12-03 19:01:51.938659074 +0000 UTC m=+0.088808189 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., maintainer=Red Hat, Inc., version=9.6, managed_by=edpm_ansible, vcs-type=git, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, architecture=x86_64, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7)
Dec  3 19:01:51 compute-0 podman[450933]: 2025-12-03 19:01:51.962395533 +0000 UTC m=+0.121536227 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  3 19:01:53 compute-0 nova_compute[348325]: 2025-12-03 19:01:53.497 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:01:53 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1890: 321 pgs: 321 active+clean; 203 MiB data, 359 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:01:54 compute-0 nova_compute[348325]: 2025-12-03 19:01:54.260 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:01:54 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:01:55 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1891: 321 pgs: 321 active+clean; 203 MiB data, 359 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:01:55 compute-0 podman[450991]: 2025-12-03 19:01:55.9458602 +0000 UTC m=+0.095071830 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, name=ubi9, io.openshift.tags=base rhel9, release=1214.1726694543, build-date=2024-09-18T21:23:30, distribution-scope=public, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, version=9.4, com.redhat.component=ubi9-container, container_name=kepler, vcs-type=git, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec  3 19:01:55 compute-0 podman[450993]: 2025-12-03 19:01:55.956747206 +0000 UTC m=+0.107759061 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true)
Dec  3 19:01:55 compute-0 podman[450992]: 2025-12-03 19:01:55.956778787 +0000 UTC m=+0.116686779 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec  3 19:01:57 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1892: 321 pgs: 321 active+clean; 203 MiB data, 359 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:01:58 compute-0 nova_compute[348325]: 2025-12-03 19:01:58.500 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:01:59 compute-0 nova_compute[348325]: 2025-12-03 19:01:59.264 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:01:59 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:01:59 compute-0 podman[158200]: time="2025-12-03T19:01:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 19:01:59 compute-0 podman[158200]: @ - - [03/Dec/2025:19:01:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45044 "" "Go-http-client/1.1"
Dec  3 19:01:59 compute-0 podman[158200]: @ - - [03/Dec/2025:19:01:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9106 "" "Go-http-client/1.1"
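[annotation] The two GET requests above are the podman API service answering the podman_exporter's poll of the libpod REST API over a unix socket (CONTAINER_HOST=unix:///run/podman/podman.sock in the podman_exporter config logged at 19:02:05). A minimal sketch of the same query using only the Python standard library; the socket path and the /v4.9.3 prefix are taken from these log lines, and the Names/State/Status field names follow podman's list-containers JSON (run as root, matching the exporter's user: root / privileged: True config):

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over an AF_UNIX socket (the libpod API has no TCP listener here)."""
        def __init__(self, sock_path):
            super().__init__("localhost")
            self.sock_path = sock_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.sock_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        # Names/State/Status as in `podman ps --format json` output
        print(c["Names"][0], c["State"], c.get("Status", ""))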
Dec  3 19:01:59 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1893: 321 pgs: 321 active+clean; 203 MiB data, 359 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:02:01 compute-0 openstack_network_exporter[365222]: ERROR   19:02:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:02:01 compute-0 openstack_network_exporter[365222]: ERROR   19:02:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:02:01 compute-0 openstack_network_exporter[365222]: ERROR   19:02:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 19:02:01 compute-0 openstack_network_exporter[365222]: ERROR   19:02:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 19:02:01 compute-0 openstack_network_exporter[365222]: ERROR   19:02:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
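[annotation] The four ERROR lines are openstack_network_exporter probing daemons that are simply absent here: ovn-northd runs on the control plane rather than on a compute node, and the dpif-netdev PMD queries only apply to a DPDK (netdev) datapath, so on this kernel-datapath host they are expected noise rather than a fault. The exporter locates a daemon through its ovs-appctl-style control socket (conventionally <daemon>.<pid>.ctl); a quick check for those files, with host-side paths inferred from the exporter's volume mounts logged at 19:02:22 (an assumption, since only the in-container paths /run/openvswitch and /run/ovn are authoritative):

    import glob

    # Host paths mounted into the exporter as /run/openvswitch and /run/ovn,
    # per the openstack_network_exporter config_data logged below at 19:02:22.
    for pattern in ("/var/run/openvswitch/*.ctl", "/var/lib/openvswitch/ovn/*.ctl"):
        hits = glob.glob(pattern)
        print(pattern, "->", hits if hits else "no control socket files found")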
Dec  3 19:02:01 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1894: 321 pgs: 321 active+clean; 203 MiB data, 359 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:02:03 compute-0 nova_compute[348325]: 2025-12-03 19:02:03.503 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:02:03 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1895: 321 pgs: 321 active+clean; 203 MiB data, 359 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:02:04 compute-0 nova_compute[348325]: 2025-12-03 19:02:04.267 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:02:04 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:02:05 compute-0 ovn_controller[89305]: 2025-12-03T19:02:05Z|00160|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Dec  3 19:02:05 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1896: 321 pgs: 321 active+clean; 203 MiB data, 359 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:02:05 compute-0 podman[451040]: 2025-12-03 19:02:05.964291129 +0000 UTC m=+0.125251539 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 19:02:07 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1897: 321 pgs: 321 active+clean; 203 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 0 op/s
Dec  3 19:02:08 compute-0 nova_compute[348325]: 2025-12-03 19:02:08.505 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:02:09 compute-0 nova_compute[348325]: 2025-12-03 19:02:09.281 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:02:09 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:02:09 compute-0 ovn_controller[89305]: 2025-12-03T19:02:09Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ec:63:6e 10.100.0.8
Dec  3 19:02:09 compute-0 ovn_controller[89305]: 2025-12-03T19:02:09Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ec:63:6e 10.100.0.8
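[annotation] ovn-controller's pinctrl thread answers DHCP for OVN-managed ports itself, so the OFFER/ACK pair above is the whole lease handshake: fa:16:3e:ec:63:6e is leased 10.100.0.8, the fixed IP that reappears in nova's instance network cache at 19:02:23. A throwaway parser for these two message shapes, assuming the layout shown in the lines above:

    import re

    PINCTRL_DHCP = re.compile(
        r"\|(DHCPOFFER|DHCPACK)\s+([0-9a-f:]{17})\s+(\d{1,3}(?:\.\d{1,3}){3})")

    line = ("2025-12-03T19:02:09Z|00022|pinctrl(ovn_pinctrl0)|INFO|"
            "DHCPACK fa:16:3e:ec:63:6e 10.100.0.8")
    m = PINCTRL_DHCP.search(line)
    if m:
        msg, mac, ip = m.groups()
        print(msg, mac, ip)   # DHCPACK fa:16:3e:ec:63:6e 10.100.0.8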
Dec  3 19:02:09 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1898: 321 pgs: 321 active+clean; 213 MiB data, 367 MiB used, 60 GiB / 60 GiB avail; 46 KiB/s rd, 692 KiB/s wr, 11 op/s
Dec  3 19:02:10 compute-0 podman[451065]: 2025-12-03 19:02:10.998001751 +0000 UTC m=+0.150600308 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  3 19:02:11 compute-0 podman[451066]: 2025-12-03 19:02:11.007074502 +0000 UTC m=+0.155879896 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  3 19:02:11 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1899: 321 pgs: 321 active+clean; 233 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 321 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Dec  3 19:02:13 compute-0 nova_compute[348325]: 2025-12-03 19:02:13.508 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:02:13 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1900: 321 pgs: 321 active+clean; 236 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec  3 19:02:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:02:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:02:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:02:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:02:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:02:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:02:14 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_19:02:14
Dec  3 19:02:14 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 19:02:14 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 19:02:14 compute-0 ceph-mgr[193091]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.meta', 'default.rgw.control', 'images', '.rgw.root', 'backups', 'vms', '.mgr', 'cephfs.cephfs.data']
Dec  3 19:02:14 compute-0 ceph-mgr[193091]: [balancer INFO root] prepared 0/10 changes
Dec  3 19:02:14 compute-0 nova_compute[348325]: 2025-12-03 19:02:14.284 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:02:14 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:02:14 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #90. Immutable memtables: 0.
Dec  3 19:02:14 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:02:14.506036) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 19:02:14 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 51] Flushing memtable with next log file: 90
Dec  3 19:02:14 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764788534506070, "job": 51, "event": "flush_started", "num_memtables": 1, "num_entries": 502, "num_deletes": 250, "total_data_size": 515839, "memory_usage": 525200, "flush_reason": "Manual Compaction"}
Dec  3 19:02:14 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 51] Level-0 flush table #91: started
Dec  3 19:02:14 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764788534511282, "cf_name": "default", "job": 51, "event": "table_file_creation", "file_number": 91, "file_size": 353480, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38773, "largest_seqno": 39274, "table_properties": {"data_size": 350881, "index_size": 635, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6781, "raw_average_key_size": 20, "raw_value_size": 345718, "raw_average_value_size": 1031, "num_data_blocks": 29, "num_entries": 335, "num_filter_entries": 335, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764788500, "oldest_key_time": 1764788500, "file_creation_time": 1764788534, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a1ac3b74-8599-4a51-8b4c-6fd35a134427", "db_session_id": "TYOLZSJOOVNJYKF8Y1CE", "orig_file_number": 91, "seqno_to_time_mapping": "N/A"}}
Dec  3 19:02:14 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 51] Flush lasted 5469 microseconds, and 1784 cpu microseconds.
Dec  3 19:02:14 compute-0 ceph-mon[192802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 19:02:14 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:02:14.511503) [db/flush_job.cc:967] [default] [JOB 51] Level-0 flush table #91: 353480 bytes OK
Dec  3 19:02:14 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:02:14.511519) [db/memtable_list.cc:519] [default] Level-0 commit table #91 started
Dec  3 19:02:14 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:02:14.513763) [db/memtable_list.cc:722] [default] Level-0 commit table #91: memtable #1 done
Dec  3 19:02:14 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:02:14.513777) EVENT_LOG_v1 {"time_micros": 1764788534513772, "job": 51, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 19:02:14 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:02:14.513791) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 19:02:14 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 51] Try to delete WAL files size 512939, prev total WAL file size 512939, number of live WAL files 2.
Dec  3 19:02:14 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000087.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 19:02:14 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:02:14.514396) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031353033' seq:72057594037927935, type:22 .. '6D6772737461740031373534' seq:0, type:0; will stop at (end)
Dec  3 19:02:14 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 52] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 19:02:14 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 51 Base level 0, inputs: [91(345KB)], [89(10MB)]
Dec  3 19:02:14 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764788534514542, "job": 52, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [91], "files_L6": [89], "score": -1, "input_data_size": 10872186, "oldest_snapshot_seqno": -1}
Dec  3 19:02:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 19:02:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 19:02:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 19:02:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 19:02:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 19:02:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 19:02:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 19:02:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 19:02:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 19:02:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 19:02:14 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 52] Generated table #92: 5575 keys, 7741720 bytes, temperature: kUnknown
Dec  3 19:02:14 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764788534577635, "cf_name": "default", "job": 52, "event": "table_file_creation", "file_number": 92, "file_size": 7741720, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7706440, "index_size": 20258, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13957, "raw_key_size": 144287, "raw_average_key_size": 25, "raw_value_size": 7607426, "raw_average_value_size": 1364, "num_data_blocks": 814, "num_entries": 5575, "num_filter_entries": 5575, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764784942, "oldest_key_time": 0, "file_creation_time": 1764788534, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a1ac3b74-8599-4a51-8b4c-6fd35a134427", "db_session_id": "TYOLZSJOOVNJYKF8Y1CE", "orig_file_number": 92, "seqno_to_time_mapping": "N/A"}}
Dec  3 19:02:14 compute-0 ceph-mon[192802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 19:02:14 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:02:14.578167) [db/compaction/compaction_job.cc:1663] [default] [JOB 52] Compacted 1@0 + 1@6 files to L6 => 7741720 bytes
Dec  3 19:02:14 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:02:14.580620) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 171.3 rd, 122.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 10.0 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(52.7) write-amplify(21.9) OK, records in: 6071, records dropped: 496 output_compression: NoCompression
Dec  3 19:02:14 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:02:14.580643) EVENT_LOG_v1 {"time_micros": 1764788534580633, "job": 52, "event": "compaction_finished", "compaction_time_micros": 63460, "compaction_time_cpu_micros": 29387, "output_level": 6, "num_output_files": 1, "total_output_size": 7741720, "num_input_records": 6071, "num_output_records": 5575, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 19:02:14 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000091.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 19:02:14 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764788534582080, "job": 52, "event": "table_file_deletion", "file_number": 91}
Dec  3 19:02:14 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000089.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 19:02:14 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764788534585241, "job": 52, "event": "table_file_deletion", "file_number": 89}
Dec  3 19:02:14 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:02:14.514227) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:02:14 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:02:14.586050) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:02:14 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:02:14.586056) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:02:14 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:02:14.586058) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:02:14 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:02:14.586060) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:02:14 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:02:14.586062) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
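[annotation] Everything ceph-mon logged between 19:02:14.506 and 19:02:14.586 is one manual-compaction cycle: flush the memtable to L0 table #91 (353480 bytes), compact 1@0 + 1@6 into the new L6 table #92, then delete both inputs; job 52 itself reports the cost (read-write-amplify 52.7, write-amplify 21.9). The EVENT_LOG_v1 payloads are plain JSON appended to the line, which makes the stream easy to mine; a sketch, assuming the journal has been saved to a file such as /var/log/messages:

    import json
    import re

    EVENT = re.compile(r"EVENT_LOG_v1 (\{.*\})")

    def rocksdb_events(path):
        # Yields the structured event dicts (flush_started, table_file_creation,
        # compaction_finished, ...) embedded in ceph-mon's rocksdb log lines.
        with open(path, errors="replace") as fh:
            for line in fh:
                m = EVENT.search(line)
                if m:
                    yield json.loads(m.group(1))

    for ev in rocksdb_events("/var/log/messages"):
        if ev.get("event") == "compaction_finished":
            print(ev["job"], ev["output_level"], ev["total_output_size"],
                  ev["compaction_time_micros"])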
Dec  3 19:02:15 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1901: 321 pgs: 321 active+clean; 236 MiB data, 384 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec  3 19:02:16 compute-0 nova_compute[348325]: 2025-12-03 19:02:16.387 348329 INFO nova.compute.manager [None req-86682e1d-b376-41f7-8b83-2e17c73f5315 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Get console output#033[00m
Dec  3 19:02:16 compute-0 nova_compute[348325]: 2025-12-03 19:02:16.396 348329 INFO oslo.privsep.daemon [None req-86682e1d-b376-41f7-8b83-2e17c73f5315 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmp9ces3b8c/privsep.sock']#033[00m
Dec  3 19:02:17 compute-0 nova_compute[348325]: 2025-12-03 19:02:17.216 348329 INFO oslo.privsep.daemon [None req-86682e1d-b376-41f7-8b83-2e17c73f5315 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Dec  3 19:02:17 compute-0 nova_compute[348325]: 2025-12-03 19:02:17.106 451114 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Dec  3 19:02:17 compute-0 nova_compute[348325]: 2025-12-03 19:02:17.113 451114 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Dec  3 19:02:17 compute-0 nova_compute[348325]: 2025-12-03 19:02:17.118 451114 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Dec  3 19:02:17 compute-0 nova_compute[348325]: 2025-12-03 19:02:17.118 451114 INFO oslo.privsep.daemon [-] privsep daemon running as pid 451114#033[00m
Dec  3 19:02:17 compute-0 nova_compute[348325]: 2025-12-03 19:02:17.313 451114 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
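[annotation] The "Get console output" request at 19:02:16 is what spawns the privsep helper: oslo.privsep forks a root daemon via sudo/nova-rootwrap, keeps only the capabilities listed at 19:02:17.118, and performs the privileged console pty read there. The "Ignored error" is nova reading the pty, getting None back instead of data, and appending it to a bytes buffer; that mix produces exactly the TypeError text in the log, and nova swallows it:

    buf = b""      # console output accumulated so far
    chunk = None   # a pty read that returned no data
    try:
        buf = buf + chunk
    except TypeError as exc:
        print(exc)   # can't concat NoneType to bytes  (as logged, and ignored)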
Dec  3 19:02:17 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1902: 321 pgs: 321 active+clean; 236 MiB data, 384 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec  3 19:02:18 compute-0 nova_compute[348325]: 2025-12-03 19:02:18.117 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:02:18 compute-0 nova_compute[348325]: 2025-12-03 19:02:18.510 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:02:19 compute-0 nova_compute[348325]: 2025-12-03 19:02:19.288 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:02:19 compute-0 nova_compute[348325]: 2025-12-03 19:02:19.498 348329 DEBUG nova.compute.manager [req-82e3cb67-09cb-44a1-a535-24a11ef68948 req-1ee48bb8-465f-47d0-84a6-f36f94ec11a0 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Received event network-changed-92566cef-01e0-4398-bbab-0b7049af2e6b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 19:02:19 compute-0 nova_compute[348325]: 2025-12-03 19:02:19.499 348329 DEBUG nova.compute.manager [req-82e3cb67-09cb-44a1-a535-24a11ef68948 req-1ee48bb8-465f-47d0-84a6-f36f94ec11a0 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Refreshing instance network info cache due to event network-changed-92566cef-01e0-4398-bbab-0b7049af2e6b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  3 19:02:19 compute-0 nova_compute[348325]: 2025-12-03 19:02:19.499 348329 DEBUG oslo_concurrency.lockutils [req-82e3cb67-09cb-44a1-a535-24a11ef68948 req-1ee48bb8-465f-47d0-84a6-f36f94ec11a0 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "refresh_cache-3bb34e64-ac61-46f3-99eb-2fdd346a8ecc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 19:02:19 compute-0 nova_compute[348325]: 2025-12-03 19:02:19.500 348329 DEBUG oslo_concurrency.lockutils [req-82e3cb67-09cb-44a1-a535-24a11ef68948 req-1ee48bb8-465f-47d0-84a6-f36f94ec11a0 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquired lock "refresh_cache-3bb34e64-ac61-46f3-99eb-2fdd346a8ecc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 19:02:19 compute-0 nova_compute[348325]: 2025-12-03 19:02:19.500 348329 DEBUG nova.network.neutron [req-82e3cb67-09cb-44a1-a535-24a11ef68948 req-1ee48bb8-465f-47d0-84a6-f36f94ec11a0 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Refreshing network info cache for port 92566cef-01e0-4398-bbab-0b7049af2e6b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  3 19:02:19 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:02:19 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1903: 321 pgs: 321 active+clean; 236 MiB data, 384 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec  3 19:02:21 compute-0 nova_compute[348325]: 2025-12-03 19:02:21.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:02:21 compute-0 nova_compute[348325]: 2025-12-03 19:02:21.488 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:02:21 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1904: 321 pgs: 321 active+clean; 236 MiB data, 384 MiB used, 60 GiB / 60 GiB avail; 287 KiB/s rd, 1.5 MiB/s wr, 52 op/s
Dec  3 19:02:22 compute-0 podman[451122]: 2025-12-03 19:02:22.904686427 +0000 UTC m=+0.072784648 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd)
Dec  3 19:02:22 compute-0 podman[451124]: 2025-12-03 19:02:22.905213639 +0000 UTC m=+0.071467495 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  3 19:02:22 compute-0 podman[451125]: 2025-12-03 19:02:22.954257386 +0000 UTC m=+0.115014458 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, maintainer=Red Hat, Inc., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, architecture=x86_64, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, managed_by=edpm_ansible, distribution-scope=public, version=9.6)
Dec  3 19:02:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:02:23.215 286999 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5a:63:53', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '8e:79:bd:f4:48:1d'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 19:02:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:02:23.217 286999 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  3 19:02:23 compute-0 nova_compute[348325]: 2025-12-03 19:02:23.218 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:02:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:02:23.358 286999 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 19:02:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:02:23.359 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 19:02:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:02:23.360 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 19:02:23 compute-0 nova_compute[348325]: 2025-12-03 19:02:23.440 348329 DEBUG nova.network.neutron [req-82e3cb67-09cb-44a1-a535-24a11ef68948 req-1ee48bb8-465f-47d0-84a6-f36f94ec11a0 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Updated VIF entry in instance network info cache for port 92566cef-01e0-4398-bbab-0b7049af2e6b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  3 19:02:23 compute-0 nova_compute[348325]: 2025-12-03 19:02:23.441 348329 DEBUG nova.network.neutron [req-82e3cb67-09cb-44a1-a535-24a11ef68948 req-1ee48bb8-465f-47d0-84a6-f36f94ec11a0 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Updating instance_info_cache with network_info: [{"id": "92566cef-01e0-4398-bbab-0b7049af2e6b", "address": "fa:16:3e:ec:63:6e", "network": {"id": "d9057d7e-a146-4d5d-b454-162ed672215e", "bridge": "br-int", "label": "tempest-network-smoke--493824106", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "014032eeba1145f99481402acd561743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92566cef-01", "ovs_interfaceid": "92566cef-01e0-4398-bbab-0b7049af2e6b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 19:02:23 compute-0 nova_compute[348325]: 2025-12-03 19:02:23.464 348329 DEBUG oslo_concurrency.lockutils [req-82e3cb67-09cb-44a1-a535-24a11ef68948 req-1ee48bb8-465f-47d0-84a6-f36f94ec11a0 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Releasing lock "refresh_cache-3bb34e64-ac61-46f3-99eb-2fdd346a8ecc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
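[annotation] The network-changed event handler takes the refresh_cache lock, rebuilds the port's view from neutron, writes it back with update_instance_cache_with_nw_info, and releases the lock roughly four seconds after the event arrived at 19:02:19.498. The cache entry is plain JSON; pulling the MAC and fixed IPs out of an entry shaped like the one logged at 19:02:23.441:

    # Trimmed to the fields used below; the shape follows the
    # instance_info_cache entry logged above.
    vif = {
        "id": "92566cef-01e0-4398-bbab-0b7049af2e6b",
        "address": "fa:16:3e:ec:63:6e",
        "network": {
            "subnets": [{
                "cidr": "10.100.0.0/28",
                "ips": [{"address": "10.100.0.8", "type": "fixed"}],
            }],
        },
    }

    fixed_ips = [ip["address"]
                 for subnet in vif["network"]["subnets"]
                 for ip in subnet["ips"]
                 if ip["type"] == "fixed"]
    print(vif["address"], fixed_ips)   # fa:16:3e:ec:63:6e ['10.100.0.8']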
Dec  3 19:02:23 compute-0 nova_compute[348325]: 2025-12-03 19:02:23.514 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:02:23 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1905: 321 pgs: 321 active+clean; 236 MiB data, 384 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 26 KiB/s wr, 3 op/s
Dec  3 19:02:24 compute-0 nova_compute[348325]: 2025-12-03 19:02:24.290 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:02:24 compute-0 nova_compute[348325]: 2025-12-03 19:02:24.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:02:24 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:02:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 19:02:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:02:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 19:02:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:02:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001516765562935469 of space, bias 1.0, pg target 0.4550296688806407 quantized to 32 (current 32)
Dec  3 19:02:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:02:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:02:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:02:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:02:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:02:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Dec  3 19:02:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:02:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 19:02:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:02:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:02:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:02:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 19:02:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:02:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 19:02:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:02:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:02:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:02:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
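[annotation] Each _maybe_adjust line above is the same arithmetic: the pool's share of raw capacity, times its bias, times a cluster PG budget, then quantized to a power of two (here no pool moves off its current pg_num, except cephfs.cephfs.meta whose ideal drops to 16). The budget that reproduces these numbers is 300, which would fit mon_target_pg_per_osd=100 across 3 OSDs; that split is an inference, since neither value appears in the log:

    BUDGET = 300  # assumed: mon_target_pg_per_osd (100) * 3 OSDs

    # Pool 'vms': used fraction and bias exactly as logged
    print(0.001516765562935469 * 1.0 * BUDGET)   # 0.4550296688806407 -> 32
    # Pool 'cephfs.cephfs.meta': bias 4.0
    print(5.087256625643029e-07 * 4.0 * BUDGET)  # 0.0006104707950771635 -> 16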
Dec  3 19:02:25 compute-0 nova_compute[348325]: 2025-12-03 19:02:25.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:02:25 compute-0 nova_compute[348325]: 2025-12-03 19:02:25.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:02:25 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1906: 321 pgs: 321 active+clean; 236 MiB data, 384 MiB used, 60 GiB / 60 GiB avail; 7.3 KiB/s rd, 16 KiB/s wr, 1 op/s
Dec  3 19:02:26 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 19:02:26 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.0 total, 600.0 interval#012Cumulative writes: 8645 writes, 39K keys, 8645 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.01 MB/s#012Cumulative WAL: 8645 writes, 8645 syncs, 1.00 writes per sync, written: 0.05 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1360 writes, 6636 keys, 1360 commit groups, 1.0 writes per commit group, ingest: 8.70 MB, 0.01 MB/s#012Interval WAL: 1360 writes, 1360 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    100.5      0.48              0.23        26    0.019       0      0       0.0       0.0#012  L6      1/0    7.38 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   3.9    121.6    100.1      1.91              0.74        25    0.076    129K    13K       0.0       0.0#012 Sum      1/0    7.38 MB   0.0      0.2     0.0      0.2       0.2      0.1       0.0   4.9     97.1    100.2      2.39              0.97        51    0.047    129K    13K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.6    128.3    126.2      0.55              0.26        14    0.039     42K   3596       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0    121.6    100.1      1.91              0.74        25    0.076    129K    13K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    102.0      0.47              0.23        25    0.019       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      6.8      0.01              0.00         1    0.008       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 3600.0 total, 600.0 interval#012Flush(GB): cumulative 0.047, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.23 GB write, 0.07 MB/s write, 0.23 GB read, 0.06 MB/s read, 2.4 seconds#012Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.5 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55911062f1f0#2 capacity: 304.00 MB usage: 25.35 MB table_size: 0 occupancy: 18446744073709551615 collections: 7 last_copies: 0 last_secs: 0.000145 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1633,24.37 MB,8.01722%) FilterBlock(52,368.23 KB,0.118291%) IndexBlock(52,630.11 KB,0.202415%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
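[annotation] The stats dump above is a single journald record: rsyslog-style escaping turns each embedded newline into #012 (octal 012 = \n), and the #033[00m tails on the nova_compute lines are the same treatment applied to ANSI reset sequences. Undoing the escaping restores the original multi-line table; a sketch, assuming this #NNN three-digit-octal convention:

    import re

    def unescape_syslog(text):
        # rsyslog escapes control characters as #NNN (three octal digits)
        return re.sub(r"#([0-7]{3})",
                      lambda m: chr(int(m.group(1), 8)), text)

    print(unescape_syslog(
        "** DB Stats **#012Uptime(secs): 3600.0 total, 600.0 interval"))
    print(repr(unescape_syslog("Get console output#033[00m")))  # ends '\x1b[00m'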
Dec  3 19:02:27 compute-0 podman[451191]: 2025-12-03 19:02:26.999864128 +0000 UTC m=+0.128567308 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Dec  3 19:02:27 compute-0 podman[451190]: 2025-12-03 19:02:27.000265737 +0000 UTC m=+0.138631843 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 19:02:27 compute-0 podman[451189]: 2025-12-03 19:02:27.015349206 +0000 UTC m=+0.163577863 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., release=1214.1726694543, container_name=kepler, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, build-date=2024-09-18T21:23:30, distribution-scope=public, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, io.openshift.expose-services=, name=ubi9, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4)
Dec  3 19:02:27 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1907: 321 pgs: 321 active+clean; 236 MiB data, 384 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s rd, 5.0 KiB/s wr, 1 op/s
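The ceph-mgr pgmap lines repeat with a fixed shape throughout this log; a small sketch that parses them with a regex inferred from the lines themselves, not from any ceph format spec:

    import re

    # Pattern inferred from the "pgmap vN: ..." lines in this log.
    PGMAP = re.compile(
        r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: .*?; "
        r"(?P<data>[\d.]+ \w+) data, (?P<used>[\d.]+ \w+) used, "
        r"(?P<avail>[\d.]+ \w+) / (?P<total>[\d.]+ \w+) avail")

    line = ("pgmap v1907: 321 pgs: 321 active+clean; 236 MiB data, "
            "384 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s rd, 5.0 KiB/s wr, 1 op/s")
    print(PGMAP.search(line).groupdict())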
Dec  3 19:02:28 compute-0 nova_compute[348325]: 2025-12-03 19:02:28.517 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:02:29 compute-0 nova_compute[348325]: 2025-12-03 19:02:29.295 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:02:29 compute-0 nova_compute[348325]: 2025-12-03 19:02:29.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:02:29 compute-0 nova_compute[348325]: 2025-12-03 19:02:29.488 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 19:02:29 compute-0 nova_compute[348325]: 2025-12-03 19:02:29.488 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  3 19:02:29 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:02:29 compute-0 nova_compute[348325]: 2025-12-03 19:02:29.716 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "refresh_cache-a4fc45c7-44e4-4b50-a3e0-98de13268f88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 19:02:29 compute-0 nova_compute[348325]: 2025-12-03 19:02:29.717 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquired lock "refresh_cache-a4fc45c7-44e4-4b50-a3e0-98de13268f88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 19:02:29 compute-0 nova_compute[348325]: 2025-12-03 19:02:29.718 348329 DEBUG nova.network.neutron [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  3 19:02:29 compute-0 nova_compute[348325]: 2025-12-03 19:02:29.719 348329 DEBUG nova.objects.instance [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lazy-loading 'info_cache' on Instance uuid a4fc45c7-44e4-4b50-a3e0-98de13268f88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
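The Acquiring/Acquired/Releasing triplets around "refresh_cache-<uuid>" are oslo.concurrency's named-lock primitive at work. A minimal sketch of the same primitive, using the lock name logged above; the body is illustrative, not nova's code:

    from oslo_concurrency import lockutils

    # Same primitive behind the "Acquiring lock / Acquired lock / Releasing
    # lock" lines above: a named, process-local lock (external=False default).
    with lockutils.lock("refresh_cache-a4fc45c7-44e4-4b50-a3e0-98de13268f88"):
        # refresh the instance's network info cache here
        pass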
Dec  3 19:02:29 compute-0 podman[158200]: time="2025-12-03T19:02:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 19:02:29 compute-0 podman[158200]: @ - - [03/Dec/2025:19:02:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45044 "" "Go-http-client/1.1"
Dec  3 19:02:29 compute-0 podman[158200]: @ - - [03/Dec/2025:19:02:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9122 "" "Go-http-client/1.1"
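Those two GET requests are the libpod REST API being polled over podman's unix socket (the caller identifies itself as Go-http-client). A stdlib-only sketch of the first query; the socket path is an assumption (rootful podman's usual default):

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection over an AF_UNIX socket instead of TCP."""
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path
        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.unix_path)

    # Assumed socket path; version prefix v4.9.3 comes from the logged URL.
    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")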
Dec  3 19:02:29 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1908: 321 pgs: 321 active+clean; 236 MiB data, 384 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s rd, 4.7 KiB/s wr, 1 op/s
Dec  3 19:02:30 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:02:30.220 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1ac9fd0d-196b-4ea8-9a9a-8aa831092805, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
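The metadata agent's transaction is ovsdbapp's generic DbSetCommand against the OVN Southbound Chassis_Private table. A sketch of the same call through ovsdbapp's public API; the southbound endpoint is an assumption, and the impl class name should be checked against the installed ovsdbapp version:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.ovn_southbound import impl_idl

    # Assumed endpoint; the agent reads its own from configuration.
    SB_CONNECTION = "tcp:127.0.0.1:6642"

    idl = connection.OvsdbIdl.from_server(SB_CONNECTION, "OVN_Southbound")
    api = impl_idl.OvnSbApiIdlImpl(connection.Connection(idl, timeout=60))

    # Same shape as the DbSetCommand logged above.
    api.db_set(
        "Chassis_Private",
        "1ac9fd0d-196b-4ea8-9a9a-8aa831092805",
        ("external_ids", {"neutron:ovn-metadata-sb-cfg": "17"}),
    ).execute(check_error=True)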
Dec  3 19:02:30 compute-0 nova_compute[348325]: 2025-12-03 19:02:30.243 348329 DEBUG oslo_concurrency.lockutils [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Acquiring lock "fd1bf28c-ce00-44df-b134-5fa073e2246d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 19:02:30 compute-0 nova_compute[348325]: 2025-12-03 19:02:30.244 348329 DEBUG oslo_concurrency.lockutils [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Lock "fd1bf28c-ce00-44df-b134-5fa073e2246d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 19:02:30 compute-0 nova_compute[348325]: 2025-12-03 19:02:30.267 348329 DEBUG nova.compute.manager [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  3 19:02:30 compute-0 nova_compute[348325]: 2025-12-03 19:02:30.378 348329 DEBUG oslo_concurrency.lockutils [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 19:02:30 compute-0 nova_compute[348325]: 2025-12-03 19:02:30.380 348329 DEBUG oslo_concurrency.lockutils [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 19:02:30 compute-0 nova_compute[348325]: 2025-12-03 19:02:30.393 348329 DEBUG nova.virt.hardware [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  3 19:02:30 compute-0 nova_compute[348325]: 2025-12-03 19:02:30.394 348329 INFO nova.compute.claims [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  3 19:02:30 compute-0 nova_compute[348325]: 2025-12-03 19:02:30.485 348329 DEBUG nova.scheduler.client.report [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Refreshing inventories for resource provider 00cd1895-22aa-49c6-bdb2-0991af662704 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  3 19:02:30 compute-0 nova_compute[348325]: 2025-12-03 19:02:30.516 348329 DEBUG nova.scheduler.client.report [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Updating ProviderTree inventory for provider 00cd1895-22aa-49c6-bdb2-0991af662704 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  3 19:02:30 compute-0 nova_compute[348325]: 2025-12-03 19:02:30.518 348329 DEBUG nova.compute.provider_tree [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Updating inventory in ProviderTree for provider 00cd1895-22aa-49c6-bdb2-0991af662704 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  3 19:02:30 compute-0 nova_compute[348325]: 2025-12-03 19:02:30.543 348329 DEBUG nova.scheduler.client.report [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Refreshing aggregate associations for resource provider 00cd1895-22aa-49c6-bdb2-0991af662704, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  3 19:02:30 compute-0 nova_compute[348325]: 2025-12-03 19:02:30.573 348329 DEBUG nova.scheduler.client.report [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Refreshing trait associations for resource provider 00cd1895-22aa-49c6-bdb2-0991af662704, traits: COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_FMA3,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AESNI,HW_CPU_X86_AMD_SVM,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SVM,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_ABM,HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_BMI,HW_CPU_X86_SHA,COMPUTE_NODE,HW_CPU_X86_SSE42,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE4A,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,HW_CPU_X86_AVX2,COMPUTE_ACCELERATORS,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ARI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
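The inventory dict logged for provider 00cd1895-22aa-49c6-bdb2-0991af662704 fixes the schedulable capacity: placement computes it per resource class as (total - reserved) * allocation_ratio. A worked check against the logged values:

    # Placement capacity rule: capacity = (total - reserved) * allocation_ratio.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)
    # -> VCPU 32.0, MEMORY_MB 7167.0, DISK_GB ~52.2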
Dec  3 19:02:30 compute-0 nova_compute[348325]: 2025-12-03 19:02:30.671 348329 DEBUG oslo_concurrency.processutils [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 19:02:31 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 19:02:31 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3918784040' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 19:02:31 compute-0 nova_compute[348325]: 2025-12-03 19:02:31.159 348329 DEBUG oslo_concurrency.processutils [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
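Nova gathers pool capacity here by shelling out to ceph df rather than going through librados. A sketch reproducing the logged command and reading the result; the client id and conf path are the logged ones, and the JSON keys shown are the usual "ceph df -f json" layout, worth verifying against your ceph release:

    import json
    import subprocess

    # Exactly the command logged above.
    out = subprocess.run(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True).stdout

    df = json.loads(out)
    print(df["stats"]["total_avail_bytes"])      # cluster-wide free bytes
    for pool in df["pools"]:
        print(pool["name"], pool["stats"]["bytes_used"])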
Dec  3 19:02:31 compute-0 nova_compute[348325]: 2025-12-03 19:02:31.173 348329 DEBUG nova.compute.provider_tree [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 19:02:31 compute-0 nova_compute[348325]: 2025-12-03 19:02:31.200 348329 DEBUG nova.scheduler.client.report [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 19:02:31 compute-0 nova_compute[348325]: 2025-12-03 19:02:31.238 348329 DEBUG oslo_concurrency.lockutils [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.858s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 19:02:31 compute-0 nova_compute[348325]: 2025-12-03 19:02:31.239 348329 DEBUG nova.compute.manager [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  3 19:02:31 compute-0 nova_compute[348325]: 2025-12-03 19:02:31.311 348329 DEBUG nova.compute.manager [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  3 19:02:31 compute-0 nova_compute[348325]: 2025-12-03 19:02:31.311 348329 DEBUG nova.network.neutron [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  3 19:02:31 compute-0 nova_compute[348325]: 2025-12-03 19:02:31.333 348329 INFO nova.virt.libvirt.driver [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  3 19:02:31 compute-0 nova_compute[348325]: 2025-12-03 19:02:31.355 348329 DEBUG nova.compute.manager [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  3 19:02:31 compute-0 openstack_network_exporter[365222]: ERROR   19:02:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:02:31 compute-0 openstack_network_exporter[365222]: ERROR   19:02:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:02:31 compute-0 openstack_network_exporter[365222]: ERROR   19:02:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 19:02:31 compute-0 openstack_network_exporter[365222]: ERROR   19:02:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 19:02:31 compute-0 openstack_network_exporter[365222]: ERROR   19:02:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 19:02:31 compute-0 nova_compute[348325]: 2025-12-03 19:02:31.478 348329 DEBUG nova.compute.manager [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  3 19:02:31 compute-0 nova_compute[348325]: 2025-12-03 19:02:31.480 348329 DEBUG nova.virt.libvirt.driver [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  3 19:02:31 compute-0 nova_compute[348325]: 2025-12-03 19:02:31.482 348329 INFO nova.virt.libvirt.driver [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] Creating image(s)#033[00m
Dec  3 19:02:31 compute-0 nova_compute[348325]: 2025-12-03 19:02:31.540 348329 DEBUG nova.storage.rbd_utils [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] rbd image fd1bf28c-ce00-44df-b134-5fa073e2246d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 19:02:31 compute-0 nova_compute[348325]: 2025-12-03 19:02:31.601 348329 DEBUG nova.storage.rbd_utils [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] rbd image fd1bf28c-ce00-44df-b134-5fa073e2246d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 19:02:31 compute-0 nova_compute[348325]: 2025-12-03 19:02:31.653 348329 DEBUG nova.storage.rbd_utils [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] rbd image fd1bf28c-ce00-44df-b134-5fa073e2246d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 19:02:31 compute-0 nova_compute[348325]: 2025-12-03 19:02:31.662 348329 DEBUG oslo_concurrency.processutils [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5cd3db9bb272569bd3ad2bd1318028e61915b864 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 19:02:31 compute-0 nova_compute[348325]: 2025-12-03 19:02:31.691 348329 DEBUG nova.network.neutron [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Updating instance_info_cache with network_info: [{"id": "cf729fa8-9549-4bf2-9858-7e8de773e1bc", "address": "fa:16:3e:8d:91:4c", "network": {"id": "04e258c0-609e-4010-a306-af20506c3a9d", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.160", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d29cef7b24ee4d30b2b3f5027ec6aafb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf729fa8-95", "ovs_interfaceid": "cf729fa8-9549-4bf2-9858-7e8de773e1bc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 19:02:31 compute-0 nova_compute[348325]: 2025-12-03 19:02:31.695 348329 DEBUG nova.policy [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8fabb3dd3b1c42b491c99a1274242f68', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '014032eeba1145f99481402acd561743', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  3 19:02:31 compute-0 nova_compute[348325]: 2025-12-03 19:02:31.713 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Releasing lock "refresh_cache-a4fc45c7-44e4-4b50-a3e0-98de13268f88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 19:02:31 compute-0 nova_compute[348325]: 2025-12-03 19:02:31.713 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  3 19:02:31 compute-0 nova_compute[348325]: 2025-12-03 19:02:31.747 348329 DEBUG oslo_concurrency.processutils [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5cd3db9bb272569bd3ad2bd1318028e61915b864 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
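The qemu-img probe above runs under oslo.concurrency's prlimit wrapper, capping address space at 1 GiB and CPU time at 30 s. A sketch of the same call through processutils, with the limits copied from the logged command line:

    from oslo_concurrency import processutils

    # Caps match the logged wrapper: --as=1073741824 --cpu=30.
    limits = processutils.ProcessLimits(address_space=1024 * 1024 * 1024,
                                        cpu_time=30)
    out, _err = processutils.execute(
        "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info",
        "/var/lib/nova/instances/_base/5cd3db9bb272569bd3ad2bd1318028e61915b864",
        "--force-share", "--output=json",
        prlimit=limits)
    print(out)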
Dec  3 19:02:31 compute-0 nova_compute[348325]: 2025-12-03 19:02:31.748 348329 DEBUG oslo_concurrency.lockutils [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Acquiring lock "5cd3db9bb272569bd3ad2bd1318028e61915b864" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 19:02:31 compute-0 nova_compute[348325]: 2025-12-03 19:02:31.749 348329 DEBUG oslo_concurrency.lockutils [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Lock "5cd3db9bb272569bd3ad2bd1318028e61915b864" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 19:02:31 compute-0 nova_compute[348325]: 2025-12-03 19:02:31.749 348329 DEBUG oslo_concurrency.lockutils [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Lock "5cd3db9bb272569bd3ad2bd1318028e61915b864" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 19:02:31 compute-0 nova_compute[348325]: 2025-12-03 19:02:31.797 348329 DEBUG nova.storage.rbd_utils [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] rbd image fd1bf28c-ce00-44df-b134-5fa073e2246d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 19:02:31 compute-0 nova_compute[348325]: 2025-12-03 19:02:31.821 348329 DEBUG oslo_concurrency.processutils [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/5cd3db9bb272569bd3ad2bd1318028e61915b864 fd1bf28c-ce00-44df-b134-5fa073e2246d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 19:02:31 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1909: 321 pgs: 321 active+clean; 236 MiB data, 384 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s rd, 4.7 KiB/s wr, 1 op/s
Dec  3 19:02:31 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #93. Immutable memtables: 0.
Dec  3 19:02:31 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:02:31.985625) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 19:02:31 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 53] Flushing memtable with next log file: 93
Dec  3 19:02:31 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764788551985740, "job": 53, "event": "flush_started", "num_memtables": 1, "num_entries": 389, "num_deletes": 251, "total_data_size": 265339, "memory_usage": 272648, "flush_reason": "Manual Compaction"}
Dec  3 19:02:31 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 53] Level-0 flush table #94: started
Dec  3 19:02:31 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764788551992039, "cf_name": "default", "job": 53, "event": "table_file_creation", "file_number": 94, "file_size": 262974, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 39275, "largest_seqno": 39663, "table_properties": {"data_size": 260643, "index_size": 495, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5699, "raw_average_key_size": 18, "raw_value_size": 256057, "raw_average_value_size": 834, "num_data_blocks": 22, "num_entries": 307, "num_filter_entries": 307, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764788534, "oldest_key_time": 1764788534, "file_creation_time": 1764788551, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a1ac3b74-8599-4a51-8b4c-6fd35a134427", "db_session_id": "TYOLZSJOOVNJYKF8Y1CE", "orig_file_number": 94, "seqno_to_time_mapping": "N/A"}}
Dec  3 19:02:31 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 53] Flush lasted 6535 microseconds, and 2983 cpu microseconds.
Dec  3 19:02:31 compute-0 ceph-mon[192802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 19:02:31 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:02:31.992153) [db/flush_job.cc:967] [default] [JOB 53] Level-0 flush table #94: 262974 bytes OK
Dec  3 19:02:31 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:02:31.992182) [db/memtable_list.cc:519] [default] Level-0 commit table #94 started
Dec  3 19:02:31 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:02:31.995916) [db/memtable_list.cc:722] [default] Level-0 commit table #94: memtable #1 done
Dec  3 19:02:31 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:02:31.995942) EVENT_LOG_v1 {"time_micros": 1764788551995936, "job": 53, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 19:02:31 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:02:31.995961) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 19:02:31 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 53] Try to delete WAL files size 262845, prev total WAL file size 262845, number of live WAL files 2.
Dec  3 19:02:31 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000090.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 19:02:31 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:02:31.996750) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033353134' seq:72057594037927935, type:22 .. '7061786F730033373636' seq:0, type:0; will stop at (end)
Dec  3 19:02:31 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 54] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 19:02:31 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 53 Base level 0, inputs: [94(256KB)], [92(7560KB)]
Dec  3 19:02:31 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764788551996805, "job": 54, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [94], "files_L6": [92], "score": -1, "input_data_size": 8004694, "oldest_snapshot_seqno": -1}
Dec  3 19:02:32 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 54] Generated table #95: 5373 keys, 6280461 bytes, temperature: kUnknown
Dec  3 19:02:32 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764788552049689, "cf_name": "default", "job": 54, "event": "table_file_creation", "file_number": 95, "file_size": 6280461, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6247980, "index_size": 17971, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13445, "raw_key_size": 140767, "raw_average_key_size": 26, "raw_value_size": 6153845, "raw_average_value_size": 1145, "num_data_blocks": 709, "num_entries": 5373, "num_filter_entries": 5373, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764784942, "oldest_key_time": 0, "file_creation_time": 1764788551, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a1ac3b74-8599-4a51-8b4c-6fd35a134427", "db_session_id": "TYOLZSJOOVNJYKF8Y1CE", "orig_file_number": 95, "seqno_to_time_mapping": "N/A"}}
Dec  3 19:02:32 compute-0 ceph-mon[192802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 19:02:32 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:02:32.050058) [db/compaction/compaction_job.cc:1663] [default] [JOB 54] Compacted 1@0 + 1@6 files to L6 => 6280461 bytes
Dec  3 19:02:32 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:02:32.052179) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 150.6 rd, 118.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 7.4 +0.0 blob) out(6.0 +0.0 blob), read-write-amplify(54.3) write-amplify(23.9) OK, records in: 5882, records dropped: 509 output_compression: NoCompression
Dec  3 19:02:32 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:02:32.052202) EVENT_LOG_v1 {"time_micros": 1764788552052188, "job": 54, "event": "compaction_finished", "compaction_time_micros": 53135, "compaction_time_cpu_micros": 21360, "output_level": 6, "num_output_files": 1, "total_output_size": 6280461, "num_input_records": 5882, "num_output_records": 5373, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 19:02:32 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000094.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 19:02:32 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764788552053216, "job": 54, "event": "table_file_deletion", "file_number": 94}
Dec  3 19:02:32 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000092.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 19:02:32 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764788552054904, "job": 54, "event": "table_file_deletion", "file_number": 92}
Dec  3 19:02:32 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:02:31.996563) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:02:32 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:02:32.055282) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:02:32 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:02:32.055287) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:02:32 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:02:32.055289) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:02:32 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:02:32.055290) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:02:32 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:02:32.055292) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
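The ceph-mon's embedded RocksDB interleaves human-readable lines with machine-readable EVENT_LOG_v1 JSON records (flush_started, table_file_creation, compaction_finished above). A sketch that pulls those payloads out of a journal extract like this one:

    import json
    import re

    EVENT = re.compile(r"EVENT_LOG_v1 (\{.*\})")

    def rocksdb_events(lines):
        """Yield parsed EVENT_LOG_v1 payloads from ceph-mon log lines."""
        for line in lines:
            m = EVENT.search(line)
            if m:
                yield json.loads(m.group(1))

    sample = ('rocksdb: EVENT_LOG_v1 {"time_micros": 1764788552052188, '
              '"job": 54, "event": "compaction_finished", '
              '"compaction_time_micros": 53135}')
    for ev in rocksdb_events([sample]):
        print(ev["event"], ev["compaction_time_micros"])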
Dec  3 19:02:32 compute-0 nova_compute[348325]: 2025-12-03 19:02:32.253 348329 DEBUG oslo_concurrency.processutils [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/5cd3db9bb272569bd3ad2bd1318028e61915b864 fd1bf28c-ce00-44df-b134-5fa073e2246d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 19:02:32 compute-0 nova_compute[348325]: 2025-12-03 19:02:32.381 348329 DEBUG nova.storage.rbd_utils [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] resizing rbd image fd1bf28c-ce00-44df-b134-5fa073e2246d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
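The root disk lands in Ceph in two steps: an rbd import of the cached base image into the vms pool, then a resize up to the flavor's 1 GiB root disk. A sketch of the same two steps via the rbd CLI; the flags mirror the logged commands, and "1G" matches the logged 1073741824 bytes:

    import subprocess

    BASE = "/var/lib/nova/instances/_base/5cd3db9bb272569bd3ad2bd1318028e61915b864"
    DISK = "fd1bf28c-ce00-44df-b134-5fa073e2246d_disk"
    AUTH = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

    # Step 1: import the flattened base image into the 'vms' pool (as logged).
    subprocess.run(["rbd", "import", "--pool", "vms", BASE, DISK,
                    "--image-format=2"] + AUTH, check=True)

    # Step 2: grow it to the flavor root disk (1 GiB).
    subprocess.run(["rbd", "resize", "--pool", "vms", DISK,
                    "--size", "1G"] + AUTH, check=True)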
Dec  3 19:02:32 compute-0 nova_compute[348325]: 2025-12-03 19:02:32.579 348329 DEBUG nova.objects.instance [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Lazy-loading 'migration_context' on Instance uuid fd1bf28c-ce00-44df-b134-5fa073e2246d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 19:02:32 compute-0 nova_compute[348325]: 2025-12-03 19:02:32.597 348329 DEBUG nova.virt.libvirt.driver [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  3 19:02:32 compute-0 nova_compute[348325]: 2025-12-03 19:02:32.598 348329 DEBUG nova.virt.libvirt.driver [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] Ensure instance console log exists: /var/lib/nova/instances/fd1bf28c-ce00-44df-b134-5fa073e2246d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  3 19:02:32 compute-0 nova_compute[348325]: 2025-12-03 19:02:32.598 348329 DEBUG oslo_concurrency.lockutils [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 19:02:32 compute-0 nova_compute[348325]: 2025-12-03 19:02:32.601 348329 DEBUG oslo_concurrency.lockutils [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 19:02:32 compute-0 nova_compute[348325]: 2025-12-03 19:02:32.601 348329 DEBUG oslo_concurrency.lockutils [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 19:02:33 compute-0 nova_compute[348325]: 2025-12-03 19:02:33.154 348329 DEBUG nova.network.neutron [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] Successfully created port: 25216d9c-b16b-4d38-af2c-044877eecdba _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  3 19:02:33 compute-0 nova_compute[348325]: 2025-12-03 19:02:33.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:02:33 compute-0 nova_compute[348325]: 2025-12-03 19:02:33.487 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 19:02:33 compute-0 nova_compute[348325]: 2025-12-03 19:02:33.519 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:02:33 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1910: 321 pgs: 321 active+clean; 245 MiB data, 384 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 203 KiB/s wr, 1 op/s
Dec  3 19:02:34 compute-0 nova_compute[348325]: 2025-12-03 19:02:34.245 348329 DEBUG nova.network.neutron [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] Successfully updated port: 25216d9c-b16b-4d38-af2c-044877eecdba _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  3 19:02:34 compute-0 nova_compute[348325]: 2025-12-03 19:02:34.260 348329 DEBUG oslo_concurrency.lockutils [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Acquiring lock "refresh_cache-fd1bf28c-ce00-44df-b134-5fa073e2246d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 19:02:34 compute-0 nova_compute[348325]: 2025-12-03 19:02:34.260 348329 DEBUG oslo_concurrency.lockutils [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Acquired lock "refresh_cache-fd1bf28c-ce00-44df-b134-5fa073e2246d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 19:02:34 compute-0 nova_compute[348325]: 2025-12-03 19:02:34.261 348329 DEBUG nova.network.neutron [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  3 19:02:34 compute-0 nova_compute[348325]: 2025-12-03 19:02:34.298 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:02:34 compute-0 nova_compute[348325]: 2025-12-03 19:02:34.339 348329 DEBUG nova.compute.manager [req-274933ce-e406-4cf0-ad31-54dd0bd7096e req-6632c699-132c-4162-9a72-c89398b507ca 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] Received event network-changed-25216d9c-b16b-4d38-af2c-044877eecdba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 19:02:34 compute-0 nova_compute[348325]: 2025-12-03 19:02:34.340 348329 DEBUG nova.compute.manager [req-274933ce-e406-4cf0-ad31-54dd0bd7096e req-6632c699-132c-4162-9a72-c89398b507ca 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] Refreshing instance network info cache due to event network-changed-25216d9c-b16b-4d38-af2c-044877eecdba. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  3 19:02:34 compute-0 nova_compute[348325]: 2025-12-03 19:02:34.340 348329 DEBUG oslo_concurrency.lockutils [req-274933ce-e406-4cf0-ad31-54dd0bd7096e req-6632c699-132c-4162-9a72-c89398b507ca 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "refresh_cache-fd1bf28c-ce00-44df-b134-5fa073e2246d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 19:02:34 compute-0 nova_compute[348325]: 2025-12-03 19:02:34.489 348329 DEBUG nova.network.neutron [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  3 19:02:34 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:02:35 compute-0 nova_compute[348325]: 2025-12-03 19:02:35.348 348329 DEBUG nova.network.neutron [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] Updating instance_info_cache with network_info: [{"id": "25216d9c-b16b-4d38-af2c-044877eecdba", "address": "fa:16:3e:7e:eb:8f", "network": {"id": "d9057d7e-a146-4d5d-b454-162ed672215e", "bridge": "br-int", "label": "tempest-network-smoke--493824106", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "014032eeba1145f99481402acd561743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25216d9c-b1", "ovs_interfaceid": "25216d9c-b16b-4d38-af2c-044877eecdba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 19:02:35 compute-0 nova_compute[348325]: 2025-12-03 19:02:35.593 348329 DEBUG oslo_concurrency.lockutils [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Releasing lock "refresh_cache-fd1bf28c-ce00-44df-b134-5fa073e2246d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 19:02:35 compute-0 nova_compute[348325]: 2025-12-03 19:02:35.594 348329 DEBUG nova.compute.manager [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] Instance network_info: |[{"id": "25216d9c-b16b-4d38-af2c-044877eecdba", "address": "fa:16:3e:7e:eb:8f", "network": {"id": "d9057d7e-a146-4d5d-b454-162ed672215e", "bridge": "br-int", "label": "tempest-network-smoke--493824106", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "014032eeba1145f99481402acd561743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25216d9c-b1", "ovs_interfaceid": "25216d9c-b16b-4d38-af2c-044877eecdba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
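The network_info blob nova caches is deeply nested but regular. A sketch that walks it to list each port's fixed IPs, with the structure reduced to just the fields used (shape taken from the entries logged above):

    # Walk a nova network_info list (shape as logged above) to its fixed IPs.
    def fixed_ips(network_info):
        for vif in network_info:
            for subnet in vif["network"]["subnets"]:
                for ip in subnet["ips"]:
                    if ip["type"] == "fixed":
                        yield vif["id"], ip["address"]

    network_info = [{
        "id": "25216d9c-b16b-4d38-af2c-044877eecdba",
        "network": {"subnets": [{"ips": [
            {"address": "10.100.0.7", "type": "fixed"}]}]},
    }]
    print(list(fixed_ips(network_info)))
    # [('25216d9c-b16b-4d38-af2c-044877eecdba', '10.100.0.7')]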
Dec  3 19:02:35 compute-0 nova_compute[348325]: 2025-12-03 19:02:35.595 348329 DEBUG oslo_concurrency.lockutils [req-274933ce-e406-4cf0-ad31-54dd0bd7096e req-6632c699-132c-4162-9a72-c89398b507ca 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquired lock "refresh_cache-fd1bf28c-ce00-44df-b134-5fa073e2246d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 19:02:35 compute-0 nova_compute[348325]: 2025-12-03 19:02:35.595 348329 DEBUG nova.network.neutron [req-274933ce-e406-4cf0-ad31-54dd0bd7096e req-6632c699-132c-4162-9a72-c89398b507ca 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] Refreshing network info cache for port 25216d9c-b16b-4d38-af2c-044877eecdba _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  3 19:02:35 compute-0 nova_compute[348325]: 2025-12-03 19:02:35.598 348329 DEBUG nova.virt.libvirt.driver [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] Start _get_guest_xml network_info=[{"id": "25216d9c-b16b-4d38-af2c-044877eecdba", "address": "fa:16:3e:7e:eb:8f", "network": {"id": "d9057d7e-a146-4d5d-b454-162ed672215e", "bridge": "br-int", "label": "tempest-network-smoke--493824106", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "014032eeba1145f99481402acd561743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25216d9c-b1", "ovs_interfaceid": "25216d9c-b16b-4d38-af2c-044877eecdba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-03T18:56:32Z,direct_url=<?>,disk_format='qcow2',id=55982930-937b-484e-96ee-69e406a48023,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d2770200bdb2436c90142fa2e5ddcd47',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-03T18:56:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'encryption_format': None, 'guest_format': None, 'disk_bus': 'virtio', 'size': 0, 'boot_index': 0, 'encryption_options': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'image_id': '55982930-937b-484e-96ee-69e406a48023'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  3 19:02:35 compute-0 nova_compute[348325]: 2025-12-03 19:02:35.609 348329 WARNING nova.virt.libvirt.driver [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 19:02:35 compute-0 nova_compute[348325]: 2025-12-03 19:02:35.617 348329 DEBUG nova.virt.libvirt.host [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  3 19:02:35 compute-0 nova_compute[348325]: 2025-12-03 19:02:35.617 348329 DEBUG nova.virt.libvirt.host [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  3 19:02:35 compute-0 nova_compute[348325]: 2025-12-03 19:02:35.623 348329 DEBUG nova.virt.libvirt.host [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  3 19:02:35 compute-0 nova_compute[348325]: 2025-12-03 19:02:35.624 348329 DEBUG nova.virt.libvirt.host [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  3 19:02:35 compute-0 nova_compute[348325]: 2025-12-03 19:02:35.626 348329 DEBUG nova.virt.libvirt.driver [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  3 19:02:35 compute-0 nova_compute[348325]: 2025-12-03 19:02:35.626 348329 DEBUG nova.virt.hardware [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-03T18:56:30Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a94cfbfb-a20a-4689-ac91-e7436db75880',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-03T18:56:32Z,direct_url=<?>,disk_format='qcow2',id=55982930-937b-484e-96ee-69e406a48023,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d2770200bdb2436c90142fa2e5ddcd47',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-03T18:56:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  3 19:02:35 compute-0 nova_compute[348325]: 2025-12-03 19:02:35.627 348329 DEBUG nova.virt.hardware [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  3 19:02:35 compute-0 nova_compute[348325]: 2025-12-03 19:02:35.628 348329 DEBUG nova.virt.hardware [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  3 19:02:35 compute-0 nova_compute[348325]: 2025-12-03 19:02:35.628 348329 DEBUG nova.virt.hardware [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  3 19:02:35 compute-0 nova_compute[348325]: 2025-12-03 19:02:35.629 348329 DEBUG nova.virt.hardware [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  3 19:02:35 compute-0 nova_compute[348325]: 2025-12-03 19:02:35.629 348329 DEBUG nova.virt.hardware [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  3 19:02:35 compute-0 nova_compute[348325]: 2025-12-03 19:02:35.630 348329 DEBUG nova.virt.hardware [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  3 19:02:35 compute-0 nova_compute[348325]: 2025-12-03 19:02:35.630 348329 DEBUG nova.virt.hardware [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  3 19:02:35 compute-0 nova_compute[348325]: 2025-12-03 19:02:35.631 348329 DEBUG nova.virt.hardware [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  3 19:02:35 compute-0 nova_compute[348325]: 2025-12-03 19:02:35.631 348329 DEBUG nova.virt.hardware [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  3 19:02:35 compute-0 nova_compute[348325]: 2025-12-03 19:02:35.632 348329 DEBUG nova.virt.hardware [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
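The DEBUG lines above trace nova's CPU-topology selection for this m1.nano guest: with no flavor or image preferences (0:0:0) and the default maxima of 65536, the only topology whose product matches 1 vCPU is sockets=1, cores=1, threads=1. A minimal illustrative sketch of that enumeration (not nova's actual code; the helper name and defaults here are assumptions):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        """Yield every (sockets, cores, threads) triple whose product equals vcpus."""
        for sockets in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % sockets:
                continue
            per_socket = vcpus // sockets
            for cores in range(1, min(per_socket, max_cores) + 1):
                if per_socket % cores:
                    continue
                threads = per_socket // cores
                if threads <= max_threads:
                    yield (sockets, cores, threads)

    print(list(possible_topologies(1)))  # -> [(1, 1, 1)], matching "Got 1 possible topologies"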
Dec  3 19:02:35 compute-0 nova_compute[348325]: 2025-12-03 19:02:35.635 348329 DEBUG oslo_concurrency.processutils [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 19:02:35 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1911: 321 pgs: 321 active+clean; 267 MiB data, 394 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1018 KiB/s wr, 14 op/s
Dec  3 19:02:36 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 19:02:36 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3830006524' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 19:02:36 compute-0 nova_compute[348325]: 2025-12-03 19:02:36.109 348329 DEBUG oslo_concurrency.processutils [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 19:02:36 compute-0 nova_compute[348325]: 2025-12-03 19:02:36.147 348329 DEBUG nova.storage.rbd_utils [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] rbd image fd1bf28c-ce00-44df-b134-5fa073e2246d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 19:02:36 compute-0 nova_compute[348325]: 2025-12-03 19:02:36.156 348329 DEBUG oslo_concurrency.processutils [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 19:02:36 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 19:02:36 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3863172949' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 19:02:36 compute-0 nova_compute[348325]: 2025-12-03 19:02:36.589 348329 DEBUG oslo_concurrency.processutils [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
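nova shells out to the ceph CLI (via oslo_concurrency.processutils) to fetch the monitor map it needs for the RBD disk definitions in the guest XML further down. A hedged sketch of the same call using only the standard library, reusing the exact command from the log; the JSON field names ("mons", "name", "public_addr", "addr") are assumptions based on ceph's usual "mon dump" output:

    import json
    import subprocess

    out = subprocess.check_output([
        "ceph", "mon", "dump", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])
    mon_map = json.loads(out)
    for mon in mon_map.get("mons", []):
        # name and public address of each monitor, e.g. 192.168.122.100:6789
        print(mon.get("name"), mon.get("public_addr") or mon.get("addr"))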
Dec  3 19:02:36 compute-0 nova_compute[348325]: 2025-12-03 19:02:36.591 348329 DEBUG nova.virt.libvirt.vif [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T19:02:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-675437755',display_name='tempest-TestNetworkBasicOps-server-675437755',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-675437755',id=14,image_ref='55982930-937b-484e-96ee-69e406a48023',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBS8ezMjtIG805yQFgF4j/ECOsDiCMS7aCiCqdC+a8KK5knfknL/g2dlwag5/vklyq6F8zAud17+RXTTXhjSnt3CAAq1zM4H8s6/fatjZ3kNM4cGkNnON7zjRpS6sbgnrA==',key_name='tempest-TestNetworkBasicOps-1344632227',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='014032eeba1145f99481402acd561743',ramdisk_id='',reservation_id='r-aw0tghxg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='55982930-937b-484e-96ee-69e406a48023',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1083905166',owner_user_name='tempest-TestNetworkBasicOps-1083905166-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T19:02:31Z,user_data=None,user_id='8fabb3dd3b1c42b491c99a1274242f68',uuid=fd1bf28c-ce00-44df-b134-5fa073e2246d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "25216d9c-b16b-4d38-af2c-044877eecdba", "address": "fa:16:3e:7e:eb:8f", "network": {"id": "d9057d7e-a146-4d5d-b454-162ed672215e", "bridge": "br-int", "label": "tempest-network-smoke--493824106", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "014032eeba1145f99481402acd561743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25216d9c-b1", "ovs_interfaceid": "25216d9c-b16b-4d38-af2c-044877eecdba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  3 19:02:36 compute-0 nova_compute[348325]: 2025-12-03 19:02:36.592 348329 DEBUG nova.network.os_vif_util [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Converting VIF {"id": "25216d9c-b16b-4d38-af2c-044877eecdba", "address": "fa:16:3e:7e:eb:8f", "network": {"id": "d9057d7e-a146-4d5d-b454-162ed672215e", "bridge": "br-int", "label": "tempest-network-smoke--493824106", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "014032eeba1145f99481402acd561743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25216d9c-b1", "ovs_interfaceid": "25216d9c-b16b-4d38-af2c-044877eecdba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 19:02:36 compute-0 nova_compute[348325]: 2025-12-03 19:02:36.593 348329 DEBUG nova.network.os_vif_util [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7e:eb:8f,bridge_name='br-int',has_traffic_filtering=True,id=25216d9c-b16b-4d38-af2c-044877eecdba,network=Network(d9057d7e-a146-4d5d-b454-162ed672215e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25216d9c-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 19:02:36 compute-0 nova_compute[348325]: 2025-12-03 19:02:36.594 348329 DEBUG nova.objects.instance [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Lazy-loading 'pci_devices' on Instance uuid fd1bf28c-ce00-44df-b134-5fa073e2246d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 19:02:36 compute-0 nova_compute[348325]: 2025-12-03 19:02:36.617 348329 DEBUG nova.virt.libvirt.driver [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] End _get_guest_xml xml=<domain type="kvm">
Dec  3 19:02:36 compute-0 nova_compute[348325]:  <uuid>fd1bf28c-ce00-44df-b134-5fa073e2246d</uuid>
Dec  3 19:02:36 compute-0 nova_compute[348325]:  <name>instance-0000000e</name>
Dec  3 19:02:36 compute-0 nova_compute[348325]:  <memory>131072</memory>
Dec  3 19:02:36 compute-0 nova_compute[348325]:  <vcpu>1</vcpu>
Dec  3 19:02:36 compute-0 nova_compute[348325]:  <metadata>
Dec  3 19:02:36 compute-0 nova_compute[348325]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  3 19:02:36 compute-0 nova_compute[348325]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  3 19:02:36 compute-0 nova_compute[348325]:      <nova:name>tempest-TestNetworkBasicOps-server-675437755</nova:name>
Dec  3 19:02:36 compute-0 nova_compute[348325]:      <nova:creationTime>2025-12-03 19:02:35</nova:creationTime>
Dec  3 19:02:36 compute-0 nova_compute[348325]:      <nova:flavor name="m1.nano">
Dec  3 19:02:36 compute-0 nova_compute[348325]:        <nova:memory>128</nova:memory>
Dec  3 19:02:36 compute-0 nova_compute[348325]:        <nova:disk>1</nova:disk>
Dec  3 19:02:36 compute-0 nova_compute[348325]:        <nova:swap>0</nova:swap>
Dec  3 19:02:36 compute-0 nova_compute[348325]:        <nova:ephemeral>0</nova:ephemeral>
Dec  3 19:02:36 compute-0 nova_compute[348325]:        <nova:vcpus>1</nova:vcpus>
Dec  3 19:02:36 compute-0 nova_compute[348325]:      </nova:flavor>
Dec  3 19:02:36 compute-0 nova_compute[348325]:      <nova:owner>
Dec  3 19:02:36 compute-0 nova_compute[348325]:        <nova:user uuid="8fabb3dd3b1c42b491c99a1274242f68">tempest-TestNetworkBasicOps-1083905166-project-member</nova:user>
Dec  3 19:02:36 compute-0 nova_compute[348325]:        <nova:project uuid="014032eeba1145f99481402acd561743">tempest-TestNetworkBasicOps-1083905166</nova:project>
Dec  3 19:02:36 compute-0 nova_compute[348325]:      </nova:owner>
Dec  3 19:02:36 compute-0 nova_compute[348325]:      <nova:root type="image" uuid="55982930-937b-484e-96ee-69e406a48023"/>
Dec  3 19:02:36 compute-0 nova_compute[348325]:      <nova:ports>
Dec  3 19:02:36 compute-0 nova_compute[348325]:        <nova:port uuid="25216d9c-b16b-4d38-af2c-044877eecdba">
Dec  3 19:02:36 compute-0 nova_compute[348325]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec  3 19:02:36 compute-0 nova_compute[348325]:        </nova:port>
Dec  3 19:02:36 compute-0 nova_compute[348325]:      </nova:ports>
Dec  3 19:02:36 compute-0 nova_compute[348325]:    </nova:instance>
Dec  3 19:02:36 compute-0 nova_compute[348325]:  </metadata>
Dec  3 19:02:36 compute-0 nova_compute[348325]:  <sysinfo type="smbios">
Dec  3 19:02:36 compute-0 nova_compute[348325]:    <system>
Dec  3 19:02:36 compute-0 nova_compute[348325]:      <entry name="manufacturer">RDO</entry>
Dec  3 19:02:36 compute-0 nova_compute[348325]:      <entry name="product">OpenStack Compute</entry>
Dec  3 19:02:36 compute-0 nova_compute[348325]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  3 19:02:36 compute-0 nova_compute[348325]:      <entry name="serial">fd1bf28c-ce00-44df-b134-5fa073e2246d</entry>
Dec  3 19:02:36 compute-0 nova_compute[348325]:      <entry name="uuid">fd1bf28c-ce00-44df-b134-5fa073e2246d</entry>
Dec  3 19:02:36 compute-0 nova_compute[348325]:      <entry name="family">Virtual Machine</entry>
Dec  3 19:02:36 compute-0 nova_compute[348325]:    </system>
Dec  3 19:02:36 compute-0 nova_compute[348325]:  </sysinfo>
Dec  3 19:02:36 compute-0 nova_compute[348325]:  <os>
Dec  3 19:02:36 compute-0 nova_compute[348325]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  3 19:02:36 compute-0 nova_compute[348325]:    <boot dev="hd"/>
Dec  3 19:02:36 compute-0 nova_compute[348325]:    <smbios mode="sysinfo"/>
Dec  3 19:02:36 compute-0 nova_compute[348325]:  </os>
Dec  3 19:02:36 compute-0 nova_compute[348325]:  <features>
Dec  3 19:02:36 compute-0 nova_compute[348325]:    <acpi/>
Dec  3 19:02:36 compute-0 nova_compute[348325]:    <apic/>
Dec  3 19:02:36 compute-0 nova_compute[348325]:    <vmcoreinfo/>
Dec  3 19:02:36 compute-0 nova_compute[348325]:  </features>
Dec  3 19:02:36 compute-0 nova_compute[348325]:  <clock offset="utc">
Dec  3 19:02:36 compute-0 nova_compute[348325]:    <timer name="pit" tickpolicy="delay"/>
Dec  3 19:02:36 compute-0 nova_compute[348325]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  3 19:02:36 compute-0 nova_compute[348325]:    <timer name="hpet" present="no"/>
Dec  3 19:02:36 compute-0 nova_compute[348325]:  </clock>
Dec  3 19:02:36 compute-0 nova_compute[348325]:  <cpu mode="host-model" match="exact">
Dec  3 19:02:36 compute-0 nova_compute[348325]:    <topology sockets="1" cores="1" threads="1"/>
Dec  3 19:02:36 compute-0 nova_compute[348325]:  </cpu>
Dec  3 19:02:36 compute-0 nova_compute[348325]:  <devices>
Dec  3 19:02:36 compute-0 nova_compute[348325]:    <disk type="network" device="disk">
Dec  3 19:02:36 compute-0 nova_compute[348325]:      <driver type="raw" cache="none"/>
Dec  3 19:02:36 compute-0 nova_compute[348325]:      <source protocol="rbd" name="vms/fd1bf28c-ce00-44df-b134-5fa073e2246d_disk">
Dec  3 19:02:36 compute-0 nova_compute[348325]:        <host name="192.168.122.100" port="6789"/>
Dec  3 19:02:36 compute-0 nova_compute[348325]:      </source>
Dec  3 19:02:36 compute-0 nova_compute[348325]:      <auth username="openstack">
Dec  3 19:02:36 compute-0 nova_compute[348325]:        <secret type="ceph" uuid="c1caf3ba-b2a5-5005-a11e-e955c344dccc"/>
Dec  3 19:02:36 compute-0 nova_compute[348325]:      </auth>
Dec  3 19:02:36 compute-0 nova_compute[348325]:      <target dev="vda" bus="virtio"/>
Dec  3 19:02:36 compute-0 nova_compute[348325]:    </disk>
Dec  3 19:02:36 compute-0 nova_compute[348325]:    <disk type="network" device="cdrom">
Dec  3 19:02:36 compute-0 nova_compute[348325]:      <driver type="raw" cache="none"/>
Dec  3 19:02:36 compute-0 nova_compute[348325]:      <source protocol="rbd" name="vms/fd1bf28c-ce00-44df-b134-5fa073e2246d_disk.config">
Dec  3 19:02:36 compute-0 nova_compute[348325]:        <host name="192.168.122.100" port="6789"/>
Dec  3 19:02:36 compute-0 nova_compute[348325]:      </source>
Dec  3 19:02:36 compute-0 nova_compute[348325]:      <auth username="openstack">
Dec  3 19:02:36 compute-0 nova_compute[348325]:        <secret type="ceph" uuid="c1caf3ba-b2a5-5005-a11e-e955c344dccc"/>
Dec  3 19:02:36 compute-0 nova_compute[348325]:      </auth>
Dec  3 19:02:36 compute-0 nova_compute[348325]:      <target dev="sda" bus="sata"/>
Dec  3 19:02:36 compute-0 nova_compute[348325]:    </disk>
Dec  3 19:02:36 compute-0 nova_compute[348325]:    <interface type="ethernet">
Dec  3 19:02:36 compute-0 nova_compute[348325]:      <mac address="fa:16:3e:7e:eb:8f"/>
Dec  3 19:02:36 compute-0 nova_compute[348325]:      <model type="virtio"/>
Dec  3 19:02:36 compute-0 nova_compute[348325]:      <driver name="vhost" rx_queue_size="512"/>
Dec  3 19:02:36 compute-0 nova_compute[348325]:      <mtu size="1442"/>
Dec  3 19:02:36 compute-0 nova_compute[348325]:      <target dev="tap25216d9c-b1"/>
Dec  3 19:02:36 compute-0 nova_compute[348325]:    </interface>
Dec  3 19:02:36 compute-0 nova_compute[348325]:    <serial type="pty">
Dec  3 19:02:36 compute-0 nova_compute[348325]:      <log file="/var/lib/nova/instances/fd1bf28c-ce00-44df-b134-5fa073e2246d/console.log" append="off"/>
Dec  3 19:02:36 compute-0 nova_compute[348325]:    </serial>
Dec  3 19:02:36 compute-0 nova_compute[348325]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  3 19:02:36 compute-0 nova_compute[348325]:    <video>
Dec  3 19:02:36 compute-0 nova_compute[348325]:      <model type="virtio"/>
Dec  3 19:02:36 compute-0 nova_compute[348325]:    </video>
Dec  3 19:02:36 compute-0 nova_compute[348325]:    <input type="tablet" bus="usb"/>
Dec  3 19:02:36 compute-0 nova_compute[348325]:    <rng model="virtio">
Dec  3 19:02:36 compute-0 nova_compute[348325]:      <backend model="random">/dev/urandom</backend>
Dec  3 19:02:36 compute-0 nova_compute[348325]:    </rng>
Dec  3 19:02:36 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root"/>
Dec  3 19:02:36 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 19:02:36 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 19:02:36 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 19:02:36 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 19:02:36 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 19:02:36 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 19:02:36 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 19:02:36 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 19:02:36 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 19:02:36 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 19:02:36 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 19:02:36 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 19:02:36 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 19:02:36 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 19:02:36 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 19:02:36 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 19:02:36 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 19:02:36 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 19:02:36 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 19:02:36 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 19:02:36 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 19:02:36 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 19:02:36 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 19:02:36 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 19:02:36 compute-0 nova_compute[348325]:    <controller type="usb" index="0"/>
Dec  3 19:02:36 compute-0 nova_compute[348325]:    <memballoon model="virtio">
Dec  3 19:02:36 compute-0 nova_compute[348325]:      <stats period="10"/>
Dec  3 19:02:36 compute-0 nova_compute[348325]:    </memballoon>
Dec  3 19:02:36 compute-0 nova_compute[348325]:  </devices>
Dec  3 19:02:36 compute-0 nova_compute[348325]: </domain>
Dec  3 19:02:36 compute-0 nova_compute[348325]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
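The domain XML dumped above is what nova hands to libvirt. For log analysis it is often handy to pull out just the disk sources and the tap device; a self-contained sketch using only the standard library (domain_xml is a trimmed stand-in for the full document above):

    import xml.etree.ElementTree as ET

    domain_xml = """
    <domain type="kvm">
      <devices>
        <disk type="network" device="disk">
          <source protocol="rbd" name="vms/fd1bf28c-ce00-44df-b134-5fa073e2246d_disk"/>
          <target dev="vda" bus="virtio"/>
        </disk>
        <interface type="ethernet">
          <target dev="tap25216d9c-b1"/>
        </interface>
      </devices>
    </domain>
    """

    root = ET.fromstring(domain_xml)
    for disk in root.iter("disk"):
        src, tgt = disk.find("source"), disk.find("target")
        print(tgt.get("dev"), "<-", src.get("protocol"), src.get("name"))
    for iface in root.iter("interface"):
        print("vif:", iface.find("target").get("dev"))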
Dec  3 19:02:36 compute-0 nova_compute[348325]: 2025-12-03 19:02:36.619 348329 DEBUG nova.compute.manager [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] Preparing to wait for external event network-vif-plugged-25216d9c-b16b-4d38-af2c-044877eecdba prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  3 19:02:36 compute-0 nova_compute[348325]: 2025-12-03 19:02:36.619 348329 DEBUG oslo_concurrency.lockutils [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Acquiring lock "fd1bf28c-ce00-44df-b134-5fa073e2246d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 19:02:36 compute-0 nova_compute[348325]: 2025-12-03 19:02:36.619 348329 DEBUG oslo_concurrency.lockutils [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Lock "fd1bf28c-ce00-44df-b134-5fa073e2246d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 19:02:36 compute-0 nova_compute[348325]: 2025-12-03 19:02:36.620 348329 DEBUG oslo_concurrency.lockutils [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Lock "fd1bf28c-ce00-44df-b134-5fa073e2246d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
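The three lockutils lines above show the standard oslo.concurrency pattern: a named lock scoped to this instance's event bookkeeping, acquired and released around a short critical section. A minimal sketch of the same pattern (the names are illustrative, not nova's):

    from oslo_concurrency import lockutils

    instance_uuid = "fd1bf28c-ce00-44df-b134-5fa073e2246d"

    with lockutils.lock(f"{instance_uuid}-events"):
        # critical section: create or look up the pending
        # network-vif-plugged event for this instance
        pass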
Dec  3 19:02:36 compute-0 nova_compute[348325]: 2025-12-03 19:02:36.620 348329 DEBUG nova.virt.libvirt.vif [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T19:02:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-675437755',display_name='tempest-TestNetworkBasicOps-server-675437755',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-675437755',id=14,image_ref='55982930-937b-484e-96ee-69e406a48023',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBS8ezMjtIG805yQFgF4j/ECOsDiCMS7aCiCqdC+a8KK5knfknL/g2dlwag5/vklyq6F8zAud17+RXTTXhjSnt3CAAq1zM4H8s6/fatjZ3kNM4cGkNnON7zjRpS6sbgnrA==',key_name='tempest-TestNetworkBasicOps-1344632227',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='014032eeba1145f99481402acd561743',ramdisk_id='',reservation_id='r-aw0tghxg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='55982930-937b-484e-96ee-69e406a48023',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1083905166',owner_user_name='tempest-TestNetworkBasicOps-1083905166-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T19:02:31Z,user_data=None,user_id='8fabb3dd3b1c42b491c99a1274242f68',uuid=fd1bf28c-ce00-44df-b134-5fa073e2246d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "25216d9c-b16b-4d38-af2c-044877eecdba", "address": "fa:16:3e:7e:eb:8f", "network": {"id": "d9057d7e-a146-4d5d-b454-162ed672215e", "bridge": "br-int", "label": "tempest-network-smoke--493824106", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "014032eeba1145f99481402acd561743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25216d9c-b1", "ovs_interfaceid": "25216d9c-b16b-4d38-af2c-044877eecdba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  3 19:02:36 compute-0 nova_compute[348325]: 2025-12-03 19:02:36.621 348329 DEBUG nova.network.os_vif_util [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Converting VIF {"id": "25216d9c-b16b-4d38-af2c-044877eecdba", "address": "fa:16:3e:7e:eb:8f", "network": {"id": "d9057d7e-a146-4d5d-b454-162ed672215e", "bridge": "br-int", "label": "tempest-network-smoke--493824106", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "014032eeba1145f99481402acd561743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25216d9c-b1", "ovs_interfaceid": "25216d9c-b16b-4d38-af2c-044877eecdba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 19:02:36 compute-0 nova_compute[348325]: 2025-12-03 19:02:36.621 348329 DEBUG nova.network.os_vif_util [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7e:eb:8f,bridge_name='br-int',has_traffic_filtering=True,id=25216d9c-b16b-4d38-af2c-044877eecdba,network=Network(d9057d7e-a146-4d5d-b454-162ed672215e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25216d9c-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 19:02:36 compute-0 nova_compute[348325]: 2025-12-03 19:02:36.622 348329 DEBUG os_vif [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7e:eb:8f,bridge_name='br-int',has_traffic_filtering=True,id=25216d9c-b16b-4d38-af2c-044877eecdba,network=Network(d9057d7e-a146-4d5d-b454-162ed672215e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25216d9c-b1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  3 19:02:36 compute-0 nova_compute[348325]: 2025-12-03 19:02:36.623 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:02:36 compute-0 nova_compute[348325]: 2025-12-03 19:02:36.623 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 19:02:36 compute-0 nova_compute[348325]: 2025-12-03 19:02:36.624 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 19:02:36 compute-0 nova_compute[348325]: 2025-12-03 19:02:36.627 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:02:36 compute-0 nova_compute[348325]: 2025-12-03 19:02:36.628 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap25216d9c-b1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 19:02:36 compute-0 nova_compute[348325]: 2025-12-03 19:02:36.628 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap25216d9c-b1, col_values=(('external_ids', {'iface-id': '25216d9c-b16b-4d38-af2c-044877eecdba', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7e:eb:8f', 'vm-uuid': 'fd1bf28c-ce00-44df-b134-5fa073e2246d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 19:02:36 compute-0 nova_compute[348325]: 2025-12-03 19:02:36.630 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:02:36 compute-0 NetworkManager[49087]: <info>  [1764788556.6315] manager: (tap25216d9c-b1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/77)
Dec  3 19:02:36 compute-0 nova_compute[348325]: 2025-12-03 19:02:36.632 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  3 19:02:36 compute-0 nova_compute[348325]: 2025-12-03 19:02:36.638 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:02:36 compute-0 nova_compute[348325]: 2025-12-03 19:02:36.639 348329 INFO os_vif [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7e:eb:8f,bridge_name='br-int',has_traffic_filtering=True,id=25216d9c-b16b-4d38-af2c-044877eecdba,network=Network(d9057d7e-a146-4d5d-b454-162ed672215e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25216d9c-b1')#033[00m
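The ovsdbapp transaction logged a few lines up (an AddPortCommand plus a DbSetCommand on the Interface row) is what wires the tap device into br-int with the iface-id OVN uses to match its logical port. A rough CLI equivalent of that transaction, driven from Python; this illustrates the resulting OVS state, not the code path nova/os-vif actually takes:

    import subprocess

    port = "tap25216d9c-b1"
    subprocess.check_call([
        "ovs-vsctl",
        "--", "--may-exist", "add-port", "br-int", port,
        "--", "set", "Interface", port,
        "external_ids:iface-id=25216d9c-b16b-4d38-af2c-044877eecdba",
        "external_ids:iface-status=active",
        "external_ids:attached-mac=fa:16:3e:7e:eb:8f",
        "external_ids:vm-uuid=fd1bf28c-ce00-44df-b134-5fa073e2246d",
    ])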
Dec  3 19:02:36 compute-0 nova_compute[348325]: 2025-12-03 19:02:36.690 348329 DEBUG nova.virt.libvirt.driver [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  3 19:02:36 compute-0 nova_compute[348325]: 2025-12-03 19:02:36.691 348329 DEBUG nova.virt.libvirt.driver [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  3 19:02:36 compute-0 nova_compute[348325]: 2025-12-03 19:02:36.691 348329 DEBUG nova.virt.libvirt.driver [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] No VIF found with MAC fa:16:3e:7e:eb:8f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  3 19:02:36 compute-0 nova_compute[348325]: 2025-12-03 19:02:36.692 348329 INFO nova.virt.libvirt.driver [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] Using config drive#033[00m
Dec  3 19:02:36 compute-0 nova_compute[348325]: 2025-12-03 19:02:36.723 348329 DEBUG nova.storage.rbd_utils [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] rbd image fd1bf28c-ce00-44df-b134-5fa073e2246d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 19:02:36 compute-0 podman[451513]: 2025-12-03 19:02:36.995705394 +0000 UTC m=+0.136375110 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 19:02:37 compute-0 nova_compute[348325]: 2025-12-03 19:02:37.537 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:02:37 compute-0 nova_compute[348325]: 2025-12-03 19:02:37.592 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 19:02:37 compute-0 nova_compute[348325]: 2025-12-03 19:02:37.592 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 19:02:37 compute-0 nova_compute[348325]: 2025-12-03 19:02:37.593 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 19:02:37 compute-0 nova_compute[348325]: 2025-12-03 19:02:37.593 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 19:02:37 compute-0 nova_compute[348325]: 2025-12-03 19:02:37.593 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 19:02:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 19:02:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4081857814' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 19:02:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 19:02:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4081857814' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 19:02:37 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1912: 321 pgs: 321 active+clean; 282 MiB data, 403 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec  3 19:02:38 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 19:02:38 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1798871275' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 19:02:38 compute-0 nova_compute[348325]: 2025-12-03 19:02:38.115 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 19:02:38 compute-0 nova_compute[348325]: 2025-12-03 19:02:38.158 348329 INFO nova.virt.libvirt.driver [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] Creating config drive at /var/lib/nova/instances/fd1bf28c-ce00-44df-b134-5fa073e2246d/disk.config#033[00m
Dec  3 19:02:38 compute-0 nova_compute[348325]: 2025-12-03 19:02:38.166 348329 DEBUG oslo_concurrency.processutils [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fd1bf28c-ce00-44df-b134-5fa073e2246d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfng569lh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 19:02:38 compute-0 nova_compute[348325]: 2025-12-03 19:02:38.290 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 19:02:38 compute-0 nova_compute[348325]: 2025-12-03 19:02:38.291 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 19:02:38 compute-0 nova_compute[348325]: 2025-12-03 19:02:38.293 348329 DEBUG oslo_concurrency.processutils [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fd1bf28c-ce00-44df-b134-5fa073e2246d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfng569lh" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
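The config drive is a plain ISO9660 image built from a temporary staging directory; the mkisofs invocation above can be reproduced verbatim. A sketch mirroring the logged command, where the staging directory stands in for the /tmp/tmpfng569lh path in the log:

    import subprocess
    import tempfile

    staging = tempfile.mkdtemp()  # populate with openstack/ metadata before building
    iso_path = ("/var/lib/nova/instances/"
                "fd1bf28c-ce00-44df-b134-5fa073e2246d/disk.config")

    subprocess.check_call([
        "/usr/bin/mkisofs", "-o", iso_path,
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
        "-quiet", "-J", "-r",
        "-V", "config-2",  # the volume label cloud-init probes for
        staging,
    ])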
Dec  3 19:02:38 compute-0 nova_compute[348325]: 2025-12-03 19:02:38.327 348329 DEBUG nova.storage.rbd_utils [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] rbd image fd1bf28c-ce00-44df-b134-5fa073e2246d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 19:02:38 compute-0 nova_compute[348325]: 2025-12-03 19:02:38.333 348329 DEBUG oslo_concurrency.processutils [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fd1bf28c-ce00-44df-b134-5fa073e2246d/disk.config fd1bf28c-ce00-44df-b134-5fa073e2246d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 19:02:38 compute-0 nova_compute[348325]: 2025-12-03 19:02:38.358 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 19:02:38 compute-0 nova_compute[348325]: 2025-12-03 19:02:38.359 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 19:02:38 compute-0 nova_compute[348325]: 2025-12-03 19:02:38.363 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 19:02:38 compute-0 nova_compute[348325]: 2025-12-03 19:02:38.363 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 19:02:38 compute-0 nova_compute[348325]: 2025-12-03 19:02:38.524 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:02:38 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 19:02:38 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 19:02:38 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 19:02:38 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 19:02:38 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 19:02:38 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:02:38 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev c57021a6-df96-4961-b55c-2eb100b38fbc does not exist
Dec  3 19:02:38 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 9f47b49d-ae60-447f-a16a-0ea7d8681476 does not exist
Dec  3 19:02:38 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 6f4e2318-d7fc-4714-a693-c8ca37fb3922 does not exist
Dec  3 19:02:38 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 19:02:38 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 19:02:38 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 19:02:38 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 19:02:38 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 19:02:38 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 19:02:38 compute-0 nova_compute[348325]: 2025-12-03 19:02:38.611 348329 DEBUG oslo_concurrency.processutils [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fd1bf28c-ce00-44df-b134-5fa073e2246d/disk.config fd1bf28c-ce00-44df-b134-5fa073e2246d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.278s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 19:02:38 compute-0 nova_compute[348325]: 2025-12-03 19:02:38.612 348329 INFO nova.virt.libvirt.driver [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] Deleting local config drive /var/lib/nova/instances/fd1bf28c-ce00-44df-b134-5fa073e2246d/disk.config because it was imported into RBD.#033[00m
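Because this deployment keeps instance disks in Ceph, the freshly built ISO is pushed into the vms pool and the local copy removed, exactly as the two log lines above describe. A sketch reusing the logged rbd command:

    import os
    import subprocess

    local = ("/var/lib/nova/instances/"
             "fd1bf28c-ce00-44df-b134-5fa073e2246d/disk.config")
    subprocess.check_call([
        "rbd", "import", "--pool", "vms", local,
        "fd1bf28c-ce00-44df-b134-5fa073e2246d_disk.config",
        "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])
    os.unlink(local)  # matches "Deleting local config drive ... imported into RBD"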
Dec  3 19:02:38 compute-0 kernel: tap25216d9c-b1: entered promiscuous mode
Dec  3 19:02:38 compute-0 NetworkManager[49087]: <info>  [1764788558.6772] manager: (tap25216d9c-b1): new Tun device (/org/freedesktop/NetworkManager/Devices/78)
Dec  3 19:02:38 compute-0 nova_compute[348325]: 2025-12-03 19:02:38.679 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:02:38 compute-0 ovn_controller[89305]: 2025-12-03T19:02:38Z|00161|binding|INFO|Claiming lport 25216d9c-b16b-4d38-af2c-044877eecdba for this chassis.
Dec  3 19:02:38 compute-0 ovn_controller[89305]: 2025-12-03T19:02:38Z|00162|binding|INFO|25216d9c-b16b-4d38-af2c-044877eecdba: Claiming fa:16:3e:7e:eb:8f 10.100.0.7
Dec  3 19:02:38 compute-0 ovn_controller[89305]: 2025-12-03T19:02:38Z|00163|binding|INFO|Setting lport 25216d9c-b16b-4d38-af2c-044877eecdba ovn-installed in OVS
Dec  3 19:02:38 compute-0 nova_compute[348325]: 2025-12-03 19:02:38.700 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:02:38 compute-0 nova_compute[348325]: 2025-12-03 19:02:38.702 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:02:38 compute-0 systemd-machined[138702]: New machine qemu-15-instance-0000000e.
Dec  3 19:02:38 compute-0 systemd[1]: Started Virtual Machine qemu-15-instance-0000000e.
Dec  3 19:02:38 compute-0 systemd-udevd[451789]: Network interface NamePolicy= disabled on kernel command line.
Dec  3 19:02:38 compute-0 NetworkManager[49087]: <info>  [1764788558.7508] device (tap25216d9c-b1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  3 19:02:38 compute-0 NetworkManager[49087]: <info>  [1764788558.7627] device (tap25216d9c-b1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  3 19:02:38 compute-0 ovn_controller[89305]: 2025-12-03T19:02:38Z|00164|binding|INFO|Setting lport 25216d9c-b16b-4d38-af2c-044877eecdba up in Southbound
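ovn-controller has now claimed the logical port for this chassis and marked it up in the Southbound database; that flip is what ultimately produces the network-vif-plugged event nova is waiting for. One way to verify the state by hand (a diagnostic suggestion, not part of the logged flow):

    import subprocess

    # Show the Port_Binding row ovn-controller just updated; expect the
    # chassis column set to compute-0 and up=[true].
    print(subprocess.check_output([
        "ovn-sbctl", "find", "Port_Binding",
        "logical_port=25216d9c-b16b-4d38-af2c-044877eecdba",
    ]).decode())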
Dec  3 19:02:38 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:02:38.798 286999 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7e:eb:8f 10.100.0.7'], port_security=['fa:16:3e:7e:eb:8f 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'fd1bf28c-ce00-44df-b134-5fa073e2246d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d9057d7e-a146-4d5d-b454-162ed672215e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '014032eeba1145f99481402acd561743', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7ebf03a8-1d0d-487d-a6e4-4f3166db9cc1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e6f5d93a-4b7a-44a9-a795-c197381d4f0f, chassis=[<ovs.db.idl.Row object at 0x7f81e3e96760>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f81e3e96760>], logical_port=25216d9c-b16b-4d38-af2c-044877eecdba) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 19:02:38 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:02:38.800 286999 INFO neutron.agent.ovn.metadata.agent [-] Port 25216d9c-b16b-4d38-af2c-044877eecdba in datapath d9057d7e-a146-4d5d-b454-162ed672215e bound to our chassis#033[00m
Dec  3 19:02:38 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:02:38.804 286999 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d9057d7e-a146-4d5d-b454-162ed672215e#033[00m
Dec  3 19:02:38 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:02:38.833 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[b51de125-1bd4-47e3-b810-11ed66122777]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 19:02:38 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:02:38.883 411797 DEBUG oslo.privsep.daemon [-] privsep: reply[0e111a36-de68-4310-b012-f7df83dd3889]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 19:02:38 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:02:38.888 411797 DEBUG oslo.privsep.daemon [-] privsep: reply[d1d77d85-ab20-40ff-b27c-18bbca5011f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 19:02:38 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:02:38.924 411797 DEBUG oslo.privsep.daemon [-] privsep: reply[23420852-2ca1-40c4-a9df-fbf4757224de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 19:02:38 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:02:38.940 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[d2772ee6-b15b-4792-b406-054531e365bc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd9057d7e-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:6f:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 677036, 'reachable_time': 35272, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 451855, 'error': None, 'target': 'ovnmeta-d9057d7e-a146-4d5d-b454-162ed672215e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 19:02:38 compute-0 nova_compute[348325]: 2025-12-03 19:02:38.952 348329 WARNING nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 19:02:38 compute-0 nova_compute[348325]: 2025-12-03 19:02:38.953 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3508MB free_disk=59.87649154663086GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 19:02:38 compute-0 nova_compute[348325]: 2025-12-03 19:02:38.954 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 19:02:38 compute-0 nova_compute[348325]: 2025-12-03 19:02:38.954 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 19:02:38 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:02:38.961 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[70cb574f-c5dd-463b-9028-2999ac9d22df]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapd9057d7e-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 677047, 'tstamp': 677047}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 451856, 'error': None, 'target': 'ovnmeta-d9057d7e-a146-4d5d-b454-162ed672215e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd9057d7e-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 677050, 'tstamp': 677050}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 451856, 'error': None, 'target': 'ovnmeta-d9057d7e-a146-4d5d-b454-162ed672215e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 19:02:38 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:02:38.964 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd9057d7e-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 19:02:38 compute-0 nova_compute[348325]: 2025-12-03 19:02:38.965 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:02:38 compute-0 nova_compute[348325]: 2025-12-03 19:02:38.967 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:02:38 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:02:38.969 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd9057d7e-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 19:02:38 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:02:38.969 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 19:02:38 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:02:38.970 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd9057d7e-a0, col_values=(('external_ids', {'iface-id': '61129b3d-6cea-46e4-9162-185c7245839a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 19:02:38 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:02:38.970 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 19:02:39 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 19:02:39 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:02:39 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 19:02:39 compute-0 nova_compute[348325]: 2025-12-03 19:02:39.047 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance a4fc45c7-44e4-4b50-a3e0-98de13268f88 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 19:02:39 compute-0 nova_compute[348325]: 2025-12-03 19:02:39.048 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 19:02:39 compute-0 nova_compute[348325]: 2025-12-03 19:02:39.048 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance fd1bf28c-ce00-44df-b134-5fa073e2246d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 19:02:39 compute-0 nova_compute[348325]: 2025-12-03 19:02:39.049 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 19:02:39 compute-0 nova_compute[348325]: 2025-12-03 19:02:39.049 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 19:02:39 compute-0 nova_compute[348325]: 2025-12-03 19:02:39.190 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 19:02:39 compute-0 podman[451922]: 2025-12-03 19:02:39.315838584 +0000 UTC m=+0.050936245 container create a4d2f800b34e9f8490254977d6274ff84d095169d6334a030a556a12afc256c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec  3 19:02:39 compute-0 nova_compute[348325]: 2025-12-03 19:02:39.320 348329 DEBUG nova.compute.manager [req-a46d0aa5-fc1e-4f5b-a416-a1e69412d6d0 req-70dcb535-d1fe-40f8-9dd7-4f02ca1d9a49 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] Received event network-vif-plugged-25216d9c-b16b-4d38-af2c-044877eecdba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 19:02:39 compute-0 nova_compute[348325]: 2025-12-03 19:02:39.320 348329 DEBUG oslo_concurrency.lockutils [req-a46d0aa5-fc1e-4f5b-a416-a1e69412d6d0 req-70dcb535-d1fe-40f8-9dd7-4f02ca1d9a49 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "fd1bf28c-ce00-44df-b134-5fa073e2246d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 19:02:39 compute-0 nova_compute[348325]: 2025-12-03 19:02:39.321 348329 DEBUG oslo_concurrency.lockutils [req-a46d0aa5-fc1e-4f5b-a416-a1e69412d6d0 req-70dcb535-d1fe-40f8-9dd7-4f02ca1d9a49 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "fd1bf28c-ce00-44df-b134-5fa073e2246d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 19:02:39 compute-0 nova_compute[348325]: 2025-12-03 19:02:39.322 348329 DEBUG oslo_concurrency.lockutils [req-a46d0aa5-fc1e-4f5b-a416-a1e69412d6d0 req-70dcb535-d1fe-40f8-9dd7-4f02ca1d9a49 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "fd1bf28c-ce00-44df-b134-5fa073e2246d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 19:02:39 compute-0 nova_compute[348325]: 2025-12-03 19:02:39.322 348329 DEBUG nova.compute.manager [req-a46d0aa5-fc1e-4f5b-a416-a1e69412d6d0 req-70dcb535-d1fe-40f8-9dd7-4f02ca1d9a49 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] Processing event network-vif-plugged-25216d9c-b16b-4d38-af2c-044877eecdba _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  3 19:02:39 compute-0 systemd[1]: Started libpod-conmon-a4d2f800b34e9f8490254977d6274ff84d095169d6334a030a556a12afc256c3.scope.
Dec  3 19:02:39 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:02:39 compute-0 podman[451922]: 2025-12-03 19:02:39.297305561 +0000 UTC m=+0.032403232 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:02:39 compute-0 podman[451922]: 2025-12-03 19:02:39.408157947 +0000 UTC m=+0.143255628 container init a4d2f800b34e9f8490254977d6274ff84d095169d6334a030a556a12afc256c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_ellis, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec  3 19:02:39 compute-0 nova_compute[348325]: 2025-12-03 19:02:39.413 348329 DEBUG nova.virt.driver [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Emitting event <LifecycleEvent: 1764788559.406308, fd1bf28c-ce00-44df-b134-5fa073e2246d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 19:02:39 compute-0 nova_compute[348325]: 2025-12-03 19:02:39.413 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] VM Started (Lifecycle Event)#033[00m
Dec  3 19:02:39 compute-0 nova_compute[348325]: 2025-12-03 19:02:39.415 348329 DEBUG nova.compute.manager [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  3 19:02:39 compute-0 podman[451922]: 2025-12-03 19:02:39.416796118 +0000 UTC m=+0.151893769 container start a4d2f800b34e9f8490254977d6274ff84d095169d6334a030a556a12afc256c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_ellis, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:02:39 compute-0 podman[451922]: 2025-12-03 19:02:39.421569034 +0000 UTC m=+0.156666685 container attach a4d2f800b34e9f8490254977d6274ff84d095169d6334a030a556a12afc256c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_ellis, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 19:02:39 compute-0 naughty_ellis[451973]: 167 167
Dec  3 19:02:39 compute-0 systemd[1]: libpod-a4d2f800b34e9f8490254977d6274ff84d095169d6334a030a556a12afc256c3.scope: Deactivated successfully.
Dec  3 19:02:39 compute-0 podman[451922]: 2025-12-03 19:02:39.426332971 +0000 UTC m=+0.161430632 container died a4d2f800b34e9f8490254977d6274ff84d095169d6334a030a556a12afc256c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_ellis, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:02:39 compute-0 nova_compute[348325]: 2025-12-03 19:02:39.430 348329 DEBUG nova.virt.libvirt.driver [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  3 19:02:39 compute-0 nova_compute[348325]: 2025-12-03 19:02:39.433 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 19:02:39 compute-0 nova_compute[348325]: 2025-12-03 19:02:39.439 348329 INFO nova.virt.libvirt.driver [-] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] Instance spawned successfully.#033[00m
Dec  3 19:02:39 compute-0 nova_compute[348325]: 2025-12-03 19:02:39.439 348329 DEBUG nova.virt.libvirt.driver [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  3 19:02:39 compute-0 nova_compute[348325]: 2025-12-03 19:02:39.447 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  3 19:02:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-35e9ddbc8066c2add679e9741dbfc5e3badb78199241893838a33945d7c0de42-merged.mount: Deactivated successfully.
Dec  3 19:02:39 compute-0 nova_compute[348325]: 2025-12-03 19:02:39.469 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  3 19:02:39 compute-0 nova_compute[348325]: 2025-12-03 19:02:39.470 348329 DEBUG nova.virt.driver [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Emitting event <LifecycleEvent: 1764788559.406397, fd1bf28c-ce00-44df-b134-5fa073e2246d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 19:02:39 compute-0 nova_compute[348325]: 2025-12-03 19:02:39.470 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] VM Paused (Lifecycle Event)#033[00m
Dec  3 19:02:39 compute-0 nova_compute[348325]: 2025-12-03 19:02:39.477 348329 DEBUG nova.virt.libvirt.driver [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 19:02:39 compute-0 nova_compute[348325]: 2025-12-03 19:02:39.477 348329 DEBUG nova.virt.libvirt.driver [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 19:02:39 compute-0 nova_compute[348325]: 2025-12-03 19:02:39.478 348329 DEBUG nova.virt.libvirt.driver [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 19:02:39 compute-0 nova_compute[348325]: 2025-12-03 19:02:39.478 348329 DEBUG nova.virt.libvirt.driver [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 19:02:39 compute-0 nova_compute[348325]: 2025-12-03 19:02:39.479 348329 DEBUG nova.virt.libvirt.driver [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 19:02:39 compute-0 nova_compute[348325]: 2025-12-03 19:02:39.479 348329 DEBUG nova.virt.libvirt.driver [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 19:02:39 compute-0 podman[451922]: 2025-12-03 19:02:39.486729085 +0000 UTC m=+0.221826736 container remove a4d2f800b34e9f8490254977d6274ff84d095169d6334a030a556a12afc256c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_ellis, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 19:02:39 compute-0 systemd[1]: libpod-conmon-a4d2f800b34e9f8490254977d6274ff84d095169d6334a030a556a12afc256c3.scope: Deactivated successfully.
Dec  3 19:02:39 compute-0 nova_compute[348325]: 2025-12-03 19:02:39.506 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 19:02:39 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:02:39 compute-0 nova_compute[348325]: 2025-12-03 19:02:39.519 348329 DEBUG nova.virt.driver [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Emitting event <LifecycleEvent: 1764788559.4202726, fd1bf28c-ce00-44df-b134-5fa073e2246d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 19:02:39 compute-0 nova_compute[348325]: 2025-12-03 19:02:39.519 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] VM Resumed (Lifecycle Event)#033[00m
Dec  3 19:02:39 compute-0 nova_compute[348325]: 2025-12-03 19:02:39.545 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 19:02:39 compute-0 nova_compute[348325]: 2025-12-03 19:02:39.550 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  3 19:02:39 compute-0 nova_compute[348325]: 2025-12-03 19:02:39.570 348329 INFO nova.compute.manager [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] Took 8.09 seconds to spawn the instance on the hypervisor.#033[00m
Dec  3 19:02:39 compute-0 nova_compute[348325]: 2025-12-03 19:02:39.570 348329 DEBUG nova.compute.manager [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 19:02:39 compute-0 nova_compute[348325]: 2025-12-03 19:02:39.586 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  3 19:02:39 compute-0 nova_compute[348325]: 2025-12-03 19:02:39.662 348329 INFO nova.compute.manager [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] Took 9.34 seconds to build instance.#033[00m
Dec  3 19:02:39 compute-0 nova_compute[348325]: 2025-12-03 19:02:39.679 348329 DEBUG oslo_concurrency.lockutils [None req-b930a992-1500-45ef-9d6a-73105e877f36 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Lock "fd1bf28c-ce00-44df-b134-5fa073e2246d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.436s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 19:02:39 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 19:02:39 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1662191745' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 19:02:39 compute-0 podman[451997]: 2025-12-03 19:02:39.689126244 +0000 UTC m=+0.048733430 container create c5a73119f8649ec1545960d900b3c4397a7b5e8ea5ffd610ba601936569a65ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_pike, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  3 19:02:39 compute-0 nova_compute[348325]: 2025-12-03 19:02:39.708 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 19:02:39 compute-0 nova_compute[348325]: 2025-12-03 19:02:39.711 348329 DEBUG nova.network.neutron [req-274933ce-e406-4cf0-ad31-54dd0bd7096e req-6632c699-132c-4162-9a72-c89398b507ca 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] Updated VIF entry in instance network info cache for port 25216d9c-b16b-4d38-af2c-044877eecdba. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  3 19:02:39 compute-0 nova_compute[348325]: 2025-12-03 19:02:39.711 348329 DEBUG nova.network.neutron [req-274933ce-e406-4cf0-ad31-54dd0bd7096e req-6632c699-132c-4162-9a72-c89398b507ca 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] Updating instance_info_cache with network_info: [{"id": "25216d9c-b16b-4d38-af2c-044877eecdba", "address": "fa:16:3e:7e:eb:8f", "network": {"id": "d9057d7e-a146-4d5d-b454-162ed672215e", "bridge": "br-int", "label": "tempest-network-smoke--493824106", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "014032eeba1145f99481402acd561743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25216d9c-b1", "ovs_interfaceid": "25216d9c-b16b-4d38-af2c-044877eecdba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 19:02:39 compute-0 nova_compute[348325]: 2025-12-03 19:02:39.719 348329 DEBUG nova.compute.provider_tree [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 19:02:39 compute-0 nova_compute[348325]: 2025-12-03 19:02:39.724 348329 DEBUG oslo_concurrency.lockutils [req-274933ce-e406-4cf0-ad31-54dd0bd7096e req-6632c699-132c-4162-9a72-c89398b507ca 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Releasing lock "refresh_cache-fd1bf28c-ce00-44df-b134-5fa073e2246d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 19:02:39 compute-0 nova_compute[348325]: 2025-12-03 19:02:39.733 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 19:02:39 compute-0 systemd[1]: Started libpod-conmon-c5a73119f8649ec1545960d900b3c4397a7b5e8ea5ffd610ba601936569a65ab.scope.
Dec  3 19:02:39 compute-0 nova_compute[348325]: 2025-12-03 19:02:39.754 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 19:02:39 compute-0 nova_compute[348325]: 2025-12-03 19:02:39.754 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.800s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 19:02:39 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:02:39 compute-0 podman[451997]: 2025-12-03 19:02:39.669839314 +0000 UTC m=+0.029446500 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:02:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b185099f1a6e4f8f9bc0e04e92ad72d982fcc42465f9e9667444fe555b26c884/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 19:02:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b185099f1a6e4f8f9bc0e04e92ad72d982fcc42465f9e9667444fe555b26c884/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 19:02:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b185099f1a6e4f8f9bc0e04e92ad72d982fcc42465f9e9667444fe555b26c884/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 19:02:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b185099f1a6e4f8f9bc0e04e92ad72d982fcc42465f9e9667444fe555b26c884/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 19:02:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b185099f1a6e4f8f9bc0e04e92ad72d982fcc42465f9e9667444fe555b26c884/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 19:02:39 compute-0 podman[451997]: 2025-12-03 19:02:39.82413418 +0000 UTC m=+0.183741386 container init c5a73119f8649ec1545960d900b3c4397a7b5e8ea5ffd610ba601936569a65ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_pike, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 19:02:39 compute-0 podman[451997]: 2025-12-03 19:02:39.833800106 +0000 UTC m=+0.193407292 container start c5a73119f8649ec1545960d900b3c4397a7b5e8ea5ffd610ba601936569a65ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_pike, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 19:02:39 compute-0 podman[451997]: 2025-12-03 19:02:39.838350897 +0000 UTC m=+0.197958113 container attach c5a73119f8649ec1545960d900b3c4397a7b5e8ea5ffd610ba601936569a65ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_pike, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec  3 19:02:39 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1913: 321 pgs: 321 active+clean; 282 MiB data, 405 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec  3 19:02:41 compute-0 funny_pike[452014]: --> passed data devices: 0 physical, 3 LVM
Dec  3 19:02:41 compute-0 funny_pike[452014]: --> relative data size: 1.0
Dec  3 19:02:41 compute-0 funny_pike[452014]: --> All data devices are unavailable
Dec  3 19:02:41 compute-0 systemd[1]: libpod-c5a73119f8649ec1545960d900b3c4397a7b5e8ea5ffd610ba601936569a65ab.scope: Deactivated successfully.
Dec  3 19:02:41 compute-0 systemd[1]: libpod-c5a73119f8649ec1545960d900b3c4397a7b5e8ea5ffd610ba601936569a65ab.scope: Consumed 1.162s CPU time.
Dec  3 19:02:41 compute-0 podman[451997]: 2025-12-03 19:02:41.119048596 +0000 UTC m=+1.478655792 container died c5a73119f8649ec1545960d900b3c4397a7b5e8ea5ffd610ba601936569a65ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_pike, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 19:02:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-b185099f1a6e4f8f9bc0e04e92ad72d982fcc42465f9e9667444fe555b26c884-merged.mount: Deactivated successfully.
Dec  3 19:02:41 compute-0 podman[451997]: 2025-12-03 19:02:41.196599309 +0000 UTC m=+1.556206495 container remove c5a73119f8649ec1545960d900b3c4397a7b5e8ea5ffd610ba601936569a65ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_pike, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 19:02:41 compute-0 systemd[1]: libpod-conmon-c5a73119f8649ec1545960d900b3c4397a7b5e8ea5ffd610ba601936569a65ab.scope: Deactivated successfully.
Dec  3 19:02:41 compute-0 podman[452053]: 2025-12-03 19:02:41.288069991 +0000 UTC m=+0.126892718 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  3 19:02:41 compute-0 podman[452045]: 2025-12-03 19:02:41.29089725 +0000 UTC m=+0.129940482 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  3 19:02:41 compute-0 nova_compute[348325]: 2025-12-03 19:02:41.415 348329 DEBUG nova.compute.manager [req-f34efada-2e02-4b0f-99c8-7ce374ff9c3d req-940e6537-07ad-4133-b0c9-7c03f75ff4e1 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] Received event network-vif-plugged-25216d9c-b16b-4d38-af2c-044877eecdba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 19:02:41 compute-0 nova_compute[348325]: 2025-12-03 19:02:41.416 348329 DEBUG oslo_concurrency.lockutils [req-f34efada-2e02-4b0f-99c8-7ce374ff9c3d req-940e6537-07ad-4133-b0c9-7c03f75ff4e1 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "fd1bf28c-ce00-44df-b134-5fa073e2246d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 19:02:41 compute-0 nova_compute[348325]: 2025-12-03 19:02:41.416 348329 DEBUG oslo_concurrency.lockutils [req-f34efada-2e02-4b0f-99c8-7ce374ff9c3d req-940e6537-07ad-4133-b0c9-7c03f75ff4e1 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "fd1bf28c-ce00-44df-b134-5fa073e2246d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 19:02:41 compute-0 nova_compute[348325]: 2025-12-03 19:02:41.416 348329 DEBUG oslo_concurrency.lockutils [req-f34efada-2e02-4b0f-99c8-7ce374ff9c3d req-940e6537-07ad-4133-b0c9-7c03f75ff4e1 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "fd1bf28c-ce00-44df-b134-5fa073e2246d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 19:02:41 compute-0 nova_compute[348325]: 2025-12-03 19:02:41.417 348329 DEBUG nova.compute.manager [req-f34efada-2e02-4b0f-99c8-7ce374ff9c3d req-940e6537-07ad-4133-b0c9-7c03f75ff4e1 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] No waiting events found dispatching network-vif-plugged-25216d9c-b16b-4d38-af2c-044877eecdba pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  3 19:02:41 compute-0 nova_compute[348325]: 2025-12-03 19:02:41.417 348329 WARNING nova.compute.manager [req-f34efada-2e02-4b0f-99c8-7ce374ff9c3d req-940e6537-07ad-4133-b0c9-7c03f75ff4e1 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] Received unexpected event network-vif-plugged-25216d9c-b16b-4d38-af2c-044877eecdba for instance with vm_state active and task_state None.#033[00m
Dec  3 19:02:41 compute-0 nova_compute[348325]: 2025-12-03 19:02:41.630 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:02:41 compute-0 nova_compute[348325]: 2025-12-03 19:02:41.694 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:02:41 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1914: 321 pgs: 321 active+clean; 282 MiB data, 405 MiB used, 60 GiB / 60 GiB avail; 282 KiB/s rd, 1.8 MiB/s wr, 44 op/s
Dec  3 19:02:42 compute-0 podman[452236]: 2025-12-03 19:02:42.041238224 +0000 UTC m=+0.077161474 container create 381fc6c324dee6f96d05bcc402f83f44f82a62d652489af6bbf04650674575c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_khorana, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 19:02:42 compute-0 podman[452236]: 2025-12-03 19:02:42.004811535 +0000 UTC m=+0.040734835 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:02:42 compute-0 systemd[1]: Started libpod-conmon-381fc6c324dee6f96d05bcc402f83f44f82a62d652489af6bbf04650674575c4.scope.
Dec  3 19:02:42 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:02:42 compute-0 podman[452236]: 2025-12-03 19:02:42.156208331 +0000 UTC m=+0.192131601 container init 381fc6c324dee6f96d05bcc402f83f44f82a62d652489af6bbf04650674575c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec  3 19:02:42 compute-0 podman[452236]: 2025-12-03 19:02:42.168220264 +0000 UTC m=+0.204143514 container start 381fc6c324dee6f96d05bcc402f83f44f82a62d652489af6bbf04650674575c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 19:02:42 compute-0 podman[452236]: 2025-12-03 19:02:42.172402696 +0000 UTC m=+0.208325946 container attach 381fc6c324dee6f96d05bcc402f83f44f82a62d652489af6bbf04650674575c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_khorana, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  3 19:02:42 compute-0 reverent_khorana[452252]: 167 167
Dec  3 19:02:42 compute-0 systemd[1]: libpod-381fc6c324dee6f96d05bcc402f83f44f82a62d652489af6bbf04650674575c4.scope: Deactivated successfully.
Dec  3 19:02:42 compute-0 podman[452236]: 2025-12-03 19:02:42.18199709 +0000 UTC m=+0.217920400 container died 381fc6c324dee6f96d05bcc402f83f44f82a62d652489af6bbf04650674575c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_khorana, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  3 19:02:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-d3ff8d27e19f654cf652e175e7ac27c8d31eb30714374f572fb592e0c062158e-merged.mount: Deactivated successfully.
Dec  3 19:02:42 compute-0 podman[452236]: 2025-12-03 19:02:42.238716604 +0000 UTC m=+0.274639854 container remove 381fc6c324dee6f96d05bcc402f83f44f82a62d652489af6bbf04650674575c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  3 19:02:42 compute-0 systemd[1]: libpod-conmon-381fc6c324dee6f96d05bcc402f83f44f82a62d652489af6bbf04650674575c4.scope: Deactivated successfully.
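The run above, from container create through init, start, attach, died, and remove, bracketed by the libpod-conmon scope, is the complete lifecycle of one short-lived cephadm helper container. A minimal sketch for watching these transitions as they happen, assuming the host's podman supports "podman events --format json" (the 4.x series logged here does; the event field names Type, Status, ID and Name are assumptions based on that format):

import json
import subprocess

# Stream podman events as JSON lines and echo container lifecycle
# transitions (create/init/start/attach/died/remove, as seen above).
proc = subprocess.Popen(
    ["podman", "events", "--format", "json"],
    stdout=subprocess.PIPE, text=True,
)
for line in proc.stdout:
    ev = json.loads(line)
    if ev.get("Type") != "container":
        continue
    print(ev.get("Status"), str(ev.get("ID", ""))[:12], ev.get("Name"))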
Dec  3 19:02:42 compute-0 podman[452274]: 2025-12-03 19:02:42.471007084 +0000 UTC m=+0.057458234 container create 9d8cf236854b9c86a8bb6cf924be39414abe486376253707458506c039483b72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  3 19:02:42 compute-0 systemd[1]: Started libpod-conmon-9d8cf236854b9c86a8bb6cf924be39414abe486376253707458506c039483b72.scope.
Dec  3 19:02:42 compute-0 podman[452274]: 2025-12-03 19:02:42.446950617 +0000 UTC m=+0.033401817 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:02:42 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:02:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2b461d937eabfb46ea75cf965ee332451fadb20bd838fbc0f7b001c6d809718/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 19:02:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2b461d937eabfb46ea75cf965ee332451fadb20bd838fbc0f7b001c6d809718/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 19:02:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2b461d937eabfb46ea75cf965ee332451fadb20bd838fbc0f7b001c6d809718/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 19:02:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2b461d937eabfb46ea75cf965ee332451fadb20bd838fbc0f7b001c6d809718/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 19:02:42 compute-0 podman[452274]: 2025-12-03 19:02:42.599706665 +0000 UTC m=+0.186157855 container init 9d8cf236854b9c86a8bb6cf924be39414abe486376253707458506c039483b72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:02:42 compute-0 podman[452274]: 2025-12-03 19:02:42.612115618 +0000 UTC m=+0.198566778 container start 9d8cf236854b9c86a8bb6cf924be39414abe486376253707458506c039483b72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_poitras, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 19:02:42 compute-0 podman[452274]: 2025-12-03 19:02:42.617060409 +0000 UTC m=+0.203511609 container attach 9d8cf236854b9c86a8bb6cf924be39414abe486376253707458506c039483b72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  3 19:02:43 compute-0 condescending_poitras[452288]: {
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:    "0": [
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:        {
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:            "devices": [
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:                "/dev/loop3"
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:            ],
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:            "lv_name": "ceph_lv0",
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:            "lv_size": "21470642176",
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=973fbbc8-5aff-4a53-bee8-42e5a6788dd6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:            "lv_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:            "name": "ceph_lv0",
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:            "tags": {
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:                "ceph.block_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:                "ceph.cephx_lockbox_secret": "",
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:                "ceph.cluster_name": "ceph",
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:                "ceph.crush_device_class": "",
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:                "ceph.encrypted": "0",
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:                "ceph.osd_fsid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:                "ceph.osd_id": "0",
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:                "ceph.type": "block",
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:                "ceph.vdo": "0"
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:            },
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:            "type": "block",
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:            "vg_name": "ceph_vg0"
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:        }
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:    ],
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:    "1": [
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:        {
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:            "devices": [
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:                "/dev/loop4"
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:            ],
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:            "lv_name": "ceph_lv1",
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:            "lv_size": "21470642176",
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1e2b0083-5293-47cb-a3d1-bc27cedc4ede,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:            "lv_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:            "name": "ceph_lv1",
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:            "tags": {
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:                "ceph.block_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:                "ceph.cephx_lockbox_secret": "",
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:                "ceph.cluster_name": "ceph",
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:                "ceph.crush_device_class": "",
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:                "ceph.encrypted": "0",
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:                "ceph.osd_fsid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:                "ceph.osd_id": "1",
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:                "ceph.type": "block",
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:                "ceph.vdo": "0"
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:            },
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:            "type": "block",
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:            "vg_name": "ceph_vg1"
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:        }
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:    ],
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:    "2": [
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:        {
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:            "devices": [
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:                "/dev/loop5"
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:            ],
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:            "lv_name": "ceph_lv2",
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:            "lv_size": "21470642176",
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2abec9de-afba-437e-9a17-384a1dd8cd50,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:            "lv_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:            "name": "ceph_lv2",
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:            "tags": {
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:                "ceph.block_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:                "ceph.cephx_lockbox_secret": "",
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:                "ceph.cluster_name": "ceph",
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:                "ceph.crush_device_class": "",
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:                "ceph.encrypted": "0",
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:                "ceph.osd_fsid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:                "ceph.osd_id": "2",
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:                "ceph.type": "block",
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:                "ceph.vdo": "0"
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:            },
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:            "type": "block",
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:            "vg_name": "ceph_vg2"
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:        }
Dec  3 19:02:43 compute-0 condescending_poitras[452288]:    ]
Dec  3 19:02:43 compute-0 condescending_poitras[452288]: }
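The JSON that condescending_poitras printed has the shape of "ceph-volume lvm list --format json": a map from OSD id to the logical volumes backing that OSD, with the same metadata carried twice, once as the flat lv_tags string and once as the parsed tags object. A minimal sketch for reducing it to an osd_id -> device mapping, assuming the output above was captured to a file (the filename lvm_list.json is illustrative):

import json

# Reduce ceph-volume lvm list-style JSON (as printed above) to one
# line per OSD: id, LV path, backing devices, and osd_fsid tag.
with open("lvm_list.json") as fh:  # hypothetical capture of the output above
    lvm = json.load(fh)

for osd_id, lvs in sorted(lvm.items(), key=lambda kv: int(kv[0])):
    for lv in lvs:
        tags = lv.get("tags", {})
        print(osd_id, lv.get("lv_path"), lv.get("devices"),
              "osd_fsid=" + tags.get("ceph.osd_fsid", "?"))

Run against the listing above, this would print three lines, one per OSD, each backed by a single loop device.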
Dec  3 19:02:43 compute-0 systemd[1]: libpod-9d8cf236854b9c86a8bb6cf924be39414abe486376253707458506c039483b72.scope: Deactivated successfully.
Dec  3 19:02:43 compute-0 nova_compute[348325]: 2025-12-03 19:02:43.526 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:02:43 compute-0 nova_compute[348325]: 2025-12-03 19:02:43.543 348329 DEBUG nova.compute.manager [req-34c788fb-1ed8-4850-9997-65747a90417c req-ec098438-7062-4010-9d44-9e885574c9ba 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] Received event network-changed-25216d9c-b16b-4d38-af2c-044877eecdba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  3 19:02:43 compute-0 nova_compute[348325]: 2025-12-03 19:02:43.544 348329 DEBUG nova.compute.manager [req-34c788fb-1ed8-4850-9997-65747a90417c req-ec098438-7062-4010-9d44-9e885574c9ba 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] Refreshing instance network info cache due to event network-changed-25216d9c-b16b-4d38-af2c-044877eecdba. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec  3 19:02:43 compute-0 nova_compute[348325]: 2025-12-03 19:02:43.544 348329 DEBUG oslo_concurrency.lockutils [req-34c788fb-1ed8-4850-9997-65747a90417c req-ec098438-7062-4010-9d44-9e885574c9ba 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "refresh_cache-fd1bf28c-ce00-44df-b134-5fa073e2246d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  3 19:02:43 compute-0 nova_compute[348325]: 2025-12-03 19:02:43.545 348329 DEBUG oslo_concurrency.lockutils [req-34c788fb-1ed8-4850-9997-65747a90417c req-ec098438-7062-4010-9d44-9e885574c9ba 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquired lock "refresh_cache-fd1bf28c-ce00-44df-b134-5fa073e2246d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  3 19:02:43 compute-0 nova_compute[348325]: 2025-12-03 19:02:43.545 348329 DEBUG nova.network.neutron [req-34c788fb-1ed8-4850-9997-65747a90417c req-ec098438-7062-4010-9d44-9e885574c9ba 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] Refreshing network info cache for port 25216d9c-b16b-4d38-af2c-044877eecdba _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec  3 19:02:43 compute-0 podman[452297]: 2025-12-03 19:02:43.597748845 +0000 UTC m=+0.049382956 container died 9d8cf236854b9c86a8bb6cf924be39414abe486376253707458506c039483b72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  3 19:02:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-e2b461d937eabfb46ea75cf965ee332451fadb20bd838fbc0f7b001c6d809718-merged.mount: Deactivated successfully.
Dec  3 19:02:43 compute-0 podman[452297]: 2025-12-03 19:02:43.672638754 +0000 UTC m=+0.124272855 container remove 9d8cf236854b9c86a8bb6cf924be39414abe486376253707458506c039483b72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_poitras, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  3 19:02:43 compute-0 systemd[1]: libpod-conmon-9d8cf236854b9c86a8bb6cf924be39414abe486376253707458506c039483b72.scope: Deactivated successfully.
Dec  3 19:02:43 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1915: 321 pgs: 321 active+clean; 282 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 535 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Dec  3 19:02:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:02:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:02:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:02:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:02:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:02:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:02:44 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:02:44 compute-0 podman[452447]: 2025-12-03 19:02:44.612635007 +0000 UTC m=+0.057939815 container create 2f1714141bbcadebac95e0d5fa623559d7cb0ee8d709b3893b58de79b75ff606 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_einstein, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 19:02:44 compute-0 systemd[1]: Started libpod-conmon-2f1714141bbcadebac95e0d5fa623559d7cb0ee8d709b3893b58de79b75ff606.scope.
Dec  3 19:02:44 compute-0 podman[452447]: 2025-12-03 19:02:44.589872991 +0000 UTC m=+0.035177839 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:02:44 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:02:44 compute-0 podman[452447]: 2025-12-03 19:02:44.729526649 +0000 UTC m=+0.174831477 container init 2f1714141bbcadebac95e0d5fa623559d7cb0ee8d709b3893b58de79b75ff606 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 19:02:44 compute-0 podman[452447]: 2025-12-03 19:02:44.744649949 +0000 UTC m=+0.189954757 container start 2f1714141bbcadebac95e0d5fa623559d7cb0ee8d709b3893b58de79b75ff606 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 19:02:44 compute-0 podman[452447]: 2025-12-03 19:02:44.748846841 +0000 UTC m=+0.194151649 container attach 2f1714141bbcadebac95e0d5fa623559d7cb0ee8d709b3893b58de79b75ff606 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:02:44 compute-0 vibrant_einstein[452463]: 167 167
Dec  3 19:02:44 compute-0 systemd[1]: libpod-2f1714141bbcadebac95e0d5fa623559d7cb0ee8d709b3893b58de79b75ff606.scope: Deactivated successfully.
Dec  3 19:02:44 compute-0 podman[452447]: 2025-12-03 19:02:44.754142261 +0000 UTC m=+0.199447069 container died 2f1714141bbcadebac95e0d5fa623559d7cb0ee8d709b3893b58de79b75ff606 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_einstein, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 19:02:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-665a486d2beeb63aee9667f1f3ae0e12346a64665a219dc931869b9e7eaf1b54-merged.mount: Deactivated successfully.
Dec  3 19:02:44 compute-0 podman[452447]: 2025-12-03 19:02:44.811709845 +0000 UTC m=+0.257014643 container remove 2f1714141bbcadebac95e0d5fa623559d7cb0ee8d709b3893b58de79b75ff606 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec  3 19:02:44 compute-0 systemd[1]: libpod-conmon-2f1714141bbcadebac95e0d5fa623559d7cb0ee8d709b3893b58de79b75ff606.scope: Deactivated successfully.
Dec  3 19:02:45 compute-0 podman[452486]: 2025-12-03 19:02:45.062524347 +0000 UTC m=+0.084536074 container create 09a28cc2e9dc6c84cee9467c41895b7154875d071381a9bd7a3460e31500551b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 19:02:45 compute-0 podman[452486]: 2025-12-03 19:02:45.031551641 +0000 UTC m=+0.053563368 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:02:45 compute-0 nova_compute[348325]: 2025-12-03 19:02:45.127 348329 DEBUG nova.network.neutron [req-34c788fb-1ed8-4850-9997-65747a90417c req-ec098438-7062-4010-9d44-9e885574c9ba 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] Updated VIF entry in instance network info cache for port 25216d9c-b16b-4d38-af2c-044877eecdba. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec  3 19:02:45 compute-0 nova_compute[348325]: 2025-12-03 19:02:45.130 348329 DEBUG nova.network.neutron [req-34c788fb-1ed8-4850-9997-65747a90417c req-ec098438-7062-4010-9d44-9e885574c9ba 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] Updating instance_info_cache with network_info: [{"id": "25216d9c-b16b-4d38-af2c-044877eecdba", "address": "fa:16:3e:7e:eb:8f", "network": {"id": "d9057d7e-a146-4d5d-b454-162ed672215e", "bridge": "br-int", "label": "tempest-network-smoke--493824106", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "014032eeba1145f99481402acd561743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25216d9c-b1", "ovs_interfaceid": "25216d9c-b16b-4d38-af2c-044877eecdba", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  3 19:02:45 compute-0 systemd[1]: Started libpod-conmon-09a28cc2e9dc6c84cee9467c41895b7154875d071381a9bd7a3460e31500551b.scope.
Dec  3 19:02:45 compute-0 nova_compute[348325]: 2025-12-03 19:02:45.155 348329 DEBUG oslo_concurrency.lockutils [req-34c788fb-1ed8-4850-9997-65747a90417c req-ec098438-7062-4010-9d44-9e885574c9ba 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Releasing lock "refresh_cache-fd1bf28c-ce00-44df-b134-5fa073e2246d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  3 19:02:45 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:02:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cd0d4dd7a810b1d6201752947f1e25fae451c284afb1dec22df76ddfac82ab0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 19:02:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cd0d4dd7a810b1d6201752947f1e25fae451c284afb1dec22df76ddfac82ab0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 19:02:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cd0d4dd7a810b1d6201752947f1e25fae451c284afb1dec22df76ddfac82ab0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 19:02:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cd0d4dd7a810b1d6201752947f1e25fae451c284afb1dec22df76ddfac82ab0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 19:02:45 compute-0 podman[452486]: 2025-12-03 19:02:45.187736544 +0000 UTC m=+0.209748271 container init 09a28cc2e9dc6c84cee9467c41895b7154875d071381a9bd7a3460e31500551b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  3 19:02:45 compute-0 podman[452486]: 2025-12-03 19:02:45.201556861 +0000 UTC m=+0.223568558 container start 09a28cc2e9dc6c84cee9467c41895b7154875d071381a9bd7a3460e31500551b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_moore, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 19:02:45 compute-0 podman[452486]: 2025-12-03 19:02:45.209086265 +0000 UTC m=+0.231097952 container attach 09a28cc2e9dc6c84cee9467c41895b7154875d071381a9bd7a3460e31500551b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_moore, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 19:02:45 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1916: 321 pgs: 321 active+clean; 282 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.6 MiB/s wr, 82 op/s
Dec  3 19:02:46 compute-0 recursing_moore[452502]: {
Dec  3 19:02:46 compute-0 recursing_moore[452502]:    "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
Dec  3 19:02:46 compute-0 recursing_moore[452502]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:02:46 compute-0 recursing_moore[452502]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 19:02:46 compute-0 recursing_moore[452502]:        "osd_id": 1,
Dec  3 19:02:46 compute-0 recursing_moore[452502]:        "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 19:02:46 compute-0 recursing_moore[452502]:        "type": "bluestore"
Dec  3 19:02:46 compute-0 recursing_moore[452502]:    },
Dec  3 19:02:46 compute-0 recursing_moore[452502]:    "2abec9de-afba-437e-9a17-384a1dd8cd50": {
Dec  3 19:02:46 compute-0 recursing_moore[452502]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:02:46 compute-0 recursing_moore[452502]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 19:02:46 compute-0 recursing_moore[452502]:        "osd_id": 2,
Dec  3 19:02:46 compute-0 recursing_moore[452502]:        "osd_uuid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 19:02:46 compute-0 recursing_moore[452502]:        "type": "bluestore"
Dec  3 19:02:46 compute-0 recursing_moore[452502]:    },
Dec  3 19:02:46 compute-0 recursing_moore[452502]:    "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": {
Dec  3 19:02:46 compute-0 recursing_moore[452502]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:02:46 compute-0 recursing_moore[452502]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 19:02:46 compute-0 recursing_moore[452502]:        "osd_id": 0,
Dec  3 19:02:46 compute-0 recursing_moore[452502]:        "osd_uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 19:02:46 compute-0 recursing_moore[452502]:        "type": "bluestore"
Dec  3 19:02:46 compute-0 recursing_moore[452502]:    }
Dec  3 19:02:46 compute-0 recursing_moore[452502]: }
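This second listing is keyed by OSD fsid rather than OSD id and has the shape of "ceph-volume raw list": each entry ties an osd_uuid to its /dev/mapper device and its bluestore type. The two listings can be cross-checked against each other; a sketch, assuming both JSON documents were captured as above (both filenames are illustrative):

import json

with open("lvm_list.json") as fh:  # the per-OSD-id listing printed earlier
    by_osd_id = json.load(fh)
with open("raw_list.json") as fh:  # the per-fsid listing printed above
    by_fsid = json.load(fh)

# Every LV's ceph.osd_fsid tag should resolve to a raw-list entry that
# records the same osd_id; print the resulting osd -> device mapping.
for osd_id, lvs in by_osd_id.items():
    for lv in lvs:
        fsid = lv["tags"]["ceph.osd_fsid"]
        raw = by_fsid.get(fsid)
        assert raw is not None, f"osd.{osd_id}: fsid {fsid} missing from raw list"
        assert raw["osd_id"] == int(osd_id), f"osd id mismatch for fsid {fsid}"
        print(f"osd.{osd_id}", fsid, "->", raw["device"])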
Dec  3 19:02:46 compute-0 systemd[1]: libpod-09a28cc2e9dc6c84cee9467c41895b7154875d071381a9bd7a3460e31500551b.scope: Deactivated successfully.
Dec  3 19:02:46 compute-0 systemd[1]: libpod-09a28cc2e9dc6c84cee9467c41895b7154875d071381a9bd7a3460e31500551b.scope: Consumed 1.082s CPU time.
Dec  3 19:02:46 compute-0 podman[452535]: 2025-12-03 19:02:46.384087024 +0000 UTC m=+0.067067488 container died 09a28cc2e9dc6c84cee9467c41895b7154875d071381a9bd7a3460e31500551b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_moore, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 19:02:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-6cd0d4dd7a810b1d6201752947f1e25fae451c284afb1dec22df76ddfac82ab0-merged.mount: Deactivated successfully.
Dec  3 19:02:46 compute-0 podman[452535]: 2025-12-03 19:02:46.45194176 +0000 UTC m=+0.134922164 container remove 09a28cc2e9dc6c84cee9467c41895b7154875d071381a9bd7a3460e31500551b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_moore, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 19:02:46 compute-0 systemd[1]: libpod-conmon-09a28cc2e9dc6c84cee9467c41895b7154875d071381a9bd7a3460e31500551b.scope: Deactivated successfully.
Dec  3 19:02:46 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 19:02:46 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:02:46 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 19:02:46 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:02:46 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 348a8693-daff-49a2-aa79-9788496c016e does not exist
Dec  3 19:02:46 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 7b5f4efc-b021-4406-836c-23cf9c5ae114 does not exist
Dec  3 19:02:46 compute-0 nova_compute[348325]: 2025-12-03 19:02:46.633 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:02:46 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:02:46 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:02:47 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1917: 321 pgs: 321 active+clean; 282 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 816 KiB/s wr, 87 op/s
Dec  3 19:02:48 compute-0 nova_compute[348325]: 2025-12-03 19:02:48.528 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:02:49 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:02:49 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1918: 321 pgs: 321 active+clean; 282 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Dec  3 19:02:51 compute-0 nova_compute[348325]: 2025-12-03 19:02:51.636 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:02:51 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1919: 321 pgs: 321 active+clean; 282 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 23 KiB/s wr, 75 op/s
Dec  3 19:02:53 compute-0 nova_compute[348325]: 2025-12-03 19:02:53.531 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:02:53 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1920: 321 pgs: 321 active+clean; 282 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 22 KiB/s wr, 57 op/s
Dec  3 19:02:53 compute-0 podman[452599]: 2025-12-03 19:02:53.94955433 +0000 UTC m=+0.113685386 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true)
Dec  3 19:02:53 compute-0 podman[452601]: 2025-12-03 19:02:53.977766419 +0000 UTC m=+0.119173110 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, version=9.6, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_id=edpm, io.buildah.version=1.33.7, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  3 19:02:53 compute-0 podman[452600]: 2025-12-03 19:02:53.982291539 +0000 UTC m=+0.136321308 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 19:02:54 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:02:55 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1921: 321 pgs: 321 active+clean; 282 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 9.7 KiB/s wr, 47 op/s
Dec  3 19:02:56 compute-0 nova_compute[348325]: 2025-12-03 19:02:56.638 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:02:57 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1922: 321 pgs: 321 active+clean; 282 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 578 KiB/s rd, 8.3 KiB/s wr, 19 op/s
Dec  3 19:02:57 compute-0 podman[452657]: 2025-12-03 19:02:57.941937006 +0000 UTC m=+0.102748179 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=base rhel9, release=1214.1726694543, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, architecture=x86_64, maintainer=Red Hat, Inc., release-0.7.12=, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Dec  3 19:02:57 compute-0 podman[452658]: 2025-12-03 19:02:57.961414961 +0000 UTC m=+0.100477504 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  3 19:02:57 compute-0 podman[452659]: 2025-12-03 19:02:57.98309711 +0000 UTC m=+0.126160670 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec  3 19:02:58 compute-0 nova_compute[348325]: 2025-12-03 19:02:58.535 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:02:59 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:02:59 compute-0 podman[158200]: time="2025-12-03T19:02:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 19:02:59 compute-0 podman[158200]: @ - - [03/Dec/2025:19:02:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45044 "" "Go-http-client/1.1"
Dec  3 19:02:59 compute-0 podman[158200]: @ - - [03/Dec/2025:19:02:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9112 "" "Go-http-client/1.1"
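
The `GET /v4.9.3/libpod/...` requests above are served by the podman system service over its unix socket; the podman_exporter configuration later in this log points CONTAINER_HOST at unix:///run/podman/podman.sock. A minimal standard-library sketch that issues the same containers/json query, assuming that socket path and API version:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over an AF_UNIX socket instead of TCP."""
        def __init__(self, socket_path):
            super().__init__("localhost")
            self._socket_path = socket_path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self._socket_path)
            self.sock = sock

    # Socket path and API version as seen in this log.
    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print([c["Names"] for c in containers])
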
Dec  3 19:02:59 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1923: 321 pgs: 321 active+clean; 282 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 8.3 KiB/s wr, 1 op/s
Dec  3 19:03:01 compute-0 openstack_network_exporter[365222]: ERROR   19:03:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 19:03:01 compute-0 openstack_network_exporter[365222]: ERROR   19:03:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:03:01 compute-0 openstack_network_exporter[365222]: ERROR   19:03:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:03:01 compute-0 openstack_network_exporter[365222]: ERROR   19:03:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 19:03:01 compute-0 openstack_network_exporter[365222]: ERROR   19:03:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
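
All four exporter errors above share one root cause: ovs-appctl-style calls locate a daemon through a control socket named after its pid (for example ovs-vswitchd.<pid>.ctl in the daemon's run directory), and no such file is visible from where the exporter runs, e.g. because the run directory is not mounted into its container. A hedged sketch of that lookup, assuming the conventional run directories; actual paths are deployment-specific:

    import glob
    import os

    # Conventional run directories; ovn-northd normally uses its own.
    # A containerized exporter must have these bind-mounted to see them.
    RUNDIRS = {
        "ovsdb-server": "/var/run/openvswitch",
        "ovs-vswitchd": "/var/run/openvswitch",
        "ovn-northd": "/var/run/ovn",
    }

    def find_ctl(daemon):
        """Return control socket paths like <rundir>/<daemon>.<pid>.ctl."""
        pattern = os.path.join(RUNDIRS[daemon], daemon + ".*.ctl")
        return glob.glob(pattern)

    for daemon in RUNDIRS:
        socks = find_ctl(daemon)
        print(daemon, socks or "no control socket files found")
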
Dec  3 19:03:01 compute-0 nova_compute[348325]: 2025-12-03 19:03:01.641 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:03:01 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1924: 321 pgs: 321 active+clean; 282 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 8.3 KiB/s wr, 1 op/s
Dec  3 19:03:03 compute-0 nova_compute[348325]: 2025-12-03 19:03:03.538 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:03:03 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1925: 321 pgs: 321 active+clean; 282 MiB data, 406 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:03:04 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:03:05 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1926: 321 pgs: 321 active+clean; 282 MiB data, 406 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:03:06 compute-0 nova_compute[348325]: 2025-12-03 19:03:06.644 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:03:07 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1927: 321 pgs: 321 active+clean; 282 MiB data, 406 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:03:07 compute-0 podman[452712]: 2025-12-03 19:03:07.945783465 +0000 UTC m=+0.101372226 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  3 19:03:08 compute-0 nova_compute[348325]: 2025-12-03 19:03:08.540 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:03:09 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:03:09 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1928: 321 pgs: 321 active+clean; 282 MiB data, 406 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:03:11 compute-0 nova_compute[348325]: 2025-12-03 19:03:11.648 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:03:11 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1929: 321 pgs: 321 active+clean; 282 MiB data, 406 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:03:11 compute-0 podman[452737]: 2025-12-03 19:03:11.9824375 +0000 UTC m=+0.121775043 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  3 19:03:12 compute-0 podman[452736]: 2025-12-03 19:03:12.025069401 +0000 UTC m=+0.170860362 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller)
Dec  3 19:03:12 compute-0 ovn_controller[89305]: 2025-12-03T19:03:12Z|00165|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
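
The memory_trim line above is OVN's idle-triggered trimming: after roughly 30 s without activity the daemon returns freed heap pages to the kernel. A toy model of that trigger in Python, assuming Linux/glibc so that malloc_trim is reachable via ctypes (the real daemon does this in C):

    import ctypes
    import time

    libc = ctypes.CDLL("libc.so.6")           # Linux/glibc assumption
    IDLE_THRESHOLD = 30.0                     # seconds, ~30002 ms above
    last_active = time.monotonic()

    def maybe_trim_memory():
        idle_ms = int((time.monotonic() - last_active) * 1000)
        if idle_ms >= IDLE_THRESHOLD * 1000:
            print("Detected inactivity (last active %d ms ago): "
                  "trimming memory" % idle_ms)
            libc.malloc_trim(0)               # hand free pages back to the OS
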
Dec  3 19:03:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:13.254 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is larger than the number of worker threads available to execute them. Therefore, the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 19:03:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:13.255 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  3 19:03:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:13.255 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:03:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:13.255 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7eff8d7fffe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:03:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:13.256 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:03:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:13.256 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff9026f920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:03:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:13.256 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:03:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:13.256 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:03:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:13.256 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffa10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:03:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:13.257 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8daba2d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:03:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:13.257 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:03:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:13.257 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff90799b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:03:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:13.257 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:03:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:13.257 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8f46ebd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:03:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:13.257 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:03:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:13.257 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:03:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:13.257 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:03:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:13.257 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:03:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:13.257 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:03:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:13.257 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:03:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:13.257 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:03:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:13.258 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:03:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:13.258 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:03:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:13.258 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:03:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:13.258 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:03:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:13.258 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7fff50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:03:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:13.258 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:03:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:13.258 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7fffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:03:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:13.258 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8ef7c7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
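
The long run of "Registering pollster ... to be executed via executor" messages records each pollster being handed to one shared ThreadPoolExecutor, and the warning at the start of the cycle notes there are more pollsters than worker threads, so they queue behind a single worker. A simplified model of that pattern (an illustration, not the ceilometer code; the pollster functions are made up):

    from concurrent.futures import ThreadPoolExecutor

    def make_pollster(name):
        def poll():
            # A real pollster would inspect libvirt or the Nova API here.
            return name + ": polled"
        return poll

    # More pollsters than workers: submissions queue behind one thread,
    # which is why the manager warns the cycle may take longer than usual.
    pollsters = [make_pollster("pollster-%d" % i) for i in range(25)]
    with ThreadPoolExecutor(max_workers=1) as executor:
        futures = [executor.submit(p) for p in pollsters]  # "registration"
        for fut in futures:
            print(fut.result())                            # execution
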
Dec  3 19:03:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:13.263 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec  3 19:03:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:13.264 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/3bb34e64-ac61-46f3-99eb-2fdd346a8ecc -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}381125532ab0338283f553a8d9011c877e61445a70740cb69aa0e3ed00495f3c" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec  3 19:03:13 compute-0 nova_compute[348325]: 2025-12-03 19:03:13.542 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:03:13 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1930: 321 pgs: 321 active+clean; 282 MiB data, 406 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:03:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:03:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:03:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:03:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:03:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:03:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:03:14 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_19:03:14
Dec  3 19:03:14 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 19:03:14 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 19:03:14 compute-0 ceph-mgr[193091]: [balancer INFO root] pools ['volumes', 'backups', 'vms', 'images', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.log', 'default.rgw.meta', 'default.rgw.control', '.mgr', 'cephfs.cephfs.data']
Dec  3 19:03:14 compute-0 ceph-mgr[193091]: [balancer INFO root] prepared 0/10 changes
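
The balancer lines above show one automatic pass: the mgr builds an optimize plan in upmap mode, walks the pool list, and ends with "prepared 0/10 changes", i.e. none of the up-to-ten permitted upmap changes were needed. The same state can be read back from the CLI; a sketch via Python, where `ceph balancer status` is a real command but the exact JSON keys below are assumed from current releases:

    import json
    import subprocess

    # Ask the mgr balancer module for its status as JSON.
    out = subprocess.run(
        ["ceph", "balancer", "status", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    status = json.loads(out)
    print("active:", status.get("active"))
    print("mode:", status.get("mode"))            # e.g. "upmap"
    print("last result:", status.get("optimize_result"))
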
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.128 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1851 Content-Type: application/json Date: Wed, 03 Dec 2025 19:03:13 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-7ea4a510-aa98-4312-8523-7ef11fe1f190 x-openstack-request-id: req-7ea4a510-aa98-4312-8523-7ef11fe1f190 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.128 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "3bb34e64-ac61-46f3-99eb-2fdd346a8ecc", "name": "tempest-TestNetworkBasicOps-server-127548925", "status": "ACTIVE", "tenant_id": "014032eeba1145f99481402acd561743", "user_id": "8fabb3dd3b1c42b491c99a1274242f68", "metadata": {}, "hostId": "ab34ae4a5ec0f962e5493495fa7cf58c86342786dc64a895c1aa98b1", "image": {"id": "55982930-937b-484e-96ee-69e406a48023", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/55982930-937b-484e-96ee-69e406a48023"}]}, "flavor": {"id": "a94cfbfb-a20a-4689-ac91-e7436db75880", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/a94cfbfb-a20a-4689-ac91-e7436db75880"}]}, "created": "2025-12-03T19:01:20Z", "updated": "2025-12-03T19:01:32Z", "addresses": {"tempest-network-smoke--493824106": [{"version": 4, "addr": "10.100.0.8", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:ec:63:6e"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/3bb34e64-ac61-46f3-99eb-2fdd346a8ecc"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/3bb34e64-ac61-46f3-99eb-2fdd346a8ecc"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-TestNetworkBasicOps-1544093464", "OS-SRV-USG:launched_at": "2025-12-03T19:01:32.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-secgroup-smoke-349049751"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000d", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.128 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/3bb34e64-ac61-46f3-99eb-2fdd346a8ecc used request id req-7ea4a510-aa98-4312-8523-7ef11fe1f190 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.130 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '3bb34e64-ac61-46f3-99eb-2fdd346a8ecc', 'name': 'tempest-TestNetworkBasicOps-server-127548925', 'flavor': {'id': 'a94cfbfb-a20a-4689-ac91-e7436db75880', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '55982930-937b-484e-96ee-69e406a48023'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000d', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '014032eeba1145f99481402acd561743', 'user_id': '8fabb3dd3b1c42b491c99a1274242f68', 'hostId': 'ab34ae4a5ec0f962e5493495fa7cf58c86342786dc64a895c1aa98b1', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.134 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a4fc45c7-44e4-4b50-a3e0-98de13268f88', 'name': 'te-0714371-asg-eacwc356yfed-wjjibmhqaqmp-wkbbxaqu3pya', 'flavor': {'id': 'a94cfbfb-a20a-4689-ac91-e7436db75880', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '29e9e995-880d-46f8-bdd0-149d4e107ea9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000c', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'd29cef7b24ee4d30b2b3f5027ec6aafb', 'user_id': '5b5e6c2a7cce4e3b96611203def80123', 'hostId': 'd87badab98086e7cd0aaefe9beb8cbc86d59712043f354b2bb8c77be', 'status': 'active', 'metadata': {'metering.server_group': 'd721c97c-b9eb-44f9-a826-1b99239b172a'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.137 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance fd1bf28c-ce00-44df-b134-5fa073e2246d from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.138 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/fd1bf28c-ce00-44df-b134-5fa073e2246d -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}381125532ab0338283f553a8d9011c877e61445a70740cb69aa0e3ed00495f3c" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec  3 19:03:14 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.562 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1973 Content-Type: application/json Date: Wed, 03 Dec 2025 19:03:14 GMT Keep-Alive: timeout=5, max=99 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-f7198f46-a1e6-49f1-bfa1-083cd77fbd64 x-openstack-request-id: req-f7198f46-a1e6-49f1-bfa1-083cd77fbd64 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.563 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "fd1bf28c-ce00-44df-b134-5fa073e2246d", "name": "tempest-TestNetworkBasicOps-server-675437755", "status": "ACTIVE", "tenant_id": "014032eeba1145f99481402acd561743", "user_id": "8fabb3dd3b1c42b491c99a1274242f68", "metadata": {}, "hostId": "ab34ae4a5ec0f962e5493495fa7cf58c86342786dc64a895c1aa98b1", "image": {"id": "55982930-937b-484e-96ee-69e406a48023", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/55982930-937b-484e-96ee-69e406a48023"}]}, "flavor": {"id": "a94cfbfb-a20a-4689-ac91-e7436db75880", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/a94cfbfb-a20a-4689-ac91-e7436db75880"}]}, "created": "2025-12-03T19:02:29Z", "updated": "2025-12-03T19:02:39Z", "addresses": {"tempest-network-smoke--493824106": [{"version": 4, "addr": "10.100.0.7", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:7e:eb:8f"}, {"version": 4, "addr": "192.168.122.202", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:7e:eb:8f"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/fd1bf28c-ce00-44df-b134-5fa073e2246d"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/fd1bf28c-ce00-44df-b134-5fa073e2246d"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-TestNetworkBasicOps-1344632227", "OS-SRV-USG:launched_at": "2025-12-03T19:02:39.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-secgroup-smoke-245397841"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000e", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.563 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/fd1bf28c-ce00-44df-b134-5fa073e2246d used request id req-f7198f46-a1e6-49f1-bfa1-083cd77fbd64 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.565 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'fd1bf28c-ce00-44df-b134-5fa073e2246d', 'name': 'tempest-TestNetworkBasicOps-server-675437755', 'flavor': {'id': 'a94cfbfb-a20a-4689-ac91-e7436db75880', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '55982930-937b-484e-96ee-69e406a48023'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000e', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '014032eeba1145f99481402acd561743', 'user_id': '8fabb3dd3b1c42b491c99a1274242f68', 'hostId': 'ab34ae4a5ec0f962e5493495fa7cf58c86342786dc64a895c1aa98b1', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
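
The REQ/RESP pairs above are keystoneauth1's HTTP debug logging of the discovery agent fetching per-instance metadata from Nova. A minimal sketch of the same call, assuming valid service credentials and a reachable Keystone; the Nova URL and microversion header are the ones visible in the log, everything else is a placeholder, and this is not the agent's own code:

    from keystoneauth1 import session
    from keystoneauth1.identity import v3

    # Placeholder credentials; the auth URL is an assumption.
    auth = v3.Password(
        auth_url="https://keystone-internal.openstack.svc:5000/v3",
        username="ceilometer", password="secret",
        project_name="service",
        user_domain_name="Default", project_domain_name="Default",
    )
    sess = session.Session(auth=auth)

    server_id = "3bb34e64-ac61-46f3-99eb-2fdd346a8ecc"   # from the log
    resp = sess.get(
        "https://nova-internal.openstack.svc:8774/v2.1/servers/" + server_id,
        headers={"X-OpenStack-Nova-API-Version": "2.1"},
    )
    print(resp.json()["server"]["status"])               # e.g. "ACTIVE"
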
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.565 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.566 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.566 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.566 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.567 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-03T19:03:14.566549) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.573 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc / tap92566cef-01 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.573 14 DEBUG ceilometer.compute.pollsters [-] 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 19:03:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 19:03:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 19:03:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 19:03:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 19:03:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 19:03:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 19:03:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 19:03:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 19:03:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.586 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.592 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for fd1bf28c-ce00-44df-b134-5fa073e2246d / tap25216d9c-b1 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.592 14 DEBUG ceilometer.compute.pollsters [-] fd1bf28c-ce00-44df-b134-5fa073e2246d/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.593 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
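
The "No delta meter predecessor" messages explain the zero volumes reported just above them: a delta needs a previous counter reading for the same instance/interface pair, and on the first cycle there is none, so the sample falls back to 0. A simplified cache-based model of that logic (illustration only, not the inspector's code):

    # Previous counter readings, keyed by (instance_id, device).
    _previous = {}

    def delta_sample(instance_id, device, counter):
        """Counter growth since the last reading; 0 when there is no
        predecessor for this instance/device pair (the first cycle)."""
        key = (instance_id, device)
        prev = _previous.get(key)
        _previous[key] = counter
        if prev is None:
            print("No delta meter predecessor for %s / %s"
                  % (instance_id, device))
            return 0
        return max(counter - prev, 0)

    print(delta_sample("3bb34e64", "tap92566cef-01", 20482))  # first: 0
    print(delta_sample("3bb34e64", "tap92566cef-01", 20982))  # then: 500
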
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.593 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7eff8d8a80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.593 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.594 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.594 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.594 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.595 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-03T19:03:14.594529) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.595 14 DEBUG ceilometer.compute.pollsters [-] 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc/network.outgoing.bytes volume: 15952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.595 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.596 14 DEBUG ceilometer.compute.pollsters [-] fd1bf28c-ce00-44df-b134-5fa073e2246d/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.597 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.597 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7eff8d8a8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.597 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.597 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff9026f920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.598 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff9026f920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.598 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-03T19:03:14.598561) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.598 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.599 14 DEBUG ceilometer.compute.pollsters [-] 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc/network.outgoing.packets volume: 108 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.601 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.601 14 DEBUG ceilometer.compute.pollsters [-] fd1bf28c-ce00-44df-b134-5fa073e2246d/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.602 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.603 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7eff8d8a8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.603 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.603 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.603 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.604 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.604 14 DEBUG ceilometer.compute.pollsters [-] 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.604 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-03T19:03:14.604038) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.605 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.606 14 DEBUG ceilometer.compute.pollsters [-] fd1bf28c-ce00-44df-b134-5fa073e2246d/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.606 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.607 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7eff8d8a81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.607 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.607 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.607 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.608 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.608 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.609 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-03T19:03:14.608151) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.608 14 ERROR ceilometer.polling.manager [-] Preventing pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: tempest-TestNetworkBasicOps-server-127548925>, <NovaLikeServer: tempest-TestNetworkBasicOps-server-675437755>] on source pollsters from now on: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-TestNetworkBasicOps-server-127548925>, <NovaLikeServer: tempest-TestNetworkBasicOps-server-675437755>]
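
The ERROR above is the manager's permanent-error handling: when a pollster raises PollsterPermanentError for resources the inspector can never serve (here, LibvirtInspector provides no rate data, per the DEBUG line shortly before), those resources are dropped from the pollster's future cycles instead of failing every interval. A simplified model of that pattern; everything except the exception class named in the log is hypothetical:

    class PollsterPermanentError(Exception):
        """Raised by a pollster with resources it can never poll."""
        def __init__(self, resources):
            super().__init__(resources)
            self.resources = resources

    _blacklist = set()

    def run_pollster(poll_fn, resources):
        # Skip resources already known to fail permanently.
        todo = [r for r in resources if r not in _blacklist]
        try:
            return poll_fn(todo)
        except PollsterPermanentError as exc:
            _blacklist.update(exc.resources)
            print("Preventing pollster from polling %s from now on"
                  % exc.resources)
            return []
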
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.609 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7eff8d7ff9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.609 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.610 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ffa10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.610 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ffa10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.610 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-03T19:03:14.610653) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.610 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.611 14 DEBUG ceilometer.compute.pollsters [-] 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc/network.incoming.bytes volume: 20482 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.612 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/network.incoming.bytes volume: 1352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.612 14 DEBUG ceilometer.compute.pollsters [-] fd1bf28c-ce00-44df-b134-5fa073e2246d/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.613 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.614 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7eff8d7fe840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.614 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.614 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8daba2d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.615 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8daba2d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.615 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.616 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-03T19:03:14.615384) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.649 14 DEBUG ceilometer.compute.pollsters [-] 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.650 14 DEBUG ceilometer.compute.pollsters [-] 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.668 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.668 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.717 14 DEBUG ceilometer.compute.pollsters [-] fd1bf28c-ce00-44df-b134-5fa073e2246d/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.718 14 DEBUG ceilometer.compute.pollsters [-] fd1bf28c-ce00-44df-b134-5fa073e2246d/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.718 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
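Each instance reports two disk.device.capacity samples: 1073741824 bytes (exactly 1 GiB, the root disk) plus a much smaller second device (485376 or 509952 bytes). These per-device figures typically come from libvirt's block-info call. A stand-alone sketch, assuming the libvirt-python bindings, a locally reachable hypervisor, and a placeholder domain name:

    # Sketch: capacity/allocation/physical per disk of a libvirt domain.
    # "instance-00000001" is a hypothetical name, not taken from this log.
    import libvirt
    from xml.etree import ElementTree

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByName("instance-00000001")
    xml = ElementTree.fromstring(dom.XMLDesc())
    for target in xml.findall("./devices/disk/target"):
        dev = target.get("dev")  # e.g. "vda"
        capacity, allocation, physical = dom.blockInfo(dev)
        print(f"{dev}: capacity={capacity} allocation={allocation} "
              f"physical={physical}")
    conn.close()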
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.719 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7eff8d8a82c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.719 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.719 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a82f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.719 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a82f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.719 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.719 14 DEBUG ceilometer.compute.pollsters [-] 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.719 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.720 14 DEBUG ceilometer.compute.pollsters [-] fd1bf28c-ce00-44df-b134-5fa073e2246d/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.720 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-03T19:03:14.719435) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.720 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.720 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7eff8d7ff9b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.721 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.721 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff90799b20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.721 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff90799b20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.721 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.721 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-03T19:03:14.721326) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.744 14 DEBUG ceilometer.compute.pollsters [-] 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc/memory.usage volume: 42.73828125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.776 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/memory.usage volume: 43.62890625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.796 14 DEBUG ceilometer.compute.pollsters [-] fd1bf28c-ce00-44df-b134-5fa073e2246d/memory.usage volume: 40.453125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.797 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
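The memory.usage volumes are reported in MiB, which is why they are fractional (e.g. 42.73828125). They derive from the guest memory counters libvirt exposes, which are themselves in KiB. A rough sketch of the computation, assuming the libvirt-python bindings and the common "available minus unused" approach with an RSS fallback when the balloon driver reports nothing; the domain name is a placeholder:

    # Sketch: approximate memory.usage (MiB) for one libvirt domain.
    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByName("instance-00000001")  # hypothetical name
    stats = dom.memoryStats()  # dict of counters, values in KiB
    if "available" in stats and "unused" in stats:
        usage_mib = (stats["available"] - stats["unused"]) / 1024.0
    else:
        # No balloon statistics: fall back to host-side resident set size.
        usage_mib = stats.get("rss", 0) / 1024.0
    print(f"memory.usage = {usage_mib} MiB")
    conn.close()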
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.797 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7eff8d8a8350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.797 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.797 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.797 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.797 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.798 14 DEBUG ceilometer.compute.pollsters [-] 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.798 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.798 14 DEBUG ceilometer.compute.pollsters [-] fd1bf28c-ce00-44df-b134-5fa073e2246d/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.798 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-03T19:03:14.797678) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.799 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.799 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7eff8f682330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.799 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.799 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8f46ebd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.799 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8f46ebd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.799 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.800 14 DEBUG ceilometer.compute.pollsters [-] 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.800 14 DEBUG ceilometer.compute.pollsters [-] 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.800 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.801 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.801 14 DEBUG ceilometer.compute.pollsters [-] fd1bf28c-ce00-44df-b134-5fa073e2246d/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.801 14 DEBUG ceilometer.compute.pollsters [-] fd1bf28c-ce00-44df-b134-5fa073e2246d/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.802 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.802 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-03T19:03:14.799772) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.802 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7eff8d7ff4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.803 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.803 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.803 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.803 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.803 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-03T19:03:14.803441) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.835 14 DEBUG ceilometer.compute.pollsters [-] 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc/disk.device.read.bytes volume: 31095296 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.835 14 DEBUG ceilometer.compute.pollsters [-] 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc/disk.device.read.bytes volume: 274750 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.874 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.read.bytes volume: 29154304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.874 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.921 14 DEBUG ceilometer.compute.pollsters [-] fd1bf28c-ce00-44df-b134-5fa073e2246d/disk.device.read.bytes volume: 23775232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.921 14 DEBUG ceilometer.compute.pollsters [-] fd1bf28c-ce00-44df-b134-5fa073e2246d/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.922 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.922 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7eff8d930c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.922 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.922 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ffce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.922 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ffce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.922 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.922 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.922 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: tempest-TestNetworkBasicOps-server-127548925>, <NovaLikeServer: tempest-TestNetworkBasicOps-server-675437755>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-TestNetworkBasicOps-server-127548925>, <NovaLikeServer: tempest-TestNetworkBasicOps-server-675437755>]
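The ERROR above is expected rather than a fault: the libvirt inspector has no per-interface *rate* counters (only cumulative ones), so the rate pollster raises PollsterPermanentError, and the manager blacklists those instances for this meter on this source so they are not retried every cycle. A sketch of that signalling path, where the names only mirror the observable behaviour and are not copied from ceilometer.polling.plugin_base:

    # Sketch of the permanent-failure path behind the ERROR line above.
    class PollsterPermanentError(Exception):
        """Carries the resources this pollster can never poll."""
        def __init__(self, resources):
            super().__init__(resources)
            self.failed_resources = resources  # attribute name is illustrative

    def get_samples_sketch(resources):
        # The inspector will never provide this data: give up permanently
        # instead of failing again on every polling interval.
        raise PollsterPermanentError(resources)

    def manager_side_sketch(source_resources, err):
        # Logs "Prevent pollster ... from polling ... anymore!" once, then
        # drops the blacklisted resources from this polling source.
        for resource in err.failed_resources:
            source_resources.discard(resource)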
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.922 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7eff8d7ff4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.922 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.923 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.923 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.923 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.923 14 DEBUG ceilometer.compute.pollsters [-] 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc/disk.device.read.latency volume: 2197060773 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.923 14 DEBUG ceilometer.compute.pollsters [-] 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc/disk.device.read.latency volume: 147068847 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.923 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.read.latency volume: 1719418496 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.923 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.read.latency volume: 125457767 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.924 14 DEBUG ceilometer.compute.pollsters [-] fd1bf28c-ce00-44df-b134-5fa073e2246d/disk.device.read.latency volume: 1677196312 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.924 14 DEBUG ceilometer.compute.pollsters [-] fd1bf28c-ce00-44df-b134-5fa073e2246d/disk.device.read.latency volume: 1529097 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.924 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
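The disk.device.read.latency volumes are cumulative nanoseconds of read time per device since boot (so roughly 2.2 s total for the busiest disk above), not per-request latencies. libvirt exposes these counters through its extended block statistics; a small sketch, again with a placeholder domain and an assumed device name:

    # Sketch: cumulative read/write time (ns) per device via extended
    # block statistics. Domain and device names are assumptions.
    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByName("instance-00000001")
    stats = dom.blockStatsFlags("vda")  # dict of counters for this device
    print("rd_total_times:", stats.get("rd_total_times"),
          "wr_total_times:", stats.get("wr_total_times"))
    conn.close()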
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.924 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7eff8d7ff530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.924 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.924 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.924 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.925 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.925 14 DEBUG ceilometer.compute.pollsters [-] 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc/disk.device.read.requests volume: 1142 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.925 14 DEBUG ceilometer.compute.pollsters [-] 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc/disk.device.read.requests volume: 108 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.925 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-03T19:03:14.922395) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.925 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.read.requests volume: 1046 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.925 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-03T19:03:14.923198) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.925 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-03T19:03:14.925058) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.925 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.926 14 DEBUG ceilometer.compute.pollsters [-] fd1bf28c-ce00-44df-b134-5fa073e2246d/disk.device.read.requests volume: 760 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.927 14 DEBUG ceilometer.compute.pollsters [-] fd1bf28c-ce00-44df-b134-5fa073e2246d/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.927 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.927 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7eff8d7ff590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.928 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.928 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.928 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.928 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.928 14 DEBUG ceilometer.compute.pollsters [-] 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.928 14 DEBUG ceilometer.compute.pollsters [-] 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.928 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.929 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.929 14 DEBUG ceilometer.compute.pollsters [-] fd1bf28c-ce00-44df-b134-5fa073e2246d/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.929 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-03T19:03:14.928243) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.929 14 DEBUG ceilometer.compute.pollsters [-] fd1bf28c-ce00-44df-b134-5fa073e2246d/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.930 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.930 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7eff8d7ff5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.930 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.930 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.930 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.930 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.930 14 DEBUG ceilometer.compute.pollsters [-] 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc/disk.device.write.bytes volume: 73097216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.930 14 DEBUG ceilometer.compute.pollsters [-] 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.931 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.write.bytes volume: 72855552 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.931 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.931 14 DEBUG ceilometer.compute.pollsters [-] fd1bf28c-ce00-44df-b134-5fa073e2246d/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.932 14 DEBUG ceilometer.compute.pollsters [-] fd1bf28c-ce00-44df-b134-5fa073e2246d/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.932 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.932 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7eff8d8a8620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.932 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.932 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.932 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.933 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-03T19:03:14.930376) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.933 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.933 14 DEBUG ceilometer.compute.pollsters [-] 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.933 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.933 14 DEBUG ceilometer.compute.pollsters [-] fd1bf28c-ce00-44df-b134-5fa073e2246d/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.934 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
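The power.state volume of 1 for all three instances corresponds to libvirt's "running" domain state (VIR_DOMAIN_RUNNING == 1). A sketch of the underlying query, assuming the libvirt-python bindings and a placeholder domain name:

    # Sketch: map a domain's libvirt state to the power.state volume above.
    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByName("instance-00000001")  # hypothetical name
    state, reason = dom.state()  # returns [state, reason]
    print(state, state == libvirt.VIR_DOMAIN_RUNNING)  # 1 True, as logged
    conn.close()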
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.934 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7eff8d7ff650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.934 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.934 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-03T19:03:14.933138) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.934 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.934 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.934 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.934 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-03T19:03:14.934661) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.934 14 DEBUG ceilometer.compute.pollsters [-] 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc/disk.device.write.latency volume: 10838390914 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.935 14 DEBUG ceilometer.compute.pollsters [-] 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.935 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.write.latency volume: 8765791521 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.935 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.935 14 DEBUG ceilometer.compute.pollsters [-] fd1bf28c-ce00-44df-b134-5fa073e2246d/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.936 14 DEBUG ceilometer.compute.pollsters [-] fd1bf28c-ce00-44df-b134-5fa073e2246d/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.936 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.936 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7eff8d7ff6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.936 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.936 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.936 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.936 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.937 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-03T19:03:14.936879) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.937 14 DEBUG ceilometer.compute.pollsters [-] 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc/disk.device.write.requests volume: 322 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.937 14 DEBUG ceilometer.compute.pollsters [-] 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.937 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.write.requests volume: 315 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.937 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.937 14 DEBUG ceilometer.compute.pollsters [-] fd1bf28c-ce00-44df-b134-5fa073e2246d/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.938 14 DEBUG ceilometer.compute.pollsters [-] fd1bf28c-ce00-44df-b134-5fa073e2246d/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.938 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.938 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7eff8d7ffa40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.938 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.938 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ffef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.939 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ffef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.939 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.939 14 DEBUG ceilometer.compute.pollsters [-] 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.939 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.939 14 DEBUG ceilometer.compute.pollsters [-] fd1bf28c-ce00-44df-b134-5fa073e2246d/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.939 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.940 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7eff8d7ff710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.940 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-03T19:03:14.939047) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.940 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.940 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.940 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.940 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.941 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.941 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7eff8d7fff20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.941 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.941 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7fff50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.941 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7fff50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.941 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.941 14 DEBUG ceilometer.compute.pollsters [-] 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc/network.incoming.packets volume: 119 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.941 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/network.incoming.packets volume: 9 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.942 14 DEBUG ceilometer.compute.pollsters [-] fd1bf28c-ce00-44df-b134-5fa073e2246d/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.942 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
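The three volume lines above come from libvirt interface statistics folded into samples (_stats_to_sample). A sketch of the underlying call, assuming the libvirt Python bindings and reusing the instance UUID and tap device that appear later in this log:

    # Sketch: rx_packets for one instance vNIC via libvirt; interfaceStats()
    # returns (rx_bytes, rx_packets, rx_errs, rx_drop, tx_bytes, tx_packets,
    # tx_errs, tx_drop).
    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByUUIDString('fd1bf28c-ce00-44df-b134-5fa073e2246d')
    stats = dom.interfaceStats('tap25216d9c-b1')
    print({'meter': 'network.incoming.packets', 'volume': stats[1]})  # volume: 1 above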
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.942 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7eff8d7ff770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.942 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.942 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff7a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.942 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff7a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.942 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.942 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-03T19:03:14.940652) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.942 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-03T19:03:14.941598) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.943 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.943 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-03T19:03:14.942850) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
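Note the two worker ids interleaved here: pid 14 runs the pollsters and stamps heartbeats, while pid 12 records them via _update_status. A queue handoff between workers is one plausible reading of that split; the sketch below assumes it rather than reproducing ceilometer's internals:

    # Sketch: producer/consumer heartbeat handoff between two workers.
    import datetime
    import queue
    import threading

    heartbeats = queue.Queue()

    def poll_worker(name):
        heartbeats.put((name, datetime.datetime.utcnow().isoformat()))

    def status_worker():
        while True:
            name, ts = heartbeats.get()
            print(f'Updated heartbeat for {name} ({ts})')

    threading.Thread(target=status_worker, daemon=True).start()
    poll_worker('disk.root.size')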
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.943 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7eff8d7fff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.943 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.943 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7fffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.943 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7fffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.944 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.944 14 DEBUG ceilometer.compute.pollsters [-] 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.944 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.944 14 DEBUG ceilometer.compute.pollsters [-] fd1bf28c-ce00-44df-b134-5fa073e2246d/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.944 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.945 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7eff8d7fdac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.945 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.945 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8ef7c7d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.945 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8ef7c7d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.945 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-03T19:03:14.944017) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.945 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.945 14 DEBUG ceilometer.compute.pollsters [-] 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc/cpu volume: 36940000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.945 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-03T19:03:14.945412) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.945 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/cpu volume: 203790000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.946 14 DEBUG ceilometer.compute.pollsters [-] fd1bf28c-ce00-44df-b134-5fa073e2246d/cpu volume: 34440000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.946 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
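The cpu meter is cumulative guest CPU time in nanoseconds, so the three volumes above convert directly to seconds:

    # The three cpu volumes above, expressed in seconds of CPU time.
    print(36940000000 / 1e9)   # 36.94 s  (3bb34e64...)
    print(203790000000 / 1e9)  # 203.79 s (a4fc45c7...)
    print(34440000000 / 1e9)   # 34.44 s  (fd1bf28c...)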
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.946 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.947 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.947 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.947 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.947 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.948 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.948 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.948 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.948 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.948 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.949 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.949 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.949 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.949 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.949 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.949 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.950 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.950 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.951 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.951 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.952 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.952 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.953 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.954 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.954 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:03:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:03:14.955 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:03:15 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1931: 321 pgs: 321 active+clean; 282 MiB data, 406 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:03:16 compute-0 nova_compute[348325]: 2025-12-03 19:03:16.539 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
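The recurring "Running periodic task ComputeManager._*" lines are oslo.service's periodic-task machinery. A minimal sketch of that pattern (the spacing value is an assumption; nova's real intervals are config-driven):

    # Sketch: the oslo.service periodic-task pattern behind the nova lines.
    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(cfg.CONF)

        @periodic_task.periodic_task(spacing=10)
        def _check_instance_build_time(self, context):
            pass  # nova looks for instances stuck in BUILD here

    Manager().run_periodic_tasks(None)  # emits "Running periodic task ..." at DEBUG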
Dec  3 19:03:16 compute-0 nova_compute[348325]: 2025-12-03 19:03:16.651 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:03:17 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1932: 321 pgs: 321 active+clean; 283 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 4.7 KiB/s rd, 193 KiB/s wr, 4 op/s
Dec  3 19:03:18 compute-0 nova_compute[348325]: 2025-12-03 19:03:18.547 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:03:19 compute-0 ovn_controller[89305]: 2025-12-03T19:03:19Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7e:eb:8f 10.100.0.7
Dec  3 19:03:19 compute-0 ovn_controller[89305]: 2025-12-03T19:03:19Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7e:eb:8f 10.100.0.7
Dec  3 19:03:19 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:03:19 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1933: 321 pgs: 321 active+clean; 294 MiB data, 416 MiB used, 60 GiB / 60 GiB avail; 98 KiB/s rd, 1.1 MiB/s wr, 19 op/s
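The ceph-mgr pgmap lines carry cluster throughput in a fixed prefix format and are easy to mine; the small parser below covers only the fields common to every pgmap line in this log (the rd/wr/op/s tail is optional):

    # Sketch: pull version, pg count, data and used sizes out of a pgmap line.
    import re

    line = ("pgmap v1932: 321 pgs: 321 active+clean; 283 MiB data, "
            "406 MiB used, 60 GiB / 60 GiB avail; 4.7 KiB/s rd, 193 KiB/s wr, 4 op/s")
    m = re.search(r'pgmap v(\d+): (\d+) pgs: .*?; (.*?) data, (.*?) used', line)
    print(m.groups())  # ('1932', '321', '283 MiB', '406 MiB')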
Dec  3 19:03:21 compute-0 nova_compute[348325]: 2025-12-03 19:03:21.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:03:21 compute-0 nova_compute[348325]: 2025-12-03 19:03:21.653 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:03:21 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1934: 321 pgs: 321 active+clean; 313 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Dec  3 19:03:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:03:23.359 286999 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 19:03:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:03:23.360 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 19:03:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:03:23.360 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
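The acquire/acquired/released triple above is the standard oslo.concurrency pattern; in application code it is usually spelled as a decorator:

    # Sketch: the lock pattern neutron's ProcessMonitor logs above.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('_check_child_processes')
    def check_child_processes():
        pass  # respawn any dead monitored child processes here

    check_child_processes()  # logs acquire/release at DEBUG, as above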
Dec  3 19:03:23 compute-0 nova_compute[348325]: 2025-12-03 19:03:23.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:03:23 compute-0 nova_compute[348325]: 2025-12-03 19:03:23.549 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:03:23 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1935: 321 pgs: 321 active+clean; 315 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 345 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Dec  3 19:03:24 compute-0 nova_compute[348325]: 2025-12-03 19:03:24.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:03:24 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:03:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 19:03:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:03:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 19:03:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:03:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0022742580744937162 of space, bias 1.0, pg target 0.6822774223481148 quantized to 32 (current 32)
Dec  3 19:03:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:03:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:03:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:03:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:03:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:03:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Dec  3 19:03:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:03:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 19:03:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:03:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:03:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:03:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 19:03:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:03:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 19:03:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:03:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:03:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:03:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
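Each pg target above is usage_ratio x bias x the pool root's total pg target. The logged numbers are consistent with 300 total pgs, i.e. the default mon_target_pg_per_osd=100 across an assumed 3 OSDs (the 60 GiB cluster suggests 3 x 20 GiB):

    # Reproducing the autoscaler arithmetic from two of the lines above;
    # total_target_pgs = 300 is an inference, not logged directly.
    total_target_pgs = 300
    print(0.0022742580744937162 * 1.0 * total_target_pgs)  # 0.68227742... ('vms')
    print(5.087256625643029e-07 * 4.0 * total_target_pgs)  # 0.00061047... ('cephfs.cephfs.meta')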
Dec  3 19:03:24 compute-0 podman[452783]: 2025-12-03 19:03:24.956847467 +0000 UTC m=+0.107770802 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec  3 19:03:24 compute-0 podman[452784]: 2025-12-03 19:03:24.968341838 +0000 UTC m=+0.120485313 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 19:03:24 compute-0 podman[452785]: 2025-12-03 19:03:24.965855027 +0000 UTC m=+0.112348273 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, container_name=openstack_network_exporter, io.buildah.version=1.33.7, name=ubi9-minimal, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, config_id=edpm, managed_by=edpm_ansible, vcs-type=git, io.openshift.expose-services=)
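The config_data field in these podman health_status records is a Python-literal dict, so it round-trips through ast.literal_eval; the snippet below is shortened from the multipathd entry above:

    # Sketch: recover the healthcheck command from a config_data blob.
    import ast

    config_data = ("{'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, "
                   "'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', "
                   "'test': '/openstack/healthcheck'}}")
    cfg = ast.literal_eval(config_data)
    print(cfg['healthcheck']['test'])  # /openstack/healthcheck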
Dec  3 19:03:25 compute-0 nova_compute[348325]: 2025-12-03 19:03:25.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:03:25 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1936: 321 pgs: 321 active+clean; 315 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 345 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Dec  3 19:03:26 compute-0 nova_compute[348325]: 2025-12-03 19:03:26.528 348329 INFO nova.compute.manager [None req-6131cbdc-d29b-44c7-a5a5-31774ee01eb2 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] Get console output#033[00m
Dec  3 19:03:26 compute-0 nova_compute[348325]: 2025-12-03 19:03:26.537 451114 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
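The "Ignored error" above is a plain Python TypeError escaping from the console-read helper when the pty read yields None instead of bytes; the message is reproducible verbatim:

    # The exact TypeError text nova logs and ignores above.
    try:
        b'console output' + None
    except TypeError as exc:
        print(exc)  # can't concat NoneType to bytes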
Dec  3 19:03:26 compute-0 nova_compute[348325]: 2025-12-03 19:03:26.655 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:03:26 compute-0 nova_compute[348325]: 2025-12-03 19:03:26.882 348329 DEBUG oslo_concurrency.lockutils [None req-09003147-df02-46c2-bac6-776594467b22 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Acquiring lock "fd1bf28c-ce00-44df-b134-5fa073e2246d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 19:03:26 compute-0 nova_compute[348325]: 2025-12-03 19:03:26.883 348329 DEBUG oslo_concurrency.lockutils [None req-09003147-df02-46c2-bac6-776594467b22 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Lock "fd1bf28c-ce00-44df-b134-5fa073e2246d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 19:03:26 compute-0 nova_compute[348325]: 2025-12-03 19:03:26.884 348329 DEBUG oslo_concurrency.lockutils [None req-09003147-df02-46c2-bac6-776594467b22 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Acquiring lock "fd1bf28c-ce00-44df-b134-5fa073e2246d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 19:03:26 compute-0 nova_compute[348325]: 2025-12-03 19:03:26.885 348329 DEBUG oslo_concurrency.lockutils [None req-09003147-df02-46c2-bac6-776594467b22 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Lock "fd1bf28c-ce00-44df-b134-5fa073e2246d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 19:03:26 compute-0 nova_compute[348325]: 2025-12-03 19:03:26.885 348329 DEBUG oslo_concurrency.lockutils [None req-09003147-df02-46c2-bac6-776594467b22 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Lock "fd1bf28c-ce00-44df-b134-5fa073e2246d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 19:03:26 compute-0 nova_compute[348325]: 2025-12-03 19:03:26.887 348329 INFO nova.compute.manager [None req-09003147-df02-46c2-bac6-776594467b22 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] Terminating instance#033[00m
Dec  3 19:03:26 compute-0 nova_compute[348325]: 2025-12-03 19:03:26.889 348329 DEBUG nova.compute.manager [None req-09003147-df02-46c2-bac6-776594467b22 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  3 19:03:26 compute-0 kernel: tap25216d9c-b1 (unregistering): left promiscuous mode
Dec  3 19:03:27 compute-0 NetworkManager[49087]: <info>  [1764788607.0096] device (tap25216d9c-b1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  3 19:03:27 compute-0 ovn_controller[89305]: 2025-12-03T19:03:27Z|00166|binding|INFO|Releasing lport 25216d9c-b16b-4d38-af2c-044877eecdba from this chassis (sb_readonly=0)
Dec  3 19:03:27 compute-0 ovn_controller[89305]: 2025-12-03T19:03:27Z|00167|binding|INFO|Setting lport 25216d9c-b16b-4d38-af2c-044877eecdba down in Southbound
Dec  3 19:03:27 compute-0 ovn_controller[89305]: 2025-12-03T19:03:27Z|00168|binding|INFO|Removing iface tap25216d9c-b1 ovn-installed in OVS
Dec  3 19:03:27 compute-0 nova_compute[348325]: 2025-12-03 19:03:27.017 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:03:27 compute-0 nova_compute[348325]: 2025-12-03 19:03:27.055 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:03:27 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000e.scope: Deactivated successfully.
Dec  3 19:03:27 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000e.scope: Consumed 42.137s CPU time.
Dec  3 19:03:27 compute-0 systemd-machined[138702]: Machine qemu-15-instance-0000000e terminated.
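The scope name in the two systemd lines above is unit-name escaped ('-' encodes as \x2d); a tiny decoder recovers the machine name that systemd-machined then reports:

    # Sketch: undo systemd unit-name escaping for the scope above.
    import re

    unit = r'machine-qemu\x2d15\x2dinstance\x2d0000000e.scope'
    name = re.sub(r'\\x([0-9a-f]{2})', lambda m: chr(int(m.group(1), 16)), unit)
    print(name)  # machine-qemu-15-instance-0000000e.scope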
Dec  3 19:03:27 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:03:27.121 286999 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7e:eb:8f 10.100.0.7'], port_security=['fa:16:3e:7e:eb:8f 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'fd1bf28c-ce00-44df-b134-5fa073e2246d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d9057d7e-a146-4d5d-b454-162ed672215e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '014032eeba1145f99481402acd561743', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7ebf03a8-1d0d-487d-a6e4-4f3166db9cc1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.202'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e6f5d93a-4b7a-44a9-a795-c197381d4f0f, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f81e3e96760>], logical_port=25216d9c-b16b-4d38-af2c-044877eecdba) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f81e3e96760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 19:03:27 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:03:27.123 286999 INFO neutron.agent.ovn.metadata.agent [-] Port 25216d9c-b16b-4d38-af2c-044877eecdba in datapath d9057d7e-a146-4d5d-b454-162ed672215e unbound from our chassis#033[00m
Dec  3 19:03:27 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:03:27.125 286999 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d9057d7e-a146-4d5d-b454-162ed672215e#033[00m
Dec  3 19:03:27 compute-0 nova_compute[348325]: 2025-12-03 19:03:27.144 348329 INFO nova.virt.libvirt.driver [-] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] Instance destroyed successfully.#033[00m
Dec  3 19:03:27 compute-0 nova_compute[348325]: 2025-12-03 19:03:27.145 348329 DEBUG nova.objects.instance [None req-09003147-df02-46c2-bac6-776594467b22 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Lazy-loading 'resources' on Instance uuid fd1bf28c-ce00-44df-b134-5fa073e2246d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 19:03:27 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:03:27.145 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[a1a439bd-0cdf-4a9d-a539-193551607a4c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 19:03:27 compute-0 nova_compute[348325]: 2025-12-03 19:03:27.171 348329 DEBUG nova.virt.libvirt.vif [None req-09003147-df02-46c2-bac6-776594467b22 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-03T19:02:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-675437755',display_name='tempest-TestNetworkBasicOps-server-675437755',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-675437755',id=14,image_ref='55982930-937b-484e-96ee-69e406a48023',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBS8ezMjtIG805yQFgF4j/ECOsDiCMS7aCiCqdC+a8KK5knfknL/g2dlwag5/vklyq6F8zAud17+RXTTXhjSnt3CAAq1zM4H8s6/fatjZ3kNM4cGkNnON7zjRpS6sbgnrA==',key_name='tempest-TestNetworkBasicOps-1344632227',keypairs=<?>,launch_index=0,launched_at=2025-12-03T19:02:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='014032eeba1145f99481402acd561743',ramdisk_id='',reservation_id='r-aw0tghxg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='55982930-937b-484e-96ee-69e406a48023',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1083905166',owner_user_name='tempest-TestNetworkBasicOps-1083905166-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-03T19:02:39Z,user_data=None,user_id='8fabb3dd3b1c42b491c99a1274242f68',uuid=fd1bf28c-ce00-44df-b134-5fa073e2246d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "25216d9c-b16b-4d38-af2c-044877eecdba", "address": "fa:16:3e:7e:eb:8f", "network": {"id": "d9057d7e-a146-4d5d-b454-162ed672215e", "bridge": "br-int", "label": "tempest-network-smoke--493824106", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "014032eeba1145f99481402acd561743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25216d9c-b1", "ovs_interfaceid": "25216d9c-b16b-4d38-af2c-044877eecdba", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  3 19:03:27 compute-0 nova_compute[348325]: 2025-12-03 19:03:27.172 348329 DEBUG nova.network.os_vif_util [None req-09003147-df02-46c2-bac6-776594467b22 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Converting VIF {"id": "25216d9c-b16b-4d38-af2c-044877eecdba", "address": "fa:16:3e:7e:eb:8f", "network": {"id": "d9057d7e-a146-4d5d-b454-162ed672215e", "bridge": "br-int", "label": "tempest-network-smoke--493824106", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "014032eeba1145f99481402acd561743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25216d9c-b1", "ovs_interfaceid": "25216d9c-b16b-4d38-af2c-044877eecdba", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 19:03:27 compute-0 nova_compute[348325]: 2025-12-03 19:03:27.173 348329 DEBUG nova.network.os_vif_util [None req-09003147-df02-46c2-bac6-776594467b22 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7e:eb:8f,bridge_name='br-int',has_traffic_filtering=True,id=25216d9c-b16b-4d38-af2c-044877eecdba,network=Network(d9057d7e-a146-4d5d-b454-162ed672215e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25216d9c-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 19:03:27 compute-0 nova_compute[348325]: 2025-12-03 19:03:27.174 348329 DEBUG os_vif [None req-09003147-df02-46c2-bac6-776594467b22 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7e:eb:8f,bridge_name='br-int',has_traffic_filtering=True,id=25216d9c-b16b-4d38-af2c-044877eecdba,network=Network(d9057d7e-a146-4d5d-b454-162ed672215e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25216d9c-b1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  3 19:03:27 compute-0 nova_compute[348325]: 2025-12-03 19:03:27.177 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:03:27 compute-0 nova_compute[348325]: 2025-12-03 19:03:27.178 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap25216d9c-b1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 19:03:27 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:03:27.179 411797 DEBUG oslo.privsep.daemon [-] privsep: reply[80265093-52af-4e6e-8dc1-1669da64eb2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 19:03:27 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:03:27.184 411797 DEBUG oslo.privsep.daemon [-] privsep: reply[8c17a774-c27d-4b3a-99c3-827530abb547]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 19:03:27 compute-0 nova_compute[348325]: 2025-12-03 19:03:27.184 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:03:27 compute-0 nova_compute[348325]: 2025-12-03 19:03:27.187 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  3 19:03:27 compute-0 nova_compute[348325]: 2025-12-03 19:03:27.189 348329 INFO os_vif [None req-09003147-df02-46c2-bac6-776594467b22 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7e:eb:8f,bridge_name='br-int',has_traffic_filtering=True,id=25216d9c-b16b-4d38-af2c-044877eecdba,network=Network(d9057d7e-a146-4d5d-b454-162ed672215e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25216d9c-b1')#033[00m
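The unplug path above ends in an ovsdbapp DelPortCommand against br-int. The same transaction through ovsdbapp's public OVS API looks roughly like this (connection details are assumptions; only del_port and its if_exists flag are confirmed by the log):

    # Sketch: delete the tap port via ovsdbapp, as os-vif does above.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))
    api.del_port('tap25216d9c-b1', bridge='br-int', if_exists=True).execute()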
Dec  3 19:03:27 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:03:27.215 411797 DEBUG oslo.privsep.daemon [-] privsep: reply[8a05d14b-7939-4b37-83f7-8a83a3682c09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 19:03:27 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:03:27.237 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[f62cedde-0913-4b9c-8cf2-2e6f25b417aa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd9057d7e-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:6f:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 677036, 'reachable_time': 35272, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 452880, 'error': None, 'target': 'ovnmeta-d9057d7e-a146-4d5d-b454-162ed672215e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 19:03:27 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:03:27.256 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[d58d1819-04a1-40cf-9dad-ad9ba6500f0a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapd9057d7e-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 677047, 'tstamp': 677047}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 452881, 'error': None, 'target': 'ovnmeta-d9057d7e-a146-4d5d-b454-162ed672215e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd9057d7e-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 677050, 'tstamp': 677050}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 452881, 'error': None, 'target': 'ovnmeta-d9057d7e-a146-4d5d-b454-162ed672215e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
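The two privsep replies above are pyroute2 netlink dumps (an RTM_NEWLINK, then two RTM_NEWADDR records) taken inside the ovnmeta namespace. A direct equivalent, assuming root and the pyroute2 NetNS API:

    # Sketch: list the metadata tap's addresses inside the OVN namespace.
    from pyroute2 import NetNS

    with NetNS('ovnmeta-d9057d7e-a146-4d5d-b454-162ed672215e') as ns:
        idx = ns.link_lookup(ifname='tapd9057d7e-a1')[0]
        for addr in ns.get_addr(index=idx):
            print(dict(addr['attrs'])['IFA_ADDRESS'])  # 10.100.0.2, 169.254.169.254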
Dec  3 19:03:27 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:03:27.258 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd9057d7e-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 19:03:27 compute-0 nova_compute[348325]: 2025-12-03 19:03:27.261 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:03:27 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:03:27.262 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd9057d7e-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 19:03:27 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:03:27.263 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 19:03:27 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:03:27.263 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd9057d7e-a0, col_values=(('external_ids', {'iface-id': '61129b3d-6cea-46e4-9162-185c7245839a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 19:03:27 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:03:27.264 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 19:03:27 compute-0 nova_compute[348325]: 2025-12-03 19:03:27.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:03:27 compute-0 nova_compute[348325]: 2025-12-03 19:03:27.842 348329 INFO nova.virt.libvirt.driver [None req-09003147-df02-46c2-bac6-776594467b22 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] Deleting instance files /var/lib/nova/instances/fd1bf28c-ce00-44df-b134-5fa073e2246d_del#033[00m
Dec  3 19:03:27 compute-0 nova_compute[348325]: 2025-12-03 19:03:27.844 348329 INFO nova.virt.libvirt.driver [None req-09003147-df02-46c2-bac6-776594467b22 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] Deletion of /var/lib/nova/instances/fd1bf28c-ce00-44df-b134-5fa073e2246d_del complete#033[00m
Dec  3 19:03:27 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1937: 321 pgs: 321 active+clean; 315 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 345 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Dec  3 19:03:27 compute-0 nova_compute[348325]: 2025-12-03 19:03:27.927 348329 INFO nova.compute.manager [None req-09003147-df02-46c2-bac6-776594467b22 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] Took 1.04 seconds to destroy the instance on the hypervisor.#033[00m
Dec  3 19:03:27 compute-0 nova_compute[348325]: 2025-12-03 19:03:27.928 348329 DEBUG oslo.service.loopingcall [None req-09003147-df02-46c2-bac6-776594467b22 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  3 19:03:27 compute-0 nova_compute[348325]: 2025-12-03 19:03:27.930 348329 DEBUG nova.compute.manager [-] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  3 19:03:27 compute-0 nova_compute[348325]: 2025-12-03 19:03:27.931 348329 DEBUG nova.network.neutron [-] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  3 19:03:28 compute-0 nova_compute[348325]: 2025-12-03 19:03:28.346 348329 DEBUG nova.compute.manager [req-e43e699b-d111-43bb-b427-c5a0c079d0b5 req-7f752372-7d7c-46ad-8226-dc21975f8952 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] Received event network-vif-unplugged-25216d9c-b16b-4d38-af2c-044877eecdba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 19:03:28 compute-0 nova_compute[348325]: 2025-12-03 19:03:28.347 348329 DEBUG oslo_concurrency.lockutils [req-e43e699b-d111-43bb-b427-c5a0c079d0b5 req-7f752372-7d7c-46ad-8226-dc21975f8952 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "fd1bf28c-ce00-44df-b134-5fa073e2246d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 19:03:28 compute-0 nova_compute[348325]: 2025-12-03 19:03:28.347 348329 DEBUG oslo_concurrency.lockutils [req-e43e699b-d111-43bb-b427-c5a0c079d0b5 req-7f752372-7d7c-46ad-8226-dc21975f8952 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "fd1bf28c-ce00-44df-b134-5fa073e2246d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 19:03:28 compute-0 nova_compute[348325]: 2025-12-03 19:03:28.348 348329 DEBUG oslo_concurrency.lockutils [req-e43e699b-d111-43bb-b427-c5a0c079d0b5 req-7f752372-7d7c-46ad-8226-dc21975f8952 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "fd1bf28c-ce00-44df-b134-5fa073e2246d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 19:03:28 compute-0 nova_compute[348325]: 2025-12-03 19:03:28.348 348329 DEBUG nova.compute.manager [req-e43e699b-d111-43bb-b427-c5a0c079d0b5 req-7f752372-7d7c-46ad-8226-dc21975f8952 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] No waiting events found dispatching network-vif-unplugged-25216d9c-b16b-4d38-af2c-044877eecdba pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  3 19:03:28 compute-0 nova_compute[348325]: 2025-12-03 19:03:28.349 348329 DEBUG nova.compute.manager [req-e43e699b-d111-43bb-b427-c5a0c079d0b5 req-7f752372-7d7c-46ad-8226-dc21975f8952 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] Received event network-vif-unplugged-25216d9c-b16b-4d38-af2c-044877eecdba for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  3 19:03:28 compute-0 nova_compute[348325]: 2025-12-03 19:03:28.553 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:03:28 compute-0 podman[452888]: 2025-12-03 19:03:28.941280268 +0000 UTC m=+0.091388261 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Dec  3 19:03:28 compute-0 podman[452887]: 2025-12-03 19:03:28.960125199 +0000 UTC m=+0.106997164 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  3 19:03:28 compute-0 podman[452886]: 2025-12-03 19:03:28.990821337 +0000 UTC m=+0.136629825 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, build-date=2024-09-18T21:23:30, container_name=kepler, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, vcs-type=git, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, version=9.4, io.openshift.expose-services=, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Dec  3 19:03:29 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:03:29.136 286999 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5a:63:53', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '8e:79:bd:f4:48:1d'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 19:03:29 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:03:29.137 286999 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  3 19:03:29 compute-0 nova_compute[348325]: 2025-12-03 19:03:29.138 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:03:29 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:03:29 compute-0 podman[158200]: time="2025-12-03T19:03:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 19:03:29 compute-0 podman[158200]: @ - - [03/Dec/2025:19:03:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45044 "" "Go-http-client/1.1"
Dec  3 19:03:29 compute-0 podman[158200]: @ - - [03/Dec/2025:19:03:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9110 "" "Go-http-client/1.1"
Dec  3 19:03:29 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1938: 321 pgs: 321 active+clean; 293 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 341 KiB/s rd, 2.0 MiB/s wr, 58 op/s
Dec  3 19:03:29 compute-0 nova_compute[348325]: 2025-12-03 19:03:29.920 348329 DEBUG nova.network.neutron [-] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 19:03:29 compute-0 nova_compute[348325]: 2025-12-03 19:03:29.947 348329 INFO nova.compute.manager [-] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] Took 2.02 seconds to deallocate network for instance.#033[00m
Dec  3 19:03:29 compute-0 nova_compute[348325]: 2025-12-03 19:03:29.996 348329 DEBUG oslo_concurrency.lockutils [None req-09003147-df02-46c2-bac6-776594467b22 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 19:03:29 compute-0 nova_compute[348325]: 2025-12-03 19:03:29.998 348329 DEBUG oslo_concurrency.lockutils [None req-09003147-df02-46c2-bac6-776594467b22 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 19:03:30 compute-0 nova_compute[348325]: 2025-12-03 19:03:30.038 348329 DEBUG nova.compute.manager [req-4f5f5aeb-0972-40d7-bfdd-83332e255f94 req-11413f87-3cd1-4028-b1e6-5601384d3593 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] Received event network-vif-deleted-25216d9c-b16b-4d38-af2c-044877eecdba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 19:03:30 compute-0 nova_compute[348325]: 2025-12-03 19:03:30.122 348329 DEBUG oslo_concurrency.processutils [None req-09003147-df02-46c2-bac6-776594467b22 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 19:03:30 compute-0 nova_compute[348325]: 2025-12-03 19:03:30.464 348329 DEBUG nova.compute.manager [req-4d1cd4cc-a04e-4214-8262-2ecbf99185d4 req-dc9e7a1a-a393-4449-bca0-14d071eddca7 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] Received event network-vif-plugged-25216d9c-b16b-4d38-af2c-044877eecdba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 19:03:30 compute-0 nova_compute[348325]: 2025-12-03 19:03:30.466 348329 DEBUG oslo_concurrency.lockutils [req-4d1cd4cc-a04e-4214-8262-2ecbf99185d4 req-dc9e7a1a-a393-4449-bca0-14d071eddca7 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "fd1bf28c-ce00-44df-b134-5fa073e2246d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 19:03:30 compute-0 nova_compute[348325]: 2025-12-03 19:03:30.468 348329 DEBUG oslo_concurrency.lockutils [req-4d1cd4cc-a04e-4214-8262-2ecbf99185d4 req-dc9e7a1a-a393-4449-bca0-14d071eddca7 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "fd1bf28c-ce00-44df-b134-5fa073e2246d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 19:03:30 compute-0 nova_compute[348325]: 2025-12-03 19:03:30.469 348329 DEBUG oslo_concurrency.lockutils [req-4d1cd4cc-a04e-4214-8262-2ecbf99185d4 req-dc9e7a1a-a393-4449-bca0-14d071eddca7 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "fd1bf28c-ce00-44df-b134-5fa073e2246d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 19:03:30 compute-0 nova_compute[348325]: 2025-12-03 19:03:30.470 348329 DEBUG nova.compute.manager [req-4d1cd4cc-a04e-4214-8262-2ecbf99185d4 req-dc9e7a1a-a393-4449-bca0-14d071eddca7 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] No waiting events found dispatching network-vif-plugged-25216d9c-b16b-4d38-af2c-044877eecdba pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  3 19:03:30 compute-0 nova_compute[348325]: 2025-12-03 19:03:30.472 348329 WARNING nova.compute.manager [req-4d1cd4cc-a04e-4214-8262-2ecbf99185d4 req-dc9e7a1a-a393-4449-bca0-14d071eddca7 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] Received unexpected event network-vif-plugged-25216d9c-b16b-4d38-af2c-044877eecdba for instance with vm_state deleted and task_state None.#033[00m
Dec  3 19:03:30 compute-0 nova_compute[348325]: 2025-12-03 19:03:30.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:03:30 compute-0 nova_compute[348325]: 2025-12-03 19:03:30.488 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 19:03:30 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 19:03:30 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2961610263' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 19:03:30 compute-0 nova_compute[348325]: 2025-12-03 19:03:30.632 348329 DEBUG oslo_concurrency.processutils [None req-09003147-df02-46c2-bac6-776594467b22 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 19:03:30 compute-0 nova_compute[348325]: 2025-12-03 19:03:30.646 348329 DEBUG nova.compute.provider_tree [None req-09003147-df02-46c2-bac6-776594467b22 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 19:03:30 compute-0 nova_compute[348325]: 2025-12-03 19:03:30.670 348329 DEBUG nova.scheduler.client.report [None req-09003147-df02-46c2-bac6-776594467b22 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 19:03:30 compute-0 nova_compute[348325]: 2025-12-03 19:03:30.698 348329 DEBUG oslo_concurrency.lockutils [None req-09003147-df02-46c2-bac6-776594467b22 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.700s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 19:03:30 compute-0 nova_compute[348325]: 2025-12-03 19:03:30.740 348329 INFO nova.scheduler.client.report [None req-09003147-df02-46c2-bac6-776594467b22 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Deleted allocations for instance fd1bf28c-ce00-44df-b134-5fa073e2246d#033[00m
Dec  3 19:03:30 compute-0 nova_compute[348325]: 2025-12-03 19:03:30.750 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "refresh_cache-3bb34e64-ac61-46f3-99eb-2fdd346a8ecc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 19:03:30 compute-0 nova_compute[348325]: 2025-12-03 19:03:30.752 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquired lock "refresh_cache-3bb34e64-ac61-46f3-99eb-2fdd346a8ecc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 19:03:30 compute-0 nova_compute[348325]: 2025-12-03 19:03:30.753 348329 DEBUG nova.network.neutron [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  3 19:03:30 compute-0 nova_compute[348325]: 2025-12-03 19:03:30.872 348329 DEBUG oslo_concurrency.lockutils [None req-09003147-df02-46c2-bac6-776594467b22 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Lock "fd1bf28c-ce00-44df-b134-5fa073e2246d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.988s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 19:03:31 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:03:31.140 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1ac9fd0d-196b-4ea8-9a9a-8aa831092805, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 19:03:31 compute-0 openstack_network_exporter[365222]: ERROR   19:03:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 19:03:31 compute-0 openstack_network_exporter[365222]: ERROR   19:03:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:03:31 compute-0 openstack_network_exporter[365222]: ERROR   19:03:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:03:31 compute-0 openstack_network_exporter[365222]: ERROR   19:03:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 19:03:31 compute-0 openstack_network_exporter[365222]: ERROR   19:03:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 19:03:31 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1939: 321 pgs: 321 active+clean; 236 MiB data, 398 MiB used, 60 GiB / 60 GiB avail; 265 KiB/s rd, 1.1 MiB/s wr, 70 op/s
Dec  3 19:03:32 compute-0 nova_compute[348325]: 2025-12-03 19:03:32.181 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:03:32 compute-0 nova_compute[348325]: 2025-12-03 19:03:32.399 348329 DEBUG oslo_concurrency.lockutils [None req-93ae6218-17dc-4089-9de4-01dff534ec7f 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Acquiring lock "3bb34e64-ac61-46f3-99eb-2fdd346a8ecc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 19:03:32 compute-0 nova_compute[348325]: 2025-12-03 19:03:32.400 348329 DEBUG oslo_concurrency.lockutils [None req-93ae6218-17dc-4089-9de4-01dff534ec7f 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Lock "3bb34e64-ac61-46f3-99eb-2fdd346a8ecc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 19:03:32 compute-0 nova_compute[348325]: 2025-12-03 19:03:32.400 348329 DEBUG oslo_concurrency.lockutils [None req-93ae6218-17dc-4089-9de4-01dff534ec7f 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Acquiring lock "3bb34e64-ac61-46f3-99eb-2fdd346a8ecc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 19:03:32 compute-0 nova_compute[348325]: 2025-12-03 19:03:32.401 348329 DEBUG oslo_concurrency.lockutils [None req-93ae6218-17dc-4089-9de4-01dff534ec7f 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Lock "3bb34e64-ac61-46f3-99eb-2fdd346a8ecc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 19:03:32 compute-0 nova_compute[348325]: 2025-12-03 19:03:32.401 348329 DEBUG oslo_concurrency.lockutils [None req-93ae6218-17dc-4089-9de4-01dff534ec7f 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Lock "3bb34e64-ac61-46f3-99eb-2fdd346a8ecc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 19:03:32 compute-0 nova_compute[348325]: 2025-12-03 19:03:32.403 348329 INFO nova.compute.manager [None req-93ae6218-17dc-4089-9de4-01dff534ec7f 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Terminating instance#033[00m
Dec  3 19:03:32 compute-0 nova_compute[348325]: 2025-12-03 19:03:32.404 348329 DEBUG nova.compute.manager [None req-93ae6218-17dc-4089-9de4-01dff534ec7f 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  3 19:03:32 compute-0 kernel: tap92566cef-01 (unregistering): left promiscuous mode
Dec  3 19:03:32 compute-0 NetworkManager[49087]: <info>  [1764788612.4881] device (tap92566cef-01): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  3 19:03:32 compute-0 ovn_controller[89305]: 2025-12-03T19:03:32Z|00169|binding|INFO|Releasing lport 92566cef-01e0-4398-bbab-0b7049af2e6b from this chassis (sb_readonly=0)
Dec  3 19:03:32 compute-0 ovn_controller[89305]: 2025-12-03T19:03:32Z|00170|binding|INFO|Setting lport 92566cef-01e0-4398-bbab-0b7049af2e6b down in Southbound
Dec  3 19:03:32 compute-0 ovn_controller[89305]: 2025-12-03T19:03:32Z|00171|binding|INFO|Removing iface tap92566cef-01 ovn-installed in OVS
Dec  3 19:03:32 compute-0 nova_compute[348325]: 2025-12-03 19:03:32.506 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:03:32 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:03:32.519 286999 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:63:6e 10.100.0.8'], port_security=['fa:16:3e:ec:63:6e 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '3bb34e64-ac61-46f3-99eb-2fdd346a8ecc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d9057d7e-a146-4d5d-b454-162ed672215e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '014032eeba1145f99481402acd561743', 'neutron:revision_number': '4', 'neutron:security_group_ids': '91f53025-8b3a-4fbb-a061-2ee6f0cf5b08', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e6f5d93a-4b7a-44a9-a795-c197381d4f0f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f81e3e96760>], logical_port=92566cef-01e0-4398-bbab-0b7049af2e6b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f81e3e96760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 19:03:32 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:03:32.520 286999 INFO neutron.agent.ovn.metadata.agent [-] Port 92566cef-01e0-4398-bbab-0b7049af2e6b in datapath d9057d7e-a146-4d5d-b454-162ed672215e unbound from our chassis#033[00m
Dec  3 19:03:32 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:03:32.522 286999 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d9057d7e-a146-4d5d-b454-162ed672215e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  3 19:03:32 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:03:32.526 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[01d302c7-2535-452d-8d33-56385463dd8c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 19:03:32 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:03:32.527 286999 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d9057d7e-a146-4d5d-b454-162ed672215e namespace which is not needed anymore#033[00m
Dec  3 19:03:32 compute-0 nova_compute[348325]: 2025-12-03 19:03:32.547 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:03:32 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Dec  3 19:03:32 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000d.scope: Consumed 50.268s CPU time.
Dec  3 19:03:32 compute-0 systemd-machined[138702]: Machine qemu-14-instance-0000000d terminated.
Dec  3 19:03:32 compute-0 nova_compute[348325]: 2025-12-03 19:03:32.630 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:03:32 compute-0 nova_compute[348325]: 2025-12-03 19:03:32.643 348329 INFO nova.virt.libvirt.driver [-] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Instance destroyed successfully.#033[00m
Dec  3 19:03:32 compute-0 nova_compute[348325]: 2025-12-03 19:03:32.643 348329 DEBUG nova.objects.instance [None req-93ae6218-17dc-4089-9de4-01dff534ec7f 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Lazy-loading 'resources' on Instance uuid 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 19:03:32 compute-0 neutron-haproxy-ovnmeta-d9057d7e-a146-4d5d-b454-162ed672215e[450260]: [NOTICE]   (450264) : haproxy version is 2.8.14-c23fe91
Dec  3 19:03:32 compute-0 neutron-haproxy-ovnmeta-d9057d7e-a146-4d5d-b454-162ed672215e[450260]: [NOTICE]   (450264) : path to executable is /usr/sbin/haproxy
Dec  3 19:03:32 compute-0 neutron-haproxy-ovnmeta-d9057d7e-a146-4d5d-b454-162ed672215e[450260]: [WARNING]  (450264) : Exiting Master process...
Dec  3 19:03:32 compute-0 neutron-haproxy-ovnmeta-d9057d7e-a146-4d5d-b454-162ed672215e[450260]: [ALERT]    (450264) : Current worker (450266) exited with code 143 (Terminated)
Dec  3 19:03:32 compute-0 neutron-haproxy-ovnmeta-d9057d7e-a146-4d5d-b454-162ed672215e[450260]: [WARNING]  (450264) : All workers exited. Exiting... (0)
Dec  3 19:03:32 compute-0 systemd[1]: libpod-8ff60ed72243f63d9d94b25e808526df4f201de8be32fe85f9f12fa7a853c094.scope: Deactivated successfully.
Dec  3 19:03:32 compute-0 conmon[450260]: conmon 8ff60ed72243f63d9d94 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8ff60ed72243f63d9d94b25e808526df4f201de8be32fe85f9f12fa7a853c094.scope/container/memory.events
Dec  3 19:03:32 compute-0 podman[452989]: 2025-12-03 19:03:32.719439034 +0000 UTC m=+0.071962937 container died 8ff60ed72243f63d9d94b25e808526df4f201de8be32fe85f9f12fa7a853c094 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9057d7e-a146-4d5d-b454-162ed672215e, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec  3 19:03:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca2e43ff164973a8cce8744ccec369895addb289e3194db889ab9f9aef58cf13-merged.mount: Deactivated successfully.
Dec  3 19:03:32 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8ff60ed72243f63d9d94b25e808526df4f201de8be32fe85f9f12fa7a853c094-userdata-shm.mount: Deactivated successfully.
Dec  3 19:03:32 compute-0 podman[452989]: 2025-12-03 19:03:32.778941487 +0000 UTC m=+0.131465350 container cleanup 8ff60ed72243f63d9d94b25e808526df4f201de8be32fe85f9f12fa7a853c094 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9057d7e-a146-4d5d-b454-162ed672215e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 19:03:32 compute-0 systemd[1]: libpod-conmon-8ff60ed72243f63d9d94b25e808526df4f201de8be32fe85f9f12fa7a853c094.scope: Deactivated successfully.
Dec  3 19:03:32 compute-0 podman[453020]: 2025-12-03 19:03:32.857709789 +0000 UTC m=+0.049467668 container remove 8ff60ed72243f63d9d94b25e808526df4f201de8be32fe85f9f12fa7a853c094 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9057d7e-a146-4d5d-b454-162ed672215e, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec  3 19:03:32 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:03:32.867 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[9025314a-98ea-40cc-b122-5ba1b8650a43]: (4, ('Wed Dec  3 07:03:32 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d9057d7e-a146-4d5d-b454-162ed672215e (8ff60ed72243f63d9d94b25e808526df4f201de8be32fe85f9f12fa7a853c094)\n8ff60ed72243f63d9d94b25e808526df4f201de8be32fe85f9f12fa7a853c094\nWed Dec  3 07:03:32 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d9057d7e-a146-4d5d-b454-162ed672215e (8ff60ed72243f63d9d94b25e808526df4f201de8be32fe85f9f12fa7a853c094)\n8ff60ed72243f63d9d94b25e808526df4f201de8be32fe85f9f12fa7a853c094\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 19:03:32 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:03:32.869 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[2d1accaf-d60e-418a-8042-ecbffa9fcc45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 19:03:32 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:03:32.871 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd9057d7e-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 19:03:32 compute-0 nova_compute[348325]: 2025-12-03 19:03:32.873 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:03:32 compute-0 kernel: tapd9057d7e-a0: left promiscuous mode
Dec  3 19:03:32 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:03:32.882 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[26421853-2ed3-4398-bec2-c24f24da7068]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 19:03:32 compute-0 nova_compute[348325]: 2025-12-03 19:03:32.902 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:03:32 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:03:32.908 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[99119eb0-2b35-4a97-8504-0b66c7448947]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 19:03:32 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:03:32.910 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[acab05d1-c8be-4af4-ad88-6798e8e8cd40]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 19:03:32 compute-0 nova_compute[348325]: 2025-12-03 19:03:32.921 348329 DEBUG nova.virt.libvirt.vif [None req-93ae6218-17dc-4089-9de4-01dff534ec7f 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-03T19:01:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-127548925',display_name='tempest-TestNetworkBasicOps-server-127548925',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-127548925',id=13,image_ref='55982930-937b-484e-96ee-69e406a48023',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI7rF1psYhD8tU9znnWXqUeQ6mnu/Zs10NhYFDe3X+4zojSzevNC27h/7cu/TNcDKRquyHQ51V1La4K+wMiQHVkejByxABkEgQekf3AyU+0qmLU9mTIvdAbamP7MzcryHQ==',key_name='tempest-TestNetworkBasicOps-1544093464',keypairs=<?>,launch_index=0,launched_at=2025-12-03T19:01:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='014032eeba1145f99481402acd561743',ramdisk_id='',reservation_id='r-xf0avdq0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='55982930-937b-484e-96ee-69e406a48023',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1083905166',owner_user_name='tempest-TestNetworkBasicOps-1083905166-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-03T19:01:32Z,user_data=None,user_id='8fabb3dd3b1c42b491c99a1274242f68',uuid=3bb34e64-ac61-46f3-99eb-2fdd346a8ecc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "92566cef-01e0-4398-bbab-0b7049af2e6b", "address": "fa:16:3e:ec:63:6e", "network": {"id": "d9057d7e-a146-4d5d-b454-162ed672215e", "bridge": "br-int", "label": "tempest-network-smoke--493824106", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "014032eeba1145f99481402acd561743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92566cef-01", "ovs_interfaceid": "92566cef-01e0-4398-bbab-0b7049af2e6b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  3 19:03:32 compute-0 nova_compute[348325]: 2025-12-03 19:03:32.922 348329 DEBUG nova.network.os_vif_util [None req-93ae6218-17dc-4089-9de4-01dff534ec7f 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Converting VIF {"id": "92566cef-01e0-4398-bbab-0b7049af2e6b", "address": "fa:16:3e:ec:63:6e", "network": {"id": "d9057d7e-a146-4d5d-b454-162ed672215e", "bridge": "br-int", "label": "tempest-network-smoke--493824106", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "014032eeba1145f99481402acd561743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92566cef-01", "ovs_interfaceid": "92566cef-01e0-4398-bbab-0b7049af2e6b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 19:03:32 compute-0 nova_compute[348325]: 2025-12-03 19:03:32.924 348329 DEBUG nova.network.os_vif_util [None req-93ae6218-17dc-4089-9de4-01dff534ec7f 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ec:63:6e,bridge_name='br-int',has_traffic_filtering=True,id=92566cef-01e0-4398-bbab-0b7049af2e6b,network=Network(d9057d7e-a146-4d5d-b454-162ed672215e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92566cef-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 19:03:32 compute-0 nova_compute[348325]: 2025-12-03 19:03:32.925 348329 DEBUG os_vif [None req-93ae6218-17dc-4089-9de4-01dff534ec7f 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ec:63:6e,bridge_name='br-int',has_traffic_filtering=True,id=92566cef-01e0-4398-bbab-0b7049af2e6b,network=Network(d9057d7e-a146-4d5d-b454-162ed672215e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92566cef-01') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  3 19:03:32 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:03:32.925 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[4b98b00b-a70d-41bf-9273-72eac13bc07a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 677029, 'reachable_time': 28352, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 453040, 'error': None, 'target': 'ovnmeta-d9057d7e-a146-4d5d-b454-162ed672215e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 19:03:32 compute-0 nova_compute[348325]: 2025-12-03 19:03:32.927 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:03:32 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:03:32.928 287110 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d9057d7e-a146-4d5d-b454-162ed672215e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  3 19:03:32 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:03:32.928 287110 DEBUG oslo.privsep.daemon [-] privsep: reply[2283f05e-77a6-40d9-a15a-ef8cf422b994]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 19:03:32 compute-0 systemd[1]: run-netns-ovnmeta\x2dd9057d7e\x2da146\x2d4d5d\x2db454\x2d162ed672215e.mount: Deactivated successfully.
Dec  3 19:03:32 compute-0 nova_compute[348325]: 2025-12-03 19:03:32.930 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap92566cef-01, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 19:03:32 compute-0 nova_compute[348325]: 2025-12-03 19:03:32.933 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:03:32 compute-0 nova_compute[348325]: 2025-12-03 19:03:32.934 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:03:32 compute-0 nova_compute[348325]: 2025-12-03 19:03:32.938 348329 INFO os_vif [None req-93ae6218-17dc-4089-9de4-01dff534ec7f 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ec:63:6e,bridge_name='br-int',has_traffic_filtering=True,id=92566cef-01e0-4398-bbab-0b7049af2e6b,network=Network(d9057d7e-a146-4d5d-b454-162ed672215e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92566cef-01')#033[00m
Dec  3 19:03:33 compute-0 nova_compute[348325]: 2025-12-03 19:03:33.201 348329 DEBUG nova.network.neutron [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Updating instance_info_cache with network_info: [{"id": "92566cef-01e0-4398-bbab-0b7049af2e6b", "address": "fa:16:3e:ec:63:6e", "network": {"id": "d9057d7e-a146-4d5d-b454-162ed672215e", "bridge": "br-int", "label": "tempest-network-smoke--493824106", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "014032eeba1145f99481402acd561743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92566cef-01", "ovs_interfaceid": "92566cef-01e0-4398-bbab-0b7049af2e6b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 19:03:33 compute-0 nova_compute[348325]: 2025-12-03 19:03:33.237 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Releasing lock "refresh_cache-3bb34e64-ac61-46f3-99eb-2fdd346a8ecc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 19:03:33 compute-0 nova_compute[348325]: 2025-12-03 19:03:33.238 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  3 19:03:33 compute-0 nova_compute[348325]: 2025-12-03 19:03:33.511 348329 INFO nova.virt.libvirt.driver [None req-93ae6218-17dc-4089-9de4-01dff534ec7f 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Deleting instance files /var/lib/nova/instances/3bb34e64-ac61-46f3-99eb-2fdd346a8ecc_del
Dec  3 19:03:33 compute-0 nova_compute[348325]: 2025-12-03 19:03:33.513 348329 INFO nova.virt.libvirt.driver [None req-93ae6218-17dc-4089-9de4-01dff534ec7f 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Deletion of /var/lib/nova/instances/3bb34e64-ac61-46f3-99eb-2fdd346a8ecc_del complete
Dec  3 19:03:33 compute-0 nova_compute[348325]: 2025-12-03 19:03:33.555 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:03:33 compute-0 nova_compute[348325]: 2025-12-03 19:03:33.601 348329 DEBUG nova.compute.manager [req-ed6165ca-679b-4c60-838c-5d994683edea req-544e1323-96ae-4a82-95af-eaa4bb6840dc 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Received event network-vif-unplugged-92566cef-01e0-4398-bbab-0b7049af2e6b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  3 19:03:33 compute-0 nova_compute[348325]: 2025-12-03 19:03:33.601 348329 DEBUG oslo_concurrency.lockutils [req-ed6165ca-679b-4c60-838c-5d994683edea req-544e1323-96ae-4a82-95af-eaa4bb6840dc 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "3bb34e64-ac61-46f3-99eb-2fdd346a8ecc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 19:03:33 compute-0 nova_compute[348325]: 2025-12-03 19:03:33.602 348329 DEBUG oslo_concurrency.lockutils [req-ed6165ca-679b-4c60-838c-5d994683edea req-544e1323-96ae-4a82-95af-eaa4bb6840dc 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "3bb34e64-ac61-46f3-99eb-2fdd346a8ecc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 19:03:33 compute-0 nova_compute[348325]: 2025-12-03 19:03:33.602 348329 DEBUG oslo_concurrency.lockutils [req-ed6165ca-679b-4c60-838c-5d994683edea req-544e1323-96ae-4a82-95af-eaa4bb6840dc 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "3bb34e64-ac61-46f3-99eb-2fdd346a8ecc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 19:03:33 compute-0 nova_compute[348325]: 2025-12-03 19:03:33.603 348329 DEBUG nova.compute.manager [req-ed6165ca-679b-4c60-838c-5d994683edea req-544e1323-96ae-4a82-95af-eaa4bb6840dc 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] No waiting events found dispatching network-vif-unplugged-92566cef-01e0-4398-bbab-0b7049af2e6b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  3 19:03:33 compute-0 nova_compute[348325]: 2025-12-03 19:03:33.603 348329 DEBUG nova.compute.manager [req-ed6165ca-679b-4c60-838c-5d994683edea req-544e1323-96ae-4a82-95af-eaa4bb6840dc 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Received event network-vif-unplugged-92566cef-01e0-4398-bbab-0b7049af2e6b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec  3 19:03:33 compute-0 nova_compute[348325]: 2025-12-03 19:03:33.772 348329 INFO nova.compute.manager [None req-93ae6218-17dc-4089-9de4-01dff534ec7f 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Took 1.37 seconds to destroy the instance on the hypervisor.
Dec  3 19:03:33 compute-0 nova_compute[348325]: 2025-12-03 19:03:33.773 348329 DEBUG oslo.service.loopingcall [None req-93ae6218-17dc-4089-9de4-01dff534ec7f 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec  3 19:03:33 compute-0 nova_compute[348325]: 2025-12-03 19:03:33.774 348329 DEBUG nova.compute.manager [-] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec  3 19:03:33 compute-0 nova_compute[348325]: 2025-12-03 19:03:33.775 348329 DEBUG nova.network.neutron [-] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec  3 19:03:33 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1940: 321 pgs: 321 active+clean; 236 MiB data, 384 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 56 KiB/s wr, 31 op/s
Dec  3 19:03:34 compute-0 nova_compute[348325]: 2025-12-03 19:03:34.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:03:34 compute-0 nova_compute[348325]: 2025-12-03 19:03:34.488 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  3 19:03:34 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:03:34 compute-0 nova_compute[348325]: 2025-12-03 19:03:34.637 348329 DEBUG nova.network.neutron [-] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  3 19:03:34 compute-0 nova_compute[348325]: 2025-12-03 19:03:34.664 348329 INFO nova.compute.manager [-] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Took 0.89 seconds to deallocate network for instance.
Dec  3 19:03:34 compute-0 nova_compute[348325]: 2025-12-03 19:03:34.735 348329 DEBUG oslo_concurrency.lockutils [None req-93ae6218-17dc-4089-9de4-01dff534ec7f 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 19:03:34 compute-0 nova_compute[348325]: 2025-12-03 19:03:34.737 348329 DEBUG oslo_concurrency.lockutils [None req-93ae6218-17dc-4089-9de4-01dff534ec7f 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 19:03:34 compute-0 nova_compute[348325]: 2025-12-03 19:03:34.746 348329 DEBUG nova.compute.manager [req-68e19b50-5cdb-4423-8f57-5c310270427e req-08273af6-a413-4ccc-a259-0e9920ab4a6f 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Received event network-vif-deleted-92566cef-01e0-4398-bbab-0b7049af2e6b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  3 19:03:34 compute-0 nova_compute[348325]: 2025-12-03 19:03:34.829 348329 DEBUG oslo_concurrency.processutils [None req-93ae6218-17dc-4089-9de4-01dff534ec7f 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 19:03:35 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 19:03:35 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2272903806' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 19:03:35 compute-0 nova_compute[348325]: 2025-12-03 19:03:35.260 348329 DEBUG oslo_concurrency.processutils [None req-93ae6218-17dc-4089-9de4-01dff534ec7f 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
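The `ceph df --format=json` call logged here (run through oslo_concurrency.processutils, and visible on the ceph-mon side as a `df` mon_command dispatch) returns machine-readable cluster and pool totals. A minimal sketch of reproducing the same query by hand; the `stats` key names are assumptions based on current Ceph releases:

```python
import json
import subprocess

# The same command the resource tracker logs above; requires the
# client.openstack keyring referenced by /etc/ceph/ceph.conf.
out = subprocess.run(
    ["ceph", "df", "--format=json",
     "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
    check=True, capture_output=True, text=True).stdout

df = json.loads(out)
stats = df["stats"]  # cluster-wide totals (key names assumed)
print("total GiB:", stats["total_bytes"] / 2**30)
print("avail GiB:", stats["total_avail_bytes"] / 2**30)
for pool in df.get("pools", []):
    print(pool["name"], pool["stats"])
```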
Dec  3 19:03:35 compute-0 nova_compute[348325]: 2025-12-03 19:03:35.274 348329 DEBUG nova.compute.provider_tree [None req-93ae6218-17dc-4089-9de4-01dff534ec7f 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  3 19:03:35 compute-0 nova_compute[348325]: 2025-12-03 19:03:35.310 348329 DEBUG nova.scheduler.client.report [None req-93ae6218-17dc-4089-9de4-01dff534ec7f 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  3 19:03:35 compute-0 nova_compute[348325]: 2025-12-03 19:03:35.360 348329 DEBUG oslo_concurrency.lockutils [None req-93ae6218-17dc-4089-9de4-01dff534ec7f 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.624s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 19:03:35 compute-0 nova_compute[348325]: 2025-12-03 19:03:35.392 348329 INFO nova.scheduler.client.report [None req-93ae6218-17dc-4089-9de4-01dff534ec7f 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Deleted allocations for instance 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc
Dec  3 19:03:35 compute-0 nova_compute[348325]: 2025-12-03 19:03:35.496 348329 DEBUG oslo_concurrency.lockutils [None req-93ae6218-17dc-4089-9de4-01dff534ec7f 8fabb3dd3b1c42b491c99a1274242f68 014032eeba1145f99481402acd561743 - - default default] Lock "3bb34e64-ac61-46f3-99eb-2fdd346a8ecc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.096s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 19:03:35 compute-0 nova_compute[348325]: 2025-12-03 19:03:35.833 348329 DEBUG nova.compute.manager [req-87ecb7a3-2863-4b07-9bb7-d2b525b41050 req-7425228d-1dcd-4ac9-9507-10cd54cdab34 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Received event network-vif-plugged-92566cef-01e0-4398-bbab-0b7049af2e6b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  3 19:03:35 compute-0 nova_compute[348325]: 2025-12-03 19:03:35.834 348329 DEBUG oslo_concurrency.lockutils [req-87ecb7a3-2863-4b07-9bb7-d2b525b41050 req-7425228d-1dcd-4ac9-9507-10cd54cdab34 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "3bb34e64-ac61-46f3-99eb-2fdd346a8ecc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 19:03:35 compute-0 nova_compute[348325]: 2025-12-03 19:03:35.835 348329 DEBUG oslo_concurrency.lockutils [req-87ecb7a3-2863-4b07-9bb7-d2b525b41050 req-7425228d-1dcd-4ac9-9507-10cd54cdab34 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "3bb34e64-ac61-46f3-99eb-2fdd346a8ecc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 19:03:35 compute-0 nova_compute[348325]: 2025-12-03 19:03:35.837 348329 DEBUG oslo_concurrency.lockutils [req-87ecb7a3-2863-4b07-9bb7-d2b525b41050 req-7425228d-1dcd-4ac9-9507-10cd54cdab34 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "3bb34e64-ac61-46f3-99eb-2fdd346a8ecc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 19:03:35 compute-0 nova_compute[348325]: 2025-12-03 19:03:35.838 348329 DEBUG nova.compute.manager [req-87ecb7a3-2863-4b07-9bb7-d2b525b41050 req-7425228d-1dcd-4ac9-9507-10cd54cdab34 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] No waiting events found dispatching network-vif-plugged-92566cef-01e0-4398-bbab-0b7049af2e6b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  3 19:03:35 compute-0 nova_compute[348325]: 2025-12-03 19:03:35.839 348329 WARNING nova.compute.manager [req-87ecb7a3-2863-4b07-9bb7-d2b525b41050 req-7425228d-1dcd-4ac9-9507-10cd54cdab34 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Received unexpected event network-vif-plugged-92566cef-01e0-4398-bbab-0b7049af2e6b for instance with vm_state deleted and task_state None.
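The Acquiring/acquired/released triples throughout this section are emitted by oslo.concurrency's `inner` wrapper around named locks. A minimal sketch of the same API as nova uses it; the lock names here are illustrative, not copied from nova source:

```python
from oslo_concurrency import lockutils

# Decorator form: all callers sharing the lock name are serialized,
# producing the Acquiring/acquired/released DEBUG lines seen above.
@lockutils.synchronized('instance-events')  # illustrative name
def pop_event():
    print('critical section')

# Context-manager form, matching the "compute_resources" lock lines:
with lockutils.lock('compute_resources'):
    pop_event()
```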
Dec  3 19:03:35 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1941: 321 pgs: 321 active+clean; 195 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 14 KiB/s wr, 44 op/s
Dec  3 19:03:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 19:03:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3259341967' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 19:03:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 19:03:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3259341967' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 19:03:37 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1942: 321 pgs: 321 active+clean; 177 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 13 KiB/s wr, 46 op/s
Dec  3 19:03:37 compute-0 nova_compute[348325]: 2025-12-03 19:03:37.934 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:03:38 compute-0 nova_compute[348325]: 2025-12-03 19:03:38.559 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:03:38 compute-0 podman[453082]: 2025-12-03 19:03:38.956774004 +0000 UTC m=+0.108725214 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 19:03:39 compute-0 nova_compute[348325]: 2025-12-03 19:03:39.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:03:39 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:03:39 compute-0 nova_compute[348325]: 2025-12-03 19:03:39.522 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 19:03:39 compute-0 nova_compute[348325]: 2025-12-03 19:03:39.522 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 19:03:39 compute-0 nova_compute[348325]: 2025-12-03 19:03:39.523 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 19:03:39 compute-0 nova_compute[348325]: 2025-12-03 19:03:39.523 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  3 19:03:39 compute-0 nova_compute[348325]: 2025-12-03 19:03:39.523 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 19:03:39 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1943: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 13 KiB/s wr, 55 op/s
Dec  3 19:03:40 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 19:03:40 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/979386823' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 19:03:40 compute-0 nova_compute[348325]: 2025-12-03 19:03:40.082 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 19:03:40 compute-0 nova_compute[348325]: 2025-12-03 19:03:40.177 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 19:03:40 compute-0 nova_compute[348325]: 2025-12-03 19:03:40.179 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 19:03:40 compute-0 nova_compute[348325]: 2025-12-03 19:03:40.734 348329 WARNING nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  3 19:03:40 compute-0 nova_compute[348325]: 2025-12-03 19:03:40.736 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3750MB free_disk=59.94282150268555GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  3 19:03:40 compute-0 nova_compute[348325]: 2025-12-03 19:03:40.737 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 19:03:40 compute-0 nova_compute[348325]: 2025-12-03 19:03:40.738 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 19:03:40 compute-0 nova_compute[348325]: 2025-12-03 19:03:40.815 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance a4fc45c7-44e4-4b50-a3e0-98de13268f88 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  3 19:03:40 compute-0 nova_compute[348325]: 2025-12-03 19:03:40.816 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  3 19:03:40 compute-0 nova_compute[348325]: 2025-12-03 19:03:40.816 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  3 19:03:40 compute-0 nova_compute[348325]: 2025-12-03 19:03:40.855 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 19:03:40 compute-0 ovn_controller[89305]: 2025-12-03T19:03:40Z|00172|binding|INFO|Releasing lport f82febe8-1e88-4e67-9f7a-5af5921c9877 from this chassis (sb_readonly=0)
Dec  3 19:03:40 compute-0 nova_compute[348325]: 2025-12-03 19:03:40.884 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:03:41 compute-0 ovn_controller[89305]: 2025-12-03T19:03:41Z|00173|binding|INFO|Releasing lport f82febe8-1e88-4e67-9f7a-5af5921c9877 from this chassis (sb_readonly=0)
Dec  3 19:03:41 compute-0 nova_compute[348325]: 2025-12-03 19:03:41.104 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:03:41 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 19:03:41 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2643463336' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 19:03:41 compute-0 nova_compute[348325]: 2025-12-03 19:03:41.353 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 19:03:41 compute-0 nova_compute[348325]: 2025-12-03 19:03:41.367 348329 DEBUG nova.compute.provider_tree [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  3 19:03:41 compute-0 nova_compute[348325]: 2025-12-03 19:03:41.391 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
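The inventory dict reported here determines how much of each resource class placement will admit: usable capacity is (total - reserved) * allocation_ratio, as placement applies it. A quick check against the logged values:

```python
# Values copied from the set_inventory_for_provider line above.
inventory = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
}

# Placement admits allocations while
#   used + requested <= (total - reserved) * allocation_ratio
for rc, inv in inventory.items():
    cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
    print(rc, cap)
# VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
```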
Dec  3 19:03:41 compute-0 nova_compute[348325]: 2025-12-03 19:03:41.423 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  3 19:03:41 compute-0 nova_compute[348325]: 2025-12-03 19:03:41.425 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 19:03:41 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1944: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 2.3 KiB/s wr, 54 op/s
Dec  3 19:03:42 compute-0 nova_compute[348325]: 2025-12-03 19:03:42.142 348329 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764788607.1407332, fd1bf28c-ce00-44df-b134-5fa073e2246d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  3 19:03:42 compute-0 nova_compute[348325]: 2025-12-03 19:03:42.143 348329 INFO nova.compute.manager [-] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] VM Stopped (Lifecycle Event)
Dec  3 19:03:42 compute-0 nova_compute[348325]: 2025-12-03 19:03:42.168 348329 DEBUG nova.compute.manager [None req-42565e1e-32b0-46b4-934d-2637233028fd - - - - - -] [instance: fd1bf28c-ce00-44df-b134-5fa073e2246d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  3 19:03:42 compute-0 nova_compute[348325]: 2025-12-03 19:03:42.936 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:03:42 compute-0 podman[453153]: 2025-12-03 19:03:42.967100207 +0000 UTC m=+0.131262434 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  3 19:03:42 compute-0 podman[453154]: 2025-12-03 19:03:42.985359234 +0000 UTC m=+0.144769636 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2)
Dec  3 19:03:43 compute-0 nova_compute[348325]: 2025-12-03 19:03:43.564 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:03:43 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1945: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec  3 19:03:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:03:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:03:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:03:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:03:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:03:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:03:44 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:03:45 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1946: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec  3 19:03:47 compute-0 nova_compute[348325]: 2025-12-03 19:03:47.640 348329 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764788612.6380947, 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  3 19:03:47 compute-0 nova_compute[348325]: 2025-12-03 19:03:47.641 348329 INFO nova.compute.manager [-] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] VM Stopped (Lifecycle Event)
Dec  3 19:03:47 compute-0 nova_compute[348325]: 2025-12-03 19:03:47.666 348329 DEBUG nova.compute.manager [None req-b1531892-baef-4727-96a3-cf391b67a05a - - - - - -] [instance: 3bb34e64-ac61-46f3-99eb-2fdd346a8ecc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  3 19:03:47 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 19:03:47 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 19:03:47 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 19:03:47 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 19:03:47 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 19:03:47 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:03:47 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 792faa70-6a0a-4420-a0ac-55b04a588a30 does not exist
Dec  3 19:03:47 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev b4c23e41-52a7-4d74-b1fb-d021a5dc9da4 does not exist
Dec  3 19:03:47 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev aea6674d-fc96-4542-92db-be459a5ee989 does not exist
Dec  3 19:03:47 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 19:03:47 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 19:03:47 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 19:03:47 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 19:03:47 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 19:03:47 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 19:03:47 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1947: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 511 B/s wr, 11 op/s
Dec  3 19:03:47 compute-0 nova_compute[348325]: 2025-12-03 19:03:47.939 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:03:48 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 19:03:48 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:03:48 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 19:03:48 compute-0 nova_compute[348325]: 2025-12-03 19:03:48.565 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:03:48 compute-0 podman[453467]: 2025-12-03 19:03:48.922642577 +0000 UTC m=+0.061267616 container create fafa5cd42bed1c74a92d16eea2e0be6f76b35decd53e42aa096ecbf61c0a41ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Dec  3 19:03:48 compute-0 systemd[1]: Started libpod-conmon-fafa5cd42bed1c74a92d16eea2e0be6f76b35decd53e42aa096ecbf61c0a41ca.scope.
Dec  3 19:03:48 compute-0 podman[453467]: 2025-12-03 19:03:48.900015385 +0000 UTC m=+0.038640444 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:03:49 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:03:49 compute-0 podman[453467]: 2025-12-03 19:03:49.053639214 +0000 UTC m=+0.192264273 container init fafa5cd42bed1c74a92d16eea2e0be6f76b35decd53e42aa096ecbf61c0a41ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_williamson, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec  3 19:03:49 compute-0 podman[453467]: 2025-12-03 19:03:49.072646367 +0000 UTC m=+0.211271406 container start fafa5cd42bed1c74a92d16eea2e0be6f76b35decd53e42aa096ecbf61c0a41ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_williamson, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 19:03:49 compute-0 podman[453467]: 2025-12-03 19:03:49.077540987 +0000 UTC m=+0.216166026 container attach fafa5cd42bed1c74a92d16eea2e0be6f76b35decd53e42aa096ecbf61c0a41ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  3 19:03:49 compute-0 crazy_williamson[453483]: 167 167
Dec  3 19:03:49 compute-0 podman[453467]: 2025-12-03 19:03:49.086573937 +0000 UTC m=+0.225198976 container died fafa5cd42bed1c74a92d16eea2e0be6f76b35decd53e42aa096ecbf61c0a41ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True)
Dec  3 19:03:49 compute-0 systemd[1]: libpod-fafa5cd42bed1c74a92d16eea2e0be6f76b35decd53e42aa096ecbf61c0a41ca.scope: Deactivated successfully.
Dec  3 19:03:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-503bab882cd5b533109c84b65ae4fa926f45f4101a200629901860c7a0096734-merged.mount: Deactivated successfully.
Dec  3 19:03:49 compute-0 podman[453467]: 2025-12-03 19:03:49.156200897 +0000 UTC m=+0.294825936 container remove fafa5cd42bed1c74a92d16eea2e0be6f76b35decd53e42aa096ecbf61c0a41ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Dec  3 19:03:49 compute-0 systemd[1]: libpod-conmon-fafa5cd42bed1c74a92d16eea2e0be6f76b35decd53e42aa096ecbf61c0a41ca.scope: Deactivated successfully.
Dec  3 19:03:49 compute-0 podman[453507]: 2025-12-03 19:03:49.418798956 +0000 UTC m=+0.073505675 container create 0413a5aef2632129d00fc67032ebb4856a15c1a38fa9d6df812840a2742843a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_wilson, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 19:03:49 compute-0 podman[453507]: 2025-12-03 19:03:49.379327293 +0000 UTC m=+0.034034042 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:03:49 compute-0 systemd[1]: Started libpod-conmon-0413a5aef2632129d00fc67032ebb4856a15c1a38fa9d6df812840a2742843a4.scope.
Dec  3 19:03:49 compute-0 nova_compute[348325]: 2025-12-03 19:03:49.488 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:03:49 compute-0 nova_compute[348325]: 2025-12-03 19:03:49.492 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 19:03:49 compute-0 nova_compute[348325]: 2025-12-03 19:03:49.494 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 19:03:49 compute-0 nova_compute[348325]: 2025-12-03 19:03:49.494 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 19:03:49 compute-0 nova_compute[348325]: 2025-12-03 19:03:49.495 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 19:03:49 compute-0 nova_compute[348325]: 2025-12-03 19:03:49.496 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 19:03:49 compute-0 nova_compute[348325]: 2025-12-03 19:03:49.498 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 19:03:49 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:03:49 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:03:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a43a89279693b70a5651903df968bccd0967a4e2129aa8f385f3069b80f88861/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 19:03:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a43a89279693b70a5651903df968bccd0967a4e2129aa8f385f3069b80f88861/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 19:03:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a43a89279693b70a5651903df968bccd0967a4e2129aa8f385f3069b80f88861/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 19:03:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a43a89279693b70a5651903df968bccd0967a4e2129aa8f385f3069b80f88861/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 19:03:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a43a89279693b70a5651903df968bccd0967a4e2129aa8f385f3069b80f88861/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 19:03:49 compute-0 nova_compute[348325]: 2025-12-03 19:03:49.541 348329 DEBUG nova.virt.libvirt.imagecache [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100
Dec  3 19:03:49 compute-0 nova_compute[348325]: 2025-12-03 19:03:49.557 348329 DEBUG nova.virt.libvirt.imagecache [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Dec  3 19:03:49 compute-0 podman[453507]: 2025-12-03 19:03:49.5570096 +0000 UTC m=+0.211716339 container init 0413a5aef2632129d00fc67032ebb4856a15c1a38fa9d6df812840a2742843a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_wilson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 19:03:49 compute-0 nova_compute[348325]: 2025-12-03 19:03:49.559 348329 DEBUG nova.virt.libvirt.imagecache [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Image id 29e9e995-880d-46f8-bdd0-149d4e107ea9 yields fingerprint fef3ab1a1bec0408321642c1c8701f42866ad11c _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Dec  3 19:03:49 compute-0 nova_compute[348325]: 2025-12-03 19:03:49.561 348329 INFO nova.virt.libvirt.imagecache [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] image 29e9e995-880d-46f8-bdd0-149d4e107ea9 at (/var/lib/nova/instances/_base/fef3ab1a1bec0408321642c1c8701f42866ad11c): checking
Dec  3 19:03:49 compute-0 nova_compute[348325]: 2025-12-03 19:03:49.562 348329 DEBUG nova.virt.libvirt.imagecache [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] image 29e9e995-880d-46f8-bdd0-149d4e107ea9 at (/var/lib/nova/instances/_base/fef3ab1a1bec0408321642c1c8701f42866ad11c): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279
Dec  3 19:03:49 compute-0 podman[453507]: 2025-12-03 19:03:49.573214206 +0000 UTC m=+0.227920925 container start 0413a5aef2632129d00fc67032ebb4856a15c1a38fa9d6df812840a2742843a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_wilson, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec  3 19:03:49 compute-0 podman[453507]: 2025-12-03 19:03:49.577840058 +0000 UTC m=+0.232546797 container attach 0413a5aef2632129d00fc67032ebb4856a15c1a38fa9d6df812840a2742843a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_wilson, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 19:03:49 compute-0 nova_compute[348325]: 2025-12-03 19:03:49.581 348329 DEBUG nova.virt.libvirt.imagecache [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Dec  3 19:03:49 compute-0 nova_compute[348325]: 2025-12-03 19:03:49.582 348329 DEBUG nova.virt.libvirt.imagecache [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] a4fc45c7-44e4-4b50-a3e0-98de13268f88 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Dec  3 19:03:49 compute-0 nova_compute[348325]: 2025-12-03 19:03:49.582 348329 WARNING nova.virt.libvirt.imagecache [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Unknown base file: /var/lib/nova/instances/_base/2a1fd6462a2f789b92c02c5037b663e095546067
Dec  3 19:03:49 compute-0 nova_compute[348325]: 2025-12-03 19:03:49.583 348329 WARNING nova.virt.libvirt.imagecache [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Unknown base file: /var/lib/nova/instances/_base/b31c907458f7ba86221dfe584fd8b9e7faaaf884
Dec  3 19:03:49 compute-0 nova_compute[348325]: 2025-12-03 19:03:49.584 348329 WARNING nova.virt.libvirt.imagecache [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Unknown base file: /var/lib/nova/instances/_base/5cd3db9bb272569bd3ad2bd1318028e61915b864
Dec  3 19:03:49 compute-0 nova_compute[348325]: 2025-12-03 19:03:49.584 348329 INFO nova.virt.libvirt.imagecache [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Active base files: /var/lib/nova/instances/_base/fef3ab1a1bec0408321642c1c8701f42866ad11c
Dec  3 19:03:49 compute-0 nova_compute[348325]: 2025-12-03 19:03:49.585 348329 INFO nova.virt.libvirt.imagecache [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Removable base files: /var/lib/nova/instances/_base/2a1fd6462a2f789b92c02c5037b663e095546067 /var/lib/nova/instances/_base/b31c907458f7ba86221dfe584fd8b9e7faaaf884 /var/lib/nova/instances/_base/5cd3db9bb272569bd3ad2bd1318028e61915b864
Dec  3 19:03:49 compute-0 nova_compute[348325]: 2025-12-03 19:03:49.586 348329 INFO nova.virt.libvirt.imagecache [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/2a1fd6462a2f789b92c02c5037b663e095546067
Dec  3 19:03:49 compute-0 nova_compute[348325]: 2025-12-03 19:03:49.587 348329 INFO nova.virt.libvirt.imagecache [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/b31c907458f7ba86221dfe584fd8b9e7faaaf884
Dec  3 19:03:49 compute-0 nova_compute[348325]: 2025-12-03 19:03:49.587 348329 INFO nova.virt.libvirt.imagecache [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/5cd3db9bb272569bd3ad2bd1318028e61915b864
Dec  3 19:03:49 compute-0 nova_compute[348325]: 2025-12-03 19:03:49.588 348329 DEBUG nova.virt.libvirt.imagecache [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Dec  3 19:03:49 compute-0 nova_compute[348325]: 2025-12-03 19:03:49.588 348329 DEBUG nova.virt.libvirt.imagecache [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Dec  3 19:03:49 compute-0 nova_compute[348325]: 2025-12-03 19:03:49.589 348329 DEBUG nova.virt.libvirt.imagecache [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
Dec  3 19:03:49 compute-0 nova_compute[348325]: 2025-12-03 19:03:49.590 348329 INFO nova.virt.libvirt.imagecache [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66
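[annotation] The nova_compute run above is one pass of nova's libvirt image-cache manager: base files under /var/lib/nova/instances/_base are named by the SHA-1 hex digest of the Glance image id, which is why the empty image id at 19:03:49.581 maps to da39a3ee5e6b4b0d3255bfef95601890afd80709, the SHA-1 of the empty string. A minimal Python sketch of that naming rule, assuming only what the fingerprint lines show:

    import hashlib

    def base_file_name(image_id: str) -> str:
        # Base files are keyed by the SHA-1 of the image id, per the
        # "Image id ... yields fingerprint ..." lines above.
        return hashlib.sha1(image_id.encode("utf-8")).hexdigest()

    print(base_file_name(""))
    # da39a3ee5e6b4b0d3255bfef95601890afd80709 -- matches the empty-id line
    print(base_file_name("29e9e995-880d-46f8-bdd0-149d4e107ea9"))
    # should print fef3ab1a1bec0408321642c1c8701f42866ad11c if the rule holds

The "too young to remove" lines mean the three unknown base files sit below the configured minimum age, so they are reported as removable but left in place for now.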
Dec  3 19:03:49 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1948: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 7.1 KiB/s rd, 0 B/s wr, 9 op/s
Dec  3 19:03:50 compute-0 dazzling_wilson[453525]: --> passed data devices: 0 physical, 3 LVM
Dec  3 19:03:50 compute-0 dazzling_wilson[453525]: --> relative data size: 1.0
Dec  3 19:03:50 compute-0 dazzling_wilson[453525]: --> All data devices are unavailable
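[annotation] The dazzling_wilson output above reads like a ceph-volume lvm batch report probe run by cephadm: three LVM data devices were offered, none is available (they already carry OSDs), so no new OSDs are planned and the container exits. A hedged sketch of reproducing that report from inside the ceph container, using the LV paths that appear later in this log:

    import subprocess

    lvs = ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]
    # --report only prints the plan; it changes nothing on disk. Assumes
    # ceph-volume is on PATH, as it is inside the ceph image.
    result = subprocess.run(
        ["ceph-volume", "lvm", "batch", "--report", "--format", "json", *lvs],
        capture_output=True, text=True, check=False,
    )
    print(result.stdout or result.stderr)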
Dec  3 19:03:51 compute-0 systemd[1]: libpod-0413a5aef2632129d00fc67032ebb4856a15c1a38fa9d6df812840a2742843a4.scope: Deactivated successfully.
Dec  3 19:03:51 compute-0 podman[453507]: 2025-12-03 19:03:51.049073488 +0000 UTC m=+1.703780217 container died 0413a5aef2632129d00fc67032ebb4856a15c1a38fa9d6df812840a2742843a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  3 19:03:51 compute-0 systemd[1]: libpod-0413a5aef2632129d00fc67032ebb4856a15c1a38fa9d6df812840a2742843a4.scope: Consumed 1.402s CPU time.
Dec  3 19:03:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-a43a89279693b70a5651903df968bccd0967a4e2129aa8f385f3069b80f88861-merged.mount: Deactivated successfully.
Dec  3 19:03:51 compute-0 podman[453507]: 2025-12-03 19:03:51.143563024 +0000 UTC m=+1.798269743 container remove 0413a5aef2632129d00fc67032ebb4856a15c1a38fa9d6df812840a2742843a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_wilson, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 19:03:51 compute-0 systemd[1]: libpod-conmon-0413a5aef2632129d00fc67032ebb4856a15c1a38fa9d6df812840a2742843a4.scope: Deactivated successfully.
Dec  3 19:03:51 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1949: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:03:52 compute-0 podman[453700]: 2025-12-03 19:03:52.132045 +0000 UTC m=+0.077781288 container create 7e08e4c7d01a64370bac5625ecad24e9d7be70944e5333943b69c4536441f1f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_pasteur, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 19:03:52 compute-0 podman[453700]: 2025-12-03 19:03:52.095343845 +0000 UTC m=+0.041080113 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:03:52 compute-0 systemd[1]: Started libpod-conmon-7e08e4c7d01a64370bac5625ecad24e9d7be70944e5333943b69c4536441f1f6.scope.
Dec  3 19:03:52 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:03:52 compute-0 podman[453700]: 2025-12-03 19:03:52.282931934 +0000 UTC m=+0.228668252 container init 7e08e4c7d01a64370bac5625ecad24e9d7be70944e5333943b69c4536441f1f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec  3 19:03:52 compute-0 podman[453700]: 2025-12-03 19:03:52.302005849 +0000 UTC m=+0.247742137 container start 7e08e4c7d01a64370bac5625ecad24e9d7be70944e5333943b69c4536441f1f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 19:03:52 compute-0 podman[453700]: 2025-12-03 19:03:52.308473367 +0000 UTC m=+0.254209615 container attach 7e08e4c7d01a64370bac5625ecad24e9d7be70944e5333943b69c4536441f1f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_pasteur, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:03:52 compute-0 mystifying_pasteur[453716]: 167 167
Dec  3 19:03:52 compute-0 systemd[1]: libpod-7e08e4c7d01a64370bac5625ecad24e9d7be70944e5333943b69c4536441f1f6.scope: Deactivated successfully.
Dec  3 19:03:52 compute-0 podman[453700]: 2025-12-03 19:03:52.313168072 +0000 UTC m=+0.258904350 container died 7e08e4c7d01a64370bac5625ecad24e9d7be70944e5333943b69c4536441f1f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  3 19:03:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-b1776518fa9287c37deefb7c618c38a5497d416fb2eead57f8fe799563b5334a-merged.mount: Deactivated successfully.
Dec  3 19:03:52 compute-0 podman[453700]: 2025-12-03 19:03:52.393773959 +0000 UTC m=+0.339510217 container remove 7e08e4c7d01a64370bac5625ecad24e9d7be70944e5333943b69c4536441f1f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec  3 19:03:52 compute-0 systemd[1]: libpod-conmon-7e08e4c7d01a64370bac5625ecad24e9d7be70944e5333943b69c4536441f1f6.scope: Deactivated successfully.
Dec  3 19:03:52 compute-0 podman[453739]: 2025-12-03 19:03:52.635816437 +0000 UTC m=+0.073111066 container create 70f7b175a468d70011b3b7eee1b39e1ef39c006e7c8961d89ee448f2a94188dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_knuth, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 19:03:52 compute-0 podman[453739]: 2025-12-03 19:03:52.608311935 +0000 UTC m=+0.045606534 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:03:52 compute-0 systemd[1]: Started libpod-conmon-70f7b175a468d70011b3b7eee1b39e1ef39c006e7c8961d89ee448f2a94188dc.scope.
Dec  3 19:03:52 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:03:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dabc87cf43480bac680cce223fc852de3141e05c6c2f2754d10003efa7d0214/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 19:03:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dabc87cf43480bac680cce223fc852de3141e05c6c2f2754d10003efa7d0214/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 19:03:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dabc87cf43480bac680cce223fc852de3141e05c6c2f2754d10003efa7d0214/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 19:03:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dabc87cf43480bac680cce223fc852de3141e05c6c2f2754d10003efa7d0214/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
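[annotation] The four xfs lines above are the kernel flagging bind-mounted paths whose on-disk inode timestamps are 32-bit; 0x7fffffff is the classic Y2038 cutoff. A one-line check of the date the kernel means:

    from datetime import datetime, timezone

    # 0x7fffffff seconds after the epoch is the 32-bit time_t limit.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00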
Dec  3 19:03:52 compute-0 podman[453739]: 2025-12-03 19:03:52.799978263 +0000 UTC m=+0.237272882 container init 70f7b175a468d70011b3b7eee1b39e1ef39c006e7c8961d89ee448f2a94188dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec  3 19:03:52 compute-0 rsyslogd[188590]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  3 19:03:52 compute-0 podman[453739]: 2025-12-03 19:03:52.818171347 +0000 UTC m=+0.255465976 container start 70f7b175a468d70011b3b7eee1b39e1ef39c006e7c8961d89ee448f2a94188dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_knuth, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec  3 19:03:52 compute-0 podman[453739]: 2025-12-03 19:03:52.824530173 +0000 UTC m=+0.261824852 container attach 70f7b175a468d70011b3b7eee1b39e1ef39c006e7c8961d89ee448f2a94188dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:03:52 compute-0 nova_compute[348325]: 2025-12-03 19:03:52.946 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:03:53 compute-0 nova_compute[348325]: 2025-12-03 19:03:53.568 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:03:53 compute-0 distracted_knuth[453756]: {
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:    "0": [
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:        {
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:            "devices": [
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:                "/dev/loop3"
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:            ],
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:            "lv_name": "ceph_lv0",
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:            "lv_size": "21470642176",
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=973fbbc8-5aff-4a53-bee8-42e5a6788dd6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:            "lv_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:            "name": "ceph_lv0",
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:            "tags": {
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:                "ceph.block_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:                "ceph.cephx_lockbox_secret": "",
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:                "ceph.cluster_name": "ceph",
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:                "ceph.crush_device_class": "",
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:                "ceph.encrypted": "0",
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:                "ceph.osd_fsid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:                "ceph.osd_id": "0",
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:                "ceph.type": "block",
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:                "ceph.vdo": "0"
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:            },
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:            "type": "block",
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:            "vg_name": "ceph_vg0"
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:        }
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:    ],
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:    "1": [
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:        {
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:            "devices": [
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:                "/dev/loop4"
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:            ],
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:            "lv_name": "ceph_lv1",
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:            "lv_size": "21470642176",
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1e2b0083-5293-47cb-a3d1-bc27cedc4ede,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:            "lv_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:            "name": "ceph_lv1",
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:            "tags": {
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:                "ceph.block_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:                "ceph.cephx_lockbox_secret": "",
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:                "ceph.cluster_name": "ceph",
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:                "ceph.crush_device_class": "",
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:                "ceph.encrypted": "0",
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:                "ceph.osd_fsid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:                "ceph.osd_id": "1",
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:                "ceph.type": "block",
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:                "ceph.vdo": "0"
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:            },
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:            "type": "block",
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:            "vg_name": "ceph_vg1"
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:        }
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:    ],
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:    "2": [
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:        {
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:            "devices": [
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:                "/dev/loop5"
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:            ],
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:            "lv_name": "ceph_lv2",
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:            "lv_size": "21470642176",
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2abec9de-afba-437e-9a17-384a1dd8cd50,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:            "lv_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:            "name": "ceph_lv2",
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:            "tags": {
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:                "ceph.block_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:                "ceph.cephx_lockbox_secret": "",
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:                "ceph.cluster_name": "ceph",
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:                "ceph.crush_device_class": "",
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:                "ceph.encrypted": "0",
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:                "ceph.osd_fsid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:                "ceph.osd_id": "2",
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:                "ceph.type": "block",
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:                "ceph.vdo": "0"
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:            },
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:            "type": "block",
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:            "vg_name": "ceph_vg2"
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:        }
Dec  3 19:03:53 compute-0 distracted_knuth[453756]:    ]
Dec  3 19:03:53 compute-0 distracted_knuth[453756]: }
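[annotation] The distracted_knuth block above is ceph-volume lvm list JSON output captured by cephadm: a map from OSD id to the logical volumes backing it, with the cluster and OSD fsids carried as LV tags. A small sketch that reduces it to an osd -> device table; reading from a file named lvm_list.json is a hypothetical stand-in for however the payload is captured:

    import json

    with open("lvm_list.json") as f:  # hypothetical capture of the JSON above
        listing = json.load(f)

    for osd_id, lvs in sorted(listing.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            print(osd_id, lv["lv_path"], ",".join(lv["devices"]))
    # 0 /dev/ceph_vg0/ceph_lv0 /dev/loop3
    # 1 /dev/ceph_vg1/ceph_lv1 /dev/loop4
    # 2 /dev/ceph_vg2/ceph_lv2 /dev/loop5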
Dec  3 19:03:53 compute-0 systemd[1]: libpod-70f7b175a468d70011b3b7eee1b39e1ef39c006e7c8961d89ee448f2a94188dc.scope: Deactivated successfully.
Dec  3 19:03:53 compute-0 podman[453739]: 2025-12-03 19:03:53.662128396 +0000 UTC m=+1.099423055 container died 70f7b175a468d70011b3b7eee1b39e1ef39c006e7c8961d89ee448f2a94188dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_knuth, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec  3 19:03:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-4dabc87cf43480bac680cce223fc852de3141e05c6c2f2754d10003efa7d0214-merged.mount: Deactivated successfully.
Dec  3 19:03:53 compute-0 podman[453739]: 2025-12-03 19:03:53.750949964 +0000 UTC m=+1.188244563 container remove 70f7b175a468d70011b3b7eee1b39e1ef39c006e7c8961d89ee448f2a94188dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_knuth, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  3 19:03:53 compute-0 systemd[1]: libpod-conmon-70f7b175a468d70011b3b7eee1b39e1ef39c006e7c8961d89ee448f2a94188dc.scope: Deactivated successfully.
Dec  3 19:03:53 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1950: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:03:54 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:03:54 compute-0 podman[453915]: 2025-12-03 19:03:54.616709306 +0000 UTC m=+0.067989851 container create be84da17a4f4151704e7e7ef0fc7bf1ff01c95e7a369c5f4e8bd0c209a4d47be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec  3 19:03:54 compute-0 podman[453915]: 2025-12-03 19:03:54.580151553 +0000 UTC m=+0.031432078 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:03:54 compute-0 systemd[1]: Started libpod-conmon-be84da17a4f4151704e7e7ef0fc7bf1ff01c95e7a369c5f4e8bd0c209a4d47be.scope.
Dec  3 19:03:54 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:03:54 compute-0 podman[453915]: 2025-12-03 19:03:54.753517905 +0000 UTC m=+0.204798510 container init be84da17a4f4151704e7e7ef0fc7bf1ff01c95e7a369c5f4e8bd0c209a4d47be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_germain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 19:03:54 compute-0 podman[453915]: 2025-12-03 19:03:54.771010932 +0000 UTC m=+0.222291437 container start be84da17a4f4151704e7e7ef0fc7bf1ff01c95e7a369c5f4e8bd0c209a4d47be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_germain, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  3 19:03:54 compute-0 podman[453915]: 2025-12-03 19:03:54.775409879 +0000 UTC m=+0.226690434 container attach be84da17a4f4151704e7e7ef0fc7bf1ff01c95e7a369c5f4e8bd0c209a4d47be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_germain, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec  3 19:03:54 compute-0 systemd[1]: libpod-be84da17a4f4151704e7e7ef0fc7bf1ff01c95e7a369c5f4e8bd0c209a4d47be.scope: Deactivated successfully.
Dec  3 19:03:54 compute-0 agitated_germain[453931]: 167 167
Dec  3 19:03:54 compute-0 conmon[453931]: conmon be84da17a4f4151704e7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-be84da17a4f4151704e7e7ef0fc7bf1ff01c95e7a369c5f4e8bd0c209a4d47be.scope/container/memory.events
Dec  3 19:03:54 compute-0 podman[453915]: 2025-12-03 19:03:54.790256092 +0000 UTC m=+0.241536597 container died be84da17a4f4151704e7e7ef0fc7bf1ff01c95e7a369c5f4e8bd0c209a4d47be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_germain, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  3 19:03:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-f194c3ae0fdb0f719cf65b37a1bc91ad2113760a0bfaaca5925c3992c2fc12ea-merged.mount: Deactivated successfully.
Dec  3 19:03:54 compute-0 podman[453915]: 2025-12-03 19:03:54.84837915 +0000 UTC m=+0.299659675 container remove be84da17a4f4151704e7e7ef0fc7bf1ff01c95e7a369c5f4e8bd0c209a4d47be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec  3 19:03:54 compute-0 systemd[1]: libpod-conmon-be84da17a4f4151704e7e7ef0fc7bf1ff01c95e7a369c5f4e8bd0c209a4d47be.scope: Deactivated successfully.
Dec  3 19:03:55 compute-0 podman[453955]: 2025-12-03 19:03:55.080440184 +0000 UTC m=+0.060785234 container create 94799c9d6e23bb79e3a32f58836ecdd861f27de8e06ec2eb99913efb5ce52ae5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_rosalind, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec  3 19:03:55 compute-0 systemd[1]: Started libpod-conmon-94799c9d6e23bb79e3a32f58836ecdd861f27de8e06ec2eb99913efb5ce52ae5.scope.
Dec  3 19:03:55 compute-0 podman[453955]: 2025-12-03 19:03:55.054772308 +0000 UTC m=+0.035071687 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:03:55 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:03:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/700dc4c2ab3a29d825cb5e4a6787fc6556d89b66b7dcf4e72a27b122d79bf05b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 19:03:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/700dc4c2ab3a29d825cb5e4a6787fc6556d89b66b7dcf4e72a27b122d79bf05b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 19:03:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/700dc4c2ab3a29d825cb5e4a6787fc6556d89b66b7dcf4e72a27b122d79bf05b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 19:03:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/700dc4c2ab3a29d825cb5e4a6787fc6556d89b66b7dcf4e72a27b122d79bf05b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 19:03:55 compute-0 podman[453955]: 2025-12-03 19:03:55.204038611 +0000 UTC m=+0.184338010 container init 94799c9d6e23bb79e3a32f58836ecdd861f27de8e06ec2eb99913efb5ce52ae5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_rosalind, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  3 19:03:55 compute-0 podman[453955]: 2025-12-03 19:03:55.229565784 +0000 UTC m=+0.209865153 container start 94799c9d6e23bb79e3a32f58836ecdd861f27de8e06ec2eb99913efb5ce52ae5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 19:03:55 compute-0 podman[453955]: 2025-12-03 19:03:55.235405217 +0000 UTC m=+0.215704566 container attach 94799c9d6e23bb79e3a32f58836ecdd861f27de8e06ec2eb99913efb5ce52ae5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec  3 19:03:55 compute-0 podman[453968]: 2025-12-03 19:03:55.236835442 +0000 UTC m=+0.102525985 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 19:03:55 compute-0 podman[453971]: 2025-12-03 19:03:55.241979286 +0000 UTC m=+0.104773517 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.buildah.version=1.33.7, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, release=1755695350, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public)
Dec  3 19:03:55 compute-0 podman[453967]: 2025-12-03 19:03:55.251229173 +0000 UTC m=+0.114452885 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
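[annotation] The three health_status events above come from podman's per-container healthcheck timers; the 'test' command in each config_data entry is what runs inside the container. The same check can be driven by hand, assuming the container names shown in the log:

    import subprocess

    for name in ("node_exporter", "openstack_network_exporter", "multipathd"):
        # `podman healthcheck run` executes the configured test; exit 0 means healthy.
        rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
        print(name, "healthy" if rc == 0 else f"unhealthy (rc={rc})")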
Dec  3 19:03:55 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1951: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:03:56 compute-0 focused_rosalind[453987]: {
Dec  3 19:03:56 compute-0 focused_rosalind[453987]:    "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
Dec  3 19:03:56 compute-0 focused_rosalind[453987]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:03:56 compute-0 focused_rosalind[453987]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 19:03:56 compute-0 focused_rosalind[453987]:        "osd_id": 1,
Dec  3 19:03:56 compute-0 focused_rosalind[453987]:        "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 19:03:56 compute-0 focused_rosalind[453987]:        "type": "bluestore"
Dec  3 19:03:56 compute-0 focused_rosalind[453987]:    },
Dec  3 19:03:56 compute-0 focused_rosalind[453987]:    "2abec9de-afba-437e-9a17-384a1dd8cd50": {
Dec  3 19:03:56 compute-0 focused_rosalind[453987]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:03:56 compute-0 focused_rosalind[453987]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 19:03:56 compute-0 focused_rosalind[453987]:        "osd_id": 2,
Dec  3 19:03:56 compute-0 focused_rosalind[453987]:        "osd_uuid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 19:03:56 compute-0 focused_rosalind[453987]:        "type": "bluestore"
Dec  3 19:03:56 compute-0 focused_rosalind[453987]:    },
Dec  3 19:03:56 compute-0 focused_rosalind[453987]:    "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": {
Dec  3 19:03:56 compute-0 focused_rosalind[453987]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:03:56 compute-0 focused_rosalind[453987]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 19:03:56 compute-0 focused_rosalind[453987]:        "osd_id": 0,
Dec  3 19:03:56 compute-0 focused_rosalind[453987]:        "osd_uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 19:03:56 compute-0 focused_rosalind[453987]:        "type": "bluestore"
Dec  3 19:03:56 compute-0 focused_rosalind[453987]:    }
Dec  3 19:03:56 compute-0 focused_rosalind[453987]: }
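[annotation] The focused_rosalind block above looks like ceph-volume raw list output: one entry per OSD uuid, each pointing at its bluestore device-mapper path. A sketch that cross-checks the obvious invariant, that every OSD reports the same ceph_fsid (the file name is again a hypothetical capture):

    import json

    with open("raw_list.json") as f:  # hypothetical capture of the JSON above
        raw = json.load(f)

    fsids = {entry["ceph_fsid"] for entry in raw.values()}
    assert len(fsids) == 1, f"OSDs from more than one cluster: {fsids}"
    for entry in sorted(raw.values(), key=lambda e: e["osd_id"]):
        print(entry["osd_id"], entry["device"], entry["type"])
    # 0 /dev/mapper/ceph_vg0-ceph_lv0 bluestore
    # 1 /dev/mapper/ceph_vg1-ceph_lv1 bluestore
    # 2 /dev/mapper/ceph_vg2-ceph_lv2 bluestore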
Dec  3 19:03:56 compute-0 systemd[1]: libpod-94799c9d6e23bb79e3a32f58836ecdd861f27de8e06ec2eb99913efb5ce52ae5.scope: Deactivated successfully.
Dec  3 19:03:56 compute-0 podman[453955]: 2025-12-03 19:03:56.26308078 +0000 UTC m=+1.243380129 container died 94799c9d6e23bb79e3a32f58836ecdd861f27de8e06ec2eb99913efb5ce52ae5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_rosalind, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 19:03:56 compute-0 systemd[1]: libpod-94799c9d6e23bb79e3a32f58836ecdd861f27de8e06ec2eb99913efb5ce52ae5.scope: Consumed 1.028s CPU time.
Dec  3 19:03:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-700dc4c2ab3a29d825cb5e4a6787fc6556d89b66b7dcf4e72a27b122d79bf05b-merged.mount: Deactivated successfully.
Dec  3 19:03:56 compute-0 podman[453955]: 2025-12-03 19:03:56.338981812 +0000 UTC m=+1.319281161 container remove 94799c9d6e23bb79e3a32f58836ecdd861f27de8e06ec2eb99913efb5ce52ae5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec  3 19:03:56 compute-0 systemd[1]: libpod-conmon-94799c9d6e23bb79e3a32f58836ecdd861f27de8e06ec2eb99913efb5ce52ae5.scope: Deactivated successfully.
Dec  3 19:03:56 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 19:03:56 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:03:56 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 19:03:56 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:03:56 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 9c56ddad-7cba-440d-ae29-d73d81f6c243 does not exist
Dec  3 19:03:56 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 2117c5c8-0471-4a45-bf2b-75a5e7aa7e14 does not exist
Dec  3 19:03:57 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:03:57 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:03:57 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1952: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:03:57 compute-0 nova_compute[348325]: 2025-12-03 19:03:57.949 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:03:58 compute-0 nova_compute[348325]: 2025-12-03 19:03:58.572 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:03:59 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:03:59 compute-0 podman[158200]: time="2025-12-03T19:03:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 19:03:59 compute-0 podman[158200]: @ - - [03/Dec/2025:19:03:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 19:03:59 compute-0 podman[158200]: @ - - [03/Dec/2025:19:03:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8652 "" "Go-http-client/1.1"
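[annotation] The two access-log lines above are prometheus-podman-exporter polling the libpod REST API over the podman socket. A stdlib-only sketch of the same containers/json request, assuming the socket path /run/podman/podman.sock that appears in the exporter's config later in this log:

    import http.client, json, socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client over an AF_UNIX socket; the libpod API has no TCP port here."""
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path
        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.unix_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")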
Dec  3 19:03:59 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1953: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:03:59 compute-0 podman[454126]: 2025-12-03 19:03:59.933967927 +0000 UTC m=+0.097958671 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Dec  3 19:03:59 compute-0 podman[454127]: 2025-12-03 19:03:59.938766135 +0000 UTC m=+0.086390030 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  3 19:03:59 compute-0 podman[454125]: 2025-12-03 19:03:59.953130715 +0000 UTC m=+0.110297663 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, architecture=x86_64, build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, maintainer=Red Hat, Inc., release=1214.1726694543, distribution-scope=public, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, version=9.4, container_name=kepler, io.buildah.version=1.29.0, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Dec  3 19:04:01 compute-0 openstack_network_exporter[365222]: ERROR   19:04:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:04:01 compute-0 openstack_network_exporter[365222]: ERROR   19:04:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:04:01 compute-0 openstack_network_exporter[365222]: ERROR   19:04:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 19:04:01 compute-0 openstack_network_exporter[365222]: ERROR   19:04:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 19:04:01 compute-0 openstack_network_exporter[365222]: ERROR   19:04:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
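[annotation] These exporter errors are expected on a compute node: openstack_network_exporter locates each daemon through its ovs-appctl control socket, and neither ovn-northd nor a local ovsdb-server control socket exists here. A sketch of that discovery step, with the rundir glob patterns as assumptions rather than values taken from this log:

    from glob import glob

    # Look for appctl control sockets the way the exporter does (sketch).
    for pattern in ("/var/run/openvswitch/*.ctl", "/var/run/ovn/*.ctl"):
        hits = glob(pattern)
        print(pattern, "->", hits or "no control socket files found")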
Dec  3 19:04:01 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1954: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:04:02 compute-0 nova_compute[348325]: 2025-12-03 19:04:02.952 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:04:03 compute-0 nova_compute[348325]: 2025-12-03 19:04:03.575 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:04:03 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1955: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:04:04 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:04:05 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1956: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:04:07 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1957: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:04:07 compute-0 nova_compute[348325]: 2025-12-03 19:04:07.956 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:04:08 compute-0 nova_compute[348325]: 2025-12-03 19:04:08.578 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:04:09 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:04:09 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1958: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:04:09 compute-0 podman[454178]: 2025-12-03 19:04:09.978007461 +0000 UTC m=+0.135122389 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  3 19:04:11 compute-0 ovn_controller[89305]: 2025-12-03T19:04:11Z|00174|memory_trim|INFO|Detected inactivity (last active 30007 ms ago): trimming memory
Dec  3 19:04:11 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1959: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:04:12 compute-0 nova_compute[348325]: 2025-12-03 19:04:12.961 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:04:13 compute-0 nova_compute[348325]: 2025-12-03 19:04:13.580 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:04:13 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1960: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:04:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:04:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:04:13 compute-0 podman[454203]: 2025-12-03 19:04:13.989100163 +0000 UTC m=+0.133400647 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, io.buildah.version=1.41.4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible)
Dec  3 19:04:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:04:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:04:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:04:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:04:14 compute-0 podman[454202]: 2025-12-03 19:04:14.00742637 +0000 UTC m=+0.175046994 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec  3 19:04:14 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_19:04:14
Dec  3 19:04:14 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 19:04:14 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 19:04:14 compute-0 ceph-mgr[193091]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'vms', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', 'backups', '.mgr', 'images', '.rgw.root', 'volumes']
Dec  3 19:04:14 compute-0 ceph-mgr[193091]: [balancer INFO root] prepared 0/10 changes
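[annotation] "prepared 0/10 changes" means the upmap balancer walked its candidate pools and found nothing worth moving: the run is gated on at most 5% misplaced PGs, and all 321 PGs are active+clean. A toy check of that gate (the function name is illustrative, not Ceph's):

    def may_balance(misplaced_pgs: int, total_pgs: int,
                    max_misplaced: float = 0.05) -> bool:
        # 0 of 321 PGs misplaced -> 0.0, comfortably under the 5% gate.
        return (misplaced_pgs / total_pgs) <= max_misplaced

    print(may_balance(0, 321))  # True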
Dec  3 19:04:14 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:04:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 19:04:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 19:04:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 19:04:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 19:04:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 19:04:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 19:04:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 19:04:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 19:04:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 19:04:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 19:04:15 compute-0 nova_compute[348325]: 2025-12-03 19:04:15.097 348329 DEBUG oslo_concurrency.lockutils [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Acquiring lock "a364994c-8442-4a4c-bd6b-f3a2d31e4483" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 19:04:15 compute-0 nova_compute[348325]: 2025-12-03 19:04:15.098 348329 DEBUG oslo_concurrency.lockutils [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Lock "a364994c-8442-4a4c-bd6b-f3a2d31e4483" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 19:04:15 compute-0 nova_compute[348325]: 2025-12-03 19:04:15.129 348329 DEBUG nova.compute.manager [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  3 19:04:15 compute-0 nova_compute[348325]: 2025-12-03 19:04:15.225 348329 DEBUG oslo_concurrency.lockutils [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 19:04:15 compute-0 nova_compute[348325]: 2025-12-03 19:04:15.226 348329 DEBUG oslo_concurrency.lockutils [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 19:04:15 compute-0 nova_compute[348325]: 2025-12-03 19:04:15.240 348329 DEBUG nova.virt.hardware [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  3 19:04:15 compute-0 nova_compute[348325]: 2025-12-03 19:04:15.241 348329 INFO nova.compute.claims [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  3 19:04:15 compute-0 nova_compute[348325]: 2025-12-03 19:04:15.399 348329 DEBUG oslo_concurrency.processutils [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 19:04:15 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 19:04:15 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/710735621' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 19:04:15 compute-0 nova_compute[348325]: 2025-12-03 19:04:15.917 348329 DEBUG oslo_concurrency.processutils [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
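[annotation] nova's RBD image backend shells out to `ceph df --format=json` (the mon_command dispatched two lines up) to learn how much space the vms pool can still absorb. A sketch of the same probe, assuming the client.openstack identity shown in this log and the usual ceph df JSON shape (pools[].stats.max_avail):

    import json, subprocess

    raw = subprocess.run(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    for pool in json.loads(raw)["pools"]:
        if pool["name"] == "vms":
            print(pool["stats"]["max_avail"], "bytes free in pool vms")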
Dec  3 19:04:15 compute-0 nova_compute[348325]: 2025-12-03 19:04:15.931 348329 DEBUG nova.compute.provider_tree [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 19:04:15 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1961: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:04:15 compute-0 nova_compute[348325]: 2025-12-03 19:04:15.956 348329 DEBUG nova.scheduler.client.report [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
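[annotation] Placement turns that inventory into usable capacity as (total - reserved) * allocation_ratio, so the unchanged record above still advertises 32 vCPUs, 7167 MB of RAM and 52.2 GB of disk. A quick check of the arithmetic, using the inventory values from the line above:

    inv = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, d in inv.items():
        print(rc, round((d["total"] - d["reserved"]) * d["allocation_ratio"], 1))
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2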
Dec  3 19:04:15 compute-0 nova_compute[348325]: 2025-12-03 19:04:15.990 348329 DEBUG oslo_concurrency.lockutils [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.764s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 19:04:15 compute-0 nova_compute[348325]: 2025-12-03 19:04:15.992 348329 DEBUG nova.compute.manager [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  3 19:04:16 compute-0 nova_compute[348325]: 2025-12-03 19:04:16.073 348329 DEBUG nova.compute.manager [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  3 19:04:16 compute-0 nova_compute[348325]: 2025-12-03 19:04:16.075 348329 DEBUG nova.network.neutron [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  3 19:04:16 compute-0 nova_compute[348325]: 2025-12-03 19:04:16.103 348329 INFO nova.virt.libvirt.driver [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  3 19:04:16 compute-0 nova_compute[348325]: 2025-12-03 19:04:16.146 348329 DEBUG nova.compute.manager [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  3 19:04:16 compute-0 nova_compute[348325]: 2025-12-03 19:04:16.580 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:04:16 compute-0 nova_compute[348325]: 2025-12-03 19:04:16.641 348329 DEBUG nova.compute.manager [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  3 19:04:16 compute-0 nova_compute[348325]: 2025-12-03 19:04:16.644 348329 DEBUG nova.virt.libvirt.driver [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  3 19:04:16 compute-0 nova_compute[348325]: 2025-12-03 19:04:16.645 348329 INFO nova.virt.libvirt.driver [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Creating image(s)#033[00m
Dec  3 19:04:16 compute-0 nova_compute[348325]: 2025-12-03 19:04:16.706 348329 DEBUG nova.storage.rbd_utils [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] rbd image a364994c-8442-4a4c-bd6b-f3a2d31e4483_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 19:04:16 compute-0 nova_compute[348325]: 2025-12-03 19:04:16.778 348329 DEBUG nova.storage.rbd_utils [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] rbd image a364994c-8442-4a4c-bd6b-f3a2d31e4483_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 19:04:16 compute-0 nova_compute[348325]: 2025-12-03 19:04:16.837 348329 DEBUG nova.storage.rbd_utils [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] rbd image a364994c-8442-4a4c-bd6b-f3a2d31e4483_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 19:04:16 compute-0 nova_compute[348325]: 2025-12-03 19:04:16.857 348329 DEBUG oslo_concurrency.processutils [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/fef3ab1a1bec0408321642c1c8701f42866ad11c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 19:04:16 compute-0 nova_compute[348325]: 2025-12-03 19:04:16.954 348329 DEBUG oslo_concurrency.processutils [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/fef3ab1a1bec0408321642c1c8701f42866ad11c --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
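[annotation] The oslo_concurrency.prlimit wrapper above caps qemu-img at 1 GiB of address space (--as=1073741824) and 30 s of CPU so that probing an untrusted image cannot exhaust the host. A stdlib-only equivalent of that guard (a sketch, not nova's actual helper):

    import json, resource, subprocess

    def limited_qemu_img_info(path: str) -> dict:
        def set_limits():
            # Applied in the child just before exec, like prlimit does.
            resource.setrlimit(resource.RLIMIT_AS, (1 << 30, 1 << 30))
            resource.setrlimit(resource.RLIMIT_CPU, (30, 30))
        out = subprocess.run(
            ["qemu-img", "info", path, "--force-share", "--output=json"],
            check=True, capture_output=True, text=True, preexec_fn=set_limits,
        ).stdout
        return json.loads(out)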
Dec  3 19:04:16 compute-0 nova_compute[348325]: 2025-12-03 19:04:16.957 348329 DEBUG oslo_concurrency.lockutils [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Acquiring lock "fef3ab1a1bec0408321642c1c8701f42866ad11c" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 19:04:16 compute-0 nova_compute[348325]: 2025-12-03 19:04:16.958 348329 DEBUG oslo_concurrency.lockutils [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Lock "fef3ab1a1bec0408321642c1c8701f42866ad11c" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 19:04:16 compute-0 nova_compute[348325]: 2025-12-03 19:04:16.959 348329 DEBUG oslo_concurrency.lockutils [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Lock "fef3ab1a1bec0408321642c1c8701f42866ad11c" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 19:04:17 compute-0 nova_compute[348325]: 2025-12-03 19:04:17.005 348329 DEBUG nova.storage.rbd_utils [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] rbd image a364994c-8442-4a4c-bd6b-f3a2d31e4483_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 19:04:17 compute-0 nova_compute[348325]: 2025-12-03 19:04:17.017 348329 DEBUG oslo_concurrency.processutils [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/fef3ab1a1bec0408321642c1c8701f42866ad11c a364994c-8442-4a4c-bd6b-f3a2d31e4483_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 19:04:17 compute-0 nova_compute[348325]: 2025-12-03 19:04:17.247 348329 DEBUG nova.policy [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5b5e6c2a7cce4e3b96611203def80123', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd29cef7b24ee4d30b2b3f5027ec6aafb', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  3 19:04:17 compute-0 nova_compute[348325]: 2025-12-03 19:04:17.468 348329 DEBUG oslo_concurrency.processutils [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/fef3ab1a1bec0408321642c1c8701f42866ad11c a364994c-8442-4a4c-bd6b-f3a2d31e4483_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 19:04:17 compute-0 nova_compute[348325]: 2025-12-03 19:04:17.658 348329 DEBUG nova.storage.rbd_utils [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] resizing rbd image a364994c-8442-4a4c-bd6b-f3a2d31e4483_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  3 19:04:17 compute-0 nova_compute[348325]: 2025-12-03 19:04:17.904 348329 DEBUG nova.objects.instance [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Lazy-loading 'migration_context' on Instance uuid a364994c-8442-4a4c-bd6b-f3a2d31e4483 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 19:04:17 compute-0 nova_compute[348325]: 2025-12-03 19:04:17.925 348329 DEBUG nova.virt.libvirt.driver [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  3 19:04:17 compute-0 nova_compute[348325]: 2025-12-03 19:04:17.926 348329 DEBUG nova.virt.libvirt.driver [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Ensure instance console log exists: /var/lib/nova/instances/a364994c-8442-4a4c-bd6b-f3a2d31e4483/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  3 19:04:17 compute-0 nova_compute[348325]: 2025-12-03 19:04:17.928 348329 DEBUG oslo_concurrency.lockutils [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 19:04:17 compute-0 nova_compute[348325]: 2025-12-03 19:04:17.929 348329 DEBUG oslo_concurrency.lockutils [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 19:04:17 compute-0 nova_compute[348325]: 2025-12-03 19:04:17.929 348329 DEBUG oslo_concurrency.lockutils [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 19:04:17 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1962: 321 pgs: 321 active+clean; 173 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 595 KiB/s wr, 1 op/s
Dec  3 19:04:17 compute-0 nova_compute[348325]: 2025-12-03 19:04:17.965 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:04:18 compute-0 nova_compute[348325]: 2025-12-03 19:04:18.085 348329 DEBUG nova.network.neutron [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Successfully created port: b761f609-2787-4aa2-9b1c-cc5b41d2373d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  3 19:04:18 compute-0 nova_compute[348325]: 2025-12-03 19:04:18.582 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:04:19 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:04:19 compute-0 ceph-osd[206694]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 19:04:19 compute-0 ceph-osd[206694]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 3600.2 total, 600.0 interval
Cumulative writes: 10K writes, 39K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
Cumulative WAL: 10K writes, 2751 syncs, 3.70 writes per sync, written: 0.03 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 2633 writes, 10K keys, 2633 commit groups, 1.0 writes per commit group, ingest: 11.43 MB, 0.02 MB/s
Interval WAL: 2633 writes, 1038 syncs, 2.54 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
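[annotation] The interval figures above are self-consistent: 2633 WAL writes over 1038 syncs is the reported 2.54 writes per sync, and 11.43 MB ingested over the 600 s interval matches the 0.02 MB/s rate. A trivial check:

    writes, syncs, ingest_mb, interval_s = 2633, 1038, 11.43, 600.0
    print(round(writes / syncs, 2), round(ingest_mb / interval_s, 2))  # 2.54 0.02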
Dec  3 19:04:19 compute-0 nova_compute[348325]: 2025-12-03 19:04:19.562 348329 DEBUG nova.network.neutron [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Successfully updated port: b761f609-2787-4aa2-9b1c-cc5b41d2373d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  3 19:04:19 compute-0 nova_compute[348325]: 2025-12-03 19:04:19.578 348329 DEBUG oslo_concurrency.lockutils [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Acquiring lock "refresh_cache-a364994c-8442-4a4c-bd6b-f3a2d31e4483" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 19:04:19 compute-0 nova_compute[348325]: 2025-12-03 19:04:19.580 348329 DEBUG oslo_concurrency.lockutils [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Acquired lock "refresh_cache-a364994c-8442-4a4c-bd6b-f3a2d31e4483" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 19:04:19 compute-0 nova_compute[348325]: 2025-12-03 19:04:19.580 348329 DEBUG nova.network.neutron [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  3 19:04:19 compute-0 nova_compute[348325]: 2025-12-03 19:04:19.700 348329 DEBUG nova.compute.manager [req-1ae750ae-7bf9-49b4-92fc-749078e927d4 req-01e78a6d-4153-4e2e-a027-a6592aa79dd6 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Received event network-changed-b761f609-2787-4aa2-9b1c-cc5b41d2373d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 19:04:19 compute-0 nova_compute[348325]: 2025-12-03 19:04:19.701 348329 DEBUG nova.compute.manager [req-1ae750ae-7bf9-49b4-92fc-749078e927d4 req-01e78a6d-4153-4e2e-a027-a6592aa79dd6 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Refreshing instance network info cache due to event network-changed-b761f609-2787-4aa2-9b1c-cc5b41d2373d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  3 19:04:19 compute-0 nova_compute[348325]: 2025-12-03 19:04:19.702 348329 DEBUG oslo_concurrency.lockutils [req-1ae750ae-7bf9-49b4-92fc-749078e927d4 req-01e78a6d-4153-4e2e-a027-a6592aa79dd6 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "refresh_cache-a364994c-8442-4a4c-bd6b-f3a2d31e4483" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 19:04:19 compute-0 nova_compute[348325]: 2025-12-03 19:04:19.883 348329 DEBUG nova.network.neutron [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  3 19:04:19 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1963: 321 pgs: 321 active+clean; 194 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 MiB/s wr, 14 op/s
Dec  3 19:04:20 compute-0 nova_compute[348325]: 2025-12-03 19:04:20.689 348329 DEBUG nova.network.neutron [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Updating instance_info_cache with network_info: [{"id": "b761f609-2787-4aa2-9b1c-cc5b41d2373d", "address": "fa:16:3e:2c:da:52", "network": {"id": "04e258c0-609e-4010-a306-af20506c3a9d", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.71", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d29cef7b24ee4d30b2b3f5027ec6aafb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb761f609-27", "ovs_interfaceid": "b761f609-2787-4aa2-9b1c-cc5b41d2373d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
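[annotation] The cached network_info entry above is what the virt driver consumes when plugging the VIF; the fixed IP, MTU, bridge and OVN binding details all travel in that one JSON document. A sketch that pulls the fixed IP and MTU out of an entry shaped like the one just logged (trimmed to the fields the snippet touches):

    import json

    entry = json.loads("""
    {"id": "b761f609-2787-4aa2-9b1c-cc5b41d2373d",
     "network": {"subnets": [{"ips": [{"address": "10.100.3.71"}]}],
                 "meta": {"mtu": 1442}}}
    """)
    print(entry["network"]["subnets"][0]["ips"][0]["address"],
          entry["network"]["meta"]["mtu"])   # 10.100.3.71 1442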
Dec  3 19:04:20 compute-0 nova_compute[348325]: 2025-12-03 19:04:20.708 348329 DEBUG oslo_concurrency.lockutils [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Releasing lock "refresh_cache-a364994c-8442-4a4c-bd6b-f3a2d31e4483" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 19:04:20 compute-0 nova_compute[348325]: 2025-12-03 19:04:20.709 348329 DEBUG nova.compute.manager [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Instance network_info: |[{"id": "b761f609-2787-4aa2-9b1c-cc5b41d2373d", "address": "fa:16:3e:2c:da:52", "network": {"id": "04e258c0-609e-4010-a306-af20506c3a9d", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.71", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d29cef7b24ee4d30b2b3f5027ec6aafb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb761f609-27", "ovs_interfaceid": "b761f609-2787-4aa2-9b1c-cc5b41d2373d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  3 19:04:20 compute-0 nova_compute[348325]: 2025-12-03 19:04:20.710 348329 DEBUG oslo_concurrency.lockutils [req-1ae750ae-7bf9-49b4-92fc-749078e927d4 req-01e78a6d-4153-4e2e-a027-a6592aa79dd6 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquired lock "refresh_cache-a364994c-8442-4a4c-bd6b-f3a2d31e4483" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 19:04:20 compute-0 nova_compute[348325]: 2025-12-03 19:04:20.710 348329 DEBUG nova.network.neutron [req-1ae750ae-7bf9-49b4-92fc-749078e927d4 req-01e78a6d-4153-4e2e-a027-a6592aa79dd6 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Refreshing network info cache for port b761f609-2787-4aa2-9b1c-cc5b41d2373d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  3 19:04:20 compute-0 nova_compute[348325]: 2025-12-03 19:04:20.713 348329 DEBUG nova.virt.libvirt.driver [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Start _get_guest_xml network_info=[{"id": "b761f609-2787-4aa2-9b1c-cc5b41d2373d", "address": "fa:16:3e:2c:da:52", "network": {"id": "04e258c0-609e-4010-a306-af20506c3a9d", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.71", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d29cef7b24ee4d30b2b3f5027ec6aafb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb761f609-27", "ovs_interfaceid": "b761f609-2787-4aa2-9b1c-cc5b41d2373d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-03T18:59:15Z,direct_url=<?>,disk_format='qcow2',id=29e9e995-880d-46f8-bdd0-149d4e107ea9,min_disk=0,min_ram=0,name='tempest-scenario-img--508019753',owner='d29cef7b24ee4d30b2b3f5027ec6aafb',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-03T18:59:19Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'encryption_format': None, 'guest_format': None, 'disk_bus': 'virtio', 'size': 0, 'boot_index': 0, 'encryption_options': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'image_id': '29e9e995-880d-46f8-bdd0-149d4e107ea9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  3 19:04:20 compute-0 nova_compute[348325]: 2025-12-03 19:04:20.723 348329 WARNING nova.virt.libvirt.driver [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 19:04:20 compute-0 nova_compute[348325]: 2025-12-03 19:04:20.736 348329 DEBUG nova.virt.libvirt.host [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  3 19:04:20 compute-0 nova_compute[348325]: 2025-12-03 19:04:20.737 348329 DEBUG nova.virt.libvirt.host [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  3 19:04:20 compute-0 nova_compute[348325]: 2025-12-03 19:04:20.743 348329 DEBUG nova.virt.libvirt.host [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  3 19:04:20 compute-0 nova_compute[348325]: 2025-12-03 19:04:20.744 348329 DEBUG nova.virt.libvirt.host [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
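[annotation] The two probes above show a cgroup-v2-only host: no v1 cpu controller, but the unified hierarchy advertises one. On v2 the check reduces to reading the root controllers file; a minimal sketch (the path is the standard unified-hierarchy mount, an assumption here):

    from pathlib import Path

    controllers = Path("/sys/fs/cgroup/cgroup.controllers").read_text().split()
    print("CPU controller found" if "cpu" in controllers
          else "CPU controller missing")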
Dec  3 19:04:20 compute-0 nova_compute[348325]: 2025-12-03 19:04:20.745 348329 DEBUG nova.virt.libvirt.driver [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  3 19:04:20 compute-0 nova_compute[348325]: 2025-12-03 19:04:20.746 348329 DEBUG nova.virt.hardware [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-03T18:56:30Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a94cfbfb-a20a-4689-ac91-e7436db75880',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-03T18:59:15Z,direct_url=<?>,disk_format='qcow2',id=29e9e995-880d-46f8-bdd0-149d4e107ea9,min_disk=0,min_ram=0,name='tempest-scenario-img--508019753',owner='d29cef7b24ee4d30b2b3f5027ec6aafb',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-03T18:59:19Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  3 19:04:20 compute-0 nova_compute[348325]: 2025-12-03 19:04:20.747 348329 DEBUG nova.virt.hardware [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  3 19:04:20 compute-0 nova_compute[348325]: 2025-12-03 19:04:20.747 348329 DEBUG nova.virt.hardware [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  3 19:04:20 compute-0 nova_compute[348325]: 2025-12-03 19:04:20.747 348329 DEBUG nova.virt.hardware [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  3 19:04:20 compute-0 nova_compute[348325]: 2025-12-03 19:04:20.748 348329 DEBUG nova.virt.hardware [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  3 19:04:20 compute-0 nova_compute[348325]: 2025-12-03 19:04:20.748 348329 DEBUG nova.virt.hardware [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  3 19:04:20 compute-0 nova_compute[348325]: 2025-12-03 19:04:20.749 348329 DEBUG nova.virt.hardware [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  3 19:04:20 compute-0 nova_compute[348325]: 2025-12-03 19:04:20.749 348329 DEBUG nova.virt.hardware [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  3 19:04:20 compute-0 nova_compute[348325]: 2025-12-03 19:04:20.750 348329 DEBUG nova.virt.hardware [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  3 19:04:20 compute-0 nova_compute[348325]: 2025-12-03 19:04:20.750 348329 DEBUG nova.virt.hardware [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  3 19:04:20 compute-0 nova_compute[348325]: 2025-12-03 19:04:20.751 348329 DEBUG nova.virt.hardware [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  3 19:04:20 compute-0 nova_compute[348325]: 2025-12-03 19:04:20.755 348329 DEBUG oslo_concurrency.processutils [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 19:04:21 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 19:04:21 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/939292823' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 19:04:21 compute-0 nova_compute[348325]: 2025-12-03 19:04:21.221 348329 DEBUG oslo_concurrency.processutils [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 19:04:21 compute-0 nova_compute[348325]: 2025-12-03 19:04:21.274 348329 DEBUG nova.storage.rbd_utils [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] rbd image a364994c-8442-4a4c-bd6b-f3a2d31e4483_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 19:04:21 compute-0 nova_compute[348325]: 2025-12-03 19:04:21.285 348329 DEBUG oslo_concurrency.processutils [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 19:04:21 compute-0 nova_compute[348325]: 2025-12-03 19:04:21.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:04:21 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 19:04:21 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2753713616' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 19:04:21 compute-0 nova_compute[348325]: 2025-12-03 19:04:21.752 348329 DEBUG oslo_concurrency.processutils [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 19:04:21 compute-0 nova_compute[348325]: 2025-12-03 19:04:21.756 348329 DEBUG nova.virt.libvirt.vif [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T19:04:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-0714371-asg-eacwc356yfed-ehdrupxp3h3u-navxh3tm2qn5',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-0714371-asg-eacwc356yfed-ehdrupxp3h3u-navxh3tm2qn5',id=15,image_ref='29e9e995-880d-46f8-bdd0-149d4e107ea9',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='d721c97c-b9eb-44f9-a826-1b99239b172a'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d29cef7b24ee4d30b2b3f5027ec6aafb',ramdisk_id='',reservation_id='r-jne042ye',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='29e9e995-880d-46f8-bdd0-149d4e107ea9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-463817161',owner_user_name='tempest-PrometheusGabbiTest-463817161-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T19:04:16Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='5b5e6c2a7cce4e3b96611203def80123',uuid=a364994c-8442-4a4c-bd6b-f3a2d31e4483,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b761f609-2787-4aa2-9b1c-cc5b41d2373d", "address": "fa:16:3e:2c:da:52", "network": {"id": "04e258c0-609e-4010-a306-af20506c3a9d", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.71", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d29cef7b24ee4d30b2b3f5027ec6aafb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb761f609-27", "ovs_interfaceid": "b761f609-2787-4aa2-9b1c-cc5b41d2373d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  3 19:04:21 compute-0 nova_compute[348325]: 2025-12-03 19:04:21.757 348329 DEBUG nova.network.os_vif_util [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Converting VIF {"id": "b761f609-2787-4aa2-9b1c-cc5b41d2373d", "address": "fa:16:3e:2c:da:52", "network": {"id": "04e258c0-609e-4010-a306-af20506c3a9d", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.71", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d29cef7b24ee4d30b2b3f5027ec6aafb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb761f609-27", "ovs_interfaceid": "b761f609-2787-4aa2-9b1c-cc5b41d2373d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 19:04:21 compute-0 nova_compute[348325]: 2025-12-03 19:04:21.759 348329 DEBUG nova.network.os_vif_util [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2c:da:52,bridge_name='br-int',has_traffic_filtering=True,id=b761f609-2787-4aa2-9b1c-cc5b41d2373d,network=Network(04e258c0-609e-4010-a306-af20506c3a9d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb761f609-27') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 19:04:21 compute-0 nova_compute[348325]: 2025-12-03 19:04:21.762 348329 DEBUG nova.objects.instance [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Lazy-loading 'pci_devices' on Instance uuid a364994c-8442-4a4c-bd6b-f3a2d31e4483 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 19:04:21 compute-0 nova_compute[348325]: 2025-12-03 19:04:21.800 348329 DEBUG nova.virt.libvirt.driver [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] End _get_guest_xml xml=<domain type="kvm">
Dec  3 19:04:21 compute-0 nova_compute[348325]:  <uuid>a364994c-8442-4a4c-bd6b-f3a2d31e4483</uuid>
Dec  3 19:04:21 compute-0 nova_compute[348325]:  <name>instance-0000000f</name>
Dec  3 19:04:21 compute-0 nova_compute[348325]:  <memory>131072</memory>
Dec  3 19:04:21 compute-0 nova_compute[348325]:  <vcpu>1</vcpu>
Dec  3 19:04:21 compute-0 nova_compute[348325]:  <metadata>
Dec  3 19:04:21 compute-0 nova_compute[348325]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  3 19:04:21 compute-0 nova_compute[348325]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  3 19:04:21 compute-0 nova_compute[348325]:      <nova:name>te-0714371-asg-eacwc356yfed-ehdrupxp3h3u-navxh3tm2qn5</nova:name>
Dec  3 19:04:21 compute-0 nova_compute[348325]:      <nova:creationTime>2025-12-03 19:04:20</nova:creationTime>
Dec  3 19:04:21 compute-0 nova_compute[348325]:      <nova:flavor name="m1.nano">
Dec  3 19:04:21 compute-0 nova_compute[348325]:        <nova:memory>128</nova:memory>
Dec  3 19:04:21 compute-0 nova_compute[348325]:        <nova:disk>1</nova:disk>
Dec  3 19:04:21 compute-0 nova_compute[348325]:        <nova:swap>0</nova:swap>
Dec  3 19:04:21 compute-0 nova_compute[348325]:        <nova:ephemeral>0</nova:ephemeral>
Dec  3 19:04:21 compute-0 nova_compute[348325]:        <nova:vcpus>1</nova:vcpus>
Dec  3 19:04:21 compute-0 nova_compute[348325]:      </nova:flavor>
Dec  3 19:04:21 compute-0 nova_compute[348325]:      <nova:owner>
Dec  3 19:04:21 compute-0 nova_compute[348325]:        <nova:user uuid="5b5e6c2a7cce4e3b96611203def80123">tempest-PrometheusGabbiTest-463817161-project-member</nova:user>
Dec  3 19:04:21 compute-0 nova_compute[348325]:        <nova:project uuid="d29cef7b24ee4d30b2b3f5027ec6aafb">tempest-PrometheusGabbiTest-463817161</nova:project>
Dec  3 19:04:21 compute-0 nova_compute[348325]:      </nova:owner>
Dec  3 19:04:21 compute-0 nova_compute[348325]:      <nova:root type="image" uuid="29e9e995-880d-46f8-bdd0-149d4e107ea9"/>
Dec  3 19:04:21 compute-0 nova_compute[348325]:      <nova:ports>
Dec  3 19:04:21 compute-0 nova_compute[348325]:        <nova:port uuid="b761f609-2787-4aa2-9b1c-cc5b41d2373d">
Dec  3 19:04:21 compute-0 nova_compute[348325]:          <nova:ip type="fixed" address="10.100.3.71" ipVersion="4"/>
Dec  3 19:04:21 compute-0 nova_compute[348325]:        </nova:port>
Dec  3 19:04:21 compute-0 nova_compute[348325]:      </nova:ports>
Dec  3 19:04:21 compute-0 nova_compute[348325]:    </nova:instance>
Dec  3 19:04:21 compute-0 nova_compute[348325]:  </metadata>
Dec  3 19:04:21 compute-0 nova_compute[348325]:  <sysinfo type="smbios">
Dec  3 19:04:21 compute-0 nova_compute[348325]:    <system>
Dec  3 19:04:21 compute-0 nova_compute[348325]:      <entry name="manufacturer">RDO</entry>
Dec  3 19:04:21 compute-0 nova_compute[348325]:      <entry name="product">OpenStack Compute</entry>
Dec  3 19:04:21 compute-0 nova_compute[348325]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  3 19:04:21 compute-0 nova_compute[348325]:      <entry name="serial">a364994c-8442-4a4c-bd6b-f3a2d31e4483</entry>
Dec  3 19:04:21 compute-0 nova_compute[348325]:      <entry name="uuid">a364994c-8442-4a4c-bd6b-f3a2d31e4483</entry>
Dec  3 19:04:21 compute-0 nova_compute[348325]:      <entry name="family">Virtual Machine</entry>
Dec  3 19:04:21 compute-0 nova_compute[348325]:    </system>
Dec  3 19:04:21 compute-0 nova_compute[348325]:  </sysinfo>
Dec  3 19:04:21 compute-0 nova_compute[348325]:  <os>
Dec  3 19:04:21 compute-0 nova_compute[348325]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  3 19:04:21 compute-0 nova_compute[348325]:    <boot dev="hd"/>
Dec  3 19:04:21 compute-0 nova_compute[348325]:    <smbios mode="sysinfo"/>
Dec  3 19:04:21 compute-0 nova_compute[348325]:  </os>
Dec  3 19:04:21 compute-0 nova_compute[348325]:  <features>
Dec  3 19:04:21 compute-0 nova_compute[348325]:    <acpi/>
Dec  3 19:04:21 compute-0 nova_compute[348325]:    <apic/>
Dec  3 19:04:21 compute-0 nova_compute[348325]:    <vmcoreinfo/>
Dec  3 19:04:21 compute-0 nova_compute[348325]:  </features>
Dec  3 19:04:21 compute-0 nova_compute[348325]:  <clock offset="utc">
Dec  3 19:04:21 compute-0 nova_compute[348325]:    <timer name="pit" tickpolicy="delay"/>
Dec  3 19:04:21 compute-0 nova_compute[348325]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  3 19:04:21 compute-0 nova_compute[348325]:    <timer name="hpet" present="no"/>
Dec  3 19:04:21 compute-0 nova_compute[348325]:  </clock>
Dec  3 19:04:21 compute-0 nova_compute[348325]:  <cpu mode="host-model" match="exact">
Dec  3 19:04:21 compute-0 nova_compute[348325]:    <topology sockets="1" cores="1" threads="1"/>
Dec  3 19:04:21 compute-0 nova_compute[348325]:  </cpu>
Dec  3 19:04:21 compute-0 nova_compute[348325]:  <devices>
Dec  3 19:04:21 compute-0 nova_compute[348325]:    <disk type="network" device="disk">
Dec  3 19:04:21 compute-0 nova_compute[348325]:      <driver type="raw" cache="none"/>
Dec  3 19:04:21 compute-0 nova_compute[348325]:      <source protocol="rbd" name="vms/a364994c-8442-4a4c-bd6b-f3a2d31e4483_disk">
Dec  3 19:04:21 compute-0 nova_compute[348325]:        <host name="192.168.122.100" port="6789"/>
Dec  3 19:04:21 compute-0 nova_compute[348325]:      </source>
Dec  3 19:04:21 compute-0 nova_compute[348325]:      <auth username="openstack">
Dec  3 19:04:21 compute-0 nova_compute[348325]:        <secret type="ceph" uuid="c1caf3ba-b2a5-5005-a11e-e955c344dccc"/>
Dec  3 19:04:21 compute-0 nova_compute[348325]:      </auth>
Dec  3 19:04:21 compute-0 nova_compute[348325]:      <target dev="vda" bus="virtio"/>
Dec  3 19:04:21 compute-0 nova_compute[348325]:    </disk>
Dec  3 19:04:21 compute-0 nova_compute[348325]:    <disk type="network" device="cdrom">
Dec  3 19:04:21 compute-0 nova_compute[348325]:      <driver type="raw" cache="none"/>
Dec  3 19:04:21 compute-0 nova_compute[348325]:      <source protocol="rbd" name="vms/a364994c-8442-4a4c-bd6b-f3a2d31e4483_disk.config">
Dec  3 19:04:21 compute-0 nova_compute[348325]:        <host name="192.168.122.100" port="6789"/>
Dec  3 19:04:21 compute-0 nova_compute[348325]:      </source>
Dec  3 19:04:21 compute-0 nova_compute[348325]:      <auth username="openstack">
Dec  3 19:04:21 compute-0 nova_compute[348325]:        <secret type="ceph" uuid="c1caf3ba-b2a5-5005-a11e-e955c344dccc"/>
Dec  3 19:04:21 compute-0 nova_compute[348325]:      </auth>
Dec  3 19:04:21 compute-0 nova_compute[348325]:      <target dev="sda" bus="sata"/>
Dec  3 19:04:21 compute-0 nova_compute[348325]:    </disk>
Dec  3 19:04:21 compute-0 nova_compute[348325]:    <interface type="ethernet">
Dec  3 19:04:21 compute-0 nova_compute[348325]:      <mac address="fa:16:3e:2c:da:52"/>
Dec  3 19:04:21 compute-0 nova_compute[348325]:      <model type="virtio"/>
Dec  3 19:04:21 compute-0 nova_compute[348325]:      <driver name="vhost" rx_queue_size="512"/>
Dec  3 19:04:21 compute-0 nova_compute[348325]:      <mtu size="1442"/>
Dec  3 19:04:21 compute-0 nova_compute[348325]:      <target dev="tapb761f609-27"/>
Dec  3 19:04:21 compute-0 nova_compute[348325]:    </interface>
Dec  3 19:04:21 compute-0 nova_compute[348325]:    <serial type="pty">
Dec  3 19:04:21 compute-0 nova_compute[348325]:      <log file="/var/lib/nova/instances/a364994c-8442-4a4c-bd6b-f3a2d31e4483/console.log" append="off"/>
Dec  3 19:04:21 compute-0 nova_compute[348325]:    </serial>
Dec  3 19:04:21 compute-0 nova_compute[348325]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  3 19:04:21 compute-0 nova_compute[348325]:    <video>
Dec  3 19:04:21 compute-0 nova_compute[348325]:      <model type="virtio"/>
Dec  3 19:04:21 compute-0 nova_compute[348325]:    </video>
Dec  3 19:04:21 compute-0 nova_compute[348325]:    <input type="tablet" bus="usb"/>
Dec  3 19:04:21 compute-0 nova_compute[348325]:    <rng model="virtio">
Dec  3 19:04:21 compute-0 nova_compute[348325]:      <backend model="random">/dev/urandom</backend>
Dec  3 19:04:21 compute-0 nova_compute[348325]:    </rng>
Dec  3 19:04:21 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root"/>
Dec  3 19:04:21 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 19:04:21 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 19:04:21 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 19:04:21 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 19:04:21 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 19:04:21 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 19:04:21 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 19:04:21 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 19:04:21 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 19:04:21 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 19:04:21 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 19:04:21 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 19:04:21 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 19:04:21 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 19:04:21 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 19:04:21 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 19:04:21 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 19:04:21 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 19:04:21 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 19:04:21 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 19:04:21 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 19:04:21 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 19:04:21 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 19:04:21 compute-0 nova_compute[348325]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 19:04:21 compute-0 nova_compute[348325]:    <controller type="usb" index="0"/>
Dec  3 19:04:21 compute-0 nova_compute[348325]:    <memballoon model="virtio">
Dec  3 19:04:21 compute-0 nova_compute[348325]:      <stats period="10"/>
Dec  3 19:04:21 compute-0 nova_compute[348325]:    </memballoon>
Dec  3 19:04:21 compute-0 nova_compute[348325]:  </devices>
Dec  3 19:04:21 compute-0 nova_compute[348325]: </domain>
Dec  3 19:04:21 compute-0 nova_compute[348325]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  3 19:04:21 compute-0 nova_compute[348325]: 2025-12-03 19:04:21.803 348329 DEBUG nova.compute.manager [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Preparing to wait for external event network-vif-plugged-b761f609-2787-4aa2-9b1c-cc5b41d2373d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  3 19:04:21 compute-0 nova_compute[348325]: 2025-12-03 19:04:21.804 348329 DEBUG oslo_concurrency.lockutils [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Acquiring lock "a364994c-8442-4a4c-bd6b-f3a2d31e4483-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 19:04:21 compute-0 nova_compute[348325]: 2025-12-03 19:04:21.805 348329 DEBUG oslo_concurrency.lockutils [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Lock "a364994c-8442-4a4c-bd6b-f3a2d31e4483-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 19:04:21 compute-0 nova_compute[348325]: 2025-12-03 19:04:21.806 348329 DEBUG oslo_concurrency.lockutils [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Lock "a364994c-8442-4a4c-bd6b-f3a2d31e4483-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 19:04:21 compute-0 nova_compute[348325]: 2025-12-03 19:04:21.808 348329 DEBUG nova.virt.libvirt.vif [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T19:04:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-0714371-asg-eacwc356yfed-ehdrupxp3h3u-navxh3tm2qn5',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-0714371-asg-eacwc356yfed-ehdrupxp3h3u-navxh3tm2qn5',id=15,image_ref='29e9e995-880d-46f8-bdd0-149d4e107ea9',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='d721c97c-b9eb-44f9-a826-1b99239b172a'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d29cef7b24ee4d30b2b3f5027ec6aafb',ramdisk_id='',reservation_id='r-jne042ye',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='29e9e995-880d-46f8-bdd0-149d4e107ea9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-463817161',owner_user_name='tempest-PrometheusGabbiTest-463817161-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T19:04:16Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='5b5e6c2a7cce4e3b96611203def80123',uuid=a364994c-8442-4a4c-bd6b-f3a2d31e4483,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b761f609-2787-4aa2-9b1c-cc5b41d2373d", "address": "fa:16:3e:2c:da:52", "network": {"id": "04e258c0-609e-4010-a306-af20506c3a9d", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.71", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d29cef7b24ee4d30b2b3f5027ec6aafb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb761f609-27", "ovs_interfaceid": "b761f609-2787-4aa2-9b1c-cc5b41d2373d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  3 19:04:21 compute-0 nova_compute[348325]: 2025-12-03 19:04:21.809 348329 DEBUG nova.network.os_vif_util [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Converting VIF {"id": "b761f609-2787-4aa2-9b1c-cc5b41d2373d", "address": "fa:16:3e:2c:da:52", "network": {"id": "04e258c0-609e-4010-a306-af20506c3a9d", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.71", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d29cef7b24ee4d30b2b3f5027ec6aafb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb761f609-27", "ovs_interfaceid": "b761f609-2787-4aa2-9b1c-cc5b41d2373d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 19:04:21 compute-0 nova_compute[348325]: 2025-12-03 19:04:21.811 348329 DEBUG nova.network.os_vif_util [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2c:da:52,bridge_name='br-int',has_traffic_filtering=True,id=b761f609-2787-4aa2-9b1c-cc5b41d2373d,network=Network(04e258c0-609e-4010-a306-af20506c3a9d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb761f609-27') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 19:04:21 compute-0 nova_compute[348325]: 2025-12-03 19:04:21.812 348329 DEBUG os_vif [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2c:da:52,bridge_name='br-int',has_traffic_filtering=True,id=b761f609-2787-4aa2-9b1c-cc5b41d2373d,network=Network(04e258c0-609e-4010-a306-af20506c3a9d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb761f609-27') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  3 19:04:21 compute-0 nova_compute[348325]: 2025-12-03 19:04:21.813 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:04:21 compute-0 nova_compute[348325]: 2025-12-03 19:04:21.815 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 19:04:21 compute-0 nova_compute[348325]: 2025-12-03 19:04:21.816 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 19:04:21 compute-0 nova_compute[348325]: 2025-12-03 19:04:21.822 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:04:21 compute-0 nova_compute[348325]: 2025-12-03 19:04:21.823 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb761f609-27, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 19:04:21 compute-0 nova_compute[348325]: 2025-12-03 19:04:21.824 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb761f609-27, col_values=(('external_ids', {'iface-id': 'b761f609-2787-4aa2-9b1c-cc5b41d2373d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2c:da:52', 'vm-uuid': 'a364994c-8442-4a4c-bd6b-f3a2d31e4483'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 19:04:21 compute-0 NetworkManager[49087]: <info>  [1764788661.8303] manager: (tapb761f609-27): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/79)
Dec  3 19:04:21 compute-0 nova_compute[348325]: 2025-12-03 19:04:21.830 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:04:21 compute-0 nova_compute[348325]: 2025-12-03 19:04:21.835 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  3 19:04:21 compute-0 nova_compute[348325]: 2025-12-03 19:04:21.839 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:04:21 compute-0 nova_compute[348325]: 2025-12-03 19:04:21.841 348329 INFO os_vif [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2c:da:52,bridge_name='br-int',has_traffic_filtering=True,id=b761f609-2787-4aa2-9b1c-cc5b41d2373d,network=Network(04e258c0-609e-4010-a306-af20506c3a9d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb761f609-27')#033[00m
Dec  3 19:04:21 compute-0 nova_compute[348325]: 2025-12-03 19:04:21.910 348329 DEBUG nova.virt.libvirt.driver [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  3 19:04:21 compute-0 nova_compute[348325]: 2025-12-03 19:04:21.912 348329 DEBUG nova.virt.libvirt.driver [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  3 19:04:21 compute-0 nova_compute[348325]: 2025-12-03 19:04:21.912 348329 DEBUG nova.virt.libvirt.driver [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] No VIF found with MAC fa:16:3e:2c:da:52, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  3 19:04:21 compute-0 nova_compute[348325]: 2025-12-03 19:04:21.914 348329 INFO nova.virt.libvirt.driver [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Using config drive#033[00m
Dec  3 19:04:21 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1964: 321 pgs: 321 active+clean; 203 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec  3 19:04:21 compute-0 nova_compute[348325]: 2025-12-03 19:04:21.963 348329 DEBUG nova.storage.rbd_utils [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] rbd image a364994c-8442-4a4c-bd6b-f3a2d31e4483_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 19:04:22 compute-0 nova_compute[348325]: 2025-12-03 19:04:22.353 348329 INFO nova.virt.libvirt.driver [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Creating config drive at /var/lib/nova/instances/a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.config#033[00m
Dec  3 19:04:22 compute-0 nova_compute[348325]: 2025-12-03 19:04:22.362 348329 DEBUG oslo_concurrency.processutils [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxzmvthwh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 19:04:22 compute-0 nova_compute[348325]: 2025-12-03 19:04:22.438 348329 DEBUG nova.network.neutron [req-1ae750ae-7bf9-49b4-92fc-749078e927d4 req-01e78a6d-4153-4e2e-a027-a6592aa79dd6 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Updated VIF entry in instance network info cache for port b761f609-2787-4aa2-9b1c-cc5b41d2373d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  3 19:04:22 compute-0 nova_compute[348325]: 2025-12-03 19:04:22.440 348329 DEBUG nova.network.neutron [req-1ae750ae-7bf9-49b4-92fc-749078e927d4 req-01e78a6d-4153-4e2e-a027-a6592aa79dd6 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Updating instance_info_cache with network_info: [{"id": "b761f609-2787-4aa2-9b1c-cc5b41d2373d", "address": "fa:16:3e:2c:da:52", "network": {"id": "04e258c0-609e-4010-a306-af20506c3a9d", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.71", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d29cef7b24ee4d30b2b3f5027ec6aafb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb761f609-27", "ovs_interfaceid": "b761f609-2787-4aa2-9b1c-cc5b41d2373d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 19:04:22 compute-0 nova_compute[348325]: 2025-12-03 19:04:22.458 348329 DEBUG oslo_concurrency.lockutils [req-1ae750ae-7bf9-49b4-92fc-749078e927d4 req-01e78a6d-4153-4e2e-a027-a6592aa79dd6 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Releasing lock "refresh_cache-a364994c-8442-4a4c-bd6b-f3a2d31e4483" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 19:04:22 compute-0 nova_compute[348325]: 2025-12-03 19:04:22.515 348329 DEBUG oslo_concurrency.processutils [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxzmvthwh" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 19:04:22 compute-0 nova_compute[348325]: 2025-12-03 19:04:22.573 348329 DEBUG nova.storage.rbd_utils [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] rbd image a364994c-8442-4a4c-bd6b-f3a2d31e4483_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 19:04:22 compute-0 nova_compute[348325]: 2025-12-03 19:04:22.584 348329 DEBUG oslo_concurrency.processutils [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.config a364994c-8442-4a4c-bd6b-f3a2d31e4483_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 19:04:22 compute-0 nova_compute[348325]: 2025-12-03 19:04:22.871 348329 DEBUG oslo_concurrency.processutils [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.config a364994c-8442-4a4c-bd6b-f3a2d31e4483_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.287s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 19:04:22 compute-0 nova_compute[348325]: 2025-12-03 19:04:22.873 348329 INFO nova.virt.libvirt.driver [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Deleting local config drive /var/lib/nova/instances/a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.config because it was imported into RBD.#033[00m
Dec  3 19:04:22 compute-0 kernel: tapb761f609-27: entered promiscuous mode
Dec  3 19:04:22 compute-0 NetworkManager[49087]: <info>  [1764788662.9938] manager: (tapb761f609-27): new Tun device (/org/freedesktop/NetworkManager/Devices/80)
Dec  3 19:04:22 compute-0 ovn_controller[89305]: 2025-12-03T19:04:22Z|00175|binding|INFO|Claiming lport b761f609-2787-4aa2-9b1c-cc5b41d2373d for this chassis.
Dec  3 19:04:22 compute-0 ovn_controller[89305]: 2025-12-03T19:04:22Z|00176|binding|INFO|b761f609-2787-4aa2-9b1c-cc5b41d2373d: Claiming fa:16:3e:2c:da:52 10.100.3.71
Dec  3 19:04:23 compute-0 nova_compute[348325]: 2025-12-03 19:04:22.999 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:04:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:04:23.009 286999 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2c:da:52 10.100.3.71'], port_security=['fa:16:3e:2c:da:52 10.100.3.71'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.3.71/16', 'neutron:device_id': 'a364994c-8442-4a4c-bd6b-f3a2d31e4483', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-04e258c0-609e-4010-a306-af20506c3a9d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd29cef7b24ee4d30b2b3f5027ec6aafb', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4e47f9e7-514d-4fc2-9225-d05512482dee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b71f2b6d-7f9c-430c-a162-af2bdc131d68, chassis=[<ovs.db.idl.Row object at 0x7f81e3e96760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f81e3e96760>], logical_port=b761f609-2787-4aa2-9b1c-cc5b41d2373d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 19:04:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:04:23.011 286999 INFO neutron.agent.ovn.metadata.agent [-] Port b761f609-2787-4aa2-9b1c-cc5b41d2373d in datapath 04e258c0-609e-4010-a306-af20506c3a9d bound to our chassis#033[00m
Dec  3 19:04:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:04:23.014 286999 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 04e258c0-609e-4010-a306-af20506c3a9d#033[00m
Dec  3 19:04:23 compute-0 ovn_controller[89305]: 2025-12-03T19:04:23Z|00177|binding|INFO|Setting lport b761f609-2787-4aa2-9b1c-cc5b41d2373d up in Southbound
Dec  3 19:04:23 compute-0 ovn_controller[89305]: 2025-12-03T19:04:23Z|00178|binding|INFO|Setting lport b761f609-2787-4aa2-9b1c-cc5b41d2373d ovn-installed in OVS
Dec  3 19:04:23 compute-0 nova_compute[348325]: 2025-12-03 19:04:23.021 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:04:23 compute-0 nova_compute[348325]: 2025-12-03 19:04:23.041 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:04:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:04:23.045 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[55fa30df-b3d6-4e16-bba5-3c3f451dd0f7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 19:04:23 compute-0 systemd-udevd[454569]: Network interface NamePolicy= disabled on kernel command line.
Dec  3 19:04:23 compute-0 systemd-machined[138702]: New machine qemu-16-instance-0000000f.
Dec  3 19:04:23 compute-0 systemd[1]: Started Virtual Machine qemu-16-instance-0000000f.
Dec  3 19:04:23 compute-0 NetworkManager[49087]: <info>  [1764788663.0830] device (tapb761f609-27): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  3 19:04:23 compute-0 NetworkManager[49087]: <info>  [1764788663.0871] device (tapb761f609-27): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  3 19:04:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:04:23.100 411797 DEBUG oslo.privsep.daemon [-] privsep: reply[afec14f9-35ac-4178-b2a6-c43b41f331fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 19:04:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:04:23.104 411797 DEBUG oslo.privsep.daemon [-] privsep: reply[c8a20614-a795-4d2d-97e6-f6ea1e088ae7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 19:04:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:04:23.143 411797 DEBUG oslo.privsep.daemon [-] privsep: reply[bd135b82-b924-418d-8e19-d0173ffb83bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 19:04:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:04:23.170 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[e26c88ee-e9f9-40b3-b40e-1cfe19bd31f3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap04e258c0-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0e:5b:40'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 40], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 666585, 'reachable_time': 27190, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 454577, 'error': None, 'target': 'ovnmeta-04e258c0-609e-4010-a306-af20506c3a9d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 19:04:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:04:23.191 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[2ec2aa57-66e3-473a-9408-3c5280ac6945]: (4, ({'family': 2, 'prefixlen': 16, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.255.255'], ['IFA_LABEL', 'tap04e258c0-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 666603, 'tstamp': 666603}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 454583, 'error': None, 'target': 'ovnmeta-04e258c0-609e-4010-a306-af20506c3a9d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap04e258c0-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 666607, 'tstamp': 666607}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 454583, 'error': None, 'target': 'ovnmeta-04e258c0-609e-4010-a306-af20506c3a9d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 19:04:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:04:23.194 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap04e258c0-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 19:04:23 compute-0 nova_compute[348325]: 2025-12-03 19:04:23.196 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:04:23 compute-0 nova_compute[348325]: 2025-12-03 19:04:23.197 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:04:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:04:23.197 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap04e258c0-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  3 19:04:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:04:23.198 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec  3 19:04:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:04:23.198 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap04e258c0-60, col_values=(('external_ids', {'iface-id': 'f82febe8-1e88-4e67-9f7a-5af5921c9877'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  3 19:04:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:04:23.198 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec  3 19:04:23 compute-0 nova_compute[348325]: 2025-12-03 19:04:23.305 348329 DEBUG nova.compute.manager [req-4e9a468d-2299-4cf7-a413-e6a281f9ceae req-b17af74f-3f04-4ae5-85b4-2a7f12a3164c 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Received event network-vif-plugged-b761f609-2787-4aa2-9b1c-cc5b41d2373d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  3 19:04:23 compute-0 nova_compute[348325]: 2025-12-03 19:04:23.305 348329 DEBUG oslo_concurrency.lockutils [req-4e9a468d-2299-4cf7-a413-e6a281f9ceae req-b17af74f-3f04-4ae5-85b4-2a7f12a3164c 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "a364994c-8442-4a4c-bd6b-f3a2d31e4483-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 19:04:23 compute-0 nova_compute[348325]: 2025-12-03 19:04:23.305 348329 DEBUG oslo_concurrency.lockutils [req-4e9a468d-2299-4cf7-a413-e6a281f9ceae req-b17af74f-3f04-4ae5-85b4-2a7f12a3164c 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "a364994c-8442-4a4c-bd6b-f3a2d31e4483-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 19:04:23 compute-0 nova_compute[348325]: 2025-12-03 19:04:23.305 348329 DEBUG oslo_concurrency.lockutils [req-4e9a468d-2299-4cf7-a413-e6a281f9ceae req-b17af74f-3f04-4ae5-85b4-2a7f12a3164c 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "a364994c-8442-4a4c-bd6b-f3a2d31e4483-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 19:04:23 compute-0 nova_compute[348325]: 2025-12-03 19:04:23.306 348329 DEBUG nova.compute.manager [req-4e9a468d-2299-4cf7-a413-e6a281f9ceae req-b17af74f-3f04-4ae5-85b4-2a7f12a3164c 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Processing event network-vif-plugged-b761f609-2787-4aa2-9b1c-cc5b41d2373d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec  3 19:04:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:04:23.360 286999 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 19:04:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:04:23.361 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 19:04:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:04:23.361 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 19:04:23 compute-0 nova_compute[348325]: 2025-12-03 19:04:23.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:04:23 compute-0 nova_compute[348325]: 2025-12-03 19:04:23.583 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:04:23 compute-0 nova_compute[348325]: 2025-12-03 19:04:23.608 348329 DEBUG nova.virt.driver [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Emitting event <LifecycleEvent: 1764788663.6074188, a364994c-8442-4a4c-bd6b-f3a2d31e4483 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  3 19:04:23 compute-0 nova_compute[348325]: 2025-12-03 19:04:23.609 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] VM Started (Lifecycle Event)
Dec  3 19:04:23 compute-0 nova_compute[348325]: 2025-12-03 19:04:23.613 348329 DEBUG nova.compute.manager [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec  3 19:04:23 compute-0 nova_compute[348325]: 2025-12-03 19:04:23.623 348329 DEBUG nova.virt.libvirt.driver [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec  3 19:04:23 compute-0 nova_compute[348325]: 2025-12-03 19:04:23.628 348329 INFO nova.virt.libvirt.driver [-] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Instance spawned successfully.
Dec  3 19:04:23 compute-0 nova_compute[348325]: 2025-12-03 19:04:23.629 348329 DEBUG nova.virt.libvirt.driver [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec  3 19:04:23 compute-0 nova_compute[348325]: 2025-12-03 19:04:23.633 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  3 19:04:23 compute-0 nova_compute[348325]: 2025-12-03 19:04:23.640 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  3 19:04:23 compute-0 nova_compute[348325]: 2025-12-03 19:04:23.660 348329 DEBUG nova.virt.libvirt.driver [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  3 19:04:23 compute-0 nova_compute[348325]: 2025-12-03 19:04:23.661 348329 DEBUG nova.virt.libvirt.driver [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  3 19:04:23 compute-0 nova_compute[348325]: 2025-12-03 19:04:23.662 348329 DEBUG nova.virt.libvirt.driver [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  3 19:04:23 compute-0 nova_compute[348325]: 2025-12-03 19:04:23.662 348329 DEBUG nova.virt.libvirt.driver [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  3 19:04:23 compute-0 nova_compute[348325]: 2025-12-03 19:04:23.663 348329 DEBUG nova.virt.libvirt.driver [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  3 19:04:23 compute-0 nova_compute[348325]: 2025-12-03 19:04:23.664 348329 DEBUG nova.virt.libvirt.driver [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  3 19:04:23 compute-0 nova_compute[348325]: 2025-12-03 19:04:23.673 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] During sync_power_state the instance has a pending task (spawning). Skip.
Dec  3 19:04:23 compute-0 nova_compute[348325]: 2025-12-03 19:04:23.673 348329 DEBUG nova.virt.driver [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Emitting event <LifecycleEvent: 1764788663.607609, a364994c-8442-4a4c-bd6b-f3a2d31e4483 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  3 19:04:23 compute-0 nova_compute[348325]: 2025-12-03 19:04:23.674 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] VM Paused (Lifecycle Event)
Dec  3 19:04:23 compute-0 nova_compute[348325]: 2025-12-03 19:04:23.695 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  3 19:04:23 compute-0 nova_compute[348325]: 2025-12-03 19:04:23.703 348329 DEBUG nova.virt.driver [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] Emitting event <LifecycleEvent: 1764788663.6194797, a364994c-8442-4a4c-bd6b-f3a2d31e4483 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  3 19:04:23 compute-0 nova_compute[348325]: 2025-12-03 19:04:23.704 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] VM Resumed (Lifecycle Event)
Dec  3 19:04:23 compute-0 nova_compute[348325]: 2025-12-03 19:04:23.718 348329 INFO nova.compute.manager [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Took 7.08 seconds to spawn the instance on the hypervisor.
Dec  3 19:04:23 compute-0 nova_compute[348325]: 2025-12-03 19:04:23.719 348329 DEBUG nova.compute.manager [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  3 19:04:23 compute-0 nova_compute[348325]: 2025-12-03 19:04:23.727 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  3 19:04:23 compute-0 nova_compute[348325]: 2025-12-03 19:04:23.741 348329 DEBUG nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  3 19:04:23 compute-0 nova_compute[348325]: 2025-12-03 19:04:23.758 348329 INFO nova.compute.manager [None req-6ef90f6d-961a-4dca-96af-95aaab258bc9 - - - - - -] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] During sync_power_state the instance has a pending task (spawning). Skip.
Dec  3 19:04:23 compute-0 nova_compute[348325]: 2025-12-03 19:04:23.804 348329 INFO nova.compute.manager [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Took 8.62 seconds to build instance.
Dec  3 19:04:23 compute-0 nova_compute[348325]: 2025-12-03 19:04:23.821 348329 DEBUG oslo_concurrency.lockutils [None req-c76b56ec-26d4-49c3-89ba-83d22a4ade8b 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Lock "a364994c-8442-4a4c-bd6b-f3a2d31e4483" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.722s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 19:04:23 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1965: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec  3 19:04:24 compute-0 nova_compute[348325]: 2025-12-03 19:04:24.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:04:24 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:04:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 19:04:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:04:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 19:04:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:04:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011037439156410757 of space, bias 1.0, pg target 0.3311231746923227 quantized to 32 (current 32)
Dec  3 19:04:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:04:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:04:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:04:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:04:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:04:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Dec  3 19:04:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:04:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 19:04:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:04:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:04:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:04:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 19:04:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:04:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 19:04:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:04:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:04:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:04:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 19:04:25 compute-0 nova_compute[348325]: 2025-12-03 19:04:25.391 348329 DEBUG nova.compute.manager [req-ff3e5f53-d206-43d2-8b8c-2bfa31f12a87 req-b1f4f76f-d142-4257-b4b9-1206f922ab46 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Received event network-vif-plugged-b761f609-2787-4aa2-9b1c-cc5b41d2373d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  3 19:04:25 compute-0 nova_compute[348325]: 2025-12-03 19:04:25.392 348329 DEBUG oslo_concurrency.lockutils [req-ff3e5f53-d206-43d2-8b8c-2bfa31f12a87 req-b1f4f76f-d142-4257-b4b9-1206f922ab46 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "a364994c-8442-4a4c-bd6b-f3a2d31e4483-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 19:04:25 compute-0 nova_compute[348325]: 2025-12-03 19:04:25.392 348329 DEBUG oslo_concurrency.lockutils [req-ff3e5f53-d206-43d2-8b8c-2bfa31f12a87 req-b1f4f76f-d142-4257-b4b9-1206f922ab46 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "a364994c-8442-4a4c-bd6b-f3a2d31e4483-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 19:04:25 compute-0 nova_compute[348325]: 2025-12-03 19:04:25.393 348329 DEBUG oslo_concurrency.lockutils [req-ff3e5f53-d206-43d2-8b8c-2bfa31f12a87 req-b1f4f76f-d142-4257-b4b9-1206f922ab46 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "a364994c-8442-4a4c-bd6b-f3a2d31e4483-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 19:04:25 compute-0 nova_compute[348325]: 2025-12-03 19:04:25.393 348329 DEBUG nova.compute.manager [req-ff3e5f53-d206-43d2-8b8c-2bfa31f12a87 req-b1f4f76f-d142-4257-b4b9-1206f922ab46 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] No waiting events found dispatching network-vif-plugged-b761f609-2787-4aa2-9b1c-cc5b41d2373d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  3 19:04:25 compute-0 nova_compute[348325]: 2025-12-03 19:04:25.394 348329 WARNING nova.compute.manager [req-ff3e5f53-d206-43d2-8b8c-2bfa31f12a87 req-b1f4f76f-d142-4257-b4b9-1206f922ab46 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Received unexpected event network-vif-plugged-b761f609-2787-4aa2-9b1c-cc5b41d2373d for instance with vm_state active and task_state None.
Dec  3 19:04:25 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1966: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Dec  3 19:04:25 compute-0 podman[454627]: 2025-12-03 19:04:25.983014037 +0000 UTC m=+0.134327869 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd)
Dec  3 19:04:25 compute-0 podman[454628]: 2025-12-03 19:04:25.984036143 +0000 UTC m=+0.117867208 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 19:04:26 compute-0 podman[454629]: 2025-12-03 19:04:26.002343249 +0000 UTC m=+0.139134627 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.33.7, name=ubi9-minimal, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, build-date=2025-08-20T13:12:41, version=9.6, io.openshift.expose-services=)
Dec  3 19:04:26 compute-0 nova_compute[348325]: 2025-12-03 19:04:26.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:04:26 compute-0 nova_compute[348325]: 2025-12-03 19:04:26.828 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:04:27 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1967: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 832 KiB/s rd, 1.8 MiB/s wr, 64 op/s
Dec  3 19:04:28 compute-0 ceph-osd[207851]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 19:04:28 compute-0 ceph-osd[207851]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 3600.2 total, 600.0 interval
    Cumulative writes: 10K writes, 38K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
    Cumulative WAL: 10K writes, 2780 syncs, 3.72 writes per sync, written: 0.03 GB, 0.01 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 1893 writes, 6388 keys, 1893 commit groups, 1.0 writes per commit group, ingest: 5.65 MB, 0.01 MB/s
    Interval WAL: 1893 writes, 822 syncs, 2.30 writes per sync, written: 0.01 GB, 0.01 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  3 19:04:28 compute-0 nova_compute[348325]: 2025-12-03 19:04:28.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:04:28 compute-0 nova_compute[348325]: 2025-12-03 19:04:28.487 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec  3 19:04:28 compute-0 nova_compute[348325]: 2025-12-03 19:04:28.509 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec  3 19:04:28 compute-0 nova_compute[348325]: 2025-12-03 19:04:28.510 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:04:28 compute-0 nova_compute[348325]: 2025-12-03 19:04:28.586 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:04:29 compute-0 nova_compute[348325]: 2025-12-03 19:04:29.521 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:04:29 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:04:29 compute-0 podman[158200]: time="2025-12-03T19:04:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 19:04:29 compute-0 podman[158200]: @ - - [03/Dec/2025:19:04:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 19:04:29 compute-0 podman[158200]: @ - - [03/Dec/2025:19:04:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8648 "" "Go-http-client/1.1"
Dec  3 19:04:29 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1968: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.2 MiB/s wr, 93 op/s
Dec  3 19:04:30 compute-0 podman[454686]: 2025-12-03 19:04:30.907041322 +0000 UTC m=+0.066765041 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 19:04:30 compute-0 podman[454687]: 2025-12-03 19:04:30.921694889 +0000 UTC m=+0.073130856 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  3 19:04:30 compute-0 podman[454685]: 2025-12-03 19:04:30.937247588 +0000 UTC m=+0.100482413 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, release=1214.1726694543, version=9.4, distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, name=ubi9, build-date=2024-09-18T21:23:30, container_name=kepler, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., architecture=x86_64, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible)
Dec  3 19:04:31 compute-0 openstack_network_exporter[365222]: ERROR   19:04:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 19:04:31 compute-0 openstack_network_exporter[365222]: ERROR   19:04:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:04:31 compute-0 openstack_network_exporter[365222]: ERROR   19:04:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:04:31 compute-0 openstack_network_exporter[365222]: ERROR   19:04:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 19:04:31 compute-0 openstack_network_exporter[365222]: ERROR   19:04:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 19:04:31 compute-0 nova_compute[348325]: 2025-12-03 19:04:31.831 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:04:31 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1969: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 215 KiB/s wr, 86 op/s
Dec  3 19:04:32 compute-0 nova_compute[348325]: 2025-12-03 19:04:32.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:04:32 compute-0 nova_compute[348325]: 2025-12-03 19:04:32.487 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  3 19:04:32 compute-0 nova_compute[348325]: 2025-12-03 19:04:32.487 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  3 19:04:33 compute-0 nova_compute[348325]: 2025-12-03 19:04:33.198 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "refresh_cache-a4fc45c7-44e4-4b50-a3e0-98de13268f88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  3 19:04:33 compute-0 nova_compute[348325]: 2025-12-03 19:04:33.199 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquired lock "refresh_cache-a4fc45c7-44e4-4b50-a3e0-98de13268f88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  3 19:04:33 compute-0 nova_compute[348325]: 2025-12-03 19:04:33.199 348329 DEBUG nova.network.neutron [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec  3 19:04:33 compute-0 nova_compute[348325]: 2025-12-03 19:04:33.199 348329 DEBUG nova.objects.instance [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lazy-loading 'info_cache' on Instance uuid a4fc45c7-44e4-4b50-a3e0-98de13268f88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  3 19:04:33 compute-0 nova_compute[348325]: 2025-12-03 19:04:33.588 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:04:33 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1970: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Dec  3 19:04:34 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:04:34 compute-0 ceph-osd[208881]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 19:04:34 compute-0 ceph-osd[208881]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 3600.2 total, 600.0 interval
    Cumulative writes: 9455 writes, 37K keys, 9455 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
    Cumulative WAL: 9455 writes, 2473 syncs, 3.82 writes per sync, written: 0.03 GB, 0.01 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 2028 writes, 7848 keys, 2028 commit groups, 1.0 writes per commit group, ingest: 8.51 MB, 0.01 MB/s
    Interval WAL: 2028 writes, 827 syncs, 2.45 writes per sync, written: 0.01 GB, 0.01 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  3 19:04:35 compute-0 nova_compute[348325]: 2025-12-03 19:04:35.246 348329 DEBUG nova.network.neutron [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Updating instance_info_cache with network_info: [{"id": "cf729fa8-9549-4bf2-9858-7e8de773e1bc", "address": "fa:16:3e:8d:91:4c", "network": {"id": "04e258c0-609e-4010-a306-af20506c3a9d", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.160", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d29cef7b24ee4d30b2b3f5027ec6aafb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf729fa8-95", "ovs_interfaceid": "cf729fa8-9549-4bf2-9858-7e8de773e1bc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  3 19:04:35 compute-0 nova_compute[348325]: 2025-12-03 19:04:35.270 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Releasing lock "refresh_cache-a4fc45c7-44e4-4b50-a3e0-98de13268f88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  3 19:04:35 compute-0 nova_compute[348325]: 2025-12-03 19:04:35.271 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec  3 19:04:35 compute-0 nova_compute[348325]: 2025-12-03 19:04:35.272 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:04:35 compute-0 nova_compute[348325]: 2025-12-03 19:04:35.323 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Triggering sync for uuid a4fc45c7-44e4-4b50-a3e0-98de13268f88 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Dec  3 19:04:35 compute-0 nova_compute[348325]: 2025-12-03 19:04:35.336 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Triggering sync for uuid a364994c-8442-4a4c-bd6b-f3a2d31e4483 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Dec  3 19:04:35 compute-0 nova_compute[348325]: 2025-12-03 19:04:35.337 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "a4fc45c7-44e4-4b50-a3e0-98de13268f88" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 19:04:35 compute-0 nova_compute[348325]: 2025-12-03 19:04:35.338 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "a4fc45c7-44e4-4b50-a3e0-98de13268f88" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 19:04:35 compute-0 nova_compute[348325]: 2025-12-03 19:04:35.339 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "a364994c-8442-4a4c-bd6b-f3a2d31e4483" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 19:04:35 compute-0 nova_compute[348325]: 2025-12-03 19:04:35.339 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "a364994c-8442-4a4c-bd6b-f3a2d31e4483" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 19:04:35 compute-0 nova_compute[348325]: 2025-12-03 19:04:35.401 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "a364994c-8442-4a4c-bd6b-f3a2d31e4483" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.062s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 19:04:35 compute-0 nova_compute[348325]: 2025-12-03 19:04:35.404 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "a4fc45c7-44e4-4b50-a3e0-98de13268f88" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.067s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 19:04:35 compute-0 nova_compute[348325]: 2025-12-03 19:04:35.485 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:04:35 compute-0 nova_compute[348325]: 2025-12-03 19:04:35.486 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  3 19:04:35 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1971: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Dec  3 19:04:36 compute-0 nova_compute[348325]: 2025-12-03 19:04:36.834 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:04:37 compute-0 ceph-mgr[193091]: [devicehealth INFO root] Check health
Dec  3 19:04:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 19:04:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1444483221' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 19:04:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 19:04:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1444483221' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 19:04:37 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1972: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 70 op/s
Dec  3 19:04:38 compute-0 nova_compute[348325]: 2025-12-03 19:04:38.591 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:04:39 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:04:39 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1973: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 36 op/s
Dec  3 19:04:40 compute-0 podman[454741]: 2025-12-03 19:04:40.972347604 +0000 UTC m=+0.132272600 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 19:04:41 compute-0 nova_compute[348325]: 2025-12-03 19:04:41.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:04:41 compute-0 nova_compute[348325]: 2025-12-03 19:04:41.524 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 19:04:41 compute-0 nova_compute[348325]: 2025-12-03 19:04:41.525 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 19:04:41 compute-0 nova_compute[348325]: 2025-12-03 19:04:41.525 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 19:04:41 compute-0 nova_compute[348325]: 2025-12-03 19:04:41.526 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  3 19:04:41 compute-0 nova_compute[348325]: 2025-12-03 19:04:41.527 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 19:04:41 compute-0 nova_compute[348325]: 2025-12-03 19:04:41.839 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:04:41 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1974: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 158 KiB/s rd, 5 op/s
Dec  3 19:04:42 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 19:04:42 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1672134340' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 19:04:42 compute-0 nova_compute[348325]: 2025-12-03 19:04:42.042 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 19:04:42 compute-0 nova_compute[348325]: 2025-12-03 19:04:42.178 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 19:04:42 compute-0 nova_compute[348325]: 2025-12-03 19:04:42.180 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 19:04:42 compute-0 nova_compute[348325]: 2025-12-03 19:04:42.195 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 19:04:42 compute-0 nova_compute[348325]: 2025-12-03 19:04:42.197 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 19:04:42 compute-0 nova_compute[348325]: 2025-12-03 19:04:42.678 348329 WARNING nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  3 19:04:42 compute-0 nova_compute[348325]: 2025-12-03 19:04:42.680 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3613MB free_disk=59.92190170288086GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  3 19:04:42 compute-0 nova_compute[348325]: 2025-12-03 19:04:42.680 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 19:04:42 compute-0 nova_compute[348325]: 2025-12-03 19:04:42.681 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 19:04:43 compute-0 nova_compute[348325]: 2025-12-03 19:04:43.003 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance a4fc45c7-44e4-4b50-a3e0-98de13268f88 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  3 19:04:43 compute-0 nova_compute[348325]: 2025-12-03 19:04:43.004 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance a364994c-8442-4a4c-bd6b-f3a2d31e4483 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  3 19:04:43 compute-0 nova_compute[348325]: 2025-12-03 19:04:43.004 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  3 19:04:43 compute-0 nova_compute[348325]: 2025-12-03 19:04:43.005 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  3 19:04:43 compute-0 nova_compute[348325]: 2025-12-03 19:04:43.205 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 19:04:43 compute-0 nova_compute[348325]: 2025-12-03 19:04:43.593 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:04:43 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 19:04:43 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1257289522' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 19:04:43 compute-0 nova_compute[348325]: 2025-12-03 19:04:43.650 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 19:04:43 compute-0 nova_compute[348325]: 2025-12-03 19:04:43.662 348329 DEBUG nova.compute.provider_tree [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  3 19:04:43 compute-0 nova_compute[348325]: 2025-12-03 19:04:43.688 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  3 19:04:43 compute-0 nova_compute[348325]: 2025-12-03 19:04:43.727 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  3 19:04:43 compute-0 nova_compute[348325]: 2025-12-03 19:04:43.728 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.047s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 19:04:43 compute-0 nova_compute[348325]: 2025-12-03 19:04:43.729 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:04:43 compute-0 nova_compute[348325]: 2025-12-03 19:04:43.729 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec  3 19:04:43 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1975: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:04:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:04:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:04:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:04:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:04:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:04:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:04:44 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:04:44 compute-0 nova_compute[348325]: 2025-12-03 19:04:44.736 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:04:44 compute-0 podman[454811]: 2025-12-03 19:04:44.811911339 +0000 UTC m=+0.113158783 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  3 19:04:44 compute-0 podman[454810]: 2025-12-03 19:04:44.873927403 +0000 UTC m=+0.172544173 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 19:04:45 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1976: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:04:46 compute-0 nova_compute[348325]: 2025-12-03 19:04:46.845 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:04:47 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1977: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:04:48 compute-0 nova_compute[348325]: 2025-12-03 19:04:48.595 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:04:49 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:04:49 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1978: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:04:51 compute-0 nova_compute[348325]: 2025-12-03 19:04:51.849 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:04:51 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1979: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:04:53 compute-0 ovn_controller[89305]: 2025-12-03T19:04:53Z|00179|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Dec  3 19:04:53 compute-0 nova_compute[348325]: 2025-12-03 19:04:53.598 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:04:53 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1980: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:04:54 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:04:55 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1981: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:04:56 compute-0 podman[454881]: 2025-12-03 19:04:56.83681104 +0000 UTC m=+0.105426354 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, release=1755695350, config_id=edpm, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, io.buildah.version=1.33.7, vcs-type=git, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, com.redhat.component=ubi9-minimal-container, distribution-scope=public, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.openshift.tags=minimal rhel9)
Dec  3 19:04:56 compute-0 podman[454880]: 2025-12-03 19:04:56.839949277 +0000 UTC m=+0.121920107 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 19:04:56 compute-0 podman[454879]: 2025-12-03 19:04:56.839996468 +0000 UTC m=+0.123760312 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  3 19:04:56 compute-0 nova_compute[348325]: 2025-12-03 19:04:56.852 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:04:57 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1982: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:04:58 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 19:04:58 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:04:58 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 19:04:58 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:04:58 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 19:04:58 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 19:04:58 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 19:04:58 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 19:04:58 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 19:04:58 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:04:58 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 1c4d0c1c-e349-4ef3-820a-65f70bf6ccb2 does not exist
Dec  3 19:04:58 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 8ba3aa92-db26-49a8-9cb3-84b18f702f75 does not exist
Dec  3 19:04:58 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 7660c42f-7394-4cd0-b755-511b98939350 does not exist
Dec  3 19:04:58 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 19:04:58 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 19:04:58 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 19:04:58 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 19:04:58 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 19:04:58 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 19:04:58 compute-0 nova_compute[348325]: 2025-12-03 19:04:58.599 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:04:59 compute-0 podman[455299]: 2025-12-03 19:04:59.120733015 +0000 UTC m=+0.049377456 container create 90b644033e728fad487ea2d4567371884b72433a45150d4a3f01fb090546912d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  3 19:04:59 compute-0 systemd[1]: Started libpod-conmon-90b644033e728fad487ea2d4567371884b72433a45150d4a3f01fb090546912d.scope.
Dec  3 19:04:59 compute-0 podman[455299]: 2025-12-03 19:04:59.098499653 +0000 UTC m=+0.027144104 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:04:59 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:04:59 compute-0 podman[455299]: 2025-12-03 19:04:59.24916584 +0000 UTC m=+0.177810321 container init 90b644033e728fad487ea2d4567371884b72433a45150d4a3f01fb090546912d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_wright, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  3 19:04:59 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:04:59 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:04:59 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 19:04:59 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:04:59 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 19:04:59 compute-0 podman[455299]: 2025-12-03 19:04:59.261217315 +0000 UTC m=+0.189861756 container start 90b644033e728fad487ea2d4567371884b72433a45150d4a3f01fb090546912d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_wright, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Dec  3 19:04:59 compute-0 objective_wright[455315]: 167 167
Dec  3 19:04:59 compute-0 systemd[1]: libpod-90b644033e728fad487ea2d4567371884b72433a45150d4a3f01fb090546912d.scope: Deactivated successfully.
Dec  3 19:04:59 compute-0 podman[455299]: 2025-12-03 19:04:59.280795763 +0000 UTC m=+0.209440214 container attach 90b644033e728fad487ea2d4567371884b72433a45150d4a3f01fb090546912d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:04:59 compute-0 podman[455299]: 2025-12-03 19:04:59.28190239 +0000 UTC m=+0.210546851 container died 90b644033e728fad487ea2d4567371884b72433a45150d4a3f01fb090546912d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Dec  3 19:04:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ce9d307e2c160bb8330b793ab5f9eee69dac82de7178198e6dbecdd936e7dee-merged.mount: Deactivated successfully.
Dec  3 19:04:59 compute-0 podman[455299]: 2025-12-03 19:04:59.359237527 +0000 UTC m=+0.287881968 container remove 90b644033e728fad487ea2d4567371884b72433a45150d4a3f01fb090546912d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_wright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 19:04:59 compute-0 systemd[1]: libpod-conmon-90b644033e728fad487ea2d4567371884b72433a45150d4a3f01fb090546912d.scope: Deactivated successfully.
Dec  3 19:04:59 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:04:59 compute-0 podman[455338]: 2025-12-03 19:04:59.55151406 +0000 UTC m=+0.051998860 container create 6bb08b3b842a79ee8e33844b21e94296300a5e0d4408c4a980ebe9f7390e075f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_lumiere, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  3 19:04:59 compute-0 systemd[1]: Started libpod-conmon-6bb08b3b842a79ee8e33844b21e94296300a5e0d4408c4a980ebe9f7390e075f.scope.
Dec  3 19:04:59 compute-0 ovn_controller[89305]: 2025-12-03T19:04:59Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2c:da:52 10.100.3.71
Dec  3 19:04:59 compute-0 ovn_controller[89305]: 2025-12-03T19:04:59Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2c:da:52 10.100.3.71
Dec  3 19:04:59 compute-0 podman[455338]: 2025-12-03 19:04:59.527196037 +0000 UTC m=+0.027680847 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:04:59 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:04:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a75e90cee2483957b98ac18c5bea8be9ec8a6c9b713fd9d7fc8d1c37fe6727b3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 19:04:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a75e90cee2483957b98ac18c5bea8be9ec8a6c9b713fd9d7fc8d1c37fe6727b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 19:04:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a75e90cee2483957b98ac18c5bea8be9ec8a6c9b713fd9d7fc8d1c37fe6727b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 19:04:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a75e90cee2483957b98ac18c5bea8be9ec8a6c9b713fd9d7fc8d1c37fe6727b3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 19:04:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a75e90cee2483957b98ac18c5bea8be9ec8a6c9b713fd9d7fc8d1c37fe6727b3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 19:04:59 compute-0 podman[455338]: 2025-12-03 19:04:59.677947606 +0000 UTC m=+0.178432436 container init 6bb08b3b842a79ee8e33844b21e94296300a5e0d4408c4a980ebe9f7390e075f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_lumiere, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 19:04:59 compute-0 podman[455338]: 2025-12-03 19:04:59.700767343 +0000 UTC m=+0.201252133 container start 6bb08b3b842a79ee8e33844b21e94296300a5e0d4408c4a980ebe9f7390e075f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_lumiere, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Dec  3 19:04:59 compute-0 podman[455338]: 2025-12-03 19:04:59.706286398 +0000 UTC m=+0.206771238 container attach 6bb08b3b842a79ee8e33844b21e94296300a5e0d4408c4a980ebe9f7390e075f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_lumiere, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 19:04:59 compute-0 podman[158200]: time="2025-12-03T19:04:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 19:04:59 compute-0 podman[158200]: @ - - [03/Dec/2025:19:04:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45517 "" "Go-http-client/1.1"
Dec  3 19:04:59 compute-0 podman[158200]: @ - - [03/Dec/2025:19:04:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9065 "" "Go-http-client/1.1"
Dec  3 19:04:59 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1983: 321 pgs: 321 active+clean; 208 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 624 KiB/s wr, 9 op/s
Dec  3 19:05:00 compute-0 focused_lumiere[455354]: --> passed data devices: 0 physical, 3 LVM
Dec  3 19:05:00 compute-0 focused_lumiere[455354]: --> relative data size: 1.0
Dec  3 19:05:00 compute-0 focused_lumiere[455354]: --> All data devices are unavailable
Dec  3 19:05:00 compute-0 systemd[1]: libpod-6bb08b3b842a79ee8e33844b21e94296300a5e0d4408c4a980ebe9f7390e075f.scope: Deactivated successfully.
Dec  3 19:05:00 compute-0 podman[455338]: 2025-12-03 19:05:00.89125005 +0000 UTC m=+1.391734850 container died 6bb08b3b842a79ee8e33844b21e94296300a5e0d4408c4a980ebe9f7390e075f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_lumiere, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  3 19:05:00 compute-0 systemd[1]: libpod-6bb08b3b842a79ee8e33844b21e94296300a5e0d4408c4a980ebe9f7390e075f.scope: Consumed 1.096s CPU time.
Dec  3 19:05:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-a75e90cee2483957b98ac18c5bea8be9ec8a6c9b713fd9d7fc8d1c37fe6727b3-merged.mount: Deactivated successfully.
Dec  3 19:05:00 compute-0 podman[455338]: 2025-12-03 19:05:00.98384692 +0000 UTC m=+1.484331720 container remove 6bb08b3b842a79ee8e33844b21e94296300a5e0d4408c4a980ebe9f7390e075f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_lumiere, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec  3 19:05:01 compute-0 systemd[1]: libpod-conmon-6bb08b3b842a79ee8e33844b21e94296300a5e0d4408c4a980ebe9f7390e075f.scope: Deactivated successfully.
Dec  3 19:05:01 compute-0 podman[455392]: 2025-12-03 19:05:01.08095963 +0000 UTC m=+0.117134839 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm)
Dec  3 19:05:01 compute-0 podman[455403]: 2025-12-03 19:05:01.112712965 +0000 UTC m=+0.078744622 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  3 19:05:01 compute-0 podman[455402]: 2025-12-03 19:05:01.149412321 +0000 UTC m=+0.118360519 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, config_id=edpm, io.openshift.expose-services=, name=ubi9, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, com.redhat.component=ubi9-container, container_name=kepler, release=1214.1726694543, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec  3 19:05:01 compute-0 openstack_network_exporter[365222]: ERROR   19:05:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 19:05:01 compute-0 openstack_network_exporter[365222]: ERROR   19:05:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:05:01 compute-0 openstack_network_exporter[365222]: ERROR   19:05:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:05:01 compute-0 openstack_network_exporter[365222]: ERROR   19:05:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 19:05:01 compute-0 openstack_network_exporter[365222]: ERROR   19:05:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 19:05:01 compute-0 podman[455584]: 2025-12-03 19:05:01.813434239 +0000 UTC m=+0.069593630 container create 9f6d585b2696b9985a292f5ad570a8a0bd5206f8746810d10f2e05879c95be59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 19:05:01 compute-0 nova_compute[348325]: 2025-12-03 19:05:01.857 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:05:01 compute-0 systemd[1]: Started libpod-conmon-9f6d585b2696b9985a292f5ad570a8a0bd5206f8746810d10f2e05879c95be59.scope.
Dec  3 19:05:01 compute-0 podman[455584]: 2025-12-03 19:05:01.789238818 +0000 UTC m=+0.045398249 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:05:01 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:05:01 compute-0 podman[455584]: 2025-12-03 19:05:01.927365949 +0000 UTC m=+0.183525380 container init 9f6d585b2696b9985a292f5ad570a8a0bd5206f8746810d10f2e05879c95be59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec  3 19:05:01 compute-0 podman[455584]: 2025-12-03 19:05:01.93598398 +0000 UTC m=+0.192143391 container start 9f6d585b2696b9985a292f5ad570a8a0bd5206f8746810d10f2e05879c95be59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_moore, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  3 19:05:01 compute-0 podman[455584]: 2025-12-03 19:05:01.940845139 +0000 UTC m=+0.197004580 container attach 9f6d585b2696b9985a292f5ad570a8a0bd5206f8746810d10f2e05879c95be59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_moore, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 19:05:01 compute-0 focused_moore[455597]: 167 167
Dec  3 19:05:01 compute-0 systemd[1]: libpod-9f6d585b2696b9985a292f5ad570a8a0bd5206f8746810d10f2e05879c95be59.scope: Deactivated successfully.
Dec  3 19:05:01 compute-0 podman[455584]: 2025-12-03 19:05:01.943937684 +0000 UTC m=+0.200097075 container died 9f6d585b2696b9985a292f5ad570a8a0bd5206f8746810d10f2e05879c95be59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_moore, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec  3 19:05:01 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1984: 321 pgs: 321 active+clean; 216 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 136 KiB/s rd, 1.3 MiB/s wr, 37 op/s
Dec  3 19:05:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-806dba4c08f81bb65b8bd550a2187bf5a8afa090dbae097c935060158a15cd14-merged.mount: Deactivated successfully.
Dec  3 19:05:01 compute-0 podman[455584]: 2025-12-03 19:05:01.994707373 +0000 UTC m=+0.250866764 container remove 9f6d585b2696b9985a292f5ad570a8a0bd5206f8746810d10f2e05879c95be59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Dec  3 19:05:02 compute-0 systemd[1]: libpod-conmon-9f6d585b2696b9985a292f5ad570a8a0bd5206f8746810d10f2e05879c95be59.scope: Deactivated successfully.
Dec  3 19:05:02 compute-0 podman[455621]: 2025-12-03 19:05:02.22565805 +0000 UTC m=+0.063157652 container create a31167ed5eb067a780b7f2a6b9c76e5f02214a602af0ba12cd1322a2cdec59b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_buck, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 19:05:02 compute-0 systemd[1]: Started libpod-conmon-a31167ed5eb067a780b7f2a6b9c76e5f02214a602af0ba12cd1322a2cdec59b3.scope.
Dec  3 19:05:02 compute-0 podman[455621]: 2025-12-03 19:05:02.194188572 +0000 UTC m=+0.031688204 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:05:02 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:05:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bda060c39b6dbaa9262b1671a0409c5759adfd974053eeca9f4080ce960dddd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 19:05:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bda060c39b6dbaa9262b1671a0409c5759adfd974053eeca9f4080ce960dddd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 19:05:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bda060c39b6dbaa9262b1671a0409c5759adfd974053eeca9f4080ce960dddd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 19:05:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bda060c39b6dbaa9262b1671a0409c5759adfd974053eeca9f4080ce960dddd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 19:05:02 compute-0 podman[455621]: 2025-12-03 19:05:02.359352034 +0000 UTC m=+0.196851666 container init a31167ed5eb067a780b7f2a6b9c76e5f02214a602af0ba12cd1322a2cdec59b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_buck, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 19:05:02 compute-0 podman[455621]: 2025-12-03 19:05:02.37191706 +0000 UTC m=+0.209416632 container start a31167ed5eb067a780b7f2a6b9c76e5f02214a602af0ba12cd1322a2cdec59b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_buck, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec  3 19:05:02 compute-0 podman[455621]: 2025-12-03 19:05:02.375975259 +0000 UTC m=+0.213474851 container attach a31167ed5eb067a780b7f2a6b9c76e5f02214a602af0ba12cd1322a2cdec59b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_buck, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]: {
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:    "0": [
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:        {
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:            "devices": [
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:                "/dev/loop3"
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:            ],
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:            "lv_name": "ceph_lv0",
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:            "lv_size": "21470642176",
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=973fbbc8-5aff-4a53-bee8-42e5a6788dd6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:            "lv_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:            "name": "ceph_lv0",
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:            "tags": {
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:                "ceph.block_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:                "ceph.cephx_lockbox_secret": "",
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:                "ceph.cluster_name": "ceph",
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:                "ceph.crush_device_class": "",
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:                "ceph.encrypted": "0",
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:                "ceph.osd_fsid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:                "ceph.osd_id": "0",
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:                "ceph.type": "block",
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:                "ceph.vdo": "0"
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:            },
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:            "type": "block",
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:            "vg_name": "ceph_vg0"
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:        }
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:    ],
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:    "1": [
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:        {
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:            "devices": [
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:                "/dev/loop4"
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:            ],
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:            "lv_name": "ceph_lv1",
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:            "lv_size": "21470642176",
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1e2b0083-5293-47cb-a3d1-bc27cedc4ede,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:            "lv_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:            "name": "ceph_lv1",
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:            "tags": {
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:                "ceph.block_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:                "ceph.cephx_lockbox_secret": "",
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:                "ceph.cluster_name": "ceph",
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:                "ceph.crush_device_class": "",
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:                "ceph.encrypted": "0",
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:                "ceph.osd_fsid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:                "ceph.osd_id": "1",
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:                "ceph.type": "block",
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:                "ceph.vdo": "0"
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:            },
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:            "type": "block",
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:            "vg_name": "ceph_vg1"
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:        }
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:    ],
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:    "2": [
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:        {
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:            "devices": [
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:                "/dev/loop5"
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:            ],
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:            "lv_name": "ceph_lv2",
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:            "lv_size": "21470642176",
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2abec9de-afba-437e-9a17-384a1dd8cd50,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:            "lv_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:            "name": "ceph_lv2",
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:            "tags": {
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:                "ceph.block_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:                "ceph.cephx_lockbox_secret": "",
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:                "ceph.cluster_name": "ceph",
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:                "ceph.crush_device_class": "",
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:                "ceph.encrypted": "0",
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:                "ceph.osd_fsid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:                "ceph.osd_id": "2",
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:                "ceph.type": "block",
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:                "ceph.vdo": "0"
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:            },
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:            "type": "block",
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:            "vg_name": "ceph_vg2"
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:        }
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]:    ]
Dec  3 19:05:03 compute-0 xenodochial_buck[455637]: }
Dec  3 19:05:03 compute-0 systemd[1]: libpod-a31167ed5eb067a780b7f2a6b9c76e5f02214a602af0ba12cd1322a2cdec59b3.scope: Deactivated successfully.
Dec  3 19:05:03 compute-0 podman[455621]: 2025-12-03 19:05:03.188829539 +0000 UTC m=+1.026329131 container died a31167ed5eb067a780b7f2a6b9c76e5f02214a602af0ba12cd1322a2cdec59b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_buck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Dec  3 19:05:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-6bda060c39b6dbaa9262b1671a0409c5759adfd974053eeca9f4080ce960dddd-merged.mount: Deactivated successfully.
Dec  3 19:05:03 compute-0 podman[455621]: 2025-12-03 19:05:03.270018871 +0000 UTC m=+1.107518473 container remove a31167ed5eb067a780b7f2a6b9c76e5f02214a602af0ba12cd1322a2cdec59b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_buck, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 19:05:03 compute-0 systemd[1]: libpod-conmon-a31167ed5eb067a780b7f2a6b9c76e5f02214a602af0ba12cd1322a2cdec59b3.scope: Deactivated successfully.
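The JSON printed by the short-lived xenodochial_buck container above is keyed by OSD id and matches the --format json output of `ceph-volume lvm list`, which cephadm runs in a throwaway container to inventory OSD logical volumes and their LVM tags. A minimal sketch of consuming that structure, assuming the JSON has already been captured from the container's stdout (the function name and the mapping shape are illustrative, not cephadm's own code):

    import json

    def osd_block_devices(lvm_list_json: str) -> dict:
        """Map OSD id -> (lv_path, osd_fsid) from `ceph-volume lvm list --format json` output."""
        result = {}
        for osd_id, lvs in json.loads(lvm_list_json).items():
            for lv in lvs:
                # Only the 'block' LV carries the BlueStore data device; a separate
                # db/wal device would appear as another entry under the same OSD id.
                if lv.get("type") == "block":
                    result[int(osd_id)] = (lv["lv_path"], lv["tags"]["ceph.osd_fsid"])
        return result

    # Applied to the listing above, this yields:
    # {0: ('/dev/ceph_vg0/ceph_lv0', '973fbbc8-5aff-4a53-bee8-42e5a6788dd6'), ...}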
Dec  3 19:05:03 compute-0 nova_compute[348325]: 2025-12-03 19:05:03.602 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:05:03 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1985: 321 pgs: 321 active+clean; 230 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 292 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Dec  3 19:05:04 compute-0 podman[455794]: 2025-12-03 19:05:04.271554976 +0000 UTC m=+0.055141947 container create 6a6c563ecefc89d30b59d7a5a53255848436beb8f16ac1d783d05781824d890e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_saha, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 19:05:04 compute-0 systemd[1]: Started libpod-conmon-6a6c563ecefc89d30b59d7a5a53255848436beb8f16ac1d783d05781824d890e.scope.
Dec  3 19:05:04 compute-0 podman[455794]: 2025-12-03 19:05:04.244634079 +0000 UTC m=+0.028221080 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:05:04 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:05:04 compute-0 podman[455794]: 2025-12-03 19:05:04.378504577 +0000 UTC m=+0.162091548 container init 6a6c563ecefc89d30b59d7a5a53255848436beb8f16ac1d783d05781824d890e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 19:05:04 compute-0 podman[455794]: 2025-12-03 19:05:04.386639854 +0000 UTC m=+0.170226815 container start 6a6c563ecefc89d30b59d7a5a53255848436beb8f16ac1d783d05781824d890e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_saha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  3 19:05:04 compute-0 podman[455794]: 2025-12-03 19:05:04.391504274 +0000 UTC m=+0.175091235 container attach 6a6c563ecefc89d30b59d7a5a53255848436beb8f16ac1d783d05781824d890e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_saha, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec  3 19:05:04 compute-0 keen_saha[455810]: 167 167
Dec  3 19:05:04 compute-0 systemd[1]: libpod-6a6c563ecefc89d30b59d7a5a53255848436beb8f16ac1d783d05781824d890e.scope: Deactivated successfully.
Dec  3 19:05:04 compute-0 podman[455794]: 2025-12-03 19:05:04.396642079 +0000 UTC m=+0.180229030 container died 6a6c563ecefc89d30b59d7a5a53255848436beb8f16ac1d783d05781824d890e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_saha, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default)
Dec  3 19:05:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-9742b45b5c159613be26a63e60042d8fa7356705058af82e76dfbe14108842d4-merged.mount: Deactivated successfully.
Dec  3 19:05:04 compute-0 podman[455794]: 2025-12-03 19:05:04.453482926 +0000 UTC m=+0.237069867 container remove 6a6c563ecefc89d30b59d7a5a53255848436beb8f16ac1d783d05781824d890e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_saha, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default)
Dec  3 19:05:04 compute-0 systemd[1]: libpod-conmon-6a6c563ecefc89d30b59d7a5a53255848436beb8f16ac1d783d05781824d890e.scope: Deactivated successfully.
Dec  3 19:05:04 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:05:04 compute-0 podman[455833]: 2025-12-03 19:05:04.654943934 +0000 UTC m=+0.056822689 container create f68107324fd3026fd236c42ae01bc1c86017fb826dd103d8b90cbc2b24a322c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dijkstra, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Dec  3 19:05:04 compute-0 systemd[1]: Started libpod-conmon-f68107324fd3026fd236c42ae01bc1c86017fb826dd103d8b90cbc2b24a322c8.scope.
Dec  3 19:05:04 compute-0 podman[455833]: 2025-12-03 19:05:04.635996102 +0000 UTC m=+0.037874857 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:05:04 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:05:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88b3bd77edbd987494cf9743e709655bb62db85f3a448fcd2b3532bef41caca2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 19:05:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88b3bd77edbd987494cf9743e709655bb62db85f3a448fcd2b3532bef41caca2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 19:05:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88b3bd77edbd987494cf9743e709655bb62db85f3a448fcd2b3532bef41caca2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 19:05:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88b3bd77edbd987494cf9743e709655bb62db85f3a448fcd2b3532bef41caca2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 19:05:04 compute-0 podman[455833]: 2025-12-03 19:05:04.787613162 +0000 UTC m=+0.189491957 container init f68107324fd3026fd236c42ae01bc1c86017fb826dd103d8b90cbc2b24a322c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dijkstra, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 19:05:04 compute-0 podman[455833]: 2025-12-03 19:05:04.798925088 +0000 UTC m=+0.200803833 container start f68107324fd3026fd236c42ae01bc1c86017fb826dd103d8b90cbc2b24a322c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dijkstra, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec  3 19:05:04 compute-0 podman[455833]: 2025-12-03 19:05:04.803743595 +0000 UTC m=+0.205622380 container attach f68107324fd3026fd236c42ae01bc1c86017fb826dd103d8b90cbc2b24a322c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dijkstra, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec  3 19:05:05 compute-0 jolly_dijkstra[455848]: {
Dec  3 19:05:05 compute-0 jolly_dijkstra[455848]:    "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
Dec  3 19:05:05 compute-0 jolly_dijkstra[455848]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:05:05 compute-0 jolly_dijkstra[455848]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 19:05:05 compute-0 jolly_dijkstra[455848]:        "osd_id": 1,
Dec  3 19:05:05 compute-0 jolly_dijkstra[455848]:        "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 19:05:05 compute-0 jolly_dijkstra[455848]:        "type": "bluestore"
Dec  3 19:05:05 compute-0 jolly_dijkstra[455848]:    },
Dec  3 19:05:05 compute-0 jolly_dijkstra[455848]:    "2abec9de-afba-437e-9a17-384a1dd8cd50": {
Dec  3 19:05:05 compute-0 jolly_dijkstra[455848]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:05:05 compute-0 jolly_dijkstra[455848]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 19:05:05 compute-0 jolly_dijkstra[455848]:        "osd_id": 2,
Dec  3 19:05:05 compute-0 jolly_dijkstra[455848]:        "osd_uuid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 19:05:05 compute-0 jolly_dijkstra[455848]:        "type": "bluestore"
Dec  3 19:05:05 compute-0 jolly_dijkstra[455848]:    },
Dec  3 19:05:05 compute-0 jolly_dijkstra[455848]:    "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": {
Dec  3 19:05:05 compute-0 jolly_dijkstra[455848]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:05:05 compute-0 jolly_dijkstra[455848]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 19:05:05 compute-0 jolly_dijkstra[455848]:        "osd_id": 0,
Dec  3 19:05:05 compute-0 jolly_dijkstra[455848]:        "osd_uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 19:05:05 compute-0 jolly_dijkstra[455848]:        "type": "bluestore"
Dec  3 19:05:05 compute-0 jolly_dijkstra[455848]:    }
Dec  3 19:05:05 compute-0 jolly_dijkstra[455848]: }
Dec  3 19:05:05 compute-0 systemd[1]: libpod-f68107324fd3026fd236c42ae01bc1c86017fb826dd103d8b90cbc2b24a322c8.scope: Deactivated successfully.
Dec  3 19:05:05 compute-0 systemd[1]: libpod-f68107324fd3026fd236c42ae01bc1c86017fb826dd103d8b90cbc2b24a322c8.scope: Consumed 1.101s CPU time.
Dec  3 19:05:05 compute-0 podman[455833]: 2025-12-03 19:05:05.898160238 +0000 UTC m=+1.300039013 container died f68107324fd3026fd236c42ae01bc1c86017fb826dd103d8b90cbc2b24a322c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dijkstra, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  3 19:05:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-88b3bd77edbd987494cf9743e709655bb62db85f3a448fcd2b3532bef41caca2-merged.mount: Deactivated successfully.
Dec  3 19:05:05 compute-0 podman[455833]: 2025-12-03 19:05:05.968085615 +0000 UTC m=+1.369964360 container remove f68107324fd3026fd236c42ae01bc1c86017fb826dd103d8b90cbc2b24a322c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 19:05:05 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1986: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 308 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Dec  3 19:05:05 compute-0 systemd[1]: libpod-conmon-f68107324fd3026fd236c42ae01bc1c86017fb826dd103d8b90cbc2b24a322c8.scope: Deactivated successfully.
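The second JSON block, printed by jolly_dijkstra and keyed by OSD fsid, is consistent with `ceph-volume raw list` output: each entry resolves an OSD uuid to its BlueStore device under /dev/mapper and to the cluster fsid. A small illustrative cross-check of that listing against the LVM-tag view from the previous sketch, assuming both documents have been parsed into plain dicts (the helper and its report format are hypothetical, not part of ceph-volume):

    def cross_check(lvm: dict, raw: dict, cluster_fsid: str) -> list:
        """Compare the LVM-tag view (osd_id -> (lv_path, osd_fsid)) with the raw
        listing (osd_uuid -> entry dict). Hypothetical consistency check."""
        problems = []
        for osd_uuid, entry in raw.items():
            if entry["ceph_fsid"] != cluster_fsid:
                problems.append("%s: belongs to foreign cluster %s" % (osd_uuid, entry["ceph_fsid"]))
            _, tagged_fsid = lvm.get(entry["osd_id"], (None, None))
            if tagged_fsid != osd_uuid:
                problems.append("osd.%s: LVM tag fsid %s != raw fsid %s"
                                % (entry["osd_id"], tagged_fsid, osd_uuid))
        return problems

    # For the two listings above all three OSDs agree, so cross_check(...) returns [].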
Dec  3 19:05:06 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 19:05:06 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:05:06 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 19:05:06 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:05:06 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 019d7c6a-6b2f-45cb-a6a4-cbd283fdd62d does not exist
Dec  3 19:05:06 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev b2baf6d4-9908-4da4-9987-8626a16a31c5 does not exist
Dec  3 19:05:06 compute-0 nova_compute[348325]: 2025-12-03 19:05:06.859 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:05:07 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:05:07 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:05:07 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1987: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 308 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Dec  3 19:05:08 compute-0 nova_compute[348325]: 2025-12-03 19:05:08.605 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:05:09 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:05:09 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1988: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 308 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Dec  3 19:05:11 compute-0 nova_compute[348325]: 2025-12-03 19:05:11.865 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:05:11 compute-0 podman[455944]: 2025-12-03 19:05:11.969484925 +0000 UTC m=+0.119550788 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
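The health_status event above embeds, as a Python-style dict, the container definition that edpm_ansible rendered for podman_exporter (image, ports, network, environment, volumes, healthcheck). Purely as a readability aid, here is a hedged sketch that translates such a config_data mapping into an equivalent `podman run` argument vector; the flags are standard podman CLI, but the translation function itself is hypothetical:

    def to_podman_argv(name: str, cfg: dict) -> list:
        """Render an edpm-style config_data mapping as `podman run` arguments (sketch)."""
        argv = ["podman", "run", "--detach", "--name", name]
        if cfg.get("privileged"):
            argv.append("--privileged")
        if cfg.get("user"):
            argv += ["--user", cfg["user"]]
        if cfg.get("net"):
            argv += ["--network", cfg["net"]]
        for port in cfg.get("ports", []):
            # With --network host this mapping is effectively a no-op, as here.
            argv += ["--publish", port]
        for key, value in cfg.get("environment", {}).items():
            argv += ["--env", "%s=%s" % (key, value)]
        for volume in cfg.get("volumes", []):
            argv += ["--volume", volume]
        argv.append(cfg["image"])
        argv += cfg.get("command", [])
        return argv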
Dec  3 19:05:11 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1989: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 259 KiB/s rd, 1.5 MiB/s wr, 51 op/s
Dec  3 19:05:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:13.255 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them; the polling run can therefore be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 19:05:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:13.256 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  3 19:05:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:13.256 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c375070>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:05:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:13.256 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7eff8d7fffe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:05:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:13.257 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c375070>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:05:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:13.257 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff9026f920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c375070>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:05:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:13.257 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c375070>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:05:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:13.258 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c375070>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:05:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:13.258 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffa10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c375070>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:05:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:13.258 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8daba2d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c375070>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:05:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:13.258 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c375070>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:05:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:13.258 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff90799b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c375070>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:05:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:13.258 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c375070>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:05:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:13.258 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8f46ebd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c375070>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:05:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:13.258 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c375070>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:05:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:13.258 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c375070>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:05:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:13.258 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c375070>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:05:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:13.258 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c375070>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:05:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:13.259 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c375070>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:05:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:13.259 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c375070>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:05:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:13.259 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c375070>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:05:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:13.259 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c375070>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:05:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:13.259 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c375070>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:05:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:13.259 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c375070>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:05:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:13.260 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c375070>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:05:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:13.260 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7fff50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c375070>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:05:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:13.260 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c375070>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:05:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:13.260 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7fffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c375070>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:05:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:13.261 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8ef7c7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c375070>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:05:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:13.264 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a4fc45c7-44e4-4b50-a3e0-98de13268f88', 'name': 'te-0714371-asg-eacwc356yfed-wjjibmhqaqmp-wkbbxaqu3pya', 'flavor': {'id': 'a94cfbfb-a20a-4689-ac91-e7436db75880', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '29e9e995-880d-46f8-bdd0-149d4e107ea9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000c', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'd29cef7b24ee4d30b2b3f5027ec6aafb', 'user_id': '5b5e6c2a7cce4e3b96611203def80123', 'hostId': 'd87badab98086e7cd0aaefe9beb8cbc86d59712043f354b2bb8c77be', 'status': 'active', 'metadata': {'metering.server_group': 'd721c97c-b9eb-44f9-a826-1b99239b172a'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  3 19:05:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:13.267 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance a364994c-8442-4a4c-bd6b-f3a2d31e4483 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec  3 19:05:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:13.269 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/a364994c-8442-4a4c-bd6b-f3a2d31e4483 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}381125532ab0338283f553a8d9011c877e61445a70740cb69aa0e3ed00495f3c" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec  3 19:05:13 compute-0 nova_compute[348325]: 2025-12-03 19:05:13.608 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:05:13 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1990: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 172 KiB/s rd, 834 KiB/s wr, 23 op/s
Dec  3 19:05:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:05:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:05:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:05:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:05:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:05:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:05:14 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_19:05:14
Dec  3 19:05:14 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 19:05:14 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 19:05:14 compute-0 ceph-mgr[193091]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'vms', '.mgr', 'default.rgw.log', 'images', '.rgw.root', 'backups', 'cephfs.cephfs.meta', 'volumes']
Dec  3 19:05:14 compute-0 ceph-mgr[193091]: [balancer INFO root] prepared 0/10 changes
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.347 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1831 Content-Type: application/json Date: Wed, 03 Dec 2025 19:05:13 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-fdceb64c-9d1d-4559-9d53-62c898b75954 x-openstack-request-id: req-fdceb64c-9d1d-4559-9d53-62c898b75954 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.347 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "a364994c-8442-4a4c-bd6b-f3a2d31e4483", "name": "te-0714371-asg-eacwc356yfed-ehdrupxp3h3u-navxh3tm2qn5", "status": "ACTIVE", "tenant_id": "d29cef7b24ee4d30b2b3f5027ec6aafb", "user_id": "5b5e6c2a7cce4e3b96611203def80123", "metadata": {"metering.server_group": "d721c97c-b9eb-44f9-a826-1b99239b172a"}, "hostId": "d87badab98086e7cd0aaefe9beb8cbc86d59712043f354b2bb8c77be", "image": {"id": "29e9e995-880d-46f8-bdd0-149d4e107ea9", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/29e9e995-880d-46f8-bdd0-149d4e107ea9"}]}, "flavor": {"id": "a94cfbfb-a20a-4689-ac91-e7436db75880", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/a94cfbfb-a20a-4689-ac91-e7436db75880"}]}, "created": "2025-12-03T19:04:14Z", "updated": "2025-12-03T19:04:23Z", "addresses": {"": [{"version": 4, "addr": "10.100.3.71", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:2c:da:52"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/a364994c-8442-4a4c-bd6b-f3a2d31e4483"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/a364994c-8442-4a4c-bd6b-f3a2d31e4483"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-03T19:04:23.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "default"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000f", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.348 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/a364994c-8442-4a4c-bd6b-f3a2d31e4483 used request id req-fdceb64c-9d1d-4559-9d53-62c898b75954 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.356 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a364994c-8442-4a4c-bd6b-f3a2d31e4483', 'name': 'te-0714371-asg-eacwc356yfed-ehdrupxp3h3u-navxh3tm2qn5', 'flavor': {'id': 'a94cfbfb-a20a-4689-ac91-e7436db75880', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '29e9e995-880d-46f8-bdd0-149d4e107ea9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'd29cef7b24ee4d30b2b3f5027ec6aafb', 'user_id': '5b5e6c2a7cce4e3b96611203def80123', 'hostId': 'd87badab98086e7cd0aaefe9beb8cbc86d59712043f354b2bb8c77be', 'status': 'active', 'metadata': {'metering.server_group': 'd721c97c-b9eb-44f9-a826-1b99239b172a'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
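The REQ/RESP pair above shows ceilometer's discovery code resolving instance metadata through python-novaclient against the internal Nova endpoint. A self-contained sketch of the same lookup, assuming Keystone v3 password auth; the endpoint and credentials below are placeholders, not values taken from this deployment:

    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from novaclient import client as nova_client

    def get_server_metadata(server_id: str) -> dict:
        """Fetch a server record the way ceilometer's discovery does (sketch)."""
        auth = v3.Password(
            auth_url="https://keystone-internal.openstack.svc:5000/v3",  # placeholder endpoint
            username="ceilometer", password="secret",                    # placeholder credentials
            project_name="service", user_domain_id="default", project_domain_id="default",
        )
        nova = nova_client.Client("2.1", session=session.Session(auth=auth))
        server = nova.servers.get(server_id)  # GET /v2.1/servers/{id}, as in the log above
        return server.metadata                # e.g. {'metering.server_group': '...'}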
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.357 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.358 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.358 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
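The two coordination messages above repeat for every pollster: with no coordination group configured, the hashring list is [None] and this agent polls all local instances itself. When agents do share a polling source, each resource is assigned to exactly one agent via a consistent hash ring, along the lines of this sketch (agent names invented; tooz is the coordination library ceilometer builds on):

    # Sketch of hash-ring partitioning; member names are made up.
    from tooz import hashring

    ring = hashring.HashRing(["agent-compute-0", "agent-compute-1"])
    me = "agent-compute-0"

    def polls_here(resource_id: str) -> bool:
        # An agent only polls resources the ring maps to itself.
        return me in ring.get_nodes(resource_id.encode(), replicas=1)

    print(polls_here("a364994c-8442-4a4c-bd6b-f3a2d31e4483"))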
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.358 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.359 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-03T19:05:14.358655) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.367 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.374 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for a364994c-8442-4a4c-bd6b-f3a2d31e4483 / tapb761f609-27 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
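"No delta meter predecessor" means this is the first time the inspector has seen counters for this instance/tap pair, so there is no earlier cumulative reading to subtract; the .delta variants of the vNIC meters therefore report 0 this cycle (see network.outgoing.bytes.delta further down). The bookkeeping amounts to a per-device cache of the last cumulative value, sketched here with illustrative names, not ceilometer's internals:

    # Illustrative delta-meter cache, not ceilometer's actual code.
    _prev: dict[tuple[str, str], int] = {}

    def delta(instance_id: str, dev: str, current: int):
        previous = _prev.get((instance_id, dev))
        _prev[(instance_id, dev)] = current
        if previous is None:
            return None  # the "no predecessor" case logged above
        return current - previous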
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.374 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.375 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.376 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7eff8d8a80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.376 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.376 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.376 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.377 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-03T19:05:14.376924) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.377 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.377 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.378 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.379 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
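All of the vNIC counters in these blocks (bytes, packets, drops, errors) come from libvirt's per-interface statistics on the tap device named in the 19:05:14.374 message. A direct query looks like this sketch; the domain and device names are taken from the log, and note that libvirt reports from the host side of the tap, so the guest-incoming vs. guest-outgoing mapping is a convention of the inspector:

    # Sketch: read raw vNIC counters the way the libvirt inspector does.
    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("instance-0000000f")  # OS-EXT-SRV-ATTR:instance_name
    (rx_bytes, rx_pkts, rx_errs, rx_drop,
     tx_bytes, tx_pkts, tx_errs, tx_drop) = dom.interfaceStats("tapb761f609-27")
    print(tx_bytes, tx_pkts)  # cumulative counters, cf. 1620 bytes / 16 packets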
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.379 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7eff8d8a8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.379 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.379 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff9026f920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.380 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff9026f920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.380 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-03T19:05:14.380327) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.380 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.381 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.381 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.382 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.382 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7eff8d8a8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.383 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.383 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.383 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.383 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.384 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.384 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.385 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.385 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7eff8d8a81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.386 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.386 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-03T19:05:14.383675) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.386 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.386 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.387 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.387 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-03T19:05:14.387016) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.387 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.388 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: te-0714371-asg-eacwc356yfed-ehdrupxp3h3u-navxh3tm2qn5>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-0714371-asg-eacwc356yfed-ehdrupxp3h3u-navxh3tm2qn5>]
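The ERROR above is a deliberate circuit breaker, not a crash: the libvirt inspector cannot produce *.rate samples directly (a rate must be derived from two successive cumulative readings), so the pollster raises PollsterPermanentError and the manager stops polling that resource with that pollster instead of failing every cycle. The pattern, in an illustrative form (class and field names here are assumptions, not ceilometer's exact internals):

    # Illustrative permanent-error blacklist, modeled on the log above.
    class PollsterPermanentError(Exception):
        def __init__(self, resources):
            self.fail_res_list = resources

    blacklist: set[str] = set()

    def poll(pollster, resources):
        todo = [r for r in resources if r["id"] not in blacklist]
        try:
            return pollster.get_samples(todo)
        except PollsterPermanentError as err:
            # Stop polling these resources with this pollster from now on.
            blacklist.update(r["id"] for r in err.fail_res_list)
            return []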
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.388 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7eff8d7ff9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.389 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.389 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ffa10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.390 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ffa10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.390 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-03T19:05:14.390188) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.390 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.390 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/network.incoming.bytes volume: 1520 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.391 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/network.incoming.bytes volume: 1346 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.392 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.392 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7eff8d7fe840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.392 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.392 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8daba2d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.393 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8daba2d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.393 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-03T19:05:14.393189) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.393 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.410 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.410 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.427 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.427 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.427 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
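Each instance emits two disk.device.capacity samples because it has two block devices: the 1 GiB root disk (1073741824 bytes, matching disk: 1 in the m1.nano flavor above) and a ~498 KiB device consistent with the config drive ("config_drive": "True" in the server record). Per-device capacity is available straight from libvirt, as in this sketch (the vda/sda device names are assumptions):

    # Sketch: per-device capacity via libvirt; device names assumed.
    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("instance-0000000f")
    for dev in ("vda", "sda"):
        capacity, allocation, physical = dom.blockInfo(dev)
        print(dev, capacity)  # 1073741824 (1 GiB) and 509952 in the log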
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.428 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7eff8d8a82c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.428 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.428 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a82f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.428 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a82f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.428 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.428 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.428 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.429 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.429 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-03T19:05:14.428367) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.429 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7eff8d7ff9b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.429 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.429 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff90799b20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.429 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff90799b20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.429 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.429 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-03T19:05:14.429624) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.448 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/memory.usage volume: 43.62890625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.474 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/memory.usage volume: 43.62890625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.475 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
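The memory.usage volume (43.62890625 MiB of the flavor's 128 MiB RAM) is derived from the guest balloon statistics libvirt reports in KiB. A rough reconstruction, hedged because the inspector's exact formula may differ by release:

    # Sketch: approximate memory.usage from libvirt balloon stats (KiB).
    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("instance-0000000f")
    stats = dom.memoryStats()
    if "available" in stats and "unused" in stats:
        used_kib = stats["available"] - stats["unused"]
    else:
        used_kib = stats.get("rss", 0)  # fallback when the balloon is absent
    print(used_kib / 1024.0)  # MiB, cf. 43.62890625 in the log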
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.475 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7eff8d8a8350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.475 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.475 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.476 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.476 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.476 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.476 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.476 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.477 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7eff8f682330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.477 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.477 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8f46ebd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.477 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8f46ebd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.477 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-03T19:05:14.476092) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.477 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.478 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.478 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.478 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.478 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-03T19:05:14.477637) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.478 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.479 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.479 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7eff8d7ff4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.479 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.479 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.479 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.479 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.479 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-03T19:05:14.479717) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.515 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.read.bytes volume: 29154304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.515 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:05:14 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.542 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.read.bytes volume: 30284800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.542 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.542 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.543 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7eff8d930c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.543 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.543 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ffce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.543 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ffce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.543 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.543 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.543 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: te-0714371-asg-eacwc356yfed-ehdrupxp3h3u-navxh3tm2qn5>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-0714371-asg-eacwc356yfed-ehdrupxp3h3u-navxh3tm2qn5>]
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.543 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7eff8d7ff4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.544 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.544 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.544 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.544 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.544 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-03T19:05:14.543332) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.544 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.read.latency volume: 1719418496 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.544 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.read.latency volume: 125457767 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.545 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-03T19:05:14.544662) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.545 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.read.latency volume: 2339550092 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.545 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.read.latency volume: 154099871 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.545 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
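The disk.device.read.latency volumes are cumulative nanoseconds spent in read I/O per device (2339550092 ns is roughly 2.3 s since the domain started), taken from libvirt's extended block statistics:

    # Sketch: cumulative read latency in ns from libvirt block stats.
    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("instance-0000000f")
    stats = dom.blockStatsFlags("vda")  # device name assumed
    print(stats["rd_total_times"])  # ns spent in reads, cf. 2339550092 above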
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.545 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7eff8d7ff530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.545 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.546 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.546 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.546 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.546 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.read.requests volume: 1046 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.546 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-03T19:05:14.546186) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.546 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.546 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.read.requests volume: 1098 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.546 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.548 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.548 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7eff8d7ff590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.548 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.548 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.548 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.548 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.548 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-03T19:05:14.548713) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.548 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.549 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.549 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.549 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.549 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.549 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7eff8d7ff5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.549 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.550 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.550 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.550 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.550 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-03T19:05:14.550189) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.550 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.write.bytes volume: 72855552 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.550 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.550 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.write.bytes volume: 72781824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.550 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.551 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.551 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7eff8d8a8620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.551 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.551 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.551 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.551 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.551 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-03T19:05:14.551658) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.551 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.552 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.552 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
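power.state volume 1 is the numeric power-state code, the same value Nova returned as "OS-EXT-STS:power_state": 1 in the server record earlier, and it corresponds to a running libvirt domain:

    # Sketch: domain state code; 1 == VIR_DOMAIN_RUNNING == Nova power_state 1.
    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("instance-0000000f")
    state, *_ = dom.info()  # [state, maxMem, memory, nrVirtCpu, cpuTime]
    print(state, state == libvirt.VIR_DOMAIN_RUNNING)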
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.552 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7eff8d7ff650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.552 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.552 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.552 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.552 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.552 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-03T19:05:14.552709) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.552 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.write.latency volume: 8765791521 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.553 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.553 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.write.latency volume: 7159528835 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.553 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.553 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.553 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7eff8d7ff6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.554 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.554 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.554 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.554 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.554 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-03T19:05:14.554256) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.554 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.write.requests volume: 315 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.554 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.554 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.write.requests volume: 264 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.555 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.555 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7eff8d7ffa40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.555 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.555 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ffef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.555 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ffef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.555 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.555 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-03T19:05:14.555713) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.555 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/network.incoming.bytes.delta volume: 168 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.556 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.556 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.556 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7eff8d7ff710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.556 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.556 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.556 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.556 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.556 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-03T19:05:14.556783) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.557 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7eff8d7fff20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.557 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.557 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7fff50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.557 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7fff50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.557 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.557 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-03T19:05:14.557659) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.557 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/network.incoming.packets volume: 13 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.558 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/network.incoming.packets volume: 10 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.558 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.558 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7eff8d7ff770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.558 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.558 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff7a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.558 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff7a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.558 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.558 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-03T19:05:14.558720) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.559 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.559 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7eff8d7fff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.559 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.559 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7fffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.559 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7fffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.559 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.559 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-03T19:05:14.559574) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.559 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.559 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.560 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.560 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7eff8d7fdac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.560 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.560 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8ef7c7d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.560 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8ef7c7d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.560 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.560 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-03T19:05:14.560620) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.560 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/cpu volume: 322300000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.561 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/cpu volume: 48110000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.561 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
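The cpu pollster above reports cumulative guest CPU time in nanoseconds (322300000000 ns is roughly 322.3 s of CPU time for instance a4fc45c7-...). A minimal sketch of turning two successive cumulative samples into a utilization percentage, assuming a 10 s polling interval, one vCPU, and a hypothetical earlier sample (none of which appear in this log):

    # Sketch: derive a utilization rate from two cumulative ceilometer "cpu"
    # samples. The current volume is taken from the log; the previous sample,
    # the 10 s interval, and the vCPU count are illustrative assumptions.
    NS_PER_S = 1e9

    def cpu_util_percent(prev_ns, curr_ns, interval_s, vcpus):
        """Percent of available CPU consumed between two polls."""
        used_s = (curr_ns - prev_ns) / NS_PER_S
        return 100.0 * used_s / (interval_s * vcpus)

    print(cpu_util_percent(322_299_000_000, 322_300_000_000, 10, 1))  # -> 10.0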
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.563 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.563 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.563 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.563 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.564 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.564 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.564 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.564 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.564 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.565 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.565 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.565 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.565 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.565 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.566 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.566 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.566 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.566 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:05:14 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:05:14.567 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
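Each polling cycle in this log emits one DEBUG "volume" line per instance per meter, in the fixed shape "<uuid>/<meter> volume: <n>". A small parsing sketch that tallies samples per meter from a syslog file; the log path is an assumption:

    # Sketch: summarize ceilometer sample volumes from lines shaped like
    #   "... DEBUG ceilometer.compute.pollsters [-] <uuid>/<meter> volume: <n> ..."
    import re
    from collections import defaultdict

    SAMPLE_RE = re.compile(
        r"ceilometer\.compute\.pollsters \[-\] "
        r"(?P<instance>[0-9a-f-]{36})/(?P<meter>[\w.]+) volume: (?P<volume>\d+)"
    )

    def tally(path="/var/log/messages"):  # path is an assumption
        counts = defaultdict(int)
        with open(path) as fh:
            for line in fh:
                m = SAMPLE_RE.search(line)
                if m:
                    counts[m["meter"]] += 1
        return dict(counts)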
Dec  3 19:05:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 19:05:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 19:05:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 19:05:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 19:05:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 19:05:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 19:05:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 19:05:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 19:05:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 19:05:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 19:05:15 compute-0 podman[455968]: 2025-12-03 19:05:15.967080286 +0000 UTC m=+0.108088489 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Dec  3 19:05:15 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1991: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 15 KiB/s wr, 3 op/s
Dec  3 19:05:16 compute-0 podman[455967]: 2025-12-03 19:05:16.041077562 +0000 UTC m=+0.200277639 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 19:05:16 compute-0 nova_compute[348325]: 2025-12-03 19:05:16.508 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:05:16 compute-0 nova_compute[348325]: 2025-12-03 19:05:16.867 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:05:17 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1992: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 6.3 KiB/s wr, 0 op/s
Dec  3 19:05:18 compute-0 nova_compute[348325]: 2025-12-03 19:05:18.609 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:05:19 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:05:19 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1993: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 85 B/s wr, 0 op/s
Dec  3 19:05:21 compute-0 nova_compute[348325]: 2025-12-03 19:05:21.870 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:05:21 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1994: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Dec  3 19:05:22 compute-0 nova_compute[348325]: 2025-12-03 19:05:22.485 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:05:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:05:23.362 286999 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 19:05:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:05:23.363 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 19:05:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:05:23.364 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 19:05:23 compute-0 nova_compute[348325]: 2025-12-03 19:05:23.612 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:05:23 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1995: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 1.2 KiB/s wr, 2 op/s
Dec  3 19:05:24 compute-0 nova_compute[348325]: 2025-12-03 19:05:24.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:05:24 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:05:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 19:05:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:05:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 19:05:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:05:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001515557339486879 of space, bias 1.0, pg target 0.4546672018460637 quantized to 32 (current 32)
Dec  3 19:05:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:05:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:05:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:05:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:05:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:05:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Dec  3 19:05:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:05:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 19:05:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:05:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:05:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:05:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 19:05:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:05:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 19:05:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:05:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:05:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:05:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
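The pg_autoscaler arithmetic above is a straight proportion: raw pg target = (pool's share of raw capacity) x bias x a per-root PG budget, later quantized to a power of two. From the logged ratios the budget works out to exactly 300 for this cluster (an inference from the numbers, not something the log states), and the autoscaler only changes pg_num when the target differs from the current value by a large factor, which is why most pools sit at 32 despite tiny raw targets. A minimal re-derivation of two of the lines:

    # Sketch: reproduce the pg_autoscaler "pg target" values logged above.
    # ROOT_PG_BUDGET = 300 is inferred by dividing the logged pg targets by
    # usage * bias; it is not read from any configuration here.
    ROOT_PG_BUDGET = 300

    def raw_pg_target(usage_ratio, bias):
        return usage_ratio * bias * ROOT_PG_BUDGET

    print(raw_pg_target(0.001515557339486879, 1.0))   # 0.4546672018460637 ('vms')
    print(raw_pg_target(5.087256625643029e-07, 4.0))  # 0.0006104707950771635 ('cephfs.cephfs.meta')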
Dec  3 19:05:25 compute-0 nova_compute[348325]: 2025-12-03 19:05:25.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:05:25 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1996: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 1.2 KiB/s wr, 3 op/s
Dec  3 19:05:26 compute-0 nova_compute[348325]: 2025-12-03 19:05:26.874 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:05:27 compute-0 nova_compute[348325]: 2025-12-03 19:05:27.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:05:27 compute-0 podman[456010]: 2025-12-03 19:05:27.934579147 +0000 UTC m=+0.099534101 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  3 19:05:27 compute-0 podman[456012]: 2025-12-03 19:05:27.948760223 +0000 UTC m=+0.097668925 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, version=9.6, io.openshift.expose-services=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, release=1755695350, vcs-type=git, vendor=Red Hat, Inc.)
Dec  3 19:05:27 compute-0 podman[456011]: 2025-12-03 19:05:27.965601454 +0000 UTC m=+0.118335930 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  3 19:05:27 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1997: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 1.2 KiB/s wr, 5 op/s
Dec  3 19:05:28 compute-0 nova_compute[348325]: 2025-12-03 19:05:28.615 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:05:29 compute-0 nova_compute[348325]: 2025-12-03 19:05:29.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:05:29 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:05:29 compute-0 podman[158200]: time="2025-12-03T19:05:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 19:05:29 compute-0 podman[158200]: @ - - [03/Dec/2025:19:05:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 19:05:29 compute-0 podman[158200]: @ - - [03/Dec/2025:19:05:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8653 "" "Go-http-client/1.1"
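The two GET lines above are a metrics collector walking podman's libpod REST API over its unix socket. A stdlib-only sketch of making the same containers/json call from Python; the rootful socket path /run/podman/podman.sock is an assumption:

    # Sketch: query the libpod REST API (as in the GET lines above) over the
    # podman unix socket. The socket path is the conventional rootful default,
    # an assumption here.
    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, sock_path):
            super().__init__("localhost")
            self.sock_path = sock_path

        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self.sock_path)
            self.sock = s

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")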
Dec  3 19:05:29 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1998: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 1.2 KiB/s wr, 5 op/s
Dec  3 19:05:31 compute-0 openstack_network_exporter[365222]: ERROR   19:05:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 19:05:31 compute-0 openstack_network_exporter[365222]: ERROR   19:05:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:05:31 compute-0 openstack_network_exporter[365222]: ERROR   19:05:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:05:31 compute-0 openstack_network_exporter[365222]: ERROR   19:05:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 19:05:31 compute-0 openstack_network_exporter[365222]: ERROR   19:05:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
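These exporter errors mean no ovs-appctl/ovn-appctl control sockets (*.ctl files) were found for ovsdb-server and ovn-northd; ovn-northd does not run on a compute node, so those failures are expected noise. A quick sketch for checking which control sockets actually exist, assuming the conventional runtime directories:

    # Sketch: list the OVS/OVN control sockets the exporter would look for.
    # The directories are the usual defaults, an assumption here.
    import glob

    for d in ("/var/run/openvswitch", "/run/ovn"):
        print(d, glob.glob(f"{d}/*.ctl"))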
Dec  3 19:05:31 compute-0 nova_compute[348325]: 2025-12-03 19:05:31.877 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:05:31 compute-0 podman[456072]: 2025-12-03 19:05:31.954806231 +0000 UTC m=+0.115434248 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, release-0.7.12=, managed_by=edpm_ansible, com.redhat.component=ubi9-container, config_id=edpm, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, version=9.4, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec  3 19:05:31 compute-0 podman[456073]: 2025-12-03 19:05:31.958805249 +0000 UTC m=+0.112388604 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  3 19:05:31 compute-0 podman[456074]: 2025-12-03 19:05:31.975040605 +0000 UTC m=+0.124909250 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 19:05:31 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v1999: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 2.2 KiB/s wr, 5 op/s
Dec  3 19:05:32 compute-0 nova_compute[348325]: 2025-12-03 19:05:32.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:05:32 compute-0 nova_compute[348325]: 2025-12-03 19:05:32.488 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  3 19:05:33 compute-0 nova_compute[348325]: 2025-12-03 19:05:33.269 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "refresh_cache-a364994c-8442-4a4c-bd6b-f3a2d31e4483" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  3 19:05:33 compute-0 nova_compute[348325]: 2025-12-03 19:05:33.269 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquired lock "refresh_cache-a364994c-8442-4a4c-bd6b-f3a2d31e4483" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  3 19:05:33 compute-0 nova_compute[348325]: 2025-12-03 19:05:33.269 348329 DEBUG nova.network.neutron [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec  3 19:05:33 compute-0 nova_compute[348325]: 2025-12-03 19:05:33.620 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:05:33 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2000: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 9.2 KiB/s wr, 5 op/s
Dec  3 19:05:34 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:05:35 compute-0 nova_compute[348325]: 2025-12-03 19:05:35.328 348329 DEBUG nova.network.neutron [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Updating instance_info_cache with network_info: [{"id": "b761f609-2787-4aa2-9b1c-cc5b41d2373d", "address": "fa:16:3e:2c:da:52", "network": {"id": "04e258c0-609e-4010-a306-af20506c3a9d", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.71", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d29cef7b24ee4d30b2b3f5027ec6aafb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb761f609-27", "ovs_interfaceid": "b761f609-2787-4aa2-9b1c-cc5b41d2373d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 19:05:35 compute-0 nova_compute[348325]: 2025-12-03 19:05:35.349 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Releasing lock "refresh_cache-a364994c-8442-4a4c-bd6b-f3a2d31e4483" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 19:05:35 compute-0 nova_compute[348325]: 2025-12-03 19:05:35.350 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  3 19:05:35 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2001: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 9.1 KiB/s wr, 3 op/s
Dec  3 19:05:36 compute-0 nova_compute[348325]: 2025-12-03 19:05:36.881 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:05:37 compute-0 nova_compute[348325]: 2025-12-03 19:05:37.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:05:37 compute-0 nova_compute[348325]: 2025-12-03 19:05:37.486 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 19:05:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 19:05:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/969102140' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 19:05:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 19:05:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/969102140' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 19:05:37 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2002: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 9.4 KiB/s wr, 2 op/s
Dec  3 19:05:38 compute-0 nova_compute[348325]: 2025-12-03 19:05:38.621 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:05:39 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:05:39 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2003: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 9.4 KiB/s wr, 0 op/s
Dec  3 19:05:41 compute-0 nova_compute[348325]: 2025-12-03 19:05:41.885 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:05:41 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2004: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 9.4 KiB/s wr, 0 op/s
Dec  3 19:05:42 compute-0 podman[456126]: 2025-12-03 19:05:42.949338613 +0000 UTC m=+0.100220427 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  3 19:05:43 compute-0 nova_compute[348325]: 2025-12-03 19:05:43.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:05:43 compute-0 nova_compute[348325]: 2025-12-03 19:05:43.624 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:05:43 compute-0 nova_compute[348325]: 2025-12-03 19:05:43.672 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 19:05:43 compute-0 nova_compute[348325]: 2025-12-03 19:05:43.672 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 19:05:43 compute-0 nova_compute[348325]: 2025-12-03 19:05:43.672 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 19:05:43 compute-0 nova_compute[348325]: 2025-12-03 19:05:43.672 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 19:05:43 compute-0 nova_compute[348325]: 2025-12-03 19:05:43.673 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 19:05:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:05:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:05:43 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2005: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 8.4 KiB/s wr, 0 op/s
Dec  3 19:05:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:05:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:05:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:05:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:05:44 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 19:05:44 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4254958490' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 19:05:44 compute-0 nova_compute[348325]: 2025-12-03 19:05:44.187 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 19:05:44 compute-0 nova_compute[348325]: 2025-12-03 19:05:44.295 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 19:05:44 compute-0 nova_compute[348325]: 2025-12-03 19:05:44.296 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 19:05:44 compute-0 nova_compute[348325]: 2025-12-03 19:05:44.308 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 19:05:44 compute-0 nova_compute[348325]: 2025-12-03 19:05:44.308 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 19:05:44 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:05:44 compute-0 nova_compute[348325]: 2025-12-03 19:05:44.702 348329 WARNING nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 19:05:44 compute-0 nova_compute[348325]: 2025-12-03 19:05:44.706 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3571MB free_disk=59.89727020263672GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 19:05:44 compute-0 nova_compute[348325]: 2025-12-03 19:05:44.707 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 19:05:44 compute-0 nova_compute[348325]: 2025-12-03 19:05:44.708 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 19:05:44 compute-0 nova_compute[348325]: 2025-12-03 19:05:44.830 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance a4fc45c7-44e4-4b50-a3e0-98de13268f88 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 19:05:44 compute-0 nova_compute[348325]: 2025-12-03 19:05:44.831 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance a364994c-8442-4a4c-bd6b-f3a2d31e4483 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 19:05:44 compute-0 nova_compute[348325]: 2025-12-03 19:05:44.831 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 19:05:44 compute-0 nova_compute[348325]: 2025-12-03 19:05:44.832 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 19:05:44 compute-0 nova_compute[348325]: 2025-12-03 19:05:44.889 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 19:05:45 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 19:05:45 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4130148613' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 19:05:45 compute-0 nova_compute[348325]: 2025-12-03 19:05:45.349 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 19:05:45 compute-0 nova_compute[348325]: 2025-12-03 19:05:45.358 348329 DEBUG nova.compute.provider_tree [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 19:05:45 compute-0 nova_compute[348325]: 2025-12-03 19:05:45.385 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 19:05:45 compute-0 nova_compute[348325]: 2025-12-03 19:05:45.386 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 19:05:45 compute-0 nova_compute[348325]: 2025-12-03 19:05:45.387 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.679s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 19:05:45 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2006: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 341 B/s wr, 0 op/s
Dec  3 19:05:46 compute-0 nova_compute[348325]: 2025-12-03 19:05:46.889 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:05:47 compute-0 podman[456194]: 2025-12-03 19:05:47.003096607 +0000 UTC m=+0.148151427 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  3 19:05:47 compute-0 podman[456193]: 2025-12-03 19:05:47.009888663 +0000 UTC m=+0.160880688 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec  3 19:05:47 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2007: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 7.7 KiB/s wr, 0 op/s
Dec  3 19:05:48 compute-0 nova_compute[348325]: 2025-12-03 19:05:48.628 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:05:49 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:05:49 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2008: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 7.3 KiB/s wr, 0 op/s
Dec  3 19:05:51 compute-0 nova_compute[348325]: 2025-12-03 19:05:51.893 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:05:51 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2009: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 7.3 KiB/s wr, 0 op/s
Dec  3 19:05:53 compute-0 nova_compute[348325]: 2025-12-03 19:05:53.630 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:05:53 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2010: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 7.3 KiB/s wr, 0 op/s
Dec  3 19:05:54 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:05:55 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2011: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 7.3 KiB/s wr, 0 op/s
Dec  3 19:05:56 compute-0 nova_compute[348325]: 2025-12-03 19:05:56.898 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:05:57 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2012: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 7.3 KiB/s wr, 0 op/s
Dec  3 19:05:58 compute-0 nova_compute[348325]: 2025-12-03 19:05:58.633 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:05:58 compute-0 podman[456241]: 2025-12-03 19:05:58.97204597 +0000 UTC m=+0.106016359 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  3 19:05:59 compute-0 podman[456242]: 2025-12-03 19:05:59.013338739 +0000 UTC m=+0.138266436 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., config_id=edpm, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, release=1755695350, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, vcs-type=git, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  3 19:05:59 compute-0 podman[456240]: 2025-12-03 19:05:59.019592631 +0000 UTC m=+0.167750006 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd)
Dec  3 19:05:59 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:05:59 compute-0 podman[158200]: time="2025-12-03T19:05:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 19:05:59 compute-0 podman[158200]: @ - - [03/Dec/2025:19:05:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 19:05:59 compute-0 podman[158200]: @ - - [03/Dec/2025:19:05:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8647 "" "Go-http-client/1.1"
Dec  3 19:05:59 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2013: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:06:01 compute-0 openstack_network_exporter[365222]: ERROR   19:06:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 19:06:01 compute-0 openstack_network_exporter[365222]: ERROR   19:06:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:06:01 compute-0 openstack_network_exporter[365222]: ERROR   19:06:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:06:01 compute-0 openstack_network_exporter[365222]: ERROR   19:06:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 19:06:01 compute-0 openstack_network_exporter[365222]: ERROR   19:06:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 19:06:01 compute-0 nova_compute[348325]: 2025-12-03 19:06:01.900 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:06:01 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2014: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:06:02 compute-0 podman[456300]: 2025-12-03 19:06:02.932988108 +0000 UTC m=+0.091209167 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, io.buildah.version=1.41.3)
Dec  3 19:06:02 compute-0 podman[456301]: 2025-12-03 19:06:02.934383312 +0000 UTC m=+0.082812402 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  3 19:06:02 compute-0 podman[456299]: 2025-12-03 19:06:02.955394095 +0000 UTC m=+0.108768826 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., architecture=x86_64, release-0.7.12=, io.openshift.expose-services=, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, version=9.4, com.redhat.component=ubi9-container, release=1214.1726694543, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, maintainer=Red Hat, Inc.)
Dec  3 19:06:03 compute-0 nova_compute[348325]: 2025-12-03 19:06:03.637 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:06:03 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2015: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Dec  3 19:06:04 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:06:06 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2016: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s wr, 1 op/s
Dec  3 19:06:06 compute-0 nova_compute[348325]: 2025-12-03 19:06:06.903 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:06:08 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2017: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 13 KiB/s wr, 4 op/s
Dec  3 19:06:08 compute-0 podman[456624]: 2025-12-03 19:06:08.333921932 +0000 UTC m=+0.099041357 container create afcff0b73f81f613b588e4405d999faaef0eabec9782c8897861a063361338f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 19:06:08 compute-0 podman[456624]: 2025-12-03 19:06:08.287867059 +0000 UTC m=+0.052986514 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:06:08 compute-0 systemd[1]: Started libpod-conmon-afcff0b73f81f613b588e4405d999faaef0eabec9782c8897861a063361338f1.scope.
Dec  3 19:06:08 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:06:08 compute-0 podman[456624]: 2025-12-03 19:06:08.477271012 +0000 UTC m=+0.242390437 container init afcff0b73f81f613b588e4405d999faaef0eabec9782c8897861a063361338f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec  3 19:06:08 compute-0 podman[456624]: 2025-12-03 19:06:08.490799333 +0000 UTC m=+0.255918738 container start afcff0b73f81f613b588e4405d999faaef0eabec9782c8897861a063361338f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  3 19:06:08 compute-0 podman[456624]: 2025-12-03 19:06:08.495689441 +0000 UTC m=+0.260808846 container attach afcff0b73f81f613b588e4405d999faaef0eabec9782c8897861a063361338f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mclaren, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 19:06:08 compute-0 pensive_mclaren[456640]: 167 167
Dec  3 19:06:08 compute-0 systemd[1]: libpod-afcff0b73f81f613b588e4405d999faaef0eabec9782c8897861a063361338f1.scope: Deactivated successfully.
Dec  3 19:06:08 compute-0 conmon[456640]: conmon afcff0b73f81f613b588 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-afcff0b73f81f613b588e4405d999faaef0eabec9782c8897861a063361338f1.scope/container/memory.events
Dec  3 19:06:08 compute-0 podman[456624]: 2025-12-03 19:06:08.506214739 +0000 UTC m=+0.271334184 container died afcff0b73f81f613b588e4405d999faaef0eabec9782c8897861a063361338f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mclaren, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  3 19:06:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-06623684ac08de75c535fe9835711673cfc3deaf7d75fbfd5b849e1fdda381a3-merged.mount: Deactivated successfully.
Dec  3 19:06:08 compute-0 podman[456624]: 2025-12-03 19:06:08.581790353 +0000 UTC m=+0.346909768 container remove afcff0b73f81f613b588e4405d999faaef0eabec9782c8897861a063361338f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 19:06:08 compute-0 systemd[1]: libpod-conmon-afcff0b73f81f613b588e4405d999faaef0eabec9782c8897861a063361338f1.scope: Deactivated successfully.
Dec  3 19:06:08 compute-0 nova_compute[348325]: 2025-12-03 19:06:08.639 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:06:08 compute-0 podman[456665]: 2025-12-03 19:06:08.851315562 +0000 UTC m=+0.073199788 container create 05d77dbd56776ea8a0e20909458a0d8548bf361251a66642567eb5abd7b952a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_moore, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  3 19:06:08 compute-0 systemd[1]: Started libpod-conmon-05d77dbd56776ea8a0e20909458a0d8548bf361251a66642567eb5abd7b952a8.scope.
Dec  3 19:06:08 compute-0 podman[456665]: 2025-12-03 19:06:08.828432883 +0000 UTC m=+0.050317159 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:06:08 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:06:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49dd71573658b3350ff0a6c863adff2dac57b8b2be0c14e3be51d3aeb38f7155/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 19:06:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49dd71573658b3350ff0a6c863adff2dac57b8b2be0c14e3be51d3aeb38f7155/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 19:06:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49dd71573658b3350ff0a6c863adff2dac57b8b2be0c14e3be51d3aeb38f7155/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 19:06:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49dd71573658b3350ff0a6c863adff2dac57b8b2be0c14e3be51d3aeb38f7155/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 19:06:08 compute-0 podman[456665]: 2025-12-03 19:06:08.998947154 +0000 UTC m=+0.220831450 container init 05d77dbd56776ea8a0e20909458a0d8548bf361251a66642567eb5abd7b952a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_moore, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 19:06:09 compute-0 podman[456665]: 2025-12-03 19:06:09.010038885 +0000 UTC m=+0.231923111 container start 05d77dbd56776ea8a0e20909458a0d8548bf361251a66642567eb5abd7b952a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_moore, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec  3 19:06:09 compute-0 podman[456665]: 2025-12-03 19:06:09.015226372 +0000 UTC m=+0.237110608 container attach 05d77dbd56776ea8a0e20909458a0d8548bf361251a66642567eb5abd7b952a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_moore, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 19:06:09 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:06:10 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2018: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 6.7 KiB/s rd, 13 KiB/s wr, 13 op/s
Dec  3 19:06:11 compute-0 nova_compute[348325]: 2025-12-03 19:06:11.909 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:06:12 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2019: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 13 KiB/s wr, 32 op/s
Dec  3 19:06:12 compute-0 nifty_moore[456679]: [
Dec  3 19:06:12 compute-0 nifty_moore[456679]:    {
Dec  3 19:06:12 compute-0 nifty_moore[456679]:        "available": false,
Dec  3 19:06:12 compute-0 nifty_moore[456679]:        "ceph_device": false,
Dec  3 19:06:12 compute-0 nifty_moore[456679]:        "device_id": "QEMU_DVD-ROM_QM00001",
Dec  3 19:06:12 compute-0 nifty_moore[456679]:        "lsm_data": {},
Dec  3 19:06:12 compute-0 nifty_moore[456679]:        "lvs": [],
Dec  3 19:06:12 compute-0 nifty_moore[456679]:        "path": "/dev/sr0",
Dec  3 19:06:12 compute-0 nifty_moore[456679]:        "rejected_reasons": [
Dec  3 19:06:12 compute-0 nifty_moore[456679]:            "Has a FileSystem",
Dec  3 19:06:12 compute-0 nifty_moore[456679]:            "Insufficient space (<5GB)"
Dec  3 19:06:12 compute-0 nifty_moore[456679]:        ],
Dec  3 19:06:12 compute-0 nifty_moore[456679]:        "sys_api": {
Dec  3 19:06:12 compute-0 nifty_moore[456679]:            "actuators": null,
Dec  3 19:06:12 compute-0 nifty_moore[456679]:            "device_nodes": "sr0",
Dec  3 19:06:12 compute-0 nifty_moore[456679]:            "devname": "sr0",
Dec  3 19:06:12 compute-0 nifty_moore[456679]:            "human_readable_size": "482.00 KB",
Dec  3 19:06:12 compute-0 nifty_moore[456679]:            "id_bus": "ata",
Dec  3 19:06:12 compute-0 nifty_moore[456679]:            "model": "QEMU DVD-ROM",
Dec  3 19:06:12 compute-0 nifty_moore[456679]:            "nr_requests": "2",
Dec  3 19:06:12 compute-0 nifty_moore[456679]:            "parent": "/dev/sr0",
Dec  3 19:06:12 compute-0 nifty_moore[456679]:            "partitions": {},
Dec  3 19:06:12 compute-0 nifty_moore[456679]:            "path": "/dev/sr0",
Dec  3 19:06:12 compute-0 nifty_moore[456679]:            "removable": "1",
Dec  3 19:06:12 compute-0 nifty_moore[456679]:            "rev": "2.5+",
Dec  3 19:06:12 compute-0 nifty_moore[456679]:            "ro": "0",
Dec  3 19:06:12 compute-0 nifty_moore[456679]:            "rotational": "1",
Dec  3 19:06:12 compute-0 nifty_moore[456679]:            "sas_address": "",
Dec  3 19:06:12 compute-0 nifty_moore[456679]:            "sas_device_handle": "",
Dec  3 19:06:12 compute-0 nifty_moore[456679]:            "scheduler_mode": "mq-deadline",
Dec  3 19:06:12 compute-0 nifty_moore[456679]:            "sectors": 0,
Dec  3 19:06:12 compute-0 nifty_moore[456679]:            "sectorsize": "2048",
Dec  3 19:06:12 compute-0 nifty_moore[456679]:            "size": 493568.0,
Dec  3 19:06:12 compute-0 nifty_moore[456679]:            "support_discard": "2048",
Dec  3 19:06:12 compute-0 nifty_moore[456679]:            "type": "disk",
Dec  3 19:06:12 compute-0 nifty_moore[456679]:            "vendor": "QEMU"
Dec  3 19:06:12 compute-0 nifty_moore[456679]:        }
Dec  3 19:06:12 compute-0 nifty_moore[456679]:    }
Dec  3 19:06:12 compute-0 nifty_moore[456679]: ]
Dec  3 19:06:12 compute-0 systemd[1]: libpod-05d77dbd56776ea8a0e20909458a0d8548bf361251a66642567eb5abd7b952a8.scope: Deactivated successfully.
Dec  3 19:06:12 compute-0 systemd[1]: libpod-05d77dbd56776ea8a0e20909458a0d8548bf361251a66642567eb5abd7b952a8.scope: Consumed 2.934s CPU time.
Dec  3 19:06:12 compute-0 podman[459167]: 2025-12-03 19:06:12.251236446 +0000 UTC m=+0.048046634 container died 05d77dbd56776ea8a0e20909458a0d8548bf361251a66642567eb5abd7b952a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_moore, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 19:06:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-49dd71573658b3350ff0a6c863adff2dac57b8b2be0c14e3be51d3aeb38f7155-merged.mount: Deactivated successfully.
Dec  3 19:06:12 compute-0 podman[459167]: 2025-12-03 19:06:12.337263826 +0000 UTC m=+0.134073984 container remove 05d77dbd56776ea8a0e20909458a0d8548bf361251a66642567eb5abd7b952a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_moore, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 19:06:12 compute-0 systemd[1]: libpod-conmon-05d77dbd56776ea8a0e20909458a0d8548bf361251a66642567eb5abd7b952a8.scope: Deactivated successfully.
Dec  3 19:06:12 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 19:06:12 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:06:12 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 19:06:12 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:06:12 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 19:06:12 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 19:06:12 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 19:06:12 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 19:06:12 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 19:06:12 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:06:12 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 57f09c21-e037-4fd0-895e-e1bef08e596f does not exist
Dec  3 19:06:12 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 5b6583e2-07b7-4b16-a657-a8e3b471de48 does not exist
Dec  3 19:06:12 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev fbf8b1e9-d155-4f73-bd24-08414326a2e1 does not exist
Dec  3 19:06:12 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 19:06:12 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 19:06:12 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 19:06:12 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 19:06:12 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 19:06:12 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 19:06:13 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:06:13 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:06:13 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 19:06:13 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:06:13 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 19:06:13 compute-0 podman[459321]: 2025-12-03 19:06:13.482797256 +0000 UTC m=+0.075928324 container create 6f9e00fffed3d6be852eb81e72105930aedfb6284c3a001860d30c4a943291c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  3 19:06:13 compute-0 podman[459321]: 2025-12-03 19:06:13.44405274 +0000 UTC m=+0.037183798 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:06:13 compute-0 systemd[1]: Started libpod-conmon-6f9e00fffed3d6be852eb81e72105930aedfb6284c3a001860d30c4a943291c3.scope.
Dec  3 19:06:13 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:06:13 compute-0 podman[459321]: 2025-12-03 19:06:13.618832506 +0000 UTC m=+0.211963584 container init 6f9e00fffed3d6be852eb81e72105930aedfb6284c3a001860d30c4a943291c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec  3 19:06:13 compute-0 podman[459321]: 2025-12-03 19:06:13.628960793 +0000 UTC m=+0.222091861 container start 6f9e00fffed3d6be852eb81e72105930aedfb6284c3a001860d30c4a943291c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_banzai, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 19:06:13 compute-0 objective_banzai[459338]: 167 167
Dec  3 19:06:13 compute-0 systemd[1]: libpod-6f9e00fffed3d6be852eb81e72105930aedfb6284c3a001860d30c4a943291c3.scope: Deactivated successfully.
Dec  3 19:06:13 compute-0 podman[459321]: 2025-12-03 19:06:13.635264017 +0000 UTC m=+0.228395095 container attach 6f9e00fffed3d6be852eb81e72105930aedfb6284c3a001860d30c4a943291c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_banzai, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Dec  3 19:06:13 compute-0 podman[459321]: 2025-12-03 19:06:13.643052327 +0000 UTC m=+0.236183385 container died 6f9e00fffed3d6be852eb81e72105930aedfb6284c3a001860d30c4a943291c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_banzai, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:06:13 compute-0 nova_compute[348325]: 2025-12-03 19:06:13.642 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:06:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-5cf6527da99057de749a3272966c251a93e3a33cb31f71169f8d8db5a57952d4-merged.mount: Deactivated successfully.
Dec  3 19:06:13 compute-0 podman[459337]: 2025-12-03 19:06:13.694682237 +0000 UTC m=+0.139829724 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 19:06:13 compute-0 podman[459321]: 2025-12-03 19:06:13.711060117 +0000 UTC m=+0.304191175 container remove 6f9e00fffed3d6be852eb81e72105930aedfb6284c3a001860d30c4a943291c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_banzai, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  3 19:06:13 compute-0 systemd[1]: libpod-conmon-6f9e00fffed3d6be852eb81e72105930aedfb6284c3a001860d30c4a943291c3.scope: Deactivated successfully.
Dec  3 19:06:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:06:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:06:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:06:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:06:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:06:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:06:14 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2020: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 13 KiB/s wr, 52 op/s
Dec  3 19:06:14 compute-0 podman[459383]: 2025-12-03 19:06:14.013904459 +0000 UTC m=+0.099402258 container create 252241a89e33e2db795bfc485458961dad4521c96137f19157e97a8e598f7db5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_torvalds, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 19:06:14 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_19:06:14
Dec  3 19:06:14 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 19:06:14 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 19:06:14 compute-0 ceph-mgr[193091]: [balancer INFO root] pools ['images', 'volumes', '.rgw.root', 'default.rgw.meta', 'backups', 'default.rgw.log', 'vms', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.control', '.mgr']
Dec  3 19:06:14 compute-0 ceph-mgr[193091]: [balancer INFO root] prepared 0/10 changes
Dec  3 19:06:14 compute-0 podman[459383]: 2025-12-03 19:06:13.986009018 +0000 UTC m=+0.071506867 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:06:14 compute-0 systemd[1]: Started libpod-conmon-252241a89e33e2db795bfc485458961dad4521c96137f19157e97a8e598f7db5.scope.
Dec  3 19:06:14 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:06:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d6cc8d51e708e22a480006c02e6e799f1613fe140f6f5f5a41ed3efa0d7d7dd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 19:06:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d6cc8d51e708e22a480006c02e6e799f1613fe140f6f5f5a41ed3efa0d7d7dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 19:06:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d6cc8d51e708e22a480006c02e6e799f1613fe140f6f5f5a41ed3efa0d7d7dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 19:06:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d6cc8d51e708e22a480006c02e6e799f1613fe140f6f5f5a41ed3efa0d7d7dd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 19:06:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d6cc8d51e708e22a480006c02e6e799f1613fe140f6f5f5a41ed3efa0d7d7dd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 19:06:14 compute-0 podman[459383]: 2025-12-03 19:06:14.184261947 +0000 UTC m=+0.269759756 container init 252241a89e33e2db795bfc485458961dad4521c96137f19157e97a8e598f7db5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  3 19:06:14 compute-0 podman[459383]: 2025-12-03 19:06:14.207395241 +0000 UTC m=+0.292893080 container start 252241a89e33e2db795bfc485458961dad4521c96137f19157e97a8e598f7db5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_torvalds, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:06:14 compute-0 podman[459383]: 2025-12-03 19:06:14.218629535 +0000 UTC m=+0.304127384 container attach 252241a89e33e2db795bfc485458961dad4521c96137f19157e97a8e598f7db5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:06:14 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:06:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 19:06:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 19:06:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 19:06:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 19:06:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 19:06:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 19:06:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 19:06:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 19:06:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 19:06:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 19:06:15 compute-0 nervous_torvalds[459399]: --> passed data devices: 0 physical, 3 LVM
Dec  3 19:06:15 compute-0 nervous_torvalds[459399]: --> relative data size: 1.0
Dec  3 19:06:15 compute-0 nervous_torvalds[459399]: --> All data devices are unavailable
Dec  3 19:06:15 compute-0 systemd[1]: libpod-252241a89e33e2db795bfc485458961dad4521c96137f19157e97a8e598f7db5.scope: Deactivated successfully.
Dec  3 19:06:15 compute-0 systemd[1]: libpod-252241a89e33e2db795bfc485458961dad4521c96137f19157e97a8e598f7db5.scope: Consumed 1.315s CPU time.
Dec  3 19:06:15 compute-0 podman[459428]: 2025-12-03 19:06:15.677151465 +0000 UTC m=+0.046611739 container died 252241a89e33e2db795bfc485458961dad4521c96137f19157e97a8e598f7db5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 19:06:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-7d6cc8d51e708e22a480006c02e6e799f1613fe140f6f5f5a41ed3efa0d7d7dd-merged.mount: Deactivated successfully.
Dec  3 19:06:15 compute-0 podman[459428]: 2025-12-03 19:06:15.803815577 +0000 UTC m=+0.173275751 container remove 252241a89e33e2db795bfc485458961dad4521c96137f19157e97a8e598f7db5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_torvalds, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 19:06:15 compute-0 systemd[1]: libpod-conmon-252241a89e33e2db795bfc485458961dad4521c96137f19157e97a8e598f7db5.scope: Deactivated successfully.
Dec  3 19:06:16 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2021: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 9.0 KiB/s wr, 70 op/s
Dec  3 19:06:16 compute-0 podman[459583]: 2025-12-03 19:06:16.865319026 +0000 UTC m=+0.076939409 container create 6009e30d0531bc0005187985ea8ccaa16f4c2a29f8b9d02e9a12b9c03f4aea1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  3 19:06:16 compute-0 nova_compute[348325]: 2025-12-03 19:06:16.914 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:06:16 compute-0 systemd[1]: Started libpod-conmon-6009e30d0531bc0005187985ea8ccaa16f4c2a29f8b9d02e9a12b9c03f4aea1c.scope.
Dec  3 19:06:16 compute-0 podman[459583]: 2025-12-03 19:06:16.841950535 +0000 UTC m=+0.053570898 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:06:16 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:06:16 compute-0 podman[459583]: 2025-12-03 19:06:16.993944425 +0000 UTC m=+0.205564818 container init 6009e30d0531bc0005187985ea8ccaa16f4c2a29f8b9d02e9a12b9c03f4aea1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_lamarr, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 19:06:17 compute-0 podman[459583]: 2025-12-03 19:06:17.005009506 +0000 UTC m=+0.216629859 container start 6009e30d0531bc0005187985ea8ccaa16f4c2a29f8b9d02e9a12b9c03f4aea1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_lamarr, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec  3 19:06:17 compute-0 podman[459583]: 2025-12-03 19:06:17.010325624 +0000 UTC m=+0.221946017 container attach 6009e30d0531bc0005187985ea8ccaa16f4c2a29f8b9d02e9a12b9c03f4aea1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_lamarr, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 19:06:17 compute-0 jovial_lamarr[459596]: 167 167
Dec  3 19:06:17 compute-0 systemd[1]: libpod-6009e30d0531bc0005187985ea8ccaa16f4c2a29f8b9d02e9a12b9c03f4aea1c.scope: Deactivated successfully.
Dec  3 19:06:17 compute-0 podman[459583]: 2025-12-03 19:06:17.017123701 +0000 UTC m=+0.228744074 container died 6009e30d0531bc0005187985ea8ccaa16f4c2a29f8b9d02e9a12b9c03f4aea1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_lamarr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 19:06:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-4855a3d79f17c58649fb5295dabf51e7fe31c32d7a2e265b7b274ef453b8779f-merged.mount: Deactivated successfully.
Dec  3 19:06:17 compute-0 podman[459583]: 2025-12-03 19:06:17.090942193 +0000 UTC m=+0.302562536 container remove 6009e30d0531bc0005187985ea8ccaa16f4c2a29f8b9d02e9a12b9c03f4aea1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec  3 19:06:17 compute-0 systemd[1]: libpod-conmon-6009e30d0531bc0005187985ea8ccaa16f4c2a29f8b9d02e9a12b9c03f4aea1c.scope: Deactivated successfully.
Dec  3 19:06:17 compute-0 podman[459617]: 2025-12-03 19:06:17.206607175 +0000 UTC m=+0.105032625 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute)
Dec  3 19:06:17 compute-0 podman[459616]: 2025-12-03 19:06:17.249671596 +0000 UTC m=+0.151136640 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  3 19:06:17 compute-0 podman[459665]: 2025-12-03 19:06:17.387805598 +0000 UTC m=+0.103785124 container create 82c7648b7963425ca98327af4f5f71cf6ad37b7e75890605caae92462bbd4cec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  3 19:06:17 compute-0 podman[459665]: 2025-12-03 19:06:17.356302319 +0000 UTC m=+0.072281885 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:06:17 compute-0 systemd[1]: Started libpod-conmon-82c7648b7963425ca98327af4f5f71cf6ad37b7e75890605caae92462bbd4cec.scope.
Dec  3 19:06:17 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:06:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bb0ff954547fb0019d0081511e5d7436868aa3236a26a2d599604c5cd003154/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 19:06:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bb0ff954547fb0019d0081511e5d7436868aa3236a26a2d599604c5cd003154/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 19:06:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bb0ff954547fb0019d0081511e5d7436868aa3236a26a2d599604c5cd003154/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 19:06:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bb0ff954547fb0019d0081511e5d7436868aa3236a26a2d599604c5cd003154/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 19:06:17 compute-0 podman[459665]: 2025-12-03 19:06:17.546057331 +0000 UTC m=+0.262036887 container init 82c7648b7963425ca98327af4f5f71cf6ad37b7e75890605caae92462bbd4cec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bouman, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 19:06:17 compute-0 podman[459665]: 2025-12-03 19:06:17.562682647 +0000 UTC m=+0.278662183 container start 82c7648b7963425ca98327af4f5f71cf6ad37b7e75890605caae92462bbd4cec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 19:06:17 compute-0 podman[459665]: 2025-12-03 19:06:17.56776773 +0000 UTC m=+0.283747446 container attach 82c7648b7963425ca98327af4f5f71cf6ad37b7e75890605caae92462bbd4cec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Dec  3 19:06:18 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2022: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 2.0 KiB/s wr, 73 op/s
Dec  3 19:06:18 compute-0 cool_bouman[459681]: {
Dec  3 19:06:18 compute-0 cool_bouman[459681]:    "0": [
Dec  3 19:06:18 compute-0 cool_bouman[459681]:        {
Dec  3 19:06:18 compute-0 cool_bouman[459681]:            "devices": [
Dec  3 19:06:18 compute-0 cool_bouman[459681]:                "/dev/loop3"
Dec  3 19:06:18 compute-0 cool_bouman[459681]:            ],
Dec  3 19:06:18 compute-0 cool_bouman[459681]:            "lv_name": "ceph_lv0",
Dec  3 19:06:18 compute-0 cool_bouman[459681]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 19:06:18 compute-0 cool_bouman[459681]:            "lv_size": "21470642176",
Dec  3 19:06:18 compute-0 cool_bouman[459681]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=973fbbc8-5aff-4a53-bee8-42e5a6788dd6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 19:06:18 compute-0 cool_bouman[459681]:            "lv_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 19:06:18 compute-0 cool_bouman[459681]:            "name": "ceph_lv0",
Dec  3 19:06:18 compute-0 cool_bouman[459681]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 19:06:18 compute-0 cool_bouman[459681]:            "tags": {
Dec  3 19:06:18 compute-0 cool_bouman[459681]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 19:06:18 compute-0 cool_bouman[459681]:                "ceph.block_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 19:06:18 compute-0 cool_bouman[459681]:                "ceph.cephx_lockbox_secret": "",
Dec  3 19:06:18 compute-0 cool_bouman[459681]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:06:18 compute-0 cool_bouman[459681]:                "ceph.cluster_name": "ceph",
Dec  3 19:06:18 compute-0 cool_bouman[459681]:                "ceph.crush_device_class": "",
Dec  3 19:06:18 compute-0 cool_bouman[459681]:                "ceph.encrypted": "0",
Dec  3 19:06:18 compute-0 cool_bouman[459681]:                "ceph.osd_fsid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 19:06:18 compute-0 cool_bouman[459681]:                "ceph.osd_id": "0",
Dec  3 19:06:18 compute-0 cool_bouman[459681]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 19:06:18 compute-0 cool_bouman[459681]:                "ceph.type": "block",
Dec  3 19:06:18 compute-0 cool_bouman[459681]:                "ceph.vdo": "0"
Dec  3 19:06:18 compute-0 cool_bouman[459681]:            },
Dec  3 19:06:18 compute-0 cool_bouman[459681]:            "type": "block",
Dec  3 19:06:18 compute-0 cool_bouman[459681]:            "vg_name": "ceph_vg0"
Dec  3 19:06:18 compute-0 cool_bouman[459681]:        }
Dec  3 19:06:18 compute-0 cool_bouman[459681]:    ],
Dec  3 19:06:18 compute-0 cool_bouman[459681]:    "1": [
Dec  3 19:06:18 compute-0 cool_bouman[459681]:        {
Dec  3 19:06:18 compute-0 cool_bouman[459681]:            "devices": [
Dec  3 19:06:18 compute-0 cool_bouman[459681]:                "/dev/loop4"
Dec  3 19:06:18 compute-0 cool_bouman[459681]:            ],
Dec  3 19:06:18 compute-0 cool_bouman[459681]:            "lv_name": "ceph_lv1",
Dec  3 19:06:18 compute-0 cool_bouman[459681]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 19:06:18 compute-0 cool_bouman[459681]:            "lv_size": "21470642176",
Dec  3 19:06:18 compute-0 cool_bouman[459681]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1e2b0083-5293-47cb-a3d1-bc27cedc4ede,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 19:06:18 compute-0 cool_bouman[459681]:            "lv_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 19:06:18 compute-0 cool_bouman[459681]:            "name": "ceph_lv1",
Dec  3 19:06:18 compute-0 cool_bouman[459681]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 19:06:18 compute-0 cool_bouman[459681]:            "tags": {
Dec  3 19:06:18 compute-0 cool_bouman[459681]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 19:06:18 compute-0 cool_bouman[459681]:                "ceph.block_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 19:06:18 compute-0 cool_bouman[459681]:                "ceph.cephx_lockbox_secret": "",
Dec  3 19:06:18 compute-0 cool_bouman[459681]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:06:18 compute-0 cool_bouman[459681]:                "ceph.cluster_name": "ceph",
Dec  3 19:06:18 compute-0 cool_bouman[459681]:                "ceph.crush_device_class": "",
Dec  3 19:06:18 compute-0 cool_bouman[459681]:                "ceph.encrypted": "0",
Dec  3 19:06:18 compute-0 cool_bouman[459681]:                "ceph.osd_fsid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 19:06:18 compute-0 cool_bouman[459681]:                "ceph.osd_id": "1",
Dec  3 19:06:18 compute-0 cool_bouman[459681]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 19:06:18 compute-0 cool_bouman[459681]:                "ceph.type": "block",
Dec  3 19:06:18 compute-0 cool_bouman[459681]:                "ceph.vdo": "0"
Dec  3 19:06:18 compute-0 cool_bouman[459681]:            },
Dec  3 19:06:18 compute-0 cool_bouman[459681]:            "type": "block",
Dec  3 19:06:18 compute-0 cool_bouman[459681]:            "vg_name": "ceph_vg1"
Dec  3 19:06:18 compute-0 cool_bouman[459681]:        }
Dec  3 19:06:18 compute-0 cool_bouman[459681]:    ],
Dec  3 19:06:18 compute-0 cool_bouman[459681]:    "2": [
Dec  3 19:06:18 compute-0 cool_bouman[459681]:        {
Dec  3 19:06:18 compute-0 cool_bouman[459681]:            "devices": [
Dec  3 19:06:18 compute-0 cool_bouman[459681]:                "/dev/loop5"
Dec  3 19:06:18 compute-0 cool_bouman[459681]:            ],
Dec  3 19:06:18 compute-0 cool_bouman[459681]:            "lv_name": "ceph_lv2",
Dec  3 19:06:18 compute-0 cool_bouman[459681]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 19:06:18 compute-0 cool_bouman[459681]:            "lv_size": "21470642176",
Dec  3 19:06:18 compute-0 cool_bouman[459681]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2abec9de-afba-437e-9a17-384a1dd8cd50,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 19:06:18 compute-0 cool_bouman[459681]:            "lv_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 19:06:18 compute-0 cool_bouman[459681]:            "name": "ceph_lv2",
Dec  3 19:06:18 compute-0 cool_bouman[459681]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 19:06:18 compute-0 cool_bouman[459681]:            "tags": {
Dec  3 19:06:18 compute-0 cool_bouman[459681]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 19:06:18 compute-0 cool_bouman[459681]:                "ceph.block_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 19:06:18 compute-0 cool_bouman[459681]:                "ceph.cephx_lockbox_secret": "",
Dec  3 19:06:18 compute-0 cool_bouman[459681]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:06:18 compute-0 cool_bouman[459681]:                "ceph.cluster_name": "ceph",
Dec  3 19:06:18 compute-0 cool_bouman[459681]:                "ceph.crush_device_class": "",
Dec  3 19:06:18 compute-0 cool_bouman[459681]:                "ceph.encrypted": "0",
Dec  3 19:06:18 compute-0 cool_bouman[459681]:                "ceph.osd_fsid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 19:06:18 compute-0 cool_bouman[459681]:                "ceph.osd_id": "2",
Dec  3 19:06:18 compute-0 cool_bouman[459681]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 19:06:18 compute-0 cool_bouman[459681]:                "ceph.type": "block",
Dec  3 19:06:18 compute-0 cool_bouman[459681]:                "ceph.vdo": "0"
Dec  3 19:06:18 compute-0 cool_bouman[459681]:            },
Dec  3 19:06:18 compute-0 cool_bouman[459681]:            "type": "block",
Dec  3 19:06:18 compute-0 cool_bouman[459681]:            "vg_name": "ceph_vg2"
Dec  3 19:06:18 compute-0 cool_bouman[459681]:        }
Dec  3 19:06:18 compute-0 cool_bouman[459681]:    ]
Dec  3 19:06:18 compute-0 cool_bouman[459681]: }
Dec  3 19:06:18 compute-0 systemd[1]: libpod-82c7648b7963425ca98327af4f5f71cf6ad37b7e75890605caae92462bbd4cec.scope: Deactivated successfully.
Dec  3 19:06:18 compute-0 podman[459665]: 2025-12-03 19:06:18.365130363 +0000 UTC m=+1.081109939 container died 82c7648b7963425ca98327af4f5f71cf6ad37b7e75890605caae92462bbd4cec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec  3 19:06:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-7bb0ff954547fb0019d0081511e5d7436868aa3236a26a2d599604c5cd003154-merged.mount: Deactivated successfully.
Dec  3 19:06:18 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #96. Immutable memtables: 0.
Dec  3 19:06:18 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:06:18.471117) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 19:06:18 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 55] Flushing memtable with next log file: 96
Dec  3 19:06:18 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764788778471162, "job": 55, "event": "flush_started", "num_memtables": 1, "num_entries": 2057, "num_deletes": 251, "total_data_size": 3453831, "memory_usage": 3500832, "flush_reason": "Manual Compaction"}
Dec  3 19:06:18 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 55] Level-0 flush table #97: started
Dec  3 19:06:18 compute-0 podman[459665]: 2025-12-03 19:06:18.475191169 +0000 UTC m=+1.191170705 container remove 82c7648b7963425ca98327af4f5f71cf6ad37b7e75890605caae92462bbd4cec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 19:06:18 compute-0 systemd[1]: libpod-conmon-82c7648b7963425ca98327af4f5f71cf6ad37b7e75890605caae92462bbd4cec.scope: Deactivated successfully.
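
The cool_bouman events above are the short-lived-container pattern cephadm uses for host probes: podman creates a one-shot ceph container, the probe writes its JSON report to stdout (the fragment ending above, whose vg_name field matches the ceph_vg* volume groups used by the OSDs on this host), and the container is torn down within the same second, which is why systemd logs the libpod scope, the overlay mount, and the conmon scope all deactivating together. A minimal sketch of the same one-shot pattern, assuming the reef image digest from the events; the ceph-volume subcommand is a stand-in, since the exact probe behind cool_bouman is not visible in this excerpt:

    import json
    import subprocess

    # Image digest copied from the podman events above.
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # --rm tears the container down on exit, matching the
    # create/init/start/attach/died/remove sequence podman logs here.
    # The probe command itself is an assumption for illustration.
    report = subprocess.run(
        ["podman", "run", "--rm", IMAGE,
         "ceph-volume", "lvm", "list", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout

    print(json.loads(report))
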
Dec  3 19:06:18 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764788778490483, "cf_name": "default", "job": 55, "event": "table_file_creation", "file_number": 97, "file_size": 3376518, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 39664, "largest_seqno": 41720, "table_properties": {"data_size": 3367125, "index_size": 5951, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18834, "raw_average_key_size": 20, "raw_value_size": 3348529, "raw_average_value_size": 3577, "num_data_blocks": 265, "num_entries": 936, "num_filter_entries": 936, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764788552, "oldest_key_time": 1764788552, "file_creation_time": 1764788778, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a1ac3b74-8599-4a51-8b4c-6fd35a134427", "db_session_id": "TYOLZSJOOVNJYKF8Y1CE", "orig_file_number": 97, "seqno_to_time_mapping": "N/A"}}
Dec  3 19:06:18 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 55] Flush lasted 19423 microseconds, and 7364 cpu microseconds.
Dec  3 19:06:18 compute-0 ceph-mon[192802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 19:06:18 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:06:18.490539) [db/flush_job.cc:967] [default] [JOB 55] Level-0 flush table #97: 3376518 bytes OK
Dec  3 19:06:18 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:06:18.490557) [db/memtable_list.cc:519] [default] Level-0 commit table #97 started
Dec  3 19:06:18 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:06:18.495494) [db/memtable_list.cc:722] [default] Level-0 commit table #97: memtable #1 done
Dec  3 19:06:18 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:06:18.495512) EVENT_LOG_v1 {"time_micros": 1764788778495506, "job": 55, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 19:06:18 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:06:18.495529) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 19:06:18 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 55] Try to delete WAL files size 3445205, prev total WAL file size 3445205, number of live WAL files 2.
Dec  3 19:06:18 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000093.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 19:06:18 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:06:18.496904) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033373635' seq:72057594037927935, type:22 .. '7061786F730034303137' seq:0, type:0; will stop at (end)
Dec  3 19:06:18 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 56] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 19:06:18 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 55 Base level 0, inputs: [97(3297KB)], [95(6133KB)]
Dec  3 19:06:18 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764788778497096, "job": 56, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [97], "files_L6": [95], "score": -1, "input_data_size": 9656979, "oldest_snapshot_seqno": -1}
Dec  3 19:06:18 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 56] Generated table #98: 5795 keys, 7972420 bytes, temperature: kUnknown
Dec  3 19:06:18 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764788778568117, "cf_name": "default", "job": 56, "event": "table_file_creation", "file_number": 98, "file_size": 7972420, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7935489, "index_size": 21333, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14533, "raw_key_size": 150174, "raw_average_key_size": 25, "raw_value_size": 7832477, "raw_average_value_size": 1351, "num_data_blocks": 849, "num_entries": 5795, "num_filter_entries": 5795, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764784942, "oldest_key_time": 0, "file_creation_time": 1764788778, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a1ac3b74-8599-4a51-8b4c-6fd35a134427", "db_session_id": "TYOLZSJOOVNJYKF8Y1CE", "orig_file_number": 98, "seqno_to_time_mapping": "N/A"}}
Dec  3 19:06:18 compute-0 ceph-mon[192802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 19:06:18 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:06:18.568702) [db/compaction/compaction_job.cc:1663] [default] [JOB 56] Compacted 1@0 + 1@6 files to L6 => 7972420 bytes
Dec  3 19:06:18 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:06:18.571727) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 135.6 rd, 111.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 6.0 +0.0 blob) out(7.6 +0.0 blob), read-write-amplify(5.2) write-amplify(2.4) OK, records in: 6309, records dropped: 514 output_compression: NoCompression
Dec  3 19:06:18 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:06:18.571760) EVENT_LOG_v1 {"time_micros": 1764788778571745, "job": 56, "event": "compaction_finished", "compaction_time_micros": 71239, "compaction_time_cpu_micros": 32029, "output_level": 6, "num_output_files": 1, "total_output_size": 7972420, "num_input_records": 6309, "num_output_records": 5795, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 19:06:18 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000097.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 19:06:18 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764788778573093, "job": 56, "event": "table_file_deletion", "file_number": 97}
Dec  3 19:06:18 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000095.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 19:06:18 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764788778578211, "job": 56, "event": "table_file_deletion", "file_number": 95}
Dec  3 19:06:18 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:06:18.496562) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:06:18 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:06:18.578385) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:06:18 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:06:18.578392) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:06:18 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:06:18.578394) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:06:18 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:06:18.578395) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:06:18 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:06:18.578397) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
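
The rocksdb burst above is one flush-then-compact cycle on the monitor's store.db: JOB 55 flushes the memtable into L0 table #97 (3376518 bytes), JOB 56 manually compacts 1@0 + 1@6 into a new L6 table #98, and the superseded WAL and SST files are deleted. Two details can be checked straight from the logged numbers: the compaction range in JOB 56 is hex-encoded monitor keys (it decodes to the paxos prefix, i.e. trimmed paxos versions being compacted away), and the amplification figures follow from the in/out megabyte counts, taking write-amplify as bytes written over new data in and read-write-amplify as bytes read plus written over new data in:

    # Compaction range copied from the JOB 56 "Manual compaction" line.
    start = bytes.fromhex("7061786F730033373635")   # b'paxos\x003765'
    end   = bytes.fromhex("7061786F730034303137")   # b'paxos\x004017'
    print(start, end)

    # MB figures copied from the "compacted to" summary: in(3.2, 6.0) out(7.6).
    in_l0, in_l6, out_l6 = 3.2, 6.0, 7.6
    print(f"write-amplify      {out_l6 / in_l0:.1f}")                    # 2.4, as logged
    print(f"read-write-amplify {(in_l0 + in_l6 + out_l6) / in_l0:.1f}")  # 5.2, as logged
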
Dec  3 19:06:18 compute-0 nova_compute[348325]: 2025-12-03 19:06:18.646 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:06:19 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:06:19 compute-0 podman[459839]: 2025-12-03 19:06:19.582557527 +0000 UTC m=+0.055391443 container create def5d7a951125ac0b2558274191c5dcb2779ecd8940a8a6f70e1b88fbea99542 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_satoshi, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  3 19:06:19 compute-0 systemd[1]: Started libpod-conmon-def5d7a951125ac0b2558274191c5dcb2779ecd8940a8a6f70e1b88fbea99542.scope.
Dec  3 19:06:19 compute-0 podman[459839]: 2025-12-03 19:06:19.559072824 +0000 UTC m=+0.031906740 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:06:19 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:06:19 compute-0 podman[459839]: 2025-12-03 19:06:19.708314856 +0000 UTC m=+0.181148852 container init def5d7a951125ac0b2558274191c5dcb2779ecd8940a8a6f70e1b88fbea99542 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_satoshi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 19:06:19 compute-0 podman[459839]: 2025-12-03 19:06:19.719155021 +0000 UTC m=+0.191988927 container start def5d7a951125ac0b2558274191c5dcb2779ecd8940a8a6f70e1b88fbea99542 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_satoshi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:06:19 compute-0 podman[459839]: 2025-12-03 19:06:19.725227069 +0000 UTC m=+0.198061005 container attach def5d7a951125ac0b2558274191c5dcb2779ecd8940a8a6f70e1b88fbea99542 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_satoshi, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 19:06:19 compute-0 romantic_satoshi[459855]: 167 167
Dec  3 19:06:19 compute-0 systemd[1]: libpod-def5d7a951125ac0b2558274191c5dcb2779ecd8940a8a6f70e1b88fbea99542.scope: Deactivated successfully.
Dec  3 19:06:19 compute-0 podman[459839]: 2025-12-03 19:06:19.728208922 +0000 UTC m=+0.201042858 container died def5d7a951125ac0b2558274191c5dcb2779ecd8940a8a6f70e1b88fbea99542 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_satoshi, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 19:06:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-398b5a4b9da7b7a680b260bbb170522a88a4c0bdf1528ca9396e91ea410feeb0-merged.mount: Deactivated successfully.
Dec  3 19:06:19 compute-0 podman[459839]: 2025-12-03 19:06:19.792701817 +0000 UTC m=+0.265535723 container remove def5d7a951125ac0b2558274191c5dcb2779ecd8940a8a6f70e1b88fbea99542 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 19:06:19 compute-0 systemd[1]: libpod-conmon-def5d7a951125ac0b2558274191c5dcb2779ecd8940a8a6f70e1b88fbea99542.scope: Deactivated successfully.
Dec  3 19:06:20 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2023: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 341 B/s wr, 70 op/s
Dec  3 19:06:20 compute-0 podman[459878]: 2025-12-03 19:06:20.06487926 +0000 UTC m=+0.082894045 container create 9114f7a89a2de69aa0378b5a507393a4f02418edc35d72b5ccdd4b9f5b1efd3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_shtern, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 19:06:20 compute-0 podman[459878]: 2025-12-03 19:06:20.031236318 +0000 UTC m=+0.049251123 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:06:20 compute-0 systemd[1]: Started libpod-conmon-9114f7a89a2de69aa0378b5a507393a4f02418edc35d72b5ccdd4b9f5b1efd3c.scope.
Dec  3 19:06:20 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:06:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0bf03d45cebc88c6b48150d6501f46b2603c659ea54d0811bffef1061435c6b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 19:06:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0bf03d45cebc88c6b48150d6501f46b2603c659ea54d0811bffef1061435c6b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 19:06:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0bf03d45cebc88c6b48150d6501f46b2603c659ea54d0811bffef1061435c6b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 19:06:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0bf03d45cebc88c6b48150d6501f46b2603c659ea54d0811bffef1061435c6b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 19:06:20 compute-0 podman[459878]: 2025-12-03 19:06:20.252037668 +0000 UTC m=+0.270052513 container init 9114f7a89a2de69aa0378b5a507393a4f02418edc35d72b5ccdd4b9f5b1efd3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_shtern, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 19:06:20 compute-0 podman[459878]: 2025-12-03 19:06:20.267865634 +0000 UTC m=+0.285880419 container start 9114f7a89a2de69aa0378b5a507393a4f02418edc35d72b5ccdd4b9f5b1efd3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 19:06:20 compute-0 podman[459878]: 2025-12-03 19:06:20.282174983 +0000 UTC m=+0.300189798 container attach 9114f7a89a2de69aa0378b5a507393a4f02418edc35d72b5ccdd4b9f5b1efd3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_shtern, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 19:06:20 compute-0 nova_compute[348325]: 2025-12-03 19:06:20.378 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:06:21 compute-0 vigilant_shtern[459894]: {
Dec  3 19:06:21 compute-0 vigilant_shtern[459894]:    "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
Dec  3 19:06:21 compute-0 vigilant_shtern[459894]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:06:21 compute-0 vigilant_shtern[459894]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 19:06:21 compute-0 vigilant_shtern[459894]:        "osd_id": 1,
Dec  3 19:06:21 compute-0 vigilant_shtern[459894]:        "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 19:06:21 compute-0 vigilant_shtern[459894]:        "type": "bluestore"
Dec  3 19:06:21 compute-0 vigilant_shtern[459894]:    },
Dec  3 19:06:21 compute-0 vigilant_shtern[459894]:    "2abec9de-afba-437e-9a17-384a1dd8cd50": {
Dec  3 19:06:21 compute-0 vigilant_shtern[459894]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:06:21 compute-0 vigilant_shtern[459894]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 19:06:21 compute-0 vigilant_shtern[459894]:        "osd_id": 2,
Dec  3 19:06:21 compute-0 vigilant_shtern[459894]:        "osd_uuid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 19:06:21 compute-0 vigilant_shtern[459894]:        "type": "bluestore"
Dec  3 19:06:21 compute-0 vigilant_shtern[459894]:    },
Dec  3 19:06:21 compute-0 vigilant_shtern[459894]:    "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": {
Dec  3 19:06:21 compute-0 vigilant_shtern[459894]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:06:21 compute-0 vigilant_shtern[459894]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 19:06:21 compute-0 vigilant_shtern[459894]:        "osd_id": 0,
Dec  3 19:06:21 compute-0 vigilant_shtern[459894]:        "osd_uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 19:06:21 compute-0 vigilant_shtern[459894]:        "type": "bluestore"
Dec  3 19:06:21 compute-0 vigilant_shtern[459894]:    }
Dec  3 19:06:21 compute-0 vigilant_shtern[459894]: }
Dec  3 19:06:21 compute-0 systemd[1]: libpod-9114f7a89a2de69aa0378b5a507393a4f02418edc35d72b5ccdd4b9f5b1efd3c.scope: Deactivated successfully.
Dec  3 19:06:21 compute-0 systemd[1]: libpod-9114f7a89a2de69aa0378b5a507393a4f02418edc35d72b5ccdd4b9f5b1efd3c.scope: Consumed 1.223s CPU time.
Dec  3 19:06:21 compute-0 conmon[459894]: conmon 9114f7a89a2de69aa037 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9114f7a89a2de69aa0378b5a507393a4f02418edc35d72b5ccdd4b9f5b1efd3c.scope/container/memory.events
Dec  3 19:06:21 compute-0 podman[459878]: 2025-12-03 19:06:21.499563327 +0000 UTC m=+1.517578142 container died 9114f7a89a2de69aa0378b5a507393a4f02418edc35d72b5ccdd4b9f5b1efd3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_shtern, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  3 19:06:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-e0bf03d45cebc88c6b48150d6501f46b2603c659ea54d0811bffef1061435c6b-merged.mount: Deactivated successfully.
Dec  3 19:06:21 compute-0 podman[459878]: 2025-12-03 19:06:21.609996012 +0000 UTC m=+1.628010807 container remove 9114f7a89a2de69aa0378b5a507393a4f02418edc35d72b5ccdd4b9f5b1efd3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 19:06:21 compute-0 systemd[1]: libpod-conmon-9114f7a89a2de69aa0378b5a507393a4f02418edc35d72b5ccdd4b9f5b1efd3c.scope: Deactivated successfully.
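
The vigilant_shtern report above is a JSON map keyed by OSD UUID, one bluestore entry per logical volume: osd.0 on ceph_vg0/ceph_lv0, osd.1 on ceph_vg1/ceph_lv1 and osd.2 on ceph_vg2/ceph_lv2, all in cluster fsid c1caf3ba-b2a5-5005-a11e-e955c344dccc. A small sketch of extracting device assignments from such a dump, using one entry as a stand-in for the full report:

    import json

    # Abbreviated stand-in for the vigilant_shtern output above; the real
    # dump carries one entry per OSD with the same five fields.
    dump = """{
      "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": {
        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
        "osd_id": 0,
        "osd_uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
        "type": "bluestore"
      }
    }"""

    for uuid, osd in sorted(json.loads(dump).items(),
                            key=lambda kv: kv[1]["osd_id"]):
        print(f"osd.{osd['osd_id']}: {osd['type']} on {osd['device']}")
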
Dec  3 19:06:21 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 19:06:21 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:06:21 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 19:06:21 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:06:21 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev e458ec97-67a9-464e-9d36-af4ccab65ab8 does not exist
Dec  3 19:06:21 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev bada0615-10c5-4cde-bd37-b6d70fd7865e does not exist
Dec  3 19:06:21 compute-0 nova_compute[348325]: 2025-12-03 19:06:21.925 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:06:22 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2024: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 341 B/s wr, 61 op/s
Dec  3 19:06:22 compute-0 nova_compute[348325]: 2025-12-03 19:06:22.485 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:06:22 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:06:22 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:06:23 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec  3 19:06:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:06:23.364 286999 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 19:06:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:06:23.365 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 19:06:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:06:23.366 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 19:06:23 compute-0 nova_compute[348325]: 2025-12-03 19:06:23.648 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:06:24 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2025: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 341 B/s wr, 42 op/s
Dec  3 19:06:24 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:06:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 19:06:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:06:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 19:06:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:06:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015186096934622648 of space, bias 1.0, pg target 0.45558290803867946 quantized to 32 (current 32)
Dec  3 19:06:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:06:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:06:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:06:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:06:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:06:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Dec  3 19:06:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:06:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 19:06:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:06:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:06:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:06:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 19:06:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:06:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 19:06:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:06:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:06:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:06:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
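
Each pg_autoscaler pair of lines above is the same arithmetic: the raw PG target is the pool's share of raw space times its bias times a cluster-wide PG budget, and the result is then quantized to a power of two and reconciled with per-pool floors and the current pg_num, which is why most pools here stay at 32 despite tiny targets. The budget works out to 300, consistent with the default mon_target_pg_per_osd of 100 across the three OSDs reported earlier; that figure is inferred from the logged numbers, not stated by the log. A check against two of the lines:

    # PG budget inferred from the logged targets: 100 PGs/OSD x 3 OSDs.
    PG_BUDGET = 300

    pools = [
        # (pool, fraction of raw space used, bias) -- copied from the log
        ("vms",                0.0015186096934622648, 1.0),
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0),
    ]

    for name, used, bias in pools:
        print(f"{name}: pg target {used * bias * PG_BUDGET}")
        # -> vms 0.4555829..., cephfs.cephfs.meta 0.0006104707...  as logged
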
Dec  3 19:06:25 compute-0 nova_compute[348325]: 2025-12-03 19:06:25.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:06:26 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2026: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 341 B/s wr, 22 op/s
Dec  3 19:06:26 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec  3 19:06:26 compute-0 nova_compute[348325]: 2025-12-03 19:06:26.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:06:26 compute-0 nova_compute[348325]: 2025-12-03 19:06:26.931 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:06:28 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2027: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 341 B/s wr, 3 op/s
Dec  3 19:06:28 compute-0 nova_compute[348325]: 2025-12-03 19:06:28.654 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:06:29 compute-0 nova_compute[348325]: 2025-12-03 19:06:29.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:06:29 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:06:29 compute-0 podman[459992]: 2025-12-03 19:06:29.638944811 +0000 UTC m=+0.092415716 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  3 19:06:29 compute-0 podman[459991]: 2025-12-03 19:06:29.675547855 +0000 UTC m=+0.125540756 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  3 19:06:29 compute-0 podman[459993]: 2025-12-03 19:06:29.685431386 +0000 UTC m=+0.129522462 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, com.redhat.component=ubi9-minimal-container, architecture=x86_64, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, vcs-type=git, version=9.6, vendor=Red Hat, Inc., container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1755695350, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc.)
Dec  3 19:06:29 compute-0 podman[158200]: time="2025-12-03T19:06:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 19:06:29 compute-0 podman[158200]: @ - - [03/Dec/2025:19:06:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 19:06:29 compute-0 podman[158200]: @ - - [03/Dec/2025:19:06:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8652 "" "Go-http-client/1.1"
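
The two GET requests above are the podman service (pid 158200) answering prometheus-podman-exporter over the libpod REST API on its unix socket. The same endpoint can be hit directly with a raw HTTP/1.0 request; a sketch reusing the containers/json path from the log, and assuming the standard /run/podman/podman.sock socket path (it also appears in the podman_exporter config later in this log):

    import json
    import socket

    SOCK = "/run/podman/podman.sock"
    PATH = "/v4.9.3/libpod/containers/json?all=true"   # path as logged above

    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect(SOCK)
    # HTTP/1.0, so the server closes the connection once the body is sent.
    s.sendall(f"GET {PATH} HTTP/1.0\r\nHost: localhost\r\n\r\n".encode())
    buf = b""
    while chunk := s.recv(65536):
        buf += chunk
    s.close()

    _headers, _, body = buf.partition(b"\r\n\r\n")
    for c in json.loads(body):
        print(c["Id"][:12], c["Names"][0], c["State"])
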
Dec  3 19:06:30 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2028: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:06:30 compute-0 nova_compute[348325]: 2025-12-03 19:06:30.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:06:31 compute-0 openstack_network_exporter[365222]: ERROR   19:06:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 19:06:31 compute-0 openstack_network_exporter[365222]: ERROR   19:06:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:06:31 compute-0 openstack_network_exporter[365222]: ERROR   19:06:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:06:31 compute-0 openstack_network_exporter[365222]: ERROR   19:06:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 19:06:31 compute-0 openstack_network_exporter[365222]: ERROR   19:06:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 19:06:31 compute-0 nova_compute[348325]: 2025-12-03 19:06:31.934 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:06:32 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2029: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:06:32 compute-0 nova_compute[348325]: 2025-12-03 19:06:32.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:06:32 compute-0 nova_compute[348325]: 2025-12-03 19:06:32.487 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 19:06:32 compute-0 nova_compute[348325]: 2025-12-03 19:06:32.488 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  3 19:06:33 compute-0 nova_compute[348325]: 2025-12-03 19:06:33.293 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "refresh_cache-a4fc45c7-44e4-4b50-a3e0-98de13268f88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 19:06:33 compute-0 nova_compute[348325]: 2025-12-03 19:06:33.294 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquired lock "refresh_cache-a4fc45c7-44e4-4b50-a3e0-98de13268f88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 19:06:33 compute-0 nova_compute[348325]: 2025-12-03 19:06:33.294 348329 DEBUG nova.network.neutron [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  3 19:06:33 compute-0 nova_compute[348325]: 2025-12-03 19:06:33.294 348329 DEBUG nova.objects.instance [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lazy-loading 'info_cache' on Instance uuid a4fc45c7-44e4-4b50-a3e0-98de13268f88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 19:06:33 compute-0 nova_compute[348325]: 2025-12-03 19:06:33.659 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:06:33 compute-0 podman[460051]: 2025-12-03 19:06:33.927401763 +0000 UTC m=+0.082983127 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  3 19:06:33 compute-0 podman[460052]: 2025-12-03 19:06:33.931210515 +0000 UTC m=+0.082848122 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 19:06:33 compute-0 podman[460050]: 2025-12-03 19:06:33.977485725 +0000 UTC m=+0.131079020 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, config_id=edpm, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, name=ubi9)
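
The health_status events above (node_exporter, multipathd and openstack_network_exporter, then ceilometer_agent_ipmi, ovn_metadata_agent and kepler) are podman's periodic healthchecks: each config_data block names the test script and the healthchecks mount, and podman records health_status=healthy together with the failing streak. The same check can be run and read back on demand; a sketch, using one container name from the events:

    import json
    import subprocess

    NAME = "node_exporter"   # container name taken from the events above

    # Trigger the container's configured healthcheck once.
    subprocess.run(["podman", "healthcheck", "run", NAME], check=True)

    # Read status and failing streak back out of the container state.
    inspect = json.loads(subprocess.run(
        ["podman", "inspect", NAME],
        capture_output=True, text=True, check=True).stdout)
    health = inspect[0]["State"]["Health"]
    print(health["Status"], "failing streak:", health["FailingStreak"])
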
Dec  3 19:06:34 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2030: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:06:34 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:06:35 compute-0 nova_compute[348325]: 2025-12-03 19:06:35.322 348329 DEBUG nova.network.neutron [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Updating instance_info_cache with network_info: [{"id": "cf729fa8-9549-4bf2-9858-7e8de773e1bc", "address": "fa:16:3e:8d:91:4c", "network": {"id": "04e258c0-609e-4010-a306-af20506c3a9d", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.160", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d29cef7b24ee4d30b2b3f5027ec6aafb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf729fa8-95", "ovs_interfaceid": "cf729fa8-9549-4bf2-9858-7e8de773e1bc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 19:06:35 compute-0 nova_compute[348325]: 2025-12-03 19:06:35.338 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Releasing lock "refresh_cache-a4fc45c7-44e4-4b50-a3e0-98de13268f88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 19:06:35 compute-0 nova_compute[348325]: 2025-12-03 19:06:35.339 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  3 19:06:36 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2031: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:06:36 compute-0 nova_compute[348325]: 2025-12-03 19:06:36.938 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:06:37 compute-0 nova_compute[348325]: 2025-12-03 19:06:37.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:06:37 compute-0 nova_compute[348325]: 2025-12-03 19:06:37.488 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  3 19:06:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 19:06:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3739453274' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 19:06:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 19:06:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3739453274' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 19:06:38 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2032: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:06:38 compute-0 nova_compute[348325]: 2025-12-03 19:06:38.662 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:06:39 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:06:40 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2033: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:06:41 compute-0 nova_compute[348325]: 2025-12-03 19:06:41.942 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:06:42 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2034: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:06:42 compute-0 nova_compute[348325]: 2025-12-03 19:06:42.480 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:06:43 compute-0 nova_compute[348325]: 2025-12-03 19:06:43.665 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:06:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:06:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:06:43 compute-0 podman[460106]: 2025-12-03 19:06:43.993011892 +0000 UTC m=+0.146326643 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  3 19:06:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:06:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:06:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:06:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:06:44 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2035: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:06:44 compute-0 nova_compute[348325]: 2025-12-03 19:06:44.485 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:06:44 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:06:44 compute-0 nova_compute[348325]: 2025-12-03 19:06:44.643 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 19:06:44 compute-0 nova_compute[348325]: 2025-12-03 19:06:44.644 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 19:06:44 compute-0 nova_compute[348325]: 2025-12-03 19:06:44.645 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 19:06:44 compute-0 nova_compute[348325]: 2025-12-03 19:06:44.648 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  3 19:06:44 compute-0 nova_compute[348325]: 2025-12-03 19:06:44.650 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 19:06:45 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 19:06:45 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/564167334' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 19:06:45 compute-0 nova_compute[348325]: 2025-12-03 19:06:45.199 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 19:06:45 compute-0 nova_compute[348325]: 2025-12-03 19:06:45.303 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 19:06:45 compute-0 nova_compute[348325]: 2025-12-03 19:06:45.304 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 19:06:45 compute-0 nova_compute[348325]: 2025-12-03 19:06:45.311 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 19:06:45 compute-0 nova_compute[348325]: 2025-12-03 19:06:45.312 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 19:06:45 compute-0 nova_compute[348325]: 2025-12-03 19:06:45.753 348329 WARNING nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  3 19:06:45 compute-0 nova_compute[348325]: 2025-12-03 19:06:45.755 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3577MB free_disk=59.89718246459961GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  3 19:06:45 compute-0 nova_compute[348325]: 2025-12-03 19:06:45.756 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 19:06:45 compute-0 nova_compute[348325]: 2025-12-03 19:06:45.756 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 19:06:45 compute-0 nova_compute[348325]: 2025-12-03 19:06:45.862 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance a4fc45c7-44e4-4b50-a3e0-98de13268f88 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  3 19:06:45 compute-0 nova_compute[348325]: 2025-12-03 19:06:45.863 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance a364994c-8442-4a4c-bd6b-f3a2d31e4483 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  3 19:06:45 compute-0 nova_compute[348325]: 2025-12-03 19:06:45.864 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  3 19:06:45 compute-0 nova_compute[348325]: 2025-12-03 19:06:45.866 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  3 19:06:45 compute-0 nova_compute[348325]: 2025-12-03 19:06:45.935 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 19:06:46 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2036: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:06:46 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 19:06:46 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/159782817' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 19:06:46 compute-0 nova_compute[348325]: 2025-12-03 19:06:46.411 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 19:06:46 compute-0 nova_compute[348325]: 2025-12-03 19:06:46.421 348329 DEBUG nova.compute.provider_tree [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  3 19:06:46 compute-0 nova_compute[348325]: 2025-12-03 19:06:46.446 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  3 19:06:46 compute-0 nova_compute[348325]: 2025-12-03 19:06:46.450 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  3 19:06:46 compute-0 nova_compute[348325]: 2025-12-03 19:06:46.451 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 19:06:46 compute-0 nova_compute[348325]: 2025-12-03 19:06:46.947 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:06:47 compute-0 podman[460174]: 2025-12-03 19:06:47.984052914 +0000 UTC m=+0.125778401 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm)
Dec  3 19:06:48 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2037: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:06:48 compute-0 podman[460173]: 2025-12-03 19:06:48.037005037 +0000 UTC m=+0.183883619 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  3 19:06:48 compute-0 nova_compute[348325]: 2025-12-03 19:06:48.665 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:06:49 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:06:50 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2038: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:06:51 compute-0 nova_compute[348325]: 2025-12-03 19:06:51.951 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:06:52 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2039: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:06:53 compute-0 nova_compute[348325]: 2025-12-03 19:06:53.669 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:06:54 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2040: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:06:54 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:06:56 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2041: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:06:56 compute-0 nova_compute[348325]: 2025-12-03 19:06:56.955 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:06:58 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2042: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:06:58 compute-0 nova_compute[348325]: 2025-12-03 19:06:58.671 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:06:59 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:06:59 compute-0 podman[158200]: time="2025-12-03T19:06:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 19:06:59 compute-0 podman[158200]: @ - - [03/Dec/2025:19:06:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 19:06:59 compute-0 podman[158200]: @ - - [03/Dec/2025:19:06:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8635 "" "Go-http-client/1.1"
Dec  3 19:06:59 compute-0 podman[460223]: 2025-12-03 19:06:59.926300648 +0000 UTC m=+0.082575936 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  3 19:06:59 compute-0 podman[460222]: 2025-12-03 19:06:59.937891271 +0000 UTC m=+0.090684524 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Dec  3 19:06:59 compute-0 podman[460224]: 2025-12-03 19:06:59.969641546 +0000 UTC m=+0.120313797 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, release=1755695350, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, distribution-scope=public, container_name=openstack_network_exporter, io.buildah.version=1.33.7, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., managed_by=edpm_ansible)
Dec  3 19:07:00 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2043: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:07:01 compute-0 openstack_network_exporter[365222]: ERROR   19:07:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:07:01 compute-0 openstack_network_exporter[365222]: ERROR   19:07:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:07:01 compute-0 openstack_network_exporter[365222]: ERROR   19:07:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 19:07:01 compute-0 openstack_network_exporter[365222]: ERROR   19:07:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 19:07:01 compute-0 openstack_network_exporter[365222]: ERROR   19:07:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 19:07:01 compute-0 nova_compute[348325]: 2025-12-03 19:07:01.958 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:07:02 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2044: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:07:03 compute-0 nova_compute[348325]: 2025-12-03 19:07:03.676 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:07:04 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2045: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:07:04 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:07:04 compute-0 podman[460284]: 2025-12-03 19:07:04.963368252 +0000 UTC m=+0.117576981 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  3 19:07:04 compute-0 podman[460283]: 2025-12-03 19:07:04.972412782 +0000 UTC m=+0.136345748 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  3 19:07:05 compute-0 podman[460282]: 2025-12-03 19:07:05.008508153 +0000 UTC m=+0.170949493 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, config_id=edpm, io.buildah.version=1.29.0, name=ubi9, version=9.4, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.)
Dec  3 19:07:06 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2046: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:07:06 compute-0 nova_compute[348325]: 2025-12-03 19:07:06.961 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:07:08 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2047: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:07:08 compute-0 nova_compute[348325]: 2025-12-03 19:07:08.677 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:07:09 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:07:10 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2048: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:07:11 compute-0 nova_compute[348325]: 2025-12-03 19:07:11.964 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:07:12 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2049: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.256 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.256 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.256 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c393ad0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.257 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7eff8d7fffe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.258 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c393ad0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.259 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff9026f920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c393ad0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.259 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c393ad0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.259 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c393ad0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.259 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffa10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c393ad0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.259 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8daba2d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c393ad0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.259 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c393ad0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.259 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff90799b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c393ad0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.260 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c393ad0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.260 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8f46ebd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c393ad0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.260 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c393ad0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.260 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c393ad0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.260 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c393ad0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.262 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c393ad0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.262 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c393ad0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.263 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c393ad0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.263 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c393ad0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.263 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c393ad0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.263 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c393ad0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.263 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c393ad0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.264 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c393ad0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.264 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7fff50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c393ad0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.264 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c393ad0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.264 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7fffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c393ad0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.266 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8ef7c7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c393ad0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.270 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a4fc45c7-44e4-4b50-a3e0-98de13268f88', 'name': 'te-0714371-asg-eacwc356yfed-wjjibmhqaqmp-wkbbxaqu3pya', 'flavor': {'id': 'a94cfbfb-a20a-4689-ac91-e7436db75880', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '29e9e995-880d-46f8-bdd0-149d4e107ea9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000c', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'd29cef7b24ee4d30b2b3f5027ec6aafb', 'user_id': '5b5e6c2a7cce4e3b96611203def80123', 'hostId': 'd87badab98086e7cd0aaefe9beb8cbc86d59712043f354b2bb8c77be', 'status': 'active', 'metadata': {'metering.server_group': 'd721c97c-b9eb-44f9-a826-1b99239b172a'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.276 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a364994c-8442-4a4c-bd6b-f3a2d31e4483', 'name': 'te-0714371-asg-eacwc356yfed-ehdrupxp3h3u-navxh3tm2qn5', 'flavor': {'id': 'a94cfbfb-a20a-4689-ac91-e7436db75880', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '29e9e995-880d-46f8-bdd0-149d4e107ea9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'd29cef7b24ee4d30b2b3f5027ec6aafb', 'user_id': '5b5e6c2a7cce4e3b96611203def80123', 'hostId': 'd87badab98086e7cd0aaefe9beb8cbc86d59712043f354b2bb8c77be', 'status': 'active', 'metadata': {'metering.server_group': 'd721c97c-b9eb-44f9-a826-1b99239b172a'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.277 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.277 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.277 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.278 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.278 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-03T19:07:13.277914) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.286 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.294 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.295 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
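Every meter in this trace goes through the same sequence just shown: a coordination check against an optional hash ring, a heartbeat update, then one sample per discovered instance. A rough sketch of that cycle under those assumptions (poll_fn, the heartbeat dict, and belongs_to_self are stand-ins, not ceilometer APIs):

    # Sketch of the per-pollster cycle visible in the log lines above.
    import datetime

    heartbeats = {}

    def run_cycle(meter_name, instances, poll_fn, hashring=None):
        # No hashring means no coordination is required: this agent polls
        # every locally discovered instance itself ("The current hashrings
        # are the following [None]").
        if hashring is not None:
            instances = [i for i in instances
                         if hashring.belongs_to_self(i['id'])]  # illustrative API
        # Heartbeat recorded alongside sampling, mirroring the two heartbeat lines.
        heartbeats[meter_name] = datetime.datetime.now(datetime.timezone.utc)
        return [(i['id'], meter_name, poll_fn(i)) for i in instances]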
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.295 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7eff8d8a80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.296 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.296 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.296 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.296 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.297 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-03T19:07:13.296749) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.297 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.297 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.298 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.298 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7eff8d8a8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.298 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.299 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff9026f920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.299 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff9026f920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.303 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.303 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.303 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-03T19:07:13.299417) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.304 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.305 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.305 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7eff8d8a8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.305 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.305 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.305 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.306 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.306 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-03T19:07:13.306055) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.306 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/network.outgoing.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.307 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.307 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.308 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7eff8d8a81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.308 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
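One plausible reading of the skip line above: rate pollsters share the local_instances discovery with pollsters that already ran this cycle, so the cached discovery yields no resources not yet handled and the pollster is skipped. A hedged sketch of such a per-cycle discovery cache (names illustrative):

    # Sketch: first caller this cycle runs the discovery; later pollsters
    # sharing the same method reuse the cached result.
    def discover_once(discovery_cache, method_name, discover_fn):
        if method_name not in discovery_cache:
            discovery_cache[method_name] = discover_fn()
        return discovery_cache[method_name]

    cache = {}
    resources = discover_once(cache, 'local_instances', lambda: ['inst-1'])
    again = discover_once(cache, 'local_instances', lambda: ['never runs'])
    assert resources is again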
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.308 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7eff8d7ff9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.308 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.309 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ffa10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.309 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ffa10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.309 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.309 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/network.incoming.bytes volume: 1520 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.310 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-03T19:07:13.309445) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.310 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.311 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.311 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7eff8d7fe840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.311 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.312 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8daba2d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.312 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8daba2d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.312 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.313 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-03T19:07:13.312387) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.332 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.333 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.352 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.353 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.354 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
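Note that disk.device.capacity logs two samples per instance: 1073741824 bytes matches the 1 GiB root disk of the m1.nano flavor (disk=1), and 509952 bytes is a small secondary device, likely the config drive. Per-device pollsters emit one sample per block device; a hedged sketch of that fan-out (the resource-id scheme and device names are assumptions):

    # Sketch: one sample per block device, explaining the paired volumes above.
    GIB = 1024 ** 3

    def per_device_samples(instance_id, devices):
        # devices: list of (device_name, capacity_bytes) pairs
        return [{'resource_id': f'{instance_id}-{name}',
                 'meter': 'disk.device.capacity',
                 'volume': capacity}
                for name, capacity in devices]

    print(per_device_samples('a4fc45c7', [('vda', 1 * GIB), ('vdb', 509952)]))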
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.354 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7eff8d8a82c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.354 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.355 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a82f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.355 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a82f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.355 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.356 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.356 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-03T19:07:13.355754) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.356 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.357 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.357 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7eff8d7ff9b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.358 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.358 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff90799b20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.358 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff90799b20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.358 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.359 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-03T19:07:13.358699) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.388 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/memory.usage volume: 42.515625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.416 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/memory.usage volume: 43.6796875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.417 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
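The memory.usage volumes (42.515625 and 43.6796875) are MiB, consistent with the flavor's ram=128. Libvirt reports memory statistics in KiB, so a conversion like the following illustrative one yields exactly those fractional values:

    # Sketch: KiB-to-MiB conversion reproducing the logged volumes.
    def kib_to_mib(kib: int) -> float:
        return kib / 1024.0

    assert kib_to_mib(43536) == 42.515625   # instance a4fc45c7...
    assert kib_to_mib(44728) == 43.6796875  # instance a364994c...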
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.417 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7eff8d8a8350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.417 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.417 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.418 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.418 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.418 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-03T19:07:13.418234) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.418 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.419 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.419 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.420 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7eff8f682330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.420 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.420 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8f46ebd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.420 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8f46ebd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.421 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.421 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-03T19:07:13.421010) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.421 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.422 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.422 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.423 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.424 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.424 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7eff8d7ff4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.424 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.424 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.425 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.425 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-03T19:07:13.425167) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.425 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.462 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.read.bytes volume: 30382592 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.463 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.508 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.read.bytes volume: 30284800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.509 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.509 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.510 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7eff8d930c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.510 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.510 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7eff8d7ff4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.510 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.510 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.510 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.510 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.510 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.read.latency volume: 1826201908 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.511 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.read.latency volume: 148336564 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.511 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.read.latency volume: 2339550092 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.512 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.read.latency volume: 154099871 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.512 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
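The disk.device.read.latency volumes look like cumulative nanoseconds spent on reads (libvirt-style block-stats counters). Paired with the disk.device.read.requests counts polled just below, they give a mean per-request read latency; a hedged sketch of that arithmetic:

    # Sketch: cumulative latency counter divided by request counter.
    def mean_latency_ms(total_ns: int, requests: int) -> float:
        return total_ns / requests / 1e6

    # ~1.67 ms per read for the first instance's main device,
    # using the 1826201908 ns and 1093-request values from this cycle.
    print(mean_latency_ms(1826201908, 1093))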
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.512 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7eff8d7ff530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.512 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.513 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.513 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.513 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.513 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-03T19:07:13.510729) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.513 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.read.requests volume: 1093 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.513 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.514 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.read.requests volume: 1098 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.514 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.515 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.515 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7eff8d7ff590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.515 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.515 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-03T19:07:13.513395) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.515 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.515 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.516 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-03T19:07:13.516042) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.516 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.516 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.516 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.516 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.517 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.517 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.517 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7eff8d7ff5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.518 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.518 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.518 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.518 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.518 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.write.bytes volume: 73162752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.518 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-03T19:07:13.518322) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.518 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.519 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.write.bytes volume: 72855552 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.519 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.520 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.520 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7eff8d8a8620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.520 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.520 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.521 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.521 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.521 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-03T19:07:13.521542) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.521 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.522 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.522 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
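power.state volume 1 for both instances matches libvirt's standard virDomainState numbering, where 1 is "running" (consistent with the OS-EXT-STS:vm_state 'running' seen at discovery). For reference, shown here purely as illustration:

    # Standard virDomainState values; ceilometer's power.state volume above is 1.
    LIBVIRT_DOMAIN_STATE = {
        0: 'nostate', 1: 'running', 2: 'blocked', 3: 'paused',
        4: 'shutdown', 5: 'shutoff', 6: 'crashed', 7: 'pmsuspended',
    }
    print(LIBVIRT_DOMAIN_STATE[1])  # running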
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.522 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7eff8d7ff650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.523 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.523 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.523 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.523 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.523 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.write.latency volume: 9240506883 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.524 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.524 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.write.latency volume: 7366670257 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.524 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.525 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.525 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7eff8d7ff6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.525 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.525 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.525 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.526 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.526 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.write.requests volume: 340 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.526 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.526 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-03T19:07:13.523541) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.527 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-03T19:07:13.526124) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.527 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.write.requests volume: 277 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.527 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.527 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.528 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7eff8d7ffa40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.528 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.528 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ffef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.528 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ffef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.528 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.528 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.529 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/network.incoming.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.529 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.529 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-03T19:07:13.528506) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.529 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7eff8d7ff710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.530 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.530 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.530 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.530 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.530 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-03T19:07:13.530355) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.531 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.531 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7eff8d7fff20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.531 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.531 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7fff50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.531 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7fff50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.531 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.532 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/network.incoming.packets volume: 13 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.532 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.532 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-03T19:07:13.531842) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.533 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.533 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7eff8d7ff770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.533 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.533 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff7a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.533 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff7a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.533 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.534 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.534 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7eff8d7fff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.534 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.534 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7fffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.535 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7fffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.535 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-03T19:07:13.533620) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.535 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.535 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.535 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.536 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.536 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7eff8d7fdac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.536 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.536 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8ef7c7d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.536 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-03T19:07:13.535120) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.536 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8ef7c7d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.537 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.537 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/cpu volume: 332610000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.537 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/cpu volume: 166200000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.538 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
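[editor's note] The two cpu samples above are cumulative guest CPU time in nanoseconds (332610000000 ns is roughly 332.6 s of CPU), so utilization has to be derived from consecutive readings. A minimal sketch, assuming a 10 s polling interval, a single vCPU, and a hypothetical earlier sample; none of those values appear in this log:

    NS_PER_S = 1_000_000_000

    def cpu_util_percent(prev_ns, curr_ns, interval_s, vcpus):
        # Percent CPU use over the interval, normalized per vCPU.
        return 100.0 * (curr_ns - prev_ns) / (interval_s * NS_PER_S * vcpus)

    # Assumed previous reading 1.5e9 ns below the value logged above:
    print(cpu_util_percent(331_110_000_000, 332_610_000_000, 10, 1))  # -> 15.0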
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.538 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.538 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.538 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.538 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-03T19:07:13.537059) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.538 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.538 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.539 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.539 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.539 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.539 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.539 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.539 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.539 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.539 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.539 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.539 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.539 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.540 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.540 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.540 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.540 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.540 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.540 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.540 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.540 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.540 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:07:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:07:13.540 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
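[editor's note] Every pollster above follows the same sequence the manager.py DEBUG lines trace: run discovery, check whether the pollster belongs to a coordinated hashring, record a heartbeat, emit one sample per discovered resource, then report completion. A minimal sketch of that control flow, with illustrative names only (this is not ceilometer's actual code):

    import datetime

    def run_pollster(name, discover, poll, hashrings=None):
        resources = discover()                  # "Executing discovery process ..."
        if hashrings is not None:               # "Checking if we need coordination ..."
            resources = [r for r in resources if r in hashrings]
        beat = datetime.datetime.now(datetime.timezone.utc)  # "Pollster heartbeat update: ..."
        samples = [poll(r) for r in resources]  # "<instance>/<meter> volume: N"
        return beat, samples                    # "Finished polling pollster ..."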
Dec  3 19:07:13 compute-0 nova_compute[348325]: 2025-12-03 19:07:13.676 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:07:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:07:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:07:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:07:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:07:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:07:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:07:14 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_19:07:14
Dec  3 19:07:14 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 19:07:14 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 19:07:14 compute-0 ceph-mgr[193091]: [balancer INFO root] pools ['default.rgw.log', 'images', 'cephfs.cephfs.meta', 'vms', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control', '.rgw.root', 'backups', 'volumes', '.mgr']
Dec  3 19:07:14 compute-0 ceph-mgr[193091]: [balancer INFO root] prepared 0/10 changes
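[editor's note] The balancer pass above evaluated all eleven pools in upmap mode and prepared 0 of at most 10 changes, i.e. the 321 PGs are already spread evenly enough that no upmap entries are needed. The same state can be queried on demand with the standard `ceph balancer status` CLI command; running it here is an illustration, not something this log shows:

    import subprocess

    # Prints the active mode and the last optimize results, matching the
    # "Mode upmap" / "prepared 0/10 changes" lines above.
    print(subprocess.run(["ceph", "balancer", "status"],
                         capture_output=True, text=True, check=True).stdout)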
Dec  3 19:07:14 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2050: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:07:14 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:07:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 19:07:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 19:07:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 19:07:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 19:07:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 19:07:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 19:07:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 19:07:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 19:07:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 19:07:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 19:07:14 compute-0 podman[460338]: 2025-12-03 19:07:14.791349771 +0000 UTC m=+0.094185380 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  3 19:07:16 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2051: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:07:16 compute-0 nova_compute[348325]: 2025-12-03 19:07:16.968 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:07:18 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2052: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:07:18 compute-0 nova_compute[348325]: 2025-12-03 19:07:18.679 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:07:18 compute-0 podman[460364]: 2025-12-03 19:07:18.902843974 +0000 UTC m=+0.067877488 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute)
Dec  3 19:07:18 compute-0 podman[460363]: 2025-12-03 19:07:18.95922484 +0000 UTC m=+0.128579920 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true)
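[editor's note] The health_status events for podman_exporter, ceilometer_agent_compute, and ovn_controller above are emitted by podman's healthcheck timers running each container's configured test command (the /openstack/healthcheck scripts in config_data). The same check can be triggered manually; `podman healthcheck run` is a real subcommand that exits 0 when the container is healthy. A small sketch:

    import subprocess

    def is_healthy(container: str) -> bool:
        # Runs the container's own configured healthcheck once.
        return subprocess.run(["podman", "healthcheck", "run", container]).returncode == 0

    for name in ("podman_exporter", "ceilometer_agent_compute", "ovn_controller"):
        print(name, is_healthy(name))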
Dec  3 19:07:19 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:07:20 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2053: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:07:20 compute-0 nova_compute[348325]: 2025-12-03 19:07:20.443 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:07:21 compute-0 nova_compute[348325]: 2025-12-03 19:07:21.971 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:07:22 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2054: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:07:22 compute-0 nova_compute[348325]: 2025-12-03 19:07:22.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:07:22 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 19:07:22 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:07:22 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 19:07:22 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:07:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:07:23.365 286999 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 19:07:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:07:23.366 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 19:07:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:07:23.369 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 19:07:23 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:07:23 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:07:23 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 19:07:23 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 19:07:23 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 19:07:23 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 19:07:23 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 19:07:23 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:07:23 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev d800c0cc-4345-4bd8-bf53-e13a033617db does not exist
Dec  3 19:07:23 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 937c78bf-3d54-42b1-b5ad-f1285883bb80 does not exist
Dec  3 19:07:23 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 2672d0ca-8089-46e0-81b9-9307f25b3598 does not exist
Dec  3 19:07:23 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 19:07:23 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 19:07:23 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 19:07:23 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 19:07:23 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 19:07:23 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 19:07:23 compute-0 nova_compute[348325]: 2025-12-03 19:07:23.684 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:07:24 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2055: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:07:24 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:07:24 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 19:07:24 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:07:24 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 19:07:24 compute-0 podman[460799]: 2025-12-03 19:07:24.631062886 +0000 UTC m=+0.066241378 container create d6a626e266e10a218e4b72e63eb594041ea2436bf125a7fd11674b6940586f92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_swirles, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 19:07:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 19:07:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:07:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 19:07:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:07:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015186096934622648 of space, bias 1.0, pg target 0.45558290803867946 quantized to 32 (current 32)
Dec  3 19:07:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:07:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:07:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:07:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:07:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:07:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Dec  3 19:07:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:07:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 19:07:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:07:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:07:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:07:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 19:07:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:07:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 19:07:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:07:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:07:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:07:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
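[editor's note] Each "pg target" above is simply usage_ratio x bias x a cluster PG budget, then quantized against the pool's current pg_num. The budget works out to 300 for every pool here, consistent with 3 OSDs at the default mon_target_pg_per_osd of 100; that constant is inferred from the numbers, not stated in the log. A worked check:

    PG_BUDGET = 300  # assumed: 3 OSDs * 100 target PGs per OSD

    pools = {  # usage_ratio and bias copied from the lines above
        ".mgr":               (7.185749983720779e-06, 1.0),
        "vms":                (0.0015186096934622648, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
    }
    for name, (usage, bias) in pools.items():
        print(name, usage * bias * PG_BUDGET)
    # -> 0.0021557..., 0.45558..., 0.00061047...  (matching each "pg target")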
Dec  3 19:07:24 compute-0 podman[460799]: 2025-12-03 19:07:24.604561589 +0000 UTC m=+0.039740121 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:07:24 compute-0 systemd[1]: Started libpod-conmon-d6a626e266e10a218e4b72e63eb594041ea2436bf125a7fd11674b6940586f92.scope.
Dec  3 19:07:24 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:07:24 compute-0 podman[460799]: 2025-12-03 19:07:24.76850269 +0000 UTC m=+0.203681202 container init d6a626e266e10a218e4b72e63eb594041ea2436bf125a7fd11674b6940586f92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 19:07:24 compute-0 podman[460799]: 2025-12-03 19:07:24.777753187 +0000 UTC m=+0.212931689 container start d6a626e266e10a218e4b72e63eb594041ea2436bf125a7fd11674b6940586f92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_swirles, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True)
Dec  3 19:07:24 compute-0 podman[460799]: 2025-12-03 19:07:24.782996694 +0000 UTC m=+0.218175196 container attach d6a626e266e10a218e4b72e63eb594041ea2436bf125a7fd11674b6940586f92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_swirles, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  3 19:07:24 compute-0 naughty_swirles[460814]: 167 167
Dec  3 19:07:24 compute-0 systemd[1]: libpod-d6a626e266e10a218e4b72e63eb594041ea2436bf125a7fd11674b6940586f92.scope: Deactivated successfully.
Dec  3 19:07:24 compute-0 conmon[460814]: conmon d6a626e266e10a218e4b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d6a626e266e10a218e4b72e63eb594041ea2436bf125a7fd11674b6940586f92.scope/container/memory.events
Dec  3 19:07:24 compute-0 podman[460819]: 2025-12-03 19:07:24.857762869 +0000 UTC m=+0.051216371 container died d6a626e266e10a218e4b72e63eb594041ea2436bf125a7fd11674b6940586f92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 19:07:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-5012e9ee165a27f605effe8a5ba0abb094a26cd1f44ee5eca5c63edb15795d9d-merged.mount: Deactivated successfully.
Dec  3 19:07:24 compute-0 podman[460819]: 2025-12-03 19:07:24.915524729 +0000 UTC m=+0.108978191 container remove d6a626e266e10a218e4b72e63eb594041ea2436bf125a7fd11674b6940586f92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 19:07:24 compute-0 systemd[1]: libpod-conmon-d6a626e266e10a218e4b72e63eb594041ea2436bf125a7fd11674b6940586f92.scope: Deactivated successfully.
Dec  3 19:07:25 compute-0 podman[460841]: 2025-12-03 19:07:25.13475593 +0000 UTC m=+0.054648585 container create adae90c597afae2e4b4019c97adea6bacf37eb05aad2adf473065109249f791e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_cohen, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Dec  3 19:07:25 compute-0 systemd[1]: Started libpod-conmon-adae90c597afae2e4b4019c97adea6bacf37eb05aad2adf473065109249f791e.scope.
Dec  3 19:07:25 compute-0 podman[460841]: 2025-12-03 19:07:25.11303618 +0000 UTC m=+0.032928835 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:07:25 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:07:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/013cff90a6a32eebcf534362b33e2706f9429e3aed0f1fec7911286ff93be4aa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 19:07:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/013cff90a6a32eebcf534362b33e2706f9429e3aed0f1fec7911286ff93be4aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 19:07:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/013cff90a6a32eebcf534362b33e2706f9429e3aed0f1fec7911286ff93be4aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 19:07:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/013cff90a6a32eebcf534362b33e2706f9429e3aed0f1fec7911286ff93be4aa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 19:07:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/013cff90a6a32eebcf534362b33e2706f9429e3aed0f1fec7911286ff93be4aa/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 19:07:25 compute-0 podman[460841]: 2025-12-03 19:07:25.246875467 +0000 UTC m=+0.166768162 container init adae90c597afae2e4b4019c97adea6bacf37eb05aad2adf473065109249f791e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_cohen, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec  3 19:07:25 compute-0 podman[460841]: 2025-12-03 19:07:25.258696565 +0000 UTC m=+0.178589250 container start adae90c597afae2e4b4019c97adea6bacf37eb05aad2adf473065109249f791e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec  3 19:07:25 compute-0 podman[460841]: 2025-12-03 19:07:25.265119802 +0000 UTC m=+0.185012537 container attach adae90c597afae2e4b4019c97adea6bacf37eb05aad2adf473065109249f791e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_cohen, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 19:07:26 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2056: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:07:26 compute-0 vibrant_cohen[460858]: --> passed data devices: 0 physical, 3 LVM
Dec  3 19:07:26 compute-0 vibrant_cohen[460858]: --> relative data size: 1.0
Dec  3 19:07:26 compute-0 vibrant_cohen[460858]: --> All data devices are unavailable
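[editor's note] The short-lived naughty_swirles and vibrant_cohen containers are cephadm invoking ceph-volume inside the Ceph image to scan for claimable disks; the scan found 0 physical and 3 LVM data devices, all reported unavailable (presumably already carrying OSD data), so no new OSDs are prepared. The orchestrator's view of the same scan can be listed with the standard `ceph orch device ls` command; invoking it here is illustrative:

    import subprocess

    # Re-scans hosts and lists each device with its AVAILABLE flag; devices
    # already consumed by LVs show as unavailable, as in the scan above.
    subprocess.run(["ceph", "orch", "device", "ls", "--refresh"], check=True)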
Dec  3 19:07:26 compute-0 systemd[1]: libpod-adae90c597afae2e4b4019c97adea6bacf37eb05aad2adf473065109249f791e.scope: Deactivated successfully.
Dec  3 19:07:26 compute-0 systemd[1]: libpod-adae90c597afae2e4b4019c97adea6bacf37eb05aad2adf473065109249f791e.scope: Consumed 1.104s CPU time.
Dec  3 19:07:26 compute-0 podman[460841]: 2025-12-03 19:07:26.426003257 +0000 UTC m=+1.345895932 container died adae90c597afae2e4b4019c97adea6bacf37eb05aad2adf473065109249f791e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_cohen, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 19:07:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-013cff90a6a32eebcf534362b33e2706f9429e3aed0f1fec7911286ff93be4aa-merged.mount: Deactivated successfully.
Dec  3 19:07:26 compute-0 podman[460841]: 2025-12-03 19:07:26.54701426 +0000 UTC m=+1.466906915 container remove adae90c597afae2e4b4019c97adea6bacf37eb05aad2adf473065109249f791e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 19:07:26 compute-0 systemd[1]: libpod-conmon-adae90c597afae2e4b4019c97adea6bacf37eb05aad2adf473065109249f791e.scope: Deactivated successfully.
Dec  3 19:07:26 compute-0 nova_compute[348325]: 2025-12-03 19:07:26.975 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:07:27 compute-0 nova_compute[348325]: 2025-12-03 19:07:27.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:07:27 compute-0 nova_compute[348325]: 2025-12-03 19:07:27.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:07:27 compute-0 podman[461037]: 2025-12-03 19:07:27.637054926 +0000 UTC m=+0.075839402 container create 15a7247f49ba75132588573d45dc14c3bf0390486fdaea5702f1043fd5a19701 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  3 19:07:27 compute-0 podman[461037]: 2025-12-03 19:07:27.602681946 +0000 UTC m=+0.041466452 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:07:27 compute-0 systemd[1]: Started libpod-conmon-15a7247f49ba75132588573d45dc14c3bf0390486fdaea5702f1043fd5a19701.scope.
Dec  3 19:07:27 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:07:27 compute-0 podman[461037]: 2025-12-03 19:07:27.795044452 +0000 UTC m=+0.233828938 container init 15a7247f49ba75132588573d45dc14c3bf0390486fdaea5702f1043fd5a19701 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_napier, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec  3 19:07:27 compute-0 podman[461037]: 2025-12-03 19:07:27.806968183 +0000 UTC m=+0.245752649 container start 15a7247f49ba75132588573d45dc14c3bf0390486fdaea5702f1043fd5a19701 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_napier, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:07:27 compute-0 systemd[1]: libpod-15a7247f49ba75132588573d45dc14c3bf0390486fdaea5702f1043fd5a19701.scope: Deactivated successfully.
Dec  3 19:07:27 compute-0 funny_napier[461051]: 167 167
Dec  3 19:07:27 compute-0 conmon[461051]: conmon 15a7247f49ba75132588 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-15a7247f49ba75132588573d45dc14c3bf0390486fdaea5702f1043fd5a19701.scope/container/memory.events
Dec  3 19:07:27 compute-0 podman[461037]: 2025-12-03 19:07:27.822055721 +0000 UTC m=+0.260840277 container attach 15a7247f49ba75132588573d45dc14c3bf0390486fdaea5702f1043fd5a19701 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_napier, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 19:07:27 compute-0 podman[461037]: 2025-12-03 19:07:27.82325783 +0000 UTC m=+0.262042296 container died 15a7247f49ba75132588573d45dc14c3bf0390486fdaea5702f1043fd5a19701 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_napier, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 19:07:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-066793ec9464846e0f2632e4f9bb6c5f7e8d74010b7b9b344869fcdf20e6922d-merged.mount: Deactivated successfully.
Dec  3 19:07:27 compute-0 podman[461037]: 2025-12-03 19:07:27.877753021 +0000 UTC m=+0.316537477 container remove 15a7247f49ba75132588573d45dc14c3bf0390486fdaea5702f1043fd5a19701 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_napier, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec  3 19:07:27 compute-0 systemd[1]: libpod-conmon-15a7247f49ba75132588573d45dc14c3bf0390486fdaea5702f1043fd5a19701.scope: Deactivated successfully.
Dec  3 19:07:28 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2057: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:07:28 compute-0 podman[461075]: 2025-12-03 19:07:28.09653767 +0000 UTC m=+0.067435636 container create c2453ffb86f87368e2a5cc2e7883a44c01177d402c95fe2dd3cec257c8703d55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_wiles, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  3 19:07:28 compute-0 systemd[1]: Started libpod-conmon-c2453ffb86f87368e2a5cc2e7883a44c01177d402c95fe2dd3cec257c8703d55.scope.
Dec  3 19:07:28 compute-0 podman[461075]: 2025-12-03 19:07:28.070824143 +0000 UTC m=+0.041722059 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:07:28 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:07:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a18555d046cd7af5e5513a3cb841038ab77643d907d39274f27971a01161b41/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 19:07:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a18555d046cd7af5e5513a3cb841038ab77643d907d39274f27971a01161b41/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 19:07:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a18555d046cd7af5e5513a3cb841038ab77643d907d39274f27971a01161b41/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 19:07:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a18555d046cd7af5e5513a3cb841038ab77643d907d39274f27971a01161b41/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 19:07:28 compute-0 podman[461075]: 2025-12-03 19:07:28.24318285 +0000 UTC m=+0.214080846 container init c2453ffb86f87368e2a5cc2e7883a44c01177d402c95fe2dd3cec257c8703d55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_wiles, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  3 19:07:28 compute-0 podman[461075]: 2025-12-03 19:07:28.262133582 +0000 UTC m=+0.233031508 container start c2453ffb86f87368e2a5cc2e7883a44c01177d402c95fe2dd3cec257c8703d55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_wiles, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 19:07:28 compute-0 podman[461075]: 2025-12-03 19:07:28.267423091 +0000 UTC m=+0.238321097 container attach c2453ffb86f87368e2a5cc2e7883a44c01177d402c95fe2dd3cec257c8703d55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_wiles, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 19:07:28 compute-0 nova_compute[348325]: 2025-12-03 19:07:28.684 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:07:29 compute-0 romantic_wiles[461092]: {
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:    "0": [
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:        {
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:            "devices": [
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:                "/dev/loop3"
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:            ],
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:            "lv_name": "ceph_lv0",
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:            "lv_size": "21470642176",
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=973fbbc8-5aff-4a53-bee8-42e5a6788dd6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:            "lv_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:            "name": "ceph_lv0",
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:            "tags": {
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:                "ceph.block_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:                "ceph.cephx_lockbox_secret": "",
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:                "ceph.cluster_name": "ceph",
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:                "ceph.crush_device_class": "",
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:                "ceph.encrypted": "0",
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:                "ceph.osd_fsid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:                "ceph.osd_id": "0",
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:                "ceph.type": "block",
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:                "ceph.vdo": "0"
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:            },
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:            "type": "block",
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:            "vg_name": "ceph_vg0"
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:        }
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:    ],
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:    "1": [
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:        {
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:            "devices": [
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:                "/dev/loop4"
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:            ],
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:            "lv_name": "ceph_lv1",
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:            "lv_size": "21470642176",
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1e2b0083-5293-47cb-a3d1-bc27cedc4ede,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:            "lv_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:            "name": "ceph_lv1",
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:            "tags": {
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:                "ceph.block_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:                "ceph.cephx_lockbox_secret": "",
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:                "ceph.cluster_name": "ceph",
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:                "ceph.crush_device_class": "",
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:                "ceph.encrypted": "0",
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:                "ceph.osd_fsid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:                "ceph.osd_id": "1",
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:                "ceph.type": "block",
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:                "ceph.vdo": "0"
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:            },
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:            "type": "block",
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:            "vg_name": "ceph_vg1"
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:        }
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:    ],
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:    "2": [
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:        {
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:            "devices": [
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:                "/dev/loop5"
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:            ],
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:            "lv_name": "ceph_lv2",
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:            "lv_size": "21470642176",
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2abec9de-afba-437e-9a17-384a1dd8cd50,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:            "lv_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:            "name": "ceph_lv2",
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:            "tags": {
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:                "ceph.block_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:                "ceph.cephx_lockbox_secret": "",
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:                "ceph.cluster_name": "ceph",
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:                "ceph.crush_device_class": "",
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:                "ceph.encrypted": "0",
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:                "ceph.osd_fsid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:                "ceph.osd_id": "2",
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:                "ceph.type": "block",
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:                "ceph.vdo": "0"
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:            },
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:            "type": "block",
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:            "vg_name": "ceph_vg2"
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:        }
Dec  3 19:07:29 compute-0 romantic_wiles[461092]:    ]
Dec  3 19:07:29 compute-0 romantic_wiles[461092]: }
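[editor's note] The JSON block printed by romantic_wiles maps each OSD id ("0", "1", "2") to its backing logical volume, the physical device behind it, and the ceph.* LV tags, in the shape that ceph-volume lvm list --format json emits. A sketch that reduces a captured copy of it to a per-OSD summary; the capture file name is an assumption:

    import json

    with open("lvm_list.json") as fh:  # assumed capture of the JSON above
        report = json.load(fh)

    for osd_id in sorted(report, key=int):
        for lv in report[osd_id]:
            tags = lv.get("tags", {})
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"on {','.join(lv['devices'])} fsid={tags.get('ceph.osd_fsid')}")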
Dec  3 19:07:29 compute-0 systemd[1]: libpod-c2453ffb86f87368e2a5cc2e7883a44c01177d402c95fe2dd3cec257c8703d55.scope: Deactivated successfully.
Dec  3 19:07:29 compute-0 podman[461075]: 2025-12-03 19:07:29.11011268 +0000 UTC m=+1.081010636 container died c2453ffb86f87368e2a5cc2e7883a44c01177d402c95fe2dd3cec257c8703d55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec  3 19:07:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a18555d046cd7af5e5513a3cb841038ab77643d907d39274f27971a01161b41-merged.mount: Deactivated successfully.
Dec  3 19:07:29 compute-0 podman[461075]: 2025-12-03 19:07:29.203331205 +0000 UTC m=+1.174229131 container remove c2453ffb86f87368e2a5cc2e7883a44c01177d402c95fe2dd3cec257c8703d55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_wiles, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  3 19:07:29 compute-0 systemd[1]: libpod-conmon-c2453ffb86f87368e2a5cc2e7883a44c01177d402c95fe2dd3cec257c8703d55.scope: Deactivated successfully.
Dec  3 19:07:29 compute-0 nova_compute[348325]: 2025-12-03 19:07:29.488 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:07:29 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:07:29 compute-0 podman[158200]: time="2025-12-03T19:07:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 19:07:29 compute-0 podman[158200]: @ - - [03/Dec/2025:19:07:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 19:07:29 compute-0 podman[158200]: @ - - [03/Dec/2025:19:07:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8655 "" "Go-http-client/1.1"
Dec  3 19:07:30 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2058: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:07:30 compute-0 podman[461250]: 2025-12-03 19:07:30.111532972 +0000 UTC m=+0.055230349 container create e4746763b56d2f430776c75f7d2e4f829521c8522cf5d059a852a601352afb7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_austin, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 19:07:30 compute-0 systemd[1]: Started libpod-conmon-e4746763b56d2f430776c75f7d2e4f829521c8522cf5d059a852a601352afb7d.scope.
Dec  3 19:07:30 compute-0 podman[461250]: 2025-12-03 19:07:30.093621715 +0000 UTC m=+0.037319112 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:07:30 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:07:30 compute-0 podman[461250]: 2025-12-03 19:07:30.215671124 +0000 UTC m=+0.159368511 container init e4746763b56d2f430776c75f7d2e4f829521c8522cf5d059a852a601352afb7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_austin, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 19:07:30 compute-0 podman[461250]: 2025-12-03 19:07:30.224847248 +0000 UTC m=+0.168544615 container start e4746763b56d2f430776c75f7d2e4f829521c8522cf5d059a852a601352afb7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_austin, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 19:07:30 compute-0 podman[461250]: 2025-12-03 19:07:30.229506421 +0000 UTC m=+0.173203818 container attach e4746763b56d2f430776c75f7d2e4f829521c8522cf5d059a852a601352afb7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 19:07:30 compute-0 determined_austin[461278]: 167 167
Dec  3 19:07:30 compute-0 systemd[1]: libpod-e4746763b56d2f430776c75f7d2e4f829521c8522cf5d059a852a601352afb7d.scope: Deactivated successfully.
Dec  3 19:07:30 compute-0 podman[461250]: 2025-12-03 19:07:30.233579461 +0000 UTC m=+0.177276858 container died e4746763b56d2f430776c75f7d2e4f829521c8522cf5d059a852a601352afb7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_austin, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 19:07:30 compute-0 podman[461267]: 2025-12-03 19:07:30.266959266 +0000 UTC m=+0.096385754 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.expose-services=, version=9.6)
Dec  3 19:07:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-67c40c6c0d46898594ae23eccd464b3609a2dcd6cfc79ff80e4389aad88703e2-merged.mount: Deactivated successfully.
Dec  3 19:07:30 compute-0 podman[461250]: 2025-12-03 19:07:30.286756909 +0000 UTC m=+0.230454266 container remove e4746763b56d2f430776c75f7d2e4f829521c8522cf5d059a852a601352afb7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_austin, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec  3 19:07:30 compute-0 podman[461263]: 2025-12-03 19:07:30.29866619 +0000 UTC m=+0.134805101 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:07:30 compute-0 systemd[1]: libpod-conmon-e4746763b56d2f430776c75f7d2e4f829521c8522cf5d059a852a601352afb7d.scope: Deactivated successfully.
Dec  3 19:07:30 compute-0 podman[461266]: 2025-12-03 19:07:30.309633058 +0000 UTC m=+0.144450857 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 19:07:30 compute-0 podman[461350]: 2025-12-03 19:07:30.485683454 +0000 UTC m=+0.060934398 container create 1b931b7fa8a97b8865c9362b9d945f988acb0398582ba8c108d783c2d0806bf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_easley, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  3 19:07:30 compute-0 nova_compute[348325]: 2025-12-03 19:07:30.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:07:30 compute-0 podman[461350]: 2025-12-03 19:07:30.458766607 +0000 UTC m=+0.034017641 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:07:30 compute-0 systemd[1]: Started libpod-conmon-1b931b7fa8a97b8865c9362b9d945f988acb0398582ba8c108d783c2d0806bf5.scope.
Dec  3 19:07:30 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:07:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afaf68d5f2d5d18587b7683a121b5bda894f74c5f25014c6f5a61c94a093fe72/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 19:07:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afaf68d5f2d5d18587b7683a121b5bda894f74c5f25014c6f5a61c94a093fe72/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 19:07:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afaf68d5f2d5d18587b7683a121b5bda894f74c5f25014c6f5a61c94a093fe72/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 19:07:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afaf68d5f2d5d18587b7683a121b5bda894f74c5f25014c6f5a61c94a093fe72/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 19:07:30 compute-0 podman[461350]: 2025-12-03 19:07:30.637005217 +0000 UTC m=+0.212256261 container init 1b931b7fa8a97b8865c9362b9d945f988acb0398582ba8c108d783c2d0806bf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_easley, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 19:07:30 compute-0 podman[461350]: 2025-12-03 19:07:30.656863293 +0000 UTC m=+0.232114277 container start 1b931b7fa8a97b8865c9362b9d945f988acb0398582ba8c108d783c2d0806bf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_easley, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 19:07:30 compute-0 podman[461350]: 2025-12-03 19:07:30.663432082 +0000 UTC m=+0.238683076 container attach 1b931b7fa8a97b8865c9362b9d945f988acb0398582ba8c108d783c2d0806bf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_easley, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Dec  3 19:07:31 compute-0 openstack_network_exporter[365222]: ERROR   19:07:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 19:07:31 compute-0 openstack_network_exporter[365222]: ERROR   19:07:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:07:31 compute-0 openstack_network_exporter[365222]: ERROR   19:07:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:07:31 compute-0 openstack_network_exporter[365222]: ERROR   19:07:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 19:07:31 compute-0 openstack_network_exporter[365222]: ERROR   19:07:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 19:07:31 compute-0 sleepy_easley[461366]: {
Dec  3 19:07:31 compute-0 sleepy_easley[461366]:    "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
Dec  3 19:07:31 compute-0 sleepy_easley[461366]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:07:31 compute-0 sleepy_easley[461366]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 19:07:31 compute-0 sleepy_easley[461366]:        "osd_id": 1,
Dec  3 19:07:31 compute-0 sleepy_easley[461366]:        "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 19:07:31 compute-0 sleepy_easley[461366]:        "type": "bluestore"
Dec  3 19:07:31 compute-0 sleepy_easley[461366]:    },
Dec  3 19:07:31 compute-0 sleepy_easley[461366]:    "2abec9de-afba-437e-9a17-384a1dd8cd50": {
Dec  3 19:07:31 compute-0 sleepy_easley[461366]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:07:31 compute-0 sleepy_easley[461366]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 19:07:31 compute-0 sleepy_easley[461366]:        "osd_id": 2,
Dec  3 19:07:31 compute-0 sleepy_easley[461366]:        "osd_uuid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 19:07:31 compute-0 sleepy_easley[461366]:        "type": "bluestore"
Dec  3 19:07:31 compute-0 sleepy_easley[461366]:    },
Dec  3 19:07:31 compute-0 sleepy_easley[461366]:    "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": {
Dec  3 19:07:31 compute-0 sleepy_easley[461366]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:07:31 compute-0 sleepy_easley[461366]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 19:07:31 compute-0 sleepy_easley[461366]:        "osd_id": 0,
Dec  3 19:07:31 compute-0 sleepy_easley[461366]:        "osd_uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 19:07:31 compute-0 sleepy_easley[461366]:        "type": "bluestore"
Dec  3 19:07:31 compute-0 sleepy_easley[461366]:    }
Dec  3 19:07:31 compute-0 sleepy_easley[461366]: }
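[editor's note] sleepy_easley prints the complementary inventory, keyed by OSD uuid rather than OSD id, with the device-mapper path and bluestore type for each OSD, in the shape of ceph-volume raw list output; cephadm uses this refreshed device state for the config-key set calls logged just below. A sketch that joins it against the earlier LVM listing on osd_uuid/ceph.osd_fsid as a consistency check; both file names are assumed captures of the two JSON blocks:

    import json

    raw = json.load(open("raw_list.json"))   # uuid-keyed block above
    lvm = json.load(open("lvm_list.json"))   # id-keyed block from earlier

    for uuid, osd in raw.items():
        osd_id = str(osd["osd_id"])
        tags = lvm[osd_id][0]["tags"]
        assert tags["ceph.osd_fsid"] == uuid, (osd_id, uuid)
        print(f"osd.{osd_id}: {osd['device']} ({osd['type']}) consistent")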
Dec  3 19:07:31 compute-0 systemd[1]: libpod-1b931b7fa8a97b8865c9362b9d945f988acb0398582ba8c108d783c2d0806bf5.scope: Deactivated successfully.
Dec  3 19:07:31 compute-0 podman[461350]: 2025-12-03 19:07:31.846713674 +0000 UTC m=+1.421964638 container died 1b931b7fa8a97b8865c9362b9d945f988acb0398582ba8c108d783c2d0806bf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_easley, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec  3 19:07:31 compute-0 systemd[1]: libpod-1b931b7fa8a97b8865c9362b9d945f988acb0398582ba8c108d783c2d0806bf5.scope: Consumed 1.189s CPU time.
Dec  3 19:07:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-afaf68d5f2d5d18587b7683a121b5bda894f74c5f25014c6f5a61c94a093fe72-merged.mount: Deactivated successfully.
Dec  3 19:07:31 compute-0 podman[461350]: 2025-12-03 19:07:31.929018333 +0000 UTC m=+1.504269277 container remove 1b931b7fa8a97b8865c9362b9d945f988acb0398582ba8c108d783c2d0806bf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_easley, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 19:07:31 compute-0 systemd[1]: libpod-conmon-1b931b7fa8a97b8865c9362b9d945f988acb0398582ba8c108d783c2d0806bf5.scope: Deactivated successfully.
Dec  3 19:07:31 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 19:07:31 compute-0 nova_compute[348325]: 2025-12-03 19:07:31.980 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:07:31 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:07:31 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 19:07:32 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:07:32 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 5c67ca0f-8ea5-4815-a6f0-459f4e0f8753 does not exist
Dec  3 19:07:32 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 4fcde0dd-fea0-4924-8bab-f19094634fca does not exist
Dec  3 19:07:32 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2059: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:07:32 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:07:32 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:07:33 compute-0 nova_compute[348325]: 2025-12-03 19:07:33.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:07:33 compute-0 nova_compute[348325]: 2025-12-03 19:07:33.487 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  3 19:07:33 compute-0 nova_compute[348325]: 2025-12-03 19:07:33.689 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:07:34 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2060: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:07:34 compute-0 nova_compute[348325]: 2025-12-03 19:07:34.237 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "refresh_cache-a364994c-8442-4a4c-bd6b-f3a2d31e4483" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  3 19:07:34 compute-0 nova_compute[348325]: 2025-12-03 19:07:34.238 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquired lock "refresh_cache-a364994c-8442-4a4c-bd6b-f3a2d31e4483" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  3 19:07:34 compute-0 nova_compute[348325]: 2025-12-03 19:07:34.238 348329 DEBUG nova.network.neutron [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec  3 19:07:34 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:07:35 compute-0 podman[461463]: 2025-12-03 19:07:35.966710734 +0000 UTC m=+0.117770455 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, tcib_managed=true)
Dec  3 19:07:35 compute-0 podman[461464]: 2025-12-03 19:07:35.967482073 +0000 UTC m=+0.101445547 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true)
Dec  3 19:07:35 compute-0 podman[461462]: 2025-12-03 19:07:35.996796769 +0000 UTC m=+0.143734890 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, container_name=kepler, name=ubi9, config_id=edpm, io.openshift.tags=base rhel9, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, architecture=x86_64)
Dec  3 19:07:36 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2061: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:07:36 compute-0 nova_compute[348325]: 2025-12-03 19:07:36.559 348329 DEBUG nova.network.neutron [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Updating instance_info_cache with network_info: [{"id": "b761f609-2787-4aa2-9b1c-cc5b41d2373d", "address": "fa:16:3e:2c:da:52", "network": {"id": "04e258c0-609e-4010-a306-af20506c3a9d", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.71", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d29cef7b24ee4d30b2b3f5027ec6aafb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb761f609-27", "ovs_interfaceid": "b761f609-2787-4aa2-9b1c-cc5b41d2373d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  3 19:07:36 compute-0 nova_compute[348325]: 2025-12-03 19:07:36.584 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Releasing lock "refresh_cache-a364994c-8442-4a4c-bd6b-f3a2d31e4483" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  3 19:07:36 compute-0 nova_compute[348325]: 2025-12-03 19:07:36.585 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec  3 19:07:36 compute-0 nova_compute[348325]: 2025-12-03 19:07:36.983 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:07:37 compute-0 nova_compute[348325]: 2025-12-03 19:07:37.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:07:37 compute-0 nova_compute[348325]: 2025-12-03 19:07:37.487 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  3 19:07:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 19:07:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3042975801' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 19:07:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 19:07:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3042975801' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 19:07:38 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2062: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:07:38 compute-0 nova_compute[348325]: 2025-12-03 19:07:38.690 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:07:39 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:07:39 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Dec  3 19:07:39 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec  3 19:07:39 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 19:07:39 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 19:07:39 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 19:07:39 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 19:07:39 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 19:07:39 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:07:39 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev f10a0196-7279-42f7-a312-517e4c8c1081 does not exist
Dec  3 19:07:39 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev d9f1119c-9d77-4091-9088-83065d6312c4 does not exist
Dec  3 19:07:39 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 4608d417-c1b4-4e00-bb3a-a1520b98d635 does not exist
Dec  3 19:07:39 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 19:07:39 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 19:07:39 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 19:07:39 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 19:07:39 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 19:07:39 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 19:07:40 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2063: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:07:40 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 19:07:40 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:07:40 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 19:07:40 compute-0 podman[461655]: 2025-12-03 19:07:40.810414798 +0000 UTC m=+0.050321048 container create ebc16bfd6724123f63b2bf8b0b73226d62d40b52699c2f1fe174a85e30f01a0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_chaplygin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec  3 19:07:40 compute-0 systemd[1]: Started libpod-conmon-ebc16bfd6724123f63b2bf8b0b73226d62d40b52699c2f1fe174a85e30f01a0b.scope.
Dec  3 19:07:40 compute-0 podman[461655]: 2025-12-03 19:07:40.792770728 +0000 UTC m=+0.032676998 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:07:40 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:07:40 compute-0 podman[461655]: 2025-12-03 19:07:40.916784184 +0000 UTC m=+0.156690454 container init ebc16bfd6724123f63b2bf8b0b73226d62d40b52699c2f1fe174a85e30f01a0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 19:07:40 compute-0 podman[461655]: 2025-12-03 19:07:40.928826378 +0000 UTC m=+0.168732628 container start ebc16bfd6724123f63b2bf8b0b73226d62d40b52699c2f1fe174a85e30f01a0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 19:07:40 compute-0 podman[461655]: 2025-12-03 19:07:40.932964749 +0000 UTC m=+0.172871159 container attach ebc16bfd6724123f63b2bf8b0b73226d62d40b52699c2f1fe174a85e30f01a0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 19:07:40 compute-0 systemd[1]: libpod-ebc16bfd6724123f63b2bf8b0b73226d62d40b52699c2f1fe174a85e30f01a0b.scope: Deactivated successfully.
Dec  3 19:07:40 compute-0 mystifying_chaplygin[461671]: 167 167
Dec  3 19:07:40 compute-0 conmon[461671]: conmon ebc16bfd6724123f63b2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ebc16bfd6724123f63b2bf8b0b73226d62d40b52699c2f1fe174a85e30f01a0b.scope/container/memory.events
Dec  3 19:07:40 compute-0 podman[461655]: 2025-12-03 19:07:40.93628711 +0000 UTC m=+0.176193380 container died ebc16bfd6724123f63b2bf8b0b73226d62d40b52699c2f1fe174a85e30f01a0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_chaplygin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  3 19:07:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-1db6decbef1d23156112c5804aa50024b2f23f184c7fb3c0cabcdf763103559d-merged.mount: Deactivated successfully.
Dec  3 19:07:41 compute-0 podman[461655]: 2025-12-03 19:07:40.999937724 +0000 UTC m=+0.239843984 container remove ebc16bfd6724123f63b2bf8b0b73226d62d40b52699c2f1fe174a85e30f01a0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec  3 19:07:41 compute-0 systemd[1]: libpod-conmon-ebc16bfd6724123f63b2bf8b0b73226d62d40b52699c2f1fe174a85e30f01a0b.scope: Deactivated successfully.
Dec  3 19:07:41 compute-0 podman[461693]: 2025-12-03 19:07:41.205725187 +0000 UTC m=+0.057620057 container create 4eb17a20913437705e0562e43752a78e1d9420a4c2be05a166283e3968e164fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_grothendieck, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec  3 19:07:41 compute-0 podman[461693]: 2025-12-03 19:07:41.185036472 +0000 UTC m=+0.036931382 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:07:41 compute-0 systemd[1]: Started libpod-conmon-4eb17a20913437705e0562e43752a78e1d9420a4c2be05a166283e3968e164fa.scope.
Dec  3 19:07:41 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:07:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d78b5f87c87fb3b409bf5175e14d24b81620126241a102f7300791cece2b366/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 19:07:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d78b5f87c87fb3b409bf5175e14d24b81620126241a102f7300791cece2b366/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 19:07:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d78b5f87c87fb3b409bf5175e14d24b81620126241a102f7300791cece2b366/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 19:07:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d78b5f87c87fb3b409bf5175e14d24b81620126241a102f7300791cece2b366/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 19:07:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d78b5f87c87fb3b409bf5175e14d24b81620126241a102f7300791cece2b366/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 19:07:41 compute-0 podman[461693]: 2025-12-03 19:07:41.331747953 +0000 UTC m=+0.183642863 container init 4eb17a20913437705e0562e43752a78e1d9420a4c2be05a166283e3968e164fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  3 19:07:41 compute-0 podman[461693]: 2025-12-03 19:07:41.344895164 +0000 UTC m=+0.196790034 container start 4eb17a20913437705e0562e43752a78e1d9420a4c2be05a166283e3968e164fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_grothendieck, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 19:07:41 compute-0 podman[461693]: 2025-12-03 19:07:41.349863385 +0000 UTC m=+0.201758245 container attach 4eb17a20913437705e0562e43752a78e1d9420a4c2be05a166283e3968e164fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec  3 19:07:41 compute-0 nova_compute[348325]: 2025-12-03 19:07:41.986 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:07:42 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2064: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:07:42 compute-0 stoic_grothendieck[461709]: --> passed data devices: 0 physical, 3 LVM
Dec  3 19:07:42 compute-0 stoic_grothendieck[461709]: --> relative data size: 1.0
Dec  3 19:07:42 compute-0 stoic_grothendieck[461709]: --> All data devices are unavailable
Dec  3 19:07:42 compute-0 systemd[1]: libpod-4eb17a20913437705e0562e43752a78e1d9420a4c2be05a166283e3968e164fa.scope: Deactivated successfully.
Dec  3 19:07:42 compute-0 systemd[1]: libpod-4eb17a20913437705e0562e43752a78e1d9420a4c2be05a166283e3968e164fa.scope: Consumed 1.151s CPU time.
Dec  3 19:07:42 compute-0 podman[461738]: 2025-12-03 19:07:42.644213637 +0000 UTC m=+0.044885476 container died 4eb17a20913437705e0562e43752a78e1d9420a4c2be05a166283e3968e164fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_grothendieck, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 19:07:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d78b5f87c87fb3b409bf5175e14d24b81620126241a102f7300791cece2b366-merged.mount: Deactivated successfully.
Dec  3 19:07:42 compute-0 podman[461738]: 2025-12-03 19:07:42.723724838 +0000 UTC m=+0.124396667 container remove 4eb17a20913437705e0562e43752a78e1d9420a4c2be05a166283e3968e164fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_grothendieck, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec  3 19:07:42 compute-0 systemd[1]: libpod-conmon-4eb17a20913437705e0562e43752a78e1d9420a4c2be05a166283e3968e164fa.scope: Deactivated successfully.
Dec  3 19:07:43 compute-0 nova_compute[348325]: 2025-12-03 19:07:43.692 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:07:43 compute-0 podman[461890]: 2025-12-03 19:07:43.696781268 +0000 UTC m=+0.065943930 container create ca0f59f8356ad6aa51019e30e7defeb87314277056a7a5862b2aa8f71a37f7ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 19:07:43 compute-0 podman[461890]: 2025-12-03 19:07:43.667629006 +0000 UTC m=+0.036791708 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:07:43 compute-0 systemd[1]: Started libpod-conmon-ca0f59f8356ad6aa51019e30e7defeb87314277056a7a5862b2aa8f71a37f7ae.scope.
Dec  3 19:07:43 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:07:43 compute-0 podman[461890]: 2025-12-03 19:07:43.81401288 +0000 UTC m=+0.183175592 container init ca0f59f8356ad6aa51019e30e7defeb87314277056a7a5862b2aa8f71a37f7ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_goldberg, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  3 19:07:43 compute-0 podman[461890]: 2025-12-03 19:07:43.823172563 +0000 UTC m=+0.192335225 container start ca0f59f8356ad6aa51019e30e7defeb87314277056a7a5862b2aa8f71a37f7ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_goldberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 19:07:43 compute-0 podman[461890]: 2025-12-03 19:07:43.82958582 +0000 UTC m=+0.198748532 container attach ca0f59f8356ad6aa51019e30e7defeb87314277056a7a5862b2aa8f71a37f7ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_goldberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec  3 19:07:43 compute-0 youthful_goldberg[461907]: 167 167
Dec  3 19:07:43 compute-0 systemd[1]: libpod-ca0f59f8356ad6aa51019e30e7defeb87314277056a7a5862b2aa8f71a37f7ae.scope: Deactivated successfully.
Dec  3 19:07:43 compute-0 podman[461890]: 2025-12-03 19:07:43.832253155 +0000 UTC m=+0.201415907 container died ca0f59f8356ad6aa51019e30e7defeb87314277056a7a5862b2aa8f71a37f7ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_goldberg, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 19:07:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-71c47e7d611d1ab1c378ff8d4a4b8b0ba86f162d7a1c0c92c66ef1dcbd4c11c3-merged.mount: Deactivated successfully.
Dec  3 19:07:43 compute-0 podman[461890]: 2025-12-03 19:07:43.895246113 +0000 UTC m=+0.264408775 container remove ca0f59f8356ad6aa51019e30e7defeb87314277056a7a5862b2aa8f71a37f7ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_goldberg, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  3 19:07:43 compute-0 systemd[1]: libpod-conmon-ca0f59f8356ad6aa51019e30e7defeb87314277056a7a5862b2aa8f71a37f7ae.scope: Deactivated successfully.
Dec  3 19:07:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:07:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:07:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:07:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:07:44 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:07:44 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:07:44 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2065: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:07:44 compute-0 podman[461929]: 2025-12-03 19:07:44.116027341 +0000 UTC m=+0.069775054 container create 40a0f3a6ee8184161a037d1a913bd8bc4fe6528b3317742d356380c890822fdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_golick, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec  3 19:07:44 compute-0 systemd[1]: Started libpod-conmon-40a0f3a6ee8184161a037d1a913bd8bc4fe6528b3317742d356380c890822fdc.scope.
Dec  3 19:07:44 compute-0 podman[461929]: 2025-12-03 19:07:44.093164113 +0000 UTC m=+0.046911846 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:07:44 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:07:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc53b4577e891a8ee116aedbfd0a4da822e7c537189256d13fb26300f7776425/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 19:07:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc53b4577e891a8ee116aedbfd0a4da822e7c537189256d13fb26300f7776425/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 19:07:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc53b4577e891a8ee116aedbfd0a4da822e7c537189256d13fb26300f7776425/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 19:07:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc53b4577e891a8ee116aedbfd0a4da822e7c537189256d13fb26300f7776425/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 19:07:44 compute-0 podman[461929]: 2025-12-03 19:07:44.255809962 +0000 UTC m=+0.209557705 container init 40a0f3a6ee8184161a037d1a913bd8bc4fe6528b3317742d356380c890822fdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec  3 19:07:44 compute-0 podman[461929]: 2025-12-03 19:07:44.267844917 +0000 UTC m=+0.221592610 container start 40a0f3a6ee8184161a037d1a913bd8bc4fe6528b3317742d356380c890822fdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_golick, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 19:07:44 compute-0 podman[461929]: 2025-12-03 19:07:44.272648263 +0000 UTC m=+0.226395986 container attach 40a0f3a6ee8184161a037d1a913bd8bc4fe6528b3317742d356380c890822fdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_golick, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  3 19:07:44 compute-0 nova_compute[348325]: 2025-12-03 19:07:44.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:07:44 compute-0 nova_compute[348325]: 2025-12-03 19:07:44.532 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 19:07:44 compute-0 nova_compute[348325]: 2025-12-03 19:07:44.533 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 19:07:44 compute-0 nova_compute[348325]: 2025-12-03 19:07:44.534 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 19:07:44 compute-0 nova_compute[348325]: 2025-12-03 19:07:44.534 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  3 19:07:44 compute-0 nova_compute[348325]: 2025-12-03 19:07:44.535 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 19:07:44 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:07:44 compute-0 podman[461971]: 2025-12-03 19:07:44.905373027 +0000 UTC m=+0.069307642 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  3 19:07:45 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 19:07:45 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/647050293' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 19:07:45 compute-0 nova_compute[348325]: 2025-12-03 19:07:45.057 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 19:07:45 compute-0 gallant_golick[461946]: {
Dec  3 19:07:45 compute-0 gallant_golick[461946]:    "0": [
Dec  3 19:07:45 compute-0 gallant_golick[461946]:        {
Dec  3 19:07:45 compute-0 gallant_golick[461946]:            "devices": [
Dec  3 19:07:45 compute-0 gallant_golick[461946]:                "/dev/loop3"
Dec  3 19:07:45 compute-0 gallant_golick[461946]:            ],
Dec  3 19:07:45 compute-0 gallant_golick[461946]:            "lv_name": "ceph_lv0",
Dec  3 19:07:45 compute-0 gallant_golick[461946]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 19:07:45 compute-0 gallant_golick[461946]:            "lv_size": "21470642176",
Dec  3 19:07:45 compute-0 gallant_golick[461946]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=973fbbc8-5aff-4a53-bee8-42e5a6788dd6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 19:07:45 compute-0 gallant_golick[461946]:            "lv_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 19:07:45 compute-0 gallant_golick[461946]:            "name": "ceph_lv0",
Dec  3 19:07:45 compute-0 gallant_golick[461946]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 19:07:45 compute-0 gallant_golick[461946]:            "tags": {
Dec  3 19:07:45 compute-0 gallant_golick[461946]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 19:07:45 compute-0 gallant_golick[461946]:                "ceph.block_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 19:07:45 compute-0 gallant_golick[461946]:                "ceph.cephx_lockbox_secret": "",
Dec  3 19:07:45 compute-0 gallant_golick[461946]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:07:45 compute-0 gallant_golick[461946]:                "ceph.cluster_name": "ceph",
Dec  3 19:07:45 compute-0 gallant_golick[461946]:                "ceph.crush_device_class": "",
Dec  3 19:07:45 compute-0 gallant_golick[461946]:                "ceph.encrypted": "0",
Dec  3 19:07:45 compute-0 gallant_golick[461946]:                "ceph.osd_fsid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 19:07:45 compute-0 gallant_golick[461946]:                "ceph.osd_id": "0",
Dec  3 19:07:45 compute-0 gallant_golick[461946]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 19:07:45 compute-0 gallant_golick[461946]:                "ceph.type": "block",
Dec  3 19:07:45 compute-0 gallant_golick[461946]:                "ceph.vdo": "0"
Dec  3 19:07:45 compute-0 gallant_golick[461946]:            },
Dec  3 19:07:45 compute-0 gallant_golick[461946]:            "type": "block",
Dec  3 19:07:45 compute-0 gallant_golick[461946]:            "vg_name": "ceph_vg0"
Dec  3 19:07:45 compute-0 gallant_golick[461946]:        }
Dec  3 19:07:45 compute-0 gallant_golick[461946]:    ],
Dec  3 19:07:45 compute-0 gallant_golick[461946]:    "1": [
Dec  3 19:07:45 compute-0 gallant_golick[461946]:        {
Dec  3 19:07:45 compute-0 gallant_golick[461946]:            "devices": [
Dec  3 19:07:45 compute-0 gallant_golick[461946]:                "/dev/loop4"
Dec  3 19:07:45 compute-0 gallant_golick[461946]:            ],
Dec  3 19:07:45 compute-0 gallant_golick[461946]:            "lv_name": "ceph_lv1",
Dec  3 19:07:45 compute-0 gallant_golick[461946]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 19:07:45 compute-0 gallant_golick[461946]:            "lv_size": "21470642176",
Dec  3 19:07:45 compute-0 gallant_golick[461946]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1e2b0083-5293-47cb-a3d1-bc27cedc4ede,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 19:07:45 compute-0 gallant_golick[461946]:            "lv_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 19:07:45 compute-0 gallant_golick[461946]:            "name": "ceph_lv1",
Dec  3 19:07:45 compute-0 gallant_golick[461946]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 19:07:45 compute-0 gallant_golick[461946]:            "tags": {
Dec  3 19:07:45 compute-0 gallant_golick[461946]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 19:07:45 compute-0 gallant_golick[461946]:                "ceph.block_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 19:07:45 compute-0 gallant_golick[461946]:                "ceph.cephx_lockbox_secret": "",
Dec  3 19:07:45 compute-0 gallant_golick[461946]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:07:45 compute-0 gallant_golick[461946]:                "ceph.cluster_name": "ceph",
Dec  3 19:07:45 compute-0 gallant_golick[461946]:                "ceph.crush_device_class": "",
Dec  3 19:07:45 compute-0 gallant_golick[461946]:                "ceph.encrypted": "0",
Dec  3 19:07:45 compute-0 gallant_golick[461946]:                "ceph.osd_fsid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 19:07:45 compute-0 gallant_golick[461946]:                "ceph.osd_id": "1",
Dec  3 19:07:45 compute-0 gallant_golick[461946]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 19:07:45 compute-0 gallant_golick[461946]:                "ceph.type": "block",
Dec  3 19:07:45 compute-0 gallant_golick[461946]:                "ceph.vdo": "0"
Dec  3 19:07:45 compute-0 gallant_golick[461946]:            },
Dec  3 19:07:45 compute-0 gallant_golick[461946]:            "type": "block",
Dec  3 19:07:45 compute-0 gallant_golick[461946]:            "vg_name": "ceph_vg1"
Dec  3 19:07:45 compute-0 gallant_golick[461946]:        }
Dec  3 19:07:45 compute-0 gallant_golick[461946]:    ],
Dec  3 19:07:45 compute-0 gallant_golick[461946]:    "2": [
Dec  3 19:07:45 compute-0 gallant_golick[461946]:        {
Dec  3 19:07:45 compute-0 gallant_golick[461946]:            "devices": [
Dec  3 19:07:45 compute-0 gallant_golick[461946]:                "/dev/loop5"
Dec  3 19:07:45 compute-0 gallant_golick[461946]:            ],
Dec  3 19:07:45 compute-0 gallant_golick[461946]:            "lv_name": "ceph_lv2",
Dec  3 19:07:45 compute-0 gallant_golick[461946]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 19:07:45 compute-0 gallant_golick[461946]:            "lv_size": "21470642176",
Dec  3 19:07:45 compute-0 gallant_golick[461946]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2abec9de-afba-437e-9a17-384a1dd8cd50,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 19:07:45 compute-0 gallant_golick[461946]:            "lv_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 19:07:45 compute-0 gallant_golick[461946]:            "name": "ceph_lv2",
Dec  3 19:07:45 compute-0 gallant_golick[461946]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 19:07:45 compute-0 gallant_golick[461946]:            "tags": {
Dec  3 19:07:45 compute-0 gallant_golick[461946]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 19:07:45 compute-0 gallant_golick[461946]:                "ceph.block_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 19:07:45 compute-0 gallant_golick[461946]:                "ceph.cephx_lockbox_secret": "",
Dec  3 19:07:45 compute-0 gallant_golick[461946]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:07:45 compute-0 gallant_golick[461946]:                "ceph.cluster_name": "ceph",
Dec  3 19:07:45 compute-0 gallant_golick[461946]:                "ceph.crush_device_class": "",
Dec  3 19:07:45 compute-0 gallant_golick[461946]:                "ceph.encrypted": "0",
Dec  3 19:07:45 compute-0 gallant_golick[461946]:                "ceph.osd_fsid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 19:07:45 compute-0 gallant_golick[461946]:                "ceph.osd_id": "2",
Dec  3 19:07:45 compute-0 gallant_golick[461946]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 19:07:45 compute-0 gallant_golick[461946]:                "ceph.type": "block",
Dec  3 19:07:45 compute-0 gallant_golick[461946]:                "ceph.vdo": "0"
Dec  3 19:07:45 compute-0 gallant_golick[461946]:            },
Dec  3 19:07:45 compute-0 gallant_golick[461946]:            "type": "block",
Dec  3 19:07:45 compute-0 gallant_golick[461946]:            "vg_name": "ceph_vg2"
Dec  3 19:07:45 compute-0 gallant_golick[461946]:        }
Dec  3 19:07:45 compute-0 gallant_golick[461946]:    ]
Dec  3 19:07:45 compute-0 gallant_golick[461946]: }
Dec  3 19:07:45 compute-0 systemd[1]: libpod-40a0f3a6ee8184161a037d1a913bd8bc4fe6528b3317742d356380c890822fdc.scope: Deactivated successfully.
Dec  3 19:07:45 compute-0 podman[461929]: 2025-12-03 19:07:45.103818841 +0000 UTC m=+1.057566564 container died 40a0f3a6ee8184161a037d1a913bd8bc4fe6528b3317742d356380c890822fdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec  3 19:07:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc53b4577e891a8ee116aedbfd0a4da822e7c537189256d13fb26300f7776425-merged.mount: Deactivated successfully.
Dec  3 19:07:45 compute-0 nova_compute[348325]: 2025-12-03 19:07:45.158 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 19:07:45 compute-0 nova_compute[348325]: 2025-12-03 19:07:45.159 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 19:07:45 compute-0 nova_compute[348325]: 2025-12-03 19:07:45.174 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 19:07:45 compute-0 nova_compute[348325]: 2025-12-03 19:07:45.175 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 19:07:45 compute-0 podman[461929]: 2025-12-03 19:07:45.188758394 +0000 UTC m=+1.142506087 container remove 40a0f3a6ee8184161a037d1a913bd8bc4fe6528b3317742d356380c890822fdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_golick, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:07:45 compute-0 systemd[1]: libpod-conmon-40a0f3a6ee8184161a037d1a913bd8bc4fe6528b3317742d356380c890822fdc.scope: Deactivated successfully.
Dec  3 19:07:45 compute-0 nova_compute[348325]: 2025-12-03 19:07:45.679 348329 WARNING nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  3 19:07:45 compute-0 nova_compute[348325]: 2025-12-03 19:07:45.681 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3489MB free_disk=59.89718246459961GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  3 19:07:45 compute-0 nova_compute[348325]: 2025-12-03 19:07:45.682 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 19:07:45 compute-0 nova_compute[348325]: 2025-12-03 19:07:45.682 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 19:07:45 compute-0 nova_compute[348325]: 2025-12-03 19:07:45.800 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance a4fc45c7-44e4-4b50-a3e0-98de13268f88 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  3 19:07:45 compute-0 nova_compute[348325]: 2025-12-03 19:07:45.800 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance a364994c-8442-4a4c-bd6b-f3a2d31e4483 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  3 19:07:45 compute-0 nova_compute[348325]: 2025-12-03 19:07:45.801 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  3 19:07:45 compute-0 nova_compute[348325]: 2025-12-03 19:07:45.802 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  3 19:07:45 compute-0 nova_compute[348325]: 2025-12-03 19:07:45.824 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Refreshing inventories for resource provider 00cd1895-22aa-49c6-bdb2-0991af662704 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec  3 19:07:45 compute-0 nova_compute[348325]: 2025-12-03 19:07:45.843 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Updating ProviderTree inventory for provider 00cd1895-22aa-49c6-bdb2-0991af662704 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec  3 19:07:45 compute-0 nova_compute[348325]: 2025-12-03 19:07:45.844 348329 DEBUG nova.compute.provider_tree [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Updating inventory in ProviderTree for provider 00cd1895-22aa-49c6-bdb2-0991af662704 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec  3 19:07:45 compute-0 nova_compute[348325]: 2025-12-03 19:07:45.861 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Refreshing aggregate associations for resource provider 00cd1895-22aa-49c6-bdb2-0991af662704, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec  3 19:07:45 compute-0 nova_compute[348325]: 2025-12-03 19:07:45.882 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Refreshing trait associations for resource provider 00cd1895-22aa-49c6-bdb2-0991af662704, traits: COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_FMA3,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AESNI,HW_CPU_X86_AMD_SVM,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SVM,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_ABM,HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_BMI,HW_CPU_X86_SHA,COMPUTE_NODE,HW_CPU_X86_SSE42,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE4A,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,HW_CPU_X86_AVX2,COMPUTE_ACCELERATORS,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ARI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec  3 19:07:45 compute-0 nova_compute[348325]: 2025-12-03 19:07:45.941 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
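The subprocess above is the resource tracker checking Ceph capacity for its rbd-backed storage: nova shells out through oslo_concurrency.processutils and parses the JSON the monitor returns (the mon's dispatch of the same "df" command from client.openstack appears a few lines below, and the command returns 0 about half a second later). A minimal sketch of that round trip, assuming a reachable cluster and the "openstack" client keyring used here:

    import json
    from oslo_concurrency import processutils

    # Same command the log shows; execute() returns (stdout, stderr).
    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')

    df = json.loads(out)
    # 'stats' holds cluster-wide totals; 'pools' holds per-pool usage.
    total_gib = df['stats']['total_bytes'] / 1024 ** 3
    avail_gib = df['stats']['total_avail_bytes'] / 1024 ** 3
    print(f'cluster: {avail_gib:.1f} GiB free of {total_gib:.1f} GiB')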
Dec  3 19:07:46 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2066: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:07:46 compute-0 podman[462167]: 2025-12-03 19:07:46.186058726 +0000 UTC m=+0.067730065 container create e58d762cf91ff95d4ba228cf051dfef803c069a238043e2f7e5a55ed9c3e1b6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_greider, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 19:07:46 compute-0 systemd[1]: Started libpod-conmon-e58d762cf91ff95d4ba228cf051dfef803c069a238043e2f7e5a55ed9c3e1b6a.scope.
Dec  3 19:07:46 compute-0 podman[462167]: 2025-12-03 19:07:46.162358487 +0000 UTC m=+0.044029846 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:07:46 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:07:46 compute-0 podman[462167]: 2025-12-03 19:07:46.289672285 +0000 UTC m=+0.171343614 container init e58d762cf91ff95d4ba228cf051dfef803c069a238043e2f7e5a55ed9c3e1b6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_greider, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 19:07:46 compute-0 podman[462167]: 2025-12-03 19:07:46.299379842 +0000 UTC m=+0.181051151 container start e58d762cf91ff95d4ba228cf051dfef803c069a238043e2f7e5a55ed9c3e1b6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_greider, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec  3 19:07:46 compute-0 podman[462167]: 2025-12-03 19:07:46.304669741 +0000 UTC m=+0.186341050 container attach e58d762cf91ff95d4ba228cf051dfef803c069a238043e2f7e5a55ed9c3e1b6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec  3 19:07:46 compute-0 tender_greider[462183]: 167 167
Dec  3 19:07:46 compute-0 systemd[1]: libpod-e58d762cf91ff95d4ba228cf051dfef803c069a238043e2f7e5a55ed9c3e1b6a.scope: Deactivated successfully.
Dec  3 19:07:46 compute-0 podman[462167]: 2025-12-03 19:07:46.307611963 +0000 UTC m=+0.189283262 container died e58d762cf91ff95d4ba228cf051dfef803c069a238043e2f7e5a55ed9c3e1b6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  3 19:07:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-9343bdd5ecaf739c128582814f368868192f1cd8359b62db9b7716af6ab1a990-merged.mount: Deactivated successfully.
Dec  3 19:07:46 compute-0 podman[462167]: 2025-12-03 19:07:46.357843869 +0000 UTC m=+0.239515178 container remove e58d762cf91ff95d4ba228cf051dfef803c069a238043e2f7e5a55ed9c3e1b6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_greider, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  3 19:07:46 compute-0 systemd[1]: libpod-conmon-e58d762cf91ff95d4ba228cf051dfef803c069a238043e2f7e5a55ed9c3e1b6a.scope: Deactivated successfully.
Dec  3 19:07:46 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 19:07:46 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4214983276' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 19:07:46 compute-0 nova_compute[348325]: 2025-12-03 19:07:46.450 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 19:07:46 compute-0 nova_compute[348325]: 2025-12-03 19:07:46.461 348329 DEBUG nova.compute.provider_tree [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  3 19:07:46 compute-0 nova_compute[348325]: 2025-12-03 19:07:46.488 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
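Placement leaves the inventory alone because the freshly computed view matches what it already stores. The usable capacity it derives from each record is (total - reserved) * allocation_ratio, so this provider advertises 8 * 4.0 = 32 VCPU, (7679 - 512) * 1.0 = 7167 MB of RAM, and (59 - 1) * 0.9 = 52.2 GB of disk. A quick check of that arithmetic, with the relevant fields copied from the inventory dict in the log line above:

    inventory = {
        'VCPU': {'total': 8, 'reserved': 0, 'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB': {'total': 59, 'reserved': 1, 'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        # Placement's effective capacity per resource class.
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(f'{rc}: {capacity:g}')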
Dec  3 19:07:46 compute-0 nova_compute[348325]: 2025-12-03 19:07:46.490 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  3 19:07:46 compute-0 nova_compute[348325]: 2025-12-03 19:07:46.490 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.808s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 19:07:46 compute-0 podman[462208]: 2025-12-03 19:07:46.576286741 +0000 UTC m=+0.065298615 container create 0a8b2a3127630619b61c7d802ce1f83ee9a5b00fb9760d7dac52ca971dc18c53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec  3 19:07:46 compute-0 podman[462208]: 2025-12-03 19:07:46.556179049 +0000 UTC m=+0.045190943 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:07:46 compute-0 systemd[1]: Started libpod-conmon-0a8b2a3127630619b61c7d802ce1f83ee9a5b00fb9760d7dac52ca971dc18c53.scope.
Dec  3 19:07:46 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:07:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a186a8f3d9d862aa97009ff5275cb773f6d6c2ac63128ad86b910124e83a8024/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 19:07:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a186a8f3d9d862aa97009ff5275cb773f6d6c2ac63128ad86b910124e83a8024/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 19:07:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a186a8f3d9d862aa97009ff5275cb773f6d6c2ac63128ad86b910124e83a8024/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 19:07:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a186a8f3d9d862aa97009ff5275cb773f6d6c2ac63128ad86b910124e83a8024/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 19:07:46 compute-0 podman[462208]: 2025-12-03 19:07:46.727578063 +0000 UTC m=+0.216589957 container init 0a8b2a3127630619b61c7d802ce1f83ee9a5b00fb9760d7dac52ca971dc18c53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_chandrasekhar, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  3 19:07:46 compute-0 podman[462208]: 2025-12-03 19:07:46.738298974 +0000 UTC m=+0.227310848 container start 0a8b2a3127630619b61c7d802ce1f83ee9a5b00fb9760d7dac52ca971dc18c53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_chandrasekhar, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec  3 19:07:46 compute-0 podman[462208]: 2025-12-03 19:07:46.742656161 +0000 UTC m=+0.231668055 container attach 0a8b2a3127630619b61c7d802ce1f83ee9a5b00fb9760d7dac52ca971dc18c53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_chandrasekhar, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:07:46 compute-0 nova_compute[348325]: 2025-12-03 19:07:46.992 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:07:47 compute-0 suspicious_chandrasekhar[462224]: {
Dec  3 19:07:47 compute-0 suspicious_chandrasekhar[462224]:    "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
Dec  3 19:07:47 compute-0 suspicious_chandrasekhar[462224]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:07:47 compute-0 suspicious_chandrasekhar[462224]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 19:07:47 compute-0 suspicious_chandrasekhar[462224]:        "osd_id": 1,
Dec  3 19:07:47 compute-0 suspicious_chandrasekhar[462224]:        "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 19:07:47 compute-0 suspicious_chandrasekhar[462224]:        "type": "bluestore"
Dec  3 19:07:47 compute-0 suspicious_chandrasekhar[462224]:    },
Dec  3 19:07:47 compute-0 suspicious_chandrasekhar[462224]:    "2abec9de-afba-437e-9a17-384a1dd8cd50": {
Dec  3 19:07:47 compute-0 suspicious_chandrasekhar[462224]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:07:47 compute-0 suspicious_chandrasekhar[462224]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 19:07:47 compute-0 suspicious_chandrasekhar[462224]:        "osd_id": 2,
Dec  3 19:07:47 compute-0 suspicious_chandrasekhar[462224]:        "osd_uuid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 19:07:47 compute-0 suspicious_chandrasekhar[462224]:        "type": "bluestore"
Dec  3 19:07:47 compute-0 suspicious_chandrasekhar[462224]:    },
Dec  3 19:07:47 compute-0 suspicious_chandrasekhar[462224]:    "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": {
Dec  3 19:07:47 compute-0 suspicious_chandrasekhar[462224]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:07:47 compute-0 suspicious_chandrasekhar[462224]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 19:07:47 compute-0 suspicious_chandrasekhar[462224]:        "osd_id": 0,
Dec  3 19:07:47 compute-0 suspicious_chandrasekhar[462224]:        "osd_uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 19:07:47 compute-0 suspicious_chandrasekhar[462224]:        "type": "bluestore"
Dec  3 19:07:47 compute-0 suspicious_chandrasekhar[462224]:    }
Dec  3 19:07:47 compute-0 suspicious_chandrasekhar[462224]: }
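The JSON block the suspicious_chandrasekhar container just printed is a map keyed by OSD uuid, describing the host's three bluestore OSDs and the LVM devices backing them; this per-host device scan is what feeds the mgr/cephadm/host.compute-0 config-key writes that follow. A short sketch of consuming such a blob, with one entry copied from the output above standing in for the full map of three:

    import json

    # One entry from the container output above; the real map has three.
    raw_output = '''{
      "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": {
        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
        "osd_id": 0,
        "osd_uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
        "type": "bluestore"
      }
    }'''

    for osd_uuid, osd in json.loads(raw_output).items():
        # Each key is the OSD's uuid; the value names its backing LV.
        print(f"osd.{osd['osd_id']}: {osd['device']} ({osd['type']})")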
Dec  3 19:07:47 compute-0 systemd[1]: libpod-0a8b2a3127630619b61c7d802ce1f83ee9a5b00fb9760d7dac52ca971dc18c53.scope: Deactivated successfully.
Dec  3 19:07:47 compute-0 systemd[1]: libpod-0a8b2a3127630619b61c7d802ce1f83ee9a5b00fb9760d7dac52ca971dc18c53.scope: Consumed 1.091s CPU time.
Dec  3 19:07:47 compute-0 podman[462208]: 2025-12-03 19:07:47.834348407 +0000 UTC m=+1.323360291 container died 0a8b2a3127630619b61c7d802ce1f83ee9a5b00fb9760d7dac52ca971dc18c53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_chandrasekhar, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 19:07:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-a186a8f3d9d862aa97009ff5275cb773f6d6c2ac63128ad86b910124e83a8024-merged.mount: Deactivated successfully.
Dec  3 19:07:47 compute-0 podman[462208]: 2025-12-03 19:07:47.927990782 +0000 UTC m=+1.417002676 container remove 0a8b2a3127630619b61c7d802ce1f83ee9a5b00fb9760d7dac52ca971dc18c53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_chandrasekhar, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 19:07:47 compute-0 systemd[1]: libpod-conmon-0a8b2a3127630619b61c7d802ce1f83ee9a5b00fb9760d7dac52ca971dc18c53.scope: Deactivated successfully.
Dec  3 19:07:47 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 19:07:47 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:07:48 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 19:07:48 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:07:48 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 26938f3e-a472-4a7d-abfa-96dd66a225a8 does not exist
Dec  3 19:07:48 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 79fd2a69-ed43-420e-ad4e-1dcfe69b7a6c does not exist
Dec  3 19:07:48 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2067: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:07:48 compute-0 nova_compute[348325]: 2025-12-03 19:07:48.695 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:07:48 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:07:48 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:07:49 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:07:49 compute-0 podman[462321]: 2025-12-03 19:07:49.990042783 +0000 UTC m=+0.132417034 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  3 19:07:50 compute-0 podman[462320]: 2025-12-03 19:07:50.004847764 +0000 UTC m=+0.167873239 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec  3 19:07:50 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2068: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:07:52 compute-0 nova_compute[348325]: 2025-12-03 19:07:51.998 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:07:52 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2069: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:07:53 compute-0 nova_compute[348325]: 2025-12-03 19:07:53.700 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:07:54 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2070: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:07:54 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:07:56 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2071: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:07:57 compute-0 nova_compute[348325]: 2025-12-03 19:07:57.001 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:07:58 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2072: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:07:58 compute-0 nova_compute[348325]: 2025-12-03 19:07:58.702 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:07:59 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:07:59 compute-0 podman[158200]: time="2025-12-03T19:07:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 19:07:59 compute-0 podman[158200]: @ - - [03/Dec/2025:19:07:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 19:07:59 compute-0 podman[158200]: @ - - [03/Dec/2025:19:07:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8650 "" "Go-http-client/1.1"
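These two GETs are the podman system service answering a REST client over its unix socket; given the podman_exporter health check later in this window, with CONTAINER_HOST pointed at /run/podman/podman.sock, the caller is most likely prometheus-podman-exporter enumerating containers and their stats. The same listing can be reproduced in Python against that socket (a sketch, assuming the service socket at /run/podman/podman.sock and local root privileges):

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        # HTTPConnection that talks over an AF_UNIX socket instead of TCP.
        def __init__(self, path):
            super().__init__('localhost')
            self.unix_path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.unix_path)

    conn = UnixHTTPConnection('/run/podman/podman.sock')
    # Same libpod endpoint the access log above records.
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    containers = json.loads(conn.getresponse().read())
    print(len(containers), 'containers')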
Dec  3 19:08:00 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2073: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:08:00 compute-0 podman[462363]: 2025-12-03 19:08:00.954151132 +0000 UTC m=+0.109025281 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.buildah.version=1.33.7, vcs-type=git, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., version=9.6, config_id=edpm, distribution-scope=public, build-date=2025-08-20T13:12:41, name=ubi9-minimal)
Dec  3 19:08:00 compute-0 podman[462361]: 2025-12-03 19:08:00.959706617 +0000 UTC m=+0.122637273 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  3 19:08:00 compute-0 podman[462362]: 2025-12-03 19:08:00.975409182 +0000 UTC m=+0.128381855 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 19:08:01 compute-0 openstack_network_exporter[365222]: ERROR   19:08:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:08:01 compute-0 openstack_network_exporter[365222]: ERROR   19:08:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 19:08:01 compute-0 openstack_network_exporter[365222]: ERROR   19:08:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:08:01 compute-0 openstack_network_exporter[365222]: ERROR   19:08:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 19:08:01 compute-0 openstack_network_exporter[365222]: 
Dec  3 19:08:01 compute-0 openstack_network_exporter[365222]: ERROR   19:08:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 19:08:01 compute-0 openstack_network_exporter[365222]: 
Dec  3 19:08:02 compute-0 nova_compute[348325]: 2025-12-03 19:08:02.005 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:08:02 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2074: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:08:03 compute-0 nova_compute[348325]: 2025-12-03 19:08:03.704 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:08:04 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2075: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:08:04 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:08:06 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2076: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:08:06 compute-0 podman[462425]: 2025-12-03 19:08:06.933930604 +0000 UTC m=+0.087654980 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi)
Dec  3 19:08:06 compute-0 podman[462424]: 2025-12-03 19:08:06.941693894 +0000 UTC m=+0.099218603 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, io.openshift.tags=base rhel9, release-0.7.12=, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, distribution-scope=public, version=9.4, vcs-type=git)
Dec  3 19:08:06 compute-0 podman[462426]: 2025-12-03 19:08:06.94891957 +0000 UTC m=+0.087340992 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  3 19:08:07 compute-0 nova_compute[348325]: 2025-12-03 19:08:07.008 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:08:08 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2077: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:08:08 compute-0 nova_compute[348325]: 2025-12-03 19:08:08.707 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:08:09 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:08:10 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2078: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:08:12 compute-0 nova_compute[348325]: 2025-12-03 19:08:12.011 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:08:12 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2079: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:08:13 compute-0 nova_compute[348325]: 2025-12-03 19:08:13.711 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:08:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:08:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:08:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:08:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:08:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:08:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:08:14 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_19:08:14
Dec  3 19:08:14 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 19:08:14 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 19:08:14 compute-0 ceph-mgr[193091]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.log', '.mgr', 'backups', 'default.rgw.meta', 'vms', 'images', 'default.rgw.control', 'volumes']
Dec  3 19:08:14 compute-0 ceph-mgr[193091]: [balancer INFO root] prepared 0/10 changes
Dec  3 19:08:14 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2080: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:08:14 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:08:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 19:08:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 19:08:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 19:08:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 19:08:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 19:08:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 19:08:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 19:08:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 19:08:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 19:08:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 19:08:15 compute-0 podman[462479]: 2025-12-03 19:08:15.953679456 +0000 UTC m=+0.109030772 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  3 19:08:16 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2081: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:08:17 compute-0 nova_compute[348325]: 2025-12-03 19:08:17.016 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:08:18 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2082: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:08:18 compute-0 nova_compute[348325]: 2025-12-03 19:08:18.711 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:08:19 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:08:20 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2083: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:08:21 compute-0 podman[462505]: 2025-12-03 19:08:21.032342486 +0000 UTC m=+0.179392681 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Dec  3 19:08:21 compute-0 podman[462504]: 2025-12-03 19:08:21.043694713 +0000 UTC m=+0.193340030 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec  3 19:08:22 compute-0 nova_compute[348325]: 2025-12-03 19:08:22.020 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:08:22 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2084: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:08:22 compute-0 nova_compute[348325]: 2025-12-03 19:08:22.482 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:08:22 compute-0 nova_compute[348325]: 2025-12-03 19:08:22.485 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:08:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:08:23.365 286999 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 19:08:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:08:23.366 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 19:08:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:08:23.367 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 19:08:23 compute-0 nova_compute[348325]: 2025-12-03 19:08:23.713 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:08:24 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2085: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:08:24 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:08:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 19:08:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:08:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 19:08:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:08:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015186096934622648 of space, bias 1.0, pg target 0.45558290803867946 quantized to 32 (current 32)
Dec  3 19:08:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:08:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:08:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:08:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:08:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:08:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Dec  3 19:08:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:08:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 19:08:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:08:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:08:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:08:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 19:08:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:08:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 19:08:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:08:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:08:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:08:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 19:08:26 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2086: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:08:27 compute-0 nova_compute[348325]: 2025-12-03 19:08:27.024 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:08:27 compute-0 nova_compute[348325]: 2025-12-03 19:08:27.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:08:27 compute-0 nova_compute[348325]: 2025-12-03 19:08:27.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:08:28 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2087: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:08:28 compute-0 nova_compute[348325]: 2025-12-03 19:08:28.718 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:08:29 compute-0 nova_compute[348325]: 2025-12-03 19:08:29.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:08:29 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:08:29 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #99. Immutable memtables: 0.
Dec  3 19:08:29 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:08:29.702514) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 19:08:29 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 57] Flushing memtable with next log file: 99
Dec  3 19:08:29 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764788909702638, "job": 57, "event": "flush_started", "num_memtables": 1, "num_entries": 1299, "num_deletes": 256, "total_data_size": 2034781, "memory_usage": 2059616, "flush_reason": "Manual Compaction"}
Dec  3 19:08:29 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 57] Level-0 flush table #100: started
Dec  3 19:08:29 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764788909722297, "cf_name": "default", "job": 57, "event": "table_file_creation", "file_number": 100, "file_size": 1982757, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 41721, "largest_seqno": 43019, "table_properties": {"data_size": 1976576, "index_size": 3448, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 12731, "raw_average_key_size": 19, "raw_value_size": 1964195, "raw_average_value_size": 3003, "num_data_blocks": 155, "num_entries": 654, "num_filter_entries": 654, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764788779, "oldest_key_time": 1764788779, "file_creation_time": 1764788909, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a1ac3b74-8599-4a51-8b4c-6fd35a134427", "db_session_id": "TYOLZSJOOVNJYKF8Y1CE", "orig_file_number": 100, "seqno_to_time_mapping": "N/A"}}
Dec  3 19:08:29 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 57] Flush lasted 19884 microseconds, and 11027 cpu microseconds.
Dec  3 19:08:29 compute-0 ceph-mon[192802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 19:08:29 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:08:29.722398) [db/flush_job.cc:967] [default] [JOB 57] Level-0 flush table #100: 1982757 bytes OK
Dec  3 19:08:29 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:08:29.722424) [db/memtable_list.cc:519] [default] Level-0 commit table #100 started
Dec  3 19:08:29 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:08:29.725654) [db/memtable_list.cc:722] [default] Level-0 commit table #100: memtable #1 done
Dec  3 19:08:29 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:08:29.725678) EVENT_LOG_v1 {"time_micros": 1764788909725671, "job": 57, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 19:08:29 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:08:29.725700) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 19:08:29 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 57] Try to delete WAL files size 2028912, prev total WAL file size 2028912, number of live WAL files 2.
Dec  3 19:08:29 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000096.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 19:08:29 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:08:29.727586) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031353036' seq:72057594037927935, type:22 .. '6C6F676D0031373538' seq:0, type:0; will stop at (end)
Dec  3 19:08:29 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 58] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 19:08:29 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 57 Base level 0, inputs: [100(1936KB)], [98(7785KB)]
Dec  3 19:08:29 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764788909727769, "job": 58, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [100], "files_L6": [98], "score": -1, "input_data_size": 9955177, "oldest_snapshot_seqno": -1}
Dec  3 19:08:29 compute-0 podman[158200]: time="2025-12-03T19:08:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 19:08:29 compute-0 podman[158200]: @ - - [03/Dec/2025:19:08:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 19:08:29 compute-0 podman[158200]: @ - - [03/Dec/2025:19:08:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8657 "" "Go-http-client/1.1"
Dec  3 19:08:29 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 58] Generated table #101: 5925 keys, 9850228 bytes, temperature: kUnknown
Dec  3 19:08:29 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764788909825136, "cf_name": "default", "job": 58, "event": "table_file_creation", "file_number": 101, "file_size": 9850228, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9809905, "index_size": 24405, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14853, "raw_key_size": 153761, "raw_average_key_size": 25, "raw_value_size": 9702067, "raw_average_value_size": 1637, "num_data_blocks": 980, "num_entries": 5925, "num_filter_entries": 5925, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764784942, "oldest_key_time": 0, "file_creation_time": 1764788909, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a1ac3b74-8599-4a51-8b4c-6fd35a134427", "db_session_id": "TYOLZSJOOVNJYKF8Y1CE", "orig_file_number": 101, "seqno_to_time_mapping": "N/A"}}
Dec  3 19:08:29 compute-0 ceph-mon[192802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 19:08:29 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:08:29.825931) [db/compaction/compaction_job.cc:1663] [default] [JOB 58] Compacted 1@0 + 1@6 files to L6 => 9850228 bytes
Dec  3 19:08:29 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:08:29.828635) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 102.0 rd, 101.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 7.6 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(10.0) write-amplify(5.0) OK, records in: 6449, records dropped: 524 output_compression: NoCompression
Dec  3 19:08:29 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:08:29.828665) EVENT_LOG_v1 {"time_micros": 1764788909828651, "job": 58, "event": "compaction_finished", "compaction_time_micros": 97574, "compaction_time_cpu_micros": 52722, "output_level": 6, "num_output_files": 1, "total_output_size": 9850228, "num_input_records": 6449, "num_output_records": 5925, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 19:08:29 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000100.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 19:08:29 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764788909829893, "job": 58, "event": "table_file_deletion", "file_number": 100}
Dec  3 19:08:29 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000098.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 19:08:29 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764788909833270, "job": 58, "event": "table_file_deletion", "file_number": 98}
Dec  3 19:08:29 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:08:29.727136) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:08:29 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:08:29.833543) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:08:29 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:08:29.833550) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:08:29 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:08:29.833552) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:08:29 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:08:29.833555) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:08:29 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:08:29.833557) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:08:30 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2088: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:08:31 compute-0 openstack_network_exporter[365222]: ERROR   19:08:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:08:31 compute-0 openstack_network_exporter[365222]: ERROR   19:08:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:08:31 compute-0 openstack_network_exporter[365222]: ERROR   19:08:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 19:08:31 compute-0 openstack_network_exporter[365222]: ERROR   19:08:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 19:08:31 compute-0 openstack_network_exporter[365222]: ERROR   19:08:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 19:08:31 compute-0 nova_compute[348325]: 2025-12-03 19:08:31.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:08:31 compute-0 podman[462547]: 2025-12-03 19:08:31.92004824 +0000 UTC m=+0.079386009 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  3 19:08:31 compute-0 podman[462548]: 2025-12-03 19:08:31.923258218 +0000 UTC m=+0.076023446 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  3 19:08:31 compute-0 podman[462550]: 2025-12-03 19:08:31.933313054 +0000 UTC m=+0.079168003 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, config_id=edpm, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, version=9.6, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal)
Dec  3 19:08:32 compute-0 nova_compute[348325]: 2025-12-03 19:08:32.028 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:08:32 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2089: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:08:33 compute-0 nova_compute[348325]: 2025-12-03 19:08:33.718 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:08:34 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2090: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:08:34 compute-0 nova_compute[348325]: 2025-12-03 19:08:34.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:08:34 compute-0 nova_compute[348325]: 2025-12-03 19:08:34.487 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  3 19:08:34 compute-0 nova_compute[348325]: 2025-12-03 19:08:34.487 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  3 19:08:34 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:08:34 compute-0 nova_compute[348325]: 2025-12-03 19:08:34.794 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "refresh_cache-a4fc45c7-44e4-4b50-a3e0-98de13268f88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  3 19:08:34 compute-0 nova_compute[348325]: 2025-12-03 19:08:34.797 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquired lock "refresh_cache-a4fc45c7-44e4-4b50-a3e0-98de13268f88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  3 19:08:34 compute-0 nova_compute[348325]: 2025-12-03 19:08:34.798 348329 DEBUG nova.network.neutron [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec  3 19:08:34 compute-0 nova_compute[348325]: 2025-12-03 19:08:34.799 348329 DEBUG nova.objects.instance [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lazy-loading 'info_cache' on Instance uuid a4fc45c7-44e4-4b50-a3e0-98de13268f88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  3 19:08:36 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2091: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:08:36 compute-0 nova_compute[348325]: 2025-12-03 19:08:36.427 348329 DEBUG nova.network.neutron [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Updating instance_info_cache with network_info: [{"id": "cf729fa8-9549-4bf2-9858-7e8de773e1bc", "address": "fa:16:3e:8d:91:4c", "network": {"id": "04e258c0-609e-4010-a306-af20506c3a9d", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.160", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d29cef7b24ee4d30b2b3f5027ec6aafb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf729fa8-95", "ovs_interfaceid": "cf729fa8-9549-4bf2-9858-7e8de773e1bc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  3 19:08:36 compute-0 nova_compute[348325]: 2025-12-03 19:08:36.442 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Releasing lock "refresh_cache-a4fc45c7-44e4-4b50-a3e0-98de13268f88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  3 19:08:36 compute-0 nova_compute[348325]: 2025-12-03 19:08:36.442 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec  3 19:08:37 compute-0 nova_compute[348325]: 2025-12-03 19:08:37.031 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:08:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 19:08:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2886796370' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 19:08:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 19:08:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2886796370' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 19:08:37 compute-0 podman[462609]: 2025-12-03 19:08:37.979651382 +0000 UTC m=+0.128391445 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 19:08:37 compute-0 podman[462608]: 2025-12-03 19:08:37.986189851 +0000 UTC m=+0.138376269 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, managed_by=edpm_ansible, architecture=x86_64, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, config_id=edpm)
Dec  3 19:08:37 compute-0 podman[462610]: 2025-12-03 19:08:37.986975091 +0000 UTC m=+0.128536269 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Dec  3 19:08:38 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2092: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:08:38 compute-0 nova_compute[348325]: 2025-12-03 19:08:38.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:08:38 compute-0 nova_compute[348325]: 2025-12-03 19:08:38.487 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  3 19:08:38 compute-0 nova_compute[348325]: 2025-12-03 19:08:38.724 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:08:39 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:08:40 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2093: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:08:42 compute-0 nova_compute[348325]: 2025-12-03 19:08:42.036 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:08:42 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2094: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:08:43 compute-0 nova_compute[348325]: 2025-12-03 19:08:43.725 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:08:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:08:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:08:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:08:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:08:44 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:08:44 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:08:44 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2095: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:08:44 compute-0 nova_compute[348325]: 2025-12-03 19:08:44.480 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:08:44 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:08:45 compute-0 nova_compute[348325]: 2025-12-03 19:08:45.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:08:45 compute-0 nova_compute[348325]: 2025-12-03 19:08:45.520 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 19:08:45 compute-0 nova_compute[348325]: 2025-12-03 19:08:45.525 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 19:08:45 compute-0 nova_compute[348325]: 2025-12-03 19:08:45.525 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 19:08:45 compute-0 nova_compute[348325]: 2025-12-03 19:08:45.526 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  3 19:08:45 compute-0 nova_compute[348325]: 2025-12-03 19:08:45.527 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 19:08:46 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 19:08:46 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/310611249' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 19:08:46 compute-0 nova_compute[348325]: 2025-12-03 19:08:46.047 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 19:08:46 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2096: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:08:46 compute-0 nova_compute[348325]: 2025-12-03 19:08:46.188 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 19:08:46 compute-0 nova_compute[348325]: 2025-12-03 19:08:46.189 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 19:08:46 compute-0 nova_compute[348325]: 2025-12-03 19:08:46.200 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 19:08:46 compute-0 nova_compute[348325]: 2025-12-03 19:08:46.201 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 19:08:46 compute-0 podman[462685]: 2025-12-03 19:08:46.641543697 +0000 UTC m=+0.118106623 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 19:08:46 compute-0 nova_compute[348325]: 2025-12-03 19:08:46.675 348329 WARNING nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  3 19:08:46 compute-0 nova_compute[348325]: 2025-12-03 19:08:46.677 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3502MB free_disk=59.89718246459961GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  3 19:08:46 compute-0 nova_compute[348325]: 2025-12-03 19:08:46.677 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 19:08:46 compute-0 nova_compute[348325]: 2025-12-03 19:08:46.677 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 19:08:46 compute-0 nova_compute[348325]: 2025-12-03 19:08:46.782 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance a4fc45c7-44e4-4b50-a3e0-98de13268f88 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  3 19:08:46 compute-0 nova_compute[348325]: 2025-12-03 19:08:46.782 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance a364994c-8442-4a4c-bd6b-f3a2d31e4483 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  3 19:08:46 compute-0 nova_compute[348325]: 2025-12-03 19:08:46.783 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  3 19:08:46 compute-0 nova_compute[348325]: 2025-12-03 19:08:46.783 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  3 19:08:46 compute-0 nova_compute[348325]: 2025-12-03 19:08:46.842 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 19:08:47 compute-0 nova_compute[348325]: 2025-12-03 19:08:47.040 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:08:47 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 19:08:47 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1288698004' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 19:08:47 compute-0 nova_compute[348325]: 2025-12-03 19:08:47.294 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 19:08:47 compute-0 nova_compute[348325]: 2025-12-03 19:08:47.305 348329 DEBUG nova.compute.provider_tree [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  3 19:08:47 compute-0 nova_compute[348325]: 2025-12-03 19:08:47.330 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
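Placement treats each inventory record above as capacity = (total - reserved) * allocation_ratio, so even an unchanged inventory fixes exactly what the scheduler may pack onto this host. A worked sketch with the logged values (min_unit/max_unit/step_size omitted):

    # Capacity formula placement applies per resource class.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2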
Dec  3 19:08:47 compute-0 nova_compute[348325]: 2025-12-03 19:08:47.332 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  3 19:08:47 compute-0 nova_compute[348325]: 2025-12-03 19:08:47.333 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.655s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
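The Acquiring / acquired (waited 0.000s) / released (held 0.655s) trio above is the standard oslo.concurrency pattern: one writer at a time per lock name, with wait and hold times logged by the inner wrapper. A minimal sketch of the same idiom; only the lock name is taken from the log, the body is a placeholder:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def update_available_resource():
        # Critical section: read hypervisor stats, push inventory to placement.
        ...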
Dec  3 19:08:48 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2097: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:08:48 compute-0 nova_compute[348325]: 2025-12-03 19:08:48.725 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:08:49 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Dec  3 19:08:49 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  3 19:08:49 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 19:08:49 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 19:08:49 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 19:08:49 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 19:08:49 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 19:08:49 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:08:49 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 220f3e7f-4d6e-46be-aa22-182f409299a6 does not exist
Dec  3 19:08:49 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev c4a5a937-c05b-4106-89e7-edd40d2e5c5e does not exist
Dec  3 19:08:49 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev bbd0b8db-0db6-4ce1-ad5f-0827ab0f7156 does not exist
Dec  3 19:08:49 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 19:08:49 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 19:08:49 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 19:08:49 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 19:08:49 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 19:08:49 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 19:08:49 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  3 19:08:49 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 19:08:49 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:08:49 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
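Each handle_command/dispatch pair above is a monitor command arriving as a JSON document keyed by "prefix" ("config rm", "config generate-minimal-conf", "auth get", ...), here issued by the cephadm mgr module. The same interface is reachable from the rados Python binding; a sketch, assuming a readable ceph.conf and a matching keyring:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    # Same wire format as the mon_command({...}) entries in the log.
    ret, outbuf, outs = cluster.mon_command(
        json.dumps({"prefix": "config generate-minimal-conf"}), b"")
    if ret == 0:
        print(outbuf.decode())
    cluster.shutdown()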
Dec  3 19:08:49 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:08:50 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2098: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:08:50 compute-0 podman[463004]: 2025-12-03 19:08:50.114499924 +0000 UTC m=+0.088863460 container create fabc5abc265ad6d69ef60cb58a16732ce0b9fa2fcacbc6485f86bf9984ccfa03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_mendeleev, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef)
Dec  3 19:08:50 compute-0 systemd[1]: Started libpod-conmon-fabc5abc265ad6d69ef60cb58a16732ce0b9fa2fcacbc6485f86bf9984ccfa03.scope.
Dec  3 19:08:50 compute-0 podman[463004]: 2025-12-03 19:08:50.09302205 +0000 UTC m=+0.067385616 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:08:50 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:08:50 compute-0 podman[463004]: 2025-12-03 19:08:50.259426832 +0000 UTC m=+0.233790448 container init fabc5abc265ad6d69ef60cb58a16732ce0b9fa2fcacbc6485f86bf9984ccfa03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_mendeleev, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 19:08:50 compute-0 podman[463004]: 2025-12-03 19:08:50.270504462 +0000 UTC m=+0.244868008 container start fabc5abc265ad6d69ef60cb58a16732ce0b9fa2fcacbc6485f86bf9984ccfa03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_mendeleev, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:08:50 compute-0 podman[463004]: 2025-12-03 19:08:50.275857623 +0000 UTC m=+0.250221249 container attach fabc5abc265ad6d69ef60cb58a16732ce0b9fa2fcacbc6485f86bf9984ccfa03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  3 19:08:50 compute-0 gallant_mendeleev[463020]: 167 167
Dec  3 19:08:50 compute-0 systemd[1]: libpod-fabc5abc265ad6d69ef60cb58a16732ce0b9fa2fcacbc6485f86bf9984ccfa03.scope: Deactivated successfully.
Dec  3 19:08:50 compute-0 podman[463004]: 2025-12-03 19:08:50.28230367 +0000 UTC m=+0.256667206 container died fabc5abc265ad6d69ef60cb58a16732ce0b9fa2fcacbc6485f86bf9984ccfa03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec  3 19:08:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca3895e47ce9eadab760d34e6f8dfa0216cf4567027028be0ba5369d8a124977-merged.mount: Deactivated successfully.
Dec  3 19:08:50 compute-0 podman[463004]: 2025-12-03 19:08:50.339273421 +0000 UTC m=+0.313636947 container remove fabc5abc265ad6d69ef60cb58a16732ce0b9fa2fcacbc6485f86bf9984ccfa03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec  3 19:08:50 compute-0 systemd[1]: libpod-conmon-fabc5abc265ad6d69ef60cb58a16732ce0b9fa2fcacbc6485f86bf9984ccfa03.scope: Deactivated successfully.
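The whole gallant_mendeleev arc above (create, init, start, attach, died, remove) spans roughly a quarter second: cephadm launching a one-shot command in the ceph image and discarding the container. The "167 167" on its stdout is most likely the uid/gid probe cephadm runs against the image (167 being the ceph user in these images). A sketch of an equivalent one-shot run; the stat invocation is an assumption, since the log records only the output:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # --rm reproduces the create/start/died/remove lifecycle seen above.
    out = subprocess.check_output(
        ["podman", "run", "--rm", "--entrypoint", "stat",
         IMAGE, "-c", "%u %g", "/var/lib/ceph"])
    print(out.decode().strip())  # e.g. "167 167"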
Dec  3 19:08:50 compute-0 podman[463043]: 2025-12-03 19:08:50.558493242 +0000 UTC m=+0.055612819 container create 9773ffced692ed5fb34c7e1d4b665c68baec2fbae7f8c56ccbeea0da3b67db41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec  3 19:08:50 compute-0 systemd[1]: Started libpod-conmon-9773ffced692ed5fb34c7e1d4b665c68baec2fbae7f8c56ccbeea0da3b67db41.scope.
Dec  3 19:08:50 compute-0 podman[463043]: 2025-12-03 19:08:50.536836823 +0000 UTC m=+0.033956430 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:08:50 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:08:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da160d3bae6b3abf62ac618399637652768543257225c11af28321b1c7a97bac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 19:08:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da160d3bae6b3abf62ac618399637652768543257225c11af28321b1c7a97bac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 19:08:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da160d3bae6b3abf62ac618399637652768543257225c11af28321b1c7a97bac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 19:08:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da160d3bae6b3abf62ac618399637652768543257225c11af28321b1c7a97bac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 19:08:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da160d3bae6b3abf62ac618399637652768543257225c11af28321b1c7a97bac/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 19:08:50 compute-0 podman[463043]: 2025-12-03 19:08:50.675017836 +0000 UTC m=+0.172137423 container init 9773ffced692ed5fb34c7e1d4b665c68baec2fbae7f8c56ccbeea0da3b67db41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec  3 19:08:50 compute-0 podman[463043]: 2025-12-03 19:08:50.692708996 +0000 UTC m=+0.189828563 container start 9773ffced692ed5fb34c7e1d4b665c68baec2fbae7f8c56ccbeea0da3b67db41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec  3 19:08:50 compute-0 podman[463043]: 2025-12-03 19:08:50.697543855 +0000 UTC m=+0.194663442 container attach 9773ffced692ed5fb34c7e1d4b665c68baec2fbae7f8c56ccbeea0da3b67db41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec  3 19:08:51 compute-0 gifted_aryabhata[463060]: --> passed data devices: 0 physical, 3 LVM
Dec  3 19:08:51 compute-0 gifted_aryabhata[463060]: --> relative data size: 1.0
Dec  3 19:08:51 compute-0 gifted_aryabhata[463060]: --> All data devices are unavailable
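The three "-->" lines are the ceph-volume batch planner running inside gifted_aryabhata: it was handed 0 physical and 3 LVM data devices, and since all three LVs already carry OSDs it reports them unavailable and creates nothing. The same plan can be previewed without applying it; a sketch using report mode, with the LV paths taken from the lvm list output further down:

    import subprocess

    # --report only prints the plan; nothing is created or wiped.
    devices = ["/dev/ceph_vg0/ceph_lv0",
               "/dev/ceph_vg1/ceph_lv1",
               "/dev/ceph_vg2/ceph_lv2"]
    print(subprocess.check_output(
        ["ceph-volume", "lvm", "batch", "--report", "--format", "json",
         *devices]).decode())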
Dec  3 19:08:51 compute-0 podman[463043]: 2025-12-03 19:08:51.921863737 +0000 UTC m=+1.418983324 container died 9773ffced692ed5fb34c7e1d4b665c68baec2fbae7f8c56ccbeea0da3b67db41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 19:08:51 compute-0 systemd[1]: libpod-9773ffced692ed5fb34c7e1d4b665c68baec2fbae7f8c56ccbeea0da3b67db41.scope: Deactivated successfully.
Dec  3 19:08:51 compute-0 systemd[1]: libpod-9773ffced692ed5fb34c7e1d4b665c68baec2fbae7f8c56ccbeea0da3b67db41.scope: Consumed 1.139s CPU time.
Dec  3 19:08:51 compute-0 podman[463086]: 2025-12-03 19:08:51.941257541 +0000 UTC m=+0.096586279 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Dec  3 19:08:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-da160d3bae6b3abf62ac618399637652768543257225c11af28321b1c7a97bac-merged.mount: Deactivated successfully.
Dec  3 19:08:52 compute-0 podman[463043]: 2025-12-03 19:08:52.010662565 +0000 UTC m=+1.507782132 container remove 9773ffced692ed5fb34c7e1d4b665c68baec2fbae7f8c56ccbeea0da3b67db41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_aryabhata, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec  3 19:08:52 compute-0 podman[463085]: 2025-12-03 19:08:52.034869626 +0000 UTC m=+0.188957633 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
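The two container health_status entries (ceilometer_agent_compute and ovn_controller, both healthy with health_failing_streak=0) are podman's healthcheck timers running the tests named in each container's config_data ("/openstack/healthcheck ..."). The same check can be triggered on demand; a sketch:

    import subprocess

    # Exit code 0 means the container's configured healthcheck passed.
    for name in ("ceilometer_agent_compute", "ovn_controller"):
        rc = subprocess.call(["podman", "healthcheck", "run", name])
        print(name, "healthy" if rc == 0 else f"unhealthy (rc={rc})")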
Dec  3 19:08:52 compute-0 systemd[1]: libpod-conmon-9773ffced692ed5fb34c7e1d4b665c68baec2fbae7f8c56ccbeea0da3b67db41.scope: Deactivated successfully.
Dec  3 19:08:52 compute-0 nova_compute[348325]: 2025-12-03 19:08:52.042 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:08:52 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2099: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:08:52 compute-0 podman[463281]: 2025-12-03 19:08:52.828689941 +0000 UTC m=+0.065798407 container create 62daf13664472ed267b365c033cc4470c31cbeadb35b7dda99de58929f6ea8e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec  3 19:08:52 compute-0 systemd[1]: Started libpod-conmon-62daf13664472ed267b365c033cc4470c31cbeadb35b7dda99de58929f6ea8e8.scope.
Dec  3 19:08:52 compute-0 podman[463281]: 2025-12-03 19:08:52.808040057 +0000 UTC m=+0.045148573 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:08:52 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:08:52 compute-0 podman[463281]: 2025-12-03 19:08:52.955548428 +0000 UTC m=+0.192656974 container init 62daf13664472ed267b365c033cc4470c31cbeadb35b7dda99de58929f6ea8e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_gould, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec  3 19:08:52 compute-0 podman[463281]: 2025-12-03 19:08:52.968780111 +0000 UTC m=+0.205888587 container start 62daf13664472ed267b365c033cc4470c31cbeadb35b7dda99de58929f6ea8e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  3 19:08:52 compute-0 podman[463281]: 2025-12-03 19:08:52.974148862 +0000 UTC m=+0.211257368 container attach 62daf13664472ed267b365c033cc4470c31cbeadb35b7dda99de58929f6ea8e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_gould, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec  3 19:08:52 compute-0 intelligent_gould[463297]: 167 167
Dec  3 19:08:52 compute-0 systemd[1]: libpod-62daf13664472ed267b365c033cc4470c31cbeadb35b7dda99de58929f6ea8e8.scope: Deactivated successfully.
Dec  3 19:08:52 compute-0 podman[463281]: 2025-12-03 19:08:52.983145821 +0000 UTC m=+0.220254327 container died 62daf13664472ed267b365c033cc4470c31cbeadb35b7dda99de58929f6ea8e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:08:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-b6a5c6834b69fc4a4cf00dcfc3613363c3afcc1b55cf59a2ce8aaec5bb630eab-merged.mount: Deactivated successfully.
Dec  3 19:08:53 compute-0 podman[463281]: 2025-12-03 19:08:53.072688127 +0000 UTC m=+0.309796633 container remove 62daf13664472ed267b365c033cc4470c31cbeadb35b7dda99de58929f6ea8e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_gould, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  3 19:08:53 compute-0 systemd[1]: libpod-conmon-62daf13664472ed267b365c033cc4470c31cbeadb35b7dda99de58929f6ea8e8.scope: Deactivated successfully.
Dec  3 19:08:53 compute-0 podman[463321]: 2025-12-03 19:08:53.363807122 +0000 UTC m=+0.091110715 container create 291ae16c68bc53863ea22154167eec02cc3d7a8b1b1b9fcc9f3b054e50b3d491 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_shamir, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec  3 19:08:53 compute-0 podman[463321]: 2025-12-03 19:08:53.300959438 +0000 UTC m=+0.028263051 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:08:53 compute-0 systemd[1]: Started libpod-conmon-291ae16c68bc53863ea22154167eec02cc3d7a8b1b1b9fcc9f3b054e50b3d491.scope.
Dec  3 19:08:53 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:08:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87e4fa98ab6dab765c29ba0acd2d89ba2327e150ec84a85283de2fe88a8dbbbb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 19:08:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87e4fa98ab6dab765c29ba0acd2d89ba2327e150ec84a85283de2fe88a8dbbbb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 19:08:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87e4fa98ab6dab765c29ba0acd2d89ba2327e150ec84a85283de2fe88a8dbbbb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 19:08:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87e4fa98ab6dab765c29ba0acd2d89ba2327e150ec84a85283de2fe88a8dbbbb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 19:08:53 compute-0 podman[463321]: 2025-12-03 19:08:53.474648267 +0000 UTC m=+0.201951880 container init 291ae16c68bc53863ea22154167eec02cc3d7a8b1b1b9fcc9f3b054e50b3d491 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_shamir, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec  3 19:08:53 compute-0 podman[463321]: 2025-12-03 19:08:53.499941585 +0000 UTC m=+0.227245178 container start 291ae16c68bc53863ea22154167eec02cc3d7a8b1b1b9fcc9f3b054e50b3d491 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_shamir, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  3 19:08:53 compute-0 podman[463321]: 2025-12-03 19:08:53.50385756 +0000 UTC m=+0.231161173 container attach 291ae16c68bc53863ea22154167eec02cc3d7a8b1b1b9fcc9f3b054e50b3d491 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec  3 19:08:53 compute-0 nova_compute[348325]: 2025-12-03 19:08:53.728 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:08:54 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2100: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:08:54 compute-0 priceless_shamir[463337]: {
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:    "0": [
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:        {
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:            "devices": [
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:                "/dev/loop3"
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:            ],
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:            "lv_name": "ceph_lv0",
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:            "lv_size": "21470642176",
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=973fbbc8-5aff-4a53-bee8-42e5a6788dd6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:            "lv_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:            "name": "ceph_lv0",
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:            "tags": {
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:                "ceph.block_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:                "ceph.cephx_lockbox_secret": "",
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:                "ceph.cluster_name": "ceph",
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:                "ceph.crush_device_class": "",
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:                "ceph.encrypted": "0",
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:                "ceph.osd_fsid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:                "ceph.osd_id": "0",
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:                "ceph.type": "block",
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:                "ceph.vdo": "0"
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:            },
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:            "type": "block",
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:            "vg_name": "ceph_vg0"
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:        }
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:    ],
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:    "1": [
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:        {
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:            "devices": [
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:                "/dev/loop4"
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:            ],
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:            "lv_name": "ceph_lv1",
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:            "lv_size": "21470642176",
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1e2b0083-5293-47cb-a3d1-bc27cedc4ede,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:            "lv_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:            "name": "ceph_lv1",
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:            "tags": {
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:                "ceph.block_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:                "ceph.cephx_lockbox_secret": "",
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:                "ceph.cluster_name": "ceph",
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:                "ceph.crush_device_class": "",
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:                "ceph.encrypted": "0",
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:                "ceph.osd_fsid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:                "ceph.osd_id": "1",
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:                "ceph.type": "block",
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:                "ceph.vdo": "0"
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:            },
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:            "type": "block",
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:            "vg_name": "ceph_vg1"
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:        }
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:    ],
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:    "2": [
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:        {
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:            "devices": [
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:                "/dev/loop5"
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:            ],
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:            "lv_name": "ceph_lv2",
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:            "lv_size": "21470642176",
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2abec9de-afba-437e-9a17-384a1dd8cd50,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:            "lv_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:            "name": "ceph_lv2",
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:            "tags": {
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:                "ceph.block_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:                "ceph.cephx_lockbox_secret": "",
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:                "ceph.cluster_name": "ceph",
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:                "ceph.crush_device_class": "",
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:                "ceph.encrypted": "0",
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:                "ceph.osd_fsid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:                "ceph.osd_id": "2",
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:                "ceph.type": "block",
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:                "ceph.vdo": "0"
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:            },
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:            "type": "block",
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:            "vg_name": "ceph_vg2"
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:        }
Dec  3 19:08:54 compute-0 priceless_shamir[463337]:    ]
Dec  3 19:08:54 compute-0 priceless_shamir[463337]: }
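The JSON document relayed line by line through priceless_shamir's stdout is ceph-volume lvm list --format json output: top-level keys are OSD ids (0, 1, 2), each mapping to its backing LV with the ceph.* tags (cluster fsid, osd_fsid, device class, and so on). A sketch of pulling the id-to-device mapping back out, assuming the output has been captured to a file:

    import json

    # e.g. podman ... ceph-volume lvm list --format json > lvm_list.json
    with open("lvm_list.json") as f:
        osds = json.load(f)

    for osd_id, lvs in sorted(osds.items()):
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"on {lv['devices'][0]} (osd_fsid {lv['tags']['ceph.osd_fsid']})")
    # osd.0: /dev/ceph_vg0/ceph_lv0 on /dev/loop3 (osd_fsid 973fbbc8-...)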
Dec  3 19:08:54 compute-0 systemd[1]: libpod-291ae16c68bc53863ea22154167eec02cc3d7a8b1b1b9fcc9f3b054e50b3d491.scope: Deactivated successfully.
Dec  3 19:08:54 compute-0 podman[463321]: 2025-12-03 19:08:54.358851679 +0000 UTC m=+1.086155282 container died 291ae16c68bc53863ea22154167eec02cc3d7a8b1b1b9fcc9f3b054e50b3d491 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_shamir, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 19:08:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-87e4fa98ab6dab765c29ba0acd2d89ba2327e150ec84a85283de2fe88a8dbbbb-merged.mount: Deactivated successfully.
Dec  3 19:08:54 compute-0 podman[463321]: 2025-12-03 19:08:54.465751438 +0000 UTC m=+1.193055031 container remove 291ae16c68bc53863ea22154167eec02cc3d7a8b1b1b9fcc9f3b054e50b3d491 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_shamir, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 19:08:54 compute-0 systemd[1]: libpod-conmon-291ae16c68bc53863ea22154167eec02cc3d7a8b1b1b9fcc9f3b054e50b3d491.scope: Deactivated successfully.
Dec  3 19:08:54 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:08:55 compute-0 podman[463497]: 2025-12-03 19:08:55.464437244 +0000 UTC m=+0.060483227 container create b9c98397e48d0013c305aec6373991737c569394518363653322dc58768bbe32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_benz, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 19:08:55 compute-0 systemd[1]: Started libpod-conmon-b9c98397e48d0013c305aec6373991737c569394518363653322dc58768bbe32.scope.
Dec  3 19:08:55 compute-0 podman[463497]: 2025-12-03 19:08:55.442946079 +0000 UTC m=+0.038992082 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:08:55 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:08:55 compute-0 podman[463497]: 2025-12-03 19:08:55.563946732 +0000 UTC m=+0.159992745 container init b9c98397e48d0013c305aec6373991737c569394518363653322dc58768bbe32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_benz, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:08:55 compute-0 podman[463497]: 2025-12-03 19:08:55.574221074 +0000 UTC m=+0.170267047 container start b9c98397e48d0013c305aec6373991737c569394518363653322dc58768bbe32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  3 19:08:55 compute-0 podman[463497]: 2025-12-03 19:08:55.579114613 +0000 UTC m=+0.175160606 container attach b9c98397e48d0013c305aec6373991737c569394518363653322dc58768bbe32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_benz, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 19:08:55 compute-0 admiring_benz[463513]: 167 167
Dec  3 19:08:55 compute-0 systemd[1]: libpod-b9c98397e48d0013c305aec6373991737c569394518363653322dc58768bbe32.scope: Deactivated successfully.
Dec  3 19:08:55 compute-0 podman[463497]: 2025-12-03 19:08:55.582578588 +0000 UTC m=+0.178624561 container died b9c98397e48d0013c305aec6373991737c569394518363653322dc58768bbe32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_benz, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 19:08:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-abb114aef5f22964e034f3238667b7cf80a559a35a4019b17fc0fd2d31037445-merged.mount: Deactivated successfully.
Dec  3 19:08:55 compute-0 podman[463497]: 2025-12-03 19:08:55.643837833 +0000 UTC m=+0.239883806 container remove b9c98397e48d0013c305aec6373991737c569394518363653322dc58768bbe32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_benz, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 19:08:55 compute-0 systemd[1]: libpod-conmon-b9c98397e48d0013c305aec6373991737c569394518363653322dc58768bbe32.scope: Deactivated successfully.
Dec  3 19:08:55 compute-0 podman[463535]: 2025-12-03 19:08:55.811864714 +0000 UTC m=+0.024508259 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:08:55 compute-0 podman[463535]: 2025-12-03 19:08:55.849373089 +0000 UTC m=+0.062016664 container create cbf30f09cbe7e192b8a0b2d52ea89c4fd12514ae6b0b37bf9bec466348559c62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_lichterman, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:08:55 compute-0 systemd[1]: Started libpod-conmon-cbf30f09cbe7e192b8a0b2d52ea89c4fd12514ae6b0b37bf9bec466348559c62.scope.
Dec  3 19:08:55 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:08:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d7b45ba852d51a741364c7b96812d1f8895d0dc40f216763b59db04a59dd064/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 19:08:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d7b45ba852d51a741364c7b96812d1f8895d0dc40f216763b59db04a59dd064/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 19:08:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d7b45ba852d51a741364c7b96812d1f8895d0dc40f216763b59db04a59dd064/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 19:08:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d7b45ba852d51a741364c7b96812d1f8895d0dc40f216763b59db04a59dd064/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
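For reference, the (0x7fffffff) ceiling the kernel prints for these XFS mounts (evidently created without the bigtime feature) is just the largest signed 32-bit epoch second; a stdlib one-liner confirms it lands on 19 January 2038:

    # Convert the kernel's 0x7fffffff timestamp ceiling to a calendar date.
    from datetime import datetime, timezone

    limit = 0x7FFFFFFF          # 2147483647, the signed 32-bit time_t maximum
    print(datetime.fromtimestamp(limit, timezone.utc))
    # -> 2038-01-19 03:14:07+00:00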
Dec  3 19:08:56 compute-0 podman[463535]: 2025-12-03 19:08:56.003691896 +0000 UTC m=+0.216335471 container init cbf30f09cbe7e192b8a0b2d52ea89c4fd12514ae6b0b37bf9bec466348559c62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 19:08:56 compute-0 podman[463535]: 2025-12-03 19:08:56.023686444 +0000 UTC m=+0.236330039 container start cbf30f09cbe7e192b8a0b2d52ea89c4fd12514ae6b0b37bf9bec466348559c62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Dec  3 19:08:56 compute-0 podman[463535]: 2025-12-03 19:08:56.030747177 +0000 UTC m=+0.243390732 container attach cbf30f09cbe7e192b8a0b2d52ea89c4fd12514ae6b0b37bf9bec466348559c62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_lichterman, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  3 19:08:56 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2101: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:08:57 compute-0 nova_compute[348325]: 2025-12-03 19:08:57.045 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:08:57 compute-0 exciting_lichterman[463551]: {
Dec  3 19:08:57 compute-0 exciting_lichterman[463551]:    "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
Dec  3 19:08:57 compute-0 exciting_lichterman[463551]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:08:57 compute-0 exciting_lichterman[463551]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 19:08:57 compute-0 exciting_lichterman[463551]:        "osd_id": 1,
Dec  3 19:08:57 compute-0 exciting_lichterman[463551]:        "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 19:08:57 compute-0 exciting_lichterman[463551]:        "type": "bluestore"
Dec  3 19:08:57 compute-0 exciting_lichterman[463551]:    },
Dec  3 19:08:57 compute-0 exciting_lichterman[463551]:    "2abec9de-afba-437e-9a17-384a1dd8cd50": {
Dec  3 19:08:57 compute-0 exciting_lichterman[463551]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:08:57 compute-0 exciting_lichterman[463551]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 19:08:57 compute-0 exciting_lichterman[463551]:        "osd_id": 2,
Dec  3 19:08:57 compute-0 exciting_lichterman[463551]:        "osd_uuid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 19:08:57 compute-0 exciting_lichterman[463551]:        "type": "bluestore"
Dec  3 19:08:57 compute-0 exciting_lichterman[463551]:    },
Dec  3 19:08:57 compute-0 exciting_lichterman[463551]:    "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": {
Dec  3 19:08:57 compute-0 exciting_lichterman[463551]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:08:57 compute-0 exciting_lichterman[463551]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 19:08:57 compute-0 exciting_lichterman[463551]:        "osd_id": 0,
Dec  3 19:08:57 compute-0 exciting_lichterman[463551]:        "osd_uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 19:08:57 compute-0 exciting_lichterman[463551]:        "type": "bluestore"
Dec  3 19:08:57 compute-0 exciting_lichterman[463551]:    }
Dec  3 19:08:57 compute-0 exciting_lichterman[463551]: }
Dec  3 19:08:57 compute-0 systemd[1]: libpod-cbf30f09cbe7e192b8a0b2d52ea89c4fd12514ae6b0b37bf9bec466348559c62.scope: Deactivated successfully.
Dec  3 19:08:57 compute-0 podman[463535]: 2025-12-03 19:08:57.178714966 +0000 UTC m=+1.391358511 container died cbf30f09cbe7e192b8a0b2d52ea89c4fd12514ae6b0b37bf9bec466348559c62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_lichterman, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 19:08:57 compute-0 systemd[1]: libpod-cbf30f09cbe7e192b8a0b2d52ea89c4fd12514ae6b0b37bf9bec466348559c62.scope: Consumed 1.155s CPU time.
Dec  3 19:08:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d7b45ba852d51a741364c7b96812d1f8895d0dc40f216763b59db04a59dd064-merged.mount: Deactivated successfully.
Dec  3 19:08:57 compute-0 podman[463535]: 2025-12-03 19:08:57.309076128 +0000 UTC m=+1.521719683 container remove cbf30f09cbe7e192b8a0b2d52ea89c4fd12514ae6b0b37bf9bec466348559c62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_lichterman, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 19:08:57 compute-0 systemd[1]: libpod-conmon-cbf30f09cbe7e192b8a0b2d52ea89c4fd12514ae6b0b37bf9bec466348559c62.scope: Deactivated successfully.
Dec  3 19:08:57 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 19:08:57 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:08:57 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 19:08:57 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:08:57 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 0e96ae64-cab3-48ed-b7f7-bf87dc548b15 does not exist
Dec  3 19:08:57 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 3c3173fe-21f4-4f59-955f-5d86bc8bf01f does not exist
Dec  3 19:08:57 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:08:57 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
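The audit lines show the cephadm mgr module persisting this host's device inventory under the monitor's config-key store. A sketch for reading one of those keys back, assuming the ceph CLI and an admin keyring are available on the node (the value should be a JSON blob):

    # Read back the inventory blob cephadm just stored; the key name is taken
    # from the audit log above.
    import json
    import subprocess

    key = "mgr/cephadm/host.compute-0.devices.0"
    out = subprocess.run(["ceph", "config-key", "get", key],
                         check=True, capture_output=True, text=True).stdout
    print(json.dumps(json.loads(out), indent=2)[:500])   # peek at the payload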
Dec  3 19:08:58 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2102: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:08:58 compute-0 nova_compute[348325]: 2025-12-03 19:08:58.730 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:08:59 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:08:59 compute-0 podman[158200]: time="2025-12-03T19:08:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 19:08:59 compute-0 podman[158200]: @ - - [03/Dec/2025:19:08:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 19:08:59 compute-0 podman[158200]: @ - - [03/Dec/2025:19:08:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8651 "" "Go-http-client/1.1"
Dec  3 19:09:00 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2103: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:09:01 compute-0 openstack_network_exporter[365222]: ERROR   19:09:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:09:01 compute-0 openstack_network_exporter[365222]: ERROR   19:09:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:09:01 compute-0 openstack_network_exporter[365222]: ERROR   19:09:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 19:09:01 compute-0 openstack_network_exporter[365222]: ERROR   19:09:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 19:09:01 compute-0 openstack_network_exporter[365222]: ERROR   19:09:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
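The exporter repeats these errors on every scrape because it finds no *.ctl control sockets for ovn-northd or a local ovsdb-server; on a compute node, which typically runs ovn-controller rather than ovn-northd, that is usually expected rather than a fault. A quick check of the usual rundirs (the paths are assumptions):

    # List whatever appctl control sockets actually exist on this host.
    import glob

    for rundir in ("/var/run/ovn", "/var/run/openvswitch"):
        hits = glob.glob(f"{rundir}/*.ctl")
        print(rundir, "->", hits if hits else "no control sockets")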
Dec  3 19:09:02 compute-0 nova_compute[348325]: 2025-12-03 19:09:02.052 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:09:02 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2104: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:09:02 compute-0 podman[463647]: 2025-12-03 19:09:02.90739623 +0000 UTC m=+0.076794816 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  3 19:09:02 compute-0 podman[463648]: 2025-12-03 19:09:02.911553481 +0000 UTC m=+0.079771158 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, name=ubi9-minimal, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, io.openshift.expose-services=, release=1755695350, architecture=x86_64, distribution-scope=public, version=9.6, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, config_id=edpm)
Dec  3 19:09:02 compute-0 podman[463646]: 2025-12-03 19:09:02.953506445 +0000 UTC m=+0.116202177 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
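Each of the three health_status=healthy events above is podman running the container's configured test command (e.g. '/openstack/healthcheck node_exporter'). The same check can be triggered on demand; the container names are taken from the log:

    # Run each container's configured healthcheck once, outside the timer.
    # "podman healthcheck run" exits 0 when the test command passes.
    import subprocess

    for name in ("node_exporter", "openstack_network_exporter", "multipathd"):
        rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
        status = "healthy" if rc == 0 else f"unhealthy (rc={rc})"
        print(f"{name}: {status}")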
Dec  3 19:09:03 compute-0 nova_compute[348325]: 2025-12-03 19:09:03.733 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:09:04 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2105: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:09:04 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:09:06 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2106: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:09:07 compute-0 nova_compute[348325]: 2025-12-03 19:09:07.054 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:09:08 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2107: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:09:08 compute-0 nova_compute[348325]: 2025-12-03 19:09:08.737 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:09:08 compute-0 podman[463710]: 2025-12-03 19:09:08.951643047 +0000 UTC m=+0.100458943 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  3 19:09:08 compute-0 podman[463711]: 2025-12-03 19:09:08.965852744 +0000 UTC m=+0.115826328 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 19:09:08 compute-0 podman[463709]: 2025-12-03 19:09:08.99027127 +0000 UTC m=+0.144735174 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, distribution-scope=public, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, managed_by=edpm_ansible, vendor=Red Hat, Inc., version=9.4, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git, com.redhat.component=ubi9-container, io.buildah.version=1.29.0)
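This second batch of health events can also be followed live from podman's event stream rather than the journal; the health_status event type and the Go-template fields below are assumed to be supported by this podman (4.9) build:

    # Stream health_status events as podman emits them (Ctrl-C to stop).
    import subprocess

    proc = subprocess.Popen(
        ["podman", "events", "--filter", "event=health_status",
         "--format", "{{.Time}} {{.Name}} {{.HealthStatus}}"],
        stdout=subprocess.PIPE, text=True,
    )
    for line in proc.stdout:
        print(line.rstrip())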
Dec  3 19:09:09 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:09:10 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2108: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:09:12 compute-0 nova_compute[348325]: 2025-12-03 19:09:12.058 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:09:12 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2109: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.256 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.257 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.257 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.258 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7eff8d7fffe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.258 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.258 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff9026f920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.258 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.259 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.259 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffa10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.259 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8daba2d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.259 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.259 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff90799b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.259 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.259 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8f46ebd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.260 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.260 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.260 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.260 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.260 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.260 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.260 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.261 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.261 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.261 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.261 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.261 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7fff50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.261 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.261 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7fffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.261 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8ef7c7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c235250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.266 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a4fc45c7-44e4-4b50-a3e0-98de13268f88', 'name': 'te-0714371-asg-eacwc356yfed-wjjibmhqaqmp-wkbbxaqu3pya', 'flavor': {'id': 'a94cfbfb-a20a-4689-ac91-e7436db75880', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '29e9e995-880d-46f8-bdd0-149d4e107ea9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000c', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'd29cef7b24ee4d30b2b3f5027ec6aafb', 'user_id': '5b5e6c2a7cce4e3b96611203def80123', 'hostId': 'd87badab98086e7cd0aaefe9beb8cbc86d59712043f354b2bb8c77be', 'status': 'active', 'metadata': {'metering.server_group': 'd721c97c-b9eb-44f9-a826-1b99239b172a'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.269 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a364994c-8442-4a4c-bd6b-f3a2d31e4483', 'name': 'te-0714371-asg-eacwc356yfed-ehdrupxp3h3u-navxh3tm2qn5', 'flavor': {'id': 'a94cfbfb-a20a-4689-ac91-e7436db75880', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '29e9e995-880d-46f8-bdd0-149d4e107ea9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'd29cef7b24ee4d30b2b3f5027ec6aafb', 'user_id': '5b5e6c2a7cce4e3b96611203def80123', 'hostId': 'd87badab98086e7cd0aaefe9beb8cbc86d59712043f354b2bb8c77be', 'status': 'active', 'metadata': {'metering.server_group': 'd721c97c-b9eb-44f9-a826-1b99239b172a'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.270 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.270 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.270 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.270 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.271 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-03T19:09:13.270732) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.276 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.280 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.281 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
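The register/execute messages above describe ceilometer's dispatch pattern: every pollster from the [pollsters] source is bound to one shared ThreadPoolExecutor (a single worker here, per the "with [1] threads" line), runs its discovery method, then builds one sample per discovered instance. A compressed stand-in of that loop, with meter names and instance ids taken from the log and everything else invented for illustration:

    # Toy version of the discover -> register -> poll flow in the DEBUG lines.
    from concurrent.futures import ThreadPoolExecutor

    def discover_local_instances():
        # stand-in for the libvirt discovery logged at 19:09:13.266/13.269
        return ["a4fc45c7-44e4-4b50-a3e0-98de13268f88",
                "a364994c-8442-4a4c-bd6b-f3a2d31e4483"]

    def poll(meter, resources):
        # stand-in pollster body: one (meter, resource, volume) sample each
        return [(meter, res, 0) for res in resources]

    meters = ["network.incoming.packets.error", "network.outgoing.bytes",
              "network.outgoing.packets"]
    with ThreadPoolExecutor(max_workers=1) as executor:   # "[1] threads"
        futures = [executor.submit(poll, m, discover_local_instances())
                   for m in meters]
        for fut in futures:
            for sample in fut.result():
                print(sample)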
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.281 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7eff8d8a80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.281 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.281 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.282 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.282 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.282 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.282 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-03T19:09:13.282275) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.283 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.283 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.284 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7eff8d8a8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.284 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.284 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff9026f920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.284 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff9026f920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.285 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.285 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.285 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-03T19:09:13.285156) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.285 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.286 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.286 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7eff8d8a8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.286 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.287 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.287 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.287 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.287 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.287 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-03T19:09:13.287430) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.288 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.288 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.288 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7eff8d8a81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.289 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.289 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7eff8d7ff9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.289 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.289 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ffa10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.289 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ffa10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.290 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.290 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/network.incoming.bytes volume: 1520 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.290 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-03T19:09:13.290145) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.290 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.291 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.291 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7eff8d7fe840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.291 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.292 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8daba2d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.292 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8daba2d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.292 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.292 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-03T19:09:13.292492) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.307 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.307 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.318 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.318 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.319 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.319 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7eff8d8a82c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.319 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.320 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a82f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.320 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a82f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.320 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.320 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.321 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-03T19:09:13.320561) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.321 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.321 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.322 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7eff8d7ff9b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.322 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.322 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff90799b20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.322 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff90799b20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.323 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.323 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-03T19:09:13.323000) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.346 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/memory.usage volume: 42.515625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.368 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/memory.usage volume: 43.6796875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.369 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.370 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7eff8d8a8350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.370 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.370 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.371 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.371 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.372 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.372 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-03T19:09:13.371651) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.372 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.373 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.373 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7eff8f682330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.374 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.374 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8f46ebd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.374 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8f46ebd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.375 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.375 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-03T19:09:13.375150) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.375 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.376 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.376 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.377 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.377 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.378 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7eff8d7ff4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.378 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.378 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.379 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.379 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.379 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-03T19:09:13.379559) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.425 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.read.bytes volume: 30382592 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.426 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.475 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.read.bytes volume: 30284800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.475 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.476 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.476 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7eff8d930c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.476 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.476 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7eff8d7ff4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.476 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.477 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.477 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.477 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.477 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-03T19:09:13.477156) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.477 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.read.latency volume: 1826201908 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.479 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.read.latency volume: 148336564 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.479 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.read.latency volume: 2339550092 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.479 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.read.latency volume: 154099871 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.480 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.480 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7eff8d7ff530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.480 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.480 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.480 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.480 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.480 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.read.requests volume: 1093 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.481 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.481 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.read.requests volume: 1098 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.481 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.482 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-03T19:09:13.480673) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.485 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.485 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7eff8d7ff590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.486 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.486 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.486 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.486 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.486 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.486 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.486 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.486 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.487 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.487 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7eff8d7ff5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.487 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.488 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-03T19:09:13.486221) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.487 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.488 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.488 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.489 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.write.bytes volume: 73162752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.489 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.489 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-03T19:09:13.488801) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.489 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.write.bytes volume: 72855552 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.490 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.490 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.490 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7eff8d8a8620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.490 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.491 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.491 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.491 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.491 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.491 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.492 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.492 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7eff8d7ff650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.492 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.492 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.492 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-03T19:09:13.491245) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.493 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.493 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.493 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.write.latency volume: 9240506883 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.493 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.494 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.write.latency volume: 7366670257 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.494 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.495 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.495 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7eff8d7ff6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.495 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-03T19:09:13.493228) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.496 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.496 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.496 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.496 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.496 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-03T19:09:13.496367) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.496 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.write.requests volume: 340 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.497 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.497 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.write.requests volume: 277 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.497 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.498 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.498 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7eff8d7ffa40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.498 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.498 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ffef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.498 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ffef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.498 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.498 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.499 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-03T19:09:13.498628) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.499 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.499 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.499 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7eff8d7ff710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.500 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.500 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.500 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.500 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.500 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.500 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7eff8d7fff20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.501 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-03T19:09:13.500322) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.501 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.501 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7fff50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.501 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7fff50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.501 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.501 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/network.incoming.packets volume: 13 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.502 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.502 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-03T19:09:13.501766) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.502 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.503 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7eff8d7ff770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.503 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.503 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff7a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.503 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff7a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.503 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.504 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.504 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7eff8d7fff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.504 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-03T19:09:13.503546) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.504 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.505 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7fffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.505 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7fffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.505 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.505 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.505 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.505 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-03T19:09:13.505186) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.506 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.506 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7eff8d7fdac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.506 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.506 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8ef7c7d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.506 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8ef7c7d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.506 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.507 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-03T19:09:13.506943) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.507 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/cpu volume: 334240000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.507 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/cpu volume: 285040000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.508 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.508 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.508 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.508 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.508 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.508 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.509 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.509 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.509 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.509 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.509 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.509 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.509 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.509 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.509 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.509 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.509 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.509 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.509 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.509 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.509 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.509 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.510 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.510 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.510 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.510 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:09:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:09:13.510 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
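Annotation: the ceilometer_agent_compute lines above trace one complete polling cycle per pollster: discovery, coordination check, heartbeat, sample conversion, finish. A minimal Python schematic of that order of operations, with generic callables; this is an illustration inferred from the logged messages, not ceilometer's actual API.

    def run_polling_task(pollsters, discover, heartbeat, get_samples):
        # Order of operations as logged by _internal_pollster_run above.
        for pollster in pollsters:
            instances = discover("local_instances")          # "Executing discovery process for pollsters [...]"
            # coordination is skipped here: "not configured in a source ... that requires coordination"
            heartbeat(pollster)                              # "Pollster heartbeat update: <name>"
            for sample in get_samples(pollster, instances):  # "_stats_to_sample [...] volume: N"
                yield sample                                 # samples handed off downstream
            # "Finished polling pollster <name> in the context of pollsters"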
Dec  3 19:09:13 compute-0 nova_compute[348325]: 2025-12-03 19:09:13.738 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:09:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:09:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:09:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:09:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:09:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:09:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:09:14 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_19:09:14
Dec  3 19:09:14 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 19:09:14 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 19:09:14 compute-0 ceph-mgr[193091]: [balancer INFO root] pools ['volumes', 'backups', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log', '.rgw.root', 'default.rgw.control', 'images', 'vms', 'default.rgw.meta']
Dec  3 19:09:14 compute-0 ceph-mgr[193091]: [balancer INFO root] prepared 0/10 changes
Dec  3 19:09:14 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2110: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:09:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 19:09:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 19:09:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 19:09:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 19:09:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 19:09:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 19:09:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 19:09:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 19:09:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 19:09:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 19:09:14 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:09:16 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2111: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:09:16 compute-0 podman[463763]: 2025-12-03 19:09:16.927524161 +0000 UTC m=+0.100377551 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 19:09:17 compute-0 nova_compute[348325]: 2025-12-03 19:09:17.063 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:09:18 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2112: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:09:18 compute-0 nova_compute[348325]: 2025-12-03 19:09:18.740 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:09:19 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:09:20 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2113: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:09:22 compute-0 nova_compute[348325]: 2025-12-03 19:09:22.066 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:09:22 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2114: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:09:22 compute-0 podman[463789]: 2025-12-03 19:09:22.902541618 +0000 UTC m=+0.067871788 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125)
Dec  3 19:09:22 compute-0 podman[463788]: 2025-12-03 19:09:22.964701174 +0000 UTC m=+0.130755722 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:09:23 compute-0 nova_compute[348325]: 2025-12-03 19:09:23.324 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:09:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:09:23.366 286999 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 19:09:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:09:23.367 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 19:09:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:09:23.368 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 19:09:23 compute-0 nova_compute[348325]: 2025-12-03 19:09:23.742 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:09:24 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2115: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:09:24 compute-0 nova_compute[348325]: 2025-12-03 19:09:24.485 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:09:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 19:09:24 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:09:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:09:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 19:09:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:09:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015186096934622648 of space, bias 1.0, pg target 0.45558290803867946 quantized to 32 (current 32)
Dec  3 19:09:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:09:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:09:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:09:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:09:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:09:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Dec  3 19:09:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:09:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 19:09:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:09:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:09:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:09:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 19:09:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:09:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 19:09:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:09:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:09:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:09:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
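Annotation: every pg_autoscaler line above satisfies pg target = capacity ratio x bias x 300. A quick check against three of the logged triples, assuming the 300 multiplier is mon_target_pg_per_osd (default 100) times this cluster's 3 OSDs; that decomposition is inferred from the numbers, not taken from Ceph source.

    # Hypothetical re-derivation of the pg targets logged above.
    PGS_BUDGET = 300.0  # assumption: mon_target_pg_per_osd (100) * 3 OSDs

    for pool, ratio, bias, logged in [
        (".mgr",               7.185749983720779e-06, 1.0, 0.0021557249951162337),
        ("vms",                0.0015186096934622648, 1.0, 0.45558290803867946),
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0, 0.0006104707950771635),
    ]:
        target = ratio * bias * PGS_BUDGET
        assert abs(target - logged) < 1e-12, (pool, target, logged)
        print(pool, "pg target", target)

The "quantized to" figures then appear to snap that raw target to a power of two with a per-pool floor, and the autoscaler only changes pg_num when the result is far enough from the current value, which is why a raw target of 0.456 still leaves 'vms' at 32 PGs.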
Dec  3 19:09:26 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2116: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:09:27 compute-0 nova_compute[348325]: 2025-12-03 19:09:27.068 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:09:28 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2117: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:09:28 compute-0 nova_compute[348325]: 2025-12-03 19:09:28.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:09:28 compute-0 nova_compute[348325]: 2025-12-03 19:09:28.488 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:09:28 compute-0 nova_compute[348325]: 2025-12-03 19:09:28.488 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:09:28 compute-0 nova_compute[348325]: 2025-12-03 19:09:28.488 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  3 19:09:28 compute-0 nova_compute[348325]: 2025-12-03 19:09:28.513 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  3 19:09:28 compute-0 nova_compute[348325]: 2025-12-03 19:09:28.744 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:09:29 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:09:29 compute-0 podman[158200]: time="2025-12-03T19:09:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 19:09:29 compute-0 podman[158200]: @ - - [03/Dec/2025:19:09:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 19:09:29 compute-0 podman[158200]: @ - - [03/Dec/2025:19:09:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8657 "" "Go-http-client/1.1"
Dec  3 19:09:30 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2118: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:09:31 compute-0 openstack_network_exporter[365222]: ERROR   19:09:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 19:09:31 compute-0 openstack_network_exporter[365222]: ERROR   19:09:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:09:31 compute-0 openstack_network_exporter[365222]: ERROR   19:09:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:09:31 compute-0 openstack_network_exporter[365222]: ERROR   19:09:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 19:09:31 compute-0 openstack_network_exporter[365222]: ERROR   19:09:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 19:09:31 compute-0 nova_compute[348325]: 2025-12-03 19:09:31.513 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:09:31 compute-0 nova_compute[348325]: 2025-12-03 19:09:31.513 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:09:32 compute-0 nova_compute[348325]: 2025-12-03 19:09:32.070 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:09:32 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2119: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:09:33 compute-0 nova_compute[348325]: 2025-12-03 19:09:33.746 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:09:33 compute-0 podman[463834]: 2025-12-03 19:09:33.94192023 +0000 UTC m=+0.112064145 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  3 19:09:33 compute-0 podman[463836]: 2025-12-03 19:09:33.963316759 +0000 UTC m=+0.108920721 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, vcs-type=git, vendor=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, architecture=x86_64, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, config_id=edpm)
Dec  3 19:09:33 compute-0 podman[463835]: 2025-12-03 19:09:33.969427344 +0000 UTC m=+0.117027563 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  3 19:09:34 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2120: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:09:34 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:09:35 compute-0 nova_compute[348325]: 2025-12-03 19:09:35.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:09:35 compute-0 nova_compute[348325]: 2025-12-03 19:09:35.487 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 19:09:36 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2121: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:09:36 compute-0 nova_compute[348325]: 2025-12-03 19:09:36.448 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "refresh_cache-a364994c-8442-4a4c-bd6b-f3a2d31e4483" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 19:09:36 compute-0 nova_compute[348325]: 2025-12-03 19:09:36.449 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquired lock "refresh_cache-a364994c-8442-4a4c-bd6b-f3a2d31e4483" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 19:09:36 compute-0 nova_compute[348325]: 2025-12-03 19:09:36.449 348329 DEBUG nova.network.neutron [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  3 19:09:37 compute-0 nova_compute[348325]: 2025-12-03 19:09:37.073 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:09:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 19:09:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1927017845' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 19:09:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 19:09:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1927017845' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 19:09:38 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2122: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:09:38 compute-0 nova_compute[348325]: 2025-12-03 19:09:38.751 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:09:39 compute-0 nova_compute[348325]: 2025-12-03 19:09:39.501 348329 DEBUG nova.network.neutron [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Updating instance_info_cache with network_info: [{"id": "b761f609-2787-4aa2-9b1c-cc5b41d2373d", "address": "fa:16:3e:2c:da:52", "network": {"id": "04e258c0-609e-4010-a306-af20506c3a9d", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.71", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d29cef7b24ee4d30b2b3f5027ec6aafb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb761f609-27", "ovs_interfaceid": "b761f609-2787-4aa2-9b1c-cc5b41d2373d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 19:09:39 compute-0 nova_compute[348325]: 2025-12-03 19:09:39.535 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Releasing lock "refresh_cache-a364994c-8442-4a4c-bd6b-f3a2d31e4483" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 19:09:39 compute-0 nova_compute[348325]: 2025-12-03 19:09:39.536 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  3 19:09:39 compute-0 nova_compute[348325]: 2025-12-03 19:09:39.537 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:09:39 compute-0 nova_compute[348325]: 2025-12-03 19:09:39.538 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 19:09:39 compute-0 nova_compute[348325]: 2025-12-03 19:09:39.539 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:09:39 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:09:39 compute-0 podman[463893]: 2025-12-03 19:09:39.970976854 +0000 UTC m=+0.106098014 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm)
Dec  3 19:09:39 compute-0 podman[463894]: 2025-12-03 19:09:39.985371496 +0000 UTC m=+0.122825472 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  3 19:09:40 compute-0 podman[463892]: 2025-12-03 19:09:40.008980117 +0000 UTC m=+0.146304970 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, managed_by=edpm_ansible, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, version=9.4, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, architecture=x86_64, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git)
Dec  3 19:09:40 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2123: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:09:42 compute-0 nova_compute[348325]: 2025-12-03 19:09:42.077 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:09:42 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2124: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:09:43 compute-0 nova_compute[348325]: 2025-12-03 19:09:43.753 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:09:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:09:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:09:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:09:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:09:44 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:09:44 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:09:44 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2125: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:09:44 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:09:46 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2126: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:09:46 compute-0 nova_compute[348325]: 2025-12-03 19:09:46.502 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:09:46 compute-0 nova_compute[348325]: 2025-12-03 19:09:46.502 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  3 19:09:47 compute-0 nova_compute[348325]: 2025-12-03 19:09:47.081 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:09:47 compute-0 nova_compute[348325]: 2025-12-03 19:09:47.513 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:09:47 compute-0 nova_compute[348325]: 2025-12-03 19:09:47.545 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 19:09:47 compute-0 nova_compute[348325]: 2025-12-03 19:09:47.546 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 19:09:47 compute-0 nova_compute[348325]: 2025-12-03 19:09:47.546 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
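Annotation: the three lockutils lines above are the standard oslo.concurrency synchronized pattern; the decorator's inner wrapper logs the acquire, waited, and held times around the decorated method. A minimal sketch of that pattern (placeholder body, not nova's implementation):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def clean_compute_node_cache():
        # Runs only while holding the in-process "compute_resources" lock;
        # the wrapper emits the Acquiring/acquired/released lines seen above.
        pass

    clean_compute_node_cache()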
Dec  3 19:09:47 compute-0 nova_compute[348325]: 2025-12-03 19:09:47.547 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 19:09:47 compute-0 nova_compute[348325]: 2025-12-03 19:09:47.548 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 19:09:47 compute-0 podman[463961]: 2025-12-03 19:09:47.943961009 +0000 UTC m=+0.099280872 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 19:09:47 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 19:09:47 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2879878779' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 19:09:48 compute-0 nova_compute[348325]: 2025-12-03 19:09:48.013 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
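Annotation: the resource audit shells out exactly as logged, with oslo's processutils wrapping the subprocess and recording the command, return code, and runtime (0.465s above). A sketch of the same call, assuming a reachable cluster, the client.openstack keyring, and the JSON keys emitted by recent `ceph df --format=json` releases:

    import json
    from oslo_concurrency import processutils

    # Same command the log shows; raises ProcessExecutionError on rc != 0.
    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    print(stats['stats']['total_avail_bytes'])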
Dec  3 19:09:48 compute-0 nova_compute[348325]: 2025-12-03 19:09:48.089 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 19:09:48 compute-0 nova_compute[348325]: 2025-12-03 19:09:48.090 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 19:09:48 compute-0 nova_compute[348325]: 2025-12-03 19:09:48.095 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 19:09:48 compute-0 nova_compute[348325]: 2025-12-03 19:09:48.095 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 19:09:48 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2127: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:09:48 compute-0 nova_compute[348325]: 2025-12-03 19:09:48.451 348329 WARNING nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  3 19:09:48 compute-0 nova_compute[348325]: 2025-12-03 19:09:48.453 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3498MB free_disk=59.89718246459961GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  3 19:09:48 compute-0 nova_compute[348325]: 2025-12-03 19:09:48.454 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 19:09:48 compute-0 nova_compute[348325]: 2025-12-03 19:09:48.454 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 19:09:48 compute-0 nova_compute[348325]: 2025-12-03 19:09:48.596 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance a4fc45c7-44e4-4b50-a3e0-98de13268f88 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  3 19:09:48 compute-0 nova_compute[348325]: 2025-12-03 19:09:48.597 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance a364994c-8442-4a4c-bd6b-f3a2d31e4483 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  3 19:09:48 compute-0 nova_compute[348325]: 2025-12-03 19:09:48.597 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  3 19:09:48 compute-0 nova_compute[348325]: 2025-12-03 19:09:48.598 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  3 19:09:48 compute-0 nova_compute[348325]: 2025-12-03 19:09:48.738 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 19:09:48 compute-0 nova_compute[348325]: 2025-12-03 19:09:48.763 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:09:49 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 19:09:49 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2305912217' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 19:09:49 compute-0 nova_compute[348325]: 2025-12-03 19:09:49.185 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 19:09:49 compute-0 nova_compute[348325]: 2025-12-03 19:09:49.196 348329 DEBUG nova.compute.provider_tree [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  3 19:09:49 compute-0 nova_compute[348325]: 2025-12-03 19:09:49.216 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  3 19:09:49 compute-0 nova_compute[348325]: 2025-12-03 19:09:49.219 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  3 19:09:49 compute-0 nova_compute[348325]: 2025-12-03 19:09:49.220 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.766s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 19:09:49 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:09:50 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2128: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:09:52 compute-0 nova_compute[348325]: 2025-12-03 19:09:52.084 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:09:52 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2129: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:09:53 compute-0 nova_compute[348325]: 2025-12-03 19:09:53.757 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:09:53 compute-0 podman[464011]: 2025-12-03 19:09:53.932331375 +0000 UTC m=+0.095473381 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec  3 19:09:53 compute-0 podman[464010]: 2025-12-03 19:09:53.973599266 +0000 UTC m=+0.137963381 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 19:09:54 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2130: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:09:54 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:09:56 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2131: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:09:57 compute-0 nova_compute[348325]: 2025-12-03 19:09:57.088 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:09:58 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2132: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:09:58 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 19:09:58 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 19:09:58 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 19:09:58 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 19:09:58 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 19:09:58 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:09:58 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 88baea80-dd0b-4124-8534-71d63d77804a does not exist
Dec  3 19:09:58 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 816aa17f-e80a-4432-b870-22a1464b4464 does not exist
Dec  3 19:09:58 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 0e02382f-2987-4a44-9904-b44cf356881c does not exist
Dec  3 19:09:58 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 19:09:58 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 19:09:58 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 19:09:58 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 19:09:58 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 19:09:58 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 19:09:58 compute-0 nova_compute[348325]: 2025-12-03 19:09:58.758 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:09:59 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 19:09:59 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:09:59 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 19:09:59 compute-0 podman[464323]: 2025-12-03 19:09:59.600784535 +0000 UTC m=+0.069244428 container create b8281741b536854c5d1fff0110f44e95fc4ea278958eaa7ba749141a8c2a8f5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_keller, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec  3 19:09:59 compute-0 systemd[1]: Started libpod-conmon-b8281741b536854c5d1fff0110f44e95fc4ea278958eaa7ba749141a8c2a8f5f.scope.
Dec  3 19:09:59 compute-0 podman[464323]: 2025-12-03 19:09:59.566053549 +0000 UTC m=+0.034513512 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:09:59 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:09:59 compute-0 podman[464323]: 2025-12-03 19:09:59.715542673 +0000 UTC m=+0.184002586 container init b8281741b536854c5d1fff0110f44e95fc4ea278958eaa7ba749141a8c2a8f5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_keller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 19:09:59 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:09:59 compute-0 podman[464323]: 2025-12-03 19:09:59.727230551 +0000 UTC m=+0.195690424 container start b8281741b536854c5d1fff0110f44e95fc4ea278958eaa7ba749141a8c2a8f5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_keller, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  3 19:09:59 compute-0 podman[464323]: 2025-12-03 19:09:59.731758599 +0000 UTC m=+0.200218562 container attach b8281741b536854c5d1fff0110f44e95fc4ea278958eaa7ba749141a8c2a8f5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_keller, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True)
Dec  3 19:09:59 compute-0 kind_keller[464338]: 167 167
Dec  3 19:09:59 compute-0 systemd[1]: libpod-b8281741b536854c5d1fff0110f44e95fc4ea278958eaa7ba749141a8c2a8f5f.scope: Deactivated successfully.
Dec  3 19:09:59 compute-0 podman[464323]: 2025-12-03 19:09:59.740186429 +0000 UTC m=+0.208646342 container died b8281741b536854c5d1fff0110f44e95fc4ea278958eaa7ba749141a8c2a8f5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 19:09:59 compute-0 podman[158200]: time="2025-12-03T19:09:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 19:09:59 compute-0 podman[158200]: @ - - [03/Dec/2025:19:09:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45183 "" "Go-http-client/1.1"
Dec  3 19:09:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-6db61fca519ca0703395de1b8377b933449373a5125d2eb9be098205b36ec5d4-merged.mount: Deactivated successfully.
Dec  3 19:09:59 compute-0 podman[464323]: 2025-12-03 19:09:59.839766907 +0000 UTC m=+0.308226790 container remove b8281741b536854c5d1fff0110f44e95fc4ea278958eaa7ba749141a8c2a8f5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_keller, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 19:09:59 compute-0 podman[158200]: @ - - [03/Dec/2025:19:09:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8657 "" "Go-http-client/1.1"
Dec  3 19:09:59 compute-0 systemd[1]: libpod-conmon-b8281741b536854c5d1fff0110f44e95fc4ea278958eaa7ba749141a8c2a8f5f.scope: Deactivated successfully.
Dec  3 19:10:00 compute-0 podman[464362]: 2025-12-03 19:10:00.124789244 +0000 UTC m=+0.085440012 container create 1423c05c9f1d5f455c2429677d0fe2b6536565184dffde31833bf27736fa620a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:10:00 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2133: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:10:00 compute-0 podman[464362]: 2025-12-03 19:10:00.087688532 +0000 UTC m=+0.048339340 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:10:00 compute-0 systemd[1]: Started libpod-conmon-1423c05c9f1d5f455c2429677d0fe2b6536565184dffde31833bf27736fa620a.scope.
Dec  3 19:10:00 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:10:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eb3a2cf9cb8ce875c2b6daebe55771cc86b7a13c99dc389d4bc5910a8f2de06/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 19:10:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eb3a2cf9cb8ce875c2b6daebe55771cc86b7a13c99dc389d4bc5910a8f2de06/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 19:10:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eb3a2cf9cb8ce875c2b6daebe55771cc86b7a13c99dc389d4bc5910a8f2de06/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 19:10:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eb3a2cf9cb8ce875c2b6daebe55771cc86b7a13c99dc389d4bc5910a8f2de06/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 19:10:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eb3a2cf9cb8ce875c2b6daebe55771cc86b7a13c99dc389d4bc5910a8f2de06/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 19:10:00 compute-0 podman[464362]: 2025-12-03 19:10:00.241369596 +0000 UTC m=+0.202020374 container init 1423c05c9f1d5f455c2429677d0fe2b6536565184dffde31833bf27736fa620a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:10:00 compute-0 podman[464362]: 2025-12-03 19:10:00.25623318 +0000 UTC m=+0.216883948 container start 1423c05c9f1d5f455c2429677d0fe2b6536565184dffde31833bf27736fa620a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:10:00 compute-0 podman[464362]: 2025-12-03 19:10:00.266540854 +0000 UTC m=+0.227191642 container attach 1423c05c9f1d5f455c2429677d0fe2b6536565184dffde31833bf27736fa620a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_banach, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Dec  3 19:10:01 compute-0 openstack_network_exporter[365222]: ERROR   19:10:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 19:10:01 compute-0 openstack_network_exporter[365222]: ERROR   19:10:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 19:10:01 compute-0 openstack_network_exporter[365222]: ERROR   19:10:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:10:01 compute-0 openstack_network_exporter[365222]: ERROR   19:10:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:10:01 compute-0 openstack_network_exporter[365222]: ERROR   19:10:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 19:10:01 compute-0 mystifying_banach[464379]: --> passed data devices: 0 physical, 3 LVM
Dec  3 19:10:01 compute-0 mystifying_banach[464379]: --> relative data size: 1.0
Dec  3 19:10:01 compute-0 mystifying_banach[464379]: --> All data devices are unavailable
Dec  3 19:10:01 compute-0 systemd[1]: libpod-1423c05c9f1d5f455c2429677d0fe2b6536565184dffde31833bf27736fa620a.scope: Deactivated successfully.
Dec  3 19:10:01 compute-0 podman[464362]: 2025-12-03 19:10:01.513594096 +0000 UTC m=+1.474244874 container died 1423c05c9f1d5f455c2429677d0fe2b6536565184dffde31833bf27736fa620a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_banach, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 19:10:01 compute-0 systemd[1]: libpod-1423c05c9f1d5f455c2429677d0fe2b6536565184dffde31833bf27736fa620a.scope: Consumed 1.168s CPU time.
Dec  3 19:10:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-6eb3a2cf9cb8ce875c2b6daebe55771cc86b7a13c99dc389d4bc5910a8f2de06-merged.mount: Deactivated successfully.
Dec  3 19:10:01 compute-0 podman[464362]: 2025-12-03 19:10:01.586755405 +0000 UTC m=+1.547406163 container remove 1423c05c9f1d5f455c2429677d0fe2b6536565184dffde31833bf27736fa620a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_banach, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:10:01 compute-0 systemd[1]: libpod-conmon-1423c05c9f1d5f455c2429677d0fe2b6536565184dffde31833bf27736fa620a.scope: Deactivated successfully.
Dec  3 19:10:02 compute-0 nova_compute[348325]: 2025-12-03 19:10:02.091 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:10:02 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2134: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 85 B/s wr, 2 op/s
Dec  3 19:10:02 compute-0 podman[464559]: 2025-12-03 19:10:02.465169581 +0000 UTC m=+0.061124874 container create 0b8d085e13542fd1007e4e62d1085f26cea1383d83507a1c79e09985be55242c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mendel, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 19:10:02 compute-0 systemd[1]: Started libpod-conmon-0b8d085e13542fd1007e4e62d1085f26cea1383d83507a1c79e09985be55242c.scope.
Dec  3 19:10:02 compute-0 podman[464559]: 2025-12-03 19:10:02.444159362 +0000 UTC m=+0.040114695 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:10:02 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:10:02 compute-0 podman[464559]: 2025-12-03 19:10:02.579553951 +0000 UTC m=+0.175509274 container init 0b8d085e13542fd1007e4e62d1085f26cea1383d83507a1c79e09985be55242c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mendel, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:10:02 compute-0 podman[464559]: 2025-12-03 19:10:02.5925343 +0000 UTC m=+0.188489593 container start 0b8d085e13542fd1007e4e62d1085f26cea1383d83507a1c79e09985be55242c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mendel, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 19:10:02 compute-0 podman[464559]: 2025-12-03 19:10:02.596939434 +0000 UTC m=+0.192894907 container attach 0b8d085e13542fd1007e4e62d1085f26cea1383d83507a1c79e09985be55242c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mendel, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 19:10:02 compute-0 distracted_mendel[464576]: 167 167
Dec  3 19:10:02 compute-0 podman[464559]: 2025-12-03 19:10:02.602103797 +0000 UTC m=+0.198059100 container died 0b8d085e13542fd1007e4e62d1085f26cea1383d83507a1c79e09985be55242c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mendel, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:10:02 compute-0 systemd[1]: libpod-0b8d085e13542fd1007e4e62d1085f26cea1383d83507a1c79e09985be55242c.scope: Deactivated successfully.
Dec  3 19:10:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-35066c734f100aa69abfe63d8f529591bc9b64a7291fdf90d01dad84fb3d9283-merged.mount: Deactivated successfully.
Dec  3 19:10:02 compute-0 podman[464559]: 2025-12-03 19:10:02.64760505 +0000 UTC m=+0.243560343 container remove 0b8d085e13542fd1007e4e62d1085f26cea1383d83507a1c79e09985be55242c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mendel, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Dec  3 19:10:02 compute-0 systemd[1]: libpod-conmon-0b8d085e13542fd1007e4e62d1085f26cea1383d83507a1c79e09985be55242c.scope: Deactivated successfully.
Dec  3 19:10:02 compute-0 podman[464599]: 2025-12-03 19:10:02.846535 +0000 UTC m=+0.061235978 container create d513d664393937f647b85d9c63e60aa6402590454dddaeba408bc30796762805 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_dijkstra, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  3 19:10:02 compute-0 systemd[1]: Started libpod-conmon-d513d664393937f647b85d9c63e60aa6402590454dddaeba408bc30796762805.scope.
Dec  3 19:10:02 compute-0 podman[464599]: 2025-12-03 19:10:02.822420915 +0000 UTC m=+0.037121963 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:10:02 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:10:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ac186e5d2c0c60e773a5d3799f0be88f90b461141e602b4881c42698674ea5c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 19:10:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ac186e5d2c0c60e773a5d3799f0be88f90b461141e602b4881c42698674ea5c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 19:10:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ac186e5d2c0c60e773a5d3799f0be88f90b461141e602b4881c42698674ea5c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 19:10:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ac186e5d2c0c60e773a5d3799f0be88f90b461141e602b4881c42698674ea5c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 19:10:02 compute-0 podman[464599]: 2025-12-03 19:10:02.961749739 +0000 UTC m=+0.176450737 container init d513d664393937f647b85d9c63e60aa6402590454dddaeba408bc30796762805 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_dijkstra, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 19:10:02 compute-0 podman[464599]: 2025-12-03 19:10:02.978376864 +0000 UTC m=+0.193077812 container start d513d664393937f647b85d9c63e60aa6402590454dddaeba408bc30796762805 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_dijkstra, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec  3 19:10:02 compute-0 podman[464599]: 2025-12-03 19:10:02.988173827 +0000 UTC m=+0.202874805 container attach d513d664393937f647b85d9c63e60aa6402590454dddaeba408bc30796762805 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec  3 19:10:03 compute-0 great_dijkstra[464616]: {
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:    "0": [
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:        {
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:            "devices": [
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:                "/dev/loop3"
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:            ],
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:            "lv_name": "ceph_lv0",
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:            "lv_size": "21470642176",
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=973fbbc8-5aff-4a53-bee8-42e5a6788dd6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:            "lv_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:            "name": "ceph_lv0",
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:            "tags": {
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:                "ceph.block_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:                "ceph.cephx_lockbox_secret": "",
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:                "ceph.cluster_name": "ceph",
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:                "ceph.crush_device_class": "",
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:                "ceph.encrypted": "0",
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:                "ceph.osd_fsid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:                "ceph.osd_id": "0",
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:                "ceph.type": "block",
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:                "ceph.vdo": "0"
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:            },
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:            "type": "block",
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:            "vg_name": "ceph_vg0"
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:        }
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:    ],
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:    "1": [
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:        {
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:            "devices": [
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:                "/dev/loop4"
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:            ],
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:            "lv_name": "ceph_lv1",
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:            "lv_size": "21470642176",
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1e2b0083-5293-47cb-a3d1-bc27cedc4ede,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:            "lv_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:            "name": "ceph_lv1",
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:            "tags": {
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:                "ceph.block_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:                "ceph.cephx_lockbox_secret": "",
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:                "ceph.cluster_name": "ceph",
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:                "ceph.crush_device_class": "",
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:                "ceph.encrypted": "0",
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:                "ceph.osd_fsid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:                "ceph.osd_id": "1",
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:                "ceph.type": "block",
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:                "ceph.vdo": "0"
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:            },
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:            "type": "block",
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:            "vg_name": "ceph_vg1"
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:        }
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:    ],
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:    "2": [
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:        {
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:            "devices": [
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:                "/dev/loop5"
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:            ],
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:            "lv_name": "ceph_lv2",
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:            "lv_size": "21470642176",
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2abec9de-afba-437e-9a17-384a1dd8cd50,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:            "lv_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:            "name": "ceph_lv2",
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:            "tags": {
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:                "ceph.block_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:                "ceph.cephx_lockbox_secret": "",
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:                "ceph.cluster_name": "ceph",
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:                "ceph.crush_device_class": "",
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:                "ceph.encrypted": "0",
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:                "ceph.osd_fsid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:                "ceph.osd_id": "2",
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:                "ceph.type": "block",
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:                "ceph.vdo": "0"
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:            },
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:            "type": "block",
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:            "vg_name": "ceph_vg2"
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:        }
Dec  3 19:10:03 compute-0 great_dijkstra[464616]:    ]
Dec  3 19:10:03 compute-0 great_dijkstra[464616]: }
Dec  3 19:10:03 compute-0 systemd[1]: libpod-d513d664393937f647b85d9c63e60aa6402590454dddaeba408bc30796762805.scope: Deactivated successfully.
Dec  3 19:10:03 compute-0 podman[464599]: 2025-12-03 19:10:03.758097424 +0000 UTC m=+0.972798392 container died d513d664393937f647b85d9c63e60aa6402590454dddaeba408bc30796762805 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec  3 19:10:03 compute-0 nova_compute[348325]: 2025-12-03 19:10:03.762 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:10:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ac186e5d2c0c60e773a5d3799f0be88f90b461141e602b4881c42698674ea5c-merged.mount: Deactivated successfully.
Dec  3 19:10:03 compute-0 podman[464599]: 2025-12-03 19:10:03.848069243 +0000 UTC m=+1.062770201 container remove d513d664393937f647b85d9c63e60aa6402590454dddaeba408bc30796762805 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_dijkstra, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 19:10:03 compute-0 systemd[1]: libpod-conmon-d513d664393937f647b85d9c63e60aa6402590454dddaeba408bc30796762805.scope: Deactivated successfully.
Dec  3 19:10:04 compute-0 podman[464664]: 2025-12-03 19:10:04.097990335 +0000 UTC m=+0.082918693 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  3 19:10:04 compute-0 podman[464663]: 2025-12-03 19:10:04.106040116 +0000 UTC m=+0.100682475 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 19:10:04 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2135: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 85 B/s wr, 3 op/s
Dec  3 19:10:04 compute-0 podman[464665]: 2025-12-03 19:10:04.13938931 +0000 UTC m=+0.122254228 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, name=ubi9-minimal, release=1755695350, version=9.6, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter)
Dec  3 19:10:04 compute-0 podman[464836]: 2025-12-03 19:10:04.604892288 +0000 UTC m=+0.047104851 container create 5084896099e76279927b1c1f27a788d8d06493a623c944add84180cd7a1cb521 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_elion, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 19:10:04 compute-0 systemd[1]: Started libpod-conmon-5084896099e76279927b1c1f27a788d8d06493a623c944add84180cd7a1cb521.scope.
Dec  3 19:10:04 compute-0 podman[464836]: 2025-12-03 19:10:04.586519241 +0000 UTC m=+0.028731834 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:10:04 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:10:04 compute-0 podman[464836]: 2025-12-03 19:10:04.708053861 +0000 UTC m=+0.150266454 container init 5084896099e76279927b1c1f27a788d8d06493a623c944add84180cd7a1cb521 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_elion, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec  3 19:10:04 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:10:04 compute-0 podman[464836]: 2025-12-03 19:10:04.727508473 +0000 UTC m=+0.169721046 container start 5084896099e76279927b1c1f27a788d8d06493a623c944add84180cd7a1cb521 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_elion, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec  3 19:10:04 compute-0 podman[464836]: 2025-12-03 19:10:04.731716183 +0000 UTC m=+0.173928776 container attach 5084896099e76279927b1c1f27a788d8d06493a623c944add84180cd7a1cb521 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec  3 19:10:04 compute-0 upbeat_elion[464851]: 167 167
Dec  3 19:10:04 compute-0 systemd[1]: libpod-5084896099e76279927b1c1f27a788d8d06493a623c944add84180cd7a1cb521.scope: Deactivated successfully.
Dec  3 19:10:04 compute-0 podman[464836]: 2025-12-03 19:10:04.739257122 +0000 UTC m=+0.181469695 container died 5084896099e76279927b1c1f27a788d8d06493a623c944add84180cd7a1cb521 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_elion, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  3 19:10:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-dcbe11cd0a43626bb45689af692259c8435a2baf0a26317554f59ca501af2805-merged.mount: Deactivated successfully.
Dec  3 19:10:04 compute-0 podman[464836]: 2025-12-03 19:10:04.796010762 +0000 UTC m=+0.238223345 container remove 5084896099e76279927b1c1f27a788d8d06493a623c944add84180cd7a1cb521 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_elion, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  3 19:10:04 compute-0 systemd[1]: libpod-conmon-5084896099e76279927b1c1f27a788d8d06493a623c944add84180cd7a1cb521.scope: Deactivated successfully.
Dec  3 19:10:05 compute-0 podman[464874]: 2025-12-03 19:10:05.002641606 +0000 UTC m=+0.052947980 container create 1ba8765692aebdc9fa6ffdfe1006cc9eefe8005f28038de32da39bced2fe2137 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_saha, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 19:10:05 compute-0 systemd[1]: Started libpod-conmon-1ba8765692aebdc9fa6ffdfe1006cc9eefe8005f28038de32da39bced2fe2137.scope.
Dec  3 19:10:05 compute-0 podman[464874]: 2025-12-03 19:10:04.981939503 +0000 UTC m=+0.032245897 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:10:05 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:10:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a1d97fa1d6347fa46c04a67b72fc946bf9fa0a2a7e1b923f79e12a0ef4353c3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 19:10:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a1d97fa1d6347fa46c04a67b72fc946bf9fa0a2a7e1b923f79e12a0ef4353c3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 19:10:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a1d97fa1d6347fa46c04a67b72fc946bf9fa0a2a7e1b923f79e12a0ef4353c3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 19:10:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a1d97fa1d6347fa46c04a67b72fc946bf9fa0a2a7e1b923f79e12a0ef4353c3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 19:10:05 compute-0 podman[464874]: 2025-12-03 19:10:05.15217812 +0000 UTC m=+0.202484524 container init 1ba8765692aebdc9fa6ffdfe1006cc9eefe8005f28038de32da39bced2fe2137 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_saha, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 19:10:05 compute-0 podman[464874]: 2025-12-03 19:10:05.180634388 +0000 UTC m=+0.230940762 container start 1ba8765692aebdc9fa6ffdfe1006cc9eefe8005f28038de32da39bced2fe2137 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_saha, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec  3 19:10:05 compute-0 podman[464874]: 2025-12-03 19:10:05.184960211 +0000 UTC m=+0.235266665 container attach 1ba8765692aebdc9fa6ffdfe1006cc9eefe8005f28038de32da39bced2fe2137 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  3 19:10:06 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2136: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 170 B/s wr, 4 op/s
Dec  3 19:10:06 compute-0 cranky_saha[464890]: {
Dec  3 19:10:06 compute-0 cranky_saha[464890]:    "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
Dec  3 19:10:06 compute-0 cranky_saha[464890]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:10:06 compute-0 cranky_saha[464890]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 19:10:06 compute-0 cranky_saha[464890]:        "osd_id": 1,
Dec  3 19:10:06 compute-0 cranky_saha[464890]:        "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 19:10:06 compute-0 cranky_saha[464890]:        "type": "bluestore"
Dec  3 19:10:06 compute-0 cranky_saha[464890]:    },
Dec  3 19:10:06 compute-0 cranky_saha[464890]:    "2abec9de-afba-437e-9a17-384a1dd8cd50": {
Dec  3 19:10:06 compute-0 cranky_saha[464890]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:10:06 compute-0 cranky_saha[464890]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 19:10:06 compute-0 cranky_saha[464890]:        "osd_id": 2,
Dec  3 19:10:06 compute-0 cranky_saha[464890]:        "osd_uuid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 19:10:06 compute-0 cranky_saha[464890]:        "type": "bluestore"
Dec  3 19:10:06 compute-0 cranky_saha[464890]:    },
Dec  3 19:10:06 compute-0 cranky_saha[464890]:    "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": {
Dec  3 19:10:06 compute-0 cranky_saha[464890]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:10:06 compute-0 cranky_saha[464890]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 19:10:06 compute-0 cranky_saha[464890]:        "osd_id": 0,
Dec  3 19:10:06 compute-0 cranky_saha[464890]:        "osd_uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 19:10:06 compute-0 cranky_saha[464890]:        "type": "bluestore"
Dec  3 19:10:06 compute-0 cranky_saha[464890]:    }
Dec  3 19:10:06 compute-0 cranky_saha[464890]: }
Dec  3 19:10:06 compute-0 systemd[1]: libpod-1ba8765692aebdc9fa6ffdfe1006cc9eefe8005f28038de32da39bced2fe2137.scope: Deactivated successfully.
Dec  3 19:10:06 compute-0 podman[464874]: 2025-12-03 19:10:06.217940042 +0000 UTC m=+1.268246486 container died 1ba8765692aebdc9fa6ffdfe1006cc9eefe8005f28038de32da39bced2fe2137 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_saha, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Dec  3 19:10:06 compute-0 systemd[1]: libpod-1ba8765692aebdc9fa6ffdfe1006cc9eefe8005f28038de32da39bced2fe2137.scope: Consumed 1.036s CPU time.
Dec  3 19:10:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a1d97fa1d6347fa46c04a67b72fc946bf9fa0a2a7e1b923f79e12a0ef4353c3-merged.mount: Deactivated successfully.
Dec  3 19:10:06 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #102. Immutable memtables: 0.
Dec  3 19:10:06 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:10:06.988130) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 19:10:06 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 59] Flushing memtable with next log file: 102
Dec  3 19:10:06 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764789006988324, "job": 59, "event": "flush_started", "num_memtables": 1, "num_entries": 1012, "num_deletes": 251, "total_data_size": 1445748, "memory_usage": 1468576, "flush_reason": "Manual Compaction"}
Dec  3 19:10:06 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 59] Level-0 flush table #103: started
Dec  3 19:10:07 compute-0 nova_compute[348325]: 2025-12-03 19:10:07.095 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:10:07 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764789007840350, "cf_name": "default", "job": 59, "event": "table_file_creation", "file_number": 103, "file_size": 1432001, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 43020, "largest_seqno": 44031, "table_properties": {"data_size": 1426977, "index_size": 2548, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 10772, "raw_average_key_size": 19, "raw_value_size": 1416976, "raw_average_value_size": 2595, "num_data_blocks": 114, "num_entries": 546, "num_filter_entries": 546, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764788910, "oldest_key_time": 1764788910, "file_creation_time": 1764789006, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a1ac3b74-8599-4a51-8b4c-6fd35a134427", "db_session_id": "TYOLZSJOOVNJYKF8Y1CE", "orig_file_number": 103, "seqno_to_time_mapping": "N/A"}}
Dec  3 19:10:07 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 59] Flush lasted 852818 microseconds, and 8473 cpu microseconds.
Dec  3 19:10:07 compute-0 ceph-mon[192802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 19:10:07 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:10:07.840950) [db/flush_job.cc:967] [default] [JOB 59] Level-0 flush table #103: 1432001 bytes OK
Dec  3 19:10:07 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:10:07.840989) [db/memtable_list.cc:519] [default] Level-0 commit table #103 started
Dec  3 19:10:07 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:10:07.863152) [db/memtable_list.cc:722] [default] Level-0 commit table #103: memtable #1 done
Dec  3 19:10:07 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:10:07.863238) EVENT_LOG_v1 {"time_micros": 1764789007863215, "job": 59, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 19:10:07 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:10:07.863292) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 19:10:07 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 59] Try to delete WAL files size 1440957, prev total WAL file size 1442250, number of live WAL files 2.
Dec  3 19:10:07 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000099.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 19:10:07 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:10:07.867122) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034303136' seq:72057594037927935, type:22 .. '7061786F730034323638' seq:0, type:0; will stop at (end)
Dec  3 19:10:07 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 60] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 19:10:07 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 59 Base level 0, inputs: [103(1398KB)], [101(9619KB)]
Dec  3 19:10:07 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764789007867268, "job": 60, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [103], "files_L6": [101], "score": -1, "input_data_size": 11282229, "oldest_snapshot_seqno": -1}
Dec  3 19:10:07 compute-0 podman[464874]: 2025-12-03 19:10:07.878965026 +0000 UTC m=+2.929271410 container remove 1ba8765692aebdc9fa6ffdfe1006cc9eefe8005f28038de32da39bced2fe2137 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_saha, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 19:10:07 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 19:10:07 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 60] Generated table #104: 5957 keys, 9545405 bytes, temperature: kUnknown
Dec  3 19:10:07 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764789007963295, "cf_name": "default", "job": 60, "event": "table_file_creation", "file_number": 104, "file_size": 9545405, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9505150, "index_size": 24273, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14917, "raw_key_size": 155106, "raw_average_key_size": 26, "raw_value_size": 9396952, "raw_average_value_size": 1577, "num_data_blocks": 968, "num_entries": 5957, "num_filter_entries": 5957, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764784942, "oldest_key_time": 0, "file_creation_time": 1764789007, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a1ac3b74-8599-4a51-8b4c-6fd35a134427", "db_session_id": "TYOLZSJOOVNJYKF8Y1CE", "orig_file_number": 104, "seqno_to_time_mapping": "N/A"}}
Dec  3 19:10:07 compute-0 ceph-mon[192802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 19:10:07 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:10:07.963618) [db/compaction/compaction_job.cc:1663] [default] [JOB 60] Compacted 1@0 + 1@6 files to L6 => 9545405 bytes
Dec  3 19:10:07 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:10:07.966630) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 117.4 rd, 99.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 9.4 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(14.5) write-amplify(6.7) OK, records in: 6471, records dropped: 514 output_compression: NoCompression
Dec  3 19:10:07 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:10:07.966651) EVENT_LOG_v1 {"time_micros": 1764789007966640, "job": 60, "event": "compaction_finished", "compaction_time_micros": 96095, "compaction_time_cpu_micros": 52913, "output_level": 6, "num_output_files": 1, "total_output_size": 9545405, "num_input_records": 6471, "num_output_records": 5957, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 19:10:07 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000103.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 19:10:07 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764789007967304, "job": 60, "event": "table_file_deletion", "file_number": 103}
Dec  3 19:10:07 compute-0 systemd[1]: libpod-conmon-1ba8765692aebdc9fa6ffdfe1006cc9eefe8005f28038de32da39bced2fe2137.scope: Deactivated successfully.
Dec  3 19:10:07 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:10:07 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000101.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 19:10:07 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764789007970411, "job": 60, "event": "table_file_deletion", "file_number": 101}
Dec  3 19:10:07 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:10:07.866136) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:10:07 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:10:07.970569) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:10:07 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:10:07.970579) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:10:07 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:10:07.970581) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:10:07 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:10:07.970583) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:10:07 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:10:07.970585) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:10:07 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 19:10:07 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:10:07 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev a4b92e09-2d71-4aa9-a773-b5deff861a38 does not exist
Dec  3 19:10:07 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev b7c1154b-455a-47de-a941-63786e4bfa47 does not exist
Dec  3 19:10:08 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2137: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 8.5 KiB/s wr, 4 op/s
Dec  3 19:10:08 compute-0 nova_compute[348325]: 2025-12-03 19:10:08.765 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:10:08 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:10:08 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:10:09 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:10:10 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2138: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 8.6 KiB/s wr, 5 op/s
Dec  3 19:10:10 compute-0 podman[464985]: 2025-12-03 19:10:10.941044523 +0000 UTC m=+0.106720528 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec  3 19:10:10 compute-0 podman[464984]: 2025-12-03 19:10:10.961227713 +0000 UTC m=+0.128081146 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, version=9.4, io.openshift.expose-services=, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, build-date=2024-09-18T21:23:30, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., release-0.7.12=, container_name=kepler, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, distribution-scope=public, maintainer=Red Hat, Inc., managed_by=edpm_ansible)
Dec  3 19:10:10 compute-0 podman[464986]: 2025-12-03 19:10:10.966506068 +0000 UTC m=+0.119660985 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 19:10:12 compute-0 nova_compute[348325]: 2025-12-03 19:10:12.099 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:10:12 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2139: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 8.6 KiB/s wr, 5 op/s
Dec  3 19:10:13 compute-0 nova_compute[348325]: 2025-12-03 19:10:13.769 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:10:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:10:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:10:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:10:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:10:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:10:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:10:14 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_19:10:14
Dec  3 19:10:14 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 19:10:14 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 19:10:14 compute-0 ceph-mgr[193091]: [balancer INFO root] pools ['images', 'default.rgw.control', 'cephfs.cephfs.meta', '.rgw.root', '.mgr', 'default.rgw.log', 'cephfs.cephfs.data', 'backups', 'volumes', 'vms', 'default.rgw.meta']
Dec  3 19:10:14 compute-0 ceph-mgr[193091]: [balancer INFO root] prepared 0/10 changes
Dec  3 19:10:14 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2140: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 8.5 KiB/s wr, 2 op/s
Dec  3 19:10:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 19:10:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 19:10:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 19:10:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 19:10:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 19:10:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 19:10:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 19:10:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 19:10:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 19:10:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 19:10:14 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:10:16 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2141: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s rd, 8.5 KiB/s wr, 1 op/s
Dec  3 19:10:17 compute-0 nova_compute[348325]: 2025-12-03 19:10:17.101 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:10:18 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2142: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 8.4 KiB/s wr, 0 op/s
Dec  3 19:10:18 compute-0 nova_compute[348325]: 2025-12-03 19:10:18.774 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:10:18 compute-0 podman[465041]: 2025-12-03 19:10:18.946896109 +0000 UTC m=+0.111678287 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 19:10:19 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:10:20 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2143: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Dec  3 19:10:22 compute-0 nova_compute[348325]: 2025-12-03 19:10:22.104 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:10:22 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2144: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 0 op/s
Dec  3 19:10:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:10:23.368 286999 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 19:10:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:10:23.369 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 19:10:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:10:23.371 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 19:10:23 compute-0 nova_compute[348325]: 2025-12-03 19:10:23.777 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:10:24 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2145: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 341 B/s wr, 0 op/s
Dec  3 19:10:24 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:10:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 19:10:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:10:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 19:10:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:10:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015202630518655988 of space, bias 1.0, pg target 0.45607891555967967 quantized to 32 (current 32)
Dec  3 19:10:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:10:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:10:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:10:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:10:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:10:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Dec  3 19:10:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:10:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 19:10:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:10:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:10:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:10:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 19:10:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:10:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 19:10:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:10:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:10:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:10:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 19:10:24 compute-0 podman[465065]: 2025-12-03 19:10:24.991992928 +0000 UTC m=+0.143304428 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec  3 19:10:25 compute-0 podman[465064]: 2025-12-03 19:10:25.010032797 +0000 UTC m=+0.181101646 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
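These container health_status events are emitted each time podman's healthcheck timer runs the 'test' command configured in config_data. The same status can be read back with podman inspect; a sketch, assuming the podman 4.x field layout (State.Health; older releases expose it as Healthcheck):

    import json, subprocess

    # Container name taken from the log line above.
    data = json.loads(subprocess.check_output(
        ["podman", "inspect", "ceilometer_agent_compute"]))
    state = data[0]["State"]["Health"]
    print(state["Status"], state["FailingStreak"])   # e.g. healthy 0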
Dec  3 19:10:25 compute-0 nova_compute[348325]: 2025-12-03 19:10:25.186 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:10:26 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2146: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 341 B/s wr, 0 op/s
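The recurring ceph-mgr pgmap lines are the cluster heartbeat: PG counts and states, usage, and client throughput. A quick way to pull fields out of them, assuming a single PG state per line as in this capture:

    import re

    line = ("pgmap v2146: 321 pgs: 321 active+clean; 236 MiB data, "
            "392 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 341 B/s wr, 0 op/s")
    m = re.search(r"pgmap v(\d+): (\d+) pgs: (\d+) (\S+); (.+?) data, (.+?) used", line)
    version, pgs, n_state, state, data, used = m.groups()
    print(version, pgs, state, used)   # 2146 321 active+clean 392 MiB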
Dec  3 19:10:26 compute-0 nova_compute[348325]: 2025-12-03 19:10:26.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:10:27 compute-0 nova_compute[348325]: 2025-12-03 19:10:27.108 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:10:28 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2147: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 7.3 KiB/s wr, 0 op/s
Dec  3 19:10:28 compute-0 nova_compute[348325]: 2025-12-03 19:10:28.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:10:28 compute-0 nova_compute[348325]: 2025-12-03 19:10:28.780 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:10:29 compute-0 nova_compute[348325]: 2025-12-03 19:10:29.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
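The "Running periodic task" lines come from oslo.service's periodic task machinery: decorated methods on the ComputeManager are dispatched by run_periodic_tasks once their spacing elapses. A self-contained sketch of that pattern (the task body here is a stand-in, not Nova code):

    from oslo_config import cfg
    from oslo_service import periodic_task

    class DemoManager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(cfg.CONF)

        @periodic_task.periodic_task(spacing=60)
        def _check_instance_build_time(self, context):
            print("periodic task ran")   # Nova's real task expires stuck builds

    mgr = DemoManager()
    mgr.run_periodic_tasks(context=None)   # fires tasks whose interval has elapsed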
Dec  3 19:10:29 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
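The _set_new_cache_sizes line records the monitor's memory autotuning splitting a roughly 0.95 GiB cache budget between OSDMap increments, full maps, and the RocksDB cache. Checking the logged numbers:

    cache_size = 1020054731                 # total budget, ~0.95 GiB
    inc_alloc = full_alloc = 348127232      # 332 MiB each
    kv_alloc = 318767104                    # 304 MiB for the RocksDB block cache
    print(round(cache_size / 2**30, 2))                       # 0.95
    print(inc_alloc + full_alloc + kv_alloc <= cache_size)    # True: the three fit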
Dec  3 19:10:29 compute-0 podman[158200]: time="2025-12-03T19:10:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 19:10:29 compute-0 podman[158200]: @ - - [03/Dec/2025:19:10:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 19:10:29 compute-0 podman[158200]: @ - - [03/Dec/2025:19:10:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8655 "" "Go-http-client/1.1"
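The podman[158200] access-log lines show a metrics client (the podman_exporter seen later in this capture) polling the libpod REST API roughly every 30 seconds. The same query can be reproduced over the podman unix socket with only the standard library; the socket path below is the conventional root location and is an assumption:

    import http.client, json, socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, socket_path):
            super().__init__("localhost")
            self._socket_path = socket_path
        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._socket_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")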
Dec  3 19:10:30 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2148: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 7.3 KiB/s wr, 0 op/s
Dec  3 19:10:31 compute-0 openstack_network_exporter[365222]: ERROR   19:10:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 19:10:31 compute-0 openstack_network_exporter[365222]: ERROR   19:10:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:10:31 compute-0 openstack_network_exporter[365222]: ERROR   19:10:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:10:31 compute-0 openstack_network_exporter[365222]: ERROR   19:10:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 19:10:31 compute-0 openstack_network_exporter[365222]: ERROR   19:10:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
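The exporter errors above are expected on a compute node: appctl-style calls look for a <daemon>.<pid>.ctl control socket under the daemon's run directory, and ovn-northd only runs on controller nodes, so no socket exists here (likewise no userspace datapath for the pmd-perf queries). A sketch of that lookup, with run directories assumed from the container volume mounts seen elsewhere in this log:

    import glob

    for rundir, daemon in [("/run/openvswitch", "ovsdb-server"),
                           ("/run/ovn", "ovn-northd")]:
        socks = glob.glob(f"{rundir}/{daemon}.*.ctl")
        print(daemon, "->", socks or "no control socket files found")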
Dec  3 19:10:31 compute-0 nova_compute[348325]: 2025-12-03 19:10:31.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:10:32 compute-0 nova_compute[348325]: 2025-12-03 19:10:32.111 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:10:32 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2149: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 7.3 KiB/s wr, 0 op/s
Dec  3 19:10:32 compute-0 nova_compute[348325]: 2025-12-03 19:10:32.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:10:33 compute-0 nova_compute[348325]: 2025-12-03 19:10:33.782 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:10:34 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2150: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail; 7.3 KiB/s wr, 0 op/s
Dec  3 19:10:34 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:10:34 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #105. Immutable memtables: 0.
Dec  3 19:10:34 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:10:34.734389) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 19:10:34 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 61] Flushing memtable with next log file: 105
Dec  3 19:10:34 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764789034734427, "job": 61, "event": "flush_started", "num_memtables": 1, "num_entries": 467, "num_deletes": 250, "total_data_size": 450881, "memory_usage": 459168, "flush_reason": "Manual Compaction"}
Dec  3 19:10:34 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 61] Level-0 flush table #106: started
Dec  3 19:10:34 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764789034742044, "cf_name": "default", "job": 61, "event": "table_file_creation", "file_number": 106, "file_size": 341689, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 44032, "largest_seqno": 44498, "table_properties": {"data_size": 339187, "index_size": 602, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6722, "raw_average_key_size": 20, "raw_value_size": 334089, "raw_average_value_size": 1015, "num_data_blocks": 27, "num_entries": 329, "num_filter_entries": 329, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764789007, "oldest_key_time": 1764789007, "file_creation_time": 1764789034, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a1ac3b74-8599-4a51-8b4c-6fd35a134427", "db_session_id": "TYOLZSJOOVNJYKF8Y1CE", "orig_file_number": 106, "seqno_to_time_mapping": "N/A"}}
Dec  3 19:10:34 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 61] Flush lasted 7770 microseconds, and 3318 cpu microseconds.
Dec  3 19:10:34 compute-0 ceph-mon[192802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 19:10:34 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:10:34.742152) [db/flush_job.cc:967] [default] [JOB 61] Level-0 flush table #106: 341689 bytes OK
Dec  3 19:10:34 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:10:34.742184) [db/memtable_list.cc:519] [default] Level-0 commit table #106 started
Dec  3 19:10:34 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:10:34.745734) [db/memtable_list.cc:722] [default] Level-0 commit table #106: memtable #1 done
Dec  3 19:10:34 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:10:34.745760) EVENT_LOG_v1 {"time_micros": 1764789034745752, "job": 61, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 19:10:34 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:10:34.745784) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 19:10:34 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 61] Try to delete WAL files size 448106, prev total WAL file size 448106, number of live WAL files 2.
Dec  3 19:10:34 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000102.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 19:10:34 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:10:34.747004) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031373533' seq:72057594037927935, type:22 .. '6D6772737461740032303034' seq:0, type:0; will stop at (end)
Dec  3 19:10:34 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 62] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 19:10:34 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 61 Base level 0, inputs: [106(333KB)], [104(9321KB)]
Dec  3 19:10:34 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764789034747115, "job": 62, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [106], "files_L6": [104], "score": -1, "input_data_size": 9887094, "oldest_snapshot_seqno": -1}
Dec  3 19:10:34 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 62] Generated table #107: 5784 keys, 6692447 bytes, temperature: kUnknown
Dec  3 19:10:34 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764789034803563, "cf_name": "default", "job": 62, "event": "table_file_creation", "file_number": 107, "file_size": 6692447, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6657863, "index_size": 19050, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14469, "raw_key_size": 151670, "raw_average_key_size": 26, "raw_value_size": 6557129, "raw_average_value_size": 1133, "num_data_blocks": 751, "num_entries": 5784, "num_filter_entries": 5784, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764784942, "oldest_key_time": 0, "file_creation_time": 1764789034, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a1ac3b74-8599-4a51-8b4c-6fd35a134427", "db_session_id": "TYOLZSJOOVNJYKF8Y1CE", "orig_file_number": 107, "seqno_to_time_mapping": "N/A"}}
Dec  3 19:10:34 compute-0 ceph-mon[192802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 19:10:34 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:10:34.803786) [db/compaction/compaction_job.cc:1663] [default] [JOB 62] Compacted 1@0 + 1@6 files to L6 => 6692447 bytes
Dec  3 19:10:34 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:10:34.806207) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 175.0 rd, 118.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 9.1 +0.0 blob) out(6.4 +0.0 blob), read-write-amplify(48.5) write-amplify(19.6) OK, records in: 6286, records dropped: 502 output_compression: NoCompression
Dec  3 19:10:34 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:10:34.806228) EVENT_LOG_v1 {"time_micros": 1764789034806218, "job": 62, "event": "compaction_finished", "compaction_time_micros": 56494, "compaction_time_cpu_micros": 38581, "output_level": 6, "num_output_files": 1, "total_output_size": 6692447, "num_input_records": 6286, "num_output_records": 5784, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 19:10:34 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000106.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 19:10:34 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764789034806526, "job": 62, "event": "table_file_deletion", "file_number": 106}
Dec  3 19:10:34 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000104.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 19:10:34 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764789034809729, "job": 62, "event": "table_file_deletion", "file_number": 104}
Dec  3 19:10:34 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:10:34.746600) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:10:34 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:10:34.810001) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:10:34 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:10:34.810009) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:10:34 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:10:34.810014) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:10:34 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:10:34.810019) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:10:34 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:10:34.810024) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
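The rocksdb block above is one complete flush-plus-manual-compaction cycle on the mon store: memtable log #105 is flushed to L0 table #106, job 62 compacts it with L6 table #104 into #107, and both inputs are deleted. The EVENT_LOG_v1 payloads are plain JSON after the marker, so the cycle can be summarized mechanically; a sketch using one event copied from the log:

    import json, re

    def iter_events(lines):
        for line in lines:
            m = re.search(r"EVENT_LOG_v1 (\{.*\})", line)
            if m:
                yield json.loads(m.group(1))

    log = ['rocksdb: EVENT_LOG_v1 {"time_micros": 1764789034806218, "job": 62, '
           '"event": "compaction_finished", "compaction_time_micros": 56494, '
           '"num_input_records": 6286, "num_output_records": 5784}']
    for ev in iter_events(log):
        if ev["event"] == "compaction_finished":
            dropped = ev["num_input_records"] - ev["num_output_records"]
            print(f"job {ev['job']}: dropped {dropped} records "
                  f"in {ev['compaction_time_micros'] / 1e6:.3f}s")
    # -> job 62: dropped 502 records in 0.056s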
Dec  3 19:10:34 compute-0 podman[465109]: 2025-12-03 19:10:34.947935003 +0000 UTC m=+0.099115828 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3)
Dec  3 19:10:34 compute-0 podman[465111]: 2025-12-03 19:10:34.949885579 +0000 UTC m=+0.102809005 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, distribution-scope=public, managed_by=edpm_ansible, version=9.6, architecture=x86_64, io.buildah.version=1.33.7, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., release=1755695350, io.openshift.tags=minimal rhel9, vcs-type=git)
Dec  3 19:10:34 compute-0 podman[465110]: 2025-12-03 19:10:34.968505112 +0000 UTC m=+0.123187090 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  3 19:10:36 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2151: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s wr, 0 op/s
Dec  3 19:10:36 compute-0 nova_compute[348325]: 2025-12-03 19:10:36.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:10:36 compute-0 nova_compute[348325]: 2025-12-03 19:10:36.487 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  3 19:10:36 compute-0 nova_compute[348325]: 2025-12-03 19:10:36.487 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  3 19:10:37 compute-0 nova_compute[348325]: 2025-12-03 19:10:37.114 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:10:37 compute-0 nova_compute[348325]: 2025-12-03 19:10:37.568 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "refresh_cache-a4fc45c7-44e4-4b50-a3e0-98de13268f88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  3 19:10:37 compute-0 nova_compute[348325]: 2025-12-03 19:10:37.569 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquired lock "refresh_cache-a4fc45c7-44e4-4b50-a3e0-98de13268f88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  3 19:10:37 compute-0 nova_compute[348325]: 2025-12-03 19:10:37.569 348329 DEBUG nova.network.neutron [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec  3 19:10:37 compute-0 nova_compute[348325]: 2025-12-03 19:10:37.570 348329 DEBUG nova.objects.instance [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lazy-loading 'info_cache' on Instance uuid a4fc45c7-44e4-4b50-a3e0-98de13268f88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  3 19:10:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 19:10:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/874345631' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 19:10:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 19:10:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/874345631' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 19:10:38 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2152: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s wr, 1 op/s
Dec  3 19:10:38 compute-0 nova_compute[348325]: 2025-12-03 19:10:38.786 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:10:39 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:10:40 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2153: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Dec  3 19:10:40 compute-0 nova_compute[348325]: 2025-12-03 19:10:40.624 348329 DEBUG nova.network.neutron [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Updating instance_info_cache with network_info: [{"id": "cf729fa8-9549-4bf2-9858-7e8de773e1bc", "address": "fa:16:3e:8d:91:4c", "network": {"id": "04e258c0-609e-4010-a306-af20506c3a9d", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.160", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d29cef7b24ee4d30b2b3f5027ec6aafb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf729fa8-95", "ovs_interfaceid": "cf729fa8-9549-4bf2-9858-7e8de773e1bc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  3 19:10:40 compute-0 nova_compute[348325]: 2025-12-03 19:10:40.641 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Releasing lock "refresh_cache-a4fc45c7-44e4-4b50-a3e0-98de13268f88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  3 19:10:40 compute-0 nova_compute[348325]: 2025-12-03 19:10:40.641 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
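The heal sequence above (Acquiring / Acquired ... Releasing) serializes cache refreshes per instance on a "refresh_cache-<uuid>" lock through oslo.concurrency. A minimal sketch of that locking pattern, with a stand-in body in place of the Neutron refresh:

    from oslo_concurrency import lockutils

    instance_uuid = "a4fc45c7-44e4-4b50-a3e0-98de13268f88"   # from the log

    @lockutils.synchronized(f"refresh_cache-{instance_uuid}")
    def heal_info_cache():
        print("forcefully refreshing network info cache")

    heal_info_cache()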
Dec  3 19:10:41 compute-0 nova_compute[348325]: 2025-12-03 19:10:41.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:10:41 compute-0 nova_compute[348325]: 2025-12-03 19:10:41.487 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  3 19:10:41 compute-0 podman[465173]: 2025-12-03 19:10:41.956720211 +0000 UTC m=+0.091698611 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  3 19:10:41 compute-0 podman[465172]: 2025-12-03 19:10:41.985064225 +0000 UTC m=+0.123421775 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 19:10:41 compute-0 podman[465171]: 2025-12-03 19:10:41.986936289 +0000 UTC m=+0.136602008 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, release=1214.1726694543, release-0.7.12=, vcs-type=git, io.openshift.expose-services=, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.component=ubi9-container, container_name=kepler)
Dec  3 19:10:42 compute-0 nova_compute[348325]: 2025-12-03 19:10:42.119 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:10:42 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2154: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Dec  3 19:10:43 compute-0 nova_compute[348325]: 2025-12-03 19:10:43.788 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:10:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:10:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:10:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:10:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:10:44 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:10:44 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:10:44 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2155: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Dec  3 19:10:44 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:10:46 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2156: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Dec  3 19:10:47 compute-0 nova_compute[348325]: 2025-12-03 19:10:47.122 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:10:47 compute-0 nova_compute[348325]: 2025-12-03 19:10:47.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:10:47 compute-0 nova_compute[348325]: 2025-12-03 19:10:47.529 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 19:10:47 compute-0 nova_compute[348325]: 2025-12-03 19:10:47.529 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 19:10:47 compute-0 nova_compute[348325]: 2025-12-03 19:10:47.530 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 19:10:47 compute-0 nova_compute[348325]: 2025-12-03 19:10:47.530 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  3 19:10:47 compute-0 nova_compute[348325]: 2025-12-03 19:10:47.531 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 19:10:48 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 19:10:48 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/358054479' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 19:10:48 compute-0 nova_compute[348325]: 2025-12-03 19:10:48.033 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
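update_available_resource shells out to the exact command logged above to size the RBD-backed disk pool; the ceph-mon handle_command/audit lines at 19:10:48 are the server side of the same call. A sketch of running it and reading the fields of interest (requires the client.openstack keyring referenced by the command):

    import json, subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]
    print(f"avail: {stats['total_avail_bytes'] / 2**30:.1f} GiB")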
Dec  3 19:10:48 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2157: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Dec  3 19:10:48 compute-0 nova_compute[348325]: 2025-12-03 19:10:48.369 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 19:10:48 compute-0 nova_compute[348325]: 2025-12-03 19:10:48.369 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 19:10:48 compute-0 nova_compute[348325]: 2025-12-03 19:10:48.377 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 19:10:48 compute-0 nova_compute[348325]: 2025-12-03 19:10:48.377 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 19:10:48 compute-0 nova_compute[348325]: 2025-12-03 19:10:48.747 348329 WARNING nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  3 19:10:48 compute-0 nova_compute[348325]: 2025-12-03 19:10:48.748 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3507MB free_disk=59.89699935913086GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  3 19:10:48 compute-0 nova_compute[348325]: 2025-12-03 19:10:48.749 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 19:10:48 compute-0 nova_compute[348325]: 2025-12-03 19:10:48.749 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 19:10:48 compute-0 nova_compute[348325]: 2025-12-03 19:10:48.791 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:10:48 compute-0 nova_compute[348325]: 2025-12-03 19:10:48.880 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance a4fc45c7-44e4-4b50-a3e0-98de13268f88 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  3 19:10:48 compute-0 nova_compute[348325]: 2025-12-03 19:10:48.881 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance a364994c-8442-4a4c-bd6b-f3a2d31e4483 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  3 19:10:48 compute-0 nova_compute[348325]: 2025-12-03 19:10:48.881 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  3 19:10:48 compute-0 nova_compute[348325]: 2025-12-03 19:10:48.881 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  3 19:10:48 compute-0 nova_compute[348325]: 2025-12-03 19:10:48.965 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 19:10:49 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 19:10:49 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2080942008' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 19:10:49 compute-0 nova_compute[348325]: 2025-12-03 19:10:49.491 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 19:10:49 compute-0 nova_compute[348325]: 2025-12-03 19:10:49.500 348329 DEBUG nova.compute.provider_tree [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  3 19:10:49 compute-0 nova_compute[348325]: 2025-12-03 19:10:49.528 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  3 19:10:49 compute-0 nova_compute[348325]: 2025-12-03 19:10:49.531 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  3 19:10:49 compute-0 nova_compute[348325]: 2025-12-03 19:10:49.532 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.783s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
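The audit closes by confirming the placement inventory is unchanged. The capacity the scheduler actually sees follows from the logged inventory as (total - reserved) * allocation_ratio (the ordering here is assumed from placement's usual capacity formula):

    inventory = {   # copied from the report.py line above
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2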
Dec  3 19:10:49 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:10:49 compute-0 podman[465273]: 2025-12-03 19:10:49.99591959 +0000 UTC m=+0.139964019 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 19:10:50 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2158: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail; 682 B/s wr, 0 op/s
Dec  3 19:10:51 compute-0 nova_compute[348325]: 2025-12-03 19:10:51.524 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:10:52 compute-0 nova_compute[348325]: 2025-12-03 19:10:52.125 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:10:52 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2159: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:10:53 compute-0 nova_compute[348325]: 2025-12-03 19:10:53.794 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:10:54 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2160: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec  3 19:10:54 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:10:55 compute-0 podman[465298]: 2025-12-03 19:10:55.960612333 +0000 UTC m=+0.117342451 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Dec  3 19:10:56 compute-0 podman[465297]: 2025-12-03 19:10:56.016660856 +0000 UTC m=+0.170478604 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Dec  3 19:10:56 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2161: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec  3 19:10:57 compute-0 nova_compute[348325]: 2025-12-03 19:10:57.129 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:10:58 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2162: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec  3 19:10:58 compute-0 nova_compute[348325]: 2025-12-03 19:10:58.799 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:10:59 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:10:59 compute-0 podman[158200]: time="2025-12-03T19:10:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 19:10:59 compute-0 podman[158200]: @ - - [03/Dec/2025:19:10:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 19:10:59 compute-0 podman[158200]: @ - - [03/Dec/2025:19:10:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8642 "" "Go-http-client/1.1"
Dec  3 19:11:00 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2163: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec  3 19:11:01 compute-0 openstack_network_exporter[365222]: ERROR   19:11:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:11:01 compute-0 openstack_network_exporter[365222]: ERROR   19:11:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:11:01 compute-0 openstack_network_exporter[365222]: ERROR   19:11:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 19:11:01 compute-0 openstack_network_exporter[365222]: ERROR   19:11:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 19:11:01 compute-0 openstack_network_exporter[365222]: ERROR   19:11:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
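The openstack_network_exporter errors above share one root cause: the exporter probes for appctl control sockets (ovn-northd, ovsdb-server, and the vswitchd userspace-datapath commands) that do not exist on this node, e.g. because ovn-northd runs on the controllers rather than on a compute host. A hedged pre-flight check along the same lines; the glob patterns are assumptions based on the default OVS/OVN runtime directories and may need OVS_RUNDIR/OVN_RUNDIR overrides:

import glob

# Assumed default runtime dirs; adjust if OVS_RUNDIR / OVN_RUNDIR are set.
CHECKS = {
    "ovsdb-server": "/var/run/openvswitch/ovsdb-server.*.ctl",
    "ovs-vswitchd": "/var/run/openvswitch/ovs-vswitchd.*.ctl",
    "ovn-northd": "/var/run/ovn/ovn-northd.*.ctl",
}

for daemon, pattern in CHECKS.items():
    sockets = glob.glob(pattern)
    if sockets:
        print(f"{daemon}: control socket {sockets[0]}")
    else:
        # Mirrors the exporter's failure mode above: skip appctl calls
        # when no control socket files are found for the daemon.
        print(f"{daemon}: no control socket files found")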
Dec  3 19:11:02 compute-0 nova_compute[348325]: 2025-12-03 19:11:02.133 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:11:02 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2164: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec  3 19:11:03 compute-0 nova_compute[348325]: 2025-12-03 19:11:03.803 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:11:04 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2165: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec  3 19:11:04 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:11:05 compute-0 podman[465346]: 2025-12-03 19:11:05.957941705 +0000 UTC m=+0.095717638 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  3 19:11:05 compute-0 podman[465347]: 2025-12-03 19:11:05.983717807 +0000 UTC m=+0.114991025 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.buildah.version=1.33.7, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_id=edpm)
Dec  3 19:11:06 compute-0 podman[465345]: 2025-12-03 19:11:06.013082445 +0000 UTC m=+0.151716058 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2)
Dec  3 19:11:06 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2166: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:11:07 compute-0 nova_compute[348325]: 2025-12-03 19:11:07.148 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:11:08 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2167: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:11:08 compute-0 nova_compute[348325]: 2025-12-03 19:11:08.806 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:11:09 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:11:09 compute-0 podman[465573]: 2025-12-03 19:11:09.870203878 +0000 UTC m=+0.166074330 container exec c4418ca0ee5df95c133db330bc8714b98e7c86be83b29540d0d4d94c3c723743 (image=quay.io/ceph/ceph:v18, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 19:11:10 compute-0 podman[465573]: 2025-12-03 19:11:10.002645158 +0000 UTC m=+0.298515580 container exec_died c4418ca0ee5df95c133db330bc8714b98e7c86be83b29540d0d4d94c3c723743 (image=quay.io/ceph/ceph:v18, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  3 19:11:10 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2168: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:11:11 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 19:11:11 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:11:11 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 19:11:11 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:11:11 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:11:11 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:11:12 compute-0 nova_compute[348325]: 2025-12-03 19:11:12.153 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:11:12 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2169: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:11:12 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 19:11:12 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 19:11:12 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 19:11:12 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 19:11:12 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 19:11:12 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:11:12 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev a7c43ebf-60d4-4ac2-835b-1ac867df4b08 does not exist
Dec  3 19:11:12 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 9f969fc6-d206-4900-abb9-8a4bb89a4480 does not exist
Dec  3 19:11:12 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 8ab0956a-2452-498b-bdd2-cb76e69af48e does not exist
Dec  3 19:11:12 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 19:11:12 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 19:11:12 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 19:11:12 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 19:11:12 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 19:11:12 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
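The mon_command dispatches above come from the cephadm mgr module talking to the monitor: config generate-minimal-conf, auth get for client.admin and client.bootstrap-osd, and an osd tree query filtered to destroyed OSDs (part of its OSD-replacement scan). A hedged sketch of issuing the same structured command through the rados Python binding; the conffile path is an assumption, and whether destroyed OSDs carry status "destroyed" in the JSON nodes is based on the usual osd tree output:

import json
import rados

# Assumed admin client configuration for this host.
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()

# Same payload the audit log records being dispatched above.
cmd = json.dumps({"prefix": "osd tree", "states": ["destroyed"], "format": "json"})
ret, outbuf, errs = cluster.mon_command(cmd, b"")
if ret == 0:
    tree = json.loads(outbuf)
    # Belt and braces: the states filter should already restrict the nodes.
    destroyed = [n for n in tree.get("nodes", []) if n.get("status") == "destroyed"]
    print(destroyed)
cluster.shutdown()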
Dec  3 19:11:12 compute-0 podman[465874]: 2025-12-03 19:11:12.919502192 +0000 UTC m=+0.120589368 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  3 19:11:12 compute-0 podman[465873]: 2025-12-03 19:11:12.937964411 +0000 UTC m=+0.139255732 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., managed_by=edpm_ansible, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, maintainer=Red Hat, Inc., version=9.4, io.openshift.tags=base rhel9, release-0.7.12=, architecture=x86_64, com.redhat.component=ubi9-container, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9)
Dec  3 19:11:12 compute-0 podman[465875]: 2025-12-03 19:11:12.937799957 +0000 UTC m=+0.126131840 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.259 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is larger than the number of worker threads available to execute them. Therefore, one can expect the polling process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.262 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.262 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.264 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7eff8d7fffe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.266 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.266 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff9026f920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.267 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.267 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.267 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffa10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.267 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8daba2d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.268 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.268 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff90799b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.268 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.268 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8f46ebd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.268 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.268 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.269 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.269 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.283 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.285 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.285 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.286 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.286 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.286 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.287 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.287 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7fff50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.288 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.288 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7fffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.289 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8ef7c7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
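The burst of "Registering pollster ... to be executed via executor" lines reflects the polling manager fanning every pollster in the [pollsters] source out to one shared ThreadPoolExecutor; per the warning earlier, there are more pollsters than worker threads, so submissions queue. A minimal sketch of that fan-out pattern under the one-thread configuration logged above; the names here are illustrative, not ceilometer's actual API:

from concurrent.futures import ThreadPoolExecutor

def run_pollster(name, cycle_cache):
    """Stand-in for one pollster's discovery plus sample collection."""
    # Real pollsters share the per-cycle cache and discovery cache.
    return f"{name}: polled with cache={cycle_cache}"

pollsters = [
    "network.incoming.packets.error",
    "network.outgoing.bytes",
    "network.outgoing.packets",
    "disk.device.capacity",
]
cycle_cache = {}

# max_workers=1 matches "[1] threads" above: registrations queue and
# execute one at a time, which is why the run takes longer than expected.
with ThreadPoolExecutor(max_workers=1) as executor:
    futures = [executor.submit(run_pollster, p, cycle_cache) for p in pollsters]
    for f in futures:
        print(f.result())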
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.295 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a4fc45c7-44e4-4b50-a3e0-98de13268f88', 'name': 'te-0714371-asg-eacwc356yfed-wjjibmhqaqmp-wkbbxaqu3pya', 'flavor': {'id': 'a94cfbfb-a20a-4689-ac91-e7436db75880', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '29e9e995-880d-46f8-bdd0-149d4e107ea9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000c', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'd29cef7b24ee4d30b2b3f5027ec6aafb', 'user_id': '5b5e6c2a7cce4e3b96611203def80123', 'hostId': 'd87badab98086e7cd0aaefe9beb8cbc86d59712043f354b2bb8c77be', 'status': 'active', 'metadata': {'metering.server_group': 'd721c97c-b9eb-44f9-a826-1b99239b172a'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.302 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a364994c-8442-4a4c-bd6b-f3a2d31e4483', 'name': 'te-0714371-asg-eacwc356yfed-ehdrupxp3h3u-navxh3tm2qn5', 'flavor': {'id': 'a94cfbfb-a20a-4689-ac91-e7436db75880', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '29e9e995-880d-46f8-bdd0-149d4e107ea9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'd29cef7b24ee4d30b2b3f5027ec6aafb', 'user_id': '5b5e6c2a7cce4e3b96611203def80123', 'hostId': 'd87badab98086e7cd0aaefe9beb8cbc86d59712043f354b2bb8c77be', 'status': 'active', 'metadata': {'metering.server_group': 'd721c97c-b9eb-44f9-a826-1b99239b172a'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
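The two discover_libvirt_polling payloads above carry everything the compute pollsters need per instance: Nova UUID, flavor, image, the libvirt domain name (OS-EXT-SRV-ATTR:instance_name), and metering metadata. A hedged illustration of reading the interesting fields out of one such payload; the dict literal is abbreviated from the first payload logged above:

# Abbreviated copy of the first instance payload above (illustrative).
instance = {
    "id": "a4fc45c7-44e4-4b50-a3e0-98de13268f88",
    "flavor": {"name": "m1.nano", "vcpus": 1, "ram": 128, "disk": 1},
    "OS-EXT-SRV-ATTR:instance_name": "instance-0000000c",
    "status": "active",
    "metadata": {"metering.server_group": "d721c97c-b9eb-44f9-a826-1b99239b172a"},
}

# The libvirt domain is addressed via the instance_name attribute,
# while samples are keyed on the Nova UUID.
domain = instance["OS-EXT-SRV-ATTR:instance_name"]
flavor = instance["flavor"]
print(f"{instance['id']} -> {domain} "
      f"({flavor['name']}, {flavor['vcpus']} vCPU, {flavor['ram']} MiB RAM)")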
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.303 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.303 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.303 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.304 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.306 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-03T19:11:13.304061) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.312 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.319 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.320 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.320 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7eff8d8a80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.320 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.320 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.321 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.321 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.321 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.321 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.322 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.322 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7eff8d8a8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.322 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.322 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff9026f920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.322 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff9026f920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.323 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.323 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.323 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.324 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.324 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7eff8d8a8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.324 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.324 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.324 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.325 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.325 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.325 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/network.outgoing.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.326 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
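Note the distinction visible above: network.outgoing.bytes is a cumulative counter (2250 for both instances), while network.outgoing.bytes.delta is the difference against the previous cycle (0 for one instance, 630 for the other). A hedged sketch of deriving a delta from successive cumulative readings; the prior reading of 1620 is assumed for illustration, chosen so the arithmetic matches the 630 logged above:

previous = {}  # instance id -> last cumulative reading

def delta_sample(instance_id, cumulative):
    """Return bytes since the previous poll, or None on the first cycle."""
    last = previous.get(instance_id)
    previous[instance_id] = cumulative
    if last is None:
        return None  # first cycle only establishes the baseline
    return max(cumulative - last, 0)  # guard against counter resets

print(delta_sample("a364994c", 1620))  # None: baseline (assumed prior reading)
print(delta_sample("a364994c", 2250))  # 630, mirroring the delta logged above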
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.326 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7eff8d8a81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.326 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.326 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7eff8d7ff9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.326 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.326 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ffa10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.326 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ffa10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.327 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.327 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/network.incoming.bytes volume: 2150 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.327 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-03T19:11:13.321233) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.328 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-03T19:11:13.323030) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.328 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-03T19:11:13.325040) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.328 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.328 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.328 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7eff8d7fe840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.329 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.329 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8daba2d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.329 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8daba2d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.329 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.331 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-03T19:11:13.327370) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.332 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-03T19:11:13.329607) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.350 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.351 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.374 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.375 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.375 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.376 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7eff8d8a82c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.376 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.376 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a82f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.376 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a82f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.376 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.376 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.376 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.377 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.377 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7eff8d7ff9b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.377 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.377 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff90799b20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.378 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff90799b20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.377 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-03T19:11:13.376381) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.378 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.378 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-03T19:11:13.378094) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:11:13 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 19:11:13 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:11:13 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.404 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/memory.usage volume: 42.515625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.443 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/memory.usage volume: 42.45703125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.444 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.444 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7eff8d8a8350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.444 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.445 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.445 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.445 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.446 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.446 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.447 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.448 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-03T19:11:13.445707) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.448 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7eff8f682330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.448 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.448 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8f46ebd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.449 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8f46ebd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.449 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.449 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.450 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.450 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.451 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.451 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-03T19:11:13.449227) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.452 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.452 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7eff8d7ff4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.452 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.452 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.453 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.453 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.454 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-03T19:11:13.453763) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.500 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.read.bytes volume: 30382592 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.502 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.551 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.read.bytes volume: 31209984 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.552 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.552 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.552 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7eff8d930c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.552 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.553 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7eff8d7ff4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.553 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.553 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.553 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.553 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.554 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.read.latency volume: 1826201908 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.554 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.read.latency volume: 148336564 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.554 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.read.latency volume: 2412766472 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.554 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.read.latency volume: 170015198 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.555 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7eff8d7ff530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.555 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.555 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.555 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.555 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.555 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-03T19:11:13.553577) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.556 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.read.requests volume: 1093 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.556 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.556 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.read.requests volume: 1142 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.557 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.557 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7eff8d7ff590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.558 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.558 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.558 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.558 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.558 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-03T19:11:13.555939) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.558 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.559 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.559 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.559 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-03T19:11:13.558397) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.559 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.560 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.560 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7eff8d7ff5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.560 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.560 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.560 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.560 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.560 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.write.bytes volume: 73162752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.561 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.561 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.write.bytes volume: 73162752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.561 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.562 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-03T19:11:13.560605) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.562 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.562 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7eff8d8a8620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.562 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.562 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.562 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.562 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.563 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.563 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.563 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.564 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7eff8d7ff650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.564 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.564 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.564 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.563 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-03T19:11:13.562777) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.564 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.564 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.write.latency volume: 9240506883 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.565 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.565 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.write.latency volume: 7835856569 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.565 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-03T19:11:13.564374) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.566 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.566 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.566 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7eff8d7ff6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.566 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.566 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.567 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.567 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.567 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.write.requests volume: 340 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.567 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.568 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.write.requests volume: 302 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.568 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.568 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.568 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7eff8d7ffa40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.569 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.569 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ffef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.569 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ffef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.569 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.570 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/network.incoming.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.569 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-03T19:11:13.567169) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.570 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-03T19:11:13.569706) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.570 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.570 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.570 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7eff8d7ff710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.570 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.571 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.571 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.571 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.571 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.572 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7eff8d7fff20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.572 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.572 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7fff50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.572 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7fff50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.572 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-03T19:11:13.571262) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.572 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.572 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/network.incoming.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.573 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.573 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.574 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-03T19:11:13.572765) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.574 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7eff8d7ff770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.574 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.574 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff7a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.574 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff7a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.574 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.575 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.575 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-03T19:11:13.574392) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.575 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7eff8d7fff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.575 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.575 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7fffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.575 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7fffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.575 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.576 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-03T19:11:13.575832) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.576 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.576 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.576 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.576 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7eff8d7fdac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.577 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.577 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8ef7c7d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.577 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8ef7c7d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.577 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.577 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/cpu volume: 335890000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.577 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/cpu volume: 334560000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.578 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-03T19:11:13.577380) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.578 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.579 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.579 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.579 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.579 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.579 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.579 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.579 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.579 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.579 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.580 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.580 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.580 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.580 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.580 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.580 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.580 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.580 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.580 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.581 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.581 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.581 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.581 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.581 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.581 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.581 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:11:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:11:13.581 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:11:13 compute-0 nova_compute[348325]: 2025-12-03 19:11:13.809 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:11:13 compute-0 podman[466044]: 2025-12-03 19:11:13.829784036 +0000 UTC m=+0.090723118 container create 29b11b0880f8f8c62880847ed7f64212fd7718bd86aeb951d9b157b2d65ef9bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dubinsky, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 19:11:13 compute-0 podman[466044]: 2025-12-03 19:11:13.801011651 +0000 UTC m=+0.061950773 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:11:13 compute-0 systemd[1]: Started libpod-conmon-29b11b0880f8f8c62880847ed7f64212fd7718bd86aeb951d9b157b2d65ef9bb.scope.
Dec  3 19:11:13 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:11:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:11:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:11:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:11:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:11:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:11:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:11:14 compute-0 podman[466044]: 2025-12-03 19:11:14.015327967 +0000 UTC m=+0.276267129 container init 29b11b0880f8f8c62880847ed7f64212fd7718bd86aeb951d9b157b2d65ef9bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 19:11:14 compute-0 podman[466044]: 2025-12-03 19:11:14.030004527 +0000 UTC m=+0.290943609 container start 29b11b0880f8f8c62880847ed7f64212fd7718bd86aeb951d9b157b2d65ef9bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  3 19:11:14 compute-0 podman[466044]: 2025-12-03 19:11:14.035539898 +0000 UTC m=+0.296478970 container attach 29b11b0880f8f8c62880847ed7f64212fd7718bd86aeb951d9b157b2d65ef9bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dubinsky, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 19:11:14 compute-0 jolly_dubinsky[466060]: 167 167
Dec  3 19:11:14 compute-0 systemd[1]: libpod-29b11b0880f8f8c62880847ed7f64212fd7718bd86aeb951d9b157b2d65ef9bb.scope: Deactivated successfully.
Dec  3 19:11:14 compute-0 podman[466044]: 2025-12-03 19:11:14.044003539 +0000 UTC m=+0.304942641 container died 29b11b0880f8f8c62880847ed7f64212fd7718bd86aeb951d9b157b2d65ef9bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dubinsky, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec  3 19:11:14 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_19:11:14
Dec  3 19:11:14 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 19:11:14 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 19:11:14 compute-0 ceph-mgr[193091]: [balancer INFO root] pools ['volumes', 'vms', 'cephfs.cephfs.meta', '.mgr', 'images', 'backups', 'default.rgw.meta', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.log']
Dec  3 19:11:14 compute-0 ceph-mgr[193091]: [balancer INFO root] prepared 0/10 changes
Dec  3 19:11:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-387def07c5d268679cb9bfe2480aaeb5f0e1b2ec1143f904ef0d1e8dd6d01be1-merged.mount: Deactivated successfully.
Dec  3 19:11:14 compute-0 podman[466044]: 2025-12-03 19:11:14.131891679 +0000 UTC m=+0.392830761 container remove 29b11b0880f8f8c62880847ed7f64212fd7718bd86aeb951d9b157b2d65ef9bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dubinsky, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 19:11:14 compute-0 systemd[1]: libpod-conmon-29b11b0880f8f8c62880847ed7f64212fd7718bd86aeb951d9b157b2d65ef9bb.scope: Deactivated successfully.
Dec  3 19:11:14 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2170: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:11:14 compute-0 podman[466085]: 2025-12-03 19:11:14.410164635 +0000 UTC m=+0.102506228 container create 00e7d9a7e05fa6b486712a31a6d3c987c0d88b63f8b2710d6d7cf20231cb7100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:11:14 compute-0 podman[466085]: 2025-12-03 19:11:14.371354223 +0000 UTC m=+0.063695866 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:11:14 compute-0 systemd[1]: Started libpod-conmon-00e7d9a7e05fa6b486712a31a6d3c987c0d88b63f8b2710d6d7cf20231cb7100.scope.
Dec  3 19:11:14 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:11:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c816dcfecc2cda1a8576dceef30cfd12a829d5d57267c30b819b6dcfebcf884/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 19:11:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c816dcfecc2cda1a8576dceef30cfd12a829d5d57267c30b819b6dcfebcf884/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 19:11:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c816dcfecc2cda1a8576dceef30cfd12a829d5d57267c30b819b6dcfebcf884/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 19:11:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c816dcfecc2cda1a8576dceef30cfd12a829d5d57267c30b819b6dcfebcf884/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 19:11:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c816dcfecc2cda1a8576dceef30cfd12a829d5d57267c30b819b6dcfebcf884/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 19:11:14 compute-0 podman[466085]: 2025-12-03 19:11:14.622226008 +0000 UTC m=+0.314567601 container init 00e7d9a7e05fa6b486712a31a6d3c987c0d88b63f8b2710d6d7cf20231cb7100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_leakey, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  3 19:11:14 compute-0 podman[466085]: 2025-12-03 19:11:14.645088571 +0000 UTC m=+0.337430124 container start 00e7d9a7e05fa6b486712a31a6d3c987c0d88b63f8b2710d6d7cf20231cb7100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_leakey, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  3 19:11:14 compute-0 podman[466085]: 2025-12-03 19:11:14.649760213 +0000 UTC m=+0.342101816 container attach 00e7d9a7e05fa6b486712a31a6d3c987c0d88b63f8b2710d6d7cf20231cb7100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_leakey, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 19:11:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 19:11:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 19:11:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 19:11:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 19:11:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 19:11:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 19:11:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 19:11:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 19:11:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 19:11:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 19:11:14 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:11:16 compute-0 goofy_leakey[466101]: --> passed data devices: 0 physical, 3 LVM
Dec  3 19:11:16 compute-0 goofy_leakey[466101]: --> relative data size: 1.0
Dec  3 19:11:16 compute-0 goofy_leakey[466101]: --> All data devices are unavailable
Dec  3 19:11:16 compute-0 systemd[1]: libpod-00e7d9a7e05fa6b486712a31a6d3c987c0d88b63f8b2710d6d7cf20231cb7100.scope: Deactivated successfully.
Dec  3 19:11:16 compute-0 systemd[1]: libpod-00e7d9a7e05fa6b486712a31a6d3c987c0d88b63f8b2710d6d7cf20231cb7100.scope: Consumed 1.367s CPU time.
Dec  3 19:11:16 compute-0 podman[466085]: 2025-12-03 19:11:16.086820002 +0000 UTC m=+1.779161585 container died 00e7d9a7e05fa6b486712a31a6d3c987c0d88b63f8b2710d6d7cf20231cb7100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  3 19:11:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c816dcfecc2cda1a8576dceef30cfd12a829d5d57267c30b819b6dcfebcf884-merged.mount: Deactivated successfully.
Dec  3 19:11:16 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2171: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:11:16 compute-0 podman[466085]: 2025-12-03 19:11:16.204890369 +0000 UTC m=+1.897231922 container remove 00e7d9a7e05fa6b486712a31a6d3c987c0d88b63f8b2710d6d7cf20231cb7100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec  3 19:11:16 compute-0 systemd[1]: libpod-conmon-00e7d9a7e05fa6b486712a31a6d3c987c0d88b63f8b2710d6d7cf20231cb7100.scope: Deactivated successfully.
Dec  3 19:11:17 compute-0 nova_compute[348325]: 2025-12-03 19:11:17.159 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:11:17 compute-0 podman[466282]: 2025-12-03 19:11:17.356353788 +0000 UTC m=+0.084187662 container create 8fba54f157cedd2de03f3599b9ed2990828041846dadd94225edf0d5f00b6d47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_babbage, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:11:17 compute-0 podman[466282]: 2025-12-03 19:11:17.323884495 +0000 UTC m=+0.051718399 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:11:17 compute-0 systemd[1]: Started libpod-conmon-8fba54f157cedd2de03f3599b9ed2990828041846dadd94225edf0d5f00b6d47.scope.
Dec  3 19:11:17 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:11:17 compute-0 podman[466282]: 2025-12-03 19:11:17.535349883 +0000 UTC m=+0.263183797 container init 8fba54f157cedd2de03f3599b9ed2990828041846dadd94225edf0d5f00b6d47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_babbage, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 19:11:17 compute-0 podman[466282]: 2025-12-03 19:11:17.553782223 +0000 UTC m=+0.281616087 container start 8fba54f157cedd2de03f3599b9ed2990828041846dadd94225edf0d5f00b6d47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_babbage, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  3 19:11:17 compute-0 podman[466282]: 2025-12-03 19:11:17.558902904 +0000 UTC m=+0.286736848 container attach 8fba54f157cedd2de03f3599b9ed2990828041846dadd94225edf0d5f00b6d47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 19:11:17 compute-0 busy_babbage[466298]: 167 167
Dec  3 19:11:17 compute-0 systemd[1]: libpod-8fba54f157cedd2de03f3599b9ed2990828041846dadd94225edf0d5f00b6d47.scope: Deactivated successfully.
Dec  3 19:11:17 compute-0 podman[466282]: 2025-12-03 19:11:17.567313174 +0000 UTC m=+0.295147038 container died 8fba54f157cedd2de03f3599b9ed2990828041846dadd94225edf0d5f00b6d47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_babbage, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec  3 19:11:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-e644b66eca52917fe0f940145786273354ac9a5fbb925ae616c1dad6bee17a57-merged.mount: Deactivated successfully.
Dec  3 19:11:17 compute-0 podman[466282]: 2025-12-03 19:11:17.627172447 +0000 UTC m=+0.355006321 container remove 8fba54f157cedd2de03f3599b9ed2990828041846dadd94225edf0d5f00b6d47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_babbage, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:11:17 compute-0 systemd[1]: libpod-conmon-8fba54f157cedd2de03f3599b9ed2990828041846dadd94225edf0d5f00b6d47.scope: Deactivated successfully.
Dec  3 19:11:17 compute-0 podman[466320]: 2025-12-03 19:11:17.89099223 +0000 UTC m=+0.089121140 container create 2eb9f61248d545b14f5937e8d34c40ecddcfbcb25d5a5893bd9cb48b5bbae5a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_liskov, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  3 19:11:17 compute-0 podman[466320]: 2025-12-03 19:11:17.851923352 +0000 UTC m=+0.050052332 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:11:17 compute-0 systemd[1]: Started libpod-conmon-2eb9f61248d545b14f5937e8d34c40ecddcfbcb25d5a5893bd9cb48b5bbae5a8.scope.
Dec  3 19:11:17 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:11:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d34bfa531829a10ceadb96168f5b9df76acf89fed85fdcae5c0eccdd19237ef3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 19:11:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d34bfa531829a10ceadb96168f5b9df76acf89fed85fdcae5c0eccdd19237ef3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 19:11:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d34bfa531829a10ceadb96168f5b9df76acf89fed85fdcae5c0eccdd19237ef3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 19:11:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d34bfa531829a10ceadb96168f5b9df76acf89fed85fdcae5c0eccdd19237ef3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 19:11:18 compute-0 podman[466320]: 2025-12-03 19:11:18.035413394 +0000 UTC m=+0.233542364 container init 2eb9f61248d545b14f5937e8d34c40ecddcfbcb25d5a5893bd9cb48b5bbae5a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS)
Dec  3 19:11:18 compute-0 podman[466320]: 2025-12-03 19:11:18.046420425 +0000 UTC m=+0.244549325 container start 2eb9f61248d545b14f5937e8d34c40ecddcfbcb25d5a5893bd9cb48b5bbae5a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_liskov, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec  3 19:11:18 compute-0 podman[466320]: 2025-12-03 19:11:18.051782943 +0000 UTC m=+0.249911933 container attach 2eb9f61248d545b14f5937e8d34c40ecddcfbcb25d5a5893bd9cb48b5bbae5a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_liskov, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec  3 19:11:18 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2172: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:11:18 compute-0 nova_compute[348325]: 2025-12-03 19:11:18.813 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:11:18 compute-0 elated_liskov[466335]: {
Dec  3 19:11:18 compute-0 elated_liskov[466335]:    "0": [
Dec  3 19:11:18 compute-0 elated_liskov[466335]:        {
Dec  3 19:11:18 compute-0 elated_liskov[466335]:            "devices": [
Dec  3 19:11:18 compute-0 elated_liskov[466335]:                "/dev/loop3"
Dec  3 19:11:18 compute-0 elated_liskov[466335]:            ],
Dec  3 19:11:18 compute-0 elated_liskov[466335]:            "lv_name": "ceph_lv0",
Dec  3 19:11:18 compute-0 elated_liskov[466335]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 19:11:18 compute-0 elated_liskov[466335]:            "lv_size": "21470642176",
Dec  3 19:11:18 compute-0 elated_liskov[466335]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=973fbbc8-5aff-4a53-bee8-42e5a6788dd6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 19:11:18 compute-0 elated_liskov[466335]:            "lv_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 19:11:18 compute-0 elated_liskov[466335]:            "name": "ceph_lv0",
Dec  3 19:11:18 compute-0 elated_liskov[466335]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 19:11:18 compute-0 elated_liskov[466335]:            "tags": {
Dec  3 19:11:18 compute-0 elated_liskov[466335]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 19:11:18 compute-0 elated_liskov[466335]:                "ceph.block_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 19:11:18 compute-0 elated_liskov[466335]:                "ceph.cephx_lockbox_secret": "",
Dec  3 19:11:18 compute-0 elated_liskov[466335]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:11:18 compute-0 elated_liskov[466335]:                "ceph.cluster_name": "ceph",
Dec  3 19:11:18 compute-0 elated_liskov[466335]:                "ceph.crush_device_class": "",
Dec  3 19:11:18 compute-0 elated_liskov[466335]:                "ceph.encrypted": "0",
Dec  3 19:11:18 compute-0 elated_liskov[466335]:                "ceph.osd_fsid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 19:11:18 compute-0 elated_liskov[466335]:                "ceph.osd_id": "0",
Dec  3 19:11:18 compute-0 elated_liskov[466335]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 19:11:18 compute-0 elated_liskov[466335]:                "ceph.type": "block",
Dec  3 19:11:18 compute-0 elated_liskov[466335]:                "ceph.vdo": "0"
Dec  3 19:11:18 compute-0 elated_liskov[466335]:            },
Dec  3 19:11:18 compute-0 elated_liskov[466335]:            "type": "block",
Dec  3 19:11:18 compute-0 elated_liskov[466335]:            "vg_name": "ceph_vg0"
Dec  3 19:11:18 compute-0 elated_liskov[466335]:        }
Dec  3 19:11:18 compute-0 elated_liskov[466335]:    ],
Dec  3 19:11:18 compute-0 elated_liskov[466335]:    "1": [
Dec  3 19:11:18 compute-0 elated_liskov[466335]:        {
Dec  3 19:11:18 compute-0 elated_liskov[466335]:            "devices": [
Dec  3 19:11:18 compute-0 elated_liskov[466335]:                "/dev/loop4"
Dec  3 19:11:18 compute-0 elated_liskov[466335]:            ],
Dec  3 19:11:18 compute-0 elated_liskov[466335]:            "lv_name": "ceph_lv1",
Dec  3 19:11:18 compute-0 elated_liskov[466335]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 19:11:18 compute-0 elated_liskov[466335]:            "lv_size": "21470642176",
Dec  3 19:11:18 compute-0 elated_liskov[466335]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1e2b0083-5293-47cb-a3d1-bc27cedc4ede,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 19:11:18 compute-0 elated_liskov[466335]:            "lv_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 19:11:18 compute-0 elated_liskov[466335]:            "name": "ceph_lv1",
Dec  3 19:11:18 compute-0 elated_liskov[466335]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 19:11:18 compute-0 elated_liskov[466335]:            "tags": {
Dec  3 19:11:18 compute-0 elated_liskov[466335]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 19:11:18 compute-0 elated_liskov[466335]:                "ceph.block_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 19:11:18 compute-0 elated_liskov[466335]:                "ceph.cephx_lockbox_secret": "",
Dec  3 19:11:18 compute-0 elated_liskov[466335]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:11:18 compute-0 elated_liskov[466335]:                "ceph.cluster_name": "ceph",
Dec  3 19:11:18 compute-0 elated_liskov[466335]:                "ceph.crush_device_class": "",
Dec  3 19:11:18 compute-0 elated_liskov[466335]:                "ceph.encrypted": "0",
Dec  3 19:11:18 compute-0 elated_liskov[466335]:                "ceph.osd_fsid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 19:11:18 compute-0 elated_liskov[466335]:                "ceph.osd_id": "1",
Dec  3 19:11:18 compute-0 elated_liskov[466335]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 19:11:18 compute-0 elated_liskov[466335]:                "ceph.type": "block",
Dec  3 19:11:18 compute-0 elated_liskov[466335]:                "ceph.vdo": "0"
Dec  3 19:11:18 compute-0 elated_liskov[466335]:            },
Dec  3 19:11:18 compute-0 elated_liskov[466335]:            "type": "block",
Dec  3 19:11:18 compute-0 elated_liskov[466335]:            "vg_name": "ceph_vg1"
Dec  3 19:11:18 compute-0 elated_liskov[466335]:        }
Dec  3 19:11:18 compute-0 elated_liskov[466335]:    ],
Dec  3 19:11:18 compute-0 elated_liskov[466335]:    "2": [
Dec  3 19:11:18 compute-0 elated_liskov[466335]:        {
Dec  3 19:11:18 compute-0 elated_liskov[466335]:            "devices": [
Dec  3 19:11:18 compute-0 elated_liskov[466335]:                "/dev/loop5"
Dec  3 19:11:18 compute-0 elated_liskov[466335]:            ],
Dec  3 19:11:18 compute-0 elated_liskov[466335]:            "lv_name": "ceph_lv2",
Dec  3 19:11:18 compute-0 elated_liskov[466335]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 19:11:18 compute-0 elated_liskov[466335]:            "lv_size": "21470642176",
Dec  3 19:11:18 compute-0 elated_liskov[466335]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2abec9de-afba-437e-9a17-384a1dd8cd50,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 19:11:18 compute-0 elated_liskov[466335]:            "lv_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 19:11:18 compute-0 elated_liskov[466335]:            "name": "ceph_lv2",
Dec  3 19:11:18 compute-0 elated_liskov[466335]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 19:11:18 compute-0 elated_liskov[466335]:            "tags": {
Dec  3 19:11:18 compute-0 elated_liskov[466335]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 19:11:18 compute-0 elated_liskov[466335]:                "ceph.block_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 19:11:18 compute-0 elated_liskov[466335]:                "ceph.cephx_lockbox_secret": "",
Dec  3 19:11:18 compute-0 elated_liskov[466335]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:11:18 compute-0 elated_liskov[466335]:                "ceph.cluster_name": "ceph",
Dec  3 19:11:18 compute-0 elated_liskov[466335]:                "ceph.crush_device_class": "",
Dec  3 19:11:18 compute-0 elated_liskov[466335]:                "ceph.encrypted": "0",
Dec  3 19:11:18 compute-0 elated_liskov[466335]:                "ceph.osd_fsid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 19:11:18 compute-0 elated_liskov[466335]:                "ceph.osd_id": "2",
Dec  3 19:11:18 compute-0 elated_liskov[466335]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 19:11:18 compute-0 elated_liskov[466335]:                "ceph.type": "block",
Dec  3 19:11:18 compute-0 elated_liskov[466335]:                "ceph.vdo": "0"
Dec  3 19:11:18 compute-0 elated_liskov[466335]:            },
Dec  3 19:11:18 compute-0 elated_liskov[466335]:            "type": "block",
Dec  3 19:11:18 compute-0 elated_liskov[466335]:            "vg_name": "ceph_vg2"
Dec  3 19:11:18 compute-0 elated_liskov[466335]:        }
Dec  3 19:11:18 compute-0 elated_liskov[466335]:    ]
Dec  3 19:11:18 compute-0 elated_liskov[466335]: }
Dec  3 19:11:18 compute-0 systemd[1]: libpod-2eb9f61248d545b14f5937e8d34c40ecddcfbcb25d5a5893bd9cb48b5bbae5a8.scope: Deactivated successfully.
Dec  3 19:11:18 compute-0 podman[466320]: 2025-12-03 19:11:18.926708656 +0000 UTC m=+1.124837556 container died 2eb9f61248d545b14f5937e8d34c40ecddcfbcb25d5a5893bd9cb48b5bbae5a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_liskov, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 19:11:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-d34bfa531829a10ceadb96168f5b9df76acf89fed85fdcae5c0eccdd19237ef3-merged.mount: Deactivated successfully.
Dec  3 19:11:19 compute-0 podman[466320]: 2025-12-03 19:11:19.038375912 +0000 UTC m=+1.236504812 container remove 2eb9f61248d545b14f5937e8d34c40ecddcfbcb25d5a5893bd9cb48b5bbae5a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  3 19:11:19 compute-0 systemd[1]: libpod-conmon-2eb9f61248d545b14f5937e8d34c40ecddcfbcb25d5a5893bd9cb48b5bbae5a8.scope: Deactivated successfully.
Dec  3 19:11:19 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:11:20 compute-0 podman[466494]: 2025-12-03 19:11:20.091953112 +0000 UTC m=+0.078194769 container create b1c48f7e1268c9f670887bad4a1af8599cd44415d614823cef9cbef64e297d55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  3 19:11:20 compute-0 podman[466494]: 2025-12-03 19:11:20.05734732 +0000 UTC m=+0.043588957 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:11:20 compute-0 systemd[1]: Started libpod-conmon-b1c48f7e1268c9f670887bad4a1af8599cd44415d614823cef9cbef64e297d55.scope.
Dec  3 19:11:20 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2173: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:11:20 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:11:20 compute-0 podman[466494]: 2025-12-03 19:11:20.220209422 +0000 UTC m=+0.206451049 container init b1c48f7e1268c9f670887bad4a1af8599cd44415d614823cef9cbef64e297d55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_nobel, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec  3 19:11:20 compute-0 podman[466494]: 2025-12-03 19:11:20.237050933 +0000 UTC m=+0.223292540 container start b1c48f7e1268c9f670887bad4a1af8599cd44415d614823cef9cbef64e297d55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_nobel, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec  3 19:11:20 compute-0 podman[466494]: 2025-12-03 19:11:20.242559514 +0000 UTC m=+0.228801121 container attach b1c48f7e1268c9f670887bad4a1af8599cd44415d614823cef9cbef64e297d55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec  3 19:11:20 compute-0 upbeat_nobel[466511]: 167 167
Dec  3 19:11:20 compute-0 systemd[1]: libpod-b1c48f7e1268c9f670887bad4a1af8599cd44415d614823cef9cbef64e297d55.scope: Deactivated successfully.
Dec  3 19:11:20 compute-0 podman[466494]: 2025-12-03 19:11:20.246697262 +0000 UTC m=+0.232938859 container died b1c48f7e1268c9f670887bad4a1af8599cd44415d614823cef9cbef64e297d55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_nobel, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec  3 19:11:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-796ef75a0702f229991d026dab3cbba8871c86ca0b7f4039a3b1ea2b5881952c-merged.mount: Deactivated successfully.
Dec  3 19:11:20 compute-0 podman[466508]: 2025-12-03 19:11:20.284281366 +0000 UTC m=+0.110234852 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  3 19:11:20 compute-0 podman[466494]: 2025-12-03 19:11:20.305224524 +0000 UTC m=+0.291466131 container remove b1c48f7e1268c9f670887bad4a1af8599cd44415d614823cef9cbef64e297d55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_nobel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 19:11:20 compute-0 systemd[1]: libpod-conmon-b1c48f7e1268c9f670887bad4a1af8599cd44415d614823cef9cbef64e297d55.scope: Deactivated successfully.
Dec  3 19:11:20 compute-0 podman[466556]: 2025-12-03 19:11:20.550190689 +0000 UTC m=+0.075943588 container create 677e836ac103489eef9fae360a48490e254b2946587c04047b40df76249b23a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_swirles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  3 19:11:20 compute-0 systemd[1]: Started libpod-conmon-677e836ac103489eef9fae360a48490e254b2946587c04047b40df76249b23a2.scope.
Dec  3 19:11:20 compute-0 podman[466556]: 2025-12-03 19:11:20.528930413 +0000 UTC m=+0.054683332 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:11:20 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:11:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/279cd8633f22f5f806ab27186c4c39ad44097184d2219775f0e11819037f5b94/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 19:11:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/279cd8633f22f5f806ab27186c4c39ad44097184d2219775f0e11819037f5b94/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 19:11:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/279cd8633f22f5f806ab27186c4c39ad44097184d2219775f0e11819037f5b94/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 19:11:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/279cd8633f22f5f806ab27186c4c39ad44097184d2219775f0e11819037f5b94/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
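[annotation] The kernel warnings above mean these bind-mounted xfs filesystems were created without the bigtime feature, so their on-disk timestamps top out at 0x7fffffff seconds after the epoch, the classic 32-bit time_t limit. A one-line check of what that limit is:

```python
from datetime import datetime, timezone

# 0x7fffffff seconds past the epoch: the limit quoted in the kernel message.
print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
# -> 2038-01-19 03:14:07+00:00
```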
Dec  3 19:11:20 compute-0 podman[466556]: 2025-12-03 19:11:20.707704423 +0000 UTC m=+0.233457342 container init 677e836ac103489eef9fae360a48490e254b2946587c04047b40df76249b23a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_swirles, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 19:11:20 compute-0 podman[466556]: 2025-12-03 19:11:20.732716038 +0000 UTC m=+0.258468937 container start 677e836ac103489eef9fae360a48490e254b2946587c04047b40df76249b23a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_swirles, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 19:11:20 compute-0 podman[466556]: 2025-12-03 19:11:20.738224659 +0000 UTC m=+0.263977578 container attach 677e836ac103489eef9fae360a48490e254b2946587c04047b40df76249b23a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_swirles, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec  3 19:11:21 compute-0 friendly_swirles[466572]: {
Dec  3 19:11:21 compute-0 friendly_swirles[466572]:    "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
Dec  3 19:11:21 compute-0 friendly_swirles[466572]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:11:21 compute-0 friendly_swirles[466572]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 19:11:21 compute-0 friendly_swirles[466572]:        "osd_id": 1,
Dec  3 19:11:21 compute-0 friendly_swirles[466572]:        "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 19:11:21 compute-0 friendly_swirles[466572]:        "type": "bluestore"
Dec  3 19:11:21 compute-0 friendly_swirles[466572]:    },
Dec  3 19:11:21 compute-0 friendly_swirles[466572]:    "2abec9de-afba-437e-9a17-384a1dd8cd50": {
Dec  3 19:11:21 compute-0 friendly_swirles[466572]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:11:21 compute-0 friendly_swirles[466572]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 19:11:21 compute-0 friendly_swirles[466572]:        "osd_id": 2,
Dec  3 19:11:21 compute-0 friendly_swirles[466572]:        "osd_uuid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 19:11:21 compute-0 friendly_swirles[466572]:        "type": "bluestore"
Dec  3 19:11:21 compute-0 friendly_swirles[466572]:    },
Dec  3 19:11:21 compute-0 friendly_swirles[466572]:    "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": {
Dec  3 19:11:21 compute-0 friendly_swirles[466572]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:11:21 compute-0 friendly_swirles[466572]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 19:11:21 compute-0 friendly_swirles[466572]:        "osd_id": 0,
Dec  3 19:11:21 compute-0 friendly_swirles[466572]:        "osd_uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 19:11:21 compute-0 friendly_swirles[466572]:        "type": "bluestore"
Dec  3 19:11:21 compute-0 friendly_swirles[466572]:    }
Dec  3 19:11:21 compute-0 friendly_swirles[466572]: }
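[annotation] The JSON printed by the short-lived ceph container maps OSD uuid to metadata for the three BlueStore OSDs on this host, all in the same cluster (ceph_fsid c1caf3ba-b2a5-5005-a11e-e955c344dccc); judging by the mgr's subsequent config-key write for mgr/cephadm/host.compute-0.devices.0, this looks like cephadm's periodic device scan (likely `ceph-volume raw list`-style output, though the log does not show the command, so that is an assumption). A sketch that re-states the logged mapping and prints it ordered by OSD id:

```python
# OSD uuid -> metadata, transcribed from the log output above.
osds = {
    "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": {"osd_id": 0, "device": "/dev/mapper/ceph_vg0-ceph_lv0"},
    "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {"osd_id": 1, "device": "/dev/mapper/ceph_vg1-ceph_lv1"},
    "2abec9de-afba-437e-9a17-384a1dd8cd50": {"osd_id": 2, "device": "/dev/mapper/ceph_vg2-ceph_lv2"},
}
for osd_uuid, meta in sorted(osds.items(), key=lambda kv: kv[1]["osd_id"]):
    print(f"osd.{meta['osd_id']} on {meta['device']} ({osd_uuid})")
```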
Dec  3 19:11:22 compute-0 systemd[1]: libpod-677e836ac103489eef9fae360a48490e254b2946587c04047b40df76249b23a2.scope: Deactivated successfully.
Dec  3 19:11:22 compute-0 podman[466556]: 2025-12-03 19:11:22.036849717 +0000 UTC m=+1.562602656 container died 677e836ac103489eef9fae360a48490e254b2946587c04047b40df76249b23a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_swirles, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 19:11:22 compute-0 systemd[1]: libpod-677e836ac103489eef9fae360a48490e254b2946587c04047b40df76249b23a2.scope: Consumed 1.296s CPU time.
Dec  3 19:11:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-279cd8633f22f5f806ab27186c4c39ad44097184d2219775f0e11819037f5b94-merged.mount: Deactivated successfully.
Dec  3 19:11:22 compute-0 podman[466556]: 2025-12-03 19:11:22.149942146 +0000 UTC m=+1.675695045 container remove 677e836ac103489eef9fae360a48490e254b2946587c04047b40df76249b23a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_swirles, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:11:22 compute-0 nova_compute[348325]: 2025-12-03 19:11:22.164 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:11:22 compute-0 systemd[1]: libpod-conmon-677e836ac103489eef9fae360a48490e254b2946587c04047b40df76249b23a2.scope: Deactivated successfully.
Dec  3 19:11:22 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2174: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:11:22 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 19:11:22 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:11:22 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 19:11:22 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:11:22 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev dbb8b2ac-eb7f-4a61-b4b0-49758f51962d does not exist
Dec  3 19:11:22 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 4ff92f93-34eb-4c1f-a536-51328e9799fd does not exist
Dec  3 19:11:23 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:11:23 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:11:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:11:23.369 286999 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 19:11:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:11:23.372 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 19:11:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:11:23.374 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
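[annotation] The acquire/acquired/released DEBUG triplet around "_check_child_processes" is oslo.concurrency's lockutils; the same pattern recurs later for nova's "compute_resources" lock. A minimal sketch of the API that produces these lines (the lock names are real, the function bodies are hypothetical placeholders):

```python
from oslo_concurrency import lockutils

@lockutils.synchronized("_check_child_processes")
def check_child_processes():
    # critical section; held for ~2ms in the log above
    pass

# Equivalent context-manager form, as used for nova's resource tracker lock:
with lockutils.lock("compute_resources"):
    pass
```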
Dec  3 19:11:23 compute-0 nova_compute[348325]: 2025-12-03 19:11:23.818 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:11:24 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2175: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:11:24 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:11:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 19:11:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:11:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 19:11:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:11:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015216620474376506 of space, bias 1.0, pg target 0.4564986142312952 quantized to 32 (current 32)
Dec  3 19:11:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:11:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:11:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:11:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:11:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:11:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Dec  3 19:11:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:11:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 19:11:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:11:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:11:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:11:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 19:11:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:11:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 19:11:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:11:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:11:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:11:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
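[annotation] In every pg_autoscaler line above, pg target = usage_ratio x bias x 300 exactly (e.g. '.mgr': 7.185749983720779e-06 x 1.0 x 300 = 0.0021557249951162337; 'cephfs.cephfs.meta': 5.087256625643029e-07 x 4.0 x 300 = 0.0006104707950771635), and the effective_target_ratio lines report a subtree capacity of 64411926528 bytes, about the 60 GiB the pgmap shows. With three OSDs on this host, the 300x factor is plausibly num_osds x mon_target_pg_per_osd (default 100); that decomposition is an assumption, since the multiplier itself is not logged. A sketch of the arithmetic; note it does not model the per-pool floors or the hysteresis that keeps the logged pg_num at 32 despite tiny targets:

```python
# Assumption: 300 = num_osds (3) * mon_target_pg_per_osd (100).
TARGET_PGS = 3 * 100

def pg_target(usage_ratio: float, bias: float) -> float:
    return usage_ratio * bias * TARGET_PGS

def quantize(target: float, minimum: int = 1) -> int:
    """Round up to the next power of two, never below `minimum`."""
    n = minimum
    while n < target:
        n *= 2
    return n

print(pg_target(7.185749983720779e-06, 1.0))  # 0.0021557... -> quantized to 1
print(pg_target(0.0015216620474376506, 1.0))  # 0.4564986142312952 ('vms')
print(pg_target(5.087256625643029e-07, 4.0))  # 0.0006104707950771635 (meta, bias 4)
```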
Dec  3 19:11:25 compute-0 nova_compute[348325]: 2025-12-03 19:11:25.515 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:11:26 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2176: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:11:26 compute-0 podman[466668]: 2025-12-03 19:11:26.981119159 +0000 UTC m=+0.129841658 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Dec  3 19:11:27 compute-0 podman[466667]: 2025-12-03 19:11:27.067864162 +0000 UTC m=+0.216528490 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0)
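[annotation] These health_status events come from podman's healthcheck timers running the `test` command embedded in each container's config_data (e.g. `/openstack/healthcheck compute`); exit status 0 keeps health_status=healthy and health_failing_streak=0. The same check can be triggered on demand, sketched here with a container name taken from the log:

```python
import subprocess

# Run a container's configured healthcheck once; rc 0 means healthy.
rc = subprocess.run(
    ["podman", "healthcheck", "run", "ceilometer_agent_compute"]
).returncode
print("healthy" if rc == 0 else f"unhealthy (rc={rc})")
```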
Dec  3 19:11:27 compute-0 nova_compute[348325]: 2025-12-03 19:11:27.171 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:11:28 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2177: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:11:28 compute-0 nova_compute[348325]: 2025-12-03 19:11:28.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:11:28 compute-0 nova_compute[348325]: 2025-12-03 19:11:28.822 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:11:29 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:11:29 compute-0 podman[158200]: time="2025-12-03T19:11:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 19:11:29 compute-0 podman[158200]: @ - - [03/Dec/2025:19:11:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 19:11:29 compute-0 podman[158200]: @ - - [03/Dec/2025:19:11:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8653 "" "Go-http-client/1.1"
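[annotation] The podman[158200] access-log lines are the podman system service answering libpod REST calls over /run/podman/podman.sock; the podman_exporter container is pointed at that socket via CONTAINER_HOST. A self-contained client sketch using only the standard library, issuing the same containers/json query seen above:

```python
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTP over an AF_UNIX socket (host header value is arbitrary)."""
    def __init__(self, path: str):
        super().__init__("localhost")
        self._path = path

    def connect(self):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        s.connect(self._path)
        self.sock = s

conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
for c in json.loads(conn.getresponse().read()):
    print(c["Id"][:12], c.get("Names"), c.get("State"))
```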
Dec  3 19:11:30 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2178: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:11:30 compute-0 nova_compute[348325]: 2025-12-03 19:11:30.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:11:31 compute-0 openstack_network_exporter[365222]: ERROR   19:11:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:11:31 compute-0 openstack_network_exporter[365222]: ERROR   19:11:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 19:11:31 compute-0 openstack_network_exporter[365222]: ERROR   19:11:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:11:31 compute-0 openstack_network_exporter[365222]: ERROR   19:11:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 19:11:31 compute-0 openstack_network_exporter[365222]: 
Dec  3 19:11:31 compute-0 openstack_network_exporter[365222]: ERROR   19:11:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 19:11:31 compute-0 openstack_network_exporter[365222]: 
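[annotation] The exporter errors above mean it found no control sockets for ovn-northd or ovsdb-server, which is expected on a compute node (it runs ovn-controller and ovs-vswitchd, not northd), and the dpif-netdev errors indicate no userspace datapath is configured. A quick way to see which control sockets actually exist; the glob patterns are the conventional defaults and are an assumption here, since paths vary by distribution:

```python
import glob

# Conventional OVS/OVN control-socket locations (assumed, not from the log).
for pattern in ("/var/run/openvswitch/*.ctl", "/var/run/ovn/*.ctl"):
    hits = glob.glob(pattern)
    print(pattern, "->", hits or "none")
```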
Dec  3 19:11:31 compute-0 nova_compute[348325]: 2025-12-03 19:11:31.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:11:31 compute-0 nova_compute[348325]: 2025-12-03 19:11:31.488 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:11:32 compute-0 nova_compute[348325]: 2025-12-03 19:11:32.175 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:11:32 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2179: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:11:33 compute-0 nova_compute[348325]: 2025-12-03 19:11:33.825 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:11:34 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2180: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:11:34 compute-0 nova_compute[348325]: 2025-12-03 19:11:34.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:11:34 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:11:36 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2181: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:11:37 compute-0 podman[466711]: 2025-12-03 19:11:37.005187118 +0000 UTC m=+0.142145450 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, io.openshift.expose-services=, managed_by=edpm_ansible, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter)
Dec  3 19:11:37 compute-0 podman[466709]: 2025-12-03 19:11:37.007101903 +0000 UTC m=+0.154332770 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:11:37 compute-0 podman[466710]: 2025-12-03 19:11:37.03257618 +0000 UTC m=+0.173273271 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 19:11:37 compute-0 nova_compute[348325]: 2025-12-03 19:11:37.182 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:11:37 compute-0 nova_compute[348325]: 2025-12-03 19:11:37.488 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:11:37 compute-0 nova_compute[348325]: 2025-12-03 19:11:37.489 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 19:11:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 19:11:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3325708644' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 19:11:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 19:11:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3325708644' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 19:11:38 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2182: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:11:38 compute-0 nova_compute[348325]: 2025-12-03 19:11:38.603 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "refresh_cache-a364994c-8442-4a4c-bd6b-f3a2d31e4483" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 19:11:38 compute-0 nova_compute[348325]: 2025-12-03 19:11:38.604 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquired lock "refresh_cache-a364994c-8442-4a4c-bd6b-f3a2d31e4483" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 19:11:38 compute-0 nova_compute[348325]: 2025-12-03 19:11:38.605 348329 DEBUG nova.network.neutron [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  3 19:11:38 compute-0 nova_compute[348325]: 2025-12-03 19:11:38.828 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:11:39 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:11:40 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2183: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:11:41 compute-0 nova_compute[348325]: 2025-12-03 19:11:41.638 348329 DEBUG nova.network.neutron [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Updating instance_info_cache with network_info: [{"id": "b761f609-2787-4aa2-9b1c-cc5b41d2373d", "address": "fa:16:3e:2c:da:52", "network": {"id": "04e258c0-609e-4010-a306-af20506c3a9d", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.71", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d29cef7b24ee4d30b2b3f5027ec6aafb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb761f609-27", "ovs_interfaceid": "b761f609-2787-4aa2-9b1c-cc5b41d2373d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 19:11:41 compute-0 nova_compute[348325]: 2025-12-03 19:11:41.657 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Releasing lock "refresh_cache-a364994c-8442-4a4c-bd6b-f3a2d31e4483" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 19:11:41 compute-0 nova_compute[348325]: 2025-12-03 19:11:41.657 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  3 19:11:41 compute-0 nova_compute[348325]: 2025-12-03 19:11:41.658 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:11:41 compute-0 nova_compute[348325]: 2025-12-03 19:11:41.658 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 19:11:42 compute-0 nova_compute[348325]: 2025-12-03 19:11:42.187 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:11:42 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2184: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:11:43 compute-0 nova_compute[348325]: 2025-12-03 19:11:43.831 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:11:43 compute-0 podman[466771]: 2025-12-03 19:11:43.97432861 +0000 UTC m=+0.111643206 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 19:11:43 compute-0 podman[466772]: 2025-12-03 19:11:43.980316302 +0000 UTC m=+0.106111564 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  3 19:11:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:11:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:11:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:11:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:11:44 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:11:44 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:11:44 compute-0 podman[466770]: 2025-12-03 19:11:44.007191581 +0000 UTC m=+0.148792818 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., vcs-type=git, build-date=2024-09-18T21:23:30, container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.29.0, managed_by=edpm_ansible, name=ubi9)
Dec  3 19:11:44 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2185: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:11:44 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:11:46 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2186: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:11:47 compute-0 nova_compute[348325]: 2025-12-03 19:11:47.193 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:11:48 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2187: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:11:48 compute-0 nova_compute[348325]: 2025-12-03 19:11:48.835 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:11:49 compute-0 nova_compute[348325]: 2025-12-03 19:11:49.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:11:49 compute-0 nova_compute[348325]: 2025-12-03 19:11:49.534 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 19:11:49 compute-0 nova_compute[348325]: 2025-12-03 19:11:49.538 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 19:11:49 compute-0 nova_compute[348325]: 2025-12-03 19:11:49.539 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 19:11:49 compute-0 nova_compute[348325]: 2025-12-03 19:11:49.540 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 19:11:49 compute-0 nova_compute[348325]: 2025-12-03 19:11:49.541 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 19:11:49 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:11:50 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 19:11:50 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3956507936' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 19:11:50 compute-0 nova_compute[348325]: 2025-12-03 19:11:50.089 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
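[annotation] The "Running cmd"/"returned: 0 in 0.547s" pair is oslo.concurrency's processutils wrapping the `ceph df` subprocess that nova's libvirt driver uses to size RBD-backed storage; the mon's audit log above shows the same request arriving as client.openstack. A sketch of the identical call (it needs the openstack keyring and ceph.conf on the host, exactly as nova does):

```python
from oslo_concurrency import processutils

# Same command nova logs above; returns (stdout, stderr) on rc 0.
out, err = processutils.execute(
    "ceph", "df", "--format=json",
    "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
)
print(out[:200])
```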
Dec  3 19:11:50 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2188: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:11:50 compute-0 nova_compute[348325]: 2025-12-03 19:11:50.357 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 19:11:50 compute-0 nova_compute[348325]: 2025-12-03 19:11:50.358 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 19:11:50 compute-0 nova_compute[348325]: 2025-12-03 19:11:50.368 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 19:11:50 compute-0 nova_compute[348325]: 2025-12-03 19:11:50.369 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 19:11:50 compute-0 podman[466852]: 2025-12-03 19:11:50.985340016 +0000 UTC m=+0.130662648 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  3 19:11:50 compute-0 nova_compute[348325]: 2025-12-03 19:11:50.997 348329 WARNING nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 19:11:50 compute-0 nova_compute[348325]: 2025-12-03 19:11:50.998 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3510MB free_disk=59.89699935913086GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 19:11:50 compute-0 nova_compute[348325]: 2025-12-03 19:11:50.998 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 19:11:50 compute-0 nova_compute[348325]: 2025-12-03 19:11:50.999 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 19:11:51 compute-0 nova_compute[348325]: 2025-12-03 19:11:51.108 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance a4fc45c7-44e4-4b50-a3e0-98de13268f88 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 19:11:51 compute-0 nova_compute[348325]: 2025-12-03 19:11:51.109 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance a364994c-8442-4a4c-bd6b-f3a2d31e4483 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 19:11:51 compute-0 nova_compute[348325]: 2025-12-03 19:11:51.109 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 19:11:51 compute-0 nova_compute[348325]: 2025-12-03 19:11:51.109 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 19:11:51 compute-0 nova_compute[348325]: 2025-12-03 19:11:51.188 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 19:11:51 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 19:11:51 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1970727438' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 19:11:51 compute-0 nova_compute[348325]: 2025-12-03 19:11:51.706 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
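
The "Running cmd"/"returned: 0 in 0.518s" pair above is oslo.concurrency's processutils wrapping a subprocess; nova uses it here to poll Ceph capacity for the RBD image backend. A minimal reproduction of that exact command:

    import json
    from oslo_concurrency import processutils

    # Mirrors the command logged above; returns (stdout, stderr) and
    # raises ProcessExecutionError on a non-zero exit code.
    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    print(json.loads(out)['stats']['total_bytes'])
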
Dec  3 19:11:51 compute-0 nova_compute[348325]: 2025-12-03 19:11:51.722 348329 DEBUG nova.compute.provider_tree [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 19:11:51 compute-0 nova_compute[348325]: 2025-12-03 19:11:51.970 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
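
Placement treats each inventory class's usable capacity as (total - reserved) * allocation_ratio, so the inventory above yields 32 schedulable VCPUs, 7167 MB of RAM, and 52.2 GB of disk. A worked check:

    # Capacity placement will allow against this provider's inventory:
    # capacity = (total - reserved) * allocation_ratio
    inv = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, v in inv.items():
        print(rc, (v['total'] - v['reserved']) * v['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
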
Dec  3 19:11:51 compute-0 nova_compute[348325]: 2025-12-03 19:11:51.974 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 19:11:51 compute-0 nova_compute[348325]: 2025-12-03 19:11:51.975 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.976s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 19:11:52 compute-0 nova_compute[348325]: 2025-12-03 19:11:52.198 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:11:52 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2189: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:11:53 compute-0 nova_compute[348325]: 2025-12-03 19:11:53.838 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:11:54 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2190: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:11:54 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:11:56 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2191: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:11:57 compute-0 nova_compute[348325]: 2025-12-03 19:11:57.203 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:11:57 compute-0 podman[466897]: 2025-12-03 19:11:57.989919147 +0000 UTC m=+0.132310848 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  3 19:11:58 compute-0 podman[466896]: 2025-12-03 19:11:58.056699884 +0000 UTC m=+0.213184450 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
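
The health_status=healthy events above come from podman's healthcheck timers executing each container's configured 'test' command (visible in the config_data, e.g. '/openstack/healthcheck compute'). The same check can be triggered by hand; a sketch using one of the container names from the log:

    import subprocess

    # Runs the container's configured healthcheck once; exit code 0
    # means healthy, mirroring the health_status=healthy events above.
    subprocess.run(
        ['podman', 'healthcheck', 'run', 'ceilometer_agent_compute'],
        check=True)
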
Dec  3 19:11:58 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2192: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:11:58 compute-0 nova_compute[348325]: 2025-12-03 19:11:58.841 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:11:59 compute-0 podman[158200]: time="2025-12-03T19:11:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 19:11:59 compute-0 podman[158200]: @ - - [03/Dec/2025:19:11:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 19:11:59 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:11:59 compute-0 podman[158200]: @ - - [03/Dec/2025:19:11:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8651 "" "Go-http-client/1.1"
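
The two "@ - -" access-log lines above are HTTP requests against podman's libpod REST API over its unix socket (the podman_exporter service later in this log mounts /run/podman/podman.sock for exactly this). A stdlib-only sketch of the first request, assuming that socket path:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP client over a unix socket (path assumed from this log)."""
        def __init__(self, path):
            super().__init__('localhost')
            self._path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    containers = json.loads(conn.getresponse().read())
    print(len(containers))
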
Dec  3 19:12:00 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2193: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:12:01 compute-0 openstack_network_exporter[365222]: ERROR   19:12:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 19:12:01 compute-0 openstack_network_exporter[365222]: ERROR   19:12:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:12:01 compute-0 openstack_network_exporter[365222]: ERROR   19:12:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:12:01 compute-0 openstack_network_exporter[365222]: ERROR   19:12:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 19:12:01 compute-0 openstack_network_exporter[365222]: ERROR   19:12:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
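
The exporter errors above mean it found no appctl control sockets for ovsdb-server or ovn-northd; on a compute node ovn-northd does not run locally, so those lookups are expected to fail, and the pmd-perf/pmd-rxq calls fail because no userspace (netdev) datapath exists here. A quick check for the conventional socket paths — the paths below are the usual defaults, assumed here:

    import glob

    # Control sockets are conventionally named <daemon>.<pid>.ctl under
    # the OVS/OVN run directories; adjust if your layout differs.
    for pattern in ('/var/run/openvswitch/ovsdb-server.*.ctl',
                    '/var/run/openvswitch/ovs-vswitchd.*.ctl',
                    '/var/run/ovn/ovn-northd.*.ctl'):
        print(pattern, '->', glob.glob(pattern) or 'no control socket found')
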
Dec  3 19:12:02 compute-0 nova_compute[348325]: 2025-12-03 19:12:02.208 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:12:02 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2194: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:12:03 compute-0 nova_compute[348325]: 2025-12-03 19:12:03.844 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:12:04 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2195: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:12:04 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:12:06 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2196: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:12:07 compute-0 nova_compute[348325]: 2025-12-03 19:12:07.215 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:12:07 compute-0 podman[466943]: 2025-12-03 19:12:07.965777825 +0000 UTC m=+0.107671242 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  3 19:12:07 compute-0 podman[466944]: 2025-12-03 19:12:07.9681481 +0000 UTC m=+0.111147493 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  3 19:12:08 compute-0 podman[466945]: 2025-12-03 19:12:08.004218238 +0000 UTC m=+0.144796773 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.buildah.version=1.33.7, managed_by=edpm_ansible, vcs-type=git, config_id=edpm, architecture=x86_64, build-date=2025-08-20T13:12:41, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.openshift.expose-services=, release=1755695350)
Dec  3 19:12:08 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2197: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:12:08 compute-0 nova_compute[348325]: 2025-12-03 19:12:08.848 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:12:09 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:12:10 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2198: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:12:12 compute-0 nova_compute[348325]: 2025-12-03 19:12:12.220 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:12:12 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2199: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:12:13 compute-0 nova_compute[348325]: 2025-12-03 19:12:13.852 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:12:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:12:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:12:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:12:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:12:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:12:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:12:14 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_19:12:14
Dec  3 19:12:14 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 19:12:14 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 19:12:14 compute-0 ceph-mgr[193091]: [balancer INFO root] pools ['default.rgw.meta', '.mgr', 'default.rgw.log', '.rgw.root', 'vms', 'images', 'default.rgw.control', 'volumes', 'cephfs.cephfs.data', 'backups', 'cephfs.cephfs.meta']
Dec  3 19:12:14 compute-0 ceph-mgr[193091]: [balancer INFO root] prepared 0/10 changes
Dec  3 19:12:14 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2200: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:12:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 19:12:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 19:12:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 19:12:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 19:12:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 19:12:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 19:12:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 19:12:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 19:12:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 19:12:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 19:12:14 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:12:14 compute-0 podman[467007]: 2025-12-03 19:12:14.837755212 +0000 UTC m=+0.124505942 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, config_id=edpm, distribution-scope=public, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.openshift.expose-services=, managed_by=edpm_ansible, release-0.7.12=, vcs-type=git, version=9.4, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc.)
Dec  3 19:12:14 compute-0 podman[467009]: 2025-12-03 19:12:14.855914873 +0000 UTC m=+0.114797200 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec  3 19:12:14 compute-0 podman[467008]: 2025-12-03 19:12:14.857223315 +0000 UTC m=+0.127391910 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  3 19:12:16 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2201: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:12:17 compute-0 nova_compute[348325]: 2025-12-03 19:12:17.225 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:12:18 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2202: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:12:18 compute-0 nova_compute[348325]: 2025-12-03 19:12:18.860 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:12:19 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:12:20 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2203: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:12:21 compute-0 podman[467064]: 2025-12-03 19:12:21.949038231 +0000 UTC m=+0.109988767 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 19:12:22 compute-0 nova_compute[348325]: 2025-12-03 19:12:22.228 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:12:22 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2204: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:12:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:12:23.370 286999 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 19:12:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:12:23.372 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 19:12:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:12:23.373 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 19:12:23 compute-0 nova_compute[348325]: 2025-12-03 19:12:23.864 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:12:23 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 19:12:23 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 19:12:23 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 19:12:23 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 19:12:23 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 19:12:23 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:12:23 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev da2d34ff-f9c5-4e55-b6d0-83eb649969c2 does not exist
Dec  3 19:12:23 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev dc6bc642-783d-4aef-82d5-a2be2deb6872 does not exist
Dec  3 19:12:23 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev bfa8ed97-e007-4893-9ca1-df8107d1491a does not exist
Dec  3 19:12:23 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 19:12:23 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 19:12:23 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 19:12:23 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 19:12:23 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 19:12:23 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
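
The handle_command/audit pairs above are clients (here the mgr's cephadm module) dispatching JSON-formatted mon commands such as {"prefix": "config generate-minimal-conf"}. The python-rados binding sends the same payloads directly; a minimal sketch, assuming the client.openstack keyring already used for 'ceph df' earlier in this log:

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          name='client.openstack')
    cluster.connect()
    # Same JSON command shape the monitor logs as mon_command({...}):
    ret, out, errs = cluster.mon_command(
        json.dumps({'prefix': 'df', 'format': 'json'}), b'')
    print(ret, json.loads(out)['stats']['total_bytes'])
    cluster.shutdown()
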
Dec  3 19:12:24 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2205: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:12:24 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 19:12:24 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:12:24 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 19:12:24 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:12:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 19:12:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:12:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 19:12:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:12:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015216620474376506 of space, bias 1.0, pg target 0.4564986142312952 quantized to 32 (current 32)
Dec  3 19:12:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:12:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:12:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:12:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:12:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:12:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Dec  3 19:12:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:12:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 19:12:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:12:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:12:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:12:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 19:12:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:12:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 19:12:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:12:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:12:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:12:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
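
The pg_autoscaler figures above follow a simple proportion: each pool's pg target is its share of raw capacity times its bias times a cluster-wide PG budget, then quantized to a power of two (subject to per-pool minimums, which is why sub-1 targets still show 16 or 32). The budget implied by these lines is 300 — e.g. 0.0015216620474376506 * 300 = 0.4564986142312952 for 'vms' — consistent with mon_target_pg_per_osd=100 across 3 OSDs. A worked check under that assumption:

    # Reproduces the "pg target ... quantized to ..." arithmetic above,
    # assuming budget = mon_target_pg_per_osd (100) * 3 OSDs = 300.
    BUDGET = 300

    def pg_target(usage_ratio, bias):
        return usage_ratio * bias * BUDGET

    print(pg_target(0.0015216620474376506, 1.0))  # ~0.4565  ('vms')
    print(pg_target(0.00125203744627857, 1.0))    # ~0.3756  ('images')
    print(pg_target(5.087256625643029e-07, 4.0))  # ~0.00061 ('cephfs.cephfs.meta')
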
Dec  3 19:12:25 compute-0 podman[467355]: 2025-12-03 19:12:25.212728853 +0000 UTC m=+0.084161172 container create b4e75f89df76144e42dbc2002a5b4c80c26addfe75f1a6fe8561567518efaa39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_cartwright, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 19:12:25 compute-0 podman[467355]: 2025-12-03 19:12:25.179536873 +0000 UTC m=+0.050969232 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:12:25 compute-0 systemd[1]: Started libpod-conmon-b4e75f89df76144e42dbc2002a5b4c80c26addfe75f1a6fe8561567518efaa39.scope.
Dec  3 19:12:25 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:12:25 compute-0 podman[467355]: 2025-12-03 19:12:25.402589007 +0000 UTC m=+0.274021376 container init b4e75f89df76144e42dbc2002a5b4c80c26addfe75f1a6fe8561567518efaa39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec  3 19:12:25 compute-0 podman[467355]: 2025-12-03 19:12:25.422098011 +0000 UTC m=+0.293530330 container start b4e75f89df76144e42dbc2002a5b4c80c26addfe75f1a6fe8561567518efaa39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_cartwright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 19:12:25 compute-0 podman[467355]: 2025-12-03 19:12:25.428645357 +0000 UTC m=+0.300077676 container attach b4e75f89df76144e42dbc2002a5b4c80c26addfe75f1a6fe8561567518efaa39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec  3 19:12:25 compute-0 priceless_cartwright[467371]: 167 167
Dec  3 19:12:25 compute-0 systemd[1]: libpod-b4e75f89df76144e42dbc2002a5b4c80c26addfe75f1a6fe8561567518efaa39.scope: Deactivated successfully.
Dec  3 19:12:25 compute-0 podman[467355]: 2025-12-03 19:12:25.440810576 +0000 UTC m=+0.312242895 container died b4e75f89df76144e42dbc2002a5b4c80c26addfe75f1a6fe8561567518efaa39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_cartwright, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec  3 19:12:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-8670f5e6b156821ab999e44220d70244d53944ed8366339b96f65fb669169174-merged.mount: Deactivated successfully.
Dec  3 19:12:25 compute-0 podman[467355]: 2025-12-03 19:12:25.53981758 +0000 UTC m=+0.411249869 container remove b4e75f89df76144e42dbc2002a5b4c80c26addfe75f1a6fe8561567518efaa39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_cartwright, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 19:12:25 compute-0 systemd[1]: libpod-conmon-b4e75f89df76144e42dbc2002a5b4c80c26addfe75f1a6fe8561567518efaa39.scope: Deactivated successfully.
Dec  3 19:12:25 compute-0 podman[467393]: 2025-12-03 19:12:25.797710381 +0000 UTC m=+0.071645814 container create 7f0cbea36d2f7e94bae9f0b3eac781c304296665bdc2d394b94cc0ae6887a247 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec  3 19:12:25 compute-0 podman[467393]: 2025-12-03 19:12:25.772685697 +0000 UTC m=+0.046621160 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:12:25 compute-0 systemd[1]: Started libpod-conmon-7f0cbea36d2f7e94bae9f0b3eac781c304296665bdc2d394b94cc0ae6887a247.scope.
Dec  3 19:12:25 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:12:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cc3e93d7666c885a5324fe38bb4d3796592718ba91acfe9d9376774594ee96d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 19:12:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cc3e93d7666c885a5324fe38bb4d3796592718ba91acfe9d9376774594ee96d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 19:12:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cc3e93d7666c885a5324fe38bb4d3796592718ba91acfe9d9376774594ee96d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 19:12:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cc3e93d7666c885a5324fe38bb4d3796592718ba91acfe9d9376774594ee96d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 19:12:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cc3e93d7666c885a5324fe38bb4d3796592718ba91acfe9d9376774594ee96d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 19:12:25 compute-0 podman[467393]: 2025-12-03 19:12:25.988774145 +0000 UTC m=+0.262709628 container init 7f0cbea36d2f7e94bae9f0b3eac781c304296665bdc2d394b94cc0ae6887a247 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_beaver, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 19:12:26 compute-0 podman[467393]: 2025-12-03 19:12:26.014634389 +0000 UTC m=+0.288569822 container start 7f0cbea36d2f7e94bae9f0b3eac781c304296665bdc2d394b94cc0ae6887a247 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_beaver, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Dec  3 19:12:26 compute-0 podman[467393]: 2025-12-03 19:12:26.021126884 +0000 UTC m=+0.295062347 container attach 7f0cbea36d2f7e94bae9f0b3eac781c304296665bdc2d394b94cc0ae6887a247 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 19:12:26 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2206: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:12:26 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 19:12:26 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 4200.0 total, 600.0 interval
Cumulative writes: 9971 writes, 45K keys, 9971 commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.01 MB/s
Cumulative WAL: 9971 writes, 9971 syncs, 1.00 writes per sync, written: 0.06 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1326 writes, 6014 keys, 1326 commit groups, 1.0 writes per commit group, ingest: 8.68 MB, 0.01 MB/s
Interval WAL: 1326 writes, 1326 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     40.0      1.39              0.26        31    0.045       0      0       0.0       0.0
  L6      1/0    6.38 MB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   4.1    122.0    100.5      2.28              0.94        30    0.076    160K    16K       0.0       0.0
 Sum      1/0    6.38 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.1     75.9     77.6      3.67              1.20        61    0.060    160K    16K       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   6.5     36.3     35.5      1.28              0.23        10    0.128     31K   2563       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   0.0    122.0    100.5      2.28              0.94        30    0.076    160K    16K       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     40.2      1.38              0.26        30    0.046       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      6.8      0.01              0.00         1    0.008       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 4200.0 total, 600.0 interval
Flush(GB): cumulative 0.054, interval 0.007
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.28 GB write, 0.07 MB/s write, 0.27 GB read, 0.07 MB/s read, 3.7 seconds
Interval compaction: 0.04 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 1.3 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55911062f1f0#2 capacity: 304.00 MB usage: 32.31 MB table_size: 0 occupancy: 18446744073709551615 collections: 8 last_copies: 0 last_secs: 0.000339 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(2083,31.13 MB,10.2393%) FilterBlock(62,456.05 KB,0.146499%) IndexBlock(62,756.52 KB,0.243021%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
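Multi-line records like the RocksDB statistics dump above reach syslog as a single line per message, with embedded control characters escaped in octal (#012 is newline, #011 is tab, #033 is the ESC that introduces ANSI color sequences). A minimal sketch that undoes the escaping so such dumps read naturally; the input path is an assumption:

    import re
    import sys

    # Undo rsyslog-style octal escapes (#012 = \n, #011 = \t, #033 = ESC).
    # Caveat: this would also rewrite a literal "#nnn" that was never an escape.
    def unescape(line: str) -> str:
        return re.sub(r"#([0-7]{3})", lambda m: chr(int(m.group(1), 8)), line)

    with open("/var/log/messages") as fh:  # hypothetical log path
        for line in fh:
            sys.stdout.write(unescape(line))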
Dec  3 19:12:27 compute-0 nova_compute[348325]: 2025-12-03 19:12:27.232 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:12:27 compute-0 unruffled_beaver[467410]: --> passed data devices: 0 physical, 3 LVM
Dec  3 19:12:27 compute-0 unruffled_beaver[467410]: --> relative data size: 1.0
Dec  3 19:12:27 compute-0 unruffled_beaver[467410]: --> All data devices are unavailable
Dec  3 19:12:27 compute-0 systemd[1]: libpod-7f0cbea36d2f7e94bae9f0b3eac781c304296665bdc2d394b94cc0ae6887a247.scope: Deactivated successfully.
Dec  3 19:12:27 compute-0 systemd[1]: libpod-7f0cbea36d2f7e94bae9f0b3eac781c304296665bdc2d394b94cc0ae6887a247.scope: Consumed 1.479s CPU time.
Dec  3 19:12:27 compute-0 podman[467439]: 2025-12-03 19:12:27.680956389 +0000 UTC m=+0.055147651 container died 7f0cbea36d2f7e94bae9f0b3eac781c304296665bdc2d394b94cc0ae6887a247 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_beaver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 19:12:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-2cc3e93d7666c885a5324fe38bb4d3796592718ba91acfe9d9376774594ee96d-merged.mount: Deactivated successfully.
Dec  3 19:12:27 compute-0 podman[467439]: 2025-12-03 19:12:27.7961945 +0000 UTC m=+0.170385732 container remove 7f0cbea36d2f7e94bae9f0b3eac781c304296665bdc2d394b94cc0ae6887a247 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_beaver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec  3 19:12:27 compute-0 systemd[1]: libpod-conmon-7f0cbea36d2f7e94bae9f0b3eac781c304296665bdc2d394b94cc0ae6887a247.scope: Deactivated successfully.
Dec  3 19:12:27 compute-0 nova_compute[348325]: 2025-12-03 19:12:27.966 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:12:28 compute-0 podman[467477]: 2025-12-03 19:12:28.215180742 +0000 UTC m=+0.160042266 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, tcib_managed=true)
Dec  3 19:12:28 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2207: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:12:28 compute-0 podman[467519]: 2025-12-03 19:12:28.397864587 +0000 UTC m=+0.177560694 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible)
Dec  3 19:12:28 compute-0 nova_compute[348325]: 2025-12-03 19:12:28.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:12:28 compute-0 nova_compute[348325]: 2025-12-03 19:12:28.868 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:12:29 compute-0 podman[467638]: 2025-12-03 19:12:29.123396458 +0000 UTC m=+0.113824459 container create 1ab508d632b981e8284feb409797f817cc8f94793812f56ccec1d3aabcefd2f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_boyd, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  3 19:12:29 compute-0 podman[467638]: 2025-12-03 19:12:29.083870987 +0000 UTC m=+0.074298988 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:12:29 compute-0 systemd[1]: Started libpod-conmon-1ab508d632b981e8284feb409797f817cc8f94793812f56ccec1d3aabcefd2f9.scope.
Dec  3 19:12:29 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:12:29 compute-0 podman[467638]: 2025-12-03 19:12:29.305332944 +0000 UTC m=+0.295760945 container init 1ab508d632b981e8284feb409797f817cc8f94793812f56ccec1d3aabcefd2f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_boyd, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 19:12:29 compute-0 podman[467638]: 2025-12-03 19:12:29.325308028 +0000 UTC m=+0.315736029 container start 1ab508d632b981e8284feb409797f817cc8f94793812f56ccec1d3aabcefd2f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec  3 19:12:29 compute-0 podman[467638]: 2025-12-03 19:12:29.332247543 +0000 UTC m=+0.322675584 container attach 1ab508d632b981e8284feb409797f817cc8f94793812f56ccec1d3aabcefd2f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec  3 19:12:29 compute-0 bold_boyd[467654]: 167 167
Dec  3 19:12:29 compute-0 systemd[1]: libpod-1ab508d632b981e8284feb409797f817cc8f94793812f56ccec1d3aabcefd2f9.scope: Deactivated successfully.
Dec  3 19:12:29 compute-0 conmon[467654]: conmon 1ab508d632b981e8284f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1ab508d632b981e8284feb409797f817cc8f94793812f56ccec1d3aabcefd2f9.scope/container/memory.events
Dec  3 19:12:29 compute-0 podman[467638]: 2025-12-03 19:12:29.343846859 +0000 UTC m=+0.334274860 container died 1ab508d632b981e8284feb409797f817cc8f94793812f56ccec1d3aabcefd2f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_boyd, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 19:12:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-88f4f9e26f32544019b2e583504f7f24d856617d99c7e90a75bbf3852fa93779-merged.mount: Deactivated successfully.
Dec  3 19:12:29 compute-0 podman[467638]: 2025-12-03 19:12:29.442862983 +0000 UTC m=+0.433290974 container remove 1ab508d632b981e8284feb409797f817cc8f94793812f56ccec1d3aabcefd2f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_boyd, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec  3 19:12:29 compute-0 systemd[1]: libpod-conmon-1ab508d632b981e8284feb409797f817cc8f94793812f56ccec1d3aabcefd2f9.scope: Deactivated successfully.
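The bold_boyd container above runs the full libpod lifecycle in well under a second: create, init, start, attach, died, remove, bracketed by the systemd libpod-conmon scope. These look like the short-lived helper containers cephadm launches from the ceph image for host probes (the "167 167" it prints is consistent with the ceph user's uid/gid inside that image, though that is an inference, not something the log states). The same sequence can be replayed from podman's event log; a sketch using the standard podman CLI, where the 5-minute window is arbitrary and the JSON field names may vary across podman versions, hence the defensive .get calls:

    import json
    import subprocess

    # Replay recent container lifecycle events (create/init/start/attach/died/remove).
    proc = subprocess.run(
        ["podman", "events", "--since", "5m", "--stream=false", "--format", "json"],
        capture_output=True, text=True, check=True)
    for line in proc.stdout.splitlines():
        ev = json.loads(line)
        print(ev.get("Time"), ev.get("Type"), ev.get("Status"), ev.get("Name"))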
Dec  3 19:12:29 compute-0 podman[467678]: 2025-12-03 19:12:29.699033474 +0000 UTC m=+0.072875723 container create 7c2ae1f0a5fc5a9d401e51ad0ab51cd88db1c8941bbcaa6aabd0652195681fdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 19:12:29 compute-0 podman[158200]: time="2025-12-03T19:12:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 19:12:29 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
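The _set_new_cache_sizes line is the mon's cache autotuner redistributing its memory target, and the values are plain byte counts: kv_alloc 318767104 B is exactly 304 MiB, which matches the "capacity: 304.00 MB" of the BinnedLRUCache in the RocksDB dump earlier. A quick check of the arithmetic:

    # Mon cache autotune values from the log, converted to MiB (1 MiB = 2**20 B).
    MiB = 1 << 20
    for name, val in [("cache_size", 1020054731),
                      ("inc_alloc", 348127232),
                      ("full_alloc", 348127232),
                      ("kv_alloc", 318767104)]:
        print(f"{name}: {val / MiB:.2f} MiB")
    # cache_size: ~972.80 MiB, inc/full_alloc: 332.00 MiB, kv_alloc: 304.00 MiB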
Dec  3 19:12:29 compute-0 podman[467678]: 2025-12-03 19:12:29.672115555 +0000 UTC m=+0.045957784 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:12:29 compute-0 systemd[1]: Started libpod-conmon-7c2ae1f0a5fc5a9d401e51ad0ab51cd88db1c8941bbcaa6aabd0652195681fdc.scope.
Dec  3 19:12:29 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:12:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/952444378c3a02dfc63f8960afdf943891d32f7bd06b714f1ff7ca31a61a3de7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 19:12:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/952444378c3a02dfc63f8960afdf943891d32f7bd06b714f1ff7ca31a61a3de7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 19:12:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/952444378c3a02dfc63f8960afdf943891d32f7bd06b714f1ff7ca31a61a3de7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 19:12:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/952444378c3a02dfc63f8960afdf943891d32f7bd06b714f1ff7ca31a61a3de7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
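The kernel's "supports timestamps until 2038 (0x7fffffff)" notes on these overlay remounts are the classic 32-bit time_t ceiling (the xfs filesystems involved were presumably formatted without the bigtime feature). The hex value decodes directly:

    from datetime import datetime, timezone

    # 0x7fffffff seconds after the Unix epoch, i.e. the 32-bit time_t limit.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00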
Dec  3 19:12:29 compute-0 podman[467678]: 2025-12-03 19:12:29.931934302 +0000 UTC m=+0.305776551 container init 7c2ae1f0a5fc5a9d401e51ad0ab51cd88db1c8941bbcaa6aabd0652195681fdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_ramanujan, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  3 19:12:29 compute-0 podman[467678]: 2025-12-03 19:12:29.951867886 +0000 UTC m=+0.325710095 container start 7c2ae1f0a5fc5a9d401e51ad0ab51cd88db1c8941bbcaa6aabd0652195681fdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_ramanujan, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec  3 19:12:29 compute-0 podman[467678]: 2025-12-03 19:12:29.957177042 +0000 UTC m=+0.331019301 container attach 7c2ae1f0a5fc5a9d401e51ad0ab51cd88db1c8941bbcaa6aabd0652195681fdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_ramanujan, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec  3 19:12:29 compute-0 podman[158200]: @ - - [03/Dec/2025:19:12:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45381 "" "Go-http-client/1.1"
Dec  3 19:12:29 compute-0 podman[158200]: @ - - [03/Dec/2025:19:12:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9062 "" "Go-http-client/1.1"
Dec  3 19:12:30 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2208: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]: {
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:    "0": [
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:        {
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:            "devices": [
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:                "/dev/loop3"
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:            ],
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:            "lv_name": "ceph_lv0",
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:            "lv_size": "21470642176",
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=973fbbc8-5aff-4a53-bee8-42e5a6788dd6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:            "lv_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:            "name": "ceph_lv0",
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:            "tags": {
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:                "ceph.block_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:                "ceph.cephx_lockbox_secret": "",
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:                "ceph.cluster_name": "ceph",
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:                "ceph.crush_device_class": "",
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:                "ceph.encrypted": "0",
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:                "ceph.osd_fsid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:                "ceph.osd_id": "0",
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:                "ceph.type": "block",
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:                "ceph.vdo": "0"
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:            },
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:            "type": "block",
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:            "vg_name": "ceph_vg0"
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:        }
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:    ],
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:    "1": [
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:        {
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:            "devices": [
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:                "/dev/loop4"
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:            ],
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:            "lv_name": "ceph_lv1",
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:            "lv_size": "21470642176",
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1e2b0083-5293-47cb-a3d1-bc27cedc4ede,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:            "lv_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:            "name": "ceph_lv1",
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:            "tags": {
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:                "ceph.block_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:                "ceph.cephx_lockbox_secret": "",
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:                "ceph.cluster_name": "ceph",
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:                "ceph.crush_device_class": "",
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:                "ceph.encrypted": "0",
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:                "ceph.osd_fsid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:                "ceph.osd_id": "1",
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:                "ceph.type": "block",
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:                "ceph.vdo": "0"
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:            },
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:            "type": "block",
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:            "vg_name": "ceph_vg1"
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:        }
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:    ],
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:    "2": [
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:        {
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:            "devices": [
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:                "/dev/loop5"
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:            ],
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:            "lv_name": "ceph_lv2",
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:            "lv_size": "21470642176",
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2abec9de-afba-437e-9a17-384a1dd8cd50,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:            "lv_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:            "name": "ceph_lv2",
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:            "tags": {
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:                "ceph.block_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:                "ceph.cephx_lockbox_secret": "",
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:                "ceph.cluster_name": "ceph",
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:                "ceph.crush_device_class": "",
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:                "ceph.encrypted": "0",
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:                "ceph.osd_fsid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:                "ceph.osd_id": "2",
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:                "ceph.type": "block",
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:                "ceph.vdo": "0"
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:            },
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:            "type": "block",
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:            "vg_name": "ceph_vg2"
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:        }
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]:    ]
Dec  3 19:12:30 compute-0 dazzling_ramanujan[467694]: }
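The JSON that dazzling_ramanujan printed is a per-OSD inventory in the style of ceph-volume lvm list --format json: top-level keys are OSD ids, each holding the logical volumes (with their ceph.* lv_tags) backing that OSD. A sketch that reduces a captured copy of it to an OSD-to-device map; report.json is a hypothetical file holding the container's stdout:

    import json

    # Map each OSD id to its LV path, backing device(s) and osd_fsid tag.
    with open("report.json") as fh:  # hypothetical capture of the JSON above
        report = json.load(fh)

    for osd_id, lvs in sorted(report.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: lv={lv['lv_path']}",
                  f"devices={','.join(lv['devices'])}",
                  f"osd_fsid={tags['ceph.osd_fsid']}")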
Dec  3 19:12:30 compute-0 systemd[1]: libpod-7c2ae1f0a5fc5a9d401e51ad0ab51cd88db1c8941bbcaa6aabd0652195681fdc.scope: Deactivated successfully.
Dec  3 19:12:30 compute-0 podman[467678]: 2025-12-03 19:12:30.824870304 +0000 UTC m=+1.198712533 container died 7c2ae1f0a5fc5a9d401e51ad0ab51cd88db1c8941bbcaa6aabd0652195681fdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec  3 19:12:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-952444378c3a02dfc63f8960afdf943891d32f7bd06b714f1ff7ca31a61a3de7-merged.mount: Deactivated successfully.
Dec  3 19:12:30 compute-0 podman[467678]: 2025-12-03 19:12:30.947392606 +0000 UTC m=+1.321234825 container remove 7c2ae1f0a5fc5a9d401e51ad0ab51cd88db1c8941bbcaa6aabd0652195681fdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_ramanujan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec  3 19:12:30 compute-0 systemd[1]: libpod-conmon-7c2ae1f0a5fc5a9d401e51ad0ab51cd88db1c8941bbcaa6aabd0652195681fdc.scope: Deactivated successfully.
Dec  3 19:12:31 compute-0 openstack_network_exporter[365222]: ERROR   19:12:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 19:12:31 compute-0 openstack_network_exporter[365222]: ERROR   19:12:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:12:31 compute-0 openstack_network_exporter[365222]: ERROR   19:12:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:12:31 compute-0 openstack_network_exporter[365222]: ERROR   19:12:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 19:12:31 compute-0 openstack_network_exporter[365222]: ERROR   19:12:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 19:12:31 compute-0 nova_compute[348325]: 2025-12-03 19:12:31.488 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:12:32 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2209: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:12:32 compute-0 nova_compute[348325]: 2025-12-03 19:12:32.240 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:12:32 compute-0 podman[467853]: 2025-12-03 19:12:32.362169796 +0000 UTC m=+0.105094629 container create 40b022c96ea8638cfcb5eb0e550c6f4464a0a1162788ea2e29b7c413d59abe63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec  3 19:12:32 compute-0 podman[467853]: 2025-12-03 19:12:32.321416868 +0000 UTC m=+0.064341771 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:12:32 compute-0 systemd[1]: Started libpod-conmon-40b022c96ea8638cfcb5eb0e550c6f4464a0a1162788ea2e29b7c413d59abe63.scope.
Dec  3 19:12:32 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:12:32 compute-0 nova_compute[348325]: 2025-12-03 19:12:32.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:12:32 compute-0 podman[467853]: 2025-12-03 19:12:32.518310589 +0000 UTC m=+0.261235482 container init 40b022c96ea8638cfcb5eb0e550c6f4464a0a1162788ea2e29b7c413d59abe63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_albattani, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  3 19:12:32 compute-0 podman[467853]: 2025-12-03 19:12:32.536608883 +0000 UTC m=+0.279533686 container start 40b022c96ea8638cfcb5eb0e550c6f4464a0a1162788ea2e29b7c413d59abe63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_albattani, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 19:12:32 compute-0 podman[467853]: 2025-12-03 19:12:32.543650871 +0000 UTC m=+0.286575694 container attach 40b022c96ea8638cfcb5eb0e550c6f4464a0a1162788ea2e29b7c413d59abe63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Dec  3 19:12:32 compute-0 vigilant_albattani[467869]: 167 167
Dec  3 19:12:32 compute-0 systemd[1]: libpod-40b022c96ea8638cfcb5eb0e550c6f4464a0a1162788ea2e29b7c413d59abe63.scope: Deactivated successfully.
Dec  3 19:12:32 compute-0 podman[467853]: 2025-12-03 19:12:32.550080744 +0000 UTC m=+0.293005597 container died 40b022c96ea8638cfcb5eb0e550c6f4464a0a1162788ea2e29b7c413d59abe63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_albattani, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 19:12:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-c42ad4a2e878fa300b0d1385cc7a4e35e339efbbd0127b0e9d456f69adef604d-merged.mount: Deactivated successfully.
Dec  3 19:12:32 compute-0 podman[467853]: 2025-12-03 19:12:32.614140448 +0000 UTC m=+0.357065251 container remove 40b022c96ea8638cfcb5eb0e550c6f4464a0a1162788ea2e29b7c413d59abe63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_albattani, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec  3 19:12:32 compute-0 systemd[1]: libpod-conmon-40b022c96ea8638cfcb5eb0e550c6f4464a0a1162788ea2e29b7c413d59abe63.scope: Deactivated successfully.
Dec  3 19:12:32 compute-0 podman[467892]: 2025-12-03 19:12:32.917754697 +0000 UTC m=+0.093699809 container create dd050f88510d70689eb20b06b19c3bc8a13a4b375e58c88778f34ca54c2b52ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_easley, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  3 19:12:32 compute-0 podman[467892]: 2025-12-03 19:12:32.889159266 +0000 UTC m=+0.065104398 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:12:33 compute-0 systemd[1]: Started libpod-conmon-dd050f88510d70689eb20b06b19c3bc8a13a4b375e58c88778f34ca54c2b52ea.scope.
Dec  3 19:12:33 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:12:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7da9c7e87714ad73b46a0c7442e6a02751fe76bf268336386ac81a87048379cf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 19:12:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7da9c7e87714ad73b46a0c7442e6a02751fe76bf268336386ac81a87048379cf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 19:12:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7da9c7e87714ad73b46a0c7442e6a02751fe76bf268336386ac81a87048379cf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 19:12:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7da9c7e87714ad73b46a0c7442e6a02751fe76bf268336386ac81a87048379cf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 19:12:33 compute-0 podman[467892]: 2025-12-03 19:12:33.09035468 +0000 UTC m=+0.266299782 container init dd050f88510d70689eb20b06b19c3bc8a13a4b375e58c88778f34ca54c2b52ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_easley, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  3 19:12:33 compute-0 podman[467892]: 2025-12-03 19:12:33.109397813 +0000 UTC m=+0.285342895 container start dd050f88510d70689eb20b06b19c3bc8a13a4b375e58c88778f34ca54c2b52ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_easley, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec  3 19:12:33 compute-0 podman[467892]: 2025-12-03 19:12:33.115807145 +0000 UTC m=+0.291752227 container attach dd050f88510d70689eb20b06b19c3bc8a13a4b375e58c88778f34ca54c2b52ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_easley, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 19:12:33 compute-0 nova_compute[348325]: 2025-12-03 19:12:33.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:12:33 compute-0 nova_compute[348325]: 2025-12-03 19:12:33.873 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:12:34 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2210: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:12:34 compute-0 stupefied_easley[467908]: {
Dec  3 19:12:34 compute-0 stupefied_easley[467908]:    "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
Dec  3 19:12:34 compute-0 stupefied_easley[467908]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:12:34 compute-0 stupefied_easley[467908]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 19:12:34 compute-0 stupefied_easley[467908]:        "osd_id": 1,
Dec  3 19:12:34 compute-0 stupefied_easley[467908]:        "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 19:12:34 compute-0 stupefied_easley[467908]:        "type": "bluestore"
Dec  3 19:12:34 compute-0 stupefied_easley[467908]:    },
Dec  3 19:12:34 compute-0 stupefied_easley[467908]:    "2abec9de-afba-437e-9a17-384a1dd8cd50": {
Dec  3 19:12:34 compute-0 stupefied_easley[467908]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:12:34 compute-0 stupefied_easley[467908]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 19:12:34 compute-0 stupefied_easley[467908]:        "osd_id": 2,
Dec  3 19:12:34 compute-0 stupefied_easley[467908]:        "osd_uuid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 19:12:34 compute-0 stupefied_easley[467908]:        "type": "bluestore"
Dec  3 19:12:34 compute-0 stupefied_easley[467908]:    },
Dec  3 19:12:34 compute-0 stupefied_easley[467908]:    "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": {
Dec  3 19:12:34 compute-0 stupefied_easley[467908]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:12:34 compute-0 stupefied_easley[467908]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 19:12:34 compute-0 stupefied_easley[467908]:        "osd_id": 0,
Dec  3 19:12:34 compute-0 stupefied_easley[467908]:        "osd_uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 19:12:34 compute-0 stupefied_easley[467908]:        "type": "bluestore"
Dec  3 19:12:34 compute-0 stupefied_easley[467908]:    }
Dec  3 19:12:34 compute-0 stupefied_easley[467908]: }
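stupefied_easley's output is the complementary view: bluestore OSDs keyed by osd_uuid, each with its device-mapper path and osd_id. Each osd_uuid here should equal the ceph.osd_fsid tag on exactly one LV in the earlier listing (e.g. 973fbbc8-5aff-4a53-bee8-42e5a6788dd6 is osd.0 on ceph_vg0/ceph_lv0 in both). A sketch of that cross-check, where both input files are hypothetical captures of the two containers' stdout:

    import json

    lvm = json.load(open("lvm_list.json"))  # first listing, keyed by OSD id
    raw = json.load(open("raw_list.json"))  # second listing, keyed by osd_uuid

    # osd_fsid tag -> OSD id, taken from the LVM listing.
    fsid_to_osd = {lv["tags"]["ceph.osd_fsid"]: osd_id
                   for osd_id, lvs in lvm.items() for lv in lvs}

    for uuid, info in raw.items():
        assert fsid_to_osd[uuid] == str(info["osd_id"]), uuid
        print(f"osd.{info['osd_id']} -> {info['device']} ({info['type']})")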
Dec  3 19:12:34 compute-0 systemd[1]: libpod-dd050f88510d70689eb20b06b19c3bc8a13a4b375e58c88778f34ca54c2b52ea.scope: Deactivated successfully.
Dec  3 19:12:34 compute-0 systemd[1]: libpod-dd050f88510d70689eb20b06b19c3bc8a13a4b375e58c88778f34ca54c2b52ea.scope: Consumed 1.241s CPU time.
Dec  3 19:12:34 compute-0 podman[467892]: 2025-12-03 19:12:34.367080437 +0000 UTC m=+1.543025529 container died dd050f88510d70689eb20b06b19c3bc8a13a4b375e58c88778f34ca54c2b52ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  3 19:12:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-7da9c7e87714ad73b46a0c7442e6a02751fe76bf268336386ac81a87048379cf-merged.mount: Deactivated successfully.
Dec  3 19:12:34 compute-0 podman[467892]: 2025-12-03 19:12:34.444789055 +0000 UTC m=+1.620734127 container remove dd050f88510d70689eb20b06b19c3bc8a13a4b375e58c88778f34ca54c2b52ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_easley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  3 19:12:34 compute-0 systemd[1]: libpod-conmon-dd050f88510d70689eb20b06b19c3bc8a13a4b375e58c88778f34ca54c2b52ea.scope: Deactivated successfully.
Dec  3 19:12:34 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 19:12:34 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:12:34 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 19:12:34 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:12:34 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev facb872a-aaeb-4eae-aa9e-0b2514928671 does not exist
Dec  3 19:12:34 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 6775b425-369e-4783-bcb1-e178c1e8d79a does not exist
Dec  3 19:12:34 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:12:35 compute-0 nova_compute[348325]: 2025-12-03 19:12:35.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:12:35 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:12:35 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:12:36 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2211: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:12:37 compute-0 nova_compute[348325]: 2025-12-03 19:12:37.244 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:12:37 compute-0 nova_compute[348325]: 2025-12-03 19:12:37.488 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:12:37 compute-0 nova_compute[348325]: 2025-12-03 19:12:37.489 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  3 19:12:37 compute-0 nova_compute[348325]: 2025-12-03 19:12:37.489 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  3 19:12:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 19:12:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/488236704' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 19:12:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 19:12:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/488236704' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 19:12:38 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2212: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:12:38 compute-0 nova_compute[348325]: 2025-12-03 19:12:38.641 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "refresh_cache-a4fc45c7-44e4-4b50-a3e0-98de13268f88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  3 19:12:38 compute-0 nova_compute[348325]: 2025-12-03 19:12:38.642 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquired lock "refresh_cache-a4fc45c7-44e4-4b50-a3e0-98de13268f88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  3 19:12:38 compute-0 nova_compute[348325]: 2025-12-03 19:12:38.642 348329 DEBUG nova.network.neutron [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec  3 19:12:38 compute-0 nova_compute[348325]: 2025-12-03 19:12:38.642 348329 DEBUG nova.objects.instance [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lazy-loading 'info_cache' on Instance uuid a4fc45c7-44e4-4b50-a3e0-98de13268f88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  3 19:12:38 compute-0 nova_compute[348325]: 2025-12-03 19:12:38.878 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:12:38 compute-0 podman[468003]: 2025-12-03 19:12:38.995276922 +0000 UTC m=+0.130876442 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, managed_by=edpm_ansible, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  3 19:12:39 compute-0 podman[468002]: 2025-12-03 19:12:39.010412462 +0000 UTC m=+0.150227192 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  3 19:12:39 compute-0 podman[468001]: 2025-12-03 19:12:39.013018564 +0000 UTC m=+0.157889165 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd)
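
[Editor's note: the health_status events above embed each container's config_data as a Python-literal dict (single quotes, True/False), so ast.literal_eval can recover it where json.loads cannot. A sketch, with an abridged sample line standing in for a full event.]

    # Sketch: recovering config_data from a podman health_status event.
    # Assumes the braces inside config_data are balanced, which holds for
    # these events; the sample line is abridged from the log.
    import ast

    log_line = ("container health_status ... config_data={'image': 'img', "
                "'healthcheck': {'test': '/openstack/healthcheck'}}, config_id=edpm")

    def extract_config_data(line):
        start = line.index('config_data=') + len('config_data=')
        depth = 0
        for i, ch in enumerate(line[start:], start):
            depth += (ch == '{') - (ch == '}')
            if depth == 0:
                return ast.literal_eval(line[start:i + 1])

    print(extract_config_data(log_line)['healthcheck']['test'])
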
Dec  3 19:12:39 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:12:40 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2213: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:12:41 compute-0 nova_compute[348325]: 2025-12-03 19:12:41.654 348329 DEBUG nova.network.neutron [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Updating instance_info_cache with network_info: [{"id": "cf729fa8-9549-4bf2-9858-7e8de773e1bc", "address": "fa:16:3e:8d:91:4c", "network": {"id": "04e258c0-609e-4010-a306-af20506c3a9d", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.160", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d29cef7b24ee4d30b2b3f5027ec6aafb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf729fa8-95", "ovs_interfaceid": "cf729fa8-9549-4bf2-9858-7e8de773e1bc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 19:12:41 compute-0 nova_compute[348325]: 2025-12-03 19:12:41.674 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Releasing lock "refresh_cache-a4fc45c7-44e4-4b50-a3e0-98de13268f88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 19:12:41 compute-0 nova_compute[348325]: 2025-12-03 19:12:41.675 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
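
[Editor's note: the Acquiring/Acquired/Releasing triplet around "refresh_cache-a4fc45c7-..." is oslo.concurrency's named-lock context manager; the heal task holds the lock while rewriting the instance's info cache. A reduced sketch, with the lock name from the log and an illustrative body.]

    # Sketch of the oslo.concurrency pattern behind the
    # Acquiring/Acquired/Releasing lock DEBUG triplet.
    from oslo_concurrency import lockutils

    instance_uuid = 'a4fc45c7-44e4-4b50-a3e0-98de13268f88'  # from the log
    with lockutils.lock('refresh_cache-%s' % instance_uuid):
        # the real code refreshes the network info cache here, so that
        # concurrent heals of the same instance serialize on this lock
        pass
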
Dec  3 19:12:42 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2214: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:12:42 compute-0 nova_compute[348325]: 2025-12-03 19:12:42.248 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:12:43 compute-0 nova_compute[348325]: 2025-12-03 19:12:43.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:12:43 compute-0 nova_compute[348325]: 2025-12-03 19:12:43.488 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
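
[Editor's note: _reclaim_queued_deletes returns immediately because reclaim_instance_interval is at its default of 0, meaning soft-deleted instances are never reclaimed by this task. A sketch of that guard; nova defines the option itself, and registering it below is only to keep the example self-contained.]

    # Sketch of the "CONF.reclaim_instance_interval <= 0, skipping" guard.
    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_opts([cfg.IntOpt('reclaim_instance_interval', default=0)])

    def _reclaim_queued_deletes():
        if CONF.reclaim_instance_interval <= 0:
            return  # matches the "skipping..." DEBUG line
        # otherwise: purge instances SOFT_DELETED longer than the interval

    _reclaim_queued_deletes()
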
Dec  3 19:12:43 compute-0 nova_compute[348325]: 2025-12-03 19:12:43.884 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:12:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:12:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:12:44 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:12:44 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:12:44 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:12:44 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:12:44 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2215: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:12:44 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:12:45 compute-0 podman[468062]: 2025-12-03 19:12:45.976761694 +0000 UTC m=+0.115242462 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 19:12:45 compute-0 podman[468061]: 2025-12-03 19:12:45.985282036 +0000 UTC m=+0.130165536 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, config_id=edpm, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, version=9.4, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, release=1214.1726694543, vendor=Red Hat, Inc., container_name=kepler, architecture=x86_64)
Dec  3 19:12:46 compute-0 podman[468063]: 2025-12-03 19:12:46.012745959 +0000 UTC m=+0.143108984 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  3 19:12:46 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2216: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:12:47 compute-0 nova_compute[348325]: 2025-12-03 19:12:47.252 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:12:48 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2217: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:12:48 compute-0 nova_compute[348325]: 2025-12-03 19:12:48.887 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:12:49 compute-0 nova_compute[348325]: 2025-12-03 19:12:49.480 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:12:49 compute-0 nova_compute[348325]: 2025-12-03 19:12:49.508 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:12:49 compute-0 nova_compute[348325]: 2025-12-03 19:12:49.544 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 19:12:49 compute-0 nova_compute[348325]: 2025-12-03 19:12:49.545 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 19:12:49 compute-0 nova_compute[348325]: 2025-12-03 19:12:49.546 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 19:12:49 compute-0 nova_compute[348325]: 2025-12-03 19:12:49.547 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 19:12:49 compute-0 nova_compute[348325]: 2025-12-03 19:12:49.548 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 19:12:49 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:12:50 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 19:12:50 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/853861095' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 19:12:50 compute-0 nova_compute[348325]: 2025-12-03 19:12:50.113 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
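
[Editor's note: the resource tracker sizes Ceph-backed storage by shelling out to ceph df, as the two processutils lines show. A sketch reproducing the call with the stdlib; the JSON keys used ("stats", "total_bytes", "total_avail_bytes") follow ceph's df output but should be verified against the deployed release.]

    # Sketch: the "ceph df --format=json" call from the log, via subprocess.
    import json
    import subprocess

    out = subprocess.check_output([
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf',
    ])
    stats = json.loads(out)['stats']  # key names assumed per ceph's JSON
    print('%.1f GiB free of %.1f GiB' % (
        stats['total_avail_bytes'] / 2**30, stats['total_bytes'] / 2**30))
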
Dec  3 19:12:50 compute-0 nova_compute[348325]: 2025-12-03 19:12:50.244 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 19:12:50 compute-0 nova_compute[348325]: 2025-12-03 19:12:50.246 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 19:12:50 compute-0 nova_compute[348325]: 2025-12-03 19:12:50.255 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 19:12:50 compute-0 nova_compute[348325]: 2025-12-03 19:12:50.256 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 19:12:50 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2218: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:12:50 compute-0 nova_compute[348325]: 2025-12-03 19:12:50.954 348329 WARNING nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 19:12:50 compute-0 nova_compute[348325]: 2025-12-03 19:12:50.955 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3475MB free_disk=59.89699935913086GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
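
[Editor's note: the resource view above serializes pci_devices as an inline JSON array. A sketch tallying devices by vendor (8086 is Intel, 1af4 is virtio), using an abridged two-entry sample taken from the log line.]

    # Sketch: summarizing the pci_devices array from the
    # "Hypervisor/Node resource view" line (abridged sample).
    import json
    from collections import Counter

    pci_devices = json.loads('''[
      {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2",
       "product_id": "7020", "vendor_id": "8086", "numa_node": null,
       "label": "label_8086_7020", "dev_type": "type-PCI"},
      {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0",
       "product_id": "1001", "vendor_id": "1af4", "numa_node": null,
       "label": "label_1af4_1001", "dev_type": "type-PCI"}
    ]''')
    print(Counter(d['vendor_id'] for d in pci_devices))
    # the full line carries 11 devices: 5 x 8086 and 6 x 1af4
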
Dec  3 19:12:50 compute-0 nova_compute[348325]: 2025-12-03 19:12:50.956 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 19:12:50 compute-0 nova_compute[348325]: 2025-12-03 19:12:50.956 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 19:12:51 compute-0 nova_compute[348325]: 2025-12-03 19:12:51.065 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance a4fc45c7-44e4-4b50-a3e0-98de13268f88 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 19:12:51 compute-0 nova_compute[348325]: 2025-12-03 19:12:51.065 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance a364994c-8442-4a4c-bd6b-f3a2d31e4483 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 19:12:51 compute-0 nova_compute[348325]: 2025-12-03 19:12:51.066 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 19:12:51 compute-0 nova_compute[348325]: 2025-12-03 19:12:51.066 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 19:12:51 compute-0 nova_compute[348325]: 2025-12-03 19:12:51.081 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Refreshing inventories for resource provider 00cd1895-22aa-49c6-bdb2-0991af662704 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  3 19:12:51 compute-0 nova_compute[348325]: 2025-12-03 19:12:51.108 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Updating ProviderTree inventory for provider 00cd1895-22aa-49c6-bdb2-0991af662704 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  3 19:12:51 compute-0 nova_compute[348325]: 2025-12-03 19:12:51.110 348329 DEBUG nova.compute.provider_tree [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Updating inventory in ProviderTree for provider 00cd1895-22aa-49c6-bdb2-0991af662704 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
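
[Editor's note: placement derives each resource class's effective capacity from this inventory as (total - reserved) * allocation_ratio. Worked through with the numbers in the line above.]

    # Effective placement capacity, (total - reserved) * allocation_ratio,
    # for the inventory logged above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
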
Dec  3 19:12:51 compute-0 nova_compute[348325]: 2025-12-03 19:12:51.124 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Refreshing aggregate associations for resource provider 00cd1895-22aa-49c6-bdb2-0991af662704, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  3 19:12:51 compute-0 nova_compute[348325]: 2025-12-03 19:12:51.143 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Refreshing trait associations for resource provider 00cd1895-22aa-49c6-bdb2-0991af662704, traits: COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_FMA3,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AESNI,HW_CPU_X86_AMD_SVM,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SVM,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_ABM,HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_BMI,HW_CPU_X86_SHA,COMPUTE_NODE,HW_CPU_X86_SSE42,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE4A,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,HW_CPU_X86_AVX2,COMPUTE_ACCELERATORS,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ARI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  3 19:12:51 compute-0 nova_compute[348325]: 2025-12-03 19:12:51.205 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 19:12:51 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 19:12:51 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/484094717' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 19:12:51 compute-0 nova_compute[348325]: 2025-12-03 19:12:51.724 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 19:12:51 compute-0 nova_compute[348325]: 2025-12-03 19:12:51.734 348329 DEBUG nova.compute.provider_tree [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 19:12:51 compute-0 nova_compute[348325]: 2025-12-03 19:12:51.755 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 19:12:51 compute-0 nova_compute[348325]: 2025-12-03 19:12:51.756 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 19:12:51 compute-0 nova_compute[348325]: 2025-12-03 19:12:51.757 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.801s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 19:12:52 compute-0 nova_compute[348325]: 2025-12-03 19:12:52.257 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:12:52 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2219: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:12:52 compute-0 podman[468163]: 2025-12-03 19:12:52.777810064 +0000 UTC m=+0.129888089 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  3 19:12:53 compute-0 nova_compute[348325]: 2025-12-03 19:12:53.890 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:12:54 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2220: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:12:54 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:12:56 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2221: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:12:57 compute-0 nova_compute[348325]: 2025-12-03 19:12:57.264 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:12:58 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2222: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:12:58 compute-0 nova_compute[348325]: 2025-12-03 19:12:58.894 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:12:58 compute-0 podman[468188]: 2025-12-03 19:12:58.991615703 +0000 UTC m=+0.134653782 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm)
Dec  3 19:12:59 compute-0 podman[468187]: 2025-12-03 19:12:59.053984066 +0000 UTC m=+0.196132925 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:12:59 compute-0 podman[158200]: time="2025-12-03T19:12:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 19:12:59 compute-0 podman[158200]: @ - - [03/Dec/2025:19:12:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 19:12:59 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:12:59 compute-0 podman[158200]: @ - - [03/Dec/2025:19:12:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8650 "" "Go-http-client/1.1"
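
[Editor's note: the podman[158200] lines are libpod's REST service logging GET requests over its unix socket. A stdlib-only sketch of the same containers/json query; the socket path is taken from the podman_exporter CONTAINER_HOST setting above and may differ per deployment.]

    # Sketch: querying the libpod REST endpoint from the access-log lines
    # over podman's unix socket, stdlib only.
    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, sock_path):
            super().__init__('localhost')
            self.sock_path = sock_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.sock_path)

    conn = UnixHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    containers = json.loads(conn.getresponse().read())
    print(len(containers), 'containers')
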
Dec  3 19:13:00 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2223: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:13:01 compute-0 openstack_network_exporter[365222]: ERROR   19:13:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:13:01 compute-0 openstack_network_exporter[365222]: ERROR   19:13:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:13:01 compute-0 openstack_network_exporter[365222]: ERROR   19:13:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 19:13:01 compute-0 openstack_network_exporter[365222]: ERROR   19:13:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 19:13:01 compute-0 openstack_network_exporter[365222]: ERROR   19:13:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
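
[Editor's note: these exporter errors are expected on a compute node: ovn-northd runs on the control plane, so no <daemon>.<pid>.ctl appctl socket exists locally for it. A sketch of the probe the exporter effectively performs; the run directories globbed here are common defaults, assumed rather than read from its config.]

    # Sketch of the probe behind "no control socket files found for ovn-northd".
    import glob

    def find_ctl(daemon, rundirs=('/var/run/ovn', '/run/ovn', '/run/openvswitch')):
        for d in rundirs:
            hits = glob.glob('%s/%s.*.ctl' % (d, daemon))
            if hits:
                return hits[0]  # e.g. /run/ovn/ovn-northd.<pid>.ctl
        return None

    if find_ctl('ovn-northd') is None:
        print('no control socket files found for ovn-northd')  # as logged
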
Dec  3 19:13:02 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2224: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:13:02 compute-0 nova_compute[348325]: 2025-12-03 19:13:02.272 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:13:03 compute-0 nova_compute[348325]: 2025-12-03 19:13:03.898 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:13:04 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2225: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:13:04 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:13:06 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2226: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:13:07 compute-0 nova_compute[348325]: 2025-12-03 19:13:07.277 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:13:08 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2227: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:13:08 compute-0 nova_compute[348325]: 2025-12-03 19:13:08.902 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:13:09 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:13:09 compute-0 podman[468231]: 2025-12-03 19:13:09.985929835 +0000 UTC m=+0.133501015 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  3 19:13:10 compute-0 podman[468232]: 2025-12-03 19:13:10.006848363 +0000 UTC m=+0.147197791 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, release=1755695350, config_id=edpm, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, container_name=openstack_network_exporter, version=9.6)
Dec  3 19:13:10 compute-0 podman[468230]: 2025-12-03 19:13:10.012950449 +0000 UTC m=+0.167031054 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec  3 19:13:10 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2228: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:13:12 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2229: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:13:12 compute-0 nova_compute[348325]: 2025-12-03 19:13:12.282 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.259 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them, so the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.260 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.261 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.262 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7eff8d7fffe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.262 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.264 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff9026f920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.264 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.265 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.265 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffa10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.265 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8daba2d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.266 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.268 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff90799b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.269 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.269 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8f46ebd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.270 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.271 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.272 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.272 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.273 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.273 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.274 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.274 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.275 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.276 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.276 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.276 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7fff50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.277 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.278 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7fffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.280 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8ef7c7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.281 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a4fc45c7-44e4-4b50-a3e0-98de13268f88', 'name': 'te-0714371-asg-eacwc356yfed-wjjibmhqaqmp-wkbbxaqu3pya', 'flavor': {'id': 'a94cfbfb-a20a-4689-ac91-e7436db75880', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '29e9e995-880d-46f8-bdd0-149d4e107ea9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000c', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'd29cef7b24ee4d30b2b3f5027ec6aafb', 'user_id': '5b5e6c2a7cce4e3b96611203def80123', 'hostId': 'd87badab98086e7cd0aaefe9beb8cbc86d59712043f354b2bb8c77be', 'status': 'active', 'metadata': {'metering.server_group': 'd721c97c-b9eb-44f9-a826-1b99239b172a'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.287 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a364994c-8442-4a4c-bd6b-f3a2d31e4483', 'name': 'te-0714371-asg-eacwc356yfed-ehdrupxp3h3u-navxh3tm2qn5', 'flavor': {'id': 'a94cfbfb-a20a-4689-ac91-e7436db75880', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '29e9e995-880d-46f8-bdd0-149d4e107ea9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'd29cef7b24ee4d30b2b3f5027ec6aafb', 'user_id': '5b5e6c2a7cce4e3b96611203def80123', 'hostId': 'd87badab98086e7cd0aaefe9beb8cbc86d59712043f354b2bb8c77be', 'status': 'active', 'metadata': {'metering.server_group': 'd721c97c-b9eb-44f9-a826-1b99239b172a'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
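
Each "instance data" record above is a plain dict produced by discover_libvirt_polling; downstream pollsters read only a handful of its keys. A sketch using field names copied from the log entries (values abbreviated; illustrative only, not ceilometer code):

    instance = {
        "id": "a4fc45c7-44e4-4b50-a3e0-98de13268f88",
        "flavor": {"name": "m1.nano", "vcpus": 1, "ram": 128, "disk": 1},
        "OS-EXT-STS:vm_state": "running",
        "metadata": {"metering.server_group": "d721c97c-b9eb-44f9-a826-1b99239b172a"},
    }
    resource_id = instance["id"]          # becomes the sample's resource id
    ram_mib = instance["flavor"]["ram"]   # 128 MiB; compare memory.usage below
    assert instance["OS-EXT-STS:vm_state"] == "running"
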
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.287 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.288 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.288 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8050>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.288 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.290 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-03T19:13:13.288834) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.298 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.306 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.307 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
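
The "Checking if we need coordination" / hashring lines in this cycle reflect workload partitioning: when several agents poll the same source, a hash ring decides which agent owns each resource, and a pollster outside any coordinated source (group name [None]) simply polls everything locally. A toy ring for illustration (ceilometer itself delegates this to the tooz library):

    import hashlib

    AGENTS = ["compute-0"]  # a single agent owns every partition

    def owner(resource_id):
        h = int(hashlib.md5(resource_id.encode()).hexdigest(), 16)
        return AGENTS[h % len(AGENTS)]

    assert owner("a4fc45c7-44e4-4b50-a3e0-98de13268f88") == "compute-0"
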
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.308 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7eff8d8a80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.308 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.308 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.308 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.309 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.309 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.309 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-03T19:13:13.309061) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.310 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.311 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.311 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7eff8d8a8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.311 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.312 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff9026f920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.312 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff9026f920>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.312 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.312 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.313 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-03T19:13:13.312550) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.313 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.314 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.315 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7eff8d8a8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.315 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.315 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.315 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8170>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.316 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.316 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.317 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.317 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-03T19:13:13.315989) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.318 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
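
Both network.outgoing.bytes.delta samples above are 0 because a *.delta meter reports the change in a cumulative counter since the previous cycle, not the counter itself; this is consistent with network.outgoing.bytes sitting at 2250 for both instances. A hypothetical helper showing the arithmetic:

    def delta(previous, current):
        # on a counter reset (e.g. instance reboot) fall back to the raw value
        return current - previous if current >= previous else current

    assert delta(2250, 2250) == 0  # matches the volumes logged above
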
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.318 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7eff8d8a81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.319 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
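
"Skip pollster ..., no new resources found this cycle" means discovery handed this rate pollster nothing it had not already processed, so the manager short-circuits before polling. A sketch of the per-cycle discovery cache implied by the log (hypothetical, assuming results are keyed by discovery method):

    discovery_cache = {}

    def discover(method, run_discovery):
        # at most one discovery run per method per polling cycle
        if method not in discovery_cache:
            discovery_cache[method] = run_discovery()
        return discovery_cache[method]

    resources = discover("local_instances", lambda: [])
    if not resources:
        print("Skip pollster, no new resources found this cycle")
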
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.319 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7eff8d7ff9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.319 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.319 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ffa10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.319 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ffa10>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.320 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.320 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-03T19:13:13.320118) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.320 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/network.incoming.bytes volume: 2150 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.321 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.322 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.322 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7eff8d7fe840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.322 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.322 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8daba2d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.323 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8daba2d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.323 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.323 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-03T19:13:13.323214) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.356 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.357 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.388 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.389 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.390 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
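
disk.device.capacity emits one sample per block device, which is why each instance logs two volumes: 1073741824 bytes is exactly the m1.nano flavor's 1 GiB root disk, and 509952 bytes is a second, much smaller device (plausibly a config drive; the log lines do not name the device). The arithmetic checks out:

    assert 1 * 1024**3 == 1073741824  # flavor disk: 1 GiB root device
    print(509952 / 1024, "KiB")       # second device: 498.0 KiB
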
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.390 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7eff8d8a82c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.391 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.391 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a82f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.391 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a82f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.391 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.391 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.392 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.393 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.393 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7eff8d7ff9b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.393 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.393 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff90799b20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.393 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-03T19:13:13.391545) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.394 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff90799b20>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.394 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.395 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-03T19:13:13.394201) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.433 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/memory.usage volume: 42.515625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.480 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/memory.usage volume: 42.45703125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.481 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
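
memory.usage is reported in MiB, so 42.515625 against the flavor's 128 MiB allocation works out to roughly one third of guest memory in use:

    used_mib, ram_mib = 42.515625, 128
    print(f"{used_mib / ram_mib:.1%}")  # -> 33.2%
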
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.482 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7eff8d8a8350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.482 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.482 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.482 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8380>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.482 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.483 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.483 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.484 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.484 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7eff8f682330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.485 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.485 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8f46ebd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.485 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8f46ebd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.486 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-03T19:13:13.482756) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.486 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.486 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.487 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-03T19:13:13.486119) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.487 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.487 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.488 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.489 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.489 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7eff8d7ff4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.489 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.490 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.490 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff470>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.490 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.490 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-03T19:13:13.490293) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.551 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.read.bytes volume: 30382592 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.552 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.616 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.read.bytes volume: 31209984 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.616 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.617 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.618 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7eff8d930c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.618 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.618 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7eff8d7ff4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.618 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.619 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.619 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff500>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.619 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.619 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.read.latency volume: 1826201908 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.620 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.read.latency volume: 148336564 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.620 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-03T19:13:13.619256) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.621 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.read.latency volume: 2412766472 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.621 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.read.latency volume: 170015198 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.622 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
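
disk.device.read.latency is a cumulative counter and, assuming the meter's documented unit of nanoseconds, the values above translate to total time spent servicing reads per device:

    for ns in (1826201908, 2412766472):
        print(ns / 1e9, "s")  # ~1.83 s and ~2.41 s of cumulative read time
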
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.622 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7eff8d7ff530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.622 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.623 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.623 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff560>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.623 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.623 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.read.requests volume: 1093 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.624 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.624 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.read.requests volume: 1142 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.625 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.626 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.626 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-03T19:13:13.623346) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.626 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7eff8d7ff590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.627 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.627 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.627 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.627 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.628 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.628 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.629 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.629 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.630 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-03T19:13:13.627678) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.630 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.631 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7eff8d7ff5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.631 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.631 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.631 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.632 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.632 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.write.bytes volume: 73162752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.632 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.633 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.write.bytes volume: 73162752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.633 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.634 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-03T19:13:13.631934) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.634 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.635 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7eff8d8a8620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.635 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.635 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d8a8650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.635 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d8a8650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.635 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.636 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.636 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.637 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
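The power.state volume of 1 reported for both instances appears to follow libvirt's virDomainState enum, where 1 means the domain is running; treating the pairing with ceilometer's meter as an assumption based on these logs, the enum values themselves are:

# libvirt virDomainState values; power.state volume 1 above = running.
# The mapping onto ceilometer's power.state meter is an assumption here.
LIBVIRT_DOMAIN_STATE = {
    0: "nostate",
    1: "running",
    2: "blocked",
    3: "paused",
    4: "shutdown",      # in the process of shutting down
    5: "shutoff",
    6: "crashed",
    7: "pmsuspended",
}
print(LIBVIRT_DOMAIN_STATE[1])  # "running"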
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.637 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7eff8d7ff650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.638 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.638 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.638 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.638 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-03T19:13:13.635930) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.639 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.639 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.write.latency volume: 9240506883 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.640 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.640 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.write.latency volume: 7835856569 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.641 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.642 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.642 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7eff8d7ff6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.642 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.643 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.643 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-03T19:13:13.639067) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.643 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.643 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.643 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.write.requests volume: 340 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.644 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.644 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.write.requests volume: 302 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.645 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-03T19:13:13.643557) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.645 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.646 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.646 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7eff8d7ffa40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.646 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.647 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ffef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.647 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ffef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.647 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.647 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.648 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-03T19:13:13.647300) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.648 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.649 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.649 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7eff8d7ff710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.649 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.649 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.649 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.650 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.650 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-03T19:13:13.650080) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.651 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.651 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7eff8d7fff20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.651 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.652 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7fff50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.652 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7fff50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.652 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.652 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/network.incoming.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.653 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-03T19:13:13.652340) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.653 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.654 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.654 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7eff8d7ff770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.654 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.654 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7ff7a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.654 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7ff7a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.655 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.655 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.656 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7eff8d7fff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.656 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.656 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8d7fffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.656 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8d7fffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.657 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-03T19:13:13.655079) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.657 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-03T19:13:13.657370) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.657 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.657 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.658 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.659 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.659 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7eff8d7fdac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.659 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.659 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7eff8ef7c7d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.659 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7eff8ef7c7d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.660 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.660 14 DEBUG ceilometer.compute.pollsters [-] a4fc45c7-44e4-4b50-a3e0-98de13268f88/cpu volume: 337900000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.660 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-03T19:13:13.660054) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.661 14 DEBUG ceilometer.compute.pollsters [-] a364994c-8442-4a4c-bd6b-f3a2d31e4483/cpu volume: 336550000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.661 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
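The cpu meter is cumulative guest CPU time in nanoseconds (337,900,000,000 ns is roughly 338 s of CPU time since boot), so a utilization figure comes from differencing two successive polls. A small sketch of that arithmetic; the follow-up reading, interval, and vCPU count below are invented for illustration:

def cpu_util_percent(prev_ns, curr_ns, elapsed_s, vcpus):
    # fraction of available CPU time consumed between two cumulative readings
    return (curr_ns - prev_ns) / (elapsed_s * 1e9 * vcpus) * 100.0

# If a hypothetical poll 10 s later read 338_900_000_000 ns on a 1-vCPU guest:
print(cpu_util_percent(337_900_000_000, 338_900_000_000, 10.0, 1))  # 10.0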
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.662 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.662 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.662 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.662 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.663 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.663 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.663 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.663 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.663 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.663 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.663 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.664 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.664 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.664 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.664 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.664 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.664 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.664 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.665 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.665 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.665 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.665 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.665 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.665 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.665 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:13:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:13:13.666 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:13:13 compute-0 nova_compute[348325]: 2025-12-03 19:13:13.906 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:13:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:13:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:13:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:13:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:13:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:13:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:13:14 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_19:13:14
Dec  3 19:13:14 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 19:13:14 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 19:13:14 compute-0 ceph-mgr[193091]: [balancer INFO root] pools ['default.rgw.meta', 'backups', 'default.rgw.log', '.rgw.root', 'volumes', 'cephfs.cephfs.meta', 'vms', '.mgr', 'default.rgw.control', 'images', 'cephfs.cephfs.data']
Dec  3 19:13:14 compute-0 ceph-mgr[193091]: [balancer INFO root] prepared 0/10 changes
Dec  3 19:13:14 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2230: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:13:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 19:13:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 19:13:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 19:13:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 19:13:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 19:13:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 19:13:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 19:13:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 19:13:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 19:13:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 19:13:14 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:13:16 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2231: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:13:16 compute-0 podman[468299]: 2025-12-03 19:13:16.927006744 +0000 UTC m=+0.078535789 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec  3 19:13:16 compute-0 podman[468297]: 2025-12-03 19:13:16.95968313 +0000 UTC m=+0.105981001 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, architecture=x86_64, distribution-scope=public, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, container_name=kepler, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=)
Dec  3 19:13:16 compute-0 podman[468298]: 2025-12-03 19:13:16.964098815 +0000 UTC m=+0.115437805 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  3 19:13:17 compute-0 nova_compute[348325]: 2025-12-03 19:13:17.287 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:13:18 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2232: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:13:18 compute-0 nova_compute[348325]: 2025-12-03 19:13:18.908 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:13:19 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:13:20 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2233: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:13:22 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2234: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:13:22 compute-0 nova_compute[348325]: 2025-12-03 19:13:22.292 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:13:22 compute-0 podman[468351]: 2025-12-03 19:13:22.97211778 +0000 UTC m=+0.133487365 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 19:13:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:13:23.371 286999 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 19:13:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:13:23.371 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 19:13:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:13:23.372 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
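The acquire/release pair above is oslo.concurrency's named-lock machinery; the DEBUG lines are emitted by its inner wrapper on entry and exit. A minimal usage sketch with the same lock name (requires the oslo.concurrency package; the decorated function body is a placeholder):

from oslo_concurrency import lockutils

@lockutils.synchronized("_check_child_processes")
def check_child_processes():
    pass  # critical section; entry/exit produce the acquire/release DEBUG lines

check_child_processes()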
Dec  3 19:13:23 compute-0 nova_compute[348325]: 2025-12-03 19:13:23.911 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:13:24 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2235: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:13:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 19:13:24 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:13:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:13:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 19:13:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:13:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015216620474376506 of space, bias 1.0, pg target 0.4564986142312952 quantized to 32 (current 32)
Dec  3 19:13:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:13:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:13:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:13:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:13:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:13:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Dec  3 19:13:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:13:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 19:13:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:13:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:13:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:13:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 19:13:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:13:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 19:13:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:13:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:13:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:13:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
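Each raw pg target above is usage_ratio times bias times a capacity-derived PG budget; the budget works out to exactly 300 on every line, consistent with 3 OSDs at the default mon_target_pg_per_osd of 100 (an inference from the arithmetic, not something the log states). The result is then quantized to a power of two. A sketch reproducing two of the lines:

# pg_target = usage_ratio * bias * pg_budget; pg_budget=300 is inferred
# from the log output (likely 3 OSDs x mon_target_pg_per_osd=100).
def raw_pg_target(usage_ratio, bias, pg_budget=300):
    return usage_ratio * bias * pg_budget

print(raw_pg_target(0.0015216620474376506, 1.0))  # 0.4564986142312952, pool 'vms'
print(raw_pg_target(5.087256625643029e-07, 4.0))  # 0.0006104707950771635, pool 'cephfs.cephfs.meta'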
Dec  3 19:13:26 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2236: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:13:27 compute-0 nova_compute[348325]: 2025-12-03 19:13:27.297 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:13:28 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2237: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:13:28 compute-0 nova_compute[348325]: 2025-12-03 19:13:28.756 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:13:28 compute-0 nova_compute[348325]: 2025-12-03 19:13:28.915 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:13:29 compute-0 nova_compute[348325]: 2025-12-03 19:13:29.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:13:29 compute-0 podman[158200]: time="2025-12-03T19:13:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 19:13:29 compute-0 podman[158200]: @ - - [03/Dec/2025:19:13:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 19:13:29 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:13:29 compute-0 podman[158200]: @ - - [03/Dec/2025:19:13:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8657 "" "Go-http-client/1.1"
Dec  3 19:13:29 compute-0 podman[468375]: 2025-12-03 19:13:29.986755848 +0000 UTC m=+0.140997884 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Dec  3 19:13:30 compute-0 podman[468374]: 2025-12-03 19:13:30.0402701 +0000 UTC m=+0.192835645 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller)
Dec  3 19:13:30 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2238: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:13:31 compute-0 openstack_network_exporter[365222]: ERROR   19:13:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 19:13:31 compute-0 openstack_network_exporter[365222]: ERROR   19:13:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:13:31 compute-0 openstack_network_exporter[365222]: ERROR   19:13:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:13:31 compute-0 openstack_network_exporter[365222]: ERROR   19:13:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 19:13:31 compute-0 openstack_network_exporter[365222]: ERROR   19:13:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
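The exporter errors above share one cause: appctl-style calls need a <daemon>.<pid>.ctl unix control socket in the daemon's run directory, and none exists here (ovn-northd does not run on a compute node, and no userspace datapath is configured, hence the dpif-netdev failures). A sketch of the same socket discovery, assuming the conventional run directories and *.ctl naming used by OVS/OVN packaging:

    import glob
    import os
    from typing import Optional

    # Conventional locations; actual paths depend on packaging (assumption).
    RUNDIRS = {"ovsdb-server": "/var/run/openvswitch", "ovn-northd": "/var/run/ovn"}

    def find_control_socket(daemon: str) -> Optional[str]:
        """Return the first <daemon>.<pid>.ctl socket, as an appctl caller would."""
        matches = glob.glob(os.path.join(RUNDIRS[daemon], f"{daemon}.*.ctl"))
        return matches[0] if matches else None

    for daemon in RUNDIRS:
        sock = find_control_socket(daemon)
        print(daemon, "->", sock or "no control socket files found")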
Dec  3 19:13:32 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2239: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:13:32 compute-0 nova_compute[348325]: 2025-12-03 19:13:32.303 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:13:33 compute-0 nova_compute[348325]: 2025-12-03 19:13:33.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:13:33 compute-0 nova_compute[348325]: 2025-12-03 19:13:33.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:13:33 compute-0 nova_compute[348325]: 2025-12-03 19:13:33.488 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:13:33 compute-0 nova_compute[348325]: 2025-12-03 19:13:33.918 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:13:34 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2240: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:13:34 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:13:36 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2241: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:13:36 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 19:13:36 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 19:13:36 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 19:13:36 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 19:13:36 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 19:13:36 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:13:36 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 07bd74bf-2fb3-4cf6-94e8-83b539660e5b does not exist
Dec  3 19:13:36 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev fb9f2bff-45a4-469e-9cd2-f373014fbd1a does not exist
Dec  3 19:13:36 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev fcdbdcc5-ea3e-4e18-a9cd-46259fa29683 does not exist
Dec  3 19:13:36 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 19:13:36 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 19:13:36 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 19:13:36 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 19:13:36 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 19:13:36 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 19:13:36 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 19:13:36 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:13:36 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
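The handle_command/audit pairs above show the mgr driving the mon over the mon_command interface; each JSON payload names its command in a 'prefix' key. The same interface is reachable from the librados Python binding; a sketch, assuming python3-rados is installed and the usual cephadm conf/keyring paths:

    import json
    import rados  # librados Python binding (python3-rados)

    # Paths are illustrative; they match the usual cephadm layout (assumption).
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                          conf={"keyring": "/etc/ceph/ceph.client.admin.keyring"})
    cluster.connect()

    # Same command the mgr dispatched in the audit log above.
    cmd = json.dumps({"prefix": "config generate-minimal-conf"})
    ret, outbuf, errs = cluster.mon_command(cmd, b"")
    print(ret, outbuf.decode(), errs)
    cluster.shutdown()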
Dec  3 19:13:37 compute-0 nova_compute[348325]: 2025-12-03 19:13:37.308 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:13:37 compute-0 nova_compute[348325]: 2025-12-03 19:13:37.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:13:37 compute-0 podman[468684]: 2025-12-03 19:13:37.698060899 +0000 UTC m=+0.058687847 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:13:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 19:13:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1833138376' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 19:13:37 compute-0 podman[468684]: 2025-12-03 19:13:37.856412004 +0000 UTC m=+0.217038892 container create 3a779133978b3573f8d96917a01312c2345f99c10e1b344fa2237f3958766c25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec  3 19:13:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 19:13:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1833138376' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 19:13:38 compute-0 systemd[1]: Started libpod-conmon-3a779133978b3573f8d96917a01312c2345f99c10e1b344fa2237f3958766c25.scope.
Dec  3 19:13:38 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:13:38 compute-0 podman[468684]: 2025-12-03 19:13:38.190021616 +0000 UTC m=+0.550648574 container init 3a779133978b3573f8d96917a01312c2345f99c10e1b344fa2237f3958766c25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_chatterjee, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 19:13:38 compute-0 podman[468684]: 2025-12-03 19:13:38.209074629 +0000 UTC m=+0.569701527 container start 3a779133978b3573f8d96917a01312c2345f99c10e1b344fa2237f3958766c25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_chatterjee, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  3 19:13:38 compute-0 determined_chatterjee[468699]: 167 167
Dec  3 19:13:38 compute-0 systemd[1]: libpod-3a779133978b3573f8d96917a01312c2345f99c10e1b344fa2237f3958766c25.scope: Deactivated successfully.
Dec  3 19:13:38 compute-0 podman[468684]: 2025-12-03 19:13:38.245217698 +0000 UTC m=+0.605844636 container attach 3a779133978b3573f8d96917a01312c2345f99c10e1b344fa2237f3958766c25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_chatterjee, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 19:13:38 compute-0 podman[468684]: 2025-12-03 19:13:38.245923075 +0000 UTC m=+0.606549963 container died 3a779133978b3573f8d96917a01312c2345f99c10e1b344fa2237f3958766c25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec  3 19:13:38 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2242: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:13:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc519ce02d86afa3d7e2a755e5f436e02266b721c06167313bca8b443fa31044-merged.mount: Deactivated successfully.
Dec  3 19:13:38 compute-0 nova_compute[348325]: 2025-12-03 19:13:38.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:13:38 compute-0 nova_compute[348325]: 2025-12-03 19:13:38.489 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 19:13:38 compute-0 nova_compute[348325]: 2025-12-03 19:13:38.921 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:13:38 compute-0 podman[468684]: 2025-12-03 19:13:38.933386031 +0000 UTC m=+1.294012929 container remove 3a779133978b3573f8d96917a01312c2345f99c10e1b344fa2237f3958766c25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_chatterjee, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:13:38 compute-0 systemd[1]: libpod-conmon-3a779133978b3573f8d96917a01312c2345f99c10e1b344fa2237f3958766c25.scope: Deactivated successfully.
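The determined_chatterjee run above is a disposable cephadm probe: the container is created, prints '167 167' (the ceph UID/GID baked into the image), and is torn down within a second. A sketch of the same kind of probe, assuming the image digest from the log and that /var/lib/ceph inside the image is owned by the ceph user:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # Run a throwaway container just to read the ceph user's uid/gid,
    # mirroring the create/start/died/remove sequence seen in the log.
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        check=True, capture_output=True, text=True,
    ).stdout
    print(out.strip())  # expected: "167 167" (assumption: ceph uid/gid)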
Dec  3 19:13:39 compute-0 podman[468726]: 2025-12-03 19:13:39.2378422 +0000 UTC m=+0.062859775 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:13:39 compute-0 podman[468726]: 2025-12-03 19:13:39.351760659 +0000 UTC m=+0.176778184 container create 8959c74f99280b679f48baba21b232c04f774ca3e4364514e0b7c34db0122cfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec  3 19:13:39 compute-0 systemd[1]: Started libpod-conmon-8959c74f99280b679f48baba21b232c04f774ca3e4364514e0b7c34db0122cfc.scope.
Dec  3 19:13:39 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:13:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08e279e679e3929be9f4a10725a40e46df508586a9d647278fbb60eeb041cce5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 19:13:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08e279e679e3929be9f4a10725a40e46df508586a9d647278fbb60eeb041cce5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 19:13:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08e279e679e3929be9f4a10725a40e46df508586a9d647278fbb60eeb041cce5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 19:13:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08e279e679e3929be9f4a10725a40e46df508586a9d647278fbb60eeb041cce5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 19:13:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08e279e679e3929be9f4a10725a40e46df508586a9d647278fbb60eeb041cce5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 19:13:39 compute-0 podman[468726]: 2025-12-03 19:13:39.747144949 +0000 UTC m=+0.572162534 container init 8959c74f99280b679f48baba21b232c04f774ca3e4364514e0b7c34db0122cfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 19:13:39 compute-0 podman[468726]: 2025-12-03 19:13:39.767582006 +0000 UTC m=+0.592599531 container start 8959c74f99280b679f48baba21b232c04f774ca3e4364514e0b7c34db0122cfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_cohen, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  3 19:13:39 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:13:39 compute-0 podman[468726]: 2025-12-03 19:13:39.866383914 +0000 UTC m=+0.691401439 container attach 8959c74f99280b679f48baba21b232c04f774ca3e4364514e0b7c34db0122cfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_cohen, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  3 19:13:39 compute-0 nova_compute[348325]: 2025-12-03 19:13:39.977 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "refresh_cache-a364994c-8442-4a4c-bd6b-f3a2d31e4483" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 19:13:39 compute-0 nova_compute[348325]: 2025-12-03 19:13:39.978 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquired lock "refresh_cache-a364994c-8442-4a4c-bd6b-f3a2d31e4483" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 19:13:39 compute-0 nova_compute[348325]: 2025-12-03 19:13:39.978 348329 DEBUG nova.network.neutron [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  3 19:13:40 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2243: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:13:40 compute-0 podman[468762]: 2025-12-03 19:13:40.993133696 +0000 UTC m=+0.135833291 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  3 19:13:41 compute-0 podman[468761]: 2025-12-03 19:13:41.0080353 +0000 UTC m=+0.152400844 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd)
Dec  3 19:13:41 compute-0 podman[468763]: 2025-12-03 19:13:41.012438785 +0000 UTC m=+0.147639762 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, architecture=x86_64, io.buildah.version=1.33.7, config_id=edpm, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, vcs-type=git, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers)
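All three containers above follow the same EDPM healthcheck convention: a host directory is mounted read-only at /openstack and the test command is /openstack/healthcheck, optionally with an argument naming the service. The check can also be triggered on demand; a sketch using podman's healthcheck subcommand, which exits non-zero when the check fails:

    import subprocess

    def run_healthcheck(name: str) -> bool:
        """Trigger a container's configured healthcheck; True if it passed."""
        # `podman healthcheck run` executes the configured test command.
        proc = subprocess.run(["podman", "healthcheck", "run", name])
        return proc.returncode == 0

    for name in ("node_exporter", "multipathd", "openstack_network_exporter"):
        print(name, "healthy" if run_healthcheck(name) else "unhealthy")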
Dec  3 19:13:41 compute-0 hardcore_cohen[468742]: --> passed data devices: 0 physical, 3 LVM
Dec  3 19:13:41 compute-0 hardcore_cohen[468742]: --> relative data size: 1.0
Dec  3 19:13:41 compute-0 hardcore_cohen[468742]: --> All data devices are unavailable
Dec  3 19:13:41 compute-0 systemd[1]: libpod-8959c74f99280b679f48baba21b232c04f774ca3e4364514e0b7c34db0122cfc.scope: Deactivated successfully.
Dec  3 19:13:41 compute-0 podman[468726]: 2025-12-03 19:13:41.173070314 +0000 UTC m=+1.998087810 container died 8959c74f99280b679f48baba21b232c04f774ca3e4364514e0b7c34db0122cfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_cohen, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 19:13:41 compute-0 systemd[1]: libpod-8959c74f99280b679f48baba21b232c04f774ca3e4364514e0b7c34db0122cfc.scope: Consumed 1.329s CPU time.
Dec  3 19:13:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-08e279e679e3929be9f4a10725a40e46df508586a9d647278fbb60eeb041cce5-merged.mount: Deactivated successfully.
Dec  3 19:13:41 compute-0 nova_compute[348325]: 2025-12-03 19:13:41.911 348329 DEBUG nova.network.neutron [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Updating instance_info_cache with network_info: [{"id": "b761f609-2787-4aa2-9b1c-cc5b41d2373d", "address": "fa:16:3e:2c:da:52", "network": {"id": "04e258c0-609e-4010-a306-af20506c3a9d", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.71", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d29cef7b24ee4d30b2b3f5027ec6aafb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb761f609-27", "ovs_interfaceid": "b761f609-2787-4aa2-9b1c-cc5b41d2373d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 19:13:41 compute-0 nova_compute[348325]: 2025-12-03 19:13:41.972 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Releasing lock "refresh_cache-a364994c-8442-4a4c-bd6b-f3a2d31e4483" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 19:13:41 compute-0 nova_compute[348325]: 2025-12-03 19:13:41.972 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
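The _heal_instance_info_cache pass above refreshes one instance's network_info entry per run, and the logged structure is plain JSON. A sketch pulling the usual fields of interest (device name, MAC, fixed IPs) from a copy of the entry above, trimmed to the fields the sketch reads:

    # network_info entry as logged by _heal_instance_info_cache above
    # (trimmed to the fields this sketch reads).
    vif = {
        "id": "b761f609-2787-4aa2-9b1c-cc5b41d2373d",
        "address": "fa:16:3e:2c:da:52",
        "network": {"subnets": [{"cidr": "10.100.0.0/16",
                                 "ips": [{"address": "10.100.3.71"}]}]},
        "devname": "tapb761f609-27",
    }

    ips = [ip["address"]
           for subnet in vif["network"]["subnets"]
           for ip in subnet["ips"]]
    print(f'{vif["devname"]} ({vif["address"]}): {", ".join(ips)}')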
Dec  3 19:13:42 compute-0 podman[468726]: 2025-12-03 19:13:42.003865438 +0000 UTC m=+2.828882953 container remove 8959c74f99280b679f48baba21b232c04f774ca3e4364514e0b7c34db0122cfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:13:42 compute-0 systemd[1]: libpod-conmon-8959c74f99280b679f48baba21b232c04f774ca3e4364514e0b7c34db0122cfc.scope: Deactivated successfully.
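The hardcore_cohen run above is ceph-volume evaluating the drive group: '0 physical, 3 LVM' data devices were passed and all are reported unavailable, i.e. the three LVs already back deployed OSDs, so there is nothing new to create. A sketch of the equivalent availability check, assuming ceph-volume is invokable where this runs (in practice inside the ceph container) and its documented inventory JSON shape:

    import json
    import subprocess

    # `ceph-volume inventory --format json` lists devices with an
    # "available" flag and the reasons a device was rejected.
    raw = subprocess.run(
        ["ceph-volume", "inventory", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout

    for dev in json.loads(raw):
        status = ("available" if dev.get("available")
                  else "unavailable: " + ", ".join(dev.get("rejected_reasons", [])))
        print(dev.get("path"), "->", status)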
Dec  3 19:13:42 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2244: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:13:42 compute-0 nova_compute[348325]: 2025-12-03 19:13:42.314 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:13:43 compute-0 podman[468978]: 2025-12-03 19:13:43.313972869 +0000 UTC m=+0.055717286 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:13:43 compute-0 podman[468978]: 2025-12-03 19:13:43.441784878 +0000 UTC m=+0.183529235 container create 7d611ae853e3e851e04c22bd8642d89f7941675019be526d95c80ea083e3313d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec  3 19:13:43 compute-0 nova_compute[348325]: 2025-12-03 19:13:43.488 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:13:43 compute-0 nova_compute[348325]: 2025-12-03 19:13:43.489 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 19:13:43 compute-0 systemd[1]: Started libpod-conmon-7d611ae853e3e851e04c22bd8642d89f7941675019be526d95c80ea083e3313d.scope.
Dec  3 19:13:43 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:13:43 compute-0 podman[468978]: 2025-12-03 19:13:43.794779401 +0000 UTC m=+0.536523808 container init 7d611ae853e3e851e04c22bd8642d89f7941675019be526d95c80ea083e3313d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_swanson, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 19:13:43 compute-0 podman[468978]: 2025-12-03 19:13:43.811296243 +0000 UTC m=+0.553040600 container start 7d611ae853e3e851e04c22bd8642d89f7941675019be526d95c80ea083e3313d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_swanson, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec  3 19:13:43 compute-0 exciting_swanson[468994]: 167 167
Dec  3 19:13:43 compute-0 systemd[1]: libpod-7d611ae853e3e851e04c22bd8642d89f7941675019be526d95c80ea083e3313d.scope: Deactivated successfully.
Dec  3 19:13:43 compute-0 nova_compute[348325]: 2025-12-03 19:13:43.925 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:13:43 compute-0 podman[468978]: 2025-12-03 19:13:43.946747244 +0000 UTC m=+0.688491651 container attach 7d611ae853e3e851e04c22bd8642d89f7941675019be526d95c80ea083e3313d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_swanson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec  3 19:13:43 compute-0 podman[468978]: 2025-12-03 19:13:43.947960662 +0000 UTC m=+0.689705019 container died 7d611ae853e3e851e04c22bd8642d89f7941675019be526d95c80ea083e3313d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_swanson, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:13:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:13:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:13:44 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:13:44 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:13:44 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:13:44 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:13:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad055c1bf511d6173d347444575f8c5e1124a943753937fbbdd9090b2bae6c26-merged.mount: Deactivated successfully.
Dec  3 19:13:44 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2245: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:13:44 compute-0 podman[468978]: 2025-12-03 19:13:44.462782693 +0000 UTC m=+1.204527040 container remove 7d611ae853e3e851e04c22bd8642d89f7941675019be526d95c80ea083e3313d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_swanson, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  3 19:13:44 compute-0 systemd[1]: libpod-conmon-7d611ae853e3e851e04c22bd8642d89f7941675019be526d95c80ea083e3313d.scope: Deactivated successfully.
Dec  3 19:13:44 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:13:44 compute-0 podman[469019]: 2025-12-03 19:13:44.774662199 +0000 UTC m=+0.047420368 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:13:44 compute-0 podman[469019]: 2025-12-03 19:13:44.848567607 +0000 UTC m=+0.121325806 container create b12e187a397b88b2d07e050f4ce2b105a078a252ce88e99551b88cb4d8e6fb07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 19:13:45 compute-0 systemd[1]: Started libpod-conmon-b12e187a397b88b2d07e050f4ce2b105a078a252ce88e99551b88cb4d8e6fb07.scope.
Dec  3 19:13:45 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:13:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f4b57212c5a7af985ebeeeac98977ccce09296beb8f0346ad09f054aa5007ae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 19:13:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f4b57212c5a7af985ebeeeac98977ccce09296beb8f0346ad09f054aa5007ae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 19:13:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f4b57212c5a7af985ebeeeac98977ccce09296beb8f0346ad09f054aa5007ae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 19:13:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f4b57212c5a7af985ebeeeac98977ccce09296beb8f0346ad09f054aa5007ae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 19:13:45 compute-0 podman[469019]: 2025-12-03 19:13:45.232684589 +0000 UTC m=+0.505442838 container init b12e187a397b88b2d07e050f4ce2b105a078a252ce88e99551b88cb4d8e6fb07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hamilton, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 19:13:45 compute-0 podman[469019]: 2025-12-03 19:13:45.250781949 +0000 UTC m=+0.523540128 container start b12e187a397b88b2d07e050f4ce2b105a078a252ce88e99551b88cb4d8e6fb07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:13:45 compute-0 podman[469019]: 2025-12-03 19:13:45.35255707 +0000 UTC m=+0.625315309 container attach b12e187a397b88b2d07e050f4ce2b105a078a252ce88e99551b88cb4d8e6fb07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hamilton, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]: {
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:    "0": [
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:        {
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:            "devices": [
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:                "/dev/loop3"
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:            ],
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:            "lv_name": "ceph_lv0",
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:            "lv_size": "21470642176",
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=973fbbc8-5aff-4a53-bee8-42e5a6788dd6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:            "lv_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:            "name": "ceph_lv0",
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:            "tags": {
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:                "ceph.block_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:                "ceph.cephx_lockbox_secret": "",
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:                "ceph.cluster_name": "ceph",
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:                "ceph.crush_device_class": "",
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:                "ceph.encrypted": "0",
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:                "ceph.osd_fsid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:                "ceph.osd_id": "0",
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:                "ceph.type": "block",
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:                "ceph.vdo": "0"
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:            },
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:            "type": "block",
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:            "vg_name": "ceph_vg0"
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:        }
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:    ],
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:    "1": [
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:        {
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:            "devices": [
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:                "/dev/loop4"
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:            ],
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:            "lv_name": "ceph_lv1",
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:            "lv_size": "21470642176",
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1e2b0083-5293-47cb-a3d1-bc27cedc4ede,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:            "lv_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:            "name": "ceph_lv1",
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:            "tags": {
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:                "ceph.block_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:                "ceph.cephx_lockbox_secret": "",
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:                "ceph.cluster_name": "ceph",
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:                "ceph.crush_device_class": "",
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:                "ceph.encrypted": "0",
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:                "ceph.osd_fsid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:                "ceph.osd_id": "1",
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:                "ceph.type": "block",
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:                "ceph.vdo": "0"
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:            },
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:            "type": "block",
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:            "vg_name": "ceph_vg1"
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:        }
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:    ],
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:    "2": [
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:        {
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:            "devices": [
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:                "/dev/loop5"
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:            ],
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:            "lv_name": "ceph_lv2",
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:            "lv_size": "21470642176",
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2abec9de-afba-437e-9a17-384a1dd8cd50,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:            "lv_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:            "name": "ceph_lv2",
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:            "tags": {
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:                "ceph.block_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:                "ceph.cephx_lockbox_secret": "",
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:                "ceph.cluster_name": "ceph",
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:                "ceph.crush_device_class": "",
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:                "ceph.encrypted": "0",
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:                "ceph.osd_fsid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:                "ceph.osd_id": "2",
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:                "ceph.type": "block",
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:                "ceph.vdo": "0"
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:            },
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:            "type": "block",
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:            "vg_name": "ceph_vg2"
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:        }
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]:    ]
Dec  3 19:13:46 compute-0 wizardly_hamilton[469036]: }
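The wizardly_hamilton output above has the shape of `ceph-volume lvm list --format json` (an assumption based on the structure): a map from OSD id to the logical volumes backing it. A sketch summarizing such a dump into one line per OSD, assuming the JSON block has been saved to a local file (hypothetical path):

    import json

    # The JSON block logged above, saved locally (hypothetical path).
    with open("lvm_list.json") as fh:
        osds = json.load(fh)

    for osd_id, lvs in sorted(osds.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            print(f'osd.{osd_id}: {lv["lv_path"]} '
                  f'({lv["type"]}, on {",".join(lv["devices"])}, '
                  f'fsid {lv["tags"]["ceph.osd_fsid"]})')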
Dec  3 19:13:46 compute-0 systemd[1]: libpod-b12e187a397b88b2d07e050f4ce2b105a078a252ce88e99551b88cb4d8e6fb07.scope: Deactivated successfully.
Dec  3 19:13:46 compute-0 podman[469019]: 2025-12-03 19:13:46.13864801 +0000 UTC m=+1.411406189 container died b12e187a397b88b2d07e050f4ce2b105a078a252ce88e99551b88cb4d8e6fb07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec  3 19:13:46 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2246: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:13:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f4b57212c5a7af985ebeeeac98977ccce09296beb8f0346ad09f054aa5007ae-merged.mount: Deactivated successfully.
Dec  3 19:13:46 compute-0 podman[469019]: 2025-12-03 19:13:46.608193105 +0000 UTC m=+1.880951284 container remove b12e187a397b88b2d07e050f4ce2b105a078a252ce88e99551b88cb4d8e6fb07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Dec  3 19:13:46 compute-0 systemd[1]: libpod-conmon-b12e187a397b88b2d07e050f4ce2b105a078a252ce88e99551b88cb4d8e6fb07.scope: Deactivated successfully.
Dec  3 19:13:47 compute-0 podman[469109]: 2025-12-03 19:13:47.179295613 +0000 UTC m=+0.118269932 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  3 19:13:47 compute-0 podman[469108]: 2025-12-03 19:13:47.190653713 +0000 UTC m=+0.140604024 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  3 19:13:47 compute-0 podman[469107]: 2025-12-03 19:13:47.202786052 +0000 UTC m=+0.153977582 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release-0.7.12=, io.buildah.version=1.29.0, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., architecture=x86_64, io.openshift.tags=base rhel9, release=1214.1726694543, vcs-type=git, build-date=2024-09-18T21:23:30)
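The three health_status=healthy events above are podman's periodic healthchecks firing: each container's configured healthcheck test (for example /openstack/healthcheck) is executed inside the container and the result is logged along with the failing streak. To spot-check one container by hand, a sketch using podman healthcheck run, which exits 0 when the configured test passes (container name taken from the log above):

    import subprocess

    # Trigger one manual run of the container's configured healthcheck.
    # `podman healthcheck run NAME` exits 0 if the test passes.
    name = "ovn_metadata_agent"  # container name from the log above
    rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
    status = "healthy" if rc == 0 else f"unhealthy (rc={rc})"
    print(f"{name}: {status}")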
Dec  3 19:13:47 compute-0 nova_compute[348325]: 2025-12-03 19:13:47.317 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:13:47 compute-0 podman[469251]: 2025-12-03 19:13:47.894922228 +0000 UTC m=+0.069928313 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:13:48 compute-0 podman[469251]: 2025-12-03 19:13:48.035422529 +0000 UTC m=+0.210428554 container create 4c00b03080da6fb2f98b67eec23cc81ed1667f17b8f08e104dd11a653eb1a054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bouman, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 19:13:48 compute-0 systemd[1]: Started libpod-conmon-4c00b03080da6fb2f98b67eec23cc81ed1667f17b8f08e104dd11a653eb1a054.scope.
Dec  3 19:13:48 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:13:48 compute-0 podman[469251]: 2025-12-03 19:13:48.297938341 +0000 UTC m=+0.472944406 container init 4c00b03080da6fb2f98b67eec23cc81ed1667f17b8f08e104dd11a653eb1a054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bouman, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:13:48 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2247: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:13:48 compute-0 podman[469251]: 2025-12-03 19:13:48.315350735 +0000 UTC m=+0.490356750 container start 4c00b03080da6fb2f98b67eec23cc81ed1667f17b8f08e104dd11a653eb1a054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 19:13:48 compute-0 trusting_bouman[469267]: 167 167
Dec  3 19:13:48 compute-0 systemd[1]: libpod-4c00b03080da6fb2f98b67eec23cc81ed1667f17b8f08e104dd11a653eb1a054.scope: Deactivated successfully.
Dec  3 19:13:48 compute-0 podman[469251]: 2025-12-03 19:13:48.387112111 +0000 UTC m=+0.562118126 container attach 4c00b03080da6fb2f98b67eec23cc81ed1667f17b8f08e104dd11a653eb1a054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec  3 19:13:48 compute-0 podman[469251]: 2025-12-03 19:13:48.388434733 +0000 UTC m=+0.563440728 container died 4c00b03080da6fb2f98b67eec23cc81ed1667f17b8f08e104dd11a653eb1a054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bouman, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:13:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-871fe15f19577d1a8de37994f890d5c9ec2f8a09a9cb38691e29bd2f9d108804-merged.mount: Deactivated successfully.
Dec  3 19:13:48 compute-0 podman[469251]: 2025-12-03 19:13:48.788998277 +0000 UTC m=+0.964004302 container remove 4c00b03080da6fb2f98b67eec23cc81ed1667f17b8f08e104dd11a653eb1a054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bouman, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  3 19:13:48 compute-0 systemd[1]: libpod-conmon-4c00b03080da6fb2f98b67eec23cc81ed1667f17b8f08e104dd11a653eb1a054.scope: Deactivated successfully.
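trusting_bouman is the same pattern again: pull, create, start, one line of output, die, remove, all inside a second. The cephadm orchestrator launches these throwaway containers from the ceph image to probe the host; the single output line "167 167" matches the ceph user's uid and gid in the image. A hedged reproduction of that probe (the stat target /var/lib/ceph is an assumption about what cephadm inspects; the image digest is the one pulled above):

    import subprocess

    # Ask the image which uid/gid owns /var/lib/ceph (an assumption
    # about the probe; the digest is the one pulled in the log above).
    image = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    out = subprocess.check_output(
        ["podman", "run", "--rm", "--entrypoint", "stat", image,
         "-c", "%u %g", "/var/lib/ceph"],
        text=True)
    print(out.strip())  # expected: 167 167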
Dec  3 19:13:48 compute-0 nova_compute[348325]: 2025-12-03 19:13:48.927 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:13:49 compute-0 podman[469290]: 2025-12-03 19:13:49.210061289 +0000 UTC m=+0.147876688 container create a955964969100505ad48028aa07f14417eb21237b70873138d704a702dcf8ca9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_matsumoto, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  3 19:13:49 compute-0 podman[469290]: 2025-12-03 19:13:49.128745675 +0000 UTC m=+0.066561124 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:13:49 compute-0 systemd[1]: Started libpod-conmon-a955964969100505ad48028aa07f14417eb21237b70873138d704a702dcf8ca9.scope.
Dec  3 19:13:49 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:13:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01298a1e869cbd89d019412dd74efef8adfb25be4f82b9702cad206595d08e8a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 19:13:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01298a1e869cbd89d019412dd74efef8adfb25be4f82b9702cad206595d08e8a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 19:13:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01298a1e869cbd89d019412dd74efef8adfb25be4f82b9702cad206595d08e8a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 19:13:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01298a1e869cbd89d019412dd74efef8adfb25be4f82b9702cad206595d08e8a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
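The four kernel notices are benign: the container's overlay sits on an XFS filesystem without the bigtime feature, whose inode timestamps are 32-bit and top out at 0x7fffffff seconds after the epoch. A quick check of what that limit means:

    from datetime import datetime, timezone

    # 0x7fffffff is the largest 32-bit signed epoch second; XFS without
    # the bigtime feature cannot store inode timestamps past it.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00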
Dec  3 19:13:49 compute-0 podman[469290]: 2025-12-03 19:13:49.453633331 +0000 UTC m=+0.391448760 container init a955964969100505ad48028aa07f14417eb21237b70873138d704a702dcf8ca9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_matsumoto, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec  3 19:13:49 compute-0 podman[469290]: 2025-12-03 19:13:49.473199876 +0000 UTC m=+0.411015275 container start a955964969100505ad48028aa07f14417eb21237b70873138d704a702dcf8ca9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 19:13:49 compute-0 podman[469290]: 2025-12-03 19:13:49.523237905 +0000 UTC m=+0.461053324 container attach a955964969100505ad48028aa07f14417eb21237b70873138d704a702dcf8ca9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec  3 19:13:49 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:13:50 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2248: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:13:50 compute-0 gracious_matsumoto[469306]: {
Dec  3 19:13:50 compute-0 gracious_matsumoto[469306]:    "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
Dec  3 19:13:50 compute-0 gracious_matsumoto[469306]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:13:50 compute-0 gracious_matsumoto[469306]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 19:13:50 compute-0 gracious_matsumoto[469306]:        "osd_id": 1,
Dec  3 19:13:50 compute-0 gracious_matsumoto[469306]:        "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 19:13:50 compute-0 gracious_matsumoto[469306]:        "type": "bluestore"
Dec  3 19:13:50 compute-0 gracious_matsumoto[469306]:    },
Dec  3 19:13:50 compute-0 gracious_matsumoto[469306]:    "2abec9de-afba-437e-9a17-384a1dd8cd50": {
Dec  3 19:13:50 compute-0 gracious_matsumoto[469306]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:13:50 compute-0 gracious_matsumoto[469306]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 19:13:50 compute-0 gracious_matsumoto[469306]:        "osd_id": 2,
Dec  3 19:13:50 compute-0 gracious_matsumoto[469306]:        "osd_uuid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 19:13:50 compute-0 gracious_matsumoto[469306]:        "type": "bluestore"
Dec  3 19:13:50 compute-0 gracious_matsumoto[469306]:    },
Dec  3 19:13:50 compute-0 gracious_matsumoto[469306]:    "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": {
Dec  3 19:13:50 compute-0 gracious_matsumoto[469306]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:13:50 compute-0 gracious_matsumoto[469306]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 19:13:50 compute-0 gracious_matsumoto[469306]:        "osd_id": 0,
Dec  3 19:13:50 compute-0 gracious_matsumoto[469306]:        "osd_uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 19:13:50 compute-0 gracious_matsumoto[469306]:        "type": "bluestore"
Dec  3 19:13:50 compute-0 gracious_matsumoto[469306]:    }
Dec  3 19:13:50 compute-0 gracious_matsumoto[469306]: }
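gracious_matsumoto dumps the host's discoverable OSDs, keyed by osd_uuid, in the shape of ceph-volume raw list output: three bluestore OSDs (0, 1, 2), one per ceph_vgN/ceph_lvN logical volume, all in cluster c1caf3ba-b2a5-5005-a11e-e955c344dccc. A short sketch for summarizing such a report (captured to an illustrative filename):

    import json

    # Summarize an osd_uuid-keyed report like the one above
    # (the filename is illustrative).
    with open("osd_list.json") as f:
        osds = json.load(f)

    for info in sorted(osds.values(), key=lambda i: i["osd_id"]):
        print(f"osd.{info['osd_id']}  {info['device']}  "
              f"type={info['type']}  fsid={info['ceph_fsid']}")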
Dec  3 19:13:50 compute-0 systemd[1]: libpod-a955964969100505ad48028aa07f14417eb21237b70873138d704a702dcf8ca9.scope: Deactivated successfully.
Dec  3 19:13:50 compute-0 podman[469290]: 2025-12-03 19:13:50.650352545 +0000 UTC m=+1.588167934 container died a955964969100505ad48028aa07f14417eb21237b70873138d704a702dcf8ca9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 19:13:50 compute-0 systemd[1]: libpod-a955964969100505ad48028aa07f14417eb21237b70873138d704a702dcf8ca9.scope: Consumed 1.182s CPU time.
Dec  3 19:13:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-01298a1e869cbd89d019412dd74efef8adfb25be4f82b9702cad206595d08e8a-merged.mount: Deactivated successfully.
Dec  3 19:13:50 compute-0 podman[469290]: 2025-12-03 19:13:50.753247912 +0000 UTC m=+1.691063311 container remove a955964969100505ad48028aa07f14417eb21237b70873138d704a702dcf8ca9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 19:13:50 compute-0 systemd[1]: libpod-conmon-a955964969100505ad48028aa07f14417eb21237b70873138d704a702dcf8ca9.scope: Deactivated successfully.
Dec  3 19:13:50 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 19:13:50 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:13:50 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 19:13:50 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
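With the scan finished, the cephadm mgr module persists the refreshed inventory into the mon's config-key store (the two config-key set commands above, one for the device list and one for host metadata). The stored blobs can be read back for inspection; a hedged sketch (key name copied from the log; cephadm stores JSON under these keys):

    import json
    import subprocess

    # Read back the device inventory cephadm just persisted (key name
    # copied from the mon_command line above; the value is JSON).
    key = "mgr/cephadm/host.compute-0.devices.0"
    raw = subprocess.check_output(["ceph", "config-key", "get", key], text=True)
    devices = json.loads(raw)
    print(json.dumps(devices, indent=2)[:400])  # peek at the blob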
Dec  3 19:13:50 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev fccc6e81-9aef-4ca4-b08b-e6c91917dd3d does not exist
Dec  3 19:13:50 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev e3d148f6-62a6-4c62-8e90-395a62f5f875 does not exist
Dec  3 19:13:51 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:13:51 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:13:51 compute-0 nova_compute[348325]: 2025-12-03 19:13:51.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:13:51 compute-0 nova_compute[348325]: 2025-12-03 19:13:51.714 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 19:13:51 compute-0 nova_compute[348325]: 2025-12-03 19:13:51.714 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 19:13:51 compute-0 nova_compute[348325]: 2025-12-03 19:13:51.715 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 19:13:51 compute-0 nova_compute[348325]: 2025-12-03 19:13:51.716 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  3 19:13:51 compute-0 nova_compute[348325]: 2025-12-03 19:13:51.717 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 19:13:52 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 19:13:52 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1078915972' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 19:13:52 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2249: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:13:52 compute-0 nova_compute[348325]: 2025-12-03 19:13:52.310 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
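This is nova's resource audit sizing its Ceph-backed storage: the libvirt driver shells out to ceph df as client.openstack, the mon logs the dispatch on its audit channel, and the call returns 0 in 0.594 s. The same probe can be run by hand (command line copied from the log):

    import json
    import subprocess

    # The exact command nova ran above, parsed the same way.
    cmd = ["ceph", "df", "--format=json",
           "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    stats = json.loads(subprocess.check_output(cmd, text=True))
    for pool in stats["pools"]:
        s = pool["stats"]
        print(pool["name"], s["bytes_used"], "used,", s["max_avail"], "avail")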
Dec  3 19:13:52 compute-0 nova_compute[348325]: 2025-12-03 19:13:52.319 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:13:52 compute-0 nova_compute[348325]: 2025-12-03 19:13:52.488 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 19:13:52 compute-0 nova_compute[348325]: 2025-12-03 19:13:52.489 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 19:13:52 compute-0 nova_compute[348325]: 2025-12-03 19:13:52.494 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 19:13:52 compute-0 nova_compute[348325]: 2025-12-03 19:13:52.494 348329 DEBUG nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  3 19:13:52 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #108. Immutable memtables: 0.
Dec  3 19:13:52 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:13:52.497110) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 19:13:52 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 63] Flushing memtable with next log file: 108
Dec  3 19:13:52 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764789232497220, "job": 63, "event": "flush_started", "num_memtables": 1, "num_entries": 1826, "num_deletes": 251, "total_data_size": 2992431, "memory_usage": 3042288, "flush_reason": "Manual Compaction"}
Dec  3 19:13:52 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 63] Level-0 flush table #109: started
Dec  3 19:13:52 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764789232523662, "cf_name": "default", "job": 63, "event": "table_file_creation", "file_number": 109, "file_size": 2930173, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 44499, "largest_seqno": 46324, "table_properties": {"data_size": 2921761, "index_size": 5162, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 16895, "raw_average_key_size": 19, "raw_value_size": 2905102, "raw_average_value_size": 3437, "num_data_blocks": 230, "num_entries": 845, "num_filter_entries": 845, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764789035, "oldest_key_time": 1764789035, "file_creation_time": 1764789232, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a1ac3b74-8599-4a51-8b4c-6fd35a134427", "db_session_id": "TYOLZSJOOVNJYKF8Y1CE", "orig_file_number": 109, "seqno_to_time_mapping": "N/A"}}
Dec  3 19:13:52 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 63] Flush lasted 26645 microseconds, and 15004 cpu microseconds.
Dec  3 19:13:52 compute-0 ceph-mon[192802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 19:13:52 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:13:52.523752) [db/flush_job.cc:967] [default] [JOB 63] Level-0 flush table #109: 2930173 bytes OK
Dec  3 19:13:52 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:13:52.523780) [db/memtable_list.cc:519] [default] Level-0 commit table #109 started
Dec  3 19:13:52 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:13:52.527236) [db/memtable_list.cc:722] [default] Level-0 commit table #109: memtable #1 done
Dec  3 19:13:52 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:13:52.527349) EVENT_LOG_v1 {"time_micros": 1764789232527253, "job": 63, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 19:13:52 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:13:52.527377) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 19:13:52 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 63] Try to delete WAL files size 2984678, prev total WAL file size 2984678, number of live WAL files 2.
Dec  3 19:13:52 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000105.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 19:13:52 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:13:52.529612) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034323637' seq:72057594037927935, type:22 .. '7061786F730034353139' seq:0, type:0; will stop at (end)
Dec  3 19:13:52 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 64] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 19:13:52 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 63 Base level 0, inputs: [109(2861KB)], [107(6535KB)]
Dec  3 19:13:52 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764789232529747, "job": 64, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [109], "files_L6": [107], "score": -1, "input_data_size": 9622620, "oldest_snapshot_seqno": -1}
Dec  3 19:13:52 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 64] Generated table #110: 6115 keys, 7876315 bytes, temperature: kUnknown
Dec  3 19:13:52 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764789232606365, "cf_name": "default", "job": 64, "event": "table_file_creation", "file_number": 110, "file_size": 7876315, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7838352, "index_size": 21597, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15301, "raw_key_size": 159138, "raw_average_key_size": 26, "raw_value_size": 7730657, "raw_average_value_size": 1264, "num_data_blocks": 856, "num_entries": 6115, "num_filter_entries": 6115, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764784942, "oldest_key_time": 0, "file_creation_time": 1764789232, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a1ac3b74-8599-4a51-8b4c-6fd35a134427", "db_session_id": "TYOLZSJOOVNJYKF8Y1CE", "orig_file_number": 110, "seqno_to_time_mapping": "N/A"}}
Dec  3 19:13:52 compute-0 ceph-mon[192802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 19:13:52 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:13:52.606745) [db/compaction/compaction_job.cc:1663] [default] [JOB 64] Compacted 1@0 + 1@6 files to L6 => 7876315 bytes
Dec  3 19:13:52 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:13:52.609830) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 125.3 rd, 102.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.8, 6.4 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(6.0) write-amplify(2.7) OK, records in: 6629, records dropped: 514 output_compression: NoCompression
Dec  3 19:13:52 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:13:52.609860) EVENT_LOG_v1 {"time_micros": 1764789232609846, "job": 64, "event": "compaction_finished", "compaction_time_micros": 76781, "compaction_time_cpu_micros": 41617, "output_level": 6, "num_output_files": 1, "total_output_size": 7876315, "num_input_records": 6629, "num_output_records": 6115, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 19:13:52 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000109.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 19:13:52 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764789232610982, "job": 64, "event": "table_file_deletion", "file_number": 109}
Dec  3 19:13:52 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000107.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 19:13:52 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764789232613994, "job": 64, "event": "table_file_deletion", "file_number": 107}
Dec  3 19:13:52 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:13:52.529221) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:13:52 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:13:52.614532) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:13:52 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:13:52.614542) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:13:52 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:13:52.614545) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:13:52 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:13:52.614548) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:13:52 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:13:52.614551) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
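The burst of rocksdb lines is the mon compacting its store.db: job 63 flushes a ~2.9 MB memtable to a level-0 table, then job 64 compacts that table plus the existing L6 file into a single 7.9 MB L6 table and deletes both inputs. The reported read-write-amplify(6.0) checks out: (2.8 + 6.4 + 7.5) / 2.8 ≈ 6.0, and write-amplify 7.5 / 2.8 ≈ 2.7. The EVENT_LOG_v1 payloads are machine-readable JSON, so stats like these can be pulled straight out of a captured log; a sketch (the log filename is illustrative):

    import json

    MARKER = "EVENT_LOG_v1 "

    # Yield the JSON payload of every RocksDB EVENT_LOG_v1 line in a
    # captured log (the filename below is illustrative).
    def rocksdb_events(lines):
        for line in lines:
            i = line.find(MARKER)
            if i != -1:
                yield json.loads(line[i + len(MARKER):])

    with open("ceph-mon.log") as f:
        for ev in rocksdb_events(f):
            if ev.get("event") == "compaction_finished":
                print(f"job {ev['job']}: {ev['total_output_size']} bytes "
                      f"in {ev['compaction_time_micros']} us")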
Dec  3 19:13:53 compute-0 nova_compute[348325]: 2025-12-03 19:13:53.091 348329 WARNING nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  3 19:13:53 compute-0 nova_compute[348325]: 2025-12-03 19:13:53.093 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3466MB free_disk=59.89699935913086GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  3 19:13:53 compute-0 nova_compute[348325]: 2025-12-03 19:13:53.094 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 19:13:53 compute-0 nova_compute[348325]: 2025-12-03 19:13:53.094 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 19:13:53 compute-0 nova_compute[348325]: 2025-12-03 19:13:53.444 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance a4fc45c7-44e4-4b50-a3e0-98de13268f88 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  3 19:13:53 compute-0 nova_compute[348325]: 2025-12-03 19:13:53.445 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Instance a364994c-8442-4a4c-bd6b-f3a2d31e4483 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  3 19:13:53 compute-0 nova_compute[348325]: 2025-12-03 19:13:53.446 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  3 19:13:53 compute-0 nova_compute[348325]: 2025-12-03 19:13:53.447 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  3 19:13:53 compute-0 nova_compute[348325]: 2025-12-03 19:13:53.515 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 19:13:53 compute-0 nova_compute[348325]: 2025-12-03 19:13:53.930 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:13:53 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 19:13:53 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1496241034' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 19:13:53 compute-0 podman[469445]: 2025-12-03 19:13:53.98316175 +0000 UTC m=+0.128255681 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 19:13:53 compute-0 nova_compute[348325]: 2025-12-03 19:13:53.995 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 19:13:54 compute-0 nova_compute[348325]: 2025-12-03 19:13:54.012 348329 DEBUG nova.compute.provider_tree [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  3 19:13:54 compute-0 nova_compute[348325]: 2025-12-03 19:13:54.056 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  3 19:13:54 compute-0 nova_compute[348325]: 2025-12-03 19:13:54.060 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  3 19:13:54 compute-0 nova_compute[348325]: 2025-12-03 19:13:54.060 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.966s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
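The audit closes by confirming the placement inventory is unchanged: 8 VCPUs at allocation_ratio 4.0, 7679 MB RAM with 512 reserved, and 59 GB disk with 1 reserved at ratio 0.9. Placement derives schedulable capacity as (total - reserved) * allocation_ratio, so this node advertises 32 VCPUs, 7167 MB and 52.2 GB; a worked check with the values from the log:

    # Schedulable capacity per resource class, as placement computes it:
    # (total - reserved) * allocation_ratio. Values copied from the
    # inventory data logged above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {cap:g}")
    # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2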
Dec  3 19:13:54 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2250: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:13:54 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:13:56 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2251: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:13:57 compute-0 nova_compute[348325]: 2025-12-03 19:13:57.322 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:13:58 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2252: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:13:58 compute-0 nova_compute[348325]: 2025-12-03 19:13:58.933 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:13:59 compute-0 podman[158200]: time="2025-12-03T19:13:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 19:13:59 compute-0 podman[158200]: @ - - [03/Dec/2025:19:13:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 19:13:59 compute-0 podman[158200]: @ - - [03/Dec/2025:19:13:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8656 "" "Go-http-client/1.1"
Dec  3 19:13:59 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:14:00 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2253: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:14:00 compute-0 podman[469476]: 2025-12-03 19:14:00.938248323 +0000 UTC m=+0.091913636 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20251125)
Dec  3 19:14:00 compute-0 podman[469475]: 2025-12-03 19:14:00.955196936 +0000 UTC m=+0.120502756 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  3 19:14:01 compute-0 openstack_network_exporter[365222]: ERROR   19:14:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:14:01 compute-0 openstack_network_exporter[365222]: ERROR   19:14:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:14:01 compute-0 openstack_network_exporter[365222]: ERROR   19:14:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 19:14:01 compute-0 openstack_network_exporter[365222]: ERROR   19:14:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 19:14:01 compute-0 openstack_network_exporter[365222]: ERROR   19:14:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
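The appctl.go errors above are expected on a compute node: the exporter probes each OVS/OVN daemon through its control socket, advertised as a <daemon>.<pid>.ctl file under the daemon's run directory, and ovn-northd (a control-plane daemon) has no such socket here; the dpif-netdev errors likewise just mean there is no userspace datapath to query. A hypothetical sketch of that discovery step, assuming the default runtime directories:

```python
# Hypothetical re-implementation of the exporter's socket lookup, to show
# why "no control socket files found" appears for daemons not running here.
# The run directories are the usual OVS/OVN defaults; adjust as needed.
import glob
import os

RUN_DIRS = {
    "ovn-northd": "/run/ovn",          # control-plane only, absent here
    "ovn-controller": "/run/ovn",
    "ovsdb-server": "/run/openvswitch",
    "ovs-vswitchd": "/run/openvswitch",
}

def find_ctl(daemon):
    """Return the daemon's <name>.<pid>.ctl socket path, or None."""
    matches = glob.glob(os.path.join(RUN_DIRS[daemon], daemon + ".*.ctl"))
    return matches[0] if matches else None

for name in RUN_DIRS:
    print(name, find_ctl(name) or "no control socket files found")
```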
Dec  3 19:14:02 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2254: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:14:02 compute-0 nova_compute[348325]: 2025-12-03 19:14:02.325 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:14:03 compute-0 nova_compute[348325]: 2025-12-03 19:14:03.935 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:14:04 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2255: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:14:04 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:14:06 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2256: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:14:07 compute-0 nova_compute[348325]: 2025-12-03 19:14:07.328 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:14:08 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2257: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:14:08 compute-0 nova_compute[348325]: 2025-12-03 19:14:08.939 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:14:09 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:14:10 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2258: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:14:11 compute-0 podman[469521]: 2025-12-03 19:14:11.985536386 +0000 UTC m=+0.124153642 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  3 19:14:11 compute-0 podman[469520]: 2025-12-03 19:14:11.985146867 +0000 UTC m=+0.129406907 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  3 19:14:12 compute-0 podman[469522]: 2025-12-03 19:14:12.030983087 +0000 UTC m=+0.164730898 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., version=9.6, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, vcs-type=git, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., managed_by=edpm_ansible)
Dec  3 19:14:12 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2259: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:14:12 compute-0 nova_compute[348325]: 2025-12-03 19:14:12.333 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:14:13 compute-0 nova_compute[348325]: 2025-12-03 19:14:13.941 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:14:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:14:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:14:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:14:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:14:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:14:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:14:14 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_19:14:14
Dec  3 19:14:14 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 19:14:14 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 19:14:14 compute-0 ceph-mgr[193091]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'backups', 'vms', 'default.rgw.meta', 'default.rgw.log', 'images', 'default.rgw.control', 'volumes', '.rgw.root', 'cephfs.cephfs.data']
Dec  3 19:14:14 compute-0 ceph-mgr[193091]: [balancer INFO root] prepared 0/10 changes
Dec  3 19:14:14 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2260: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:14:14 compute-0 nova_compute[348325]: 2025-12-03 19:14:14.557 348329 DEBUG oslo_concurrency.lockutils [None req-e75ac355-e92d-47eb-b65b-6782d769e286 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Acquiring lock "a4fc45c7-44e4-4b50-a3e0-98de13268f88" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 19:14:14 compute-0 nova_compute[348325]: 2025-12-03 19:14:14.558 348329 DEBUG oslo_concurrency.lockutils [None req-e75ac355-e92d-47eb-b65b-6782d769e286 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Lock "a4fc45c7-44e4-4b50-a3e0-98de13268f88" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 19:14:14 compute-0 nova_compute[348325]: 2025-12-03 19:14:14.559 348329 DEBUG oslo_concurrency.lockutils [None req-e75ac355-e92d-47eb-b65b-6782d769e286 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Acquiring lock "a4fc45c7-44e4-4b50-a3e0-98de13268f88-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 19:14:14 compute-0 nova_compute[348325]: 2025-12-03 19:14:14.561 348329 DEBUG oslo_concurrency.lockutils [None req-e75ac355-e92d-47eb-b65b-6782d769e286 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Lock "a4fc45c7-44e4-4b50-a3e0-98de13268f88-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 19:14:14 compute-0 nova_compute[348325]: 2025-12-03 19:14:14.562 348329 DEBUG oslo_concurrency.lockutils [None req-e75ac355-e92d-47eb-b65b-6782d769e286 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Lock "a4fc45c7-44e4-4b50-a3e0-98de13268f88-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
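The Acquiring/acquired/released triplets above are oslo.concurrency's lock tracing around the delete: one fair lock scoped to the instance UUID serializes lifecycle operations, and a narrower "<uuid>-events" lock guards the pending external-event table. A rough sketch of that pattern, with placeholder handler bodies (only the lock names mirror the log):

```python
# Locking pattern behind the lockutils log lines above, sketched with
# oslo.concurrency directly; nova.compute.manager wraps the same calls.
from oslo_concurrency import lockutils

def terminate_instance(instance_uuid):
    # Produces: Lock "<uuid>" acquired by "...do_terminate_instance"
    @lockutils.synchronized(instance_uuid)
    def do_terminate_instance():
        clear_events_for_instance(instance_uuid)
        # ... power off the guest, unplug VIFs, deallocate network ...

    do_terminate_instance()

def clear_events_for_instance(instance_uuid):
    # Produces the "<uuid>-events" acquire/release pair in the log.
    with lockutils.lock(instance_uuid + "-events"):
        pass  # drop any queued external events for this instance
```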
Dec  3 19:14:14 compute-0 nova_compute[348325]: 2025-12-03 19:14:14.565 348329 INFO nova.compute.manager [None req-e75ac355-e92d-47eb-b65b-6782d769e286 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Terminating instance#033[00m
Dec  3 19:14:14 compute-0 nova_compute[348325]: 2025-12-03 19:14:14.567 348329 DEBUG nova.compute.manager [None req-e75ac355-e92d-47eb-b65b-6782d769e286 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  3 19:14:14 compute-0 kernel: tapcf729fa8-95 (unregistering): left promiscuous mode
Dec  3 19:14:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 19:14:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 19:14:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 19:14:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 19:14:14 compute-0 NetworkManager[49087]: <info>  [1764789254.7168] device (tapcf729fa8-95): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  3 19:14:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 19:14:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 19:14:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 19:14:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 19:14:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 19:14:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 19:14:14 compute-0 nova_compute[348325]: 2025-12-03 19:14:14.745 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:14:14 compute-0 ovn_controller[89305]: 2025-12-03T19:14:14Z|00180|binding|INFO|Releasing lport cf729fa8-9549-4bf2-9858-7e8de773e1bc from this chassis (sb_readonly=0)
Dec  3 19:14:14 compute-0 ovn_controller[89305]: 2025-12-03T19:14:14Z|00181|binding|INFO|Setting lport cf729fa8-9549-4bf2-9858-7e8de773e1bc down in Southbound
Dec  3 19:14:14 compute-0 ovn_controller[89305]: 2025-12-03T19:14:14Z|00182|binding|INFO|Removing iface tapcf729fa8-95 ovn-installed in OVS
Dec  3 19:14:14 compute-0 nova_compute[348325]: 2025-12-03 19:14:14.756 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:14:14 compute-0 nova_compute[348325]: 2025-12-03 19:14:14.770 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:14:14 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Dec  3 19:14:14 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000c.scope: Consumed 7min 28.329s CPU time.
Dec  3 19:14:14 compute-0 systemd-machined[138702]: Machine qemu-13-instance-0000000c terminated.
Dec  3 19:14:14 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:14:14 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:14:14.865 286999 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8d:91:4c 10.100.3.160'], port_security=['fa:16:3e:8d:91:4c 10.100.3.160'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.3.160/16', 'neutron:device_id': 'a4fc45c7-44e4-4b50-a3e0-98de13268f88', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-04e258c0-609e-4010-a306-af20506c3a9d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd29cef7b24ee4d30b2b3f5027ec6aafb', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4e47f9e7-514d-4fc2-9225-d05512482dee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b71f2b6d-7f9c-430c-a162-af2bdc131d68, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f81e3e96760>], logical_port=cf729fa8-9549-4bf2-9858-7e8de773e1bc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f81e3e96760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 19:14:14 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:14:14.867 286999 INFO neutron.agent.ovn.metadata.agent [-] Port cf729fa8-9549-4bf2-9858-7e8de773e1bc in datapath 04e258c0-609e-4010-a306-af20506c3a9d unbound from our chassis#033[00m
Dec  3 19:14:14 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:14:14.869 286999 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 04e258c0-609e-4010-a306-af20506c3a9d#033[00m
Dec  3 19:14:14 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:14:14.908 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[e06b7322-31a2-42a6-ac86-4bf52ae504f2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 19:14:14 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:14:14.971 411797 DEBUG oslo.privsep.daemon [-] privsep: reply[fd3f6753-7df2-4bd7-9953-f7402d38edd5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 19:14:14 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:14:14.979 411797 DEBUG oslo.privsep.daemon [-] privsep: reply[d98eaa94-4103-45c5-8b16-6cc7769f58e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
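The Matched UPDATE record above, and the unbound-from-chassis and provisioning lines that follow, come from the metadata agent's ovsdbapp row-event watcher on the Southbound Port_Binding table; the privsep replies are the namespace and interface work it triggers. A rough sketch of the watcher pattern (the handler and match logic here are placeholders, not neutron's real ones):

```python
# Row-event pattern behind "Matched UPDATE: PortBindingUpdatedEvent" above.
# Sketch only: the real agent wires this into its ovsdbapp Southbound IDL.
from ovsdbapp.backend.ovs_idl import event as row_event

class PortBindingUpdatedEvent(row_event.RowEvent):
    def __init__(self):
        # Matches the logged constructor: events=('update',),
        # table='Port_Binding', conditions=None.
        super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

    def match_fn(self, event, row, old):
        # Only act when the binding's chassis assignment changed.
        return hasattr(old, 'chassis')

    def run(self, event, row, old):
        print('port %s binding changed' % row.logical_port)
```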
Dec  3 19:14:15 compute-0 nova_compute[348325]: 2025-12-03 19:14:15.009 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:14:15 compute-0 nova_compute[348325]: 2025-12-03 19:14:15.023 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:14:15 compute-0 nova_compute[348325]: 2025-12-03 19:14:15.032 348329 INFO nova.virt.libvirt.driver [-] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Instance destroyed successfully.#033[00m
Dec  3 19:14:15 compute-0 nova_compute[348325]: 2025-12-03 19:14:15.033 348329 DEBUG nova.objects.instance [None req-e75ac355-e92d-47eb-b65b-6782d769e286 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Lazy-loading 'resources' on Instance uuid a4fc45c7-44e4-4b50-a3e0-98de13268f88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 19:14:15 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:14:15.035 411797 DEBUG oslo.privsep.daemon [-] privsep: reply[7d0dddb8-8e3c-4b2b-9d88-a7981e8d705e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 19:14:15 compute-0 nova_compute[348325]: 2025-12-03 19:14:15.061 348329 DEBUG nova.virt.libvirt.vif [None req-e75ac355-e92d-47eb-b65b-6782d769e286 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-03T18:59:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='te-0714371-asg-eacwc356yfed-wjjibmhqaqmp-wkbbxaqu3pya',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-0714371-asg-eacwc356yfed-wjjibmhqaqmp-wkbbxaqu3pya',id=12,image_ref='29e9e995-880d-46f8-bdd0-149d4e107ea9',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-03T18:59:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='d721c97c-b9eb-44f9-a826-1b99239b172a'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d29cef7b24ee4d30b2b3f5027ec6aafb',ramdisk_id='',reservation_id='r-58mkdhyd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='29e9e995-880d-46f8-bdd0-149d4e107ea9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-PrometheusGabbiTest-463817161',owner_user_name='tempest-PrometheusGabbiTest-463817161-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-03T18:59:46Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='5b5e6c2a7cce4e3b96611203def80123',uuid=a4fc45c7-44e4-4b50-a3e0-98de13268f88,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cf729fa8-9549-4bf2-9858-7e8de773e1bc", "address": "fa:16:3e:8d:91:4c", "network": {"id": "04e258c0-609e-4010-a306-af20506c3a9d", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.160", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d29cef7b24ee4d30b2b3f5027ec6aafb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf729fa8-95", "ovs_interfaceid": "cf729fa8-9549-4bf2-9858-7e8de773e1bc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  3 19:14:15 compute-0 nova_compute[348325]: 2025-12-03 19:14:15.062 348329 DEBUG nova.network.os_vif_util [None req-e75ac355-e92d-47eb-b65b-6782d769e286 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Converting VIF {"id": "cf729fa8-9549-4bf2-9858-7e8de773e1bc", "address": "fa:16:3e:8d:91:4c", "network": {"id": "04e258c0-609e-4010-a306-af20506c3a9d", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.160", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d29cef7b24ee4d30b2b3f5027ec6aafb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf729fa8-95", "ovs_interfaceid": "cf729fa8-9549-4bf2-9858-7e8de773e1bc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 19:14:15 compute-0 nova_compute[348325]: 2025-12-03 19:14:15.065 348329 DEBUG nova.network.os_vif_util [None req-e75ac355-e92d-47eb-b65b-6782d769e286 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8d:91:4c,bridge_name='br-int',has_traffic_filtering=True,id=cf729fa8-9549-4bf2-9858-7e8de773e1bc,network=Network(04e258c0-609e-4010-a306-af20506c3a9d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf729fa8-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 19:14:15 compute-0 nova_compute[348325]: 2025-12-03 19:14:15.066 348329 DEBUG os_vif [None req-e75ac355-e92d-47eb-b65b-6782d769e286 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8d:91:4c,bridge_name='br-int',has_traffic_filtering=True,id=cf729fa8-9549-4bf2-9858-7e8de773e1bc,network=Network(04e258c0-609e-4010-a306-af20506c3a9d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf729fa8-95') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  3 19:14:15 compute-0 nova_compute[348325]: 2025-12-03 19:14:15.069 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:14:15 compute-0 nova_compute[348325]: 2025-12-03 19:14:15.070 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcf729fa8-95, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 19:14:15 compute-0 nova_compute[348325]: 2025-12-03 19:14:15.077 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:14:15 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:14:15.076 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[d9e39824-1730-4f3e-b5b9-c637cdf224e9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap04e258c0-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0e:5b:40'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 40, 'tx_packets': 7, 'rx_bytes': 1960, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 40, 'tx_packets': 7, 'rx_bytes': 1960, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 40], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 666585, 'reachable_time': 42149, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 469600, 'error': None, 'target': 'ovnmeta-04e258c0-609e-4010-a306-af20506c3a9d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 19:14:15 compute-0 nova_compute[348325]: 2025-12-03 19:14:15.086 348329 INFO os_vif [None req-e75ac355-e92d-47eb-b65b-6782d769e286 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8d:91:4c,bridge_name='br-int',has_traffic_filtering=True,id=cf729fa8-9549-4bf2-9858-7e8de773e1bc,network=Network(04e258c0-609e-4010-a306-af20506c3a9d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf729fa8-95')#033[00m
Dec  3 19:14:15 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:14:15.101 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[da5a0801-9d72-4b05-8c52-c1c159e804b2]: (4, ({'family': 2, 'prefixlen': 16, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.255.255'], ['IFA_LABEL', 'tap04e258c0-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 666603, 'tstamp': 666603}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 469602, 'error': None, 'target': 'ovnmeta-04e258c0-609e-4010-a306-af20506c3a9d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap04e258c0-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 666607, 'tstamp': 666607}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 469602, 'error': None, 'target': 'ovnmeta-04e258c0-609e-4010-a306-af20506c3a9d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 19:14:15 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:14:15.103 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap04e258c0-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 19:14:15 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:14:15.109 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap04e258c0-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 19:14:15 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:14:15.111 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 19:14:15 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:14:15.112 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap04e258c0-60, col_values=(('external_ids', {'iface-id': 'f82febe8-1e88-4e67-9f7a-5af5921c9877'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 19:14:15 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:14:15.113 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 19:14:15 compute-0 nova_compute[348325]: 2025-12-03 19:14:15.129 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:14:15 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:14:15.210 286999 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5a:63:53', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '8e:79:bd:f4:48:1d'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 19:14:15 compute-0 nova_compute[348325]: 2025-12-03 19:14:15.212 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:14:15 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:14:15.213 286999 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  3 19:14:15 compute-0 nova_compute[348325]: 2025-12-03 19:14:15.268 348329 DEBUG nova.compute.manager [req-faf71670-9b4a-46a3-8662-b4b4d8c5246b req-615d704f-2f97-4922-8314-96cd76d76109 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Received event network-vif-unplugged-cf729fa8-9549-4bf2-9858-7e8de773e1bc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 19:14:15 compute-0 nova_compute[348325]: 2025-12-03 19:14:15.268 348329 DEBUG oslo_concurrency.lockutils [req-faf71670-9b4a-46a3-8662-b4b4d8c5246b req-615d704f-2f97-4922-8314-96cd76d76109 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "a4fc45c7-44e4-4b50-a3e0-98de13268f88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 19:14:15 compute-0 nova_compute[348325]: 2025-12-03 19:14:15.269 348329 DEBUG oslo_concurrency.lockutils [req-faf71670-9b4a-46a3-8662-b4b4d8c5246b req-615d704f-2f97-4922-8314-96cd76d76109 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "a4fc45c7-44e4-4b50-a3e0-98de13268f88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 19:14:15 compute-0 nova_compute[348325]: 2025-12-03 19:14:15.269 348329 DEBUG oslo_concurrency.lockutils [req-faf71670-9b4a-46a3-8662-b4b4d8c5246b req-615d704f-2f97-4922-8314-96cd76d76109 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "a4fc45c7-44e4-4b50-a3e0-98de13268f88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 19:14:15 compute-0 nova_compute[348325]: 2025-12-03 19:14:15.269 348329 DEBUG nova.compute.manager [req-faf71670-9b4a-46a3-8662-b4b4d8c5246b req-615d704f-2f97-4922-8314-96cd76d76109 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] No waiting events found dispatching network-vif-unplugged-cf729fa8-9549-4bf2-9858-7e8de773e1bc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  3 19:14:15 compute-0 nova_compute[348325]: 2025-12-03 19:14:15.270 348329 DEBUG nova.compute.manager [req-faf71670-9b4a-46a3-8662-b4b4d8c5246b req-615d704f-2f97-4922-8314-96cd76d76109 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Received event network-vif-unplugged-cf729fa8-9549-4bf2-9858-7e8de773e1bc for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  3 19:14:16 compute-0 nova_compute[348325]: 2025-12-03 19:14:16.027 348329 INFO nova.virt.libvirt.driver [None req-e75ac355-e92d-47eb-b65b-6782d769e286 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Deleting instance files /var/lib/nova/instances/a4fc45c7-44e4-4b50-a3e0-98de13268f88_del#033[00m
Dec  3 19:14:16 compute-0 nova_compute[348325]: 2025-12-03 19:14:16.028 348329 INFO nova.virt.libvirt.driver [None req-e75ac355-e92d-47eb-b65b-6782d769e286 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Deletion of /var/lib/nova/instances/a4fc45c7-44e4-4b50-a3e0-98de13268f88_del complete#033[00m
Dec  3 19:14:16 compute-0 nova_compute[348325]: 2025-12-03 19:14:16.087 348329 INFO nova.compute.manager [None req-e75ac355-e92d-47eb-b65b-6782d769e286 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Took 1.52 seconds to destroy the instance on the hypervisor.#033[00m
Dec  3 19:14:16 compute-0 nova_compute[348325]: 2025-12-03 19:14:16.088 348329 DEBUG oslo.service.loopingcall [None req-e75ac355-e92d-47eb-b65b-6782d769e286 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  3 19:14:16 compute-0 nova_compute[348325]: 2025-12-03 19:14:16.089 348329 DEBUG nova.compute.manager [-] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  3 19:14:16 compute-0 nova_compute[348325]: 2025-12-03 19:14:16.090 348329 DEBUG nova.network.neutron [-] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  3 19:14:16 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2261: 321 pgs: 321 active+clean; 236 MiB data, 393 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Dec  3 19:14:17 compute-0 nova_compute[348325]: 2025-12-03 19:14:17.835 348329 DEBUG nova.compute.manager [req-ce74c916-519a-481f-9f12-2159b5ac1efc req-73e5dc87-cc62-4453-a550-17f6537b0513 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Received event network-vif-plugged-cf729fa8-9549-4bf2-9858-7e8de773e1bc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 19:14:17 compute-0 nova_compute[348325]: 2025-12-03 19:14:17.836 348329 DEBUG oslo_concurrency.lockutils [req-ce74c916-519a-481f-9f12-2159b5ac1efc req-73e5dc87-cc62-4453-a550-17f6537b0513 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "a4fc45c7-44e4-4b50-a3e0-98de13268f88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 19:14:17 compute-0 nova_compute[348325]: 2025-12-03 19:14:17.836 348329 DEBUG oslo_concurrency.lockutils [req-ce74c916-519a-481f-9f12-2159b5ac1efc req-73e5dc87-cc62-4453-a550-17f6537b0513 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "a4fc45c7-44e4-4b50-a3e0-98de13268f88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 19:14:17 compute-0 nova_compute[348325]: 2025-12-03 19:14:17.836 348329 DEBUG oslo_concurrency.lockutils [req-ce74c916-519a-481f-9f12-2159b5ac1efc req-73e5dc87-cc62-4453-a550-17f6537b0513 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "a4fc45c7-44e4-4b50-a3e0-98de13268f88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 19:14:17 compute-0 nova_compute[348325]: 2025-12-03 19:14:17.837 348329 DEBUG nova.compute.manager [req-ce74c916-519a-481f-9f12-2159b5ac1efc req-73e5dc87-cc62-4453-a550-17f6537b0513 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] No waiting events found dispatching network-vif-plugged-cf729fa8-9549-4bf2-9858-7e8de773e1bc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  3 19:14:17 compute-0 nova_compute[348325]: 2025-12-03 19:14:17.837 348329 WARNING nova.compute.manager [req-ce74c916-519a-481f-9f12-2159b5ac1efc req-73e5dc87-cc62-4453-a550-17f6537b0513 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Received unexpected event network-vif-plugged-cf729fa8-9549-4bf2-9858-7e8de773e1bc for instance with vm_state active and task_state deleting.#033[00m
Dec  3 19:14:17 compute-0 podman[469624]: 2025-12-03 19:14:17.974731463 +0000 UTC m=+0.112294781 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 19:14:17 compute-0 podman[469622]: 2025-12-03 19:14:17.998986529 +0000 UTC m=+0.147334094 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, io.openshift.tags=base rhel9, release=1214.1726694543, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.29.0, io.openshift.expose-services=, release-0.7.12=, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec  3 19:14:18 compute-0 podman[469623]: 2025-12-03 19:14:18.003788224 +0000 UTC m=+0.153297537 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  3 19:14:18 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2262: 321 pgs: 321 active+clean; 182 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 852 B/s wr, 22 op/s
Dec  3 19:14:18 compute-0 nova_compute[348325]: 2025-12-03 19:14:18.946 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:14:19 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:14:19.217 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1ac9fd0d-196b-4ea8-9a9a-8aa831092805, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 19:14:19 compute-0 ceph-osd[206694]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 19:14:19 compute-0 ceph-osd[206694]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.2 total, 600.0 interval#012Cumulative writes: 10K writes, 40K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.01 MB/s#012Cumulative WAL: 10K writes, 2935 syncs, 3.61 writes per sync, written: 0.04 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 414 writes, 1101 keys, 414 commit groups, 1.0 writes per commit group, ingest: 0.80 MB, 0.00 MB/s#012Interval WAL: 414 writes, 184 syncs, 2.25 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  3 19:14:19 compute-0 nova_compute[348325]: 2025-12-03 19:14:19.758 348329 DEBUG nova.network.neutron [-] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 19:14:19 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:14:19 compute-0 nova_compute[348325]: 2025-12-03 19:14:19.819 348329 INFO nova.compute.manager [-] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Took 3.73 seconds to deallocate network for instance.#033[00m
Dec  3 19:14:19 compute-0 nova_compute[348325]: 2025-12-03 19:14:19.866 348329 DEBUG nova.compute.manager [req-37b29be7-eff0-4a6a-b034-21e3a9235c1f req-610f83ef-f19d-43c0-b6ed-155cc0b27225 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Received event network-vif-deleted-cf729fa8-9549-4bf2-9858-7e8de773e1bc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 19:14:19 compute-0 nova_compute[348325]: 2025-12-03 19:14:19.875 348329 DEBUG oslo_concurrency.lockutils [None req-e75ac355-e92d-47eb-b65b-6782d769e286 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 19:14:19 compute-0 nova_compute[348325]: 2025-12-03 19:14:19.876 348329 DEBUG oslo_concurrency.lockutils [None req-e75ac355-e92d-47eb-b65b-6782d769e286 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 19:14:19 compute-0 nova_compute[348325]: 2025-12-03 19:14:19.967 348329 DEBUG oslo_concurrency.processutils [None req-e75ac355-e92d-47eb-b65b-6782d769e286 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 19:14:20 compute-0 nova_compute[348325]: 2025-12-03 19:14:20.075 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:14:20 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2263: 321 pgs: 321 active+clean; 157 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec  3 19:14:20 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 19:14:20 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3826302736' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 19:14:20 compute-0 nova_compute[348325]: 2025-12-03 19:14:20.509 348329 DEBUG oslo_concurrency.processutils [None req-e75ac355-e92d-47eb-b65b-6782d769e286 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 19:14:20 compute-0 nova_compute[348325]: 2025-12-03 19:14:20.522 348329 DEBUG nova.compute.provider_tree [None req-e75ac355-e92d-47eb-b65b-6782d769e286 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 19:14:20 compute-0 nova_compute[348325]: 2025-12-03 19:14:20.554 348329 DEBUG nova.scheduler.client.report [None req-e75ac355-e92d-47eb-b65b-6782d769e286 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 19:14:20 compute-0 nova_compute[348325]: 2025-12-03 19:14:20.588 348329 DEBUG oslo_concurrency.lockutils [None req-e75ac355-e92d-47eb-b65b-6782d769e286 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 19:14:21 compute-0 nova_compute[348325]: 2025-12-03 19:14:21.057 348329 INFO nova.scheduler.client.report [None req-e75ac355-e92d-47eb-b65b-6782d769e286 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Deleted allocations for instance a4fc45c7-44e4-4b50-a3e0-98de13268f88#033[00m
Dec  3 19:14:21 compute-0 nova_compute[348325]: 2025-12-03 19:14:21.145 348329 DEBUG oslo_concurrency.lockutils [None req-e75ac355-e92d-47eb-b65b-6782d769e286 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Lock "a4fc45c7-44e4-4b50-a3e0-98de13268f88" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 19:14:22 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2264: 321 pgs: 321 active+clean; 157 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec  3 19:14:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:14:23.372 286999 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 19:14:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:14:23.373 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 19:14:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:14:23.374 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 19:14:23 compute-0 nova_compute[348325]: 2025-12-03 19:14:23.950 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:14:24 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2265: 321 pgs: 321 active+clean; 157 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec  3 19:14:24 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:14:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 19:14:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:14:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 19:14:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:14:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007608628190727356 of space, bias 1.0, pg target 0.22825884572182067 quantized to 32 (current 32)
Dec  3 19:14:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:14:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:14:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:14:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:14:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:14:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Dec  3 19:14:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:14:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 19:14:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:14:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:14:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:14:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 19:14:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:14:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 19:14:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:14:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:14:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:14:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 19:14:24 compute-0 podman[469700]: 2025-12-03 19:14:24.972030598 +0000 UTC m=+0.122685037 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 19:14:25 compute-0 nova_compute[348325]: 2025-12-03 19:14:25.080 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:14:26 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2266: 321 pgs: 321 active+clean; 157 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec  3 19:14:28 compute-0 ceph-osd[207851]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 19:14:28 compute-0 ceph-osd[207851]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.2 total, 600.0 interval#012Cumulative writes: 10K writes, 39K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s#012Cumulative WAL: 10K writes, 2961 syncs, 3.62 writes per sync, written: 0.03 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 399 writes, 1169 keys, 399 commit groups, 1.0 writes per commit group, ingest: 1.12 MB, 0.00 MB/s#012Interval WAL: 399 writes, 181 syncs, 2.20 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  3 19:14:28 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2267: 321 pgs: 321 active+clean; 157 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec  3 19:14:28 compute-0 nova_compute[348325]: 2025-12-03 19:14:28.953 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:14:29 compute-0 podman[158200]: time="2025-12-03T19:14:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 19:14:29 compute-0 podman[158200]: @ - - [03/Dec/2025:19:14:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 19:14:29 compute-0 podman[158200]: @ - - [03/Dec/2025:19:14:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8653 "" "Go-http-client/1.1"
Dec  3 19:14:29 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:14:30 compute-0 nova_compute[348325]: 2025-12-03 19:14:30.026 348329 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764789255.0225523, a4fc45c7-44e4-4b50-a3e0-98de13268f88 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 19:14:30 compute-0 nova_compute[348325]: 2025-12-03 19:14:30.027 348329 INFO nova.compute.manager [-] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] VM Stopped (Lifecycle Event)#033[00m
Dec  3 19:14:30 compute-0 nova_compute[348325]: 2025-12-03 19:14:30.085 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:14:30 compute-0 nova_compute[348325]: 2025-12-03 19:14:30.113 348329 DEBUG nova.compute.manager [None req-2af47d41-58a5-4d10-91f3-5c9862a8fe35 - - - - - -] [instance: a4fc45c7-44e4-4b50-a3e0-98de13268f88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 19:14:30 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2268: 321 pgs: 321 active+clean; 157 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 341 B/s wr, 5 op/s
Dec  3 19:14:31 compute-0 nova_compute[348325]: 2025-12-03 19:14:31.053 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:14:31 compute-0 nova_compute[348325]: 2025-12-03 19:14:31.054 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:14:31 compute-0 openstack_network_exporter[365222]: ERROR   19:14:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:14:31 compute-0 openstack_network_exporter[365222]: ERROR   19:14:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:14:31 compute-0 openstack_network_exporter[365222]: ERROR   19:14:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 19:14:31 compute-0 openstack_network_exporter[365222]: ERROR   19:14:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 19:14:31 compute-0 openstack_network_exporter[365222]: ERROR   19:14:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 19:14:31 compute-0 podman[469725]: 2025-12-03 19:14:31.996314825 +0000 UTC m=+0.142279985 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_id=edpm)
Dec  3 19:14:32 compute-0 podman[469724]: 2025-12-03 19:14:32.085018815 +0000 UTC m=+0.233473824 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  3 19:14:32 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2269: 321 pgs: 321 active+clean; 157 MiB data, 346 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:14:32 compute-0 nova_compute[348325]: 2025-12-03 19:14:32.775 348329 DEBUG oslo_concurrency.lockutils [None req-c977220b-d537-49ce-b32a-5e0613ba8c70 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Acquiring lock "a364994c-8442-4a4c-bd6b-f3a2d31e4483" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 19:14:32 compute-0 nova_compute[348325]: 2025-12-03 19:14:32.777 348329 DEBUG oslo_concurrency.lockutils [None req-c977220b-d537-49ce-b32a-5e0613ba8c70 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Lock "a364994c-8442-4a4c-bd6b-f3a2d31e4483" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 19:14:32 compute-0 nova_compute[348325]: 2025-12-03 19:14:32.778 348329 DEBUG oslo_concurrency.lockutils [None req-c977220b-d537-49ce-b32a-5e0613ba8c70 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Acquiring lock "a364994c-8442-4a4c-bd6b-f3a2d31e4483-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 19:14:32 compute-0 nova_compute[348325]: 2025-12-03 19:14:32.779 348329 DEBUG oslo_concurrency.lockutils [None req-c977220b-d537-49ce-b32a-5e0613ba8c70 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Lock "a364994c-8442-4a4c-bd6b-f3a2d31e4483-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 19:14:32 compute-0 nova_compute[348325]: 2025-12-03 19:14:32.779 348329 DEBUG oslo_concurrency.lockutils [None req-c977220b-d537-49ce-b32a-5e0613ba8c70 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Lock "a364994c-8442-4a4c-bd6b-f3a2d31e4483-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 19:14:32 compute-0 nova_compute[348325]: 2025-12-03 19:14:32.781 348329 INFO nova.compute.manager [None req-c977220b-d537-49ce-b32a-5e0613ba8c70 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Terminating instance#033[00m
Dec  3 19:14:32 compute-0 nova_compute[348325]: 2025-12-03 19:14:32.786 348329 DEBUG nova.compute.manager [None req-c977220b-d537-49ce-b32a-5e0613ba8c70 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  3 19:14:32 compute-0 kernel: tapb761f609-27 (unregistering): left promiscuous mode
Dec  3 19:14:32 compute-0 NetworkManager[49087]: <info>  [1764789272.9170] device (tapb761f609-27): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  3 19:14:32 compute-0 nova_compute[348325]: 2025-12-03 19:14:32.939 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:14:32 compute-0 ovn_controller[89305]: 2025-12-03T19:14:32Z|00183|binding|INFO|Releasing lport b761f609-2787-4aa2-9b1c-cc5b41d2373d from this chassis (sb_readonly=0)
Dec  3 19:14:32 compute-0 ovn_controller[89305]: 2025-12-03T19:14:32Z|00184|binding|INFO|Setting lport b761f609-2787-4aa2-9b1c-cc5b41d2373d down in Southbound
Dec  3 19:14:32 compute-0 ovn_controller[89305]: 2025-12-03T19:14:32Z|00185|binding|INFO|Removing iface tapb761f609-27 ovn-installed in OVS
Dec  3 19:14:32 compute-0 nova_compute[348325]: 2025-12-03 19:14:32.943 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:14:32 compute-0 nova_compute[348325]: 2025-12-03 19:14:32.982 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:14:33 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Dec  3 19:14:33 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000f.scope: Consumed 6min 53.916s CPU time.
Dec  3 19:14:33 compute-0 systemd-machined[138702]: Machine qemu-16-instance-0000000f terminated.
Dec  3 19:14:33 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:14:33.026 286999 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2c:da:52 10.100.3.71'], port_security=['fa:16:3e:2c:da:52 10.100.3.71'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.3.71/16', 'neutron:device_id': 'a364994c-8442-4a4c-bd6b-f3a2d31e4483', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-04e258c0-609e-4010-a306-af20506c3a9d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd29cef7b24ee4d30b2b3f5027ec6aafb', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4e47f9e7-514d-4fc2-9225-d05512482dee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b71f2b6d-7f9c-430c-a162-af2bdc131d68, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f81e3e96760>], logical_port=b761f609-2787-4aa2-9b1c-cc5b41d2373d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f81e3e96760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 19:14:33 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:14:33.030 286999 INFO neutron.agent.ovn.metadata.agent [-] Port b761f609-2787-4aa2-9b1c-cc5b41d2373d in datapath 04e258c0-609e-4010-a306-af20506c3a9d unbound from our chassis#033[00m
Dec  3 19:14:33 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:14:33.034 286999 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 04e258c0-609e-4010-a306-af20506c3a9d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  3 19:14:33 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:14:33.039 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[76788bb0-7fb8-4cdb-a65c-83a78c1ca21f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 19:14:33 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:14:33.041 286999 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-04e258c0-609e-4010-a306-af20506c3a9d namespace which is not needed anymore#033[00m
Dec  3 19:14:33 compute-0 nova_compute[348325]: 2025-12-03 19:14:33.247 348329 INFO nova.virt.libvirt.driver [-] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Instance destroyed successfully.#033[00m
Dec  3 19:14:33 compute-0 nova_compute[348325]: 2025-12-03 19:14:33.249 348329 DEBUG nova.objects.instance [None req-c977220b-d537-49ce-b32a-5e0613ba8c70 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Lazy-loading 'resources' on Instance uuid a364994c-8442-4a4c-bd6b-f3a2d31e4483 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 19:14:33 compute-0 neutron-haproxy-ovnmeta-04e258c0-609e-4010-a306-af20506c3a9d[447231]: [NOTICE]   (447235) : haproxy version is 2.8.14-c23fe91
Dec  3 19:14:33 compute-0 neutron-haproxy-ovnmeta-04e258c0-609e-4010-a306-af20506c3a9d[447231]: [NOTICE]   (447235) : path to executable is /usr/sbin/haproxy
Dec  3 19:14:33 compute-0 neutron-haproxy-ovnmeta-04e258c0-609e-4010-a306-af20506c3a9d[447231]: [WARNING]  (447235) : Exiting Master process...
Dec  3 19:14:33 compute-0 neutron-haproxy-ovnmeta-04e258c0-609e-4010-a306-af20506c3a9d[447231]: [ALERT]    (447235) : Current worker (447237) exited with code 143 (Terminated)
Dec  3 19:14:33 compute-0 neutron-haproxy-ovnmeta-04e258c0-609e-4010-a306-af20506c3a9d[447231]: [WARNING]  (447235) : All workers exited. Exiting... (0)
Dec  3 19:14:33 compute-0 systemd[1]: libpod-581205e24bb439d72cded2497f3fe4a978e6bbb09e13466e64fb3c4a9f15859d.scope: Deactivated successfully.
Dec  3 19:14:33 compute-0 podman[469790]: 2025-12-03 19:14:33.31756781 +0000 UTC m=+0.084841348 container died 581205e24bb439d72cded2497f3fe4a978e6bbb09e13466e64fb3c4a9f15859d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-04e258c0-609e-4010-a306-af20506c3a9d, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:14:33 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-581205e24bb439d72cded2497f3fe4a978e6bbb09e13466e64fb3c4a9f15859d-userdata-shm.mount: Deactivated successfully.
Dec  3 19:14:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-b04f4a59d30f163fc6f6daec1ca37a27c6f6fe3f3a8752326fdb2dfce0987373-merged.mount: Deactivated successfully.
Dec  3 19:14:33 compute-0 podman[469790]: 2025-12-03 19:14:33.410428788 +0000 UTC m=+0.177702306 container cleanup 581205e24bb439d72cded2497f3fe4a978e6bbb09e13466e64fb3c4a9f15859d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-04e258c0-609e-4010-a306-af20506c3a9d, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  3 19:14:33 compute-0 systemd[1]: libpod-conmon-581205e24bb439d72cded2497f3fe4a978e6bbb09e13466e64fb3c4a9f15859d.scope: Deactivated successfully.
Dec  3 19:14:33 compute-0 nova_compute[348325]: 2025-12-03 19:14:33.434 348329 DEBUG nova.virt.libvirt.vif [None req-c977220b-d537-49ce-b32a-5e0613ba8c70 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-03T19:04:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='te-0714371-asg-eacwc356yfed-ehdrupxp3h3u-navxh3tm2qn5',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-0714371-asg-eacwc356yfed-ehdrupxp3h3u-navxh3tm2qn5',id=15,image_ref='29e9e995-880d-46f8-bdd0-149d4e107ea9',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-03T19:04:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='d721c97c-b9eb-44f9-a826-1b99239b172a'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d29cef7b24ee4d30b2b3f5027ec6aafb',ramdisk_id='',reservation_id='r-jne042ye',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='29e9e995-880d-46f8-bdd0-149d4e107ea9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-PrometheusGabbiTest-463817161',owner_user_name='tempest-PrometheusGabbiTest-463817161-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-03T19:04:23Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='5b5e6c2a7cce4e3b96611203def80123',uuid=a364994c-8442-4a4c-bd6b-f3a2d31e4483,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b761f609-2787-4aa2-9b1c-cc5b41d2373d", "address": "fa:16:3e:2c:da:52", "network": {"id": "04e258c0-609e-4010-a306-af20506c3a9d", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.71", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d29cef7b24ee4d30b2b3f5027ec6aafb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb761f609-27", "ovs_interfaceid": "b761f609-2787-4aa2-9b1c-cc5b41d2373d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  3 19:14:33 compute-0 nova_compute[348325]: 2025-12-03 19:14:33.436 348329 DEBUG nova.network.os_vif_util [None req-c977220b-d537-49ce-b32a-5e0613ba8c70 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Converting VIF {"id": "b761f609-2787-4aa2-9b1c-cc5b41d2373d", "address": "fa:16:3e:2c:da:52", "network": {"id": "04e258c0-609e-4010-a306-af20506c3a9d", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.71", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d29cef7b24ee4d30b2b3f5027ec6aafb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb761f609-27", "ovs_interfaceid": "b761f609-2787-4aa2-9b1c-cc5b41d2373d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 19:14:33 compute-0 nova_compute[348325]: 2025-12-03 19:14:33.437 348329 DEBUG nova.network.os_vif_util [None req-c977220b-d537-49ce-b32a-5e0613ba8c70 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2c:da:52,bridge_name='br-int',has_traffic_filtering=True,id=b761f609-2787-4aa2-9b1c-cc5b41d2373d,network=Network(04e258c0-609e-4010-a306-af20506c3a9d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb761f609-27') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 19:14:33 compute-0 nova_compute[348325]: 2025-12-03 19:14:33.438 348329 DEBUG os_vif [None req-c977220b-d537-49ce-b32a-5e0613ba8c70 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2c:da:52,bridge_name='br-int',has_traffic_filtering=True,id=b761f609-2787-4aa2-9b1c-cc5b41d2373d,network=Network(04e258c0-609e-4010-a306-af20506c3a9d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb761f609-27') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  3 19:14:33 compute-0 nova_compute[348325]: 2025-12-03 19:14:33.439 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:14:33 compute-0 nova_compute[348325]: 2025-12-03 19:14:33.440 348329 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb761f609-27, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 19:14:33 compute-0 nova_compute[348325]: 2025-12-03 19:14:33.445 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:14:33 compute-0 nova_compute[348325]: 2025-12-03 19:14:33.453 348329 INFO os_vif [None req-c977220b-d537-49ce-b32a-5e0613ba8c70 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2c:da:52,bridge_name='br-int',has_traffic_filtering=True,id=b761f609-2787-4aa2-9b1c-cc5b41d2373d,network=Network(04e258c0-609e-4010-a306-af20506c3a9d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb761f609-27')#033[00m
Dec  3 19:14:33 compute-0 podman[469828]: 2025-12-03 19:14:33.564790019 +0000 UTC m=+0.105615813 container remove 581205e24bb439d72cded2497f3fe4a978e6bbb09e13466e64fb3c4a9f15859d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-04e258c0-609e-4010-a306-af20506c3a9d, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true)
Dec  3 19:14:33 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:14:33.579 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[a714cbd1-24ec-4bc5-a60a-643ff6d4acd0]: (4, ('Wed Dec  3 07:14:33 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-04e258c0-609e-4010-a306-af20506c3a9d (581205e24bb439d72cded2497f3fe4a978e6bbb09e13466e64fb3c4a9f15859d)\n581205e24bb439d72cded2497f3fe4a978e6bbb09e13466e64fb3c4a9f15859d\nWed Dec  3 07:14:33 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-04e258c0-609e-4010-a306-af20506c3a9d (581205e24bb439d72cded2497f3fe4a978e6bbb09e13466e64fb3c4a9f15859d)\n581205e24bb439d72cded2497f3fe4a978e6bbb09e13466e64fb3c4a9f15859d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 19:14:33 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:14:33.583 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[cc09831d-77b7-4eb4-9ee5-65e57e0ef55d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 19:14:33 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:14:33.585 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap04e258c0-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 19:14:33 compute-0 nova_compute[348325]: 2025-12-03 19:14:33.589 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:14:33 compute-0 kernel: tap04e258c0-60: left promiscuous mode
Dec  3 19:14:33 compute-0 nova_compute[348325]: 2025-12-03 19:14:33.615 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:14:33 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:14:33.621 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[f742e4dc-aab9-4ecf-85cc-36cb41bf5e73]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 19:14:33 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:14:33.634 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[14d1240f-c142-4cd6-80e3-493dd85932d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 19:14:33 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:14:33.637 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[c9f622da-08bb-4ac4-aeb0-681d27070607]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 19:14:33 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:14:33.666 411759 DEBUG oslo.privsep.daemon [-] privsep: reply[5d98f5e3-cb61-4c26-a884-f09744f3d147]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 666575, 'reachable_time': 34494, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 469860, 'error': None, 'target': 'ovnmeta-04e258c0-609e-4010-a306-af20506c3a9d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 19:14:33 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:14:33.672 287110 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-04e258c0-609e-4010-a306-af20506c3a9d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  3 19:14:33 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:14:33.672 287110 DEBUG oslo.privsep.daemon [-] privsep: reply[99645840-aa6b-4349-9a35-0475a6be5871]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 19:14:33 compute-0 systemd[1]: run-netns-ovnmeta\x2d04e258c0\x2d609e\x2d4010\x2da306\x2daf20506c3a9d.mount: Deactivated successfully.
Dec  3 19:14:33 compute-0 nova_compute[348325]: 2025-12-03 19:14:33.959 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:14:33 compute-0 nova_compute[348325]: 2025-12-03 19:14:33.968 348329 DEBUG nova.compute.manager [req-3091d32a-ec22-4965-bea7-858cbd3df6f1 req-fbae8f6e-17f8-4b23-8f38-fcb5985156ef 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Received event network-vif-unplugged-b761f609-2787-4aa2-9b1c-cc5b41d2373d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 19:14:33 compute-0 nova_compute[348325]: 2025-12-03 19:14:33.968 348329 DEBUG oslo_concurrency.lockutils [req-3091d32a-ec22-4965-bea7-858cbd3df6f1 req-fbae8f6e-17f8-4b23-8f38-fcb5985156ef 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "a364994c-8442-4a4c-bd6b-f3a2d31e4483-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 19:14:33 compute-0 nova_compute[348325]: 2025-12-03 19:14:33.969 348329 DEBUG oslo_concurrency.lockutils [req-3091d32a-ec22-4965-bea7-858cbd3df6f1 req-fbae8f6e-17f8-4b23-8f38-fcb5985156ef 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "a364994c-8442-4a4c-bd6b-f3a2d31e4483-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 19:14:33 compute-0 nova_compute[348325]: 2025-12-03 19:14:33.969 348329 DEBUG oslo_concurrency.lockutils [req-3091d32a-ec22-4965-bea7-858cbd3df6f1 req-fbae8f6e-17f8-4b23-8f38-fcb5985156ef 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "a364994c-8442-4a4c-bd6b-f3a2d31e4483-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 19:14:33 compute-0 nova_compute[348325]: 2025-12-03 19:14:33.969 348329 DEBUG nova.compute.manager [req-3091d32a-ec22-4965-bea7-858cbd3df6f1 req-fbae8f6e-17f8-4b23-8f38-fcb5985156ef 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] No waiting events found dispatching network-vif-unplugged-b761f609-2787-4aa2-9b1c-cc5b41d2373d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  3 19:14:33 compute-0 nova_compute[348325]: 2025-12-03 19:14:33.970 348329 DEBUG nova.compute.manager [req-3091d32a-ec22-4965-bea7-858cbd3df6f1 req-fbae8f6e-17f8-4b23-8f38-fcb5985156ef 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Received event network-vif-unplugged-b761f609-2787-4aa2-9b1c-cc5b41d2373d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  3 19:14:34 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2270: 321 pgs: 321 active+clean; 157 MiB data, 346 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:14:34 compute-0 nova_compute[348325]: 2025-12-03 19:14:34.401 348329 INFO nova.virt.libvirt.driver [None req-c977220b-d537-49ce-b32a-5e0613ba8c70 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Deleting instance files /var/lib/nova/instances/a364994c-8442-4a4c-bd6b-f3a2d31e4483_del#033[00m
Dec  3 19:14:34 compute-0 nova_compute[348325]: 2025-12-03 19:14:34.403 348329 INFO nova.virt.libvirt.driver [None req-c977220b-d537-49ce-b32a-5e0613ba8c70 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Deletion of /var/lib/nova/instances/a364994c-8442-4a4c-bd6b-f3a2d31e4483_del complete#033[00m
Dec  3 19:14:34 compute-0 nova_compute[348325]: 2025-12-03 19:14:34.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:14:34 compute-0 nova_compute[348325]: 2025-12-03 19:14:34.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:14:34 compute-0 nova_compute[348325]: 2025-12-03 19:14:34.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:14:34 compute-0 nova_compute[348325]: 2025-12-03 19:14:34.488 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  3 19:14:34 compute-0 ceph-osd[208881]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 19:14:34 compute-0 ceph-osd[208881]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.2 total, 600.0 interval#012Cumulative writes: 10K writes, 39K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s#012Cumulative WAL: 10K writes, 2709 syncs, 3.69 writes per sync, written: 0.03 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 551 writes, 1719 keys, 551 commit groups, 1.0 writes per commit group, ingest: 1.95 MB, 0.00 MB/s#012Interval WAL: 551 writes, 236 syncs, 2.33 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  3 19:14:34 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:14:36 compute-0 nova_compute[348325]: 2025-12-03 19:14:36.077 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec  3 19:14:36 compute-0 nova_compute[348325]: 2025-12-03 19:14:36.095 348329 INFO nova.compute.manager [None req-c977220b-d537-49ce-b32a-5e0613ba8c70 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Took 3.31 seconds to destroy the instance on the hypervisor.
Dec  3 19:14:36 compute-0 nova_compute[348325]: 2025-12-03 19:14:36.097 348329 DEBUG oslo.service.loopingcall [None req-c977220b-d537-49ce-b32a-5e0613ba8c70 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec  3 19:14:36 compute-0 nova_compute[348325]: 2025-12-03 19:14:36.097 348329 DEBUG nova.compute.manager [-] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec  3 19:14:36 compute-0 nova_compute[348325]: 2025-12-03 19:14:36.098 348329 DEBUG nova.network.neutron [-] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec  3 19:14:36 compute-0 nova_compute[348325]: 2025-12-03 19:14:36.146 348329 DEBUG nova.compute.manager [req-203b76cc-699b-457f-bb6e-f0a2b5ef429a req-91cfafbd-4b4d-4a30-8e4c-040769b53862 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Received event network-vif-plugged-b761f609-2787-4aa2-9b1c-cc5b41d2373d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  3 19:14:36 compute-0 nova_compute[348325]: 2025-12-03 19:14:36.146 348329 DEBUG oslo_concurrency.lockutils [req-203b76cc-699b-457f-bb6e-f0a2b5ef429a req-91cfafbd-4b4d-4a30-8e4c-040769b53862 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Acquiring lock "a364994c-8442-4a4c-bd6b-f3a2d31e4483-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 19:14:36 compute-0 nova_compute[348325]: 2025-12-03 19:14:36.146 348329 DEBUG oslo_concurrency.lockutils [req-203b76cc-699b-457f-bb6e-f0a2b5ef429a req-91cfafbd-4b4d-4a30-8e4c-040769b53862 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "a364994c-8442-4a4c-bd6b-f3a2d31e4483-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 19:14:36 compute-0 nova_compute[348325]: 2025-12-03 19:14:36.147 348329 DEBUG oslo_concurrency.lockutils [req-203b76cc-699b-457f-bb6e-f0a2b5ef429a req-91cfafbd-4b4d-4a30-8e4c-040769b53862 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] Lock "a364994c-8442-4a4c-bd6b-f3a2d31e4483-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 19:14:36 compute-0 nova_compute[348325]: 2025-12-03 19:14:36.147 348329 DEBUG nova.compute.manager [req-203b76cc-699b-457f-bb6e-f0a2b5ef429a req-91cfafbd-4b4d-4a30-8e4c-040769b53862 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] No waiting events found dispatching network-vif-plugged-b761f609-2787-4aa2-9b1c-cc5b41d2373d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  3 19:14:36 compute-0 nova_compute[348325]: 2025-12-03 19:14:36.147 348329 WARNING nova.compute.manager [req-203b76cc-699b-457f-bb6e-f0a2b5ef429a req-91cfafbd-4b4d-4a30-8e4c-040769b53862 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Received unexpected event network-vif-plugged-b761f609-2787-4aa2-9b1c-cc5b41d2373d for instance with vm_state active and task_state deleting.
Dec  3 19:14:36 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2271: 321 pgs: 321 active+clean; 124 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 7.4 KiB/s rd, 0 B/s wr, 10 op/s
Dec  3 19:14:37 compute-0 nova_compute[348325]: 2025-12-03 19:14:37.079 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:14:37 compute-0 nova_compute[348325]: 2025-12-03 19:14:37.181 348329 DEBUG nova.network.neutron [-] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  3 19:14:37 compute-0 nova_compute[348325]: 2025-12-03 19:14:37.204 348329 INFO nova.compute.manager [-] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Took 1.11 seconds to deallocate network for instance.
Dec  3 19:14:37 compute-0 nova_compute[348325]: 2025-12-03 19:14:37.251 348329 DEBUG oslo_concurrency.lockutils [None req-c977220b-d537-49ce-b32a-5e0613ba8c70 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 19:14:37 compute-0 nova_compute[348325]: 2025-12-03 19:14:37.252 348329 DEBUG oslo_concurrency.lockutils [None req-c977220b-d537-49ce-b32a-5e0613ba8c70 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 19:14:37 compute-0 nova_compute[348325]: 2025-12-03 19:14:37.298 348329 DEBUG nova.compute.manager [req-c568800a-a2d9-4915-8d68-99b49b51d645 req-c000af05-ba97-4176-9fbe-9fc6fb1ba84d 651c9a8139354a55b4f0babc0f3781ec 4eb33dd5453b4c00baaa1d8362126e53 - - default default] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Received event network-vif-deleted-b761f609-2787-4aa2-9b1c-cc5b41d2373d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  3 19:14:37 compute-0 nova_compute[348325]: 2025-12-03 19:14:37.329 348329 DEBUG oslo_concurrency.processutils [None req-c977220b-d537-49ce-b32a-5e0613ba8c70 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 19:14:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 19:14:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1216116280' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 19:14:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 19:14:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1216116280' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 19:14:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 19:14:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2604978470' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 19:14:37 compute-0 ceph-mgr[193091]: [devicehealth INFO root] Check health
Dec  3 19:14:37 compute-0 nova_compute[348325]: 2025-12-03 19:14:37.853 348329 DEBUG oslo_concurrency.processutils [None req-c977220b-d537-49ce-b32a-5e0613ba8c70 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 19:14:37 compute-0 nova_compute[348325]: 2025-12-03 19:14:37.868 348329 DEBUG nova.compute.provider_tree [None req-c977220b-d537-49ce-b32a-5e0613ba8c70 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  3 19:14:37 compute-0 nova_compute[348325]: 2025-12-03 19:14:37.907 348329 DEBUG nova.scheduler.client.report [None req-c977220b-d537-49ce-b32a-5e0613ba8c70 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  3 19:14:37 compute-0 nova_compute[348325]: 2025-12-03 19:14:37.935 348329 DEBUG oslo_concurrency.lockutils [None req-c977220b-d537-49ce-b32a-5e0613ba8c70 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.683s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 19:14:38 compute-0 nova_compute[348325]: 2025-12-03 19:14:38.049 348329 INFO nova.scheduler.client.report [None req-c977220b-d537-49ce-b32a-5e0613ba8c70 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Deleted allocations for instance a364994c-8442-4a4c-bd6b-f3a2d31e4483
Dec  3 19:14:38 compute-0 nova_compute[348325]: 2025-12-03 19:14:38.125 348329 DEBUG oslo_concurrency.lockutils [None req-c977220b-d537-49ce-b32a-5e0613ba8c70 5b5e6c2a7cce4e3b96611203def80123 d29cef7b24ee4d30b2b3f5027ec6aafb - - default default] Lock "a364994c-8442-4a4c-bd6b-f3a2d31e4483" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.348s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 19:14:38 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2272: 321 pgs: 321 active+clean; 77 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec  3 19:14:38 compute-0 nova_compute[348325]: 2025-12-03 19:14:38.445 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:14:38 compute-0 nova_compute[348325]: 2025-12-03 19:14:38.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:14:38 compute-0 nova_compute[348325]: 2025-12-03 19:14:38.487 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  3 19:14:38 compute-0 nova_compute[348325]: 2025-12-03 19:14:38.488 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  3 19:14:38 compute-0 nova_compute[348325]: 2025-12-03 19:14:38.505 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  3 19:14:38 compute-0 nova_compute[348325]: 2025-12-03 19:14:38.506 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:14:38 compute-0 nova_compute[348325]: 2025-12-03 19:14:38.960 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:14:39 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:14:40 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2273: 321 pgs: 321 active+clean; 77 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec  3 19:14:42 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2274: 321 pgs: 321 active+clean; 77 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec  3 19:14:42 compute-0 podman[469888]: 2025-12-03 19:14:42.970762857 +0000 UTC m=+0.116452100 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  3 19:14:42 compute-0 podman[469889]: 2025-12-03 19:14:42.976576276 +0000 UTC m=+0.114617447 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, architecture=x86_64, config_id=edpm, distribution-scope=public, io.buildah.version=1.33.7, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., container_name=openstack_network_exporter)
Dec  3 19:14:42 compute-0 podman[469887]: 2025-12-03 19:14:42.997909933 +0000 UTC m=+0.150152141 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  3 19:14:43 compute-0 nova_compute[348325]: 2025-12-03 19:14:43.451 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:14:43 compute-0 nova_compute[348325]: 2025-12-03 19:14:43.963 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:14:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:14:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:14:44 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:14:44 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:14:44 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:14:44 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:14:44 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2275: 321 pgs: 321 active+clean; 77 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec  3 19:14:44 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:14:45 compute-0 nova_compute[348325]: 2025-12-03 19:14:45.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:14:45 compute-0 nova_compute[348325]: 2025-12-03 19:14:45.487 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  3 19:14:45 compute-0 nova_compute[348325]: 2025-12-03 19:14:45.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:14:46 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2276: 321 pgs: 321 active+clean; 77 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec  3 19:14:48 compute-0 nova_compute[348325]: 2025-12-03 19:14:48.242 348329 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764789273.2410147, a364994c-8442-4a4c-bd6b-f3a2d31e4483 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  3 19:14:48 compute-0 nova_compute[348325]: 2025-12-03 19:14:48.243 348329 INFO nova.compute.manager [-] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] VM Stopped (Lifecycle Event)
Dec  3 19:14:48 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2277: 321 pgs: 321 active+clean; 77 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 1.2 KiB/s wr, 18 op/s
Dec  3 19:14:48 compute-0 nova_compute[348325]: 2025-12-03 19:14:48.391 348329 DEBUG nova.compute.manager [None req-a502f182-7c74-4c70-90aa-d994dad04030 - - - - - -] [instance: a364994c-8442-4a4c-bd6b-f3a2d31e4483] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  3 19:14:48 compute-0 nova_compute[348325]: 2025-12-03 19:14:48.456 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:14:48 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Dec  3 19:14:48 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Dec  3 19:14:48 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Dec  3 19:14:48 compute-0 nova_compute[348325]: 2025-12-03 19:14:48.543 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:14:48 compute-0 nova_compute[348325]: 2025-12-03 19:14:48.543 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec  3 19:14:48 compute-0 nova_compute[348325]: 2025-12-03 19:14:48.965 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:14:48 compute-0 podman[469945]: 2025-12-03 19:14:48.986250929 +0000 UTC m=+0.130673579 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1214.1726694543, vcs-type=git, com.redhat.component=ubi9-container, distribution-scope=public, container_name=kepler, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., version=9.4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, io.buildah.version=1.29.0, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  3 19:14:48 compute-0 podman[469946]: 2025-12-03 19:14:48.994909964 +0000 UTC m=+0.126635552 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec  3 19:14:49 compute-0 podman[469947]: 2025-12-03 19:14:49.020708537 +0000 UTC m=+0.158305305 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Dec  3 19:14:49 compute-0 nova_compute[348325]: 2025-12-03 19:14:49.683 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:14:49 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:14:50 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2279: 321 pgs: 321 active+clean; 77 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 921 B/s wr, 4 op/s
Dec  3 19:14:50 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Dec  3 19:14:50 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Dec  3 19:14:50 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Dec  3 19:14:52 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2281: 321 pgs: 321 active+clean; 77 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.6 MiB/s wr, 29 op/s
Dec  3 19:14:52 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 19:14:52 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 19:14:52 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 19:14:52 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 19:14:52 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 19:14:52 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:14:52 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev df5a01b7-6ff1-4b9e-8e23-dcbee6ea3eb0 does not exist
Dec  3 19:14:52 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 0b9068cb-b637-4a93-98aa-3fe55d7e33d7 does not exist
Dec  3 19:14:52 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 01b61723-482b-43ed-9cc0-a9e2baad9da8 does not exist
Dec  3 19:14:52 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 19:14:52 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 19:14:52 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 19:14:52 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 19:14:52 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 19:14:52 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 19:14:52 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Dec  3 19:14:52 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 19:14:52 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:14:52 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 19:14:52 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Dec  3 19:14:52 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Dec  3 19:14:53 compute-0 nova_compute[348325]: 2025-12-03 19:14:53.460 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:14:53 compute-0 nova_compute[348325]: 2025-12-03 19:14:53.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:14:53 compute-0 podman[470274]: 2025-12-03 19:14:53.537189838 +0000 UTC m=+0.079406710 container create 574bf7a886829a8ef24d7d61c9fe3b891c049a23f7deeebf56cad3262b795058 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hertz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:14:53 compute-0 podman[470274]: 2025-12-03 19:14:53.504769877 +0000 UTC m=+0.046986839 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:14:53 compute-0 systemd[1]: Started libpod-conmon-574bf7a886829a8ef24d7d61c9fe3b891c049a23f7deeebf56cad3262b795058.scope.
Dec  3 19:14:53 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:14:53 compute-0 podman[470274]: 2025-12-03 19:14:53.714799811 +0000 UTC m=+0.257016763 container init 574bf7a886829a8ef24d7d61c9fe3b891c049a23f7deeebf56cad3262b795058 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hertz, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  3 19:14:53 compute-0 podman[470274]: 2025-12-03 19:14:53.726907409 +0000 UTC m=+0.269124311 container start 574bf7a886829a8ef24d7d61c9fe3b891c049a23f7deeebf56cad3262b795058 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hertz, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 19:14:53 compute-0 podman[470274]: 2025-12-03 19:14:53.733228409 +0000 UTC m=+0.275445351 container attach 574bf7a886829a8ef24d7d61c9fe3b891c049a23f7deeebf56cad3262b795058 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hertz, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Dec  3 19:14:53 compute-0 practical_hertz[470291]: 167 167
Dec  3 19:14:53 compute-0 systemd[1]: libpod-574bf7a886829a8ef24d7d61c9fe3b891c049a23f7deeebf56cad3262b795058.scope: Deactivated successfully.
Dec  3 19:14:53 compute-0 podman[470274]: 2025-12-03 19:14:53.738062724 +0000 UTC m=+0.280279656 container died 574bf7a886829a8ef24d7d61c9fe3b891c049a23f7deeebf56cad3262b795058 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hertz, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 19:14:53 compute-0 nova_compute[348325]: 2025-12-03 19:14:53.783 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 19:14:53 compute-0 nova_compute[348325]: 2025-12-03 19:14:53.785 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 19:14:53 compute-0 nova_compute[348325]: 2025-12-03 19:14:53.786 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 19:14:53 compute-0 nova_compute[348325]: 2025-12-03 19:14:53.788 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  3 19:14:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-3fb01f381c882bc705b6cde442f888d20fea480301e702bf37074943ead71eb5-merged.mount: Deactivated successfully.
Dec  3 19:14:53 compute-0 nova_compute[348325]: 2025-12-03 19:14:53.789 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 19:14:53 compute-0 podman[470274]: 2025-12-03 19:14:53.818941337 +0000 UTC m=+0.361158249 container remove 574bf7a886829a8ef24d7d61c9fe3b891c049a23f7deeebf56cad3262b795058 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hertz, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 19:14:53 compute-0 systemd[1]: libpod-conmon-574bf7a886829a8ef24d7d61c9fe3b891c049a23f7deeebf56cad3262b795058.scope: Deactivated successfully.
Dec  3 19:14:53 compute-0 nova_compute[348325]: 2025-12-03 19:14:53.967 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:14:54 compute-0 podman[470334]: 2025-12-03 19:14:54.094696694 +0000 UTC m=+0.100317476 container create ca0c9a325e6ea6768a9a800cc11a755c3f111a9dba2e2933116fc63c9d877ecd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec  3 19:14:54 compute-0 podman[470334]: 2025-12-03 19:14:54.057714675 +0000 UTC m=+0.063335507 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:14:54 compute-0 systemd[1]: Started libpod-conmon-ca0c9a325e6ea6768a9a800cc11a755c3f111a9dba2e2933116fc63c9d877ecd.scope.
Dec  3 19:14:54 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:14:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46875ccba6ca1daeb6455d71f421ab327f8c95cf97bc5277b39f96e5ec0521ae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 19:14:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46875ccba6ca1daeb6455d71f421ab327f8c95cf97bc5277b39f96e5ec0521ae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 19:14:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46875ccba6ca1daeb6455d71f421ab327f8c95cf97bc5277b39f96e5ec0521ae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 19:14:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46875ccba6ca1daeb6455d71f421ab327f8c95cf97bc5277b39f96e5ec0521ae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 19:14:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46875ccba6ca1daeb6455d71f421ab327f8c95cf97bc5277b39f96e5ec0521ae/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 19:14:54 compute-0 podman[470334]: 2025-12-03 19:14:54.260131677 +0000 UTC m=+0.265752459 container init ca0c9a325e6ea6768a9a800cc11a755c3f111a9dba2e2933116fc63c9d877ecd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_carver, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec  3 19:14:54 compute-0 podman[470334]: 2025-12-03 19:14:54.286975626 +0000 UTC m=+0.292596378 container start ca0c9a325e6ea6768a9a800cc11a755c3f111a9dba2e2933116fc63c9d877ecd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 19:14:54 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 19:14:54 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/480567514' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 19:14:54 compute-0 podman[470334]: 2025-12-03 19:14:54.292830095 +0000 UTC m=+0.298450877 container attach ca0c9a325e6ea6768a9a800cc11a755c3f111a9dba2e2933116fc63c9d877ecd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_carver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default)
Dec  3 19:14:54 compute-0 nova_compute[348325]: 2025-12-03 19:14:54.316 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 19:14:54 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2283: 321 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 316 active+clean; 77 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 3.4 MiB/s wr, 82 op/s
Dec  3 19:14:54 compute-0 nova_compute[348325]: 2025-12-03 19:14:54.793 348329 WARNING nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  3 19:14:54 compute-0 nova_compute[348325]: 2025-12-03 19:14:54.795 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3956MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  3 19:14:54 compute-0 nova_compute[348325]: 2025-12-03 19:14:54.795 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 19:14:54 compute-0 nova_compute[348325]: 2025-12-03 19:14:54.796 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 19:14:54 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:14:54 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Dec  3 19:14:54 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Dec  3 19:14:54 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Dec  3 19:14:55 compute-0 nova_compute[348325]: 2025-12-03 19:14:55.038 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  3 19:14:55 compute-0 nova_compute[348325]: 2025-12-03 19:14:55.039 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  3 19:14:55 compute-0 nova_compute[348325]: 2025-12-03 19:14:55.116 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 19:14:55 compute-0 modest_carver[470350]: --> passed data devices: 0 physical, 3 LVM
Dec  3 19:14:55 compute-0 modest_carver[470350]: --> relative data size: 1.0
Dec  3 19:14:55 compute-0 modest_carver[470350]: --> All data devices are unavailable
Dec  3 19:14:55 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 19:14:55 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1545002295' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 19:14:55 compute-0 nova_compute[348325]: 2025-12-03 19:14:55.675 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 19:14:55 compute-0 systemd[1]: libpod-ca0c9a325e6ea6768a9a800cc11a755c3f111a9dba2e2933116fc63c9d877ecd.scope: Deactivated successfully.
Dec  3 19:14:55 compute-0 podman[470334]: 2025-12-03 19:14:55.685428466 +0000 UTC m=+1.691049308 container died ca0c9a325e6ea6768a9a800cc11a755c3f111a9dba2e2933116fc63c9d877ecd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_carver, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  3 19:14:55 compute-0 systemd[1]: libpod-ca0c9a325e6ea6768a9a800cc11a755c3f111a9dba2e2933116fc63c9d877ecd.scope: Consumed 1.246s CPU time.
Dec  3 19:14:55 compute-0 nova_compute[348325]: 2025-12-03 19:14:55.698 348329 DEBUG nova.compute.provider_tree [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  3 19:14:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-46875ccba6ca1daeb6455d71f421ab327f8c95cf97bc5277b39f96e5ec0521ae-merged.mount: Deactivated successfully.
Dec  3 19:14:55 compute-0 nova_compute[348325]: 2025-12-03 19:14:55.751 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
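Placement derives schedulable capacity from this inventory as (total - reserved) * allocation_ratio per resource class. A worked sketch using the values copied from the line above:

    # Effective schedulable capacity implied by the inventory above, using
    # placement's usual formula: (total - reserved) * allocation_ratio.
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity:g}")  # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2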
Dec  3 19:14:55 compute-0 podman[470334]: 2025-12-03 19:14:55.792994574 +0000 UTC m=+1.798615366 container remove ca0c9a325e6ea6768a9a800cc11a755c3f111a9dba2e2933116fc63c9d877ecd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 19:14:55 compute-0 systemd[1]: libpod-conmon-ca0c9a325e6ea6768a9a800cc11a755c3f111a9dba2e2933116fc63c9d877ecd.scope: Deactivated successfully.
Dec  3 19:14:55 compute-0 podman[470405]: 2025-12-03 19:14:55.84077944 +0000 UTC m=+0.107187729 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  3 19:14:55 compute-0 nova_compute[348325]: 2025-12-03 19:14:55.921 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  3 19:14:55 compute-0 nova_compute[348325]: 2025-12-03 19:14:55.922 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.126s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 19:14:56 compute-0 nova_compute[348325]: 2025-12-03 19:14:56.052 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:14:56 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2285: 321 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 316 active+clean; 77 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 66 KiB/s rd, 3.4 MiB/s wr, 91 op/s
Dec  3 19:14:56 compute-0 podman[470580]: 2025-12-03 19:14:56.958822685 +0000 UTC m=+0.089715914 container create 544253a824fac90e03251aadd14f3cd1fa33190adbe72d16efac6da0240d953f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_tu, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Dec  3 19:14:57 compute-0 podman[470580]: 2025-12-03 19:14:56.924941229 +0000 UTC m=+0.055834498 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:14:57 compute-0 systemd[1]: Started libpod-conmon-544253a824fac90e03251aadd14f3cd1fa33190adbe72d16efac6da0240d953f.scope.
Dec  3 19:14:57 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:14:57 compute-0 podman[470580]: 2025-12-03 19:14:57.108556805 +0000 UTC m=+0.239450034 container init 544253a824fac90e03251aadd14f3cd1fa33190adbe72d16efac6da0240d953f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_tu, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  3 19:14:57 compute-0 podman[470580]: 2025-12-03 19:14:57.125694602 +0000 UTC m=+0.256587841 container start 544253a824fac90e03251aadd14f3cd1fa33190adbe72d16efac6da0240d953f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_tu, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:14:57 compute-0 podman[470580]: 2025-12-03 19:14:57.13229491 +0000 UTC m=+0.263188199 container attach 544253a824fac90e03251aadd14f3cd1fa33190adbe72d16efac6da0240d953f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_tu, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 19:14:57 compute-0 naughty_tu[470596]: 167 167
Dec  3 19:14:57 compute-0 systemd[1]: libpod-544253a824fac90e03251aadd14f3cd1fa33190adbe72d16efac6da0240d953f.scope: Deactivated successfully.
Dec  3 19:14:57 compute-0 podman[470580]: 2025-12-03 19:14:57.137240137 +0000 UTC m=+0.268133376 container died 544253a824fac90e03251aadd14f3cd1fa33190adbe72d16efac6da0240d953f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_tu, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 19:14:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-d31c4335ba53c88ddced9eb010e1481bbddd77c0efdade243a4d65ca0ae508e1-merged.mount: Deactivated successfully.
Dec  3 19:14:57 compute-0 podman[470580]: 2025-12-03 19:14:57.209866794 +0000 UTC m=+0.340760003 container remove 544253a824fac90e03251aadd14f3cd1fa33190adbe72d16efac6da0240d953f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_tu, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 19:14:57 compute-0 systemd[1]: libpod-conmon-544253a824fac90e03251aadd14f3cd1fa33190adbe72d16efac6da0240d953f.scope: Deactivated successfully.
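The naughty_tu container above printed "167 167" and exited within milliseconds: this is the pattern cephadm uses to probe the ceph uid/gid inside the image (167 is the ceph user in the official images). A hedged sketch of an equivalent probe; the exact path cephadm stats may differ by release, so /var/lib/ceph here is an assumption:

    #!/usr/bin/env python3
    # Hedged sketch: reproduce the uid/gid probe pattern seen above by running
    # a throwaway container from the ceph image and reading the numeric owner
    # of a ceph-owned path.
    import subprocess

    IMAGE = "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"

    def ceph_uid_gid(image=IMAGE):
        out = subprocess.check_output(
            ["podman", "run", "--rm", "--entrypoint", "stat",
             image, "-c", "%u %g", "/var/lib/ceph"],
            text=True,
        )
        uid, gid = out.split()
        return int(uid), int(gid)

    if __name__ == "__main__":
        print(ceph_uid_gid())  # expected: (167, 167) for official ceph images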
Dec  3 19:14:57 compute-0 podman[470620]: 2025-12-03 19:14:57.530012736 +0000 UTC m=+0.107361294 container create 237d67797c11bf2d7e6139fa4d041af6af53b67f05bfe9a68ae9e9c8b1073153 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 19:14:57 compute-0 podman[470620]: 2025-12-03 19:14:57.477091588 +0000 UTC m=+0.054440186 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:14:57 compute-0 systemd[1]: Started libpod-conmon-237d67797c11bf2d7e6139fa4d041af6af53b67f05bfe9a68ae9e9c8b1073153.scope.
Dec  3 19:14:57 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:14:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/508fa32bec3c98622dfceefdc488bee1b25a5d92fa1c346166d075fced71d616/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 19:14:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/508fa32bec3c98622dfceefdc488bee1b25a5d92fa1c346166d075fced71d616/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 19:14:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/508fa32bec3c98622dfceefdc488bee1b25a5d92fa1c346166d075fced71d616/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 19:14:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/508fa32bec3c98622dfceefdc488bee1b25a5d92fa1c346166d075fced71d616/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 19:14:57 compute-0 podman[470620]: 2025-12-03 19:14:57.682625065 +0000 UTC m=+0.259973673 container init 237d67797c11bf2d7e6139fa4d041af6af53b67f05bfe9a68ae9e9c8b1073153 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_mendeleev, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec  3 19:14:57 compute-0 podman[470620]: 2025-12-03 19:14:57.711391398 +0000 UTC m=+0.288739956 container start 237d67797c11bf2d7e6139fa4d041af6af53b67f05bfe9a68ae9e9c8b1073153 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec  3 19:14:57 compute-0 podman[470620]: 2025-12-03 19:14:57.718797855 +0000 UTC m=+0.296146463 container attach 237d67797c11bf2d7e6139fa4d041af6af53b67f05bfe9a68ae9e9c8b1073153 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_mendeleev, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 19:14:58 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2286: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s rd, 2.6 MiB/s wr, 83 op/s
Dec  3 19:14:58 compute-0 nova_compute[348325]: 2025-12-03 19:14:58.465 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]: {
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:    "0": [
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:        {
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:            "devices": [
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:                "/dev/loop3"
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:            ],
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:            "lv_name": "ceph_lv0",
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:            "lv_size": "21470642176",
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=973fbbc8-5aff-4a53-bee8-42e5a6788dd6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:            "lv_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:            "name": "ceph_lv0",
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:            "tags": {
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:                "ceph.block_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:                "ceph.cephx_lockbox_secret": "",
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:                "ceph.cluster_name": "ceph",
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:                "ceph.crush_device_class": "",
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:                "ceph.encrypted": "0",
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:                "ceph.osd_fsid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:                "ceph.osd_id": "0",
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:                "ceph.type": "block",
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:                "ceph.vdo": "0"
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:            },
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:            "type": "block",
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:            "vg_name": "ceph_vg0"
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:        }
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:    ],
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:    "1": [
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:        {
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:            "devices": [
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:                "/dev/loop4"
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:            ],
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:            "lv_name": "ceph_lv1",
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:            "lv_size": "21470642176",
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1e2b0083-5293-47cb-a3d1-bc27cedc4ede,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:            "lv_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:            "name": "ceph_lv1",
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:            "tags": {
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:                "ceph.block_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:                "ceph.cephx_lockbox_secret": "",
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:                "ceph.cluster_name": "ceph",
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:                "ceph.crush_device_class": "",
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:                "ceph.encrypted": "0",
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:                "ceph.osd_fsid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:                "ceph.osd_id": "1",
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:                "ceph.type": "block",
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:                "ceph.vdo": "0"
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:            },
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:            "type": "block",
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:            "vg_name": "ceph_vg1"
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:        }
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:    ],
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:    "2": [
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:        {
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:            "devices": [
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:                "/dev/loop5"
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:            ],
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:            "lv_name": "ceph_lv2",
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:            "lv_size": "21470642176",
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2abec9de-afba-437e-9a17-384a1dd8cd50,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:            "lv_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:            "name": "ceph_lv2",
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:            "tags": {
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:                "ceph.block_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:                "ceph.cephx_lockbox_secret": "",
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:                "ceph.cluster_name": "ceph",
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:                "ceph.crush_device_class": "",
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:                "ceph.encrypted": "0",
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:                "ceph.osd_fsid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:                "ceph.osd_id": "2",
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:                "ceph.type": "block",
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:                "ceph.vdo": "0"
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:            },
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:            "type": "block",
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:            "vg_name": "ceph_vg2"
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:        }
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]:    ]
Dec  3 19:14:58 compute-0 thirsty_mendeleev[470635]: }
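The thirsty_mendeleev payload above matches `ceph-volume lvm list --format json`: a map from OSD id to the logical volumes backing it. A short sketch that condenses it to one line per OSD:

    #!/usr/bin/env python3
    # Sketch: condense the ceph-volume JSON shown above (keyed by OSD id) into
    # one summary line per OSD. Feed it the captured payload on stdin.
    import json
    import sys

    def summarize(payload: dict):
        for osd_id, lvs in sorted(payload.items(), key=lambda kv: int(kv[0])):
            for lv in lvs:
                devices = ",".join(lv.get("devices", []))
                print(f'osd.{osd_id}: {lv["lv_path"]} '
                      f'({lv["type"]}, {int(lv["lv_size"]) / 2**30:.1f} GiB on {devices})')

    if __name__ == "__main__":
        summarize(json.load(sys.stdin))
    # e.g.: osd.0: /dev/ceph_vg0/ceph_lv0 (block, 20.0 GiB on /dev/loop3)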
Dec  3 19:14:58 compute-0 systemd[1]: libpod-237d67797c11bf2d7e6139fa4d041af6af53b67f05bfe9a68ae9e9c8b1073153.scope: Deactivated successfully.
Dec  3 19:14:58 compute-0 podman[470620]: 2025-12-03 19:14:58.602702891 +0000 UTC m=+1.180051469 container died 237d67797c11bf2d7e6139fa4d041af6af53b67f05bfe9a68ae9e9c8b1073153 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 19:14:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-508fa32bec3c98622dfceefdc488bee1b25a5d92fa1c346166d075fced71d616-merged.mount: Deactivated successfully.
Dec  3 19:14:58 compute-0 podman[470620]: 2025-12-03 19:14:58.695232382 +0000 UTC m=+1.272580910 container remove 237d67797c11bf2d7e6139fa4d041af6af53b67f05bfe9a68ae9e9c8b1073153 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_mendeleev, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 19:14:58 compute-0 systemd[1]: libpod-conmon-237d67797c11bf2d7e6139fa4d041af6af53b67f05bfe9a68ae9e9c8b1073153.scope: Deactivated successfully.
Dec  3 19:14:58 compute-0 nova_compute[348325]: 2025-12-03 19:14:58.970 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:14:59 compute-0 podman[158200]: time="2025-12-03T19:14:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 19:14:59 compute-0 podman[158200]: @ - - [03/Dec/2025:19:14:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42578 "" "Go-http-client/1.1"
Dec  3 19:14:59 compute-0 podman[158200]: @ - - [03/Dec/2025:19:14:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8191 "" "Go-http-client/1.1"
Dec  3 19:14:59 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:14:59 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Dec  3 19:14:59 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Dec  3 19:14:59 compute-0 ceph-mon[192802]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Dec  3 19:14:59 compute-0 nova_compute[348325]: 2025-12-03 19:14:59.973 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:14:59 compute-0 podman[470792]: 2025-12-03 19:14:59.998330946 +0000 UTC m=+0.076161203 container create 0b1ec1c1c0e6b2696a81bec7f202b1618c00d774da06c3216fa29c8d15ca09c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec  3 19:15:00 compute-0 podman[470792]: 2025-12-03 19:14:59.961017628 +0000 UTC m=+0.038847965 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:15:00 compute-0 systemd[1]: Started libpod-conmon-0b1ec1c1c0e6b2696a81bec7f202b1618c00d774da06c3216fa29c8d15ca09c9.scope.
Dec  3 19:15:00 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:15:00 compute-0 podman[470792]: 2025-12-03 19:15:00.153780452 +0000 UTC m=+0.231610729 container init 0b1ec1c1c0e6b2696a81bec7f202b1618c00d774da06c3216fa29c8d15ca09c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_nobel, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:15:00 compute-0 podman[470792]: 2025-12-03 19:15:00.171986835 +0000 UTC m=+0.249817092 container start 0b1ec1c1c0e6b2696a81bec7f202b1618c00d774da06c3216fa29c8d15ca09c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:15:00 compute-0 podman[470792]: 2025-12-03 19:15:00.179002272 +0000 UTC m=+0.256832549 container attach 0b1ec1c1c0e6b2696a81bec7f202b1618c00d774da06c3216fa29c8d15ca09c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_nobel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  3 19:15:00 compute-0 interesting_nobel[470807]: 167 167
Dec  3 19:15:00 compute-0 systemd[1]: libpod-0b1ec1c1c0e6b2696a81bec7f202b1618c00d774da06c3216fa29c8d15ca09c9.scope: Deactivated successfully.
Dec  3 19:15:00 compute-0 podman[470792]: 2025-12-03 19:15:00.186237053 +0000 UTC m=+0.264067330 container died 0b1ec1c1c0e6b2696a81bec7f202b1618c00d774da06c3216fa29c8d15ca09c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_nobel, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  3 19:15:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-b360373255c7ae452361de44249d5c312a76367745fa3dd72d3aaafffad3d2de-merged.mount: Deactivated successfully.
Dec  3 19:15:00 compute-0 podman[470792]: 2025-12-03 19:15:00.275235349 +0000 UTC m=+0.353065606 container remove 0b1ec1c1c0e6b2696a81bec7f202b1618c00d774da06c3216fa29c8d15ca09c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_nobel, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec  3 19:15:00 compute-0 systemd[1]: libpod-conmon-0b1ec1c1c0e6b2696a81bec7f202b1618c00d774da06c3216fa29c8d15ca09c9.scope: Deactivated successfully.
Dec  3 19:15:00 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2288: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 1.0 MiB/s wr, 58 op/s
Dec  3 19:15:00 compute-0 podman[470830]: 2025-12-03 19:15:00.588830735 +0000 UTC m=+0.106175144 container create 0b21691f714fbbdd26398666d6c072b37da283193fabf8ec57b4918ab6eef412 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  3 19:15:00 compute-0 podman[470830]: 2025-12-03 19:15:00.545890025 +0000 UTC m=+0.063234434 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:15:00 compute-0 systemd[1]: Started libpod-conmon-0b21691f714fbbdd26398666d6c072b37da283193fabf8ec57b4918ab6eef412.scope.
Dec  3 19:15:00 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:15:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9973fa059f331b61ca121c6d55c942a975cccf6b21334e5b79a70c1773688ab4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 19:15:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9973fa059f331b61ca121c6d55c942a975cccf6b21334e5b79a70c1773688ab4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 19:15:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9973fa059f331b61ca121c6d55c942a975cccf6b21334e5b79a70c1773688ab4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 19:15:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9973fa059f331b61ca121c6d55c942a975cccf6b21334e5b79a70c1773688ab4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 19:15:00 compute-0 podman[470830]: 2025-12-03 19:15:00.774551152 +0000 UTC m=+0.291895621 container init 0b21691f714fbbdd26398666d6c072b37da283193fabf8ec57b4918ab6eef412 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec  3 19:15:00 compute-0 podman[470830]: 2025-12-03 19:15:00.801598424 +0000 UTC m=+0.318942823 container start 0b21691f714fbbdd26398666d6c072b37da283193fabf8ec57b4918ab6eef412 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_sutherland, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:15:00 compute-0 podman[470830]: 2025-12-03 19:15:00.808228292 +0000 UTC m=+0.325572691 container attach 0b21691f714fbbdd26398666d6c072b37da283193fabf8ec57b4918ab6eef412 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Dec  3 19:15:01 compute-0 openstack_network_exporter[365222]: ERROR   19:15:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:15:01 compute-0 openstack_network_exporter[365222]: ERROR   19:15:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:15:01 compute-0 openstack_network_exporter[365222]: ERROR   19:15:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 19:15:01 compute-0 openstack_network_exporter[365222]: ERROR   19:15:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 19:15:01 compute-0 openstack_network_exporter[365222]: ERROR   19:15:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
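The exporter errors above come from probing for OVS/OVN appctl control sockets that do not exist on a compute-only node (ovn-northd runs on the controllers, and no userspace datapath is configured here). A hedged sketch of that discovery step; the glob patterns are assumptions based on default packaging paths:

    #!/usr/bin/env python3
    # Hedged sketch of the probe the exporter appears to perform: OVS/OVN
    # daemons expose control sockets named <daemon>.<pid>.ctl under their run
    # directories, so absence of a match means the daemon is not running here.
    import glob

    PROBES = {
        "ovn-northd": "/var/run/ovn/ovn-northd.*.ctl",
        "ovsdb-server": "/var/run/openvswitch/ovsdb-server.*.ctl",
        "ovs-vswitchd": "/var/run/openvswitch/ovs-vswitchd.*.ctl",
    }

    for daemon, pattern in PROBES.items():
        hits = glob.glob(pattern)
        print(f"{daemon}: {hits[0] if hits else 'no control socket found'}")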
Dec  3 19:15:01 compute-0 upbeat_sutherland[470845]: {
Dec  3 19:15:01 compute-0 upbeat_sutherland[470845]:    "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
Dec  3 19:15:01 compute-0 upbeat_sutherland[470845]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:15:01 compute-0 upbeat_sutherland[470845]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 19:15:01 compute-0 upbeat_sutherland[470845]:        "osd_id": 1,
Dec  3 19:15:01 compute-0 upbeat_sutherland[470845]:        "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 19:15:01 compute-0 upbeat_sutherland[470845]:        "type": "bluestore"
Dec  3 19:15:01 compute-0 upbeat_sutherland[470845]:    },
Dec  3 19:15:01 compute-0 upbeat_sutherland[470845]:    "2abec9de-afba-437e-9a17-384a1dd8cd50": {
Dec  3 19:15:01 compute-0 upbeat_sutherland[470845]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:15:01 compute-0 upbeat_sutherland[470845]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 19:15:01 compute-0 upbeat_sutherland[470845]:        "osd_id": 2,
Dec  3 19:15:01 compute-0 upbeat_sutherland[470845]:        "osd_uuid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 19:15:01 compute-0 upbeat_sutherland[470845]:        "type": "bluestore"
Dec  3 19:15:01 compute-0 upbeat_sutherland[470845]:    },
Dec  3 19:15:01 compute-0 upbeat_sutherland[470845]:    "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": {
Dec  3 19:15:01 compute-0 upbeat_sutherland[470845]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:15:01 compute-0 upbeat_sutherland[470845]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 19:15:01 compute-0 upbeat_sutherland[470845]:        "osd_id": 0,
Dec  3 19:15:01 compute-0 upbeat_sutherland[470845]:        "osd_uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 19:15:01 compute-0 upbeat_sutherland[470845]:        "type": "bluestore"
Dec  3 19:15:01 compute-0 upbeat_sutherland[470845]:    }
Dec  3 19:15:01 compute-0 upbeat_sutherland[470845]: }
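The upbeat_sutherland payload is keyed by OSD uuid and lists the same three bluestore OSDs as the per-id listing earlier, so the two can be cross-checked against each other. A minimal sketch of that consistency check, taking both parsed payloads as inputs:

    # Sketch: sanity-check the two payloads above against each other.
    # `lvm_list` is the earlier payload (keyed by OSD id); `raw_list` is this
    # one (keyed by OSD uuid). Every OSD should appear in both with the same
    # uuid, and all entries should share one cluster fsid.
    def cross_check(lvm_list: dict, raw_list: dict):
        fsids = {entry["ceph_fsid"] for entry in raw_list.values()}
        assert len(fsids) == 1, f"multiple cluster fsids: {fsids}"
        for osd_id, lvs in lvm_list.items():
            uuid = lvs[0]["tags"]["ceph.osd_fsid"]
            entry = raw_list.get(uuid)
            assert entry is not None, f"osd.{osd_id} missing from raw list"
            assert entry["osd_id"] == int(osd_id), f"osd id mismatch for {uuid}"
            print(f"osd.{osd_id} ok: {uuid} -> {entry['device']}")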
Dec  3 19:15:02 compute-0 systemd[1]: libpod-0b21691f714fbbdd26398666d6c072b37da283193fabf8ec57b4918ab6eef412.scope: Deactivated successfully.
Dec  3 19:15:02 compute-0 podman[470830]: 2025-12-03 19:15:02.034795607 +0000 UTC m=+1.552140006 container died 0b21691f714fbbdd26398666d6c072b37da283193fabf8ec57b4918ab6eef412 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_sutherland, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 19:15:02 compute-0 systemd[1]: libpod-0b21691f714fbbdd26398666d6c072b37da283193fabf8ec57b4918ab6eef412.scope: Consumed 1.233s CPU time.
Dec  3 19:15:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-9973fa059f331b61ca121c6d55c942a975cccf6b21334e5b79a70c1773688ab4-merged.mount: Deactivated successfully.
Dec  3 19:15:02 compute-0 podman[470830]: 2025-12-03 19:15:02.138900752 +0000 UTC m=+1.656245131 container remove 0b21691f714fbbdd26398666d6c072b37da283193fabf8ec57b4918ab6eef412 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_sutherland, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 19:15:02 compute-0 systemd[1]: libpod-conmon-0b21691f714fbbdd26398666d6c072b37da283193fabf8ec57b4918ab6eef412.scope: Deactivated successfully.
Dec  3 19:15:02 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 19:15:02 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:15:02 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 19:15:02 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:15:02 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 05f117e0-93d8-4a30-b3ba-a47fcfe89f5d does not exist
Dec  3 19:15:02 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 22390051-a896-485f-82e4-36677365f3ba does not exist
Dec  3 19:15:02 compute-0 podman[470879]: 2025-12-03 19:15:02.255997726 +0000 UTC m=+0.161365147 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125)
Dec  3 19:15:02 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2289: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 1.9 KiB/s wr, 23 op/s
Dec  3 19:15:02 compute-0 podman[470901]: 2025-12-03 19:15:02.420845566 +0000 UTC m=+0.217798600 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true)
Dec  3 19:15:03 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:15:03 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:15:03 compute-0 nova_compute[348325]: 2025-12-03 19:15:03.473 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:15:03 compute-0 nova_compute[348325]: 2025-12-03 19:15:03.973 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:15:04 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2290: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 8.3 KiB/s rd, 0 B/s wr, 10 op/s
Dec  3 19:15:04 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:15:06 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2291: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 7.9 KiB/s rd, 0 B/s wr, 9 op/s
Dec  3 19:15:08 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2292: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:15:08 compute-0 nova_compute[348325]: 2025-12-03 19:15:08.478 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:15:08 compute-0 nova_compute[348325]: 2025-12-03 19:15:08.978 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:15:09 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:15:10 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2293: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:15:12 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2294: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.260 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them; polling can therefore take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.261 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.261 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c393ad0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.263 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7eff8d7fffe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.264 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c393ad0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.264 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff9026f920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c393ad0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.264 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c393ad0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.265 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c393ad0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.265 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffa10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c393ad0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.265 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8daba2d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c393ad0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.265 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c393ad0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.265 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff90799b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c393ad0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.265 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c393ad0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.266 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8f46ebd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c393ad0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.266 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c393ad0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.266 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c393ad0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.266 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c393ad0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.266 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c393ad0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.267 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c393ad0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.267 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c393ad0>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.268 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c393ad0>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.268 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c393ad0>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.268 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c393ad0>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.268 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c393ad0>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.269 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c393ad0>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.269 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7fff50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c393ad0>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.269 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c393ad0>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.269 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7fffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c393ad0>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.269 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8ef7c7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8c393ad0>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.267 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.270 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7eff8d8a80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.270 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.271 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7eff8d8a8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.271 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.271 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7eff8d8a8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.271 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.271 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7eff8d8a81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.272 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.272 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7eff8d7ff9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.272 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.272 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7eff8d7fe840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.273 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.273 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7eff8d8a82c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.273 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.273 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7eff8d7ff9b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.274 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.274 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7eff8d8a8350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.274 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.274 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7eff8f682330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.274 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.275 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7eff8d7ff4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.275 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.275 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7eff8d930c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.275 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.275 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7eff8d7ff4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.276 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.276 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7eff8d7ff530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.276 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.276 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7eff8d7ff590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.276 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.277 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7eff8d7ff5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.277 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.277 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7eff8d8a8620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.277 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.277 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7eff8d7ff650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.278 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.278 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7eff8d7ff6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.278 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.279 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7eff8d7ffa40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.279 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.279 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7eff8d7ff710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.279 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.279 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7eff8d7fff20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.280 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.280 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7eff8d7ff770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.280 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.280 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7eff8d7fff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.280 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.281 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7eff8d7fdac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.281 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.281 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.282 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.282 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.283 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.283 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.283 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.283 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.283 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.283 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.284 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.284 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.284 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.284 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.284 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.284 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.285 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.285 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.285 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.285 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.285 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.285 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.285 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.286 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.286 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.286 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:15:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:15:13.286 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:15:13 compute-0 nova_compute[348325]: 2025-12-03 19:15:13.483 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:15:13 compute-0 nova_compute[348325]: 2025-12-03 19:15:13.981 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:15:13 compute-0 podman[470987]: 2025-12-03 19:15:13.987909607 +0000 UTC m=+0.122835120 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, config_id=edpm, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., container_name=openstack_network_exporter, maintainer=Red Hat, Inc., distribution-scope=public, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64)
Dec  3 19:15:13 compute-0 podman[470985]: 2025-12-03 19:15:13.988575874 +0000 UTC m=+0.141683069 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 19:15:13 compute-0 podman[470986]: 2025-12-03 19:15:13.99386456 +0000 UTC m=+0.140720527 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 19:15:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:15:13 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:15:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:15:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:15:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:15:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:15:14 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_19:15:14
Dec  3 19:15:14 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 19:15:14 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 19:15:14 compute-0 ceph-mgr[193091]: [balancer INFO root] pools ['vms', 'volumes', 'default.rgw.log', 'backups', 'images', 'cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.control', '.mgr', 'default.rgw.meta']
Dec  3 19:15:14 compute-0 ceph-mgr[193091]: [balancer INFO root] prepared 0/10 changes
Dec  3 19:15:14 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2295: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:15:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 19:15:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 19:15:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 19:15:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 19:15:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 19:15:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 19:15:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 19:15:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 19:15:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 19:15:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 19:15:14 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:15:16 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2296: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:15:16 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:15:16.827 286999 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5a:63:53', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '8e:79:bd:f4:48:1d'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  3 19:15:16 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:15:16.828 286999 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec  3 19:15:16 compute-0 nova_compute[348325]: 2025-12-03 19:15:16.832 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:15:17 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:15:17.831 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1ac9fd0d-196b-4ea8-9a9a-8aa831092805, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  3 19:15:18 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2297: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:15:18 compute-0 nova_compute[348325]: 2025-12-03 19:15:18.487 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:15:18 compute-0 nova_compute[348325]: 2025-12-03 19:15:18.984 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:15:19 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:15:19 compute-0 podman[471049]: 2025-12-03 19:15:19.952988779 +0000 UTC m=+0.107063237 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  3 19:15:19 compute-0 podman[471047]: 2025-12-03 19:15:19.982294606 +0000 UTC m=+0.142469679 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, managed_by=edpm_ansible, name=ubi9, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, version=9.4, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, build-date=2024-09-18T21:23:30, release-0.7.12=, vendor=Red Hat, Inc., config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Dec  3 19:15:20 compute-0 podman[471048]: 2025-12-03 19:15:20.000297513 +0000 UTC m=+0.160345193 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec  3 19:15:20 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2298: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:15:22 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2299: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:15:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:15:23.373 286999 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 19:15:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:15:23.375 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 19:15:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:15:23.375 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 19:15:23 compute-0 nova_compute[348325]: 2025-12-03 19:15:23.492 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:15:23 compute-0 nova_compute[348325]: 2025-12-03 19:15:23.986 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:15:24 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2300: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:15:24 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:15:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 19:15:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:15:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 19:15:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:15:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  3 19:15:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:15:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:15:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:15:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:15:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:15:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec  3 19:15:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:15:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 19:15:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:15:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:15:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:15:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 19:15:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:15:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 19:15:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:15:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:15:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:15:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 19:15:26 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2301: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:15:26 compute-0 podman[471106]: 2025-12-03 19:15:26.969193384 +0000 UTC m=+0.124450111 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  3 19:15:28 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2302: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:15:28 compute-0 nova_compute[348325]: 2025-12-03 19:15:28.497 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:15:28 compute-0 nova_compute[348325]: 2025-12-03 19:15:28.990 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:15:29 compute-0 podman[158200]: time="2025-12-03T19:15:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 19:15:29 compute-0 podman[158200]: @ - - [03/Dec/2025:19:15:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42578 "" "Go-http-client/1.1"
Dec  3 19:15:29 compute-0 podman[158200]: @ - - [03/Dec/2025:19:15:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8183 "" "Go-http-client/1.1"
Dec  3 19:15:29 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:15:30 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2303: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:15:30 compute-0 nova_compute[348325]: 2025-12-03 19:15:30.546 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:15:30 compute-0 nova_compute[348325]: 2025-12-03 19:15:30.547 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:15:31 compute-0 openstack_network_exporter[365222]: ERROR   19:15:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:15:31 compute-0 openstack_network_exporter[365222]: ERROR   19:15:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:15:31 compute-0 openstack_network_exporter[365222]: ERROR   19:15:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 19:15:31 compute-0 openstack_network_exporter[365222]: ERROR   19:15:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 19:15:31 compute-0 openstack_network_exporter[365222]: ERROR   19:15:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 19:15:32 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2304: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:15:32 compute-0 podman[471131]: 2025-12-03 19:15:32.984184914 +0000 UTC m=+0.139535669 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm)
Dec  3 19:15:33 compute-0 podman[471130]: 2025-12-03 19:15:33.01979765 +0000 UTC m=+0.180000740 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  3 19:15:33 compute-0 nova_compute[348325]: 2025-12-03 19:15:33.501 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:15:33 compute-0 nova_compute[348325]: 2025-12-03 19:15:33.994 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:15:34 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2305: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:15:34 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:15:35 compute-0 nova_compute[348325]: 2025-12-03 19:15:35.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:15:35 compute-0 nova_compute[348325]: 2025-12-03 19:15:35.488 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:15:35 compute-0 nova_compute[348325]: 2025-12-03 19:15:35.488 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:15:36 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2306: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:15:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 19:15:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2883402509' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 19:15:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 19:15:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2883402509' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 19:15:38 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2307: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:15:38 compute-0 nova_compute[348325]: 2025-12-03 19:15:38.489 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:15:38 compute-0 nova_compute[348325]: 2025-12-03 19:15:38.490 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  3 19:15:38 compute-0 nova_compute[348325]: 2025-12-03 19:15:38.490 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  3 19:15:38 compute-0 nova_compute[348325]: 2025-12-03 19:15:38.506 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:15:38 compute-0 nova_compute[348325]: 2025-12-03 19:15:38.998 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:15:39 compute-0 nova_compute[348325]: 2025-12-03 19:15:39.091 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  3 19:15:39 compute-0 nova_compute[348325]: 2025-12-03 19:15:39.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:15:39 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:15:40 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2308: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:15:42 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2309: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:15:42 compute-0 ovn_controller[89305]: 2025-12-03T19:15:42Z|00186|memory_trim|INFO|Detected inactivity (last active 30010 ms ago): trimming memory
Dec  3 19:15:43 compute-0 nova_compute[348325]: 2025-12-03 19:15:43.511 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:15:43 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:15:44 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:15:44 compute-0 nova_compute[348325]: 2025-12-03 19:15:44.002 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:15:44 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:15:44 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:15:44 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:15:44 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:15:44 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2310: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:15:44 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:15:44 compute-0 podman[471173]: 2025-12-03 19:15:44.843203866 +0000 UTC m=+0.117805973 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, config_id=edpm, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, maintainer=Red Hat, Inc., version=9.6, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, architecture=x86_64, io.buildah.version=1.33.7, managed_by=edpm_ansible, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  3 19:15:44 compute-0 podman[471172]: 2025-12-03 19:15:44.85684722 +0000 UTC m=+0.136594229 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  3 19:15:44 compute-0 podman[471171]: 2025-12-03 19:15:44.877057531 +0000 UTC m=+0.160657622 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible)
Dec  3 19:15:45 compute-0 nova_compute[348325]: 2025-12-03 19:15:45.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:15:45 compute-0 nova_compute[348325]: 2025-12-03 19:15:45.487 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  3 19:15:46 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2311: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:15:48 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2312: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:15:48 compute-0 nova_compute[348325]: 2025-12-03 19:15:48.519 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:15:49 compute-0 nova_compute[348325]: 2025-12-03 19:15:49.006 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:15:49 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:15:50 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2313: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:15:50 compute-0 podman[471235]: 2025-12-03 19:15:50.962381362 +0000 UTC m=+0.120851464 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, name=ubi9, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., managed_by=edpm_ansible, container_name=kepler, distribution-scope=public, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=base rhel9, architecture=x86_64, vendor=Red Hat, Inc., version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, vcs-type=git)
Dec  3 19:15:50 compute-0 podman[471237]: 2025-12-03 19:15:50.995274574 +0000 UTC m=+0.142665203 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec  3 19:15:51 compute-0 podman[471236]: 2025-12-03 19:15:51.006096741 +0000 UTC m=+0.154797691 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  3 19:15:52 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2314: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:15:53 compute-0 nova_compute[348325]: 2025-12-03 19:15:53.525 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:15:54 compute-0 nova_compute[348325]: 2025-12-03 19:15:54.010 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:15:54 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2315: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:15:54 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:15:55 compute-0 nova_compute[348325]: 2025-12-03 19:15:55.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:15:55 compute-0 nova_compute[348325]: 2025-12-03 19:15:55.521 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 19:15:55 compute-0 nova_compute[348325]: 2025-12-03 19:15:55.521 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 19:15:55 compute-0 nova_compute[348325]: 2025-12-03 19:15:55.522 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 19:15:55 compute-0 nova_compute[348325]: 2025-12-03 19:15:55.522 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  3 19:15:55 compute-0 nova_compute[348325]: 2025-12-03 19:15:55.523 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 19:15:56 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 19:15:56 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1641718085' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 19:15:56 compute-0 nova_compute[348325]: 2025-12-03 19:15:56.035 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 19:15:56 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2316: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:15:56 compute-0 nova_compute[348325]: 2025-12-03 19:15:56.648 348329 WARNING nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  3 19:15:56 compute-0 nova_compute[348325]: 2025-12-03 19:15:56.651 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4011MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  3 19:15:56 compute-0 nova_compute[348325]: 2025-12-03 19:15:56.652 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 19:15:56 compute-0 nova_compute[348325]: 2025-12-03 19:15:56.652 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 19:15:56 compute-0 nova_compute[348325]: 2025-12-03 19:15:56.738 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  3 19:15:56 compute-0 nova_compute[348325]: 2025-12-03 19:15:56.739 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  3 19:15:56 compute-0 nova_compute[348325]: 2025-12-03 19:15:56.917 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 19:15:57 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 19:15:57 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/33872502' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 19:15:57 compute-0 nova_compute[348325]: 2025-12-03 19:15:57.438 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 19:15:57 compute-0 nova_compute[348325]: 2025-12-03 19:15:57.450 348329 DEBUG nova.compute.provider_tree [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  3 19:15:57 compute-0 nova_compute[348325]: 2025-12-03 19:15:57.484 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  3 19:15:57 compute-0 nova_compute[348325]: 2025-12-03 19:15:57.488 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  3 19:15:57 compute-0 nova_compute[348325]: 2025-12-03 19:15:57.489 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.836s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 19:15:57 compute-0 podman[471338]: 2025-12-03 19:15:57.988245718 +0000 UTC m=+0.130427562 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  3 19:15:58 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2317: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:15:58 compute-0 nova_compute[348325]: 2025-12-03 19:15:58.530 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:15:59 compute-0 nova_compute[348325]: 2025-12-03 19:15:59.014 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:15:59 compute-0 podman[158200]: time="2025-12-03T19:15:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 19:15:59 compute-0 podman[158200]: @ - - [03/Dec/2025:19:15:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42578 "" "Go-http-client/1.1"
Dec  3 19:15:59 compute-0 podman[158200]: @ - - [03/Dec/2025:19:15:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8187 "" "Go-http-client/1.1"
Dec  3 19:15:59 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:16:00 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2318: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:16:01 compute-0 openstack_network_exporter[365222]: ERROR   19:16:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 19:16:01 compute-0 openstack_network_exporter[365222]: ERROR   19:16:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:16:01 compute-0 openstack_network_exporter[365222]: ERROR   19:16:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:16:01 compute-0 openstack_network_exporter[365222]: ERROR   19:16:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 19:16:01 compute-0 openstack_network_exporter[365222]: ERROR   19:16:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 19:16:02 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2319: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:16:03 compute-0 podman[471463]: 2025-12-03 19:16:03.248224885 +0000 UTC m=+0.151156515 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  3 19:16:03 compute-0 podman[471462]: 2025-12-03 19:16:03.291678638 +0000 UTC m=+0.204702078 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3)
Dec  3 19:16:03 compute-0 nova_compute[348325]: 2025-12-03 19:16:03.535 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:16:03 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 19:16:03 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 19:16:03 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 19:16:03 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 19:16:03 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 19:16:03 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:16:03 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev dc4c4a62-95ac-4888-ade4-fa82699964db does not exist
Dec  3 19:16:03 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 15ec0ccb-92fa-4e51-8dee-4634a2272e10 does not exist
Dec  3 19:16:03 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 62ec2936-e671-4c60-b15d-6d88a421f4ed does not exist
Dec  3 19:16:03 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 19:16:03 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 19:16:03 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 19:16:03 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 19:16:03 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 19:16:03 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 19:16:03 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 19:16:03 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:16:03 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 19:16:04 compute-0 nova_compute[348325]: 2025-12-03 19:16:04.018 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:16:04 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2320: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:16:04 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:16:05 compute-0 podman[471675]: 2025-12-03 19:16:05.007805232 +0000 UTC m=+0.096530806 container create efd111189fad7edf665852d4ddda15a0c01d7e9dbc3b3ab8f5b7c303c31a95a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_ptolemy, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 19:16:05 compute-0 podman[471675]: 2025-12-03 19:16:04.969303347 +0000 UTC m=+0.058028971 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:16:05 compute-0 systemd[1]: Started libpod-conmon-efd111189fad7edf665852d4ddda15a0c01d7e9dbc3b3ab8f5b7c303c31a95a5.scope.
Dec  3 19:16:05 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:16:05 compute-0 podman[471675]: 2025-12-03 19:16:05.176007992 +0000 UTC m=+0.264733606 container init efd111189fad7edf665852d4ddda15a0c01d7e9dbc3b3ab8f5b7c303c31a95a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 19:16:05 compute-0 podman[471675]: 2025-12-03 19:16:05.196380566 +0000 UTC m=+0.285106140 container start efd111189fad7edf665852d4ddda15a0c01d7e9dbc3b3ab8f5b7c303c31a95a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_ptolemy, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 19:16:05 compute-0 podman[471675]: 2025-12-03 19:16:05.203305561 +0000 UTC m=+0.292031135 container attach efd111189fad7edf665852d4ddda15a0c01d7e9dbc3b3ab8f5b7c303c31a95a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_ptolemy, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  3 19:16:05 compute-0 gifted_ptolemy[471691]: 167 167
Dec  3 19:16:05 compute-0 systemd[1]: libpod-efd111189fad7edf665852d4ddda15a0c01d7e9dbc3b3ab8f5b7c303c31a95a5.scope: Deactivated successfully.
Dec  3 19:16:05 compute-0 podman[471675]: 2025-12-03 19:16:05.20873707 +0000 UTC m=+0.297462674 container died efd111189fad7edf665852d4ddda15a0c01d7e9dbc3b3ab8f5b7c303c31a95a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_ptolemy, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 19:16:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-05e70f161d000563d8929eeb29aa730188e694238e2d2ded1c9edcc9bacea3b6-merged.mount: Deactivated successfully.
Dec  3 19:16:05 compute-0 podman[471675]: 2025-12-03 19:16:05.289764607 +0000 UTC m=+0.378490141 container remove efd111189fad7edf665852d4ddda15a0c01d7e9dbc3b3ab8f5b7c303c31a95a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  3 19:16:05 compute-0 systemd[1]: libpod-conmon-efd111189fad7edf665852d4ddda15a0c01d7e9dbc3b3ab8f5b7c303c31a95a5.scope: Deactivated successfully.
Dec  3 19:16:05 compute-0 podman[471714]: 2025-12-03 19:16:05.565594195 +0000 UTC m=+0.090387390 container create 9a665b5c233ddcbc0f4808ef5f2a95d118cd53cfefa83136149b0b0d8282465d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ptolemy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 19:16:05 compute-0 podman[471714]: 2025-12-03 19:16:05.52542839 +0000 UTC m=+0.050221625 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:16:05 compute-0 systemd[1]: Started libpod-conmon-9a665b5c233ddcbc0f4808ef5f2a95d118cd53cfefa83136149b0b0d8282465d.scope.
Dec  3 19:16:05 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:16:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/372faba3d863ebe5f50e74507b8e80a498123141a825e3bac0ef816e4eb8f213/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 19:16:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/372faba3d863ebe5f50e74507b8e80a498123141a825e3bac0ef816e4eb8f213/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 19:16:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/372faba3d863ebe5f50e74507b8e80a498123141a825e3bac0ef816e4eb8f213/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 19:16:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/372faba3d863ebe5f50e74507b8e80a498123141a825e3bac0ef816e4eb8f213/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 19:16:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/372faba3d863ebe5f50e74507b8e80a498123141a825e3bac0ef816e4eb8f213/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 19:16:05 compute-0 podman[471714]: 2025-12-03 19:16:05.761553135 +0000 UTC m=+0.286346370 container init 9a665b5c233ddcbc0f4808ef5f2a95d118cd53cfefa83136149b0b0d8282465d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec  3 19:16:05 compute-0 podman[471714]: 2025-12-03 19:16:05.784653344 +0000 UTC m=+0.309446529 container start 9a665b5c233ddcbc0f4808ef5f2a95d118cd53cfefa83136149b0b0d8282465d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ptolemy, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec  3 19:16:05 compute-0 podman[471714]: 2025-12-03 19:16:05.7912294 +0000 UTC m=+0.316022585 container attach 9a665b5c233ddcbc0f4808ef5f2a95d118cd53cfefa83136149b0b0d8282465d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 19:16:06 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2321: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:16:07 compute-0 fervent_ptolemy[471731]: --> passed data devices: 0 physical, 3 LVM
Dec  3 19:16:07 compute-0 fervent_ptolemy[471731]: --> relative data size: 1.0
Dec  3 19:16:07 compute-0 fervent_ptolemy[471731]: --> All data devices are unavailable
Dec  3 19:16:07 compute-0 systemd[1]: libpod-9a665b5c233ddcbc0f4808ef5f2a95d118cd53cfefa83136149b0b0d8282465d.scope: Deactivated successfully.
Dec  3 19:16:07 compute-0 systemd[1]: libpod-9a665b5c233ddcbc0f4808ef5f2a95d118cd53cfefa83136149b0b0d8282465d.scope: Consumed 1.285s CPU time.
Dec  3 19:16:07 compute-0 podman[471762]: 2025-12-03 19:16:07.167712029 +0000 UTC m=+0.035546976 container died 9a665b5c233ddcbc0f4808ef5f2a95d118cd53cfefa83136149b0b0d8282465d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec  3 19:16:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-372faba3d863ebe5f50e74507b8e80a498123141a825e3bac0ef816e4eb8f213-merged.mount: Deactivated successfully.
Dec  3 19:16:07 compute-0 podman[471762]: 2025-12-03 19:16:07.283242506 +0000 UTC m=+0.151077473 container remove 9a665b5c233ddcbc0f4808ef5f2a95d118cd53cfefa83136149b0b0d8282465d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ptolemy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  3 19:16:07 compute-0 systemd[1]: libpod-conmon-9a665b5c233ddcbc0f4808ef5f2a95d118cd53cfefa83136149b0b0d8282465d.scope: Deactivated successfully.
Dec  3 19:16:08 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2322: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:16:08 compute-0 nova_compute[348325]: 2025-12-03 19:16:08.540 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:16:08 compute-0 podman[471913]: 2025-12-03 19:16:08.581976116 +0000 UTC m=+0.100219673 container create 02446dda405b4b801690160a42d79e36ecc75b8bcb0f7524fa268c5ae02382c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_carson, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec  3 19:16:08 compute-0 podman[471913]: 2025-12-03 19:16:08.547717282 +0000 UTC m=+0.065960869 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:16:08 compute-0 systemd[1]: Started libpod-conmon-02446dda405b4b801690160a42d79e36ecc75b8bcb0f7524fa268c5ae02382c4.scope.
Dec  3 19:16:08 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:16:08 compute-0 podman[471913]: 2025-12-03 19:16:08.72724514 +0000 UTC m=+0.245488737 container init 02446dda405b4b801690160a42d79e36ecc75b8bcb0f7524fa268c5ae02382c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec  3 19:16:08 compute-0 podman[471913]: 2025-12-03 19:16:08.746386736 +0000 UTC m=+0.264630273 container start 02446dda405b4b801690160a42d79e36ecc75b8bcb0f7524fa268c5ae02382c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_carson, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 19:16:08 compute-0 podman[471913]: 2025-12-03 19:16:08.752334607 +0000 UTC m=+0.270578364 container attach 02446dda405b4b801690160a42d79e36ecc75b8bcb0f7524fa268c5ae02382c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 19:16:08 compute-0 infallible_carson[471929]: 167 167
Dec  3 19:16:08 compute-0 systemd[1]: libpod-02446dda405b4b801690160a42d79e36ecc75b8bcb0f7524fa268c5ae02382c4.scope: Deactivated successfully.
Dec  3 19:16:08 compute-0 podman[471913]: 2025-12-03 19:16:08.757622443 +0000 UTC m=+0.275866020 container died 02446dda405b4b801690160a42d79e36ecc75b8bcb0f7524fa268c5ae02382c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_carson, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 19:16:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd518fed11e788a801471584cb74a38e46cfc384c5e2972d231e022453ba1627-merged.mount: Deactivated successfully.
Dec  3 19:16:08 compute-0 podman[471913]: 2025-12-03 19:16:08.835953965 +0000 UTC m=+0.354197522 container remove 02446dda405b4b801690160a42d79e36ecc75b8bcb0f7524fa268c5ae02382c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_carson, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Dec  3 19:16:08 compute-0 systemd[1]: libpod-conmon-02446dda405b4b801690160a42d79e36ecc75b8bcb0f7524fa268c5ae02382c4.scope: Deactivated successfully.
Dec  3 19:16:09 compute-0 nova_compute[348325]: 2025-12-03 19:16:09.021 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:16:09 compute-0 podman[471951]: 2025-12-03 19:16:09.105694649 +0000 UTC m=+0.052483439 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:16:09 compute-0 podman[471951]: 2025-12-03 19:16:09.221126014 +0000 UTC m=+0.167914764 container create 3cd46b59f9ec0fd78c3c0385f0d09ad693dad5fd74d40151f905e4dd41edf203 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_wiles, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 19:16:09 compute-0 systemd[1]: Started libpod-conmon-3cd46b59f9ec0fd78c3c0385f0d09ad693dad5fd74d40151f905e4dd41edf203.scope.
Dec  3 19:16:09 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:16:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7a28a08b1138e89135ba1ba7207cd507555fb60a681d795e5f0939c252fe078/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 19:16:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7a28a08b1138e89135ba1ba7207cd507555fb60a681d795e5f0939c252fe078/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 19:16:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7a28a08b1138e89135ba1ba7207cd507555fb60a681d795e5f0939c252fe078/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 19:16:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7a28a08b1138e89135ba1ba7207cd507555fb60a681d795e5f0939c252fe078/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 19:16:09 compute-0 podman[471951]: 2025-12-03 19:16:09.400209522 +0000 UTC m=+0.346998272 container init 3cd46b59f9ec0fd78c3c0385f0d09ad693dad5fd74d40151f905e4dd41edf203 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec  3 19:16:09 compute-0 podman[471951]: 2025-12-03 19:16:09.425120443 +0000 UTC m=+0.371909163 container start 3cd46b59f9ec0fd78c3c0385f0d09ad693dad5fd74d40151f905e4dd41edf203 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_wiles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  3 19:16:09 compute-0 podman[471951]: 2025-12-03 19:16:09.430621674 +0000 UTC m=+0.377410474 container attach 3cd46b59f9ec0fd78c3c0385f0d09ad693dad5fd74d40151f905e4dd41edf203 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 19:16:09 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:16:09 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #111. Immutable memtables: 0.
Dec  3 19:16:09 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:16:09.844130) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 19:16:09 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 65] Flushing memtable with next log file: 111
Dec  3 19:16:09 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764789369844170, "job": 65, "event": "flush_started", "num_memtables": 1, "num_entries": 1379, "num_deletes": 259, "total_data_size": 2062107, "memory_usage": 2094768, "flush_reason": "Manual Compaction"}
Dec  3 19:16:09 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 65] Level-0 flush table #112: started
Dec  3 19:16:09 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764789369858668, "cf_name": "default", "job": 65, "event": "table_file_creation", "file_number": 112, "file_size": 2041292, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 46325, "largest_seqno": 47703, "table_properties": {"data_size": 2034800, "index_size": 3695, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13641, "raw_average_key_size": 19, "raw_value_size": 2021570, "raw_average_value_size": 2929, "num_data_blocks": 165, "num_entries": 690, "num_filter_entries": 690, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764789233, "oldest_key_time": 1764789233, "file_creation_time": 1764789369, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a1ac3b74-8599-4a51-8b4c-6fd35a134427", "db_session_id": "TYOLZSJOOVNJYKF8Y1CE", "orig_file_number": 112, "seqno_to_time_mapping": "N/A"}}
Dec  3 19:16:09 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 65] Flush lasted 14593 microseconds, and 6427 cpu microseconds.
Dec  3 19:16:09 compute-0 ceph-mon[192802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 19:16:09 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:16:09.858720) [db/flush_job.cc:967] [default] [JOB 65] Level-0 flush table #112: 2041292 bytes OK
Dec  3 19:16:09 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:16:09.858736) [db/memtable_list.cc:519] [default] Level-0 commit table #112 started
Dec  3 19:16:09 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:16:09.861046) [db/memtable_list.cc:722] [default] Level-0 commit table #112: memtable #1 done
Dec  3 19:16:09 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:16:09.861059) EVENT_LOG_v1 {"time_micros": 1764789369861055, "job": 65, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 19:16:09 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:16:09.861071) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 19:16:09 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 65] Try to delete WAL files size 2055927, prev total WAL file size 2055927, number of live WAL files 2.
Dec  3 19:16:09 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000108.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 19:16:09 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:16:09.862006) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031373537' seq:72057594037927935, type:22 .. '6C6F676D0032303130' seq:0, type:0; will stop at (end)
Dec  3 19:16:09 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 66] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 19:16:09 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 65 Base level 0, inputs: [112(1993KB)], [110(7691KB)]
Dec  3 19:16:09 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764789369862095, "job": 66, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [112], "files_L6": [110], "score": -1, "input_data_size": 9917607, "oldest_snapshot_seqno": -1}
Dec  3 19:16:09 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 66] Generated table #113: 6272 keys, 9806324 bytes, temperature: kUnknown
Dec  3 19:16:09 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764789369926367, "cf_name": "default", "job": 66, "event": "table_file_creation", "file_number": 113, "file_size": 9806324, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9764666, "index_size": 24908, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15685, "raw_key_size": 163352, "raw_average_key_size": 26, "raw_value_size": 9651492, "raw_average_value_size": 1538, "num_data_blocks": 996, "num_entries": 6272, "num_filter_entries": 6272, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764784942, "oldest_key_time": 0, "file_creation_time": 1764789369, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a1ac3b74-8599-4a51-8b4c-6fd35a134427", "db_session_id": "TYOLZSJOOVNJYKF8Y1CE", "orig_file_number": 113, "seqno_to_time_mapping": "N/A"}}
Dec  3 19:16:09 compute-0 ceph-mon[192802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 19:16:09 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:16:09.926629) [db/compaction/compaction_job.cc:1663] [default] [JOB 66] Compacted 1@0 + 1@6 files to L6 => 9806324 bytes
Dec  3 19:16:09 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:16:09.929730) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 154.1 rd, 152.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 7.5 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(9.7) write-amplify(4.8) OK, records in: 6805, records dropped: 533 output_compression: NoCompression
Dec  3 19:16:09 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:16:09.929745) EVENT_LOG_v1 {"time_micros": 1764789369929738, "job": 66, "event": "compaction_finished", "compaction_time_micros": 64364, "compaction_time_cpu_micros": 22974, "output_level": 6, "num_output_files": 1, "total_output_size": 9806324, "num_input_records": 6805, "num_output_records": 6272, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 19:16:09 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000112.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 19:16:09 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764789369930210, "job": 66, "event": "table_file_deletion", "file_number": 112}
Dec  3 19:16:09 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000110.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 19:16:09 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764789369931800, "job": 66, "event": "table_file_deletion", "file_number": 110}
Dec  3 19:16:09 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:16:09.861878) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:16:09 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:16:09.931975) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:16:09 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:16:09.931980) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:16:09 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:16:09.931983) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:16:09 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:16:09.931985) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:16:09 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:16:09.931987) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]: {
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:    "0": [
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:        {
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:            "devices": [
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:                "/dev/loop3"
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:            ],
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:            "lv_name": "ceph_lv0",
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:            "lv_size": "21470642176",
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=973fbbc8-5aff-4a53-bee8-42e5a6788dd6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:            "lv_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:            "name": "ceph_lv0",
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:            "tags": {
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:                "ceph.block_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:                "ceph.cephx_lockbox_secret": "",
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:                "ceph.cluster_name": "ceph",
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:                "ceph.crush_device_class": "",
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:                "ceph.encrypted": "0",
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:                "ceph.osd_fsid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:                "ceph.osd_id": "0",
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:                "ceph.type": "block",
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:                "ceph.vdo": "0"
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:            },
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:            "type": "block",
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:            "vg_name": "ceph_vg0"
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:        }
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:    ],
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:    "1": [
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:        {
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:            "devices": [
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:                "/dev/loop4"
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:            ],
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:            "lv_name": "ceph_lv1",
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:            "lv_size": "21470642176",
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1e2b0083-5293-47cb-a3d1-bc27cedc4ede,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:            "lv_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:            "name": "ceph_lv1",
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:            "tags": {
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:                "ceph.block_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:                "ceph.cephx_lockbox_secret": "",
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:                "ceph.cluster_name": "ceph",
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:                "ceph.crush_device_class": "",
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:                "ceph.encrypted": "0",
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:                "ceph.osd_fsid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:                "ceph.osd_id": "1",
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:                "ceph.type": "block",
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:                "ceph.vdo": "0"
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:            },
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:            "type": "block",
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:            "vg_name": "ceph_vg1"
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:        }
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:    ],
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:    "2": [
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:        {
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:            "devices": [
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:                "/dev/loop5"
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:            ],
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:            "lv_name": "ceph_lv2",
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:            "lv_size": "21470642176",
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2abec9de-afba-437e-9a17-384a1dd8cd50,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:            "lv_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:            "name": "ceph_lv2",
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:            "tags": {
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:                "ceph.block_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:                "ceph.cephx_lockbox_secret": "",
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:                "ceph.cluster_name": "ceph",
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:                "ceph.crush_device_class": "",
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:                "ceph.encrypted": "0",
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:                "ceph.osd_fsid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:                "ceph.osd_id": "2",
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:                "ceph.type": "block",
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:                "ceph.vdo": "0"
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:            },
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:            "type": "block",
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:            "vg_name": "ceph_vg2"
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:        }
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]:    ]
Dec  3 19:16:10 compute-0 vigorous_wiles[471968]: }
Dec  3 19:16:10 compute-0 systemd[1]: libpod-3cd46b59f9ec0fd78c3c0385f0d09ad693dad5fd74d40151f905e4dd41edf203.scope: Deactivated successfully.
Dec  3 19:16:10 compute-0 podman[471951]: 2025-12-03 19:16:10.283004502 +0000 UTC m=+1.229793232 container died 3cd46b59f9ec0fd78c3c0385f0d09ad693dad5fd74d40151f905e4dd41edf203 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_wiles, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 19:16:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-d7a28a08b1138e89135ba1ba7207cd507555fb60a681d795e5f0939c252fe078-merged.mount: Deactivated successfully.
Dec  3 19:16:10 compute-0 podman[471951]: 2025-12-03 19:16:10.377098499 +0000 UTC m=+1.323887219 container remove 3cd46b59f9ec0fd78c3c0385f0d09ad693dad5fd74d40151f905e4dd41edf203 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_wiles, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 19:16:10 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2323: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s
Dec  3 19:16:10 compute-0 systemd[1]: libpod-conmon-3cd46b59f9ec0fd78c3c0385f0d09ad693dad5fd74d40151f905e4dd41edf203.scope: Deactivated successfully.
Dec  3 19:16:11 compute-0 podman[472126]: 2025-12-03 19:16:11.543329128 +0000 UTC m=+0.083104476 container create c964662d8ee4c2ef2f30b9d41c1afd600b280d6b5b95aa07be51586d4bf61dae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:16:11 compute-0 podman[472126]: 2025-12-03 19:16:11.513356586 +0000 UTC m=+0.053131964 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:16:11 compute-0 systemd[1]: Started libpod-conmon-c964662d8ee4c2ef2f30b9d41c1afd600b280d6b5b95aa07be51586d4bf61dae.scope.
Dec  3 19:16:11 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:16:11 compute-0 podman[472126]: 2025-12-03 19:16:11.685637112 +0000 UTC m=+0.225412530 container init c964662d8ee4c2ef2f30b9d41c1afd600b280d6b5b95aa07be51586d4bf61dae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_haslett, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Dec  3 19:16:11 compute-0 podman[472126]: 2025-12-03 19:16:11.703740063 +0000 UTC m=+0.243515431 container start c964662d8ee4c2ef2f30b9d41c1afd600b280d6b5b95aa07be51586d4bf61dae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 19:16:11 compute-0 podman[472126]: 2025-12-03 19:16:11.710124364 +0000 UTC m=+0.249899782 container attach c964662d8ee4c2ef2f30b9d41c1afd600b280d6b5b95aa07be51586d4bf61dae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_haslett, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:16:11 compute-0 intelligent_haslett[472140]: 167 167
Dec  3 19:16:11 compute-0 systemd[1]: libpod-c964662d8ee4c2ef2f30b9d41c1afd600b280d6b5b95aa07be51586d4bf61dae.scope: Deactivated successfully.
Dec  3 19:16:11 compute-0 podman[472126]: 2025-12-03 19:16:11.714802966 +0000 UTC m=+0.254578334 container died c964662d8ee4c2ef2f30b9d41c1afd600b280d6b5b95aa07be51586d4bf61dae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_haslett, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  3 19:16:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-5dad1064d9d7a4756acef5c35240d957520d0ffc8c3bcbe0f01a10a47a524743-merged.mount: Deactivated successfully.
Dec  3 19:16:11 compute-0 podman[472126]: 2025-12-03 19:16:11.786799168 +0000 UTC m=+0.326574516 container remove c964662d8ee4c2ef2f30b9d41c1afd600b280d6b5b95aa07be51586d4bf61dae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_haslett, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  3 19:16:11 compute-0 systemd[1]: libpod-conmon-c964662d8ee4c2ef2f30b9d41c1afd600b280d6b5b95aa07be51586d4bf61dae.scope: Deactivated successfully.
Dec  3 19:16:12 compute-0 podman[472166]: 2025-12-03 19:16:12.016800506 +0000 UTC m=+0.052910968 container create f91a05d75d409e85ff0bdc6819ccbca87b2efaf848ebcfd8db6148ef1405b99f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_wescoff, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Dec  3 19:16:12 compute-0 systemd[1]: Started libpod-conmon-f91a05d75d409e85ff0bdc6819ccbca87b2efaf848ebcfd8db6148ef1405b99f.scope.
Dec  3 19:16:12 compute-0 podman[472166]: 2025-12-03 19:16:11.996869502 +0000 UTC m=+0.032979984 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:16:12 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:16:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a56f55e2fd72ecd70c023c60f48f83ae03035d8d10963b9d63e65c20d621248e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 19:16:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a56f55e2fd72ecd70c023c60f48f83ae03035d8d10963b9d63e65c20d621248e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 19:16:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a56f55e2fd72ecd70c023c60f48f83ae03035d8d10963b9d63e65c20d621248e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 19:16:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a56f55e2fd72ecd70c023c60f48f83ae03035d8d10963b9d63e65c20d621248e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 19:16:12 compute-0 podman[472166]: 2025-12-03 19:16:12.133777648 +0000 UTC m=+0.169888130 container init f91a05d75d409e85ff0bdc6819ccbca87b2efaf848ebcfd8db6148ef1405b99f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec  3 19:16:12 compute-0 podman[472166]: 2025-12-03 19:16:12.142986056 +0000 UTC m=+0.179096518 container start f91a05d75d409e85ff0bdc6819ccbca87b2efaf848ebcfd8db6148ef1405b99f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_wescoff, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:16:12 compute-0 podman[472166]: 2025-12-03 19:16:12.147032763 +0000 UTC m=+0.183143255 container attach f91a05d75d409e85ff0bdc6819ccbca87b2efaf848ebcfd8db6148ef1405b99f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_wescoff, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec  3 19:16:12 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2324: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 0 B/s wr, 16 op/s
Dec  3 19:16:13 compute-0 boring_wescoff[472180]: {
Dec  3 19:16:13 compute-0 boring_wescoff[472180]:    "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
Dec  3 19:16:13 compute-0 boring_wescoff[472180]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:16:13 compute-0 boring_wescoff[472180]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 19:16:13 compute-0 boring_wescoff[472180]:        "osd_id": 1,
Dec  3 19:16:13 compute-0 boring_wescoff[472180]:        "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 19:16:13 compute-0 boring_wescoff[472180]:        "type": "bluestore"
Dec  3 19:16:13 compute-0 boring_wescoff[472180]:    },
Dec  3 19:16:13 compute-0 boring_wescoff[472180]:    "2abec9de-afba-437e-9a17-384a1dd8cd50": {
Dec  3 19:16:13 compute-0 boring_wescoff[472180]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:16:13 compute-0 boring_wescoff[472180]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 19:16:13 compute-0 boring_wescoff[472180]:        "osd_id": 2,
Dec  3 19:16:13 compute-0 boring_wescoff[472180]:        "osd_uuid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 19:16:13 compute-0 boring_wescoff[472180]:        "type": "bluestore"
Dec  3 19:16:13 compute-0 boring_wescoff[472180]:    },
Dec  3 19:16:13 compute-0 boring_wescoff[472180]:    "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": {
Dec  3 19:16:13 compute-0 boring_wescoff[472180]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:16:13 compute-0 boring_wescoff[472180]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 19:16:13 compute-0 boring_wescoff[472180]:        "osd_id": 0,
Dec  3 19:16:13 compute-0 boring_wescoff[472180]:        "osd_uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 19:16:13 compute-0 boring_wescoff[472180]:        "type": "bluestore"
Dec  3 19:16:13 compute-0 boring_wescoff[472180]:    }
Dec  3 19:16:13 compute-0 boring_wescoff[472180]: }
Dec  3 19:16:13 compute-0 systemd[1]: libpod-f91a05d75d409e85ff0bdc6819ccbca87b2efaf848ebcfd8db6148ef1405b99f.scope: Deactivated successfully.
Dec  3 19:16:13 compute-0 podman[472166]: 2025-12-03 19:16:13.418946016 +0000 UTC m=+1.455056518 container died f91a05d75d409e85ff0bdc6819ccbca87b2efaf848ebcfd8db6148ef1405b99f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_wescoff, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  3 19:16:13 compute-0 systemd[1]: libpod-f91a05d75d409e85ff0bdc6819ccbca87b2efaf848ebcfd8db6148ef1405b99f.scope: Consumed 1.262s CPU time.
Dec  3 19:16:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-a56f55e2fd72ecd70c023c60f48f83ae03035d8d10963b9d63e65c20d621248e-merged.mount: Deactivated successfully.
Dec  3 19:16:13 compute-0 podman[472166]: 2025-12-03 19:16:13.520284305 +0000 UTC m=+1.556394797 container remove f91a05d75d409e85ff0bdc6819ccbca87b2efaf848ebcfd8db6148ef1405b99f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_wescoff, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Dec  3 19:16:13 compute-0 systemd[1]: libpod-conmon-f91a05d75d409e85ff0bdc6819ccbca87b2efaf848ebcfd8db6148ef1405b99f.scope: Deactivated successfully.
Dec  3 19:16:13 compute-0 nova_compute[348325]: 2025-12-03 19:16:13.545 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:16:13 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 19:16:13 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:16:13 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 19:16:13 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:16:13 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 14300981-0b31-4a99-87cf-ec8d385fced6 does not exist
Dec  3 19:16:13 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 1c049270-d73b-4a8b-a6fb-bbb1c2f6e0ff does not exist
Dec  3 19:16:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:16:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:16:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:16:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:16:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:16:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:16:14 compute-0 nova_compute[348325]: 2025-12-03 19:16:14.025 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:16:14 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_19:16:14
Dec  3 19:16:14 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 19:16:14 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 19:16:14 compute-0 ceph-mgr[193091]: [balancer INFO root] pools ['volumes', 'vms', 'default.rgw.meta', '.mgr', '.rgw.root', 'images', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.log', 'backups', 'default.rgw.control']
Dec  3 19:16:14 compute-0 ceph-mgr[193091]: [balancer INFO root] prepared 0/10 changes
Dec  3 19:16:14 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2325: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 0 B/s wr, 35 op/s
Dec  3 19:16:14 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:16:14 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:16:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 19:16:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 19:16:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 19:16:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 19:16:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 19:16:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 19:16:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 19:16:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 19:16:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 19:16:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 19:16:14 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:16:15 compute-0 systemd-logind[784]: New session 64 of user zuul.
Dec  3 19:16:15 compute-0 systemd[1]: Started Session 64 of User zuul.
Dec  3 19:16:15 compute-0 podman[472281]: 2025-12-03 19:16:15.299425658 +0000 UTC m=+0.129821998 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, name=ubi9-minimal, vendor=Red Hat, Inc., container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, config_id=edpm, maintainer=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  3 19:16:15 compute-0 podman[472280]: 2025-12-03 19:16:15.314647191 +0000 UTC m=+0.154945876 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  3 19:16:15 compute-0 podman[472278]: 2025-12-03 19:16:15.322247731 +0000 UTC m=+0.166520071 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  3 19:16:16 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2326: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 54 op/s
Dec  3 19:16:18 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2327: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  3 19:16:18 compute-0 nova_compute[348325]: 2025-12-03 19:16:18.551 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:16:18 compute-0 ceph-mgr[193091]: log_channel(audit) log [DBG] : from='client.15549 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 19:16:19 compute-0 nova_compute[348325]: 2025-12-03 19:16:19.027 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:16:19 compute-0 ceph-mgr[193091]: log_channel(audit) log [DBG] : from='client.15551 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 19:16:19 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:16:20 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Dec  3 19:16:20 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/787387007' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec  3 19:16:20 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2328: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  3 19:16:21 compute-0 podman[472595]: 2025-12-03 19:16:21.976814228 +0000 UTC m=+0.118949690 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec  3 19:16:21 compute-0 podman[472593]: 2025-12-03 19:16:21.987797589 +0000 UTC m=+0.137225754 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, architecture=x86_64, build-date=2024-09-18T21:23:30, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, release=1214.1726694543, io.buildah.version=1.29.0, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_id=edpm, maintainer=Red Hat, Inc., managed_by=edpm_ansible)
Dec  3 19:16:21 compute-0 podman[472594]: 2025-12-03 19:16:21.995900272 +0000 UTC m=+0.141580819 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 19:16:22 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2329: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 56 op/s
Dec  3 19:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:16:23.376 286999 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 19:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:16:23.377 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 19:16:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:16:23.378 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 19:16:23 compute-0 ovs-vsctl[472675]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Dec  3 19:16:23 compute-0 nova_compute[348325]: 2025-12-03 19:16:23.554 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:16:24 compute-0 nova_compute[348325]: 2025-12-03 19:16:24.032 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:16:24 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2330: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 0 B/s wr, 42 op/s
Dec  3 19:16:24 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:16:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 19:16:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:16:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 19:16:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:16:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  3 19:16:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:16:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:16:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:16:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:16:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:16:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec  3 19:16:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:16:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 19:16:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:16:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:16:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:16:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 19:16:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:16:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 19:16:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:16:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:16:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:16:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 19:16:25 compute-0 virtqemud[138705]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Dec  3 19:16:25 compute-0 virtqemud[138705]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Dec  3 19:16:25 compute-0 virtqemud[138705]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Dec  3 19:16:25 compute-0 ceph-mds[220747]: mds.cephfs.compute-0.oeacqo asok_command: cache status {prefix=cache status} (starting...)
Dec  3 19:16:26 compute-0 ceph-mds[220747]: mds.cephfs.compute-0.oeacqo asok_command: client ls {prefix=client ls} (starting...)
Dec  3 19:16:26 compute-0 lvm[473006]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  3 19:16:26 compute-0 lvm[473006]: VG ceph_vg0 finished
Dec  3 19:16:26 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2331: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Dec  3 19:16:26 compute-0 lvm[473031]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  3 19:16:26 compute-0 lvm[473031]: VG ceph_vg1 finished
Dec  3 19:16:26 compute-0 lvm[473052]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  3 19:16:26 compute-0 lvm[473052]: VG ceph_vg2 finished
Dec  3 19:16:26 compute-0 ceph-mds[220747]: mds.cephfs.compute-0.oeacqo asok_command: damage ls {prefix=damage ls} (starting...)
Dec  3 19:16:26 compute-0 ceph-mds[220747]: mds.cephfs.compute-0.oeacqo asok_command: dump loads {prefix=dump loads} (starting...)
Dec  3 19:16:27 compute-0 ceph-mgr[193091]: log_channel(audit) log [DBG] : from='client.15555 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 19:16:27 compute-0 ceph-mds[220747]: mds.cephfs.compute-0.oeacqo asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Dec  3 19:16:27 compute-0 ceph-mds[220747]: mds.cephfs.compute-0.oeacqo asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Dec  3 19:16:27 compute-0 ceph-mds[220747]: mds.cephfs.compute-0.oeacqo asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Dec  3 19:16:27 compute-0 ceph-mgr[193091]: log_channel(audit) log [DBG] : from='client.15557 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 19:16:27 compute-0 ceph-mds[220747]: mds.cephfs.compute-0.oeacqo asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Dec  3 19:16:27 compute-0 ceph-mds[220747]: mds.cephfs.compute-0.oeacqo asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Dec  3 19:16:27 compute-0 ceph-mds[220747]: mds.cephfs.compute-0.oeacqo asok_command: get subtrees {prefix=get subtrees} (starting...)
Dec  3 19:16:27 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0) v1
Dec  3 19:16:27 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1028982425' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec  3 19:16:28 compute-0 ceph-mds[220747]: mds.cephfs.compute-0.oeacqo asok_command: ops {prefix=ops} (starting...)
Dec  3 19:16:28 compute-0 ceph-mgr[193091]: log_channel(audit) log [DBG] : from='client.15563 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 19:16:28 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-etccde[193087]: 2025-12-03T19:16:28.259+0000 7ff6bdbb5640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec  3 19:16:28 compute-0 ceph-mgr[193091]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec  3 19:16:28 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 19:16:28 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3516620524' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 19:16:28 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2332: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 0 B/s wr, 4 op/s
Dec  3 19:16:28 compute-0 nova_compute[348325]: 2025-12-03 19:16:28.558 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:16:28 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0) v1
Dec  3 19:16:28 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2980631073' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Dec  3 19:16:28 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Dec  3 19:16:28 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3280864318' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Dec  3 19:16:28 compute-0 ceph-mds[220747]: mds.cephfs.compute-0.oeacqo asok_command: session ls {prefix=session ls} (starting...)
Dec  3 19:16:28 compute-0 podman[473366]: 2025-12-03 19:16:28.923862189 +0000 UTC m=+0.091754523 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  3 19:16:29 compute-0 nova_compute[348325]: 2025-12-03 19:16:29.034 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:16:29 compute-0 ceph-mds[220747]: mds.cephfs.compute-0.oeacqo asok_command: status {prefix=status} (starting...)
Dec  3 19:16:29 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Dec  3 19:16:29 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2435595950' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec  3 19:16:29 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Dec  3 19:16:29 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/864724583' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Dec  3 19:16:29 compute-0 ceph-mgr[193091]: log_channel(audit) log [DBG] : from='client.15577 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 19:16:29 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Dec  3 19:16:29 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/211315828' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec  3 19:16:29 compute-0 podman[158200]: time="2025-12-03T19:16:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 19:16:29 compute-0 podman[158200]: @ - - [03/Dec/2025:19:16:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42578 "" "Go-http-client/1.1"
Dec  3 19:16:29 compute-0 podman[158200]: @ - - [03/Dec/2025:19:16:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8189 "" "Go-http-client/1.1"
Dec  3 19:16:29 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:16:30 compute-0 ceph-mgr[193091]: log_channel(audit) log [DBG] : from='client.15579 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 19:16:30 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Dec  3 19:16:30 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3503012873' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec  3 19:16:30 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2333: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:16:30 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0) v1
Dec  3 19:16:30 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/176611163' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec  3 19:16:30 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Dec  3 19:16:30 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1012053717' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec  3 19:16:31 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Dec  3 19:16:31 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2711608312' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec  3 19:16:31 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Dec  3 19:16:31 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/412827124' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Dec  3 19:16:31 compute-0 openstack_network_exporter[365222]: ERROR   19:16:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 19:16:31 compute-0 openstack_network_exporter[365222]: ERROR   19:16:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:16:31 compute-0 openstack_network_exporter[365222]: ERROR   19:16:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:16:31 compute-0 openstack_network_exporter[365222]: ERROR   19:16:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 19:16:31 compute-0 openstack_network_exporter[365222]: ERROR   19:16:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 19:16:31 compute-0 ceph-mgr[193091]: log_channel(audit) log [DBG] : from='client.15593 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 19:16:31 compute-0 ceph-mgr[193091]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec  3 19:16:31 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-etccde[193087]: 2025-12-03T19:16:31.588+0000 7ff6bdbb5640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec  3 19:16:31 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Dec  3 19:16:31 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/616949934' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec  3 19:16:32 compute-0 ceph-mgr[193091]: log_channel(audit) log [DBG] : from='client.15597 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 19:16:32 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Dec  3 19:16:32 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2500365874' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Dec  3 19:16:32 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2334: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:16:32 compute-0 ceph-mgr[193091]: log_channel(audit) log [DBG] : from='client.15600 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 19:16:32 compute-0 nova_compute[348325]: 2025-12-03 19:16:32.489 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:16:32 compute-0 nova_compute[348325]: 2025-12-03 19:16:32.490 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:16:32 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Dec  3 19:16:32 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1903414398' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f8fb0000/0x0/0x4ffc00000, data 0x2a07d88/0x2ace000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 98050048 unmapped: 5496832 heap: 103546880 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f8fb0000/0x0/0x4ffc00000, data 0x2a07d88/0x2ace000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 98050048 unmapped: 5496832 heap: 103546880 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304727 data_alloc: 234881024 data_used: 14602240
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 98050048 unmapped: 5496832 heap: 103546880 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 98050048 unmapped: 5496832 heap: 103546880 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 98050048 unmapped: 5496832 heap: 103546880 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 98050048 unmapped: 5496832 heap: 103546880 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 98050048 unmapped: 5496832 heap: 103546880 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304727 data_alloc: 234881024 data_used: 14602240
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f8fb0000/0x0/0x4ffc00000, data 0x2a07d88/0x2ace000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 ms_handle_reset con 0x55ab9d7ed000 session 0x55ab9c16d2c0
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 ms_handle_reset con 0x55ab9ce09400 session 0x55ab9d7d1860
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 ms_handle_reset con 0x55ab9d7ea800 session 0x55ab9d7d0960
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 98050048 unmapped: 5496832 heap: 103546880 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 ms_handle_reset con 0x55ab9bfc4400 session 0x55ab9d7d0b40
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 49.653400421s of 49.668117523s, submitted: 2
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 ms_handle_reset con 0x55ab9cd45000 session 0x55ab9d7d1c20
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 ms_handle_reset con 0x55ab9bfc4400 session 0x55ab9ded30e0
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 ms_handle_reset con 0x55ab9ce09400 session 0x55ab9da5a000
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 ms_handle_reset con 0x55ab9d7ea800 session 0x55ab9d2e10e0
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 ms_handle_reset con 0x55ab9d7ed000 session 0x55ab9da2cf00
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 ms_handle_reset con 0x55ab9b14f400 session 0x55ab9da5b4a0
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 ms_handle_reset con 0x55ab9bfc4400 session 0x55ab9bb95680
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 ms_handle_reset con 0x55ab9ce09400 session 0x55ab9bb99e00
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 99131392 unmapped: 8617984 heap: 107749376 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 ms_handle_reset con 0x55ab9d7ea800 session 0x55ab9bb98d20
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 ms_handle_reset con 0x55ab9d7f1400 session 0x55ab9d09d680
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 ms_handle_reset con 0x55ab9ce09000 session 0x55ab9e72c960
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 ms_handle_reset con 0x55ab9bfc4400 session 0x55ab9d7d0960
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 99106816 unmapped: 8642560 heap: 107749376 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f8b87000/0x0/0x4ffc00000, data 0x2e30d88/0x2ef7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 99106816 unmapped: 8642560 heap: 107749376 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 99106816 unmapped: 8642560 heap: 107749376 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341537 data_alloc: 234881024 data_used: 14602240
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 99106816 unmapped: 8642560 heap: 107749376 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f8b87000/0x0/0x4ffc00000, data 0x2e30d88/0x2ef7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 99106816 unmapped: 8642560 heap: 107749376 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 ms_handle_reset con 0x55ab9b52ac00 session 0x55ab9d7d0f00
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 99115008 unmapped: 8634368 heap: 107749376 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 ms_handle_reset con 0x55ab9d7f2800 session 0x55ab9da5a000
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f8b87000/0x0/0x4ffc00000, data 0x2e30d88/0x2ef7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 ms_handle_reset con 0x55ab9d7f7400 session 0x55ab9da5b4a0
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 2400.2 total, 600.0 interval
Cumulative writes: 6698 writes, 27K keys, 6698 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
Cumulative WAL: 6698 writes, 1312 syncs, 5.11 writes per sync, written: 0.02 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 812 writes, 2947 keys, 812 commit groups, 1.0 writes per commit group, ingest: 3.15 MB, 0.01 MB/s
Interval WAL: 812 writes, 311 syncs, 2.61 writes per sync, written: 0.00 GB, 0.01 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 99115008 unmapped: 8634368 heap: 107749376 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 ms_handle_reset con 0x55ab9d7ef800 session 0x55ab9ded30e0
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 99180544 unmapped: 8568832 heap: 107749376 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341609 data_alloc: 234881024 data_used: 14606336
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 99180544 unmapped: 8568832 heap: 107749376 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f8b87000/0x0/0x4ffc00000, data 0x2e30d88/0x2ef7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 100515840 unmapped: 7233536 heap: 107749376 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101163008 unmapped: 6586368 heap: 107749376 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f8b87000/0x0/0x4ffc00000, data 0x2e30d88/0x2ef7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101163008 unmapped: 6586368 heap: 107749376 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1365129 data_alloc: 234881024 data_used: 17911808
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101163008 unmapped: 6586368 heap: 107749376 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101163008 unmapped: 6586368 heap: 107749376 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101163008 unmapped: 6586368 heap: 107749376 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101163008 unmapped: 6586368 heap: 107749376 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101163008 unmapped: 6586368 heap: 107749376 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f8b87000/0x0/0x4ffc00000, data 0x2e30d88/0x2ef7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1365129 data_alloc: 234881024 data_used: 17911808
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101195776 unmapped: 6553600 heap: 107749376 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f8b87000/0x0/0x4ffc00000, data 0x2e30d88/0x2ef7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 6520832 heap: 107749376 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 6520832 heap: 107749376 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 6520832 heap: 107749376 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f8b87000/0x0/0x4ffc00000, data 0x2e30d88/0x2ef7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 6520832 heap: 107749376 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1365129 data_alloc: 234881024 data_used: 17911808
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 6520832 heap: 107749376 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 6520832 heap: 107749376 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101236736 unmapped: 6512640 heap: 107749376 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f8b87000/0x0/0x4ffc00000, data 0x2e30d88/0x2ef7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101236736 unmapped: 6512640 heap: 107749376 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101236736 unmapped: 6512640 heap: 107749376 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1365129 data_alloc: 234881024 data_used: 17911808
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101236736 unmapped: 6512640 heap: 107749376 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f8b87000/0x0/0x4ffc00000, data 0x2e30d88/0x2ef7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101269504 unmapped: 6479872 heap: 107749376 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101269504 unmapped: 6479872 heap: 107749376 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101269504 unmapped: 6479872 heap: 107749376 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101269504 unmapped: 6479872 heap: 107749376 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1365129 data_alloc: 234881024 data_used: 17911808
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101269504 unmapped: 6479872 heap: 107749376 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f8b87000/0x0/0x4ffc00000, data 0x2e30d88/0x2ef7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101302272 unmapped: 6447104 heap: 107749376 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101302272 unmapped: 6447104 heap: 107749376 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101302272 unmapped: 6447104 heap: 107749376 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f8b87000/0x0/0x4ffc00000, data 0x2e30d88/0x2ef7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101302272 unmapped: 6447104 heap: 107749376 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1365129 data_alloc: 234881024 data_used: 17911808
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101302272 unmapped: 6447104 heap: 107749376 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101302272 unmapped: 6447104 heap: 107749376 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f8b87000/0x0/0x4ffc00000, data 0x2e30d88/0x2ef7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101302272 unmapped: 6447104 heap: 107749376 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101302272 unmapped: 6447104 heap: 107749376 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f8b87000/0x0/0x4ffc00000, data 0x2e30d88/0x2ef7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101302272 unmapped: 6447104 heap: 107749376 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1365129 data_alloc: 234881024 data_used: 17911808
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101302272 unmapped: 6447104 heap: 107749376 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f8b87000/0x0/0x4ffc00000, data 0x2e30d88/0x2ef7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101302272 unmapped: 6447104 heap: 107749376 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 45.823135376s of 45.976428986s, submitted: 27
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101343232 unmapped: 6406144 heap: 107749376 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 102768640 unmapped: 7569408 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101998592 unmapped: 8339456 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1408207 data_alloc: 234881024 data_used: 17952768
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 100892672 unmapped: 9445376 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f867c000/0x0/0x4ffc00000, data 0x333ad88/0x3401000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 100892672 unmapped: 9445376 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 100892672 unmapped: 9445376 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 100892672 unmapped: 9445376 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f867c000/0x0/0x4ffc00000, data 0x333ad88/0x3401000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 100892672 unmapped: 9445376 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1406635 data_alloc: 234881024 data_used: 17952768
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 100982784 unmapped: 9355264 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 100982784 unmapped: 9355264 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 100982784 unmapped: 9355264 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f8666000/0x0/0x4ffc00000, data 0x3351d88/0x3418000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f8666000/0x0/0x4ffc00000, data 0x3351d88/0x3418000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 100982784 unmapped: 9355264 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 100982784 unmapped: 9355264 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1406635 data_alloc: 234881024 data_used: 17952768
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 100982784 unmapped: 9355264 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 100982784 unmapped: 9355264 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 100982784 unmapped: 9355264 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 100982784 unmapped: 9355264 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f8666000/0x0/0x4ffc00000, data 0x3351d88/0x3418000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 100982784 unmapped: 9355264 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1406635 data_alloc: 234881024 data_used: 17952768
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 100982784 unmapped: 9355264 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 100982784 unmapped: 9355264 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 100990976 unmapped: 9347072 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f8666000/0x0/0x4ffc00000, data 0x3351d88/0x3418000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 100990976 unmapped: 9347072 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 100990976 unmapped: 9347072 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1406635 data_alloc: 234881024 data_used: 17952768
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 100990976 unmapped: 9347072 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 100990976 unmapped: 9347072 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 100990976 unmapped: 9347072 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 100990976 unmapped: 9347072 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f8666000/0x0/0x4ffc00000, data 0x3351d88/0x3418000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f8666000/0x0/0x4ffc00000, data 0x3351d88/0x3418000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 100990976 unmapped: 9347072 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1406635 data_alloc: 234881024 data_used: 17952768
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f8666000/0x0/0x4ffc00000, data 0x3351d88/0x3418000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 100990976 unmapped: 9347072 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 100990976 unmapped: 9347072 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 100990976 unmapped: 9347072 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 100990976 unmapped: 9347072 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f8666000/0x0/0x4ffc00000, data 0x3351d88/0x3418000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 100990976 unmapped: 9347072 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 32.592727661s of 32.867519379s, submitted: 45
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1406635 data_alloc: 234881024 data_used: 17952768
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 100990976 unmapped: 9347072 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 100990976 unmapped: 9347072 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 100990976 unmapped: 9347072 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 100990976 unmapped: 9347072 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f8666000/0x0/0x4ffc00000, data 0x3351d88/0x3418000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 100990976 unmapped: 9347072 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f8666000/0x0/0x4ffc00000, data 0x3351d88/0x3418000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1406635 data_alloc: 234881024 data_used: 17952768
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 100990976 unmapped: 9347072 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 9338880 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f8666000/0x0/0x4ffc00000, data 0x3351d88/0x3418000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 9338880 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 9338880 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 9338880 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1406635 data_alloc: 234881024 data_used: 17952768
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 9338880 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f8666000/0x0/0x4ffc00000, data 0x3351d88/0x3418000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 9338880 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 9338880 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 9338880 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f8666000/0x0/0x4ffc00000, data 0x3351d88/0x3418000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 9338880 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1406635 data_alloc: 234881024 data_used: 17952768
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 9338880 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 9338880 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 9338880 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 9338880 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f8666000/0x0/0x4ffc00000, data 0x3351d88/0x3418000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f8666000/0x0/0x4ffc00000, data 0x3351d88/0x3418000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 9338880 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1406635 data_alloc: 234881024 data_used: 17952768
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 9338880 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 9338880 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 21.945199966s of 21.952770233s, submitted: 1
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 100990976 unmapped: 9347072 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 100990976 unmapped: 9347072 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f8666000/0x0/0x4ffc00000, data 0x3351d88/0x3418000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [0,0,0,1])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101048320 unmapped: 9289728 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1406355 data_alloc: 234881024 data_used: 17952768
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f8666000/0x0/0x4ffc00000, data 0x3351d88/0x3418000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101089280 unmapped: 9248768 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f8666000/0x0/0x4ffc00000, data 0x3351d88/0x3418000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101097472 unmapped: 9240576 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101097472 unmapped: 9240576 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101097472 unmapped: 9240576 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f8666000/0x0/0x4ffc00000, data 0x3351d88/0x3418000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101097472 unmapped: 9240576 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1406283 data_alloc: 234881024 data_used: 17952768
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101097472 unmapped: 9240576 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101097472 unmapped: 9240576 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f8666000/0x0/0x4ffc00000, data 0x3351d88/0x3418000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101097472 unmapped: 9240576 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101097472 unmapped: 9240576 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101097472 unmapped: 9240576 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1406283 data_alloc: 234881024 data_used: 17952768
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101097472 unmapped: 9240576 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101097472 unmapped: 9240576 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101097472 unmapped: 9240576 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f8666000/0x0/0x4ffc00000, data 0x3351d88/0x3418000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101097472 unmapped: 9240576 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f8666000/0x0/0x4ffc00000, data 0x3351d88/0x3418000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101097472 unmapped: 9240576 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1406283 data_alloc: 234881024 data_used: 17952768
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101097472 unmapped: 9240576 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101097472 unmapped: 9240576 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101105664 unmapped: 9232384 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101105664 unmapped: 9232384 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101105664 unmapped: 9232384 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f8666000/0x0/0x4ffc00000, data 0x3351d88/0x3418000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1406283 data_alloc: 234881024 data_used: 17952768
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101105664 unmapped: 9232384 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101105664 unmapped: 9232384 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f8666000/0x0/0x4ffc00000, data 0x3351d88/0x3418000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101105664 unmapped: 9232384 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101105664 unmapped: 9232384 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101105664 unmapped: 9232384 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f8666000/0x0/0x4ffc00000, data 0x3351d88/0x3418000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1406283 data_alloc: 234881024 data_used: 17952768
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101105664 unmapped: 9232384 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101105664 unmapped: 9232384 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101105664 unmapped: 9232384 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101105664 unmapped: 9232384 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f8666000/0x0/0x4ffc00000, data 0x3351d88/0x3418000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101105664 unmapped: 9232384 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1406283 data_alloc: 234881024 data_used: 17952768
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101105664 unmapped: 9232384 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101105664 unmapped: 9232384 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101105664 unmapped: 9232384 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f8666000/0x0/0x4ffc00000, data 0x3351d88/0x3418000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101105664 unmapped: 9232384 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f8666000/0x0/0x4ffc00000, data 0x3351d88/0x3418000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101105664 unmapped: 9232384 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1406283 data_alloc: 234881024 data_used: 17952768
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101105664 unmapped: 9232384 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101105664 unmapped: 9232384 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101105664 unmapped: 9232384 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101105664 unmapped: 9232384 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101105664 unmapped: 9232384 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f8666000/0x0/0x4ffc00000, data 0x3351d88/0x3418000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1406283 data_alloc: 234881024 data_used: 17952768
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101105664 unmapped: 9232384 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101105664 unmapped: 9232384 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101105664 unmapped: 9232384 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f8666000/0x0/0x4ffc00000, data 0x3351d88/0x3418000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f8666000/0x0/0x4ffc00000, data 0x3351d88/0x3418000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101105664 unmapped: 9232384 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f8666000/0x0/0x4ffc00000, data 0x3351d88/0x3418000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101105664 unmapped: 9232384 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1406283 data_alloc: 234881024 data_used: 17952768
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101105664 unmapped: 9232384 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f8666000/0x0/0x4ffc00000, data 0x3351d88/0x3418000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101113856 unmapped: 9224192 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101122048 unmapped: 9216000 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101130240 unmapped: 9207808 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101138432 unmapped: 9199616 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101146624 unmapped: 9191424 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101154816 unmapped: 9183232 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101163008 unmapped: 9175040 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 126.301712036s of 126.831199646s, submitted: 90
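The _kv_sync_thread line above is the one non-repeating datapoint in this stretch: the BlueStore kv sync thread was idle for 126.30 s of a 126.83 s window (about 99.6% idle) while submitting 90 transactions, so this OSD is essentially unloaded. A minimal sketch for pulling that number out, assuming only the line format shown here (this is log text, not a ceph API):

import re

# Minimal sketch: compute how busy the BlueStore kv sync thread was,
# parsing the utilization line exactly as it appears in this capture.
line = ("bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: "
        "idle 126.301712036s of 126.831199646s, submitted: 90")

m = re.search(r"idle ([0-9.]+)s of ([0-9.]+)s, submitted: (\d+)", line)
idle, total, submitted = float(m.group(1)), float(m.group(2)), int(m.group(3))
print(f"busy {100 * (total - idle) / total:.2f}% over {total:.0f}s, "
      f"{submitted} transactions submitted")
# -> busy 0.42% over 127s, 90 transactions submitted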
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 ms_handle_reset con 0x55ab9d7e5400 session 0x55ab9d09c960
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 ms_handle_reset con 0x55ab9d159c00 session 0x55ab9d939e00
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 ms_handle_reset con 0x55ab9b305c00 session 0x55ab9d9390e0
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 10821632 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f9875000/0x0/0x4ffc00000, data 0x2142d88/0x2209000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 ms_handle_reset con 0x55ab9d7f5c00 session 0x55ab9af992c0
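Right after the ms_handle_reset burst, the heartbeat's store_statfs numbers change: both data figures drop by exactly 0x120f000 bytes (about 18.1 MiB) and the first statfs field rises by the same amount. The three leading fields are byte counts; reading the first and third as available/total is an assumption inferred from how the values move in this capture, not a documented layout. A hedged decoding sketch:

# Hedged sketch: decode the store_statfs hex byte counts from the heartbeat
# lines. Treating field 1 as 'available' and field 3 as 'total' is an
# assumption consistent with this capture, where field 1 rises by exactly
# the amount the data fields fall.
def gib(h: str) -> float:
    return int(h, 16) / 2**30

before_avail, after_avail = gib("0x4f8666000"), gib("0x4f9875000")
total = gib("0x4ffc00000")
freed = int("0x3351d88", 16) - int("0x2142d88", 16)  # equals the avail delta
print(f"available {before_avail:.2f} -> {after_avail:.2f} GiB of {total:.2f} GiB")
print(f"data freed: {freed} bytes (~{freed / 2**20:.1f} MiB)")
# -> available 19.88 -> 19.90 GiB of 20.00 GiB
# -> data freed: 18935808 bytes (~18.1 MiB)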
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1233161 data_alloc: 234881024 data_used: 12562432
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 10813440 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 99532800 unmapped: 10805248 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 99540992 unmapped: 10797056 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-mgr[193091]: log_channel(audit) log [DBG] : from='client.15603 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 99549184 unmapped: 10788864 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 99549184 unmapped: 10788864 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1233161 data_alloc: 234881024 data_used: 12562432
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f9875000/0x0/0x4ffc00000, data 0x2142d88/0x2209000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 99549184 unmapped: 10788864 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 99549184 unmapped: 10788864 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 99549184 unmapped: 10788864 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f9875000/0x0/0x4ffc00000, data 0x2142d88/0x2209000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 99549184 unmapped: 10788864 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 99549184 unmapped: 10788864 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1233161 data_alloc: 234881024 data_used: 12562432
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 99549184 unmapped: 10788864 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 99549184 unmapped: 10788864 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f9875000/0x0/0x4ffc00000, data 0x2142d88/0x2209000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 99549184 unmapped: 10788864 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 99557376 unmapped: 10780672 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 99557376 unmapped: 10780672 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f9875000/0x0/0x4ffc00000, data 0x2142d88/0x2209000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1233161 data_alloc: 234881024 data_used: 12562432
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 99557376 unmapped: 10780672 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 99557376 unmapped: 10780672 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 120.393020630s of 120.551101685s, submitted: 25
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 99557376 unmapped: 10780672 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 ms_handle_reset con 0x55ab9d7f1c00 session 0x55ab9da2c780
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 ms_handle_reset con 0x55ab9bfc4800 session 0x55ab9d7d12c0
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 ms_handle_reset con 0x55ab9d159000 session 0x55ab9c07a5a0
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92413952 unmapped: 17924096 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 ms_handle_reset con 0x55ab9b305c00 session 0x55ab9bf94f00
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fa9cf000/0x0/0x4ffc00000, data 0xfead16/0x10af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 93503488 unmapped: 16834560 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038756 data_alloc: 218103808 data_used: 3727360
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 93503488 unmapped: 16834560 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 93503488 unmapped: 16834560 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 93503488 unmapped: 16834560 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 93503488 unmapped: 16834560 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 93503488 unmapped: 16834560 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038756 data_alloc: 218103808 data_used: 3727360
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fa9cf000/0x0/0x4ffc00000, data 0xfead16/0x10af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 93503488 unmapped: 16834560 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fa9cf000/0x0/0x4ffc00000, data 0xfead16/0x10af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 93503488 unmapped: 16834560 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 93503488 unmapped: 16834560 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 93503488 unmapped: 16834560 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fa9cf000/0x0/0x4ffc00000, data 0xfead16/0x10af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 93503488 unmapped: 16834560 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038756 data_alloc: 218103808 data_used: 3727360
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fa9cf000/0x0/0x4ffc00000, data 0xfead16/0x10af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 93503488 unmapped: 16834560 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fa9cf000/0x0/0x4ffc00000, data 0xfead16/0x10af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 93503488 unmapped: 16834560 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 93503488 unmapped: 16834560 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 93503488 unmapped: 16834560 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fa9cf000/0x0/0x4ffc00000, data 0xfead16/0x10af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 93503488 unmapped: 16834560 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038756 data_alloc: 218103808 data_used: 3727360
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 93503488 unmapped: 16834560 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 93503488 unmapped: 16834560 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 93503488 unmapped: 16834560 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 93503488 unmapped: 16834560 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fa9cf000/0x0/0x4ffc00000, data 0xfead16/0x10af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 93503488 unmapped: 16834560 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038756 data_alloc: 218103808 data_used: 3727360
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 93503488 unmapped: 16834560 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 93503488 unmapped: 16834560 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fa9cf000/0x0/0x4ffc00000, data 0xfead16/0x10af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 93503488 unmapped: 16834560 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 93503488 unmapped: 16834560 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 93503488 unmapped: 16834560 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038756 data_alloc: 218103808 data_used: 3727360
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fa9cf000/0x0/0x4ffc00000, data 0xfead16/0x10af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 93503488 unmapped: 16834560 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 93503488 unmapped: 16834560 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fa9cf000/0x0/0x4ffc00000, data 0xfead16/0x10af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 93503488 unmapped: 16834560 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 93503488 unmapped: 16834560 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 31.544889450s of 31.718055725s, submitted: 37
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fa9cf000/0x0/0x4ffc00000, data 0xfead16/0x10af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94609408 unmapped: 15728640 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1042176 data_alloc: 218103808 data_used: 3727360
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94609408 unmapped: 15728640 heap: 110338048 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94650368 unmapped: 32473088 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94650368 unmapped: 32473088 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94650368 unmapped: 32473088 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 128 handle_osd_map epochs [129,129], i have 128, src has [1,129]
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 129 ms_handle_reset con 0x55ab9d158c00 session 0x55ab9bba0780
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94658560 unmapped: 32464896 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fa1c9000/0x0/0x4ffc00000, data 0x17ec8c6/0x18b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1102729 data_alloc: 218103808 data_used: 3735552
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94658560 unmapped: 32464896 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fa1c9000/0x0/0x4ffc00000, data 0x17ec8c6/0x18b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94658560 unmapped: 32464896 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94658560 unmapped: 32464896 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94658560 unmapped: 32464896 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fa1c9000/0x0/0x4ffc00000, data 0x17ec8c6/0x18b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94658560 unmapped: 32464896 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1102729 data_alloc: 218103808 data_used: 3735552
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94658560 unmapped: 32464896 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fa1c9000/0x0/0x4ffc00000, data 0x17ec8c6/0x18b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 129 ms_handle_reset con 0x55ab9d7ebc00 session 0x55ab9d7c05a0
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 129 ms_handle_reset con 0x55ab9b305c00 session 0x55ab9ded2f00
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 129 ms_handle_reset con 0x55ab9bfc4800 session 0x55ab9c2ba5a0
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94658560 unmapped: 32464896 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fa1c9000/0x0/0x4ffc00000, data 0x17ec8c6/0x18b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.169572830s of 13.312438011s, submitted: 17
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 129 ms_handle_reset con 0x55ab9d158c00 session 0x55ab9e9654a0
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 102531072 unmapped: 24592384 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 129 ms_handle_reset con 0x55ab9d159000 session 0x55ab9d2f3e00
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 129 ms_handle_reset con 0x55ab9d7f2000 session 0x55ab9c16c3c0
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 129 ms_handle_reset con 0x55ab9bfc4800 session 0x55ab9bd68960
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101941248 unmapped: 25182208 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 129 ms_handle_reset con 0x55ab9d158c00 session 0x55ab9d09c5a0
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 129 ms_handle_reset con 0x55ab9d159000 session 0x55ab9bb99680
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 129 ms_handle_reset con 0x55ab9d7f5c00 session 0x55ab9b1da780
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 129 ms_handle_reset con 0x55ab9d7ef000 session 0x55ab9c166000
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 129 ms_handle_reset con 0x55ab9bfc4800 session 0x55ab9da5a780
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 129 ms_handle_reset con 0x55ab9d158c00 session 0x55ab9bf945a0
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 129 ms_handle_reset con 0x55ab9d159000 session 0x55ab9d7a94a0
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 129 ms_handle_reset con 0x55ab9d7f5c00 session 0x55ab9b42a1e0
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101761024 unmapped: 25362432 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 129 ms_handle_reset con 0x55ab9cd44400 session 0x55ab9bba1860
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 129 ms_handle_reset con 0x55ab9bfc4800 session 0x55ab9d938d20
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 129 ms_handle_reset con 0x55ab9d158c00 session 0x55ab9c2bbc20
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 129 ms_handle_reset con 0x55ab9d159000 session 0x55ab9d09d2c0
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1151519 data_alloc: 234881024 data_used: 10526720
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101761024 unmapped: 25362432 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101761024 unmapped: 25362432 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 129 ms_handle_reset con 0x55ab9cdab000 session 0x55ab9c2bb680
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101761024 unmapped: 25362432 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 129 heartbeat osd_stat(store_statfs(0x4f9f80000/0x0/0x4ffc00000, data 0x1a33948/0x1afe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 129 heartbeat osd_stat(store_statfs(0x4f9f80000/0x0/0x4ffc00000, data 0x1a33948/0x1afe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101761024 unmapped: 25362432 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101761024 unmapped: 25362432 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1163971 data_alloc: 234881024 data_used: 12275712
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 129 heartbeat osd_stat(store_statfs(0x4f9f80000/0x0/0x4ffc00000, data 0x1a33948/0x1afe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101769216 unmapped: 25354240 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101744640 unmapped: 25378816 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 129 heartbeat osd_stat(store_statfs(0x4f9f80000/0x0/0x4ffc00000, data 0x1a33948/0x1afe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101744640 unmapped: 25378816 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101744640 unmapped: 25378816 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 129 heartbeat osd_stat(store_statfs(0x4f9f80000/0x0/0x4ffc00000, data 0x1a33948/0x1afe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101744640 unmapped: 25378816 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166371 data_alloc: 234881024 data_used: 12558336
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101744640 unmapped: 25378816 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101744640 unmapped: 25378816 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101744640 unmapped: 25378816 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101744640 unmapped: 25378816 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101744640 unmapped: 25378816 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166531 data_alloc: 234881024 data_used: 12562432
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 129 heartbeat osd_stat(store_statfs(0x4f9f80000/0x0/0x4ffc00000, data 0x1a33948/0x1afe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101744640 unmapped: 25378816 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101744640 unmapped: 25378816 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101744640 unmapped: 25378816 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101744640 unmapped: 25378816 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101744640 unmapped: 25378816 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166531 data_alloc: 234881024 data_used: 12562432
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 101744640 unmapped: 25378816 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 129 heartbeat osd_stat(store_statfs(0x4f9f80000/0x0/0x4ffc00000, data 0x1a33948/0x1afe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 23.783391953s of 23.964035034s, submitted: 33
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 129 ms_handle_reset con 0x55ab9b302400 session 0x55ab9d2c45a0
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 129 ms_handle_reset con 0x55ab9cd42000 session 0x55ab9d7a9e00
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 129 ms_handle_reset con 0x55ab9d15a000 session 0x55ab9e9445a0
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 100327424 unmapped: 26796032 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 129 ms_handle_reset con 0x55ab9b302400 session 0x55ab9d766960
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 100343808 unmapped: 26779648 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fa1c9000/0x0/0x4ffc00000, data 0x17ec8c6/0x18b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 100343808 unmapped: 26779648 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 100343808 unmapped: 26779648 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fa1c9000/0x0/0x4ffc00000, data 0x17ec8c6/0x18b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fa1ca000/0x0/0x4ffc00000, data 0x17ec8c6/0x18b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [0,0,1])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1131819 data_alloc: 234881024 data_used: 10530816
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 130 ms_handle_reset con 0x55ab9bfc4800 session 0x55ab9cf465a0
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94511104 unmapped: 32612352 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94511104 unmapped: 32612352 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94511104 unmapped: 32612352 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 130 ms_handle_reset con 0x55ab9d7ecc00 session 0x55ab9d938780
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94527488 unmapped: 32595968 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94527488 unmapped: 32595968 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1056947 data_alloc: 218103808 data_used: 3715072
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94527488 unmapped: 32595968 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fa9c8000/0x0/0x4ffc00000, data 0xfee464/0x10b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94543872 unmapped: 32579584 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94552064 unmapped: 32571392 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94552064 unmapped: 32571392 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 131 heartbeat osd_stat(store_statfs(0x4fa9c5000/0x0/0x4ffc00000, data 0xfefec7/0x10b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94552064 unmapped: 32571392 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059921 data_alloc: 218103808 data_used: 3715072
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94552064 unmapped: 32571392 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.057926178s of 15.443591118s, submitted: 83
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94552064 unmapped: 32571392 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94552064 unmapped: 32571392 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 131 heartbeat osd_stat(store_statfs(0x4fa9c4000/0x0/0x4ffc00000, data 0xfefef4/0x10ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94552064 unmapped: 32571392 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 131 heartbeat osd_stat(store_statfs(0x4fa9c4000/0x0/0x4ffc00000, data 0xfefef4/0x10ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94576640 unmapped: 32546816 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1062578 data_alloc: 218103808 data_used: 3715072
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94576640 unmapped: 32546816 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94609408 unmapped: 32514048 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 132 ms_handle_reset con 0x55ab9b303800 session 0x55ab9bf95680
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 32456704 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 32456704 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 32456704 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fa233000/0x0/0x4ffc00000, data 0x177d499/0x184a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1120804 data_alloc: 218103808 data_used: 3723264
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 32456704 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 32456704 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 32456704 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 32456704 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.164191246s of 12.281321526s, submitted: 15
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 32456704 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fa235000/0x0/0x4ffc00000, data 0x177d476/0x1849000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 133 ms_handle_reset con 0x55ab9b305400 session 0x55ab9d2e0780
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1070698 data_alloc: 218103808 data_used: 3731456
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94707712 unmapped: 32415744 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa9be000/0x0/0x4ffc00000, data 0xff3615/0x10be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94707712 unmapped: 32415744 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa9be000/0x0/0x4ffc00000, data 0xff3615/0x10be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94707712 unmapped: 32415744 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94707712 unmapped: 32415744 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94707712 unmapped: 32415744 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1070698 data_alloc: 218103808 data_used: 3731456
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa9be000/0x0/0x4ffc00000, data 0xff3615/0x10be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94707712 unmapped: 32415744 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94707712 unmapped: 32415744 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94707712 unmapped: 32415744 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94707712 unmapped: 32415744 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94707712 unmapped: 32415744 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa9be000/0x0/0x4ffc00000, data 0xff3615/0x10be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1070698 data_alloc: 218103808 data_used: 3731456
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94707712 unmapped: 32415744 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 133 handle_osd_map epochs [133,134], i have 133, src has [1,134]
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.456750870s of 12.589468956s, submitted: 33
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94707712 unmapped: 32415744 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94707712 unmapped: 32415744 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa9bc000/0x0/0x4ffc00000, data 0xff5078/0x10c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94707712 unmapped: 32415744 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94707712 unmapped: 32415744 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073496 data_alloc: 218103808 data_used: 3731456
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94707712 unmapped: 32415744 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94707712 unmapped: 32415744 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94707712 unmapped: 32415744 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94707712 unmapped: 32415744 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa9bc000/0x0/0x4ffc00000, data 0xff5078/0x10c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94715904 unmapped: 32407552 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073496 data_alloc: 218103808 data_used: 3731456
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94715904 unmapped: 32407552 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94715904 unmapped: 32407552 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa9bc000/0x0/0x4ffc00000, data 0xff5078/0x10c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94715904 unmapped: 32407552 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94715904 unmapped: 32407552 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94715904 unmapped: 32407552 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073976 data_alloc: 218103808 data_used: 3743744
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94715904 unmapped: 32407552 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa9bc000/0x0/0x4ffc00000, data 0xff5078/0x10c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94715904 unmapped: 32407552 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa9bc000/0x0/0x4ffc00000, data 0xff5078/0x10c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94715904 unmapped: 32407552 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073976 data_alloc: 218103808 data_used: 3743744
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94715904 unmapped: 32407552 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa9bc000/0x0/0x4ffc00000, data 0xff5078/0x10c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94715904 unmapped: 32407552 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073976 data_alloc: 218103808 data_used: 3743744
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94715904 unmapped: 32407552 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94724096 unmapped: 32399360 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa9bc000/0x0/0x4ffc00000, data 0xff5078/0x10c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073976 data_alloc: 218103808 data_used: 3743744
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94724096 unmapped: 32399360 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa9bc000/0x0/0x4ffc00000, data 0xff5078/0x10c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94724096 unmapped: 32399360 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073976 data_alloc: 218103808 data_used: 3743744
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa9bc000/0x0/0x4ffc00000, data 0xff5078/0x10c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94724096 unmapped: 32399360 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa9bc000/0x0/0x4ffc00000, data 0xff5078/0x10c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94732288 unmapped: 32391168 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073976 data_alloc: 218103808 data_used: 3743744
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94732288 unmapped: 32391168 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa9bc000/0x0/0x4ffc00000, data 0xff5078/0x10c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94732288 unmapped: 32391168 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa9bc000/0x0/0x4ffc00000, data 0xff5078/0x10c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94732288 unmapped: 32391168 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073976 data_alloc: 218103808 data_used: 3743744
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94732288 unmapped: 32391168 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa9bc000/0x0/0x4ffc00000, data 0xff5078/0x10c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94732288 unmapped: 32391168 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa9bc000/0x0/0x4ffc00000, data 0xff5078/0x10c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94732288 unmapped: 32391168 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073976 data_alloc: 218103808 data_used: 3743744
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94732288 unmapped: 32391168 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa9bc000/0x0/0x4ffc00000, data 0xff5078/0x10c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94732288 unmapped: 32391168 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa9bc000/0x0/0x4ffc00000, data 0xff5078/0x10c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94732288 unmapped: 32391168 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073976 data_alloc: 218103808 data_used: 3743744
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94732288 unmapped: 32391168 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa9bc000/0x0/0x4ffc00000, data 0xff5078/0x10c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94732288 unmapped: 32391168 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073976 data_alloc: 218103808 data_used: 3743744
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94732288 unmapped: 32391168 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa9bc000/0x0/0x4ffc00000, data 0xff5078/0x10c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94732288 unmapped: 32391168 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073976 data_alloc: 218103808 data_used: 3743744
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94732288 unmapped: 32391168 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa9bc000/0x0/0x4ffc00000, data 0xff5078/0x10c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94732288 unmapped: 32391168 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073976 data_alloc: 218103808 data_used: 3743744
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94732288 unmapped: 32391168 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa9bc000/0x0/0x4ffc00000, data 0xff5078/0x10c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94732288 unmapped: 32391168 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa9bc000/0x0/0x4ffc00000, data 0xff5078/0x10c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94732288 unmapped: 32391168 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073976 data_alloc: 218103808 data_used: 3743744
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94732288 unmapped: 32391168 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa9bc000/0x0/0x4ffc00000, data 0xff5078/0x10c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94732288 unmapped: 32391168 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073976 data_alloc: 218103808 data_used: 3743744
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94732288 unmapped: 32391168 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa9bc000/0x0/0x4ffc00000, data 0xff5078/0x10c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94732288 unmapped: 32391168 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa9bc000/0x0/0x4ffc00000, data 0xff5078/0x10c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94732288 unmapped: 32391168 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073976 data_alloc: 218103808 data_used: 3743744
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94732288 unmapped: 32391168 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa9bc000/0x0/0x4ffc00000, data 0xff5078/0x10c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94732288 unmapped: 32391168 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073976 data_alloc: 218103808 data_used: 3743744
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94732288 unmapped: 32391168 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa9bc000/0x0/0x4ffc00000, data 0xff5078/0x10c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94732288 unmapped: 32391168 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa9bc000/0x0/0x4ffc00000, data 0xff5078/0x10c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94732288 unmapped: 32391168 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073976 data_alloc: 218103808 data_used: 3743744
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94732288 unmapped: 32391168 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073976 data_alloc: 218103808 data_used: 3743744
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa9bc000/0x0/0x4ffc00000, data 0xff5078/0x10c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94732288 unmapped: 32391168 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa9bc000/0x0/0x4ffc00000, data 0xff5078/0x10c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94732288 unmapped: 32391168 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa9bc000/0x0/0x4ffc00000, data 0xff5078/0x10c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94732288 unmapped: 32391168 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073976 data_alloc: 218103808 data_used: 3743744
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94732288 unmapped: 32391168 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa9bc000/0x0/0x4ffc00000, data 0xff5078/0x10c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94732288 unmapped: 32391168 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073976 data_alloc: 218103808 data_used: 3743744
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94732288 unmapped: 32391168 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa9bc000/0x0/0x4ffc00000, data 0xff5078/0x10c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94732288 unmapped: 32391168 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073976 data_alloc: 218103808 data_used: 3743744
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94732288 unmapped: 32391168 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa9bc000/0x0/0x4ffc00000, data 0xff5078/0x10c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94732288 unmapped: 32391168 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa9bc000/0x0/0x4ffc00000, data 0xff5078/0x10c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94732288 unmapped: 32391168 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073976 data_alloc: 218103808 data_used: 3743744
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94732288 unmapped: 32391168 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa9bc000/0x0/0x4ffc00000, data 0xff5078/0x10c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94732288 unmapped: 32391168 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073976 data_alloc: 218103808 data_used: 3743744
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94732288 unmapped: 32391168 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa9bc000/0x0/0x4ffc00000, data 0xff5078/0x10c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94732288 unmapped: 32391168 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073976 data_alloc: 218103808 data_used: 3743744
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa9bc000/0x0/0x4ffc00000, data 0xff5078/0x10c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94732288 unmapped: 32391168 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073976 data_alloc: 218103808 data_used: 3743744
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94732288 unmapped: 32391168 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa9bc000/0x0/0x4ffc00000, data 0xff5078/0x10c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94732288 unmapped: 32391168 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073976 data_alloc: 218103808 data_used: 3743744
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa9bc000/0x0/0x4ffc00000, data 0xff5078/0x10c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94732288 unmapped: 32391168 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa9bc000/0x0/0x4ffc00000, data 0xff5078/0x10c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94732288 unmapped: 32391168 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94740480 unmapped: 32382976 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa9bc000/0x0/0x4ffc00000, data 0xff5078/0x10c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94740480 unmapped: 32382976 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073976 data_alloc: 218103808 data_used: 3743744
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94740480 unmapped: 32382976 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa9bc000/0x0/0x4ffc00000, data 0xff5078/0x10c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94748672 unmapped: 32374784 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa9bc000/0x0/0x4ffc00000, data 0xff5078/0x10c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94748672 unmapped: 32374784 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073976 data_alloc: 218103808 data_used: 3743744
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 3000.2 total, 600.0 interval
Cumulative writes: 7427 writes, 29K keys, 7427 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
Cumulative WAL: 7427 writes, 1646 syncs, 4.51 writes per sync, written: 0.02 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 729 writes, 2111 keys, 729 commit groups, 1.0 writes per commit group, ingest: 1.09 MB, 0.00 MB/s
Interval WAL: 729 writes, 334 syncs, 2.18 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94748672 unmapped: 32374784 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa9bc000/0x0/0x4ffc00000, data 0xff5078/0x10c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94748672 unmapped: 32374784 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa9bc000/0x0/0x4ffc00000, data 0xff5078/0x10c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94748672 unmapped: 32374784 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073976 data_alloc: 218103808 data_used: 3743744
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94748672 unmapped: 32374784 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa9bc000/0x0/0x4ffc00000, data 0xff5078/0x10c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94748672 unmapped: 32374784 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa9bc000/0x0/0x4ffc00000, data 0xff5078/0x10c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94748672 unmapped: 32374784 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073976 data_alloc: 218103808 data_used: 3743744
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94748672 unmapped: 32374784 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 161.326278687s of 161.347686768s, submitted: 14
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 ms_handle_reset con 0x55ab9b52ac00 session 0x55ab9ded3860
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 ms_handle_reset con 0x55ab9bfc4400 session 0x55ab9b41fc20
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 ms_handle_reset con 0x55ab9d7f2800 session 0x55ab9d92f860
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 94756864 unmapped: 32366592 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 ms_handle_reset con 0x55ab9b303800 session 0x55ab9d7d0b40
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92086272 unmapped: 35037184 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb307000/0x0/0x4ffc00000, data 0x6ab078/0x777000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92086272 unmapped: 35037184 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb307000/0x0/0x4ffc00000, data 0x6ab078/0x777000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92086272 unmapped: 35037184 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 978790 data_alloc: 218103808 data_used: 393216
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92086272 unmapped: 35037184 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb307000/0x0/0x4ffc00000, data 0x6ab078/0x777000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92086272 unmapped: 35037184 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 978790 data_alloc: 218103808 data_used: 393216
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92086272 unmapped: 35037184 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb307000/0x0/0x4ffc00000, data 0x6ab078/0x777000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 978790 data_alloc: 218103808 data_used: 393216
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb307000/0x0/0x4ffc00000, data 0x6ab078/0x777000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 978790 data_alloc: 218103808 data_used: 393216
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 ms_handle_reset con 0x55ab9ce08c00 session 0x55ab9cf46780
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 ms_handle_reset con 0x55ab9d158000 session 0x55ab9bba1c20
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 ms_handle_reset con 0x55ab9b302400 session 0x55ab9e944960
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.763299942s of 20.002590179s, submitted: 38
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb8e7000/0x0/0x4ffc00000, data 0xcb078/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 ms_handle_reset con 0x55ab9d7f1400 session 0x55ab9d767680
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb8ec000/0x0/0x4ffc00000, data 0xc7068/0x192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930407 data_alloc: 218103808 data_used: 348160
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb8ec000/0x0/0x4ffc00000, data 0xc7068/0x192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930407 data_alloc: 218103808 data_used: 348160
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb8ec000/0x0/0x4ffc00000, data 0xc7068/0x192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb8ec000/0x0/0x4ffc00000, data 0xc7068/0x192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930407 data_alloc: 218103808 data_used: 348160
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb8ec000/0x0/0x4ffc00000, data 0xc7068/0x192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930407 data_alloc: 218103808 data_used: 348160
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb8ec000/0x0/0x4ffc00000, data 0xc7068/0x192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930407 data_alloc: 218103808 data_used: 348160
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb8ec000/0x0/0x4ffc00000, data 0xc7068/0x192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb8ec000/0x0/0x4ffc00000, data 0xc7068/0x192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb8ec000/0x0/0x4ffc00000, data 0xc7068/0x192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930407 data_alloc: 218103808 data_used: 348160
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb8ec000/0x0/0x4ffc00000, data 0xc7068/0x192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930407 data_alloc: 218103808 data_used: 348160
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb8ec000/0x0/0x4ffc00000, data 0xc7068/0x192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930407 data_alloc: 218103808 data_used: 348160
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb8ec000/0x0/0x4ffc00000, data 0xc7068/0x192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb8ec000/0x0/0x4ffc00000, data 0xc7068/0x192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930407 data_alloc: 218103808 data_used: 348160
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb8ec000/0x0/0x4ffc00000, data 0xc7068/0x192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb8ec000/0x0/0x4ffc00000, data 0xc7068/0x192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb8ec000/0x0/0x4ffc00000, data 0xc7068/0x192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930407 data_alloc: 218103808 data_used: 348160
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb8ec000/0x0/0x4ffc00000, data 0xc7068/0x192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930407 data_alloc: 218103808 data_used: 348160
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92102656 unmapped: 35020800 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb8ec000/0x0/0x4ffc00000, data 0xc7068/0x192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92102656 unmapped: 35020800 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb8ec000/0x0/0x4ffc00000, data 0xc7068/0x192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92102656 unmapped: 35020800 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930407 data_alloc: 218103808 data_used: 348160
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb8ec000/0x0/0x4ffc00000, data 0xc7068/0x192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92102656 unmapped: 35020800 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92102656 unmapped: 35020800 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92102656 unmapped: 35020800 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 62.241371155s of 62.264389038s, submitted: 7
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92143616 unmapped: 34979840 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb8ec000/0x0/0x4ffc00000, data 0xc7068/0x192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [0,0,1])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92135424 unmapped: 34988032 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930407 data_alloc: 218103808 data_used: 348160
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92176384 unmapped: 34947072 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92209152 unmapped: 34914304 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92209152 unmapped: 34914304 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92209152 unmapped: 34914304 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92209152 unmapped: 34914304 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930407 data_alloc: 218103808 data_used: 348160
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb4dc000/0x0/0x4ffc00000, data 0xc7068/0x192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92209152 unmapped: 34914304 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb4dc000/0x0/0x4ffc00000, data 0xc7068/0x192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92209152 unmapped: 34914304 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92209152 unmapped: 34914304 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92209152 unmapped: 34914304 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92209152 unmapped: 34914304 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930407 data_alloc: 218103808 data_used: 348160
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92209152 unmapped: 34914304 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92209152 unmapped: 34914304 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb4dc000/0x0/0x4ffc00000, data 0xc7068/0x192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92209152 unmapped: 34914304 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92209152 unmapped: 34914304 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92209152 unmapped: 34914304 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930407 data_alloc: 218103808 data_used: 348160
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92209152 unmapped: 34914304 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb4dc000/0x0/0x4ffc00000, data 0xc7068/0x192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92209152 unmapped: 34914304 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92209152 unmapped: 34914304 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92209152 unmapped: 34914304 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92209152 unmapped: 34914304 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930407 data_alloc: 218103808 data_used: 348160
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb4dc000/0x0/0x4ffc00000, data 0xc7068/0x192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92209152 unmapped: 34914304 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92209152 unmapped: 34914304 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 23.915590286s of 24.443050385s, submitted: 90
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92250112 unmapped: 34873344 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 135 ms_handle_reset con 0x55ab9a795c00 session 0x55ab9bd685a0
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92266496 unmapped: 34856960 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa86c000/0x0/0x4ffc00000, data 0xd37068/0xe02000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92299264 unmapped: 34824192 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026554 data_alloc: 218103808 data_used: 356352
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 135 handle_osd_map epochs [135,136], i have 135, src has [1,136]
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 handle_osd_map epochs [136,136], i have 136, src has [1,136]
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9cf8c000 session 0x55ab9da2d2c0
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92372992 unmapped: 34750464 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92372992 unmapped: 34750464 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa3f2000/0x0/0x4ffc00000, data 0x11aa7b8/0x127b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa3f2000/0x0/0x4ffc00000, data 0x11aa7b8/0x127b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92381184 unmapped: 34742272 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92381184 unmapped: 34742272 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92381184 unmapped: 34742272 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1063256 data_alloc: 218103808 data_used: 364544
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92381184 unmapped: 34742272 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92381184 unmapped: 34742272 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92381184 unmapped: 34742272 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa3f2000/0x0/0x4ffc00000, data 0x11aa7b8/0x127b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92381184 unmapped: 34742272 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92381184 unmapped: 34742272 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1063256 data_alloc: 218103808 data_used: 364544
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92381184 unmapped: 34742272 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92381184 unmapped: 34742272 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92381184 unmapped: 34742272 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa3f2000/0x0/0x4ffc00000, data 0x11aa7b8/0x127b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92381184 unmapped: 34742272 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92381184 unmapped: 34742272 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1063256 data_alloc: 218103808 data_used: 364544
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa3f2000/0x0/0x4ffc00000, data 0x11aa7b8/0x127b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92381184 unmapped: 34742272 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa3f2000/0x0/0x4ffc00000, data 0x11aa7b8/0x127b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92381184 unmapped: 34742272 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa3f2000/0x0/0x4ffc00000, data 0x11aa7b8/0x127b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92381184 unmapped: 34742272 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92381184 unmapped: 34742272 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92381184 unmapped: 34742272 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1063256 data_alloc: 218103808 data_used: 364544
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa3f2000/0x0/0x4ffc00000, data 0x11aa7b8/0x127b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92381184 unmapped: 34742272 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92381184 unmapped: 34742272 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92389376 unmapped: 34734080 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa3f2000/0x0/0x4ffc00000, data 0x11aa7b8/0x127b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92389376 unmapped: 34734080 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa3f2000/0x0/0x4ffc00000, data 0x11aa7b8/0x127b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92389376 unmapped: 34734080 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa3f2000/0x0/0x4ffc00000, data 0x11aa7b8/0x127b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1063256 data_alloc: 218103808 data_used: 364544
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa3f2000/0x0/0x4ffc00000, data 0x11aa7b8/0x127b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92389376 unmapped: 34734080 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92389376 unmapped: 34734080 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92389376 unmapped: 34734080 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92389376 unmapped: 34734080 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa3f2000/0x0/0x4ffc00000, data 0x11aa7b8/0x127b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92389376 unmapped: 34734080 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1063256 data_alloc: 218103808 data_used: 364544
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92389376 unmapped: 34734080 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92389376 unmapped: 34734080 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92389376 unmapped: 34734080 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92389376 unmapped: 34734080 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa3f2000/0x0/0x4ffc00000, data 0x11aa7b8/0x127b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92389376 unmapped: 34734080 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1063256 data_alloc: 218103808 data_used: 364544
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92389376 unmapped: 34734080 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92389376 unmapped: 34734080 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92389376 unmapped: 34734080 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92389376 unmapped: 34734080 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa3f2000/0x0/0x4ffc00000, data 0x11aa7b8/0x127b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92389376 unmapped: 34734080 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1063416 data_alloc: 218103808 data_used: 368640
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92389376 unmapped: 34734080 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92389376 unmapped: 34734080 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92389376 unmapped: 34734080 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92389376 unmapped: 34734080 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 47.379611969s of 47.708621979s, submitted: 41
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d7ec400 session 0x55ab9e949680
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 99557376 unmapped: 27566080 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1082460 data_alloc: 218103808 data_used: 7184384
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d7e8000 session 0x55ab9d7a85a0
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa3f3000/0x0/0x4ffc00000, data 0x11aa7b8/0x127b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 103309312 unmapped: 23814144 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9b305400 session 0x55ab9e9643c0
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9a795c00 session 0x55ab9bb941e0
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9cf8c000 session 0x55ab9bba0f00
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d7e8000 session 0x55ab9bba1860
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d7ec400 session 0x55ab9d92fa40
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 103505920 unmapped: 23617536 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f9f4e000/0x0/0x4ffc00000, data 0x164f7b8/0x1720000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 103505920 unmapped: 23617536 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f9f4e000/0x0/0x4ffc00000, data 0x164f7b8/0x1720000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 103505920 unmapped: 23617536 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d7e7c00 session 0x55ab9c06e000
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9a795c00 session 0x55ab9d2ead20
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9cf8c000 session 0x55ab9d7c74a0
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d7e7c00 session 0x55ab9bba10e0
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 103505920 unmapped: 23617536 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132742 data_alloc: 234881024 data_used: 11837440
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d7e8000 session 0x55ab9c01a1e0
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d7ec400 session 0x55ab9bb983c0
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9a795c00 session 0x55ab9d2f2b40
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9cf8c000 session 0x55ab9bb94d20
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d7e7c00 session 0x55ab9a7294a0
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d7e8000 session 0x55ab9d938f00
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 104161280 unmapped: 22962176 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 104177664 unmapped: 22945792 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f9350000/0x0/0x4ffc00000, data 0x224c7c8/0x231e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9bfc4c00 session 0x55ab9e9454a0
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d1a9800 session 0x55ab9e945680
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d7f4800 session 0x55ab9e945860
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9b52ac00 session 0x55ab9e945c20
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 104751104 unmapped: 22372352 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d7f0c00 session 0x55ab9e945e00
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d7ea800 session 0x55ab9b42a1e0
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9b52ac00 session 0x55ab9c07ba40
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d1a9800 session 0x55ab9d7663c0
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d7f0c00 session 0x55ab9d767680
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d7f4800 session 0x55ab9e9445a0
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9b303000 session 0x55ab9e944960
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9b52ac00 session 0x55ab9bba0f00
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d1a9800 session 0x55ab9bba1860
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f8e5f000/0x0/0x4ffc00000, data 0x273d7c8/0x280f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 104415232 unmapped: 22708224 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d7f0c00 session 0x55ab9bba10e0
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d7f4800 session 0x55ab9bf95680
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d1a9c00 session 0x55ab9c166f00
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9b52ac00 session 0x55ab9b41c960
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d1a9800 session 0x55ab9c01a1e0
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 105570304 unmapped: 21553152 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1332327 data_alloc: 234881024 data_used: 12484608
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 105570304 unmapped: 21553152 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 106094592 unmapped: 21028864 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.299287796s of 12.889208794s, submitted: 68
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d7f0800 session 0x55ab9d2ead20
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9b303800 session 0x55ab9e945860
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 106094592 unmapped: 21028864 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d7f2400 session 0x55ab9d7c1680
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d159800 session 0x55ab9d2f3a40
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9b303800 session 0x55ab9d939c20
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 106094592 unmapped: 21028864 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9b52ac00 session 0x55ab9da5ba40
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d1a9800 session 0x55ab9c07a000
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f8ca4000/0x0/0x4ffc00000, data 0x28f684d/0x29ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d7f0800 session 0x55ab9b41c000
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 104284160 unmapped: 22839296 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1290427 data_alloc: 234881024 data_used: 11845632
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d7f0800 session 0x55ab9d2ea3c0
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9b303800 session 0x55ab9d7c61e0
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 104644608 unmapped: 22478848 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d1a9800 session 0x55ab9d446b40
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d1a8000 session 0x55ab9d767a40
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 104660992 unmapped: 22462464 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 104660992 unmapped: 22462464 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 104914944 unmapped: 22208512 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f8c7a000/0x0/0x4ffc00000, data 0x292084d/0x29f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 111738880 unmapped: 15384576 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1389010 data_alloc: 234881024 data_used: 23777280
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 115482624 unmapped: 11640832 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119603200 unmapped: 7520256 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119619584 unmapped: 7503872 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9b52ac00 session 0x55ab9d7d1680
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.269133568s of 11.341886520s, submitted: 15
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d7f3000 session 0x55ab9bd4ef00
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d159800 session 0x55ab9e945e00
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d1a9000 session 0x55ab9d7a92c0
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119619584 unmapped: 7503872 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9b303800 session 0x55ab9d09dc20
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f97ed000/0x0/0x4ffc00000, data 0x1dad7eb/0x1e80000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d1a8000 session 0x55ab9d7663c0
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 115081216 unmapped: 12042240 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1277304 data_alloc: 234881024 data_used: 19943424
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 115081216 unmapped: 12042240 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f9817000/0x0/0x4ffc00000, data 0x1d837eb/0x1e56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 115081216 unmapped: 12042240 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 115081216 unmapped: 12042240 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f9817000/0x0/0x4ffc00000, data 0x1d837eb/0x1e56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 115081216 unmapped: 12042240 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 115081216 unmapped: 12042240 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1277304 data_alloc: 234881024 data_used: 19943424
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 115081216 unmapped: 12042240 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 115081216 unmapped: 12042240 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 115081216 unmapped: 12042240 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 115081216 unmapped: 12042240 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f9817000/0x0/0x4ffc00000, data 0x1d837eb/0x1e56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 115081216 unmapped: 12042240 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1277304 data_alloc: 234881024 data_used: 19943424
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 115081216 unmapped: 12042240 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f9817000/0x0/0x4ffc00000, data 0x1d837eb/0x1e56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 115081216 unmapped: 12042240 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f9817000/0x0/0x4ffc00000, data 0x1d837eb/0x1e56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 115081216 unmapped: 12042240 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 115081216 unmapped: 12042240 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f9817000/0x0/0x4ffc00000, data 0x1d837eb/0x1e56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 115081216 unmapped: 12042240 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1277304 data_alloc: 234881024 data_used: 19943424
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f9817000/0x0/0x4ffc00000, data 0x1d837eb/0x1e56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 115081216 unmapped: 12042240 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f9817000/0x0/0x4ffc00000, data 0x1d837eb/0x1e56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f9817000/0x0/0x4ffc00000, data 0x1d837eb/0x1e56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.212007523s of 18.411972046s, submitted: 33
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 114434048 unmapped: 12689408 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9bfc7c00 session 0x55ab9d7c0b40
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d7f4800 session 0x55ab9bf94b40
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d7e6000 session 0x55ab9cf46780
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9cd3e400 session 0x55ab9da73c20
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d7f5000 session 0x55ab9d92ed20
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 114352128 unmapped: 16973824 heap: 131325952 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f92c8000/0x0/0x4ffc00000, data 0x22d37eb/0x23a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 114376704 unmapped: 16949248 heap: 131325952 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 137 ms_handle_reset con 0x55ab9d7f5000 session 0x55ab9bd683c0
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 114384896 unmapped: 16941056 heap: 131325952 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1331373 data_alloc: 234881024 data_used: 19951616
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 137 ms_handle_reset con 0x55ab9bfc7c00 session 0x55ab9d7c1860
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 137 ms_handle_reset con 0x55ab9cd3e400 session 0x55ab9e944d20
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 114384896 unmapped: 16941056 heap: 131325952 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 114384896 unmapped: 16941056 heap: 131325952 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 138 ms_handle_reset con 0x55ab9d7e6000 session 0x55ab9d7a85a0
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 114483200 unmapped: 16842752 heap: 131325952 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 114483200 unmapped: 16842752 heap: 131325952 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f92c0000/0x0/0x4ffc00000, data 0x22d6f39/0x23ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f92c0000/0x0/0x4ffc00000, data 0x22d6f39/0x23ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 114860032 unmapped: 16465920 heap: 131325952 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1338125 data_alloc: 234881024 data_used: 19951616
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 138 ms_handle_reset con 0x55ab9d7f2c00 session 0x55ab9b4410e0
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f92c0000/0x0/0x4ffc00000, data 0x22d6f39/0x23ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 16457728 heap: 131325952 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119799808 unmapped: 11526144 heap: 131325952 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119799808 unmapped: 11526144 heap: 131325952 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.638536453s of 11.198720932s, submitted: 140
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119865344 unmapped: 11460608 heap: 131325952 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8a04000/0x0/0x4ffc00000, data 0x2b8a99c/0x2c61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [1])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119906304 unmapped: 11419648 heap: 131325952 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1419475 data_alloc: 234881024 data_used: 20987904
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119955456 unmapped: 11370496 heap: 131325952 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 121405440 unmapped: 9920512 heap: 131325952 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 121405440 unmapped: 9920512 heap: 131325952 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8a04000/0x0/0x4ffc00000, data 0x2b8a99c/0x2c61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 121421824 unmapped: 9904128 heap: 131325952 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 121446400 unmapped: 9879552 heap: 131325952 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1452611 data_alloc: 234881024 data_used: 25559040
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 121446400 unmapped: 9879552 heap: 131325952 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8a04000/0x0/0x4ffc00000, data 0x2b8a99c/0x2c61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 121446400 unmapped: 9879552 heap: 131325952 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8a04000/0x0/0x4ffc00000, data 0x2b8a99c/0x2c61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 121446400 unmapped: 9879552 heap: 131325952 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 121446400 unmapped: 9879552 heap: 131325952 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 121446400 unmapped: 9879552 heap: 131325952 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1452611 data_alloc: 234881024 data_used: 25559040
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 121446400 unmapped: 9879552 heap: 131325952 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 121446400 unmapped: 9879552 heap: 131325952 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8a04000/0x0/0x4ffc00000, data 0x2b8a99c/0x2c61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 121479168 unmapped: 9846784 heap: 131325952 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 121479168 unmapped: 9846784 heap: 131325952 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 121487360 unmapped: 9838592 heap: 131325952 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1452611 data_alloc: 234881024 data_used: 25559040
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8a04000/0x0/0x4ffc00000, data 0x2b8a99c/0x2c61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 121520128 unmapped: 9805824 heap: 131325952 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8a04000/0x0/0x4ffc00000, data 0x2b8a99c/0x2c61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.850786209s of 18.968877792s, submitted: 30
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 12247040 heap: 131325952 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 139 ms_handle_reset con 0x55ab9d7e6c00 session 0x55ab9c01a1e0
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 139 ms_handle_reset con 0x55ab9d7e6000 session 0x55ab9d2ced20
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 139 ms_handle_reset con 0x55ab9b303800 session 0x55ab9b138780
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 139 ms_handle_reset con 0x55ab9d158400 session 0x55ab9d7c10e0
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 139 ms_handle_reset con 0x55ab9cf8c000 session 0x55ab9cf752c0
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119611392 unmapped: 14868480 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119611392 unmapped: 14868480 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119611392 unmapped: 14868480 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1499789 data_alloc: 234881024 data_used: 25559040
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119611392 unmapped: 14868480 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f83a0000/0x0/0x4ffc00000, data 0x31f69fe/0x32ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119611392 unmapped: 14868480 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 139 ms_handle_reset con 0x55ab9d158800 session 0x55ab9bb98b40
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119611392 unmapped: 14868480 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 139 ms_handle_reset con 0x55ab9d7e9000 session 0x55ab9b440960
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119611392 unmapped: 14868480 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 139 ms_handle_reset con 0x55ab9d7eb000 session 0x55ab9bb99680
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 139 ms_handle_reset con 0x55ab9d7e9c00 session 0x55ab9da2c3c0
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119660544 unmapped: 14819328 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501936 data_alloc: 234881024 data_used: 25559040
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f839e000/0x0/0x4ffc00000, data 0x31f6a31/0x32d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119676928 unmapped: 14802944 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119676928 unmapped: 14802944 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f839e000/0x0/0x4ffc00000, data 0x31f6a31/0x32d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 120455168 unmapped: 14024704 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 122355712 unmapped: 12124160 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.693739891s of 12.235367775s, submitted: 47
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 122355712 unmapped: 12124160 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1524484 data_alloc: 251658240 data_used: 28762112
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f839e000/0x0/0x4ffc00000, data 0x31f6a31/0x32d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 122388480 unmapped: 12091392 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 122388480 unmapped: 12091392 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 123691008 unmapped: 10788864 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126533632 unmapped: 7946240 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f7913000/0x0/0x4ffc00000, data 0x3c7ba31/0x3d55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1619524 data_alloc: 251658240 data_used: 29175808
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127737856 unmapped: 6742016 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 139 ms_handle_reset con 0x55ab9d7e4c00 session 0x55ab9d7a8f00
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 139 ms_handle_reset con 0x55ab9cdaa000 session 0x55ab9e945c20
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127787008 unmapped: 6692864 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 139 ms_handle_reset con 0x55ab9d158800 session 0x55ab9d7a94a0
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127959040 unmapped: 6520832 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127959040 unmapped: 6520832 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 128958464 unmapped: 5521408 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f785d000/0x0/0x4ffc00000, data 0x3d29a31/0x3e03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1631582 data_alloc: 251658240 data_used: 29663232
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 129318912 unmapped: 5160960 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.974658012s of 11.352388382s, submitted: 112
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 129212416 unmapped: 5267456 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 129212416 unmapped: 5267456 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 129212416 unmapped: 5267456 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f7846000/0x0/0x4ffc00000, data 0x3d4ea31/0x3e28000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f7846000/0x0/0x4ffc00000, data 0x3d4ea31/0x3e28000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 129212416 unmapped: 5267456 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f7846000/0x0/0x4ffc00000, data 0x3d4ea31/0x3e28000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1623038 data_alloc: 251658240 data_used: 29671424
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 129212416 unmapped: 5267456 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 129212416 unmapped: 5267456 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 129220608 unmapped: 5259264 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 129220608 unmapped: 5259264 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 129220608 unmapped: 5259264 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1623038 data_alloc: 251658240 data_used: 29671424
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 129220608 unmapped: 5259264 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f7846000/0x0/0x4ffc00000, data 0x3d4ea31/0x3e28000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 129228800 unmapped: 5251072 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 129228800 unmapped: 5251072 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.003317833s of 12.024734497s, submitted: 4
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 129228800 unmapped: 5251072 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 129228800 unmapped: 5251072 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f7843000/0x0/0x4ffc00000, data 0x3d51a31/0x3e2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1622906 data_alloc: 251658240 data_used: 29671424
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 129228800 unmapped: 5251072 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 129228800 unmapped: 5251072 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 129228800 unmapped: 5251072 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f7843000/0x0/0x4ffc00000, data 0x3d51a31/0x3e2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 129228800 unmapped: 5251072 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 129228800 unmapped: 5251072 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1624471 data_alloc: 251658240 data_used: 29671424
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 129236992 unmapped: 5242880 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f7843000/0x0/0x4ffc00000, data 0x3d51a31/0x3e2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 129236992 unmapped: 5242880 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f76eb000/0x0/0x4ffc00000, data 0x3ea8a54/0x3f83000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 129884160 unmapped: 21381120 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.003591537s of 10.582101822s, submitted: 117
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 129204224 unmapped: 22061056 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d7e7800 session 0x55ab9e72d0e0
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 129204224 unmapped: 22061056 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1732557 data_alloc: 251658240 data_used: 29777920
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 129212416 unmapped: 22052864 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f6aa1000/0x0/0x4ffc00000, data 0x4aef5f4/0x4bcc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 129220608 unmapped: 22044672 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 129220608 unmapped: 22044672 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 129220608 unmapped: 22044672 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f6a80000/0x0/0x4ffc00000, data 0x4b115f4/0x4bee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 129220608 unmapped: 22044672 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1733549 data_alloc: 251658240 data_used: 29798400
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 129220608 unmapped: 22044672 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 128614400 unmapped: 22650880 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 128679936 unmapped: 22585344 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 128983040 unmapped: 22282240 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.304215431s of 10.410350800s, submitted: 18
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 129114112 unmapped: 22151168 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f6a70000/0x0/0x4ffc00000, data 0x4b215f4/0x4bfe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d525000 session 0x55ab9bb94b40
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9bfc7800 session 0x55ab9bb941e0
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9cdaa000 session 0x55ab9e9441e0
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1741613 data_alloc: 251658240 data_used: 30396416
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 129114112 unmapped: 22151168 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d158800 session 0x55ab9e9454a0
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 129458176 unmapped: 21807104 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d525000 session 0x55ab9e944f00
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132718592 unmapped: 18546688 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d7e9800 session 0x55ab9e944780
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d7e8400 session 0x55ab9e9445a0
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9cdaa000 session 0x55ab9bf94f00
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d158800 session 0x55ab9b41c960
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d525000 session 0x55ab9c2bbc20
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d7e9800 session 0x55ab9a7294a0
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132038656 unmapped: 19226624 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132038656 unmapped: 19226624 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f6352000/0x0/0x4ffc00000, data 0x523e604/0x531c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1809372 data_alloc: 251658240 data_used: 31444992
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132038656 unmapped: 19226624 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132038656 unmapped: 19226624 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132038656 unmapped: 19226624 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 131432448 unmapped: 19832832 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 131432448 unmapped: 19832832 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f634f000/0x0/0x4ffc00000, data 0x5241604/0x531f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1809548 data_alloc: 251658240 data_used: 31444992
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 131432448 unmapped: 19832832 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.554804802s of 12.779651642s, submitted: 38
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 131432448 unmapped: 19832832 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f634f000/0x0/0x4ffc00000, data 0x5241604/0x531f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9bfc4800 session 0x55ab9d2c4b40
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 131432448 unmapped: 19832832 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9cdaa000 session 0x55ab9d2c5860
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f634f000/0x0/0x4ffc00000, data 0x5241604/0x531f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 131432448 unmapped: 19832832 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d158800 session 0x55ab9d2c45a0
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 131432448 unmapped: 19832832 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d525000 session 0x55ab9d2c5e00
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1816052 data_alloc: 251658240 data_used: 31444992
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d7f2c00 session 0x55ab9cf46d20
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 131809280 unmapped: 19456000 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9bfc7c00 session 0x55ab9c01a5a0
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f6324000/0x0/0x4ffc00000, data 0x526b627/0x534a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d158800 session 0x55ab9bba1c20
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 20799488 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f73fe000/0x0/0x4ffc00000, data 0x4191627/0x4270000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 20799488 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 131391488 unmapped: 19873792 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133570560 unmapped: 17694720 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1726156 data_alloc: 251658240 data_used: 37060608
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133570560 unmapped: 17694720 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133570560 unmapped: 17694720 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133570560 unmapped: 17694720 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.452157021s of 11.654810905s, submitted: 44
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f73fc000/0x0/0x4ffc00000, data 0x4192627/0x4271000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133570560 unmapped: 17694720 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133611520 unmapped: 17653760 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1726748 data_alloc: 251658240 data_used: 37068800
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133611520 unmapped: 17653760 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d7edc00 session 0x55ab9da73a40
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d7f6000 session 0x55ab9d2e1680
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f73f1000/0x0/0x4ffc00000, data 0x419e627/0x427d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d7f6400 session 0x55ab9b41d2c0
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132079616 unmapped: 19185664 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132079616 unmapped: 19185664 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8037000/0x0/0x4ffc00000, data 0x3559592/0x3635000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132112384 unmapped: 19152896 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8037000/0x0/0x4ffc00000, data 0x3559592/0x3635000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9cd46c00 session 0x55ab9d7a8960
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9cdabc00 session 0x55ab9d2e0f00
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132145152 unmapped: 19120128 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d158800 session 0x55ab9d2e0b40
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1328035 data_alloc: 234881024 data_used: 19718144
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 27688960 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f949b000/0x0/0x4ffc00000, data 0x20f855f/0x21d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 27688960 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 27688960 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 27688960 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 27688960 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f949b000/0x0/0x4ffc00000, data 0x20f855f/0x21d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1328035 data_alloc: 234881024 data_used: 19718144
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 27688960 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 27688960 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 27688960 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 27688960 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 27688960 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1328035 data_alloc: 234881024 data_used: 19718144
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f949b000/0x0/0x4ffc00000, data 0x20f855f/0x21d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 27688960 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 27688960 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 27688960 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f949b000/0x0/0x4ffc00000, data 0x20f855f/0x21d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 27688960 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 27688960 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1328035 data_alloc: 234881024 data_used: 19718144
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 27688960 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 27688960 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 27688960 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f949b000/0x0/0x4ffc00000, data 0x20f855f/0x21d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 27688960 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 25.466674805s of 25.778623581s, submitted: 69
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f949b000/0x0/0x4ffc00000, data 0x20f855f/0x21d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127623168 unmapped: 23642112 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1399325 data_alloc: 234881024 data_used: 20193280
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127565824 unmapped: 23699456 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8bd2000/0x0/0x4ffc00000, data 0x29c255f/0x2a9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127565824 unmapped: 23699456 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127565824 unmapped: 23699456 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127565824 unmapped: 23699456 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127565824 unmapped: 23699456 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8bca000/0x0/0x4ffc00000, data 0x29ca55f/0x2aa4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1405631 data_alloc: 234881024 data_used: 20189184
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127565824 unmapped: 23699456 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8bca000/0x0/0x4ffc00000, data 0x29ca55f/0x2aa4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127565824 unmapped: 23699456 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126656512 unmapped: 24608768 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126656512 unmapped: 24608768 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8bba000/0x0/0x4ffc00000, data 0x29da55f/0x2ab4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126656512 unmapped: 24608768 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1402495 data_alloc: 234881024 data_used: 20189184
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126656512 unmapped: 24608768 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126656512 unmapped: 24608768 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126656512 unmapped: 24608768 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126656512 unmapped: 24608768 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126656512 unmapped: 24608768 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8bba000/0x0/0x4ffc00000, data 0x29da55f/0x2ab4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8bba000/0x0/0x4ffc00000, data 0x29da55f/0x2ab4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1402495 data_alloc: 234881024 data_used: 20189184
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126656512 unmapped: 24608768 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126656512 unmapped: 24608768 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126656512 unmapped: 24608768 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.735033035s of 20.114135742s, submitted: 83
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126656512 unmapped: 24608768 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8bb7000/0x0/0x4ffc00000, data 0x29dd55f/0x2ab7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126697472 unmapped: 24567808 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1402651 data_alloc: 234881024 data_used: 20189184
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126697472 unmapped: 24567808 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126705664 unmapped: 24559616 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126705664 unmapped: 24559616 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8bb7000/0x0/0x4ffc00000, data 0x29dd55f/0x2ab7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126705664 unmapped: 24559616 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126705664 unmapped: 24559616 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1402651 data_alloc: 234881024 data_used: 20189184
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126705664 unmapped: 24559616 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126705664 unmapped: 24559616 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126705664 unmapped: 24559616 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126705664 unmapped: 24559616 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8bb7000/0x0/0x4ffc00000, data 0x29dd55f/0x2ab7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126713856 unmapped: 24551424 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1402651 data_alloc: 234881024 data_used: 20189184
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126713856 unmapped: 24551424 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.520511627s of 12.538968086s, submitted: 3
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126713856 unmapped: 24551424 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8bb7000/0x0/0x4ffc00000, data 0x29dd55f/0x2ab7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126713856 unmapped: 24551424 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126713856 unmapped: 24551424 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126713856 unmapped: 24551424 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1403003 data_alloc: 234881024 data_used: 20189184
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126713856 unmapped: 24551424 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126713856 unmapped: 24551424 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8bb7000/0x0/0x4ffc00000, data 0x29dd55f/0x2ab7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126722048 unmapped: 24543232 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126722048 unmapped: 24543232 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126722048 unmapped: 24543232 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1403003 data_alloc: 234881024 data_used: 20189184
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126722048 unmapped: 24543232 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126722048 unmapped: 24543232 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126722048 unmapped: 24543232 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8bb7000/0x0/0x4ffc00000, data 0x29dd55f/0x2ab7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126722048 unmapped: 24543232 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8bb7000/0x0/0x4ffc00000, data 0x29dd55f/0x2ab7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126722048 unmapped: 24543232 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8bb7000/0x0/0x4ffc00000, data 0x29dd55f/0x2ab7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1403003 data_alloc: 234881024 data_used: 20189184
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 24535040 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 24535040 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 24535040 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8bb7000/0x0/0x4ffc00000, data 0x29dd55f/0x2ab7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126705664 unmapped: 24559616 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126705664 unmapped: 24559616 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1403003 data_alloc: 234881024 data_used: 20189184
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126705664 unmapped: 24559616 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126705664 unmapped: 24559616 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126705664 unmapped: 24559616 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126713856 unmapped: 24551424 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8bb7000/0x0/0x4ffc00000, data 0x29dd55f/0x2ab7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126713856 unmapped: 24551424 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1403003 data_alloc: 234881024 data_used: 20189184
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126713856 unmapped: 24551424 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126713856 unmapped: 24551424 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126713856 unmapped: 24551424 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126713856 unmapped: 24551424 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8bb7000/0x0/0x4ffc00000, data 0x29dd55f/0x2ab7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126713856 unmapped: 24551424 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1403003 data_alloc: 234881024 data_used: 20189184
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8bb7000/0x0/0x4ffc00000, data 0x29dd55f/0x2ab7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126713856 unmapped: 24551424 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9cf8c800 session 0x55ab9d7a9a40
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d1a8400 session 0x55ab9c16c3c0
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d7e8800 session 0x55ab9c16d2c0
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126722048 unmapped: 24543232 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d1a9800 session 0x55ab9c16de00
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 30.986988068s of 30.994596481s, submitted: 2
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9cdabc00 session 0x55ab9bf94d20
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9cf8c800 session 0x55ab9d92f680
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d158800 session 0x55ab9d92ef00
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d1a8400 session 0x55ab9b1db4a0
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9cdabc00 session 0x55ab9bb985a0
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126263296 unmapped: 25001984 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126263296 unmapped: 25001984 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126263296 unmapped: 25001984 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1427586 data_alloc: 234881024 data_used: 20189184
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126263296 unmapped: 25001984 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8924000/0x0/0x4ffc00000, data 0x2c6f56f/0x2d4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126263296 unmapped: 25001984 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8924000/0x0/0x4ffc00000, data 0x2c6f56f/0x2d4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126263296 unmapped: 25001984 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125026304 unmapped: 26238976 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d524c00 session 0x55ab9d7c72c0
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125026304 unmapped: 26238976 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1430442 data_alloc: 234881024 data_used: 20193280
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125026304 unmapped: 26238976 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125026304 unmapped: 26238976 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125026304 unmapped: 26238976 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8900000/0x0/0x4ffc00000, data 0x2c9356f/0x2d6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125026304 unmapped: 26238976 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125026304 unmapped: 26238976 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1450122 data_alloc: 234881024 data_used: 22892544
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125026304 unmapped: 26238976 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125026304 unmapped: 26238976 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8900000/0x0/0x4ffc00000, data 0x2c9356f/0x2d6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125026304 unmapped: 26238976 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8900000/0x0/0x4ffc00000, data 0x2c9356f/0x2d6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125026304 unmapped: 26238976 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125026304 unmapped: 26238976 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1450122 data_alloc: 234881024 data_used: 22892544
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125026304 unmapped: 26238976 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125026304 unmapped: 26238976 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125026304 unmapped: 26238976 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125034496 unmapped: 26230784 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8900000/0x0/0x4ffc00000, data 0x2c9356f/0x2d6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125034496 unmapped: 26230784 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1450122 data_alloc: 234881024 data_used: 22892544
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125034496 unmapped: 26230784 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125034496 unmapped: 26230784 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8900000/0x0/0x4ffc00000, data 0x2c9356f/0x2d6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125034496 unmapped: 26230784 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125042688 unmapped: 26222592 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125050880 unmapped: 26214400 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1450122 data_alloc: 234881024 data_used: 22892544
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125050880 unmapped: 26214400 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125050880 unmapped: 26214400 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8900000/0x0/0x4ffc00000, data 0x2c9356f/0x2d6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125050880 unmapped: 26214400 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125050880 unmapped: 26214400 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125050880 unmapped: 26214400 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1450122 data_alloc: 234881024 data_used: 22892544
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125050880 unmapped: 26214400 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 26206208 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8900000/0x0/0x4ffc00000, data 0x2c9356f/0x2d6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 26206208 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8900000/0x0/0x4ffc00000, data 0x2c9356f/0x2d6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 26206208 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 26206208 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8900000/0x0/0x4ffc00000, data 0x2c9356f/0x2d6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1450122 data_alloc: 234881024 data_used: 22892544
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 26198016 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8900000/0x0/0x4ffc00000, data 0x2c9356f/0x2d6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125075456 unmapped: 26189824 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125075456 unmapped: 26189824 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125075456 unmapped: 26189824 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125075456 unmapped: 26189824 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 43.284614563s of 43.411251068s, submitted: 16
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1516584 data_alloc: 234881024 data_used: 23195648
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 128139264 unmapped: 23126016 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f80e2000/0x0/0x4ffc00000, data 0x34b156f/0x358c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 128163840 unmapped: 23101440 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 24125440 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 24125440 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f80bb000/0x0/0x4ffc00000, data 0x34d856f/0x35b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 24125440 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1526666 data_alloc: 234881024 data_used: 23326720
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 24125440 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f80bb000/0x0/0x4ffc00000, data 0x34d856f/0x35b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 24125440 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f80bb000/0x0/0x4ffc00000, data 0x34d856f/0x35b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 24125440 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 24125440 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 24125440 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f80b8000/0x0/0x4ffc00000, data 0x34db56f/0x35b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1525850 data_alloc: 234881024 data_used: 23326720
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 24125440 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 24125440 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 24125440 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.226646423s of 13.443584442s, submitted: 62
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 24125440 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f80b8000/0x0/0x4ffc00000, data 0x34db56f/0x35b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 24125440 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1526078 data_alloc: 234881024 data_used: 23326720
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 24125440 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 24125440 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f80b7000/0x0/0x4ffc00000, data 0x34dc56f/0x35b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 24125440 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 24125440 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 24125440 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1525550 data_alloc: 234881024 data_used: 23326720
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 24125440 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 24125440 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f80b7000/0x0/0x4ffc00000, data 0x34dc56f/0x35b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 24125440 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 24125440 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f80b7000/0x0/0x4ffc00000, data 0x34dc56f/0x35b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 24125440 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9cd45c00 session 0x55ab9d766b40
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d1a9800 session 0x55ab9d766000
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9b302000 session 0x55ab9d7672c0
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9b302000 session 0x55ab9da721e0
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.944172859s of 11.963119507s, submitted: 2
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9cd45c00 session 0x55ab9d2ce000
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9cdabc00 session 0x55ab9d7c70e0
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d1a9800 session 0x55ab9d446d20
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1547706 data_alloc: 234881024 data_used: 23326720
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d524c00 session 0x55ab9d766d20
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127213568 unmapped: 24051712 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d524c00 session 0x55ab9d7a8780
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127213568 unmapped: 24051712 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127213568 unmapped: 24051712 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e65000/0x0/0x4ffc00000, data 0x372e56f/0x3809000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127213568 unmapped: 24051712 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127213568 unmapped: 24051712 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1547706 data_alloc: 234881024 data_used: 23326720
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127213568 unmapped: 24051712 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e65000/0x0/0x4ffc00000, data 0x372e56f/0x3809000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126664704 unmapped: 24600576 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d525000 session 0x55ab9da732c0
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126664704 unmapped: 24600576 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126664704 unmapped: 24600576 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126697472 unmapped: 24567808 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1555406 data_alloc: 234881024 data_used: 23949312
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126697472 unmapped: 24567808 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127655936 unmapped: 23609344 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e40000/0x0/0x4ffc00000, data 0x375256f/0x382d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127655936 unmapped: 23609344 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127655936 unmapped: 23609344 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127655936 unmapped: 23609344 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1566766 data_alloc: 234881024 data_used: 25563136
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127655936 unmapped: 23609344 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127655936 unmapped: 23609344 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e40000/0x0/0x4ffc00000, data 0x375256f/0x382d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e40000/0x0/0x4ffc00000, data 0x375256f/0x382d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127655936 unmapped: 23609344 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127664128 unmapped: 23601152 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127664128 unmapped: 23601152 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e40000/0x0/0x4ffc00000, data 0x375256f/0x382d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1566766 data_alloc: 234881024 data_used: 25563136
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127664128 unmapped: 23601152 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127664128 unmapped: 23601152 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127664128 unmapped: 23601152 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127664128 unmapped: 23601152 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127664128 unmapped: 23601152 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1566766 data_alloc: 234881024 data_used: 25563136
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e40000/0x0/0x4ffc00000, data 0x375256f/0x382d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127664128 unmapped: 23601152 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127664128 unmapped: 23601152 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e40000/0x0/0x4ffc00000, data 0x375256f/0x382d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127664128 unmapped: 23601152 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127664128 unmapped: 23601152 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127672320 unmapped: 23592960 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1566766 data_alloc: 234881024 data_used: 25563136
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127672320 unmapped: 23592960 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e40000/0x0/0x4ffc00000, data 0x375256f/0x382d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127672320 unmapped: 23592960 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127672320 unmapped: 23592960 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127680512 unmapped: 23584768 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127680512 unmapped: 23584768 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e40000/0x0/0x4ffc00000, data 0x375256f/0x382d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1566766 data_alloc: 234881024 data_used: 25563136
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127680512 unmapped: 23584768 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e40000/0x0/0x4ffc00000, data 0x375256f/0x382d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127680512 unmapped: 23584768 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e40000/0x0/0x4ffc00000, data 0x375256f/0x382d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127680512 unmapped: 23584768 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127680512 unmapped: 23584768 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127680512 unmapped: 23584768 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1566766 data_alloc: 234881024 data_used: 25563136
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127680512 unmapped: 23584768 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e40000/0x0/0x4ffc00000, data 0x375256f/0x382d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127680512 unmapped: 23584768 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127688704 unmapped: 23576576 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127688704 unmapped: 23576576 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 44.070636749s of 44.145889282s, submitted: 9
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 131506176 unmapped: 19759104 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:32 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:32 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1653670 data_alloc: 234881024 data_used: 26386432
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 131547136 unmapped: 19718144 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f738d000/0x0/0x4ffc00000, data 0x420656f/0x42e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,1])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 131121152 unmapped: 20144128 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 131121152 unmapped: 20144128 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f730f000/0x0/0x4ffc00000, data 0x428456f/0x435f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 131121152 unmapped: 20144128 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:32 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 131121152 unmapped: 20144128 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1661948 data_alloc: 234881024 data_used: 26673152
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 131121152 unmapped: 20144128 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 131121152 unmapped: 20144128 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 131252224 unmapped: 20013056 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f72ef000/0x0/0x4ffc00000, data 0x42a456f/0x437f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 131252224 unmapped: 20013056 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f72ef000/0x0/0x4ffc00000, data 0x42a456f/0x437f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 131252224 unmapped: 20013056 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d1a8800 session 0x55ab9b41f860
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d7f7800 session 0x55ab9d7a83c0
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.831089020s of 11.189863205s, submitted: 67
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1539642 data_alloc: 234881024 data_used: 23437312
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d7ee400 session 0x55ab9c01b2c0
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127877120 unmapped: 23388160 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f80b7000/0x0/0x4ffc00000, data 0x34dc56f/0x35b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127877120 unmapped: 23388160 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127877120 unmapped: 23388160 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127877120 unmapped: 23388160 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f80b7000/0x0/0x4ffc00000, data 0x34dc56f/0x35b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127877120 unmapped: 23388160 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d7e7400 session 0x55ab9d7c6b40
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9b76f400 session 0x55ab9bb94b40
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1533102 data_alloc: 234881024 data_used: 23326720
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127893504 unmapped: 23371776 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d159000 session 0x55ab9d7c01e0
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8bb7000/0x0/0x4ffc00000, data 0x29dd55f/0x2ab7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1420910 data_alloc: 234881024 data_used: 20189184
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8bb7000/0x0/0x4ffc00000, data 0x29dd55f/0x2ab7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1420910 data_alloc: 234881024 data_used: 20189184
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8bb7000/0x0/0x4ffc00000, data 0x29dd55f/0x2ab7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1420910 data_alloc: 234881024 data_used: 20189184
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8bb7000/0x0/0x4ffc00000, data 0x29dd55f/0x2ab7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1420910 data_alloc: 234881024 data_used: 20189184
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8bb7000/0x0/0x4ffc00000, data 0x29dd55f/0x2ab7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8bb7000/0x0/0x4ffc00000, data 0x29dd55f/0x2ab7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1420910 data_alloc: 234881024 data_used: 20189184
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8bb7000/0x0/0x4ffc00000, data 0x29dd55f/0x2ab7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8bb7000/0x0/0x4ffc00000, data 0x29dd55f/0x2ab7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1420910 data_alloc: 234881024 data_used: 20189184
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8bb7000/0x0/0x4ffc00000, data 0x29dd55f/0x2ab7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1420910 data_alloc: 234881024 data_used: 20189184
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8bb7000/0x0/0x4ffc00000, data 0x29dd55f/0x2ab7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8bb7000/0x0/0x4ffc00000, data 0x29dd55f/0x2ab7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1420910 data_alloc: 234881024 data_used: 20189184
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8bb7000/0x0/0x4ffc00000, data 0x29dd55f/0x2ab7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 49.699962616s of 49.828170776s, submitted: 34
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127844352 unmapped: 29736960 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d7ed400 session 0x55ab9da73a40
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9bfc2400 session 0x55ab9da723c0
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9b76f400 session 0x55ab9da730e0
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9bfc2400 session 0x55ab9d92f680
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d159000 session 0x55ab9d92fc20
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1478296 data_alloc: 234881024 data_used: 20189184
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127844352 unmapped: 29736960 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127844352 unmapped: 29736960 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f84be000/0x0/0x4ffc00000, data 0x30d655f/0x31b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127844352 unmapped: 29736960 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d7e4c00 session 0x55ab9d92e960
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127844352 unmapped: 29736960 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d7f4000 session 0x55ab9d92e3c0
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f84be000/0x0/0x4ffc00000, data 0x30d655f/0x31b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127860736 unmapped: 29720576 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9b76f400 session 0x55ab9d92f4a0
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9bfc2400 session 0x55ab9d92e000
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1478296 data_alloc: 234881024 data_used: 20189184
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f84be000/0x0/0x4ffc00000, data 0x30d655f/0x31b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127860736 unmapped: 29720576 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127860736 unmapped: 29720576 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127901696 unmapped: 29679616 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f84be000/0x0/0x4ffc00000, data 0x30d655f/0x31b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [1])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 128827392 unmapped: 28753920 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130113536 unmapped: 27467776 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1529016 data_alloc: 251658240 data_used: 27299840
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130113536 unmapped: 27467776 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130113536 unmapped: 27467776 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130113536 unmapped: 27467776 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f84be000/0x0/0x4ffc00000, data 0x30d655f/0x31b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130113536 unmapped: 27467776 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130113536 unmapped: 27467776 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1529016 data_alloc: 251658240 data_used: 27299840
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130113536 unmapped: 27467776 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f84be000/0x0/0x4ffc00000, data 0x30d655f/0x31b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130113536 unmapped: 27467776 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 3600.2 total, 600.0 interval
Cumulative writes: 9455 writes, 37K keys, 9455 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
Cumulative WAL: 9455 writes, 2473 syncs, 3.82 writes per sync, written: 0.03 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 2028 writes, 7848 keys, 2028 commit groups, 1.0 writes per commit group, ingest: 8.51 MB, 0.01 MB/s
Interval WAL: 2028 writes, 827 syncs, 2.45 writes per sync, written: 0.01 GB, 0.01 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f84be000/0x0/0x4ffc00000, data 0x30d655f/0x31b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130113536 unmapped: 27467776 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130113536 unmapped: 27467776 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130113536 unmapped: 27467776 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1529016 data_alloc: 251658240 data_used: 27299840
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130113536 unmapped: 27467776 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130113536 unmapped: 27467776 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130113536 unmapped: 27467776 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f84be000/0x0/0x4ffc00000, data 0x30d655f/0x31b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130121728 unmapped: 27459584 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: mgrc ms_handle_reset ms_handle_reset con 0x55ab9ce08400
Dec  3 19:16:33 compute-0 ceph-osd[208881]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/817799961
Dec  3 19:16:33 compute-0 ceph-osd[208881]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/817799961,v1:192.168.122.100:6801/817799961]
Dec  3 19:16:33 compute-0 ceph-osd[208881]: mgrc handle_mgr_configure stats_period=5
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130318336 unmapped: 27262976 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f84be000/0x0/0x4ffc00000, data 0x30d655f/0x31b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1529016 data_alloc: 251658240 data_used: 27299840
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130351104 unmapped: 27230208 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130351104 unmapped: 27230208 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130351104 unmapped: 27230208 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f84be000/0x0/0x4ffc00000, data 0x30d655f/0x31b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130383872 unmapped: 27197440 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130383872 unmapped: 27197440 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f84be000/0x0/0x4ffc00000, data 0x30d655f/0x31b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1529016 data_alloc: 251658240 data_used: 27299840
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130383872 unmapped: 27197440 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130383872 unmapped: 27197440 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130383872 unmapped: 27197440 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130383872 unmapped: 27197440 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130383872 unmapped: 27197440 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f84be000/0x0/0x4ffc00000, data 0x30d655f/0x31b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1529016 data_alloc: 251658240 data_used: 27299840
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130383872 unmapped: 27197440 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f84be000/0x0/0x4ffc00000, data 0x30d655f/0x31b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130383872 unmapped: 27197440 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130383872 unmapped: 27197440 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130383872 unmapped: 27197440 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130383872 unmapped: 27197440 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 40.604251862s of 40.679466248s, submitted: 12
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1602660 data_alloc: 251658240 data_used: 27291648
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 135077888 unmapped: 22503424 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 135077888 unmapped: 22503424 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7bb0000/0x0/0x4ffc00000, data 0x39d655f/0x3ab0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133644288 unmapped: 23937024 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133644288 unmapped: 23937024 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133644288 unmapped: 23937024 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1611264 data_alloc: 251658240 data_used: 28352512
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133644288 unmapped: 23937024 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133644288 unmapped: 23937024 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b89000/0x0/0x4ffc00000, data 0x3a0355f/0x3add000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133677056 unmapped: 23904256 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132857856 unmapped: 24723456 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132857856 unmapped: 24723456 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b8f000/0x0/0x4ffc00000, data 0x3a0555f/0x3adf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1605304 data_alloc: 251658240 data_used: 28352512
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132857856 unmapped: 24723456 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132857856 unmapped: 24723456 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132857856 unmapped: 24723456 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132857856 unmapped: 24723456 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b8f000/0x0/0x4ffc00000, data 0x3a0555f/0x3adf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132857856 unmapped: 24723456 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1605304 data_alloc: 251658240 data_used: 28352512
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132857856 unmapped: 24723456 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132857856 unmapped: 24723456 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132857856 unmapped: 24723456 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b8f000/0x0/0x4ffc00000, data 0x3a0555f/0x3adf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132874240 unmapped: 24707072 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b8f000/0x0/0x4ffc00000, data 0x3a0555f/0x3adf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132874240 unmapped: 24707072 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1605304 data_alloc: 251658240 data_used: 28352512
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132874240 unmapped: 24707072 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132874240 unmapped: 24707072 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132874240 unmapped: 24707072 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 23.374584198s of 23.755279541s, submitted: 109
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132874240 unmapped: 24707072 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132874240 unmapped: 24707072 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b8f000/0x0/0x4ffc00000, data 0x3a0555f/0x3adf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1606392 data_alloc: 251658240 data_used: 28688384
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132874240 unmapped: 24707072 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132874240 unmapped: 24707072 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b8f000/0x0/0x4ffc00000, data 0x3a0555f/0x3adf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132874240 unmapped: 24707072 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b8f000/0x0/0x4ffc00000, data 0x3a0555f/0x3adf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132874240 unmapped: 24707072 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132988928 unmapped: 24592384 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1607012 data_alloc: 251658240 data_used: 28688384
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132988928 unmapped: 24592384 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132988928 unmapped: 24592384 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3a1c55f/0x3af6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132988928 unmapped: 24592384 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132988928 unmapped: 24592384 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132988928 unmapped: 24592384 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3a1c55f/0x3af6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1607188 data_alloc: 251658240 data_used: 28688384
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3a1c55f/0x3af6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132988928 unmapped: 24592384 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3a1c55f/0x3af6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132988928 unmapped: 24592384 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132988928 unmapped: 24592384 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.286112785s of 14.325413704s, submitted: 6
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132988928 unmapped: 24592384 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132988928 unmapped: 24592384 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1607508 data_alloc: 251658240 data_used: 28696576
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3a1c55f/0x3af6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132988928 unmapped: 24592384 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3a1c55f/0x3af6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132988928 unmapped: 24592384 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132988928 unmapped: 24592384 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132988928 unmapped: 24592384 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3a1c55f/0x3af6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132997120 unmapped: 24584192 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1608276 data_alloc: 251658240 data_used: 28696576
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133079040 unmapped: 24502272 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133079040 unmapped: 24502272 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133079040 unmapped: 24502272 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133079040 unmapped: 24502272 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133079040 unmapped: 24502272 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1607752 data_alloc: 251658240 data_used: 28696576
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b63000/0x0/0x4ffc00000, data 0x3a3155f/0x3b0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133079040 unmapped: 24502272 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133079040 unmapped: 24502272 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133079040 unmapped: 24502272 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b63000/0x0/0x4ffc00000, data 0x3a3155f/0x3b0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133079040 unmapped: 24502272 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133079040 unmapped: 24502272 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b63000/0x0/0x4ffc00000, data 0x3a3155f/0x3b0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1607752 data_alloc: 251658240 data_used: 28696576
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133079040 unmapped: 24502272 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133079040 unmapped: 24502272 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133079040 unmapped: 24502272 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b63000/0x0/0x4ffc00000, data 0x3a3155f/0x3b0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133079040 unmapped: 24502272 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133087232 unmapped: 24494080 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1607752 data_alloc: 251658240 data_used: 28696576
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133087232 unmapped: 24494080 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133087232 unmapped: 24494080 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9b305c00 session 0x55ab9bba12c0
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b63000/0x0/0x4ffc00000, data 0x3a3155f/0x3b0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133087232 unmapped: 24494080 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 25.582841873s of 25.602600098s, submitted: 3
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133095424 unmapped: 24485888 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133095424 unmapped: 24485888 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609996 data_alloc: 251658240 data_used: 28696576
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b61000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133095424 unmapped: 24485888 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133095424 unmapped: 24485888 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b61000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133095424 unmapped: 24485888 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b61000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,0,1])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133095424 unmapped: 24485888 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133128192 unmapped: 24453120 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133128192 unmapped: 24453120 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133136384 unmapped: 24444928 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [1])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133201920 unmapped: 24379392 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.599219322s of 10.092069626s, submitted: 112
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133242880 unmapped: 24338432 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133242880 unmapped: 24338432 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133242880 unmapped: 24338432 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133242880 unmapped: 24338432 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133242880 unmapped: 24338432 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133242880 unmapped: 24338432 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133242880 unmapped: 24338432 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133242880 unmapped: 24338432 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133242880 unmapped: 24338432 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133242880 unmapped: 24338432 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133242880 unmapped: 24338432 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133242880 unmapped: 24338432 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133242880 unmapped: 24338432 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133242880 unmapped: 24338432 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133242880 unmapped: 24338432 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133251072 unmapped: 24330240 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133251072 unmapped: 24330240 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133251072 unmapped: 24330240 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133251072 unmapped: 24330240 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133251072 unmapped: 24330240 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133251072 unmapped: 24330240 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133251072 unmapped: 24330240 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133259264 unmapped: 24322048 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133259264 unmapped: 24322048 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133259264 unmapped: 24322048 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133259264 unmapped: 24322048 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133259264 unmapped: 24322048 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133259264 unmapped: 24322048 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133259264 unmapped: 24322048 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133259264 unmapped: 24322048 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133259264 unmapped: 24322048 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133259264 unmapped: 24322048 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133259264 unmapped: 24322048 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133259264 unmapped: 24322048 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133259264 unmapped: 24322048 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133259264 unmapped: 24322048 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133267456 unmapped: 24313856 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133267456 unmapped: 24313856 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133267456 unmapped: 24313856 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133267456 unmapped: 24313856 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133267456 unmapped: 24313856 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133267456 unmapped: 24313856 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133267456 unmapped: 24313856 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133267456 unmapped: 24313856 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133267456 unmapped: 24313856 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133275648 unmapped: 24305664 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133275648 unmapped: 24305664 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133275648 unmapped: 24305664 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133275648 unmapped: 24305664 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133275648 unmapped: 24305664 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133275648 unmapped: 24305664 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133275648 unmapped: 24305664 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133275648 unmapped: 24305664 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133275648 unmapped: 24305664 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133275648 unmapped: 24305664 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133275648 unmapped: 24305664 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133275648 unmapped: 24305664 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133275648 unmapped: 24305664 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133275648 unmapped: 24305664 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133275648 unmapped: 24305664 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133283840 unmapped: 24297472 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133283840 unmapped: 24297472 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133283840 unmapped: 24297472 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133283840 unmapped: 24297472 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133283840 unmapped: 24297472 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133283840 unmapped: 24297472 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133283840 unmapped: 24297472 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133283840 unmapped: 24297472 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133283840 unmapped: 24297472 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133283840 unmapped: 24297472 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133292032 unmapped: 24289280 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133292032 unmapped: 24289280 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133292032 unmapped: 24289280 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133292032 unmapped: 24289280 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133292032 unmapped: 24289280 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133292032 unmapped: 24289280 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133292032 unmapped: 24289280 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133292032 unmapped: 24289280 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133292032 unmapped: 24289280 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133292032 unmapped: 24289280 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133292032 unmapped: 24289280 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133292032 unmapped: 24289280 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133292032 unmapped: 24289280 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133292032 unmapped: 24289280 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133292032 unmapped: 24289280 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133292032 unmapped: 24289280 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133292032 unmapped: 24289280 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133292032 unmapped: 24289280 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133292032 unmapped: 24289280 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133292032 unmapped: 24289280 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133292032 unmapped: 24289280 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133292032 unmapped: 24289280 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133292032 unmapped: 24289280 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133292032 unmapped: 24289280 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133292032 unmapped: 24289280 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133292032 unmapped: 24289280 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133292032 unmapped: 24289280 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133292032 unmapped: 24289280 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133292032 unmapped: 24289280 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133292032 unmapped: 24289280 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133292032 unmapped: 24289280 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133300224 unmapped: 24281088 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133300224 unmapped: 24281088 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133300224 unmapped: 24281088 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133300224 unmapped: 24281088 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133300224 unmapped: 24281088 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133300224 unmapped: 24281088 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133300224 unmapped: 24281088 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133300224 unmapped: 24281088 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133300224 unmapped: 24281088 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133300224 unmapped: 24281088 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133300224 unmapped: 24281088 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133300224 unmapped: 24281088 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133300224 unmapped: 24281088 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133300224 unmapped: 24281088 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133300224 unmapped: 24281088 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133308416 unmapped: 24272896 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133308416 unmapped: 24272896 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133308416 unmapped: 24272896 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133308416 unmapped: 24272896 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133308416 unmapped: 24272896 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133308416 unmapped: 24272896 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133308416 unmapped: 24272896 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133308416 unmapped: 24272896 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133308416 unmapped: 24272896 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133308416 unmapped: 24272896 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133316608 unmapped: 24264704 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133316608 unmapped: 24264704 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133316608 unmapped: 24264704 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133316608 unmapped: 24264704 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133316608 unmapped: 24264704 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133316608 unmapped: 24264704 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133316608 unmapped: 24264704 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133324800 unmapped: 24256512 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133324800 unmapped: 24256512 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133324800 unmapped: 24256512 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133324800 unmapped: 24256512 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133324800 unmapped: 24256512 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133324800 unmapped: 24256512 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133324800 unmapped: 24256512 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133332992 unmapped: 24248320 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133332992 unmapped: 24248320 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133332992 unmapped: 24248320 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133332992 unmapped: 24248320 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133332992 unmapped: 24248320 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133332992 unmapped: 24248320 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133332992 unmapped: 24248320 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133332992 unmapped: 24248320 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133332992 unmapped: 24248320 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133332992 unmapped: 24248320 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133332992 unmapped: 24248320 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133332992 unmapped: 24248320 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133332992 unmapped: 24248320 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133332992 unmapped: 24248320 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133332992 unmapped: 24248320 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133341184 unmapped: 24240128 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133341184 unmapped: 24240128 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133341184 unmapped: 24240128 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133341184 unmapped: 24240128 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133341184 unmapped: 24240128 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133341184 unmapped: 24240128 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133341184 unmapped: 24240128 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133341184 unmapped: 24240128 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133349376 unmapped: 24231936 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133349376 unmapped: 24231936 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133357568 unmapped: 24223744 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133357568 unmapped: 24223744 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133357568 unmapped: 24223744 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133357568 unmapped: 24223744 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133357568 unmapped: 24223744 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133357568 unmapped: 24223744 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133357568 unmapped: 24223744 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133357568 unmapped: 24223744 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133357568 unmapped: 24223744 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133357568 unmapped: 24223744 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133357568 unmapped: 24223744 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133357568 unmapped: 24223744 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133357568 unmapped: 24223744 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133357568 unmapped: 24223744 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133357568 unmapped: 24223744 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133357568 unmapped: 24223744 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133365760 unmapped: 24215552 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133365760 unmapped: 24215552 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133365760 unmapped: 24215552 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133365760 unmapped: 24215552 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133373952 unmapped: 24207360 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133373952 unmapped: 24207360 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133373952 unmapped: 24207360 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133373952 unmapped: 24207360 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133373952 unmapped: 24207360 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133373952 unmapped: 24207360 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133373952 unmapped: 24207360 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133373952 unmapped: 24207360 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133373952 unmapped: 24207360 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133373952 unmapped: 24207360 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133373952 unmapped: 24207360 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133373952 unmapped: 24207360 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 234881024 data_used: 28733440
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133373952 unmapped: 24207360 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133373952 unmapped: 24207360 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133373952 unmapped: 24207360 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133373952 unmapped: 24207360 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133373952 unmapped: 24207360 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 234881024 data_used: 28733440
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133373952 unmapped: 24207360 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133373952 unmapped: 24207360 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133382144 unmapped: 24199168 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133382144 unmapped: 24199168 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133382144 unmapped: 24199168 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 234881024 data_used: 28733440
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133382144 unmapped: 24199168 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133382144 unmapped: 24199168 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133382144 unmapped: 24199168 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133382144 unmapped: 24199168 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133390336 unmapped: 24190976 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 234881024 data_used: 28733440
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133390336 unmapped: 24190976 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133398528 unmapped: 24182784 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133398528 unmapped: 24182784 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133398528 unmapped: 24182784 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133398528 unmapped: 24182784 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 234881024 data_used: 28733440
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133398528 unmapped: 24182784 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133398528 unmapped: 24182784 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133398528 unmapped: 24182784 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133398528 unmapped: 24182784 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133398528 unmapped: 24182784 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 234881024 data_used: 28733440
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133398528 unmapped: 24182784 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133398528 unmapped: 24182784 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133398528 unmapped: 24182784 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133398528 unmapped: 24182784 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133398528 unmapped: 24182784 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 234881024 data_used: 28733440
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133406720 unmapped: 24174592 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133406720 unmapped: 24174592 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133406720 unmapped: 24174592 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133406720 unmapped: 24174592 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133406720 unmapped: 24174592 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1610268 data_alloc: 234881024 data_used: 28827648
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133406720 unmapped: 24174592 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 234.357208252s of 234.392181396s, submitted: 8
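
The _kv_sync_thread utilization line is the clearest idleness signal in this stretch: the KV sync thread (which commits BlueStore transactions to RocksDB) was idle for 234.357208252 s of a 234.392181396 s window and submitted only 8 batches. The arithmetic, spelled out:

    idle, total, submitted = 234.357208252, 234.392181396, 8
    busy = total - idle
    print(f"busy {busy:.3f} s of {total:.1f} s ({busy / total:.4%}); "
          f"{submitted} batches -> {busy / submitted * 1000:.1f} ms/batch on average")

i.e. the thread was busy for about 0.035 s, roughly 0.015% of the interval, consistent with the near-zero *_used figures in the _resize_shards lines.
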
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133406720 unmapped: 24174592 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133406720 unmapped: 24174592 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b60000/0x0/0x4ffc00000, data 0x3a3455f/0x3b0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
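
This heartbeat is the first in the section whose statfs counters actually move (0x4f7b62000 -> 0x4f7b60000 available; data 0x3a3255f/0x3b0c000 -> 0x3a3455f/0x3b0e000): an 8 KiB write has landed. A small diff sketch, using the same assumed field order as the decoding sketch above:

    import re

    BEFORE = "store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000"
    AFTER  = "store_statfs(0x4f7b60000/0x0/0x4ffc00000, data 0x3a3455f/0x3b0e000"

    def hexes(s):
        return [int(x, 16) for x in re.findall(r'0x[0-9a-f]+', s)]

    for name, b, a in zip(('available', 'reserved', 'total', 'stored', 'allocated'),
                          hexes(BEFORE), hexes(AFTER)):
        if a != b:
            print(f"{name}: {b:#x} -> {a:#x} ({a - b:+d} bytes)")

which reports available -8192 bytes, stored +8192, allocated +8192.
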
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133406720 unmapped: 24174592 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133406720 unmapped: 24174592 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1610452 data_alloc: 234881024 data_used: 28827648
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133414912 unmapped: 24166400 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133414912 unmapped: 24166400 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133414912 unmapped: 24166400 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133414912 unmapped: 24166400 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b60000/0x0/0x4ffc00000, data 0x3a3455f/0x3b0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133414912 unmapped: 24166400 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1610452 data_alloc: 234881024 data_used: 28827648
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133414912 unmapped: 24166400 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b60000/0x0/0x4ffc00000, data 0x3a3455f/0x3b0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133414912 unmapped: 24166400 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133414912 unmapped: 24166400 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133414912 unmapped: 24166400 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133414912 unmapped: 24166400 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1610612 data_alloc: 234881024 data_used: 28831744
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133414912 unmapped: 24166400 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b60000/0x0/0x4ffc00000, data 0x3a3455f/0x3b0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.863843918s of 15.872858047s, submitted: 1
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133414912 unmapped: 24166400 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133423104 unmapped: 24158208 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133423104 unmapped: 24158208 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1610856 data_alloc: 234881024 data_used: 28831744
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133423104 unmapped: 24158208 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133423104 unmapped: 24158208 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133423104 unmapped: 24158208 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1610856 data_alloc: 234881024 data_used: 28831744
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133423104 unmapped: 24158208 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133423104 unmapped: 24158208 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133431296 unmapped: 24150016 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1610856 data_alloc: 234881024 data_used: 28831744
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133431296 unmapped: 24150016 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.326219559s of 14.335576057s, submitted: 1
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133455872 unmapped: 24125440 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133464064 unmapped: 24117248 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613304 data_alloc: 234881024 data_used: 28819456
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133464064 unmapped: 24117248 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133464064 unmapped: 24117248 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133464064 unmapped: 24117248 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613304 data_alloc: 234881024 data_used: 28819456
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133464064 unmapped: 24117248 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133464064 unmapped: 24117248 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613304 data_alloc: 234881024 data_used: 28819456
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133464064 unmapped: 24117248 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.338993073s of 15.360601425s, submitted: 14
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133464064 unmapped: 24117248 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133496832 unmapped: 24084480 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 234881024 data_used: 28819456
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133496832 unmapped: 24084480 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133496832 unmapped: 24084480 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 234881024 data_used: 28819456
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133505024 unmapped: 24076288 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133505024 unmapped: 24076288 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133505024 unmapped: 24076288 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 234881024 data_used: 28819456
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133505024 unmapped: 24076288 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133513216 unmapped: 24068096 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133513216 unmapped: 24068096 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 234881024 data_used: 28819456
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133513216 unmapped: 24068096 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133513216 unmapped: 24068096 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133513216 unmapped: 24068096 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 234881024 data_used: 28819456
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133513216 unmapped: 24068096 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133513216 unmapped: 24068096 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 234881024 data_used: 28819456
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133513216 unmapped: 24068096 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 234881024 data_used: 28819456
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133513216 unmapped: 24068096 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133513216 unmapped: 24068096 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133521408 unmapped: 24059904 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 234881024 data_used: 28819456
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133521408 unmapped: 24059904 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133521408 unmapped: 24059904 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 234881024 data_used: 28819456
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133521408 unmapped: 24059904 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133529600 unmapped: 24051712 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 234881024 data_used: 28819456
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133529600 unmapped: 24051712 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133537792 unmapped: 24043520 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133537792 unmapped: 24043520 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 234881024 data_used: 28819456
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133545984 unmapped: 24035328 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133545984 unmapped: 24035328 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 234881024 data_used: 28819456
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133545984 unmapped: 24035328 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133554176 unmapped: 24027136 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 218103808 data_used: 28819456
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133562368 unmapped: 24018944 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133562368 unmapped: 24018944 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 218103808 data_used: 28819456
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133570560 unmapped: 24010752 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133570560 unmapped: 24010752 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 218103808 data_used: 28819456
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133570560 unmapped: 24010752 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133570560 unmapped: 24010752 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 218103808 data_used: 28819456
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133570560 unmapped: 24010752 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133570560 unmapped: 24010752 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 218103808 data_used: 28819456
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133578752 unmapped: 24002560 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133578752 unmapped: 24002560 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 218103808 data_used: 28819456
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133578752 unmapped: 24002560 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133578752 unmapped: 24002560 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133578752 unmapped: 24002560 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 218103808 data_used: 28819456
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133586944 unmapped: 23994368 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133586944 unmapped: 23994368 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 218103808 data_used: 28819456
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133586944 unmapped: 23994368 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133595136 unmapped: 23986176 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133595136 unmapped: 23986176 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 218103808 data_used: 28819456
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133595136 unmapped: 23986176 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133595136 unmapped: 23986176 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 218103808 data_used: 28819456
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133595136 unmapped: 23986176 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133595136 unmapped: 23986176 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 218103808 data_used: 28819456
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133603328 unmapped: 23977984 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133603328 unmapped: 23977984 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 218103808 data_used: 28819456
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133603328 unmapped: 23977984 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 218103808 data_used: 28819456
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133603328 unmapped: 23977984 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133603328 unmapped: 23977984 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133611520 unmapped: 23969792 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 218103808 data_used: 28819456
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133611520 unmapped: 23969792 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133619712 unmapped: 23961600 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133627904 unmapped: 23953408 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 218103808 data_used: 28819456
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133627904 unmapped: 23953408 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133627904 unmapped: 23953408 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133627904 unmapped: 23953408 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133627904 unmapped: 23953408 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 218103808 data_used: 28819456
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133627904 unmapped: 23953408 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133636096 unmapped: 23945216 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133636096 unmapped: 23945216 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133636096 unmapped: 23945216 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133636096 unmapped: 23945216 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 218103808 data_used: 28819456
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133636096 unmapped: 23945216 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133636096 unmapped: 23945216 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133636096 unmapped: 23945216 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133636096 unmapped: 23945216 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133636096 unmapped: 23945216 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 218103808 data_used: 28819456
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133636096 unmapped: 23945216 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133636096 unmapped: 23945216 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133636096 unmapped: 23945216 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133636096 unmapped: 23945216 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133636096 unmapped: 23945216 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 218103808 data_used: 28819456
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133636096 unmapped: 23945216 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133636096 unmapped: 23945216 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133644288 unmapped: 23937024 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133644288 unmapped: 23937024 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133644288 unmapped: 23937024 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 218103808 data_used: 28819456
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133644288 unmapped: 23937024 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133652480 unmapped: 23928832 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133652480 unmapped: 23928832 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133652480 unmapped: 23928832 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133652480 unmapped: 23928832 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 218103808 data_used: 28819456
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133652480 unmapped: 23928832 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133652480 unmapped: 23928832 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133652480 unmapped: 23928832 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133652480 unmapped: 23928832 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133652480 unmapped: 23928832 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 218103808 data_used: 28819456
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133660672 unmapped: 23920640 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133660672 unmapped: 23920640 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133660672 unmapped: 23920640 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133660672 unmapped: 23920640 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133660672 unmapped: 23920640 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 218103808 data_used: 28819456
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133660672 unmapped: 23920640 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133660672 unmapped: 23920640 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133660672 unmapped: 23920640 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133660672 unmapped: 23920640 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133660672 unmapped: 23920640 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 218103808 data_used: 28819456
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133660672 unmapped: 23920640 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133660672 unmapped: 23920640 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133668864 unmapped: 23912448 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133668864 unmapped: 23912448 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133668864 unmapped: 23912448 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 218103808 data_used: 28819456
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133668864 unmapped: 23912448 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133668864 unmapped: 23912448 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133668864 unmapped: 23912448 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133668864 unmapped: 23912448 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133668864 unmapped: 23912448 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 218103808 data_used: 28819456
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133668864 unmapped: 23912448 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133677056 unmapped: 23904256 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133677056 unmapped: 23904256 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133677056 unmapped: 23904256 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133677056 unmapped: 23904256 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 218103808 data_used: 28819456
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133677056 unmapped: 23904256 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133677056 unmapped: 23904256 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133677056 unmapped: 23904256 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133677056 unmapped: 23904256 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133677056 unmapped: 23904256 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 218103808 data_used: 28819456
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133677056 unmapped: 23904256 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133677056 unmapped: 23904256 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 202.922653198s of 202.928924561s, submitted: 1
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d7e9800 session 0x55ab9e945e00
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9cdaa000 session 0x55ab9d7a81e0
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133677056 unmapped: 23904256 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [1])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130490368 unmapped: 27090944 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d7eb400 session 0x55ab9d7c14a0
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130490368 unmapped: 27090944 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1429274 data_alloc: 218103808 data_used: 20819968
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130490368 unmapped: 27090944 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130490368 unmapped: 27090944 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130490368 unmapped: 27090944 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8bb9000/0x0/0x4ffc00000, data 0x29dc52c/0x2ab4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130490368 unmapped: 27090944 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130490368 unmapped: 27090944 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8bb9000/0x0/0x4ffc00000, data 0x29dc52c/0x2ab4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1429274 data_alloc: 218103808 data_used: 20819968
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130490368 unmapped: 27090944 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130490368 unmapped: 27090944 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130490368 unmapped: 27090944 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130490368 unmapped: 27090944 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130490368 unmapped: 27090944 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8bb9000/0x0/0x4ffc00000, data 0x29dc52c/0x2ab4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1429274 data_alloc: 218103808 data_used: 20819968
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130490368 unmapped: 27090944 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8bb9000/0x0/0x4ffc00000, data 0x29dc52c/0x2ab4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130490368 unmapped: 27090944 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130490368 unmapped: 27090944 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130490368 unmapped: 27090944 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130490368 unmapped: 27090944 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1429274 data_alloc: 218103808 data_used: 20819968
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9b14d400 session 0x55ab9d446b40
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9ce08c00 session 0x55ab9d7c0780
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130490368 unmapped: 27090944 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.876663208s of 19.201396942s, submitted: 51
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8bb9000/0x0/0x4ffc00000, data 0x29dc52c/0x2ab4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 33726464 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d7eb000 session 0x55ab9e944000
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 4200.2 total, 600.0 interval
Cumulative writes: 10K writes, 39K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
Cumulative WAL: 10K writes, 2709 syncs, 3.69 writes per sync, written: 0.03 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 551 writes, 1719 keys, 551 commit groups, 1.0 writes per commit group, ingest: 1.95 MB, 0.00 MB/s
Interval WAL: 551 writes, 236 syncs, 2.33 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 33726464 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f9be5000/0x0/0x4ffc00000, data 0x19b152c/0x1a89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 33726464 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f9be5000/0x0/0x4ffc00000, data 0x19b152c/0x1a89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 33726464 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f9be5000/0x0/0x4ffc00000, data 0x19b152c/0x1a89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1252524 data_alloc: 218103808 data_used: 12570624
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 33726464 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f9be5000/0x0/0x4ffc00000, data 0x19b152c/0x1a89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 33726464 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 33726464 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 33726464 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 33726464 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1252524 data_alloc: 218103808 data_used: 12570624
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 33726464 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f9be5000/0x0/0x4ffc00000, data 0x19b152c/0x1a89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 33726464 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 33726464 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 33726464 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 33726464 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.773575783s of 13.811752319s, submitted: 9
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f9be5000/0x0/0x4ffc00000, data 0x19b152c/0x1a89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1251898 data_alloc: 218103808 data_used: 12566528
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 33726464 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 140 handle_osd_map epochs [140,141], i have 140, src has [1,141]
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 141 ms_handle_reset con 0x55ab9db14000 session 0x55ab9c16c3c0
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118235136 unmapped: 39346176 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118243328 unmapped: 47734784 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 142 ms_handle_reset con 0x55ab9d1a8800 session 0x55ab9d7a92c0
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118267904 unmapped: 47710208 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9bdf000/0x0/0x4ffc00000, data 0x19b4c73/0x1a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118267904 unmapped: 47710208 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1241395 data_alloc: 218103808 data_used: 4730880
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 143 ms_handle_reset con 0x55ab9d7f3000 session 0x55ab9e944b40
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118300672 unmapped: 47677440 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118300672 unmapped: 47677440 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 143 handle_osd_map epochs [143,144], i have 143, src has [1,144]
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118317056 unmapped: 47661056 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118317056 unmapped: 47661056 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118317056 unmapped: 47661056 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9fc9000/0x0/0x4ffc00000, data 0x11b82b0/0x1293000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1190873 data_alloc: 218103808 data_used: 4730880
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118317056 unmapped: 47661056 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118317056 unmapped: 47661056 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118317056 unmapped: 47661056 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118317056 unmapped: 47661056 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.670616150s of 14.250964165s, submitted: 98
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119373824 unmapped: 46604288 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193175 data_alloc: 218103808 data_used: 4730880
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc7000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119373824 unmapped: 46604288 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119373824 unmapped: 46604288 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119373824 unmapped: 46604288 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc7000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119373824 unmapped: 46604288 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119373824 unmapped: 46604288 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193175 data_alloc: 218103808 data_used: 4730880
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc7000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119373824 unmapped: 46604288 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119373824 unmapped: 46604288 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119373824 unmapped: 46604288 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119373824 unmapped: 46604288 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119373824 unmapped: 46604288 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc7000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc7000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193175 data_alloc: 218103808 data_used: 4730880
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119373824 unmapped: 46604288 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119373824 unmapped: 46604288 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc7000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193335 data_alloc: 218103808 data_used: 4734976
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc7000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193335 data_alloc: 218103808 data_used: 4734976
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc7000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193335 data_alloc: 218103808 data_used: 4734976
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc7000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193335 data_alloc: 218103808 data_used: 4734976
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc7000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc7000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193335 data_alloc: 218103808 data_used: 4734976
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc7000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193335 data_alloc: 218103808 data_used: 4734976
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc7000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193335 data_alloc: 218103808 data_used: 4734976
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc7000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193335 data_alloc: 218103808 data_used: 4734976
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc7000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193335 data_alloc: 218103808 data_used: 4734976
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc7000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc7000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193335 data_alloc: 218103808 data_used: 4734976
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc7000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193335 data_alloc: 218103808 data_used: 4734976
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 66.491020203s of 66.518074036s, submitted: 14
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118423552 unmapped: 47554560 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118480896 unmapped: 47497216 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118497280 unmapped: 47480832 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118497280 unmapped: 47480832 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118497280 unmapped: 47480832 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118497280 unmapped: 47480832 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118497280 unmapped: 47480832 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118497280 unmapped: 47480832 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118497280 unmapped: 47480832 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118497280 unmapped: 47480832 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118497280 unmapped: 47480832 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118497280 unmapped: 47480832 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118497280 unmapped: 47480832 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118497280 unmapped: 47480832 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118497280 unmapped: 47480832 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118497280 unmapped: 47480832 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118497280 unmapped: 47480832 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118497280 unmapped: 47480832 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:33 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:33 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118497280 unmapped: 47480832 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118611968 unmapped: 47366144 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: do_command 'config diff' '{prefix=config diff}'
Dec  3 19:16:33 compute-0 ceph-osd[208881]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec  3 19:16:33 compute-0 ceph-osd[208881]: do_command 'config show' '{prefix=config show}'
Dec  3 19:16:33 compute-0 ceph-osd[208881]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec  3 19:16:33 compute-0 ceph-osd[208881]: do_command 'counter dump' '{prefix=counter dump}'
Dec  3 19:16:33 compute-0 ceph-osd[208881]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec  3 19:16:33 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:16:33 compute-0 ceph-osd[208881]: do_command 'counter schema' '{prefix=counter schema}'
Dec  3 19:16:33 compute-0 ceph-osd[208881]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118489088 unmapped: 47489024 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118546432 unmapped: 47431680 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:33 compute-0 ceph-osd[208881]: do_command 'log dump' '{prefix=log dump}'
Dec  3 19:16:33 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Dec  3 19:16:33 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2649085600' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec  3 19:16:33 compute-0 rsyslogd[188590]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  3 19:16:33 compute-0 ceph-mgr[193091]: log_channel(audit) log [DBG] : from='client.15607 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec  3 19:16:33 compute-0 nova_compute[348325]: 2025-12-03 19:16:33.561 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:16:33 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Dec  3 19:16:33 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3347663657' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec  3 19:16:33 compute-0 ceph-mgr[193091]: log_channel(audit) log [DBG] : from='client.15611 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 19:16:33 compute-0 podman[474195]: 2025-12-03 19:16:33.929889368 +0000 UTC m=+0.098636546 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  3 19:16:33 compute-0 podman[474194]: 2025-12-03 19:16:33.964716296 +0000 UTC m=+0.136223170 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3)
Dec  3 19:16:34 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Dec  3 19:16:34 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/295062704' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec  3 19:16:34 compute-0 nova_compute[348325]: 2025-12-03 19:16:34.035 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:16:34 compute-0 ceph-mgr[193091]: log_channel(audit) log [DBG] : from='client.15615 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec  3 19:16:34 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2335: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:16:34 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Dec  3 19:16:34 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3886580223' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec  3 19:16:34 compute-0 ceph-mgr[193091]: log_channel(audit) log [DBG] : from='client.15619 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 19:16:34 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:16:34 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Dec  3 19:16:34 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2202861365' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec  3 19:16:35 compute-0 ceph-mgr[193091]: log_channel(audit) log [DBG] : from='client.15623 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  3 19:16:35 compute-0 nova_compute[348325]: 2025-12-03 19:16:35.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:16:35 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Dec  3 19:16:35 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/821326658' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Dec  3 19:16:35 compute-0 ceph-mgr[193091]: log_channel(audit) log [DBG] : from='client.15627 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  3 19:16:36 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2336: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:16:36 compute-0 nova_compute[348325]: 2025-12-03 19:16:36.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:16:36 compute-0 ceph-mgr[193091]: log_channel(audit) log [DBG] : from='client.15633 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  3 19:16:36 compute-0 ceph-mgr[193091]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec  3 19:16:36 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-etccde[193087]: 2025-12-03T19:16:36.540+0000 7ff6bdbb5640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec  3 19:16:36 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0) v1
Dec  3 19:16:36 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2985561350' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Dec  3 19:16:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Dec  3 19:16:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4196971899' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Dec  3 19:16:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Dec  3 19:16:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3125451275' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Dec  3 19:16:37 compute-0 nova_compute[348325]: 2025-12-03 19:16:37.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:16:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Dec  3 19:16:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3870297067' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Dec  3 19:16:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Dec  3 19:16:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/153155190' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Dec  3 19:16:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 19:16:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4060003694' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 19:16:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 19:16:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4060003694' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 19:16:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Dec  3 19:16:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4120327345' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Dec  3 19:16:38 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Dec  3 19:16:38 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2429132234' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116711424 unmapped: 1867776 heap: 118579200 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116711424 unmapped: 1867776 heap: 118579200 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116711424 unmapped: 1867776 heap: 118579200 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116711424 unmapped: 1867776 heap: 118579200 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8e91000/0x0/0x4ffc00000, data 0x2b221f4/0x2be6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1423782 data_alloc: 251658240 data_used: 29110272
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116711424 unmapped: 1867776 heap: 118579200 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116711424 unmapped: 1867776 heap: 118579200 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8e91000/0x0/0x4ffc00000, data 0x2b221f4/0x2be6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116711424 unmapped: 1867776 heap: 118579200 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116711424 unmapped: 1867776 heap: 118579200 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116711424 unmapped: 1867776 heap: 118579200 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1423782 data_alloc: 251658240 data_used: 29110272
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116719616 unmapped: 1859584 heap: 118579200 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116719616 unmapped: 1859584 heap: 118579200 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116719616 unmapped: 1859584 heap: 118579200 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8e91000/0x0/0x4ffc00000, data 0x2b221f4/0x2be6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 49.654956818s of 49.679458618s, submitted: 8
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116727808 unmapped: 1851392 heap: 118579200 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 ms_handle_reset con 0x5562fa58d400 session 0x5562fa72a960
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 ms_handle_reset con 0x5562facec400 session 0x5562facff2c0
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 ms_handle_reset con 0x5562f72c0000 session 0x5562fa0241e0
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 ms_handle_reset con 0x5562f72c0800 session 0x5562fa2ded20
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 ms_handle_reset con 0x5562f72c1400 session 0x5562f88045a0
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116727808 unmapped: 8667136 heap: 125394944 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 ms_handle_reset con 0x5562fa58d400 session 0x5562f9fb1c20
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 ms_handle_reset con 0x5562f8926000 session 0x5562f8eacf00
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 ms_handle_reset con 0x5562f72c0000 session 0x5562f8cafa40
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 ms_handle_reset con 0x5562f72c0800 session 0x5562f9191e00
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 2400.2 total, 600.0 interval
Cumulative writes: 7743 writes, 30K keys, 7743 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
Cumulative WAL: 7743 writes, 1643 syncs, 4.71 writes per sync, written: 0.02 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 787 writes, 2642 keys, 787 commit groups, 1.0 writes per commit group, ingest: 2.91 MB, 0.00 MB/s
Interval WAL: 787 writes, 315 syncs, 2.50 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1457046 data_alloc: 251658240 data_used: 29110272
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 8658944 heap: 125394944 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8b4f000/0x0/0x4ffc00000, data 0x2e682c8/0x2f2f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116752384 unmapped: 8642560 heap: 125394944 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116752384 unmapped: 8642560 heap: 125394944 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116752384 unmapped: 8642560 heap: 125394944 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8b4f000/0x0/0x4ffc00000, data 0x2e682c8/0x2f2f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 8609792 heap: 125394944 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 ms_handle_reset con 0x5562f72c1400 session 0x5562fa4c3680
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1457046 data_alloc: 251658240 data_used: 29110272
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 ms_handle_reset con 0x5562fa58d400 session 0x5562facf70e0
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 8609792 heap: 125394944 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 ms_handle_reset con 0x5562f99e4000 session 0x5562fa5645a0
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 ms_handle_reset con 0x5562f72c0000 session 0x5562facf61e0
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 117317632 unmapped: 8077312 heap: 125394944 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8b4f000/0x0/0x4ffc00000, data 0x2e682c8/0x2f2f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 117325824 unmapped: 8069120 heap: 125394944 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 117325824 unmapped: 8069120 heap: 125394944 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 117325824 unmapped: 8069120 heap: 125394944 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1487585 data_alloc: 251658240 data_used: 32530432
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 119873536 unmapped: 5521408 heap: 125394944 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 119873536 unmapped: 5521408 heap: 125394944 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8b12000/0x0/0x4ffc00000, data 0x2ea42d8/0x2f6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 119873536 unmapped: 5521408 heap: 125394944 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 119873536 unmapped: 5521408 heap: 125394944 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 119873536 unmapped: 5521408 heap: 125394944 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8b12000/0x0/0x4ffc00000, data 0x2ea42d8/0x2f6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1487585 data_alloc: 251658240 data_used: 32530432
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 119906304 unmapped: 5488640 heap: 125394944 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 119906304 unmapped: 5488640 heap: 125394944 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8b12000/0x0/0x4ffc00000, data 0x2ea42d8/0x2f6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 119906304 unmapped: 5488640 heap: 125394944 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 119906304 unmapped: 5488640 heap: 125394944 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 119906304 unmapped: 5488640 heap: 125394944 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1487585 data_alloc: 251658240 data_used: 32530432
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 119906304 unmapped: 5488640 heap: 125394944 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 119906304 unmapped: 5488640 heap: 125394944 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 119906304 unmapped: 5488640 heap: 125394944 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8b12000/0x0/0x4ffc00000, data 0x2ea42d8/0x2f6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 5480448 heap: 125394944 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 5480448 heap: 125394944 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1487585 data_alloc: 251658240 data_used: 32530432
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 5480448 heap: 125394944 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 5480448 heap: 125394944 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8b12000/0x0/0x4ffc00000, data 0x2ea42d8/0x2f6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 119922688 unmapped: 5472256 heap: 125394944 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 119922688 unmapped: 5472256 heap: 125394944 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 119922688 unmapped: 5472256 heap: 125394944 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8b12000/0x0/0x4ffc00000, data 0x2ea42d8/0x2f6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1487585 data_alloc: 251658240 data_used: 32530432
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 119922688 unmapped: 5472256 heap: 125394944 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8b12000/0x0/0x4ffc00000, data 0x2ea42d8/0x2f6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 119922688 unmapped: 5472256 heap: 125394944 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 119922688 unmapped: 5472256 heap: 125394944 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 119922688 unmapped: 5472256 heap: 125394944 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8b12000/0x0/0x4ffc00000, data 0x2ea42d8/0x2f6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 119922688 unmapped: 5472256 heap: 125394944 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1487585 data_alloc: 251658240 data_used: 32530432
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 119922688 unmapped: 5472256 heap: 125394944 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 119922688 unmapped: 5472256 heap: 125394944 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 119922688 unmapped: 5472256 heap: 125394944 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8b12000/0x0/0x4ffc00000, data 0x2ea42d8/0x2f6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 119955456 unmapped: 5439488 heap: 125394944 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8b12000/0x0/0x4ffc00000, data 0x2ea42d8/0x2f6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 119955456 unmapped: 5439488 heap: 125394944 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1487585 data_alloc: 251658240 data_used: 32530432
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 119955456 unmapped: 5439488 heap: 125394944 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 119955456 unmapped: 5439488 heap: 125394944 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8b12000/0x0/0x4ffc00000, data 0x2ea42d8/0x2f6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 119955456 unmapped: 5439488 heap: 125394944 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 119955456 unmapped: 5439488 heap: 125394944 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 45.742790222s of 45.973087311s, submitted: 37
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 124944384 unmapped: 1499136 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1527911 data_alloc: 251658240 data_used: 32751616
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121020416 unmapped: 5423104 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121200640 unmapped: 5242880 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 5177344 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8695000/0x0/0x4ffc00000, data 0x33212d8/0x33e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 5177344 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 5177344 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1537147 data_alloc: 251658240 data_used: 32755712
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 5177344 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8695000/0x0/0x4ffc00000, data 0x33212d8/0x33e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 5177344 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 5177344 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 5177344 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 5177344 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8695000/0x0/0x4ffc00000, data 0x33212d8/0x33e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1537147 data_alloc: 251658240 data_used: 32755712
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 5177344 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 5177344 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8695000/0x0/0x4ffc00000, data 0x33212d8/0x33e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8695000/0x0/0x4ffc00000, data 0x33212d8/0x33e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 5177344 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121257984 unmapped: 5185536 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8695000/0x0/0x4ffc00000, data 0x33212d8/0x33e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121257984 unmapped: 5185536 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1537147 data_alloc: 251658240 data_used: 32755712
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121257984 unmapped: 5185536 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121257984 unmapped: 5185536 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8695000/0x0/0x4ffc00000, data 0x33212d8/0x33e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121257984 unmapped: 5185536 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8695000/0x0/0x4ffc00000, data 0x33212d8/0x33e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121257984 unmapped: 5185536 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8695000/0x0/0x4ffc00000, data 0x33212d8/0x33e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121257984 unmapped: 5185536 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8695000/0x0/0x4ffc00000, data 0x33212d8/0x33e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1537147 data_alloc: 251658240 data_used: 32755712
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121257984 unmapped: 5185536 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8695000/0x0/0x4ffc00000, data 0x33212d8/0x33e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121257984 unmapped: 5185536 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8695000/0x0/0x4ffc00000, data 0x33212d8/0x33e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121257984 unmapped: 5185536 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121257984 unmapped: 5185536 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121257984 unmapped: 5185536 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1537147 data_alloc: 251658240 data_used: 32755712
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121257984 unmapped: 5185536 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8695000/0x0/0x4ffc00000, data 0x33212d8/0x33e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121257984 unmapped: 5185536 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121257984 unmapped: 5185536 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8695000/0x0/0x4ffc00000, data 0x33212d8/0x33e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121257984 unmapped: 5185536 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121257984 unmapped: 5185536 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8695000/0x0/0x4ffc00000, data 0x33212d8/0x33e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1537147 data_alloc: 251658240 data_used: 32755712
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121257984 unmapped: 5185536 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121257984 unmapped: 5185536 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121257984 unmapped: 5185536 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121257984 unmapped: 5185536 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8695000/0x0/0x4ffc00000, data 0x33212d8/0x33e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121257984 unmapped: 5185536 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1537147 data_alloc: 251658240 data_used: 32755712
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121257984 unmapped: 5185536 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8695000/0x0/0x4ffc00000, data 0x33212d8/0x33e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121257984 unmapped: 5185536 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121257984 unmapped: 5185536 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121257984 unmapped: 5185536 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121257984 unmapped: 5185536 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1537147 data_alloc: 251658240 data_used: 32755712
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121257984 unmapped: 5185536 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8695000/0x0/0x4ffc00000, data 0x33212d8/0x33e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121257984 unmapped: 5185536 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121257984 unmapped: 5185536 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121257984 unmapped: 5185536 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121257984 unmapped: 5185536 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8695000/0x0/0x4ffc00000, data 0x33212d8/0x33e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1537147 data_alloc: 251658240 data_used: 32755712
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121257984 unmapped: 5185536 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121257984 unmapped: 5185536 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121257984 unmapped: 5185536 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121257984 unmapped: 5185536 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121257984 unmapped: 5185536 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1537147 data_alloc: 251658240 data_used: 32755712
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121257984 unmapped: 5185536 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8695000/0x0/0x4ffc00000, data 0x33212d8/0x33e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121257984 unmapped: 5185536 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121257984 unmapped: 5185536 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121118720 unmapped: 5324800 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8695000/0x0/0x4ffc00000, data 0x33212d8/0x33e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 54.651905060s of 54.831211090s, submitted: 42
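This _kv_sync_thread line is the one genuinely informative outlier in the burst: the RocksDB sync thread was idle for 54.651905060 s of a 54.831211090 s window (about 99.7%) while committing only 42 transactions, which matches the empty op histograms in the heartbeats. The arithmetic:

    # Quick check on the _kv_sync_thread utilization figures above:
    # the sync thread is almost entirely idle.
    idle, window, submitted = 54.651905060, 54.831211090, 42
    print(f"idle {idle / window:.2%} of the window, "
          f"{submitted / window:.2f} commits/s")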
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8695000/0x0/0x4ffc00000, data 0x33212d8/0x33e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121118720 unmapped: 5324800 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1535659 data_alloc: 251658240 data_used: 32759808
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121118720 unmapped: 5324800 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8695000/0x0/0x4ffc00000, data 0x33212d8/0x33e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121159680 unmapped: 5283840 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8695000/0x0/0x4ffc00000, data 0x33212d8/0x33e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [0,1])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121217024 unmapped: 5226496 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121257984 unmapped: 5185536 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121257984 unmapped: 5185536 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1535659 data_alloc: 251658240 data_used: 32759808
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121257984 unmapped: 5185536 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8285000/0x0/0x4ffc00000, data 0x33212d8/0x33e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121257984 unmapped: 5185536 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121257984 unmapped: 5185536 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121257984 unmapped: 5185536 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8285000/0x0/0x4ffc00000, data 0x33212d8/0x33e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121257984 unmapped: 5185536 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1535659 data_alloc: 251658240 data_used: 32759808
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121257984 unmapped: 5185536 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121257984 unmapped: 5185536 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8285000/0x0/0x4ffc00000, data 0x33212d8/0x33e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121257984 unmapped: 5185536 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121257984 unmapped: 5185536 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8285000/0x0/0x4ffc00000, data 0x33212d8/0x33e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121257984 unmapped: 5185536 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8285000/0x0/0x4ffc00000, data 0x33212d8/0x33e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1535659 data_alloc: 251658240 data_used: 32759808
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121257984 unmapped: 5185536 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8285000/0x0/0x4ffc00000, data 0x33212d8/0x33e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 5177344 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8285000/0x0/0x4ffc00000, data 0x33212d8/0x33e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 5177344 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8285000/0x0/0x4ffc00000, data 0x33212d8/0x33e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 5177344 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 5177344 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8285000/0x0/0x4ffc00000, data 0x33212d8/0x33e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1535659 data_alloc: 251658240 data_used: 32759808
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 5177344 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 5177344 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 5177344 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 5177344 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 5177344 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8285000/0x0/0x4ffc00000, data 0x33212d8/0x33e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1535659 data_alloc: 251658240 data_used: 32759808
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 5177344 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 5177344 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 5177344 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8285000/0x0/0x4ffc00000, data 0x33212d8/0x33e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 5177344 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 5177344 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1535659 data_alloc: 251658240 data_used: 32759808
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 5177344 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8285000/0x0/0x4ffc00000, data 0x33212d8/0x33e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 5177344 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 5177344 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8285000/0x0/0x4ffc00000, data 0x33212d8/0x33e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 5177344 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 5177344 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1535659 data_alloc: 251658240 data_used: 32759808
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 5177344 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8285000/0x0/0x4ffc00000, data 0x33212d8/0x33e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 5177344 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8285000/0x0/0x4ffc00000, data 0x33212d8/0x33e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 5177344 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 5177344 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 5177344 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1535659 data_alloc: 251658240 data_used: 32759808
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 5177344 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8285000/0x0/0x4ffc00000, data 0x33212d8/0x33e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121274368 unmapped: 5169152 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121274368 unmapped: 5169152 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121274368 unmapped: 5169152 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121274368 unmapped: 5169152 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8285000/0x0/0x4ffc00000, data 0x33212d8/0x33e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1535659 data_alloc: 251658240 data_used: 32759808
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121274368 unmapped: 5169152 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121274368 unmapped: 5169152 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121274368 unmapped: 5169152 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121274368 unmapped: 5169152 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8285000/0x0/0x4ffc00000, data 0x33212d8/0x33e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121274368 unmapped: 5169152 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1535659 data_alloc: 251658240 data_used: 32759808
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121282560 unmapped: 5160960 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121282560 unmapped: 5160960 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121282560 unmapped: 5160960 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121282560 unmapped: 5160960 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8285000/0x0/0x4ffc00000, data 0x33212d8/0x33e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121282560 unmapped: 5160960 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1535659 data_alloc: 251658240 data_used: 32759808
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121290752 unmapped: 5152768 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121290752 unmapped: 5152768 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121290752 unmapped: 5152768 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121290752 unmapped: 5152768 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8285000/0x0/0x4ffc00000, data 0x33212d8/0x33e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121290752 unmapped: 5152768 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1535659 data_alloc: 251658240 data_used: 32759808
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121290752 unmapped: 5152768 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8285000/0x0/0x4ffc00000, data 0x33212d8/0x33e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121290752 unmapped: 5152768 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8285000/0x0/0x4ffc00000, data 0x33212d8/0x33e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121290752 unmapped: 5152768 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121290752 unmapped: 5152768 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121290752 unmapped: 5152768 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1535659 data_alloc: 251658240 data_used: 32759808
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121290752 unmapped: 5152768 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121298944 unmapped: 5144576 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121298944 unmapped: 5144576 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8285000/0x0/0x4ffc00000, data 0x33212d8/0x33e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121298944 unmapped: 5144576 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121298944 unmapped: 5144576 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1535659 data_alloc: 251658240 data_used: 32759808
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121298944 unmapped: 5144576 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8285000/0x0/0x4ffc00000, data 0x33212d8/0x33e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121298944 unmapped: 5144576 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121298944 unmapped: 5144576 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121298944 unmapped: 5144576 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121298944 unmapped: 5144576 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1535659 data_alloc: 251658240 data_used: 32759808
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121298944 unmapped: 5144576 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121298944 unmapped: 5144576 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8285000/0x0/0x4ffc00000, data 0x33212d8/0x33e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121298944 unmapped: 5144576 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121298944 unmapped: 5144576 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121298944 unmapped: 5144576 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1535659 data_alloc: 251658240 data_used: 32759808
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121298944 unmapped: 5144576 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121307136 unmapped: 5136384 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8285000/0x0/0x4ffc00000, data 0x33212d8/0x33e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121307136 unmapped: 5136384 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8285000/0x0/0x4ffc00000, data 0x33212d8/0x33e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121307136 unmapped: 5136384 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8285000/0x0/0x4ffc00000, data 0x33212d8/0x33e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8285000/0x0/0x4ffc00000, data 0x33212d8/0x33e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121307136 unmapped: 5136384 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1535659 data_alloc: 251658240 data_used: 32759808
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121307136 unmapped: 5136384 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121307136 unmapped: 5136384 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121307136 unmapped: 5136384 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121307136 unmapped: 5136384 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121307136 unmapped: 5136384 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8285000/0x0/0x4ffc00000, data 0x33212d8/0x33e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1535659 data_alloc: 251658240 data_used: 32759808
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121307136 unmapped: 5136384 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121307136 unmapped: 5136384 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8285000/0x0/0x4ffc00000, data 0x33212d8/0x33e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121307136 unmapped: 5136384 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121307136 unmapped: 5136384 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8285000/0x0/0x4ffc00000, data 0x33212d8/0x33e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121315328 unmapped: 5128192 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8285000/0x0/0x4ffc00000, data 0x33212d8/0x33e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1535659 data_alloc: 251658240 data_used: 32759808
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121315328 unmapped: 5128192 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121315328 unmapped: 5128192 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121315328 unmapped: 5128192 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8285000/0x0/0x4ffc00000, data 0x33212d8/0x33e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121315328 unmapped: 5128192 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8285000/0x0/0x4ffc00000, data 0x33212d8/0x33e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121315328 unmapped: 5128192 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8285000/0x0/0x4ffc00000, data 0x33212d8/0x33e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1535659 data_alloc: 251658240 data_used: 32759808
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121315328 unmapped: 5128192 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8285000/0x0/0x4ffc00000, data 0x33212d8/0x33e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121315328 unmapped: 5128192 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121315328 unmapped: 5128192 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8285000/0x0/0x4ffc00000, data 0x33212d8/0x33e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121315328 unmapped: 5128192 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121323520 unmapped: 5120000 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1535659 data_alloc: 251658240 data_used: 32759808
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121323520 unmapped: 5120000 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121323520 unmapped: 5120000 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121323520 unmapped: 5120000 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121323520 unmapped: 5120000 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8285000/0x0/0x4ffc00000, data 0x33212d8/0x33e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121323520 unmapped: 5120000 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1535659 data_alloc: 251658240 data_used: 32759808
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121323520 unmapped: 5120000 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121323520 unmapped: 5120000 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121323520 unmapped: 5120000 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121323520 unmapped: 5120000 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8285000/0x0/0x4ffc00000, data 0x33212d8/0x33e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121323520 unmapped: 5120000 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8285000/0x0/0x4ffc00000, data 0x33212d8/0x33e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1535659 data_alloc: 251658240 data_used: 32759808
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121323520 unmapped: 5120000 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8285000/0x0/0x4ffc00000, data 0x33212d8/0x33e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121323520 unmapped: 5120000 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1535659 data_alloc: 251658240 data_used: 32759808
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121323520 unmapped: 5120000 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8285000/0x0/0x4ffc00000, data 0x33212d8/0x33e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121323520 unmapped: 5120000 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 ms_handle_reset con 0x5562f8c76000 session 0x5562fa2de780
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 126.250846863s of 126.844032288s, submitted: 90
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 ms_handle_reset con 0x5562f8c76400 session 0x5562f8792960
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 ms_handle_reset con 0x5562f8c76800 session 0x5562f72b7c20
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8285000/0x0/0x4ffc00000, data 0x33212d8/0x33e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,2])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1470747 data_alloc: 251658240 data_used: 29425664
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 120037376 unmapped: 6406144 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 ms_handle_reset con 0x5562f9acfc00 session 0x5562fa2df680
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121118720 unmapped: 5324800 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f882a000/0x0/0x4ffc00000, data 0x2d7c2d8/0x2e44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121118720 unmapped: 5324800 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1457905 data_alloc: 251658240 data_used: 29208576
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121118720 unmapped: 5324800 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f882a000/0x0/0x4ffc00000, data 0x2d7c2d8/0x2e44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121118720 unmapped: 5324800 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1457905 data_alloc: 234881024 data_used: 29208576
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121118720 unmapped: 5324800 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f882a000/0x0/0x4ffc00000, data 0x2d7c2d8/0x2e44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121118720 unmapped: 5324800 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1457905 data_alloc: 234881024 data_used: 29208576
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121118720 unmapped: 5324800 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f882a000/0x0/0x4ffc00000, data 0x2d7c2d8/0x2e44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1457905 data_alloc: 234881024 data_used: 29208576
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121118720 unmapped: 5324800 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f882a000/0x0/0x4ffc00000, data 0x2d7c2d8/0x2e44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121118720 unmapped: 5324800 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1457905 data_alloc: 234881024 data_used: 29208576
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121118720 unmapped: 5324800 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f882a000/0x0/0x4ffc00000, data 0x2d7c2d8/0x2e44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121118720 unmapped: 5324800 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1457905 data_alloc: 234881024 data_used: 29208576
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121118720 unmapped: 5324800 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f882a000/0x0/0x4ffc00000, data 0x2d7c2d8/0x2e44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121118720 unmapped: 5324800 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f882a000/0x0/0x4ffc00000, data 0x2d7c2d8/0x2e44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121118720 unmapped: 5324800 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f882a000/0x0/0x4ffc00000, data 0x2d7c2d8/0x2e44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1457905 data_alloc: 234881024 data_used: 29208576
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121118720 unmapped: 5324800 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f882a000/0x0/0x4ffc00000, data 0x2d7c2d8/0x2e44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121118720 unmapped: 5324800 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1457905 data_alloc: 234881024 data_used: 29208576
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121118720 unmapped: 5324800 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f882a000/0x0/0x4ffc00000, data 0x2d7c2d8/0x2e44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121118720 unmapped: 5324800 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1457905 data_alloc: 234881024 data_used: 29208576
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121118720 unmapped: 5324800 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f882a000/0x0/0x4ffc00000, data 0x2d7c2d8/0x2e44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121118720 unmapped: 5324800 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1457905 data_alloc: 234881024 data_used: 29208576
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121118720 unmapped: 5324800 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f882a000/0x0/0x4ffc00000, data 0x2d7c2d8/0x2e44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121118720 unmapped: 5324800 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f882a000/0x0/0x4ffc00000, data 0x2d7c2d8/0x2e44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121118720 unmapped: 5324800 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1457905 data_alloc: 234881024 data_used: 29208576
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121118720 unmapped: 5324800 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f882a000/0x0/0x4ffc00000, data 0x2d7c2d8/0x2e44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121118720 unmapped: 5324800 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f882a000/0x0/0x4ffc00000, data 0x2d7c2d8/0x2e44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1457905 data_alloc: 234881024 data_used: 29208576
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121118720 unmapped: 5324800 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f882a000/0x0/0x4ffc00000, data 0x2d7c2d8/0x2e44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121118720 unmapped: 5324800 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1457905 data_alloc: 234881024 data_used: 29208576
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121118720 unmapped: 5324800 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f882a000/0x0/0x4ffc00000, data 0x2d7c2d8/0x2e44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121118720 unmapped: 5324800 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1457905 data_alloc: 234881024 data_used: 29208576
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121118720 unmapped: 5324800 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121135104 unmapped: 5308416 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f882a000/0x0/0x4ffc00000, data 0x2d7c2d8/0x2e44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121135104 unmapped: 5308416 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1457905 data_alloc: 234881024 data_used: 29208576
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121135104 unmapped: 5308416 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f882a000/0x0/0x4ffc00000, data 0x2d7c2d8/0x2e44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 ms_handle_reset con 0x5562f9acdc00 session 0x5562facbaf00
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1457905 data_alloc: 234881024 data_used: 29208576
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121143296 unmapped: 5300224 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: mgrc ms_handle_reset ms_handle_reset con 0x5562f7d04400
Dec  3 19:16:38 compute-0 ceph-osd[207851]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/817799961
Dec  3 19:16:38 compute-0 ceph-osd[207851]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/817799961,v1:192.168.122.100:6801/817799961]
Dec  3 19:16:38 compute-0 ceph-osd[207851]: mgrc handle_mgr_configure stats_period=5
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121233408 unmapped: 5210112 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 ms_handle_reset con 0x5562f9ad3400 session 0x5562facb65a0
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f882a000/0x0/0x4ffc00000, data 0x2d7c2d8/0x2e44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 ms_handle_reset con 0x5562f7f96400 session 0x5562facbb0e0
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121233408 unmapped: 5210112 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f882a000/0x0/0x4ffc00000, data 0x2d7c2d8/0x2e44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1457905 data_alloc: 234881024 data_used: 29208576
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121233408 unmapped: 5210112 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1457905 data_alloc: 234881024 data_used: 29208576
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121233408 unmapped: 5210112 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f882a000/0x0/0x4ffc00000, data 0x2d7c2d8/0x2e44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121233408 unmapped: 5210112 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1457905 data_alloc: 234881024 data_used: 29208576
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121233408 unmapped: 5210112 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f882a000/0x0/0x4ffc00000, data 0x2d7c2d8/0x2e44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121233408 unmapped: 5210112 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f882a000/0x0/0x4ffc00000, data 0x2d7c2d8/0x2e44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121233408 unmapped: 5210112 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1457905 data_alloc: 234881024 data_used: 29208576
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121233408 unmapped: 5210112 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f882a000/0x0/0x4ffc00000, data 0x2d7c2d8/0x2e44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121233408 unmapped: 5210112 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1457905 data_alloc: 234881024 data_used: 29208576
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121233408 unmapped: 5210112 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f882a000/0x0/0x4ffc00000, data 0x2d7c2d8/0x2e44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121233408 unmapped: 5210112 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1457905 data_alloc: 234881024 data_used: 29208576
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121233408 unmapped: 5210112 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f882a000/0x0/0x4ffc00000, data 0x2d7c2d8/0x2e44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121233408 unmapped: 5210112 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1457905 data_alloc: 234881024 data_used: 29208576
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121233408 unmapped: 5210112 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f882a000/0x0/0x4ffc00000, data 0x2d7c2d8/0x2e44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121233408 unmapped: 5210112 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 ms_handle_reset con 0x5562f7fd1800 session 0x5562f72b6780
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 ms_handle_reset con 0x5562f8c77400 session 0x5562f9fb7860
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 120.027465820s of 120.593963623s, submitted: 53
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 ms_handle_reset con 0x5562f8c77800 session 0x5562f7e0da40
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1345855 data_alloc: 218103808 data_used: 22319104
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116154368 unmapped: 10289152 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 ms_handle_reset con 0x5562f8c76800 session 0x5562f88bdc20
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f9074000/0x0/0x4ffc00000, data 0x25332c8/0x25fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 10682368 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f9074000/0x0/0x4ffc00000, data 0x25332c8/0x25fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 10682368 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333473 data_alloc: 218103808 data_used: 22102016
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 10682368 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f9074000/0x0/0x4ffc00000, data 0x25332c8/0x25fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333473 data_alloc: 218103808 data_used: 22102016
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 10682368 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115769344 unmapped: 10674176 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f9074000/0x0/0x4ffc00000, data 0x25332c8/0x25fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115769344 unmapped: 10674176 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333473 data_alloc: 218103808 data_used: 22102016
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115769344 unmapped: 10674176 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f9074000/0x0/0x4ffc00000, data 0x25332c8/0x25fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115769344 unmapped: 10674176 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333473 data_alloc: 218103808 data_used: 22102016
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115769344 unmapped: 10674176 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f9074000/0x0/0x4ffc00000, data 0x25332c8/0x25fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333473 data_alloc: 218103808 data_used: 22102016
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115769344 unmapped: 10674176 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f9074000/0x0/0x4ffc00000, data 0x25332c8/0x25fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115769344 unmapped: 10674176 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f9074000/0x0/0x4ffc00000, data 0x25332c8/0x25fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115769344 unmapped: 10674176 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f9074000/0x0/0x4ffc00000, data 0x25332c8/0x25fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333473 data_alloc: 218103808 data_used: 22102016
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115769344 unmapped: 10674176 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333633 data_alloc: 218103808 data_used: 22106112
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f9074000/0x0/0x4ffc00000, data 0x25332c8/0x25fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115769344 unmapped: 10674176 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f9074000/0x0/0x4ffc00000, data 0x25332c8/0x25fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 128 handle_osd_map epochs [128,129], i have 128, src has [1,129]
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 36.412616730s of 36.639865875s, submitted: 40
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 129 ms_handle_reset con 0x5562f9ad2000 session 0x5562f72be1e0
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115769344 unmapped: 10674176 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115769344 unmapped: 10674176 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115769344 unmapped: 10674176 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 129 heartbeat osd_stat(store_statfs(0x4f9070000/0x0/0x4ffc00000, data 0x2534e45/0x25fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115769344 unmapped: 10674176 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1337807 data_alloc: 218103808 data_used: 22114304
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115769344 unmapped: 10674176 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115769344 unmapped: 10674176 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115769344 unmapped: 10674176 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115769344 unmapped: 10674176 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 129 heartbeat osd_stat(store_statfs(0x4f9070000/0x0/0x4ffc00000, data 0x2534e45/0x25fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 129 ms_handle_reset con 0x5562f7fd1800 session 0x5562f8c88d20
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115769344 unmapped: 10674176 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 129 ms_handle_reset con 0x5562f8c76800 session 0x5562fabd6960
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 129 ms_handle_reset con 0x5562facf2400 session 0x5562f8ec01e0
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1337807 data_alloc: 218103808 data_used: 22114304
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115769344 unmapped: 10674176 heap: 126443520 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 129 ms_handle_reset con 0x5562facf2400 session 0x5562fa4ccb40
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 129 ms_handle_reset con 0x5562f8c77800 session 0x5562f79ef4a0
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 129 ms_handle_reset con 0x5562f8c77c00 session 0x5562fac590e0
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 129 ms_handle_reset con 0x5562f7fd1800 session 0x5562fa72ab40
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.406433105s of 10.519741058s, submitted: 16
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115482624 unmapped: 23027712 heap: 138510336 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 129 ms_handle_reset con 0x5562f8c76800 session 0x5562facbba40
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115482624 unmapped: 23027712 heap: 138510336 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115482624 unmapped: 23027712 heap: 138510336 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 129 ms_handle_reset con 0x5562f8c77800 session 0x5562f9a845a0
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115441664 unmapped: 23068672 heap: 138510336 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 129 heartbeat osd_stat(store_statfs(0x4f88f4000/0x0/0x4ffc00000, data 0x2cb0e68/0x2d7a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1402958 data_alloc: 218103808 data_used: 22122496
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115458048 unmapped: 23052288 heap: 138510336 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115458048 unmapped: 23052288 heap: 138510336 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115277824 unmapped: 23232512 heap: 138510336 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 117702656 unmapped: 20807680 heap: 138510336 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 117866496 unmapped: 20643840 heap: 138510336 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 129 heartbeat osd_stat(store_statfs(0x4f88f4000/0x0/0x4ffc00000, data 0x2cb0e68/0x2d7a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 129 heartbeat osd_stat(store_statfs(0x4f88f4000/0x0/0x4ffc00000, data 0x2cb0e68/0x2d7a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1451278 data_alloc: 234881024 data_used: 28835840
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 117866496 unmapped: 20643840 heap: 138510336 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 129 heartbeat osd_stat(store_statfs(0x4f88f4000/0x0/0x4ffc00000, data 0x2cb0e68/0x2d7a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 117866496 unmapped: 20643840 heap: 138510336 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 117866496 unmapped: 20643840 heap: 138510336 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 117866496 unmapped: 20643840 heap: 138510336 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 129 heartbeat osd_stat(store_statfs(0x4f88f4000/0x0/0x4ffc00000, data 0x2cb0e68/0x2d7a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 117866496 unmapped: 20643840 heap: 138510336 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 129 heartbeat osd_stat(store_statfs(0x4f88f4000/0x0/0x4ffc00000, data 0x2cb0e68/0x2d7a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1451278 data_alloc: 234881024 data_used: 28835840
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 117866496 unmapped: 20643840 heap: 138510336 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 117866496 unmapped: 20643840 heap: 138510336 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 129 heartbeat osd_stat(store_statfs(0x4f88f4000/0x0/0x4ffc00000, data 0x2cb0e68/0x2d7a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 117866496 unmapped: 20643840 heap: 138510336 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 129 heartbeat osd_stat(store_statfs(0x4f88f4000/0x0/0x4ffc00000, data 0x2cb0e68/0x2d7a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 117866496 unmapped: 20643840 heap: 138510336 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 117866496 unmapped: 20643840 heap: 138510336 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 129 heartbeat osd_stat(store_statfs(0x4f88f4000/0x0/0x4ffc00000, data 0x2cb0e68/0x2d7a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1451278 data_alloc: 234881024 data_used: 28835840
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 117866496 unmapped: 20643840 heap: 138510336 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 117866496 unmapped: 20643840 heap: 138510336 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 117866496 unmapped: 20643840 heap: 138510336 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 129 ms_handle_reset con 0x5562f8c77c00 session 0x5562f8cafc20
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 129 ms_handle_reset con 0x5562facf2400 session 0x5562f88bc5a0
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 129 ms_handle_reset con 0x5562fd490400 session 0x5562f87ee780
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 22.110158920s of 22.159778595s, submitted: 9
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 117866496 unmapped: 20643840 heap: 138510336 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 129 ms_handle_reset con 0x5562f7fd1800 session 0x5562f88e8960
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 23830528 heap: 138510336 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 129 heartbeat osd_stat(store_statfs(0x4f9070000/0x0/0x4ffc00000, data 0x2534e45/0x25fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1344770 data_alloc: 218103808 data_used: 22114304
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 23830528 heap: 138510336 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 23830528 heap: 138510336 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 130 ms_handle_reset con 0x5562f8c76800 session 0x5562f9fb03c0
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114688000 unmapped: 23822336 heap: 138510336 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114688000 unmapped: 23822336 heap: 138510336 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114688000 unmapped: 23822336 heap: 138510336 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 130 heartbeat osd_stat(store_statfs(0x4f906d000/0x0/0x4ffc00000, data 0x2536a16/0x2600000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 130 ms_handle_reset con 0x5562f99e4c00 session 0x5562fa3d9e00
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1348768 data_alloc: 218103808 data_used: 22122496
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114688000 unmapped: 23822336 heap: 138510336 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114688000 unmapped: 23822336 heap: 138510336 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114688000 unmapped: 23822336 heap: 138510336 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114688000 unmapped: 23822336 heap: 138510336 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.767836571s of 10.969620705s, submitted: 45
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114696192 unmapped: 23814144 heap: 138510336 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351742 data_alloc: 218103808 data_used: 22122496
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 131 heartbeat osd_stat(store_statfs(0x4f906a000/0x0/0x4ffc00000, data 0x2538479/0x2603000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114696192 unmapped: 23814144 heap: 138510336 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114696192 unmapped: 23814144 heap: 138510336 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114696192 unmapped: 23814144 heap: 138510336 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114712576 unmapped: 23797760 heap: 138510336 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114712576 unmapped: 23797760 heap: 138510336 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1352466 data_alloc: 218103808 data_used: 22122496
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114712576 unmapped: 23797760 heap: 138510336 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 131 heartbeat osd_stat(store_statfs(0x4f886b000/0x0/0x4ffc00000, data 0x2d38479/0x2e03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 32186368 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 32186368 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 32169984 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 131 handle_osd_map epochs [131,132], i have 131, src has [1,132]
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 132 ms_handle_reset con 0x5562f99e4c00 session 0x5562f87934a0
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 32153600 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1411377 data_alloc: 218103808 data_used: 22130688
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f8867000/0x0/0x4ffc00000, data 0x2d39ff6/0x2e06000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 32153600 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 32153600 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 32153600 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 32153600 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f8867000/0x0/0x4ffc00000, data 0x2d39ff6/0x2e06000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 32153600 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f8867000/0x0/0x4ffc00000, data 0x2d39ff6/0x2e06000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1411377 data_alloc: 218103808 data_used: 22130688
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 32153600 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.176313400s of 16.301218033s, submitted: 27
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114286592 unmapped: 32620544 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 133 ms_handle_reset con 0x5562f8c77c00 session 0x5562f7d465a0
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114286592 unmapped: 32620544 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f9064000/0x0/0x4ffc00000, data 0x253bbc7/0x2609000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114286592 unmapped: 32620544 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f9064000/0x0/0x4ffc00000, data 0x253bbc7/0x2609000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114286592 unmapped: 32620544 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1360505 data_alloc: 218103808 data_used: 22138880
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114286592 unmapped: 32620544 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f9064000/0x0/0x4ffc00000, data 0x253bbc7/0x2609000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114286592 unmapped: 32620544 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114286592 unmapped: 32620544 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114286592 unmapped: 32620544 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114286592 unmapped: 32620544 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f9064000/0x0/0x4ffc00000, data 0x253bbc7/0x2609000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1360505 data_alloc: 218103808 data_used: 22138880
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114286592 unmapped: 32620544 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114286592 unmapped: 32620544 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114286592 unmapped: 32620544 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114286592 unmapped: 32620544 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f9064000/0x0/0x4ffc00000, data 0x253bbc7/0x2609000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.182599068s of 13.344293594s, submitted: 39
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1363479 data_alloc: 218103808 data_used: 22138880
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9061000/0x0/0x4ffc00000, data 0x253d62a/0x260c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1363479 data_alloc: 218103808 data_used: 22138880
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9061000/0x0/0x4ffc00000, data 0x253d62a/0x260c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1363479 data_alloc: 218103808 data_used: 22138880
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9061000/0x0/0x4ffc00000, data 0x253d62a/0x260c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9061000/0x0/0x4ffc00000, data 0x253d62a/0x260c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9061000/0x0/0x4ffc00000, data 0x253d62a/0x260c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1363479 data_alloc: 218103808 data_used: 22138880
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9061000/0x0/0x4ffc00000, data 0x253d62a/0x260c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1363479 data_alloc: 218103808 data_used: 22138880
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9061000/0x0/0x4ffc00000, data 0x253d62a/0x260c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9061000/0x0/0x4ffc00000, data 0x253d62a/0x260c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9061000/0x0/0x4ffc00000, data 0x253d62a/0x260c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1363479 data_alloc: 218103808 data_used: 22138880
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9061000/0x0/0x4ffc00000, data 0x253d62a/0x260c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1363479 data_alloc: 218103808 data_used: 22138880
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9061000/0x0/0x4ffc00000, data 0x253d62a/0x260c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9061000/0x0/0x4ffc00000, data 0x253d62a/0x260c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1363479 data_alloc: 218103808 data_used: 22138880
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9061000/0x0/0x4ffc00000, data 0x253d62a/0x260c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1363479 data_alloc: 218103808 data_used: 22138880
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9061000/0x0/0x4ffc00000, data 0x253d62a/0x260c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9061000/0x0/0x4ffc00000, data 0x253d62a/0x260c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1363479 data_alloc: 218103808 data_used: 22138880
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9061000/0x0/0x4ffc00000, data 0x253d62a/0x260c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1363479 data_alloc: 218103808 data_used: 22138880
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9061000/0x0/0x4ffc00000, data 0x253d62a/0x260c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1363479 data_alloc: 218103808 data_used: 22138880
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9061000/0x0/0x4ffc00000, data 0x253d62a/0x260c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1363479 data_alloc: 218103808 data_used: 22138880
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1363479 data_alloc: 218103808 data_used: 22138880
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9061000/0x0/0x4ffc00000, data 0x253d62a/0x260c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9061000/0x0/0x4ffc00000, data 0x253d62a/0x260c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1363479 data_alloc: 218103808 data_used: 22138880
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9061000/0x0/0x4ffc00000, data 0x253d62a/0x260c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9061000/0x0/0x4ffc00000, data 0x253d62a/0x260c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1363479 data_alloc: 218103808 data_used: 22138880
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9061000/0x0/0x4ffc00000, data 0x253d62a/0x260c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9061000/0x0/0x4ffc00000, data 0x253d62a/0x260c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1363479 data_alloc: 218103808 data_used: 22138880
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 32645120 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1363479 data_alloc: 218103808 data_used: 22138880
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9061000/0x0/0x4ffc00000, data 0x253d62a/0x260c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9061000/0x0/0x4ffc00000, data 0x253d62a/0x260c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1363479 data_alloc: 218103808 data_used: 22138880
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9061000/0x0/0x4ffc00000, data 0x253d62a/0x260c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9061000/0x0/0x4ffc00000, data 0x253d62a/0x260c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1363479 data_alloc: 218103808 data_used: 22138880
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9061000/0x0/0x4ffc00000, data 0x253d62a/0x260c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9061000/0x0/0x4ffc00000, data 0x253d62a/0x260c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1363479 data_alloc: 218103808 data_used: 22138880
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9061000/0x0/0x4ffc00000, data 0x253d62a/0x260c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9061000/0x0/0x4ffc00000, data 0x253d62a/0x260c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9061000/0x0/0x4ffc00000, data 0x253d62a/0x260c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1363479 data_alloc: 218103808 data_used: 22138880
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9061000/0x0/0x4ffc00000, data 0x253d62a/0x260c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9061000/0x0/0x4ffc00000, data 0x253d62a/0x260c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9061000/0x0/0x4ffc00000, data 0x253d62a/0x260c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9061000/0x0/0x4ffc00000, data 0x253d62a/0x260c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1363479 data_alloc: 218103808 data_used: 22138880
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9061000/0x0/0x4ffc00000, data 0x253d62a/0x260c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9061000/0x0/0x4ffc00000, data 0x253d62a/0x260c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1363479 data_alloc: 218103808 data_used: 22138880
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9061000/0x0/0x4ffc00000, data 0x253d62a/0x260c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1363479 data_alloc: 218103808 data_used: 22138880
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9061000/0x0/0x4ffc00000, data 0x253d62a/0x260c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1363479 data_alloc: 218103808 data_used: 22138880
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9061000/0x0/0x4ffc00000, data 0x253d62a/0x260c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1363479 data_alloc: 218103808 data_used: 22138880
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9061000/0x0/0x4ffc00000, data 0x253d62a/0x260c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1363479 data_alloc: 218103808 data_used: 22138880
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9061000/0x0/0x4ffc00000, data 0x253d62a/0x260c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1363479 data_alloc: 218103808 data_used: 22138880
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9061000/0x0/0x4ffc00000, data 0x253d62a/0x260c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 3000.2 total, 600.0 interval
Cumulative writes: 8440 writes, 32K keys, 8440 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
Cumulative WAL: 8440 writes, 1958 syncs, 4.31 writes per sync, written: 0.02 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 697 writes, 1877 keys, 697 commit groups, 1.0 writes per commit group, ingest: 1.00 MB, 0.00 MB/s
Interval WAL: 697 writes, 315 syncs, 2.21 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
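A few of the derived figures in the stats dump can be cross-checked directly: "writes per sync" is just writes divided by syncs, and the interval ingest of 1.00 MB over the 600 s interval explains the 0.00 MB/s rounding:

```python
# Sanity-check the derived numbers in the DB Stats dump (arithmetic only).
cum_writes, cum_syncs = 8440, 1958
int_writes, int_syncs = 697, 315
interval_secs = 600.0

print(f"cumulative writes per sync: {cum_writes/cum_syncs:.2f}")     # ~4.31
print(f"interval   writes per sync: {int_writes/int_syncs:.2f}")     # ~2.21
print(f"interval ingest: {1.00/interval_secs*1000:.2f} KB/s")        # ~1.67
```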
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9061000/0x0/0x4ffc00000, data 0x253d62a/0x260c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1363479 data_alloc: 218103808 data_used: 22138880
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9061000/0x0/0x4ffc00000, data 0x253d62a/0x260c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1363479 data_alloc: 218103808 data_used: 22138880
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9061000/0x0/0x4ffc00000, data 0x253d62a/0x260c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9061000/0x0/0x4ffc00000, data 0x253d62a/0x260c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1363479 data_alloc: 218103808 data_used: 22138880
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 32636928 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 160.615066528s of 160.637496948s, submitted: 13
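The _kv_sync_thread utilization line says the commit thread was idle for 160.615 s of a 160.637 s window while submitting 13 batches, i.e. this OSD was doing almost no write work:

```python
# Busy fraction of the kv sync thread from the utilization line above.
idle, window, submitted = 160.615066528, 160.637496948, 13
busy = window - idle
print(f"busy {busy:.3f}s of {window:.1f}s ({busy/window:.4%}), "
      f"{submitted/window:.3f} commit batches/s")
```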
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 ms_handle_reset con 0x5562fa58d400 session 0x5562fa3d8b40
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 ms_handle_reset con 0x5562f72c0800 session 0x5562fa024d20
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 ms_handle_reset con 0x5562f72c1400 session 0x5562f7e9a780
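ms_handle_reset is the messenger callback fired when a peer drops an established connection; bursts of a few at a time on an otherwise quiet OSD are commonly just short-lived client sessions closing. A throwaway way to check whether any single connection is resetting repeatedly ("osd.log" is a placeholder path, not something from this log):

```python
# Tally ms_handle_reset events per connection pointer in a saved log file.
import collections
import re

resets = collections.Counter()
with open("osd.log") as f:  # placeholder path for a captured log
    for line in f:
        m = re.search(r"ms_handle_reset con (0x[0-9a-f]+)", line)
        if m:
            resets[m.group(1)] += 1

for con, n in resets.most_common():
    print(n, con)
```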
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114253824 unmapped: 32653312 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 111919104 unmapped: 34988032 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 ms_handle_reset con 0x5562f7fd1800 session 0x5562f8c88000
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f985e000/0x0/0x4ffc00000, data 0x1d425b8/0x1e0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263605 data_alloc: 218103808 data_used: 18489344
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 111927296 unmapped: 34979840 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 111927296 unmapped: 34979840 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f985f000/0x0/0x4ffc00000, data 0x1d3e546/0x1e09000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 111927296 unmapped: 34979840 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 111927296 unmapped: 34979840 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 111927296 unmapped: 34979840 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263605 data_alloc: 218103808 data_used: 18489344
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 111927296 unmapped: 34979840 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 111927296 unmapped: 34979840 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 111927296 unmapped: 34979840 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f985f000/0x0/0x4ffc00000, data 0x1d3e546/0x1e09000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 111927296 unmapped: 34979840 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 111927296 unmapped: 34979840 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263605 data_alloc: 218103808 data_used: 18489344
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 111927296 unmapped: 34979840 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 111927296 unmapped: 34979840 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 111927296 unmapped: 34979840 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f985f000/0x0/0x4ffc00000, data 0x1d3e546/0x1e09000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f985f000/0x0/0x4ffc00000, data 0x1d3e546/0x1e09000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 111927296 unmapped: 34979840 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 111927296 unmapped: 34979840 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f985f000/0x0/0x4ffc00000, data 0x1d3e546/0x1e09000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263605 data_alloc: 218103808 data_used: 18489344
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 111927296 unmapped: 34979840 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f985f000/0x0/0x4ffc00000, data 0x1d3e546/0x1e09000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 111927296 unmapped: 34979840 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 111927296 unmapped: 34979840 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.211612701s of 19.519193649s, submitted: 62
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 ms_handle_reset con 0x5562f7f96c00 session 0x5562f8ec1e00
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 ms_handle_reset con 0x5562facf2000 session 0x5562facbb860
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 ms_handle_reset con 0x5562f8c77800 session 0x5562f8eac3c0
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102629376 unmapped: 44277760 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 ms_handle_reset con 0x5562f72c0800 session 0x5562fa2dfa40
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4face3000/0x0/0x4ffc00000, data 0x8c24c4/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029494 data_alloc: 218103808 data_used: 7090176
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c24c4/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029494 data_alloc: 218103808 data_used: 7090176
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c24c4/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029494 data_alloc: 218103808 data_used: 7090176
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c24c4/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c24c4/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029494 data_alloc: 218103808 data_used: 7090176
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c24c4/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c24c4/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029494 data_alloc: 218103808 data_used: 7090176
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c24c4/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029494 data_alloc: 218103808 data_used: 7090176
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c24c4/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c24c4/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029494 data_alloc: 218103808 data_used: 7090176
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c24c4/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c24c4/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029494 data_alloc: 218103808 data_used: 7090176
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c24c4/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029494 data_alloc: 218103808 data_used: 7090176
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c24c4/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029494 data_alloc: 218103808 data_used: 7090176
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c24c4/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c24c4/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029494 data_alloc: 218103808 data_used: 7090176
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c24c4/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029494 data_alloc: 218103808 data_used: 7090176
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c24c4/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c24c4/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 62.564529419s of 62.779438019s, submitted: 52
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029494 data_alloc: 218103808 data_used: 7090176
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103759872 unmapped: 43147264 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c24c4/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c24c4/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,1])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103743488 unmapped: 43163648 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 43098112 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103866368 unmapped: 43040768 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029494 data_alloc: 218103808 data_used: 7090176
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103866368 unmapped: 43040768 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c24c4/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103866368 unmapped: 43040768 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029494 data_alloc: 218103808 data_used: 7090176
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103866368 unmapped: 43040768 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c24c4/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103866368 unmapped: 43040768 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029494 data_alloc: 218103808 data_used: 7090176
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103866368 unmapped: 43040768 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c24c4/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103866368 unmapped: 43040768 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c24c4/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029494 data_alloc: 218103808 data_used: 7090176
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103866368 unmapped: 43040768 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 23.952238083s of 24.430356979s, submitted: 90
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 112263168 unmapped: 34643968 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086156 data_alloc: 218103808 data_used: 7090176
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103890944 unmapped: 43016192 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa4e3000/0x0/0x4ffc00000, data 0x10c24d4/0x118b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 135 ms_handle_reset con 0x5562f72c1400 session 0x5562f88d6960
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103899136 unmapped: 43008000 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103915520 unmapped: 42991616 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562f7fd1800 session 0x5562f9b0be00
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103915520 unmapped: 42991616 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103931904 unmapped: 42975232 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1148648 data_alloc: 218103808 data_used: 7098368
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f9cdb000/0x0/0x4ffc00000, data 0x18c5bce/0x1991000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103931904 unmapped: 42975232 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f9cdb000/0x0/0x4ffc00000, data 0x18c5bce/0x1991000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103931904 unmapped: 42975232 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1148648 data_alloc: 218103808 data_used: 7098368
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103931904 unmapped: 42975232 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f9cdb000/0x0/0x4ffc00000, data 0x18c5bce/0x1991000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103931904 unmapped: 42975232 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1148648 data_alloc: 218103808 data_used: 7098368
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103931904 unmapped: 42975232 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f9cdb000/0x0/0x4ffc00000, data 0x18c5bce/0x1991000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103931904 unmapped: 42975232 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1148648 data_alloc: 218103808 data_used: 7098368
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103931904 unmapped: 42975232 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f9cdb000/0x0/0x4ffc00000, data 0x18c5bce/0x1991000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103931904 unmapped: 42975232 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f9cdb000/0x0/0x4ffc00000, data 0x18c5bce/0x1991000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1148648 data_alloc: 218103808 data_used: 7098368
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103931904 unmapped: 42975232 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f9cdb000/0x0/0x4ffc00000, data 0x18c5bce/0x1991000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103931904 unmapped: 42975232 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1148648 data_alloc: 218103808 data_used: 7098368
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f9cdb000/0x0/0x4ffc00000, data 0x18c5bce/0x1991000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103931904 unmapped: 42975232 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f9cdb000/0x0/0x4ffc00000, data 0x18c5bce/0x1991000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103931904 unmapped: 42975232 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f9cdb000/0x0/0x4ffc00000, data 0x18c5bce/0x1991000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1148648 data_alloc: 218103808 data_used: 7098368
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103931904 unmapped: 42975232 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f9cdb000/0x0/0x4ffc00000, data 0x18c5bce/0x1991000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103931904 unmapped: 42975232 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1148648 data_alloc: 218103808 data_used: 7098368
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103931904 unmapped: 42975232 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f9cdb000/0x0/0x4ffc00000, data 0x18c5bce/0x1991000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103931904 unmapped: 42975232 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f9cdb000/0x0/0x4ffc00000, data 0x18c5bce/0x1991000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103931904 unmapped: 42975232 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1148648 data_alloc: 218103808 data_used: 7098368
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103931904 unmapped: 42975232 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562f72c0800 session 0x5562f9fb05a0
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562f72c1400 session 0x5562f9fb0d20
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562f8c77800 session 0x5562f9fb0b40
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103948288 unmapped: 42958848 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562facf2000 session 0x5562f9322d20
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562fa58d400 session 0x5562f93230e0
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 111779840 unmapped: 35127296 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562f72c0800 session 0x5562f811e3c0
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 49.107105255s of 49.211765289s, submitted: 6
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562f72c1400 session 0x5562f9190f00
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 35463168 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562f8c77800 session 0x5562fa4c25a0
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562facf2000 session 0x5562fa4c21e0
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562f9a56800 session 0x5562fa4c32c0
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562f72c0800 session 0x5562fa4c2f00
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f9750000/0x0/0x4ffc00000, data 0x1e51bde/0x1f1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 35463168 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215068 data_alloc: 218103808 data_used: 13914112
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562f72c1400 session 0x5562fa4c23c0
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 35463168 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562f8c77800 session 0x5562fa4c2000
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 35463168 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f9750000/0x0/0x4ffc00000, data 0x1e51bde/0x1f1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562facf2000 session 0x5562fa4c30e0
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562f9a56400 session 0x5562fa4c2780
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562fd0df800 session 0x5562f87ef4a0
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 112304128 unmapped: 34603008 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562fd0df000 session 0x5562fa38fc20
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562fd0df400 session 0x5562fa587e00
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562f72c1400 session 0x5562fa587c20
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 112304128 unmapped: 34603008 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562f8c76800 session 0x5562fa5870e0
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562f72c1400 session 0x5562fa586000
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562fd0df000 session 0x5562f7e9a780
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562fd0df400 session 0x5562f9323680
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1326211 data_alloc: 218103808 data_used: 13918208
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562fd0df800 session 0x5562f8937c20
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562f8c77800 session 0x5562fa025e00
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 112689152 unmapped: 34217984 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562f72c1400 session 0x5562f72bed20
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114491392 unmapped: 32415744 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f8500000/0x0/0x4ffc00000, data 0x309fbfd/0x316e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114532352 unmapped: 32374784 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562fd0df000 session 0x5562f9a5af00
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562fd0df400 session 0x5562f9fafa40
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114597888 unmapped: 32309248 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f8500000/0x0/0x4ffc00000, data 0x309fbfd/0x316e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562fd0df800 session 0x5562f9190f00
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.823292732s of 11.363385201s, submitted: 60
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115425280 unmapped: 31481856 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562facf2000 session 0x5562f811e3c0
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562facf2000 session 0x5562f9fb05a0
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1420580 data_alloc: 234881024 data_used: 19738624
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115425280 unmapped: 31481856 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562f9a56400 session 0x5562fa38fa40
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562f72c0800 session 0x5562fa5b10e0
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562fd0df400 session 0x5562f9b0be00
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 112975872 unmapped: 33931264 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562fd0df800 session 0x5562f88d6960
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562f72c0800 session 0x5562f7e9a5a0
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562f9a56400 session 0x5562f8c86b40
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562facf2000 session 0x5562f89372c0
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562fd0df400 session 0x5562f9322960
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 112386048 unmapped: 34521088 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f8a8c000/0x0/0x4ffc00000, data 0x2b13bfe/0x2be2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [1])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562f9a39400 session 0x5562facf6960
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 112386048 unmapped: 34521088 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562f72c0800 session 0x5562fa4c2d20
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 112386048 unmapped: 34521088 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342282 data_alloc: 218103808 data_used: 14016512
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 112320512 unmapped: 34586624 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f8a8b000/0x0/0x4ffc00000, data 0x2b13c0e/0x2be3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 113311744 unmapped: 33595392 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f8a8b000/0x0/0x4ffc00000, data 0x2b13c0e/0x2be3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 117628928 unmapped: 29278208 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f8a8b000/0x0/0x4ffc00000, data 0x2b13c0e/0x2be3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121487360 unmapped: 25419776 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1458282 data_alloc: 234881024 data_used: 30162944
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121487360 unmapped: 25419776 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f8a8b000/0x0/0x4ffc00000, data 0x2b13c0e/0x2be3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.190856934s of 11.338185310s, submitted: 32
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562f9a38400 session 0x5562fa4c2f00
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562facf2000 session 0x5562f8c863c0
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562f9a39800 session 0x5562f7e0cd20
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562f9a56400 session 0x5562f88ec5a0
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115982336 unmapped: 30924800 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562f72c0800 session 0x5562fa38fc20
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562f9a38400 session 0x5562f8c87860
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116023296 unmapped: 30883840 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f9afd000/0x0/0x4ffc00000, data 0x1aa4bde/0x1b71000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218056 data_alloc: 218103808 data_used: 14536704
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116023296 unmapped: 30883840 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f9afd000/0x0/0x4ffc00000, data 0x1aa4bde/0x1b71000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116023296 unmapped: 30883840 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218056 data_alloc: 218103808 data_used: 14536704
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116031488 unmapped: 30875648 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f9afd000/0x0/0x4ffc00000, data 0x1aa4bde/0x1b71000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116031488 unmapped: 30875648 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218056 data_alloc: 218103808 data_used: 14536704
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116031488 unmapped: 30875648 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116039680 unmapped: 30867456 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562f9a39800 session 0x5562f9faf680
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562facf2000 session 0x5562fac96b40
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562fd0df400 session 0x5562f9faef00
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f9afd000/0x0/0x4ffc00000, data 0x1aa4bde/0x1b71000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562f72c0800 session 0x5562f9a86960
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.093891144s of 18.368560791s, submitted: 65
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562f9a38400 session 0x5562fa565c20
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562f9a39800 session 0x5562f9fb1860
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115064832 unmapped: 31842304 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1250489 data_alloc: 218103808 data_used: 14536704
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115064832 unmapped: 31842304 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 137 ms_handle_reset con 0x5562facf2000 session 0x5562f8c892c0
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115146752 unmapped: 31760384 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 137 ms_handle_reset con 0x5562f9a39000 session 0x5562facff4a0
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 137 ms_handle_reset con 0x5562f9a39000 session 0x5562f8937c20
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115154944 unmapped: 31752192 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f970e000/0x0/0x4ffc00000, data 0x1e8ebe0/0x1f5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115089408 unmapped: 31817728 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 137 handle_osd_map epochs [137,138], i have 137, src has [1,138]
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115113984 unmapped: 31793152 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 138 ms_handle_reset con 0x5562f72c0800 session 0x5562fbef7680
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f970d000/0x0/0x4ffc00000, data 0x1e9038e/0x1f60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1259892 data_alloc: 218103808 data_used: 14553088
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 138 ms_handle_reset con 0x5562f9a38400 session 0x5562fa5645a0
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115122176 unmapped: 31784960 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f970d000/0x0/0x4ffc00000, data 0x1e9038e/0x1f60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 138 ms_handle_reset con 0x5562f9a39800 session 0x5562fa5654a0
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115130368 unmapped: 31776768 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 138 ms_handle_reset con 0x5562facf2000 session 0x5562f9fb0960
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 138 ms_handle_reset con 0x5562f72c0800 session 0x5562f9fb03c0
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115179520 unmapped: 31727616 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f970c000/0x0/0x4ffc00000, data 0x1e903c1/0x1f62000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115179520 unmapped: 31727616 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115851264 unmapped: 31055872 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f91ea000/0x0/0x4ffc00000, data 0x23b23c1/0x2484000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.653357506s of 11.279190063s, submitted: 111
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304023 data_alloc: 218103808 data_used: 14639104
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115499008 unmapped: 31408128 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116088832 unmapped: 30818304 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9159000/0x0/0x4ffc00000, data 0x243fe24/0x2513000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116228096 unmapped: 30679040 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1347987 data_alloc: 234881024 data_used: 18644992
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116228096 unmapped: 30679040 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116228096 unmapped: 30679040 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 31375360 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 31375360 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f913f000/0x0/0x4ffc00000, data 0x245be24/0x252f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 31375360 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1345443 data_alloc: 234881024 data_used: 18644992
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115539968 unmapped: 31367168 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115539968 unmapped: 31367168 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115539968 unmapped: 31367168 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.370025635s of 12.496864319s, submitted: 34
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115703808 unmapped: 31203328 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115712000 unmapped: 31195136 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1345691 data_alloc: 234881024 data_used: 18644992
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9135000/0x0/0x4ffc00000, data 0x2465e24/0x2539000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115712000 unmapped: 31195136 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115712000 unmapped: 31195136 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115712000 unmapped: 31195136 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115712000 unmapped: 31195136 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 139 ms_handle_reset con 0x5562f9a39800 session 0x5562f9322960
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 139 ms_handle_reset con 0x5562f9a38c00 session 0x5562f93223c0
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 139 ms_handle_reset con 0x5562f9a38800 session 0x5562f9322d20
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 139 ms_handle_reset con 0x5562f8115400 session 0x5562f93230e0
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 139 ms_handle_reset con 0x5562f72c0800 session 0x5562f7e0c000
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 139 ms_handle_reset con 0x5562f9a38800 session 0x5562f7d470e0
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 139 ms_handle_reset con 0x5562f9a38c00 session 0x5562f8c89680
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 139 ms_handle_reset con 0x5562f9a39800 session 0x5562f8c8a000
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115810304 unmapped: 31096832 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 139 ms_handle_reset con 0x5562f9b4a400 session 0x5562f811e780
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f8e93000/0x0/0x4ffc00000, data 0x2706e34/0x27db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1369291 data_alloc: 234881024 data_used: 18644992
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115818496 unmapped: 31088640 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115818496 unmapped: 31088640 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f8e93000/0x0/0x4ffc00000, data 0x2706e34/0x27db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115818496 unmapped: 31088640 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115818496 unmapped: 31088640 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115818496 unmapped: 31088640 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1369291 data_alloc: 234881024 data_used: 18644992
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115818496 unmapped: 31088640 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115818496 unmapped: 31088640 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.266865730s of 14.365548134s, submitted: 11
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 139 ms_handle_reset con 0x5562f72c0800 session 0x5562fa586000
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f8e69000/0x0/0x4ffc00000, data 0x2730e34/0x2805000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116178944 unmapped: 30728192 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116187136 unmapped: 30720000 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116219904 unmapped: 30687232 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1372069 data_alloc: 234881024 data_used: 18657280
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116219904 unmapped: 30687232 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116744192 unmapped: 30162944 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116744192 unmapped: 30162944 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f8e67000/0x0/0x4ffc00000, data 0x2731e34/0x2806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116744192 unmapped: 30162944 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116744192 unmapped: 30162944 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1391879 data_alloc: 234881024 data_used: 20267008
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 29458432 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 29458432 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.909446716s of 10.056637764s, submitted: 27
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 139 ms_handle_reset con 0x5562f72c1400 session 0x5562f9fb0d20
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 139 ms_handle_reset con 0x5562fd0df000 session 0x5562f8c87e00
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 30121984 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f8b0b000/0x0/0x4ffc00000, data 0x2a8ee34/0x2b63000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 139 ms_handle_reset con 0x5562f9a39800 session 0x5562f9b0be00
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 30121984 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116129792 unmapped: 30777344 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1419525 data_alloc: 234881024 data_used: 20275200
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116129792 unmapped: 30777344 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116137984 unmapped: 30769152 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f8b0b000/0x0/0x4ffc00000, data 0x2a8ee34/0x2b63000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116137984 unmapped: 30769152 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116137984 unmapped: 30769152 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f8b0b000/0x0/0x4ffc00000, data 0x2a8ee34/0x2b63000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116137984 unmapped: 30769152 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1420501 data_alloc: 234881024 data_used: 20344832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116137984 unmapped: 30769152 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116137984 unmapped: 30769152 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116137984 unmapped: 30769152 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f8b0b000/0x0/0x4ffc00000, data 0x2a8ee34/0x2b63000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116137984 unmapped: 30769152 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116137984 unmapped: 30769152 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1420501 data_alloc: 234881024 data_used: 20344832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f8b0b000/0x0/0x4ffc00000, data 0x2a8ee34/0x2b63000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116146176 unmapped: 30760960 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116146176 unmapped: 30760960 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116146176 unmapped: 30760960 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116146176 unmapped: 30760960 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116146176 unmapped: 30760960 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f8b0b000/0x0/0x4ffc00000, data 0x2a8ee34/0x2b63000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1420501 data_alloc: 234881024 data_used: 20344832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116146176 unmapped: 30760960 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116146176 unmapped: 30760960 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116146176 unmapped: 30760960 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116146176 unmapped: 30760960 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f8b0b000/0x0/0x4ffc00000, data 0x2a8ee34/0x2b63000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f8b0b000/0x0/0x4ffc00000, data 0x2a8ee34/0x2b63000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116146176 unmapped: 30760960 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1420501 data_alloc: 234881024 data_used: 20344832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116154368 unmapped: 30752768 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 24.711545944s of 24.771131516s, submitted: 23
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116162560 unmapped: 30744576 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 119111680 unmapped: 36192256 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7bc2000/0x0/0x4ffc00000, data 0x39d6e44/0x3aac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 119226368 unmapped: 36077568 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 118906880 unmapped: 36397056 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 139 handle_osd_map epochs [139,140], i have 139, src has [1,140]
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562fadb0000 session 0x5562f9fc4f00
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1587372 data_alloc: 234881024 data_used: 21413888
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 118906880 unmapped: 36397056 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 118906880 unmapped: 36397056 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 118906880 unmapped: 36397056 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7743000/0x0/0x4ffc00000, data 0x3e519c1/0x3f28000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 118906880 unmapped: 36397056 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 118906880 unmapped: 36397056 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1588000 data_alloc: 234881024 data_used: 21422080
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 118906880 unmapped: 36397056 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 118906880 unmapped: 36397056 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 118915072 unmapped: 36388864 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.799458504s of 11.224850655s, submitted: 62
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7743000/0x0/0x4ffc00000, data 0x3e519c1/0x3f28000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 36315136 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7744000/0x0/0x4ffc00000, data 0x3e519c1/0x3f28000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 36208640 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1592928 data_alloc: 234881024 data_used: 22020096
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 36208640 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7744000/0x0/0x4ffc00000, data 0x3e519c1/0x3f28000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 119103488 unmapped: 36200448 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562f72c0800 session 0x5562fa4c3e00
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562f72c1400 session 0x5562fa4cdc20
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562f9a39800 session 0x5562f9322960
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 119103488 unmapped: 36200448 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562fadb0000 session 0x5562f9a5a000
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562fd0df000 session 0x5562f9a5b2c0
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130834432 unmapped: 24469504 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562f72c0800 session 0x5562f7d46f00
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562f72c1400 session 0x5562f88ecb40
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130883584 unmapped: 24420352 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1679610 data_alloc: 234881024 data_used: 34021376
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7135000/0x0/0x4ffc00000, data 0x44629c1/0x4539000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130883584 unmapped: 24420352 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130883584 unmapped: 24420352 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130883584 unmapped: 24420352 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130883584 unmapped: 24420352 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f70b5000/0x0/0x4ffc00000, data 0x44e29c1/0x45b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.434187889s of 11.577630043s, submitted: 17
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130883584 unmapped: 24420352 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1679862 data_alloc: 234881024 data_used: 34021376
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130883584 unmapped: 24420352 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130883584 unmapped: 24420352 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130883584 unmapped: 24420352 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130883584 unmapped: 24420352 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f70aa000/0x0/0x4ffc00000, data 0x44e99c1/0x45c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f70aa000/0x0/0x4ffc00000, data 0x44e99c1/0x45c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130883584 unmapped: 24420352 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f70aa000/0x0/0x4ffc00000, data 0x44e99c1/0x45c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1679744 data_alloc: 234881024 data_used: 34017280
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130891776 unmapped: 24412160 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130891776 unmapped: 24412160 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562f9a39000 session 0x5562f9fb10e0
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562f9a38400 session 0x5562fa38fa40
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130899968 unmapped: 24403968 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562f9a39800 session 0x5562fabd7680
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f70ae000/0x0/0x4ffc00000, data 0x44e99c1/0x45c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 129400832 unmapped: 25903104 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 129441792 unmapped: 25862144 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1620791 data_alloc: 234881024 data_used: 34242560
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130326528 unmapped: 24977408 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f77ef000/0x0/0x4ffc00000, data 0x3da992c/0x3e7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 131473408 unmapped: 23830528 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 131473408 unmapped: 23830528 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f77ef000/0x0/0x4ffc00000, data 0x3da992c/0x3e7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 131473408 unmapped: 23830528 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 131473408 unmapped: 23830528 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1640631 data_alloc: 251658240 data_used: 36900864
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f77ef000/0x0/0x4ffc00000, data 0x3da992c/0x3e7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 131473408 unmapped: 23830528 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f77ef000/0x0/0x4ffc00000, data 0x3da992c/0x3e7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f77ef000/0x0/0x4ffc00000, data 0x3da992c/0x3e7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 131473408 unmapped: 23830528 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562f9a38800 session 0x5562fa5874a0
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562f9a38c00 session 0x5562f87ee1e0
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.382255554s of 18.596866608s, submitted: 43
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 131489792 unmapped: 23814144 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562f72c1400 session 0x5562f773fc20
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130318336 unmapped: 24985600 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130318336 unmapped: 24985600 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f820a000/0x0/0x4ffc00000, data 0x339191c/0x3464000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1534721 data_alloc: 234881024 data_used: 34181120
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562f99e4c00 session 0x5562fa38ef00
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130318336 unmapped: 24985600 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562fd490400 session 0x5562fa38f4a0
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562f72c1400 session 0x5562fa38e000
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130195456 unmapped: 25108480 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f89d3000/0x0/0x4ffc00000, data 0x2bc891c/0x2c9b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130195456 unmapped: 25108480 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130195456 unmapped: 25108480 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130195456 unmapped: 25108480 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1454986 data_alloc: 234881024 data_used: 32808960
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130195456 unmapped: 25108480 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f89d4000/0x0/0x4ffc00000, data 0x2bc890c/0x2c9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f89d4000/0x0/0x4ffc00000, data 0x2bc890c/0x2c9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130195456 unmapped: 25108480 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130195456 unmapped: 25108480 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130195456 unmapped: 25108480 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f89d4000/0x0/0x4ffc00000, data 0x2bc890c/0x2c9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130195456 unmapped: 25108480 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1454986 data_alloc: 234881024 data_used: 32808960
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130195456 unmapped: 25108480 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f89d4000/0x0/0x4ffc00000, data 0x2bc890c/0x2c9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130195456 unmapped: 25108480 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130195456 unmapped: 25108480 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130195456 unmapped: 25108480 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130195456 unmapped: 25108480 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1454986 data_alloc: 234881024 data_used: 32808960
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130195456 unmapped: 25108480 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f89d4000/0x0/0x4ffc00000, data 0x2bc890c/0x2c9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130195456 unmapped: 25108480 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130195456 unmapped: 25108480 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f89d4000/0x0/0x4ffc00000, data 0x2bc890c/0x2c9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130195456 unmapped: 25108480 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130195456 unmapped: 25108480 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1454986 data_alloc: 234881024 data_used: 32808960
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 22.527589798s of 22.740203857s, submitted: 51
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 135094272 unmapped: 20209664 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134627328 unmapped: 20676608 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134709248 unmapped: 20594688 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7f75000/0x0/0x4ffc00000, data 0x362790c/0x36f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134709248 unmapped: 20594688 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134709248 unmapped: 20594688 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1541956 data_alloc: 234881024 data_used: 33357824
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134709248 unmapped: 20594688 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134709248 unmapped: 20594688 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7f6d000/0x0/0x4ffc00000, data 0x362f90c/0x3701000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134709248 unmapped: 20594688 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134709248 unmapped: 20594688 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134709248 unmapped: 20594688 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1542172 data_alloc: 234881024 data_used: 33357824
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134709248 unmapped: 20594688 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134709248 unmapped: 20594688 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7f6b000/0x0/0x4ffc00000, data 0x363190c/0x3703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134709248 unmapped: 20594688 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134709248 unmapped: 20594688 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134709248 unmapped: 20594688 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1542172 data_alloc: 234881024 data_used: 33357824
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134709248 unmapped: 20594688 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7f6b000/0x0/0x4ffc00000, data 0x363190c/0x3703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134709248 unmapped: 20594688 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134709248 unmapped: 20594688 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134709248 unmapped: 20594688 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134709248 unmapped: 20594688 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1542172 data_alloc: 234881024 data_used: 33357824
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134709248 unmapped: 20594688 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7f6b000/0x0/0x4ffc00000, data 0x363190c/0x3703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134709248 unmapped: 20594688 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134709248 unmapped: 20594688 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7f6b000/0x0/0x4ffc00000, data 0x363190c/0x3703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7f6b000/0x0/0x4ffc00000, data 0x363190c/0x3703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134709248 unmapped: 20594688 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7f6b000/0x0/0x4ffc00000, data 0x363190c/0x3703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 20586496 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7f6b000/0x0/0x4ffc00000, data 0x363190c/0x3703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1542172 data_alloc: 234881024 data_used: 33357824
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 20586496 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7f6b000/0x0/0x4ffc00000, data 0x363190c/0x3703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 20586496 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1542172 data_alloc: 234881024 data_used: 33357824
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 20586496 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7f6b000/0x0/0x4ffc00000, data 0x363190c/0x3703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 20586496 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7f6b000/0x0/0x4ffc00000, data 0x363190c/0x3703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 20586496 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1542172 data_alloc: 234881024 data_used: 33357824
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 20586496 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7f6b000/0x0/0x4ffc00000, data 0x363190c/0x3703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 37.419891357s of 37.711841583s, submitted: 78
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 20586496 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1540412 data_alloc: 234881024 data_used: 33357824
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7f6b000/0x0/0x4ffc00000, data 0x363190c/0x3703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 20586496 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7f6b000/0x0/0x4ffc00000, data 0x363190c/0x3703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 20586496 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1540412 data_alloc: 234881024 data_used: 33357824
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 20586496 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134725632 unmapped: 20578304 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7f6b000/0x0/0x4ffc00000, data 0x363190c/0x3703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134725632 unmapped: 20578304 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7f6b000/0x0/0x4ffc00000, data 0x363190c/0x3703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134725632 unmapped: 20578304 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1540412 data_alloc: 234881024 data_used: 33357824
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134725632 unmapped: 20578304 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7f6b000/0x0/0x4ffc00000, data 0x363190c/0x3703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134725632 unmapped: 20578304 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1540412 data_alloc: 234881024 data_used: 33357824
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134725632 unmapped: 20578304 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134733824 unmapped: 20570112 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7f6b000/0x0/0x4ffc00000, data 0x363190c/0x3703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134733824 unmapped: 20570112 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7f6b000/0x0/0x4ffc00000, data 0x363190c/0x3703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134733824 unmapped: 20570112 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1540412 data_alloc: 234881024 data_used: 33357824
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134733824 unmapped: 20570112 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134742016 unmapped: 20561920 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 25.938188553s of 25.944890976s, submitted: 1
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562f99e4c00 session 0x5562f9fae960
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 20512768 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7db3000/0x0/0x4ffc00000, data 0x37e990c/0x38bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 20512768 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1561132 data_alloc: 234881024 data_used: 33357824
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 20512768 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7db3000/0x0/0x4ffc00000, data 0x37e990c/0x38bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 20512768 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7db3000/0x0/0x4ffc00000, data 0x37e990c/0x38bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 20512768 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1561132 data_alloc: 234881024 data_used: 33357824
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 20512768 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7db3000/0x0/0x4ffc00000, data 0x37e990c/0x38bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134709248 unmapped: 20594688 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1565292 data_alloc: 234881024 data_used: 33914880
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134709248 unmapped: 20594688 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7db3000/0x0/0x4ffc00000, data 0x37e990c/0x38bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 20586496 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1565292 data_alloc: 234881024 data_used: 33914880
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 20586496 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7db3000/0x0/0x4ffc00000, data 0x37e990c/0x38bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 20586496 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1565292 data_alloc: 234881024 data_used: 33914880
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 20586496 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7db3000/0x0/0x4ffc00000, data 0x37e990c/0x38bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 20586496 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7db3000/0x0/0x4ffc00000, data 0x37e990c/0x38bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1565292 data_alloc: 234881024 data_used: 33914880
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134725632 unmapped: 20578304 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134733824 unmapped: 20570112 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1565292 data_alloc: 234881024 data_used: 33914880
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134733824 unmapped: 20570112 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7db3000/0x0/0x4ffc00000, data 0x37e990c/0x38bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134733824 unmapped: 20570112 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1565292 data_alloc: 234881024 data_used: 33914880
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134733824 unmapped: 20570112 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7db3000/0x0/0x4ffc00000, data 0x37e990c/0x38bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134733824 unmapped: 20570112 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1565292 data_alloc: 234881024 data_used: 33914880
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134733824 unmapped: 20570112 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 43.308761597s of 43.364856720s, submitted: 9
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 135225344 unmapped: 20078592 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7db3000/0x0/0x4ffc00000, data 0x37e990c/0x38bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 135733248 unmapped: 19570688 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 135888896 unmapped: 19415040 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 135643136 unmapped: 19660800 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7a8e000/0x0/0x4ffc00000, data 0x3b0690c/0x3bd8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1600034 data_alloc: 234881024 data_used: 33996800
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 135643136 unmapped: 19660800 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7a8c000/0x0/0x4ffc00000, data 0x3b0890c/0x3bda000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 135643136 unmapped: 19660800 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1600050 data_alloc: 234881024 data_used: 33996800
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7a8c000/0x0/0x4ffc00000, data 0x3b0890c/0x3bda000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 135643136 unmapped: 19660800 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1600050 data_alloc: 234881024 data_used: 33996800
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 135643136 unmapped: 19660800 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7a8c000/0x0/0x4ffc00000, data 0x3b0890c/0x3bda000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 135643136 unmapped: 19660800 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1600050 data_alloc: 234881024 data_used: 33996800
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 135643136 unmapped: 19660800 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7a8c000/0x0/0x4ffc00000, data 0x3b0890c/0x3bda000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 135643136 unmapped: 19660800 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7a8c000/0x0/0x4ffc00000, data 0x3b0890c/0x3bda000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 135643136 unmapped: 19660800 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1600050 data_alloc: 234881024 data_used: 33996800
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 135643136 unmapped: 19660800 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562fadb0800 session 0x5562f79efa40
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562fadb0c00 session 0x5562f88052c0
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562fadb1000 session 0x5562fa4c3680
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562fadb1000 session 0x5562fbef6960
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 24.942050934s of 25.233228683s, submitted: 51
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562f72c1400 session 0x5562f88e8d20
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 136216576 unmapped: 22765568 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562f99e4c00 session 0x5562fbef7e00
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562fadb0800 session 0x5562f88ec5a0
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562fadb0c00 session 0x5562facfef00
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562fadb0c00 session 0x5562fa3d8960
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 136224768 unmapped: 22757376 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7135000/0x0/0x4ffc00000, data 0x446597e/0x4539000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 136224768 unmapped: 22757376 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7135000/0x0/0x4ffc00000, data 0x446597e/0x4539000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 136224768 unmapped: 22757376 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1669625 data_alloc: 234881024 data_used: 33996800
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 136224768 unmapped: 22757376 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562f72c1400 session 0x5562f9faed20
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7134000/0x0/0x4ffc00000, data 0x44659a1/0x453a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 136224768 unmapped: 22757376 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 136232960 unmapped: 22749184 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1670578 data_alloc: 234881024 data_used: 34000896
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 136855552 unmapped: 22126592 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 140099584 unmapped: 18882560 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 143032320 unmapped: 15949824 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 143040512 unmapped: 15941632 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7132000/0x0/0x4ffc00000, data 0x44669a1/0x453b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 143040512 unmapped: 15941632 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1739858 data_alloc: 251658240 data_used: 43806720
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 143040512 unmapped: 15941632 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7132000/0x0/0x4ffc00000, data 0x44669a1/0x453b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 143048704 unmapped: 15933440 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.195671082s of 18.394033432s, submitted: 36
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 143220736 unmapped: 15761408 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1742098 data_alloc: 251658240 data_used: 43798528
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 143220736 unmapped: 15761408 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7133000/0x0/0x4ffc00000, data 0x44669a1/0x453b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 143253504 unmapped: 15728640 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1742098 data_alloc: 251658240 data_used: 43798528
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 143253504 unmapped: 15728640 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7133000/0x0/0x4ffc00000, data 0x44669a1/0x453b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 143253504 unmapped: 15728640 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 143261696 unmapped: 15720448 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7133000/0x0/0x4ffc00000, data 0x44669a1/0x453b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 143261696 unmapped: 15720448 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7133000/0x0/0x4ffc00000, data 0x44669a1/0x453b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 143261696 unmapped: 15720448 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1742098 data_alloc: 251658240 data_used: 43798528
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 143261696 unmapped: 15720448 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7133000/0x0/0x4ffc00000, data 0x44669a1/0x453b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 143261696 unmapped: 15720448 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1742098 data_alloc: 251658240 data_used: 43798528
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 143269888 unmapped: 15712256 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7133000/0x0/0x4ffc00000, data 0x44669a1/0x453b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 143269888 unmapped: 15712256 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1742098 data_alloc: 251658240 data_used: 43798528
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 143269888 unmapped: 15712256 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 143278080 unmapped: 15704064 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7133000/0x0/0x4ffc00000, data 0x44669a1/0x453b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 143278080 unmapped: 15704064 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1742578 data_alloc: 251658240 data_used: 43810816
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 25.829374313s of 25.854768753s, submitted: 13
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 144539648 unmapped: 14442496 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f6e8f000/0x0/0x4ffc00000, data 0x470a9a1/0x47df000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 144596992 unmapped: 14385152 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 144850944 unmapped: 14131200 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 144941056 unmapped: 14041088 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 144949248 unmapped: 14032896 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1776448 data_alloc: 251658240 data_used: 44314624
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f6e7e000/0x0/0x4ffc00000, data 0x471b9a1/0x47f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 144949248 unmapped: 14032896 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:16:38 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1776448 data_alloc: 251658240 data_used: 44314624
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f6e7e000/0x0/0x4ffc00000, data 0x471b9a1/0x47f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 144949248 unmapped: 14032896 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.517509460s of 10.725372314s, submitted: 41
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562f99e4c00 session 0x5562fbef72c0
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f6e7e000/0x0/0x4ffc00000, data 0x471b9a1/0x47f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 139083776 unmapped: 19898368 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562fadb0800 session 0x5562f7d465a0
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 139083776 unmapped: 19898368 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:16:38 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7a92000/0x0/0x4ffc00000, data 0x3b0990c/0x3bdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:16:38 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 139083776 unmapped: 19898368 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:21:07 compute-0 rsyslogd[188590]: imjournal: 16545 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Dec  3 19:21:07 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:21:07 compute-0 podman[484526]: 2025-12-03 19:21:07.678435812 +0000 UTC m=+0.085330408 container create 78146431f281fbbdb330cc1fbab34f56f411614d203be2529f54a0d8b83eda73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_satoshi, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 19:21:07 compute-0 podman[484526]: 2025-12-03 19:21:07.646019171 +0000 UTC m=+0.052913797 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:21:07 compute-0 systemd[1]: Started libpod-conmon-78146431f281fbbdb330cc1fbab34f56f411614d203be2529f54a0d8b83eda73.scope.
Dec  3 19:21:07 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:21:07 compute-0 podman[484526]: 2025-12-03 19:21:07.857423328 +0000 UTC m=+0.264317964 container init 78146431f281fbbdb330cc1fbab34f56f411614d203be2529f54a0d8b83eda73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_satoshi, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec  3 19:21:07 compute-0 podman[484526]: 2025-12-03 19:21:07.876332543 +0000 UTC m=+0.283227129 container start 78146431f281fbbdb330cc1fbab34f56f411614d203be2529f54a0d8b83eda73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_satoshi, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Dec  3 19:21:07 compute-0 podman[484526]: 2025-12-03 19:21:07.88327426 +0000 UTC m=+0.290168846 container attach 78146431f281fbbdb330cc1fbab34f56f411614d203be2529f54a0d8b83eda73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_satoshi, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:21:07 compute-0 competent_satoshi[484543]: 167 167
Dec  3 19:21:07 compute-0 systemd[1]: libpod-78146431f281fbbdb330cc1fbab34f56f411614d203be2529f54a0d8b83eda73.scope: Deactivated successfully.
Dec  3 19:21:07 compute-0 podman[484526]: 2025-12-03 19:21:07.889607794 +0000 UTC m=+0.296502430 container died 78146431f281fbbdb330cc1fbab34f56f411614d203be2529f54a0d8b83eda73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_satoshi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507)
Dec  3 19:21:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-795ae0619c88241ddc98484a1e852595269acd5621231b5f70f7e00a0411ed5d-merged.mount: Deactivated successfully.
Dec  3 19:21:07 compute-0 podman[484526]: 2025-12-03 19:21:07.961631579 +0000 UTC m=+0.368526135 container remove 78146431f281fbbdb330cc1fbab34f56f411614d203be2529f54a0d8b83eda73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_satoshi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True)
Dec  3 19:21:07 compute-0 systemd[1]: libpod-conmon-78146431f281fbbdb330cc1fbab34f56f411614d203be2529f54a0d8b83eda73.scope: Deactivated successfully.
Dec  3 19:21:08 compute-0 podman[484566]: 2025-12-03 19:21:08.259925041 +0000 UTC m=+0.090939084 container create e76e05d592d94f1ad085cd8ee3fef811c5965f0faac21103e57ac9fd6c7251d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_boyd, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:21:08 compute-0 podman[484566]: 2025-12-03 19:21:08.217876997 +0000 UTC m=+0.048891100 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:21:08 compute-0 systemd[1]: Started libpod-conmon-e76e05d592d94f1ad085cd8ee3fef811c5965f0faac21103e57ac9fd6c7251d8.scope.
Dec  3 19:21:08 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:21:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2266d32a1d8b87a2ff73979dde1ac04795898f1e91e1bd949a16282c0001a5d2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 19:21:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2266d32a1d8b87a2ff73979dde1ac04795898f1e91e1bd949a16282c0001a5d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 19:21:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2266d32a1d8b87a2ff73979dde1ac04795898f1e91e1bd949a16282c0001a5d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 19:21:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2266d32a1d8b87a2ff73979dde1ac04795898f1e91e1bd949a16282c0001a5d2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 19:21:08 compute-0 podman[484566]: 2025-12-03 19:21:08.45854807 +0000 UTC m=+0.289562133 container init e76e05d592d94f1ad085cd8ee3fef811c5965f0faac21103e57ac9fd6c7251d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_boyd, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 19:21:08 compute-0 podman[484566]: 2025-12-03 19:21:08.471226005 +0000 UTC m=+0.302240018 container start e76e05d592d94f1ad085cd8ee3fef811c5965f0faac21103e57ac9fd6c7251d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:21:08 compute-0 podman[484566]: 2025-12-03 19:21:08.476675797 +0000 UTC m=+0.307689820 container attach e76e05d592d94f1ad085cd8ee3fef811c5965f0faac21103e57ac9fd6c7251d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  3 19:21:08 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2472: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:21:08 compute-0 nova_compute[348325]: 2025-12-03 19:21:08.831 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:21:09 compute-0 nova_compute[348325]: 2025-12-03 19:21:09.195 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:21:09 compute-0 friendly_boyd[484582]: {
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:    "0": [
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:        {
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:            "devices": [
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:                "/dev/loop3"
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:            ],
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:            "lv_name": "ceph_lv0",
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:            "lv_size": "21470642176",
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=973fbbc8-5aff-4a53-bee8-42e5a6788dd6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:            "lv_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:            "name": "ceph_lv0",
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:            "tags": {
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:                "ceph.block_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:                "ceph.cephx_lockbox_secret": "",
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:                "ceph.cluster_name": "ceph",
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:                "ceph.crush_device_class": "",
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:                "ceph.encrypted": "0",
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:                "ceph.osd_fsid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:                "ceph.osd_id": "0",
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:                "ceph.type": "block",
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:                "ceph.vdo": "0"
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:            },
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:            "type": "block",
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:            "vg_name": "ceph_vg0"
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:        }
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:    ],
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:    "1": [
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:        {
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:            "devices": [
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:                "/dev/loop4"
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:            ],
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:            "lv_name": "ceph_lv1",
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:            "lv_size": "21470642176",
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1e2b0083-5293-47cb-a3d1-bc27cedc4ede,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:            "lv_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:            "name": "ceph_lv1",
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:            "tags": {
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:                "ceph.block_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:                "ceph.cephx_lockbox_secret": "",
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:                "ceph.cluster_name": "ceph",
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:                "ceph.crush_device_class": "",
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:                "ceph.encrypted": "0",
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:                "ceph.osd_fsid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:                "ceph.osd_id": "1",
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:                "ceph.type": "block",
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:                "ceph.vdo": "0"
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:            },
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:            "type": "block",
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:            "vg_name": "ceph_vg1"
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:        }
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:    ],
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:    "2": [
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:        {
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:            "devices": [
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:                "/dev/loop5"
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:            ],
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:            "lv_name": "ceph_lv2",
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:            "lv_size": "21470642176",
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2abec9de-afba-437e-9a17-384a1dd8cd50,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:            "lv_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:            "name": "ceph_lv2",
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:            "tags": {
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:                "ceph.block_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:                "ceph.cephx_lockbox_secret": "",
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:                "ceph.cluster_name": "ceph",
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:                "ceph.crush_device_class": "",
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:                "ceph.encrypted": "0",
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:                "ceph.osd_fsid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:                "ceph.osd_id": "2",
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:                "ceph.type": "block",
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:                "ceph.vdo": "0"
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:            },
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:            "type": "block",
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:            "vg_name": "ceph_vg2"
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:        }
Dec  3 19:21:09 compute-0 friendly_boyd[484582]:    ]
Dec  3 19:21:09 compute-0 friendly_boyd[484582]: }
Dec  3 19:21:09 compute-0 systemd[1]: libpod-e76e05d592d94f1ad085cd8ee3fef811c5965f0faac21103e57ac9fd6c7251d8.scope: Deactivated successfully.
Dec  3 19:21:09 compute-0 podman[484566]: 2025-12-03 19:21:09.369817239 +0000 UTC m=+1.200831242 container died e76e05d592d94f1ad085cd8ee3fef811c5965f0faac21103e57ac9fd6c7251d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  3 19:21:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-2266d32a1d8b87a2ff73979dde1ac04795898f1e91e1bd949a16282c0001a5d2-merged.mount: Deactivated successfully.
Dec  3 19:21:09 compute-0 podman[484566]: 2025-12-03 19:21:09.95095915 +0000 UTC m=+1.781973193 container remove e76e05d592d94f1ad085cd8ee3fef811c5965f0faac21103e57ac9fd6c7251d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_boyd, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 19:21:09 compute-0 systemd[1]: libpod-conmon-e76e05d592d94f1ad085cd8ee3fef811c5965f0faac21103e57ac9fd6c7251d8.scope: Deactivated successfully.
Dec  3 19:21:10 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2473: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:21:11 compute-0 podman[484745]: 2025-12-03 19:21:11.215834625 +0000 UTC m=+0.127833783 container create d9b29071768d68658fbeb737e3df5cb80c3555fe085f48d3e22eab398ae24bd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_mclean, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 19:21:11 compute-0 podman[484745]: 2025-12-03 19:21:11.148994143 +0000 UTC m=+0.060993321 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:21:11 compute-0 systemd[1]: Started libpod-conmon-d9b29071768d68658fbeb737e3df5cb80c3555fe085f48d3e22eab398ae24bd6.scope.
Dec  3 19:21:11 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:21:11 compute-0 podman[484745]: 2025-12-03 19:21:11.387020332 +0000 UTC m=+0.299019500 container init d9b29071768d68658fbeb737e3df5cb80c3555fe085f48d3e22eab398ae24bd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_mclean, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  3 19:21:11 compute-0 podman[484745]: 2025-12-03 19:21:11.397813131 +0000 UTC m=+0.309812269 container start d9b29071768d68658fbeb737e3df5cb80c3555fe085f48d3e22eab398ae24bd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec  3 19:21:11 compute-0 suspicious_mclean[484760]: 167 167
Dec  3 19:21:11 compute-0 systemd[1]: libpod-d9b29071768d68658fbeb737e3df5cb80c3555fe085f48d3e22eab398ae24bd6.scope: Deactivated successfully.
Dec  3 19:21:11 compute-0 podman[484745]: 2025-12-03 19:21:11.427748654 +0000 UTC m=+0.339747792 container attach d9b29071768d68658fbeb737e3df5cb80c3555fe085f48d3e22eab398ae24bd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 19:21:11 compute-0 podman[484745]: 2025-12-03 19:21:11.428280276 +0000 UTC m=+0.340279414 container died d9b29071768d68658fbeb737e3df5cb80c3555fe085f48d3e22eab398ae24bd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_mclean, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:21:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc5e37beff2e3f896ed4176e095a6370a5f344b7c8478a43374c8c50ae4ec901-merged.mount: Deactivated successfully.
Dec  3 19:21:12 compute-0 podman[484745]: 2025-12-03 19:21:12.0713348 +0000 UTC m=+0.983333958 container remove d9b29071768d68658fbeb737e3df5cb80c3555fe085f48d3e22eab398ae24bd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_mclean, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 19:21:12 compute-0 systemd[1]: libpod-conmon-d9b29071768d68658fbeb737e3df5cb80c3555fe085f48d3e22eab398ae24bd6.scope: Deactivated successfully.
Dec  3 19:21:12 compute-0 podman[484777]: 2025-12-03 19:21:12.150606241 +0000 UTC m=+0.305431685 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  3 19:21:12 compute-0 podman[484778]: 2025-12-03 19:21:12.156183585 +0000 UTC m=+0.309331989 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true)
Dec  3 19:21:12 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:21:12 compute-0 podman[484829]: 2025-12-03 19:21:12.315925876 +0000 UTC m=+0.053716185 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:21:12 compute-0 podman[484829]: 2025-12-03 19:21:12.536769181 +0000 UTC m=+0.274559480 container create debd904998608fb75529a49d98ba8dbb5c750d8ed0773c12fbe209af667b9a2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_wing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:21:12 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2474: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:21:12 compute-0 systemd[1]: Started libpod-conmon-debd904998608fb75529a49d98ba8dbb5c750d8ed0773c12fbe209af667b9a2a.scope.
Dec  3 19:21:12 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:21:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c861d2c7b1fcd11bee68c5c53773e5b22357b85ece49e233d278db4d601629f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 19:21:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c861d2c7b1fcd11bee68c5c53773e5b22357b85ece49e233d278db4d601629f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 19:21:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c861d2c7b1fcd11bee68c5c53773e5b22357b85ece49e233d278db4d601629f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 19:21:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c861d2c7b1fcd11bee68c5c53773e5b22357b85ece49e233d278db4d601629f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 19:21:12 compute-0 podman[484829]: 2025-12-03 19:21:12.810211413 +0000 UTC m=+0.548001692 container init debd904998608fb75529a49d98ba8dbb5c750d8ed0773c12fbe209af667b9a2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 19:21:12 compute-0 podman[484829]: 2025-12-03 19:21:12.829129059 +0000 UTC m=+0.566919358 container start debd904998608fb75529a49d98ba8dbb5c750d8ed0773c12fbe209af667b9a2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_wing, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  3 19:21:12 compute-0 podman[484829]: 2025-12-03 19:21:12.971917512 +0000 UTC m=+0.709707811 container attach debd904998608fb75529a49d98ba8dbb5c750d8ed0773c12fbe209af667b9a2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_wing, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.264 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.265 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.265 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.267 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7eff8d7fffe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.267 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.268 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff9026f920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.269 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.270 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.270 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffa10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.270 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8daba2d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.270 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.270 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff90799b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.271 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.271 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8f46ebd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.271 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.271 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.272 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.272 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.272 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.272 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.272 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.273 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.273 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.273 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.273 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.273 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7fff50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.274 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.274 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7fffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.274 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8ef7c7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.278 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.278 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7eff8d8a80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.278 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.279 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7eff8d8a8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.279 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.279 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7eff8d8a8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.279 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.279 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7eff8d8a81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.280 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.280 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7eff8d7ff9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.280 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.281 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7eff8d7fe840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.281 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.281 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7eff8d8a82c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.281 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.282 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7eff8d7ff9b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.282 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.282 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7eff8d8a8350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.282 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.282 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7eff8f682330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.283 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.283 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7eff8d7ff4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.283 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.287 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7eff8d930c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.288 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.288 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7eff8d7ff4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.289 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.289 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7eff8d7ff530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.290 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.290 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7eff8d7ff590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.291 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.291 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7eff8d7ff5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.292 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.293 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7eff8d8a8620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.293 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.294 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7eff8d7ff650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.294 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.295 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7eff8d7ff6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.295 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.296 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7eff8d7ffa40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.297 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.297 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7eff8d7ff710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.298 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.298 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7eff8d7fff20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.299 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.300 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7eff8d7ff770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.300 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.300 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7eff8d7fff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.300 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.301 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7eff8d7fdac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.301 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.301 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.302 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.302 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.302 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.302 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.302 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.302 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.302 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.303 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.303 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.303 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.303 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.303 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.303 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.303 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.304 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.304 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.304 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.304 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.304 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.304 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.304 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.305 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.305 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.305 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:21:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:21:13.305 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
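Annotation: the cycle above repeats one pattern per meter: run the [local_instances] discovery, skip the pollster when discovery returns no resources (this host is running no instances, so every meter is skipped), then log completion for each pollster regardless. A sketch of that control flow, with discover() and get_samples() as hypothetical stand-ins for the agent's real methods:

    # Sketch of the discover -> poll-or-skip -> finish cycle traced above.
    def run_pollster(name, pollster, discover):
        resources = discover("local_instances")  # "Executing discovery process ..."
        if not resources:
            print(f"Skip pollster {name}, no resources found this cycle")
        else:
            pollster.get_samples(resources)
        # Logged even for skipped meters, as in the run of lines above.
        print(f"Finished processing pollster [{name}].")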
Dec  3 19:21:13 compute-0 nova_compute[348325]: 2025-12-03 19:21:13.836 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:21:13 compute-0 eloquent_wing[484846]: {
Dec  3 19:21:13 compute-0 eloquent_wing[484846]:    "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
Dec  3 19:21:13 compute-0 eloquent_wing[484846]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:21:13 compute-0 eloquent_wing[484846]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 19:21:13 compute-0 eloquent_wing[484846]:        "osd_id": 1,
Dec  3 19:21:13 compute-0 eloquent_wing[484846]:        "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 19:21:13 compute-0 eloquent_wing[484846]:        "type": "bluestore"
Dec  3 19:21:13 compute-0 eloquent_wing[484846]:    },
Dec  3 19:21:13 compute-0 eloquent_wing[484846]:    "2abec9de-afba-437e-9a17-384a1dd8cd50": {
Dec  3 19:21:13 compute-0 eloquent_wing[484846]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:21:13 compute-0 eloquent_wing[484846]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 19:21:13 compute-0 eloquent_wing[484846]:        "osd_id": 2,
Dec  3 19:21:13 compute-0 eloquent_wing[484846]:        "osd_uuid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 19:21:13 compute-0 eloquent_wing[484846]:        "type": "bluestore"
Dec  3 19:21:13 compute-0 eloquent_wing[484846]:    },
Dec  3 19:21:13 compute-0 eloquent_wing[484846]:    "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": {
Dec  3 19:21:13 compute-0 eloquent_wing[484846]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:21:13 compute-0 eloquent_wing[484846]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 19:21:13 compute-0 eloquent_wing[484846]:        "osd_id": 0,
Dec  3 19:21:13 compute-0 eloquent_wing[484846]:        "osd_uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 19:21:13 compute-0 eloquent_wing[484846]:        "type": "bluestore"
Dec  3 19:21:13 compute-0 eloquent_wing[484846]:    }
Dec  3 19:21:13 compute-0 eloquent_wing[484846]: }
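Annotation: the JSON above, emitted by the short-lived eloquent_wing Ceph container, maps each OSD UUID to its BlueStore device; the shape matches a ceph-volume lvm list --format json style inventory. All three OSDs share ceph_fsid c1caf3ba-b2a5-5005-a11e-e955c344dccc, i.e. they belong to the same cluster. Reduced to an osd_id -> device table:

    # The OSD inventory logged above, keyed by osd_uuid.
    osds = {
        "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": {"osd_id": 0, "device": "/dev/mapper/ceph_vg0-ceph_lv0"},
        "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {"osd_id": 1, "device": "/dev/mapper/ceph_vg1-ceph_lv1"},
        "2abec9de-afba-437e-9a17-384a1dd8cd50": {"osd_id": 2, "device": "/dev/mapper/ceph_vg2-ceph_lv2"},
    }
    for uuid, info in sorted(osds.items(), key=lambda kv: kv[1]["osd_id"]):
        print(f"osd.{info['osd_id']} -> {info['device']} ({uuid})")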
Dec  3 19:21:13 compute-0 systemd[1]: libpod-debd904998608fb75529a49d98ba8dbb5c750d8ed0773c12fbe209af667b9a2a.scope: Deactivated successfully.
Dec  3 19:21:13 compute-0 podman[484829]: 2025-12-03 19:21:13.881641233 +0000 UTC m=+1.619431502 container died debd904998608fb75529a49d98ba8dbb5c750d8ed0773c12fbe209af667b9a2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_wing, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Dec  3 19:21:13 compute-0 systemd[1]: libpod-debd904998608fb75529a49d98ba8dbb5c750d8ed0773c12fbe209af667b9a2a.scope: Consumed 1.059s CPU time.
Dec  3 19:21:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c861d2c7b1fcd11bee68c5c53773e5b22357b85ece49e233d278db4d601629f-merged.mount: Deactivated successfully.
Dec  3 19:21:13 compute-0 podman[484829]: 2025-12-03 19:21:13.987747112 +0000 UTC m=+1.725537411 container remove debd904998608fb75529a49d98ba8dbb5c750d8ed0773c12fbe209af667b9a2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
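Annotation: the lines above are a normal one-shot container teardown: the libpod scope deactivates, podman records a "died" event, the overlay mount is released, and the container is removed. The same lifecycle can be followed live from the events stream; a sketch assuming the podman CLI is on PATH (event field names can vary slightly between podman versions):

    # Follow container died/remove events, as seen for eloquent_wing above.
    import json, subprocess

    proc = subprocess.Popen(
        ["podman", "events", "--format", "json",
         "--filter", "event=died", "--filter", "event=remove"],
        stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        ev = json.loads(line)
        print(ev.get("Status"), ev.get("Name"), str(ev.get("ID", ""))[:12])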
Dec  3 19:21:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:21:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:21:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:21:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:21:14 compute-0 systemd[1]: libpod-conmon-debd904998608fb75529a49d98ba8dbb5c750d8ed0773c12fbe209af667b9a2a.scope: Deactivated successfully.
Dec  3 19:21:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:21:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:21:14 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 19:21:14 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:21:14 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 19:21:14 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:21:14 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev a89bb6b0-13d5-4161-b05e-6b8f2efc5f92 does not exist
Dec  3 19:21:14 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 435a8570-ff8f-488c-8c47-6afc5aa59721 does not exist
Dec  3 19:21:14 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #120. Immutable memtables: 0.
Dec  3 19:21:14 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:21:14.083109) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 19:21:14 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 71] Flushing memtable with next log file: 120
Dec  3 19:21:14 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764789674083207, "job": 71, "event": "flush_started", "num_memtables": 1, "num_entries": 1315, "num_deletes": 251, "total_data_size": 2049421, "memory_usage": 2077680, "flush_reason": "Manual Compaction"}
Dec  3 19:21:14 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 71] Level-0 flush table #121: started
Dec  3 19:21:14 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_19:21:14
Dec  3 19:21:14 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 19:21:14 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 19:21:14 compute-0 ceph-mgr[193091]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', 'vms', '.mgr', 'volumes', 'images', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.control', 'default.rgw.log', 'default.rgw.meta']
Dec  3 19:21:14 compute-0 ceph-mgr[193091]: [balancer INFO root] prepared 0/10 changes
Dec  3 19:21:14 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764789674107347, "cf_name": "default", "job": 71, "event": "table_file_creation", "file_number": 121, "file_size": 2018901, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 49734, "largest_seqno": 51048, "table_properties": {"data_size": 2012634, "index_size": 3534, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 12994, "raw_average_key_size": 19, "raw_value_size": 2000130, "raw_average_value_size": 3053, "num_data_blocks": 159, "num_entries": 655, "num_filter_entries": 655, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764789537, "oldest_key_time": 1764789537, "file_creation_time": 1764789674, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a1ac3b74-8599-4a51-8b4c-6fd35a134427", "db_session_id": "TYOLZSJOOVNJYKF8Y1CE", "orig_file_number": 121, "seqno_to_time_mapping": "N/A"}}
Dec  3 19:21:14 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 71] Flush lasted 24437 microseconds, and 10990 cpu microseconds.
Dec  3 19:21:14 compute-0 ceph-mon[192802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 19:21:14 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:21:14.107538) [db/flush_job.cc:967] [default] [JOB 71] Level-0 flush table #121: 2018901 bytes OK
Dec  3 19:21:14 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:21:14.107567) [db/memtable_list.cc:519] [default] Level-0 commit table #121 started
Dec  3 19:21:14 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:21:14.110615) [db/memtable_list.cc:722] [default] Level-0 commit table #121: memtable #1 done
Dec  3 19:21:14 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:21:14.110654) EVENT_LOG_v1 {"time_micros": 1764789674110644, "job": 71, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 19:21:14 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:21:14.110679) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 19:21:14 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 71] Try to delete WAL files size 2043541, prev total WAL file size 2043541, number of live WAL files 2.
Dec  3 19:21:14 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000117.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 19:21:14 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:21:14.112043) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034373639' seq:72057594037927935, type:22 .. '7061786F730035303231' seq:0, type:0; will stop at (end)
Dec  3 19:21:14 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 72] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 19:21:14 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 71 Base level 0, inputs: [121(1971KB)], [119(7204KB)]
Dec  3 19:21:14 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764789674112151, "job": 72, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [121], "files_L6": [119], "score": -1, "input_data_size": 9396144, "oldest_snapshot_seqno": -1}
Dec  3 19:21:14 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 72] Generated table #122: 6545 keys, 7653775 bytes, temperature: kUnknown
Dec  3 19:21:14 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764789674182670, "cf_name": "default", "job": 72, "event": "table_file_creation", "file_number": 122, "file_size": 7653775, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7614211, "index_size": 22108, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16389, "raw_key_size": 171077, "raw_average_key_size": 26, "raw_value_size": 7499982, "raw_average_value_size": 1145, "num_data_blocks": 872, "num_entries": 6545, "num_filter_entries": 6545, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764784942, "oldest_key_time": 0, "file_creation_time": 1764789674, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a1ac3b74-8599-4a51-8b4c-6fd35a134427", "db_session_id": "TYOLZSJOOVNJYKF8Y1CE", "orig_file_number": 122, "seqno_to_time_mapping": "N/A"}}
Dec  3 19:21:14 compute-0 ceph-mon[192802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 19:21:14 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:21:14.182888) [db/compaction/compaction_job.cc:1663] [default] [JOB 72] Compacted 1@0 + 1@6 files to L6 => 7653775 bytes
Dec  3 19:21:14 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:21:14.184922) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 133.1 rd, 108.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 7.0 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(8.4) write-amplify(3.8) OK, records in: 7059, records dropped: 514 output_compression: NoCompression
Dec  3 19:21:14 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:21:14.184936) EVENT_LOG_v1 {"time_micros": 1764789674184929, "job": 72, "event": "compaction_finished", "compaction_time_micros": 70575, "compaction_time_cpu_micros": 39011, "output_level": 6, "num_output_files": 1, "total_output_size": 7653775, "num_input_records": 7059, "num_output_records": 6545, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 19:21:14 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000121.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 19:21:14 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764789674185393, "job": 72, "event": "table_file_deletion", "file_number": 121}
Dec  3 19:21:14 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000119.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 19:21:14 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764789674186673, "job": 72, "event": "table_file_deletion", "file_number": 119}
Dec  3 19:21:14 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:21:14.111747) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:21:14 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:21:14.186942) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:21:14 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:21:14.186950) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:21:14 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:21:14.186953) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:21:14 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:21:14.186956) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:21:14 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:21:14.186959) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
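Annotation: the flush/compaction pair above is internally consistent: JOB 71 flushes a 2,018,901-byte L0 table (#121), and JOB 72 compacts it with the 7204 KB L6 table (#119) into a new 7,653,775-byte L6 table (#122). The amplification figures in the compaction summary follow directly from those byte counts:

    # Reproduce rocksdb's amplification numbers from the values logged above.
    l0_in = 2018901        # table #121: flush output, compaction input
    l6_in = 7204 * 1024    # table #119, logged as 7204KB
    out   = 7653775        # table #122: compaction output

    write_amp      = out / l0_in                    # ~3.8 -> "write-amplify(3.8)"
    read_write_amp = (l0_in + l6_in + out) / l0_in  # ~8.4 -> "read-write-amplify(8.4)"
    print(round(write_amp, 1), round(read_write_amp, 1))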
Dec  3 19:21:14 compute-0 nova_compute[348325]: 2025-12-03 19:21:14.199 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:21:14 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2475: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:21:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 19:21:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 19:21:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 19:21:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 19:21:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 19:21:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 19:21:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 19:21:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 19:21:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 19:21:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 19:21:15 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:21:15 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:21:16 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2476: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:21:17 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:21:18 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2477: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:21:18 compute-0 nova_compute[348325]: 2025-12-03 19:21:18.840 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:21:19 compute-0 nova_compute[348325]: 2025-12-03 19:21:19.201 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:21:20 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2478: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:21:22 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:21:22 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2479: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:21:22 compute-0 podman[484944]: 2025-12-03 19:21:22.98154084 +0000 UTC m=+0.136341458 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 19:21:22 compute-0 podman[484946]: 2025-12-03 19:21:22.982154665 +0000 UTC m=+0.130067637 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, container_name=openstack_network_exporter, distribution-scope=public, release=1755695350, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, io.openshift.expose-services=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, build-date=2025-08-20T13:12:41)
Dec  3 19:21:22 compute-0 podman[484945]: 2025-12-03 19:21:22.994571744 +0000 UTC m=+0.147063046 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
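Annotation: the three health_status lines above are podman's periodic healthchecks for multipathd, openstack_network_exporter, and node_exporter; each runs the container's configured healthcheck test against the mounted /openstack healthcheck script and records the result (here all healthy, failing streak 0). The last status can be read back from container state; a sketch (the JSON key path for health data has moved between podman versions, so both spellings are tried):

    # Read back a container's last recorded health status.
    import json, subprocess

    def health_status(name):
        out = subprocess.check_output(["podman", "inspect", name], text=True)
        state = json.loads(out)[0].get("State", {})
        health = state.get("Health") or state.get("Healthcheck") or {}
        return health.get("Status", "unknown")

    for name in ("multipathd", "openstack_network_exporter", "node_exporter"):
        print(name, health_status(name))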
Dec  3 19:21:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:21:23.383 286999 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 19:21:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:21:23.384 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 19:21:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:21:23.384 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
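Annotation: the three lockutils lines above are one acquire/wait/release round of the metadata agent's _check_child_processes periodic task; oslo.concurrency names the lock after the decorated method and logs the wait and hold times at DEBUG. The application-side pattern looks like this (a sketch, not neutron's exact source):

    # oslo.concurrency lock usage matching the log pattern above.
    from oslo_concurrency import lockutils

    class ProcessMonitor:
        @lockutils.synchronized("_check_child_processes")
        def _check_child_processes(self):
            # Runs under the named lock; lockutils emits the
            # Acquiring/acquired/released DEBUG lines seen above.
            pass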
Dec  3 19:21:23 compute-0 nova_compute[348325]: 2025-12-03 19:21:23.846 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:21:24 compute-0 nova_compute[348325]: 2025-12-03 19:21:24.204 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:21:24 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2480: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:21:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 19:21:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:21:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 19:21:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:21:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  3 19:21:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:21:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:21:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:21:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:21:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:21:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec  3 19:21:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:21:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 19:21:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:21:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:21:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:21:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 19:21:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:21:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 19:21:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:21:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:21:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:21:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 19:21:26 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2481: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:21:27 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:21:28 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2482: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:21:28 compute-0 nova_compute[348325]: 2025-12-03 19:21:28.850 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:21:29 compute-0 nova_compute[348325]: 2025-12-03 19:21:29.208 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:21:29 compute-0 podman[158200]: time="2025-12-03T19:21:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 19:21:29 compute-0 podman[158200]: @ - - [03/Dec/2025:19:21:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42578 "" "Go-http-client/1.1"
Dec  3 19:21:29 compute-0 podman[158200]: @ - - [03/Dec/2025:19:21:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8201 "" "Go-http-client/1.1"
Dec  3 19:21:29 compute-0 podman[485008]: 2025-12-03 19:21:29.951769642 +0000 UTC m=+0.108842085 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., release-0.7.12=, vcs-type=git, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.expose-services=, io.buildah.version=1.29.0, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, architecture=x86_64, com.redhat.component=ubi9-container, distribution-scope=public, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, container_name=kepler, release=1214.1726694543, config_id=edpm, vendor=Red Hat, Inc.)
Dec  3 19:21:29 compute-0 podman[485009]: 2025-12-03 19:21:29.970397391 +0000 UTC m=+0.122517514 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  3 19:21:29 compute-0 podman[485010]: 2025-12-03 19:21:29.973052515 +0000 UTC m=+0.119746287 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent)
Dec  3 19:21:30 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2483: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:21:31 compute-0 openstack_network_exporter[365222]: ERROR   19:21:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:21:31 compute-0 openstack_network_exporter[365222]: ERROR   19:21:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:21:31 compute-0 openstack_network_exporter[365222]: ERROR   19:21:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 19:21:31 compute-0 openstack_network_exporter[365222]: ERROR   19:21:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 19:21:31 compute-0 openstack_network_exporter[365222]: ERROR   19:21:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 19:21:32 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:21:32 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2484: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:21:33 compute-0 nova_compute[348325]: 2025-12-03 19:21:33.854 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:21:34 compute-0 nova_compute[348325]: 2025-12-03 19:21:34.211 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:21:34 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2485: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:21:36 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2486: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:21:36 compute-0 podman[485066]: 2025-12-03 19:21:36.982601331 +0000 UTC m=+0.136500162 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  3 19:21:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:21:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 19:21:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1831023876' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 19:21:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 19:21:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1831023876' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 19:21:38 compute-0 nova_compute[348325]: 2025-12-03 19:21:38.218 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:21:38 compute-0 nova_compute[348325]: 2025-12-03 19:21:38.219 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:21:38 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2487: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:21:38 compute-0 nova_compute[348325]: 2025-12-03 19:21:38.859 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:21:39 compute-0 nova_compute[348325]: 2025-12-03 19:21:39.214 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:21:40 compute-0 nova_compute[348325]: 2025-12-03 19:21:40.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:21:40 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2488: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:21:41 compute-0 nova_compute[348325]: 2025-12-03 19:21:41.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:21:41 compute-0 nova_compute[348325]: 2025-12-03 19:21:41.487 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  3 19:21:41 compute-0 nova_compute[348325]: 2025-12-03 19:21:41.487 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  3 19:21:42 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:21:42 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2489: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:21:42 compute-0 podman[485092]: 2025-12-03 19:21:42.970333757 +0000 UTC m=+0.118621741 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  3 19:21:42 compute-0 podman[485091]: 2025-12-03 19:21:42.992840069 +0000 UTC m=+0.149531456 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  3 19:21:43 compute-0 nova_compute[348325]: 2025-12-03 19:21:43.652 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  3 19:21:43 compute-0 nova_compute[348325]: 2025-12-03 19:21:43.652 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:21:43 compute-0 nova_compute[348325]: 2025-12-03 19:21:43.652 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:21:43 compute-0 nova_compute[348325]: 2025-12-03 19:21:43.861 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:21:44 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:21:44 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:21:44 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:21:44 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:21:44 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:21:44 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:21:44 compute-0 nova_compute[348325]: 2025-12-03 19:21:44.218 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:21:44 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2490: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:21:45 compute-0 nova_compute[348325]: 2025-12-03 19:21:45.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:21:46 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2491: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:21:47 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:21:48 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2492: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:21:48 compute-0 nova_compute[348325]: 2025-12-03 19:21:48.866 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:21:49 compute-0 nova_compute[348325]: 2025-12-03 19:21:49.219 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:21:50 compute-0 nova_compute[348325]: 2025-12-03 19:21:50.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:21:50 compute-0 nova_compute[348325]: 2025-12-03 19:21:50.487 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  3 19:21:50 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2493: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:21:52 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:21:52 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2494: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:21:53 compute-0 nova_compute[348325]: 2025-12-03 19:21:53.869 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:21:53 compute-0 podman[485135]: 2025-12-03 19:21:53.973327627 +0000 UTC m=+0.123896229 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  3 19:21:53 compute-0 podman[485134]: 2025-12-03 19:21:53.973780987 +0000 UTC m=+0.126624154 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  3 19:21:53 compute-0 podman[485136]: 2025-12-03 19:21:53.980078738 +0000 UTC m=+0.132323890 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, distribution-scope=public, version=9.6, architecture=x86_64, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, container_name=openstack_network_exporter, managed_by=edpm_ansible, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, config_id=edpm, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  3 19:21:54 compute-0 nova_compute[348325]: 2025-12-03 19:21:54.223 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:21:54 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2495: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:21:56 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2496: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:21:57 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:21:58 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2497: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:21:58 compute-0 nova_compute[348325]: 2025-12-03 19:21:58.876 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:21:59 compute-0 nova_compute[348325]: 2025-12-03 19:21:59.226 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:21:59 compute-0 podman[158200]: time="2025-12-03T19:21:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 19:21:59 compute-0 podman[158200]: @ - - [03/Dec/2025:19:21:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42578 "" "Go-http-client/1.1"
Dec  3 19:21:59 compute-0 podman[158200]: @ - - [03/Dec/2025:19:21:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8195 "" "Go-http-client/1.1"
Dec  3 19:22:00 compute-0 nova_compute[348325]: 2025-12-03 19:22:00.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:22:00 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2498: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:22:00 compute-0 nova_compute[348325]: 2025-12-03 19:22:00.651 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 19:22:00 compute-0 nova_compute[348325]: 2025-12-03 19:22:00.652 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 19:22:00 compute-0 nova_compute[348325]: 2025-12-03 19:22:00.653 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 19:22:00 compute-0 nova_compute[348325]: 2025-12-03 19:22:00.653 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  3 19:22:00 compute-0 nova_compute[348325]: 2025-12-03 19:22:00.653 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 19:22:00 compute-0 podman[485214]: 2025-12-03 19:22:00.956708847 +0000 UTC m=+0.111514460 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125)
Dec  3 19:22:00 compute-0 podman[485215]: 2025-12-03 19:22:00.971062063 +0000 UTC m=+0.123147020 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec  3 19:22:00 compute-0 podman[485206]: 2025-12-03 19:22:00.997908131 +0000 UTC m=+0.155245894 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, architecture=x86_64, io.openshift.expose-services=, managed_by=edpm_ansible, io.buildah.version=1.29.0, release=1214.1726694543, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, distribution-scope=public, version=9.4, maintainer=Red Hat, Inc., release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  3 19:22:01 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 19:22:01 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3363445550' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 19:22:01 compute-0 nova_compute[348325]: 2025-12-03 19:22:01.168 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 19:22:01 compute-0 openstack_network_exporter[365222]: ERROR   19:22:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 19:22:01 compute-0 openstack_network_exporter[365222]: ERROR   19:22:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:22:01 compute-0 openstack_network_exporter[365222]: ERROR   19:22:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:22:01 compute-0 openstack_network_exporter[365222]: ERROR   19:22:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 19:22:01 compute-0 openstack_network_exporter[365222]: ERROR   19:22:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 19:22:01 compute-0 nova_compute[348325]: 2025-12-03 19:22:01.700 348329 WARNING nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  3 19:22:01 compute-0 nova_compute[348325]: 2025-12-03 19:22:01.701 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3961MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  3 19:22:01 compute-0 nova_compute[348325]: 2025-12-03 19:22:01.702 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 19:22:01 compute-0 nova_compute[348325]: 2025-12-03 19:22:01.702 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 19:22:02 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:22:02 compute-0 nova_compute[348325]: 2025-12-03 19:22:02.572 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  3 19:22:02 compute-0 nova_compute[348325]: 2025-12-03 19:22:02.572 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  3 19:22:02 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2499: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:22:02 compute-0 nova_compute[348325]: 2025-12-03 19:22:02.918 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 19:22:03 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 19:22:03 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3046642400' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 19:22:03 compute-0 nova_compute[348325]: 2025-12-03 19:22:03.394 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 19:22:03 compute-0 nova_compute[348325]: 2025-12-03 19:22:03.406 348329 DEBUG nova.compute.provider_tree [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  3 19:22:03 compute-0 nova_compute[348325]: 2025-12-03 19:22:03.428 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  3 19:22:03 compute-0 nova_compute[348325]: 2025-12-03 19:22:03.430 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  3 19:22:03 compute-0 nova_compute[348325]: 2025-12-03 19:22:03.430 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.728s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 19:22:03 compute-0 nova_compute[348325]: 2025-12-03 19:22:03.880 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:22:04 compute-0 nova_compute[348325]: 2025-12-03 19:22:04.229 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:22:04 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2500: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:22:06 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2501: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:22:07 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:22:07 compute-0 podman[485299]: 2025-12-03 19:22:07.963364337 +0000 UTC m=+0.114706676 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 19:22:08 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2502: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:22:08 compute-0 nova_compute[348325]: 2025-12-03 19:22:08.885 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:22:09 compute-0 nova_compute[348325]: 2025-12-03 19:22:09.233 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:22:10 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2503: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:22:12 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:22:12 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2504: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:22:13 compute-0 nova_compute[348325]: 2025-12-03 19:22:13.890 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:22:13 compute-0 podman[485325]: 2025-12-03 19:22:13.961998515 +0000 UTC m=+0.124761598 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  3 19:22:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:22:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:22:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:22:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:22:14 compute-0 podman[485324]: 2025-12-03 19:22:14.011031037 +0000 UTC m=+0.181196110 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec  3 19:22:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:22:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:22:14 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_19:22:14
Dec  3 19:22:14 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 19:22:14 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 19:22:14 compute-0 ceph-mgr[193091]: [balancer INFO root] pools ['default.rgw.control', 'backups', '.mgr', 'vms', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.log', 'images']
Dec  3 19:22:14 compute-0 ceph-mgr[193091]: [balancer INFO root] prepared 0/10 changes
Dec  3 19:22:14 compute-0 nova_compute[348325]: 2025-12-03 19:22:14.234 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:22:14 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2505: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:22:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 19:22:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 19:22:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 19:22:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 19:22:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 19:22:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 19:22:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 19:22:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 19:22:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 19:22:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 19:22:16 compute-0 podman[485538]: 2025-12-03 19:22:16.2925254 +0000 UTC m=+0.689472192 container exec c4418ca0ee5df95c133db330bc8714b98e7c86be83b29540d0d4d94c3c723743 (image=quay.io/ceph/ceph:v18, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 19:22:16 compute-0 podman[485538]: 2025-12-03 19:22:16.436909321 +0000 UTC m=+0.833856133 container exec_died c4418ca0ee5df95c133db330bc8714b98e7c86be83b29540d0d4d94c3c723743 (image=quay.io/ceph/ceph:v18, name=ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef)
Dec  3 19:22:16 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2506: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:22:17 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:22:17 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 19:22:17 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:22:17 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 19:22:17 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:22:18 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2507: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:22:18 compute-0 nova_compute[348325]: 2025-12-03 19:22:18.893 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:22:18 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 19:22:18 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 19:22:18 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 19:22:18 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 19:22:18 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 19:22:18 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:22:18 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:22:19 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:22:19 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev dce7cf20-db29-454c-be9f-da5066da3d65 does not exist
Dec  3 19:22:19 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 6e2d45b0-c366-4f47-81f1-271d4469359e does not exist
Dec  3 19:22:19 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 4820ddea-008d-4d6d-a797-eb3de568cb68 does not exist
Dec  3 19:22:19 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 19:22:19 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 19:22:19 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 19:22:19 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 19:22:19 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 19:22:19 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 19:22:19 compute-0 nova_compute[348325]: 2025-12-03 19:22:19.237 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:22:19 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 19:22:19 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:22:19 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 19:22:20 compute-0 podman[485953]: 2025-12-03 19:22:20.028206162 +0000 UTC m=+0.030285801 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:22:20 compute-0 podman[485953]: 2025-12-03 19:22:20.417528998 +0000 UTC m=+0.419608667 container create 44c7d4229c8723288afdd166c839d3110eb0c4597c91d4e2683eb11c980a523e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_haibt, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec  3 19:22:20 compute-0 systemd[1]: Started libpod-conmon-44c7d4229c8723288afdd166c839d3110eb0c4597c91d4e2683eb11c980a523e.scope.
Dec  3 19:22:20 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:22:20 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2508: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:22:20 compute-0 podman[485953]: 2025-12-03 19:22:20.951854991 +0000 UTC m=+0.953934720 container init 44c7d4229c8723288afdd166c839d3110eb0c4597c91d4e2683eb11c980a523e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  3 19:22:20 compute-0 podman[485953]: 2025-12-03 19:22:20.969725002 +0000 UTC m=+0.971804671 container start 44c7d4229c8723288afdd166c839d3110eb0c4597c91d4e2683eb11c980a523e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_haibt, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 19:22:20 compute-0 unruffled_haibt[485967]: 167 167
Dec  3 19:22:20 compute-0 systemd[1]: libpod-44c7d4229c8723288afdd166c839d3110eb0c4597c91d4e2683eb11c980a523e.scope: Deactivated successfully.
Dec  3 19:22:20 compute-0 conmon[485967]: conmon 44c7d4229c8723288afd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-44c7d4229c8723288afdd166c839d3110eb0c4597c91d4e2683eb11c980a523e.scope/container/memory.events
Dec  3 19:22:21 compute-0 podman[485953]: 2025-12-03 19:22:21.301967912 +0000 UTC m=+1.304047631 container attach 44c7d4229c8723288afdd166c839d3110eb0c4597c91d4e2683eb11c980a523e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Dec  3 19:22:21 compute-0 podman[485953]: 2025-12-03 19:22:21.305126698 +0000 UTC m=+1.307206357 container died 44c7d4229c8723288afdd166c839d3110eb0c4597c91d4e2683eb11c980a523e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_haibt, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 19:22:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-210aeee97e98be4b5368a4a61aac7e44c795a6cf85aadb3143c124e19631d063-merged.mount: Deactivated successfully.
Dec  3 19:22:21 compute-0 podman[485953]: 2025-12-03 19:22:21.588278994 +0000 UTC m=+1.590358633 container remove 44c7d4229c8723288afdd166c839d3110eb0c4597c91d4e2683eb11c980a523e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec  3 19:22:21 compute-0 systemd[1]: libpod-conmon-44c7d4229c8723288afdd166c839d3110eb0c4597c91d4e2683eb11c980a523e.scope: Deactivated successfully.
Dec  3 19:22:21 compute-0 podman[485991]: 2025-12-03 19:22:21.816760213 +0000 UTC m=+0.032325580 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:22:22 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:22:22 compute-0 podman[485991]: 2025-12-03 19:22:22.482436991 +0000 UTC m=+0.698002338 container create 6c338160297a74c877e9d59d50ec873c63dc2b871b4c62c70e96c938b8ea6db2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  3 19:22:22 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2509: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:22:22 compute-0 systemd[1]: Started libpod-conmon-6c338160297a74c877e9d59d50ec873c63dc2b871b4c62c70e96c938b8ea6db2.scope.
Dec  3 19:22:22 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:22:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ef204becc293268b864013b816190e8f0be3e5ac884a7535be6b59919a21ca1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 19:22:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ef204becc293268b864013b816190e8f0be3e5ac884a7535be6b59919a21ca1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 19:22:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ef204becc293268b864013b816190e8f0be3e5ac884a7535be6b59919a21ca1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 19:22:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ef204becc293268b864013b816190e8f0be3e5ac884a7535be6b59919a21ca1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 19:22:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ef204becc293268b864013b816190e8f0be3e5ac884a7535be6b59919a21ca1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 19:22:22 compute-0 podman[485991]: 2025-12-03 19:22:22.819326534 +0000 UTC m=+1.034891961 container init 6c338160297a74c877e9d59d50ec873c63dc2b871b4c62c70e96c938b8ea6db2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 19:22:22 compute-0 podman[485991]: 2025-12-03 19:22:22.837244606 +0000 UTC m=+1.052809963 container start 6c338160297a74c877e9d59d50ec873c63dc2b871b4c62c70e96c938b8ea6db2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mahavira, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:22:22 compute-0 podman[485991]: 2025-12-03 19:22:22.846099179 +0000 UTC m=+1.061664616 container attach 6c338160297a74c877e9d59d50ec873c63dc2b871b4c62c70e96c938b8ea6db2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  3 19:22:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:22:23.384 286999 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 19:22:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:22:23.387 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 19:22:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:22:23.387 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 19:22:23 compute-0 nova_compute[348325]: 2025-12-03 19:22:23.897 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:22:23 compute-0 elastic_mahavira[486007]: --> passed data devices: 0 physical, 3 LVM
Dec  3 19:22:23 compute-0 elastic_mahavira[486007]: --> relative data size: 1.0
Dec  3 19:22:23 compute-0 elastic_mahavira[486007]: --> All data devices are unavailable
Dec  3 19:22:24 compute-0 systemd[1]: libpod-6c338160297a74c877e9d59d50ec873c63dc2b871b4c62c70e96c938b8ea6db2.scope: Deactivated successfully.
Dec  3 19:22:24 compute-0 podman[485991]: 2025-12-03 19:22:24.025267628 +0000 UTC m=+2.240833055 container died 6c338160297a74c877e9d59d50ec873c63dc2b871b4c62c70e96c938b8ea6db2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  3 19:22:24 compute-0 systemd[1]: libpod-6c338160297a74c877e9d59d50ec873c63dc2b871b4c62c70e96c938b8ea6db2.scope: Consumed 1.146s CPU time.
Dec  3 19:22:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ef204becc293268b864013b816190e8f0be3e5ac884a7535be6b59919a21ca1-merged.mount: Deactivated successfully.
Dec  3 19:22:24 compute-0 podman[485991]: 2025-12-03 19:22:24.137383201 +0000 UTC m=+2.352948548 container remove 6c338160297a74c877e9d59d50ec873c63dc2b871b4c62c70e96c938b8ea6db2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mahavira, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Dec  3 19:22:24 compute-0 systemd[1]: libpod-conmon-6c338160297a74c877e9d59d50ec873c63dc2b871b4c62c70e96c938b8ea6db2.scope: Deactivated successfully.
Dec  3 19:22:24 compute-0 podman[486040]: 2025-12-03 19:22:24.193919993 +0000 UTC m=+0.115034964 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 19:22:24 compute-0 podman[486037]: 2025-12-03 19:22:24.201030736 +0000 UTC m=+0.114264197 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  3 19:22:24 compute-0 podman[486046]: 2025-12-03 19:22:24.209246593 +0000 UTC m=+0.129484292 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.buildah.version=1.33.7, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, com.redhat.component=ubi9-minimal-container, config_id=edpm, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec  3 19:22:24 compute-0 nova_compute[348325]: 2025-12-03 19:22:24.239 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:22:24 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2510: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:22:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 19:22:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:22:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 19:22:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:22:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  3 19:22:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:22:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:22:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:22:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:22:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:22:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec  3 19:22:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:22:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 19:22:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:22:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:22:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:22:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 19:22:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:22:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 19:22:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:22:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:22:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:22:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 19:22:25 compute-0 podman[486243]: 2025-12-03 19:22:25.130803691 +0000 UTC m=+0.046775359 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:22:25 compute-0 podman[486243]: 2025-12-03 19:22:25.803008227 +0000 UTC m=+0.718979845 container create ee4642973e48570ec105f57fec52c206b0b7c1d5084ee4bd348391828db30c0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_banzai, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 19:22:25 compute-0 systemd[1]: Started libpod-conmon-ee4642973e48570ec105f57fec52c206b0b7c1d5084ee4bd348391828db30c0a.scope.
Dec  3 19:22:25 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:22:25 compute-0 podman[486243]: 2025-12-03 19:22:25.962076131 +0000 UTC m=+0.878047729 container init ee4642973e48570ec105f57fec52c206b0b7c1d5084ee4bd348391828db30c0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 19:22:25 compute-0 podman[486243]: 2025-12-03 19:22:25.981378147 +0000 UTC m=+0.897349765 container start ee4642973e48570ec105f57fec52c206b0b7c1d5084ee4bd348391828db30c0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec  3 19:22:25 compute-0 podman[486243]: 2025-12-03 19:22:25.98814531 +0000 UTC m=+0.904116908 container attach ee4642973e48570ec105f57fec52c206b0b7c1d5084ee4bd348391828db30c0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_banzai, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  3 19:22:25 compute-0 agitated_banzai[486260]: 167 167
Dec  3 19:22:25 compute-0 systemd[1]: libpod-ee4642973e48570ec105f57fec52c206b0b7c1d5084ee4bd348391828db30c0a.scope: Deactivated successfully.
Dec  3 19:22:25 compute-0 podman[486243]: 2025-12-03 19:22:25.993920189 +0000 UTC m=+0.909891797 container died ee4642973e48570ec105f57fec52c206b0b7c1d5084ee4bd348391828db30c0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_banzai, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 19:22:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-7143f790a9a877cb5487abf94e85bdc85d522c30853a547405f4cb5cef53569e-merged.mount: Deactivated successfully.
Dec  3 19:22:26 compute-0 podman[486243]: 2025-12-03 19:22:26.061692723 +0000 UTC m=+0.977664301 container remove ee4642973e48570ec105f57fec52c206b0b7c1d5084ee4bd348391828db30c0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 19:22:26 compute-0 systemd[1]: libpod-conmon-ee4642973e48570ec105f57fec52c206b0b7c1d5084ee4bd348391828db30c0a.scope: Deactivated successfully.
Dec  3 19:22:26 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 19:22:26 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 4800.0 total, 600.0 interval
Cumulative writes: 11K writes, 51K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.01 MB/s
Cumulative WAL: 11K writes, 11K syncs, 1.00 writes per sync, written: 0.07 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1355 writes, 6218 keys, 1355 commit groups, 1.0 writes per commit group, ingest: 8.74 MB, 0.01 MB/s
Interval WAL: 1355 writes, 1355 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     43.3      1.48              0.31        36    0.041       0      0       0.0       0.0
  L6      1/0    7.30 MB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   4.2    123.0    101.5      2.65              1.11        35    0.076    194K    19K       0.0       0.0
 Sum      1/0    7.30 MB   0.0      0.3     0.1      0.3       0.3      0.1       0.0   5.2     78.9     80.6      4.14              1.42        71    0.058    194K    19K       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.6    102.5    104.5      0.47              0.22        10    0.047     34K   2547       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   0.0    123.0    101.5      2.65              1.11        35    0.076    194K    19K       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     43.5      1.48              0.31        35    0.042       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      6.8      0.01              0.00         1    0.008       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 4800.0 total, 600.0 interval
Flush(GB): cumulative 0.063, interval 0.008
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.33 GB write, 0.07 MB/s write, 0.32 GB read, 0.07 MB/s read, 4.1 seconds
Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.5 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55911062f1f0#2 capacity: 304.00 MB usage: 40.44 MB table_size: 0 occupancy: 18446744073709551615 collections: 9 last_copies: 0 last_secs: 0.000339 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(2786,39.02 MB,12.8356%) FilterBlock(72,549.23 KB,0.176435%) IndexBlock(72,899.55 KB,0.288968%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Dec  3 19:22:26 compute-0 podman[486282]: 2025-12-03 19:22:26.256061919 +0000 UTC m=+0.032399552 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:22:26 compute-0 podman[486282]: 2025-12-03 19:22:26.290393497 +0000 UTC m=+0.066731040 container create 208639b56c51890745788e18d33fc83f62c3ce92795500532f097de982e7b478 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hermann, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 19:22:26 compute-0 systemd[1]: Started libpod-conmon-208639b56c51890745788e18d33fc83f62c3ce92795500532f097de982e7b478.scope.
Dec  3 19:22:26 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:22:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3e14c40ea52417d70ebf669748366a37044e1c3d2aeee4b70ae123585d05635/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 19:22:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3e14c40ea52417d70ebf669748366a37044e1c3d2aeee4b70ae123585d05635/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 19:22:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3e14c40ea52417d70ebf669748366a37044e1c3d2aeee4b70ae123585d05635/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 19:22:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3e14c40ea52417d70ebf669748366a37044e1c3d2aeee4b70ae123585d05635/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 19:22:26 compute-0 podman[486282]: 2025-12-03 19:22:26.446830769 +0000 UTC m=+0.223168342 container init 208639b56c51890745788e18d33fc83f62c3ce92795500532f097de982e7b478 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec  3 19:22:26 compute-0 podman[486282]: 2025-12-03 19:22:26.472800555 +0000 UTC m=+0.249138108 container start 208639b56c51890745788e18d33fc83f62c3ce92795500532f097de982e7b478 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hermann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 19:22:26 compute-0 podman[486282]: 2025-12-03 19:22:26.478967203 +0000 UTC m=+0.255305006 container attach 208639b56c51890745788e18d33fc83f62c3ce92795500532f097de982e7b478 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hermann, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec  3 19:22:26 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2511: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:22:27 compute-0 angry_hermann[486298]: {
Dec  3 19:22:27 compute-0 angry_hermann[486298]:    "0": [
Dec  3 19:22:27 compute-0 angry_hermann[486298]:        {
Dec  3 19:22:27 compute-0 angry_hermann[486298]:            "devices": [
Dec  3 19:22:27 compute-0 angry_hermann[486298]:                "/dev/loop3"
Dec  3 19:22:27 compute-0 angry_hermann[486298]:            ],
Dec  3 19:22:27 compute-0 angry_hermann[486298]:            "lv_name": "ceph_lv0",
Dec  3 19:22:27 compute-0 angry_hermann[486298]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 19:22:27 compute-0 angry_hermann[486298]:            "lv_size": "21470642176",
Dec  3 19:22:27 compute-0 angry_hermann[486298]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=973fbbc8-5aff-4a53-bee8-42e5a6788dd6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 19:22:27 compute-0 angry_hermann[486298]:            "lv_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 19:22:27 compute-0 angry_hermann[486298]:            "name": "ceph_lv0",
Dec  3 19:22:27 compute-0 angry_hermann[486298]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 19:22:27 compute-0 angry_hermann[486298]:            "tags": {
Dec  3 19:22:27 compute-0 angry_hermann[486298]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 19:22:27 compute-0 angry_hermann[486298]:                "ceph.block_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 19:22:27 compute-0 angry_hermann[486298]:                "ceph.cephx_lockbox_secret": "",
Dec  3 19:22:27 compute-0 angry_hermann[486298]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:22:27 compute-0 angry_hermann[486298]:                "ceph.cluster_name": "ceph",
Dec  3 19:22:27 compute-0 angry_hermann[486298]:                "ceph.crush_device_class": "",
Dec  3 19:22:27 compute-0 angry_hermann[486298]:                "ceph.encrypted": "0",
Dec  3 19:22:27 compute-0 angry_hermann[486298]:                "ceph.osd_fsid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 19:22:27 compute-0 angry_hermann[486298]:                "ceph.osd_id": "0",
Dec  3 19:22:27 compute-0 angry_hermann[486298]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 19:22:27 compute-0 angry_hermann[486298]:                "ceph.type": "block",
Dec  3 19:22:27 compute-0 angry_hermann[486298]:                "ceph.vdo": "0"
Dec  3 19:22:27 compute-0 angry_hermann[486298]:            },
Dec  3 19:22:27 compute-0 angry_hermann[486298]:            "type": "block",
Dec  3 19:22:27 compute-0 angry_hermann[486298]:            "vg_name": "ceph_vg0"
Dec  3 19:22:27 compute-0 angry_hermann[486298]:        }
Dec  3 19:22:27 compute-0 angry_hermann[486298]:    ],
Dec  3 19:22:27 compute-0 angry_hermann[486298]:    "1": [
Dec  3 19:22:27 compute-0 angry_hermann[486298]:        {
Dec  3 19:22:27 compute-0 angry_hermann[486298]:            "devices": [
Dec  3 19:22:27 compute-0 angry_hermann[486298]:                "/dev/loop4"
Dec  3 19:22:27 compute-0 angry_hermann[486298]:            ],
Dec  3 19:22:27 compute-0 angry_hermann[486298]:            "lv_name": "ceph_lv1",
Dec  3 19:22:27 compute-0 angry_hermann[486298]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 19:22:27 compute-0 angry_hermann[486298]:            "lv_size": "21470642176",
Dec  3 19:22:27 compute-0 angry_hermann[486298]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1e2b0083-5293-47cb-a3d1-bc27cedc4ede,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 19:22:27 compute-0 angry_hermann[486298]:            "lv_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 19:22:27 compute-0 angry_hermann[486298]:            "name": "ceph_lv1",
Dec  3 19:22:27 compute-0 angry_hermann[486298]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 19:22:27 compute-0 angry_hermann[486298]:            "tags": {
Dec  3 19:22:27 compute-0 angry_hermann[486298]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 19:22:27 compute-0 angry_hermann[486298]:                "ceph.block_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 19:22:27 compute-0 angry_hermann[486298]:                "ceph.cephx_lockbox_secret": "",
Dec  3 19:22:27 compute-0 angry_hermann[486298]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:22:27 compute-0 angry_hermann[486298]:                "ceph.cluster_name": "ceph",
Dec  3 19:22:27 compute-0 angry_hermann[486298]:                "ceph.crush_device_class": "",
Dec  3 19:22:27 compute-0 angry_hermann[486298]:                "ceph.encrypted": "0",
Dec  3 19:22:27 compute-0 angry_hermann[486298]:                "ceph.osd_fsid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 19:22:27 compute-0 angry_hermann[486298]:                "ceph.osd_id": "1",
Dec  3 19:22:27 compute-0 angry_hermann[486298]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 19:22:27 compute-0 angry_hermann[486298]:                "ceph.type": "block",
Dec  3 19:22:27 compute-0 angry_hermann[486298]:                "ceph.vdo": "0"
Dec  3 19:22:27 compute-0 angry_hermann[486298]:            },
Dec  3 19:22:27 compute-0 angry_hermann[486298]:            "type": "block",
Dec  3 19:22:27 compute-0 angry_hermann[486298]:            "vg_name": "ceph_vg1"
Dec  3 19:22:27 compute-0 angry_hermann[486298]:        }
Dec  3 19:22:27 compute-0 angry_hermann[486298]:    ],
Dec  3 19:22:27 compute-0 angry_hermann[486298]:    "2": [
Dec  3 19:22:27 compute-0 angry_hermann[486298]:        {
Dec  3 19:22:27 compute-0 angry_hermann[486298]:            "devices": [
Dec  3 19:22:27 compute-0 angry_hermann[486298]:                "/dev/loop5"
Dec  3 19:22:27 compute-0 angry_hermann[486298]:            ],
Dec  3 19:22:27 compute-0 angry_hermann[486298]:            "lv_name": "ceph_lv2",
Dec  3 19:22:27 compute-0 angry_hermann[486298]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 19:22:27 compute-0 angry_hermann[486298]:            "lv_size": "21470642176",
Dec  3 19:22:27 compute-0 angry_hermann[486298]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2abec9de-afba-437e-9a17-384a1dd8cd50,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 19:22:27 compute-0 angry_hermann[486298]:            "lv_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 19:22:27 compute-0 angry_hermann[486298]:            "name": "ceph_lv2",
Dec  3 19:22:27 compute-0 angry_hermann[486298]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 19:22:27 compute-0 angry_hermann[486298]:            "tags": {
Dec  3 19:22:27 compute-0 angry_hermann[486298]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 19:22:27 compute-0 angry_hermann[486298]:                "ceph.block_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 19:22:27 compute-0 angry_hermann[486298]:                "ceph.cephx_lockbox_secret": "",
Dec  3 19:22:27 compute-0 angry_hermann[486298]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:22:27 compute-0 angry_hermann[486298]:                "ceph.cluster_name": "ceph",
Dec  3 19:22:27 compute-0 angry_hermann[486298]:                "ceph.crush_device_class": "",
Dec  3 19:22:27 compute-0 angry_hermann[486298]:                "ceph.encrypted": "0",
Dec  3 19:22:27 compute-0 angry_hermann[486298]:                "ceph.osd_fsid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 19:22:27 compute-0 angry_hermann[486298]:                "ceph.osd_id": "2",
Dec  3 19:22:27 compute-0 angry_hermann[486298]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 19:22:27 compute-0 angry_hermann[486298]:                "ceph.type": "block",
Dec  3 19:22:27 compute-0 angry_hermann[486298]:                "ceph.vdo": "0"
Dec  3 19:22:27 compute-0 angry_hermann[486298]:            },
Dec  3 19:22:27 compute-0 angry_hermann[486298]:            "type": "block",
Dec  3 19:22:27 compute-0 angry_hermann[486298]:            "vg_name": "ceph_vg2"
Dec  3 19:22:27 compute-0 angry_hermann[486298]:        }
Dec  3 19:22:27 compute-0 angry_hermann[486298]:    ]
Dec  3 19:22:27 compute-0 angry_hermann[486298]: }
Dec  3 19:22:27 compute-0 systemd[1]: libpod-208639b56c51890745788e18d33fc83f62c3ce92795500532f097de982e7b478.scope: Deactivated successfully.
Dec  3 19:22:27 compute-0 podman[486282]: 2025-12-03 19:22:27.400875139 +0000 UTC m=+1.177212712 container died 208639b56c51890745788e18d33fc83f62c3ce92795500532f097de982e7b478 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Dec  3 19:22:27 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:22:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3e14c40ea52417d70ebf669748366a37044e1c3d2aeee4b70ae123585d05635-merged.mount: Deactivated successfully.
Dec  3 19:22:27 compute-0 podman[486282]: 2025-12-03 19:22:27.58051634 +0000 UTC m=+1.356853913 container remove 208639b56c51890745788e18d33fc83f62c3ce92795500532f097de982e7b478 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 19:22:27 compute-0 systemd[1]: libpod-conmon-208639b56c51890745788e18d33fc83f62c3ce92795500532f097de982e7b478.scope: Deactivated successfully.
Dec  3 19:22:28 compute-0 nova_compute[348325]: 2025-12-03 19:22:28.003 348329 DEBUG oslo_concurrency.processutils [None req-3ab0b143-b799-409f-aa9a-1bd9bcbfea45 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 19:22:28 compute-0 nova_compute[348325]: 2025-12-03 19:22:28.036 348329 DEBUG oslo_concurrency.processutils [None req-3ab0b143-b799-409f-aa9a-1bd9bcbfea45 56338958b09445f5af9aa9e4601a1a8a d2770200bdb2436c90142fa2e5ddcd47 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.032s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 19:22:28 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2512: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:22:28 compute-0 podman[486457]: 2025-12-03 19:22:28.598156964 +0000 UTC m=+0.040402555 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:22:28 compute-0 nova_compute[348325]: 2025-12-03 19:22:28.901 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:22:28 compute-0 podman[486457]: 2025-12-03 19:22:28.994795867 +0000 UTC m=+0.437041408 container create 43aaa38a5dabdf6cce005d03e47c203a50ba0836bc8861eef27081dc23c4a33b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_shtern, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  3 19:22:29 compute-0 systemd[1]: Started libpod-conmon-43aaa38a5dabdf6cce005d03e47c203a50ba0836bc8861eef27081dc23c4a33b.scope.
Dec  3 19:22:29 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:22:29 compute-0 nova_compute[348325]: 2025-12-03 19:22:29.247 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:22:29 compute-0 podman[486457]: 2025-12-03 19:22:29.567148926 +0000 UTC m=+1.009394537 container init 43aaa38a5dabdf6cce005d03e47c203a50ba0836bc8861eef27081dc23c4a33b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 19:22:29 compute-0 podman[486457]: 2025-12-03 19:22:29.577636199 +0000 UTC m=+1.019881750 container start 43aaa38a5dabdf6cce005d03e47c203a50ba0836bc8861eef27081dc23c4a33b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec  3 19:22:29 compute-0 inspiring_shtern[486472]: 167 167
Dec  3 19:22:29 compute-0 systemd[1]: libpod-43aaa38a5dabdf6cce005d03e47c203a50ba0836bc8861eef27081dc23c4a33b.scope: Deactivated successfully.
Dec  3 19:22:29 compute-0 podman[486457]: 2025-12-03 19:22:29.627014439 +0000 UTC m=+1.069260010 container attach 43aaa38a5dabdf6cce005d03e47c203a50ba0836bc8861eef27081dc23c4a33b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  3 19:22:29 compute-0 podman[486457]: 2025-12-03 19:22:29.627920621 +0000 UTC m=+1.070166202 container died 43aaa38a5dabdf6cce005d03e47c203a50ba0836bc8861eef27081dc23c4a33b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 19:22:29 compute-0 podman[158200]: time="2025-12-03T19:22:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 19:22:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd2ec3e5d67eadd4a6073451db444a38b418801829e74e77be2f8c9a7b2c3169-merged.mount: Deactivated successfully.
Dec  3 19:22:29 compute-0 podman[486457]: 2025-12-03 19:22:29.792370905 +0000 UTC m=+1.234616446 container remove 43aaa38a5dabdf6cce005d03e47c203a50ba0836bc8861eef27081dc23c4a33b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_shtern, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec  3 19:22:29 compute-0 podman[158200]: @ - - [03/Dec/2025:19:22:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43946 "" "Go-http-client/1.1"
Dec  3 19:22:29 compute-0 systemd[1]: libpod-conmon-43aaa38a5dabdf6cce005d03e47c203a50ba0836bc8861eef27081dc23c4a33b.scope: Deactivated successfully.
Dec  3 19:22:29 compute-0 podman[158200]: @ - - [03/Dec/2025:19:22:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8189 "" "Go-http-client/1.1"
Dec  3 19:22:30 compute-0 podman[486496]: 2025-12-03 19:22:30.030845324 +0000 UTC m=+0.067499428 container create 93889e0c88ea6f1084d1cb2ab725fdeffc51ebd927b023fdcf96b85c30d33c31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 19:22:30 compute-0 podman[486496]: 2025-12-03 19:22:30.003106485 +0000 UTC m=+0.039760679 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:22:30 compute-0 systemd[1]: Started libpod-conmon-93889e0c88ea6f1084d1cb2ab725fdeffc51ebd927b023fdcf96b85c30d33c31.scope.
Dec  3 19:22:30 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:22:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5ef1de2ac42edefbff771d26516d8edf76d1a4841c687c6637e923135b55075/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 19:22:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5ef1de2ac42edefbff771d26516d8edf76d1a4841c687c6637e923135b55075/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 19:22:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5ef1de2ac42edefbff771d26516d8edf76d1a4841c687c6637e923135b55075/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 19:22:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5ef1de2ac42edefbff771d26516d8edf76d1a4841c687c6637e923135b55075/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 19:22:30 compute-0 podman[486496]: 2025-12-03 19:22:30.363217788 +0000 UTC m=+0.399871932 container init 93889e0c88ea6f1084d1cb2ab725fdeffc51ebd927b023fdcf96b85c30d33c31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_jackson, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  3 19:22:30 compute-0 podman[486496]: 2025-12-03 19:22:30.381885548 +0000 UTC m=+0.418539652 container start 93889e0c88ea6f1084d1cb2ab725fdeffc51ebd927b023fdcf96b85c30d33c31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_jackson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec  3 19:22:30 compute-0 podman[486496]: 2025-12-03 19:22:30.598799257 +0000 UTC m=+0.635453401 container attach 93889e0c88ea6f1084d1cb2ab725fdeffc51ebd927b023fdcf96b85c30d33c31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_jackson, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 19:22:30 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2513: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:22:31 compute-0 openstack_network_exporter[365222]: ERROR   19:22:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 19:22:31 compute-0 openstack_network_exporter[365222]: ERROR   19:22:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:22:31 compute-0 openstack_network_exporter[365222]: ERROR   19:22:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:22:31 compute-0 openstack_network_exporter[365222]: ERROR   19:22:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 19:22:31 compute-0 openstack_network_exporter[365222]: ERROR   19:22:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 19:22:31 compute-0 naughty_jackson[486512]: {
Dec  3 19:22:31 compute-0 naughty_jackson[486512]:    "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
Dec  3 19:22:31 compute-0 naughty_jackson[486512]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:22:31 compute-0 naughty_jackson[486512]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 19:22:31 compute-0 naughty_jackson[486512]:        "osd_id": 1,
Dec  3 19:22:31 compute-0 naughty_jackson[486512]:        "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 19:22:31 compute-0 naughty_jackson[486512]:        "type": "bluestore"
Dec  3 19:22:31 compute-0 naughty_jackson[486512]:    },
Dec  3 19:22:31 compute-0 naughty_jackson[486512]:    "2abec9de-afba-437e-9a17-384a1dd8cd50": {
Dec  3 19:22:31 compute-0 naughty_jackson[486512]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:22:31 compute-0 naughty_jackson[486512]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 19:22:31 compute-0 naughty_jackson[486512]:        "osd_id": 2,
Dec  3 19:22:31 compute-0 naughty_jackson[486512]:        "osd_uuid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 19:22:31 compute-0 naughty_jackson[486512]:        "type": "bluestore"
Dec  3 19:22:31 compute-0 naughty_jackson[486512]:    },
Dec  3 19:22:31 compute-0 naughty_jackson[486512]:    "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": {
Dec  3 19:22:31 compute-0 naughty_jackson[486512]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:22:31 compute-0 naughty_jackson[486512]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 19:22:31 compute-0 naughty_jackson[486512]:        "osd_id": 0,
Dec  3 19:22:31 compute-0 naughty_jackson[486512]:        "osd_uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 19:22:31 compute-0 naughty_jackson[486512]:        "type": "bluestore"
Dec  3 19:22:31 compute-0 naughty_jackson[486512]:    }
Dec  3 19:22:31 compute-0 naughty_jackson[486512]: }
Dec  3 19:22:31 compute-0 systemd[1]: libpod-93889e0c88ea6f1084d1cb2ab725fdeffc51ebd927b023fdcf96b85c30d33c31.scope: Deactivated successfully.
Dec  3 19:22:31 compute-0 podman[486496]: 2025-12-03 19:22:31.55417997 +0000 UTC m=+1.590834064 container died 93889e0c88ea6f1084d1cb2ab725fdeffc51ebd927b023fdcf96b85c30d33c31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_jackson, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec  3 19:22:31 compute-0 systemd[1]: libpod-93889e0c88ea6f1084d1cb2ab725fdeffc51ebd927b023fdcf96b85c30d33c31.scope: Consumed 1.181s CPU time.
Dec  3 19:22:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-e5ef1de2ac42edefbff771d26516d8edf76d1a4841c687c6637e923135b55075-merged.mount: Deactivated successfully.
Dec  3 19:22:31 compute-0 podman[486496]: 2025-12-03 19:22:31.767103193 +0000 UTC m=+1.803757317 container remove 93889e0c88ea6f1084d1cb2ab725fdeffc51ebd927b023fdcf96b85c30d33c31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_jackson, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec  3 19:22:31 compute-0 podman[486545]: 2025-12-03 19:22:31.778253932 +0000 UTC m=+0.180241256 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=kepler, distribution-scope=public, version=9.4, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, release=1214.1726694543, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git)
Dec  3 19:22:31 compute-0 podman[486552]: 2025-12-03 19:22:31.780028005 +0000 UTC m=+0.169872097 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  3 19:22:31 compute-0 systemd[1]: libpod-conmon-93889e0c88ea6f1084d1cb2ab725fdeffc51ebd927b023fdcf96b85c30d33c31.scope: Deactivated successfully.
Dec  3 19:22:31 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 19:22:31 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:22:31 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 19:22:31 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:22:31 compute-0 podman[486553]: 2025-12-03 19:22:31.847095151 +0000 UTC m=+0.227959016 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec  3 19:22:31 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 6358dedc-9527-420d-b58b-29d4265b8c52 does not exist
Dec  3 19:22:31 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 8b86ba64-a7e1-47be-8f63-29f508d2cac6 does not exist
Dec  3 19:22:32 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:22:32 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2514: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:22:32 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:22:32 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:22:33 compute-0 nova_compute[348325]: 2025-12-03 19:22:33.905 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:22:34 compute-0 nova_compute[348325]: 2025-12-03 19:22:34.244 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:22:34 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2515: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:22:36 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:22:36.452 286999 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5a:63:53', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '8e:79:bd:f4:48:1d'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 19:22:36 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:22:36.454 286999 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  3 19:22:36 compute-0 nova_compute[348325]: 2025-12-03 19:22:36.454 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:22:36 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2516: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:22:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:22:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 19:22:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1537118395' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 19:22:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 19:22:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1537118395' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 19:22:38 compute-0 nova_compute[348325]: 2025-12-03 19:22:38.431 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:22:38 compute-0 nova_compute[348325]: 2025-12-03 19:22:38.431 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:22:38 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2517: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:22:38 compute-0 nova_compute[348325]: 2025-12-03 19:22:38.909 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:22:38 compute-0 podman[486661]: 2025-12-03 19:22:38.9552515 +0000 UTC m=+0.112693247 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  3 19:22:39 compute-0 nova_compute[348325]: 2025-12-03 19:22:39.247 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:22:40 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2518: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:22:41 compute-0 nova_compute[348325]: 2025-12-03 19:22:41.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:22:42 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:22:42 compute-0 nova_compute[348325]: 2025-12-03 19:22:42.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:22:42 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2519: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:22:43 compute-0 nova_compute[348325]: 2025-12-03 19:22:43.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:22:43 compute-0 nova_compute[348325]: 2025-12-03 19:22:43.486 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 19:22:43 compute-0 nova_compute[348325]: 2025-12-03 19:22:43.487 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  3 19:22:43 compute-0 nova_compute[348325]: 2025-12-03 19:22:43.635 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  3 19:22:43 compute-0 nova_compute[348325]: 2025-12-03 19:22:43.636 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:22:43 compute-0 nova_compute[348325]: 2025-12-03 19:22:43.912 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:22:44 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:22:44 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:22:44 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:22:44 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:22:44 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:22:44 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:22:44 compute-0 nova_compute[348325]: 2025-12-03 19:22:44.250 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:22:44 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:22:44.457 286999 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1ac9fd0d-196b-4ea8-9a9a-8aa831092805, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 19:22:44 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2520: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:22:44 compute-0 podman[486687]: 2025-12-03 19:22:44.882184711 +0000 UTC m=+0.162357596 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  3 19:22:44 compute-0 podman[486686]: 2025-12-03 19:22:44.928289422 +0000 UTC m=+0.215489696 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  3 19:22:46 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2521: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:22:47 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:22:47 compute-0 nova_compute[348325]: 2025-12-03 19:22:47.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:22:48 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2522: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:22:48 compute-0 nova_compute[348325]: 2025-12-03 19:22:48.914 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:22:49 compute-0 nova_compute[348325]: 2025-12-03 19:22:49.253 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:22:50 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2523: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:22:52 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:22:52 compute-0 nova_compute[348325]: 2025-12-03 19:22:52.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:22:52 compute-0 nova_compute[348325]: 2025-12-03 19:22:52.488 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 19:22:52 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2524: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:22:53 compute-0 nova_compute[348325]: 2025-12-03 19:22:53.919 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:22:54 compute-0 nova_compute[348325]: 2025-12-03 19:22:54.256 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:22:54 compute-0 nova_compute[348325]: 2025-12-03 19:22:54.479 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 19:22:54 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2525: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:22:54 compute-0 podman[486733]: 2025-12-03 19:22:54.978146738 +0000 UTC m=+0.128947650 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  3 19:22:54 compute-0 podman[486734]: 2025-12-03 19:22:54.98194445 +0000 UTC m=+0.127306840 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 19:22:54 compute-0 podman[486735]: 2025-12-03 19:22:54.995334692 +0000 UTC m=+0.142316871 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, config_id=edpm, vendor=Red Hat, Inc., io.buildah.version=1.33.7, vcs-type=git, distribution-scope=public, name=ubi9-minimal, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, managed_by=edpm_ansible)
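Each health_status record above is emitted when podman runs the healthcheck defined in the container's config_data: the 'test' command executes inside the container and the 'mount' directory is bind-mounted at /openstack. A minimal sketch of that translation into podman flags, assuming the EDPM convention shown in the log (the mapping itself is an assumption, not quoted from edpm_ansible):

    # Sketch: how an EDPM 'healthcheck' mapping could translate to podman flags.
    hc = {"mount": "/var/lib/openstack/healthchecks/multipathd",
          "test": "/openstack/healthcheck"}

    flags = [
        "--health-cmd", hc["test"],                     # command run in-container
        "--volume", f"{hc['mount']}:/openstack:ro,z",   # healthcheck script mount
    ]
    print(" ".join(flags))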
Dec  3 19:22:56 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2526: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:22:57 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:22:58 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2527: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:22:58 compute-0 nova_compute[348325]: 2025-12-03 19:22:58.923 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:22:59 compute-0 nova_compute[348325]: 2025-12-03 19:22:59.258 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:22:59 compute-0 podman[158200]: time="2025-12-03T19:22:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 19:22:59 compute-0 podman[158200]: @ - - [03/Dec/2025:19:22:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42578 "" "Go-http-client/1.1"
Dec  3 19:22:59 compute-0 podman[158200]: @ - - [03/Dec/2025:19:22:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8196 "" "Go-http-client/1.1"
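The two HTTP lines above are podman's REST service answering a scraper over its UNIX socket (the URL paths are the libpod v4.9.3 API from the log). The same endpoint can be queried with only the standard library; the socket path /run/podman/podman.sock is the usual default and an assumption for this host:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client connection that dials a UNIX socket instead of TCP."""
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self._path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c["Names"], c["State"])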
Dec  3 19:23:00 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2528: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:23:01 compute-0 openstack_network_exporter[365222]: ERROR   19:23:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 19:23:01 compute-0 openstack_network_exporter[365222]: ERROR   19:23:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:23:01 compute-0 openstack_network_exporter[365222]: ERROR   19:23:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:23:01 compute-0 openstack_network_exporter[365222]: ERROR   19:23:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 19:23:01 compute-0 openstack_network_exporter[365222]: ERROR   19:23:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
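The exporter errors above mean no ovs-appctl control sockets were found, i.e. ovs-vswitchd, ovsdb-server, and ovn-northd are not running where the exporter looks. By OVS convention each daemon creates <name>.<pid>.ctl under its run directory; a quick probe in that spirit (paths follow the usual convention and are an assumption for this host):

    import glob

    # ovs-appctl dials /var/run/openvswitch/<daemon>.<pid>.ctl (ovn-northd
    # conventionally uses /var/run/ovn instead); no matching file means the
    # daemon is not running here, which is what the errors above report.
    for rundir, daemon in [("/var/run/openvswitch", "ovs-vswitchd"),
                           ("/var/run/openvswitch", "ovsdb-server"),
                           ("/var/run/ovn", "ovn-northd")]:
        hits = glob.glob(f"{rundir}/{daemon}.*.ctl")
        print(daemon, hits or "no control socket found")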
Dec  3 19:23:01 compute-0 nova_compute[348325]: 2025-12-03 19:23:01.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:23:01 compute-0 nova_compute[348325]: 2025-12-03 19:23:01.588 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 19:23:01 compute-0 nova_compute[348325]: 2025-12-03 19:23:01.589 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 19:23:01 compute-0 nova_compute[348325]: 2025-12-03 19:23:01.589 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 19:23:01 compute-0 nova_compute[348325]: 2025-12-03 19:23:01.590 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  3 19:23:01 compute-0 nova_compute[348325]: 2025-12-03 19:23:01.590 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
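The resource tracker sizes its RBD-backed storage by shelling out to the exact command logged above and reading the JSON. Reproducing that by hand (the command is from the log; the 'stats'/'total_avail_bytes' keys follow the ceph df JSON schema and should be treated as an assumption):

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout
    stats = json.loads(out)["stats"]
    print("avail GiB:", stats["total_avail_bytes"] / 1024**3)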
Dec  3 19:23:01 compute-0 podman[486813]: 2025-12-03 19:23:01.965428878 +0000 UTC m=+0.119803959 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, config_id=edpm, io.openshift.expose-services=, io.openshift.tags=base rhel9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, distribution-scope=public, release=1214.1726694543, vcs-type=git, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., name=ubi9, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=)
Dec  3 19:23:01 compute-0 podman[486814]: 2025-12-03 19:23:01.974020986 +0000 UTC m=+0.119935823 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 19:23:02 compute-0 podman[486848]: 2025-12-03 19:23:02.093982027 +0000 UTC m=+0.103150157 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec  3 19:23:02 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 19:23:02 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1786905912' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 19:23:02 compute-0 nova_compute[348325]: 2025-12-03 19:23:02.136 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 19:23:02 compute-0 nova_compute[348325]: 2025-12-03 19:23:02.557 348329 WARNING nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  3 19:23:02 compute-0 nova_compute[348325]: 2025-12-03 19:23:02.559 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3960MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  3 19:23:02 compute-0 nova_compute[348325]: 2025-12-03 19:23:02.559 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 19:23:02 compute-0 nova_compute[348325]: 2025-12-03 19:23:02.559 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 19:23:02 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2529: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:23:02 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:23:02 compute-0 nova_compute[348325]: 2025-12-03 19:23:02.686 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  3 19:23:02 compute-0 nova_compute[348325]: 2025-12-03 19:23:02.686 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  3 19:23:02 compute-0 nova_compute[348325]: 2025-12-03 19:23:02.752 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Refreshing inventories for resource provider 00cd1895-22aa-49c6-bdb2-0991af662704 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec  3 19:23:02 compute-0 nova_compute[348325]: 2025-12-03 19:23:02.778 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Updating ProviderTree inventory for provider 00cd1895-22aa-49c6-bdb2-0991af662704 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec  3 19:23:02 compute-0 nova_compute[348325]: 2025-12-03 19:23:02.778 348329 DEBUG nova.compute.provider_tree [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Updating inventory in ProviderTree for provider 00cd1895-22aa-49c6-bdb2-0991af662704 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
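Given the inventory just logged, Placement computes usable capacity per resource class as (total - reserved) * allocation_ratio, which is why 8 physical vCPUs schedule as 32. A worked check:

    # Capacity as Placement computes it: (total - reserved) * allocation_ratio.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2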
Dec  3 19:23:02 compute-0 nova_compute[348325]: 2025-12-03 19:23:02.802 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Refreshing aggregate associations for resource provider 00cd1895-22aa-49c6-bdb2-0991af662704, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec  3 19:23:02 compute-0 nova_compute[348325]: 2025-12-03 19:23:02.835 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Refreshing trait associations for resource provider 00cd1895-22aa-49c6-bdb2-0991af662704, traits: COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_FMA3,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AESNI,HW_CPU_X86_AMD_SVM,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SVM,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_ABM,HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_BMI,HW_CPU_X86_SHA,COMPUTE_NODE,HW_CPU_X86_SSE42,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE4A,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,HW_CPU_X86_AVX2,COMPUTE_ACCELERATORS,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ARI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec  3 19:23:02 compute-0 nova_compute[348325]: 2025-12-03 19:23:02.855 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 19:23:03 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 19:23:03 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1522805263' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 19:23:03 compute-0 nova_compute[348325]: 2025-12-03 19:23:03.613 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.758s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 19:23:03 compute-0 nova_compute[348325]: 2025-12-03 19:23:03.627 348329 DEBUG nova.compute.provider_tree [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  3 19:23:03 compute-0 nova_compute[348325]: 2025-12-03 19:23:03.658 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  3 19:23:03 compute-0 nova_compute[348325]: 2025-12-03 19:23:03.661 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  3 19:23:03 compute-0 nova_compute[348325]: 2025-12-03 19:23:03.662 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.102s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
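The Acquiring/acquired/released triplets in this cycle come from oslo.concurrency's named-lock wrapper around the resource tracker; the "held 1.102s" above is the whole inventory update running under the "compute_resources" lock. The same pattern in application code (the decorator is the real oslo_concurrency API; the function body is illustrative):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def update_available_resource():
        # Everything here runs with the "compute_resources" lock held,
        # producing Acquiring/acquired/released log lines like those above.
        pass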
Dec  3 19:23:03 compute-0 nova_compute[348325]: 2025-12-03 19:23:03.928 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:23:04 compute-0 nova_compute[348325]: 2025-12-03 19:23:04.262 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:23:04 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2530: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:23:06 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2531: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:23:07 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:23:08 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2532: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:23:08 compute-0 nova_compute[348325]: 2025-12-03 19:23:08.934 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:23:09 compute-0 nova_compute[348325]: 2025-12-03 19:23:09.265 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:23:09 compute-0 podman[486891]: 2025-12-03 19:23:09.952762742 +0000 UTC m=+0.109934781 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  3 19:23:10 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2533: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:23:12 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:23:12 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2534: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.265 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them, so the polling cycle can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.266 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
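The two lines above say the [pollsters] source has more pollsters than worker threads, so submissions queue on the executor and the cycle stretches out. The effect is easy to reproduce with a one-thread pool (a generic sketch, not ceilometer's code):

    import concurrent.futures
    import time

    def poll(name):
        time.sleep(0.1)          # stand-in for one pollster's work
        return name

    pollsters = [f"pollster-{i}" for i in range(5)]
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        # Five tasks, one thread: they run one after another, so the whole
        # cycle takes roughly 5x one pollster's runtime, hence the warning above.
        for fut in [pool.submit(poll, p) for p in pollsters]:
            print(fut.result())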
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.266 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.267 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7eff8d7fffe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.267 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.268 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff9026f920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.269 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.269 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.269 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffa10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.270 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8daba2d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.270 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.270 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff90799b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.270 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.270 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8f46ebd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.271 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.271 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.271 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.271 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.272 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.272 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.272 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.273 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.273 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.273 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
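The Executing discovery / Skip pollster pairs below follow one pattern: each pollster asks the shared 'local_instances' discovery for resources, the result is cached per cycle, and with no VMs on this host every pollster is skipped. In outline (a sketch of the pattern, not ceilometer's implementation):

    # Sketch: one discovery result per cycle, shared by all pollsters.
    discovery_cache = {}

    def discover(method):
        if method not in discovery_cache:
            discovery_cache[method] = []   # no local instances on this host
        return discovery_cache[method]

    for pollster in ("network.outgoing.bytes", "disk.device.capacity"):
        if not discover("local_instances"):
            print(f"Skip pollster {pollster}, no resources found this cycle")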
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.274 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7eff8d8a80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.274 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.276 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.276 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7fff50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.277 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.278 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7fffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.278 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8ef7c7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{'network.incoming.packets.error': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.275 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.279 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7eff8d8a8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.280 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.280 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7eff8d8a8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.280 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.280 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7eff8d8a81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.281 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.281 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7eff8d7ff9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.281 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.281 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7eff8d7fe840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.282 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.282 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7eff8d8a82c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.282 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.282 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7eff8d7ff9b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.283 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.283 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7eff8d8a8350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.283 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.283 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7eff8f682330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.283 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.284 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7eff8d7ff4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.284 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.284 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7eff8d930c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.285 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.285 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7eff8d7ff4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.285 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.285 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7eff8d7ff530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.286 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.286 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7eff8d7ff590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.286 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.286 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7eff8d7ff5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.286 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.287 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7eff8d8a8620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.287 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.287 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7eff8d7ff650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.287 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.287 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7eff8d7ff6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.287 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.288 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7eff8d7ffa40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.288 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.288 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7eff8d7ff710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.288 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.288 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7eff8d7fff20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.288 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.289 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7eff8d7ff770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.289 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.289 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7eff8d7fff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.289 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.289 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7eff8d7fdac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.289 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.290 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.290 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.291 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.291 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.292 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.292 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.293 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.293 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.294 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.294 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.295 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.295 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.296 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.296 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.297 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.297 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.298 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.298 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.299 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.299 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.300 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.300 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.301 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.301 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.302 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:23:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:23:13.302 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
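Every pollster in the cycle above was skipped because local_instances discovery returned nothing, suggesting no instances are running on compute-0; each pollster is then reported as finished a moment later. A minimal sketch for auditing such a cycle offline (plain log parsing, not ceilometer code; the log file name is an assumption):

    import re

    # Tally one ceilometer polling cycle from a saved copy of this log.
    SKIP_RE = re.compile(r"Skip pollster (\S+), no\s+resources found")
    DONE_RE = re.compile(r"Finished processing pollster \[([^\]]+)\]")

    skipped, finished = set(), set()
    with open("messages") as fh:  # assumed file name
        for line in fh:
            if (m := SKIP_RE.search(line)):
                skipped.add(m.group(1))
            elif (m := DONE_RE.search(line)):
                finished.add(m.group(1))

    print(f"{len(skipped)} skipped (no resources), {len(finished)} finished")
    print("skipped:", ", ".join(sorted(skipped)))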
Dec  3 19:23:13 compute-0 nova_compute[348325]: 2025-12-03 19:23:13.936 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:23:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:23:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:23:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:23:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:23:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:23:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:23:14 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_19:23:14
Dec  3 19:23:14 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 19:23:14 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 19:23:14 compute-0 ceph-mgr[193091]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', 'vms', 'default.rgw.meta', '.mgr', 'images', 'default.rgw.control', 'backups', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.log']
Dec  3 19:23:14 compute-0 ceph-mgr[193091]: [balancer INFO root] prepared 0/10 changes
Dec  3 19:23:14 compute-0 nova_compute[348325]: 2025-12-03 19:23:14.267 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:23:14 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2535: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
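The recurring pgmap lines carry the cluster's fill level in humanized units. A small sketch (format inferred from the lines in this log) that converts one back into a utilization figure:

    import re

    # Parse a ceph-mgr pgmap status line and compute raw utilization.
    UNIT = {"KiB": 2**10, "MiB": 2**20, "GiB": 2**30, "TiB": 2**40}

    def to_bytes(s):
        value, unit = s.split()
        return float(value) * UNIT[unit]

    line = ("pgmap v2535: 321 pgs: 321 active+clean; 57 MiB data, "
            "283 MiB used, 60 GiB / 60 GiB avail")
    m = re.search(r"([\d.]+ [KMGT]iB) data, ([\d.]+ [KMGT]iB) used, "
                  r"([\d.]+ [KMGT]iB) / ([\d.]+ [KMGT]iB) avail", line)
    data, used, avail, total = (to_bytes(g) for g in m.groups())
    print(f"used {used / total:.2%} of raw capacity")  # used 0.46% of raw capacity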
Dec  3 19:23:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 19:23:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 19:23:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 19:23:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 19:23:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 19:23:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 19:23:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 19:23:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 19:23:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 19:23:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 19:23:15 compute-0 podman[486917]: 2025-12-03 19:23:15.983599168 +0000 UTC m=+0.137599598 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.4, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Dec  3 19:23:15 compute-0 podman[486916]: 2025-12-03 19:23:15.990799642 +0000 UTC m=+0.144888664 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125)
Dec  3 19:23:16 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2536: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:23:17 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
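The mon's _set_new_cache_sizes lines repeat throughout this capture, and the split is self-consistent: the three allocations sum to just under the cache_size target. A quick arithmetic check on the logged figures (not ceph code):

    # Logged values from the _set_new_cache_sizes line above.
    cache_size = 1020054731
    inc_alloc = full_alloc = 348127232
    kv_alloc = 318767104
    total = inc_alloc + full_alloc + kv_alloc
    print(total, total <= cache_size)  # 1015021568 True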
Dec  3 19:23:18 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2537: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:23:18 compute-0 nova_compute[348325]: 2025-12-03 19:23:18.940 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:23:19 compute-0 nova_compute[348325]: 2025-12-03 19:23:19.272 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:23:20 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2538: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:23:22 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:23:22 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2539: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:23:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:23:23.386 286999 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 19:23:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:23:23.387 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 19:23:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:23:23.387 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 19:23:23 compute-0 nova_compute[348325]: 2025-12-03 19:23:23.945 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:23:24 compute-0 nova_compute[348325]: 2025-12-03 19:23:24.273 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:23:24 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2540: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:23:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 19:23:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:23:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 19:23:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:23:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  3 19:23:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:23:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:23:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:23:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:23:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:23:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec  3 19:23:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:23:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 19:23:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:23:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:23:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:23:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 19:23:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:23:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 19:23:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:23:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:23:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:23:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
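The pg_autoscaler figures above are internally consistent: each pool's pg target equals its usage fraction times its bias times a PG budget of 300, which would correspond to mon_target_pg_per_osd = 100 across 3 OSDs (both inferred from this log, not stated in it); the "quantized to" value then appears to round the target to a power of two subject to per-pool minimums. A worked check (not ceph source):

    # Reproduce the logged pg targets: usage * bias * PG_BUDGET.
    PG_BUDGET = 100 * 3  # assumed mon_target_pg_per_osd * OSD count

    pools = [
        (".mgr",               7.185749983720779e-06, 1.0),  # logged 0.00216
        ("images",             0.0009191400908380543, 1.0),  # logged 0.27574
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0),  # logged 0.00061
    ]
    for name, usage, bias in pools:
        print(f"{name}: pg target {usage * bias * PG_BUDGET:.6g}")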
Dec  3 19:23:25 compute-0 podman[486961]: 2025-12-03 19:23:25.94226714 +0000 UTC m=+0.100105154 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  3 19:23:25 compute-0 podman[486962]: 2025-12-03 19:23:25.95720613 +0000 UTC m=+0.108061806 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  3 19:23:25 compute-0 podman[486963]: 2025-12-03 19:23:25.991007255 +0000 UTC m=+0.136885511 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, config_id=edpm, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7)
Dec  3 19:23:26 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2541: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:23:27 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:23:27 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #123. Immutable memtables: 0.
Dec  3 19:23:27 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:23:27.692521) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 19:23:27 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 73] Flushing memtable with next log file: 123
Dec  3 19:23:27 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764789807692633, "job": 73, "event": "flush_started", "num_memtables": 1, "num_entries": 1276, "num_deletes": 255, "total_data_size": 1986716, "memory_usage": 2016840, "flush_reason": "Manual Compaction"}
Dec  3 19:23:27 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 73] Level-0 flush table #124: started
Dec  3 19:23:27 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764789807713204, "cf_name": "default", "job": 73, "event": "table_file_creation", "file_number": 124, "file_size": 1957249, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 51049, "largest_seqno": 52324, "table_properties": {"data_size": 1951110, "index_size": 3406, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 12439, "raw_average_key_size": 19, "raw_value_size": 1938931, "raw_average_value_size": 3024, "num_data_blocks": 153, "num_entries": 641, "num_filter_entries": 641, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764789675, "oldest_key_time": 1764789675, "file_creation_time": 1764789807, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a1ac3b74-8599-4a51-8b4c-6fd35a134427", "db_session_id": "TYOLZSJOOVNJYKF8Y1CE", "orig_file_number": 124, "seqno_to_time_mapping": "N/A"}}
Dec  3 19:23:27 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 73] Flush lasted 20854 microseconds, and 10664 cpu microseconds.
Dec  3 19:23:27 compute-0 ceph-mon[192802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 19:23:27 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:23:27.713368) [db/flush_job.cc:967] [default] [JOB 73] Level-0 flush table #124: 1957249 bytes OK
Dec  3 19:23:27 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:23:27.713405) [db/memtable_list.cc:519] [default] Level-0 commit table #124 started
Dec  3 19:23:27 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:23:27.716799) [db/memtable_list.cc:722] [default] Level-0 commit table #124: memtable #1 done
Dec  3 19:23:27 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:23:27.716814) EVENT_LOG_v1 {"time_micros": 1764789807716809, "job": 73, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 19:23:27 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:23:27.716839) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 19:23:27 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 73] Try to delete WAL files size 1980961, prev total WAL file size 1980961, number of live WAL files 2.
Dec  3 19:23:27 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000120.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 19:23:27 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:23:27.718039) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032303039' seq:72057594037927935, type:22 .. '6C6F676D0032323630' seq:0, type:0; will stop at (end)
Dec  3 19:23:27 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 74] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 19:23:27 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 73 Base level 0, inputs: [124(1911KB)], [122(7474KB)]
Dec  3 19:23:27 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764789807718133, "job": 74, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [124], "files_L6": [122], "score": -1, "input_data_size": 9611024, "oldest_snapshot_seqno": -1}
Dec  3 19:23:27 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 74] Generated table #125: 6664 keys, 9502870 bytes, temperature: kUnknown
Dec  3 19:23:27 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764789807815371, "cf_name": "default", "job": 74, "event": "table_file_creation", "file_number": 125, "file_size": 9502870, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9459954, "index_size": 25140, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16709, "raw_key_size": 174406, "raw_average_key_size": 26, "raw_value_size": 9341163, "raw_average_value_size": 1401, "num_data_blocks": 1001, "num_entries": 6664, "num_filter_entries": 6664, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764784942, "oldest_key_time": 0, "file_creation_time": 1764789807, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a1ac3b74-8599-4a51-8b4c-6fd35a134427", "db_session_id": "TYOLZSJOOVNJYKF8Y1CE", "orig_file_number": 125, "seqno_to_time_mapping": "N/A"}}
Dec  3 19:23:27 compute-0 ceph-mon[192802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 19:23:27 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:23:27.815882) [db/compaction/compaction_job.cc:1663] [default] [JOB 74] Compacted 1@0 + 1@6 files to L6 => 9502870 bytes
Dec  3 19:23:27 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:23:27.819715) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 98.6 rd, 97.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 7.3 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(9.8) write-amplify(4.9) OK, records in: 7186, records dropped: 522 output_compression: NoCompression
Dec  3 19:23:27 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:23:27.819799) EVENT_LOG_v1 {"time_micros": 1764789807819770, "job": 74, "event": "compaction_finished", "compaction_time_micros": 97486, "compaction_time_cpu_micros": 45138, "output_level": 6, "num_output_files": 1, "total_output_size": 9502870, "num_input_records": 7186, "num_output_records": 6664, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 19:23:27 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000124.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 19:23:27 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764789807821083, "job": 74, "event": "table_file_deletion", "file_number": 124}
Dec  3 19:23:27 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000122.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 19:23:27 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764789807824992, "job": 74, "event": "table_file_deletion", "file_number": 122}
Dec  3 19:23:27 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:23:27.717754) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:23:27 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:23:27.825292) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:23:27 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:23:27.825301) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:23:27 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:23:27.825305) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:23:27 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:23:27.825308) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:23:27 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:23:27.825311) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
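The compaction summary for JOB 74 can be re-derived from the EVENT_LOG_v1 entries above: write amplification is output bytes over the level-0 input, read-write amplification adds the bytes read from both levels, and the throughput figures are bytes over the 97486 microseconds of compaction time (bytes per microsecond equals MB/s). A quick arithmetic check, with all figures copied from the log:

    l0_input = 1957249             # flushed L0 table #124
    l6_input = 9611024 - l0_input  # input_data_size minus L0 => table #122
    output = 9502870               # compacted L6 table #125
    micros = 97486                 # compaction_time_micros

    print(f"write-amplify {output / l0_input:.1f}")                               # 4.9
    print(f"read-write-amplify {(l0_input + l6_input + output) / l0_input:.1f}")  # 9.8
    print(f"read {(l0_input + l6_input) / micros:.1f} MB/s, "
          f"write {output / micros:.1f} MB/s")                                    # 98.6 / 97.5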
Dec  3 19:23:28 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2542: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:23:28 compute-0 nova_compute[348325]: 2025-12-03 19:23:28.951 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:23:29 compute-0 nova_compute[348325]: 2025-12-03 19:23:29.278 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:23:29 compute-0 podman[158200]: time="2025-12-03T19:23:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 19:23:29 compute-0 podman[158200]: @ - - [03/Dec/2025:19:23:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42578 "" "Go-http-client/1.1"
Dec  3 19:23:29 compute-0 podman[158200]: @ - - [03/Dec/2025:19:23:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8196 "" "Go-http-client/1.1"
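The two GET lines above are the podman system service answering libpod REST calls over its unix socket (a metrics collector polling container state). A minimal sketch of issuing the same containers/json query from Python with only the standard library; the socket path is the conventional rootful default and is an assumption here:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client over a unix domain socket."""
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.socket_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")  # assumed path
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c["Names"], c["State"])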
Dec  3 19:23:30 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2543: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:23:31 compute-0 openstack_network_exporter[365222]: ERROR   19:23:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 19:23:31 compute-0 openstack_network_exporter[365222]: ERROR   19:23:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:23:31 compute-0 openstack_network_exporter[365222]: ERROR   19:23:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:23:31 compute-0 openstack_network_exporter[365222]: ERROR   19:23:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 19:23:31 compute-0 openstack_network_exporter[365222]: ERROR   19:23:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
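The appctl errors above mean the exporter found no daemon control sockets to talk to. A minimal triage sketch; the glob patterns follow the conventional OVS/OVN per-PID socket naming and are assumptions, chosen to match the /run/openvswitch and /run/ovn bind mounts shown in the exporter's config_data earlier in this log:

    import glob

    # OVS/OVN daemons create per-PID control sockets; an empty glob here
    # matches the "no control socket files found" errors above.
    for pattern in ("/var/run/openvswitch/ovsdb-server.*.ctl",
                    "/var/run/openvswitch/ovs-vswitchd.*.ctl",
                    "/var/run/ovn/ovn-northd.*.ctl"):
        hits = glob.glob(pattern)
        print(pattern, "->", hits or "no control socket found")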
Dec  3 19:23:32 compute-0 podman[487052]: 2025-12-03 19:23:32.39963108 +0000 UTC m=+0.107254968 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  3 19:23:32 compute-0 podman[487053]: 2025-12-03 19:23:32.411565637 +0000 UTC m=+0.111948189 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  3 19:23:32 compute-0 podman[487051]: 2025-12-03 19:23:32.412261864 +0000 UTC m=+0.118689533 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, build-date=2024-09-18T21:23:30, name=ubi9, release-0.7.12=, vcs-type=git, io.buildah.version=1.29.0, io.openshift.tags=base rhel9)
Dec  3 19:23:32 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:23:32 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2544: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:23:33 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 19:23:33 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 19:23:33 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 19:23:33 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 19:23:33 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 19:23:33 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 19:23:33 compute-0 nova_compute[348325]: 2025-12-03 19:23:33.955 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:23:33 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:23:33 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev dcf1614d-b70e-47ba-8a04-908dd2c9004f does not exist
Dec  3 19:23:33 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 8166025b-fd50-45bb-b6ee-13599c05be18 does not exist
Dec  3 19:23:33 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev cc3cb394-b9b5-47a9-af5f-abfa3500f3d0 does not exist
Dec  3 19:23:33 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 19:23:33 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 19:23:33 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 19:23:33 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 19:23:33 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 19:23:33 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 19:23:34 compute-0 nova_compute[348325]: 2025-12-03 19:23:34.278 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:23:34 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2545: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:23:35 compute-0 podman[487350]: 2025-12-03 19:23:35.067332464 +0000 UTC m=+0.057948738 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:23:35 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:23:35 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 19:23:35 compute-0 podman[487350]: 2025-12-03 19:23:35.783972152 +0000 UTC m=+0.774588376 container create a6b1c275aff236f839fdca8cfd986d47ce5d9dffd5661c923425795796b3fa8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True)
Dec  3 19:23:36 compute-0 systemd[1]: Started libpod-conmon-a6b1c275aff236f839fdca8cfd986d47ce5d9dffd5661c923425795796b3fa8b.scope.
Dec  3 19:23:36 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:23:36 compute-0 podman[487350]: 2025-12-03 19:23:36.542187731 +0000 UTC m=+1.532803995 container init a6b1c275aff236f839fdca8cfd986d47ce5d9dffd5661c923425795796b3fa8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 19:23:36 compute-0 podman[487350]: 2025-12-03 19:23:36.551953826 +0000 UTC m=+1.542570010 container start a6b1c275aff236f839fdca8cfd986d47ce5d9dffd5661c923425795796b3fa8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_hofstadter, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec  3 19:23:36 compute-0 musing_hofstadter[487366]: 167 167
Dec  3 19:23:36 compute-0 systemd[1]: libpod-a6b1c275aff236f839fdca8cfd986d47ce5d9dffd5661c923425795796b3fa8b.scope: Deactivated successfully.
Dec  3 19:23:36 compute-0 podman[487350]: 2025-12-03 19:23:36.66117904 +0000 UTC m=+1.651795324 container attach a6b1c275aff236f839fdca8cfd986d47ce5d9dffd5661c923425795796b3fa8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec  3 19:23:36 compute-0 podman[487350]: 2025-12-03 19:23:36.661858195 +0000 UTC m=+1.652474429 container died a6b1c275aff236f839fdca8cfd986d47ce5d9dffd5661c923425795796b3fa8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_hofstadter, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  3 19:23:36 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2546: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:23:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-610e33c8722561f373f31ecfd1b19505239b3f4aee2a09925e8257216d0b6e9c-merged.mount: Deactivated successfully.
Dec  3 19:23:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:23:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 19:23:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/281286886' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 19:23:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 19:23:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/281286886' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 19:23:37 compute-0 podman[487350]: 2025-12-03 19:23:37.873871276 +0000 UTC m=+2.864487510 container remove a6b1c275aff236f839fdca8cfd986d47ce5d9dffd5661c923425795796b3fa8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_hofstadter, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 19:23:37 compute-0 systemd[1]: libpod-conmon-a6b1c275aff236f839fdca8cfd986d47ce5d9dffd5661c923425795796b3fa8b.scope: Deactivated successfully.
Dec  3 19:23:38 compute-0 podman[487391]: 2025-12-03 19:23:38.146237141 +0000 UTC m=+0.059700219 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:23:38 compute-0 podman[487391]: 2025-12-03 19:23:38.252983746 +0000 UTC m=+0.166446734 container create a7f7efec581faa812b6e1a3598e8325b85cb28424233819c3c38081bb8096881 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_meninsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 19:23:38 compute-0 systemd[1]: Started libpod-conmon-a7f7efec581faa812b6e1a3598e8325b85cb28424233819c3c38081bb8096881.scope.
Dec  3 19:23:38 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:23:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55da558c9c52f76ddea720bbddf48097385469e651222d72d14838cbf2151b09/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 19:23:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55da558c9c52f76ddea720bbddf48097385469e651222d72d14838cbf2151b09/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 19:23:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55da558c9c52f76ddea720bbddf48097385469e651222d72d14838cbf2151b09/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 19:23:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55da558c9c52f76ddea720bbddf48097385469e651222d72d14838cbf2151b09/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 19:23:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55da558c9c52f76ddea720bbddf48097385469e651222d72d14838cbf2151b09/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 19:23:38 compute-0 podman[487391]: 2025-12-03 19:23:38.59832672 +0000 UTC m=+0.511789778 container init a7f7efec581faa812b6e1a3598e8325b85cb28424233819c3c38081bb8096881 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_meninsky, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec  3 19:23:38 compute-0 podman[487391]: 2025-12-03 19:23:38.620988236 +0000 UTC m=+0.534451214 container start a7f7efec581faa812b6e1a3598e8325b85cb28424233819c3c38081bb8096881 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_meninsky, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 19:23:38 compute-0 nova_compute[348325]: 2025-12-03 19:23:38.663 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:23:38 compute-0 podman[487391]: 2025-12-03 19:23:38.700803721 +0000 UTC m=+0.614266719 container attach a7f7efec581faa812b6e1a3598e8325b85cb28424233819c3c38081bb8096881 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_meninsky, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 19:23:38 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2547: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:23:38 compute-0 nova_compute[348325]: 2025-12-03 19:23:38.959 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:23:39 compute-0 nova_compute[348325]: 2025-12-03 19:23:39.281 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:23:39 compute-0 nova_compute[348325]: 2025-12-03 19:23:39.480 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:23:39 compute-0 compassionate_meninsky[487407]: --> passed data devices: 0 physical, 3 LVM
Dec  3 19:23:39 compute-0 compassionate_meninsky[487407]: --> relative data size: 1.0
Dec  3 19:23:39 compute-0 compassionate_meninsky[487407]: --> All data devices are unavailable
Dec  3 19:23:39 compute-0 systemd[1]: libpod-a7f7efec581faa812b6e1a3598e8325b85cb28424233819c3c38081bb8096881.scope: Deactivated successfully.
Dec  3 19:23:39 compute-0 systemd[1]: libpod-a7f7efec581faa812b6e1a3598e8325b85cb28424233819c3c38081bb8096881.scope: Consumed 1.214s CPU time.
Dec  3 19:23:39 compute-0 podman[487391]: 2025-12-03 19:23:39.884441287 +0000 UTC m=+1.797904305 container died a7f7efec581faa812b6e1a3598e8325b85cb28424233819c3c38081bb8096881 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_meninsky, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 19:23:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-55da558c9c52f76ddea720bbddf48097385469e651222d72d14838cbf2151b09-merged.mount: Deactivated successfully.
Dec  3 19:23:40 compute-0 podman[487391]: 2025-12-03 19:23:40.247188242 +0000 UTC m=+2.160651260 container remove a7f7efec581faa812b6e1a3598e8325b85cb28424233819c3c38081bb8096881 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_meninsky, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 19:23:40 compute-0 podman[487449]: 2025-12-03 19:23:40.252566962 +0000 UTC m=+0.206454839 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  3 19:23:40 compute-0 systemd[1]: libpod-conmon-a7f7efec581faa812b6e1a3598e8325b85cb28424233819c3c38081bb8096881.scope: Deactivated successfully.
Dec  3 19:23:40 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2548: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:23:41 compute-0 nova_compute[348325]: 2025-12-03 19:23:41.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:23:42 compute-0 podman[487609]: 2025-12-03 19:23:42.554685863 +0000 UTC m=+0.111999231 container create ff673be0b08e9dfe63f2ce82c4b4f35ed76bf82c7dfbeb290301851b3f34dd65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_easley, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 19:23:42 compute-0 podman[487609]: 2025-12-03 19:23:42.483812074 +0000 UTC m=+0.041125502 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:23:42 compute-0 systemd[1]: Started libpod-conmon-ff673be0b08e9dfe63f2ce82c4b4f35ed76bf82c7dfbeb290301851b3f34dd65.scope.
Dec  3 19:23:42 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:23:42 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:23:42 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2549: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:23:42 compute-0 podman[487609]: 2025-12-03 19:23:42.72215534 +0000 UTC m=+0.279468758 container init ff673be0b08e9dfe63f2ce82c4b4f35ed76bf82c7dfbeb290301851b3f34dd65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_easley, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  3 19:23:42 compute-0 podman[487609]: 2025-12-03 19:23:42.741699981 +0000 UTC m=+0.299013339 container start ff673be0b08e9dfe63f2ce82c4b4f35ed76bf82c7dfbeb290301851b3f34dd65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_easley, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Dec  3 19:23:42 compute-0 podman[487609]: 2025-12-03 19:23:42.749659183 +0000 UTC m=+0.306972551 container attach ff673be0b08e9dfe63f2ce82c4b4f35ed76bf82c7dfbeb290301851b3f34dd65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_easley, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 19:23:42 compute-0 clever_easley[487625]: 167 167
Dec  3 19:23:42 compute-0 systemd[1]: libpod-ff673be0b08e9dfe63f2ce82c4b4f35ed76bf82c7dfbeb290301851b3f34dd65.scope: Deactivated successfully.
Dec  3 19:23:42 compute-0 podman[487609]: 2025-12-03 19:23:42.75905882 +0000 UTC m=+0.316372188 container died ff673be0b08e9dfe63f2ce82c4b4f35ed76bf82c7dfbeb290301851b3f34dd65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:23:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-d8cb78667eb5a7bb7636109a2a0c369c9a9fa90e6773b597a90cf2c18098a86d-merged.mount: Deactivated successfully.
Dec  3 19:23:43 compute-0 podman[487609]: 2025-12-03 19:23:43.072621879 +0000 UTC m=+0.629935247 container remove ff673be0b08e9dfe63f2ce82c4b4f35ed76bf82c7dfbeb290301851b3f34dd65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_easley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 19:23:43 compute-0 systemd[1]: libpod-conmon-ff673be0b08e9dfe63f2ce82c4b4f35ed76bf82c7dfbeb290301851b3f34dd65.scope: Deactivated successfully.
Dec  3 19:23:43 compute-0 podman[487648]: 2025-12-03 19:23:43.321380696 +0000 UTC m=+0.042213279 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:23:43 compute-0 podman[487648]: 2025-12-03 19:23:43.422795111 +0000 UTC m=+0.143627714 container create e6aab04c41c4fd9c8d94ff501ea135aeee3586e89e3fd8c505477eb1ce796cdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_goldstine, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Dec  3 19:23:43 compute-0 nova_compute[348325]: 2025-12-03 19:23:43.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:23:43 compute-0 nova_compute[348325]: 2025-12-03 19:23:43.489 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  3 19:23:43 compute-0 nova_compute[348325]: 2025-12-03 19:23:43.489 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  3 19:23:43 compute-0 nova_compute[348325]: 2025-12-03 19:23:43.507 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  3 19:23:43 compute-0 nova_compute[348325]: 2025-12-03 19:23:43.508 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:23:43 compute-0 systemd[1]: Started libpod-conmon-e6aab04c41c4fd9c8d94ff501ea135aeee3586e89e3fd8c505477eb1ce796cdb.scope.
Dec  3 19:23:43 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:23:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5117cc274ebd5752d39e68df215fb912ea4776b43ea238f91864f3435bdfe923/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 19:23:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5117cc274ebd5752d39e68df215fb912ea4776b43ea238f91864f3435bdfe923/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 19:23:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5117cc274ebd5752d39e68df215fb912ea4776b43ea238f91864f3435bdfe923/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 19:23:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5117cc274ebd5752d39e68df215fb912ea4776b43ea238f91864f3435bdfe923/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 19:23:43 compute-0 podman[487648]: 2025-12-03 19:23:43.685022803 +0000 UTC m=+0.405855466 container init e6aab04c41c4fd9c8d94ff501ea135aeee3586e89e3fd8c505477eb1ce796cdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:23:43 compute-0 podman[487648]: 2025-12-03 19:23:43.698932578 +0000 UTC m=+0.419765191 container start e6aab04c41c4fd9c8d94ff501ea135aeee3586e89e3fd8c505477eb1ce796cdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_goldstine, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 19:23:43 compute-0 podman[487648]: 2025-12-03 19:23:43.800822445 +0000 UTC m=+0.521655108 container attach e6aab04c41c4fd9c8d94ff501ea135aeee3586e89e3fd8c505477eb1ce796cdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  3 19:23:43 compute-0 nova_compute[348325]: 2025-12-03 19:23:43.965 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:23:44 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:23:44 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:23:44 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:23:44 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:23:44 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:23:44 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:23:44 compute-0 nova_compute[348325]: 2025-12-03 19:23:44.284 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]: {
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:    "0": [
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:        {
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:            "devices": [
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:                "/dev/loop3"
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:            ],
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:            "lv_name": "ceph_lv0",
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:            "lv_size": "21470642176",
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=973fbbc8-5aff-4a53-bee8-42e5a6788dd6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:            "lv_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:            "name": "ceph_lv0",
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:            "tags": {
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:                "ceph.block_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:                "ceph.cephx_lockbox_secret": "",
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:                "ceph.cluster_name": "ceph",
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:                "ceph.crush_device_class": "",
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:                "ceph.encrypted": "0",
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:                "ceph.osd_fsid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:                "ceph.osd_id": "0",
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:                "ceph.type": "block",
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:                "ceph.vdo": "0"
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:            },
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:            "type": "block",
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:            "vg_name": "ceph_vg0"
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:        }
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:    ],
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:    "1": [
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:        {
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:            "devices": [
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:                "/dev/loop4"
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:            ],
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:            "lv_name": "ceph_lv1",
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:            "lv_size": "21470642176",
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1e2b0083-5293-47cb-a3d1-bc27cedc4ede,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:            "lv_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:            "name": "ceph_lv1",
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:            "tags": {
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:                "ceph.block_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:                "ceph.cephx_lockbox_secret": "",
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:                "ceph.cluster_name": "ceph",
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:                "ceph.crush_device_class": "",
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:                "ceph.encrypted": "0",
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:                "ceph.osd_fsid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:                "ceph.osd_id": "1",
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:                "ceph.type": "block",
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:                "ceph.vdo": "0"
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:            },
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:            "type": "block",
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:            "vg_name": "ceph_vg1"
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:        }
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:    ],
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:    "2": [
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:        {
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:            "devices": [
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:                "/dev/loop5"
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:            ],
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:            "lv_name": "ceph_lv2",
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:            "lv_size": "21470642176",
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2abec9de-afba-437e-9a17-384a1dd8cd50,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:            "lv_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:            "name": "ceph_lv2",
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:            "tags": {
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:                "ceph.block_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:                "ceph.cephx_lockbox_secret": "",
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:                "ceph.cluster_name": "ceph",
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:                "ceph.crush_device_class": "",
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:                "ceph.encrypted": "0",
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:                "ceph.osd_fsid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:                "ceph.osd_id": "2",
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:                "ceph.type": "block",
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:                "ceph.vdo": "0"
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:            },
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:            "type": "block",
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:            "vg_name": "ceph_vg2"
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:        }
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]:    ]
Dec  3 19:23:44 compute-0 interesting_goldstine[487664]: }
Dec  3 19:23:44 compute-0 systemd[1]: libpod-e6aab04c41c4fd9c8d94ff501ea135aeee3586e89e3fd8c505477eb1ce796cdb.scope: Deactivated successfully.
Dec  3 19:23:44 compute-0 podman[487648]: 2025-12-03 19:23:44.508520396 +0000 UTC m=+1.229352979 container died e6aab04c41c4fd9c8d94ff501ea135aeee3586e89e3fd8c505477eb1ce796cdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Dec  3 19:23:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-5117cc274ebd5752d39e68df215fb912ea4776b43ea238f91864f3435bdfe923-merged.mount: Deactivated successfully.
Dec  3 19:23:44 compute-0 podman[487648]: 2025-12-03 19:23:44.681797014 +0000 UTC m=+1.402629607 container remove e6aab04c41c4fd9c8d94ff501ea135aeee3586e89e3fd8c505477eb1ce796cdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_goldstine, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec  3 19:23:44 compute-0 systemd[1]: libpod-conmon-e6aab04c41c4fd9c8d94ff501ea135aeee3586e89e3fd8c505477eb1ce796cdb.scope: Deactivated successfully.
Dec  3 19:23:44 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2550: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:23:45 compute-0 nova_compute[348325]: 2025-12-03 19:23:45.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:23:45 compute-0 podman[487822]: 2025-12-03 19:23:45.743425128 +0000 UTC m=+0.086788584 container create 677b14fab7f0a5843afd490ffa37073065b9d4310e24ae28d8c905bdcc6d2159 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_roentgen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  3 19:23:45 compute-0 podman[487822]: 2025-12-03 19:23:45.708346692 +0000 UTC m=+0.051710208 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:23:45 compute-0 systemd[1]: Started libpod-conmon-677b14fab7f0a5843afd490ffa37073065b9d4310e24ae28d8c905bdcc6d2159.scope.
Dec  3 19:23:45 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:23:45 compute-0 podman[487822]: 2025-12-03 19:23:45.888058695 +0000 UTC m=+0.231422201 container init 677b14fab7f0a5843afd490ffa37073065b9d4310e24ae28d8c905bdcc6d2159 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_roentgen, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  3 19:23:45 compute-0 podman[487822]: 2025-12-03 19:23:45.912618857 +0000 UTC m=+0.255982323 container start 677b14fab7f0a5843afd490ffa37073065b9d4310e24ae28d8c905bdcc6d2159 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_roentgen, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 19:23:45 compute-0 thirsty_roentgen[487836]: 167 167
Dec  3 19:23:45 compute-0 systemd[1]: libpod-677b14fab7f0a5843afd490ffa37073065b9d4310e24ae28d8c905bdcc6d2159.scope: Deactivated successfully.
Dec  3 19:23:45 compute-0 conmon[487836]: conmon 677b14fab7f0a5843afd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-677b14fab7f0a5843afd490ffa37073065b9d4310e24ae28d8c905bdcc6d2159.scope/container/memory.events
Dec  3 19:23:45 compute-0 podman[487822]: 2025-12-03 19:23:45.926945212 +0000 UTC m=+0.270308668 container attach 677b14fab7f0a5843afd490ffa37073065b9d4310e24ae28d8c905bdcc6d2159 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_roentgen, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 19:23:45 compute-0 podman[487822]: 2025-12-03 19:23:45.927552277 +0000 UTC m=+0.270915723 container died 677b14fab7f0a5843afd490ffa37073065b9d4310e24ae28d8c905bdcc6d2159 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_roentgen, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 19:23:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-66386810661b4d979e261adc9349d58efbf31448a2b69812a738276f3a9ec0eb-merged.mount: Deactivated successfully.
Dec  3 19:23:45 compute-0 podman[487822]: 2025-12-03 19:23:45.99734244 +0000 UTC m=+0.340705866 container remove 677b14fab7f0a5843afd490ffa37073065b9d4310e24ae28d8c905bdcc6d2159 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_roentgen, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec  3 19:23:46 compute-0 systemd[1]: libpod-conmon-677b14fab7f0a5843afd490ffa37073065b9d4310e24ae28d8c905bdcc6d2159.scope: Deactivated successfully.
Dec  3 19:23:46 compute-0 podman[487857]: 2025-12-03 19:23:46.134921207 +0000 UTC m=+0.088428553 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true)
Dec  3 19:23:46 compute-0 podman[487856]: 2025-12-03 19:23:46.176773596 +0000 UTC m=+0.135351985 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:23:46 compute-0 podman[487895]: 2025-12-03 19:23:46.205314233 +0000 UTC m=+0.066016872 container create 1eb74ba9da688568c20554ef104942dbe0c90714de7f7a7d024ff8d076b788c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_pascal, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  3 19:23:46 compute-0 systemd[1]: Started libpod-conmon-1eb74ba9da688568c20554ef104942dbe0c90714de7f7a7d024ff8d076b788c7.scope.
Dec  3 19:23:46 compute-0 podman[487895]: 2025-12-03 19:23:46.181391137 +0000 UTC m=+0.042093806 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:23:46 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:23:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a80aac432d9ea0deabdc3fbc949f464248402d02728214d6aaa9c73470a02700/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 19:23:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a80aac432d9ea0deabdc3fbc949f464248402d02728214d6aaa9c73470a02700/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 19:23:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a80aac432d9ea0deabdc3fbc949f464248402d02728214d6aaa9c73470a02700/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 19:23:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a80aac432d9ea0deabdc3fbc949f464248402d02728214d6aaa9c73470a02700/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 19:23:46 compute-0 podman[487895]: 2025-12-03 19:23:46.373744124 +0000 UTC m=+0.234446783 container init 1eb74ba9da688568c20554ef104942dbe0c90714de7f7a7d024ff8d076b788c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_pascal, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec  3 19:23:46 compute-0 podman[487895]: 2025-12-03 19:23:46.392554058 +0000 UTC m=+0.253256697 container start 1eb74ba9da688568c20554ef104942dbe0c90714de7f7a7d024ff8d076b788c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:23:46 compute-0 podman[487895]: 2025-12-03 19:23:46.39889947 +0000 UTC m=+0.259602119 container attach 1eb74ba9da688568c20554ef104942dbe0c90714de7f7a7d024ff8d076b788c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_pascal, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Dec  3 19:23:46 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2551: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:23:47 compute-0 sad_pascal[487919]: {
Dec  3 19:23:47 compute-0 sad_pascal[487919]:    "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
Dec  3 19:23:47 compute-0 sad_pascal[487919]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:23:47 compute-0 sad_pascal[487919]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 19:23:47 compute-0 sad_pascal[487919]:        "osd_id": 1,
Dec  3 19:23:47 compute-0 sad_pascal[487919]:        "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 19:23:47 compute-0 sad_pascal[487919]:        "type": "bluestore"
Dec  3 19:23:47 compute-0 sad_pascal[487919]:    },
Dec  3 19:23:47 compute-0 sad_pascal[487919]:    "2abec9de-afba-437e-9a17-384a1dd8cd50": {
Dec  3 19:23:47 compute-0 sad_pascal[487919]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:23:47 compute-0 sad_pascal[487919]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 19:23:47 compute-0 sad_pascal[487919]:        "osd_id": 2,
Dec  3 19:23:47 compute-0 sad_pascal[487919]:        "osd_uuid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 19:23:47 compute-0 sad_pascal[487919]:        "type": "bluestore"
Dec  3 19:23:47 compute-0 sad_pascal[487919]:    },
Dec  3 19:23:47 compute-0 sad_pascal[487919]:    "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": {
Dec  3 19:23:47 compute-0 sad_pascal[487919]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:23:47 compute-0 sad_pascal[487919]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 19:23:47 compute-0 sad_pascal[487919]:        "osd_id": 0,
Dec  3 19:23:47 compute-0 sad_pascal[487919]:        "osd_uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 19:23:47 compute-0 sad_pascal[487919]:        "type": "bluestore"
Dec  3 19:23:47 compute-0 sad_pascal[487919]:    }
Dec  3 19:23:47 compute-0 sad_pascal[487919]: }
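The JSON block emitted above by the one-shot cephadm container "sad_pascal" has the shape of `ceph-volume raw list --format json` output: one entry per OSD, keyed by osd_uuid, carrying the cluster fsid and the backing logical volume (the command attribution is an inference; the invocation itself is not logged). A minimal parsing sketch in Python, with one entry reproduced for brevity:

    import json

    # Assumed shape, matching the block above:
    # {osd_uuid: {"ceph_fsid", "device", "osd_id", "osd_uuid", "type"}}
    inventory_text = """
    {
      "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": {
        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
        "osd_id": 0,
        "osd_uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
        "type": "bluestore"
      }
    }
    """

    for osd in sorted(json.loads(inventory_text).values(),
                      key=lambda o: o["osd_id"]):
        print(f"osd.{osd['osd_id']}: {osd['device']} (type={osd['type']})")

Run against the full listing, this resolves osd.0 through osd.2 to ceph_vg0-ceph_lv0 through ceph_vg2-ceph_lv2, all bluestore, all in cluster c1caf3ba-b2a5-5005-a11e-e955c344dccc.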
Dec  3 19:23:47 compute-0 systemd[1]: libpod-1eb74ba9da688568c20554ef104942dbe0c90714de7f7a7d024ff8d076b788c7.scope: Deactivated successfully.
Dec  3 19:23:47 compute-0 systemd[1]: libpod-1eb74ba9da688568c20554ef104942dbe0c90714de7f7a7d024ff8d076b788c7.scope: Consumed 1.292s CPU time.
Dec  3 19:23:47 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:23:47 compute-0 podman[487952]: 2025-12-03 19:23:47.790833449 +0000 UTC m=+0.066574976 container died 1eb74ba9da688568c20554ef104942dbe0c90714de7f7a7d024ff8d076b788c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_pascal, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec  3 19:23:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-a80aac432d9ea0deabdc3fbc949f464248402d02728214d6aaa9c73470a02700-merged.mount: Deactivated successfully.
Dec  3 19:23:47 compute-0 podman[487952]: 2025-12-03 19:23:47.909729165 +0000 UTC m=+0.185470622 container remove 1eb74ba9da688568c20554ef104942dbe0c90714de7f7a7d024ff8d076b788c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_pascal, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 19:23:47 compute-0 systemd[1]: libpod-conmon-1eb74ba9da688568c20554ef104942dbe0c90714de7f7a7d024ff8d076b788c7.scope: Deactivated successfully.
Dec  3 19:23:47 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 19:23:48 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:23:48 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 19:23:48 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:23:48 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 966415cc-8d36-49fa-be33-242757c132dc does not exist
Dec  3 19:23:48 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 206e6bb2-9c19-48ff-b389-8fbb6e7446e3 does not exist
Dec  3 19:23:48 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2552: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:23:48 compute-0 nova_compute[348325]: 2025-12-03 19:23:48.969 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:23:49 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:23:49 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:23:49 compute-0 nova_compute[348325]: 2025-12-03 19:23:49.287 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:23:49 compute-0 nova_compute[348325]: 2025-12-03 19:23:49.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_shelved_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:23:49 compute-0 nova_compute[348325]: 2025-12-03 19:23:49.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:23:50 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2553: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:23:52 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:23:52 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2554: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:23:53 compute-0 nova_compute[348325]: 2025-12-03 19:23:53.972 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:23:54 compute-0 nova_compute[348325]: 2025-12-03 19:23:54.291 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:23:54 compute-0 nova_compute[348325]: 2025-12-03 19:23:54.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:23:54 compute-0 nova_compute[348325]: 2025-12-03 19:23:54.486 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  3 19:23:54 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2555: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:23:56 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2556: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:23:56 compute-0 podman[488017]: 2025-12-03 19:23:56.942095846 +0000 UTC m=+0.091718932 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  3 19:23:56 compute-0 podman[488016]: 2025-12-03 19:23:56.942332172 +0000 UTC m=+0.095497934 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  3 19:23:56 compute-0 podman[488018]: 2025-12-03 19:23:56.966833012 +0000 UTC m=+0.121341456 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, distribution-scope=public, build-date=2025-08-20T13:12:41, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc.)
Dec  3 19:23:57 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:23:58 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2557: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:23:58 compute-0 nova_compute[348325]: 2025-12-03 19:23:58.975 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:23:59 compute-0 nova_compute[348325]: 2025-12-03 19:23:59.294 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:23:59 compute-0 podman[158200]: time="2025-12-03T19:23:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 19:23:59 compute-0 podman[158200]: @ - - [03/Dec/2025:19:23:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42578 "" "Go-http-client/1.1"
Dec  3 19:23:59 compute-0 podman[158200]: @ - - [03/Dec/2025:19:23:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8194 "" "Go-http-client/1.1"
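The two access-log lines above are the podman system service answering libpod REST calls over the local API socket; the prometheus-podman-exporter configured further down in this log points CONTAINER_HOST at unix:///run/podman/podman.sock. A minimal sketch of issuing the same containers/json query from Python, assuming that socket path:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client connection over an AF_UNIX socket."""

        def __init__(self, socket_path):
            super().__init__("localhost")
            self._socket_path = socket_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._socket_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        # Field names follow the libpod container-list schema.
        print(c["Names"][0], c["State"])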
Dec  3 19:24:00 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2558: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:24:01 compute-0 openstack_network_exporter[365222]: ERROR   19:24:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:24:01 compute-0 openstack_network_exporter[365222]: ERROR   19:24:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:24:01 compute-0 openstack_network_exporter[365222]: ERROR   19:24:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 19:24:01 compute-0 openstack_network_exporter[365222]: ERROR   19:24:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 19:24:01 compute-0 openstack_network_exporter[365222]: ERROR   19:24:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 19:24:02 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:24:02 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2559: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:24:02 compute-0 podman[488075]: 2025-12-03 19:24:02.968181447 +0000 UTC m=+0.130309073 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, com.redhat.component=ubi9-container, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, name=ubi9, architecture=x86_64, release-0.7.12=, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, config_id=edpm, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc.)
Dec  3 19:24:02 compute-0 podman[488077]: 2025-12-03 19:24:02.982361329 +0000 UTC m=+0.127759621 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent)
Dec  3 19:24:02 compute-0 podman[488076]: 2025-12-03 19:24:02.98694188 +0000 UTC m=+0.142670911 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  3 19:24:03 compute-0 nova_compute[348325]: 2025-12-03 19:24:03.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:24:03 compute-0 nova_compute[348325]: 2025-12-03 19:24:03.538 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 19:24:03 compute-0 nova_compute[348325]: 2025-12-03 19:24:03.539 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 19:24:03 compute-0 nova_compute[348325]: 2025-12-03 19:24:03.539 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 19:24:03 compute-0 nova_compute[348325]: 2025-12-03 19:24:03.540 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  3 19:24:03 compute-0 nova_compute[348325]: 2025-12-03 19:24:03.540 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 19:24:03 compute-0 nova_compute[348325]: 2025-12-03 19:24:03.980 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:24:04 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 19:24:04 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2325387537' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 19:24:04 compute-0 nova_compute[348325]: 2025-12-03 19:24:04.040 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
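As the run/return pair above shows, nova's resource tracker sizes its RBD-backed storage by shelling out to `ceph df --format=json`. A sketch of the same probe, runnable wherever the client.openstack keyring is present, assuming the conventional `ceph df` JSON layout with a top-level "stats" object (key names from memory, not from this log):

    import json
    import subprocess

    # The exact command nova_compute logs above.
    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]  # assumed key
    gib = 1024 ** 3
    print(f"{stats['total_avail_bytes'] / gib:.1f} GiB free "
          f"of {stats['total_bytes'] / gib:.1f} GiB")  # assumed keys

Each invocation costs about half a second here (returned: 0 in 0.500s), which is tolerable because it only runs from the periodic update_available_resource task.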
Dec  3 19:24:04 compute-0 nova_compute[348325]: 2025-12-03 19:24:04.296 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:24:04 compute-0 nova_compute[348325]: 2025-12-03 19:24:04.476 348329 WARNING nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  3 19:24:04 compute-0 nova_compute[348325]: 2025-12-03 19:24:04.477 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3933MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  3 19:24:04 compute-0 nova_compute[348325]: 2025-12-03 19:24:04.477 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 19:24:04 compute-0 nova_compute[348325]: 2025-12-03 19:24:04.478 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 19:24:04 compute-0 nova_compute[348325]: 2025-12-03 19:24:04.555 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  3 19:24:04 compute-0 nova_compute[348325]: 2025-12-03 19:24:04.556 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  3 19:24:04 compute-0 nova_compute[348325]: 2025-12-03 19:24:04.588 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 19:24:04 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2560: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:24:05 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 19:24:05 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3973769382' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 19:24:05 compute-0 nova_compute[348325]: 2025-12-03 19:24:05.093 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 19:24:05 compute-0 nova_compute[348325]: 2025-12-03 19:24:05.106 348329 DEBUG nova.compute.provider_tree [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  3 19:24:05 compute-0 nova_compute[348325]: 2025-12-03 19:24:05.122 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  3 19:24:05 compute-0 nova_compute[348325]: 2025-12-03 19:24:05.125 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  3 19:24:05 compute-0 nova_compute[348325]: 2025-12-03 19:24:05.125 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
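The inventory dict in the report line above is what placement admits requests against: for each resource class the schedulable capacity is (total - reserved) * allocation_ratio. A worked check against the values logged in this run:

    # Inventory as reported for provider 00cd1895-22aa-49c6-bdb2-0991af662704.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = int((inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
        print(rc, cap)  # VCPU 32, MEMORY_MB 7167, DISK_GB 52

So this otherwise idle node (used_vcpus=0, used_disk=0GB) can accept up to 32 vCPUs of instances against its 8 physical cores.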
Dec  3 19:24:06 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2561: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:24:07 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:24:08 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2562: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:24:08 compute-0 nova_compute[348325]: 2025-12-03 19:24:08.984 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:24:09 compute-0 nova_compute[348325]: 2025-12-03 19:24:09.298 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:24:10 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2563: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:24:10 compute-0 podman[488177]: 2025-12-03 19:24:10.964300863 +0000 UTC m=+0.121062979 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 19:24:12 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:24:12 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2564: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:24:13 compute-0 nova_compute[348325]: 2025-12-03 19:24:13.994 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:24:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:24:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:24:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:24:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:24:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:24:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:24:14 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_19:24:14
Dec  3 19:24:14 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 19:24:14 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 19:24:14 compute-0 ceph-mgr[193091]: [balancer INFO root] pools ['volumes', '.rgw.root', 'images', '.mgr', 'backups', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.control', 'cephfs.cephfs.meta', 'vms', 'default.rgw.log']
Dec  3 19:24:14 compute-0 ceph-mgr[193091]: [balancer INFO root] prepared 0/10 changes
Dec  3 19:24:14 compute-0 nova_compute[348325]: 2025-12-03 19:24:14.301 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:24:14 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2565: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:24:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 19:24:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 19:24:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 19:24:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 19:24:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 19:24:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 19:24:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 19:24:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 19:24:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 19:24:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 19:24:16 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2566: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:24:16 compute-0 podman[488203]: 2025-12-03 19:24:16.959604872 +0000 UTC m=+0.111910839 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Dec  3 19:24:16 compute-0 podman[488202]: 2025-12-03 19:24:16.964639783 +0000 UTC m=+0.130419225 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec  3 19:24:17 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:24:18 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2567: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:24:19 compute-0 nova_compute[348325]: 2025-12-03 19:24:18.999 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:24:19 compute-0 nova_compute[348325]: 2025-12-03 19:24:19.303 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:24:19 compute-0 ceph-osd[206694]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 19:24:19 compute-0 ceph-osd[206694]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 4800.2 total, 600.0 interval
Cumulative writes: 10K writes, 41K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.01 MB/s
Cumulative WAL: 10K writes, 3118 syncs, 3.52 writes per sync, written: 0.04 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 390 writes, 898 keys, 390 commit groups, 1.0 writes per commit group, ingest: 0.39 MB, 0.00 MB/s
Interval WAL: 390 writes, 183 syncs, 2.13 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  3 19:24:20 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2568: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:24:22 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:24:22 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2569: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:24:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:24:23.388 286999 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 19:24:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:24:23.389 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 19:24:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:24:23.389 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 19:24:24 compute-0 nova_compute[348325]: 2025-12-03 19:24:24.005 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:24:24 compute-0 nova_compute[348325]: 2025-12-03 19:24:24.306 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:24:24 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2570: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:24:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 19:24:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:24:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 19:24:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:24:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  3 19:24:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:24:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:24:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:24:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:24:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:24:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec  3 19:24:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:24:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 19:24:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:24:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:24:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:24:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 19:24:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:24:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 19:24:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:24:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:24:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:24:24 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 19:24:26 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2571: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:24:27 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:24:27 compute-0 podman[488246]: 2025-12-03 19:24:27.998639315 +0000 UTC m=+0.142905827 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible)
Dec  3 19:24:28 compute-0 podman[488247]: 2025-12-03 19:24:28.005997692 +0000 UTC m=+0.150264944 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 19:24:28 compute-0 podman[488248]: 2025-12-03 19:24:28.006198467 +0000 UTC m=+0.138544491 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., distribution-scope=public, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, managed_by=edpm_ansible, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, config_id=edpm, maintainer=Red Hat, Inc., release=1755695350, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, architecture=x86_64, version=9.6)
Dec  3 19:24:28 compute-0 ceph-osd[207851]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 19:24:28 compute-0 ceph-osd[207851]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 4800.2 total, 600.0 interval
    Cumulative writes: 11K writes, 41K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
    Cumulative WAL: 11K writes, 3159 syncs, 3.53 writes per sync, written: 0.03 GB, 0.01 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 423 writes, 1096 keys, 423 commit groups, 1.0 writes per commit group, ingest: 0.39 MB, 0.00 MB/s
    Interval WAL: 423 writes, 198 syncs, 2.14 writes per sync, written: 0.00 GB, 0.00 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  3 19:24:28 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2572: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:24:29 compute-0 nova_compute[348325]: 2025-12-03 19:24:29.009 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:24:29 compute-0 nova_compute[348325]: 2025-12-03 19:24:29.308 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:24:29 compute-0 podman[158200]: time="2025-12-03T19:24:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 19:24:29 compute-0 podman[158200]: @ - - [03/Dec/2025:19:24:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42578 "" "Go-http-client/1.1"
Dec  3 19:24:29 compute-0 podman[158200]: @ - - [03/Dec/2025:19:24:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8197 "" "Go-http-client/1.1"
Dec  3 19:24:30 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2573: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:24:31 compute-0 openstack_network_exporter[365222]: ERROR   19:24:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 19:24:31 compute-0 openstack_network_exporter[365222]: ERROR   19:24:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:24:31 compute-0 openstack_network_exporter[365222]: ERROR   19:24:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:24:31 compute-0 openstack_network_exporter[365222]: ERROR   19:24:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 19:24:31 compute-0 openstack_network_exporter[365222]: ERROR   19:24:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 19:24:32 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:24:32 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2574: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:24:33 compute-0 podman[488310]: 2025-12-03 19:24:33.943882206 +0000 UTC m=+0.105345791 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, release-0.7.12=, version=9.4, io.openshift.expose-services=, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, build-date=2024-09-18T21:23:30, vcs-type=git, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., distribution-scope=public, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, release=1214.1726694543, io.openshift.tags=base rhel9)
Dec  3 19:24:33 compute-0 podman[488311]: 2025-12-03 19:24:33.958423807 +0000 UTC m=+0.104510901 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  3 19:24:33 compute-0 podman[488312]: 2025-12-03 19:24:33.964225576 +0000 UTC m=+0.112957923 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  3 19:24:34 compute-0 nova_compute[348325]: 2025-12-03 19:24:34.014 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:24:34 compute-0 nova_compute[348325]: 2025-12-03 19:24:34.312 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:24:34 compute-0 ceph-osd[208881]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 19:24:34 compute-0 ceph-osd[208881]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 4800.2 total, 600.0 interval
    Cumulative writes: 10K writes, 39K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
    Cumulative WAL: 10K writes, 2882 syncs, 3.60 writes per sync, written: 0.03 GB, 0.01 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 375 writes, 874 keys, 375 commit groups, 1.0 writes per commit group, ingest: 0.36 MB, 0.00 MB/s
    Interval WAL: 375 writes, 173 syncs, 2.17 writes per sync, written: 0.00 GB, 0.00 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  3 19:24:34 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2575: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:24:36 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2576: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:24:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:24:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 19:24:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1544915196' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 19:24:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 19:24:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1544915196' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 19:24:37 compute-0 ceph-mgr[193091]: [devicehealth INFO root] Check health
Dec  3 19:24:38 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2577: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:24:39 compute-0 nova_compute[348325]: 2025-12-03 19:24:39.018 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:24:39 compute-0 nova_compute[348325]: 2025-12-03 19:24:39.125 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:24:39 compute-0 nova_compute[348325]: 2025-12-03 19:24:39.318 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:24:39 compute-0 nova_compute[348325]: 2025-12-03 19:24:39.478 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:24:41 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2578: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:24:41 compute-0 podman[488369]: 2025-12-03 19:24:41.970254811 +0000 UTC m=+0.130767894 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  3 19:24:42 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:24:42 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2579: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:24:43 compute-0 nova_compute[348325]: 2025-12-03 19:24:43.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:24:43 compute-0 nova_compute[348325]: 2025-12-03 19:24:43.488 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:24:44 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:24:44 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:24:44 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:24:44 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:24:44 compute-0 nova_compute[348325]: 2025-12-03 19:24:44.023 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:24:44 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:24:44 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:24:44 compute-0 nova_compute[348325]: 2025-12-03 19:24:44.322 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:24:44 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2580: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:24:45 compute-0 nova_compute[348325]: 2025-12-03 19:24:45.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:24:45 compute-0 nova_compute[348325]: 2025-12-03 19:24:45.488 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  3 19:24:45 compute-0 nova_compute[348325]: 2025-12-03 19:24:45.488 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  3 19:24:45 compute-0 nova_compute[348325]: 2025-12-03 19:24:45.510 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  3 19:24:45 compute-0 nova_compute[348325]: 2025-12-03 19:24:45.511 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:24:46 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2581: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:24:47 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:24:47 compute-0 podman[488394]: 2025-12-03 19:24:47.973053221 +0000 UTC m=+0.119574524 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  3 19:24:48 compute-0 podman[488393]: 2025-12-03 19:24:48.027623797 +0000 UTC m=+0.181525948 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec  3 19:24:48 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2582: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:24:49 compute-0 nova_compute[348325]: 2025-12-03 19:24:49.027 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:24:49 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 19:24:49 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 19:24:49 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 19:24:49 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 19:24:49 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 19:24:49 compute-0 nova_compute[348325]: 2025-12-03 19:24:49.325 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:24:49 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:24:49 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 78ae50f0-df6d-468e-809e-540be06ec4eb does not exist
Dec  3 19:24:49 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 101fc8b9-1e1a-47ef-8b45-be19eb25f343 does not exist
Dec  3 19:24:49 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 3dbf0521-491d-4d05-b8b0-a8faa591d0d5 does not exist
Dec  3 19:24:49 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 19:24:49 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 19:24:49 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 19:24:49 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 19:24:49 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 19:24:49 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 19:24:49 compute-0 nova_compute[348325]: 2025-12-03 19:24:49.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:24:49 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 19:24:49 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:24:49 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 19:24:50 compute-0 podman[488710]: 2025-12-03 19:24:50.45968174 +0000 UTC m=+0.079735053 container create 443ee261fd552eb6d18fc4e89b459f7c1c88e2cfab5e739b47f09df601864c10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wiles, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  3 19:24:50 compute-0 podman[488710]: 2025-12-03 19:24:50.423703343 +0000 UTC m=+0.043756706 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:24:50 compute-0 systemd[1]: Started libpod-conmon-443ee261fd552eb6d18fc4e89b459f7c1c88e2cfab5e739b47f09df601864c10.scope.
Dec  3 19:24:50 compute-0 nova_compute[348325]: 2025-12-03 19:24:50.560 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:24:50 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:24:50 compute-0 podman[488710]: 2025-12-03 19:24:50.639625098 +0000 UTC m=+0.259678451 container init 443ee261fd552eb6d18fc4e89b459f7c1c88e2cfab5e739b47f09df601864c10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wiles, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 19:24:50 compute-0 podman[488710]: 2025-12-03 19:24:50.657909099 +0000 UTC m=+0.277962392 container start 443ee261fd552eb6d18fc4e89b459f7c1c88e2cfab5e739b47f09df601864c10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec  3 19:24:50 compute-0 podman[488710]: 2025-12-03 19:24:50.664352095 +0000 UTC m=+0.284405388 container attach 443ee261fd552eb6d18fc4e89b459f7c1c88e2cfab5e739b47f09df601864c10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 19:24:50 compute-0 charming_wiles[488726]: 167 167
Dec  3 19:24:50 compute-0 systemd[1]: libpod-443ee261fd552eb6d18fc4e89b459f7c1c88e2cfab5e739b47f09df601864c10.scope: Deactivated successfully.
Dec  3 19:24:50 compute-0 podman[488710]: 2025-12-03 19:24:50.672657915 +0000 UTC m=+0.292711208 container died 443ee261fd552eb6d18fc4e89b459f7c1c88e2cfab5e739b47f09df601864c10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 19:24:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-1791eee492a71778873ffc4bec264726eb1ccfdbe1228b5518f1ef3b287dbad9-merged.mount: Deactivated successfully.
Dec  3 19:24:50 compute-0 podman[488710]: 2025-12-03 19:24:50.729987337 +0000 UTC m=+0.350040620 container remove 443ee261fd552eb6d18fc4e89b459f7c1c88e2cfab5e739b47f09df601864c10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec  3 19:24:50 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2583: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:24:50 compute-0 systemd[1]: libpod-conmon-443ee261fd552eb6d18fc4e89b459f7c1c88e2cfab5e739b47f09df601864c10.scope: Deactivated successfully.
Dec  3 19:24:51 compute-0 podman[488748]: 2025-12-03 19:24:51.017878947 +0000 UTC m=+0.091986828 container create c7dab3cbab95aa30e4de25bdff7c1a7796f10cb7ef2559e4d6f6c4221ba4ce1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_swirles, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec  3 19:24:51 compute-0 podman[488748]: 2025-12-03 19:24:50.978938178 +0000 UTC m=+0.053046119 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:24:51 compute-0 systemd[1]: Started libpod-conmon-c7dab3cbab95aa30e4de25bdff7c1a7796f10cb7ef2559e4d6f6c4221ba4ce1e.scope.
Dec  3 19:24:51 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:24:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/854f96b94861df908c7c68e24092844bfac8857f30ffe356bd120951972a24d3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 19:24:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/854f96b94861df908c7c68e24092844bfac8857f30ffe356bd120951972a24d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 19:24:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/854f96b94861df908c7c68e24092844bfac8857f30ffe356bd120951972a24d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 19:24:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/854f96b94861df908c7c68e24092844bfac8857f30ffe356bd120951972a24d3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 19:24:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/854f96b94861df908c7c68e24092844bfac8857f30ffe356bd120951972a24d3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 19:24:51 compute-0 podman[488748]: 2025-12-03 19:24:51.18183024 +0000 UTC m=+0.255938121 container init c7dab3cbab95aa30e4de25bdff7c1a7796f10cb7ef2559e4d6f6c4221ba4ce1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_swirles, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec  3 19:24:51 compute-0 podman[488748]: 2025-12-03 19:24:51.203201106 +0000 UTC m=+0.277308957 container start c7dab3cbab95aa30e4de25bdff7c1a7796f10cb7ef2559e4d6f6c4221ba4ce1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  3 19:24:51 compute-0 podman[488748]: 2025-12-03 19:24:51.209330663 +0000 UTC m=+0.283438554 container attach c7dab3cbab95aa30e4de25bdff7c1a7796f10cb7ef2559e4d6f6c4221ba4ce1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec  3 19:24:51 compute-0 nova_compute[348325]: 2025-12-03 19:24:51.488 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:24:51 compute-0 nova_compute[348325]: 2025-12-03 19:24:51.489 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec  3 19:24:51 compute-0 nova_compute[348325]: 2025-12-03 19:24:51.810 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec  3 19:24:52 compute-0 focused_swirles[488763]: --> passed data devices: 0 physical, 3 LVM
Dec  3 19:24:52 compute-0 focused_swirles[488763]: --> relative data size: 1.0
Dec  3 19:24:52 compute-0 focused_swirles[488763]: --> All data devices are unavailable
Dec  3 19:24:52 compute-0 systemd[1]: libpod-c7dab3cbab95aa30e4de25bdff7c1a7796f10cb7ef2559e4d6f6c4221ba4ce1e.scope: Deactivated successfully.
Dec  3 19:24:52 compute-0 systemd[1]: libpod-c7dab3cbab95aa30e4de25bdff7c1a7796f10cb7ef2559e4d6f6c4221ba4ce1e.scope: Consumed 1.203s CPU time.
Dec  3 19:24:52 compute-0 podman[488748]: 2025-12-03 19:24:52.445610788 +0000 UTC m=+1.519718699 container died c7dab3cbab95aa30e4de25bdff7c1a7796f10cb7ef2559e4d6f6c4221ba4ce1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec  3 19:24:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-854f96b94861df908c7c68e24092844bfac8857f30ffe356bd120951972a24d3-merged.mount: Deactivated successfully.
Dec  3 19:24:52 compute-0 podman[488748]: 2025-12-03 19:24:52.540963617 +0000 UTC m=+1.615071458 container remove c7dab3cbab95aa30e4de25bdff7c1a7796f10cb7ef2559e4d6f6c4221ba4ce1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_swirles, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec  3 19:24:52 compute-0 systemd[1]: libpod-conmon-c7dab3cbab95aa30e4de25bdff7c1a7796f10cb7ef2559e4d6f6c4221ba4ce1e.scope: Deactivated successfully.
Dec  3 19:24:52 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:24:52 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2584: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:24:53 compute-0 podman[488942]: 2025-12-03 19:24:53.552195266 +0000 UTC m=+0.042471755 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:24:53 compute-0 podman[488942]: 2025-12-03 19:24:53.733813384 +0000 UTC m=+0.224089823 container create 2c4acc2a982dee99b64e56ebc319ceae0c0c9d99bbb29b5b2def37ffadb4aacf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_pare, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  3 19:24:53 compute-0 systemd[1]: Started libpod-conmon-2c4acc2a982dee99b64e56ebc319ceae0c0c9d99bbb29b5b2def37ffadb4aacf.scope.
Dec  3 19:24:53 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:24:53 compute-0 podman[488942]: 2025-12-03 19:24:53.903761801 +0000 UTC m=+0.394038260 container init 2c4acc2a982dee99b64e56ebc319ceae0c0c9d99bbb29b5b2def37ffadb4aacf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_pare, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 19:24:53 compute-0 podman[488942]: 2025-12-03 19:24:53.92280742 +0000 UTC m=+0.413083810 container start 2c4acc2a982dee99b64e56ebc319ceae0c0c9d99bbb29b5b2def37ffadb4aacf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_pare, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 19:24:53 compute-0 podman[488942]: 2025-12-03 19:24:53.931329616 +0000 UTC m=+0.421606115 container attach 2c4acc2a982dee99b64e56ebc319ceae0c0c9d99bbb29b5b2def37ffadb4aacf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_pare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  3 19:24:53 compute-0 trusting_pare[488957]: 167 167
Dec  3 19:24:53 compute-0 systemd[1]: libpod-2c4acc2a982dee99b64e56ebc319ceae0c0c9d99bbb29b5b2def37ffadb4aacf.scope: Deactivated successfully.
Dec  3 19:24:53 compute-0 podman[488942]: 2025-12-03 19:24:53.936265085 +0000 UTC m=+0.426541484 container died 2c4acc2a982dee99b64e56ebc319ceae0c0c9d99bbb29b5b2def37ffadb4aacf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_pare, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Dec  3 19:24:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-ddee5267973b063d9f8519ff02dcd1de0fa81115be92e9f70e4263a8e61f34d6-merged.mount: Deactivated successfully.
Dec  3 19:24:54 compute-0 podman[488942]: 2025-12-03 19:24:53.999752286 +0000 UTC m=+0.490028685 container remove 2c4acc2a982dee99b64e56ebc319ceae0c0c9d99bbb29b5b2def37ffadb4aacf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_pare, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:24:54 compute-0 systemd[1]: libpod-conmon-2c4acc2a982dee99b64e56ebc319ceae0c0c9d99bbb29b5b2def37ffadb4aacf.scope: Deactivated successfully.
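
The trusting_pare lifecycle above (create, init, start, attach, died, remove, all within one second) is consistent with cephadm probing the Ceph image with a throwaway container; the single line of output, "167 167", is the uid/gid of the ceph user inside the image. A minimal sketch reproducing that probe, assuming podman is available and stat is on the image's path:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # Throwaway container, same pattern as the trusting_pare run above:
    # print the owner of /var/lib/ceph inside the image.
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    uid, gid = out.split()  # Ceph images ship the ceph user as uid/gid 167
    print(f"ceph uid={uid} gid={gid}")
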
Dec  3 19:24:54 compute-0 nova_compute[348325]: 2025-12-03 19:24:54.030 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:24:54 compute-0 nova_compute[348325]: 2025-12-03 19:24:54.327 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:24:54 compute-0 podman[488981]: 2025-12-03 19:24:54.238389258 +0000 UTC m=+0.059005273 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:24:54 compute-0 podman[488981]: 2025-12-03 19:24:54.435933122 +0000 UTC m=+0.256549077 container create 047b716a4f0a5f41bfdfdcc20c457f4cc4799b30882b65f134fcfe6539dca47e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_maxwell, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 19:24:54 compute-0 nova_compute[348325]: 2025-12-03 19:24:54.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:24:54 compute-0 nova_compute[348325]: 2025-12-03 19:24:54.487 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  3 19:24:54 compute-0 nova_compute[348325]: 2025-12-03 19:24:54.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:24:54 compute-0 nova_compute[348325]: 2025-12-03 19:24:54.487 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec  3 19:24:54 compute-0 systemd[1]: Started libpod-conmon-047b716a4f0a5f41bfdfdcc20c457f4cc4799b30882b65f134fcfe6539dca47e.scope.
Dec  3 19:24:54 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:24:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b729b925793154963e79156c833158e39afe18951c277ce69ac6c5d3abc02e5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 19:24:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b729b925793154963e79156c833158e39afe18951c277ce69ac6c5d3abc02e5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 19:24:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b729b925793154963e79156c833158e39afe18951c277ce69ac6c5d3abc02e5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 19:24:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b729b925793154963e79156c833158e39afe18951c277ce69ac6c5d3abc02e5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 19:24:54 compute-0 podman[488981]: 2025-12-03 19:24:54.605196441 +0000 UTC m=+0.425812396 container init 047b716a4f0a5f41bfdfdcc20c457f4cc4799b30882b65f134fcfe6539dca47e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_maxwell, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 19:24:54 compute-0 podman[488981]: 2025-12-03 19:24:54.614320412 +0000 UTC m=+0.434936337 container start 047b716a4f0a5f41bfdfdcc20c457f4cc4799b30882b65f134fcfe6539dca47e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_maxwell, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  3 19:24:54 compute-0 podman[488981]: 2025-12-03 19:24:54.619429684 +0000 UTC m=+0.440045639 container attach 047b716a4f0a5f41bfdfdcc20c457f4cc4799b30882b65f134fcfe6539dca47e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_maxwell, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  3 19:24:54 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2585: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]: {
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:    "0": [
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:        {
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:            "devices": [
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:                "/dev/loop3"
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:            ],
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:            "lv_name": "ceph_lv0",
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:            "lv_size": "21470642176",
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=973fbbc8-5aff-4a53-bee8-42e5a6788dd6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:            "lv_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:            "name": "ceph_lv0",
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:            "tags": {
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:                "ceph.block_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:                "ceph.cephx_lockbox_secret": "",
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:                "ceph.cluster_name": "ceph",
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:                "ceph.crush_device_class": "",
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:                "ceph.encrypted": "0",
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:                "ceph.osd_fsid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:                "ceph.osd_id": "0",
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:                "ceph.type": "block",
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:                "ceph.vdo": "0"
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:            },
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:            "type": "block",
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:            "vg_name": "ceph_vg0"
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:        }
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:    ],
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:    "1": [
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:        {
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:            "devices": [
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:                "/dev/loop4"
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:            ],
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:            "lv_name": "ceph_lv1",
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:            "lv_size": "21470642176",
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1e2b0083-5293-47cb-a3d1-bc27cedc4ede,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:            "lv_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:            "name": "ceph_lv1",
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:            "tags": {
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:                "ceph.block_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:                "ceph.cephx_lockbox_secret": "",
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:                "ceph.cluster_name": "ceph",
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:                "ceph.crush_device_class": "",
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:                "ceph.encrypted": "0",
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:                "ceph.osd_fsid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:                "ceph.osd_id": "1",
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:                "ceph.type": "block",
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:                "ceph.vdo": "0"
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:            },
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:            "type": "block",
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:            "vg_name": "ceph_vg1"
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:        }
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:    ],
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:    "2": [
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:        {
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:            "devices": [
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:                "/dev/loop5"
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:            ],
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:            "lv_name": "ceph_lv2",
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:            "lv_size": "21470642176",
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2abec9de-afba-437e-9a17-384a1dd8cd50,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:            "lv_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:            "name": "ceph_lv2",
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:            "tags": {
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:                "ceph.block_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:                "ceph.cephx_lockbox_secret": "",
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:                "ceph.cluster_name": "ceph",
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:                "ceph.crush_device_class": "",
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:                "ceph.encrypted": "0",
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:                "ceph.osd_fsid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:                "ceph.osd_id": "2",
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:                "ceph.type": "block",
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:                "ceph.vdo": "0"
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:            },
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:            "type": "block",
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:            "vg_name": "ceph_vg2"
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:        }
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]:    ]
Dec  3 19:24:55 compute-0 stupefied_maxwell[488997]: }
Dec  3 19:24:55 compute-0 systemd[1]: libpod-047b716a4f0a5f41bfdfdcc20c457f4cc4799b30882b65f134fcfe6539dca47e.scope: Deactivated successfully.
Dec  3 19:24:55 compute-0 podman[488981]: 2025-12-03 19:24:55.489336137 +0000 UTC m=+1.309952082 container died 047b716a4f0a5f41bfdfdcc20c457f4cc4799b30882b65f134fcfe6539dca47e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 19:24:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b729b925793154963e79156c833158e39afe18951c277ce69ac6c5d3abc02e5-merged.mount: Deactivated successfully.
Dec  3 19:24:55 compute-0 podman[488981]: 2025-12-03 19:24:55.594279457 +0000 UTC m=+1.414895402 container remove 047b716a4f0a5f41bfdfdcc20c457f4cc4799b30882b65f134fcfe6539dca47e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_maxwell, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Dec  3 19:24:55 compute-0 systemd[1]: libpod-conmon-047b716a4f0a5f41bfdfdcc20c457f4cc4799b30882b65f134fcfe6539dca47e.scope: Deactivated successfully.
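
The stupefied_maxwell container dumped per-OSD LVM metadata keyed by OSD id, matching the shape of ceph-volume lvm list --format json: each record carries the LV path, the backing loop device, and the ceph.* LV tags. A sketch that reduces such a dump to an osd_id-to-device map, assuming only the JSON shape shown above:

    import json

    def osd_block_devices(raw: str) -> dict:
        """Reduce a ceph-volume lvm list dump to {osd_id: (lv_path, devices)}."""
        result = {}
        for osd_id, lvs in json.loads(raw).items():
            for lv in lvs:
                if lv.get("type") == "block":
                    result[int(osd_id)] = (lv["lv_path"], lv["devices"])
        return result

    # For the dump above:
    # osd_block_devices(raw)[0] -> ("/dev/ceph_vg0/ceph_lv0", ["/dev/loop3"])
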
Dec  3 19:24:56 compute-0 nova_compute[348325]: 2025-12-03 19:24:56.498 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:24:56 compute-0 podman[489156]: 2025-12-03 19:24:56.668220128 +0000 UTC m=+0.079579910 container create ef2c1d1141c1780f472586d16336dc94b7e1d61de0d99e7d34e54a47f0aac70b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 19:24:56 compute-0 podman[489156]: 2025-12-03 19:24:56.63387246 +0000 UTC m=+0.045232292 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:24:56 compute-0 systemd[1]: Started libpod-conmon-ef2c1d1141c1780f472586d16336dc94b7e1d61de0d99e7d34e54a47f0aac70b.scope.
Dec  3 19:24:56 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2586: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:24:56 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:24:56 compute-0 podman[489156]: 2025-12-03 19:24:56.824346612 +0000 UTC m=+0.235706404 container init ef2c1d1141c1780f472586d16336dc94b7e1d61de0d99e7d34e54a47f0aac70b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_sammet, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec  3 19:24:56 compute-0 podman[489156]: 2025-12-03 19:24:56.83753739 +0000 UTC m=+0.248897142 container start ef2c1d1141c1780f472586d16336dc94b7e1d61de0d99e7d34e54a47f0aac70b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_sammet, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 19:24:56 compute-0 podman[489156]: 2025-12-03 19:24:56.844133129 +0000 UTC m=+0.255492881 container attach ef2c1d1141c1780f472586d16336dc94b7e1d61de0d99e7d34e54a47f0aac70b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_sammet, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec  3 19:24:56 compute-0 vibrant_sammet[489172]: 167 167
Dec  3 19:24:56 compute-0 systemd[1]: libpod-ef2c1d1141c1780f472586d16336dc94b7e1d61de0d99e7d34e54a47f0aac70b.scope: Deactivated successfully.
Dec  3 19:24:56 compute-0 podman[489156]: 2025-12-03 19:24:56.846893815 +0000 UTC m=+0.258253567 container died ef2c1d1141c1780f472586d16336dc94b7e1d61de0d99e7d34e54a47f0aac70b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec  3 19:24:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-1024225bc31c8b0bfb683a627a811d9c778f994650a5326dcc6a41d6faa1f8b8-merged.mount: Deactivated successfully.
Dec  3 19:24:56 compute-0 podman[489156]: 2025-12-03 19:24:56.920725905 +0000 UTC m=+0.332085647 container remove ef2c1d1141c1780f472586d16336dc94b7e1d61de0d99e7d34e54a47f0aac70b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_sammet, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 19:24:56 compute-0 systemd[1]: libpod-conmon-ef2c1d1141c1780f472586d16336dc94b7e1d61de0d99e7d34e54a47f0aac70b.scope: Deactivated successfully.
Dec  3 19:24:57 compute-0 podman[489194]: 2025-12-03 19:24:57.152960324 +0000 UTC m=+0.087702915 container create dd86eb7852a85f5033d89af32e49bff5771d9e7f32026c2b6eac601ed3527323 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  3 19:24:57 compute-0 podman[489194]: 2025-12-03 19:24:57.115216384 +0000 UTC m=+0.049959055 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:24:57 compute-0 systemd[1]: Started libpod-conmon-dd86eb7852a85f5033d89af32e49bff5771d9e7f32026c2b6eac601ed3527323.scope.
Dec  3 19:24:57 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:24:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35eb1187c87c83d647ed1e0db7dc1c59f24b6dc471d604c5dcf662bcd7242a4f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 19:24:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35eb1187c87c83d647ed1e0db7dc1c59f24b6dc471d604c5dcf662bcd7242a4f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 19:24:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35eb1187c87c83d647ed1e0db7dc1c59f24b6dc471d604c5dcf662bcd7242a4f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 19:24:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35eb1187c87c83d647ed1e0db7dc1c59f24b6dc471d604c5dcf662bcd7242a4f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 19:24:57 compute-0 podman[489194]: 2025-12-03 19:24:57.343403256 +0000 UTC m=+0.278145927 container init dd86eb7852a85f5033d89af32e49bff5771d9e7f32026c2b6eac601ed3527323 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_darwin, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec  3 19:24:57 compute-0 podman[489194]: 2025-12-03 19:24:57.389825995 +0000 UTC m=+0.324568606 container start dd86eb7852a85f5033d89af32e49bff5771d9e7f32026c2b6eac601ed3527323 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_darwin, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 19:24:57 compute-0 podman[489194]: 2025-12-03 19:24:57.396800923 +0000 UTC m=+0.331543534 container attach dd86eb7852a85f5033d89af32e49bff5771d9e7f32026c2b6eac601ed3527323 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_darwin, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Dec  3 19:24:57 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:24:58 compute-0 nova_compute[348325]: 2025-12-03 19:24:58.488 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:24:58 compute-0 focused_darwin[489210]: {
Dec  3 19:24:58 compute-0 focused_darwin[489210]:    "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
Dec  3 19:24:58 compute-0 focused_darwin[489210]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:24:58 compute-0 focused_darwin[489210]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 19:24:58 compute-0 focused_darwin[489210]:        "osd_id": 1,
Dec  3 19:24:58 compute-0 focused_darwin[489210]:        "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 19:24:58 compute-0 focused_darwin[489210]:        "type": "bluestore"
Dec  3 19:24:58 compute-0 focused_darwin[489210]:    },
Dec  3 19:24:58 compute-0 focused_darwin[489210]:    "2abec9de-afba-437e-9a17-384a1dd8cd50": {
Dec  3 19:24:58 compute-0 focused_darwin[489210]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:24:58 compute-0 focused_darwin[489210]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 19:24:58 compute-0 focused_darwin[489210]:        "osd_id": 2,
Dec  3 19:24:58 compute-0 focused_darwin[489210]:        "osd_uuid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 19:24:58 compute-0 focused_darwin[489210]:        "type": "bluestore"
Dec  3 19:24:58 compute-0 focused_darwin[489210]:    },
Dec  3 19:24:58 compute-0 focused_darwin[489210]:    "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": {
Dec  3 19:24:58 compute-0 focused_darwin[489210]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:24:58 compute-0 focused_darwin[489210]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 19:24:58 compute-0 focused_darwin[489210]:        "osd_id": 0,
Dec  3 19:24:58 compute-0 focused_darwin[489210]:        "osd_uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 19:24:58 compute-0 focused_darwin[489210]:        "type": "bluestore"
Dec  3 19:24:58 compute-0 focused_darwin[489210]:    }
Dec  3 19:24:58 compute-0 focused_darwin[489210]: }
Dec  3 19:24:58 compute-0 systemd[1]: libpod-dd86eb7852a85f5033d89af32e49bff5771d9e7f32026c2b6eac601ed3527323.scope: Deactivated successfully.
Dec  3 19:24:58 compute-0 systemd[1]: libpod-dd86eb7852a85f5033d89af32e49bff5771d9e7f32026c2b6eac601ed3527323.scope: Consumed 1.255s CPU time.
Dec  3 19:24:58 compute-0 podman[489194]: 2025-12-03 19:24:58.654912305 +0000 UTC m=+1.589654916 container died dd86eb7852a85f5033d89af32e49bff5771d9e7f32026c2b6eac601ed3527323 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 19:24:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-35eb1187c87c83d647ed1e0db7dc1c59f24b6dc471d604c5dcf662bcd7242a4f-merged.mount: Deactivated successfully.
Dec  3 19:24:58 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2587: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:24:58 compute-0 podman[489194]: 2025-12-03 19:24:58.757369535 +0000 UTC m=+1.692112116 container remove dd86eb7852a85f5033d89af32e49bff5771d9e7f32026c2b6eac601ed3527323 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 19:24:58 compute-0 systemd[1]: libpod-conmon-dd86eb7852a85f5033d89af32e49bff5771d9e7f32026c2b6eac601ed3527323.scope: Deactivated successfully.
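
focused_darwin printed a second inventory, keyed by osd_uuid with ceph_fsid, device, osd_id and type bluestore, consistent with ceph-volume raw list; cephadm then persists the gathered host state via the config-key set calls a few lines below. A sketch validating such a dump against the expected cluster fsid, assuming only the JSON shape shown above:

    import json

    def check_osd_inventory(raw: str, expected_fsid: str) -> None:
        for osd_uuid, rec in json.loads(raw).items():
            assert rec["ceph_fsid"] == expected_fsid, f"wrong cluster: {osd_uuid}"
            assert rec["type"] == "bluestore", f"unexpected type: {osd_uuid}"
            print(f"osd.{rec['osd_id']} -> {rec['device']}")

    # check_osd_inventory(raw, "c1caf3ba-b2a5-5005-a11e-e955c344dccc")
    # prints e.g.: osd.1 -> /dev/mapper/ceph_vg1-ceph_lv1
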
Dec  3 19:24:58 compute-0 podman[489247]: 2025-12-03 19:24:58.803619439 +0000 UTC m=+0.097969582 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  3 19:24:58 compute-0 podman[489248]: 2025-12-03 19:24:58.808066757 +0000 UTC m=+0.107874352 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.6, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, name=ubi9-minimal, release=1755695350, maintainer=Red Hat, Inc., vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  3 19:24:58 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 19:24:58 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:24:58 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 19:24:58 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:24:58 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev f0b39a63-1073-4ebe-acf7-68d8bdb164a1 does not exist
Dec  3 19:24:58 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 9b383f4c-0608-463c-aa01-4881a2a568b3 does not exist
Dec  3 19:24:58 compute-0 podman[489245]: 2025-12-03 19:24:58.863265097 +0000 UTC m=+0.158900861 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3)
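
The health_status=healthy events above are podman's periodic healthchecks for the edpm_ansible-managed containers; each event embeds the container's config_data, including the healthcheck test command it runs. A sketch triggering the same probe on demand (container name assumed to exist on this host):

    import subprocess

    def is_healthy(name: str) -> bool:
        # "podman healthcheck run" executes the container's configured
        # healthcheck command once; exit code 0 corresponds to the
        # health_status=healthy events logged above.
        return subprocess.run(["podman", "healthcheck", "run", name],
                              capture_output=True).returncode == 0

    # is_healthy("node_exporter") -> True while the exporter answers its probe
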
Dec  3 19:24:59 compute-0 nova_compute[348325]: 2025-12-03 19:24:59.035 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:24:59 compute-0 nova_compute[348325]: 2025-12-03 19:24:59.333 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:24:59 compute-0 podman[158200]: time="2025-12-03T19:24:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 19:24:59 compute-0 podman[158200]: @ - - [03/Dec/2025:19:24:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42578 "" "Go-http-client/1.1"
Dec  3 19:24:59 compute-0 podman[158200]: @ - - [03/Dec/2025:19:24:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8196 "" "Go-http-client/1.1"
Dec  3 19:24:59 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:24:59 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:25:00 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2588: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:25:01 compute-0 openstack_network_exporter[365222]: ERROR   19:25:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 19:25:01 compute-0 openstack_network_exporter[365222]: ERROR   19:25:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:25:01 compute-0 openstack_network_exporter[365222]: ERROR   19:25:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:25:01 compute-0 openstack_network_exporter[365222]: ERROR   19:25:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 19:25:01 compute-0 openstack_network_exporter[365222]: 
Dec  3 19:25:01 compute-0 openstack_network_exporter[365222]: ERROR   19:25:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 19:25:01 compute-0 openstack_network_exporter[365222]: 
Dec  3 19:25:02 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:25:02 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2589: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:25:04 compute-0 nova_compute[348325]: 2025-12-03 19:25:04.041 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:25:04 compute-0 nova_compute[348325]: 2025-12-03 19:25:04.335 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:25:04 compute-0 podman[489367]: 2025-12-03 19:25:04.742992758 +0000 UTC m=+0.110602837 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 19:25:04 compute-0 podman[489366]: 2025-12-03 19:25:04.753169403 +0000 UTC m=+0.116199082 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 19:25:04 compute-0 podman[489365]: 2025-12-03 19:25:04.753390438 +0000 UTC m=+0.117160995 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, container_name=kepler, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., managed_by=edpm_ansible, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec  3 19:25:04 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2590: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:25:05 compute-0 nova_compute[348325]: 2025-12-03 19:25:05.500 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:25:05 compute-0 nova_compute[348325]: 2025-12-03 19:25:05.531 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 19:25:05 compute-0 nova_compute[348325]: 2025-12-03 19:25:05.532 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 19:25:05 compute-0 nova_compute[348325]: 2025-12-03 19:25:05.532 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
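The three lockutils lines above are oslo.concurrency's standard lock instrumentation: each synchronized call site logs when it requests the lock, how long it waited, and how long it held it. A minimal sketch of what such a call site looks like (the decorator wiring follows nova's usual pattern; the method body is illustrative only):

    from oslo_concurrency import lockutils

    # "nova-" is the lock-file prefix nova conventionally uses; the lock
    # name matches the "compute_resources" lock in the lines above.
    synchronized = lockutils.synchronized_with_prefix('nova-')

    class ResourceTracker:
        @synchronized('compute_resources')
        def clean_compute_node_cache(self):
            # Body elided; everything here runs while holding the lock.
            # oslo.concurrency emits the Acquiring/acquired/released DEBUG
            # lines (with waited/held durations) around this call.
            pass
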
Dec  3 19:25:05 compute-0 nova_compute[348325]: 2025-12-03 19:25:05.533 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  3 19:25:05 compute-0 nova_compute[348325]: 2025-12-03 19:25:05.533 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 19:25:06 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 19:25:06 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3256088899' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 19:25:06 compute-0 nova_compute[348325]: 2025-12-03 19:25:06.216 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.683s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
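The Running cmd / returned pair above is oslo.concurrency's processutils wrapper timing a shell-out to the Ceph CLI (the resource audit shells out rather than binding librados), and the JSON on stdout feeds the disk statistics in the audit. Roughly the same call, as a sketch (the stats keys are the usual ceph df JSON fields, not values taken from this log):

    import json
    from oslo_concurrency import processutils

    # Same invocation the audit logs above; raises ProcessExecutionError
    # on a non-zero exit instead of returning it.
    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    total = stats['stats']['total_bytes']
    avail = stats['stats']['total_avail_bytes']
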
Dec  3 19:25:06 compute-0 nova_compute[348325]: 2025-12-03 19:25:06.640 348329 WARNING nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  3 19:25:06 compute-0 nova_compute[348325]: 2025-12-03 19:25:06.642 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3972MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  3 19:25:06 compute-0 nova_compute[348325]: 2025-12-03 19:25:06.643 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 19:25:06 compute-0 nova_compute[348325]: 2025-12-03 19:25:06.643 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 19:25:06 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2591: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:25:06 compute-0 nova_compute[348325]: 2025-12-03 19:25:06.864 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  3 19:25:06 compute-0 nova_compute[348325]: 2025-12-03 19:25:06.866 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  3 19:25:06 compute-0 nova_compute[348325]: 2025-12-03 19:25:06.890 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 19:25:07 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:25:07 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 19:25:07 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1232831268' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 19:25:07 compute-0 nova_compute[348325]: 2025-12-03 19:25:07.804 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.914s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 19:25:07 compute-0 nova_compute[348325]: 2025-12-03 19:25:07.825 348329 DEBUG nova.compute.provider_tree [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  3 19:25:07 compute-0 nova_compute[348325]: 2025-12-03 19:25:07.866 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  3 19:25:07 compute-0 nova_compute[348325]: 2025-12-03 19:25:07.869 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  3 19:25:07 compute-0 nova_compute[348325]: 2025-12-03 19:25:07.870 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.227s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
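The inventory dict logged above is exactly what the report client compares against Placement, and Placement treats each resource class's schedulable capacity as (total - reserved) * allocation_ratio. A quick check of the values above (plain arithmetic, not nova code):

    # VCPU:      (8    - 0)   * 4.0 = 32.0  schedulable vCPUs
    # MEMORY_MB: (7679 - 512) * 1.0 = 7167.0 MB
    # DISK_GB:   (59   - 1)   * 0.9 = 52.2  GB
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)
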
Dec  3 19:25:07 compute-0 nova_compute[348325]: 2025-12-03 19:25:07.916 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:25:07 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #126. Immutable memtables: 0.
Dec  3 19:25:07 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:25:07.955818) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 19:25:07 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 75] Flushing memtable with next log file: 126
Dec  3 19:25:07 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764789907955906, "job": 75, "event": "flush_started", "num_memtables": 1, "num_entries": 1033, "num_deletes": 251, "total_data_size": 1518027, "memory_usage": 1545088, "flush_reason": "Manual Compaction"}
Dec  3 19:25:07 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 75] Level-0 flush table #127: started
Dec  3 19:25:08 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764789908508212, "cf_name": "default", "job": 75, "event": "table_file_creation", "file_number": 127, "file_size": 1492867, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 52325, "largest_seqno": 53357, "table_properties": {"data_size": 1487755, "index_size": 2636, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 10931, "raw_average_key_size": 19, "raw_value_size": 1477553, "raw_average_value_size": 2667, "num_data_blocks": 119, "num_entries": 554, "num_filter_entries": 554, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764789807, "oldest_key_time": 1764789807, "file_creation_time": 1764789907, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a1ac3b74-8599-4a51-8b4c-6fd35a134427", "db_session_id": "TYOLZSJOOVNJYKF8Y1CE", "orig_file_number": 127, "seqno_to_time_mapping": "N/A"}}
Dec  3 19:25:08 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 75] Flush lasted 552520 microseconds, and 8771 cpu microseconds.
Dec  3 19:25:08 compute-0 ceph-mon[192802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 19:25:08 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2592: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:25:08 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:25:08.508336) [db/flush_job.cc:967] [default] [JOB 75] Level-0 flush table #127: 1492867 bytes OK
Dec  3 19:25:08 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:25:08.508367) [db/memtable_list.cc:519] [default] Level-0 commit table #127 started
Dec  3 19:25:08 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:25:08.855011) [db/memtable_list.cc:722] [default] Level-0 commit table #127: memtable #1 done
Dec  3 19:25:08 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:25:08.855059) EVENT_LOG_v1 {"time_micros": 1764789908855048, "job": 75, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 19:25:08 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:25:08.855085) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 19:25:08 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 75] Try to delete WAL files size 1513137, prev total WAL file size 1513137, number of live WAL files 2.
Dec  3 19:25:08 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000123.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 19:25:08 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:25:08.856296) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035303230' seq:72057594037927935, type:22 .. '7061786F730035323732' seq:0, type:0; will stop at (end)
Dec  3 19:25:08 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 76] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 19:25:08 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 75 Base level 0, inputs: [127(1457KB)], [125(9280KB)]
Dec  3 19:25:08 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764789908856396, "job": 76, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [127], "files_L6": [125], "score": -1, "input_data_size": 10995737, "oldest_snapshot_seqno": -1}
Dec  3 19:25:09 compute-0 nova_compute[348325]: 2025-12-03 19:25:09.046 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:25:09 compute-0 nova_compute[348325]: 2025-12-03 19:25:09.340 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:25:09 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 76] Generated table #128: 6704 keys, 9326229 bytes, temperature: kUnknown
Dec  3 19:25:09 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764789909543693, "cf_name": "default", "job": 76, "event": "table_file_creation", "file_number": 128, "file_size": 9326229, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9283374, "index_size": 25014, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16773, "raw_key_size": 175910, "raw_average_key_size": 26, "raw_value_size": 9164050, "raw_average_value_size": 1366, "num_data_blocks": 990, "num_entries": 6704, "num_filter_entries": 6704, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764784942, "oldest_key_time": 0, "file_creation_time": 1764789908, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a1ac3b74-8599-4a51-8b4c-6fd35a134427", "db_session_id": "TYOLZSJOOVNJYKF8Y1CE", "orig_file_number": 128, "seqno_to_time_mapping": "N/A"}}
Dec  3 19:25:09 compute-0 ceph-mon[192802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 19:25:09 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:25:09.544946) [db/compaction/compaction_job.cc:1663] [default] [JOB 76] Compacted 1@0 + 1@6 files to L6 => 9326229 bytes
Dec  3 19:25:09 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:25:09.550166) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 16.0 rd, 13.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 9.1 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(13.6) write-amplify(6.2) OK, records in: 7218, records dropped: 514 output_compression: NoCompression
Dec  3 19:25:09 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:25:09.550226) EVENT_LOG_v1 {"time_micros": 1764789909550200, "job": 76, "event": "compaction_finished", "compaction_time_micros": 688161, "compaction_time_cpu_micros": 47339, "output_level": 6, "num_output_files": 1, "total_output_size": 9326229, "num_input_records": 7218, "num_output_records": 6704, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 19:25:09 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:25:08.855921) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:25:09 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:25:09.550683) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:25:09 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:25:09.550692) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:25:09 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:25:09.550697) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:25:09 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:25:09.550701) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:25:09 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:25:09.550705) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:25:09 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000127.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 19:25:09 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764789909551253, "job": 0, "event": "table_file_deletion", "file_number": 127}
Dec  3 19:25:09 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000125.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 19:25:09 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764789909555333, "job": 0, "event": "table_file_deletion", "file_number": 125}
Dec  3 19:25:10 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2593: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:25:12 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:25:12 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2594: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:25:12 compute-0 podman[489463]: 2025-12-03 19:25:12.943931607 +0000 UTC m=+0.096943238 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
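The config_data field podman attaches to these health_status events is a Python-literal dict (note the single quotes and bare True), so it round-trips with ast.literal_eval rather than json.loads. A sketch for recovering the container definition from one of these lines, assuming config_data is the only brace-delimited field on the line (which holds for the records above):

    import ast
    import re

    # Greedy match: captures from config_data's opening brace to the
    # last closing brace on the line, i.e. the whole nested dict.
    CONFIG_RE = re.compile(r"config_data=(\{.*\})")

    def container_config(logline: str) -> dict:
        """Recover the edpm container definition embedded in a podman
        health_status journal line."""
        m = CONFIG_RE.search(logline)
        if m is None:
            raise ValueError("no config_data field on this line")
        return ast.literal_eval(m.group(1))

    # e.g. container_config(line)['healthcheck']['test']
    #   -> '/openstack/healthcheck podman_exporter'
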
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.266 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is larger than the number of worker threads available to execute them; the polling process can therefore be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.267 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.267 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.268 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7eff8d7fffe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.269 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.270 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff9026f920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.270 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.270 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.271 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffa10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.271 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8daba2d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.271 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.271 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff90799b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.271 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.272 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8f46ebd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.272 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.272 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.272 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.272 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.273 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.273 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.273 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.274 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.275 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.275 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.275 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.275 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.276 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7fff50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.276 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.276 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7fffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.277 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8ef7c7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.274 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7eff8d8a80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.278 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.278 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7eff8d8a8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.278 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.278 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7eff8d8a8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.278 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.279 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7eff8d8a81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.279 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.279 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7eff8d7ff9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.279 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.279 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7eff8d7fe840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.280 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.280 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7eff8d8a82c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.280 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.280 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7eff8d7ff9b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.280 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.281 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7eff8d8a8350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.281 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.281 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7eff8f682330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.281 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.281 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7eff8d7ff4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.281 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.282 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7eff8d930c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.282 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.282 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7eff8d7ff4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.282 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.282 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7eff8d7ff530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.283 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.283 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7eff8d7ff590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.283 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.283 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7eff8d7ff5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.283 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.283 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7eff8d8a8620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.284 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.284 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7eff8d7ff650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.284 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.284 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7eff8d7ff6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.284 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.285 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7eff8d7ffa40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.285 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.285 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7eff8d7ff710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.285 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.285 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7eff8d7fff20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.286 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.286 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7eff8d7ff770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.286 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.286 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7eff8d7fff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.286 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.287 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7eff8d7fdac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.287 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.287 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.287 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.288 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.288 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.288 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.288 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.288 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.288 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.288 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.289 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.289 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.289 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.289 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.289 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.289 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.289 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.290 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.290 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.290 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.290 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.290 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.290 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.290 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.291 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.291 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:25:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:25:13.291 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:25:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:25:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:25:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:25:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:25:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:25:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:25:14 compute-0 nova_compute[348325]: 2025-12-03 19:25:14.051 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:25:14 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_19:25:14
Dec  3 19:25:14 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 19:25:14 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 19:25:14 compute-0 ceph-mgr[193091]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', 'images', '.mgr', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.meta', 'vms', 'cephfs.cephfs.data', 'default.rgw.control', 'backups']
Dec  3 19:25:14 compute-0 ceph-mgr[193091]: [balancer INFO root] prepared 0/10 changes
Dec  3 19:25:14 compute-0 nova_compute[348325]: 2025-12-03 19:25:14.343 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:25:14 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2595: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:25:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 19:25:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 19:25:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 19:25:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 19:25:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 19:25:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 19:25:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 19:25:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 19:25:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 19:25:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 19:25:16 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2596: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:25:17 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:25:18 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2597: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:25:18 compute-0 podman[489487]: 2025-12-03 19:25:18.966135915 +0000 UTC m=+0.112596555 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute)
Dec  3 19:25:19 compute-0 podman[489486]: 2025-12-03 19:25:19.006878267 +0000 UTC m=+0.157915497 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  3 19:25:19 compute-0 nova_compute[348325]: 2025-12-03 19:25:19.053 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:25:19 compute-0 nova_compute[348325]: 2025-12-03 19:25:19.345 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:25:20 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2598: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:25:22 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:25:22 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2599: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:25:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:25:23.390 286999 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 19:25:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:25:23.390 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 19:25:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:25:23.391 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 19:25:24 compute-0 nova_compute[348325]: 2025-12-03 19:25:24.058 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:25:24 compute-0 nova_compute[348325]: 2025-12-03 19:25:24.349 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:25:24 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2600: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:25:25 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 19:25:25 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:25:25 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 19:25:25 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:25:25 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  3 19:25:25 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:25:25 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:25:25 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:25:25 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:25:25 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:25:25 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec  3 19:25:25 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:25:25 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 19:25:25 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:25:25 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:25:25 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:25:25 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 19:25:25 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:25:25 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 19:25:25 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:25:25 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:25:25 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:25:25 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 19:25:26 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2601: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:25:27 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:25:28 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2602: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:25:29 compute-0 nova_compute[348325]: 2025-12-03 19:25:29.062 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:25:29 compute-0 nova_compute[348325]: 2025-12-03 19:25:29.352 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:25:29 compute-0 podman[158200]: time="2025-12-03T19:25:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 19:25:29 compute-0 podman[158200]: @ - - [03/Dec/2025:19:25:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42578 "" "Go-http-client/1.1"
Dec  3 19:25:29 compute-0 podman[158200]: @ - - [03/Dec/2025:19:25:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8207 "" "Go-http-client/1.1"
Dec  3 19:25:29 compute-0 podman[489531]: 2025-12-03 19:25:29.933596446 +0000 UTC m=+0.087004308 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 19:25:29 compute-0 podman[489532]: 2025-12-03 19:25:29.938205448 +0000 UTC m=+0.092353918 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, name=ubi9-minimal, vendor=Red Hat, Inc., architecture=x86_64, release=1755695350, vcs-type=git, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, build-date=2025-08-20T13:12:41)
Dec  3 19:25:29 compute-0 podman[489530]: 2025-12-03 19:25:29.971274335 +0000 UTC m=+0.132856235 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 19:25:30 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2603: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:25:31 compute-0 openstack_network_exporter[365222]: ERROR   19:25:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 19:25:31 compute-0 openstack_network_exporter[365222]: ERROR   19:25:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:25:31 compute-0 openstack_network_exporter[365222]: ERROR   19:25:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:25:31 compute-0 openstack_network_exporter[365222]: ERROR   19:25:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 19:25:31 compute-0 openstack_network_exporter[365222]: ERROR   19:25:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 19:25:32 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:25:32 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2604: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:25:34 compute-0 nova_compute[348325]: 2025-12-03 19:25:34.067 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:25:34 compute-0 nova_compute[348325]: 2025-12-03 19:25:34.356 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:25:34 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2605: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:25:34 compute-0 podman[489593]: 2025-12-03 19:25:34.960928078 +0000 UTC m=+0.111280624 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  3 19:25:34 compute-0 podman[489591]: 2025-12-03 19:25:34.964691819 +0000 UTC m=+0.120191929 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, name=ubi9, release=1214.1726694543, io.buildah.version=1.29.0, io.openshift.expose-services=, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, architecture=x86_64, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, config_id=edpm, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc.)
Dec  3 19:25:35 compute-0 podman[489592]: 2025-12-03 19:25:35.0050034 +0000 UTC m=+0.151337989 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  3 19:25:36 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2606: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:25:37 compute-0 podman[158200]: time="2025-12-03T19:25:37Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 19:25:37 compute-0 nova_compute[348325]: 2025-12-03 19:25:37.508 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:25:37 compute-0 podman[158200]: @ - - [03/Dec/2025:19:25:37 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 43257 "" "Go-http-client/1.1"
Dec  3 19:25:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 19:25:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3065870532' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 19:25:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 19:25:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3065870532' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 19:25:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:25:38 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2607: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:25:39 compute-0 nova_compute[348325]: 2025-12-03 19:25:39.072 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:25:39 compute-0 nova_compute[348325]: 2025-12-03 19:25:39.361 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:25:40 compute-0 nova_compute[348325]: 2025-12-03 19:25:40.478 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:25:40 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2608: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:25:42 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:25:42 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2609: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:25:43 compute-0 podman[489645]: 2025-12-03 19:25:43.974535076 +0000 UTC m=+0.139239849 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 19:25:44 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:25:44 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:25:44 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:25:44 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:25:44 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:25:44 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:25:44 compute-0 nova_compute[348325]: 2025-12-03 19:25:44.077 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:25:44 compute-0 nova_compute[348325]: 2025-12-03 19:25:44.364 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:25:44 compute-0 nova_compute[348325]: 2025-12-03 19:25:44.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:25:44 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2610: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:25:45 compute-0 nova_compute[348325]: 2025-12-03 19:25:45.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:25:46 compute-0 nova_compute[348325]: 2025-12-03 19:25:46.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:25:46 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2611: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:25:47 compute-0 nova_compute[348325]: 2025-12-03 19:25:47.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:25:47 compute-0 nova_compute[348325]: 2025-12-03 19:25:47.487 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  3 19:25:47 compute-0 nova_compute[348325]: 2025-12-03 19:25:47.487 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  3 19:25:47 compute-0 nova_compute[348325]: 2025-12-03 19:25:47.505 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  3 19:25:47 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:25:48 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2612: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:25:49 compute-0 nova_compute[348325]: 2025-12-03 19:25:49.082 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:25:49 compute-0 nova_compute[348325]: 2025-12-03 19:25:49.369 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:25:49 compute-0 nova_compute[348325]: 2025-12-03 19:25:49.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:25:50 compute-0 podman[489673]: 2025-12-03 19:25:50.001977647 +0000 UTC m=+0.155751806 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  3 19:25:50 compute-0 podman[489672]: 2025-12-03 19:25:50.009794476 +0000 UTC m=+0.167889859 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.schema-version=1.0)
Dec  3 19:25:50 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2613: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:25:52 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:25:52 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2614: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:25:54 compute-0 nova_compute[348325]: 2025-12-03 19:25:54.085 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:25:54 compute-0 nova_compute[348325]: 2025-12-03 19:25:54.371 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:25:54 compute-0 nova_compute[348325]: 2025-12-03 19:25:54.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:25:54 compute-0 nova_compute[348325]: 2025-12-03 19:25:54.487 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  3 19:25:54 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2615: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:25:56 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2616: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:25:57 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:25:58 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2617: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:25:59 compute-0 nova_compute[348325]: 2025-12-03 19:25:59.088 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:25:59 compute-0 nova_compute[348325]: 2025-12-03 19:25:59.373 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:25:59 compute-0 podman[158200]: time="2025-12-03T19:25:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 19:25:59 compute-0 podman[158200]: @ - - [03/Dec/2025:19:25:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42578 "" "Go-http-client/1.1"
Dec  3 19:25:59 compute-0 podman[158200]: @ - - [03/Dec/2025:19:25:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8203 "" "Go-http-client/1.1"
Dec  3 19:26:00 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 19:26:00 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 19:26:00 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 19:26:00 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 19:26:00 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 19:26:00 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:26:00 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 344bfda2-7999-4b79-97a1-3328badcbbd9 does not exist
Dec  3 19:26:00 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev ddafe664-aa67-44ee-b892-c5b7b76faa7f does not exist
Dec  3 19:26:00 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev ddf936b1-6f6f-4f7e-aad8-35ba812d72ac does not exist
Dec  3 19:26:00 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 19:26:00 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 19:26:00 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 19:26:00 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 19:26:00 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 19:26:00 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 19:26:00 compute-0 podman[489869]: 2025-12-03 19:26:00.280011905 +0000 UTC m=+0.084023197 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  3 19:26:00 compute-0 podman[489868]: 2025-12-03 19:26:00.302011455 +0000 UTC m=+0.100440282 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  3 19:26:00 compute-0 podman[489870]: 2025-12-03 19:26:00.324061987 +0000 UTC m=+0.114841819 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, config_id=edpm, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, build-date=2025-08-20T13:12:41)
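The three podman lines above are health_status events, emitted each time a container's configured healthcheck runs (here node_exporter, multipathd, and openstack_network_exporter, all healthy with a failing streak of 0). A sketch of watching those events from the host, assuming a podman version that emits health_status events and supports JSON event output; the exact JSON field names vary by version, hence the fallbacks:

```python
# Stream podman health_status events from the host (runs until interrupted).
import json
import subprocess

proc = subprocess.Popen(
    ["podman", "events", "--filter", "event=health_status", "--format", "json"],
    stdout=subprocess.PIPE, text=True,
)
for line in proc.stdout:
    ev = json.loads(line)
    # Field names differ across podman releases; try the common variants.
    name = ev.get("Name") or ev.get("name")
    status = ev.get("HealthStatus") or ev.get("health_status")
    print(f"{name}: {status}")
```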
Dec  3 19:26:00 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2618: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:26:00 compute-0 podman[490042]: 2025-12-03 19:26:00.939884963 +0000 UTC m=+0.063991843 container create 7282b3d8f248a8588b2dd292fe60860fdc1059a313d2d291706ccfce7eae8ae2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_pasteur, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:26:00 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 19:26:00 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:26:00 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 19:26:01 compute-0 systemd[1]: Started libpod-conmon-7282b3d8f248a8588b2dd292fe60860fdc1059a313d2d291706ccfce7eae8ae2.scope.
Dec  3 19:26:01 compute-0 podman[490042]: 2025-12-03 19:26:00.908147729 +0000 UTC m=+0.032254609 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:26:01 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:26:01 compute-0 podman[490042]: 2025-12-03 19:26:01.070221096 +0000 UTC m=+0.194328096 container init 7282b3d8f248a8588b2dd292fe60860fdc1059a313d2d291706ccfce7eae8ae2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 19:26:01 compute-0 podman[490042]: 2025-12-03 19:26:01.089330786 +0000 UTC m=+0.213437656 container start 7282b3d8f248a8588b2dd292fe60860fdc1059a313d2d291706ccfce7eae8ae2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_pasteur, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 19:26:01 compute-0 podman[490042]: 2025-12-03 19:26:01.094981932 +0000 UTC m=+0.219088872 container attach 7282b3d8f248a8588b2dd292fe60860fdc1059a313d2d291706ccfce7eae8ae2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_pasteur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  3 19:26:01 compute-0 stupefied_pasteur[490058]: 167 167
Dec  3 19:26:01 compute-0 systemd[1]: libpod-7282b3d8f248a8588b2dd292fe60860fdc1059a313d2d291706ccfce7eae8ae2.scope: Deactivated successfully.
Dec  3 19:26:01 compute-0 podman[490042]: 2025-12-03 19:26:01.10107934 +0000 UTC m=+0.225186210 container died 7282b3d8f248a8588b2dd292fe60860fdc1059a313d2d291706ccfce7eae8ae2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_pasteur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  3 19:26:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-21b1e58f5f2e16968f8c936b7914896f46a653b3b447aa5d40ad6f722ed100a8-merged.mount: Deactivated successfully.
Dec  3 19:26:01 compute-0 podman[490042]: 2025-12-03 19:26:01.173146747 +0000 UTC m=+0.297253617 container remove 7282b3d8f248a8588b2dd292fe60860fdc1059a313d2d291706ccfce7eae8ae2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec  3 19:26:01 compute-0 systemd[1]: libpod-conmon-7282b3d8f248a8588b2dd292fe60860fdc1059a313d2d291706ccfce7eae8ae2.scope: Deactivated successfully.
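A sequence like the one above, a container created, started, attached, dead, and removed within roughly 0.3 s with only "167 167" on stdout, is consistent with cephadm running a throwaway probe in the ceph image to learn the ceph uid/gid before deploying daemons (167 is the ceph user and group in these images). A hypothetical reproduction of that probe; the exact stat invocation is an assumption, and cephadm's real probe may differ in detail:

```python
# Run the ceph image once, print the uid/gid owning /var/lib/ceph,
# and let podman remove the container on exit (--rm), mirroring the
# short-lived stupefied_pasteur container logged above.
import subprocess

IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

out = subprocess.run(
    ["podman", "run", "--rm", "--entrypoint", "stat",
     IMAGE, "-c", "%u %g", "/var/lib/ceph"],
    capture_output=True, text=True, check=True,
).stdout.strip()
uid, gid = out.split()   # e.g. "167 167", matching the container output above
print(f"ceph uid={uid} gid={gid}")
```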
Dec  3 19:26:01 compute-0 podman[490081]: 2025-12-03 19:26:01.407738823 +0000 UTC m=+0.082251964 container create 1a26d90a241a8e1c3390c8e9f371123fd08d8199a69e017ac2b080c867fc9f98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_dirac, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec  3 19:26:01 compute-0 openstack_network_exporter[365222]: ERROR   19:26:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 19:26:01 compute-0 openstack_network_exporter[365222]: ERROR   19:26:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:26:01 compute-0 openstack_network_exporter[365222]: ERROR   19:26:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:26:01 compute-0 openstack_network_exporter[365222]: ERROR   19:26:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 19:26:01 compute-0 openstack_network_exporter[365222]: ERROR   19:26:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
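The exporter failures above all reduce to one condition: no unixctl control sockets (files named like ovsdb-server.<pid>.ctl) under the run directories it has mounted, so appctl calls to ovsdb-server and ovn-northd cannot be prepared. An illustrative host-side check for that condition; the paths are taken from the exporter's volume list earlier in the log and may differ on other deployments:

```python
# Look for the unixctl sockets the exporter needs; an empty result
# corresponds to the "no control socket files found" errors above.
import glob

for rundir, daemon in [("/var/run/openvswitch", "ovsdb-server"),
                       ("/var/lib/openvswitch/ovn", "ovn-northd")]:
    socks = glob.glob(f"{rundir}/{daemon}.*.ctl")
    status = ", ".join(socks) if socks else "no control socket files found"
    print(f"{daemon}: {status}")
```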
Dec  3 19:26:01 compute-0 podman[490081]: 2025-12-03 19:26:01.371008437 +0000 UTC m=+0.045521618 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:26:01 compute-0 systemd[1]: Started libpod-conmon-1a26d90a241a8e1c3390c8e9f371123fd08d8199a69e017ac2b080c867fc9f98.scope.
Dec  3 19:26:01 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:26:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88688ef9283c88cb8c90a39fb6ff05e3467d7a887fbf35a3f7560b3161f1f42d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 19:26:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88688ef9283c88cb8c90a39fb6ff05e3467d7a887fbf35a3f7560b3161f1f42d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 19:26:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88688ef9283c88cb8c90a39fb6ff05e3467d7a887fbf35a3f7560b3161f1f42d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 19:26:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88688ef9283c88cb8c90a39fb6ff05e3467d7a887fbf35a3f7560b3161f1f42d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 19:26:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88688ef9283c88cb8c90a39fb6ff05e3467d7a887fbf35a3f7560b3161f1f42d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 19:26:01 compute-0 podman[490081]: 2025-12-03 19:26:01.550431134 +0000 UTC m=+0.224944285 container init 1a26d90a241a8e1c3390c8e9f371123fd08d8199a69e017ac2b080c867fc9f98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_dirac, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 19:26:01 compute-0 podman[490081]: 2025-12-03 19:26:01.574698709 +0000 UTC m=+0.249211850 container start 1a26d90a241a8e1c3390c8e9f371123fd08d8199a69e017ac2b080c867fc9f98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  3 19:26:01 compute-0 podman[490081]: 2025-12-03 19:26:01.580887947 +0000 UTC m=+0.255401148 container attach 1a26d90a241a8e1c3390c8e9f371123fd08d8199a69e017ac2b080c867fc9f98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 19:26:02 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:26:02 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2619: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:26:02 compute-0 nice_dirac[490097]: --> passed data devices: 0 physical, 3 LVM
Dec  3 19:26:02 compute-0 nice_dirac[490097]: --> relative data size: 1.0
Dec  3 19:26:02 compute-0 nice_dirac[490097]: --> All data devices are unavailable
Dec  3 19:26:02 compute-0 systemd[1]: libpod-1a26d90a241a8e1c3390c8e9f371123fd08d8199a69e017ac2b080c867fc9f98.scope: Deactivated successfully.
Dec  3 19:26:02 compute-0 podman[490081]: 2025-12-03 19:26:02.853322104 +0000 UTC m=+1.527835215 container died 1a26d90a241a8e1c3390c8e9f371123fd08d8199a69e017ac2b080c867fc9f98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_dirac, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 19:26:02 compute-0 systemd[1]: libpod-1a26d90a241a8e1c3390c8e9f371123fd08d8199a69e017ac2b080c867fc9f98.scope: Consumed 1.228s CPU time.
Dec  3 19:26:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-88688ef9283c88cb8c90a39fb6ff05e3467d7a887fbf35a3f7560b3161f1f42d-merged.mount: Deactivated successfully.
Dec  3 19:26:02 compute-0 podman[490081]: 2025-12-03 19:26:02.941880339 +0000 UTC m=+1.616393450 container remove 1a26d90a241a8e1c3390c8e9f371123fd08d8199a69e017ac2b080c867fc9f98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_dirac, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 19:26:02 compute-0 systemd[1]: libpod-conmon-1a26d90a241a8e1c3390c8e9f371123fd08d8199a69e017ac2b080c867fc9f98.scope: Deactivated successfully.
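nice_dirac's output ("passed data devices: 0 physical, 3 LVM ... All data devices are unavailable") reads like a ceph-volume batch report: the drive group resolves to three logical volumes that are already consumed by OSDs, so there is nothing new to create and the container exits. A sketch of running the same report non-destructively from the host; the device paths and the use of cephadm shell are assumptions based on the LVM listing that appears below:

```python
# Ask ceph-volume for a batch *report* (no changes made) on the three
# drive-group LVs; expect them to be reported as unavailable since they
# already back osd.0-osd.2.
import json
import subprocess

devs = ["/dev/ceph_vg0/ceph_lv0",
        "/dev/ceph_vg1/ceph_lv1",
        "/dev/ceph_vg2/ceph_lv2"]
report = subprocess.run(
    ["cephadm", "shell", "--", "ceph-volume", "lvm", "batch",
     "--report", "--format", "json", *devs],
    capture_output=True, text=True, check=True,
).stdout
print(json.dumps(json.loads(report), indent=2))
```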
Dec  3 19:26:03 compute-0 podman[490277]: 2025-12-03 19:26:03.99840502 +0000 UTC m=+0.072501739 container create 21df99cf35f352a6aaa9198b788f7d3a8d4ee716b211b9a1594f52d8fdfb8e2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_fermat, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 19:26:04 compute-0 podman[490277]: 2025-12-03 19:26:03.972381403 +0000 UTC m=+0.046478102 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:26:04 compute-0 systemd[1]: Started libpod-conmon-21df99cf35f352a6aaa9198b788f7d3a8d4ee716b211b9a1594f52d8fdfb8e2f.scope.
Dec  3 19:26:04 compute-0 nova_compute[348325]: 2025-12-03 19:26:04.093 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:26:04 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:26:04 compute-0 podman[490277]: 2025-12-03 19:26:04.150900007 +0000 UTC m=+0.224996756 container init 21df99cf35f352a6aaa9198b788f7d3a8d4ee716b211b9a1594f52d8fdfb8e2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_fermat, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Dec  3 19:26:04 compute-0 podman[490277]: 2025-12-03 19:26:04.161248866 +0000 UTC m=+0.235345585 container start 21df99cf35f352a6aaa9198b788f7d3a8d4ee716b211b9a1594f52d8fdfb8e2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_fermat, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  3 19:26:04 compute-0 podman[490277]: 2025-12-03 19:26:04.167337373 +0000 UTC m=+0.241434082 container attach 21df99cf35f352a6aaa9198b788f7d3a8d4ee716b211b9a1594f52d8fdfb8e2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 19:26:04 compute-0 recursing_fermat[490293]: 167 167
Dec  3 19:26:04 compute-0 systemd[1]: libpod-21df99cf35f352a6aaa9198b788f7d3a8d4ee716b211b9a1594f52d8fdfb8e2f.scope: Deactivated successfully.
Dec  3 19:26:04 compute-0 podman[490277]: 2025-12-03 19:26:04.174900325 +0000 UTC m=+0.248997034 container died 21df99cf35f352a6aaa9198b788f7d3a8d4ee716b211b9a1594f52d8fdfb8e2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec  3 19:26:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-c476aa964b7fdc49aaede4615f0610feb3a99b9397d3aa6b99ae81e274de2869-merged.mount: Deactivated successfully.
Dec  3 19:26:04 compute-0 podman[490277]: 2025-12-03 19:26:04.24892474 +0000 UTC m=+0.323021459 container remove 21df99cf35f352a6aaa9198b788f7d3a8d4ee716b211b9a1594f52d8fdfb8e2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_fermat, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 19:26:04 compute-0 systemd[1]: libpod-conmon-21df99cf35f352a6aaa9198b788f7d3a8d4ee716b211b9a1594f52d8fdfb8e2f.scope: Deactivated successfully.
Dec  3 19:26:04 compute-0 nova_compute[348325]: 2025-12-03 19:26:04.375 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:26:04 compute-0 podman[490316]: 2025-12-03 19:26:04.541001221 +0000 UTC m=+0.104875548 container create df528caf9c8a9142c787ee51828a9763179cebfcca806546338fc2434f4abd6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 19:26:04 compute-0 podman[490316]: 2025-12-03 19:26:04.508076528 +0000 UTC m=+0.071950915 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:26:04 compute-0 systemd[1]: Started libpod-conmon-df528caf9c8a9142c787ee51828a9763179cebfcca806546338fc2434f4abd6f.scope.
Dec  3 19:26:04 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:26:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f54095e44bad238f01fdb15220ac463d77e0ff11f2b781e00bc0b981b4744794/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 19:26:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f54095e44bad238f01fdb15220ac463d77e0ff11f2b781e00bc0b981b4744794/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 19:26:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f54095e44bad238f01fdb15220ac463d77e0ff11f2b781e00bc0b981b4744794/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 19:26:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f54095e44bad238f01fdb15220ac463d77e0ff11f2b781e00bc0b981b4744794/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 19:26:04 compute-0 podman[490316]: 2025-12-03 19:26:04.726794971 +0000 UTC m=+0.290669338 container init df528caf9c8a9142c787ee51828a9763179cebfcca806546338fc2434f4abd6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_heisenberg, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec  3 19:26:04 compute-0 podman[490316]: 2025-12-03 19:26:04.747730646 +0000 UTC m=+0.311604983 container start df528caf9c8a9142c787ee51828a9763179cebfcca806546338fc2434f4abd6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_heisenberg, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 19:26:04 compute-0 podman[490316]: 2025-12-03 19:26:04.755627056 +0000 UTC m=+0.319501433 container attach df528caf9c8a9142c787ee51828a9763179cebfcca806546338fc2434f4abd6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  3 19:26:04 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2620: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]: {
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:    "0": [
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:        {
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:            "devices": [
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:                "/dev/loop3"
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:            ],
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:            "lv_name": "ceph_lv0",
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:            "lv_size": "21470642176",
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=973fbbc8-5aff-4a53-bee8-42e5a6788dd6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:            "lv_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:            "name": "ceph_lv0",
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:            "tags": {
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:                "ceph.block_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:                "ceph.cephx_lockbox_secret": "",
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:                "ceph.cluster_name": "ceph",
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:                "ceph.crush_device_class": "",
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:                "ceph.encrypted": "0",
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:                "ceph.osd_fsid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:                "ceph.osd_id": "0",
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:                "ceph.type": "block",
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:                "ceph.vdo": "0"
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:            },
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:            "type": "block",
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:            "vg_name": "ceph_vg0"
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:        }
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:    ],
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:    "1": [
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:        {
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:            "devices": [
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:                "/dev/loop4"
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:            ],
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:            "lv_name": "ceph_lv1",
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:            "lv_size": "21470642176",
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1e2b0083-5293-47cb-a3d1-bc27cedc4ede,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:            "lv_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:            "name": "ceph_lv1",
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:            "tags": {
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:                "ceph.block_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:                "ceph.cephx_lockbox_secret": "",
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:                "ceph.cluster_name": "ceph",
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:                "ceph.crush_device_class": "",
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:                "ceph.encrypted": "0",
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:                "ceph.osd_fsid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:                "ceph.osd_id": "1",
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:                "ceph.type": "block",
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:                "ceph.vdo": "0"
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:            },
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:            "type": "block",
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:            "vg_name": "ceph_vg1"
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:        }
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:    ],
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:    "2": [
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:        {
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:            "devices": [
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:                "/dev/loop5"
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:            ],
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:            "lv_name": "ceph_lv2",
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:            "lv_size": "21470642176",
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2abec9de-afba-437e-9a17-384a1dd8cd50,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:            "lv_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:            "name": "ceph_lv2",
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:            "tags": {
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:                "ceph.block_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:                "ceph.cephx_lockbox_secret": "",
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:                "ceph.cluster_name": "ceph",
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:                "ceph.crush_device_class": "",
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:                "ceph.encrypted": "0",
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:                "ceph.osd_fsid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:                "ceph.osd_id": "2",
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:                "ceph.type": "block",
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:                "ceph.vdo": "0"
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:            },
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:            "type": "block",
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:            "vg_name": "ceph_vg2"
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:        }
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]:    ]
Dec  3 19:26:05 compute-0 blissful_heisenberg[490332]: }
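The JSON dumped by blissful_heisenberg above matches the output shape of ceph-volume lvm list --format json: a map keyed by OSD id, each entry carrying the LV path, backing devices, and the ceph.* LV tags. A small sketch that condenses such a capture into one line per OSD; the filename is hypothetical:

```python
# Summarize a saved `ceph-volume lvm list --format json` dump (as printed
# in the log above) into osd -> LV -> backing-device -> fsid lines.
import json

with open("ceph-volume-lvm-list.json") as f:   # hypothetical capture of the dump
    listing = json.load(f)

for osd_id, lvs in sorted(listing.items(), key=lambda kv: int(kv[0])):
    for lv in lvs:
        print(f"osd.{osd_id}: {lv['lv_path']} "
              f"on {','.join(lv['devices'])} "
              f"fsid={lv['tags']['ceph.osd_fsid']}")
```

Against the dump above this would print osd.0 through osd.2 on /dev/loop3-/dev/loop5, confirming why the earlier batch report found no free devices.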
Dec  3 19:26:05 compute-0 systemd[1]: libpod-df528caf9c8a9142c787ee51828a9763179cebfcca806546338fc2434f4abd6f.scope: Deactivated successfully.
Dec  3 19:26:05 compute-0 podman[490316]: 2025-12-03 19:26:05.631324838 +0000 UTC m=+1.195199185 container died df528caf9c8a9142c787ee51828a9763179cebfcca806546338fc2434f4abd6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_heisenberg, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 19:26:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-f54095e44bad238f01fdb15220ac463d77e0ff11f2b781e00bc0b981b4744794-merged.mount: Deactivated successfully.
Dec  3 19:26:05 compute-0 podman[490316]: 2025-12-03 19:26:05.759735624 +0000 UTC m=+1.323609931 container remove df528caf9c8a9142c787ee51828a9763179cebfcca806546338fc2434f4abd6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec  3 19:26:05 compute-0 systemd[1]: libpod-conmon-df528caf9c8a9142c787ee51828a9763179cebfcca806546338fc2434f4abd6f.scope: Deactivated successfully.
Dec  3 19:26:05 compute-0 podman[490342]: 2025-12-03 19:26:05.841644488 +0000 UTC m=+0.150275654 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, release=1214.1726694543, version=9.4, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, release-0.7.12=, name=ubi9, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, distribution-scope=public, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec  3 19:26:05 compute-0 podman[490350]: 2025-12-03 19:26:05.860758189 +0000 UTC m=+0.161738610 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Dec  3 19:26:05 compute-0 podman[490349]: 2025-12-03 19:26:05.870422422 +0000 UTC m=+0.180909902 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi)
Dec  3 19:26:06 compute-0 podman[490547]: 2025-12-03 19:26:06.785730269 +0000 UTC m=+0.075723446 container create 0f91e587288cc278459bea4727c84a5569492d2157fa33a6d0f06d732a0396e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_cori, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 19:26:06 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2621: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:26:06 compute-0 podman[490547]: 2025-12-03 19:26:06.751667597 +0000 UTC m=+0.041660824 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:26:06 compute-0 systemd[1]: Started libpod-conmon-0f91e587288cc278459bea4727c84a5569492d2157fa33a6d0f06d732a0396e6.scope.
Dec  3 19:26:06 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:26:06 compute-0 podman[490547]: 2025-12-03 19:26:06.929417653 +0000 UTC m=+0.219410830 container init 0f91e587288cc278459bea4727c84a5569492d2157fa33a6d0f06d732a0396e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 19:26:06 compute-0 podman[490547]: 2025-12-03 19:26:06.944050076 +0000 UTC m=+0.234043223 container start 0f91e587288cc278459bea4727c84a5569492d2157fa33a6d0f06d732a0396e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_cori, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 19:26:06 compute-0 podman[490547]: 2025-12-03 19:26:06.951376062 +0000 UTC m=+0.241369299 container attach 0f91e587288cc278459bea4727c84a5569492d2157fa33a6d0f06d732a0396e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_cori, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Dec  3 19:26:06 compute-0 zen_cori[490564]: 167 167
Dec  3 19:26:06 compute-0 systemd[1]: libpod-0f91e587288cc278459bea4727c84a5569492d2157fa33a6d0f06d732a0396e6.scope: Deactivated successfully.
Dec  3 19:26:06 compute-0 podman[490547]: 2025-12-03 19:26:06.955358429 +0000 UTC m=+0.245351606 container died 0f91e587288cc278459bea4727c84a5569492d2157fa33a6d0f06d732a0396e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_cori, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 19:26:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-798453ec0432b2e4a181a5eb0ff53747465fcd570437eaa63f712e65615e23e7-merged.mount: Deactivated successfully.
Dec  3 19:26:07 compute-0 podman[490547]: 2025-12-03 19:26:07.028107403 +0000 UTC m=+0.318100560 container remove 0f91e587288cc278459bea4727c84a5569492d2157fa33a6d0f06d732a0396e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 19:26:07 compute-0 systemd[1]: libpod-conmon-0f91e587288cc278459bea4727c84a5569492d2157fa33a6d0f06d732a0396e6.scope: Deactivated successfully.
Dec  3 19:26:07 compute-0 podman[490587]: 2025-12-03 19:26:07.294964066 +0000 UTC m=+0.085733308 container create cc71cb8b4a7de884253417e4b3031ce079a8fba251836a1b0f717b0560bbd860 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_ellis, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec  3 19:26:07 compute-0 podman[490587]: 2025-12-03 19:26:07.255345521 +0000 UTC m=+0.046114853 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:26:07 compute-0 systemd[1]: Started libpod-conmon-cc71cb8b4a7de884253417e4b3031ce079a8fba251836a1b0f717b0560bbd860.scope.
Dec  3 19:26:07 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:26:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/422e8ebf0a63f273e6da753715e45988ab90faabf17f256000a9b83178ddaf73/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 19:26:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/422e8ebf0a63f273e6da753715e45988ab90faabf17f256000a9b83178ddaf73/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 19:26:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/422e8ebf0a63f273e6da753715e45988ab90faabf17f256000a9b83178ddaf73/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 19:26:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/422e8ebf0a63f273e6da753715e45988ab90faabf17f256000a9b83178ddaf73/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 19:26:07 compute-0 podman[490587]: 2025-12-03 19:26:07.424264543 +0000 UTC m=+0.215033795 container init cc71cb8b4a7de884253417e4b3031ce079a8fba251836a1b0f717b0560bbd860 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec  3 19:26:07 compute-0 podman[490587]: 2025-12-03 19:26:07.441093709 +0000 UTC m=+0.231862971 container start cc71cb8b4a7de884253417e4b3031ce079a8fba251836a1b0f717b0560bbd860 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec  3 19:26:07 compute-0 podman[490587]: 2025-12-03 19:26:07.448718603 +0000 UTC m=+0.239487885 container attach cc71cb8b4a7de884253417e4b3031ce079a8fba251836a1b0f717b0560bbd860 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  3 19:26:07 compute-0 nova_compute[348325]: 2025-12-03 19:26:07.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:26:07 compute-0 nova_compute[348325]: 2025-12-03 19:26:07.549 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 19:26:07 compute-0 nova_compute[348325]: 2025-12-03 19:26:07.549 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 19:26:07 compute-0 nova_compute[348325]: 2025-12-03 19:26:07.549 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 19:26:07 compute-0 nova_compute[348325]: 2025-12-03 19:26:07.550 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  3 19:26:07 compute-0 nova_compute[348325]: 2025-12-03 19:26:07.556 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 19:26:07 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:26:08 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 19:26:08 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4018585044' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 19:26:08 compute-0 nova_compute[348325]: 2025-12-03 19:26:08.046 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 19:26:08 compute-0 nova_compute[348325]: 2025-12-03 19:26:08.523 348329 WARNING nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  3 19:26:08 compute-0 nova_compute[348325]: 2025-12-03 19:26:08.526 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3895MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  3 19:26:08 compute-0 nova_compute[348325]: 2025-12-03 19:26:08.527 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 19:26:08 compute-0 nova_compute[348325]: 2025-12-03 19:26:08.528 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 19:26:08 compute-0 determined_ellis[490603]: {
Dec  3 19:26:08 compute-0 determined_ellis[490603]:    "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
Dec  3 19:26:08 compute-0 determined_ellis[490603]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:26:08 compute-0 determined_ellis[490603]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 19:26:08 compute-0 determined_ellis[490603]:        "osd_id": 1,
Dec  3 19:26:08 compute-0 determined_ellis[490603]:        "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 19:26:08 compute-0 determined_ellis[490603]:        "type": "bluestore"
Dec  3 19:26:08 compute-0 determined_ellis[490603]:    },
Dec  3 19:26:08 compute-0 determined_ellis[490603]:    "2abec9de-afba-437e-9a17-384a1dd8cd50": {
Dec  3 19:26:08 compute-0 determined_ellis[490603]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:26:08 compute-0 determined_ellis[490603]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 19:26:08 compute-0 determined_ellis[490603]:        "osd_id": 2,
Dec  3 19:26:08 compute-0 determined_ellis[490603]:        "osd_uuid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 19:26:08 compute-0 determined_ellis[490603]:        "type": "bluestore"
Dec  3 19:26:08 compute-0 determined_ellis[490603]:    },
Dec  3 19:26:08 compute-0 determined_ellis[490603]:    "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": {
Dec  3 19:26:08 compute-0 determined_ellis[490603]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:26:08 compute-0 determined_ellis[490603]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 19:26:08 compute-0 determined_ellis[490603]:        "osd_id": 0,
Dec  3 19:26:08 compute-0 determined_ellis[490603]:        "osd_uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 19:26:08 compute-0 determined_ellis[490603]:        "type": "bluestore"
Dec  3 19:26:08 compute-0 determined_ellis[490603]:    }
Dec  3 19:26:08 compute-0 determined_ellis[490603]: }
Dec  3 19:26:08 compute-0 nova_compute[348325]: 2025-12-03 19:26:08.623 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  3 19:26:08 compute-0 nova_compute[348325]: 2025-12-03 19:26:08.624 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  3 19:26:08 compute-0 systemd[1]: libpod-cc71cb8b4a7de884253417e4b3031ce079a8fba251836a1b0f717b0560bbd860.scope: Deactivated successfully.
Dec  3 19:26:08 compute-0 systemd[1]: libpod-cc71cb8b4a7de884253417e4b3031ce079a8fba251836a1b0f717b0560bbd860.scope: Consumed 1.165s CPU time.
Dec  3 19:26:08 compute-0 podman[490587]: 2025-12-03 19:26:08.644101042 +0000 UTC m=+1.434870284 container died cc71cb8b4a7de884253417e4b3031ce079a8fba251836a1b0f717b0560bbd860 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_ellis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 19:26:08 compute-0 nova_compute[348325]: 2025-12-03 19:26:08.645 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 19:26:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-422e8ebf0a63f273e6da753715e45988ab90faabf17f256000a9b83178ddaf73-merged.mount: Deactivated successfully.
Dec  3 19:26:08 compute-0 podman[490587]: 2025-12-03 19:26:08.761973963 +0000 UTC m=+1.552743205 container remove cc71cb8b4a7de884253417e4b3031ce079a8fba251836a1b0f717b0560bbd860 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  3 19:26:08 compute-0 systemd[1]: libpod-conmon-cc71cb8b4a7de884253417e4b3031ce079a8fba251836a1b0f717b0560bbd860.scope: Deactivated successfully.
Dec  3 19:26:08 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2622: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:26:08 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 19:26:08 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:26:08 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 19:26:08 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:26:08 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev f0967cca-93fa-448e-9bf2-db325105bcbd does not exist
Dec  3 19:26:08 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev d682b4f0-cc93-4fc2-97e2-f91241b9a5c5 does not exist
Dec  3 19:26:08 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:26:08 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:26:09 compute-0 nova_compute[348325]: 2025-12-03 19:26:09.097 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:26:09 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 19:26:09 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/704471027' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 19:26:09 compute-0 nova_compute[348325]: 2025-12-03 19:26:09.188 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 19:26:09 compute-0 nova_compute[348325]: 2025-12-03 19:26:09.197 348329 DEBUG nova.compute.provider_tree [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  3 19:26:09 compute-0 nova_compute[348325]: 2025-12-03 19:26:09.215 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  3 19:26:09 compute-0 nova_compute[348325]: 2025-12-03 19:26:09.217 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  3 19:26:09 compute-0 nova_compute[348325]: 2025-12-03 19:26:09.218 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 19:26:09 compute-0 nova_compute[348325]: 2025-12-03 19:26:09.379 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:26:10 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2623: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s
Dec  3 19:26:12 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:26:12 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2624: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 26 op/s
Dec  3 19:26:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:26:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:26:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:26:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:26:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:26:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:26:14 compute-0 nova_compute[348325]: 2025-12-03 19:26:14.101 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:26:14 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_19:26:14
Dec  3 19:26:14 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 19:26:14 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 19:26:14 compute-0 ceph-mgr[193091]: [balancer INFO root] pools ['images', 'default.rgw.log', 'volumes', 'backups', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root', '.mgr', 'default.rgw.control', 'vms', 'cephfs.cephfs.meta']
Dec  3 19:26:14 compute-0 ceph-mgr[193091]: [balancer INFO root] prepared 0/10 changes
Dec  3 19:26:14 compute-0 nova_compute[348325]: 2025-12-03 19:26:14.382 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:26:14 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2625: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 0 B/s wr, 45 op/s
Dec  3 19:26:14 compute-0 podman[490741]: 2025-12-03 19:26:14.835908269 +0000 UTC m=+0.122476794 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 19:26:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 19:26:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 19:26:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 19:26:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 19:26:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 19:26:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 19:26:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 19:26:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 19:26:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 19:26:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 19:26:16 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2626: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 56 op/s
Dec  3 19:26:17 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:26:18 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2627: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  3 19:26:19 compute-0 nova_compute[348325]: 2025-12-03 19:26:19.107 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:26:19 compute-0 nova_compute[348325]: 2025-12-03 19:26:19.384 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:26:20 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2628: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  3 19:26:21 compute-0 podman[490768]: 2025-12-03 19:26:21.037208885 +0000 UTC m=+0.180030551 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  3 19:26:21 compute-0 podman[490767]: 2025-12-03 19:26:21.061791968 +0000 UTC m=+0.214651326 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  3 19:26:22 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:26:22 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2629: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 56 op/s
Dec  3 19:26:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:26:23.391 286999 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 19:26:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:26:23.391 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 19:26:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:26:23.392 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 19:26:24 compute-0 nova_compute[348325]: 2025-12-03 19:26:24.111 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:26:24 compute-0 nova_compute[348325]: 2025-12-03 19:26:24.388 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:26:24 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2630: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 33 op/s
Dec  3 19:26:25 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 19:26:25 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:26:25 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 19:26:25 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:26:25 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  3 19:26:25 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:26:25 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:26:25 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:26:25 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:26:25 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:26:25 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec  3 19:26:25 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:26:25 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 19:26:25 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:26:25 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:26:25 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:26:25 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 19:26:25 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:26:25 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 19:26:25 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:26:25 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:26:25 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:26:25 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 19:26:26 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2631: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 8.5 KiB/s rd, 0 B/s wr, 14 op/s
Dec  3 19:26:27 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:26:28 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2632: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 0 B/s wr, 3 op/s
Dec  3 19:26:29 compute-0 nova_compute[348325]: 2025-12-03 19:26:29.115 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:26:29 compute-0 nova_compute[348325]: 2025-12-03 19:26:29.391 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:26:29 compute-0 podman[158200]: time="2025-12-03T19:26:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 19:26:29 compute-0 podman[158200]: @ - - [03/Dec/2025:19:26:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42578 "" "Go-http-client/1.1"
Dec  3 19:26:29 compute-0 podman[158200]: @ - - [03/Dec/2025:19:26:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8204 "" "Go-http-client/1.1"
Dec  3 19:26:30 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2633: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:26:30 compute-0 podman[490808]: 2025-12-03 19:26:30.945412687 +0000 UTC m=+0.103218959 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:26:30 compute-0 podman[490810]: 2025-12-03 19:26:30.949264771 +0000 UTC m=+0.097834410 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, release=1755695350, io.buildah.version=1.33.7, version=9.6, io.openshift.tags=minimal rhel9)
Dec  3 19:26:30 compute-0 podman[490809]: 2025-12-03 19:26:30.95301009 +0000 UTC m=+0.094496719 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  3 19:26:31 compute-0 openstack_network_exporter[365222]: ERROR   19:26:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 19:26:31 compute-0 openstack_network_exporter[365222]: ERROR   19:26:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:26:31 compute-0 openstack_network_exporter[365222]: ERROR   19:26:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:26:31 compute-0 openstack_network_exporter[365222]: ERROR   19:26:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 19:26:31 compute-0 openstack_network_exporter[365222]: ERROR   19:26:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 19:26:32 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:26:32 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2634: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:26:34 compute-0 nova_compute[348325]: 2025-12-03 19:26:34.121 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:26:34 compute-0 nova_compute[348325]: 2025-12-03 19:26:34.393 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:26:34 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2635: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:26:36 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2636: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:26:36 compute-0 podman[490872]: 2025-12-03 19:26:36.969675048 +0000 UTC m=+0.114450518 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  3 19:26:36 compute-0 podman[490870]: 2025-12-03 19:26:36.981042764 +0000 UTC m=+0.146230449 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.component=ubi9-container, vcs-type=git, build-date=2024-09-18T21:23:30, distribution-scope=public, name=ubi9, release=1214.1726694543, architecture=x86_64)
Dec  3 19:26:37 compute-0 podman[490871]: 2025-12-03 19:26:37.003097619 +0000 UTC m=+0.152253925 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm)
Dec  3 19:26:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 19:26:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3550190454' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 19:26:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 19:26:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3550190454' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 19:26:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:26:38 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2637: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:26:39 compute-0 nova_compute[348325]: 2025-12-03 19:26:39.125 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:26:39 compute-0 nova_compute[348325]: 2025-12-03 19:26:39.218 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:26:39 compute-0 nova_compute[348325]: 2025-12-03 19:26:39.395 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:26:40 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2638: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:26:41 compute-0 nova_compute[348325]: 2025-12-03 19:26:41.478 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:26:42 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:26:42 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2639: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:26:44 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:26:44 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:26:44 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:26:44 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:26:44 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:26:44 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:26:44 compute-0 nova_compute[348325]: 2025-12-03 19:26:44.131 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:26:44 compute-0 nova_compute[348325]: 2025-12-03 19:26:44.398 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:26:44 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2640: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:26:45 compute-0 nova_compute[348325]: 2025-12-03 19:26:45.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:26:45 compute-0 nova_compute[348325]: 2025-12-03 19:26:45.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:26:45 compute-0 podman[490925]: 2025-12-03 19:26:45.972347734 +0000 UTC m=+0.135356345 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  3 19:26:46 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2641: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:26:47 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:26:48 compute-0 nova_compute[348325]: 2025-12-03 19:26:48.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:26:48 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2642: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:26:49 compute-0 nova_compute[348325]: 2025-12-03 19:26:49.138 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:26:49 compute-0 nova_compute[348325]: 2025-12-03 19:26:49.401 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:26:49 compute-0 nova_compute[348325]: 2025-12-03 19:26:49.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:26:49 compute-0 nova_compute[348325]: 2025-12-03 19:26:49.487 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  3 19:26:49 compute-0 nova_compute[348325]: 2025-12-03 19:26:49.487 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  3 19:26:49 compute-0 nova_compute[348325]: 2025-12-03 19:26:49.639 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
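The req-6c3bffd1 tasks threaded through this section (_poll_rescued_instances, _check_instance_build_time, _heal_instance_info_cache, and so on) are all driven by oslo.service's periodic-task machinery, whose run_periodic_tasks helper stamps the "Running periodic task ..." lines above. A minimal sketch of that mechanism, assuming oslo.config and oslo.service are installed; the manager class and task body are illustrative stand-ins, not nova's actual code:

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(cfg.CONF)

        @periodic_task.periodic_task(spacing=60)  # run at most once per minute
        def _heal_info_cache(self, context):
            # stand-in for work like "Rebuilding the list of instances to heal"
            pass

    mgr = Manager()
    # Each call logs "Running periodic task Manager._heal_info_cache" at DEBUG,
    # matching the periodic_task.py:210 lines in this journal.
    mgr.run_periodic_tasks(context=None)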
Dec  3 19:26:50 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2643: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:26:51 compute-0 nova_compute[348325]: 2025-12-03 19:26:51.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:26:51 compute-0 podman[490949]: 2025-12-03 19:26:51.903788167 +0000 UTC m=+0.068193486 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20251125)
Dec  3 19:26:51 compute-0 podman[490948]: 2025-12-03 19:26:51.963511425 +0000 UTC m=+0.129969605 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  3 19:26:52 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:26:52 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2644: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:26:54 compute-0 nova_compute[348325]: 2025-12-03 19:26:54.142 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:26:54 compute-0 nova_compute[348325]: 2025-12-03 19:26:54.405 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:26:54 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2645: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:26:55 compute-0 nova_compute[348325]: 2025-12-03 19:26:55.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:26:55 compute-0 nova_compute[348325]: 2025-12-03 19:26:55.487 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  3 19:26:56 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2646: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:26:57 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:26:58 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2647: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:26:59 compute-0 nova_compute[348325]: 2025-12-03 19:26:59.148 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:26:59 compute-0 nova_compute[348325]: 2025-12-03 19:26:59.408 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:26:59 compute-0 podman[158200]: time="2025-12-03T19:26:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 19:26:59 compute-0 podman[158200]: @ - - [03/Dec/2025:19:26:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42578 "" "Go-http-client/1.1"
Dec  3 19:26:59 compute-0 podman[158200]: @ - - [03/Dec/2025:19:26:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8204 "" "Go-http-client/1.1"
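The two GET requests above are served by the podman system service listening on /run/podman/podman.sock (the same socket the podman_exporter container mounts via its CONTAINER_HOST setting earlier). A sketch of issuing the first request from Python over that unix socket; the helper class is illustrative, and the API version path is copied from the log line:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client transport that dials a unix socket instead of TCP."""

        def __init__(self, path="/run/podman/podman.sock"):
            super().__init__("localhost")  # host is only used for the Host: header
            self.unix_path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.unix_path)

    conn = UnixHTTPConnection()
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")  # the logged response was 42578 bytes of JSON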
Dec  3 19:27:00 compute-0 nova_compute[348325]: 2025-12-03 19:27:00.479 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:27:00 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2648: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:27:01 compute-0 openstack_network_exporter[365222]: ERROR   19:27:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 19:27:01 compute-0 openstack_network_exporter[365222]: ERROR   19:27:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:27:01 compute-0 openstack_network_exporter[365222]: ERROR   19:27:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:27:01 compute-0 openstack_network_exporter[365222]: ERROR   19:27:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 19:27:01 compute-0 openstack_network_exporter[365222]: ERROR   19:27:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 19:27:01 compute-0 podman[490993]: 2025-12-03 19:27:01.959842072 +0000 UTC m=+0.099527346 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
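Worth noting in the node_exporter config above is the systemd collector filter: --collector.systemd.unit-include carries the regex (edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service (the doubled backslash in the log is just Python-literal escaping). node_exporter anchors include patterns, so full-string matching is the closer analogue; a quick check of which units pass the filter, with the regex copied verbatim and the unit names as hypothetical examples:

    import re

    unit_include = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")

    for unit in ["edpm_nova_compute.service", "ovs-vswitchd.service",
                 "virtqemud.service", "rsyslog.service", "sshd.service"]:
        print(unit, bool(unit_include.fullmatch(unit)))
    # Everything but sshd.service matches, so only the EDPM/OVS/virt/rsyslog
    # units show up as systemd metrics from this exporter.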
Dec  3 19:27:02 compute-0 podman[490994]: 2025-12-03 19:27:02.006150775 +0000 UTC m=+0.141825472 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, version=9.6, architecture=x86_64, config_id=edpm, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, vcs-type=git, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350)
Dec  3 19:27:02 compute-0 podman[490992]: 2025-12-03 19:27:02.007428326 +0000 UTC m=+0.154686124 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 19:27:02 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:27:02 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2649: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:27:04 compute-0 nova_compute[348325]: 2025-12-03 19:27:04.153 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:27:04 compute-0 nova_compute[348325]: 2025-12-03 19:27:04.411 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:27:04 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2650: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:27:06 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2651: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:27:07 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:27:07 compute-0 podman[491052]: 2025-12-03 19:27:07.976085543 +0000 UTC m=+0.126526741 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, release=1214.1726694543, build-date=2024-09-18T21:23:30, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., release-0.7.12=, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, architecture=x86_64, container_name=kepler, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  3 19:27:07 compute-0 podman[491053]: 2025-12-03 19:27:07.977053376 +0000 UTC m=+0.130642280 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  3 19:27:08 compute-0 podman[491054]: 2025-12-03 19:27:08.002670649 +0000 UTC m=+0.141034784 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 19:27:08 compute-0 nova_compute[348325]: 2025-12-03 19:27:08.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:27:08 compute-0 nova_compute[348325]: 2025-12-03 19:27:08.528 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 19:27:08 compute-0 nova_compute[348325]: 2025-12-03 19:27:08.528 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 19:27:08 compute-0 nova_compute[348325]: 2025-12-03 19:27:08.528 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 19:27:08 compute-0 nova_compute[348325]: 2025-12-03 19:27:08.528 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  3 19:27:08 compute-0 nova_compute[348325]: 2025-12-03 19:27:08.529 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 19:27:08 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2652: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:27:09 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 19:27:09 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3443982326' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 19:27:09 compute-0 nova_compute[348325]: 2025-12-03 19:27:09.036 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 19:27:09 compute-0 nova_compute[348325]: 2025-12-03 19:27:09.156 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:27:09 compute-0 nova_compute[348325]: 2025-12-03 19:27:09.414 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:27:09 compute-0 nova_compute[348325]: 2025-12-03 19:27:09.487 348329 WARNING nova.virt.libvirt.driver [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  3 19:27:09 compute-0 nova_compute[348325]: 2025-12-03 19:27:09.489 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3965MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
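The pci_devices payload in the resource view above is plain JSON, so it can be inspected directly; grouping by vendor_id separates the emulated Intel chipset functions (8086) from the virtio devices (1af4). A short sketch, with only the first two entries inlined for brevity (the full list above has 11):

    import json
    from collections import Counter

    pci_json = '''[
      {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020",
       "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"},
      {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001",
       "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}
    ]'''

    by_vendor = Counter(dev["vendor_id"] for dev in json.loads(pci_json))
    print(by_vendor)
    # On the full list above: 5 x 8086 (chipset) and 6 x 1af4 (virtio),
    # none of which reports a NUMA node on this virtual host.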
Dec  3 19:27:09 compute-0 nova_compute[348325]: 2025-12-03 19:27:09.489 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 19:27:09 compute-0 nova_compute[348325]: 2025-12-03 19:27:09.489 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 19:27:09 compute-0 nova_compute[348325]: 2025-12-03 19:27:09.810 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  3 19:27:09 compute-0 nova_compute[348325]: 2025-12-03 19:27:09.810 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  3 19:27:09 compute-0 nova_compute[348325]: 2025-12-03 19:27:09.898 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 19:27:10 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 19:27:10 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 19:27:10 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 19:27:10 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 19:27:10 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 19:27:10 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:27:10 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 330a7847-da54-427a-934f-a53c35f54717 does not exist
Dec  3 19:27:10 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 9dc01456-2c7c-41b8-b197-4f27753ef548 does not exist
Dec  3 19:27:10 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev e436374d-2eaf-48fb-9999-ae89543800b2 does not exist
Dec  3 19:27:10 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 19:27:10 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 19:27:10 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 19:27:10 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 19:27:10 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 19:27:10 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 19:27:10 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 19:27:10 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:27:10 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 19:27:10 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 19:27:10 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/831832187' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 19:27:10 compute-0 nova_compute[348325]: 2025-12-03 19:27:10.421 348329 DEBUG oslo_concurrency.processutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 19:27:10 compute-0 nova_compute[348325]: 2025-12-03 19:27:10.434 348329 DEBUG nova.compute.provider_tree [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed in ProviderTree for provider: 00cd1895-22aa-49c6-bdb2-0991af662704 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  3 19:27:10 compute-0 nova_compute[348325]: 2025-12-03 19:27:10.461 348329 DEBUG nova.scheduler.client.report [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Inventory has not changed for provider 00cd1895-22aa-49c6-bdb2-0991af662704 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
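Placement derives usable capacity from this inventory as (total - reserved) * allocation_ratio per resource class; min_unit, max_unit and step_size then constrain individual allocations. Working that through for the values just reported:

    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        usable = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, usable)
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2 -- an oversubscribed vCPU pool
    # and a deliberately undersubscribed disk (ratio 0.9) on this node.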
Dec  3 19:27:10 compute-0 nova_compute[348325]: 2025-12-03 19:27:10.463 348329 DEBUG nova.compute.resource_tracker [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  3 19:27:10 compute-0 nova_compute[348325]: 2025-12-03 19:27:10.463 348329 DEBUG oslo_concurrency.lockutils [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.974s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 19:27:10 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2653: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:27:11 compute-0 podman[491416]: 2025-12-03 19:27:11.401186871 +0000 UTC m=+0.100840708 container create b0dfb5c8f6eb2894dbd379eb70de46eb29f0f3bbd395c7351ab68013213d9a51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec  3 19:27:11 compute-0 podman[491416]: 2025-12-03 19:27:11.362516142 +0000 UTC m=+0.062170039 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:27:11 compute-0 systemd[1]: Started libpod-conmon-b0dfb5c8f6eb2894dbd379eb70de46eb29f0f3bbd395c7351ab68013213d9a51.scope.
Dec  3 19:27:11 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:27:11 compute-0 podman[491416]: 2025-12-03 19:27:11.55609879 +0000 UTC m=+0.255752687 container init b0dfb5c8f6eb2894dbd379eb70de46eb29f0f3bbd395c7351ab68013213d9a51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_blackburn, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 19:27:11 compute-0 podman[491416]: 2025-12-03 19:27:11.575582792 +0000 UTC m=+0.275236619 container start b0dfb5c8f6eb2894dbd379eb70de46eb29f0f3bbd395c7351ab68013213d9a51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec  3 19:27:11 compute-0 boring_blackburn[491432]: 167 167
Dec  3 19:27:11 compute-0 systemd[1]: libpod-b0dfb5c8f6eb2894dbd379eb70de46eb29f0f3bbd395c7351ab68013213d9a51.scope: Deactivated successfully.
Dec  3 19:27:11 compute-0 podman[491416]: 2025-12-03 19:27:11.589024568 +0000 UTC m=+0.288678455 container attach b0dfb5c8f6eb2894dbd379eb70de46eb29f0f3bbd395c7351ab68013213d9a51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 19:27:11 compute-0 podman[491416]: 2025-12-03 19:27:11.590981455 +0000 UTC m=+0.290635272 container died b0dfb5c8f6eb2894dbd379eb70de46eb29f0f3bbd395c7351ab68013213d9a51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_blackburn, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec  3 19:27:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-317d82ed00098380095b0c1dd7f387ada9b5c1af7d847a421300e1946ebba76e-merged.mount: Deactivated successfully.
Dec  3 19:27:11 compute-0 podman[491416]: 2025-12-03 19:27:11.652596171 +0000 UTC m=+0.352249988 container remove b0dfb5c8f6eb2894dbd379eb70de46eb29f0f3bbd395c7351ab68013213d9a51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:27:11 compute-0 systemd[1]: libpod-conmon-b0dfb5c8f6eb2894dbd379eb70de46eb29f0f3bbd395c7351ab68013213d9a51.scope: Deactivated successfully.
Dec  3 19:27:11 compute-0 podman[491456]: 2025-12-03 19:27:11.917253813 +0000 UTC m=+0.089014851 container create 5d1d3d6424f88c9b48ccb6e8e3ed3a8c0ac3cc61f20404838452c3307b0f78a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec  3 19:27:11 compute-0 podman[491456]: 2025-12-03 19:27:11.876845883 +0000 UTC m=+0.048606981 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:27:11 compute-0 systemd[1]: Started libpod-conmon-5d1d3d6424f88c9b48ccb6e8e3ed3a8c0ac3cc61f20404838452c3307b0f78a1.scope.
Dec  3 19:27:12 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:27:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25c59a3954b44279742e79654265392ea07eacbdd72c3cb88fe7c5818eb39d95/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 19:27:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25c59a3954b44279742e79654265392ea07eacbdd72c3cb88fe7c5818eb39d95/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 19:27:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25c59a3954b44279742e79654265392ea07eacbdd72c3cb88fe7c5818eb39d95/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 19:27:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25c59a3954b44279742e79654265392ea07eacbdd72c3cb88fe7c5818eb39d95/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 19:27:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25c59a3954b44279742e79654265392ea07eacbdd72c3cb88fe7c5818eb39d95/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
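The repeated kernel notice marks the timestamp ceiling of XFS inodes without the newer bigtime format: 0x7fffffff is the largest 32-bit signed time_t, which is why these overlay mounts are only guaranteed until 2038. Decoding the constant:

    from datetime import datetime, timezone

    limit = 0x7FFFFFFF  # max 32-bit signed time_t, as printed by the kernel above
    print(datetime.fromtimestamp(limit, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00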
Dec  3 19:27:12 compute-0 podman[491456]: 2025-12-03 19:27:12.085205637 +0000 UTC m=+0.256966675 container init 5d1d3d6424f88c9b48ccb6e8e3ed3a8c0ac3cc61f20404838452c3307b0f78a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_heyrovsky, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec  3 19:27:12 compute-0 podman[491456]: 2025-12-03 19:27:12.102846596 +0000 UTC m=+0.274607614 container start 5d1d3d6424f88c9b48ccb6e8e3ed3a8c0ac3cc61f20404838452c3307b0f78a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_heyrovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:27:12 compute-0 podman[491456]: 2025-12-03 19:27:12.107695784 +0000 UTC m=+0.279456792 container attach 5d1d3d6424f88c9b48ccb6e8e3ed3a8c0ac3cc61f20404838452c3307b0f78a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_heyrovsky, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec  3 19:27:12 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:27:12 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2654: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:27:13 compute-0 suspicious_heyrovsky[491472]: --> passed data devices: 0 physical, 3 LVM
Dec  3 19:27:13 compute-0 suspicious_heyrovsky[491472]: --> relative data size: 1.0
Dec  3 19:27:13 compute-0 suspicious_heyrovsky[491472]: --> All data devices are unavailable
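The three container lines above read like ceph-volume's device-selection report: three LVM data devices were offered, none physical, and all were rejected as unavailable, so this pass would prepare no new OSDs. A small sketch for surfacing such passes from exported journal text; the marker string is copied from this log, everything else is an assumption:

    # Sketch: print ceph-volume passes in which every data device was rejected.
    # Hypothetical usage: journalctl | python3 find_unavailable.py
    import sys

    MARKER = "--> All data devices are unavailable"

    hits = [line.rstrip() for line in sys.stdin if MARKER in line]
    print(f"{len(hits)} pass(es) found no usable data devices")
    for h in hits:
        print(h)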
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.267 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them, so the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.268 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.268 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.269 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7eff8d7fffe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.269 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.270 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff9026f920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.270 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.270 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.271 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffa10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.271 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8daba2d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.273 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.273 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff90799b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.273 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.273 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8f46ebd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.274 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.274 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.274 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.275 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.275 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.275 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.275 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d8a8650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.276 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.276 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.276 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ffef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.277 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.277 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7fff50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.277 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7ff7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.278 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8d7fffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.278 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7eff8ef7c7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7eff8fdf00b0>] with cache [{}], pollster history [{'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
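As the registration lines above show, every pollster from this source is handed to a single ThreadPoolExecutor that was created with one worker thread, so the tasks run serially. A minimal sketch of that pattern; the meter names are real ones from this cycle, but the poll() body is a stand-in, not ceilometer code:

    # Sketch: many polling tasks on a one-worker executor run one at a time,
    # mirroring the "Registering pollster ... via executor" lines above.
    from concurrent.futures import ThreadPoolExecutor

    POLLSTERS = ["network.incoming.packets.error", "network.outgoing.bytes", "cpu"]

    def poll(name: str) -> str:
        # Real pollsters run discovery first and skip themselves when it
        # returns nothing, as the "Skip pollster" lines below show.
        return f"{name}: no resources found this cycle"

    with ThreadPoolExecutor(max_workers=1) as executor:
        for result in executor.map(poll, POLLSTERS):
            print(result)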
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.271 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.279 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7eff8d8a80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.279 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.280 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7eff8d8a8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.280 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.281 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7eff8d8a8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.281 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.282 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7eff8d8a81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.282 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.283 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7eff8d7ff9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.283 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.284 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7eff8d7fe840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.285 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.285 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7eff8d8a82c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.285 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.286 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7eff8d7ff9b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.286 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.286 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7eff8d8a8350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.286 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.286 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7eff8f682330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.286 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.287 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7eff8d7ff4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.287 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.287 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7eff8d930c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.287 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.287 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7eff8d7ff4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.288 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.288 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7eff8d7ff530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.288 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.288 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7eff8d7ff590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.288 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.289 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7eff8d7ff5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.289 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.289 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7eff8d8a8620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.289 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.289 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7eff8d7ff650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.290 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.290 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7eff8d7ff6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.290 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.290 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7eff8d7ffa40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.290 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.291 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7eff8d7ff710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.291 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.291 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7eff8d7fff20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.291 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.292 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7eff8d7ff770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.292 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.292 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7eff8d7fff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.292 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.292 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7eff8d7fdac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7eff8d9b7ef0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.292 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
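Every pollster in this cycle was skipped because the local_instances discovery returned an empty list: no instances are running on this compute node yet. A sketch for tallying the skips per meter from exported journal text; the message shape is copied from the lines above:

    # Sketch: count "Skip pollster <meter>, ..." messages per meter.
    # Hypothetical usage: journalctl | python3 tally_skips.py
    import re
    import sys
    from collections import Counter

    SKIP = re.compile(r"Skip pollster (\S+),")

    counts = Counter(m.group(1) for line in sys.stdin if (m := SKIP.search(line)))
    for meter, n in counts.most_common():
        print(f"{meter}\t{n}")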
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.293 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.293 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.294 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.294 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.294 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.294 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.294 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.295 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.295 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.295 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.295 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.296 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.296 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.296 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.296 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.297 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.297 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.297 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.297 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.298 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.298 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.298 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.298 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.298 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.299 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 19:27:13 compute-0 ceilometer_agent_compute[359790]: 2025-12-03 19:27:13.299 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
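The whole cycle, from the first DEBUG line at 19:27:13.267 to the last "Finished processing" at 19:27:13.299, took roughly 32 ms, consistent with discovery returning nothing to poll. The arithmetic, using the agent's own timestamps from this log:

    # The two timestamps below are copied from the lines above.
    from datetime import datetime, timedelta

    FMT = "%Y-%m-%d %H:%M:%S.%f"
    start = datetime.strptime("2025-12-03 19:27:13.267", FMT)
    end = datetime.strptime("2025-12-03 19:27:13.299", FMT)
    print((end - start) / timedelta(milliseconds=1), "ms")  # 32.0 ms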
Dec  3 19:27:13 compute-0 systemd[1]: libpod-5d1d3d6424f88c9b48ccb6e8e3ed3a8c0ac3cc61f20404838452c3307b0f78a1.scope: Deactivated successfully.
Dec  3 19:27:13 compute-0 podman[491456]: 2025-12-03 19:27:13.302011383 +0000 UTC m=+1.473772391 container died 5d1d3d6424f88c9b48ccb6e8e3ed3a8c0ac3cc61f20404838452c3307b0f78a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_heyrovsky, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  3 19:27:13 compute-0 systemd[1]: libpod-5d1d3d6424f88c9b48ccb6e8e3ed3a8c0ac3cc61f20404838452c3307b0f78a1.scope: Consumed 1.155s CPU time.
Dec  3 19:27:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-25c59a3954b44279742e79654265392ea07eacbdd72c3cb88fe7c5818eb39d95-merged.mount: Deactivated successfully.
Dec  3 19:27:13 compute-0 podman[491456]: 2025-12-03 19:27:13.402268336 +0000 UTC m=+1.574029344 container remove 5d1d3d6424f88c9b48ccb6e8e3ed3a8c0ac3cc61f20404838452c3307b0f78a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_heyrovsky, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec  3 19:27:13 compute-0 systemd[1]: libpod-conmon-5d1d3d6424f88c9b48ccb6e8e3ed3a8c0ac3cc61f20404838452c3307b0f78a1.scope: Deactivated successfully.
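That completes one short-lived container run: podman logged init, start, and attach for 5d1d3d64..., the payload printed its report, and the container then died and was removed, with systemd tearing down the libpod and conmon scopes. A sketch for checking that a container's lifecycle events arrived in that order; the verb sequence comes from this log, and the parsing is deliberately simplified:

    # Sketch: confirm lifecycle events for one container follow the order
    # seen above (init -> start -> attach -> died -> remove).
    EXPECTED = ["init", "start", "attach", "died", "remove"]

    def in_order(events: list[str]) -> bool:
        seen = [e for e in events if e in EXPECTED]
        return seen == EXPECTED

    print(in_order(["init", "start", "attach", "died", "remove"]))  # True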
Dec  3 19:27:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:27:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:27:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:27:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:27:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:27:14 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:27:14 compute-0 ceph-mgr[193091]: [balancer INFO root] Optimize plan auto_2025-12-03_19:27:14
Dec  3 19:27:14 compute-0 ceph-mgr[193091]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 19:27:14 compute-0 ceph-mgr[193091]: [balancer INFO root] do_upmap
Dec  3 19:27:14 compute-0 ceph-mgr[193091]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.meta', 'vms', 'backups', 'images', 'volumes', 'default.rgw.log', 'default.rgw.meta', 'default.rgw.control', '.mgr', 'cephfs.cephfs.data']
Dec  3 19:27:14 compute-0 ceph-mgr[193091]: [balancer INFO root] prepared 0/10 changes
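"prepared 0/10 changes" means the upmap balancer evaluated the listed pools and found nothing worth moving: all 321 PGs are already active+clean and evenly placed. A sketch for reading the balancer state the way an operator would, assuming the ceph CLI and admin credentials are available on the host:

    # Sketch: query the mgr balancer module; the keys read here ("active",
    # "mode") appear in `ceph balancer status` JSON output.
    import json
    import subprocess

    status = json.loads(subprocess.run(
        ["ceph", "balancer", "status", "--format", "json"],
        capture_output=True, text=True, check=True).stdout)
    print(status.get("active"), status.get("mode"))  # e.g. True upmap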
Dec  3 19:27:14 compute-0 nova_compute[348325]: 2025-12-03 19:27:14.160 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:27:14 compute-0 nova_compute[348325]: 2025-12-03 19:27:14.416 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:27:14 compute-0 podman[491654]: 2025-12-03 19:27:14.698229982 +0000 UTC m=+0.116570271 container create a0fabd728ed2b7c7b5d5ff125378820982b4eb6ba00067135731155a56103f6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec  3 19:27:14 compute-0 podman[491654]: 2025-12-03 19:27:14.660897726 +0000 UTC m=+0.079238085 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:27:14 compute-0 systemd[1]: Started libpod-conmon-a0fabd728ed2b7c7b5d5ff125378820982b4eb6ba00067135731155a56103f6a.scope.
Dec  3 19:27:14 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:27:14 compute-0 podman[491654]: 2025-12-03 19:27:14.839714425 +0000 UTC m=+0.258054754 container init a0fabd728ed2b7c7b5d5ff125378820982b4eb6ba00067135731155a56103f6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_robinson, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 19:27:14 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2655: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:27:14 compute-0 podman[491654]: 2025-12-03 19:27:14.857955837 +0000 UTC m=+0.276296146 container start a0fabd728ed2b7c7b5d5ff125378820982b4eb6ba00067135731155a56103f6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec  3 19:27:14 compute-0 podman[491654]: 2025-12-03 19:27:14.864744322 +0000 UTC m=+0.283084651 container attach a0fabd728ed2b7c7b5d5ff125378820982b4eb6ba00067135731155a56103f6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_robinson, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 19:27:14 compute-0 recursing_robinson[491669]: 167 167
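The container's only output, "167 167", looks like a uid/gid pair; 167:167 is the conventional ceph user and group in Red Hat-family Ceph packaging, so this is most likely a probe for the account that should own the OSD files. A sketch of the equivalent lookup on a host with the ceph packages installed (the name resolution is the assumption here):

    # Sketch: resolve uid/gid 167 to account names; expected to print
    # "ceph ceph" on hosts with Red Hat-family ceph packages installed.
    import grp
    import pwd

    print(pwd.getpwuid(167).pw_name, grp.getgrgid(167).gr_name)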
Dec  3 19:27:14 compute-0 systemd[1]: libpod-a0fabd728ed2b7c7b5d5ff125378820982b4eb6ba00067135731155a56103f6a.scope: Deactivated successfully.
Dec  3 19:27:14 compute-0 podman[491654]: 2025-12-03 19:27:14.872029249 +0000 UTC m=+0.290369518 container died a0fabd728ed2b7c7b5d5ff125378820982b4eb6ba00067135731155a56103f6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_robinson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True)
Dec  3 19:27:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 19:27:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 19:27:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 19:27:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 19:27:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 19:27:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 19:27:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 19:27:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 19:27:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 19:27:14 compute-0 ceph-mgr[193091]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 19:27:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-7744e972b934ab9b153c8654f484164746264c977d7bc7e76b99fab27fb5b72d-merged.mount: Deactivated successfully.
Dec  3 19:27:14 compute-0 podman[491654]: 2025-12-03 19:27:14.932052775 +0000 UTC m=+0.350393064 container remove a0fabd728ed2b7c7b5d5ff125378820982b4eb6ba00067135731155a56103f6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_robinson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  3 19:27:14 compute-0 systemd[1]: libpod-conmon-a0fabd728ed2b7c7b5d5ff125378820982b4eb6ba00067135731155a56103f6a.scope: Deactivated successfully.
Dec  3 19:27:15 compute-0 podman[491694]: 2025-12-03 19:27:15.1922844 +0000 UTC m=+0.086382248 container create 6c6d053d5bfa3049a34342fa4d853e1bf44d08930f40817e53e6f0b70ab3a738 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_nash, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec  3 19:27:15 compute-0 podman[491694]: 2025-12-03 19:27:15.158756546 +0000 UTC m=+0.052854444 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:27:15 compute-0 systemd[1]: Started libpod-conmon-6c6d053d5bfa3049a34342fa4d853e1bf44d08930f40817e53e6f0b70ab3a738.scope.
Dec  3 19:27:15 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:27:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f497c61d3f15a96409aaeaf2215f2c44bdef00d4afd2b0c1f74c2b0bef9c9bbd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 19:27:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f497c61d3f15a96409aaeaf2215f2c44bdef00d4afd2b0c1f74c2b0bef9c9bbd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 19:27:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f497c61d3f15a96409aaeaf2215f2c44bdef00d4afd2b0c1f74c2b0bef9c9bbd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 19:27:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f497c61d3f15a96409aaeaf2215f2c44bdef00d4afd2b0c1f74c2b0bef9c9bbd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 19:27:15 compute-0 podman[491694]: 2025-12-03 19:27:15.356027253 +0000 UTC m=+0.250125151 container init 6c6d053d5bfa3049a34342fa4d853e1bf44d08930f40817e53e6f0b70ab3a738 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_nash, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:27:15 compute-0 podman[491694]: 2025-12-03 19:27:15.381187073 +0000 UTC m=+0.275284901 container start 6c6d053d5bfa3049a34342fa4d853e1bf44d08930f40817e53e6f0b70ab3a738 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_nash, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec  3 19:27:15 compute-0 podman[491694]: 2025-12-03 19:27:15.386999694 +0000 UTC m=+0.281097592 container attach 6c6d053d5bfa3049a34342fa4d853e1bf44d08930f40817e53e6f0b70ab3a738 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec  3 19:27:16 compute-0 gallant_nash[491712]: {
Dec  3 19:27:16 compute-0 gallant_nash[491712]:    "0": [
Dec  3 19:27:16 compute-0 gallant_nash[491712]:        {
Dec  3 19:27:16 compute-0 gallant_nash[491712]:            "devices": [
Dec  3 19:27:16 compute-0 gallant_nash[491712]:                "/dev/loop3"
Dec  3 19:27:16 compute-0 gallant_nash[491712]:            ],
Dec  3 19:27:16 compute-0 gallant_nash[491712]:            "lv_name": "ceph_lv0",
Dec  3 19:27:16 compute-0 gallant_nash[491712]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 19:27:16 compute-0 gallant_nash[491712]:            "lv_size": "21470642176",
Dec  3 19:27:16 compute-0 gallant_nash[491712]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=973fbbc8-5aff-4a53-bee8-42e5a6788dd6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 19:27:16 compute-0 gallant_nash[491712]:            "lv_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 19:27:16 compute-0 gallant_nash[491712]:            "name": "ceph_lv0",
Dec  3 19:27:16 compute-0 gallant_nash[491712]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 19:27:16 compute-0 gallant_nash[491712]:            "tags": {
Dec  3 19:27:16 compute-0 gallant_nash[491712]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 19:27:16 compute-0 gallant_nash[491712]:                "ceph.block_uuid": "8iJKKR-2g52-YyBJ-oPBu-zYju-D0iG-KDGQe4",
Dec  3 19:27:16 compute-0 gallant_nash[491712]:                "ceph.cephx_lockbox_secret": "",
Dec  3 19:27:16 compute-0 gallant_nash[491712]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:27:16 compute-0 gallant_nash[491712]:                "ceph.cluster_name": "ceph",
Dec  3 19:27:16 compute-0 gallant_nash[491712]:                "ceph.crush_device_class": "",
Dec  3 19:27:16 compute-0 gallant_nash[491712]:                "ceph.encrypted": "0",
Dec  3 19:27:16 compute-0 gallant_nash[491712]:                "ceph.osd_fsid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 19:27:16 compute-0 gallant_nash[491712]:                "ceph.osd_id": "0",
Dec  3 19:27:16 compute-0 gallant_nash[491712]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 19:27:16 compute-0 gallant_nash[491712]:                "ceph.type": "block",
Dec  3 19:27:16 compute-0 gallant_nash[491712]:                "ceph.vdo": "0"
Dec  3 19:27:16 compute-0 gallant_nash[491712]:            },
Dec  3 19:27:16 compute-0 gallant_nash[491712]:            "type": "block",
Dec  3 19:27:16 compute-0 gallant_nash[491712]:            "vg_name": "ceph_vg0"
Dec  3 19:27:16 compute-0 gallant_nash[491712]:        }
Dec  3 19:27:16 compute-0 gallant_nash[491712]:    ],
Dec  3 19:27:16 compute-0 gallant_nash[491712]:    "1": [
Dec  3 19:27:16 compute-0 gallant_nash[491712]:        {
Dec  3 19:27:16 compute-0 gallant_nash[491712]:            "devices": [
Dec  3 19:27:16 compute-0 gallant_nash[491712]:                "/dev/loop4"
Dec  3 19:27:16 compute-0 gallant_nash[491712]:            ],
Dec  3 19:27:16 compute-0 gallant_nash[491712]:            "lv_name": "ceph_lv1",
Dec  3 19:27:16 compute-0 gallant_nash[491712]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 19:27:16 compute-0 gallant_nash[491712]:            "lv_size": "21470642176",
Dec  3 19:27:16 compute-0 gallant_nash[491712]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1e2b0083-5293-47cb-a3d1-bc27cedc4ede,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 19:27:16 compute-0 gallant_nash[491712]:            "lv_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 19:27:16 compute-0 gallant_nash[491712]:            "name": "ceph_lv1",
Dec  3 19:27:16 compute-0 gallant_nash[491712]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 19:27:16 compute-0 gallant_nash[491712]:            "tags": {
Dec  3 19:27:16 compute-0 gallant_nash[491712]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 19:27:16 compute-0 gallant_nash[491712]:                "ceph.block_uuid": "lm2les-wmSv-49fU-vUJX-d6pS-Fpdn-9jDUtw",
Dec  3 19:27:16 compute-0 gallant_nash[491712]:                "ceph.cephx_lockbox_secret": "",
Dec  3 19:27:16 compute-0 gallant_nash[491712]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:27:16 compute-0 gallant_nash[491712]:                "ceph.cluster_name": "ceph",
Dec  3 19:27:16 compute-0 gallant_nash[491712]:                "ceph.crush_device_class": "",
Dec  3 19:27:16 compute-0 gallant_nash[491712]:                "ceph.encrypted": "0",
Dec  3 19:27:16 compute-0 gallant_nash[491712]:                "ceph.osd_fsid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 19:27:16 compute-0 gallant_nash[491712]:                "ceph.osd_id": "1",
Dec  3 19:27:16 compute-0 gallant_nash[491712]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 19:27:16 compute-0 gallant_nash[491712]:                "ceph.type": "block",
Dec  3 19:27:16 compute-0 gallant_nash[491712]:                "ceph.vdo": "0"
Dec  3 19:27:16 compute-0 gallant_nash[491712]:            },
Dec  3 19:27:16 compute-0 gallant_nash[491712]:            "type": "block",
Dec  3 19:27:16 compute-0 gallant_nash[491712]:            "vg_name": "ceph_vg1"
Dec  3 19:27:16 compute-0 gallant_nash[491712]:        }
Dec  3 19:27:16 compute-0 gallant_nash[491712]:    ],
Dec  3 19:27:16 compute-0 gallant_nash[491712]:    "2": [
Dec  3 19:27:16 compute-0 gallant_nash[491712]:        {
Dec  3 19:27:16 compute-0 gallant_nash[491712]:            "devices": [
Dec  3 19:27:16 compute-0 gallant_nash[491712]:                "/dev/loop5"
Dec  3 19:27:16 compute-0 gallant_nash[491712]:            ],
Dec  3 19:27:16 compute-0 gallant_nash[491712]:            "lv_name": "ceph_lv2",
Dec  3 19:27:16 compute-0 gallant_nash[491712]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 19:27:16 compute-0 gallant_nash[491712]:            "lv_size": "21470642176",
Dec  3 19:27:16 compute-0 gallant_nash[491712]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c1caf3ba-b2a5-5005-a11e-e955c344dccc,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2abec9de-afba-437e-9a17-384a1dd8cd50,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 19:27:16 compute-0 gallant_nash[491712]:            "lv_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 19:27:16 compute-0 gallant_nash[491712]:            "name": "ceph_lv2",
Dec  3 19:27:16 compute-0 gallant_nash[491712]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 19:27:16 compute-0 gallant_nash[491712]:            "tags": {
Dec  3 19:27:16 compute-0 gallant_nash[491712]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 19:27:16 compute-0 gallant_nash[491712]:                "ceph.block_uuid": "5iiZE5-epYl-Pj38-GFGN-eahr-Bmtb-BREcf8",
Dec  3 19:27:16 compute-0 gallant_nash[491712]:                "ceph.cephx_lockbox_secret": "",
Dec  3 19:27:16 compute-0 gallant_nash[491712]:                "ceph.cluster_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:27:16 compute-0 gallant_nash[491712]:                "ceph.cluster_name": "ceph",
Dec  3 19:27:16 compute-0 gallant_nash[491712]:                "ceph.crush_device_class": "",
Dec  3 19:27:16 compute-0 gallant_nash[491712]:                "ceph.encrypted": "0",
Dec  3 19:27:16 compute-0 gallant_nash[491712]:                "ceph.osd_fsid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 19:27:16 compute-0 gallant_nash[491712]:                "ceph.osd_id": "2",
Dec  3 19:27:16 compute-0 gallant_nash[491712]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 19:27:16 compute-0 gallant_nash[491712]:                "ceph.type": "block",
Dec  3 19:27:16 compute-0 gallant_nash[491712]:                "ceph.vdo": "0"
Dec  3 19:27:16 compute-0 gallant_nash[491712]:            },
Dec  3 19:27:16 compute-0 gallant_nash[491712]:            "type": "block",
Dec  3 19:27:16 compute-0 gallant_nash[491712]:            "vg_name": "ceph_vg2"
Dec  3 19:27:16 compute-0 gallant_nash[491712]:        }
Dec  3 19:27:16 compute-0 gallant_nash[491712]:    ]
Dec  3 19:27:16 compute-0 gallant_nash[491712]: }
Dec  3 19:27:16 compute-0 systemd[1]: libpod-6c6d053d5bfa3049a34342fa4d853e1bf44d08930f40817e53e6f0b70ab3a738.scope: Deactivated successfully.
Dec  3 19:27:16 compute-0 podman[491694]: 2025-12-03 19:27:16.295235902 +0000 UTC m=+1.189333740 container died 6c6d053d5bfa3049a34342fa4d853e1bf44d08930f40817e53e6f0b70ab3a738 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  3 19:27:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-f497c61d3f15a96409aaeaf2215f2c44bdef00d4afd2b0c1f74c2b0bef9c9bbd-merged.mount: Deactivated successfully.
Dec  3 19:27:16 compute-0 podman[491694]: 2025-12-03 19:27:16.392738498 +0000 UTC m=+1.286836306 container remove 6c6d053d5bfa3049a34342fa4d853e1bf44d08930f40817e53e6f0b70ab3a738 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_nash, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 19:27:16 compute-0 systemd[1]: libpod-conmon-6c6d053d5bfa3049a34342fa4d853e1bf44d08930f40817e53e6f0b70ab3a738.scope: Deactivated successfully.
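The JSON block printed by the gallant_nash container above has the shape of `ceph-volume lvm list --format json` output: a map from OSD id to the logical volumes backing it, with the ceph.* metadata carried both as a raw `lv_tags` string and as a parsed `tags` object. A minimal sketch for flattening such a capture into one line per OSD (Python; the filename `lvm_list.json` is illustrative, standing in for the container stdout shown above):

    import json

    # Load the captured listing (path is illustrative; the log above shows
    # the same JSON on stdout of a short-lived ceph-volume container).
    with open("lvm_list.json") as f:
        listing = json.load(f)

    # One line per OSD: id, logical volume, backing device, fsid from the LV tags.
    for osd_id, volumes in sorted(listing.items(), key=lambda kv: int(kv[0])):
        for vol in volumes:
            tags = vol.get("tags", {})
            print(f"osd.{osd_id}: lv={vol['lv_path']} "
                  f"devices={','.join(vol['devices'])} "
                  f"osd_fsid={tags.get('ceph.osd_fsid', '?')} "
                  f"type={vol.get('type')}")

Against the listing above this prints osd.0 through osd.2 backed by /dev/loop3 through /dev/loop5, all in cluster c1caf3ba-b2a5-5005-a11e-e955c344dccc.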
Dec  3 19:27:16 compute-0 podman[491722]: 2025-12-03 19:27:16.441822589 +0000 UTC m=+0.099584548 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  3 19:27:16 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2656: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:27:17 compute-0 podman[491895]: 2025-12-03 19:27:17.411128118 +0000 UTC m=+0.081384236 container create 06385eaf3ae4940c6fbb5d48c00d91a79e6beb7f7bd89766d3d964422edd36de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_sinoussi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True)
Dec  3 19:27:17 compute-0 podman[491895]: 2025-12-03 19:27:17.385597659 +0000 UTC m=+0.055853757 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:27:17 compute-0 systemd[1]: Started libpod-conmon-06385eaf3ae4940c6fbb5d48c00d91a79e6beb7f7bd89766d3d964422edd36de.scope.
Dec  3 19:27:17 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:27:17 compute-0 podman[491895]: 2025-12-03 19:27:17.559608931 +0000 UTC m=+0.229865119 container init 06385eaf3ae4940c6fbb5d48c00d91a79e6beb7f7bd89766d3d964422edd36de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 19:27:17 compute-0 podman[491895]: 2025-12-03 19:27:17.572320419 +0000 UTC m=+0.242576537 container start 06385eaf3ae4940c6fbb5d48c00d91a79e6beb7f7bd89766d3d964422edd36de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_sinoussi, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 19:27:17 compute-0 podman[491895]: 2025-12-03 19:27:17.579417092 +0000 UTC m=+0.249673210 container attach 06385eaf3ae4940c6fbb5d48c00d91a79e6beb7f7bd89766d3d964422edd36de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 19:27:17 compute-0 charming_sinoussi[491910]: 167 167
Dec  3 19:27:17 compute-0 systemd[1]: libpod-06385eaf3ae4940c6fbb5d48c00d91a79e6beb7f7bd89766d3d964422edd36de.scope: Deactivated successfully.
Dec  3 19:27:17 compute-0 podman[491895]: 2025-12-03 19:27:17.587122919 +0000 UTC m=+0.257379047 container died 06385eaf3ae4940c6fbb5d48c00d91a79e6beb7f7bd89766d3d964422edd36de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_sinoussi, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec  3 19:27:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-719227cdbfeca9334b637e3c413330da873a3cafc856143c063ea325961759ce-merged.mount: Deactivated successfully.
Dec  3 19:27:17 compute-0 podman[491895]: 2025-12-03 19:27:17.638612268 +0000 UTC m=+0.308868346 container remove 06385eaf3ae4940c6fbb5d48c00d91a79e6beb7f7bd89766d3d964422edd36de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default)
Dec  3 19:27:17 compute-0 systemd[1]: libpod-conmon-06385eaf3ae4940c6fbb5d48c00d91a79e6beb7f7bd89766d3d964422edd36de.scope: Deactivated successfully.
Dec  3 19:27:17 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:27:17 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #129. Immutable memtables: 0.
Dec  3 19:27:17 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:27:17.771630) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 19:27:17 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 77] Flushing memtable with next log file: 129
Dec  3 19:27:17 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764790037771695, "job": 77, "event": "flush_started", "num_memtables": 1, "num_entries": 1237, "num_deletes": 251, "total_data_size": 1891429, "memory_usage": 1922136, "flush_reason": "Manual Compaction"}
Dec  3 19:27:17 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 77] Level-0 flush table #130: started
Dec  3 19:27:17 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764790037787063, "cf_name": "default", "job": 77, "event": "table_file_creation", "file_number": 130, "file_size": 1098454, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 53358, "largest_seqno": 54594, "table_properties": {"data_size": 1093957, "index_size": 1956, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11757, "raw_average_key_size": 20, "raw_value_size": 1084219, "raw_average_value_size": 1908, "num_data_blocks": 89, "num_entries": 568, "num_filter_entries": 568, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764789909, "oldest_key_time": 1764789909, "file_creation_time": 1764790037, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a1ac3b74-8599-4a51-8b4c-6fd35a134427", "db_session_id": "TYOLZSJOOVNJYKF8Y1CE", "orig_file_number": 130, "seqno_to_time_mapping": "N/A"}}
Dec  3 19:27:17 compute-0 ceph-mon[192802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 77] Flush lasted 15493 microseconds, and 5126 cpu microseconds.
Dec  3 19:27:17 compute-0 ceph-mon[192802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 19:27:17 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:27:17.787123) [db/flush_job.cc:967] [default] [JOB 77] Level-0 flush table #130: 1098454 bytes OK
Dec  3 19:27:17 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:27:17.787143) [db/memtable_list.cc:519] [default] Level-0 commit table #130 started
Dec  3 19:27:17 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:27:17.791695) [db/memtable_list.cc:722] [default] Level-0 commit table #130: memtable #1 done
Dec  3 19:27:17 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:27:17.791718) EVENT_LOG_v1 {"time_micros": 1764790037791710, "job": 77, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 19:27:17 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:27:17.791739) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 19:27:17 compute-0 ceph-mon[192802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 77] Try to delete WAL files size 1885831, prev total WAL file size 1885831, number of live WAL files 2.
Dec  3 19:27:17 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000126.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 19:27:17 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:27:17.793841) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032323533' seq:72057594037927935, type:22 .. '6D6772737461740032353035' seq:0, type:0; will stop at (end)
Dec  3 19:27:17 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 78] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 19:27:17 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 77 Base level 0, inputs: [130(1072KB)], [128(9107KB)]
Dec  3 19:27:17 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764790037793888, "job": 78, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [130], "files_L6": [128], "score": -1, "input_data_size": 10424683, "oldest_snapshot_seqno": -1}
Dec  3 19:27:17 compute-0 ceph-mon[192802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 78] Generated table #131: 6817 keys, 7886501 bytes, temperature: kUnknown
Dec  3 19:27:17 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764790037860754, "cf_name": "default", "job": 78, "event": "table_file_creation", "file_number": 131, "file_size": 7886501, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7845943, "index_size": 22397, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17093, "raw_key_size": 178414, "raw_average_key_size": 26, "raw_value_size": 7727608, "raw_average_value_size": 1133, "num_data_blocks": 885, "num_entries": 6817, "num_filter_entries": 6817, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764784942, "oldest_key_time": 0, "file_creation_time": 1764790037, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a1ac3b74-8599-4a51-8b4c-6fd35a134427", "db_session_id": "TYOLZSJOOVNJYKF8Y1CE", "orig_file_number": 131, "seqno_to_time_mapping": "N/A"}}
Dec  3 19:27:17 compute-0 ceph-mon[192802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 19:27:17 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:27:17.860946) [db/compaction/compaction_job.cc:1663] [default] [JOB 78] Compacted 1@0 + 1@6 files to L6 => 7886501 bytes
Dec  3 19:27:17 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:27:17.863825) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 155.8 rd, 117.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 8.9 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(16.7) write-amplify(7.2) OK, records in: 7272, records dropped: 455 output_compression: NoCompression
Dec  3 19:27:17 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:27:17.863842) EVENT_LOG_v1 {"time_micros": 1764790037863834, "job": 78, "event": "compaction_finished", "compaction_time_micros": 66924, "compaction_time_cpu_micros": 21865, "output_level": 6, "num_output_files": 1, "total_output_size": 7886501, "num_input_records": 7272, "num_output_records": 6817, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 19:27:17 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000130.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 19:27:17 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764790037864115, "job": 78, "event": "table_file_deletion", "file_number": 130}
Dec  3 19:27:17 compute-0 ceph-mon[192802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000128.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 19:27:17 compute-0 ceph-mon[192802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764790037865418, "job": 78, "event": "table_file_deletion", "file_number": 128}
Dec  3 19:27:17 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:27:17.793653) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:27:17 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:27:17.866235) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:27:17 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:27:17.866244) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:27:17 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:27:17.866248) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:27:17 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:27:17.866252) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 19:27:17 compute-0 ceph-mon[192802]: rocksdb: (Original Log Time 2025/12/03-19:27:17.866256) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
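The JOB 78 compaction summary above can be re-derived from the quoted byte counts: the L6 input is the 10,424,683-byte input_data_size minus the 1,098,454-byte L0 file, and the amplification factors divide through by the L0 bytes. Pure arithmetic on the logged values, no assumptions beyond the quoted numbers:

    # Values quoted from the JOB 77/78 event lines above (bytes / microseconds).
    l0_in = 1_098_454            # table #130, flushed by JOB 77
    total_in = 10_424_683        # "input_data_size" (L0 file #130 + L6 file #128)
    out = 7_886_501              # table #131
    t_us = 66_924                # "compaction_time_micros"

    l6_in = total_in - l0_in
    print(f"L6 input:           {l6_in} bytes")                  # 9,326,229
    print(f"write-amplify:      {out / l0_in:.1f}")              # ~7.2
    print(f"read-write-amplify: {(total_in + out) / l0_in:.1f}") # ~16.7
    print(f"rd MB/s: {total_in / t_us:.1f}  wr MB/s: {out / t_us:.1f}")  # 155.8 / 117.8

All four derived figures agree with the "read-write-amplify(16.7) write-amplify(7.2)" and "155.8 rd, 117.8 wr" values in the compaction line above.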
Dec  3 19:27:17 compute-0 podman[491931]: 2025-12-03 19:27:17.876262765 +0000 UTC m=+0.059516006 container create e1e9783928ad7573b658267f1fa1ac34a68a76fbcab968711243c852b4ac4448 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_elbakyan, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 19:27:17 compute-0 podman[491931]: 2025-12-03 19:27:17.851736129 +0000 UTC m=+0.034989400 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 19:27:17 compute-0 systemd[1]: Started libpod-conmon-e1e9783928ad7573b658267f1fa1ac34a68a76fbcab968711243c852b4ac4448.scope.
Dec  3 19:27:17 compute-0 systemd[1]: Started libcrun container.
Dec  3 19:27:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42d400496b72955829d0b2ad429e35470ea12354fc27a5fee38514e73b7897b4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 19:27:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42d400496b72955829d0b2ad429e35470ea12354fc27a5fee38514e73b7897b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 19:27:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42d400496b72955829d0b2ad429e35470ea12354fc27a5fee38514e73b7897b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 19:27:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42d400496b72955829d0b2ad429e35470ea12354fc27a5fee38514e73b7897b4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 19:27:18 compute-0 podman[491931]: 2025-12-03 19:27:18.022093323 +0000 UTC m=+0.205346564 container init e1e9783928ad7573b658267f1fa1ac34a68a76fbcab968711243c852b4ac4448 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 19:27:18 compute-0 podman[491931]: 2025-12-03 19:27:18.039672929 +0000 UTC m=+0.222926180 container start e1e9783928ad7573b658267f1fa1ac34a68a76fbcab968711243c852b4ac4448 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_elbakyan, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec  3 19:27:18 compute-0 podman[491931]: 2025-12-03 19:27:18.045125272 +0000 UTC m=+0.228378503 container attach e1e9783928ad7573b658267f1fa1ac34a68a76fbcab968711243c852b4ac4448 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec  3 19:27:18 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2657: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:27:19 compute-0 nova_compute[348325]: 2025-12-03 19:27:19.164 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:27:19 compute-0 mystifying_elbakyan[491948]: {
Dec  3 19:27:19 compute-0 mystifying_elbakyan[491948]:    "1e2b0083-5293-47cb-a3d1-bc27cedc4ede": {
Dec  3 19:27:19 compute-0 mystifying_elbakyan[491948]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:27:19 compute-0 mystifying_elbakyan[491948]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 19:27:19 compute-0 mystifying_elbakyan[491948]:        "osd_id": 1,
Dec  3 19:27:19 compute-0 mystifying_elbakyan[491948]:        "osd_uuid": "1e2b0083-5293-47cb-a3d1-bc27cedc4ede",
Dec  3 19:27:19 compute-0 mystifying_elbakyan[491948]:        "type": "bluestore"
Dec  3 19:27:19 compute-0 mystifying_elbakyan[491948]:    },
Dec  3 19:27:19 compute-0 mystifying_elbakyan[491948]:    "2abec9de-afba-437e-9a17-384a1dd8cd50": {
Dec  3 19:27:19 compute-0 mystifying_elbakyan[491948]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:27:19 compute-0 mystifying_elbakyan[491948]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 19:27:19 compute-0 mystifying_elbakyan[491948]:        "osd_id": 2,
Dec  3 19:27:19 compute-0 mystifying_elbakyan[491948]:        "osd_uuid": "2abec9de-afba-437e-9a17-384a1dd8cd50",
Dec  3 19:27:19 compute-0 mystifying_elbakyan[491948]:        "type": "bluestore"
Dec  3 19:27:19 compute-0 mystifying_elbakyan[491948]:    },
Dec  3 19:27:19 compute-0 mystifying_elbakyan[491948]:    "973fbbc8-5aff-4a53-bee8-42e5a6788dd6": {
Dec  3 19:27:19 compute-0 mystifying_elbakyan[491948]:        "ceph_fsid": "c1caf3ba-b2a5-5005-a11e-e955c344dccc",
Dec  3 19:27:19 compute-0 mystifying_elbakyan[491948]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 19:27:19 compute-0 mystifying_elbakyan[491948]:        "osd_id": 0,
Dec  3 19:27:19 compute-0 mystifying_elbakyan[491948]:        "osd_uuid": "973fbbc8-5aff-4a53-bee8-42e5a6788dd6",
Dec  3 19:27:19 compute-0 mystifying_elbakyan[491948]:        "type": "bluestore"
Dec  3 19:27:19 compute-0 mystifying_elbakyan[491948]:    }
Dec  3 19:27:19 compute-0 mystifying_elbakyan[491948]: }
Dec  3 19:27:19 compute-0 systemd[1]: libpod-e1e9783928ad7573b658267f1fa1ac34a68a76fbcab968711243c852b4ac4448.scope: Deactivated successfully.
Dec  3 19:27:19 compute-0 systemd[1]: libpod-e1e9783928ad7573b658267f1fa1ac34a68a76fbcab968711243c852b4ac4448.scope: Consumed 1.226s CPU time.
Dec  3 19:27:19 compute-0 podman[491931]: 2025-12-03 19:27:19.262730647 +0000 UTC m=+1.445983918 container died e1e9783928ad7573b658267f1fa1ac34a68a76fbcab968711243c852b4ac4448 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_elbakyan, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 19:27:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-42d400496b72955829d0b2ad429e35470ea12354fc27a5fee38514e73b7897b4-merged.mount: Deactivated successfully.
Dec  3 19:27:19 compute-0 podman[491931]: 2025-12-03 19:27:19.36015342 +0000 UTC m=+1.543406651 container remove e1e9783928ad7573b658267f1fa1ac34a68a76fbcab968711243c852b4ac4448 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_elbakyan, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  3 19:27:19 compute-0 systemd[1]: libpod-conmon-e1e9783928ad7573b658267f1fa1ac34a68a76fbcab968711243c852b4ac4448.scope: Deactivated successfully.
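This second JSON dump, keyed by OSD fsid rather than OSD id, matches the shape of `ceph-volume raw list` output for the same three bluestore OSDs. A consistency check across the two dumps is straightforward; a sketch, again with illustrative filenames standing in for the two container stdout captures above:

    import json

    # Illustrative filenames; the two JSON documents are the container stdout
    # dumps shown above (per-OSD-id LVM listing, per-fsid bluestore listing).
    with open("lvm_list.json") as f:
        by_osd_id = json.load(f)
    with open("raw_list.json") as f:
        by_fsid = json.load(f)

    # Cross-check: every LV's ceph.osd_fsid tag should appear in the bluestore
    # listing with the same osd_id, and all entries should share one cluster fsid.
    cluster_fsids = set()
    for osd_id, volumes in by_osd_id.items():
        for vol in volumes:
            fsid = vol["tags"]["ceph.osd_fsid"]
            entry = by_fsid[fsid]                  # KeyError => inconsistent dumps
            assert entry["osd_id"] == int(osd_id), (osd_id, entry)
            assert entry["type"] == "bluestore"
            cluster_fsids.add(entry["ceph_fsid"])
    assert len(cluster_fsids) == 1, cluster_fsids  # one cluster: c1caf3ba-...
    print("consistent:", sorted(by_fsid), "cluster", cluster_fsids.pop())

For the dumps above this passes: all three osd_fsid/osd_id pairs line up, and every entry carries the same cluster fsid.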
Dec  3 19:27:19 compute-0 nova_compute[348325]: 2025-12-03 19:27:19.419 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:27:19 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 19:27:19 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:27:19 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 19:27:19 compute-0 ceph-mon[192802]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:27:19 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev d0afa2dc-4333-4b03-ba9f-27f15b792073 does not exist
Dec  3 19:27:19 compute-0 ceph-mgr[193091]: [progress WARNING root] complete: ev 7c246b57-44f5-4043-98f0-89b8eef7169e does not exist
Dec  3 19:27:19 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:27:19 compute-0 ceph-mon[192802]: from='mgr.14130 192.168.122.100:0/428680145' entity='mgr.compute-0.etccde' 
Dec  3 19:27:20 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2658: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:27:22 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:27:22 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2659: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:27:22 compute-0 podman[492044]: 2025-12-03 19:27:22.990575339 +0000 UTC m=+0.140603461 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  3 19:27:23 compute-0 podman[492043]: 2025-12-03 19:27:23.045066832 +0000 UTC m=+0.195807362 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Dec  3 19:27:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:27:23.393 286999 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 19:27:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:27:23.394 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 19:27:23 compute-0 ovn_metadata_agent[286994]: 2025-12-03 19:27:23.394 286999 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 19:27:24 compute-0 nova_compute[348325]: 2025-12-03 19:27:24.169 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:27:24 compute-0 nova_compute[348325]: 2025-12-03 19:27:24.423 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:27:24 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2660: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:27:25 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 19:27:25 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:27:25 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 19:27:25 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:27:25 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  3 19:27:25 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:27:25 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:27:25 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:27:25 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:27:25 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:27:25 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec  3 19:27:25 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:27:25 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 19:27:25 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:27:25 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:27:25 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:27:25 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 19:27:25 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:27:25 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 19:27:25 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:27:25 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 19:27:25 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 19:27:25 compute-0 ceph-mgr[193091]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
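Each `pg target` in the autoscaler pass above is capacity_ratio × bias × K, where K works out to 300 on every line — consistent with the default mon_target_pg_per_osd of 100 times the three OSDs listed earlier, though K here is inferred from the logged values rather than read from the cluster. The final "quantized to" figure additionally applies power-of-two rounding and per-pool minimums that the log does not show. Reproducing three of the lines:

    # "pg target" in the pg_autoscaler lines above is capacity_ratio * bias * K,
    # with K = mon_target_pg_per_osd * num_osds. K = 300 (100 * 3 OSDs) matches
    # every logged line; it is inferred from the log, not queried from the mon.
    K = 300
    pools = [
        (".mgr",               7.185749983720779e-06, 1.0),
        ("images",             0.0009191400908380543, 1.0),
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0),
    ]
    for name, ratio, bias in pools:
        print(f"{name}: pg target {ratio * bias * K}")

This reproduces 0.0021557249951162337, 0.2757420272514163 and 0.0006104707950771635 exactly as logged.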
Dec  3 19:27:26 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2661: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:27:27 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:27:28 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2662: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:27:29 compute-0 nova_compute[348325]: 2025-12-03 19:27:29.171 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:27:29 compute-0 nova_compute[348325]: 2025-12-03 19:27:29.426 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 19:27:29 compute-0 podman[158200]: time="2025-12-03T19:27:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 19:27:29 compute-0 podman[158200]: @ - - [03/Dec/2025:19:27:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42578 "" "Go-http-client/1.1"
Dec  3 19:27:29 compute-0 podman[158200]: @ - - [03/Dec/2025:19:27:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8207 "" "Go-http-client/1.1"
Dec  3 19:27:30 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2663: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:27:31 compute-0 openstack_network_exporter[365222]: ERROR   19:27:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 19:27:31 compute-0 openstack_network_exporter[365222]: ERROR   19:27:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:27:31 compute-0 openstack_network_exporter[365222]: ERROR   19:27:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:27:31 compute-0 openstack_network_exporter[365222]: ERROR   19:27:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 19:27:31 compute-0 openstack_network_exporter[365222]: ERROR   19:27:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 19:27:32 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:27:32 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2664: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:27:32 compute-0 podman[492086]: 2025-12-03 19:27:32.940501119 +0000 UTC m=+0.094220548 container health_status 6b6179f2bc75659bb207f5f36d067e318fee1c293b78d5a9de36f81b211da9c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  3 19:27:32 compute-0 podman[492088]: 2025-12-03 19:27:32.955933643 +0000 UTC m=+0.092364782 container health_status d9a41e903b79f88212a19a73988bcaadef415b6829bc75d9d3f70fd2b33d9a7c (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, vcs-type=git, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, vendor=Red Hat, Inc., distribution-scope=public, managed_by=edpm_ansible, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=)
Dec  3 19:27:32 compute-0 podman[492087]: 2025-12-03 19:27:32.980821366 +0000 UTC m=+0.124176093 container health_status c58cb626a5a752ff59d6aef05e036de730e19cbdce13149a7fe4c609e35595b9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 19:27:34 compute-0 nova_compute[348325]: 2025-12-03 19:27:34.176 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:27:34 compute-0 nova_compute[348325]: 2025-12-03 19:27:34.428 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:27:34 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2665: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:27:36 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2666: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:27:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:27:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 19:27:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4242824150' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 19:27:37 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 19:27:37 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4242824150' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 19:27:38 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2667: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:27:39 compute-0 podman[492155]: 2025-12-03 19:27:39.135098128 +0000 UTC m=+0.114267284 container health_status eddbae6a1db6d9d34f3d773ca5a62eb1a851ffd3aeb538cc68bba6796ca9920b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Dec  3 19:27:39 compute-0 podman[492153]: 2025-12-03 19:27:39.145000218 +0000 UTC m=+0.137779354 container health_status 4926500e7b4992d91258254bfbb6d9c557abd61299f58f50bf1455db73861a24 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., name=ubi9, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.openshift.tags=base rhel9, container_name=kepler, io.openshift.expose-services=, managed_by=edpm_ansible, vcs-type=git, vendor=Red Hat, Inc.)
Dec  3 19:27:39 compute-0 systemd-logind[784]: New session 67 of user zuul.
Dec  3 19:27:39 compute-0 podman[492154]: 2025-12-03 19:27:39.179824003 +0000 UTC m=+0.167244260 container health_status 6d383c6deda511e13ac501517f2d7757428ab3d25d10a25a59d8212e10ca4adf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  3 19:27:39 compute-0 nova_compute[348325]: 2025-12-03 19:27:39.179 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:27:39 compute-0 systemd[1]: Started Session 67 of User zuul.
Dec  3 19:27:39 compute-0 nova_compute[348325]: 2025-12-03 19:27:39.430 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:27:39 compute-0 nova_compute[348325]: 2025-12-03 19:27:39.464 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:27:40 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2668: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:27:41 compute-0 nova_compute[348325]: 2025-12-03 19:27:41.478 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:27:42 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:27:42 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2669: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:27:42 compute-0 ceph-mgr[193091]: log_channel(audit) log [DBG] : from='client.15887 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 19:27:43 compute-0 ceph-mgr[193091]: log_channel(audit) log [DBG] : from='client.15889 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 19:27:44 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:27:44 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:27:44 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:27:44 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:27:44 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 19:27:44 compute-0 ceph-mgr[193091]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 19:27:44 compute-0 nova_compute[348325]: 2025-12-03 19:27:44.182 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:27:44 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Dec  3 19:27:44 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2245475829' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec  3 19:27:44 compute-0 nova_compute[348325]: 2025-12-03 19:27:44.432 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:27:44 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2670: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:27:46 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2671: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:27:46 compute-0 podman[492473]: 2025-12-03 19:27:46.990769321 +0000 UTC m=+0.143264508 container health_status dab10bb9e91342f5939fd9be1bd7e9f34e61a1ff672f0adb70b91f903a42df29 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 19:27:47 compute-0 nova_compute[348325]: 2025-12-03 19:27:47.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:27:47 compute-0 nova_compute[348325]: 2025-12-03 19:27:47.487 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:27:47 compute-0 ovs-vsctl[492523]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Dec  3 19:27:47 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:27:48 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2672: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:27:48 compute-0 virtqemud[138705]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Dec  3 19:27:48 compute-0 virtqemud[138705]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Dec  3 19:27:48 compute-0 virtqemud[138705]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Dec  3 19:27:49 compute-0 nova_compute[348325]: 2025-12-03 19:27:49.187 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:27:49 compute-0 nova_compute[348325]: 2025-12-03 19:27:49.434 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:27:49 compute-0 ceph-mds[220747]: mds.cephfs.compute-0.oeacqo asok_command: cache status {prefix=cache status} (starting...)
Dec  3 19:27:49 compute-0 ceph-mds[220747]: mds.cephfs.compute-0.oeacqo asok_command: client ls {prefix=client ls} (starting...)
Dec  3 19:27:49 compute-0 lvm[492846]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  3 19:27:49 compute-0 lvm[492846]: VG ceph_vg0 finished
Dec  3 19:27:50 compute-0 lvm[492871]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  3 19:27:50 compute-0 lvm[492871]: VG ceph_vg2 finished
Dec  3 19:27:50 compute-0 lvm[492912]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  3 19:27:50 compute-0 lvm[492912]: VG ceph_vg1 finished
Dec  3 19:27:50 compute-0 nova_compute[348325]: 2025-12-03 19:27:50.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:27:50 compute-0 nova_compute[348325]: 2025-12-03 19:27:50.487 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  3 19:27:50 compute-0 nova_compute[348325]: 2025-12-03 19:27:50.487 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  3 19:27:50 compute-0 nova_compute[348325]: 2025-12-03 19:27:50.506 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  3 19:27:50 compute-0 nova_compute[348325]: 2025-12-03 19:27:50.506 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:27:50 compute-0 ceph-mds[220747]: mds.cephfs.compute-0.oeacqo asok_command: damage ls {prefix=damage ls} (starting...)
Dec  3 19:27:50 compute-0 ceph-mds[220747]: mds.cephfs.compute-0.oeacqo asok_command: dump loads {prefix=dump loads} (starting...)
Dec  3 19:27:50 compute-0 ceph-mgr[193091]: log_channel(audit) log [DBG] : from='client.15893 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 19:27:50 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2673: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:27:50 compute-0 ceph-mds[220747]: mds.cephfs.compute-0.oeacqo asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Dec  3 19:27:51 compute-0 ceph-mds[220747]: mds.cephfs.compute-0.oeacqo asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Dec  3 19:27:51 compute-0 ceph-mds[220747]: mds.cephfs.compute-0.oeacqo asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Dec  3 19:27:51 compute-0 ceph-mds[220747]: mds.cephfs.compute-0.oeacqo asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Dec  3 19:27:51 compute-0 ceph-mgr[193091]: log_channel(audit) log [DBG] : from='client.15896 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 19:27:51 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0) v1
Dec  3 19:27:51 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/472491793' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec  3 19:27:51 compute-0 nova_compute[348325]: 2025-12-03 19:27:51.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:27:51 compute-0 ceph-mds[220747]: mds.cephfs.compute-0.oeacqo asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Dec  3 19:27:51 compute-0 ceph-mds[220747]: mds.cephfs.compute-0.oeacqo asok_command: get subtrees {prefix=get subtrees} (starting...)
Dec  3 19:27:51 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 19:27:51 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3480463194' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 19:27:52 compute-0 ceph-mds[220747]: mds.cephfs.compute-0.oeacqo asok_command: ops {prefix=ops} (starting...)
Dec  3 19:27:52 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0) v1
Dec  3 19:27:52 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2357766160' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Dec  3 19:27:52 compute-0 ceph-mgr[193091]: log_channel(audit) log [DBG] : from='client.15903 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 19:27:52 compute-0 ceph-mgr[193091]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec  3 19:27:52 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-etccde[193087]: 2025-12-03T19:27:52.272+0000 7ff6bdbb5640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec  3 19:27:52 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Dec  3 19:27:52 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2101684240' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Dec  3 19:27:52 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:27:52 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Dec  3 19:27:52 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2723730864' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Dec  3 19:27:52 compute-0 ceph-mds[220747]: mds.cephfs.compute-0.oeacqo asok_command: session ls {prefix=session ls} (starting...)
Dec  3 19:27:52 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2674: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:27:52 compute-0 ceph-mds[220747]: mds.cephfs.compute-0.oeacqo asok_command: status {prefix=status} (starting...)
Dec  3 19:27:53 compute-0 ceph-mgr[193091]: log_channel(audit) log [DBG] : from='client.15913 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 19:27:53 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Dec  3 19:27:53 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/173858262' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec  3 19:27:53 compute-0 ceph-mgr[193091]: log_channel(audit) log [DBG] : from='client.15915 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 19:27:53 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Dec  3 19:27:53 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/665397859' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec  3 19:27:53 compute-0 podman[493375]: 2025-12-03 19:27:53.950186476 +0000 UTC m=+0.115041252 container health_status ac3b4c61021e007dbfc8a120f33ca3a71e20ddaf5a741883e31d118de34330ad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Dec  3 19:27:53 compute-0 podman[493373]: 2025-12-03 19:27:53.95447209 +0000 UTC m=+0.110953423 container health_status 9c1daa113d16b2c691faa3a29fa1c38fdc87166c93325919042e81c37d61c753 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec  3 19:27:54 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0) v1
Dec  3 19:27:54 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3133580987' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec  3 19:27:54 compute-0 nova_compute[348325]: 2025-12-03 19:27:54.190 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:27:54 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Dec  3 19:27:54 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3433104519' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec  3 19:27:54 compute-0 nova_compute[348325]: 2025-12-03 19:27:54.436 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:27:54 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Dec  3 19:27:54 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1666108699' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Dec  3 19:27:54 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Dec  3 19:27:54 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1562255639' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec  3 19:27:54 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2675: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:27:55 compute-0 ceph-mgr[193091]: log_channel(audit) log [DBG] : from='client.15927 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 19:27:55 compute-0 ceph-mgr[193091]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec  3 19:27:55 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-etccde[193087]: 2025-12-03T19:27:55.018+0000 7ff6bdbb5640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec  3 19:27:55 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Dec  3 19:27:55 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/989411522' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec  3 19:27:55 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Dec  3 19:27:55 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/819660596' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Dec  3 19:27:55 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Dec  3 19:27:55 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3316220077' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec  3 19:27:56 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Dec  3 19:27:56 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1835709761' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Dec  3 19:27:56 compute-0 ceph-mgr[193091]: log_channel(audit) log [DBG] : from='client.15937 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 19:27:56 compute-0 nova_compute[348325]: 2025-12-03 19:27:56.486 348329 DEBUG oslo_service.periodic_task [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 19:27:56 compute-0 nova_compute[348325]: 2025-12-03 19:27:56.487 348329 DEBUG nova.compute.manager [None req-6c3bffd1-78ac-4b5b-a3ff-3fc5f57224ca - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  3 19:27:56 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Dec  3 19:27:56 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2745769274' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec  3 19:27:56 compute-0 ceph-mgr[193091]: log_channel(audit) log [DBG] : from='client.15941 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 19:27:56 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2676: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:27:56 compute-0 ceph-mgr[193091]: log_channel(audit) log [DBG] : from='client.15945 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 19:27:56 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Dec  3 19:27:56 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2654948953' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930407 data_alloc: 218103808 data_used: 348160
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb8ec000/0x0/0x4ffc00000, data 0xc7068/0x192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930407 data_alloc: 218103808 data_used: 348160
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb8ec000/0x0/0x4ffc00000, data 0xc7068/0x192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930407 data_alloc: 218103808 data_used: 348160
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb8ec000/0x0/0x4ffc00000, data 0xc7068/0x192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb8ec000/0x0/0x4ffc00000, data 0xc7068/0x192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb8ec000/0x0/0x4ffc00000, data 0xc7068/0x192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930407 data_alloc: 218103808 data_used: 348160
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb8ec000/0x0/0x4ffc00000, data 0xc7068/0x192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930407 data_alloc: 218103808 data_used: 348160
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb8ec000/0x0/0x4ffc00000, data 0xc7068/0x192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930407 data_alloc: 218103808 data_used: 348160
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb8ec000/0x0/0x4ffc00000, data 0xc7068/0x192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb8ec000/0x0/0x4ffc00000, data 0xc7068/0x192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930407 data_alloc: 218103808 data_used: 348160
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb8ec000/0x0/0x4ffc00000, data 0xc7068/0x192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb8ec000/0x0/0x4ffc00000, data 0xc7068/0x192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb8ec000/0x0/0x4ffc00000, data 0xc7068/0x192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930407 data_alloc: 218103808 data_used: 348160
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb8ec000/0x0/0x4ffc00000, data 0xc7068/0x192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930407 data_alloc: 218103808 data_used: 348160
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 35028992 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92102656 unmapped: 35020800 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb8ec000/0x0/0x4ffc00000, data 0xc7068/0x192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92102656 unmapped: 35020800 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb8ec000/0x0/0x4ffc00000, data 0xc7068/0x192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92102656 unmapped: 35020800 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930407 data_alloc: 218103808 data_used: 348160
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb8ec000/0x0/0x4ffc00000, data 0xc7068/0x192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92102656 unmapped: 35020800 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92102656 unmapped: 35020800 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92102656 unmapped: 35020800 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 62.241371155s of 62.264389038s, submitted: 7
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92143616 unmapped: 34979840 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb8ec000/0x0/0x4ffc00000, data 0xc7068/0x192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [0,0,1])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92135424 unmapped: 34988032 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930407 data_alloc: 218103808 data_used: 348160
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92176384 unmapped: 34947072 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92209152 unmapped: 34914304 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92209152 unmapped: 34914304 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92209152 unmapped: 34914304 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92209152 unmapped: 34914304 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930407 data_alloc: 218103808 data_used: 348160
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb4dc000/0x0/0x4ffc00000, data 0xc7068/0x192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92209152 unmapped: 34914304 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb4dc000/0x0/0x4ffc00000, data 0xc7068/0x192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92209152 unmapped: 34914304 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92209152 unmapped: 34914304 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92209152 unmapped: 34914304 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92209152 unmapped: 34914304 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930407 data_alloc: 218103808 data_used: 348160
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92209152 unmapped: 34914304 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92209152 unmapped: 34914304 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb4dc000/0x0/0x4ffc00000, data 0xc7068/0x192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92209152 unmapped: 34914304 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92209152 unmapped: 34914304 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92209152 unmapped: 34914304 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930407 data_alloc: 218103808 data_used: 348160
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92209152 unmapped: 34914304 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb4dc000/0x0/0x4ffc00000, data 0xc7068/0x192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92209152 unmapped: 34914304 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92209152 unmapped: 34914304 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92209152 unmapped: 34914304 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92209152 unmapped: 34914304 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930407 data_alloc: 218103808 data_used: 348160
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb4dc000/0x0/0x4ffc00000, data 0xc7068/0x192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92209152 unmapped: 34914304 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92209152 unmapped: 34914304 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 23.915590286s of 24.443050385s, submitted: 90
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92250112 unmapped: 34873344 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 135 ms_handle_reset con 0x55ab9a795c00 session 0x55ab9bd685a0
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92266496 unmapped: 34856960 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa86c000/0x0/0x4ffc00000, data 0xd37068/0xe02000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92299264 unmapped: 34824192 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026554 data_alloc: 218103808 data_used: 356352
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 135 handle_osd_map epochs [135,136], i have 135, src has [1,136]
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 handle_osd_map epochs [136,136], i have 136, src has [1,136]
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9cf8c000 session 0x55ab9da2d2c0
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92372992 unmapped: 34750464 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92372992 unmapped: 34750464 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa3f2000/0x0/0x4ffc00000, data 0x11aa7b8/0x127b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa3f2000/0x0/0x4ffc00000, data 0x11aa7b8/0x127b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92381184 unmapped: 34742272 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92381184 unmapped: 34742272 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92381184 unmapped: 34742272 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1063256 data_alloc: 218103808 data_used: 364544
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92381184 unmapped: 34742272 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92381184 unmapped: 34742272 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92381184 unmapped: 34742272 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa3f2000/0x0/0x4ffc00000, data 0x11aa7b8/0x127b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92381184 unmapped: 34742272 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92381184 unmapped: 34742272 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1063256 data_alloc: 218103808 data_used: 364544
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92381184 unmapped: 34742272 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92381184 unmapped: 34742272 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92381184 unmapped: 34742272 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa3f2000/0x0/0x4ffc00000, data 0x11aa7b8/0x127b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92381184 unmapped: 34742272 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92381184 unmapped: 34742272 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1063256 data_alloc: 218103808 data_used: 364544
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa3f2000/0x0/0x4ffc00000, data 0x11aa7b8/0x127b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92381184 unmapped: 34742272 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa3f2000/0x0/0x4ffc00000, data 0x11aa7b8/0x127b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92381184 unmapped: 34742272 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa3f2000/0x0/0x4ffc00000, data 0x11aa7b8/0x127b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92381184 unmapped: 34742272 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92381184 unmapped: 34742272 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92381184 unmapped: 34742272 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1063256 data_alloc: 218103808 data_used: 364544
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa3f2000/0x0/0x4ffc00000, data 0x11aa7b8/0x127b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92381184 unmapped: 34742272 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92381184 unmapped: 34742272 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92389376 unmapped: 34734080 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa3f2000/0x0/0x4ffc00000, data 0x11aa7b8/0x127b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92389376 unmapped: 34734080 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa3f2000/0x0/0x4ffc00000, data 0x11aa7b8/0x127b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92389376 unmapped: 34734080 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa3f2000/0x0/0x4ffc00000, data 0x11aa7b8/0x127b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1063256 data_alloc: 218103808 data_used: 364544
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa3f2000/0x0/0x4ffc00000, data 0x11aa7b8/0x127b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92389376 unmapped: 34734080 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92389376 unmapped: 34734080 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92389376 unmapped: 34734080 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92389376 unmapped: 34734080 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa3f2000/0x0/0x4ffc00000, data 0x11aa7b8/0x127b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92389376 unmapped: 34734080 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1063256 data_alloc: 218103808 data_used: 364544
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92389376 unmapped: 34734080 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92389376 unmapped: 34734080 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92389376 unmapped: 34734080 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92389376 unmapped: 34734080 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa3f2000/0x0/0x4ffc00000, data 0x11aa7b8/0x127b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92389376 unmapped: 34734080 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1063256 data_alloc: 218103808 data_used: 364544
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92389376 unmapped: 34734080 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92389376 unmapped: 34734080 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92389376 unmapped: 34734080 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92389376 unmapped: 34734080 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa3f2000/0x0/0x4ffc00000, data 0x11aa7b8/0x127b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92389376 unmapped: 34734080 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1063416 data_alloc: 218103808 data_used: 368640
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92389376 unmapped: 34734080 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92389376 unmapped: 34734080 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92389376 unmapped: 34734080 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 92389376 unmapped: 34734080 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 47.379611969s of 47.708621979s, submitted: 41
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d7ec400 session 0x55ab9e949680
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 99557376 unmapped: 27566080 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1082460 data_alloc: 218103808 data_used: 7184384
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d7e8000 session 0x55ab9d7a85a0
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa3f3000/0x0/0x4ffc00000, data 0x11aa7b8/0x127b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 103309312 unmapped: 23814144 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9b305400 session 0x55ab9e9643c0
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9a795c00 session 0x55ab9bb941e0
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9cf8c000 session 0x55ab9bba0f00
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d7e8000 session 0x55ab9bba1860
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d7ec400 session 0x55ab9d92fa40
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 103505920 unmapped: 23617536 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f9f4e000/0x0/0x4ffc00000, data 0x164f7b8/0x1720000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 103505920 unmapped: 23617536 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f9f4e000/0x0/0x4ffc00000, data 0x164f7b8/0x1720000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 103505920 unmapped: 23617536 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d7e7c00 session 0x55ab9c06e000
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9a795c00 session 0x55ab9d2ead20
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9cf8c000 session 0x55ab9d7c74a0
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d7e7c00 session 0x55ab9bba10e0
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 103505920 unmapped: 23617536 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132742 data_alloc: 234881024 data_used: 11837440
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d7e8000 session 0x55ab9c01a1e0
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d7ec400 session 0x55ab9bb983c0
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9a795c00 session 0x55ab9d2f2b40
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9cf8c000 session 0x55ab9bb94d20
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d7e7c00 session 0x55ab9a7294a0
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d7e8000 session 0x55ab9d938f00
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 104161280 unmapped: 22962176 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 104177664 unmapped: 22945792 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f9350000/0x0/0x4ffc00000, data 0x224c7c8/0x231e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9bfc4c00 session 0x55ab9e9454a0
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d1a9800 session 0x55ab9e945680
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d7f4800 session 0x55ab9e945860
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9b52ac00 session 0x55ab9e945c20
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 104751104 unmapped: 22372352 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d7f0c00 session 0x55ab9e945e00
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d7ea800 session 0x55ab9b42a1e0
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9b52ac00 session 0x55ab9c07ba40
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d1a9800 session 0x55ab9d7663c0
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d7f0c00 session 0x55ab9d767680
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d7f4800 session 0x55ab9e9445a0
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9b303000 session 0x55ab9e944960
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9b52ac00 session 0x55ab9bba0f00
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d1a9800 session 0x55ab9bba1860
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f8e5f000/0x0/0x4ffc00000, data 0x273d7c8/0x280f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 104415232 unmapped: 22708224 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d7f0c00 session 0x55ab9bba10e0
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d7f4800 session 0x55ab9bf95680
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d1a9c00 session 0x55ab9c166f00
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9b52ac00 session 0x55ab9b41c960
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d1a9800 session 0x55ab9c01a1e0
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 105570304 unmapped: 21553152 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1332327 data_alloc: 234881024 data_used: 12484608
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 105570304 unmapped: 21553152 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 106094592 unmapped: 21028864 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.299287796s of 12.889208794s, submitted: 68
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d7f0800 session 0x55ab9d2ead20
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9b303800 session 0x55ab9e945860
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 106094592 unmapped: 21028864 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d7f2400 session 0x55ab9d7c1680
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d159800 session 0x55ab9d2f3a40
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9b303800 session 0x55ab9d939c20
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 106094592 unmapped: 21028864 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9b52ac00 session 0x55ab9da5ba40
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d1a9800 session 0x55ab9c07a000
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f8ca4000/0x0/0x4ffc00000, data 0x28f684d/0x29ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d7f0800 session 0x55ab9b41c000
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 104284160 unmapped: 22839296 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1290427 data_alloc: 234881024 data_used: 11845632
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d7f0800 session 0x55ab9d2ea3c0
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9b303800 session 0x55ab9d7c61e0
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 104644608 unmapped: 22478848 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d1a9800 session 0x55ab9d446b40
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d1a8000 session 0x55ab9d767a40
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 104660992 unmapped: 22462464 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 104660992 unmapped: 22462464 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 104914944 unmapped: 22208512 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f8c7a000/0x0/0x4ffc00000, data 0x292084d/0x29f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 111738880 unmapped: 15384576 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1389010 data_alloc: 234881024 data_used: 23777280
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 115482624 unmapped: 11640832 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119603200 unmapped: 7520256 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119619584 unmapped: 7503872 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9b52ac00 session 0x55ab9d7d1680
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.269133568s of 11.341886520s, submitted: 15
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d7f3000 session 0x55ab9bd4ef00
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d159800 session 0x55ab9e945e00
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d1a9000 session 0x55ab9d7a92c0
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119619584 unmapped: 7503872 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9b303800 session 0x55ab9d09dc20
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f97ed000/0x0/0x4ffc00000, data 0x1dad7eb/0x1e80000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d1a8000 session 0x55ab9d7663c0
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 115081216 unmapped: 12042240 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1277304 data_alloc: 234881024 data_used: 19943424
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 115081216 unmapped: 12042240 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f9817000/0x0/0x4ffc00000, data 0x1d837eb/0x1e56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 115081216 unmapped: 12042240 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 115081216 unmapped: 12042240 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f9817000/0x0/0x4ffc00000, data 0x1d837eb/0x1e56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 115081216 unmapped: 12042240 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 115081216 unmapped: 12042240 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1277304 data_alloc: 234881024 data_used: 19943424
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 115081216 unmapped: 12042240 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 115081216 unmapped: 12042240 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 115081216 unmapped: 12042240 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 115081216 unmapped: 12042240 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f9817000/0x0/0x4ffc00000, data 0x1d837eb/0x1e56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 115081216 unmapped: 12042240 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1277304 data_alloc: 234881024 data_used: 19943424
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 115081216 unmapped: 12042240 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f9817000/0x0/0x4ffc00000, data 0x1d837eb/0x1e56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 115081216 unmapped: 12042240 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f9817000/0x0/0x4ffc00000, data 0x1d837eb/0x1e56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 115081216 unmapped: 12042240 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 115081216 unmapped: 12042240 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f9817000/0x0/0x4ffc00000, data 0x1d837eb/0x1e56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 115081216 unmapped: 12042240 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1277304 data_alloc: 234881024 data_used: 19943424
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f9817000/0x0/0x4ffc00000, data 0x1d837eb/0x1e56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 115081216 unmapped: 12042240 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f9817000/0x0/0x4ffc00000, data 0x1d837eb/0x1e56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f9817000/0x0/0x4ffc00000, data 0x1d837eb/0x1e56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.212007523s of 18.411972046s, submitted: 33
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 114434048 unmapped: 12689408 heap: 127123456 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9bfc7c00 session 0x55ab9d7c0b40
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d7f4800 session 0x55ab9bf94b40
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d7e6000 session 0x55ab9cf46780
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9cd3e400 session 0x55ab9da73c20
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 ms_handle_reset con 0x55ab9d7f5000 session 0x55ab9d92ed20
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 114352128 unmapped: 16973824 heap: 131325952 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f92c8000/0x0/0x4ffc00000, data 0x22d37eb/0x23a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 114376704 unmapped: 16949248 heap: 131325952 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 137 ms_handle_reset con 0x55ab9d7f5000 session 0x55ab9bd683c0
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 114384896 unmapped: 16941056 heap: 131325952 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1331373 data_alloc: 234881024 data_used: 19951616
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 137 ms_handle_reset con 0x55ab9bfc7c00 session 0x55ab9d7c1860
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 137 ms_handle_reset con 0x55ab9cd3e400 session 0x55ab9e944d20
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 114384896 unmapped: 16941056 heap: 131325952 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 114384896 unmapped: 16941056 heap: 131325952 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 138 ms_handle_reset con 0x55ab9d7e6000 session 0x55ab9d7a85a0
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 114483200 unmapped: 16842752 heap: 131325952 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 114483200 unmapped: 16842752 heap: 131325952 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f92c0000/0x0/0x4ffc00000, data 0x22d6f39/0x23ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f92c0000/0x0/0x4ffc00000, data 0x22d6f39/0x23ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 114860032 unmapped: 16465920 heap: 131325952 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1338125 data_alloc: 234881024 data_used: 19951616
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 138 ms_handle_reset con 0x55ab9d7f2c00 session 0x55ab9b4410e0
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f92c0000/0x0/0x4ffc00000, data 0x22d6f39/0x23ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 16457728 heap: 131325952 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119799808 unmapped: 11526144 heap: 131325952 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119799808 unmapped: 11526144 heap: 131325952 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.638536453s of 11.198720932s, submitted: 140
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119865344 unmapped: 11460608 heap: 131325952 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8a04000/0x0/0x4ffc00000, data 0x2b8a99c/0x2c61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [1])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119906304 unmapped: 11419648 heap: 131325952 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1419475 data_alloc: 234881024 data_used: 20987904
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119955456 unmapped: 11370496 heap: 131325952 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 121405440 unmapped: 9920512 heap: 131325952 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 121405440 unmapped: 9920512 heap: 131325952 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8a04000/0x0/0x4ffc00000, data 0x2b8a99c/0x2c61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 121421824 unmapped: 9904128 heap: 131325952 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 121446400 unmapped: 9879552 heap: 131325952 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1452611 data_alloc: 234881024 data_used: 25559040
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 121446400 unmapped: 9879552 heap: 131325952 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8a04000/0x0/0x4ffc00000, data 0x2b8a99c/0x2c61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 121446400 unmapped: 9879552 heap: 131325952 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8a04000/0x0/0x4ffc00000, data 0x2b8a99c/0x2c61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 121446400 unmapped: 9879552 heap: 131325952 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 121446400 unmapped: 9879552 heap: 131325952 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 121446400 unmapped: 9879552 heap: 131325952 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1452611 data_alloc: 234881024 data_used: 25559040
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 121446400 unmapped: 9879552 heap: 131325952 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 121446400 unmapped: 9879552 heap: 131325952 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8a04000/0x0/0x4ffc00000, data 0x2b8a99c/0x2c61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 121479168 unmapped: 9846784 heap: 131325952 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 121479168 unmapped: 9846784 heap: 131325952 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 121487360 unmapped: 9838592 heap: 131325952 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1452611 data_alloc: 234881024 data_used: 25559040
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8a04000/0x0/0x4ffc00000, data 0x2b8a99c/0x2c61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 121520128 unmapped: 9805824 heap: 131325952 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8a04000/0x0/0x4ffc00000, data 0x2b8a99c/0x2c61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.850786209s of 18.968877792s, submitted: 30
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 12247040 heap: 131325952 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 139 ms_handle_reset con 0x55ab9d7e6c00 session 0x55ab9c01a1e0
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 139 ms_handle_reset con 0x55ab9d7e6000 session 0x55ab9d2ced20
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 139 ms_handle_reset con 0x55ab9b303800 session 0x55ab9b138780
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 139 ms_handle_reset con 0x55ab9d158400 session 0x55ab9d7c10e0
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 139 ms_handle_reset con 0x55ab9cf8c000 session 0x55ab9cf752c0
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119611392 unmapped: 14868480 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119611392 unmapped: 14868480 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119611392 unmapped: 14868480 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1499789 data_alloc: 234881024 data_used: 25559040
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119611392 unmapped: 14868480 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f83a0000/0x0/0x4ffc00000, data 0x31f69fe/0x32ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119611392 unmapped: 14868480 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 139 ms_handle_reset con 0x55ab9d158800 session 0x55ab9bb98b40
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119611392 unmapped: 14868480 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 139 ms_handle_reset con 0x55ab9d7e9000 session 0x55ab9b440960
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119611392 unmapped: 14868480 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 139 ms_handle_reset con 0x55ab9d7eb000 session 0x55ab9bb99680
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 139 ms_handle_reset con 0x55ab9d7e9c00 session 0x55ab9da2c3c0
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119660544 unmapped: 14819328 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501936 data_alloc: 234881024 data_used: 25559040
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f839e000/0x0/0x4ffc00000, data 0x31f6a31/0x32d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119676928 unmapped: 14802944 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119676928 unmapped: 14802944 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f839e000/0x0/0x4ffc00000, data 0x31f6a31/0x32d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 120455168 unmapped: 14024704 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 122355712 unmapped: 12124160 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.693739891s of 12.235367775s, submitted: 47
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 122355712 unmapped: 12124160 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1524484 data_alloc: 251658240 data_used: 28762112
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f839e000/0x0/0x4ffc00000, data 0x31f6a31/0x32d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 122388480 unmapped: 12091392 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 122388480 unmapped: 12091392 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 123691008 unmapped: 10788864 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126533632 unmapped: 7946240 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f7913000/0x0/0x4ffc00000, data 0x3c7ba31/0x3d55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1619524 data_alloc: 251658240 data_used: 29175808
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127737856 unmapped: 6742016 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 139 ms_handle_reset con 0x55ab9d7e4c00 session 0x55ab9d7a8f00
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 139 ms_handle_reset con 0x55ab9cdaa000 session 0x55ab9e945c20
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127787008 unmapped: 6692864 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 139 ms_handle_reset con 0x55ab9d158800 session 0x55ab9d7a94a0
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127959040 unmapped: 6520832 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127959040 unmapped: 6520832 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 128958464 unmapped: 5521408 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f785d000/0x0/0x4ffc00000, data 0x3d29a31/0x3e03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1631582 data_alloc: 251658240 data_used: 29663232
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 129318912 unmapped: 5160960 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.974658012s of 11.352388382s, submitted: 112
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 129212416 unmapped: 5267456 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 129212416 unmapped: 5267456 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 129212416 unmapped: 5267456 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f7846000/0x0/0x4ffc00000, data 0x3d4ea31/0x3e28000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f7846000/0x0/0x4ffc00000, data 0x3d4ea31/0x3e28000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 129212416 unmapped: 5267456 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f7846000/0x0/0x4ffc00000, data 0x3d4ea31/0x3e28000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1623038 data_alloc: 251658240 data_used: 29671424
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 129212416 unmapped: 5267456 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 129212416 unmapped: 5267456 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 129220608 unmapped: 5259264 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 129220608 unmapped: 5259264 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 129220608 unmapped: 5259264 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1623038 data_alloc: 251658240 data_used: 29671424
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 129220608 unmapped: 5259264 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f7846000/0x0/0x4ffc00000, data 0x3d4ea31/0x3e28000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 129228800 unmapped: 5251072 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 129228800 unmapped: 5251072 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.003317833s of 12.024734497s, submitted: 4
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 129228800 unmapped: 5251072 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 129228800 unmapped: 5251072 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f7843000/0x0/0x4ffc00000, data 0x3d51a31/0x3e2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1622906 data_alloc: 251658240 data_used: 29671424
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 129228800 unmapped: 5251072 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 129228800 unmapped: 5251072 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 129228800 unmapped: 5251072 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f7843000/0x0/0x4ffc00000, data 0x3d51a31/0x3e2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 129228800 unmapped: 5251072 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 129228800 unmapped: 5251072 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1624471 data_alloc: 251658240 data_used: 29671424
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 129236992 unmapped: 5242880 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f7843000/0x0/0x4ffc00000, data 0x3d51a31/0x3e2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 129236992 unmapped: 5242880 heap: 134479872 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f76eb000/0x0/0x4ffc00000, data 0x3ea8a54/0x3f83000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 129884160 unmapped: 21381120 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.003591537s of 10.582101822s, submitted: 117
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 129204224 unmapped: 22061056 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d7e7800 session 0x55ab9e72d0e0
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 129204224 unmapped: 22061056 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1732557 data_alloc: 251658240 data_used: 29777920
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 129212416 unmapped: 22052864 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f6aa1000/0x0/0x4ffc00000, data 0x4aef5f4/0x4bcc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 129220608 unmapped: 22044672 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 129220608 unmapped: 22044672 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 129220608 unmapped: 22044672 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f6a80000/0x0/0x4ffc00000, data 0x4b115f4/0x4bee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 129220608 unmapped: 22044672 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1733549 data_alloc: 251658240 data_used: 29798400
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 129220608 unmapped: 22044672 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 128614400 unmapped: 22650880 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 128679936 unmapped: 22585344 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 128983040 unmapped: 22282240 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.304215431s of 10.410350800s, submitted: 18
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 129114112 unmapped: 22151168 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f6a70000/0x0/0x4ffc00000, data 0x4b215f4/0x4bfe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d525000 session 0x55ab9bb94b40
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9bfc7800 session 0x55ab9bb941e0
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9cdaa000 session 0x55ab9e9441e0
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1741613 data_alloc: 251658240 data_used: 30396416
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 129114112 unmapped: 22151168 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d158800 session 0x55ab9e9454a0
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 129458176 unmapped: 21807104 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d525000 session 0x55ab9e944f00
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132718592 unmapped: 18546688 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d7e9800 session 0x55ab9e944780
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d7e8400 session 0x55ab9e9445a0
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9cdaa000 session 0x55ab9bf94f00
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d158800 session 0x55ab9b41c960
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d525000 session 0x55ab9c2bbc20
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d7e9800 session 0x55ab9a7294a0
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132038656 unmapped: 19226624 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132038656 unmapped: 19226624 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f6352000/0x0/0x4ffc00000, data 0x523e604/0x531c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1809372 data_alloc: 251658240 data_used: 31444992
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132038656 unmapped: 19226624 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132038656 unmapped: 19226624 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132038656 unmapped: 19226624 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 131432448 unmapped: 19832832 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 131432448 unmapped: 19832832 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f634f000/0x0/0x4ffc00000, data 0x5241604/0x531f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1809548 data_alloc: 251658240 data_used: 31444992
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 131432448 unmapped: 19832832 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.554804802s of 12.779651642s, submitted: 38
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 131432448 unmapped: 19832832 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f634f000/0x0/0x4ffc00000, data 0x5241604/0x531f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9bfc4800 session 0x55ab9d2c4b40
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 131432448 unmapped: 19832832 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9cdaa000 session 0x55ab9d2c5860
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f634f000/0x0/0x4ffc00000, data 0x5241604/0x531f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 131432448 unmapped: 19832832 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d158800 session 0x55ab9d2c45a0
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 131432448 unmapped: 19832832 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d525000 session 0x55ab9d2c5e00
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1816052 data_alloc: 251658240 data_used: 31444992
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d7f2c00 session 0x55ab9cf46d20
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 131809280 unmapped: 19456000 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9bfc7c00 session 0x55ab9c01a5a0
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f6324000/0x0/0x4ffc00000, data 0x526b627/0x534a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d158800 session 0x55ab9bba1c20
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 20799488 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f73fe000/0x0/0x4ffc00000, data 0x4191627/0x4270000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 20799488 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 131391488 unmapped: 19873792 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133570560 unmapped: 17694720 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1726156 data_alloc: 251658240 data_used: 37060608
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133570560 unmapped: 17694720 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133570560 unmapped: 17694720 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133570560 unmapped: 17694720 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.452157021s of 11.654810905s, submitted: 44
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f73fc000/0x0/0x4ffc00000, data 0x4192627/0x4271000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133570560 unmapped: 17694720 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133611520 unmapped: 17653760 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1726748 data_alloc: 251658240 data_used: 37068800
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133611520 unmapped: 17653760 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d7edc00 session 0x55ab9da73a40
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d7f6000 session 0x55ab9d2e1680
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f73f1000/0x0/0x4ffc00000, data 0x419e627/0x427d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d7f6400 session 0x55ab9b41d2c0
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132079616 unmapped: 19185664 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132079616 unmapped: 19185664 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8037000/0x0/0x4ffc00000, data 0x3559592/0x3635000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132112384 unmapped: 19152896 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8037000/0x0/0x4ffc00000, data 0x3559592/0x3635000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9cd46c00 session 0x55ab9d7a8960
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9cdabc00 session 0x55ab9d2e0f00
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132145152 unmapped: 19120128 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d158800 session 0x55ab9d2e0b40
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1328035 data_alloc: 234881024 data_used: 19718144
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 27688960 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f949b000/0x0/0x4ffc00000, data 0x20f855f/0x21d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 27688960 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 27688960 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 27688960 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 27688960 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f949b000/0x0/0x4ffc00000, data 0x20f855f/0x21d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1328035 data_alloc: 234881024 data_used: 19718144
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 27688960 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 27688960 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 27688960 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 27688960 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 27688960 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1328035 data_alloc: 234881024 data_used: 19718144
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f949b000/0x0/0x4ffc00000, data 0x20f855f/0x21d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 27688960 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 27688960 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 27688960 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f949b000/0x0/0x4ffc00000, data 0x20f855f/0x21d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 27688960 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 27688960 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1328035 data_alloc: 234881024 data_used: 19718144
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 27688960 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 27688960 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 27688960 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f949b000/0x0/0x4ffc00000, data 0x20f855f/0x21d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 27688960 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 25.466674805s of 25.778623581s, submitted: 69
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f949b000/0x0/0x4ffc00000, data 0x20f855f/0x21d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127623168 unmapped: 23642112 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1399325 data_alloc: 234881024 data_used: 20193280
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127565824 unmapped: 23699456 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8bd2000/0x0/0x4ffc00000, data 0x29c255f/0x2a9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127565824 unmapped: 23699456 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127565824 unmapped: 23699456 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127565824 unmapped: 23699456 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127565824 unmapped: 23699456 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8bca000/0x0/0x4ffc00000, data 0x29ca55f/0x2aa4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1405631 data_alloc: 234881024 data_used: 20189184
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127565824 unmapped: 23699456 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8bca000/0x0/0x4ffc00000, data 0x29ca55f/0x2aa4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127565824 unmapped: 23699456 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126656512 unmapped: 24608768 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126656512 unmapped: 24608768 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8bba000/0x0/0x4ffc00000, data 0x29da55f/0x2ab4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126656512 unmapped: 24608768 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1402495 data_alloc: 234881024 data_used: 20189184
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126656512 unmapped: 24608768 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126656512 unmapped: 24608768 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126656512 unmapped: 24608768 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126656512 unmapped: 24608768 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126656512 unmapped: 24608768 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8bba000/0x0/0x4ffc00000, data 0x29da55f/0x2ab4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8bba000/0x0/0x4ffc00000, data 0x29da55f/0x2ab4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1402495 data_alloc: 234881024 data_used: 20189184
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126656512 unmapped: 24608768 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126656512 unmapped: 24608768 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126656512 unmapped: 24608768 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.735033035s of 20.114135742s, submitted: 83
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126656512 unmapped: 24608768 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8bb7000/0x0/0x4ffc00000, data 0x29dd55f/0x2ab7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126697472 unmapped: 24567808 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1402651 data_alloc: 234881024 data_used: 20189184
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126697472 unmapped: 24567808 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126705664 unmapped: 24559616 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126705664 unmapped: 24559616 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8bb7000/0x0/0x4ffc00000, data 0x29dd55f/0x2ab7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126705664 unmapped: 24559616 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126705664 unmapped: 24559616 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1402651 data_alloc: 234881024 data_used: 20189184
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126705664 unmapped: 24559616 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126705664 unmapped: 24559616 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126705664 unmapped: 24559616 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126705664 unmapped: 24559616 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8bb7000/0x0/0x4ffc00000, data 0x29dd55f/0x2ab7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126713856 unmapped: 24551424 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1402651 data_alloc: 234881024 data_used: 20189184
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126713856 unmapped: 24551424 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.520511627s of 12.538968086s, submitted: 3
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126713856 unmapped: 24551424 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8bb7000/0x0/0x4ffc00000, data 0x29dd55f/0x2ab7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126713856 unmapped: 24551424 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126713856 unmapped: 24551424 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126713856 unmapped: 24551424 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1403003 data_alloc: 234881024 data_used: 20189184
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126713856 unmapped: 24551424 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126713856 unmapped: 24551424 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8bb7000/0x0/0x4ffc00000, data 0x29dd55f/0x2ab7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126722048 unmapped: 24543232 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126722048 unmapped: 24543232 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126722048 unmapped: 24543232 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1403003 data_alloc: 234881024 data_used: 20189184
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126722048 unmapped: 24543232 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126722048 unmapped: 24543232 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126722048 unmapped: 24543232 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8bb7000/0x0/0x4ffc00000, data 0x29dd55f/0x2ab7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126722048 unmapped: 24543232 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8bb7000/0x0/0x4ffc00000, data 0x29dd55f/0x2ab7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126722048 unmapped: 24543232 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8bb7000/0x0/0x4ffc00000, data 0x29dd55f/0x2ab7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1403003 data_alloc: 234881024 data_used: 20189184
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 24535040 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 24535040 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 24535040 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8bb7000/0x0/0x4ffc00000, data 0x29dd55f/0x2ab7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126705664 unmapped: 24559616 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126705664 unmapped: 24559616 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1403003 data_alloc: 234881024 data_used: 20189184
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126705664 unmapped: 24559616 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126705664 unmapped: 24559616 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126705664 unmapped: 24559616 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126713856 unmapped: 24551424 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8bb7000/0x0/0x4ffc00000, data 0x29dd55f/0x2ab7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126713856 unmapped: 24551424 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1403003 data_alloc: 234881024 data_used: 20189184
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126713856 unmapped: 24551424 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126713856 unmapped: 24551424 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126713856 unmapped: 24551424 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126713856 unmapped: 24551424 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8bb7000/0x0/0x4ffc00000, data 0x29dd55f/0x2ab7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126713856 unmapped: 24551424 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1403003 data_alloc: 234881024 data_used: 20189184
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8bb7000/0x0/0x4ffc00000, data 0x29dd55f/0x2ab7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126713856 unmapped: 24551424 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9cf8c800 session 0x55ab9d7a9a40
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d1a8400 session 0x55ab9c16c3c0
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d7e8800 session 0x55ab9c16d2c0
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126722048 unmapped: 24543232 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d1a9800 session 0x55ab9c16de00
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 30.986988068s of 30.994596481s, submitted: 2
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9cdabc00 session 0x55ab9bf94d20
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9cf8c800 session 0x55ab9d92f680
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d158800 session 0x55ab9d92ef00
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d1a8400 session 0x55ab9b1db4a0
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9cdabc00 session 0x55ab9bb985a0
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126263296 unmapped: 25001984 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126263296 unmapped: 25001984 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126263296 unmapped: 25001984 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1427586 data_alloc: 234881024 data_used: 20189184
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126263296 unmapped: 25001984 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8924000/0x0/0x4ffc00000, data 0x2c6f56f/0x2d4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126263296 unmapped: 25001984 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8924000/0x0/0x4ffc00000, data 0x2c6f56f/0x2d4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126263296 unmapped: 25001984 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125026304 unmapped: 26238976 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d524c00 session 0x55ab9d7c72c0
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125026304 unmapped: 26238976 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1430442 data_alloc: 234881024 data_used: 20193280
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125026304 unmapped: 26238976 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125026304 unmapped: 26238976 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125026304 unmapped: 26238976 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8900000/0x0/0x4ffc00000, data 0x2c9356f/0x2d6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125026304 unmapped: 26238976 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125026304 unmapped: 26238976 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1450122 data_alloc: 234881024 data_used: 22892544
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125026304 unmapped: 26238976 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125026304 unmapped: 26238976 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8900000/0x0/0x4ffc00000, data 0x2c9356f/0x2d6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125026304 unmapped: 26238976 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8900000/0x0/0x4ffc00000, data 0x2c9356f/0x2d6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125026304 unmapped: 26238976 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125026304 unmapped: 26238976 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1450122 data_alloc: 234881024 data_used: 22892544
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125026304 unmapped: 26238976 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125026304 unmapped: 26238976 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125026304 unmapped: 26238976 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125034496 unmapped: 26230784 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8900000/0x0/0x4ffc00000, data 0x2c9356f/0x2d6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125034496 unmapped: 26230784 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1450122 data_alloc: 234881024 data_used: 22892544
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125034496 unmapped: 26230784 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125034496 unmapped: 26230784 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8900000/0x0/0x4ffc00000, data 0x2c9356f/0x2d6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125034496 unmapped: 26230784 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125042688 unmapped: 26222592 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125050880 unmapped: 26214400 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1450122 data_alloc: 234881024 data_used: 22892544
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125050880 unmapped: 26214400 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125050880 unmapped: 26214400 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8900000/0x0/0x4ffc00000, data 0x2c9356f/0x2d6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125050880 unmapped: 26214400 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125050880 unmapped: 26214400 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125050880 unmapped: 26214400 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1450122 data_alloc: 234881024 data_used: 22892544
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125050880 unmapped: 26214400 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 26206208 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8900000/0x0/0x4ffc00000, data 0x2c9356f/0x2d6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 26206208 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8900000/0x0/0x4ffc00000, data 0x2c9356f/0x2d6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 26206208 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 26206208 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8900000/0x0/0x4ffc00000, data 0x2c9356f/0x2d6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1450122 data_alloc: 234881024 data_used: 22892544
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 26198016 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8900000/0x0/0x4ffc00000, data 0x2c9356f/0x2d6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125075456 unmapped: 26189824 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125075456 unmapped: 26189824 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125075456 unmapped: 26189824 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 125075456 unmapped: 26189824 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 43.284614563s of 43.411251068s, submitted: 16
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1516584 data_alloc: 234881024 data_used: 23195648
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 128139264 unmapped: 23126016 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f80e2000/0x0/0x4ffc00000, data 0x34b156f/0x358c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 128163840 unmapped: 23101440 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 24125440 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 24125440 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f80bb000/0x0/0x4ffc00000, data 0x34d856f/0x35b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 24125440 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1526666 data_alloc: 234881024 data_used: 23326720
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 24125440 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f80bb000/0x0/0x4ffc00000, data 0x34d856f/0x35b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 24125440 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f80bb000/0x0/0x4ffc00000, data 0x34d856f/0x35b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 24125440 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 24125440 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 24125440 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f80b8000/0x0/0x4ffc00000, data 0x34db56f/0x35b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1525850 data_alloc: 234881024 data_used: 23326720
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 24125440 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 24125440 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 24125440 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.226646423s of 13.443584442s, submitted: 62
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 24125440 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f80b8000/0x0/0x4ffc00000, data 0x34db56f/0x35b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 24125440 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1526078 data_alloc: 234881024 data_used: 23326720
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 24125440 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 24125440 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f80b7000/0x0/0x4ffc00000, data 0x34dc56f/0x35b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 24125440 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 24125440 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 24125440 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1525550 data_alloc: 234881024 data_used: 23326720
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 24125440 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 24125440 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f80b7000/0x0/0x4ffc00000, data 0x34dc56f/0x35b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 24125440 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 24125440 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f80b7000/0x0/0x4ffc00000, data 0x34dc56f/0x35b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 24125440 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9cd45c00 session 0x55ab9d766b40
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d1a9800 session 0x55ab9d766000
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9b302000 session 0x55ab9d7672c0
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9b302000 session 0x55ab9da721e0
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.944172859s of 11.963119507s, submitted: 2
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9cd45c00 session 0x55ab9d2ce000
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9cdabc00 session 0x55ab9d7c70e0
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d1a9800 session 0x55ab9d446d20
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1547706 data_alloc: 234881024 data_used: 23326720
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d524c00 session 0x55ab9d766d20
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127213568 unmapped: 24051712 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d524c00 session 0x55ab9d7a8780
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127213568 unmapped: 24051712 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127213568 unmapped: 24051712 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e65000/0x0/0x4ffc00000, data 0x372e56f/0x3809000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127213568 unmapped: 24051712 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127213568 unmapped: 24051712 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1547706 data_alloc: 234881024 data_used: 23326720
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127213568 unmapped: 24051712 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e65000/0x0/0x4ffc00000, data 0x372e56f/0x3809000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126664704 unmapped: 24600576 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d525000 session 0x55ab9da732c0
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126664704 unmapped: 24600576 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126664704 unmapped: 24600576 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126697472 unmapped: 24567808 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1555406 data_alloc: 234881024 data_used: 23949312
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 126697472 unmapped: 24567808 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127655936 unmapped: 23609344 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e40000/0x0/0x4ffc00000, data 0x375256f/0x382d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127655936 unmapped: 23609344 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127655936 unmapped: 23609344 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127655936 unmapped: 23609344 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1566766 data_alloc: 234881024 data_used: 25563136
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127655936 unmapped: 23609344 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127655936 unmapped: 23609344 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e40000/0x0/0x4ffc00000, data 0x375256f/0x382d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e40000/0x0/0x4ffc00000, data 0x375256f/0x382d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127655936 unmapped: 23609344 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127664128 unmapped: 23601152 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127664128 unmapped: 23601152 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e40000/0x0/0x4ffc00000, data 0x375256f/0x382d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1566766 data_alloc: 234881024 data_used: 25563136
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127664128 unmapped: 23601152 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127664128 unmapped: 23601152 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127664128 unmapped: 23601152 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127664128 unmapped: 23601152 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127664128 unmapped: 23601152 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1566766 data_alloc: 234881024 data_used: 25563136
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e40000/0x0/0x4ffc00000, data 0x375256f/0x382d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127664128 unmapped: 23601152 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127664128 unmapped: 23601152 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e40000/0x0/0x4ffc00000, data 0x375256f/0x382d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127664128 unmapped: 23601152 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127664128 unmapped: 23601152 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127672320 unmapped: 23592960 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1566766 data_alloc: 234881024 data_used: 25563136
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127672320 unmapped: 23592960 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e40000/0x0/0x4ffc00000, data 0x375256f/0x382d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127672320 unmapped: 23592960 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127672320 unmapped: 23592960 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127680512 unmapped: 23584768 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127680512 unmapped: 23584768 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e40000/0x0/0x4ffc00000, data 0x375256f/0x382d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1566766 data_alloc: 234881024 data_used: 25563136
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127680512 unmapped: 23584768 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e40000/0x0/0x4ffc00000, data 0x375256f/0x382d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127680512 unmapped: 23584768 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e40000/0x0/0x4ffc00000, data 0x375256f/0x382d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127680512 unmapped: 23584768 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127680512 unmapped: 23584768 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127680512 unmapped: 23584768 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1566766 data_alloc: 234881024 data_used: 25563136
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127680512 unmapped: 23584768 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e40000/0x0/0x4ffc00000, data 0x375256f/0x382d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127680512 unmapped: 23584768 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127688704 unmapped: 23576576 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127688704 unmapped: 23576576 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 44.070636749s of 44.145889282s, submitted: 9
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 131506176 unmapped: 19759104 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1653670 data_alloc: 234881024 data_used: 26386432
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 131547136 unmapped: 19718144 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f738d000/0x0/0x4ffc00000, data 0x420656f/0x42e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,1])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 131121152 unmapped: 20144128 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 131121152 unmapped: 20144128 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f730f000/0x0/0x4ffc00000, data 0x428456f/0x435f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 131121152 unmapped: 20144128 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 131121152 unmapped: 20144128 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1661948 data_alloc: 234881024 data_used: 26673152
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 131121152 unmapped: 20144128 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 131121152 unmapped: 20144128 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 131252224 unmapped: 20013056 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f72ef000/0x0/0x4ffc00000, data 0x42a456f/0x437f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 131252224 unmapped: 20013056 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f72ef000/0x0/0x4ffc00000, data 0x42a456f/0x437f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 131252224 unmapped: 20013056 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d1a8800 session 0x55ab9b41f860
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d7f7800 session 0x55ab9d7a83c0
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.831089020s of 11.189863205s, submitted: 67
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1539642 data_alloc: 234881024 data_used: 23437312
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d7ee400 session 0x55ab9c01b2c0
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127877120 unmapped: 23388160 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f80b7000/0x0/0x4ffc00000, data 0x34dc56f/0x35b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127877120 unmapped: 23388160 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127877120 unmapped: 23388160 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127877120 unmapped: 23388160 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f80b7000/0x0/0x4ffc00000, data 0x34dc56f/0x35b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127877120 unmapped: 23388160 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d7e7400 session 0x55ab9d7c6b40
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9b76f400 session 0x55ab9bb94b40
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1533102 data_alloc: 234881024 data_used: 23326720
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127893504 unmapped: 23371776 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d159000 session 0x55ab9d7c01e0
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8bb7000/0x0/0x4ffc00000, data 0x29dd55f/0x2ab7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1420910 data_alloc: 234881024 data_used: 20189184
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8bb7000/0x0/0x4ffc00000, data 0x29dd55f/0x2ab7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1420910 data_alloc: 234881024 data_used: 20189184
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8bb7000/0x0/0x4ffc00000, data 0x29dd55f/0x2ab7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1420910 data_alloc: 234881024 data_used: 20189184
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8bb7000/0x0/0x4ffc00000, data 0x29dd55f/0x2ab7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1420910 data_alloc: 234881024 data_used: 20189184
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8bb7000/0x0/0x4ffc00000, data 0x29dd55f/0x2ab7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8bb7000/0x0/0x4ffc00000, data 0x29dd55f/0x2ab7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1420910 data_alloc: 234881024 data_used: 20189184
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8bb7000/0x0/0x4ffc00000, data 0x29dd55f/0x2ab7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8bb7000/0x0/0x4ffc00000, data 0x29dd55f/0x2ab7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1420910 data_alloc: 234881024 data_used: 20189184
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8bb7000/0x0/0x4ffc00000, data 0x29dd55f/0x2ab7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1420910 data_alloc: 234881024 data_used: 20189184
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8bb7000/0x0/0x4ffc00000, data 0x29dd55f/0x2ab7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8bb7000/0x0/0x4ffc00000, data 0x29dd55f/0x2ab7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1420910 data_alloc: 234881024 data_used: 20189184
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8bb7000/0x0/0x4ffc00000, data 0x29dd55f/0x2ab7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 24231936 heap: 151265280 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 49.699962616s of 49.828170776s, submitted: 34
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127844352 unmapped: 29736960 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d7ed400 session 0x55ab9da73a40
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9bfc2400 session 0x55ab9da723c0
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9b76f400 session 0x55ab9da730e0
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9bfc2400 session 0x55ab9d92f680
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d159000 session 0x55ab9d92fc20
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1478296 data_alloc: 234881024 data_used: 20189184
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127844352 unmapped: 29736960 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127844352 unmapped: 29736960 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f84be000/0x0/0x4ffc00000, data 0x30d655f/0x31b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127844352 unmapped: 29736960 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d7e4c00 session 0x55ab9d92e960
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127844352 unmapped: 29736960 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d7f4000 session 0x55ab9d92e3c0
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f84be000/0x0/0x4ffc00000, data 0x30d655f/0x31b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127860736 unmapped: 29720576 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9b76f400 session 0x55ab9d92f4a0
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9bfc2400 session 0x55ab9d92e000
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1478296 data_alloc: 234881024 data_used: 20189184
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f84be000/0x0/0x4ffc00000, data 0x30d655f/0x31b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127860736 unmapped: 29720576 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127860736 unmapped: 29720576 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 127901696 unmapped: 29679616 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f84be000/0x0/0x4ffc00000, data 0x30d655f/0x31b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [1])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 128827392 unmapped: 28753920 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130113536 unmapped: 27467776 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1529016 data_alloc: 251658240 data_used: 27299840
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130113536 unmapped: 27467776 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130113536 unmapped: 27467776 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130113536 unmapped: 27467776 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f84be000/0x0/0x4ffc00000, data 0x30d655f/0x31b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130113536 unmapped: 27467776 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130113536 unmapped: 27467776 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1529016 data_alloc: 251658240 data_used: 27299840
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130113536 unmapped: 27467776 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f84be000/0x0/0x4ffc00000, data 0x30d655f/0x31b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130113536 unmapped: 27467776 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 3600.2 total, 600.0 interval
    Cumulative writes: 9455 writes, 37K keys, 9455 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
    Cumulative WAL: 9455 writes, 2473 syncs, 3.82 writes per sync, written: 0.03 GB, 0.01 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 2028 writes, 7848 keys, 2028 commit groups, 1.0 writes per commit group, ingest: 8.51 MB, 0.01 MB/s
    Interval WAL: 2028 writes, 827 syncs, 2.45 writes per sync, written: 0.01 GB, 0.01 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f84be000/0x0/0x4ffc00000, data 0x30d655f/0x31b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130113536 unmapped: 27467776 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130113536 unmapped: 27467776 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130113536 unmapped: 27467776 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1529016 data_alloc: 251658240 data_used: 27299840
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130113536 unmapped: 27467776 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130113536 unmapped: 27467776 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130113536 unmapped: 27467776 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f84be000/0x0/0x4ffc00000, data 0x30d655f/0x31b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130121728 unmapped: 27459584 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: mgrc ms_handle_reset ms_handle_reset con 0x55ab9ce08400
Dec  3 19:27:57 compute-0 ceph-osd[208881]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/817799961
Dec  3 19:27:57 compute-0 ceph-osd[208881]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/817799961,v1:192.168.122.100:6801/817799961]
Dec  3 19:27:57 compute-0 ceph-osd[208881]: mgrc handle_mgr_configure stats_period=5
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130318336 unmapped: 27262976 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f84be000/0x0/0x4ffc00000, data 0x30d655f/0x31b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1529016 data_alloc: 251658240 data_used: 27299840
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130351104 unmapped: 27230208 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130351104 unmapped: 27230208 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130351104 unmapped: 27230208 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f84be000/0x0/0x4ffc00000, data 0x30d655f/0x31b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130383872 unmapped: 27197440 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130383872 unmapped: 27197440 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f84be000/0x0/0x4ffc00000, data 0x30d655f/0x31b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1529016 data_alloc: 251658240 data_used: 27299840
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130383872 unmapped: 27197440 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130383872 unmapped: 27197440 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130383872 unmapped: 27197440 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130383872 unmapped: 27197440 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130383872 unmapped: 27197440 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f84be000/0x0/0x4ffc00000, data 0x30d655f/0x31b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1529016 data_alloc: 251658240 data_used: 27299840
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130383872 unmapped: 27197440 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f84be000/0x0/0x4ffc00000, data 0x30d655f/0x31b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130383872 unmapped: 27197440 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130383872 unmapped: 27197440 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130383872 unmapped: 27197440 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130383872 unmapped: 27197440 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 40.604251862s of 40.679466248s, submitted: 12
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1602660 data_alloc: 251658240 data_used: 27291648
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 135077888 unmapped: 22503424 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 135077888 unmapped: 22503424 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7bb0000/0x0/0x4ffc00000, data 0x39d655f/0x3ab0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133644288 unmapped: 23937024 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133644288 unmapped: 23937024 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133644288 unmapped: 23937024 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1611264 data_alloc: 251658240 data_used: 28352512
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133644288 unmapped: 23937024 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133644288 unmapped: 23937024 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b89000/0x0/0x4ffc00000, data 0x3a0355f/0x3add000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133677056 unmapped: 23904256 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132857856 unmapped: 24723456 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132857856 unmapped: 24723456 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b8f000/0x0/0x4ffc00000, data 0x3a0555f/0x3adf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1605304 data_alloc: 251658240 data_used: 28352512
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132857856 unmapped: 24723456 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132857856 unmapped: 24723456 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132857856 unmapped: 24723456 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132857856 unmapped: 24723456 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b8f000/0x0/0x4ffc00000, data 0x3a0555f/0x3adf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132857856 unmapped: 24723456 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1605304 data_alloc: 251658240 data_used: 28352512
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132857856 unmapped: 24723456 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132857856 unmapped: 24723456 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132857856 unmapped: 24723456 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b8f000/0x0/0x4ffc00000, data 0x3a0555f/0x3adf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132874240 unmapped: 24707072 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b8f000/0x0/0x4ffc00000, data 0x3a0555f/0x3adf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132874240 unmapped: 24707072 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1605304 data_alloc: 251658240 data_used: 28352512
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132874240 unmapped: 24707072 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132874240 unmapped: 24707072 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132874240 unmapped: 24707072 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 23.374584198s of 23.755279541s, submitted: 109
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132874240 unmapped: 24707072 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132874240 unmapped: 24707072 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b8f000/0x0/0x4ffc00000, data 0x3a0555f/0x3adf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1606392 data_alloc: 251658240 data_used: 28688384
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132874240 unmapped: 24707072 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132874240 unmapped: 24707072 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b8f000/0x0/0x4ffc00000, data 0x3a0555f/0x3adf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132874240 unmapped: 24707072 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b8f000/0x0/0x4ffc00000, data 0x3a0555f/0x3adf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132874240 unmapped: 24707072 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132988928 unmapped: 24592384 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1607012 data_alloc: 251658240 data_used: 28688384
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132988928 unmapped: 24592384 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132988928 unmapped: 24592384 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3a1c55f/0x3af6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132988928 unmapped: 24592384 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132988928 unmapped: 24592384 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132988928 unmapped: 24592384 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3a1c55f/0x3af6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1607188 data_alloc: 251658240 data_used: 28688384
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3a1c55f/0x3af6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132988928 unmapped: 24592384 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3a1c55f/0x3af6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132988928 unmapped: 24592384 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132988928 unmapped: 24592384 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.286112785s of 14.325413704s, submitted: 6
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132988928 unmapped: 24592384 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132988928 unmapped: 24592384 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1607508 data_alloc: 251658240 data_used: 28696576
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3a1c55f/0x3af6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132988928 unmapped: 24592384 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3a1c55f/0x3af6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132988928 unmapped: 24592384 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132988928 unmapped: 24592384 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132988928 unmapped: 24592384 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3a1c55f/0x3af6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 132997120 unmapped: 24584192 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1608276 data_alloc: 251658240 data_used: 28696576
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133079040 unmapped: 24502272 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133079040 unmapped: 24502272 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133079040 unmapped: 24502272 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133079040 unmapped: 24502272 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133079040 unmapped: 24502272 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1607752 data_alloc: 251658240 data_used: 28696576
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b63000/0x0/0x4ffc00000, data 0x3a3155f/0x3b0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133079040 unmapped: 24502272 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133079040 unmapped: 24502272 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133079040 unmapped: 24502272 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b63000/0x0/0x4ffc00000, data 0x3a3155f/0x3b0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133079040 unmapped: 24502272 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133079040 unmapped: 24502272 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b63000/0x0/0x4ffc00000, data 0x3a3155f/0x3b0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1607752 data_alloc: 251658240 data_used: 28696576
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133079040 unmapped: 24502272 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133079040 unmapped: 24502272 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133079040 unmapped: 24502272 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b63000/0x0/0x4ffc00000, data 0x3a3155f/0x3b0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133079040 unmapped: 24502272 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133087232 unmapped: 24494080 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1607752 data_alloc: 251658240 data_used: 28696576
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133087232 unmapped: 24494080 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133087232 unmapped: 24494080 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9b305c00 session 0x55ab9bba12c0
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b63000/0x0/0x4ffc00000, data 0x3a3155f/0x3b0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133087232 unmapped: 24494080 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 25.582841873s of 25.602600098s, submitted: 3
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133095424 unmapped: 24485888 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133095424 unmapped: 24485888 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609996 data_alloc: 251658240 data_used: 28696576
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b61000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133095424 unmapped: 24485888 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133095424 unmapped: 24485888 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b61000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133095424 unmapped: 24485888 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b61000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,0,1])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133095424 unmapped: 24485888 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133128192 unmapped: 24453120 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133128192 unmapped: 24453120 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133136384 unmapped: 24444928 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [1])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133201920 unmapped: 24379392 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.599219322s of 10.092069626s, submitted: 112
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133242880 unmapped: 24338432 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133242880 unmapped: 24338432 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133242880 unmapped: 24338432 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133242880 unmapped: 24338432 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133242880 unmapped: 24338432 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133242880 unmapped: 24338432 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133242880 unmapped: 24338432 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133242880 unmapped: 24338432 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133242880 unmapped: 24338432 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133242880 unmapped: 24338432 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133242880 unmapped: 24338432 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133242880 unmapped: 24338432 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133242880 unmapped: 24338432 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133242880 unmapped: 24338432 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133242880 unmapped: 24338432 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133251072 unmapped: 24330240 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133251072 unmapped: 24330240 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133251072 unmapped: 24330240 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133251072 unmapped: 24330240 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133251072 unmapped: 24330240 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133251072 unmapped: 24330240 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133251072 unmapped: 24330240 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133259264 unmapped: 24322048 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133259264 unmapped: 24322048 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133259264 unmapped: 24322048 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133259264 unmapped: 24322048 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133259264 unmapped: 24322048 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133259264 unmapped: 24322048 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133259264 unmapped: 24322048 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133259264 unmapped: 24322048 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133259264 unmapped: 24322048 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133259264 unmapped: 24322048 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133259264 unmapped: 24322048 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133259264 unmapped: 24322048 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133259264 unmapped: 24322048 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133259264 unmapped: 24322048 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133267456 unmapped: 24313856 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133267456 unmapped: 24313856 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133267456 unmapped: 24313856 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133267456 unmapped: 24313856 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133267456 unmapped: 24313856 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133267456 unmapped: 24313856 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133267456 unmapped: 24313856 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133267456 unmapped: 24313856 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133267456 unmapped: 24313856 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133275648 unmapped: 24305664 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133275648 unmapped: 24305664 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133275648 unmapped: 24305664 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133275648 unmapped: 24305664 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133275648 unmapped: 24305664 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133275648 unmapped: 24305664 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133275648 unmapped: 24305664 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133275648 unmapped: 24305664 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133275648 unmapped: 24305664 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133275648 unmapped: 24305664 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133275648 unmapped: 24305664 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133275648 unmapped: 24305664 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133275648 unmapped: 24305664 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133275648 unmapped: 24305664 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133275648 unmapped: 24305664 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133283840 unmapped: 24297472 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133283840 unmapped: 24297472 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133283840 unmapped: 24297472 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133283840 unmapped: 24297472 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133283840 unmapped: 24297472 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133283840 unmapped: 24297472 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133283840 unmapped: 24297472 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133283840 unmapped: 24297472 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133283840 unmapped: 24297472 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133283840 unmapped: 24297472 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133292032 unmapped: 24289280 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133292032 unmapped: 24289280 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133292032 unmapped: 24289280 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133292032 unmapped: 24289280 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133292032 unmapped: 24289280 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133292032 unmapped: 24289280 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133292032 unmapped: 24289280 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133292032 unmapped: 24289280 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133292032 unmapped: 24289280 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133292032 unmapped: 24289280 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133292032 unmapped: 24289280 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133292032 unmapped: 24289280 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133292032 unmapped: 24289280 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133292032 unmapped: 24289280 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133292032 unmapped: 24289280 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133292032 unmapped: 24289280 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133292032 unmapped: 24289280 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133292032 unmapped: 24289280 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133292032 unmapped: 24289280 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133292032 unmapped: 24289280 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133292032 unmapped: 24289280 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133292032 unmapped: 24289280 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133292032 unmapped: 24289280 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133292032 unmapped: 24289280 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133292032 unmapped: 24289280 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133292032 unmapped: 24289280 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133292032 unmapped: 24289280 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133292032 unmapped: 24289280 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133292032 unmapped: 24289280 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133292032 unmapped: 24289280 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133292032 unmapped: 24289280 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133300224 unmapped: 24281088 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133300224 unmapped: 24281088 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133300224 unmapped: 24281088 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133300224 unmapped: 24281088 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133300224 unmapped: 24281088 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133300224 unmapped: 24281088 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133300224 unmapped: 24281088 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133300224 unmapped: 24281088 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133300224 unmapped: 24281088 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133300224 unmapped: 24281088 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133300224 unmapped: 24281088 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133300224 unmapped: 24281088 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133300224 unmapped: 24281088 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133300224 unmapped: 24281088 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133300224 unmapped: 24281088 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133308416 unmapped: 24272896 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133308416 unmapped: 24272896 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133308416 unmapped: 24272896 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133308416 unmapped: 24272896 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133308416 unmapped: 24272896 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133308416 unmapped: 24272896 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133308416 unmapped: 24272896 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133308416 unmapped: 24272896 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133308416 unmapped: 24272896 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133308416 unmapped: 24272896 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133316608 unmapped: 24264704 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133316608 unmapped: 24264704 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133316608 unmapped: 24264704 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133316608 unmapped: 24264704 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133316608 unmapped: 24264704 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133316608 unmapped: 24264704 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133316608 unmapped: 24264704 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133324800 unmapped: 24256512 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133324800 unmapped: 24256512 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133324800 unmapped: 24256512 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133324800 unmapped: 24256512 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133324800 unmapped: 24256512 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133324800 unmapped: 24256512 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133324800 unmapped: 24256512 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133332992 unmapped: 24248320 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133332992 unmapped: 24248320 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133332992 unmapped: 24248320 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133332992 unmapped: 24248320 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133332992 unmapped: 24248320 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133332992 unmapped: 24248320 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133332992 unmapped: 24248320 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133332992 unmapped: 24248320 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133332992 unmapped: 24248320 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133332992 unmapped: 24248320 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133332992 unmapped: 24248320 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133332992 unmapped: 24248320 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133332992 unmapped: 24248320 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133332992 unmapped: 24248320 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133332992 unmapped: 24248320 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133341184 unmapped: 24240128 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133341184 unmapped: 24240128 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133341184 unmapped: 24240128 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133341184 unmapped: 24240128 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133341184 unmapped: 24240128 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133341184 unmapped: 24240128 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133341184 unmapped: 24240128 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133341184 unmapped: 24240128 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133349376 unmapped: 24231936 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133349376 unmapped: 24231936 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133357568 unmapped: 24223744 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133357568 unmapped: 24223744 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133357568 unmapped: 24223744 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133357568 unmapped: 24223744 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133357568 unmapped: 24223744 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133357568 unmapped: 24223744 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133357568 unmapped: 24223744 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133357568 unmapped: 24223744 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133357568 unmapped: 24223744 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133357568 unmapped: 24223744 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133357568 unmapped: 24223744 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133357568 unmapped: 24223744 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133357568 unmapped: 24223744 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133357568 unmapped: 24223744 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133357568 unmapped: 24223744 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133357568 unmapped: 24223744 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133365760 unmapped: 24215552 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133365760 unmapped: 24215552 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133365760 unmapped: 24215552 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133365760 unmapped: 24215552 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133373952 unmapped: 24207360 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133373952 unmapped: 24207360 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133373952 unmapped: 24207360 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133373952 unmapped: 24207360 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133373952 unmapped: 24207360 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133373952 unmapped: 24207360 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133373952 unmapped: 24207360 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 251658240 data_used: 28733440
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133373952 unmapped: 24207360 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133373952 unmapped: 24207360 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133373952 unmapped: 24207360 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133373952 unmapped: 24207360 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133373952 unmapped: 24207360 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 234881024 data_used: 28733440
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133373952 unmapped: 24207360 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133373952 unmapped: 24207360 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133373952 unmapped: 24207360 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133373952 unmapped: 24207360 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133373952 unmapped: 24207360 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 234881024 data_used: 28733440
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133373952 unmapped: 24207360 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133373952 unmapped: 24207360 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133382144 unmapped: 24199168 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133382144 unmapped: 24199168 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133382144 unmapped: 24199168 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 234881024 data_used: 28733440
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133382144 unmapped: 24199168 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133382144 unmapped: 24199168 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133382144 unmapped: 24199168 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133382144 unmapped: 24199168 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133390336 unmapped: 24190976 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 234881024 data_used: 28733440
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133390336 unmapped: 24190976 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133398528 unmapped: 24182784 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133398528 unmapped: 24182784 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133398528 unmapped: 24182784 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133398528 unmapped: 24182784 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 234881024 data_used: 28733440
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133398528 unmapped: 24182784 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133398528 unmapped: 24182784 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133398528 unmapped: 24182784 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133398528 unmapped: 24182784 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133398528 unmapped: 24182784 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 234881024 data_used: 28733440
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133398528 unmapped: 24182784 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133398528 unmapped: 24182784 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133398528 unmapped: 24182784 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133398528 unmapped: 24182784 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133398528 unmapped: 24182784 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1609148 data_alloc: 234881024 data_used: 28733440
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133406720 unmapped: 24174592 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133406720 unmapped: 24174592 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133406720 unmapped: 24174592 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133406720 unmapped: 24174592 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133406720 unmapped: 24174592 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1610268 data_alloc: 234881024 data_used: 28827648
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133406720 unmapped: 24174592 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 234.357208252s of 234.392181396s, submitted: 8
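The _kv_sync_thread utilization line above reports idle time over a measurement window rather than a percentage: the sync thread was busy for only ~35 ms out of ~234 s, in line with the 8 transactions it submitted. Quick arithmetic on the logged values:

    idle, total, submitted = 234.357208252, 234.392181396, 8
    busy = total - idle
    print(f"busy {busy*1000:.1f} ms of {total:.1f} s ({busy/total:.4%}); "
          f"{submitted} txns submitted")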
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b62000/0x0/0x4ffc00000, data 0x3a3255f/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133406720 unmapped: 24174592 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133406720 unmapped: 24174592 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b60000/0x0/0x4ffc00000, data 0x3a3455f/0x3b0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133406720 unmapped: 24174592 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133406720 unmapped: 24174592 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1610452 data_alloc: 234881024 data_used: 28827648
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133414912 unmapped: 24166400 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133414912 unmapped: 24166400 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133414912 unmapped: 24166400 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133414912 unmapped: 24166400 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b60000/0x0/0x4ffc00000, data 0x3a3455f/0x3b0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133414912 unmapped: 24166400 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1610452 data_alloc: 234881024 data_used: 28827648
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133414912 unmapped: 24166400 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b60000/0x0/0x4ffc00000, data 0x3a3455f/0x3b0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133414912 unmapped: 24166400 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133414912 unmapped: 24166400 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133414912 unmapped: 24166400 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133414912 unmapped: 24166400 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1610612 data_alloc: 234881024 data_used: 28831744
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133414912 unmapped: 24166400 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b60000/0x0/0x4ffc00000, data 0x3a3455f/0x3b0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.863843918s of 15.872858047s, submitted: 1
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133414912 unmapped: 24166400 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133423104 unmapped: 24158208 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133423104 unmapped: 24158208 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133423104 unmapped: 24158208 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1610856 data_alloc: 234881024 data_used: 28831744
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133423104 unmapped: 24158208 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133423104 unmapped: 24158208 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133423104 unmapped: 24158208 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133423104 unmapped: 24158208 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133423104 unmapped: 24158208 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1610856 data_alloc: 234881024 data_used: 28831744
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133423104 unmapped: 24158208 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133423104 unmapped: 24158208 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133423104 unmapped: 24158208 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133431296 unmapped: 24150016 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133431296 unmapped: 24150016 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1610856 data_alloc: 234881024 data_used: 28831744
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133431296 unmapped: 24150016 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.326219559s of 14.335576057s, submitted: 1
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133455872 unmapped: 24125440 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133455872 unmapped: 24125440 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133464064 unmapped: 24117248 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133464064 unmapped: 24117248 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613304 data_alloc: 234881024 data_used: 28819456
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133464064 unmapped: 24117248 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133464064 unmapped: 24117248 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133464064 unmapped: 24117248 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133464064 unmapped: 24117248 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133464064 unmapped: 24117248 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613304 data_alloc: 234881024 data_used: 28819456
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133464064 unmapped: 24117248 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133464064 unmapped: 24117248 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133464064 unmapped: 24117248 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133464064 unmapped: 24117248 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133464064 unmapped: 24117248 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613304 data_alloc: 234881024 data_used: 28819456
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133464064 unmapped: 24117248 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.338993073s of 15.360601425s, submitted: 14
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133464064 unmapped: 24117248 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133464064 unmapped: 24117248 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133496832 unmapped: 24084480 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133496832 unmapped: 24084480 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 234881024 data_used: 28819456
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133496832 unmapped: 24084480 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133496832 unmapped: 24084480 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133496832 unmapped: 24084480 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133496832 unmapped: 24084480 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133496832 unmapped: 24084480 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 234881024 data_used: 28819456
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133505024 unmapped: 24076288 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133505024 unmapped: 24076288 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133505024 unmapped: 24076288 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133505024 unmapped: 24076288 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133505024 unmapped: 24076288 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 234881024 data_used: 28819456
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133505024 unmapped: 24076288 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133505024 unmapped: 24076288 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133513216 unmapped: 24068096 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133513216 unmapped: 24068096 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133513216 unmapped: 24068096 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 234881024 data_used: 28819456
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133513216 unmapped: 24068096 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133513216 unmapped: 24068096 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133513216 unmapped: 24068096 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133513216 unmapped: 24068096 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133513216 unmapped: 24068096 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 234881024 data_used: 28819456
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133513216 unmapped: 24068096 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133513216 unmapped: 24068096 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133513216 unmapped: 24068096 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133513216 unmapped: 24068096 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133513216 unmapped: 24068096 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 234881024 data_used: 28819456
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133513216 unmapped: 24068096 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133513216 unmapped: 24068096 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133513216 unmapped: 24068096 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133513216 unmapped: 24068096 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133513216 unmapped: 24068096 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 234881024 data_used: 28819456
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133513216 unmapped: 24068096 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133513216 unmapped: 24068096 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133521408 unmapped: 24059904 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133521408 unmapped: 24059904 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133521408 unmapped: 24059904 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 234881024 data_used: 28819456
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133521408 unmapped: 24059904 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133521408 unmapped: 24059904 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133521408 unmapped: 24059904 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133521408 unmapped: 24059904 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133521408 unmapped: 24059904 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 234881024 data_used: 28819456
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133521408 unmapped: 24059904 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133529600 unmapped: 24051712 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133529600 unmapped: 24051712 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133529600 unmapped: 24051712 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133529600 unmapped: 24051712 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 234881024 data_used: 28819456
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133529600 unmapped: 24051712 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133537792 unmapped: 24043520 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133537792 unmapped: 24043520 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133537792 unmapped: 24043520 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133537792 unmapped: 24043520 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 234881024 data_used: 28819456
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133545984 unmapped: 24035328 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133545984 unmapped: 24035328 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133545984 unmapped: 24035328 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133545984 unmapped: 24035328 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133545984 unmapped: 24035328 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 234881024 data_used: 28819456
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133545984 unmapped: 24035328 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133545984 unmapped: 24035328 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133545984 unmapped: 24035328 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133545984 unmapped: 24035328 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133554176 unmapped: 24027136 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 218103808 data_used: 28819456
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133562368 unmapped: 24018944 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133562368 unmapped: 24018944 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133562368 unmapped: 24018944 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133562368 unmapped: 24018944 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133562368 unmapped: 24018944 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 218103808 data_used: 28819456
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133570560 unmapped: 24010752 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133570560 unmapped: 24010752 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133570560 unmapped: 24010752 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133570560 unmapped: 24010752 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133570560 unmapped: 24010752 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 218103808 data_used: 28819456
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133570560 unmapped: 24010752 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133570560 unmapped: 24010752 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133570560 unmapped: 24010752 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133570560 unmapped: 24010752 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133570560 unmapped: 24010752 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 218103808 data_used: 28819456
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133570560 unmapped: 24010752 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133570560 unmapped: 24010752 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133570560 unmapped: 24010752 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133570560 unmapped: 24010752 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133570560 unmapped: 24010752 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 218103808 data_used: 28819456
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133578752 unmapped: 24002560 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133578752 unmapped: 24002560 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133578752 unmapped: 24002560 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133578752 unmapped: 24002560 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133578752 unmapped: 24002560 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 218103808 data_used: 28819456
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133578752 unmapped: 24002560 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133578752 unmapped: 24002560 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133578752 unmapped: 24002560 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133578752 unmapped: 24002560 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133578752 unmapped: 24002560 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 218103808 data_used: 28819456
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133586944 unmapped: 23994368 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133586944 unmapped: 23994368 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133586944 unmapped: 23994368 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133586944 unmapped: 23994368 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133586944 unmapped: 23994368 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 218103808 data_used: 28819456
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133586944 unmapped: 23994368 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133586944 unmapped: 23994368 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133595136 unmapped: 23986176 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
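Every heartbeat embeds a store_statfs snapshot as hex fields; going by the magnitudes, the first triple reads as available, internally reserved, and total bytes, followed by data used and allocated. Decoded, this OSD is about 20 GiB with roughly 19.9 GiB still free and only about 58 MiB of object data on it. A quick decode of the values from this line (the field names are an assumption based on the ordering, not confirmed from source):

    # Decode the store_statfs fields from the heartbeat line (hex strings).
    # Field order assumed from the magnitudes: available / reserved / total.
    avail, reserved, total = 0x4f7b5f000, 0x0, 0x4ffc00000
    data_used, data_alloc = 0x3a3555f, 0x3b0f000

    GiB, MiB = 2**30, 2**20
    print(f"capacity {total/GiB:.1f} GiB, available {avail/GiB:.1f} GiB")
    print(f"object data {data_used/MiB:.1f} MiB used in "
          f"{data_alloc/MiB:.1f} MiB allocated")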
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133595136 unmapped: 23986176 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133595136 unmapped: 23986176 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 218103808 data_used: 28819456
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133595136 unmapped: 23986176 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133595136 unmapped: 23986176 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133595136 unmapped: 23986176 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133595136 unmapped: 23986176 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133595136 unmapped: 23986176 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 218103808 data_used: 28819456
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133595136 unmapped: 23986176 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133595136 unmapped: 23986176 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133595136 unmapped: 23986176 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133595136 unmapped: 23986176 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133595136 unmapped: 23986176 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 218103808 data_used: 28819456
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133603328 unmapped: 23977984 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133603328 unmapped: 23977984 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133603328 unmapped: 23977984 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133603328 unmapped: 23977984 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133603328 unmapped: 23977984 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 218103808 data_used: 28819456
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133603328 unmapped: 23977984 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133603328 unmapped: 23977984 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133603328 unmapped: 23977984 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133603328 unmapped: 23977984 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133603328 unmapped: 23977984 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 218103808 data_used: 28819456
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133603328 unmapped: 23977984 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133603328 unmapped: 23977984 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133611520 unmapped: 23969792 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133611520 unmapped: 23969792 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133611520 unmapped: 23969792 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 218103808 data_used: 28819456
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133611520 unmapped: 23969792 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133619712 unmapped: 23961600 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133619712 unmapped: 23961600 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133619712 unmapped: 23961600 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133627904 unmapped: 23953408 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 218103808 data_used: 28819456
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133627904 unmapped: 23953408 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133627904 unmapped: 23953408 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133627904 unmapped: 23953408 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133627904 unmapped: 23953408 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133627904 unmapped: 23953408 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 218103808 data_used: 28819456
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133627904 unmapped: 23953408 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133636096 unmapped: 23945216 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133636096 unmapped: 23945216 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133636096 unmapped: 23945216 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133636096 unmapped: 23945216 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 218103808 data_used: 28819456
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133636096 unmapped: 23945216 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133636096 unmapped: 23945216 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133636096 unmapped: 23945216 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133636096 unmapped: 23945216 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133636096 unmapped: 23945216 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 218103808 data_used: 28819456
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133636096 unmapped: 23945216 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133636096 unmapped: 23945216 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133636096 unmapped: 23945216 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133636096 unmapped: 23945216 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133636096 unmapped: 23945216 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 218103808 data_used: 28819456
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133636096 unmapped: 23945216 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133636096 unmapped: 23945216 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133644288 unmapped: 23937024 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133644288 unmapped: 23937024 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133644288 unmapped: 23937024 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 218103808 data_used: 28819456
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133644288 unmapped: 23937024 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133652480 unmapped: 23928832 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133652480 unmapped: 23928832 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133652480 unmapped: 23928832 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133652480 unmapped: 23928832 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 218103808 data_used: 28819456
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133652480 unmapped: 23928832 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133652480 unmapped: 23928832 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133652480 unmapped: 23928832 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133652480 unmapped: 23928832 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133652480 unmapped: 23928832 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 218103808 data_used: 28819456
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133660672 unmapped: 23920640 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133660672 unmapped: 23920640 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133660672 unmapped: 23920640 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133660672 unmapped: 23920640 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133660672 unmapped: 23920640 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 218103808 data_used: 28819456
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133660672 unmapped: 23920640 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133660672 unmapped: 23920640 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133660672 unmapped: 23920640 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133660672 unmapped: 23920640 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133660672 unmapped: 23920640 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 218103808 data_used: 28819456
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133660672 unmapped: 23920640 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133660672 unmapped: 23920640 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133668864 unmapped: 23912448 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133668864 unmapped: 23912448 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133668864 unmapped: 23912448 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 218103808 data_used: 28819456
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133668864 unmapped: 23912448 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133668864 unmapped: 23912448 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133668864 unmapped: 23912448 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133668864 unmapped: 23912448 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133668864 unmapped: 23912448 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 218103808 data_used: 28819456
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133668864 unmapped: 23912448 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133677056 unmapped: 23904256 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133677056 unmapped: 23904256 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133677056 unmapped: 23904256 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133677056 unmapped: 23904256 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 218103808 data_used: 28819456
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133677056 unmapped: 23904256 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133677056 unmapped: 23904256 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133677056 unmapped: 23904256 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133677056 unmapped: 23904256 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133677056 unmapped: 23904256 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1613480 data_alloc: 218103808 data_used: 28819456
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133677056 unmapped: 23904256 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133677056 unmapped: 23904256 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 202.922653198s of 202.928924561s, submitted: 1
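This utilization line quantifies how quiet the store is: the kv sync thread was idle for 202.922653198 s of a 202.928924561 s window, with a single transaction submitted. Computed directly from the logged values, that works out to about 99.997% idle:

    idle, total_s = 202.922653198, 202.928924561
    print(f"kv_sync_thread idle {idle/total_s:.5%}, "
          f"busy {(total_s - idle)*1000:.1f} ms over the whole window")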
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d7e9800 session 0x55ab9e945e00
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9cdaa000 session 0x55ab9d7a81e0
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 133677056 unmapped: 23904256 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7b5f000/0x0/0x4ffc00000, data 0x3a3555f/0x3b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [1])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130490368 unmapped: 27090944 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d7eb400 session 0x55ab9d7c14a0
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130490368 unmapped: 27090944 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1429274 data_alloc: 218103808 data_used: 20819968
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130490368 unmapped: 27090944 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130490368 unmapped: 27090944 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130490368 unmapped: 27090944 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8bb9000/0x0/0x4ffc00000, data 0x29dc52c/0x2ab4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130490368 unmapped: 27090944 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130490368 unmapped: 27090944 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8bb9000/0x0/0x4ffc00000, data 0x29dc52c/0x2ab4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1429274 data_alloc: 218103808 data_used: 20819968
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130490368 unmapped: 27090944 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130490368 unmapped: 27090944 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130490368 unmapped: 27090944 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130490368 unmapped: 27090944 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130490368 unmapped: 27090944 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8bb9000/0x0/0x4ffc00000, data 0x29dc52c/0x2ab4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1429274 data_alloc: 218103808 data_used: 20819968
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130490368 unmapped: 27090944 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8bb9000/0x0/0x4ffc00000, data 0x29dc52c/0x2ab4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130490368 unmapped: 27090944 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130490368 unmapped: 27090944 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130490368 unmapped: 27090944 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130490368 unmapped: 27090944 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1429274 data_alloc: 218103808 data_used: 20819968
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9b14d400 session 0x55ab9d446b40
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9ce08c00 session 0x55ab9d7c0780
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 130490368 unmapped: 27090944 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.876663208s of 19.201396942s, submitted: 51
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8bb9000/0x0/0x4ffc00000, data 0x29dc52c/0x2ab4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 33726464 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 ms_handle_reset con 0x55ab9d7eb000 session 0x55ab9e944000
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 4200.2 total, 600.0 interval
Cumulative writes: 10K writes, 39K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
Cumulative WAL: 10K writes, 2709 syncs, 3.69 writes per sync, written: 0.03 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 551 writes, 1719 keys, 551 commit groups, 1.0 writes per commit group, ingest: 1.95 MB, 0.00 MB/s
Interval WAL: 551 writes, 236 syncs, 2.33 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 33726464 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f9be5000/0x0/0x4ffc00000, data 0x19b152c/0x1a89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 33726464 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f9be5000/0x0/0x4ffc00000, data 0x19b152c/0x1a89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 33726464 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f9be5000/0x0/0x4ffc00000, data 0x19b152c/0x1a89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1252524 data_alloc: 218103808 data_used: 12570624
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 33726464 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f9be5000/0x0/0x4ffc00000, data 0x19b152c/0x1a89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 33726464 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 33726464 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 33726464 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 33726464 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1252524 data_alloc: 218103808 data_used: 12570624
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 33726464 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f9be5000/0x0/0x4ffc00000, data 0x19b152c/0x1a89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 33726464 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 33726464 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 33726464 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 33726464 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.773575783s of 13.811752319s, submitted: 9
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f9be5000/0x0/0x4ffc00000, data 0x19b152c/0x1a89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1251898 data_alloc: 218103808 data_used: 12566528
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 33726464 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 140 handle_osd_map epochs [140,141], i have 140, src has [1,141]
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 141 ms_handle_reset con 0x55ab9db14000 session 0x55ab9c16c3c0
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118235136 unmapped: 39346176 heap: 157581312 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118243328 unmapped: 47734784 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 142 ms_handle_reset con 0x55ab9d1a8800 session 0x55ab9d7a92c0
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118267904 unmapped: 47710208 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9bdf000/0x0/0x4ffc00000, data 0x19b4c73/0x1a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118267904 unmapped: 47710208 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1241395 data_alloc: 218103808 data_used: 4730880
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 143 ms_handle_reset con 0x55ab9d7f3000 session 0x55ab9e944b40
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118300672 unmapped: 47677440 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118300672 unmapped: 47677440 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 143 handle_osd_map epochs [143,144], i have 143, src has [1,144]
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118317056 unmapped: 47661056 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118317056 unmapped: 47661056 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118317056 unmapped: 47661056 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9fc9000/0x0/0x4ffc00000, data 0x11b82b0/0x1293000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1190873 data_alloc: 218103808 data_used: 4730880
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118317056 unmapped: 47661056 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118317056 unmapped: 47661056 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118317056 unmapped: 47661056 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118317056 unmapped: 47661056 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.670616150s of 14.250964165s, submitted: 98
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119373824 unmapped: 46604288 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193175 data_alloc: 218103808 data_used: 4730880
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc7000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119373824 unmapped: 46604288 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119373824 unmapped: 46604288 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119373824 unmapped: 46604288 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc7000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119373824 unmapped: 46604288 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119373824 unmapped: 46604288 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193175 data_alloc: 218103808 data_used: 4730880
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc7000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119373824 unmapped: 46604288 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119373824 unmapped: 46604288 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119373824 unmapped: 46604288 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119373824 unmapped: 46604288 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119373824 unmapped: 46604288 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc7000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc7000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193175 data_alloc: 218103808 data_used: 4730880
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119373824 unmapped: 46604288 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119373824 unmapped: 46604288 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc7000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193335 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc7000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193335 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc7000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193335 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc7000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193335 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc7000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc7000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193335 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc7000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193335 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc7000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193335 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc7000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193335 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc7000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193335 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc7000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc7000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193335 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc7000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193335 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 66.491020203s of 66.518074036s, submitted: 14
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 46587904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118423552 unmapped: 47554560 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118480896 unmapped: 47497216 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118497280 unmapped: 47480832 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118497280 unmapped: 47480832 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118497280 unmapped: 47480832 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118497280 unmapped: 47480832 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118497280 unmapped: 47480832 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118497280 unmapped: 47480832 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118497280 unmapped: 47480832 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118497280 unmapped: 47480832 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118497280 unmapped: 47480832 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118497280 unmapped: 47480832 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118497280 unmapped: 47480832 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118497280 unmapped: 47480832 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118497280 unmapped: 47480832 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118497280 unmapped: 47480832 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118497280 unmapped: 47480832 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118497280 unmapped: 47480832 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118497280 unmapped: 47480832 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118611968 unmapped: 47366144 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: do_command 'config diff' '{prefix=config diff}'
Dec  3 19:27:57 compute-0 ceph-osd[208881]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec  3 19:27:57 compute-0 ceph-osd[208881]: do_command 'config show' '{prefix=config show}'
Dec  3 19:27:57 compute-0 ceph-osd[208881]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec  3 19:27:57 compute-0 ceph-osd[208881]: do_command 'counter dump' '{prefix=counter dump}'
Dec  3 19:27:57 compute-0 ceph-osd[208881]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: do_command 'counter schema' '{prefix=counter schema}'
Dec  3 19:27:57 compute-0 ceph-osd[208881]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118489088 unmapped: 47489024 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118546432 unmapped: 47431680 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: do_command 'log dump' '{prefix=log dump}'
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 129605632 unmapped: 36372480 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: do_command 'perf dump' '{prefix=perf dump}'
Dec  3 19:27:57 compute-0 ceph-osd[208881]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Dec  3 19:27:57 compute-0 ceph-osd[208881]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Dec  3 19:27:57 compute-0 ceph-osd[208881]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Dec  3 19:27:57 compute-0 ceph-osd[208881]: do_command 'perf schema' '{prefix=perf schema}'
Dec  3 19:27:57 compute-0 ceph-osd[208881]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118243328 unmapped: 47734784 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118366208 unmapped: 47611904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118366208 unmapped: 47611904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118366208 unmapped: 47611904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118366208 unmapped: 47611904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118366208 unmapped: 47611904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118366208 unmapped: 47611904 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
[duplicate lines removed: the five distinct ceph-osd messages above repeat verbatim, interleaved, throughout this 19:27:57 burst; all values are unchanged until the tune_memory counters move in the two lines below]
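Across this burst the only fields that change at all are the tune_memory mapped/unmapped counters (mapped 118366208 -> 118374400 -> 118382592 in the two lines below), while target, heap, and old/new mem stay pinned. A minimal Python sketch for pulling those counters out of lines in the format shown above so the drift can be tracked; the regex and field names are assumptions copied from this log text, not a Ceph API:

import re
import sys

# Assumed message shape, taken from the tune_memory lines in this log.
TUNE = re.compile(
    r'tune_memory target: (?P<target>\d+) mapped: (?P<mapped>\d+)'
    r' unmapped: (?P<unmapped>\d+) heap: (?P<heap>\d+)'
    r' old mem: (?P<old_mem>\d+) new mem: (?P<new_mem>\d+)')

def tune_samples(lines):
    """Yield one dict of integer counters per tune_memory line."""
    for line in lines:
        m = TUNE.search(line)
        if m:
            yield {k: int(v) for k, v in m.groupdict().items()}

if __name__ == '__main__':
    # e.g.: grep 'tune_memory' messages.log | python3 tune_trend.py
    for s in tune_samples(sys.stdin):
        print(s['mapped'], s['unmapped'], s['heap'])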
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118374400 unmapped: 47603712 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
[duplicate lines removed: the heartbeat, rocksdb commit_cache_size, and bluestore _resize_shards messages keep repeating verbatim alongside the tune_memory line above for the remainder of this second]
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: mgrc ms_handle_reset ms_handle_reset con 0x55ab9cd46c00
Dec  3 19:27:57 compute-0 ceph-osd[208881]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/817799961
Dec  3 19:27:57 compute-0 ceph-osd[208881]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/817799961,v1:192.168.122.100:6801/817799961]
Dec  3 19:27:57 compute-0 ceph-osd[208881]: mgrc handle_mgr_configure stats_period=5
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 ms_handle_reset con 0x55ab9d7e7c00 session 0x55ab9c06fa40
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 47595520 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117620736 unmapped: 48357376 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117620736 unmapped: 48357376 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117620736 unmapped: 48357376 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117620736 unmapped: 48357376 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117620736 unmapped: 48357376 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117620736 unmapped: 48357376 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117620736 unmapped: 48357376 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117620736 unmapped: 48357376 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117620736 unmapped: 48357376 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117620736 unmapped: 48357376 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117620736 unmapped: 48357376 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117620736 unmapped: 48357376 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117620736 unmapped: 48357376 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117620736 unmapped: 48357376 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117620736 unmapped: 48357376 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117620736 unmapped: 48357376 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117620736 unmapped: 48357376 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117620736 unmapped: 48357376 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117620736 unmapped: 48357376 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117620736 unmapped: 48357376 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117620736 unmapped: 48357376 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117620736 unmapped: 48357376 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117620736 unmapped: 48357376 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117620736 unmapped: 48357376 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117620736 unmapped: 48357376 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117620736 unmapped: 48357376 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117620736 unmapped: 48357376 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117620736 unmapped: 48357376 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117620736 unmapped: 48357376 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117620736 unmapped: 48357376 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117620736 unmapped: 48357376 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117620736 unmapped: 48357376 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117620736 unmapped: 48357376 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117620736 unmapped: 48357376 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117628928 unmapped: 48349184 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117628928 unmapped: 48349184 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117628928 unmapped: 48349184 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117628928 unmapped: 48349184 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117628928 unmapped: 48349184 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117628928 unmapped: 48349184 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117628928 unmapped: 48349184 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117628928 unmapped: 48349184 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117628928 unmapped: 48349184 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117628928 unmapped: 48349184 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117628928 unmapped: 48349184 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117628928 unmapped: 48349184 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117628928 unmapped: 48349184 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117628928 unmapped: 48349184 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117628928 unmapped: 48349184 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117637120 unmapped: 48340992 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117637120 unmapped: 48340992 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117637120 unmapped: 48340992 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117637120 unmapped: 48340992 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117637120 unmapped: 48340992 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117637120 unmapped: 48340992 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117637120 unmapped: 48340992 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117637120 unmapped: 48340992 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117637120 unmapped: 48340992 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117637120 unmapped: 48340992 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117637120 unmapped: 48340992 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117637120 unmapped: 48340992 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117637120 unmapped: 48340992 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117637120 unmapped: 48340992 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117637120 unmapped: 48340992 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117637120 unmapped: 48340992 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117645312 unmapped: 48332800 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117645312 unmapped: 48332800 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117645312 unmapped: 48332800 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117645312 unmapped: 48332800 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117645312 unmapped: 48332800 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117645312 unmapped: 48332800 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117645312 unmapped: 48332800 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117645312 unmapped: 48332800 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117645312 unmapped: 48332800 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117645312 unmapped: 48332800 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117645312 unmapped: 48332800 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117645312 unmapped: 48332800 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117645312 unmapped: 48332800 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117645312 unmapped: 48332800 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117645312 unmapped: 48332800 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117645312 unmapped: 48332800 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 48324608 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 48324608 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 48324608 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 48324608 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 48324608 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 48324608 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 48324608 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 48324608 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 48324608 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 48324608 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 48324608 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 48324608 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 48324608 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 48324608 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 48324608 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 48324608 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 48324608 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 48324608 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 48316416 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 48316416 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 48316416 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 48316416 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 48316416 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 48316416 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 48316416 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 48316416 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 48316416 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 48316416 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 48316416 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 48316416 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 48316416 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 48316416 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 48316416 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 48316416 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 48316416 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 48316416 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 48316416 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 48316416 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 48316416 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 48308224 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 48308224 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 48308224 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 48308224 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 48308224 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 48308224 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 48308224 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 48308224 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 48308224 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 48308224 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 48308224 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 48308224 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 48308224 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 48308224 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 48308224 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 48308224 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 48308224 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 4800.2 total, 600.0 interval
Cumulative writes: 10K writes, 39K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
Cumulative WAL: 10K writes, 2882 syncs, 3.60 writes per sync, written: 0.03 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 375 writes, 874 keys, 375 commit groups, 1.0 writes per commit group, ingest: 0.36 MB, 0.00 MB/s
Interval WAL: 375 writes, 173 syncs, 2.17 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 48300032 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 48300032 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 48300032 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 48300032 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 48300032 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 48300032 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 48300032 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 48300032 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 48300032 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 48300032 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 48300032 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 48300032 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 48300032 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 48300032 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 48300032 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 48300032 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117686272 unmapped: 48291840 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117686272 unmapped: 48291840 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117686272 unmapped: 48291840 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117686272 unmapped: 48291840 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117686272 unmapped: 48291840 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117686272 unmapped: 48291840 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117686272 unmapped: 48291840 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117686272 unmapped: 48291840 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117686272 unmapped: 48291840 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117686272 unmapped: 48291840 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117686272 unmapped: 48291840 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117686272 unmapped: 48291840 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117686272 unmapped: 48291840 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117686272 unmapped: 48291840 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117686272 unmapped: 48291840 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117686272 unmapped: 48291840 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117694464 unmapped: 48283648 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117694464 unmapped: 48283648 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117694464 unmapped: 48283648 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117694464 unmapped: 48283648 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117694464 unmapped: 48283648 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117694464 unmapped: 48283648 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117694464 unmapped: 48283648 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117694464 unmapped: 48283648 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117702656 unmapped: 48275456 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117702656 unmapped: 48275456 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117702656 unmapped: 48275456 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117702656 unmapped: 48275456 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117702656 unmapped: 48275456 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117702656 unmapped: 48275456 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117702656 unmapped: 48275456 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117702656 unmapped: 48275456 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117710848 unmapped: 48267264 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117710848 unmapped: 48267264 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117710848 unmapped: 48267264 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117710848 unmapped: 48267264 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117710848 unmapped: 48267264 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117710848 unmapped: 48267264 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117710848 unmapped: 48267264 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117710848 unmapped: 48267264 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117710848 unmapped: 48267264 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117710848 unmapped: 48267264 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117710848 unmapped: 48267264 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117710848 unmapped: 48267264 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117710848 unmapped: 48267264 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117710848 unmapped: 48267264 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117710848 unmapped: 48267264 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117710848 unmapped: 48267264 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117719040 unmapped: 48259072 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117719040 unmapped: 48259072 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117719040 unmapped: 48259072 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117719040 unmapped: 48259072 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117719040 unmapped: 48259072 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117719040 unmapped: 48259072 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117719040 unmapped: 48259072 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 48250880 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 48250880 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 48250880 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 48250880 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 48250880 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 48250880 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 48250880 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 48250880 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117735424 unmapped: 48242688 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117735424 unmapped: 48242688 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117735424 unmapped: 48242688 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117735424 unmapped: 48242688 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117735424 unmapped: 48242688 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117735424 unmapped: 48242688 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117735424 unmapped: 48242688 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117735424 unmapped: 48242688 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117735424 unmapped: 48242688 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117735424 unmapped: 48242688 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117735424 unmapped: 48242688 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117735424 unmapped: 48242688 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117735424 unmapped: 48242688 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117735424 unmapped: 48242688 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117735424 unmapped: 48242688 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 599.620849609s of 600.344360352s, submitted: 90
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117743616 unmapped: 48234496 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117776384 unmapped: 48201728 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117833728 unmapped: 48144384 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117841920 unmapped: 48136192 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117891072 unmapped: 48087040 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117891072 unmapped: 48087040 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117891072 unmapped: 48087040 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117891072 unmapped: 48087040 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117891072 unmapped: 48087040 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117891072 unmapped: 48087040 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117891072 unmapped: 48087040 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117891072 unmapped: 48087040 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117891072 unmapped: 48087040 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117891072 unmapped: 48087040 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117891072 unmapped: 48087040 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117891072 unmapped: 48087040 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117891072 unmapped: 48087040 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117891072 unmapped: 48087040 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117891072 unmapped: 48087040 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117891072 unmapped: 48087040 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117891072 unmapped: 48087040 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117891072 unmapped: 48087040 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117891072 unmapped: 48087040 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117899264 unmapped: 48078848 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117899264 unmapped: 48078848 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117899264 unmapped: 48078848 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117899264 unmapped: 48078848 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117899264 unmapped: 48078848 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117899264 unmapped: 48078848 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117899264 unmapped: 48078848 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117899264 unmapped: 48078848 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117899264 unmapped: 48078848 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117899264 unmapped: 48078848 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117899264 unmapped: 48078848 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117899264 unmapped: 48078848 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117899264 unmapped: 48078848 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117899264 unmapped: 48078848 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117899264 unmapped: 48078848 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117899264 unmapped: 48078848 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117899264 unmapped: 48078848 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117899264 unmapped: 48078848 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117899264 unmapped: 48078848 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117899264 unmapped: 48078848 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117899264 unmapped: 48078848 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117899264 unmapped: 48078848 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117899264 unmapped: 48078848 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117899264 unmapped: 48078848 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117899264 unmapped: 48078848 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117899264 unmapped: 48078848 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117907456 unmapped: 48070656 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117907456 unmapped: 48070656 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117907456 unmapped: 48070656 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117907456 unmapped: 48070656 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117907456 unmapped: 48070656 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117907456 unmapped: 48070656 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117907456 unmapped: 48070656 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117907456 unmapped: 48070656 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117907456 unmapped: 48070656 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117907456 unmapped: 48070656 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117907456 unmapped: 48070656 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117907456 unmapped: 48070656 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117907456 unmapped: 48070656 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117907456 unmapped: 48070656 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117907456 unmapped: 48070656 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117907456 unmapped: 48070656 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117915648 unmapped: 48062464 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117915648 unmapped: 48062464 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117915648 unmapped: 48062464 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117915648 unmapped: 48062464 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117915648 unmapped: 48062464 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117915648 unmapped: 48062464 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117915648 unmapped: 48062464 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117915648 unmapped: 48062464 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117915648 unmapped: 48062464 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117915648 unmapped: 48062464 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117915648 unmapped: 48062464 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117915648 unmapped: 48062464 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117915648 unmapped: 48062464 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117915648 unmapped: 48062464 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117915648 unmapped: 48062464 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117915648 unmapped: 48062464 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117923840 unmapped: 48054272 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117923840 unmapped: 48054272 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117923840 unmapped: 48054272 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117923840 unmapped: 48054272 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117923840 unmapped: 48054272 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117923840 unmapped: 48054272 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117923840 unmapped: 48054272 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117923840 unmapped: 48054272 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117923840 unmapped: 48054272 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117923840 unmapped: 48054272 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117923840 unmapped: 48054272 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117923840 unmapped: 48054272 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117923840 unmapped: 48054272 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117923840 unmapped: 48054272 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117923840 unmapped: 48054272 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117923840 unmapped: 48054272 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117923840 unmapped: 48054272 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117923840 unmapped: 48054272 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117923840 unmapped: 48054272 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117923840 unmapped: 48054272 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117923840 unmapped: 48054272 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117923840 unmapped: 48054272 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:27:57 compute-0 ceph-osd[208881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:27:57 compute-0 ceph-osd[208881]: bluestore.MempoolThread(0x55ab99a23b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192455 data_alloc: 218103808 data_used: 4734976
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 117932032 unmapped: 48046080 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: do_command 'config diff' '{prefix=config diff}'
Dec  3 19:27:57 compute-0 ceph-osd[208881]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec  3 19:27:57 compute-0 ceph-osd[208881]: do_command 'config show' '{prefix=config show}'
Dec  3 19:27:57 compute-0 ceph-osd[208881]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118120448 unmapped: 47857664 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: do_command 'counter dump' '{prefix=counter dump}'
Dec  3 19:27:57 compute-0 ceph-osd[208881]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec  3 19:27:57 compute-0 ceph-osd[208881]: do_command 'counter schema' '{prefix=counter schema}'
Dec  3 19:27:57 compute-0 ceph-osd[208881]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118194176 unmapped: 47783936 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: prioritycache tune_memory target: 4294967296 mapped: 118054912 unmapped: 47923200 heap: 165978112 old mem: 2845415832 new mem: 2845415832
Dec  3 19:27:57 compute-0 ceph-osd[208881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9fc8000/0x0/0x4ffc00000, data 0x11b9d13/0x1296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  3 19:27:57 compute-0 ceph-osd[208881]: do_command 'log dump' '{prefix=log dump}'
Dec  3 19:27:57 compute-0 ceph-mgr[193091]: log_channel(audit) log [DBG] : from='client.15947 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec  3 19:27:57 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Dec  3 19:27:57 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1570270284' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec  3 19:27:57 compute-0 rsyslogd[188590]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  3 19:27:57 compute-0 ceph-mon[192802]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 19:27:57 compute-0 ceph-mgr[193091]: log_channel(audit) log [DBG] : from='client.15951 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 19:27:58 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Dec  3 19:27:58 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/445425917' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec  3 19:27:58 compute-0 ceph-mgr[193091]: log_channel(audit) log [DBG] : from='client.15955 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec  3 19:27:58 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Dec  3 19:27:58 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3770423855' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec  3 19:27:58 compute-0 ceph-mgr[193091]: log_channel(audit) log [DBG] : from='client.15959 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 19:27:58 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2677: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:27:59 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Dec  3 19:27:59 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2197086972' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Dec  3 19:27:59 compute-0 ceph-mgr[193091]: log_channel(audit) log [DBG] : from='client.15963 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  3 19:27:59 compute-0 nova_compute[348325]: 2025-12-03 19:27:59.193 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:27:59 compute-0 nova_compute[348325]: 2025-12-03 19:27:59.438 348329 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 19:27:59 compute-0 ceph-mgr[193091]: log_channel(audit) log [DBG] : from='client.15967 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  3 19:27:59 compute-0 podman[158200]: time="2025-12-03T19:27:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 19:27:59 compute-0 podman[158200]: @ - - [03/Dec/2025:19:27:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42578 "" "Go-http-client/1.1"
Dec  3 19:27:59 compute-0 podman[158200]: @ - - [03/Dec/2025:19:27:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8201 "" "Go-http-client/1.1"
Dec  3 19:28:00 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0) v1
Dec  3 19:28:00 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1275958104' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Dec  3 19:28:00 compute-0 ceph-mgr[193091]: log_channel(audit) log [DBG] : from='client.15973 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  3 19:28:00 compute-0 ceph-mgr[193091]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec  3 19:28:00 compute-0 ceph-c1caf3ba-b2a5-5005-a11e-e955c344dccc-mgr-compute-0-etccde[193087]: 2025-12-03T19:28:00.449+0000 7ff6bdbb5640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec  3 19:28:00 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Dec  3 19:28:00 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1202109695' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Dec  3 19:28:00 compute-0 ceph-mgr[193091]: log_channel(cluster) log [DBG] : pgmap v2678: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  3 19:28:00 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Dec  3 19:28:00 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1183249886' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Dec  3 19:28:01 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Dec  3 19:28:01 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3512343537' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Dec  3 19:28:01 compute-0 openstack_network_exporter[365222]: ERROR   19:28:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:28:01 compute-0 openstack_network_exporter[365222]: ERROR   19:28:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 19:28:01 compute-0 openstack_network_exporter[365222]: ERROR   19:28:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 19:28:01 compute-0 openstack_network_exporter[365222]: ERROR   19:28:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 19:28:01 compute-0 openstack_network_exporter[365222]: ERROR   19:28:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 19:28:01 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Dec  3 19:28:01 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1180235897' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Dec  3 19:28:01 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Dec  3 19:28:01 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3157182787' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Dec  3 19:28:01 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Dec  3 19:28:01 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4234405937' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Dec  3 19:28:01 compute-0 ceph-mon[192802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Dec  3 19:28:01 compute-0 ceph-mon[192802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1980890627' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029494 data_alloc: 218103808 data_used: 7090176
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c24c4/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c24c4/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029494 data_alloc: 218103808 data_used: 7090176
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c24c4/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c24c4/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029494 data_alloc: 218103808 data_used: 7090176
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c24c4/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029494 data_alloc: 218103808 data_used: 7090176
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c24c4/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c24c4/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029494 data_alloc: 218103808 data_used: 7090176
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c24c4/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c24c4/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029494 data_alloc: 218103808 data_used: 7090176
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c24c4/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029494 data_alloc: 218103808 data_used: 7090176
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c24c4/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029494 data_alloc: 218103808 data_used: 7090176
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c24c4/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c24c4/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029494 data_alloc: 218103808 data_used: 7090176
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c24c4/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029494 data_alloc: 218103808 data_used: 7090176
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c24c4/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c24c4/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 44212224 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 62.564529419s of 62.779438019s, submitted: 52
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029494 data_alloc: 218103808 data_used: 7090176
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103759872 unmapped: 43147264 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103759872 unmapped: 43147264 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c24c4/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c24c4/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,1])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103743488 unmapped: 43163648 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 43098112 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103866368 unmapped: 43040768 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029494 data_alloc: 218103808 data_used: 7090176
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103866368 unmapped: 43040768 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103866368 unmapped: 43040768 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103866368 unmapped: 43040768 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c24c4/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c24c4/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103866368 unmapped: 43040768 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103866368 unmapped: 43040768 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029494 data_alloc: 218103808 data_used: 7090176
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103866368 unmapped: 43040768 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103866368 unmapped: 43040768 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103866368 unmapped: 43040768 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103866368 unmapped: 43040768 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c24c4/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c24c4/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103866368 unmapped: 43040768 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029494 data_alloc: 218103808 data_used: 7090176
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103866368 unmapped: 43040768 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103866368 unmapped: 43040768 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103866368 unmapped: 43040768 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c24c4/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103866368 unmapped: 43040768 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103866368 unmapped: 43040768 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c24c4/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029494 data_alloc: 218103808 data_used: 7090176
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103866368 unmapped: 43040768 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103866368 unmapped: 43040768 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103866368 unmapped: 43040768 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103866368 unmapped: 43040768 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 23.952238083s of 24.430356979s, submitted: 90
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 112263168 unmapped: 34643968 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086156 data_alloc: 218103808 data_used: 7090176
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103890944 unmapped: 43016192 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa4e3000/0x0/0x4ffc00000, data 0x10c24d4/0x118b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 135 ms_handle_reset con 0x5562f72c1400 session 0x5562f88d6960
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103899136 unmapped: 43008000 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103915520 unmapped: 42991616 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562f7fd1800 session 0x5562f9b0be00
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103915520 unmapped: 42991616 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103931904 unmapped: 42975232 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1148648 data_alloc: 218103808 data_used: 7098368
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f9cdb000/0x0/0x4ffc00000, data 0x18c5bce/0x1991000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103931904 unmapped: 42975232 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103931904 unmapped: 42975232 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103931904 unmapped: 42975232 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f9cdb000/0x0/0x4ffc00000, data 0x18c5bce/0x1991000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103931904 unmapped: 42975232 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103931904 unmapped: 42975232 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1148648 data_alloc: 218103808 data_used: 7098368
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103931904 unmapped: 42975232 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103931904 unmapped: 42975232 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103931904 unmapped: 42975232 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103931904 unmapped: 42975232 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f9cdb000/0x0/0x4ffc00000, data 0x18c5bce/0x1991000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103931904 unmapped: 42975232 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1148648 data_alloc: 218103808 data_used: 7098368
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103931904 unmapped: 42975232 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103931904 unmapped: 42975232 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103931904 unmapped: 42975232 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103931904 unmapped: 42975232 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f9cdb000/0x0/0x4ffc00000, data 0x18c5bce/0x1991000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103931904 unmapped: 42975232 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1148648 data_alloc: 218103808 data_used: 7098368
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103931904 unmapped: 42975232 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f9cdb000/0x0/0x4ffc00000, data 0x18c5bce/0x1991000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103931904 unmapped: 42975232 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103931904 unmapped: 42975232 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103931904 unmapped: 42975232 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103931904 unmapped: 42975232 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f9cdb000/0x0/0x4ffc00000, data 0x18c5bce/0x1991000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1148648 data_alloc: 218103808 data_used: 7098368
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103931904 unmapped: 42975232 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103931904 unmapped: 42975232 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103931904 unmapped: 42975232 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f9cdb000/0x0/0x4ffc00000, data 0x18c5bce/0x1991000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103931904 unmapped: 42975232 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103931904 unmapped: 42975232 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1148648 data_alloc: 218103808 data_used: 7098368
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f9cdb000/0x0/0x4ffc00000, data 0x18c5bce/0x1991000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103931904 unmapped: 42975232 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103931904 unmapped: 42975232 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103931904 unmapped: 42975232 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f9cdb000/0x0/0x4ffc00000, data 0x18c5bce/0x1991000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103931904 unmapped: 42975232 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103931904 unmapped: 42975232 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f9cdb000/0x0/0x4ffc00000, data 0x18c5bce/0x1991000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1148648 data_alloc: 218103808 data_used: 7098368
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103931904 unmapped: 42975232 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103931904 unmapped: 42975232 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103931904 unmapped: 42975232 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103931904 unmapped: 42975232 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f9cdb000/0x0/0x4ffc00000, data 0x18c5bce/0x1991000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103931904 unmapped: 42975232 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1148648 data_alloc: 218103808 data_used: 7098368
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103931904 unmapped: 42975232 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103931904 unmapped: 42975232 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f9cdb000/0x0/0x4ffc00000, data 0x18c5bce/0x1991000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103931904 unmapped: 42975232 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103931904 unmapped: 42975232 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f9cdb000/0x0/0x4ffc00000, data 0x18c5bce/0x1991000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103931904 unmapped: 42975232 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1148648 data_alloc: 218103808 data_used: 7098368
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103931904 unmapped: 42975232 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562f72c0800 session 0x5562f9fb05a0
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562f72c1400 session 0x5562f9fb0d20
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562f8c77800 session 0x5562f9fb0b40
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 103948288 unmapped: 42958848 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562facf2000 session 0x5562f9322d20
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562fa58d400 session 0x5562f93230e0
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 111779840 unmapped: 35127296 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562f72c0800 session 0x5562f811e3c0
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 49.107105255s of 49.211765289s, submitted: 6
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562f72c1400 session 0x5562f9190f00
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 35463168 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562f8c77800 session 0x5562fa4c25a0
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562facf2000 session 0x5562fa4c21e0
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562f9a56800 session 0x5562fa4c32c0
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562f72c0800 session 0x5562fa4c2f00
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f9750000/0x0/0x4ffc00000, data 0x1e51bde/0x1f1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 35463168 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215068 data_alloc: 218103808 data_used: 13914112
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562f72c1400 session 0x5562fa4c23c0
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 35463168 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562f8c77800 session 0x5562fa4c2000
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 35463168 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f9750000/0x0/0x4ffc00000, data 0x1e51bde/0x1f1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562facf2000 session 0x5562fa4c30e0
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562f9a56400 session 0x5562fa4c2780
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562fd0df800 session 0x5562f87ef4a0
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 112304128 unmapped: 34603008 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 112304128 unmapped: 34603008 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562fd0df000 session 0x5562fa38fc20
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562fd0df400 session 0x5562fa587e00
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562f72c1400 session 0x5562fa587c20
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 112304128 unmapped: 34603008 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562f8c76800 session 0x5562fa5870e0
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562f72c1400 session 0x5562fa586000
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562fd0df000 session 0x5562f7e9a780
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562fd0df400 session 0x5562f9323680
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1326211 data_alloc: 218103808 data_used: 13918208
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562fd0df800 session 0x5562f8937c20
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562f8c77800 session 0x5562fa025e00
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 112689152 unmapped: 34217984 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562f72c1400 session 0x5562f72bed20
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114491392 unmapped: 32415744 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f8500000/0x0/0x4ffc00000, data 0x309fbfd/0x316e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f8500000/0x0/0x4ffc00000, data 0x309fbfd/0x316e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114532352 unmapped: 32374784 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562fd0df000 session 0x5562f9a5af00
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562fd0df400 session 0x5562f9fafa40
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 114597888 unmapped: 32309248 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f8500000/0x0/0x4ffc00000, data 0x309fbfd/0x316e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562fd0df800 session 0x5562f9190f00
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.823292732s of 11.363385201s, submitted: 60
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115425280 unmapped: 31481856 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562facf2000 session 0x5562f811e3c0
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562facf2000 session 0x5562f9fb05a0
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1420580 data_alloc: 234881024 data_used: 19738624
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115425280 unmapped: 31481856 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562f9a56400 session 0x5562fa38fa40
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562f72c0800 session 0x5562fa5b10e0
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562fd0df400 session 0x5562f9b0be00
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 112975872 unmapped: 33931264 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562fd0df800 session 0x5562f88d6960
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562f72c0800 session 0x5562f7e9a5a0
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562f9a56400 session 0x5562f8c86b40
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562facf2000 session 0x5562f89372c0
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562fd0df400 session 0x5562f9322960
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 112386048 unmapped: 34521088 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f8a8c000/0x0/0x4ffc00000, data 0x2b13bfe/0x2be2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [1])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562f9a39400 session 0x5562facf6960
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 112386048 unmapped: 34521088 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562f72c0800 session 0x5562fa4c2d20
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 112386048 unmapped: 34521088 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342282 data_alloc: 218103808 data_used: 14016512
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 112320512 unmapped: 34586624 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 112320512 unmapped: 34586624 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f8a8b000/0x0/0x4ffc00000, data 0x2b13c0e/0x2be3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 113311744 unmapped: 33595392 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f8a8b000/0x0/0x4ffc00000, data 0x2b13c0e/0x2be3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 117628928 unmapped: 29278208 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f8a8b000/0x0/0x4ffc00000, data 0x2b13c0e/0x2be3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121487360 unmapped: 25419776 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1458282 data_alloc: 234881024 data_used: 30162944
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 121487360 unmapped: 25419776 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f8a8b000/0x0/0x4ffc00000, data 0x2b13c0e/0x2be3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.190856934s of 11.338185310s, submitted: 32
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562f9a38400 session 0x5562fa4c2f00
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562facf2000 session 0x5562f8c863c0
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562f9a39800 session 0x5562f7e0cd20
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562f9a56400 session 0x5562f88ec5a0
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115982336 unmapped: 30924800 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562f72c0800 session 0x5562fa38fc20
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562f9a38400 session 0x5562f8c87860
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116023296 unmapped: 30883840 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116023296 unmapped: 30883840 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116023296 unmapped: 30883840 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f9afd000/0x0/0x4ffc00000, data 0x1aa4bde/0x1b71000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218056 data_alloc: 218103808 data_used: 14536704
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116023296 unmapped: 30883840 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116023296 unmapped: 30883840 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f9afd000/0x0/0x4ffc00000, data 0x1aa4bde/0x1b71000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116023296 unmapped: 30883840 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116023296 unmapped: 30883840 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116023296 unmapped: 30883840 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218056 data_alloc: 218103808 data_used: 14536704
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116031488 unmapped: 30875648 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116031488 unmapped: 30875648 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116031488 unmapped: 30875648 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f9afd000/0x0/0x4ffc00000, data 0x1aa4bde/0x1b71000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116031488 unmapped: 30875648 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116031488 unmapped: 30875648 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218056 data_alloc: 218103808 data_used: 14536704
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116031488 unmapped: 30875648 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116031488 unmapped: 30875648 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116031488 unmapped: 30875648 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116039680 unmapped: 30867456 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562f9a39800 session 0x5562f9faf680
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562facf2000 session 0x5562fac96b40
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562fd0df400 session 0x5562f9faef00
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f9afd000/0x0/0x4ffc00000, data 0x1aa4bde/0x1b71000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562f72c0800 session 0x5562f9a86960
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.093891144s of 18.368560791s, submitted: 65
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562f9a38400 session 0x5562fa565c20
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 ms_handle_reset con 0x5562f9a39800 session 0x5562f9fb1860
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115064832 unmapped: 31842304 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1250489 data_alloc: 218103808 data_used: 14536704
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115064832 unmapped: 31842304 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 137 ms_handle_reset con 0x5562facf2000 session 0x5562f8c892c0
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115146752 unmapped: 31760384 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 137 ms_handle_reset con 0x5562f9a39000 session 0x5562facff4a0
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 137 ms_handle_reset con 0x5562f9a39000 session 0x5562f8937c20
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115154944 unmapped: 31752192 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f970e000/0x0/0x4ffc00000, data 0x1e8ebe0/0x1f5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115089408 unmapped: 31817728 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 137 handle_osd_map epochs [137,138], i have 137, src has [1,138]
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115113984 unmapped: 31793152 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 138 ms_handle_reset con 0x5562f72c0800 session 0x5562fbef7680
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f970d000/0x0/0x4ffc00000, data 0x1e9038e/0x1f60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1259892 data_alloc: 218103808 data_used: 14553088
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 138 ms_handle_reset con 0x5562f9a38400 session 0x5562fa5645a0
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115122176 unmapped: 31784960 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f970d000/0x0/0x4ffc00000, data 0x1e9038e/0x1f60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 138 ms_handle_reset con 0x5562f9a39800 session 0x5562fa5654a0
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115130368 unmapped: 31776768 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 138 ms_handle_reset con 0x5562facf2000 session 0x5562f9fb0960
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 138 ms_handle_reset con 0x5562f72c0800 session 0x5562f9fb03c0
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115179520 unmapped: 31727616 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f970c000/0x0/0x4ffc00000, data 0x1e903c1/0x1f62000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115179520 unmapped: 31727616 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115851264 unmapped: 31055872 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f91ea000/0x0/0x4ffc00000, data 0x23b23c1/0x2484000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.653357506s of 11.279190063s, submitted: 111
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304023 data_alloc: 218103808 data_used: 14639104
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115499008 unmapped: 31408128 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116088832 unmapped: 30818304 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116088832 unmapped: 30818304 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116088832 unmapped: 30818304 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9159000/0x0/0x4ffc00000, data 0x243fe24/0x2513000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116228096 unmapped: 30679040 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1347987 data_alloc: 234881024 data_used: 18644992
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116228096 unmapped: 30679040 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116228096 unmapped: 30679040 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 31375360 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 31375360 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f913f000/0x0/0x4ffc00000, data 0x245be24/0x252f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 31375360 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1345443 data_alloc: 234881024 data_used: 18644992
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115539968 unmapped: 31367168 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115539968 unmapped: 31367168 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115539968 unmapped: 31367168 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.370025635s of 12.496864319s, submitted: 34
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115703808 unmapped: 31203328 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115712000 unmapped: 31195136 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1345691 data_alloc: 234881024 data_used: 18644992
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9135000/0x0/0x4ffc00000, data 0x2465e24/0x2539000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115712000 unmapped: 31195136 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115712000 unmapped: 31195136 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115712000 unmapped: 31195136 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115712000 unmapped: 31195136 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 139 ms_handle_reset con 0x5562f9a39800 session 0x5562f9322960
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 139 ms_handle_reset con 0x5562f9a38c00 session 0x5562f93223c0
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 139 ms_handle_reset con 0x5562f9a38800 session 0x5562f9322d20
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 139 ms_handle_reset con 0x5562f8115400 session 0x5562f93230e0
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 139 ms_handle_reset con 0x5562f72c0800 session 0x5562f7e0c000
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 139 ms_handle_reset con 0x5562f9a38800 session 0x5562f7d470e0
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 139 ms_handle_reset con 0x5562f9a38c00 session 0x5562f8c89680
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 139 ms_handle_reset con 0x5562f9a39800 session 0x5562f8c8a000
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115810304 unmapped: 31096832 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 139 ms_handle_reset con 0x5562f9b4a400 session 0x5562f811e780
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f8e93000/0x0/0x4ffc00000, data 0x2706e34/0x27db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1369291 data_alloc: 234881024 data_used: 18644992
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115818496 unmapped: 31088640 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115818496 unmapped: 31088640 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f8e93000/0x0/0x4ffc00000, data 0x2706e34/0x27db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115818496 unmapped: 31088640 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115818496 unmapped: 31088640 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115818496 unmapped: 31088640 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1369291 data_alloc: 234881024 data_used: 18644992
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115818496 unmapped: 31088640 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 115818496 unmapped: 31088640 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.266865730s of 14.365548134s, submitted: 11
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 139 ms_handle_reset con 0x5562f72c0800 session 0x5562fa586000
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f8e69000/0x0/0x4ffc00000, data 0x2730e34/0x2805000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116178944 unmapped: 30728192 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116187136 unmapped: 30720000 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116219904 unmapped: 30687232 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1372069 data_alloc: 234881024 data_used: 18657280
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116219904 unmapped: 30687232 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116744192 unmapped: 30162944 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116744192 unmapped: 30162944 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f8e67000/0x0/0x4ffc00000, data 0x2731e34/0x2806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116744192 unmapped: 30162944 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116744192 unmapped: 30162944 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1391879 data_alloc: 234881024 data_used: 20267008
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 29458432 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 29458432 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.909446716s of 10.056637764s, submitted: 27
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 139 ms_handle_reset con 0x5562f72c1400 session 0x5562f9fb0d20
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 139 ms_handle_reset con 0x5562fd0df000 session 0x5562f8c87e00
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 30121984 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f8b0b000/0x0/0x4ffc00000, data 0x2a8ee34/0x2b63000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 139 ms_handle_reset con 0x5562f9a39800 session 0x5562f9b0be00
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 30121984 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116129792 unmapped: 30777344 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1419525 data_alloc: 234881024 data_used: 20275200
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116129792 unmapped: 30777344 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116137984 unmapped: 30769152 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f8b0b000/0x0/0x4ffc00000, data 0x2a8ee34/0x2b63000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116137984 unmapped: 30769152 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116137984 unmapped: 30769152 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f8b0b000/0x0/0x4ffc00000, data 0x2a8ee34/0x2b63000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116137984 unmapped: 30769152 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1420501 data_alloc: 234881024 data_used: 20344832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116137984 unmapped: 30769152 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116137984 unmapped: 30769152 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116137984 unmapped: 30769152 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f8b0b000/0x0/0x4ffc00000, data 0x2a8ee34/0x2b63000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116137984 unmapped: 30769152 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116137984 unmapped: 30769152 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1420501 data_alloc: 234881024 data_used: 20344832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f8b0b000/0x0/0x4ffc00000, data 0x2a8ee34/0x2b63000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116146176 unmapped: 30760960 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116146176 unmapped: 30760960 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116146176 unmapped: 30760960 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116146176 unmapped: 30760960 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116146176 unmapped: 30760960 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f8b0b000/0x0/0x4ffc00000, data 0x2a8ee34/0x2b63000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1420501 data_alloc: 234881024 data_used: 20344832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116146176 unmapped: 30760960 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116146176 unmapped: 30760960 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116146176 unmapped: 30760960 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116146176 unmapped: 30760960 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f8b0b000/0x0/0x4ffc00000, data 0x2a8ee34/0x2b63000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f8b0b000/0x0/0x4ffc00000, data 0x2a8ee34/0x2b63000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116146176 unmapped: 30760960 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1420501 data_alloc: 234881024 data_used: 20344832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116154368 unmapped: 30752768 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 24.711545944s of 24.771131516s, submitted: 23
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 116162560 unmapped: 30744576 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 119111680 unmapped: 36192256 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7bc2000/0x0/0x4ffc00000, data 0x39d6e44/0x3aac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 119226368 unmapped: 36077568 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 118906880 unmapped: 36397056 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 139 handle_osd_map epochs [139,140], i have 139, src has [1,140]
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562fadb0000 session 0x5562f9fc4f00
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1587372 data_alloc: 234881024 data_used: 21413888
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 118906880 unmapped: 36397056 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 118906880 unmapped: 36397056 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 118906880 unmapped: 36397056 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7743000/0x0/0x4ffc00000, data 0x3e519c1/0x3f28000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 118906880 unmapped: 36397056 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 118906880 unmapped: 36397056 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1588000 data_alloc: 234881024 data_used: 21422080
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 118906880 unmapped: 36397056 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 118906880 unmapped: 36397056 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 118915072 unmapped: 36388864 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.799458504s of 11.224850655s, submitted: 62
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7743000/0x0/0x4ffc00000, data 0x3e519c1/0x3f28000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 36315136 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7744000/0x0/0x4ffc00000, data 0x3e519c1/0x3f28000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 36208640 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1592928 data_alloc: 234881024 data_used: 22020096
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 36208640 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7744000/0x0/0x4ffc00000, data 0x3e519c1/0x3f28000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 119103488 unmapped: 36200448 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562f72c0800 session 0x5562fa4c3e00
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562f72c1400 session 0x5562fa4cdc20
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562f9a39800 session 0x5562f9322960
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 119103488 unmapped: 36200448 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562fadb0000 session 0x5562f9a5a000
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562fd0df000 session 0x5562f9a5b2c0
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130834432 unmapped: 24469504 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562f72c0800 session 0x5562f7d46f00
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562f72c1400 session 0x5562f88ecb40
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130883584 unmapped: 24420352 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1679610 data_alloc: 234881024 data_used: 34021376
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7135000/0x0/0x4ffc00000, data 0x44629c1/0x4539000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130883584 unmapped: 24420352 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130883584 unmapped: 24420352 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130883584 unmapped: 24420352 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130883584 unmapped: 24420352 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f70b5000/0x0/0x4ffc00000, data 0x44e29c1/0x45b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.434187889s of 11.577630043s, submitted: 17
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130883584 unmapped: 24420352 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1679862 data_alloc: 234881024 data_used: 34021376
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130883584 unmapped: 24420352 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130883584 unmapped: 24420352 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130883584 unmapped: 24420352 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130883584 unmapped: 24420352 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f70aa000/0x0/0x4ffc00000, data 0x44e99c1/0x45c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f70aa000/0x0/0x4ffc00000, data 0x44e99c1/0x45c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130883584 unmapped: 24420352 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f70aa000/0x0/0x4ffc00000, data 0x44e99c1/0x45c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1679744 data_alloc: 234881024 data_used: 34017280
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130891776 unmapped: 24412160 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130891776 unmapped: 24412160 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562f9a39000 session 0x5562f9fb10e0
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562f9a38400 session 0x5562fa38fa40
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130899968 unmapped: 24403968 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562f9a39800 session 0x5562fabd7680
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f70ae000/0x0/0x4ffc00000, data 0x44e99c1/0x45c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 129400832 unmapped: 25903104 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 129441792 unmapped: 25862144 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1620791 data_alloc: 234881024 data_used: 34242560
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130326528 unmapped: 24977408 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f77ef000/0x0/0x4ffc00000, data 0x3da992c/0x3e7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 131473408 unmapped: 23830528 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 131473408 unmapped: 23830528 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f77ef000/0x0/0x4ffc00000, data 0x3da992c/0x3e7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 131473408 unmapped: 23830528 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 131473408 unmapped: 23830528 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1640631 data_alloc: 251658240 data_used: 36900864
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f77ef000/0x0/0x4ffc00000, data 0x3da992c/0x3e7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 131473408 unmapped: 23830528 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f77ef000/0x0/0x4ffc00000, data 0x3da992c/0x3e7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f77ef000/0x0/0x4ffc00000, data 0x3da992c/0x3e7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 131473408 unmapped: 23830528 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562f9a38800 session 0x5562fa5874a0
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562f9a38c00 session 0x5562f87ee1e0
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.382255554s of 18.596866608s, submitted: 43
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 131489792 unmapped: 23814144 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562f72c1400 session 0x5562f773fc20
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130318336 unmapped: 24985600 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130318336 unmapped: 24985600 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f820a000/0x0/0x4ffc00000, data 0x339191c/0x3464000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1534721 data_alloc: 234881024 data_used: 34181120
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562f99e4c00 session 0x5562fa38ef00
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130318336 unmapped: 24985600 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562fd490400 session 0x5562fa38f4a0
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562f72c1400 session 0x5562fa38e000
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130195456 unmapped: 25108480 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f89d3000/0x0/0x4ffc00000, data 0x2bc891c/0x2c9b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130195456 unmapped: 25108480 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130195456 unmapped: 25108480 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130195456 unmapped: 25108480 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1454986 data_alloc: 234881024 data_used: 32808960
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130195456 unmapped: 25108480 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f89d4000/0x0/0x4ffc00000, data 0x2bc890c/0x2c9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f89d4000/0x0/0x4ffc00000, data 0x2bc890c/0x2c9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130195456 unmapped: 25108480 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130195456 unmapped: 25108480 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130195456 unmapped: 25108480 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f89d4000/0x0/0x4ffc00000, data 0x2bc890c/0x2c9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130195456 unmapped: 25108480 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1454986 data_alloc: 234881024 data_used: 32808960
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130195456 unmapped: 25108480 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f89d4000/0x0/0x4ffc00000, data 0x2bc890c/0x2c9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130195456 unmapped: 25108480 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130195456 unmapped: 25108480 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130195456 unmapped: 25108480 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130195456 unmapped: 25108480 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1454986 data_alloc: 234881024 data_used: 32808960
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130195456 unmapped: 25108480 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f89d4000/0x0/0x4ffc00000, data 0x2bc890c/0x2c9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130195456 unmapped: 25108480 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130195456 unmapped: 25108480 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f89d4000/0x0/0x4ffc00000, data 0x2bc890c/0x2c9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130195456 unmapped: 25108480 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 130195456 unmapped: 25108480 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1454986 data_alloc: 234881024 data_used: 32808960
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 22.527589798s of 22.740203857s, submitted: 51
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 135094272 unmapped: 20209664 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134627328 unmapped: 20676608 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134709248 unmapped: 20594688 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7f75000/0x0/0x4ffc00000, data 0x362790c/0x36f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134709248 unmapped: 20594688 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134709248 unmapped: 20594688 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1541956 data_alloc: 234881024 data_used: 33357824
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134709248 unmapped: 20594688 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134709248 unmapped: 20594688 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7f6d000/0x0/0x4ffc00000, data 0x362f90c/0x3701000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134709248 unmapped: 20594688 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134709248 unmapped: 20594688 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134709248 unmapped: 20594688 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1542172 data_alloc: 234881024 data_used: 33357824
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134709248 unmapped: 20594688 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134709248 unmapped: 20594688 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7f6b000/0x0/0x4ffc00000, data 0x363190c/0x3703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134709248 unmapped: 20594688 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134709248 unmapped: 20594688 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134709248 unmapped: 20594688 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1542172 data_alloc: 234881024 data_used: 33357824
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134709248 unmapped: 20594688 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7f6b000/0x0/0x4ffc00000, data 0x363190c/0x3703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134709248 unmapped: 20594688 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134709248 unmapped: 20594688 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134709248 unmapped: 20594688 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134709248 unmapped: 20594688 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1542172 data_alloc: 234881024 data_used: 33357824
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134709248 unmapped: 20594688 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7f6b000/0x0/0x4ffc00000, data 0x363190c/0x3703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134709248 unmapped: 20594688 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134709248 unmapped: 20594688 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7f6b000/0x0/0x4ffc00000, data 0x363190c/0x3703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7f6b000/0x0/0x4ffc00000, data 0x363190c/0x3703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134709248 unmapped: 20594688 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7f6b000/0x0/0x4ffc00000, data 0x363190c/0x3703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 20586496 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7f6b000/0x0/0x4ffc00000, data 0x363190c/0x3703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1542172 data_alloc: 234881024 data_used: 33357824
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 20586496 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 20586496 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 20586496 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 20586496 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7f6b000/0x0/0x4ffc00000, data 0x363190c/0x3703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 20586496 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1542172 data_alloc: 234881024 data_used: 33357824
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 20586496 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 20586496 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7f6b000/0x0/0x4ffc00000, data 0x363190c/0x3703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 20586496 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 20586496 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7f6b000/0x0/0x4ffc00000, data 0x363190c/0x3703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 20586496 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1542172 data_alloc: 234881024 data_used: 33357824
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 20586496 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 20586496 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7f6b000/0x0/0x4ffc00000, data 0x363190c/0x3703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7f6b000/0x0/0x4ffc00000, data 0x363190c/0x3703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 37.419891357s of 37.711841583s, submitted: 78
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 20586496 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 20586496 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 20586496 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1540412 data_alloc: 234881024 data_used: 33357824
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7f6b000/0x0/0x4ffc00000, data 0x363190c/0x3703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 20586496 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 20586496 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 20586496 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 20586496 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7f6b000/0x0/0x4ffc00000, data 0x363190c/0x3703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 20586496 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1540412 data_alloc: 234881024 data_used: 33357824
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 20586496 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134725632 unmapped: 20578304 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7f6b000/0x0/0x4ffc00000, data 0x363190c/0x3703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134725632 unmapped: 20578304 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7f6b000/0x0/0x4ffc00000, data 0x363190c/0x3703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134725632 unmapped: 20578304 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134725632 unmapped: 20578304 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1540412 data_alloc: 234881024 data_used: 33357824
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134725632 unmapped: 20578304 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134725632 unmapped: 20578304 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7f6b000/0x0/0x4ffc00000, data 0x363190c/0x3703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134725632 unmapped: 20578304 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134725632 unmapped: 20578304 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134725632 unmapped: 20578304 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1540412 data_alloc: 234881024 data_used: 33357824
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134725632 unmapped: 20578304 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134725632 unmapped: 20578304 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134733824 unmapped: 20570112 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7f6b000/0x0/0x4ffc00000, data 0x363190c/0x3703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134733824 unmapped: 20570112 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7f6b000/0x0/0x4ffc00000, data 0x363190c/0x3703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134733824 unmapped: 20570112 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1540412 data_alloc: 234881024 data_used: 33357824
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134733824 unmapped: 20570112 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134733824 unmapped: 20570112 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134742016 unmapped: 20561920 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 25.938188553s of 25.944890976s, submitted: 1
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562f99e4c00 session 0x5562f9fae960
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 20512768 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7db3000/0x0/0x4ffc00000, data 0x37e990c/0x38bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 20512768 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1561132 data_alloc: 234881024 data_used: 33357824
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 20512768 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7db3000/0x0/0x4ffc00000, data 0x37e990c/0x38bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 20512768 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7db3000/0x0/0x4ffc00000, data 0x37e990c/0x38bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 20512768 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 20512768 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 20512768 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1561132 data_alloc: 234881024 data_used: 33357824
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 20512768 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 20512768 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 20512768 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7db3000/0x0/0x4ffc00000, data 0x37e990c/0x38bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134709248 unmapped: 20594688 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134709248 unmapped: 20594688 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1565292 data_alloc: 234881024 data_used: 33914880
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134709248 unmapped: 20594688 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134709248 unmapped: 20594688 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134709248 unmapped: 20594688 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7db3000/0x0/0x4ffc00000, data 0x37e990c/0x38bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 20586496 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 20586496 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1565292 data_alloc: 234881024 data_used: 33914880
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 20586496 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 20586496 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 20586496 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7db3000/0x0/0x4ffc00000, data 0x37e990c/0x38bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 20586496 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 20586496 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1565292 data_alloc: 234881024 data_used: 33914880
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 20586496 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 20586496 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7db3000/0x0/0x4ffc00000, data 0x37e990c/0x38bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 20586496 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 20586496 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 20586496 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7db3000/0x0/0x4ffc00000, data 0x37e990c/0x38bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1565292 data_alloc: 234881024 data_used: 33914880
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134725632 unmapped: 20578304 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134725632 unmapped: 20578304 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134733824 unmapped: 20570112 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134733824 unmapped: 20570112 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134733824 unmapped: 20570112 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1565292 data_alloc: 234881024 data_used: 33914880
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134733824 unmapped: 20570112 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7db3000/0x0/0x4ffc00000, data 0x37e990c/0x38bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134733824 unmapped: 20570112 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134733824 unmapped: 20570112 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134733824 unmapped: 20570112 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134733824 unmapped: 20570112 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1565292 data_alloc: 234881024 data_used: 33914880
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134733824 unmapped: 20570112 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7db3000/0x0/0x4ffc00000, data 0x37e990c/0x38bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134733824 unmapped: 20570112 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134733824 unmapped: 20570112 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134733824 unmapped: 20570112 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134733824 unmapped: 20570112 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1565292 data_alloc: 234881024 data_used: 33914880
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 134733824 unmapped: 20570112 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 43.308761597s of 43.364856720s, submitted: 9
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 135225344 unmapped: 20078592 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7db3000/0x0/0x4ffc00000, data 0x37e990c/0x38bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 135733248 unmapped: 19570688 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 135888896 unmapped: 19415040 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 135643136 unmapped: 19660800 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7a8e000/0x0/0x4ffc00000, data 0x3b0690c/0x3bd8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1600034 data_alloc: 234881024 data_used: 33996800
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 135643136 unmapped: 19660800 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7a8c000/0x0/0x4ffc00000, data 0x3b0890c/0x3bda000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 135643136 unmapped: 19660800 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 135643136 unmapped: 19660800 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 135643136 unmapped: 19660800 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 135643136 unmapped: 19660800 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1600050 data_alloc: 234881024 data_used: 33996800
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7a8c000/0x0/0x4ffc00000, data 0x3b0890c/0x3bda000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 135643136 unmapped: 19660800 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 135643136 unmapped: 19660800 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 135643136 unmapped: 19660800 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 135643136 unmapped: 19660800 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 135643136 unmapped: 19660800 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1600050 data_alloc: 234881024 data_used: 33996800
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 135643136 unmapped: 19660800 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7a8c000/0x0/0x4ffc00000, data 0x3b0890c/0x3bda000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 135643136 unmapped: 19660800 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 135643136 unmapped: 19660800 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 135643136 unmapped: 19660800 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 135643136 unmapped: 19660800 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1600050 data_alloc: 234881024 data_used: 33996800
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 135643136 unmapped: 19660800 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 135643136 unmapped: 19660800 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7a8c000/0x0/0x4ffc00000, data 0x3b0890c/0x3bda000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 135643136 unmapped: 19660800 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 135643136 unmapped: 19660800 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7a8c000/0x0/0x4ffc00000, data 0x3b0890c/0x3bda000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 135643136 unmapped: 19660800 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1600050 data_alloc: 234881024 data_used: 33996800
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 135643136 unmapped: 19660800 heap: 155303936 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562fadb0800 session 0x5562f79efa40
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562fadb0c00 session 0x5562f88052c0
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562fadb1000 session 0x5562fa4c3680
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562fadb1000 session 0x5562fbef6960
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 24.942050934s of 25.233228683s, submitted: 51
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562f72c1400 session 0x5562f88e8d20
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 136216576 unmapped: 22765568 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562f99e4c00 session 0x5562fbef7e00
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562fadb0800 session 0x5562f88ec5a0
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562fadb0c00 session 0x5562facfef00
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562fadb0c00 session 0x5562fa3d8960
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 136224768 unmapped: 22757376 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7135000/0x0/0x4ffc00000, data 0x446597e/0x4539000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 136224768 unmapped: 22757376 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7135000/0x0/0x4ffc00000, data 0x446597e/0x4539000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 136224768 unmapped: 22757376 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1669625 data_alloc: 234881024 data_used: 33996800
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 136224768 unmapped: 22757376 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 136224768 unmapped: 22757376 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 136224768 unmapped: 22757376 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562f72c1400 session 0x5562f9faed20
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7134000/0x0/0x4ffc00000, data 0x44659a1/0x453a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 136224768 unmapped: 22757376 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 136232960 unmapped: 22749184 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1670578 data_alloc: 234881024 data_used: 34000896
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 136855552 unmapped: 22126592 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 140099584 unmapped: 18882560 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 143032320 unmapped: 15949824 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 143040512 unmapped: 15941632 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7132000/0x0/0x4ffc00000, data 0x44669a1/0x453b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 143040512 unmapped: 15941632 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1739858 data_alloc: 251658240 data_used: 43806720
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 143040512 unmapped: 15941632 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 143040512 unmapped: 15941632 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7132000/0x0/0x4ffc00000, data 0x44669a1/0x453b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7132000/0x0/0x4ffc00000, data 0x44669a1/0x453b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 143048704 unmapped: 15933440 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 143048704 unmapped: 15933440 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.195671082s of 18.394033432s, submitted: 36
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 143220736 unmapped: 15761408 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1742098 data_alloc: 251658240 data_used: 43798528
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 143220736 unmapped: 15761408 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 143220736 unmapped: 15761408 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7133000/0x0/0x4ffc00000, data 0x44669a1/0x453b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 143253504 unmapped: 15728640 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 143253504 unmapped: 15728640 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 143253504 unmapped: 15728640 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1742098 data_alloc: 251658240 data_used: 43798528
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 143253504 unmapped: 15728640 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7133000/0x0/0x4ffc00000, data 0x44669a1/0x453b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 143253504 unmapped: 15728640 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 143261696 unmapped: 15720448 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7133000/0x0/0x4ffc00000, data 0x44669a1/0x453b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 143261696 unmapped: 15720448 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7133000/0x0/0x4ffc00000, data 0x44669a1/0x453b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 143261696 unmapped: 15720448 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1742098 data_alloc: 251658240 data_used: 43798528
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 143261696 unmapped: 15720448 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 143261696 unmapped: 15720448 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 143261696 unmapped: 15720448 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7133000/0x0/0x4ffc00000, data 0x44669a1/0x453b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 143261696 unmapped: 15720448 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 143261696 unmapped: 15720448 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1742098 data_alloc: 251658240 data_used: 43798528
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 143269888 unmapped: 15712256 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7133000/0x0/0x4ffc00000, data 0x44669a1/0x453b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 143269888 unmapped: 15712256 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 143269888 unmapped: 15712256 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 143269888 unmapped: 15712256 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 143269888 unmapped: 15712256 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1742098 data_alloc: 251658240 data_used: 43798528
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 143269888 unmapped: 15712256 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 143278080 unmapped: 15704064 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7133000/0x0/0x4ffc00000, data 0x44669a1/0x453b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 143278080 unmapped: 15704064 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 143278080 unmapped: 15704064 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 143278080 unmapped: 15704064 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1742578 data_alloc: 251658240 data_used: 43810816
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 25.829374313s of 25.854768753s, submitted: 13
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 144539648 unmapped: 14442496 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f6e8f000/0x0/0x4ffc00000, data 0x470a9a1/0x47df000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 144596992 unmapped: 14385152 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 144850944 unmapped: 14131200 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 144941056 unmapped: 14041088 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 144949248 unmapped: 14032896 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1776448 data_alloc: 251658240 data_used: 44314624
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f6e7e000/0x0/0x4ffc00000, data 0x471b9a1/0x47f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f6e7e000/0x0/0x4ffc00000, data 0x471b9a1/0x47f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 144949248 unmapped: 14032896 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 144949248 unmapped: 14032896 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 144949248 unmapped: 14032896 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 144949248 unmapped: 14032896 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 144949248 unmapped: 14032896 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1776448 data_alloc: 251658240 data_used: 44314624
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f6e7e000/0x0/0x4ffc00000, data 0x471b9a1/0x47f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 144949248 unmapped: 14032896 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.517509460s of 10.725372314s, submitted: 41
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562f99e4c00 session 0x5562fbef72c0
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f6e7e000/0x0/0x4ffc00000, data 0x471b9a1/0x47f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 139083776 unmapped: 19898368 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562fadb0800 session 0x5562f7d465a0
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 139083776 unmapped: 19898368 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7a92000/0x0/0x4ffc00000, data 0x3b0990c/0x3bdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 139083776 unmapped: 19898368 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 139083776 unmapped: 19898368 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1607874 data_alloc: 234881024 data_used: 33984512
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 139083776 unmapped: 19898368 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562f9a38800 session 0x5562f88bc5a0
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 139083776 unmapped: 19898368 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562f72c1400 session 0x5562f8c87680
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 138559488 unmapped: 20422656 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 138559488 unmapped: 20422656 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7f6b000/0x0/0x4ffc00000, data 0x363190c/0x3703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 138559488 unmapped: 20422656 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1560296 data_alloc: 234881024 data_used: 33357824
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 138559488 unmapped: 20422656 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 138559488 unmapped: 20422656 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7f6b000/0x0/0x4ffc00000, data 0x363190c/0x3703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 138559488 unmapped: 20422656 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 138559488 unmapped: 20422656 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7f6b000/0x0/0x4ffc00000, data 0x363190c/0x3703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 138559488 unmapped: 20422656 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1560296 data_alloc: 234881024 data_used: 33357824
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 138559488 unmapped: 20422656 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 138559488 unmapped: 20422656 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 138559488 unmapped: 20422656 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7f6b000/0x0/0x4ffc00000, data 0x363190c/0x3703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 138559488 unmapped: 20422656 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 138559488 unmapped: 20422656 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1560296 data_alloc: 234881024 data_used: 33357824
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 138559488 unmapped: 20422656 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 138559488 unmapped: 20422656 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7f6b000/0x0/0x4ffc00000, data 0x363190c/0x3703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 138559488 unmapped: 20422656 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 138559488 unmapped: 20422656 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 138559488 unmapped: 20422656 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1560296 data_alloc: 234881024 data_used: 33357824
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 138559488 unmapped: 20422656 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 138559488 unmapped: 20422656 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7f6b000/0x0/0x4ffc00000, data 0x363190c/0x3703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 138559488 unmapped: 20422656 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7f6b000/0x0/0x4ffc00000, data 0x363190c/0x3703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 138559488 unmapped: 20422656 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7f6b000/0x0/0x4ffc00000, data 0x363190c/0x3703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 138559488 unmapped: 20422656 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1560296 data_alloc: 234881024 data_used: 33357824
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 138559488 unmapped: 20422656 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 138559488 unmapped: 20422656 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 138559488 unmapped: 20422656 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7f6b000/0x0/0x4ffc00000, data 0x363190c/0x3703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 138559488 unmapped: 20422656 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7f6b000/0x0/0x4ffc00000, data 0x363190c/0x3703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7f6b000/0x0/0x4ffc00000, data 0x363190c/0x3703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 138559488 unmapped: 20422656 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1560296 data_alloc: 234881024 data_used: 33357824
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 138559488 unmapped: 20422656 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 138559488 unmapped: 20422656 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7f6b000/0x0/0x4ffc00000, data 0x363190c/0x3703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 138559488 unmapped: 20422656 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7f6b000/0x0/0x4ffc00000, data 0x363190c/0x3703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 138559488 unmapped: 20422656 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 138559488 unmapped: 20422656 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1560296 data_alloc: 234881024 data_used: 33357824
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 138559488 unmapped: 20422656 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 138559488 unmapped: 20422656 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 138559488 unmapped: 20422656 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7f6b000/0x0/0x4ffc00000, data 0x363190c/0x3703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 138559488 unmapped: 20422656 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 138559488 unmapped: 20422656 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1560296 data_alloc: 234881024 data_used: 33357824
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 138559488 unmapped: 20422656 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 138559488 unmapped: 20422656 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 138559488 unmapped: 20422656 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7f6b000/0x0/0x4ffc00000, data 0x363190c/0x3703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 138559488 unmapped: 20422656 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 138559488 unmapped: 20422656 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1560296 data_alloc: 234881024 data_used: 33357824
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562f99e4c00 session 0x5562f88d70e0
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562fadb0800 session 0x5562f79ee000
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562fadb0c00 session 0x5562f8c861e0
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562fadb1000 session 0x5562f7d46b40
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 138575872 unmapped: 20406272 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 49.994422913s of 50.226264954s, submitted: 58
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562f72c1400 session 0x5562fa2df0e0
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562f99e4c00 session 0x5562f87ee000
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562fadb0800 session 0x5562fa5b0d20
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562fadb0c00 session 0x5562fa5874a0
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562fadb1400 session 0x5562fac963c0
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 139001856 unmapped: 19980288 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 139001856 unmapped: 19980288 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7907000/0x0/0x4ffc00000, data 0x388397e/0x3957000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 139001856 unmapped: 19980288 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 139001856 unmapped: 19980288 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1586713 data_alloc: 234881024 data_used: 33357824
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 139001856 unmapped: 19980288 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562fadb1400 session 0x5562fac96780
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 137494528 unmapped: 21487616 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 137494528 unmapped: 21487616 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 137494528 unmapped: 21487616 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f78dc000/0x0/0x4ffc00000, data 0x38ad9a1/0x3982000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 137494528 unmapped: 21487616 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1597269 data_alloc: 234881024 data_used: 33976320
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 137494528 unmapped: 21487616 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 137494528 unmapped: 21487616 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 3600.2 total, 600.0 interval
Cumulative writes: 10K writes, 38K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
Cumulative WAL: 10K writes, 2780 syncs, 3.72 writes per sync, written: 0.03 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1893 writes, 6388 keys, 1893 commit groups, 1.0 writes per commit group, ingest: 5.65 MB, 0.01 MB/s
Interval WAL: 1893 writes, 822 syncs, 2.30 writes per sync, written: 0.01 GB, 0.01 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 137494528 unmapped: 21487616 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 137494528 unmapped: 21487616 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f78dc000/0x0/0x4ffc00000, data 0x38ad9a1/0x3982000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 137494528 unmapped: 21487616 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1608789 data_alloc: 251658240 data_used: 35598336
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 137494528 unmapped: 21487616 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f78dc000/0x0/0x4ffc00000, data 0x38ad9a1/0x3982000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 137494528 unmapped: 21487616 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 137494528 unmapped: 21487616 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 137494528 unmapped: 21487616 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f78dc000/0x0/0x4ffc00000, data 0x38ad9a1/0x3982000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562f72c0000 session 0x5562fa4cc000
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 137494528 unmapped: 21487616 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1608789 data_alloc: 251658240 data_used: 35598336
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f78dc000/0x0/0x4ffc00000, data 0x38ad9a1/0x3982000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562f8c76000 session 0x5562fac965a0
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 137494528 unmapped: 21487616 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562f8c76400 session 0x5562f7d7e3c0
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 137502720 unmapped: 21479424 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 137502720 unmapped: 21479424 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 137502720 unmapped: 21479424 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f78dc000/0x0/0x4ffc00000, data 0x38ad9a1/0x3982000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 137502720 unmapped: 21479424 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1608789 data_alloc: 251658240 data_used: 35598336
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f78dc000/0x0/0x4ffc00000, data 0x38ad9a1/0x3982000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 137502720 unmapped: 21479424 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 137502720 unmapped: 21479424 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 137502720 unmapped: 21479424 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 137502720 unmapped: 21479424 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 137502720 unmapped: 21479424 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1608789 data_alloc: 251658240 data_used: 35598336
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 137502720 unmapped: 21479424 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f78dc000/0x0/0x4ffc00000, data 0x38ad9a1/0x3982000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 137502720 unmapped: 21479424 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 137502720 unmapped: 21479424 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 137502720 unmapped: 21479424 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f78dc000/0x0/0x4ffc00000, data 0x38ad9a1/0x3982000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 137502720 unmapped: 21479424 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1608789 data_alloc: 251658240 data_used: 35598336
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 137502720 unmapped: 21479424 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 137502720 unmapped: 21479424 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 137502720 unmapped: 21479424 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f78dc000/0x0/0x4ffc00000, data 0x38ad9a1/0x3982000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 137502720 unmapped: 21479424 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 137502720 unmapped: 21479424 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1608789 data_alloc: 251658240 data_used: 35598336
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 137502720 unmapped: 21479424 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 40.539932251s of 40.785640717s, submitted: 38
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141746176 unmapped: 17235968 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141811712 unmapped: 17170432 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 142049280 unmapped: 16932864 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f70a1000/0x0/0x4ffc00000, data 0x40e89a1/0x41bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 140697600 unmapped: 18284544 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1685625 data_alloc: 251658240 data_used: 36429824
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 140697600 unmapped: 18284544 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7068000/0x0/0x4ffc00000, data 0x41219a1/0x41f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 140697600 unmapped: 18284544 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 140697600 unmapped: 18284544 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 140697600 unmapped: 18284544 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 140697600 unmapped: 18284544 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1685049 data_alloc: 251658240 data_used: 36433920
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 140697600 unmapped: 18284544 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7058000/0x0/0x4ffc00000, data 0x41319a1/0x4206000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7058000/0x0/0x4ffc00000, data 0x41319a1/0x4206000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 140697600 unmapped: 18284544 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 140697600 unmapped: 18284544 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7058000/0x0/0x4ffc00000, data 0x41319a1/0x4206000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 140697600 unmapped: 18284544 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 140697600 unmapped: 18284544 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1685049 data_alloc: 251658240 data_used: 36433920
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 140697600 unmapped: 18284544 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 140697600 unmapped: 18284544 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 140697600 unmapped: 18284544 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7058000/0x0/0x4ffc00000, data 0x41319a1/0x4206000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 140697600 unmapped: 18284544 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 140697600 unmapped: 18284544 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1685049 data_alloc: 251658240 data_used: 36433920
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 140697600 unmapped: 18284544 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.866823196s of 20.168373108s, submitted: 76
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7058000/0x0/0x4ffc00000, data 0x41319a1/0x4206000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 140738560 unmapped: 18243584 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 140738560 unmapped: 18243584 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 140738560 unmapped: 18243584 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 140738560 unmapped: 18243584 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1685205 data_alloc: 251658240 data_used: 36433920
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 140820480 unmapped: 18161664 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7055000/0x0/0x4ffc00000, data 0x41349a1/0x4209000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 140820480 unmapped: 18161664 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 140820480 unmapped: 18161664 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7055000/0x0/0x4ffc00000, data 0x41349a1/0x4209000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 140820480 unmapped: 18161664 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 140820480 unmapped: 18161664 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1687925 data_alloc: 251658240 data_used: 36696064
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 140820480 unmapped: 18161664 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 140820480 unmapped: 18161664 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 140820480 unmapped: 18161664 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 140820480 unmapped: 18161664 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7053000/0x0/0x4ffc00000, data 0x41369a1/0x420b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.775835991s of 12.799559593s, submitted: 3
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 140820480 unmapped: 18161664 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1687933 data_alloc: 251658240 data_used: 36696064
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 140820480 unmapped: 18161664 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 140820480 unmapped: 18161664 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 140820480 unmapped: 18161664 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 140820480 unmapped: 18161664 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7053000/0x0/0x4ffc00000, data 0x41369a1/0x420b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 140820480 unmapped: 18161664 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1687933 data_alloc: 251658240 data_used: 36696064
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 140820480 unmapped: 18161664 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 140820480 unmapped: 18161664 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 140828672 unmapped: 18153472 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 140828672 unmapped: 18153472 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 140828672 unmapped: 18153472 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1687933 data_alloc: 251658240 data_used: 36696064
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7053000/0x0/0x4ffc00000, data 0x41369a1/0x420b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 140828672 unmapped: 18153472 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7053000/0x0/0x4ffc00000, data 0x41369a1/0x420b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.709950447s of 12.717439651s, submitted: 1
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 140828672 unmapped: 18153472 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 140828672 unmapped: 18153472 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 140828672 unmapped: 18153472 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 140836864 unmapped: 18145280 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1688001 data_alloc: 251658240 data_used: 36696064
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 140836864 unmapped: 18145280 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 140836864 unmapped: 18145280 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 140836864 unmapped: 18145280 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 140836864 unmapped: 18145280 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 140836864 unmapped: 18145280 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1688001 data_alloc: 251658240 data_used: 36696064
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 140836864 unmapped: 18145280 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 140836864 unmapped: 18145280 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 140836864 unmapped: 18145280 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 140836864 unmapped: 18145280 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 140836864 unmapped: 18145280 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1688001 data_alloc: 251658240 data_used: 36696064
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 140836864 unmapped: 18145280 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 140836864 unmapped: 18145280 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 140836864 unmapped: 18145280 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562f8c77400 session 0x5562facbbc20
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 140845056 unmapped: 18137088 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 140845056 unmapped: 18137088 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1688001 data_alloc: 251658240 data_used: 36696064
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 140845056 unmapped: 18137088 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 140845056 unmapped: 18137088 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 140845056 unmapped: 18137088 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 140845056 unmapped: 18137088 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 140845056 unmapped: 18137088 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1688001 data_alloc: 251658240 data_used: 36696064
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 23.483970642s of 23.490903854s, submitted: 1
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 140877824 unmapped: 18104320 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 140886016 unmapped: 18096128 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 140886016 unmapped: 18096128 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 140935168 unmapped: 18046976 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141000704 unmapped: 17981440 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1687969 data_alloc: 251658240 data_used: 36704256
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141000704 unmapped: 17981440 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141000704 unmapped: 17981440 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141000704 unmapped: 17981440 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141000704 unmapped: 17981440 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141000704 unmapped: 17981440 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1688145 data_alloc: 251658240 data_used: 36704256
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141000704 unmapped: 17981440 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141000704 unmapped: 17981440 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141000704 unmapped: 17981440 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141000704 unmapped: 17981440 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141000704 unmapped: 17981440 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1688145 data_alloc: 251658240 data_used: 36704256
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141000704 unmapped: 17981440 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141008896 unmapped: 17973248 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141008896 unmapped: 17973248 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141008896 unmapped: 17973248 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141017088 unmapped: 17965056 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1688145 data_alloc: 251658240 data_used: 36704256
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141017088 unmapped: 17965056 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141017088 unmapped: 17965056 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141017088 unmapped: 17965056 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141017088 unmapped: 17965056 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141017088 unmapped: 17965056 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1688145 data_alloc: 251658240 data_used: 36704256
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141017088 unmapped: 17965056 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141017088 unmapped: 17965056 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141017088 unmapped: 17965056 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141017088 unmapped: 17965056 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141017088 unmapped: 17965056 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1688145 data_alloc: 251658240 data_used: 36704256
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141017088 unmapped: 17965056 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141017088 unmapped: 17965056 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141017088 unmapped: 17965056 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141017088 unmapped: 17965056 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141017088 unmapped: 17965056 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1688145 data_alloc: 251658240 data_used: 36704256
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141017088 unmapped: 17965056 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141017088 unmapped: 17965056 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141017088 unmapped: 17965056 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141017088 unmapped: 17965056 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141017088 unmapped: 17965056 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1688145 data_alloc: 251658240 data_used: 36704256
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141017088 unmapped: 17965056 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141017088 unmapped: 17965056 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141017088 unmapped: 17965056 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141017088 unmapped: 17965056 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141017088 unmapped: 17965056 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1688145 data_alloc: 251658240 data_used: 36704256
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141017088 unmapped: 17965056 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141017088 unmapped: 17965056 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141017088 unmapped: 17965056 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141017088 unmapped: 17965056 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141025280 unmapped: 17956864 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1688145 data_alloc: 251658240 data_used: 36704256
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141025280 unmapped: 17956864 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141025280 unmapped: 17956864 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141025280 unmapped: 17956864 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141025280 unmapped: 17956864 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141025280 unmapped: 17956864 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1688145 data_alloc: 251658240 data_used: 36704256
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141025280 unmapped: 17956864 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141025280 unmapped: 17956864 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141025280 unmapped: 17956864 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141025280 unmapped: 17956864 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141033472 unmapped: 17948672 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1688145 data_alloc: 251658240 data_used: 36704256
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141033472 unmapped: 17948672 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141033472 unmapped: 17948672 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141033472 unmapped: 17948672 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141033472 unmapped: 17948672 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141033472 unmapped: 17948672 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1688145 data_alloc: 251658240 data_used: 36704256
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141033472 unmapped: 17948672 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141033472 unmapped: 17948672 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141033472 unmapped: 17948672 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141033472 unmapped: 17948672 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141033472 unmapped: 17948672 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1688145 data_alloc: 251658240 data_used: 36704256
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141033472 unmapped: 17948672 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141033472 unmapped: 17948672 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141033472 unmapped: 17948672 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141033472 unmapped: 17948672 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141033472 unmapped: 17948672 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1688145 data_alloc: 251658240 data_used: 36704256
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141033472 unmapped: 17948672 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141033472 unmapped: 17948672 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141033472 unmapped: 17948672 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141033472 unmapped: 17948672 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141033472 unmapped: 17948672 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1688145 data_alloc: 251658240 data_used: 36704256
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141033472 unmapped: 17948672 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141033472 unmapped: 17948672 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141041664 unmapped: 17940480 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141041664 unmapped: 17940480 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141041664 unmapped: 17940480 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1688145 data_alloc: 251658240 data_used: 36704256
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141041664 unmapped: 17940480 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141041664 unmapped: 17940480 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141041664 unmapped: 17940480 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141041664 unmapped: 17940480 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141041664 unmapped: 17940480 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1688145 data_alloc: 251658240 data_used: 36704256
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141041664 unmapped: 17940480 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141041664 unmapped: 17940480 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141041664 unmapped: 17940480 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141041664 unmapped: 17940480 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141041664 unmapped: 17940480 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1688145 data_alloc: 251658240 data_used: 36704256
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141041664 unmapped: 17940480 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141041664 unmapped: 17940480 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141041664 unmapped: 17940480 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141049856 unmapped: 17932288 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141049856 unmapped: 17932288 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1688145 data_alloc: 251658240 data_used: 36704256
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141049856 unmapped: 17932288 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141049856 unmapped: 17932288 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141049856 unmapped: 17932288 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141049856 unmapped: 17932288 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141049856 unmapped: 17932288 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1688145 data_alloc: 251658240 data_used: 36704256
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141049856 unmapped: 17932288 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141049856 unmapped: 17932288 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141049856 unmapped: 17932288 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141049856 unmapped: 17932288 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141049856 unmapped: 17932288 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1688145 data_alloc: 251658240 data_used: 36704256
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141049856 unmapped: 17932288 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141049856 unmapped: 17932288 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141049856 unmapped: 17932288 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141049856 unmapped: 17932288 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141058048 unmapped: 17924096 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1688145 data_alloc: 251658240 data_used: 36704256
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141058048 unmapped: 17924096 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141058048 unmapped: 17924096 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141058048 unmapped: 17924096 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141058048 unmapped: 17924096 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141058048 unmapped: 17924096 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1688145 data_alloc: 251658240 data_used: 36704256
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141058048 unmapped: 17924096 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141058048 unmapped: 17924096 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141058048 unmapped: 17924096 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141058048 unmapped: 17924096 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141058048 unmapped: 17924096 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1688145 data_alloc: 251658240 data_used: 36704256
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141058048 unmapped: 17924096 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141066240 unmapped: 17915904 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141066240 unmapped: 17915904 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141066240 unmapped: 17915904 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141066240 unmapped: 17915904 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1688145 data_alloc: 251658240 data_used: 36704256
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141074432 unmapped: 17907712 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141074432 unmapped: 17907712 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141074432 unmapped: 17907712 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141074432 unmapped: 17907712 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141074432 unmapped: 17907712 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1688145 data_alloc: 251658240 data_used: 36704256
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141074432 unmapped: 17907712 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141074432 unmapped: 17907712 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141074432 unmapped: 17907712 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141074432 unmapped: 17907712 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141074432 unmapped: 17907712 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1688145 data_alloc: 251658240 data_used: 36704256
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141074432 unmapped: 17907712 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141074432 unmapped: 17907712 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141074432 unmapped: 17907712 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141074432 unmapped: 17907712 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141074432 unmapped: 17907712 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1688145 data_alloc: 251658240 data_used: 36704256
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141082624 unmapped: 17899520 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141082624 unmapped: 17899520 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141082624 unmapped: 17899520 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141082624 unmapped: 17899520 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141082624 unmapped: 17899520 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1688145 data_alloc: 251658240 data_used: 36704256
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141082624 unmapped: 17899520 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141082624 unmapped: 17899520 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141082624 unmapped: 17899520 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141082624 unmapped: 17899520 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141082624 unmapped: 17899520 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1688145 data_alloc: 251658240 data_used: 36704256
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141082624 unmapped: 17899520 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141082624 unmapped: 17899520 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141082624 unmapped: 17899520 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141082624 unmapped: 17899520 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141082624 unmapped: 17899520 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1688145 data_alloc: 251658240 data_used: 36704256
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141090816 unmapped: 17891328 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141090816 unmapped: 17891328 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141090816 unmapped: 17891328 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141090816 unmapped: 17891328 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141090816 unmapped: 17891328 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1688145 data_alloc: 251658240 data_used: 36704256
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141090816 unmapped: 17891328 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141090816 unmapped: 17891328 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141090816 unmapped: 17891328 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141090816 unmapped: 17891328 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1688145 data_alloc: 251658240 data_used: 36704256
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141090816 unmapped: 17891328 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141090816 unmapped: 17891328 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141090816 unmapped: 17891328 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141090816 unmapped: 17891328 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141090816 unmapped: 17891328 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1688145 data_alloc: 251658240 data_used: 36704256
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141090816 unmapped: 17891328 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141090816 unmapped: 17891328 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141090816 unmapped: 17891328 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141099008 unmapped: 17883136 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141099008 unmapped: 17883136 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1688145 data_alloc: 251658240 data_used: 36704256
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141099008 unmapped: 17883136 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141099008 unmapped: 17883136 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141107200 unmapped: 17874944 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141107200 unmapped: 17874944 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141107200 unmapped: 17874944 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1688145 data_alloc: 251658240 data_used: 36704256
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141107200 unmapped: 17874944 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141107200 unmapped: 17874944 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141107200 unmapped: 17874944 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141107200 unmapped: 17874944 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141107200 unmapped: 17874944 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1688145 data_alloc: 251658240 data_used: 36704256
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141107200 unmapped: 17874944 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141107200 unmapped: 17874944 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141107200 unmapped: 17874944 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141107200 unmapped: 17874944 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141115392 unmapped: 17866752 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1688145 data_alloc: 251658240 data_used: 36704256
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141115392 unmapped: 17866752 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141115392 unmapped: 17866752 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141115392 unmapped: 17866752 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141115392 unmapped: 17866752 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141115392 unmapped: 17866752 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1688145 data_alloc: 234881024 data_used: 36704256
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141115392 unmapped: 17866752 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141115392 unmapped: 17866752 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141115392 unmapped: 17866752 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141115392 unmapped: 17866752 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141115392 unmapped: 17866752 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1688145 data_alloc: 234881024 data_used: 36704256
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141123584 unmapped: 17858560 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141123584 unmapped: 17858560 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141123584 unmapped: 17858560 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141123584 unmapped: 17858560 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141123584 unmapped: 17858560 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1688145 data_alloc: 234881024 data_used: 36704256
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141123584 unmapped: 17858560 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141123584 unmapped: 17858560 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141123584 unmapped: 17858560 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141123584 unmapped: 17858560 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141123584 unmapped: 17858560 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1688145 data_alloc: 234881024 data_used: 36704256
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141123584 unmapped: 17858560 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141131776 unmapped: 17850368 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141131776 unmapped: 17850368 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141131776 unmapped: 17850368 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141131776 unmapped: 17850368 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1688145 data_alloc: 234881024 data_used: 36704256
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141131776 unmapped: 17850368 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141131776 unmapped: 17850368 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141131776 unmapped: 17850368 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141131776 unmapped: 17850368 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141131776 unmapped: 17850368 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1688145 data_alloc: 234881024 data_used: 36704256
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141131776 unmapped: 17850368 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141131776 unmapped: 17850368 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141131776 unmapped: 17850368 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141131776 unmapped: 17850368 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141131776 unmapped: 17850368 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1688145 data_alloc: 234881024 data_used: 36704256
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141131776 unmapped: 17850368 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141131776 unmapped: 17850368 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141139968 unmapped: 17842176 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 232.786773682s of 233.406066895s, submitted: 111
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141180928 unmapped: 17801216 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141180928 unmapped: 17801216 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1690225 data_alloc: 234881024 data_used: 36896768
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141180928 unmapped: 17801216 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7052000/0x0/0x4ffc00000, data 0x41379a1/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141180928 unmapped: 17801216 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141180928 unmapped: 17801216 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141369344 unmapped: 17612800 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141369344 unmapped: 17612800 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f703b000/0x0/0x4ffc00000, data 0x414e9a1/0x4223000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1690669 data_alloc: 234881024 data_used: 36896768
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141369344 unmapped: 17612800 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f703b000/0x0/0x4ffc00000, data 0x414e9a1/0x4223000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141369344 unmapped: 17612800 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141369344 unmapped: 17612800 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141369344 unmapped: 17612800 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141369344 unmapped: 17612800 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1690669 data_alloc: 234881024 data_used: 36896768
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141369344 unmapped: 17612800 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141369344 unmapped: 17612800 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f703b000/0x0/0x4ffc00000, data 0x414e9a1/0x4223000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141369344 unmapped: 17612800 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141369344 unmapped: 17612800 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141369344 unmapped: 17612800 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1691149 data_alloc: 234881024 data_used: 36909056
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141377536 unmapped: 17604608 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141377536 unmapped: 17604608 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141377536 unmapped: 17604608 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f703b000/0x0/0x4ffc00000, data 0x414e9a1/0x4223000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141377536 unmapped: 17604608 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.440717697s of 21.466768265s, submitted: 3
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141549568 unmapped: 17432576 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1691569 data_alloc: 234881024 data_used: 36909056
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141549568 unmapped: 17432576 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141549568 unmapped: 17432576 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141549568 unmapped: 17432576 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141549568 unmapped: 17432576 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141557760 unmapped: 17424384 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1691569 data_alloc: 234881024 data_used: 36909056
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141557760 unmapped: 17424384 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141557760 unmapped: 17424384 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141557760 unmapped: 17424384 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141557760 unmapped: 17424384 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141557760 unmapped: 17424384 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1691569 data_alloc: 234881024 data_used: 36909056
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141557760 unmapped: 17424384 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141557760 unmapped: 17424384 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141565952 unmapped: 17416192 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.314403534s of 14.334332466s, submitted: 2
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141565952 unmapped: 17416192 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141574144 unmapped: 17408000 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1691745 data_alloc: 234881024 data_used: 36909056
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141574144 unmapped: 17408000 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141574144 unmapped: 17408000 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141574144 unmapped: 17408000 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141574144 unmapped: 17408000 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141574144 unmapped: 17408000 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1691745 data_alloc: 234881024 data_used: 36909056
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141574144 unmapped: 17408000 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141574144 unmapped: 17408000 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141574144 unmapped: 17408000 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141574144 unmapped: 17408000 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141574144 unmapped: 17408000 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1691745 data_alloc: 234881024 data_used: 36909056
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141574144 unmapped: 17408000 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141574144 unmapped: 17408000 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141574144 unmapped: 17408000 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141574144 unmapped: 17408000 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141574144 unmapped: 17408000 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1691745 data_alloc: 234881024 data_used: 36909056
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141574144 unmapped: 17408000 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141574144 unmapped: 17408000 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141574144 unmapped: 17408000 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141574144 unmapped: 17408000 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141574144 unmapped: 17408000 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1691745 data_alloc: 234881024 data_used: 36909056
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141574144 unmapped: 17408000 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141582336 unmapped: 17399808 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141582336 unmapped: 17399808 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141582336 unmapped: 17399808 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141582336 unmapped: 17399808 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1691745 data_alloc: 234881024 data_used: 36909056
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141582336 unmapped: 17399808 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141582336 unmapped: 17399808 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141582336 unmapped: 17399808 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141582336 unmapped: 17399808 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141582336 unmapped: 17399808 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1691745 data_alloc: 234881024 data_used: 36909056
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141582336 unmapped: 17399808 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141582336 unmapped: 17399808 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141582336 unmapped: 17399808 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141582336 unmapped: 17399808 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141590528 unmapped: 17391616 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1691745 data_alloc: 234881024 data_used: 36909056
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141590528 unmapped: 17391616 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141590528 unmapped: 17391616 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141590528 unmapped: 17391616 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141590528 unmapped: 17391616 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141590528 unmapped: 17391616 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1691745 data_alloc: 234881024 data_used: 36909056
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141590528 unmapped: 17391616 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141590528 unmapped: 17391616 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141590528 unmapped: 17391616 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141590528 unmapped: 17391616 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141590528 unmapped: 17391616 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1691745 data_alloc: 234881024 data_used: 36909056
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141590528 unmapped: 17391616 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141590528 unmapped: 17391616 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141590528 unmapped: 17391616 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141590528 unmapped: 17391616 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141590528 unmapped: 17391616 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1691745 data_alloc: 234881024 data_used: 36909056
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141590528 unmapped: 17391616 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141598720 unmapped: 17383424 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141598720 unmapped: 17383424 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141606912 unmapped: 17375232 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141606912 unmapped: 17375232 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1691745 data_alloc: 234881024 data_used: 36909056
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141606912 unmapped: 17375232 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141606912 unmapped: 17375232 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141606912 unmapped: 17375232 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141606912 unmapped: 17375232 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141606912 unmapped: 17375232 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1691745 data_alloc: 234881024 data_used: 36909056
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141606912 unmapped: 17375232 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141606912 unmapped: 17375232 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141606912 unmapped: 17375232 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141606912 unmapped: 17375232 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141606912 unmapped: 17375232 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1691745 data_alloc: 234881024 data_used: 36909056
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141606912 unmapped: 17375232 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141615104 unmapped: 17367040 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141615104 unmapped: 17367040 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141615104 unmapped: 17367040 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141615104 unmapped: 17367040 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1691745 data_alloc: 234881024 data_used: 36909056
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141615104 unmapped: 17367040 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141615104 unmapped: 17367040 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141615104 unmapped: 17367040 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141615104 unmapped: 17367040 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141615104 unmapped: 17367040 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1691745 data_alloc: 218103808 data_used: 36909056
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141615104 unmapped: 17367040 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141615104 unmapped: 17367040 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141615104 unmapped: 17367040 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141615104 unmapped: 17367040 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141615104 unmapped: 17367040 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1691745 data_alloc: 218103808 data_used: 36909056
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141615104 unmapped: 17367040 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141615104 unmapped: 17367040 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141615104 unmapped: 17367040 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141623296 unmapped: 17358848 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141623296 unmapped: 17358848 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1691745 data_alloc: 218103808 data_used: 36909056
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141623296 unmapped: 17358848 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141623296 unmapped: 17358848 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141623296 unmapped: 17358848 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141623296 unmapped: 17358848 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141623296 unmapped: 17358848 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1691745 data_alloc: 218103808 data_used: 36909056
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141623296 unmapped: 17358848 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141623296 unmapped: 17358848 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141623296 unmapped: 17358848 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141623296 unmapped: 17358848 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141623296 unmapped: 17358848 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1691745 data_alloc: 218103808 data_used: 36909056
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141623296 unmapped: 17358848 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141623296 unmapped: 17358848 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141623296 unmapped: 17358848 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141623296 unmapped: 17358848 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141631488 unmapped: 17350656 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1691745 data_alloc: 218103808 data_used: 36909056
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141631488 unmapped: 17350656 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141631488 unmapped: 17350656 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141631488 unmapped: 17350656 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141631488 unmapped: 17350656 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141631488 unmapped: 17350656 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1691745 data_alloc: 218103808 data_used: 36909056
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141631488 unmapped: 17350656 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141631488 unmapped: 17350656 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141631488 unmapped: 17350656 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141631488 unmapped: 17350656 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141639680 unmapped: 17342464 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1691745 data_alloc: 218103808 data_used: 36909056
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141639680 unmapped: 17342464 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141639680 unmapped: 17342464 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141639680 unmapped: 17342464 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141639680 unmapped: 17342464 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141639680 unmapped: 17342464 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1691745 data_alloc: 218103808 data_used: 36909056
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141639680 unmapped: 17342464 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141647872 unmapped: 17334272 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141647872 unmapped: 17334272 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141647872 unmapped: 17334272 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141647872 unmapped: 17334272 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1691745 data_alloc: 218103808 data_used: 36909056
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141647872 unmapped: 17334272 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141647872 unmapped: 17334272 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141647872 unmapped: 17334272 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141656064 unmapped: 17326080 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141656064 unmapped: 17326080 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1691745 data_alloc: 218103808 data_used: 36909056
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141656064 unmapped: 17326080 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141656064 unmapped: 17326080 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141656064 unmapped: 17326080 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141656064 unmapped: 17326080 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141656064 unmapped: 17326080 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1691745 data_alloc: 218103808 data_used: 36909056
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141656064 unmapped: 17326080 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141656064 unmapped: 17326080 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141656064 unmapped: 17326080 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141656064 unmapped: 17326080 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141656064 unmapped: 17326080 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1691745 data_alloc: 218103808 data_used: 36909056
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141656064 unmapped: 17326080 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141656064 unmapped: 17326080 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141656064 unmapped: 17326080 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141656064 unmapped: 17326080 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141656064 unmapped: 17326080 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1691745 data_alloc: 218103808 data_used: 36909056
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141664256 unmapped: 17317888 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141664256 unmapped: 17317888 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141664256 unmapped: 17317888 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141664256 unmapped: 17317888 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141664256 unmapped: 17317888 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1691745 data_alloc: 218103808 data_used: 36909056
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141664256 unmapped: 17317888 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141664256 unmapped: 17317888 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141664256 unmapped: 17317888 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141664256 unmapped: 17317888 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141664256 unmapped: 17317888 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1691745 data_alloc: 218103808 data_used: 36909056
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141664256 unmapped: 17317888 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141664256 unmapped: 17317888 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141664256 unmapped: 17317888 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141664256 unmapped: 17317888 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141664256 unmapped: 17317888 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1691745 data_alloc: 218103808 data_used: 36909056
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141672448 unmapped: 17309696 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141672448 unmapped: 17309696 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141672448 unmapped: 17309696 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141672448 unmapped: 17309696 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141672448 unmapped: 17309696 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1691745 data_alloc: 218103808 data_used: 36909056
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141680640 unmapped: 17301504 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141680640 unmapped: 17301504 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141680640 unmapped: 17301504 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141680640 unmapped: 17301504 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141680640 unmapped: 17301504 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1691745 data_alloc: 218103808 data_used: 36909056
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141680640 unmapped: 17301504 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141680640 unmapped: 17301504 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141680640 unmapped: 17301504 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141680640 unmapped: 17301504 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141680640 unmapped: 17301504 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1691745 data_alloc: 218103808 data_used: 36909056
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141680640 unmapped: 17301504 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141688832 unmapped: 17293312 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141688832 unmapped: 17293312 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141688832 unmapped: 17293312 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141688832 unmapped: 17293312 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1691745 data_alloc: 218103808 data_used: 36909056
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141688832 unmapped: 17293312 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141688832 unmapped: 17293312 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141688832 unmapped: 17293312 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141688832 unmapped: 17293312 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141688832 unmapped: 17293312 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1691745 data_alloc: 218103808 data_used: 36909056
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141688832 unmapped: 17293312 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141688832 unmapped: 17293312 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141688832 unmapped: 17293312 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141688832 unmapped: 17293312 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141688832 unmapped: 17293312 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1691745 data_alloc: 218103808 data_used: 36909056
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141688832 unmapped: 17293312 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141697024 unmapped: 17285120 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141697024 unmapped: 17285120 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141697024 unmapped: 17285120 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141697024 unmapped: 17285120 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1691745 data_alloc: 218103808 data_used: 36909056
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141697024 unmapped: 17285120 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141697024 unmapped: 17285120 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141697024 unmapped: 17285120 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141697024 unmapped: 17285120 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141697024 unmapped: 17285120 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1691745 data_alloc: 218103808 data_used: 36909056
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141697024 unmapped: 17285120 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141697024 unmapped: 17285120 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141697024 unmapped: 17285120 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141697024 unmapped: 17285120 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141697024 unmapped: 17285120 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1691745 data_alloc: 218103808 data_used: 36909056
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141697024 unmapped: 17285120 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141705216 unmapped: 17276928 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141705216 unmapped: 17276928 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141705216 unmapped: 17276928 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141705216 unmapped: 17276928 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1691745 data_alloc: 218103808 data_used: 36909056
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141705216 unmapped: 17276928 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141705216 unmapped: 17276928 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141705216 unmapped: 17276928 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141705216 unmapped: 17276928 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141705216 unmapped: 17276928 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1691745 data_alloc: 218103808 data_used: 36909056
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141705216 unmapped: 17276928 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141705216 unmapped: 17276928 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141705216 unmapped: 17276928 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141705216 unmapped: 17276928 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141705216 unmapped: 17276928 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562f72c0800 session 0x5562f9190f00
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1691745 data_alloc: 218103808 data_used: 36909056
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 218.990341187s of 219.007904053s, submitted: 2
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x41639a1/0x4238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [1])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 141713408 unmapped: 17268736 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 ms_handle_reset con 0x5562f8c77400 session 0x5562f88861e0
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 135479296 unmapped: 23502848 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f811e000/0x0/0x4ffc00000, data 0x306b9a1/0x3140000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 135479296 unmapped: 23502848 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f811e000/0x0/0x4ffc00000, data 0x306b9a1/0x3140000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 135479296 unmapped: 23502848 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f811e000/0x0/0x4ffc00000, data 0x306b9a1/0x3140000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 135479296 unmapped: 23502848 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1511335 data_alloc: 218103808 data_used: 29233152
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 135479296 unmapped: 23502848 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f811e000/0x0/0x4ffc00000, data 0x306b9a1/0x3140000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 135479296 unmapped: 23502848 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 135479296 unmapped: 23502848 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 135479296 unmapped: 23502848 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 135479296 unmapped: 23502848 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 19:28:02 compute-0 ceph-osd[207851]: bluestore.MempoolThread(0x5562f65b1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1511335 data_alloc: 218103808 data_used: 29233152
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 135479296 unmapped: 23502848 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f811e000/0x0/0x4ffc00000, data 0x306b9a1/0x3140000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 135479296 unmapped: 23502848 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 135479296 unmapped: 23502848 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 19:28:02 compute-0 ceph-osd[207851]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 4200.2 total, 600.0 interval
Cumulative writes: 10K writes, 39K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
Cumulative WAL: 10K writes, 2961 syncs, 3.62 writes per sync, written: 0.03 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 399 writes, 1169 keys, 399 commit groups, 1.0 writes per commit group, ingest: 1.12 MB, 0.00 MB/s
Interval WAL: 399 writes, 181 syncs, 2.20 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 135479296 unmapped: 23502848 heap: 158982144 old mem: 2845415832 new mem: 2845415832
Dec  3 19:28:02 compute-0 ceph-osd[207851]: prioritycache tune_memory target: 4294967296 mapped: 135479296 unmapped: 23502848 heap: 158982144 old mem: 2845415832 new mem: 2845415832
